sentence segmentation, NER

I want to break this paragraph into sentences in order to process it using spaCy:

Finally, on 1595 July 22 at 2h 40m am, when the sun was at 7° 59' 52" Leo, 101,487 distant from earth, Mars's mean longitude 11s 14° 9' 5", and anomaly 164° 48' 55", and consequent eccentric position from the vicarious hypothesis 17° 16' 36" Pisces: the apparent position of Mars, from the most select observations, was 4° 11' 10" Taurus, lat. 2° 30' S ^37. Thus we twice have Mars in the most opportune position, in quadrature with the sun, while the positions of earth and Mars are also distant by a quadrant.\n

I want the result to be like this:

[
Finally, on 1595 July 22 at 2h 40m am, when the sun was at 7° 59' 52" Leo, 101,487 distant from earth, Mars's mean longitude 11s 14° 9' 5", and anomaly 164° 48' 55", and consequent eccentric position from the vicarious hypothesis 17° 16' 36" Pisces: the apparent position of Mars, from the most select observations, was 4° 11' 10" Taurus, lat. 2° 30' S ^37. ,

  Thus we twice have Mars in the most opportune position, in quadrature with the sun, while the positions of earth and Mars are also distant by a quadrant.\n ]

That is, two sentences, where the first one finishes after lat. 2° 30' S ^37.

but I have not found a solution yet. So far I have used

def set_custom_boundaries(doc):
    for token in doc[:-1]:
        # ("lat.") without a trailing comma is just the string "lat.",
        # so `in` performed a substring test; use a tuple instead,
        # and also cover the case where spaCy splits "lat" and "." apart
        if token.text in ("lat.",) or (
            token.text == "." and token.i > 0 and doc[token.i - 1].text == "lat"
        ):
            # keep the token AFTER the abbreviation from starting a sentence
            doc[token.i + 1].is_sent_start = False
    return doc

nlp.add_pipe(set_custom_boundaries, before="parser")
nlp.pipeline
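One bug in the original snippet is easy to miss: `("lat.")` without a trailing comma is not a one-element tuple but the plain string `"lat."`, so `in` performs a substring test and short tokens match by accident. A quick illustration:

```python
# ("lat.")  is just the string "lat." -- `in` does a substring check,
# so even a single-character token like "a" matches by accident.
print("a" in ("lat."))      # True  ("a" is a substring of "lat.")
print("a" in ("lat.",))     # False (not a member of the 1-tuple)
print("lat." in ("lat.",))  # True
```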

and

a.split('.')
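Plain `str.split('.')` cannot work here, because it breaks at every period, including the one in the abbreviation "lat.", and discards the delimiters. A short demonstration on an excerpt (the sample string is illustrative):

```python
# Naive splitting breaks at the abbreviation period too,
# and the periods themselves are lost.
a = "the position was lat. 2° 30' S ^37. Thus we twice have Mars."
parts = a.split('.')
print(parts)
# → ['the position was lat', " 2° 30' S ^37", ' Thus we twice have Mars', '']
```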

I think there is some small mistake in the first snippet, but neither approach works!

Generally, what do you recommend for segmenting a paragraph into sentences, especially when there are abbreviation cases like

lat. 
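One simple approach (a minimal sketch, not from the original post: the abbreviation list, placeholder format, and `split_sentences` name are illustrative assumptions) is to mask known abbreviations before splitting on sentence-final periods, then unmask them:

```python
import re

# Abbreviations whose trailing period should NOT end a sentence.
ABBREVIATIONS = ["lat.", "ch."]

def split_sentences(text):
    masked = text
    for i, abbr in enumerate(ABBREVIATIONS):
        # \b keeps us from matching inside longer words (e.g. "church.")
        masked = re.sub(r"\b" + re.escape(abbr), f"@ABBR{i}@", masked)
    # a period followed by whitespace ends a sentence
    parts = re.split(r"(?<=\.)\s+", masked)
    sentences = []
    for part in parts:
        for i, abbr in enumerate(ABBREVIATIONS):
            part = part.replace(f"@ABBR{i}@", abbr)
        sentences.append(part)
    return sentences
```

This is crude compared to a trained segmenter, but it makes the core idea explicit: the only hard part is deciding which periods are sentence-final.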

Can anyone help me with some ideas?

Actually, I found a solution:

from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktParameters

def SentenceSegmentation(Para):
    punkt_param = PunktParameters()
    # periods after these tokens are treated as abbreviations, not sentence ends
    abbreviation = ['lat', 'ch']
    punkt_param.abbrev_types = set(abbreviation)
    tokenizer = PunktSentenceTokenizer(punkt_param)
    # tokenizer.train(Para)  # optional: adapt Punkt to the corpus
    return tokenizer.tokenize(Para)
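A quick usage sketch of the same approach (the sample text here is illustrative, not the Mars paragraph): with 'lat' registered in `abbrev_types`, Punkt skips the abbreviation period and splits only at the real sentence boundary.

```python
from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktParameters

# Same setup as above: tell Punkt which tokens are abbreviations.
punkt_param = PunktParameters()
punkt_param.abbrev_types = {'lat', 'ch'}
tokenizer = PunktSentenceTokenizer(punkt_param)

text = "We measured lat. 2 degrees here. Thus we continue."
sentences = tokenizer.tokenize(text)
print(sentences)
```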