If you want to train your own sense2vec model, the link you shared is the way to go: follow the steps, run the preprocessing scripts, and then use either fastText or GloVe to train the vectors: https://github.com/explosion/sense2vec#-training-your-own-sense2vec-vectors
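To make that flow a bit more concrete, here's a rough sketch of the kind of sense-tagged tokens the preprocessing step produces (merged noun phrases with a `"text|SENSE"` suffix). The sentence and the `en_core_web_sm` model are just for illustration; the repo's scripts do this at scale over your whole corpus before the vectors are trained:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Natural language processing has improved a lot since 2015.")

# Merge noun phrases into single tokens and build "text|SENSE" keys,
# which is roughly what the preprocessing does for the whole corpus.
with doc.retokenize() as retokenizer:
    for chunk in doc.noun_chunks:
        retokenizer.merge(chunk)

print([f"{t.text.replace(' ', '_')}|{t.pos_}" for t in doc])
# e.g. ['Natural_language_processing|NOUN', 'has|AUX', 'improved|VERB', ...]
```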
To train meaningful vectors, you typically want a lot of text, on the order of 1 billion words, so your 7000 likely won't be enough. Maybe you can find other, similar texts from a different source to add to your data.
Once you have a sense2vec model, you can use the vectors to find terms similar to your seed terms. I'm not sure "since" is a good seed term here, though, because there aren't that many similar expressions. This works better for things like (proper) nouns.
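Once the vectors are trained and exported, querying them with the standalone sense2vec API looks roughly like this (the path is a placeholder for wherever you saved your model):

```python
from sense2vec import Sense2Vec

# The path is a placeholder; point it at the directory exported by the
# training pipeline (or at one of the pretrained vector packages).
s2v = Sense2Vec().from_disk("/path/to/your_s2v_model")

# Keys combine the text and a sense tag, e.g. "natural_language_processing|NOUN"
query = "natural_language_processing|NOUN"
if query in s2v:
    print(s2v.most_similar(query, n=5))
```

`most_similar` returns the closest keys with their similarity scores, so it's also an easy way to sanity-check whether the vectors you trained are picking up the kind of terms you care about.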