Looks like a newly trained model has forgotten the old entities

I have trained a new entity type 'MED' (medical diseases) on top of spaCy's 'en_core_web_md' model.

I trained the model on Twitter data collected via the Twitter API, and got an accuracy of 60% after 6 epochs.

When I loaded the newly trained model in spaCy to test it, I got the result below, which is wrong.

import spacy

# Load the custom model trained with the new 'MED' entity type
nlp = spacy.load('med-model')
doc = nlp(u'John is suffering from cough')
[(ent.text, ent.label_) for ent in doc.ents]

[('John', 'MED'), ('suffering', 'MED'), ('from', 'MED'), ('cough', 'MED')]

The model has predicted every token in the test sentence as a 'MED' entity.
Can anyone please guide me?

How many examples did you collect, and how many entities did you label in total? Do they overlap with existing types? And how did you train the model? It seems like your model has learned that "everything is MED now". If you want to prevent it from "forgetting" previously predicted labels, you usually want to also include examples of what the model previously got right, not just the new annotations. Alternatively, you can train a new model from scratch using only the labels you need, so you don't have to deal with the side effects of the existing weights.
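One way to "include examples of what the model previously got right" is to run extra raw text through the *original* model, record its predictions as annotations, and mix those in with your new MED examples. A minimal sketch, assuming the spaCy v2-style `(text, {"entities": [(start, end, label)]})` training format; `make_revision_data` and `mix_training_data` are illustrative names, not spaCy API:

```python
import random

def make_revision_data(nlp, raw_texts):
    """Annotate raw text with the original model's own predictions,
    so training still shows the entities it used to get right.
    (Assumes `nlp` is the unmodified en_core_web_md pipeline.)"""
    revision = []
    for text in raw_texts:
        doc = nlp(text)
        ents = [(ent.start_char, ent.end_char, ent.label_) for ent in doc.ents]
        revision.append((text, {"entities": ents}))
    return revision

def mix_training_data(new_data, revision_data, seed=0):
    """Shuffle the new MED annotations together with the revision
    examples so each batch contains a mix of old and new labels."""
    mixed = list(new_data) + list(revision_data)
    random.Random(seed).shuffle(mixed)
    return mixed

# Hypothetical usage: one hand-labelled MED example plus one
# revision example recovered from the original model's output.
new_data = [("John has a cough", {"entities": [(11, 16, "MED")]})]
revision_data = [("John lives in London",
                  {"entities": [(0, 4, "PERSON"), (14, 20, "GPE")]})]
train_data = mix_training_data(new_data, revision_data)
```

You would then loop over `train_data` in your usual `nlp.update` training loop, instead of looping over the MED annotations alone.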

You can find more details on preventing "catastrophic" forgetting here: