We are having trouble highlighting entities with Prodigy, as shown in the screenshot above. However, since you have stated that the entity models are built on spaCy, note that the spaCy model is showing accurate results for the same text. Kindly look into it.
Which recipe are you using and what are you trying to do?
The above screenshot looks like you're running `ner.teach`, which will only show you one entity at a time and ask you whether the prediction is correct or not. It will also focus on the entities the model is most uncertain about. Prodigy uses beam search to find all possible entity analyses of the text, and will then show you the ones with a prediction closest to 0.5. So you might even see entity suggestions that you wouldn't see if you just ran the spaCy model over your text. This lets you collect a wider range of annotations and improve the existing model's predictions.
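To make the selection criterion concrete, here is a toy sketch of the uncertainty sampling idea described above: given candidate entity analyses with model scores, surface the ones whose score is closest to 0.5 first. The candidate texts and scores below are made up for illustration and are not real Prodigy output.

```python
# Toy sketch of uncertainty-based ranking: candidates whose score is
# closest to 0.5 (most uncertain) are queued for annotation first.

def rank_by_uncertainty(candidates):
    """Sort (text, score) candidates by distance from 0.5, closest first."""
    return sorted(candidates, key=lambda c: abs(c[1] - 0.5))

candidates = [
    ("Apple", 0.98),     # confident prediction -> shown last
    ("New York", 0.52),  # very uncertain -> shown first
    ("facebook", 0.31),
]

queue = rank_by_uncertainty(candidates)
print([text for text, _ in queue])  # -> ['New York', 'facebook', 'Apple']
```

Annotating the uncertain middle of the score distribution is what lets the model learn the most from each answer, which is why the suggestions can differ from what plain `nlp(text)` shows you.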
If you just want to see the default predictions of the model and correct them by hand, you probably want to use the `ner.make-gold` recipe instead. This will stream in your data, extract the `doc.ents` (only the predictions that the model thinks are "the best") and let you correct them if necessary.
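For reference, `doc.ents` is just spaCy's view of the single best entity analysis. Here's a minimal sketch of reading it; a blank pipeline with manually set entity spans stands in for a trained model (so no model download is assumed), but `ner.make-gold` streams in the real model's predictions the same way.

```python
import spacy
from spacy.tokens import Span

nlp = spacy.blank("en")
doc = nlp("Apple opened a new office in New York")

# Pretend these spans came from the model's single best analysis.
doc.ents = [
    Span(doc, 0, 1, label="ORG"),  # "Apple"
    Span(doc, 6, 8, label="GPE"),  # "New York"
]

for ent in doc.ents:
    print(ent.text, ent.label_)
```

Since `doc.ents` only ever contains the analysis the model committed to, correcting it by hand is a different workflow from `ner.teach`, which deliberately shows you the uncertain alternatives as well.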