ner.teach - couple of questions

One thing to keep in mind here is that the suggestions you see are based on a number of possible interpretations of the document, not just the model's actual predictions. Still, depending on the data, you'd expect to see some lower-score suggestions here. Does this happen from the very beginning, or only after you've done some annotation? If it creeps in gradually, that could indicate that the model ends up in a weird state and the beam parse stops producing sensible candidates.
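To make the beam idea concrete, here's a toy sketch (not Prodigy's actual internals, and all scores and spans are invented): the beam keeps several scored analyses of the same sentence, which is why lower-scoring candidates can still show up as questions.

```python
# Toy sketch only, not Prodigy's real data structures. A beam holds several
# scored analyses of one sentence, so unlikely candidates still surface.
beam = [
    (0.91, [("Berlin", "GPE")]),       # most confident analysis
    (0.42, [("Berlin", "PERSON")]),    # plausible alternative
    (0.07, [("Berlin Wall", "FAC")]),  # low score, but still in the beam
]
for score, entities in beam:
    for text, label in entities:
        print(f"{score:.2f}  {text:<12}  {label}")
```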

Ah, that's not the correct interpretation of the interface: ner.teach will always ask you about a single entity at a time, and the goal is to give feedback on whether that particular entity is correct. So in the first case, you'd accept if the entity span is correct, and so on. If you're rejecting an entity, the feedback you give the model is "this particular entity span is incorrect, try again". So rejecting a span that's actually correct, just because the rest of the analysis is wrong, isn't what you want here.
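For reference, a typical ner.teach call looks something like this (the dataset name ner_sample and the input file are placeholders):

```bash
prodigy ner.teach ner_sample en_core_web_sm ./news.jsonl --label PERSON,ORG
```

Each question you see is then a single highlighted span to accept or reject, not the full set of entities in the sentence.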

If you want to annotate and correct the complete, actual predictions made by the model, maybe you just want to use ner.correct instead. In Prodigy v1.11+, you'll also be able to set the --update flag to update the model in the loop from your annotations.
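So the workflow would look something like this (again, dataset and file names are placeholders, and --update requires v1.11+):

```bash
prodigy ner.correct ner_sample en_core_web_sm ./news.jsonl --label PERSON,ORG --update
```

Here you see all of the model's predictions for each example at once and can correct them manually, which sounds much closer to what you're after.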
