What are the key differences between ner.teach and ner.match?

E.g., does ner.teach suggest to the annotating user both phrases that match the patterns exactly and phrases that the model considers similar to the patterns, while ner.match provides only the exact pattern matches?
Are outputs differently formatted?

Yes, your analysis is pretty much correct 🙂

ner.teach uses a statistical model in the loop and updates it with your accept/reject answers as you annotate. As the model learns about the entities, it combines the pattern matches with model suggestions. To make the active learning work, ner.teach also uses a sorter to prioritise examples with a prediction closest to 0.5, i.e. the ones the model is most uncertain about. This means you won’t necessarily see all examples, only a selection. Running the recipe with patterns helps you get over the so-called “cold start problem”, i.e. training a new label from scratch that the model doesn’t know anything about yet. For the model to make meaningful suggestions, it needs to have seen enough positive examples – and that’s where the patterns come in. If the model already predicts something for the entities you want to train, you can also use ner.teach without patterns and simply accept/reject the model’s suggestions.
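To make the sorting idea concrete, here’s a minimal pure-Python sketch of uncertainty sorting – prioritising examples whose score is closest to 0.5. This is only an illustration of the concept, not Prodigy’s actual sorter implementation; the function names and example scores are made up.

```python
# Sketch of the active-learning prioritisation described above: rank
# scored examples so the most uncertain predictions (closest to 0.5)
# come first. Hypothetical helper names, not Prodigy internals.

def uncertainty(score: float) -> float:
    """Distance from maximum uncertainty: 0.0 means the model is unsure."""
    return abs(score - 0.5)

def prefer_uncertain(scored_examples):
    """Return (text, score) pairs sorted most-uncertain-first."""
    return sorted(scored_examples, key=lambda ex: uncertainty(ex[1]))

examples = [
    ("Apple opened a store in Berlin", 0.92),   # model is confident
    ("The apple fell from the tree", 0.51),     # model is unsure
    ("apple pie recipe", 0.12),                 # model is confident
    ("Apple Inc. reported earnings", 0.48),     # model is unsure
]

for text, score in prefer_uncertain(examples):
    print(f"{score:.2f}  {text}")
```

With a stream like this, the two ~0.5 examples would be shown to the annotator first, which is why you see a selection rather than every example in order.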

ner.match, on the other hand, doesn’t use or update the statistical model – it only finds pattern matches in your text and asks you for feedback on them as they come in, in the exact order, without skipping any. It’s a useful recipe if you already have a large terminology list or other patterns describing the entities you’re looking for and you want to collect data quickly, without having to highlight anything by hand. It also lets you write more general patterns that potentially produce false positives (like “two uppercase tokens”), move through them quickly and collect both positive and negative examples for the entity type. That can be super valuable – you might see significantly better results if your data includes both “perfect” examples of the entity type, as well as spans that look very similar but aren’t part of it.
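To illustrate why a deliberately general pattern like “two uppercase tokens” produces both positives and negatives, here’s a toy plain-Python matcher. It’s a sketch of the idea only – real Prodigy patterns use spaCy-style token descriptions in a JSONL file, not this hypothetical function.

```python
# Toy over-matching pattern: any two consecutive title-cased tokens.
# It finds real entities ("New York") and false positives you would
# reject during annotation – which is exactly the point.

def match_two_titlecase(tokens):
    """Return (start, end) spans where two consecutive tokens are title-cased."""
    spans = []
    for i in range(len(tokens) - 1):
        if tokens[i].istitle() and tokens[i + 1].istitle():
            spans.append((i, i + 2))
    return spans

tokens = "Yesterday Ines Montani visited New York with Great Enthusiasm".split()
for start, end in match_two_titlecase(tokens):
    print(" ".join(tokens[start:end]))
```

Here “New York” is a true match, while spans like “Yesterday Ines” or “Great Enthusiasm” are the near-miss negatives the paragraph above mentions – rejecting them gives the model useful counter-examples.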


@ines Thank you very much, again!
