Is there a way to review (NER) examples with the highest loss?

Hi!

I guess this question is similar to this thread: Can active learning help reduce annotation inconsistencies? - #2 by SofieVL

Basically what you could do is train a model on the whole dataset or part of it, make predictions, and compare those to the original annotations you created. Where the two diverge, something may be going on.
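As a rough sketch of that comparison in spaCy (the model path and the annotation tuples below are just placeholders for your own trained pipeline and data), you could collect every example where the predicted entities don't match your annotations and review those first:

```python
import spacy

# Hypothetical path to a pipeline you trained on (part of) your dataset
nlp = spacy.load("./trained_model")

# Your original annotations as (text, {"entities": [(start, end, label), ...]})
# tuples -- placeholder example data
annotations = [
    ("Apple is looking at buying U.K. startup for $1 billion",
     {"entities": [(0, 5, "ORG"), (27, 31, "GPE"), (44, 55, "MONEY")]}),
]

flagged = []
for text, gold in annotations:
    doc = nlp(text)
    predicted = {(ent.start_char, ent.end_char, ent.label_) for ent in doc.ents}
    annotated = set(map(tuple, gold["entities"]))
    if predicted != annotated:
        # model and annotation disagree -- worth a second look
        flagged.append((text, annotated, predicted))

for text, annotated, predicted in flagged:
    print(text)
    print("  annotated:", annotated)
    print("  predicted:", predicted)
```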
