I am trying to find out which of the entities annotated for NER the model is missing (false negatives) and which pieces of text it is incorrectly picking up as entities (false positives). Is there an easy way to do this via the Prodigy/spaCy API?
I hacked my way through the code a bit but couldn't find anything. The closest I could get from the train recipe was the scores object, but that only contains the aggregate scoring metrics, not the predictions themselves. It would be really nice to store the predictions as well, since then we could compute other metrics and plots (confusion matrix, etc.).
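In the meantime, a workaround is to compare the gold annotations against the model's predictions directly. This is a minimal sketch, not a Prodigy API: it assumes you have already extracted the gold and predicted entities as (start, end, label) tuples (e.g. from your annotation JSONL and from doc.ents of a loaded pipeline), and treats any exact-span mismatch as an error.

```python
# Minimal sketch (not part of the Prodigy/spaCy API): compare gold entity
# spans against predicted spans to surface false negatives and false
# positives. Spans are (start, end, label) tuples; only exact matches count.
def entity_errors(gold_spans, pred_spans):
    gold, pred = set(gold_spans), set(pred_spans)
    false_negatives = sorted(gold - pred)  # annotated but not predicted
    false_positives = sorted(pred - gold)  # predicted but not annotated
    return false_negatives, false_positives

# Hypothetical example data for illustration:
gold = [(0, 5, "PERSON"), (10, 16, "ORG")]
pred = [(0, 5, "PERSON"), (20, 26, "GPE")]
fn, fp = entity_errors(gold, pred)
print(fn)  # [(10, 16, 'ORG')]
print(fp)  # [(20, 26, 'GPE')]
```

From these two lists you can build a confusion matrix or drill into the specific examples the model gets wrong, which is what the scores object alone doesn't let you do.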