ner.correct not showing suggestions

Hi there,

I am new to Prodigy and I am using it to annotate domain-specific data. I first annotated 8,000 instances using ner.manual and then trained a model using the Prodigy train recipe.

I then tried to use ner.correct on the same annotated data, using the following command:

prodigy ner.correct dataset_cw models/model-last final_dataset.jsonl --label MODULE,LOGISTICS,PRODUCT,HR,POLICY

However, I do not see any model suggestions here, nor do I recognize any of the sentences I previously annotated in ner.manual. Any idea why? Am I completely off track here?

Thank you in advance!

Hi @mattdr,

Thanks for your message and welcome to the Prodigy community :wave:

At a glance, nothing seems wrong with your command's syntax.

First, let's check that your model was trained for the labels you're providing. Can you look for the meta.json file in models/model-last? Does it have:

    "ner":[
      "MODULE",
      "LOGISTICS",
      "PRODUCT",
      "HR",
      "POLICY"
    ]

How's the performance? You can view that in the meta.json as well.
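For reference, in a spaCy v3 meta.json the evaluation scores typically live under a "performance" key, roughly like this (the numbers below are just placeholders):

    "performance":{
      "ents_f":0.77,
      "ents_p":0.79,
      "ents_r":0.75
    }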

Just curious, how did you train the model? If you could provide the command, that would be great.

Did you train from scratch (i.e., with a blank model) or fine-tune a pretrained pipeline?

If you fine-tuned, you may be dealing with catastrophic forgetting.

Since you're creating a couple of new entity types, you should likely train your model from scratch. We discuss this in the docs and in the Prodigy NER flowchart.
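For example, with Prodigy v1.11's train recipe the difference would look roughly like this (a sketch based on your dataset and output names, not exact commands for your setup). Training from scratch with a blank English pipeline:

prodigy train models/ --ner dataset_cw --lang en

versus fine-tuning a pretrained pipeline via --base-model:

prodigy train models/ --ner dataset_cw --base-model en_core_web_sm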

On the second part: not recognizing the sentences you previously annotated.

How did you get the final_dataset.jsonl file? Did you db-out your annotations? (You can run prodigy stats dataset_cw to show basic stats.)
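For reference, db-out writes to stdout, so exporting your annotations would look something like this (the output filename here is just an example):

prodigy db-out dataset_cw > dataset_cw_annotations.jsonl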

Does dataset_cw have any overlap with final_dataset.jsonl? Are they exactly the same sentences? Do either or both of them contain annotations? Data duplication/hashing may be coming into play.

Hi there,

Thank you very much for your answer!

In the meta.json I can correctly see all my labels. Performance also looks good:

ents_f: 0.7692307692
ents_p: 0.7872340426
ents_r: 0.7520325203

I first manually annotated some instances with ner.manual:

 prodigy ner.manual dataset_cw blank:en final_dataset.jsonl --label HR,POLICY,MODULE,LOGISTICS,PRODUCT

(where final_dataset.jsonl is the pre-processed input file)

and then trained from scratch using a blank model:

prodigy train models/ --ner dataset_cw

Here's the output of the stats command:

============================== ✨  Prodigy Stats ==============================

Version          1.11.10                       
Location         /opt/conda/lib/python3.8/site-packages/prodigy
Prodigy Home     /home/jovyan/.prodigy         
Platform         Linux-4.18.0-425.13.1.el8_7.x86_64-x86_64-with-glibc2.10
Python Version   3.8.10                        
Database Name    SQLite                        
Database Id      sqlite                        
Total Datasets   2                             
Total Sessions   33                            


============================== ✨  Dataset Stats ==============================

Dataset       dataset_cw         
Created       2023-04-25 10:29:23
Description   None               
Author        None               
Annotations   9106               
Accept        8804               
Reject        279                
Ignore        23           

Any leads here? Thanks! :smiling_face_with_tear:

Thanks @mattdr for the details! That's helpful.

Nice performance and great work scaling up your annotations!

I still can't find anything obvious :thinking:.

Could you try running your ner.correct command but saving the annotations to a different dataset than dataset_cw? By default, Prodigy skips incoming examples whose hashes are already in the dataset you're saving to, so reusing dataset_cw may be why you're not seeing the sentences you already annotated with ner.manual.

For example:

prodigy ner.correct dataset_cw_gold models/model-last final_dataset.jsonl --label MODULE,LOGISTICS,PRODUCT,HR,POLICY

I named it "gold" because ner.correct used to be called ner.make-gold (see here), and is sometimes thought of as a "gold standard" annotation.

Also, can you try running print-stream on a sample of your final_dataset.jsonl? For example, create a new file with, say, the first 10 records of that file.
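On Linux/macOS you could create that sample with something like:

head -n 10 final_dataset.jsonl > final_dataset_first10.jsonl

and then run: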

prodigy print-stream models/model-last final_dataset_first10.jsonl

This recipe will make predictions on the stream. Just a heads up: it will score all of the source records, which is why I recommended a smaller file. You can try it on more records though :slight_smile:.

To check that things are working, you can even replace models/model-last with a pretrained pipeline like en_core_web_sm to see what you'd expect the output to look like.
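For instance (assuming you have en_core_web_sm installed):

prodigy print-stream en_core_web_sm final_dataset_first10.jsonl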

Also, a cool trick: you can use your model to score previously annotated data like:

prodigy print-stream models/model-last dataset:dataset_cw

or just the part of the data you've accepted, rejected, or ignored. For example, you can score any annotations you've ignored by running:

prodigy print-stream models/model-last dataset:dataset_cw:ignore

Alternatively, you can try your model models/model-last in spacy-streamlit (GitHub: explosion/spacy-streamlit).

This gives a great interface for showing the model to users/non-data scientists but can also help make sure the model is predicting as you expect.
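If you go that route, here's a minimal sketch of an app, assuming spacy-streamlit is installed (the file name app.py and the example text are just placeholders):

import spacy_streamlit

# Models can be package names or paths loadable by spacy.load()
models = ["models/model-last"]
default_text = "Paste a sentence from your data here."
spacy_streamlit.visualize(models, default_text, visualizers=["ner"])

You'd then launch it with streamlit run app.py.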

If you find examples showing the model is predicting entities correctly, but they still don't show up in Prodigy, let us know.