Created new model with Ner.manual, but train only outputs 0 scores

I followed the steps outlined in Prodigy 101 to annotate data with ner.manual. The only difference between my setup and the one in Prodigy 101 is that I am using the blank:zh model for my Chinese-language data.

However, when I run train (again, just as described in 101), it shows a score of 0. I annotated 500 entries, and I am not running a custom tokenizer (as in this related question), so I'm not really sure how to troubleshoot this?

hi @jbat!

Thanks for your question! Yes, I would have suspected the tokenizer first too, but it doesn't seem like that's the issue.

What version of Prodigy are you running? If you're using (or able to install) our latest version, Prodigy v1.13.1, you could try the new ner.model-annotate recipe to review examples, comparing the model's predictions against your annotations, and see for yourself where they differ.

This recipe was originally designed to compare the output of different models, but you can use it the same way here: run it once with your model's results and compare those with your actual annotations. I suspect that once you start to look through, you'll find a pattern of what's going on underneath.

To do this, you'll first need to use ner.model-annotate to score your data with your model:

prodigy ner.model-annotate ner_model_results ./output/model-best ./input.jsonl ner_model_results --labels PERSON,ORG

This assumes your trained model is in the ./output folder and your raw data is in ./input.jsonl (FYI, you could even pass dataset:ner_data if your original annotations were saved in the ner_data dataset). Also, make sure to modify the labels, as it's not clear which three labels you're using.

This recipe will simply "score" (aka annotate/label) all of your data from your input source and save it into the new dataset ner_model_results.

Then you can review both your ner_model_results and your original labeled data ner_data using the review recipe:

prodigy review ner_data ner_model_results --view-id ner_manual --label PERSON,ORG

Again, you'll need to update the labels, but this should show you examples side by side so you can compare. Let me know if this works and is helpful.

Also, you may want to search spaCy's discussion forum a bit. Since you're using spaCy for training, I know they've had posts on similar problems and suggestions.

Thanks!

I am running 1.13.1.

I am trying to run ner.model-annotate, but can't seem to reference the model I created with train.

I ran

python3 -m prodigy ner.model-annotate ner_model_results ./output/ner_refno_vendors /path/to/data_sample.jsonl dataset:ner_refno_vendors --labels REFNO,WIN_VEND,WIN_AMT

And got this error:

OSError: [E050] Can't find model './output/ner_refno_vendors'. It doesn't seem to be a Python package or a valid path to a data directory.

The same thing happens if I remove the './output'.

I'm not really sure what to do, as I didn't change any of the db configuration, and when I ran the train recipe, it just seemed to know where the model was (python3 -m prodigy train --ner ner_refno_vendors).

I'm also not quite sure what you mean by passing dataset:ner_data -- does this replace the path to the input JSONL, or is it in addition to it? I've tried replacing the JSONL path with dataset:ner_refno_vendors in the command above, and it generates the following error:

prodigy ner.model-annotate: error: the following arguments are required: model_alias

Sorry, I am totally new to all of this and just don't know what I don't know yet.

Thanks!

I am still stuck and unable to move forward on this. I do not understand the recipe you offered, and even after reading the ner.model-annotate docs I am still unsure what to do.

Here is the command you suggested:

prodigy ner.model-annotate ner_model_results ./output/model-best ./input.jsonl ner_model_results --labels PERSON,ORG

Here is what I am trying:

python3 -m prodigy ner.model-annotate ner_refno_vendors ./refno_output_dir/model-last /path/to/data_sample.jsonl ner_refno_vendors --labels REFNO,WIN_VEND,WIN_AMT

where ner_refno_vendors is the dataset I have annotated, ./refno_output_dir/model-last is the model I output after running train, /path/to/data_sample.jsonl is the path to my original jsonl data, ner_refno_vendors is again my dataset, and --labels REFNO,WIN_VEND,WIN_AMT are the labels I used in annotating. (I do not understand what ner_model_results refers to in the command you provided, so I subbed in the name of the dataset I've annotated.)

I am getting the error message:
✘ Requested labels: WIN_VEND,WIN_AMT are not found in the pipeline:
REFNO.

But if I re-open the annotator with ner.manual, they absolutely are there:

Please advise what to do. This software was not an insubstantial expense for me and I am hoping you can provide support so I understand what is going wrong here.

hi @jbat!

Sorry for the delay! I was on holiday last week and a few teammates were too. Looking back, I realize I should have added a few more links to the docs for context, which would have avoided the confusion.

I'll work through your first and second responses in order to make sure I don't miss anything.

So as you may have realized, you need to set the output path for where your model should be saved. Since your model was never saved, ner.model-annotate was telling you it couldn't find the model because it didn't exist anywhere on disk.

We mention this in the prodigy train docs: the first argument is output_path. I suspect you were assuming the model gets saved to a default location, but it is only saved to disk when you specify output_path. So try to rerun prodigy train and save your model.
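For example (assuming you want to save the model to a folder called ./output), it would look something like:

prodigy train ./output --ner ner_refno_vendors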

Before you use ner.model-annotate, double check that your model is saved and works. First, look at the output_path folder. Do you see the folder? Within it, does it have two subfolders, model-best and model-last? That's the default behavior of spaCy (which prodigy train runs under the hood; see this StackOverflow post describing both). Second, try to load one of the models using spaCy, i.e., open up Python (e.g., in a Jupyter notebook or Python shell):

import spacy
nlp = spacy.load("output_path/model-best")
doc = nlp("Here's an example sentence.")
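# Optional, illustrative extra check (not part of the original steps):
# print any entities the loaded model finds in the example sentence
print([(ent.text, ent.label_) for ent in doc.ents])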

If this works, you've confirmed your model was saved and loads correctly. Now you can pass it to ner.model-annotate.

This loads from an existing dataset (see the docs). It saves you from having to export your data, since you can reuse your annotated dataset as the source for a new command.

It seems like you provided both dataset:ner_refno_vendors and the .jsonl file. Yes, you're right, it was an alternative to providing your .jsonl file: you only need to provide either the .jsonl path or dataset:ner_refno_vendors. I mentioned it as a quick trick, but had I provided the exact link to the docs this would have been clearer. You can just ignore the dataset:ner_refno_vendors if it's confusing.
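For example (following the same argument order as the command I gave earlier, and using a hypothetical alias refno_model for the scored results), the dataset version would look something like:

prodigy ner.model-annotate ner_model_results ./output/model-best dataset:ner_refno_vendors refno_model --labels REFNO,WIN_VEND,WIN_AMT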

Thanks for mentioning this! Per the ner.model-annotate docs, that's the model_alias, which is the name used to tag the scored results from your model.

One other trick -- if you ever have questions about a built-in recipe and its arguments, run:

prodigy ner.model-annotate --help

This will show you all of the arguments for the recipe. This can help you debug a lot quicker!

This suggests that your model (aka pipeline) doesn't have these labels in it. With the same model you're passing into ner.model-annotate, can you run:

import spacy
nlp = spacy.load("output_path/model-best") # assume this model you're passing to ner.model-annotate
nlp.get_pipe('ner').labels

This will show you what your labels are. You may have misspelled them; this might even be the problem with your original model.

Thanks again for your patience! Prodigy is great because it can do a lot, but that also makes it a bit challenging at first. You're doing great, and I'm hoping (:crossed_fingers:) this gets you moving fast soon!

Thank you so much for this very comprehensive response.

Interestingly, when I wrote my last post, I did NOT have a "model-best" folder in my output directory, but now I do. I have no idea when that appeared.

I've run the code you suggested, and it seems like the trouble is, as you said, that two of my labels did not save. I'm not really sure how to fix this. As you can see in the screenshot in my last post, they show up when I run ner.manual. I have annotated hundreds of entries using all three of them, and I just annotated a few more to make extra sure all three labels have been applied.

Is there something in particular I need to do to make sure all three labels save properly? Why would all three appear when I run ner.manual, but only one appears when I run nlp.get_pipe('ner').labels? I don't think I've misspelled anything, but a misspelling shouldn't matter if I'm just asking it to return the labels I've already created...

I think you're getting a few things confused.

There are three different things:

  1. Running Prodigy, the interface. This is what your screenshot showed. When you run prodigy ner.manual ..., that simply starts a Prodigy server loading your records. Think of it like the very first time you run your server, before you've even saved a single annotation.

For example, the fact that you can see these labels simply means you have started a server with these labels. It does not necessarily tell you what is in your actual data.

  2. What data is in your dataset/saved. This is your actual annotations. It seems like your actual data may not have the labels you think it does. One way to check is to export your annotations (e.g., prodigy db-out ner_refno_vendors > data.jsonl) and then run a script to count the labels (see the sketch at the end of this post). An alternative, faster way is to use data-to-spacy and then spacy debug data to get basic stats about your dataset:
python3 -m prodigy data-to-spacy ./corpus --ner ner_refno_vendors

View the data-to-spacy docs to understand what each argument means: it will convert your dataset ner_refno_vendors into a new output folder ./corpus, converting your data to spaCy binary datasets (i.e., ./corpus/train.spacy and ./corpus/dev.spacy), and it will provide a default config file plus a file with your label names. FYI, since you're working in Chinese, you may also want to add --lang zh to data-to-spacy so it uses the right tokenizer.

Now look into that folder and find the labels/ner.json file. What labels does it show? Does it show all of the labels you expect?

As another option, you can run spacy debug data, providing the paths to your train/dev binary files like this:

python3 -m spacy debug data ./corpus/config.cfg --paths.train ./corpus/train.spacy --paths.dev ./corpus/dev.spacy

You should now get some stats on your dataset. What do you see?

I suspect you may not see all the labels you think you have, perhaps only 1. Regardless, this should be very helpful for debugging your original question/model.

  3. The model's labels. This would in theory be a function of your annotated data (#2) if you used the exact same dataset ner_refno_vendors. This is what you're checking when you run nlp.get_pipe('ner').labels. Since you're only getting 1 label, this is why I think #2 will show that your actual data only has 1 label.

Does this make sense?

Perhaps the two labels you can't find are very rare, so rare that they didn't appear in either the training or evaluation dataset (or only a handful of times), nowhere near enough for the model to learn. You should see that in your spacy debug data output.
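By the way, here's the kind of quick label-counting script I had in mind for #2. It's just a minimal sketch: it assumes you've exported your annotations with prodigy db-out ner_refno_vendors > data.jsonl and counts the span labels in your accepted examples:

import json
from collections import Counter

counts = Counter()
with open("data.jsonl", encoding="utf-8") as f:
    for line in f:
        example = json.loads(line)
        # Skip anything you rejected or ignored in the UI
        if example.get("answer") != "accept":
            continue
        # Each NER annotation is stored as a span with a "label" key
        for span in example.get("spans", []):
            counts[span["label"]] += 1

print(counts)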

Thank you so much, this is really helpful (even if I am still a bit confused)!

When I run ner.manual, it shows me as already having annotated several hundred entries (here is the screenshot from a new server I just started):

Isn't this referencing the data I have already saved? Or are you saying that it doesn't matter what I have saved before -- whatever label flags I enter at the command prompt are what shows up at the top of the screen?

In any case, there are indeed 3 labels in my data. Running

python3 -m prodigy data-to-spacy --lang zh ./corpus --ner ner_refno_vendors 

produces the labels/ner.json file, which indeed shows all 3 labels:

Running

python3 -m spacy debug data ./corpus/config.cfg --paths.train ./corpus/train.spacy --paths.dev ./corpus/dev.spacy

produces the following:

This also shows 3 labels.

I personally annotated all the entries, so I know that the 2 'missing' labels do appear more than a handful of times. But maybe even the dozens-to-hundreds of times they do appear are not enough . . . ? I can always annotate more if we think that will solve the problem.

Again, thank you so much. I have plenty of python experience but this is all very new to me.

Yes! Each time you run prodigy ner.manual ..., it'll show whatever labels you pass on the command line. This is because sometimes you may want to label one label at a time and add that to the dataset. This is just to clarify so you have the right idea about labeling. For example, in theory (you wouldn't want to do this and add an incorrect label), you could run prodigy ner.manual ner_refno_vendors blank:zh --label SOME_NEW_NAME and Prodigy would only show that new label SOME_NEW_NAME in your interface, while the data already saved in ner_refno_vendors would stay exactly the same. Hope this makes sense!

But this is great news! It looks like you have really good labels, and nothing major is wrong with the data.

Looking back at your original question ("train only outputs 0 scores"), what if you run:

prodigy train ./output --ner ner_refno_vendors --lang zh

That is, adding --lang zh for training (and of course adding ./output to save your model). Do you now finally get a legitimate loss/accuracy?

I suspect this may be it. :sweat_smile:

Just for fun, can you also run train-curve with:

prodigy train-curve --ner ner_refno_vendors --lang zh

It's a wonderful way to gauge whether you should label more annotations.

But also (completely optional), feel free to try out ner.model-annotate too, just to see some examples. I bet you'll quickly find cases where your model departs from your training examples, and you'll likely find ways to improve your labeling on newer examples.

Sorry for the roundabout way of figuring it out, but to be honest, it's hard to debug without metrics like the above (e.g., data-to-spacy, spacy debug data). In fact, I would recommend using a data-to-spacy -> spacy train workflow in the future. When you run data-to-spacy, it'll give you the exact command to run spacy train instead of prodigy train.

The reason is that it's more explicit and gives you more control, as you can make many more intermediate architecture changes and do hyperparameter tuning by training your model with a config.

prodigy train is really just a wrapper for spacy train and is more intended for initial or quick-and-dirty training. It doesn't provide a dedicated held-out evaluation set (i.e., each time you train, it'll reshuffle your train/dev split). That's what's nice about data-to-spacy: it creates these splits for you.
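As a rough example, the spacy train command it prints typically looks something like this (your exact paths and config may differ):

python3 -m spacy train ./corpus/config.cfg --output ./output --paths.train ./corpus/train.spacy --paths.dev ./corpus/dev.spacy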

Last, once you start getting a good workflow, I'd encourage you to learn a bit about spaCy projects. It's a nice way to develop your entire project workflow. We have an example of a Prodigy spaCy project template here: https://github.com/explosion/projects/tree/v3/integrations/prodigy

This way, you can have a fully reproducible pipeline, including all steps, and get your model ready for production, for example by serving it with FastAPI (example).

Hope this helps!

IT WORKED!!! --lang zh was the answer. (I should have guessed given the amount of trouble language conversion has given me in other cases.)

Going through this entire process was extremely helpful, though -- I understand better where things might go wrong and how to troubleshoot them. Part of the problem with any new module or program is simply not knowing how to ask the right questions and not knowing the common debugging steps. Thank you so much!
