I followed the steps outlined in Prodigy 101 to annotate data with ner.manual. The only difference between my setup and the one in Prodigy 101 is that I am using the blank:zh model for my Chinese-language data.
However, when I run train (again, just as described in 101), it shows a score of 0. I annotated 500 entries, and I am not running a custom tokenizer (as in this related question), so I'm not really sure how to troubleshoot this.
Thanks for your question! Yes, I would have suspected the tokenizer first, but it doesn't seem like that's the issue.
What version of Prodigy are you running? If you're using (or able to install) our latest version, Prodigy v1.13.1, you could try the new ner.model-annotate recipe to review examples, comparing the model's predictions with your annotations, and see for yourself.
This recipe was originally designed to compare the output of different models, but you can use it the same way: run it once with your model's results and compare those with your actual annotations. I suspect that once you start looking through, you'll find a pattern in what's going on underneath.
To do this, you'll first need to use ner.model-annotate to score your data with your model:
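A sketch of that command follows. The dataset name ner_model_results, model path ./output, source ./input.jsonl, and the alias my_model mirror the description below; the labels are placeholders, and the exact argument order may vary by version, so confirm with prodigy ner.model-annotate --help first:

```shell
# dataset to create, trained model, raw source, and a display alias for the model
python3 -m prodigy ner.model-annotate ner_model_results ./output ./input.jsonl my_model \
    --labels LABEL_A,LABEL_B,LABEL_C
```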
This assumes your trained model is in the ./output folder and your raw data is in ./input.jsonl (FYI, you could even pass dataset:ner_data if your original annotations were saved in the ner_data dataset). Also, make sure to modify the labels, as it's not clear which three labels you're using.
This recipe will simply "score" (i.e., annotate/label) all of the data from your input source and save it into the new dataset ner_model_results.
Then you can review both your ner_model_results and your original labeled data ner_data using the review recipe:
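As a sketch, the review step might look something like this, where reviewed_dataset is a hypothetical name for the output dataset and --view-id ner_manual selects the NER annotation interface:

```shell
python3 -m prodigy review reviewed_dataset ner_model_results,ner_data --view-id ner_manual
```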
OSError: [E050] Can't find model './output/ner_refno_vendors'. It doesn't seem to be a Python package or a valid path to a data directory.
The same thing happens if I remove the './output'.
I'm not really sure what to do, as I didn't change any of the db configuration, and when I ran the train recipe, it just seemed to know where the model was (python3 -m prodigy train --ner ner_refno_vendors).
I'm also not quite sure what you mean by passing dataset:ner_data -- does this replace the path to the input JSONL, or is it in addition to it? I've tried replacing the JSONL path with dataset:ner_refno_vendors in the command above, and it generates the following error:
prodigy ner.model-annotate: error: the following arguments are required: model_alias
Sorry, I am totally new to all of this and just don't know what I don't know yet.
where ner_refno_vendors is the dataset I annotated, ./refno_output_dir/model-last is the model output after running train, /path/to/data_sample.jsonl is the path to my original JSONL data, ner_refno_vendors is again my dataset, and --labels REFNO,WIN_VEND,WIN_AMT are the labels I used in annotating. (I don't understand what ner_model_results refers to in the command you provided, so I substituted the name of the dataset I've annotated.)
I am getting the error message:
✘ Requested labels: WIN_VEND,WIN_AMT are not found in the pipeline:
But if I re-open the annotator with ner.manual, they absolutely are there:
Sorry for the delay! I was on holiday last week, and a few teammates were too. Looking back, I realize I could have added a few more links to the docs for context, which would have avoided some confusion.
I'll work through your responses in order to make sure I don't miss anything.
So as you may have realized, you needed to set the output path for where your model would be saved. Since your model wasn't saved anywhere, ner.model-annotate couldn't find it.
We mention this in the prodigy train docs: the first argument is output_path. I suppose you assumed the model is saved to a default location, but it is only saved to disk when you specify output_path. So try rerunning prodigy train with an output path to save your model.
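As a sketch, using ./output as the save location and your dataset name from earlier in the thread:

```shell
python3 -m prodigy train ./output --ner ner_refno_vendors
```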
Before you use ner.model-annotate, double-check that your model is saved and works. First, look at the output_path folder on disk. Do you see the folder? Within it, are there two subfolders, model-best and model-last? That's the default behavior of spaCy, which is what runs under prodigy train (see this StackOverflow answer describing both). Second, try loading one of the models with spaCy, i.e., open up Python (e.g., in a Jupyter notebook or Python shell):
import spacy

nlp = spacy.load("output_path/model-best")
doc = nlp("Here's an example sentence.")
If this works, you've confirmed your model was saved and loads correctly. Now you can pass it to ner.model-annotate.
This loads from an existing dataset (see the docs). It saves you from exporting your data, since you can reuse your annotated dataset as the source for a new command.
It seems like you provided both dataset:ner_refno_vendors and the .jsonl file. You're right: it was an alternative to providing your .jsonl file, so you only needed one or the other. I mentioned it as a quick trick, but had I provided the exact link to the docs this would have been clearer. Feel free to ignore dataset:ner_refno_vendors if it's confusing.
Thanks for mentioning this! Per the ner.model-annotate docs, that's the model_alias, the name under which your model's scored results will be stored.
One other trick -- if you ever have questions about a built-in recipe and its arguments, run:
prodigy ner.model-annotate --help
This will show you all of the arguments for the recipe. This can help you debug a lot quicker!
This suggests that your model (aka pipeline) doesn't have these labels in it. With the same model you're passing to ner.model-annotate, can you run:
import spacy

nlp = spacy.load("output_path/model-best")  # the model you're passing to ner.model-annotate
print(nlp.get_pipe("ner").labels)
This will show you which labels the model actually has. You may have misspelled one, and that could even be the root cause of the problem with your original model.
Thanks again for your patience! Prodigy is great because it can do a lot, but that also makes it a bit challenging at first. You're doing great, though, and hopefully this gets you moving fast soon!
Thank you so much for this very comprehensive response.
Interestingly, when I wrote my last post, I did NOT have a "model-best" folder in my output directory, but now I do. I have no idea when that appeared.
I've run the code you suggested, and it seems the trouble is, as you said, that two of my labels did not save. I'm not really sure how to fix this. As you can see in the screenshot in my last post, they show up when I run ner.manual. I have annotated hundreds of entries using all three of them, and I just annotated a few more to make extra sure all three labels have been applied.
Is there something in particular I need to do to make sure all three labels save properly? Why would all three appear when I run ner.manual, but only one appears when I run nlp.get_pipe('ner').labels? I don't think I've misspelled anything, but a misspelling shouldn't matter if I'm just asking it to return the labels I've already made...
First, there's running the Prodigy interface. This is what your screenshot showed. When you run prodigy ner.manual ..., Prodigy simply starts a server and loads your records. Think of it as the very first time you've run the server, before you've saved even one annotation.
The fact that you can see these labels simply means you started a server with these labels; it does not necessarily tell you what is in your actual data.
Second, there's what data is actually saved in your dataset. These are your real annotations, and it seems your data may not have all the labels you think it does. One way to check is to export your annotations (e.g., prodigy db-out ner_refno_vendors > data.jsonl) and run a script to count the labels. An alternative, faster way is to use data-to-spacy and then spacy data debug to get basic stats about your dataset.
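For the counting-script route mentioned above, here's a minimal sketch. It assumes the standard format of db-out exports, where each line is a JSON object with a "spans" list whose entries carry a "label" field; the sample records and their labels are made up for illustration:

```python
from collections import Counter
import json

def count_labels(lines):
    """Count NER span labels across exported Prodigy examples (one JSON object per line)."""
    counts = Counter()
    for line in lines:
        example = json.loads(line)
        for span in example.get("spans", []):
            counts[span["label"]] += 1
    return counts

# Two fake exported records, standing in for the contents of data.jsonl:
sample = [
    json.dumps({"text": "...", "spans": [{"label": "REFNO"}, {"label": "WIN_VEND"}]}),
    json.dumps({"text": "...", "spans": [{"label": "REFNO"}]}),
]
print(count_labels(sample))  # Counter({'REFNO': 2, 'WIN_VEND': 1})
```

In practice you'd read the lines from the exported file instead of the inline sample, e.g. with open("data.jsonl") as f: count_labels(f).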
View the data-to-spacy docs to understand what each argument means. In short, it converts your dataset ner_refno_vendors into a new output folder ./corpus, writing your data as spaCy binary datasets (i.e., ./corpus/train.spacy and ./corpus/dev.spacy) along with a default config file and a file listing your label names. FYI, since you're working in Chinese, you may also want to add --lang zh to data-to-spacy so it uses the right tokenizer.
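Concretely, the data-to-spacy call being described might look like this (paths and dataset name taken from this thread, with --lang zh added per the note about Chinese):

```shell
python3 -m prodigy data-to-spacy ./corpus --ner ner_refno_vendors --lang zh
```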
Now look into that folder. Find the labels/ner.json file. What labels does it show? Does it show all of the labels you expect?
As another option, you can run spacy data debug, providing the paths to your train/dev binary files like this:
python3 -m spacy data debug ./corpus/config.cfg --paths.train ./corpus/train.spacy --paths.dev ./corpus/dev.spacy
You should now get some stats on your dataset. What do you see?
I suspect you may not see the labels you expect -- namely, you'll see only 1 label. Regardless, this should be very helpful for debugging your original question about the model.
The third thing is the model's labels. In theory, these are a function of your annotated data (#2), assuming you used the exact same dataset ner_refno_vendors. This is what you're checking when you run nlp.get_pipe('ner').labels. Since you're only getting 1 label, I suspect #2 will show that your actual data contains only 1 label.
Does this make sense?
Perhaps the two labels you can't find are very rare -- so rare that they didn't appear in either the training or evaluation dataset, or appeared only a handful of times, nowhere near enough for the model to learn. You should see that in your spacy data debug output.
Isn't this referencing the data I have already saved? Or are you saying that it doesn't matter what I saved before -- whatever label flags I enter at the command prompt are what shows up at the top of the screen?
In any case, there are indeed 3 labels in my data. Running
I personally annotated all the entries, so I know that the 2 'missing' labels do appear more than a handful of times. But maybe even the dozens-to-hundreds of times they do appear are not enough . . . ? I can always annotate more if we think that will solve the problem.
Again, thank you so much. I have plenty of python experience but this is all very new to me.
Yes! Each time you run prodigy ner.manual ..., it'll show whatever labels you pass on the command line. This is because sometimes you may want to label one label at a time and add it to the dataset. Just to make sure you have the right mental model: in theory (you wouldn't actually want to do this and add an incorrect label), you could run prodigy ner.manual ner_refno_vendors blank:zh --label SOME_NEW_NAME, and Prodigy would show only SOME_NEW_NAME in your interface while the data in ner_refno_vendors stayed exactly the same. Hope this makes sense!
But this is great news! It looks like you have really good labels, and nothing major is wrong with the data.
Looking back at your original question ("train only outputs 0 scores"), what if you run:
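The recipe being referred to here appears to be prodigy train-curve, which trains on increasing fractions of your annotations to show whether accuracy is still improving as data is added. This invocation is an assumption reconstructed from the description below; check prodigy train-curve --help for the exact arguments:

```shell
python3 -m prodigy train-curve --ner ner_refno_vendors
```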
It's a wonderful way to check whether you should label more annotations.
But also (completely optional), feel free to try out ner.model-annotate too, just to see some examples. I bet you'll quickly find cases where your model departs from your training examples, and you'll likely find ways to improve your labeling on newer examples.
Sorry for the roundabout way of figuring it out, but honestly it's hard to debug without metrics like the ones above (e.g., data-to-spacy, data debug). In fact, I would recommend using the data-to-spacy -> spacy train workflow in the future. When you run data-to-spacy, it'll print the exact command to run spacy train instead of prodigy train.
The reason is that it's more explicit and gives you more control, since you can make architecture changes and tune hyperparameters through the training config.
prodigy train is really just a wrapper around spacy train, intended more for initial or quick-and-dirty training. It doesn't give you a dedicated held-out evaluation set (i.e., each time you train, it reshuffles your train/dev split). That's what's nice about data-to-spacy: it creates those splits once and keeps them fixed.
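With the ./corpus output from data-to-spacy, that workflow might look like the following sketch (the --output path is illustrative; data-to-spacy prints the exact command for your setup):

```shell
python3 -m spacy train ./corpus/config.cfg \
    --paths.train ./corpus/train.spacy \
    --paths.dev ./corpus/dev.spacy \
    --output ./trained_model
```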
IT WORKED!!! --lang zh was the answer. (I should have guessed given the amount of trouble language conversion has given me in other cases.)
Going through this entire process was extremely helpful, though -- I understand better where things might go wrong and how to troubleshoot them. Part of the problem with any new module or program is simply not knowing how to ask the right questions or what the common debugging steps are. Thank you so much!