Annotation strategy for gold-standard data

Dear all,

This morning we had a confusing discussion about using Prodigy for NER. We use ner.make-gold to annotate our named entities, but at the moment we don't get any suggestions from the system, probably because we don't have enough data yet. So we just use it like ner.manual and hope that there will be suggestions one day.

python3.5 -m prodigy ner.make-gold dataset de_core_news_sm ondata.csv --label "k1,k2" --exclude dataset -U

Now we are unsure how to handle these cases:

Case 1: We find entities in the text, we annotate them and press ACCEPT.
Case 2: The system finds entities, we ACCEPT or REJECT.
Case 3: There are no entities. Do we ACCEPT? Do we REJECT?

Case 3 is the case we had the discussion about. If there are no entities, then the example is correct as it is. Or do we need to press REJECT because we want this kind of data as samples with no entities? Or should we IGNORE these texts without entities?

We are just confused because ner.make-gold suggests nothing in cases where there is nothing to suggest, so that would be correct (ACCEPT), but it seems pointless to annotate.

Thank you very much.

Frederik

Hi Frederik,

The --label k1,k2 argument tells Prodigy to only suggest entities that have been assigned those labels. These labels are not in the de_core_news_sm pre-trained model you're using, and the make-gold recipe doesn't update the model. This means no entities will ever be suggested by the model.
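
If you want to double-check which entity labels a pretrained pipeline actually provides, you can print them from the command line (assuming spaCy and de_core_news_sm are installed; use whichever Python executable you run Prodigy with):

  python3.5 -c "import spacy; print(spacy.load('de_core_news_sm').get_pipe('ner').labels)"

For de_core_news_sm this should print the built-in labels (LOC, MISC, ORG and PER), so custom labels like k1 or k2 won't be in there and the model can never suggest them.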

We should add a warning (or possibly error) if you specify labels not in the model during make-gold. We have similar warnings for most other recipes, as it's an easy mistake to make, especially by mistyping the label name.

The simple answer for Case 3: Mark ACCEPT.

The workflow in ner.make-gold should be that you highlight all and only the correct entity spans, and then press ACCEPT once the example is correct. You can use REJECT to mark deeper problems for you to resolve later. For instance:

  • Sometimes the tokenization is incorrect, preventing you from marking the entity boundaries correctly;

  • Sometimes you don't have a correct category to put the entity in, so you'd like to revisit the example once you've updated your label scheme;

  • Sometimes the entity contains other entities within it, and you'd like to note that in your downstream evaluation.

If there are no problems like this, it's often the case that the correct analysis has no entities. These examples are important for the model to learn from, so you need them in your training data.
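
If you're curious what those accepted no-entity examples look like in your data, you can export the dataset to JSONL (the dataset and file names here are just placeholders, and depending on your Prodigy version you may need to pass an output directory instead of redirecting stdout):

  python3.5 -m prodigy db-out dataset > annotations.jsonl

An accepted text without entities should come out roughly as {"text": "...", "spans": [], "answer": "accept"}, i.e. an empty span list with an accept answer, which is exactly the signal the model needs to learn that some texts contain no entities.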

Ah, yes. You are right, that was a stupid mistake. Thank you. We will change the model once we can train our own.

Okay, then we will continue ACCEPTING texts without entities, where nothing is marked, instead of REJECTING them. We were just wondering because the annotation quality was so bad.

I also went through this and noticed that my model was getting better when I accepted texts without entities instead of rejecting them. Happy to read this, it kind of makes it official and not only experimental.

What I did to make my gold NER annotation faster was to first do a normal training pass for the new entities with some examples, then do a batch train and export the model. Then I used that model to annotate the gold data.

I did a few hundred gold examples, exported them, and used them to train a model with the spaCy batch-train example, loading the model I had used to create the gold data. Doing this I improved the model, and then did another few hundred gold examples with the newly created model. The suggestions get better and better, and you only have to make a few corrections. I did this a few times.

At some point the iterated model gets pretty good, and it becomes quick to do 500 or more examples.
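
As a rough sketch of that loop (dataset names, model paths and exact recipe flags are just illustrative and may differ between Prodigy versions):

  # train on the gold annotations collected so far, starting from the previous model
  python3.5 -m prodigy ner.batch-train gold_dataset ./model_v1 --output ./model_v2
  # annotate the next batch with the improved model's suggestions
  python3.5 -m prodigy ner.make-gold gold_dataset ./model_v2 ondata.csv --label "k1,k2" --exclude gold_dataset -U
  # repeat until the suggestions only need small corrections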

@idealley Really appreciate the report! We're still building up the set of best practices on these things. There are only so many tasks we can work through internally, so it's super useful to hear what's working well on users' problems as well.

So the model gets updated from the annotations. What I am wondering is how to make the model more "strict", i.e. not find entities which are not entities in the real world. If, for example, there is no entity within a text and I unmark an incorrectly labelled one and accept, will the model learn during batch training that the unmarked text should not be an entity?