Prodigy Annotation: Best Practices

I have labelled data using keywords and now I want to check the performance, so I have been using ner.correct. When I find a wrong annotation, I cross out the annotated phrase, and if there is a word that should be annotated, I annotate it manually in the Prodigy UI. Then I accept the annotation and save the data. Is this the right way to annotate, or should we reject the examples that are not correct? If we have to reject them, how can I correct the annotation and then accept it?

Hi @ta13,

That's the right approach: correct the wrong annotations in ner.correct and save them. Afterwards, you can use those corrected annotations to further improve your model. Rejecting in ner.correct is only done if you don't want a particular example included in the gold dataset at all (e.g. the text is corrupted, or the labels are unclear or inconsistent).
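
In case a concrete command helps, here's a minimal sketch of that workflow. The dataset name, base model, source file, and labels are made up for illustration, and the `prodigy train --ner` syntax assumes Prodigy v1.11 or later:

```bash
# Review the keyword-based annotations and save corrections into a gold dataset
# (dataset name, base model, and source file are hypothetical)
prodigy ner.correct ner_gold en_core_web_sm ./texts.jsonl --label PERSON,ORG

# Train on the corrected annotations (v1.11+ syntax)
prodigy train ./output_model --ner ner_gold
```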


Hi
Thanks for your answer. I have some related queries:

  1. When I provide the data to annotators, how can I find out how much data they have annotated?
  2. How can annotators pick up where they left off if they take a break, close the window, and come back later?
  3. After annotating, Prodigy shows that there are no tasks available. If I run ner.correct again and provide it to another annotator, it still shows no tasks available. Why does this happen, and how can I solve it?

Hi @ta13,

  1. You can use a separate dataset for each annotator and run prodigy stats on each dataset to check their progress (see the sketch after this list).
  2. When an annotator closes the window and comes back, Prodigy will request the next available unannotated batch from the server.
  3. When this happens in ner.teach, it usually means there's nothing left to annotate in the source data. Since ner.teach uses the model to suggest entities, every relevant example has already been annotated.
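
For the first point, here's a minimal sketch of that per-annotator setup. The dataset names, base model, and source file are hypothetical:

```bash
# A separate dataset per annotator (names are made up)
prodigy ner.correct ner_gold_alice en_core_web_sm ./texts.jsonl --label PERSON,ORG
prodigy ner.correct ner_gold_bob en_core_web_sm ./texts.jsonl --label PERSON,ORG

# Check how many annotations each annotator has saved so far
prodigy stats ner_gold_alice
prodigy stats ner_gold_bob
```

If both annotators need to work at the same time, each Prodigy instance would also need to run on its own port.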