Doubt about the use of review

While searching for a way to compare NER annotations in the manuals/documentation, I came across the review recipe, but couldn't find a solution to a problem that emerges in the process. Is there no way I can compare only the differences between the annotations? That is, ignoring the ones that are identical.

Hi! I'm not sure I understand your question correctly – but the review recipe and workflow were specifically designed to compare multiple annotations on the same input data, view them together to see the differences and create one final annotation. You can see an example of the interface here – see the second UI preview for the manual NER view with two conflicting annotations:

By default, you'll see all annotations again because the goal of the review process is typically to create a final dataset to train from, and there may still be cases where all annotators agree on the wrong answer, etc.

If you want to only see examples with disagreements, you could add your own filter to the stream that skips examples with only one version. Under the hood, the review logic adds a list of "versions" to each task, so an example with only one version is an example without disagreements. You could also customise the recipe to automatically save those to the new dataset you're using for the reviewed examples, so you end up with one final dataset to train from, without re-annotating examples with no conflicts.
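Here's a minimal sketch of what such a filter could look like, assuming the stream is an iterable of task dicts that the review logic has already annotated with a `"versions"` list (the function name `filter_disagreements` is just an illustrative choice, not part of any built-in API):

```python
def filter_disagreements(stream):
    """Yield only tasks with conflicting annotations, i.e. tasks
    that have more than one entry in their "versions" list."""
    for task in stream:
        versions = task.get("versions", [])
        if len(versions) > 1:
            yield task

# Dummy stream: only the second task has conflicting versions
stream = [
    {"text": "Task A", "versions": [{"answer": "accept"}]},
    {"text": "Task B", "versions": [{"answer": "accept"},
                                    {"answer": "reject"}]},
]
filtered = list(filter_disagreements(stream))
print(len(filtered))  # 1
```

You'd then wrap the stream returned by the recipe in this filter, so the annotation interface only ever queues up the conflicting examples.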
