Review recipe with blocks interface

Hello.
I'm having some difficulties with the review recipe.
The annotation task is basically a ner_manual task, but I use the blocks interface to add an HTML block that gives the annotator more information about the sentence.
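For context, the recipe is set up roughly like this – a simplified sketch, where the recipe name, the html_template and the labels are just placeholders:

import prodigy
from prodigy.components.loaders import JSONL

@prodigy.recipe("ner-with-context")
def ner_with_context(dataset, source):
    # One ner_manual block, plus an HTML block with extra info about the sentence
    blocks = [
        {"view_id": "ner_manual"},
        {"view_id": "html", "html_template": "<p>{{extra_context}}</p>"},
    ]
    return {
        "dataset": dataset,
        "stream": JSONL(source),
        "view_id": "blocks",
        "config": {"blocks": blocks, "labels": ["PERSON", "ORG"]},
    }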
We have two annotators using two different sessions, and now I want to review their annotations.
However, when I start the review recipe, the web application opens with the message "No blocks available.".

Can you help me?

The Prodigy version is 1.9.4 and I'm starting the recipe as follows:
prodigy review -v ner_manual final_dataset task_dataset-bianca,task_dataset-julio

Thanks for the report! It looks like you hit an interesting edge case here: the review recipe checks the tasks for their "_view_id" and then uses that to decide how to render the examples in the review interface. The --view-id argument is only used as a default and fallback. In your case, the tasks have "_view_id": "blocks", so Prodigy will try to render the content with the blocks UI – but since the review recipe doesn't define any blocks, you see "No blocks available.".
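For example, an annotation collected with your blocks recipe is stored with the view ID of the whole interface – roughly like this (fields shortened for illustration):

{"text": "Some sentence", "spans": [{"start": 0, "end": 4, "label": "ORG"}], "answer": "accept", "_view_id": "blocks"}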

The easiest workaround would be to either edit the recipe in review.py and add your blocks to the config, or to create a prodigy.json in your local working directory that sets "blocks": [...]. (A prodigy.json in the working directory overrides the global config, so you can use it to set project-specific config like blocks without having to add it to the recipe or the global config.)
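A minimal sketch of such a prodigy.json, assuming blocks like the ones in your annotation recipe (the html_template is a placeholder):

{
    "blocks": [
        {"view_id": "ner_manual"},
        {"view_id": "html", "html_template": "<p>{{extra_context}}</p>"}
    ]
}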

I'll think about how to solve this – the easiest solution would probably be to allow the --view-id argument of the review recipe to override the "_view_id" of the tasks. Then your command would work, and Prodigy would render the content using the ner_manual interface.

Thanks for the fast reply.
Adding "blocks": [...] to the config almost worked: it started to show the annotations and the sessions that agreed on each annotation. However, sessions that disagreed on an annotation appear as if they had annotated a different sentence. What I mean is that disagreeing annotations show up as separate questions to review, so they end up duplicated in the final dataset.
A more complicated workaround that worked for me was to export (db-out) the original dataset to JSONL, replace each occurrence of "_view_id":"blocks" with "_view_id":"ner_manual", and import (db-in) the result as a new dataset. Then I started the review recipe on this new dataset.
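In commands, it was something like this (the dataset and file names here are made up):

prodigy db-out task_dataset > task_dataset.jsonl
sed 's/"_view_id":"blocks"/"_view_id":"ner_manual"/g' task_dataset.jsonl > task_dataset_fixed.jsonl
prodigy db-in task_dataset_fixed task_dataset_fixed.jsonl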

Glad you got it working!

Ahh, I think I know what happened here: the review recipe uses somewhat complex logic, based on the view ID, to determine whether annotations answer the same question and whether they're disagreements. For instance, if you're using the binary ner interface, annotations are the same question if they have the same task hash (same text and suggested span), and they disagree if they have different "answer" values. For the ner_manual interface, annotations are the same question if they have the same input hash (same text), and they disagree if they have different task hashes (different selected spans).
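To illustrate with Prodigy's hashing helper (the text and spans here are made up):

from prodigy import set_hashes

text = "Apple is looking at buying U.K. startup"
eg1 = set_hashes({"text": text, "spans": [{"start": 0, "end": 5, "label": "ORG"}]})
eg2 = set_hashes({"text": text, "spans": [{"start": 27, "end": 31, "label": "GPE"}]})

assert eg1["_input_hash"] == eg2["_input_hash"]  # same text, so same question for ner_manual
assert eg1["_task_hash"] != eg2["_task_hash"]    # different spans, so a disagreement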

Since the view ID here is blocks, Prodigy can't know that the annotation type is ner_manual, so it doesn't resolve the disagreements the same way. The option to override the view ID that I suggested above should also fix this, since it'd let you define how the annotations should be interpreted. I'll adjust this for the next release 🙂

I just ran into this exact same situation. My variant of ner_manual uses blocks, and the review recipe doesn't work as is. Changing "_view_id": "blocks" to "_view_id": "ner_manual" in the dataset works.

I am interested in using blocks in a custom review recipe to show related data. I'm not sure whether that would be affected by this.

Just released v1.9.7, which should now allow the --view-id argument to override the interface used to display the tasks 🙂
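With the datasets from the original post, that'd look something like this:

prodigy review final_dataset task_dataset-bianca,task_dataset-julio --view-id ner_manual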

@ines, we tried to use the new --view-id argument yesterday. But it seems to show cases where annotators provided different labels for the same text as separate items, instead of together as one question with the multiple results below.

Doing the rename from "_view_id": "blocks" to "_view_id": "ner_manual" still works. But leaving the dataset with "_view_id": "blocks" and setting --view-id ner_manual doesn't work correctly when there are different labels for the same text.

@ines, just a friendly ping about this! We're still having to do a search and replace from "_view_id": "blocks" to "_view_id": "ner_manual", because with the --view-id option, examples where the annotators don't agree are still shown separately instead of together.

Just released v1.9.8, which should now correctly use the custom view ID to determine how to combine examples 🙂

Awesome. Thanks, @ines!
