Selecting correct annotation using review recipe unclear in UI

I am using the default review recipe (I haven't customized it at all yet). I am on version v1.11.5 (I will update to the new version this week). I am finding that the default review recipe does not reflect what is being selected in the UI. The text from the utterance does not appear in the card, only in the original annotation for each dataset. Previously, I used ner.manual on a dataset and then reviewed that ner.manual output with the review recipe using just one dataset. My setup is attached. The visual representation with one dataset is closer to what I am looking for, but I am unable to review more than one pre-annotated dataset with this approach.

python -m prodigy db-out context_test_v1 > context_test_v1_output.jsonl
PRODIGY_ALLOWED_SESSIONS=cheyanne prodigy review context_test_v2 context_test_v1 --label positive,negative,neutral,anger,joy,frustration,gratitude 
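
For context, the ner.manual step that created context_test_v1 looked roughly like this (the model and source file names below are placeholders, not my exact setup):

python -m prodigy ner.manual context_test_v1 blank:en utterances.jsonl --label positive,negative,neutral,anger,joy,frustration,gratitude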

Here's the resulting display, which lets me see the original annotation and change it (select spans and a label). Is it possible to do this with review and more than one pre-annotated dataset?

Original:

After I changed the labels (just a test, so please ignore how incorrect this annotation is):

My review setup with two datasets does not display as nicely as this. Is there a way to compare two pre-annotated datasets using review that lets you:

  • choose the correct one and display the sentences from the dataset with the correct annotation
  • choose from a list of labels and adjust the spans if the annotations from both pre-annotated datasets are incorrect

The db-out output is also hard to parse when using review with two datasets.
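
For example, here is roughly how I am trying to read the exported review examples. I am assuming each example carries a "versions" list with the annotations from each input dataset, and the file name is just a placeholder for a db-out export of the review dataset; please correct me if review stores things differently:

import json

# Sketch: flatten a db-out export of the review dataset so each version's
# spans are visible per example. The key names ("versions", "sessions",
# "spans") are my assumption based on what I see in the file.
with open("review_output.jsonl", encoding="utf8") as f:
    for line in f:
        eg = json.loads(line)
        print("TEXT:", eg.get("text", ""))
        for version in eg.get("versions", []):
            sessions = ", ".join(version.get("sessions", []))
            spans = [(s["start"], s["end"], s["label"]) for s in version.get("spans", [])]
            print("  ", sessions or "(no session info)", "->", spans)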

The "x" that appears next to the entity label also does not actually remove the label. It does not seem to work.

The highlighting may just be an adjustment in prodigy.json, but even so, the card only shows me a label from the sentence, with no way to change the span or "x" out the label.

python -m prodigy db-in temp_entity_annotation2_cb2 temp_entity_annotation2_cb.jsonl
python -m prodigy db-in temp_entity_annotation3_cb2 temp_entity_annotation3_cb.jsonl
PRODIGY_ALLOWED_SESSIONS=cheyanne prodigy review temp_compare_two_entity_annotations_cb temp_entity_annotation2_cb2,temp_entity_annotation3_cb2 --label PERSON,GPE

Please let me know if I can provide more information. I would basically like something closer to the first two screenshots, which used a single dataset pre-annotated with ner.manual and then reviewed with the review recipe. However, I have more than one dataset to review, and the changes to the UI and the db-out output when review runs on two datasets make it harder to review in the UI.

Thank you,
Cheyanne

Hi! Just to make sure I understand the question correctly: is the last screenshot you included what you're actually seeing when you're running the review recipe with multiple datasets?

The way it should look is: at the top, you should see the editable example, using the most common annotation (if multiple annotators created the same spans) or the first one if all versions are equally common (like in this case). You should then be able to correct the editable example if needed and submit it.

If this is the actual output, can you share the JSON being created under the hood? Is there anything that may be different between the various annotations on the same example, e.g. the tokenization?
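
If it's easier, here's a quick way to check whether the tokenization differs between the two input datasets, reading the same JSONL files you imported with db-in. This assumes each example has a "tokens" list and (ideally) an "_input_hash", so adjust the key names if yours look different:

import json

def token_offsets(path):
    # Map each example (by _input_hash, falling back to the text) to its
    # token (start, end) offsets.
    data = {}
    with open(path, encoding="utf8") as f:
        for line in f:
            eg = json.loads(line)
            key = eg.get("_input_hash", eg.get("text"))
            data[key] = [(t["start"], t["end"]) for t in eg.get("tokens", [])]
    return data

a = token_offsets("temp_entity_annotation2_cb.jsonl")
b = token_offsets("temp_entity_annotation3_cb.jsonl")
for key in a.keys() & b.keys():
    if a[key] != b[key]:
        print("Tokenization differs for input:", key)

If that prints anything, differing tokenization between the two datasets could explain why the versions of the same example aren't being merged and displayed as expected.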