iaa-score in a context of blocks with multiclass

Greetings,

Looking at the prodigy metric.iaa.doc documentation, I am wondering how to leverage those IAA metrics for a multi-block interface.

In this case we have the classic Accept / Reject / Ignore, and inside those blocks there is one multiclass question. Is there a way to get Inter-Annotator Agreement (IAA) results for this particular "sub-block"?

prodigy metric.iaa.doc dataset:classify-transactions multiclass -l "A","B","C"
Using 3 label(s): A, B, C
:information_source: Using 3 annotator IDs: classify-transactions-fr3,
classify-transactions-fr2, classify-transactions-fr1

✘ Requested labels: A, B, C were not found in the dataset.
Found labels: .

Thank you

Welcome to the forum @gregt :wave:

The IAA recipe only looks at the parts of the task that it can compute metrics for, so in your case I would expect it to "use" just the "multiclass" part of the example, which is expected to follow the choice view_id structure. In other words, the relevant information (the chosen labels) should be under the accept key.

If you inspect the structure of your custom annotation example (for example by looking at the dataset stored in the DB) - do all examples have an accept key whose value is a list?

From the screenshot it looks like you're using a NER UI for the "multiclass" block?
If that's the case, you'd have to restructure your dataset so that the multiclass labels are stored under the accept key. That should be easy to do with a simple Python script.
If you can share the structure of your annotation example, we can help with a script to restructure it so that you can use it with the IAA command.
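Just to illustrate the general shape, a minimal sketch of such a restructuring script (operating on a JSONL export from `prodigy db-out`, so it can be loaded back with `prodigy db-in`). The `SRC_KEY` name here is purely hypothetical - it stands in for wherever your recipe actually stores the selected class, which is exactly the detail we'd need from your example structure:

```python
import json

# SRC_KEY is an assumption, not your real schema -- replace it with the
# key under which your recipe actually stores the chosen class.
SRC_KEY = "label"

def restructure(example):
    """Return a copy of the example with the chosen class moved into the
    'accept' list that the choice UI (and metric.iaa.doc) expects."""
    new = dict(example)
    label = new.pop(SRC_KEY, None)
    new["accept"] = [label] if label is not None else []
    return new

def restructure_file(src_path, dst_path):
    """Rewrite a JSONL export (from `prodigy db-out`) line by line."""
    with open(src_path, encoding="utf8") as src, \
         open(dst_path, "w", encoding="utf8") as dst:
        for line in src:
            if line.strip():
                dst.write(json.dumps(restructure(json.loads(line))) + "\n")
```

You'd then import the rewritten file into a fresh dataset with `prodigy db-in` and point metric.iaa.doc at that.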

Hi magdaaniol,

Here are the blocks and the matching DB structure of an entry from the example table in SQLite. As you can see, the examples don't seem to have an accept key.

Thank you for your guidance.

blocks = [
    {"view_id": "html", "html_template": html_template},
    {"view_id": "ner_manual"},
    {
        "view_id": "text_input",
        "field_rows": 3,
        "field_label": "Explain your decision (optional)",
    },
]

{
  "text": "xxxxxxxxxxxxxxxxxxxxx",
  "meta": {
    "last_digits_iban": "xxxx",
    "date": "2023-04-11",
    "bank_description": "xxxxxxxxxxxxxxxxxxxxxxxxxx",
    "amount": "-764.81",
    "bank_country_code": "xx",
    "is_credit": false,
    "company_name": "xxxx",
    "company_id": "xxxxxxxxx",
    "is_description_empty": false,
    "transaction_id": "320ddfb7-c72f-454e-be4a-8427133cfea7"
  },
  "_input_hash": 1487569597,
  "_task_hash": 1498661915,
  "options": [
    {"id": "A", "text": "A"},
    {"id": "B", "text": "B"},
    {"id": "C", "text": "C"}
  ],
  "_view_id": "blocks",
  "user_input": "I am a description",
  "answer": "accept",
  "_annotator_id": "classify-transactions-fr1",
  "_session_id": "classify-transactions-fr1"
}

Hi @gregt ,

It looks like there's a mismatch between the kind of data you want to collect and the annotation interface used.
Right now, you are not storing annotations for your classification (A, B or C) because there's no classification block in the blocks list.
You are correctly adding options to the task structure, but you're not really showing them, as there is no appropriate UI configured to use that information.
Could you try substituting the ner_manual view_id with the choice view_id in your recipe and see how the structure of the example changes?
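For reference, a sketch of what that substitution might look like, keeping your existing html and text_input blocks (the html_template value here is a placeholder for your real template):

```python
# Sketch: the same blocks config, with "choice" in place of "ner_manual".
# The choice UI renders the task's "options" and stores the selected
# option id(s) under the "accept" key, which is what metric.iaa.doc
# looks for when computing multiclass agreement.
html_template = "..."  # placeholder -- use your existing template here

blocks = [
    {"view_id": "html", "html_template": html_template},
    {"view_id": "choice"},
    {
        "view_id": "text_input",
        "field_rows": 3,
        "field_label": "Explain your decision (optional)",
    },
]
```

If each transaction should get exactly one class, setting `"choice_style": "single"` in the recipe's config should enforce a single selection.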