Answers are missing for view_id='choice'

Hello!

I discovered that annotation answers we're 100% certain we filled in are missing from the database. We use the choice annotation interface with:

"choice_style": "multiple",
"choice_auto_accept": true

and tend to click on the option in the annotation interface directly when we annotate. It is of course highly frustrating that annotations do not get recorded. Is this a known issue? The only pattern I can possibly see is that the annotation of the previous question goes missing if we happened to jump past the current question, but I'm not sure if this covers the whole extent of it. (This behavior would make sense if the previous annotation is only saved upon answering the current question, not when the view updates to the current question. When we use the annotation interface, we sometimes involuntarily jump past/skip questions, even if we try not to.) I hope this will be fixed – we love Prodigy overall!

Best, Sara & team

Hi! Thanks for the report – that's definitely strange :thinking:

That would be unlikely, because what you see on the screen directly reflects the current annotation task. So once you see the next question, that's what you'll be updating. Something that could be possible, though, is that the auto-accepting introduces a race condition in some circumstances and the answer gets logged before the selected option – but if that's the case, it would mean that the selection ends up being applied to the next question, and you'd see that reflected in the UI.

Another possible explanation could be accidental unselecting when you undo, because of how the auto-accepting works: if a user selects option 1, then goes back to change it, then decides that option 1 was correct and hits option 1 again, that option will be unselected. But this is more likely to happen if you're annotating via keyboard shortcuts.

If you want to log the instances where you end up with no selected answers so you can investigate them in real time, you could add the following JavaScript that will check whether there are selected options whenever an answer is submitted:

// Fires whenever an answer is submitted in the Prodigy web app
document.addEventListener('prodigyanswer', event => {
    const task = event.detail.task  // the annotation task that was just answered
    if (!task.accept.length) {      // no option IDs in the "accept" list
        console.log('No options selected', task)
    }
})
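
In case it's helpful: custom JavaScript like this can be provided via the "javascript" setting in your prodigy.json, or returned as part of your recipe's "config". Here's a minimal sketch of the recipe route – the file name log_empty_accept.js is just a placeholder for wherever you save the snippet:

# Sketch: wiring the JavaScript above into a recipe's config
from pathlib import Path

custom_js = Path("log_empty_accept.js").read_text()  # placeholder file name
config = {
    "choice_style": "multiple",
    "choice_auto_accept": True,
    "javascript": custom_js,  # Prodigy injects this into the web app
}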

Btw, is there a specific reason you're using multiple choice options with auto-accept? I guess it doesn't really make a difference and the result is the same, but it confused me when I first read it, because the auto-accepting kind of makes multiple selections impossible.

Hey there,

I am currently experiencing the same thing. I didn't set auto-accept, as it automatically accepts after a single choice is selected and then moves to the next sample. However, I did set "answer": "accept", in the hopes that I could select multiple labels and just accept them all. But nothing is being saved. Is there another config setting I'm missing?

@Dany Clicking "accept" will submit exactly what you see on the screen – it won't modify anything about the annotation task. So if you want to select all answers, you'll have to select the options first, and then submit the task. (If selecting all options is very common, you could consider pre-populating the task with the list of selected options, e.g. "accept": ["ID1", "ID2", "ID3"] and then un-select if needed. But this depends on your data.)

Thanks, @ines!

I am currently trying to apply this method – I created a custom manual recipe from your sentiment example online. The labels display correctly, I select multiple options and click accept, but the annotations aren't being saved at all afterwards. It shows 0 annotations when I finish the session. Here is my recipe:

import prodigy
from prodigy.components.loaders import JSONL

@prodigy.recipe('neg.cat.manual',
    dataset=prodigy.recipe_args['dataset'],
    file_path=("Path to texts", "positional", None, str))
def neg_cat(dataset, file_path):
    """Annotate the sentiment of texts using different mood options."""
    stream = JSONL(file_path)     # load in the JSONL file
    stream = add_options(stream)  # add options to each task

    return {
        'dataset': dataset,   # save annotations in this dataset
        'view_id': 'choice',  # use the choice interface
        'stream': stream,
        'config': {
            'choice_style': 'multiple',
            'choice_auto_accept': False,
            'answer': 'accept',
            'show_flag': True,
            'theme': 'eighties'
        }
    }

def add_options(stream):
    """Helper function to add options to every task in a stream."""
    # Colors for label subcategories
    tone = "#FDC564"
    general = "#FDC564"
    food = "#EDFABC"
    service = "#CEFCF0"
    facilities = "#E3C9FC"

    options = [
        # REVIEW TONE
        {'id': 'tone', 'text': 'PROFANITY/ THREATENING TONE', 'style': {'background': tone}},
        # GENERAL
        {'id': 'general', 'text': 'GENERAL DISAPPOINTMENT', 'style': {'background': general}},
        # FOOD
        {'id': 'illness', 'text': 'FOOD POISONING/ ROTTEN FOOD', 'style': {'background': food}},
        {'id': 'quality', 'text': 'POOR QUALITY/ TASTE/ PREPARATION', 'style': {'background': food}},
        {'id': 'value', 'text': 'SMALL PORTION/ EXPENSIVE', 'style': {'background': food}},
        {'id': 'foreign', 'text': 'FOREIGN BODY', 'style': {'background': food}},
        {'id': 'unavailable', 'text': 'NO STOCK/ VARIETY/ SPECIAL UNAVAILABLE', 'style': {'background': food}},
        # STAFF/ SERVICE
        {'id': 'staff_abuse', 'text': 'ABUSIVE STAFF/ RACISM/ DISCRIMINATION', 'style': {'background': service}},
        {'id': 'incorrect', 'text': 'INCORRECT ORDERS/ CHARGES', 'style': {'background': service}},
        {'id': 'unprofessional', 'text': 'UNFRIENDLY/ UNPROFESSIONAL/ INATTENTIVE/ NEED TRAINING', 'style': {'background': service}},
        {'id': 'wait', 'text': 'LONG WAIT/ BUSY/ DISORGANIZED/ FOOD COLD', 'style': {'background': service}},
        {'id': 'online', 'text': 'PROBLEMS ONLINE DELIVERY/ APP', 'style': {'background': service}},
        {'id': 'loyalty', 'text': 'ISSUES WITH VOUCHERS/ LOYALTY CARDS', 'style': {'background': service}},
        # ATMOSPHERE/ FACILITIES
        {'id': 'injury', 'text': 'ADULT INJURY/ AGGRESSION/ SECURITY', 'style': {'background': facilities}},
        {'id': 'childcare', 'text': 'CHILD BULLIED/ INJURY/ CARE', 'style': {'background': facilities}},
        {'id': 'hygiene', 'text': 'HYGIENE', 'style': {'background': facilities}},
        {'id': 'discomfort', 'text': 'NOISE/ DISRUPTION/ DISCOMFORT', 'style': {'background': facilities}},
        {'id': 'concern', 'text': 'CONCERN PREMISES/ FACILITIES/ SAFETY', 'style': {'background': facilities}}
    ]
    for task in stream:
        task['options'] = options
        yield task

Do I need to add something so that the selected labels are saved?

@Dany The recipe looks fine! When you're done annotating, are you hitting "save" (the icon in the side bar or cmd+s)? Annotations are saved in the background once you have a full batch annotated – but if you're just testing and do a few clicks, make sure to save manually at the end.

You should then be able to export your annotations using db-out, and each record in the data should have an "accept": [] list containing the selected labels.
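
If you'd rather double-check from Python, you could also read the dataset back via the database API – a quick sketch, with "neg_dataset" standing in for your dataset name:

from prodigy.components.db import connect

db = connect()  # uses the database settings from your prodigy.json
for eg in db.get_dataset("neg_dataset"):  # placeholder dataset name
    print(eg.get("accept", []))  # the option IDs selected for each task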

That's exactly what I was doing to test - and not saving :see_no_evil:.

Thanks so much, Ines!

@ines, actually, while we are discussing this, can I ask one other question, please?

If I want to load some text that already has some labels allocated – like quality and value – how can I further customize the recipe so that the labels allocated so far are automatically selected? That way I could select more labels and accept, or change the labels, etc. Would I need to customize a mark recipe, or can I get the manual recipe to pick up on already allocated labels?

@Dany Prodigy allows you to feed in data in the same format it outputs. For example, given your recipe above, an annotated example might look like this after you've selected the categories and saved the annotations:

{"text": "Some text", "options": [...], "accept": ["illness", "hygiene"]}

If you wanted to pre-select the categories "illness" and "hygiene", all you'd need to do is stream in tasks that already have the list of accepted IDs populated. For example, task["accept"] = ["illness", "hygiene"].
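
For example, a variant of your add_options helper could copy existing labels into the "accept" list – a sketch, assuming your incoming tasks store prior labels under a hypothetical "labels" key whose values match the option IDs:

OPTIONS = [
    {"id": "quality", "text": "POOR QUALITY/ TASTE/ PREPARATION"},
    {"id": "value", "text": "SMALL PORTION/ EXPENSIVE"},
    # ... the rest of the options from the recipe above
]

def add_options_with_preselect(stream):
    """Add options to each task and pre-select previously assigned labels."""
    for task in stream:
        task["options"] = OPTIONS
        task["accept"] = task.get("labels", [])  # pre-selected option IDs
        yield task

The pre-selected options will show up as selected in the UI, and the annotator can still toggle them like any other option.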
