Switching between annotation interfaces

I have a feeling the answer to this question will be “don’t do this, do one thing at a time”, but is it possible to switch between annotation interfaces within a single session?

For example, when manually annotating intents and slots for a chatbot application: given utterance #1 from a corpus, present the annotator with it in the "choice" interface, then present them with it in the NER interface, before moving on to utterance #2.

Additionally, is it possible to get the options in the choice interface to display as a dropdown instead of radio buttons? With many labels, the list of options gets cumbersome.

Thanks!

paul

Probably :stuck_out_tongue: At least, we always try to advocate for simplicity and for reframing annotation tasks, rather than rebuilding "traditional" annotation workflows with Prodigy. But of course, this always depends on the use case, and there can always be exceptions to the rule.

In general, the Prodigy web app will determine the interface to use based on the view_id and will then stick to that. However, some interfaces support rendering different types of content – for instance, the classification interface can render text, text with entities and images. So if your stream contains different task types, Prodigy will "switch" between the different types of presentations (but within the same parent interface). Similarly, choice tasks and their options can be text, text with spans or images. So maybe there's a solution you can come up with that takes advantage of this.
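For instance, a JSONL stream for the choice interface could mix option types like this (just a sketch – the IDs, labels and image URL here are made up):

```
{"text": "book a table for two", "options": [{"id": "BOOK", "text": "BOOK"}, {"id": "CANCEL", "text": "CANCEL"}]}
{"text": "book a table for two", "options": [{"id": "num", "text": "book a table for two", "spans": [{"start": 17, "end": 20, "label": "NUM"}]}, {"id": "img", "image": "https://example.com/photo.jpg"}]}
```

Both tasks would render in the same parent choice interface, but the second one presents an option with a highlighted span plus an image option.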

Not at the moment, no. (As a general rule of thumb, if your options are too long for the list, you might also want to consider simplifying your task.) As a workaround, the ner_manual interface does support a dropdown instead of a list of buttons – but in order to "lock in" the label, you'd have to select some text, so I'm not sure how useful this will be.

Alternatively, since a dropdown is a pretty standard HTML element, you could easily build something yourself using an HTML template and the new custom JS option. However, this is still experimental and currently undocumented in Prodigy v1.4.x (aside from this thread), so usage is at your own risk.
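To give you an idea, a minimal sketch could look like this – the slot_value field and updateSlot function are made-up names, and I'm assuming window.prodigy.update behaves like it does in the current (still experimental) JS API:

The HTML template (the "html_template" setting):

```html
<!-- {{text}} is replaced with the task's text; the dropdown writes its
     selection back into the task via the script below -->
<p>{{text}}</p>
<select id="slot-select" onchange="updateSlot(this.value)">
  <option value="">Pick a slot value...</option>
  <option value="DATE">DATE</option>
  <option value="TIME">TIME</option>
</select>
```

The custom JS (the "javascript" setting):

```javascript
// Merge the selected value into the current task, so it's stored on the
// annotation when the user accepts. The exact API surface may still
// change while this feature is experimental.
function updateSlot(value) {
    window.prodigy.update({ slot_value: value });
}
```

When the user accepts the task, the selected value would then be saved on the example alongside the answer.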

Thanks! The custom JS stuff is very useful and just what I needed.

I have a few additional questions.

Is there any way to prevent the user from submitting invalid annotation data, like preventing them from accepting a task under certain conditions? "Please pick a value for X before accepting."

Can you suggest a way to attach an event listener, or somehow run JS, whenever the HTML task is rendered?

Are any of the built-in annotation views available as components from within the custom html?

Not in the built-in interfaces, no. Any input constraint like this adds more friction between the user and the interface, so whenever possible, we've been trying to reframe the annotation task in a way that doesn't require adding constraints and validation. Binary interfaces are nice in this way, because they're always "valid".

So if validation is a big concern for you, or if you find that you receive a lot of invalid data, maybe you can find a way to present the task differently? Another question to consider: how important is each individual answer (and annotating every single question), compared to collecting as many annotations as possible in total?

If you're looking to train a model afterwards, the number of (correct) annotations is usually what matters. That's also why we generally encourage users and annotators to hit ignore if it takes them more than a few seconds to answer a binary task. Disrupting the "annotation flow" is often more harmful than skipping an example that doesn't carry much weight overall. I think the same could easily apply to live validation, too. So if you have clearly defined validation logic, you can always filter out the "invalid" annotations afterwards.
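For example, if your exported data (e.g. from db-out) should only contain accepted answers that have a value set, a post-hoc filter could be as simple as this – a Node sketch that reuses the made-up slot_value field from above, with assumed file names:

```javascript
// filter_valid.js – keep only the examples that pass a custom check
const fs = require("fs");

const lines = fs.readFileSync("annotations.jsonl", "utf8").split("\n").filter(Boolean);
const examples = lines.map(line => JSON.parse(line));

// "Valid" here means: accepted answers must come with a non-empty slot_value
const isValid = eg => eg.answer !== "accept" || Boolean(eg.slot_value);

const valid = examples.filter(isValid);
fs.writeFileSync("annotations_valid.jsonl", valid.map(eg => JSON.stringify(eg)).join("\n") + "\n");
console.log(`Kept ${valid.length} of ${examples.length} examples`);
```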

That said, you could probably implement some type of validation in custom interfaces – e.g. via the native validation hints in form elements. It won't stop the user from submitting the answer, but it'd show a visual hint. And if you really want to disable the submission, you could even disable the buttons by adding the disabled attribute.
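For instance, something along these lines – note that the accept button's class name is a guess here, since the app's markup isn't a stable API, so you'd want to inspect the rendered page for the right selector:

```javascript
// Disable the accept button until the (made-up) #slot-select dropdown
// has a value. ".prodigy-button-accept" is a hypothetical selector –
// check the DOM of your Prodigy version for the real one.
function toggleAccept() {
    const select = document.querySelector("#slot-select");
    const accept = document.querySelector(".prodigy-button-accept");
    if (select && accept) accept.disabled = !select.value;
}
document.addEventListener("change", event => {
    if (event.target && event.target.id === "slot-select") toggleAccept();
});
```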

At the moment, Prodigy only fires the custom event prodigyanswer (when the user answers a question) – but we could also add one for prodigyrender and prodigyupdate. In the meantime, this thread has a nice example of a Greasemonkey script that uses the MutationObserver API.
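To illustrate, here's roughly what both approaches could look like – the listener uses the existing prodigyanswer event, and the observer's container selector is an assumption you'd have to adapt:

```javascript
// Log every answer via the prodigyanswer event. Keep in mind this part
// of the API is still experimental, so the event detail may change.
document.addEventListener("prodigyanswer", event => {
    console.log("Answered:", event.detail);
});

// Until a prodigyrender event exists, a MutationObserver can detect
// when a new task is rendered. ".prodigy-content" is an assumed
// selector – inspect the app to see which element actually changes.
const container = document.querySelector(".prodigy-content");
if (container) {
    const observer = new MutationObserver(() => {
        console.log("Task re-rendered");
    });
    observer.observe(container, { childList: true, subtree: true });
}
```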

Some of it may feel a little hacky, but as I said, this is all still very experimental, and you're one of the first to try out the new JS API before we make it "official".

No, but we're considering implementing the custom JS across all interfaces, so you'll also be able to interact with non-HTML tasks via custom scripts.