I have a use case where I need to be able to input some free-form text as metadata relevant to my overall modeling task. The simplest way I can think of to do this is to use an HTML form and extract the user input as part of the update step in my recipe. From what I can tell, Prodigy doesn't give access to anything the user enters here. Here's a minimal code example:
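(Names like form-example, thing1 and thing2 are just placeholders.)

```python
import prodigy
from prodigy.components.loaders import JSONL

# An HTML form rendered for each task -- the hope is that the values the
# user types in come back as part of the annotated example.
HTML_TEMPLATE = """
<div>{{text}}</div>
<input type="text" name="thing1" placeholder="thing1" />
<input type="text" name="thing2" placeholder="thing2" />
"""

@prodigy.recipe("form-example")
def form_example(dataset, source):
    stream = JSONL(source)

    def update(answers):
        # This is where I'd hope to read the form values, e.g. via a
        # "form" key on each example -- but as far as I can tell, the
        # user's input never makes it into the task dict.
        for eg in answers:
            print(eg.get("form"))

    return {
        "dataset": dataset,
        "stream": stream,
        "view_id": "html",
        "update": update,
        "config": {"html_template": HTML_TEMPLATE},
    }
```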
Ideally, something like adding a "form" key to the annotated examples, with keys "thing1" and "thing2", would give me everything I need. I believe it would also work for the related case discussed in Feature Request: Machine Translation View.
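In other words, after submitting, I'd want the saved example to look something like this (values invented for illustration):

```json
{
  "text": "Some document text",
  "answer": "accept",
  "form": {"thing1": "first input", "thing2": "second input"}
}
```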
I've been using the ner_manual Prodigy recipe for an application where I also need some free-form input, though some of that input is probably better handled with a separate, dedicated text-highlighting tool. Do you have a recommendation for a highlighting tool that can display text in a browser, accept arbitrary user highlights, and then output those highlights as a JSON blob? I had originally tried to overload the ner_manual recipe (with some customization) to include extra highlights that weren't going to be used for actual NER models, but that approach isn't viable because I sometimes need to highlight the same snippet of text multiple times.
Another solution would be a window where users can type in text that works alongside the ner_manual recipe – I'm not sure if it's possible to add this to ner_manual with the same kind of front-end customization you mentioned above? Any advice much appreciated!
I haven't used it much myself, but you might want to look into draft.js. If your use case needs to rely more heavily on the combination of user input plus highlights, this would let you handle everything within one editor.
If you wanted to, you could maybe even integrate this into the Prodigy UI – it'd just need to render itself into a given container and then call window.prodigy.update. Alternatively, if you use the Prodigy back-end, you could also just call into the REST API directly, get the data from /get_questions and then send your annotated JSON to /give_answers.
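Roughly like this – a sketch assuming a local instance, and note that the exact payload shapes may differ between versions, so double-check against the requests in your browser's network tab:

```python
import requests

# Sketch of driving the annotation flow from your own front-end.
BASE = "http://localhost:8080"

# Fetch a batch of questions from the running Prodigy server.
questions = requests.get(f"{BASE}/get_questions").json()

# ... render the tasks in your own editor, collect the user's input ...
answers = []
for task in questions.get("tasks", []):
    task["answer"] = "accept"
    task["form"] = {"thing1": "...", "thing2": "..."}  # your free-form data
    answers.append(task)

# Send the annotated tasks back to be saved in the dataset.
requests.post(f"{BASE}/give_answers", json={"answers": answers})
```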
But I also think it'd make total sense if this type of task is simply a better fit for a more custom, free-form solution. Prodigy is really centered around a very structured and automation-heavy annotation workflow that tries to avoid free-form input wherever possible. In most cases, this is very beneficial for collecting training data – but there are always exceptions (like your use case).
I was thinking about adding a more generic "feedback"-type input field that would let annotators submit comments on individual tasks. While this could probably be repurposed for free-form input, it'd be more centered around reporting problems with tasks or leaving development notes (like "look into this because XYZ").
I could also imagine some API that would let you compose multi-interface annotation cards – for example, one combining ner_manual and html. The underlying UI components allow that, and since both components would receive the same task data, they could interact with each other. For instance, an input in the html component could update the "spans", which would immediately be reflected in the rendering of ner_manual. I'd have to try it out and put together a little proof-of-concept first, though – and think about how this could be expressed via the available config options.
@ines Thank you for the library recommendation! In the meantime, I have a couple more UI questions about Prodigy as-is:
1. For the ner_manual task, is it possible to have the labels be "sticky", so that when a user scrolls down a long document, they don't have to scroll back up again to select the label they want to apply?
2. Is it possible to display the text with whitespace as it appears in the document, instead of as a series of "->" characters? In our case, the documents often have a regular whitespace/tab structure that helps guide the annotators' eyes, and this is lost when the tabs are collapsed down to an arrow character.
I haven't tested this yet, but one idea would be to use the "card_css" setting to give the card content a fixed or maximum height and make it scrollable.
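Something along these lines in your prodigy.json – the exact values are just a starting point, and I'm assuming camelCase style properties here:

```json
{
  "card_css": {
    "maxHeight": "500px",
    "overflowY": "auto"
  }
}
```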
Speaking of sticky, though: This is a nice idea, actually, and probably a good compromise if users do end up with special cases that require longer texts. I just tested it and it works pretty well – you can try it by opening the developer tools and adding "position: sticky; top: 0" to the class .c0138. I'll test this some more, but we should be able to ship this with the next release!
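If you want to try it before then, something like this via the "global_css" setting should have the same effect (keep in mind the class name may change between versions):

```json
{
  "global_css": ".c0138 {position: sticky; top: 0}"
}
```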
That's an interesting edge case... The main reason those replacement characters exist is that the ner_manual interface needs a clear representation of semi-invisible characters. If you're labelling data for NER training, obscuring these characters can lead to seriously flawed data – annotators might label whitespace by accident, or it might not be immediately obvious that the model is suggesting whitespace characters in entity spans.
But maybe we should add a config option that lets users disable this, with a disclaimer / warning explaining the risks. I'd also recommend setting the whitespace tokens to "disabled": true in the JSON data, so they can't be highlighted.
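For example, something like this in the task data (offsets here are just illustrative):

```json
{
  "text": "hello\tworld",
  "tokens": [
    {"text": "hello", "start": 0, "end": 5, "id": 0},
    {"text": "\t", "start": 5, "end": 6, "id": 1, "disabled": true},
    {"text": "world", "start": 6, "end": 11, "id": 2}
  ]
}
```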
Btw, in the meantime, does it help if you replace the tabs with 4 or more spaces? They'll be rendered as · and may not be exactly as wide as a regular character, but they'll at least be wider than a collapsed tab.
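If you want to test that quickly, a one-off conversion could look like this (file names are placeholders – and if your tasks already include token offsets, those would need recomputing):

```python
import srsly

# Replace tabs with four spaces in the raw text of each task.
examples = srsly.read_jsonl("data.jsonl")
converted = [{**eg, "text": eg["text"].replace("\t", "    ")} for eg in examples]
srsly.write_jsonl("data_spaces.jsonl", converted)
```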