However, the feature is currently experimental and undocumented. You can find some examples, including a code snippet for an input box, in my comments on this thread. There's also an experimental example on this thread that uses a Greasemonkey user script to modify the text and meta.
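To give you a rough idea, a custom recipe with an input box underneath the text could look something like the sketch below. Since the feature is undocumented, treat the `"text_input"` block and its field settings (`"field_id"`, `"field_rows"`, `"field_label"`) as assumptions that may change, and the recipe name and arguments as placeholders.

```python
import prodigy
from prodigy.components.loaders import JSONL

@prodigy.recipe("text-with-input")
def text_with_input(dataset, source):
    """Sketch: show each text with a free-form input box underneath."""
    blocks = [
        {"view_id": "text"},  # the regular text card
        {
            "view_id": "text_input",   # experimental input field (assumption)
            "field_id": "user_input",  # key the typed text is saved under
            "field_rows": 3,
            "field_label": "Comments",
        },
    ]
    return {
        "dataset": dataset,             # dataset to save annotations to
        "stream": JSONL(source),        # stream of {"text": ...} examples
        "view_id": "blocks",            # combine several interfaces in one card
        "config": {"blocks": blocks},
    }
```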
Of course it always depends on your use case – but I would normally advise against using text input fields or “open questions”. Prodigy is most effective if the annotator can move through examples quickly, ideally in less than a few seconds per question. The idea is that you design your task in a way that focuses on the most important feedback you need from a human – instead of making the annotator fill in survey-style questions.
If you’re looking to train a model, you’ll also get much better results if you have a clearly defined label set. Even if you’re using the
choice interface with an “other” option, it can be very difficult to reconcile the annotations later on if every annotator is free to put whatever they like in the box. This raises all sorts of questions: What are you going to do with this data later on? What if it turns out your label scheme was incomplete? Are you going to re-annotate everything to include the missing options from your “other” field? The data you gain from this field might not be as valuable as you think. Often, it's better to ask the annotator to simply skip the question and move on if the labels don't fit.
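For comparison, here's a minimal sketch of a choice recipe with a fixed label set plus an “other” option (the labels, recipe name and arguments are made up for the example). The point is that every answer maps to a predefined option ID, so the annotations stay consistent and easy to aggregate.

```python
import prodigy
from prodigy.components.loaders import JSONL

@prodigy.recipe("choice-with-other")
def choice_with_other(dataset, source):
    """Sketch: single-choice task with a fixed label set and an "other" option."""
    options = [
        {"id": "POSITIVE", "text": "positive"},   # example labels (assumptions)
        {"id": "NEGATIVE", "text": "negative"},
        {"id": "OTHER", "text": "other"},
    ]

    def add_options(stream):
        # attach the same options to every incoming example
        for eg in stream:
            eg["options"] = options
            yield eg

    return {
        "dataset": dataset,
        "stream": add_options(JSONL(source)),
        "view_id": "choice",
        "config": {"choice_style": "single"},
    }
```

If annotators end up picking “OTHER” a lot, that's a useful signal that your label scheme needs revisiting – but the answers are still consistent and trivial to filter, unlike free text.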