Thanks – I like that idea! We could even generalise this a bit more and make it a "text input" interface that lets you render any task (text, image, NER) and adds an input box to the card that you can optionally pre-populate with text. This means you could use it not only for machine translation, but also for image captioning etc. For example:
```json
{
    "text": "It was a pressure to meet you.",
    "text_input_default": "It was pleasure."
}
```
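An image captioning task could then follow the same pattern – just a hypothetical sketch here, the "image" key and URL are placeholders and the keys aren't final:

```json
{
    "image": "https://example.com/photo.jpg",
    "text_input_default": "A photo of"
}
```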
The default content will then be displayed in the text field and can be edited by the user. The edited text will then be added to the task as "text_input" (not sure about the exact naming of the keys yet).
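So a completed annotation might look something like this (a sketch, assuming the user fixed the typo – the exact output format isn't final):

```json
{
    "text": "It was a pressure to meet you.",
    "text_input_default": "It was pleasure.",
    "text_input": "It was a pleasure to meet you."
}
```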
You could also very easily convert the annotations to a stream of compare or diff examples. This would let you re-annotate the corrections made by the user:
```python
diff_examples = []
for eg in user_input_examples:
    before = eg['text_input_default']
    after = eg['text_input']
    if before != after:  # user has edited the text
        task = {'input': {'text': eg['text']},
                'accept': {'text': after},
                'reject': {'text': before}}
        # optional: shuffle accept / reject for less biased evaluation
        diff_examples.append(task)
```
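To implement the optional shuffling and save the stream for re-annotation, something like this should work (a rough sketch: the "flipped" key is just a hypothetical marker I'm adding so the original orientation can be recovered later, and the file name is a placeholder):

```python
import json
import random

for task in diff_examples:
    # randomly swap which text is shown as "accept" vs. "reject",
    # and record the swap so the original orientation can be recovered
    if random.random() < 0.5:
        task['accept'], task['reject'] = task['reject'], task['accept']
        task['flipped'] = True  # hypothetical marker, not an official key

# write the tasks out as JSONL, one example per line
with open('diff_examples.jsonl', 'w', encoding='utf8') as f:
    for task in diff_examples:
        f.write(json.dumps(task) + '\n')
```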