Feature Request: Machine Translation View

It would be great to have a translation interface where you can type a completely new sentence into a text box (as in machine translation).

The text box may or may not be pre-populated with a predicted sentence from an MT model. The data structure could be something like:

    {
        "text": "It was a pressure to meet you.",
        "predicted_text": "It was pleasure.",
        "corrected_text": "It was a pleasure to meet you."
    }

This could be useful for all manner of sequence-to-sequence learning.

Thanks – I like that idea! We could even generalise this a bit more and make it a general “text input” interface that lets you render any task (text, image, NER) and adds an input box to the card that you can optionally pre-populate with text. This means you could use it for machine translation, but also for image captioning etc.

    {
        "text": "It was a pressure to meet you.",
        "text_input_default": "It was pleasure."
    }

The default content will then be displayed in the text field, and can be edited by the user. It will then be added to the task as "text_input" (not sure about the exact naming of the keys yet).
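To make the flow concrete, here's a small sketch of what a task might look like before and after annotation. The key names (`text_input_default`, `text_input`) follow the proposal above, but as noted, the exact naming isn't final:

```python
# Hypothetical task as it would go into the annotation stream,
# assuming the proposed "text_input_default" key:
task_in = {
    "text": "It was a pressure to meet you.",
    "text_input_default": "It was pleasure.",
}

# After the user edits the pre-populated text and submits, the edited
# content would come back on the task under "text_input":
task_out = dict(task_in)
task_out["text_input"] = "It was a pleasure to meet you."
```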

You could also very easily convert the annotations to a stream of compare or diff examples. This would let you re-annotate the corrections made by the user:

    diff_examples = []

    for eg in user_input_examples:
        before = eg['text_input_default']
        after = eg['text_input']
        if before != after:  # user has edited the text
            task = {'input': {'text': eg['text']},
                    'accept': {'text': after},
                    'reject': {'text': before}}
            # optional: shuffle accept / reject for less biased evaluation
            diff_examples.append(task)

Pinging to see if this is on the roadmap. Thanks!

Also interested if there has been any progress on this.

@ines any news here? Or is there a way of doing it by creating a custom recipe?
Thanks in advance.

This should be pretty straightforward now with the blocks UI and a text input block: https://prodi.gy/docs/custom-interfaces#blocks
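As a rough illustration, a custom recipe could combine a `text` block with a `text_input` block along these lines. This is an untested sketch based on the blocks documentation linked above; the `field_id` value `"translation"` and the example stream are made up for illustration:

```python
# Blocks config: show the source text, then an editable text field.
blocks = [
    {"view_id": "text"},
    {"view_id": "text_input", "field_id": "translation",
     "field_rows": 3, "field_label": "Correction"},
]

# Example stream: adding a key matching the field_id should
# pre-populate the text field with the model's prediction.
stream = [
    {"text": "It was a pressure to meet you.",
     "translation": "It was pleasure."},
]

# In a recipe, this would go into the returned "config" dict.
config = {"blocks": blocks}
```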

You can pre-populate the text box with content, e.g. the text produced by your model. I'm showing a similar(ish) workflow with image captioning annotation in my custom recipes video btw: