Hi, I've been using Prodigy for a couple of months now and I'm really happy with the smooth flow of the whole annotation process.
The most difficult thing for me, however, is reaching a good agreement ratio between annotators. The guidelines usually end up fairly complex, and just the process of successfully communicating them to annotators takes a lot of time (writing & reading the annotation guidelines, and then usually guiding them through some first annotations).
So I have been wondering if there couldn't be a more interactive way of showing what the annotations should look like. Specifically, I could imagine the following process being quite effective:
- Instead of writing down guidelines as a text document, I collect examples which represent difficult cases.
- For each of these examples I write down a short explanation of what the correct annotation should be.
- Then in Prodigy, each new annotator first sees these hand-picked examples and annotates them as usual, but before moving on to the next instance she would first see a pop-up or text box with the short explanation I wrote. Maybe even with specific feedback on why her annotation does not match the intended guidelines.
- After going through these examples (maybe about 30 instances), the annotator would just continue as usual with regular (unseen) instances from the dataset without getting any more feedback.
Basically that would make the process of explaining the guidelines a lot more interactive, and I think it would bring some advantages, especially because annotators could just start annotating and learn in small steps what their responses should ideally look like.
Could that be implemented in any way with Prodigy? As far as I can see, it would be quite hacky if I just tried it with a custom recipe, e.g. by adding these examples twice to the start of the stream, first without and then with explanations.
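To make the "hacky" version concrete, here is a minimal sketch of the stream-side logic I have in mind. It assumes Prodigy's usual convention of a stream as an iterable of task dicts, with the explanation attached via the task's "meta" field (which Prodigy renders in the UI); the helper name `make_onboarding_stream` and the `"explanation"` key are my own invention, not part of Prodigy:

```python
from itertools import chain

def make_onboarding_stream(guideline_examples, regular_stream):
    """Yield each hand-picked example twice -- first as a normal task,
    then again with the explanation attached -- before the real data."""
    def onboarding():
        for eg in guideline_examples:
            # First pass: the annotator labels the example blind.
            first = dict(eg)
            first.pop("explanation", None)
            yield first
            # Second pass: same example, with the written explanation
            # shown via the task's "meta" field.
            followup = dict(eg)
            followup["meta"] = {"explanation": followup.pop("explanation", "")}
            yield followup
    return chain(onboarding(), regular_stream)

# Toy usage: one guideline example followed by the regular stream.
examples = [
    {"text": "Apple unveiled a new phone.",
     "explanation": "Tag 'Apple' as ORG here, not as a fruit."},
]
regular = iter([{"text": "A regular unseen task."}])
tasks = list(make_onboarding_stream(examples, regular))
```

This doesn't yet give the annotator feedback on whether her blind annotation matched the intended one; that comparison would presumably need support inside Prodigy itself, which is why I'm asking.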
Thanks a lot already for the great tool, and I hope the idea is clear enough.
Christopher