Hello again!
I am building a custom annotation interface for which I previously asked for help here and here.
My task is kind of repetitive, so I want to check whether a given annotator's performance (e.g., `?session=alex`) decreases over time.
I'm really interested in computing intra-annotator agreement. I was looking at the documentation on task routing, and the only option seems to be building a custom task router for this.
However, to avoid temporal recall bias, I'd prefer that repeated samples not be shown too close to the original annotation; ideally, they'd be spaced out randomly or after a buffer of N tasks.
What I'd like to implement:
- Assign a repeat probability per task per session (e.g., for `?session=alex`, 10–20% of the tasks completed by Alex will be shown again).
- Ensure repeated items are routed non-sequentially, with a sufficient gap between repetitions to minimize memory effects.
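To make the two requirements above concrete, here's a rough sketch of the scheduling logic I have in mind, in plain Python rather than actual Prodigy API calls (all class and method names here are hypothetical; the real version would live inside a custom task router):

```python
import random
from collections import deque

class RepeatScheduler:
    """Sketch of per-session repeat scheduling: each completed task is
    re-queued with some probability, but only becomes eligible again
    after a minimum gap of further tasks (plus random jitter) to reduce
    memory effects. Hypothetical names, not Prodigy API."""

    def __init__(self, repeat_prob=0.15, min_gap=20, seed=None):
        self.repeat_prob = repeat_prob  # e.g. 10-20% of completed tasks
        self.min_gap = min_gap          # buffer of N tasks before a repeat
        self.rng = random.Random(seed)
        self.pending = deque()          # (task, count_at_which_eligible)
        self.count = 0                  # tasks served to this session so far

    def task_done(self, task):
        """Call after the annotator completes a task; maybe schedule a repeat."""
        self.count += 1
        if self.rng.random() < self.repeat_prob:
            # Eligible only after `min_gap` more tasks, plus random jitter
            # so repeats don't land at a predictable offset.
            jitter = self.rng.randint(0, self.min_gap)
            self.pending.append((task, self.count + self.min_gap + jitter))

    def next_repeat(self):
        """Return a repeat task if one is due, else None."""
        if self.pending and self.pending[0][1] <= self.count:
            return self.pending.popleft()[0]
        return None
```

The idea would be to call something like `task_done()` from the router whenever a session completes a task, and to serve `next_repeat()` (when it's not `None`) ahead of fresh tasks from the stream.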
Additional questions:
- If possible, I'd like to know whether this behaviour will interfere with the conditions stated in my previous question, and if so, how that can be avoided. I was planning to implement condition #2: stop after 1 accept AND 1 reject for this category; this is a cumulative condition across all annotations for the category.
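  For reference, this is the condition #2 check I mean, written out in plain Python (the counts dict shape is my own assumption, not a Prodigy structure):

  ```python
  def category_done(counts):
      """Condition #2 from my earlier question: stop serving tasks for a
      category once it has, cumulatively across all annotations, at least
      one accept AND at least one reject. `counts` is assumed to look
      like {"accept": n, "reject": m} for a single category."""
      return counts.get("accept", 0) >= 1 and counts.get("reject", 0) >= 1
  ```

  My concern is that a repeat scheduled before this condition was met could still be served afterwards, effectively collecting annotations for an already-closed category.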
- Are all of the above requirements compatible with the global `prodigy.json` setting `"annotations_per_task": 1.5`, or would I also need to implement the inter-annotator agreement behaviour in the custom task router?
Any help would be hugely appreciated! Thanks in advance.