Hi!
I have a few questions and concerns regarding creating custom review recipes.
Short background: our annotators are going to annotate images with bounding boxes, with an interface similar to image.manual. We would like to ensure the quality of the annotations by forwarding the annotated images to a review recipe, where other annotators verify whether the bounding boxes are drawn correctly. If the reviewers decide that a task is not annotated correctly (e.g. two out of three reviewers agree that the annotation is incorrect), it should be sent back to the original interface to be annotated again. Additionally, we need to make sure that an annotator cannot review a task they annotated themselves.
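The re-annotation rule we have in mind is a simple majority vote over the reviewers' answers; roughly this (just an illustrative helper, names are made up):

```python
def needs_reannotation(review_answers, threshold=2):
    """Return True if at least `threshold` reviewers marked the task incorrect.

    `review_answers` holds the reviewers' choices for one task,
    e.g. ["correct", "incorrect", "incorrect"].
    """
    votes = sum(1 for answer in review_answers if answer == "incorrect")
    return votes >= threshold

# Two out of three reviewers flag the annotation:
needs_reannotation(["correct", "incorrect", "incorrect"])  # True
```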
I hope I managed to explain myself somewhat clearly! Now for my questions:
Q1.
So far I have managed to create a custom review recipe where the review tasks are built from annotated examples in the database. Each annotated task from the image.manual interface is forwarded to the review interface separately and given the options "correct" and "incorrect" (or something similar). How can I now route the incorrect examples back to the original recipe (image.manual) to be annotated again?
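For reference, the part that turns a database example into a review task looks roughly like this (simplified sketch; the option texts and the `orig_session_id` field are my own naming, and the actual database wiring via `prodigy.components.db.connect()` is omitted so the snippet stays self-contained):

```python
import copy

def make_review_task(example):
    """Turn an annotated image.manual example into a choice-style review task.

    The reviewer sees the original image with its bounding boxes and
    picks "correct" or "incorrect". In the real recipe, `example` is
    loaded from the Prodigy database.
    """
    task = copy.deepcopy(example)
    task["options"] = [
        {"id": "correct", "text": "Annotation is correct"},
        {"id": "incorrect", "text": "Annotation is incorrect"},
    ]
    # Keep track of who annotated the original task, so the router
    # can avoid sending the review to the same person.
    task["orig_session_id"] = example.get("_session_id")
    return task

example = {"image": "img001.jpg", "spans": [], "_session_id": "my-dataset-alice"}
review_task = make_review_task(example)
```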
Q2.
I also managed to create a custom router based on route_average_per_task with the same functionality, except that the router additionally checks that a review task is not routed to the original annotator: it verifies that the _session_id of the original task does not include the name of any of the annotators the task is routed to (we are planning to use a pre-determined set of session IDs that we will give to our annotators, and the annotator name is extractable from the Prodigy-generated _session_id).
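The exclusion step itself is simple enough to show; here is a minimal sketch of the filtering logic (the Prodigy wiring — calling route_average_per_task with ctrl/session_id/item to get candidate sessions — is left out so this runs on its own, and the session-naming scheme is our own assumption):

```python
def annotator_name(session_id):
    """Extract the annotator name from a Prodigy _session_id.

    We assume session IDs of the form "<dataset>-<name>", where the
    name itself contains no hyphen, e.g. "my-dataset-alice" -> "alice".
    """
    return session_id.rsplit("-", 1)[-1]

def exclude_original_annotator(candidate_sessions, original_session_id):
    """Drop any candidate session that belongs to the original annotator."""
    original = annotator_name(original_session_id)
    return [s for s in candidate_sessions if annotator_name(s) != original]

# In the custom router, we first get candidate sessions from
# route_average_per_task(ctrl, session_id, item), then filter them:
exclude_original_annotator(["review-alice", "review-bob"], "my-dataset-alice")
# -> ["review-bob"]
```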
Then I realized that I cannot use the annotations_per_task setting at all with a custom router... I understand this limitation exists to ensure Prodigy works correctly, but our use case really depends on the custom router as well as the option to set the number of annotations per task. How could I achieve this?
Thanks a lot!