Relabel only mistakes after reviewing an image

I am using Prodigy to review predictions from an object detection model. I have 9 classes in total, and the number of objects in an image can vary. Here is what can happen with the predictions for an image:

  1. The model predicts some objects correctly.
  2. The model predicts some objects incorrectly.
  3. The model fails to detect some objects.

At present, the reviewer can mark each bounding box as correct, wrong or ignore. Using these answers, we can send the wrong bounding boxes to `image.manual` for correction. But how do we deal with the third case, i.e. when the model does not detect an object at all?
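To make the setup concrete, here is a minimal sketch of the grouping step we do on the review output. It assumes a hypothetical JSONL export where each line is one reviewed box: a task with an `"image"` key, a single box under `"spans"`, and an `"answer"` of `"accept"`, `"reject"` or `"ignore"` (the exact field names depend on how the review task was set up).

```python
import json
from collections import defaultdict

def split_reviews(path):
    """Group per-box review answers by image.

    Assumes each line in `path` is a JSONL task with an "image" key,
    one bounding box under "spans", and an "answer" of "accept",
    "reject" or "ignore" (hypothetical export format). Ignored boxes
    are dropped.
    """
    by_image = defaultdict(lambda: {"accept": [], "reject": []})
    with open(path, encoding="utf-8") as f:
        for line in f:
            task = json.loads(line)
            answer = task.get("answer")
            if answer in ("accept", "reject"):
                by_image[task["image"]][answer].extend(task.get("spans", []))
    return dict(by_image)
```

The rejected spans per image are what we currently feed back into `image.manual`; the open question is how to surface the boxes that were never predicted in the first place.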

Is there a way to avoid labeling the entire image from scratch and only relabel the wrong or missing bounding boxes?

Hi! There are several ways you could solve this. One would be to add an “all complete?” annotation step where the annotator gets to see all bounding boxes that were predicted for an image and needs to decide whether all objects are detected or whether something is missing. If you do this after the first annotation step where you accept/reject the individual bounding boxes, you could even use that information here and only include the boxes that were accepted in the first step. If you’re lucky, many of those will already be correct, so you’d only have to re-label the rejects from the second step manually.
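A quick sketch of what that re-labeling stream could look like, assuming you have already grouped the review answers into a mapping of image to accepted/rejected spans (the names `by_image` and `make_relabel_tasks` and the dict layout are illustrative, not a Prodigy API): images where every box was accepted are skipped, and the rest are emitted with only the accepted boxes pre-filled, so the annotator just redraws the wrong boxes and adds anything that was missed.

```python
import copy

def make_relabel_tasks(by_image):
    """Yield one task per image that keeps only accepted boxes.

    `by_image` maps image URL/path -> {"accept": [...], "reject": [...]}
    (hypothetical structure built from the per-box review answers).
    Images with no rejected boxes are skipped entirely.
    """
    for image, answers in by_image.items():
        if not answers["reject"]:
            continue  # every predicted box was accepted, nothing to fix
        # Pre-fill only the accepted boxes; the annotator redraws the
        # rejected ones and adds any objects the model missed.
        yield {"image": image, "spans": copy.deepcopy(answers["accept"])}
```

You could save these tasks to a JSONL file and load them into a manual image interface, so each image comes up with the good boxes already in place.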