I am using Prodigy to review predictions from an object detection model. I have 9 classes in total, and the number of objects in an image can vary. Here is what can happen for the predictions in an image:
- Model predicts some objects correctly.
- Model predicts some objects incorrectly.
- Model does not detect some objects.
Now, at present, the reviewer can mark each bounding box as correct, wrong, or ignore. Using these answers, we can send the wrong bounding boxes to image.manual for correction. But how do we deal with the third case, i.e. when the model does not detect an object at all?
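For context, here is a minimal sketch of how I build the stream for image.manual from the reviewed tasks, keeping only the boxes marked correct so the annotator can redraw the wrong ones. The `review` key is my own custom field added during the review step (not a built-in Prodigy field); the `spans`/`points` structure follows Prodigy's image task format.

```python
import json

def make_relabel_task(task):
    """Keep only spans reviewed as correct; wrong ones are dropped
    so they can be redrawn in image.manual."""
    kept = [
        span for span in task.get("spans", [])
        if span.get("review") == "correct"  # custom field from our review step
    ]
    return {"image": task["image"], "spans": kept}

# Example reviewed task (coordinates are illustrative)
reviewed = {
    "image": "example.jpg",
    "spans": [
        {"label": "car", "points": [[10, 10], [50, 10], [50, 40], [10, 40]],
         "review": "correct"},
        {"label": "dog", "points": [[60, 20], [90, 20], [90, 50], [60, 50]],
         "review": "wrong"},
    ],
}

# One JSONL line per task, ready to feed into image.manual
line = json.dumps(make_relabel_task(reviewed))
```

This handles correcting wrong boxes, but it still leaves the missed-object case open.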
Is there a way to avoid relabeling the entire image and instead only relabel the wrong/missing bounding boxes?