Unless I'm mistaken, there's no obvious review mode to re-evaluate or modify object bounding boxes created with the image.manual recipe (since there's no model in the loop drawing them). As I'm prone to making mistakes, I'd like to review the work done in previous sessions.
I'm new to the product, so I'm likely missing something. If not, please count this as a feature request (I couldn't spot any duplicates on the forum)!
P.S. In the meantime, is there any workaround to spot mistakes in the labels by visualizing the object detection annotations (without having to read the annotation JSON files directly)?
Yes, at the moment you should see a message like "Vote for this feature on the forum" if you try to use the review interface with image data.
One reason is that we haven't yet found a satisfying way to display conflicting image annotations. Displaying all variations together can get pretty messy, and it's unclear how to handle subtle differences. If you're annotating text and tokens, there are only so many possible variations, but if you're drawing bounding boxes, they're pretty much always going to differ (it's very unlikely that two people will draw a box with identical pixel coordinates). So there would probably need to be additional settings to define how to resolve those cases.
The input and output formats are the same, so you can always export your existing annotations as JSONL, load them back into the recipe and go through them again. You could also write a custom recipe that loads the data and creates a new example for each bounding box, so you can step through the boxes one by one if that's easier.
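The per-box idea boils down to a simple stream transformation. Here's a minimal sketch with plain dictionaries — the `"image"` and `"spans"` keys follow Prodigy's image annotation format, but the exact field names and the sample coordinates are assumptions to check against your own exported JSONL:

```python
import copy

def split_by_box(examples):
    """Turn each annotated example into one task per bounding box,
    so a review session can step through the boxes individually.
    Assumes Prodigy-style image tasks with a "spans" list of boxes."""
    for eg in examples:
        for span in eg.get("spans", []):
            task = copy.deepcopy(eg)  # don't mutate the original example
            task["spans"] = [span]    # keep only this one box
            yield task

# Hypothetical example with two boxes on one image (coordinates made up):
stream = [{
    "image": "photo_001.jpg",
    "spans": [
        {"label": "CAR", "points": [[10, 10], [10, 50], [80, 50], [80, 10]]},
        {"label": "PERSON", "points": [[90, 20], [90, 70], [120, 70], [120, 20]]},
    ],
}]

tasks = list(split_by_box(stream))
print(len(tasks))  # 2 tasks, one per box
```

In a custom recipe you'd plug a generator like this in as the stream and keep the `image_manual` interface; the splitting logic itself is independent of Prodigy.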
By default, Prodigy will skip examples that are already annotated in the current set (which is typically the desired behaviour). Datasets are append-only, so you'll never overwrite or lose any data. Using a fresh dataset for the review session is probably a good idea anyway, because it gives you a clear separation between the different stages of the data (and makes it easy to start over if you make a mistake).
There's no need to import anything, though! The datasets hold the annotations, not the source data. So by importing your annotations into a new set, you'd be pre-populating it and making Prodigy think all the examples have already been annotated.
So you want to be doing something like this with a blank dataset:
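Roughly like this — the dataset names, file name, and labels below are placeholders, and the exact flags may vary by Prodigy version, so check `prodigy image.manual --help` first:

```shell
# Export the existing annotations from the old dataset to JSONL
prodigy db-out bbox_dataset > bbox_annotations.jsonl

# Feed that file back in as the *source* for a brand-new dataset,
# so the previously drawn boxes show up pre-populated and editable
prodigy image.manual bbox_review bbox_annotations.jsonl --loader jsonl --label CAR,PERSON
```

The key point is that `bbox_review` starts out blank: the boxes you see come from the input stream, not from the dataset, so nothing is skipped as "already annotated" and your corrections are saved cleanly into the new set.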