Add `review` mode to `image.manual`

Unless I'm mistaken, there's no obvious review mode for re-evaluating or modifying object bounding boxes created through the image.manual recipe (where no model is in the loop to draw them). Since I'm prone to making mistakes, I'd like to review work done in previous sessions.

I'm new to the product, so I'm probably missing something? If not, please count this as a feature request (I couldn't spot any duplicates on the forum)! :slight_smile:

P.S. In the meantime, is there any workaround for spotting labeling mistakes via a visualization of the object detection annotations (without having to look at the annotation JSON directly)?

Yes, at the moment, you should see a message like "Vote for this feature on the forum" if you're trying to use review with image data :smiley:

One reason is that we haven't really found a satisfying way yet to display conflicting image annotations. Displaying all variations together can get pretty messy, and it's unclear how to handle subtle differences. If you're annotating text and tokens, there are only so many possible variations, but if you're actually drawing bounding boxes, they're pretty much always going to differ (it's extremely unlikely that two people will draw a box with identical pixel coordinates). So there would probably need to be additional settings to define those things.

The input and output formats are the same, so you can always export your existing annotations as JSONL, load them back into the recipe and go through them again. You could also write a custom recipe that loads the data and creates a new example for each bounding box, so you can step through them one by one if that's easier – see the sketch below.
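For example, here's a totally untested sketch of such a custom recipe. It assumes the JSONL was exported with db-out and that each task stores its boxes in "spans" (the format image.manual produces); the recipe name image.review-boxes is made up for illustration:

```python
import copy
import prodigy
from prodigy.components.loaders import JSONL

@prodigy.recipe(
    "image.review-boxes",
    dataset=("Dataset to save reviewed answers to", "positional", None, str),
    source=("Path to exported JSONL annotations", "positional", None, str),
    label=("Comma-separated label set", "option", "l", str),
)
def image_review_boxes(dataset, source, label=""):
    def one_box_per_example(stream):
        for eg in stream:
            for span in eg.get("spans", []):
                # copy the task and keep only a single bounding box
                new_eg = copy.deepcopy(eg)
                new_eg["spans"] = [span]
                # drop the old hashes so the split-up tasks aren't
                # filtered out as already-annotated duplicates
                new_eg.pop("_input_hash", None)
                new_eg.pop("_task_hash", None)
                yield new_eg

    stream = JSONL(source)  # one task per line of the exported file
    return {
        "dataset": dataset,
        "stream": one_box_per_example(stream),
        "view_id": "image_manual",
        "config": {"labels": label.split(",") if label else []},
    }
```

You'd then point Prodigy at the file with the -F flag, e.g. prodigy image.review-boxes review_dataset exported.jsonl --label spam,eggs -F recipe.py.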

This looks like what I need for now! Thanks! I just need the labels to be "overridable", though (e.g. the issue in https://support.prodi.gy/t/editing-image-label-after-undo-not-reflected/ may be a hurdle for correcting labels; I need to verify this).

I thought I could do this but failed. Can someone help me?

Starting from a dataset "images" (partially annotated during a previous session), I exported the annotations with

python -m prodigy db-out images labeling

and then tried to load them back with

python -m prodigy db-in images labeling/images.jsonl

I then tried to review the annotations with

python -m prodigy image.manual images images_dir --label spam,eggs

However, when I open the app, I don't see any already-annotated images to review or modify.
What's the right procedure/recipe to follow?

What happens if you use a new dataset name?

By default, Prodigy will skip examples that are already annotated in the current set (which is typically the desired behaviour). Datasets are append-only, so you'll never overwrite or lose any data. Using a fresh dataset for the review session is probably a good idea anyway, because it gives you a clear separation between the different stages of the data (and makes it easy to start over if you make a mistake).
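In case it helps to see what "already annotated" means here: each saved example carries hashes, and incoming tasks whose hashes already exist in the current dataset get filtered out. A quick, illustrative way to inspect this with Prodigy's Python database API:

```python
from prodigy.components.db import connect

db = connect()                        # connect to the configured database
examples = db.get_dataset("images")   # all examples saved in the "images" set
seen = {eg["_input_hash"] for eg in examples}
print(f"{len(seen)} unique inputs already annotated in 'images'")
```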

Thanks! I also tried

python -m prodigy dataset some-other-images-dataset
python -m prodigy db-in some-other-images-dataset labeling\images.jsonl
python -m prodigy image.manual some-other-images-dataset images_dir --label spam,eggs

The annotations get imported:

Imported 90 annotations for 'some-other-images-dataset' to database SQLite
Added 'accept' answer to 0 annotations
Session ID: 2019-08-30_13-25-48

but in the app, no already-annotated images appear in the history, nor am I prompted with images that already have bounding boxes drawn :frowning:

I'll try again with new datasets from scratch to reproduce this.

There's no need to import anything, though! The datasets hold the annotations, not the source data. So by importing your data into a new set, you're pre-populating it with the existing annotations and making Prodigy think all examples have already been annotated.

So you want to be doing something like this with a blank dataset:

prodigy image.manual new_dataset images.jsonl --loader jsonl --label spam,eggs
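For reference, each line of the exported JSONL is one task, with the box corners stored as [x, y] points in "spans". A simplified, illustrative record (paths and coordinates made up) might look like:

{"image": "images_dir/img_001.jpg", "spans": [{"label": "spam", "points": [[150, 80], [320, 80], [320, 240], [150, 240]]}], "answer": "accept"}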

Perfect, works like a charm and is exactly what I was looking for. Really appreciate the help from you devs, @ines!

For future reference (totally untested, use at your own risk), one way to review old annotations may be:

# export the annotations from the original dataset into the labeling folder
python -m prodigy db-out original-dataset labeling

# create new dataset to review annotations
python -m prodigy dataset debug
# load annotations for review
python -m prodigy image.manual debug labeling/original-dataset.jsonl --loader jsonl --label spam,eggs

[open browser and do review of annotations]

# output reviewed annotations
python -m prodigy db-out debug labeling/
# drop the original dataset (careful: this permanently deletes it and its annotations)
python -m prodigy drop original-dataset
# create a new dataset to keep annotating beyond the images already covered
python -m prodigy dataset reviewed-dataset
# load reviewed annotations
python -m prodigy db-in reviewed-dataset labeling/debug.jsonl
python -m prodigy image.manual reviewed-dataset images_dir --label spam,eggs

[open browser and keep annotating new images]

@ines Is there any update on using the review recipe with image.manual?

Not at the moment, but I would be very interested in your take on this question for your specific use case: