✨🖼️ Beta testers wanted: new manual image UI (v1.10)

Hey everyone! We've been making good progress on some really cool new features for Prodigy v1.10. One of them is a fully redesigned and improved manual UI for annotating images ✨🖼️ So if that's an area you're working in, and you're a Prodigy user, maybe you want to help us test it?

[demo GIF: image_manual_new_demo]

Features include

  • moving and resizing existing shapes
  • moving polygon points
  • freehand mode (lasso) for drawing fully custom shapes
  • enhanced data format: all bounding boxes now also expose their width, height, x, y and center, and all image spans include a type ("rect", "polygon", "freehand")
  • new config settings:
    • image_manual_stroke_width: stroke width of boxes, also used to calculate transform points
    • image_manual_font_size: font size of the box label
    • image_manual_show_labels: (not pictured) default setting for toggle to show/hide box labels
    • image_manual_modes: annotation modes to allow and display, in order. Defaults to ["rect", "polygon", "freehand"].
    • image_manual_from_center: enable drawing and resizing boxes starting with the center, which can be much more efficient
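To make the enhanced data format more concrete, here's a rough sketch of what a single annotated box might look like as a task (shown as a Python dict; the `spans` and `points` field names are assumptions based on Prodigy's existing image format, and all concrete values and the file name are invented):

```python
# Hypothetical annotation task illustrating the enhanced image_manual format.
# The new fields (width, height, x, y, center and type) are the ones listed
# above; everything else is an assumption for illustration.
task = {
    "image": "street.jpg",
    "spans": [
        {
            "type": "rect",            # "rect", "polygon" or "freehand"
            "label": "CAR",
            "x": 120,                  # top-left corner
            "y": 80,
            "width": 200,
            "height": 150,
            "center": [220.0, 155.0],  # x + width / 2, y + height / 2
            "points": [[120, 80], [320, 80], [320, 230], [120, 230]],
        }
    ],
}

# The derived fields stay consistent with the corner points:
span = task["spans"][0]
assert span["center"][0] == span["x"] + span["width"] / 2
assert span["center"][1] == span["y"] + span["height"] / 2
```

If you're post-processing the data downstream, this means you no longer have to recompute box geometry from the raw points yourself.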

Beta testing and requirements

If you want to help beta test the new feature and try it on your data, feel free to send me an email at ines@explosion.ai or a DM on Twitter. Requirements are:

  • Current Prodigy user on v1.9 (just include your order ID starting with #EX... when you get in touch)
  • Should have an immediate project you can test it on, ideally involving some of the features added in the new interface
  • Big plus: you already have a downstream pipeline in place that uses the data to train a model, so you can test the end-to-end workflow

Obvious disclaimer: This is a beta, so things may be broken :sweat_smile: (Hopefully not too much, though!) Definitely test it with a fresh and separate dataset.

If you have any questions, I'm also happy to answer them in this thread! P.S. If you're working with dependency/relation annotation, including complex NER relations, coref etc. tasks, keep an eye out for another call for beta testers soon :wink:


I would like it if it were possible to do single (or multiple) point annotations. Currently it might be possible to use tiny bounding boxes for this, but that requires a click-and-drag action (vs. a single click), which is more work for the annotator.

At the moment, you do have to draw something. Another option would be to use the freehand tool, draw a really short line and then use its first point. But I'm not sure whether that's better than a mini bounding box that clearly marks the center of what you're annotating.

Loving these options for configuring the image annotation. Have they left beta status yet, so they can be added to the documentation?

Thanks! And yes, all the new settings are official now and they're currently documented with the annotation interface here: Annotation interfaces · Prodigy · An annotation tool for AI, Machine Learning & NLP (Sorry if this was difficult to find!)
