Hey everyone! We've been making good progress on some really cool new features for Prodigy v1.10. One of them is a fully redesigned and improved manual UI for annotating images. So if that's an area you're working in, and you're a Prodigy user, maybe you want to help us test it? Here's what's new:
- allow moving and resizing existing shapes
- allow moving polygon points
- freehand mode (lasso) for drawing fully custom shapes
- enhanced data format: all bounding boxes now also expose their center, and all image
- new config settings:
  - `image_manual_stroke_width`: stroke width of the boxes, also used to calculate the transform points
  - `image_manual_font_size`: font size of the box label
  - `image_manual_show_labels`: (not pictured) default setting for the toggle to show/hide box labels
  - `image_manual_modes`: annotation modes to allow and display, in order. Defaults to `["rect", "polygon", "freehand"]`.
  - `image_manual_from_center`: enable drawing and resizing boxes starting from the center, which can be much more efficient
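To give you an idea of how the new settings fit together, here's a rough sketch of what they might look like in your `prodigy.json`. The setting names and the default for `image_manual_modes` are taken from the list above; all other values are just illustrative assumptions, not documented defaults:

```json
{
  "image_manual_stroke_width": 2,
  "image_manual_font_size": 14,
  "image_manual_show_labels": true,
  "image_manual_modes": ["rect", "polygon", "freehand"],
  "image_manual_from_center": false
}
```

As with other Prodigy settings, you'd also be able to override these per recipe via the recipe's `"config"` return value.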
Beta testing and requirements
If you want to help beta test the new features and try them on your data, feel free to send me an email at firstname.lastname@example.org or a DM on Twitter. Requirements are:
- Current Prodigy user on v1.9 (just include your order ID starting with #EX... when you get in touch)
- Should have an immediate project you can test it on, ideally one involving some of the features added in the new interface
- Big plus: you already have a downstream pipeline in place that uses the data to train a model, so you can test the end-to-end workflow
Obvious disclaimer: This is a beta, so things may be broken. (Hopefully not too broken, though!) Definitely test it with a fresh and separate dataset.
If you have any questions, I'm also happy to answer them in this thread! P.S. If you're working with dependency/relation annotation, including complex NER relations, coref and similar tasks, keep an eye out for another call for beta testers soon.