ines
(Ines Montani)
May 29, 2018, 10:29am
Hi! Prodigy's manual image interface is still under development. You can see an early demo here:
This is a very early and experimental draft – but I like sharing our work in progress to show what we're working on. This interface shows a simple, manual UI for annotating image spans (both rectangular and free-form polygon shapes). I ended up writing the whole thing from scratch, so it's still a bit rough.
To illustrate the span annotations the interface is producing behind the scenes, I've added a box underneath the image. This is obviously just for demo purposes and won't be present in Prod…
If you have annotated bounding boxes using a different tool and want to import them to Prodigy to re-annotate them or train a model in the loop, you can convert them to Prodigy's JSON format. This thread has some more details and strategies:
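To make the conversion idea concrete, here's a minimal sketch that turns plain `[x, y, width, height]` pixel boxes into Prodigy-style image tasks. The `"points"` polygon format (a list of `[x, y]` corners) matches what the manual image interface produces, but the exact field names are an assumption here – check the format against your Prodigy version before importing.

```python
# Sketch: convert [x, y, width, height] bounding boxes into Prodigy-style
# image tasks. The "points" format (list of [x, y] polygon corners) is
# based on what the image interface produces; verify field names against
# your Prodigy version's docs before relying on this.
import json


def box_to_span(box, label):
    """Turn an [x, y, w, h] box into a span dict with polygon points."""
    x, y, w, h = box
    return {
        "label": label,
        "points": [[x, y], [x + w, y], [x + w, y + h], [x, y + h]],
    }


def make_task(image_url, boxes):
    """Build one task from an image URL and (box, label) pairs."""
    return {
        "image": image_url,
        "spans": [box_to_span(box, label) for box, label in boxes],
    }


if __name__ == "__main__":
    task = make_task("https://example.com/cat.jpg", [([10, 20, 100, 50], "CAT")])
    # One task per line (JSONL) is the usual way to feed Prodigy a stream
    print(json.dumps(task))
```

Writing one task per line as JSONL gives you a file you can load as a stream and re-annotate or correct in the image interface.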
Yes, this is definitely possible – you’ll just have to plug in your own model implementation. The LightNet codebase (our Python port of Darknet) is still very experimental. We did use it internally to try an active learning workflow, and it looked promising, but it’s not production-ready and still fairly brittle. That’s why there’s currently no built-in image.teach recipe – only an image.test recipe to try out the image interface on your own data.
Instead of LightNet, you probably want to use a …
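To illustrate what "plug in your own model" involves, here's a framework-agnostic sketch of the two hooks a model-in-the-loop workflow needs: a predict step that scores incoming image tasks, and an update callback that learns from the annotator's answers. `DummyModel` and `prefer_uncertain` are stand-ins invented for this sketch, not real Prodigy or LightNet APIs – a real recipe would wire these into Prodigy's recipe system and swap in an actual object detector.

```python
# Sketch of the two hooks an active-learning image workflow needs:
# predict() to score the stream, update() to learn from answers.
# DummyModel is a placeholder, not a real detector.


class DummyModel:
    """Stand-in detector: scores tasks and accepts training updates."""

    def predict(self, stream):
        # Yield (score, task) pairs; a real model would run inference here
        for task in stream:
            score = 0.5  # placeholder confidence
            yield score, task

    def update(self, answers):
        # A real model would fine-tune on the accepted/rejected spans;
        # here we just count the accepts to show the callback's shape
        accepted = [a for a in answers if a.get("answer") == "accept"]
        return len(accepted)


def prefer_uncertain(scored_stream):
    """Order tasks so scores closest to 0.5 (most uncertain) come first."""
    return [task for _, task in sorted(scored_stream, key=lambda st: abs(st[0] - 0.5))]
```

The point of the uncertainty sort is that the annotator's time goes to the examples the model is least sure about, which is where a label changes the model most.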
In my comment here, I've also shared a converter recipe for the VGG Image Annotator:
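As a rough idea of what such a converter does, here's a sketch that reads rectangular regions out of a VIA export and emits Prodigy-style tasks. The `"filename"`, `"regions"` and `"shape_attributes"` keys reflect VIA's JSON export as I understand it, and the span format is an assumption – double-check both against your VIA version and the linked recipe.

```python
# Sketch: convert a VGG Image Annotator (VIA) export into Prodigy-style
# tasks. Assumes VIA's JSON layout ("filename", "regions", rectangular
# "shape_attributes") – verify against your VIA version's actual export.


def via_region_to_span(region):
    """Turn one rectangular VIA region into a span dict with polygon points."""
    shape = region["shape_attributes"]
    assert shape["name"] == "rect", "only rectangles handled in this sketch"
    x, y = shape["x"], shape["y"]
    w, h = shape["width"], shape["height"]
    # "label" in region_attributes is an assumed attribute name
    label = region.get("region_attributes", {}).get("label", "OBJECT")
    return {
        "label": label,
        "points": [[x, y], [x + w, y], [x + w, y + h], [x, y + h]],
    }


def via_to_tasks(via_export):
    """Yield one task per annotated image in the VIA export dict."""
    for entry in via_export.values():
        yield {
            "image": entry["filename"],
            "spans": [via_region_to_span(r) for r in entry["regions"]],
        }
```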
Yes, it doesn't yet work out of the box, but you can still use Prodigy to train an image model in the loop. We did experiment with the YOLO models a lot (and even wrote our own little Python wrapper for Darknet – but it's not that stable). With Prodigy, we first wanted to focus on the NLP capabilities, since this is what we know best – but computer vision is definitely on our radar and something we're actively working on for both the downloadable tool and the upcoming annotation manager. …