✨ Demo: fully manual image annotation interface

This is a very early and experimental draft – but I like sharing work in progress to show what we're up to. The interface is a simple, manual UI for annotating image spans (both rectangular and free-form polygon shapes). I ended up writing the whole thing from scratch, so it's still a bit rough.

To illustrate the span annotations the interface is producing behind the scenes, I've added a box underneath the image. This is obviously just for demo purposes and won't be present in Prodigy. All spans are [x, y] coordinates in pixels, relative to the original image size (even if the image is resized in the browser). The annotated data is fully compatible with Prodigy's regular image interface.
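For reference, a single annotated task might look roughly like the sketch below. This is a minimal example based on Prodigy's documented image span format, where each span carries a "points" list of [x, y] pixel coordinates plus a "label" – the exact keys produced by this early draft may differ slightly, and the values are made up for illustration.

{
    "image": "image1.jpg",    # path or URL of the annotated image
    "spans": [
        {
            "label": "CAR",   # label assigned to this span (hypothetical)
            "points": [[155, 15], [305, 15], [305, 160], [155, 160]]  # pixel coordinates of the corners
        }
    ]
}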

Click here to open the demo in a new window.

Current features in the draft:

  • Shapes can be drawn by clicking on the image. The bounding box / line is shown on mouseover, so the user doesn't have to drag.
  • Polygon shapes can be closed by a double click, or by clicking on any other line point.
  • Shapes can be selected by click – for example, to delete them.
  • ESC "exits" the current shape in polygon mode.

Not implemented yet:

  • Keyboard shortcuts for the shapes (R for rect, something else for polygon).
  • Changing the label of a selected span.
  • One-click label button option instead of dropdown and better auto-focusing.
  • "Select all" or "Clear all" option.
  • Solution for overlapping shapes.
  • Various small UI fixes, e.g. preventing user from adding "null" shapes.

This looks really cool, and it's something I would very likely incorporate into my workflow. Some feedback from trying it on an iPad:

  • Creating the rectangle feels a bit weird on the iPad. I actually want to just press somewhere and then drag out a shape, instead of doing one press for the start and one for the end.
  • When I press, there is no feedback that I'm drawing anything, so I'm not sure whether I've started a rectangle or not. I guess this is because touch devices have no mouse cursor.

It's kind of an edge case, but annotating on the iPad would be pretty cool and handy.

@espenjutte Thanks! I haven’t implemented any touch handlers yet, so this is really helpful feedback! On touch interfaces, we could use a drag interaction instead of the hover, which should be pretty easy to implement. Will play around with this!

I think the interface should actually work pretty well on touch screens (unlike the manual NER annotation, which is pretty difficult, considering how badly text selection is handled on touch screens, especially in iOS). In the built-in Prodigy version of the interface, we should probably also disable the swipe gestures for this interface. Otherwise, it’s too easy to accidentally accept or reject a task while annotating.

Update on the work in progress! :framed_picture:


Hi, this is exactly what I am looking for. Is there an estimated release date for this? I have been looking at other tools like labelImg to do the annotation and then tying in ML separately to get it all plumbed together, but this looks like it will handle everything I need, including building and training the model. Thank you!

@jmrichardson Thanks – I’m glad this looks useful! :smiley: There’s no ETA yet and I don’t want to overpromise… I’d obviously love to ship it as soon as possible, but getting the UX flow right is quite tricky, and we want to make sure it’s actually good and useful when we release it.

It’d also be nice to have some more solid image interfaces for training and evaluating. (So far, we’ve tested the active learning annotation with a model in the loop using our LightNet package and it worked surprisingly well. But it needs some more testing and a more stable back-end.)

That said, if you do have a model that already predicts something (even if it’s not that good), you can easily wire it up using a custom recipe and the image interface. All you need is a function that takes a stream of incoming images, scores them and yields (score, example) tuples, and an update function that takes a list of annotated examples and updates the model in the loop. For example, something like this:

import prodigy
from prodigy.components.loaders import Images
from prodigy.components.sorters import prefer_uncertain

@prodigy.recipe('image.teach')
def image_teach(dataset, source):
    stream = Images(source)    # load images from a directory
    model = LOAD_YOUR_MODEL()  # placeholder: load your own object detection model here

    def predict(stream):
        for eg in stream:
            score, spans = model(eg['image'])  # get score and bounding box spans
            eg['spans'] = spans                # add the predicted spans to the task
            yield (score, eg)

    def update(examples):
        model.update(examples)  # update the model with the annotated examples

    return {
        'dataset': dataset,                           # dataset to save annotations to
        'stream': prefer_uncertain(predict(stream)),  # the sorted, scored stream
        'update': update,                             # callback to update the model in the loop
        'view_id': 'image'                            # use the image interface
    }

You can then use the recipe as follows:

prodigy image.teach your_dataset /path/to/images -F recipe.py

For more details, see the usage workflow on custom recipes.

Thank you, and looking forward to the release when it’s ready. In the meantime, I will have a look at your suggestions.

Hey @ines, this looks like exactly what I need. Apologies for butting in with the same question, but is there any ETA for shipping it, or releasing it in beta?


@bhanu Sorry I can’t give you an exact date or a more satisfying answer at this point – but essentially, our current priorities for Prodigy are as follows:

  1. complete wrappers for PyTorch and TensorFlow/Keras in Thinc, so Prodigy can natively support models from other libraries (without requiring a fully custom recipe)
  2. get the Prodigy Annotation Manager ready for beta testing (the Prodigy extension to scale up annotation projects with multiple annotators etc.)
  3. build out Prodigy’s image capabilities, add a built-in image model and improve the interfaces – this will also include the stable and production-ready version of the image_manual interface
  4. launch the Prodigy Annotation Manager

We’re hoping to have all of this ready by the end of summer. If you follow us on Twitter and/or sign up for the mailing list, you’ll also hear about any private betas and how to join them. For the bigger features especially, we’re definitely planning on getting the community involved to test them – just like we did for the initial Prodigy beta :blush:

@espenjutte @jmrichardson @bhanu

Update! :tada: We just released Prodigy v1.5.0, which includes an experimental manual image interface, plus an image.manual recipe that takes a directory of images and lets you draw bounding boxes and polygon shapes on them. You can add a shape by clicking and moving your mouse or by dragging and dropping. You can close polygon shapes by clicking on the start point, or double-clicking anywhere on the image (like in Photoshop etc.). To delete a shape, select it and hit the delete button. To change its label, select it and click on the label (or use the respective keyboard shortcut).
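Assuming the new recipe follows the same command-line pattern as the other built-in recipes, running it should look roughly like the sketch below. The --label option for passing in the label set is an assumption based on the other manual recipes, so check prodigy image.manual --help for the exact arguments.

prodigy image.manual your_dataset /path/to/images --label CAR,PERSON,TREE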

Note that the interface is currently designed mostly for coarser, less fine-grained selections (which are also more common for typical ML tasks). You can test the interface in our updated online demo:

There are still various things we want to add and improve to make the interface more efficient. But because so many of you were interested in this, I wanted to ship a usable version as soon as possible, so you can try it out :blush:

Some features that are still coming: moving shapes via drag and drop, editing shapes by resizing box corners and moving polygon points, undo / redo buttons, better layer management (the selected shape should be on top, and overlaps of labels and boxes should be prevented wherever possible) and fully tested touchscreen support.


Hi, I was wondering if you are still planning on adding a feature to edit boxes/polygons or if there is a current hack to do this.

Yes, see my comment on this thread: