✨ Idea: Image segmentation annotation interface

Inspired by this tweet, I started working on some experiments for an addition to the image annotation interface. Here's a first draft – thoughts and feedback appreciated! :smiley:

Similar to the ner mode, the image mode could also take an optional spans property, consisting of the segment coordinates (in px, relative to the original image), and an optional label and colour:

    {
        "image": "desk.jpg",
        "spans": [{
            "points": [[150, 80], [270, 100], [250, 200], [170, 240], [100, 200]],
            "color": "yellow",
            "label": "LAPTOP"
        }]
    }
Note: Prodigy currently doesn't come with built-in image models, but they're definitely on our list. We've mostly been focusing on NLP so far, since this is what we know best. In the meantime, you should be able to plug in your own image segmentation model via a custom recipe. All the model needs to do is predict segments and attach scores, and provide a method to update it with annotated examples.

    import prodigy
    from prodigy.components.loaders import Images
    from prodigy.components.sorters import prefer_uncertain

    @prodigy.recipe('segment-images')
    def segment_images(dataset, image_dir):
        stream = Images(image_dir)  # stream in image tasks from a directory
        model = load_my_model()     # load model that extracts segments & assigns scores
        return {
            'dataset': dataset,
            'view_id': 'image',
            'stream': prefer_uncertain(model(stream)),
            'update': model.update
        }

The recipe can then be run from the command line:

    prodigy segment-images my_dataset /images -F recipe.py

Image segmentation would be a perfect feature for my computer vision use case. Curious: is semantic/panoptic segmentation visualization supported currently, or is there a roadmap that includes support for pixel-level annotations like that?

Hi Gerald,

We don't support pixel-level annotations at the moment, but I'll keep it in mind for a future version.

For now, you can convert the polygon annotations to pixel annotations with a custom Python script. You might enjoy this answer for some inspiration:
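As a rough sketch of what such a script could look like: the idea is to rasterize each polygon into a label mask, e.g. with Pillow's `ImageDraw.polygon`. The `spans_to_mask` helper and the image dimensions below are just assumptions for illustration, not part of Prodigy's API:

```python
from PIL import Image, ImageDraw

def spans_to_mask(width, height, spans):
    """Rasterize polygon spans into a single-channel label mask.

    Pixels inside the first span get value 1, the second span 2, etc.;
    background stays 0. Points are (x, y) pixel coordinates, matching
    the "points" format of the image spans.
    """
    mask = Image.new('L', (width, height), 0)  # 'L' = 8-bit grayscale
    draw = ImageDraw.Draw(mask)
    for label_id, span in enumerate(spans, start=1):
        polygon = [tuple(point) for point in span['points']]
        draw.polygon(polygon, fill=label_id)
    return mask

# Example with the span from the task above, assuming a 400x300 image
spans = [{"points": [[150, 80], [270, 100], [250, 200], [170, 240], [100, 200]],
          "label": "LAPTOP"}]
mask = spans_to_mask(400, 300, spans)
```

You'd then keep a separate mapping from the integer IDs back to the span labels, or write one mask per label, depending on what your segmentation model expects.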
