Inspired by this tweet, I started working on some experiments for an addition to the image
annotation interface. Here's a first draft – thoughts and feedback appreciated!
Similar to the `ner` mode, the `image` mode could then also take an optional `"spans"` property, consisting of the segment coordinates (in pixels, relative to the original image), plus an optional label and colour:
```json
{
  "image": "desk.jpg",
  "spans": [{
    "points": [[150, 80], [270, 100], [250, 200], [170, 240], [100, 200]],
    "color": "yellow",
    "label": "LAPTOP"
  }]
}
```
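For context, a stream of such tasks could also be generated programmatically and written out as JSONL (one JSON object per line), the format Prodigy's file loaders generally expect. The helper function and file name below are just illustrative, not part of Prodigy's API:

```python
import json

def make_image_task(image, spans):
    # Build one annotation task in the proposed format: an image
    # plus a list of segment spans with points, colour and label.
    return {"image": image, "spans": spans}

task = make_image_task("desk.jpg", [{
    "points": [[150, 80], [270, 100], [250, 200], [170, 240], [100, 200]],
    "color": "yellow",
    "label": "LAPTOP",
}])

# Write the tasks as JSONL, one task per line.
with open("image_tasks.jsonl", "w", encoding="utf8") as f:
    f.write(json.dumps(task) + "\n")
```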
Note: Prodigy currently doesn't come with built-in image models, but they're definitely on our list. We've mostly been focusing on NLP so far, since that's what we know best. In the meantime, you should be able to plug in your own image segmentation model via a custom recipe. All the model needs to do is predict segments with attached scores, and provide a method to update it from annotated examples.
```python
import prodigy
from prodigy.components.loaders import Images
from prodigy.components.sorters import prefer_uncertain

@prodigy.recipe('segment-images')
def segment_images(dataset, image_dir):
    stream = Images(image_dir)  # stream in image tasks from a directory
    model = load_my_model()     # load model that extracts segments & assigns scores
    return {
        'dataset': dataset,
        'view_id': 'image',
        'stream': prefer_uncertain(model(stream)),
        'update': model.update
    }
```
```bash
prodigy segment-images my_dataset /images -F recipe.py
```
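The recipe above leaves `load_my_model` undefined, since that part is up to you. A minimal sketch of the object it's expected to return might look like the following. Everything here is a placeholder, not part of Prodigy: the class name, the dummy prediction and the fixed score all stand in for your own model. The only real contract is that calling the model on a stream yields `(score, example)` tuples (which is what sorters like `prefer_uncertain` consume) and that it exposes an `update` method:

```python
class DummySegmentModel:
    """Placeholder wrapper around an image segmentation model."""

    def __call__(self, stream):
        # For each incoming image task, predict segments, attach them to
        # the task and yield (score, example) tuples for the sorter.
        for eg in stream:
            spans, score = self.predict(eg)
            eg["spans"] = spans
            yield score, eg

    def predict(self, eg):
        # Placeholder: a real model would run inference on eg["image"].
        spans = [{"points": [[0, 0], [10, 0], [10, 10]],
                  "label": "LAPTOP"}]
        return spans, 0.5

    def update(self, examples):
        # Placeholder: a real model would update its weights based on
        # the annotated examples collected in the app.
        pass
```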