Clarification on annotation capabilities

Hello,

I have a few thousand images that I'd like to annotate (either with polygons or pixel-level segmentation) and I'm looking for a tool that can run locally on my computer. Does Prodigy require previously annotated datasets to drive pre-annotation, or does it come ready right out of the box? For example, I'm annotating people within images: when I load the images, will they already have a label/annotation around each person (or every pixel segmented) that I can then correct, fine-tune, or further define?

Also, does it allow additional attributes attached to the same label? For example, a label of "Person" with attributes like height, weight, etc.?

Does it use publicly available models or do I need to load my own model for the assisted annotation capability?

I didn't see a clear answer so wanted to ask for clarification. Thank you.

Hi! Pre-annotation in Prodigy works via Python functions, also called recipes – they can use an existing model, an API, a set of rules or any other logic to add annotations to the incoming examples, so you can accept/reject or correct them. The models aren't built into Prodigy itself, but you can integrate pretty much anything you can load in Python. For text-based and NLP-related workflows, Prodigy provides an out-of-the-box integration with spaCy (since that's a library we develop as well, the integration is easy).

So if you're doing computer vision and you have a model that already predicts something useful, you can use it to add bounding boxes. You can see examples of how this works here: https://prodi.gy/docs/computer-vision#custom-model
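
To make that a bit more concrete, here's a minimal sketch of what such a custom recipe could look like. The recipe name, the `predict_spans` helper and the `PERSON` label are placeholders for whatever model or logic you end up using – the important part is that the stream is just a generator of dicts, and pre-annotations are added as a list of `"spans"`:

```python
import prodigy
from prodigy.components.loaders import Images

def predict_spans(eg):
    # Placeholder: replace with your model, an API call or rules. It should
    # return a list of dicts like {"label": "PERSON", "points": [[x, y], ...]}.
    return []

@prodigy.recipe("image.pretrain-boxes")
def image_pretrain_boxes(dataset, source):
    def add_predictions(stream):
        for eg in stream:
            eg["spans"] = predict_spans(eg)  # attach the suggested boxes
            yield eg

    stream = Images(source)  # load images from a directory as base64-encoded tasks
    return {
        "dataset": dataset,                # dataset to save the annotations to
        "stream": add_predictions(stream),
        "view_id": "image_manual",         # manual UI so boxes can be corrected
        "config": {"labels": ["PERSON"]},
    }
```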

Depending on your use case (and if your categories are pretty generic, like PERSON), you might find an existing pretrained model that's already working well enough to use it in the loop. If your data and/or categories are more specific, you might need to collect a few manual annotations first and then pre-train a model that can help you annotate later on.
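
For a generic category like PERSON, a detector pretrained on COCO could be a reasonable starting point. The sketch below uses torchvision as an example of such a model (that's my assumption here, not something Prodigy ships with or requires) and shows how its box predictions could be converted into the `"points"` polygons the image UI expects. The function works on a file path, so you'd adapt it to however your stream stores the image:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO-pretrained detector; "person" has index 1 in the COCO label map
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()
PERSON_CLASS = 1

def person_spans(path, threshold=0.7):
    """Return Prodigy-style spans for confident person detections in an image."""
    image = to_tensor(Image.open(path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]
    spans = []
    for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
        if label.item() == PERSON_CLASS and score.item() >= threshold:
            x0, y0, x1, y1 = box.tolist()
            spans.append({
                "label": "PERSON",
                # a bounding box expressed as a four-point polygon
                "points": [[x0, y0], [x1, y0], [x1, y1], [x0, y1]],
            })
    return spans
```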

This is definitely possible, although we'd recommend doing it in two steps: first, make sure your data includes the bounding boxes; then loop over the bounding boxes one at a time and annotate their attributes, e.g. using a multiple-choice UI, a free-form input etc. See the docs on custom interfaces for how to combine different blocks: https://prodi.gy/docs/custom-interfaces#blocks
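
To illustrate, here's a rough sketch of what that second pass could look like as a custom recipe – the recipe name, the dataset names and the particular attributes (an adult/child choice and a height field) are just made-up examples:

```python
import prodigy
from prodigy import set_hashes
from prodigy.components.db import connect

@prodigy.recipe("person.attributes")
def person_attributes(dataset, boxes_dataset):
    db = connect()

    def one_box_per_task(examples):
        # Turn every accepted bounding box into its own annotation task
        for eg in examples:
            if eg.get("answer") != "accept":
                continue
            for span in eg.get("spans", []):
                task = dict(eg)
                task["spans"] = [span]
                task["options"] = [                  # choices for the choice block
                    {"id": "ADULT", "text": "Adult"},
                    {"id": "CHILD", "text": "Child"},
                ]
                yield set_hashes(task, overwrite=True)

    blocks = [
        {"view_id": "choice"},                       # image + box + multiple choice
        {"view_id": "text_input", "field_id": "height_cm",
         "field_label": "Height (cm)"},              # free-form attribute
    ]
    return {
        "dataset": dataset,
        "stream": one_box_per_task(db.get_dataset(boxes_dataset)),
        "view_id": "blocks",
        "config": {"blocks": blocks},
    }
```

The selected options are stored in the task's "accept" field and the free-form text under the field_id you define, so each attribute stays linked to its bounding box in the saved annotations.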

Doing it in two steps means you can evaluate both tasks separately and only go deeper once you've verified that the bounding boxes are correct and your label scheme works. It also makes it easy to sort annotations by type for faster annotation, lets you automate parts of the process, and reduces the cognitive load on the annotator by keeping the focus on one decision at a time (bounding boxes, or attributes for a given bounding box).


Thank you, Ines. Very helpful. I'm a newbie – do you all offer to set up the workflow for new users for a fee?

From what you've described, it sounds like you'd be most interested in some setup work related to the computer vision model, right? We currently don't offer consulting work around computer vision, but you could post on this thread: spaCy/prodigy consultants?

Another thing to keep in mind is that a lot of the custom work you'd be looking for here isn't necessarily that specific to Prodigy – you'd mostly want a Python script that loads a computer vision model or API and outputs the predictions in a given JSON format, plus a training workflow that loads annotations in this format. If you have that, the integration will be pretty straightforward. So even if you get someone to help you who hasn't worked with Prodigy before, they can likely still get you set up with the most important things you need :slightly_smiling_face:
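
For example, the core of such a script could be as small as the sketch below: run whatever detector you end up with over your images (here the hypothetical person_spans function from the earlier sketch) and write one JSON task per line, which Prodigy can then read as a JSONL stream. The file and directory names are placeholders, and local image paths may need to be converted to base64 or served over HTTP so the browser can display them:

```python
import json
from pathlib import Path

with open("predictions.jsonl", "w", encoding="utf8") as f:
    for path in sorted(Path("images").glob("*.jpg")):
        task = {
            "image": str(path),                # or a URL / base64-encoded data URI
            "spans": person_spans(str(path)),  # [{"label": ..., "points": ...}, ...]
        }
        f.write(json.dumps(task) + "\n")
```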