I want to manually annotate my images as shown in the demo (image manual), but I'm not sure how to get to that step. I have worked with text classification in Prodigy before, so I'm familiar with the tool, but I couldn't find any resource explaining how to provide my input images and tags, or which command to use. Can you guide me on how to go about it?
Yes, I think what you’re looking for is the image.manual recipe (see here). It takes the path to a directory of images and a list of labels you want to assign:
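Something along these lines (the dataset name, image directory and labels here are placeholders for your own):

```bash
prodigy image.manual face_emotions ./images --label SAD,HAPPY,NEUTRAL
```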
The data saved to your dataset will then include a "spans" property, containing one span for each shape. The "points" are [x, y] pixel coordinates, relative to the original image. You can also find more details on this in the “Annotation task formats” section of your PRODIGY_README.html.
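For illustration, a saved example with one box might look roughly like this (file name, label and coordinates are made up):

```json
{
  "image": "face_001.jpg",
  "width": 400,
  "height": 400,
  "spans": [
    {
      "label": "HAPPY",
      "points": [[55, 83], [290, 83], [290, 320], [55, 320]]
    }
  ]
}
```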
Thanks for your response. Yes, it seems like I can do this. My task is to classify the emotion on faces (sad, happy, neutral), but I don't understand why I have to surround the object with a bounding box, since as far as I know, Prodigy doesn't have active learning capability for images?
Bounding boxes are needed if you want to create training data for a model that can replicate this decision – e.g. detect objects in images. If you want to train a model to assign labels to images, you likely also want to create training data consisting of image + label.
Prodigy currently doesn't have any built-in image models, but you can always plug in your own via custom recipes. We haven't tested the active learning workflow for images very extensively yet, so for many tasks, you'll likely want to start off by creating data by hand and then train your model afterwards.
What does your data look like? Do you have one face per image, or do you want to detect faces in your images first and then classify the emotion?
I probably can't give you the best advice regarding the model architecture, but to label your data, I'd suggest using a custom recipe to stream in images and then annotate whether the emotion label applies. You could use a simple binary interface (classification), make several passes over the data, and collect one decision for each face and emotion. Alternatively, you could use the choice interface and select one or more labels per image. For an example, see this custom sentiment analysis recipe – instead of the JSONL loader, you'd use the Images loader with a directory of images.
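For reference, a choice-style task for a single image could look roughly like this once the options are added (file name and label ids are made up):

```json
{
  "image": "face_001.jpg",
  "options": [
    {"id": "SAD", "text": "SAD"},
    {"id": "HAPPY", "text": "HAPPY"},
    {"id": "NEUTRAL", "text": "NEUTRAL"}
  ]
}
```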
Oh okay, I understand the purpose of the bounding box now: to create training data for our model, which can be called in Prodigy.
So I tried performing the manual annotation with and without a bounding box. When I didn't draw a bounding box and just classified, I couldn't get the label in the 'db-out' output because there was no span containing the id and label. On the other hand, when I drew a bounding box, I could see the span. In my task, I have to label 30,000 images, and each image is just one face, so it will be more time-consuming if I have to draw a bounding box on every picture, which is something I have no use for in this task. So is there a way I can just click on the label in the web UI, not worry about the bounding box, and still get the span containing the label?
Yes, this makes sense, because the manual image interface is intended to add bounding boxes to images. So if you don't add any boxes, the "spans" will be empty.
Yeah, as I mentioned above, it sounds like your task doesn't actually need the bounding boxes and you just want to add labels to the whole image. So you probably want to build an interface that shows the image and one or more labels:
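Here's a rough sketch of what such a custom recipe could look like with the choice interface – the recipe name, labels and config settings are just placeholders for your setup:

```python
import prodigy
from prodigy.components.loaders import Images

# Placeholder emotion labels – replace with your own classes
LABELS = ["SAD", "HAPPY", "NEUTRAL"]

@prodigy.recipe("classify-emotion")
def classify_emotion(dataset, source):
    """Stream in images from a directory and pick one label per image."""
    def get_stream():
        for task in Images(source):
            # Attach the multiple-choice options to each image task
            task["options"] = [{"id": label, "text": label} for label in LABELS]
            yield task

    return {
        "dataset": dataset,        # dataset to save the annotations to
        "stream": get_stream(),    # stream of image tasks with options
        "view_id": "choice",       # multiple-choice interface
        "config": {
            "choice_style": "single",    # only allow one selected label
            "choice_auto_accept": True,  # auto-accept once a label is picked
        },
    }
```

You could then run it with something like `prodigy classify-emotion your_dataset ./images -F recipe.py`, and the selected label would end up in the "accept" field of each saved example.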