From what I understand, image.test is only testing Prodigy's built-in object detection. How would I go about testing my manual image annotations?
Yes, image.test is really just an experimental implementation using YOLO models, to show how active learning-powered image annotation could work. When you train a model from the collected annotations, you probably want to use a different and more stable solution, like a PyTorch or TensorFlow implementation. How you do this is really up to you and depends on the task. The manual image annotations you create will have the following format:
{
  "image": "data:image/png;base64,iVBORw0KGgoAAAANSUh...",
  "width": 800,
  "height": 600,
  "spans": [{
    "label": "PERSON",
    "points": [[150, 80], [270, 100], [250, 200], [170, 240], [100, 200]]
  }]
}
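For instance, if you export your dataset with prodigy db-out, each line of the resulting JSONL file is one task in this format. Here's a minimal sketch of loading the data and decoding the base64 image – the file name is just a placeholder, and if you stream in URLs or file paths instead of data URIs, you can skip the decoding step:

    import base64
    import json

    examples = []
    with open("image_annotations.jsonl", encoding="utf8") as f:  # hypothetical db-out export
        for line in f:
            task = json.loads(line)
            if task.get("answer") != "accept":  # only keep accepted examples
                continue
            # The "image" value here is a data URI – strip the "data:image/png;base64," header
            header, _, encoded = task["image"].partition(",")
            image_bytes = base64.b64decode(encoded)
            examples.append((image_bytes, task.get("spans", [])))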
The "points"
of each span are pixel coordinates relative to the original image, so you can convert them to whichever format you need to train your model. The above format is also what Prodigy expects if you want to show the model's predictions in the interface (e.g. to accept and reject them). So you can usually write a relatively simple Python function that takes the model's predictions and outputs the JSON format.
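For example, you could reduce each polygon to an axis-aligned bounding box for training, and turn your model's predicted boxes back into "spans". This is only a sketch – the shape of the prediction tuples is an assumption and will depend on what your model actually returns:

    def polygon_to_bbox(points):
        # Prodigy's "points" are [x, y] pixel coordinates relative to the original image
        xs = [x for x, y in points]
        ys = [y for x, y in points]
        return min(xs), min(ys), max(xs), max(ys)

    def predictions_to_spans(predictions):
        # `predictions` is assumed to be a list of (label, (x_min, y_min, x_max, y_max))
        # tuples – adjust this to your model's output format
        spans = []
        for label, (x0, y0, x1, y1) in predictions:
            spans.append({
                "label": label,
                # a rectangle expressed as its four corner points
                "points": [[x0, y0], [x1, y0], [x1, y1], [x0, y1]],
            })
        return spans

    # Usage: attach the predicted spans to an incoming task before it's sent to the web app
    task = {"image": "some_image.jpg", "width": 800, "height": 600}
    task["spans"] = predictions_to_spans([("PERSON", (150, 80, 270, 240))])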
You might also find this thread helpful, which has more details on using image models in Prodigy, including some strategies for writing your own recipes with a model in the loop: