image.manual with model in the loop

Hi there,

I am trying to build an annotation recipe based on image.manual, with a Mask R-CNN model in the loop to predict bounding boxes and display them to the annotator, so they can verify the model's predictions. This is my recipe code:

import prodigy
from prodigy.components.loaders import Images
from prodigy.util import split_string
from typing import List, Optional
from model_functions import load_models, getBBwLabels, b64_uri_to_array

# Recipe decorator with argument annotations: (description, argument type,
# shortcut, type / converter function called on value before it's passed to
# the function). Descriptions are also shown when typing --help.
@prodigy.recipe(
    "image_mod.manual",
    dataset=("The dataset to use", "positional", None, str),
    source=("Path to a directory of images", "positional", None, str),
    label=("One or more comma-separated labels", "option", "l", split_string),
    exclude=("Names of datasets to exclude", "option", "e", split_string),
    darken=("Darken image to make boxes stand out more", "flag", "D", bool),
)
def image_mod_manual(
    dataset: str,
    source: str,
    label: Optional[List[str]] = None,
    exclude: Optional[List[str]] = None,
    darken: bool = False,
):
    """
    Manually annotate images by drawing rectangular bounding boxes or polygon
    shapes on the image.
    """
    model_a, model_b, model_c = load_models()
    # Load a stream of images from a directory and return a generator that
    # yields a dictionary for each example in the data. All images are
    # converted to base64-encoded data URIs.
    def get_stream():
        stream = Images(source)
        for eg in stream:
            image = b64_uri_to_array(eg["image"])
            bbs, labels = getBBwLabels(image, model_a, model_b, model_c)
            for i in range(len(bbs)):
                eg['spans'][i]['label'] = labels[i]
                eg["spans"][i]['points'] = bbs[i]
            yield eg

    return {
        "view_id": "image_manual",  # Annotation interface to use
        "dataset": dataset,  # Name of dataset to save annotations
        "stream": get_stream(),  # Incoming stream of examples
        "exclude": exclude,  # List of dataset names to exclude
        "config": {  # Additional config settings, mostly for app UI
            "label": ", ".join(label) if label is not None else "all",
            "labels": label,  # Selectable label options,
            "darken_image": 0.3 if darken else 0,
        },
    }

The model outputs about seven labeled bounding boxes, and these have to be tucked into eg['spans'][i]['points'] and eg['spans'][i]['label'] of each example in the stream. However, it's giving me a

KeyError: 'spans'

Please let me know how to resolve this issue.

Thank you
DU

Hi! The recipe looks good so far :+1: I think the only problem here is that by default, the incoming examples don't have a "spans" key – your stream generator is the one adding the spans. In your loop, though, you're assuming it already exists as a list of length len(bbs) and are trying to write into it. Instead, you probably want to do something like this:

bbs, labels = getBBwLabels(image, model_a, model_b, model_c)
eg["spans"] = []
for i in range(len(bbs)):
    eg["spans"].append({"label": labels[i], "points": bbs[i]})
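To make the fix concrete, here's a minimal, self-contained sketch – the task dict and the model outputs are made up for illustration:

```python
import json

# A task as the Images loader would yield it – note there is no "spans" key yet
eg = {"image": "data:image/jpeg;base64,...", "meta": {"file": "example.jpg"}}

# Hypothetical model output: one polygon (list of [x, y] points) per detection
bbs = [[[36, 273.6], [36, 350.7], [168, 350.7], [168, 273.6]]]
labels = ["foo"]

# Build the spans list from scratch instead of indexing into a missing key
eg["spans"] = []
for i in range(len(bbs)):
    eg["spans"].append({"label": labels[i], "points": bbs[i]})

print(json.dumps(eg["spans"]))
# → [{"label": "foo", "points": [[36, 273.6], [36, 350.7], [168, 350.7], [168, 273.6]]}]
```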

Thank you for the reply..

Will Prodigy recognize and show it in the front end if there is no {"id":__} along with label and points? When I checked the span schema, it looked like this:

{'id': 'e2237a87-4235-4d49-88fd-d33942bb7d75',
 'label': 'foo',
 'points': [[36, 273.6], [36, 350.7], [168, 350.7], [168, 273.6]],
 'color': 'yellow'}

Thank you
DU

Hi Ines, so I tried the code you suggested. The script ran without any error, but when I opened localhost, it was taking forever to load the image, and when I checked the console, the logs showed the model running in an infinite loop (logging

"Detecting objects.."
"Identifying objects.. "

"Detecting objects.."
"Identifying objects.. " 
..
..

on and on). I could not find anything in the recipe that would cause this. I'd greatly appreciate your help.

That should be no problem – if no ID is present, the app will generate one. The data that you export will always have generated IDs, because that's how the app identifies the bounding boxes.

What happens if you print the examples in the stream generator? And which ML library does your model use?

I wonder if it could be related to multiprocessing, e.g. PyTorch / TensorFlow / etc. starting multiple processes for inference. If there's a way you can control that, definitely try that.
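One Python-side thing worth checking, regardless of the library: if inference code lives at module top level, worker processes started with the "spawn" start method re-import the module and re-execute that code. A generic sketch of the usual guard (run_inference is a made-up stand-in for your model calls):

```python
def run_inference():
    # hypothetical stand-in for your model calls
    print("Detecting objects..")
    print("Identifying objects.. ")

if __name__ == "__main__":
    # Without this guard, worker processes spawned by the ML library
    # re-import this module and re-execute any top-level code, which
    # can look exactly like an endless "Detecting objects.." loop.
    run_inference()
```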

Another thing to try would be to move the inference out into its own process, e.g. by moving the stream generator to a separate script and then piping that forward. See the "Loaders and Input Data" docs for an example. Your command could then look like this:

python load_data.py | prodigy image.manual your_dataset - --label LABEL1,LABEL2

Note the - as the source argument, which tells Prodigy to load from standard input (the JSONL printed by your load_data.py function). If this solves the problem, then the most likely explanation is that it's related to multiprocessing.

Hi Ines,

It prints the example with each iteration of the infinite loop. I think you're on point about the multiprocessing, as I had similar issues with multiprocessing during model training. I am using Mask R-CNN models here. I would (I think) have to go back and train a new set of models to turn multiprocessing off, so I would rather solve the problem using the second option. I have some questions about this method, though.

  1. As I understand it, we are moving the stream loader in the recipe, that is
stream = Images(source)

to a separate script. So I should take this line out of the recipe, correct?
  2. So my final recipe would look like this:

import prodigy
from prodigy.components.loaders import Images
from prodigy.util import split_string
from typing import List, Optional
from model_functions import load_models, getBBwLabels, b64_uri_to_array

# Recipe decorator with argument annotations: (description, argument type,
# shortcut, type / converter function called on value before it's passed to
# the function). Descriptions are also shown when typing --help.
@prodigy.recipe(
    "image_mod.manual",
    dataset=("The dataset to use", "positional", None, str),
    source=("Path to a directory of images", "positional", None, str),
    label=("One or more comma-separated labels", "option", "l", split_string),
    exclude=("Names of datasets to exclude", "option", "e", split_string),
    darken=("Darken image to make boxes stand out more", "flag", "D", bool),
)
def image_mod_manual(
    dataset: str,
    source: str,
    label: Optional[List[str]] = None,
    exclude: Optional[List[str]] = None,
    darken: bool = False,
):
    """
    Manually annotate images by drawing rectangular bounding boxes or polygon
    shapes on the image.
    """
    rem, sir, miner = load_models()

    def get_stream():
        for eg in stream:
            image = b64_uri_to_array(eg["image"])
            bbs, labels = getBBwLabels(image, rem, sir, miner)
            eg["spans"] = []
            for i in range(len(bbs)):
                eg["spans"].append({"label": labels[i], "points": bbs[i]})
            yield eg

    return {
        "view_id": "image_manual",  # Annotation interface to use
        "dataset": dataset,  # Name of dataset to save annotations
        "stream": get_stream(),  # Incoming stream of examples
        "exclude": exclude,  # List of dataset names to exclude
        "config": {  # Additional config settings, mostly for app UI
            "label": ", ".join(label) if label is not None else "all",
            "labels": label,  # Selectable label options,
            "darken_image": 0.3 if darken else 0,
        },
    }
  3. This is the loader I have created:
from pathlib import Path 
import json

data_path = Path("./sample_images")
for file_path in data_path.iterdir():  # iterate over directory
    lines = Path(file_path).open("r", encoding="utf8")  # open file
    for line in lines:
        task = {"text": line}  # create one task for each line of text
        print(json.dumps(task))  # dump and print the JSON

My stream input is images, so I think I might have to make some changes accordingly, as I am getting this error message when I run the code:

Traceback (most recent call last):
  File "loader.py", line 7, in <module>
    for line in lines:
  File "/home/dileep/miniconda3/envs/tf15/lib/python3.7/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte

Thank you so much for helping.
DU

Yeah, this makes a lot of sense then!

You want to move all your logic – from loading the images from a path to adding the bounding boxes created by the model – into that separate loader file. What that Python file writes to standard output is then JSON-formatted examples in Prodigy's format, which you can pipe forward to the recipe.

For example, something like this:

from model_functions import load_models, getBBwLabels, b64_uri_to_array
from prodigy.components.loaders import Images
import json

rem, sir, miner = load_models()  # load the models once, up front
stream = Images("/your/path")
for eg in stream:
    image = b64_uri_to_array(eg["image"])
    bbs, labels = getBBwLabels(image, rem, sir, miner)
    eg["spans"] = []
    for i in range(len(bbs)):
        eg["spans"].append({"label": labels[i], "points": bbs[i]})
    print(json.dumps(eg))

If you run that file on its own, it should just print a bunch of examples.

And in your recipe, you can remove your custom get_stream stuff and replace that with Prodigy's stream helper that will automatically read from standard input if you pass in "-" as the source:

from prodigy.components.loaders import get_stream

# in your recipe
stream = get_stream(source, loader="jsonl", input_key="image")

That error happens because your loader treats the directory as if it contained text files and then tries to iterate over the file contents (binary image data) line by line.
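For comparison, reading an image correctly means opening it in binary mode and base64-encoding the bytes into a data URI – a minimal sketch of roughly what Prodigy's Images loader does internally (the helper name here is made up):

```python
import base64
from pathlib import Path

def image_to_data_uri(path: Path) -> str:
    """Read an image as raw bytes (not text!) and wrap it in a base64 data URI."""
    mime = "image/png" if path.suffix.lower() == ".png" else "image/jpeg"
    encoded = base64.b64encode(path.read_bytes()).decode("utf-8")
    return f"data:{mime};base64,{encoded}"
```

Opening the same file in text mode (open(path, "r", encoding="utf8")) is what triggers the UnicodeDecodeError: JPEG files start with the byte 0xFF, which is not valid UTF-8.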

I am so sorry to bug you again – I have a feeling I am making a very silly mistake here. This is where I am now:

I am able to run the loader on its own with no problem now. It prints valid JSON lines with the model predictions.

However, when I pipe the output to my custom recipe, it gives me an error like this:

✘ Failed to load task (invalid JSON on line 1)
This error pretty much always means that there's something wrong with this line
of JSON and Python can't load it. Even if you think it's correct, something must
confuse it. Try calling json.loads(line) on each line or use a JSON linter.

Traceback (most recent call last):
  File "loader.py", line 16, in <module>
    print(json.dumps(eg))
BrokenPipeError: [Errno 32] Broken pipe

This is how my recipe looks now:

@prodigy.recipe(
    "image_mod.manual",
    dataset=("The dataset to use", "positional", None, str),
    source=("Path to a directory of images", "positional", None, str),
    label=("One or more comma-separated labels", "option", "l", split_string),
    exclude=("Names of datasets to exclude", "option", "e", split_string),
    darken=("Darken image to make boxes stand out more", "flag", "D", bool),
)
def image_mod_manual(
    dataset: str,
    source: str,
    label: Optional[List[str]] = None,
    exclude: Optional[List[str]] = None,
    darken: bool = False,
):
    """
    Manually annotate images by drawing rectangular bounding boxes or polygon
    shapes on the image.
    """
    
    stream = get_stream(source, loader="jsonl", input_key="image")

    return {
        "view_id": "image_manual",  # Annotation interface to use
        "dataset": dataset,  # Name of dataset to save annotations
        "stream": stream,  # Incoming stream of examples
        "exclude": exclude,  # List of dataset names to exclude
        "config": {  # Additional config settings, mostly for app UI
            "label": ", ".join(label) if label is not None else "all",
            "labels": label,  # Selectable label options,
            "darken_image": 0.3 if darken else 0,
        },
    }

From logging at different points in the recipe and loader, I could tell that the problem is in the connection between the loader and the recipe. This is how I am doing the piping on the CLI:

python loader.py | prodigy image_mod.manual sample - --label L1,L2,L3,L4 -F recipe.py

Thank you for your help

Hmm, this is difficult to debug from afar, but did you verify that the data your loader pipes forward is valid and can be read? For instance, by piping it to a test script that reads from standard input?

import sys
import json

for line in sys.stdin:
    line = line.strip()
    print(line)
    print(json.loads(line))
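The BrokenPipeError in the loader traceback, by the way, is just a side effect: the consumer exits after hitting the "invalid JSON" error, which closes the pipe while the loader is still printing. So the real question is what's on that first line. A small validator like this (a hypothetical helper, not part of Prodigy) can pinpoint the first offending line in a dump of the loader's output:

```python
import json

def first_invalid_line(lines):
    """Return (line_number, error) for the first non-JSON line, or None if all parse."""
    for i, line in enumerate(lines, 1):
        line = line.strip()
        if not line:
            continue
        try:
            json.loads(line)
        except json.JSONDecodeError as err:
            return i, str(err)
    return None
```

One common culprit worth ruling out: anything else the loader writes to standard output – for instance, model log lines like "Detecting objects.." – ends up in the stream and breaks the JSON parsing, so such logging should go to stderr instead.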