Multiple users review same image dataset

So we have 29k images that need to be reviewed. We installed Prodigy on an Ubuntu server and gave the link to a few teammates, who will probably review the dataset at the same time.

The question is: if more than one user opens a session, will they each see the same images from start to finish in their own session? And if two people review the same image and one accepts while the other rejects, does the last answer win?

Can you please clarify?

The way the web app works is actually pretty straightforward: every time a user opens the site, the app makes a request to the /get_questions endpoint, which returns the next batch from the stream. This means that if two clients connect to the same session, they will get different batches of data – whatever is next up on the queue.
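To make that concrete, here's a toy illustration in plain Python (not Prodigy's actual implementation) of why two clients connecting to the same session never receive the same batch:

```python
from itertools import islice

# Stand-in for the single stream of 29k images held by one Prodigy session.
stream = iter(f"image_{i}.jpg" for i in range(29_000))
BATCH_SIZE = 10

def get_questions():
    # Every request pops the *next* batch off the shared queue.
    return list(islice(stream, BATCH_SIZE))

client_a = get_questions()  # image_0.jpg ... image_9.jpg
client_b = get_questions()  # image_10.jpg ... image_19.jpg
```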

If you want to have multiple people annotate the same data, I'd recommend starting multiple instances – for example, run Prodigy on different ports (e.g. by setting the PRODIGY_PORT environment variable when you execute the command). Each annotator could then also have their own dedicated dataset that their answers are saved to. This means you'll be able to compare the work performed by the individual people.
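For instance, here's a rough sketch of that setup – the recipe, dataset names, ports and paths are placeholders, so substitute whatever you're actually running for your image review task:

```python
import os
import subprocess

# One Prodigy instance per annotator: each gets its own port (via the
# PRODIGY_PORT environment variable) and its own dataset to save answers to.
annotators = {"alice": 8081, "bob": 8082, "carol": 8083}

processes = []
for name, port in annotators.items():
    env = dict(os.environ, PRODIGY_PORT=str(port))
    cmd = ["prodigy", "mark", f"images_{name}", "./images",
           "--loader", "images", "--view-id", "image"]
    processes.append(subprocess.Popen(cmd, env=env))
```

Each annotator then gets their own URL (port) and their own dataset, so nobody's answers can overwrite anyone else's.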

There's no simple answer for this – how you want to handle conflicting annotations later on is something you have to decide. If annotators all add to their own datasets, you'll be able to export the data and compare it to find and resolve conflicts.

One strategy could be to take the datasets, find answers with the same _task_hash (same question) but with different answers. You could then use a threshold of, say, 80% agreement to decide whether to include the example or not. So if 80% of annotators agree, you include the example – otherwise, you don't, or reannotate it yourself to make the final decision.
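Here's a minimal sketch of that idea, assuming each annotator saved to their own dataset (the dataset names below are placeholders) and using Prodigy's database API to read the answers back:

```python
from collections import Counter, defaultdict
from prodigy.components.db import connect

db = connect()
datasets = ["images_alice", "images_bob", "images_carol"]  # placeholder names

votes = defaultdict(list)   # _task_hash -> all answers given for that question
tasks = {}                  # _task_hash -> one copy of the task itself
for name in datasets:
    for eg in db.get_dataset(name):
        votes[eg["_task_hash"]].append(eg["answer"])
        tasks[eg["_task_hash"]] = eg

final, conflicts = [], []
for task_hash, answers in votes.items():
    answer, count = Counter(answers).most_common(1)[0]
    if count / len(answers) >= 0.8:          # 80% agreement threshold
        final.append({**tasks[task_hash], "answer": answer})
    else:
        conflicts.append(tasks[task_hash])   # re-annotate these yourself

print(f"{len(final)} agreed examples, {len(conflicts)} conflicts to resolve")
```

The same tally also makes it easy to check whether it's always the same dataset, i.e. the same annotator, holding the minority vote.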

Maybe you'll also find that it's usually the same annotator who disagrees with everyone else – this could indicate a misunderstanding about the annotation scheme. This is obviously super important and something you want to find out as soon as possible. So I'd recommend exporting and analysing the data with this type of objective very early on in the process.

I'd also recommend checking out the following addon, which was developed by a fellow Prodigy user. It includes a range of features to use the tool with multiple annotators and get stats and analytics:

We're also working on an extension product, the Prodigy Annotation Manager, which is very close to a public beta now :tada: The app will have a service component and let you manage multiple users, analyse their results and performance, create complex annotation workflows interactively and build larger corpora and labelled datasets. If that sounds relevant, definitely keep an eye on the forum for the official announcement.

Thanks for your explanation, it does make sense.

One additional question (unrelated to this one). I am the only person who reviews the images. I started Prodigy, reviewed about 200 images, and then stopped Prodigy.

After about two hours, I restarted Prodigy and tried to continue reviewing. I realized that Prodigy loaded some images that I had already reviewed hours ago.

I can see that the data was stored correctly in the SQLite database, but why didn't Prodigy take the already-reviewed records into account when I restarted it? Is any configuration required?

Yes, by default, Prodigy makes no assumptions about what the current dataset "means". But you can use the --exclude option to explicitly tell it to exclude examples present in one or more datasets – for example: --exclude dataset_one,dataset_two.
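For example, keeping the rest of the command as-is (the recipe and paths below are placeholders for whatever you're already running):

```python
import subprocess

# Same command as before, with --exclude added so answers already stored in
# the listed datasets are skipped on this run.
subprocess.run([
    "prodigy", "mark", "image_dataset", "./images",
    "--loader", "images", "--view-id", "image",
    "--exclude", "dataset_one,dataset_two",
])
```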

Thank you, it works. Just to make sure: I can exclude the same dataset that I'm currently using, and this way it will pick up where it left off and continue with the rest of the images?

Yes, exactly. Or, more precisely, Prodigy will skip an incoming example if it's already present in the dataset (i.e. if it has the same _task_hash property).
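Under the hood, it's roughly equivalent to a filter like this (a simplified sketch, not Prodigy's exact code – the dataset name is a placeholder):

```python
from prodigy import set_hashes
from prodigy.components.db import connect

db = connect()
seen = set(db.get_task_hashes("image_dataset"))  # hashes already answered

def exclude_seen(stream):
    for eg in stream:
        eg = set_hashes(eg)                      # make sure _task_hash is set
        if eg["_task_hash"] not in seen:
            yield eg
```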

Thank you!
