Prodigy web API access?

Hi,
I have an app that parses a document and shares it with the user. If the document is not parsed correctly, the user can flag it.

When a document is flagged, it is sent to the admin team for review. Can we send the document to Prodigy (via an API) so that a user can annotate it? The annotation would be saved to the existing model, and our program would reload the updated model and start using it.

To summarize, this is what I am looking to achieve:

  1. The user flags a result.
  2. The data is sent to Prodigy.
  3. Prodigy loads the existing model and allows the admin user to annotate the flagged result.
  4. Prodigy saves the model.
  5. Our program reloads the updated model and uses it (see the sketch after this list).
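
For step 5, this is roughly what I have in mind on our side (just a sketch; the model path is hypothetical and assumes the retrained model is a spaCy pipeline saved to disk):

```python
import spacy

# Hypothetical path where the retrained pipeline would be saved.
MODEL_DIR = "./models/doc-parser-latest"

def reload_model():
    # Reload the updated pipeline from disk so the app starts using it.
    return spacy.load(MODEL_DIR)

nlp = reload_model()
doc = nlp("Example document text")
print([(ent.text, ent.label_) for ent in doc.ents])
```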

I am planning to host Prodigy on a public server accessible by the admin team.

Any inputs / suggestions are more than welcome.

Thanks,
Tushar

Hi! In theory, this would be possible by having the flagging action trigger a web service that starts the Prodigy server with the given example as the input file (e.g. by calling prodigy.serve under the hood or starting the server in a subprocess). But it might not be very effective and useful to update your model on single examples like that.
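
Very roughly, a sketch of what such a trigger could look like, assuming a recent Prodigy version (where `prodigy.serve` takes the recipe command as a string), a spaCy pipeline on disk and placeholder dataset, file and label names – everything here is illustrative, not a recommendation:

```python
import json
import prodigy

def start_review_session(flagged_text, model_path="./model-best", port=8080):
    # Write the flagged example to a JSONL file Prodigy can use as input.
    with open("flagged.jsonl", "w", encoding="utf8") as f:
        f.write(json.dumps({"text": flagged_text}) + "\n")

    # Start the Prodigy server so the admin can correct the model's
    # predictions on this example. Dataset name and labels are placeholders.
    # Note: this call blocks, which is why you'd typically run it in a
    # subprocess or a separate worker process.
    prodigy.serve(
        f"ner.correct flagged_docs {model_path} flagged.jsonl --label GPE,ORG",
        port=port,
    )
```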

Normally, when you improve your model with more annotations, you want to create them in batches, retrain your model from scratch and carefully evaluate the results on evaluation data that's representative of the runtime data, to make sure your model actually improved. You also want to make sure that the training and evaluation data is annotated consistently – inconsistent data is one of the main problems for ML models.

Updating your model with a single example may not really make a difference or solve the underlying problem. In a lot of cases, the mistakes a user flags might point to a deeper issue that's not just solved by adding a single example to the training data. If the user doesn't have an overview of the specifics of the model, or the annotation scheme, they may also create annotations that are completely inconsistent with the rest of the data and make the model worse, often in ways that are pretty unintuitive.

Instead, it seems much more useful to collect the flagged examples, maybe have the user enter some notes, and then use this feedback to improve the data you're using to train the model.
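
For example, the flagging endpoint could simply append the example and the user's note to a JSONL file in Prodigy's input format, so you can annotate the collected examples in a batch later and retrain from the full dataset. The file name and meta fields below are just placeholders:

```python
import json
from datetime import datetime

FLAGGED_FILE = "flagged_examples.jsonl"  # hypothetical path

def record_flagged_example(text, user_note=""):
    # Append the flagged document plus the user's note in Prodigy's
    # JSONL task format, so it can later be annotated in a batch
    # (e.g. with a manual NER recipe) and used for retraining.
    task = {
        "text": text,
        "meta": {
            "note": user_note,
            "flagged_at": datetime.utcnow().isoformat(),
        },
    }
    with open(FLAGGED_FILE, "a", encoding="utf8") as f:
        f.write(json.dumps(task) + "\n")
```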

Hi,
Well, that's another point of view. Thanks for explaining it.

So here is my problem: I have documents whose heading is a city name. At times, the model doesn't recognize the city name correctly.
So I thought that if a user could quickly annotate the heading as GPE, it might work correctly. But anyway, I'll think about a different approach.

Thanks