Hi! If you're annotating data for different tasks, you typically want to do this in separate instances so each one can keep its own state in memory, load a model if needed, and save each annotation type to a separate dataset you can run training experiments with. We'd also recommend running longer annotation sessions rather than annotating a single example at a time, which often isn't that useful.
So a better approach might be to include a feature that flags an example for annotation, and then periodically annotate the flagged examples in batches.
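For the flagging part, one option is Prodigy's built-in flag icon, which you can enable by setting `"show_flag": true` in your prodigy.json – flagged tasks are then saved with a `"flag": true` property. Here's a minimal sketch of pulling those examples back out of a dataset later (the dataset name is a placeholder):

```python
from prodigy.components.db import connect

db = connect()  # connects using the database settings in your prodigy.json
examples = db.get_dataset("my_dataset")  # placeholder dataset name
# tasks flagged in the UI carry a "flag": true property
flagged = [eg for eg in examples if eg.get("flag")]
```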
If you want to implement this, you could have a loader that periodically queries an external source, like an API or the database containing your flagged examples. The "Loaders and Input Data" section of the Prodigy docs shows some basic examples of loading from a file path or a custom source – only that in your case, you'd make a request to your database or similar and wrap it in a `while True` loop so it keeps polling until new data is available. New examples will then be queued up when you refresh the browser.
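Here's a minimal sketch of what that could look like, assuming a hypothetical HTTP endpoint that returns the flagged examples as JSON records with `id` and `text` fields – the endpoint URL, field names, recipe name and polling interval are all placeholders:

```python
import time

import requests  # assumption: your flagged examples are served over HTTP
import prodigy


def flagged_stream(api_url):
    """Infinite generator that polls an external source for new examples."""
    seen = set()  # track IDs we've already yielded so tasks aren't repeated
    while True:
        # hypothetical endpoint returning [{"id": ..., "text": ...}, ...]
        for record in requests.get(api_url).json():
            if record["id"] not in seen:
                seen.add(record["id"])
                yield {"text": record["text"], "meta": {"id": record["id"]}}
        time.sleep(10)  # pause before polling the source again


@prodigy.recipe(
    "annotate-flagged",
    dataset=("Dataset to save annotations to", "positional", None, str),
    api_url=("URL serving the flagged examples", "positional", None, str),
)
def annotate_flagged(dataset, api_url):
    # the stream loops forever, so new examples show up on browser refresh
    return {"dataset": dataset, "stream": flagged_stream(api_url), "view_id": "text"}
```

You'd then start it like any other custom recipe, e.g. `prodigy annotate-flagged my_dataset http://localhost:8000/flagged -F recipe.py`.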