Hi all, and thanks for your continued work on this great tool.
One of our annotators just informed me that she has annotated a few hundred
mark tasks incorrectly -- she was rejecting and accepting based on faulty domain knowledge.
What is the best practice for correcting this?
My thought was to:

1. Stop the Prodigy server we currently have running,
2. `db-out` the dataset,
3. load the resulting JSONL file locally,
4. remove the "answer" key from the rows I know were annotated incorrectly (I have a way to identify these rows),
5. export the file back to JSONL, and
6. start a new Prodigy instance with a new Prodigy dataset.

I'm afraid this approach will result in the annotator having to re-annotate every row in the dataset, as I'm not entirely sure how the "memorize" functionality works.
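In case it helps clarify what I mean by step 4, here's a minimal sketch of the filtering step, assuming the exported rows are loaded as dicts and that I can flag bad rows with some predicate (the `flagged` field below is just a stand-in for my actual identification logic):

```python
import json

def reset_incorrect_answers(rows, is_incorrect):
    """Return a copy of the exported rows with the "answer" key
    removed from any row the predicate flags as bad."""
    cleaned = []
    for row in rows:
        row = dict(row)  # don't mutate the original export
        if is_incorrect(row):
            row.pop("answer", None)  # drop the annotation decision only
        cleaned.append(row)
    return cleaned

# Stand-in predicate: pretend rows carrying a "flagged" field are the
# ones annotated with faulty domain knowledge.
def is_incorrect(row):
    return row.get("flagged", False)

# Tiny in-memory example standing in for the db-out JSONL file.
exported = [
    json.dumps({"text": "ok task", "answer": "accept"}),
    json.dumps({"text": "bad task", "answer": "reject", "flagged": True}),
]
rows = [json.loads(line) for line in exported]
cleaned = reset_incorrect_answers(rows, is_incorrect)
# cleaned[0] keeps its answer; cleaned[1] no longer has one.
```

The idea is that rows without an "answer" key would be served again for annotation, while the correctly annotated ones keep their decisions.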
What is the preferred explosion.ai method for approaching this problem?