It appears that using `--auto-accept` with the Prodigy `review` recipe on a binary text classification dataset unexpectedly sets the `answer` field of the reviewed dataset to `accept` even when all sessions agree that the label should be `reject`. Steps to reproduce:
- Create a simple binary single-label classification task and annotate it with `prodigy textcat.manual` using a single label.
- Annotate with multiple user sessions, making sure the sessions disagree on some of the accept/reject decisions. Ensure that there is at least one item that all user sessions reject.
- Use `prodigy review` with the `--auto-accept` flag and resolve the conflicts.
- Run `prodigy db-out` on the reviewed dataset.
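For reference, the steps above look roughly like this on the command line (a sketch only; the dataset names, input file, and label are placeholders, not from my actual setup):

```shell
# 1. Annotate a single binary label, collecting answers from multiple sessions
prodigy textcat.manual binary_data ./texts.jsonl --label MY_LABEL

# 2. Review the multi-session annotations with auto-accept enabled
#    (this is where agreed-on rejects come back as accepts)
prodigy review binary_reviewed binary_data --auto-accept

# 3. Export the reviewed dataset and inspect the "answer" fields
prodigy db-out binary_reviewed > reviewed.jsonl
```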
The result is that all annotations where the reviewers agreed the annotation should be `reject` are marked as `accept` in the resulting `.jsonl` file.
I can see why this may be happening, given that accept and reject have slightly different meanings for binary classification tasks. I can also appreciate that I could easily work around this by creating my own `filter_review_stream` function as indicated in this post.
But since `--auto-accept` was added as a feature and showed some unexpected behavior for my task, I figured I should point it out. Perhaps a quick documentation update could help.
Again, thanks for Prodigy and spaCy in general!