Just wanted to ask if there is a reason behind the default binary interface for active learning recipes like ner.teach?
I assume this is to increase speed, but I wonder whether in some cases it's faster to be able to correct the annotation while keeping the sorting that comes from the model in the loop.
This is motivated by a test annotating ORG entities in a news dataset, where ORG entities are fairly common. Manual annotation gives me predictable performance gains as the data size increases, whereas active learning with en_core_web_md struggles over the first hundred examples: I have to reject a lot of recommendations and keep only the ones the model already gets right. I still feel that sorting the examples with a model in the loop provides benefits, but I think what might work better in this case is to actively correct the labels instead of accepting / rejecting them.
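To make the "sorting with a model in the loop" idea concrete, here is a toy sketch of uncertainty-based sorting. This is not Prodigy's actual implementation; the scoring function and example scores are made up for illustration, and a real setup would use the NER model's own confidences.

```python
# Toy sketch: surface the examples the model is least sure about first,
# similar in spirit to a model-in-the-loop sorter. Scores are illustrative.

def uncertainty(score: float) -> float:
    """Distance from maximal uncertainty: 0.0 means the model is
    completely unsure (score == 0.5), 0.5 means fully confident."""
    return abs(score - 0.5)

def prefer_uncertain(examples):
    """Sort (text, score) pairs so the most uncertain predictions come
    first -- these are the ones most worth correcting."""
    return sorted(examples, key=lambda ex: uncertainty(ex[1]))

examples = [
    ("Apple opened a new office.", 0.95),   # confident ORG prediction
    ("Reuters reported on Friday.", 0.55),  # borderline prediction
    ("The board met in June.", 0.10),       # confident rejection
]
for text, score in prefer_uncertain(examples):
    print(f"{score:.2f}  {text}")
```

With a correction interface on top, you would still see the borderline examples first, but could fix the spans instead of only rejecting them.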
I wonder whether there is a best practice I might be missing here, which would suggest using active learning only once the model performs above a reasonable threshold, so that most of the binary suggestions make sense.
A major design philosophy of Prodigy is to provide smart defaults while enabling extensibility for developers, since the best solutions are often custom. Custom recipes are the way to implement customized tasks.
Thanks for your response. I understand that in most cases binary annotation reduces the time needed to label and reach a certain level of performance, which makes sense. It's just that in a toy example I was trying, it didn't seem to work as expected, hence my question.
I did in fact experiment with correcting the annotations versus the binary process, and the problem persists, so it's probably unrelated. I will open a new thread to continue the discussion.