Hi, thanks for the report
I'm not 100% sure why this would matter (maybe something related to existing hashes?), but could you try again with a new, blank dataset? I was able to make it behave weirdly with one of my messy test datasets, but with a fresh dataset and textcat.manual, I can't reproduce the problem.
As mentioned on the other thread, the only scenario where a glitch could still happen is if the annotator holds down the answer key – but that's also fairly uncommon in practice.
Btw, just to clarify: you shouldn't be using a forced stream order with active learning recipes like textcat.teach or ner.teach, or with any recipe whose stream continuously responds to outside state. That's currently not supported.