Hi Prodigy team,
First, I'd like to say that I love this tool.
I have a question about one of the new configuration settings, "force_stream_order". If force_stream_order is set to true, is the setting ignored when using ner.teach or textcat.teach? Is it possible at all to combine the power of a model in the loop with force_stream_order? I would love to have the best of both worlds: more efficient annotation and all annotators seeing the same examples. However, it seems that ner.teach and textcat.teach only serve up examples with a score close to 0.50 confidence when prefer_uncertain() is used.
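For context, here's a rough sketch of my mental model of how the teach recipes pick examples (simplified; fake_scored_stream and its scores are made up for illustration, whereas the real recipes get scores from the model in the loop):

```python
# Rough sketch of my understanding of ner.teach / textcat.teach
# (fake_scored_stream and its scores are invented for illustration;
# the real recipes score candidates with the model in the loop).
from prodigy.components.sorters import prefer_uncertain

def fake_scored_stream():
    # Stream of (score, example) tuples, like a teach recipe produces
    examples = [
        (0.05, {"text": "model is confident this is NOT a match"}),
        (0.51, {"text": "model is unsure about this one"}),
        (0.97, {"text": "model is confident this IS a match"}),
    ]
    for score, eg in examples:
        yield score, eg

# prefer_uncertain favours examples scored near 0.5, so the very
# low / very high confidence examples may never reach the annotator
for eg in prefer_uncertain(fake_scored_stream()):
    print(eg["text"])
```

If that picture is roughly right, it's hard to see how the filtered stream could also be served to every annotator in the same fixed order.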
The documentation says:
force_stream_order
NEW: 1.9 Always send out tasks in the same order and re-send them until they’re answered, even if the app is refreshed in the browser.
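For reference, this is how I'm enabling it in my prodigy.json (just the relevant setting):

```json
{
  "force_stream_order": true
}
```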
If force_stream_order were combined with textcat.teach or ner.teach, wouldn't the recipe skip the examples with a low or high score, even though they are part of the forced stream order, no matter how many times they are re-sent?
Thank you for the help, as always.