Practical use of rejected textcat.teach annotations for downstream tasks

Hello,

I am concerned about how useful the annotations exported by textcat.teach are for a multi-class classification task.

Here is my problem:

Each time the model suggests the wrong label, I can only reject it; I cannot provide the correct label instead (unlike in textcat.manual, where I always just select the correct label). When exporting, those rejected annotations are useless for downstream tasks: all I get back is the wrong suggested label and my "reject" decision. If that happens in 90% of the cases, it means I can throw away 90% of my annotations.
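For illustration, a rejected example in the exported JSONL (via db-out) looks roughly like this - fields abbreviated, label made up:

```json
{"text": "Some input text ...", "label": "LABEL_A", "answer": "reject"}
```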

This just doesn't feel right. I wonder if I am simply very unlucky here, i.e. whether 90% rejections is not the expected case. To me, it sounds like I cannot rely on textcat.teach and instead need to go back to plain textcat.manual annotation to maximize my annotation productivity.

Before I bury textcat.teach as a "nice try", please let me know if there is something I am missing here.

Best,
Paul

Hi @dedupedude,

One thing to keep in mind is that the "reject" answers are also a useful source of information for training the model. Ideally, we should be training on both positive and negative examples. That's actually the idea behind the binary accept/reject textcat.teach workflow: to provide positive and negative samples of the most challenging examples. That said, the proportion of positive to negative should be more like 50/50, not 90/10 as in your case.
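Prodigy's train command handles this for you, but to make the idea concrete, here is a rough sketch of how the binary answers map to a training signal in spaCy's textcat format (the file name is hypothetical):

```python
import json

# Hypothetical export: prodigy db-out my_dataset > annotations.jsonl
train_data = []
with open("annotations.jsonl", encoding="utf8") as f:
    for line in f:
        eg = json.loads(line)
        if eg.get("answer") == "ignore":
            continue
        # "accept" -> positive evidence for the suggested label,
        # "reject" -> negative evidence (target score 0.0 for that label)
        score = 1.0 if eg["answer"] == "accept" else 0.0
        train_data.append((eg["text"], {"cats": {eg["label"]: score}}))
```

So the rejected answers aren't thrown away - they tell the model which labels do not apply to a given text.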
To get better predictions (and thus more positive examples) from the start, you could begin with a stronger baseline, i.e. do some manual annotations with textcat.manual and train a model that you then use with textcat.teach to get more labels faster. Alternatively, if you can come up with keywords that are likely triggers for a given label, you could implement them as patterns and use those with textcat.teach to guide the model in the early stages.
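A patterns file is just JSONL with one pattern per line; both token patterns and exact string matches work. For example (labels and keywords made up):

```json
{"label": "SPORTS", "pattern": [{"lower": "football"}]}
{"label": "SPORTS", "pattern": "world cup"}
{"label": "POLITICS", "pattern": [{"lower": "election"}]}
```

You would then pass it in with something like this (dataset, model and file names are placeholders):

```
prodigy textcat.teach my_dataset en_core_web_sm ./data.jsonl --label SPORTS,POLITICS --patterns ./patterns.jsonl
```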

And just for completeness: textcat.correct will let you manually edit the model's predictions, but it won't do any sampling for you; it just goes through the examples one by one. The advantage of uncertainty sampling is clearest for highly imbalanced datasets and sparsely distributed labels - not sure if that's the case for you.
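If you want to try it, the invocation looks something like this (names are placeholders; note that textcat.correct expects a pipeline with a trained textcat component):

```
prodigy textcat.correct my_dataset ./my_trained_model ./data.jsonl --label SPORTS,POLITICS
```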

Thanks for your response. Makes sense.

Sounds like the 90/10 split was bad luck (or poor strategy).

I will check textcat.correct.
