TextCat training results on a per-label basis

@ines,
Just a question about text categorization training using textcat.batch-train. If I have multiple text categories that I have trained the model on, is there any way I can get a breakup of the training accuracy at the end of using the recipe? It currently prints out the stats for all the labels together.

The reason I ask is that I have a simple supervised classifier (built with NLTK) to use as a baseline to compare the spaCy/Prodigy models against, and it prints out the stats on a per-label basis. I was just wondering if I could do the same here, to have a comparison showing that the CNN does better (which it definitely should, and does in this case).

Thanks in advance.

Unfortunately the built-in evaluate method doesn’t currently offer that breakdown, no :(. I hope it won’t be too difficult for you to implement it yourself. You should just need to run the model over the evaluation data and compare the predictions against the gold-standard labels, keeping separate counts per label.
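
For illustration, here's a minimal sketch of what that could look like. It assumes you have a loaded `nlp` object with a trained text classifier, and an `eval_data` list of `(text, gold_labels)` pairs where `gold_labels` is the set of correct labels for each text. The model path and the 0.5 decision threshold are placeholders, not anything built into Prodigy, so adjust them to your setup.

```python
from collections import defaultdict

import spacy

# Hypothetical path to your trained textcat model
nlp = spacy.load("/path/to/your/textcat-model")


def per_label_scores(nlp, eval_data, threshold=0.5):
    """Compute precision, recall and F-score for each label separately."""
    tp = defaultdict(int)  # true positives per label
    fp = defaultdict(int)  # false positives per label
    fn = defaultdict(int)  # false negatives per label
    for text, gold_labels in eval_data:
        doc = nlp(text)
        for label, score in doc.cats.items():
            predicted = score >= threshold
            if predicted and label in gold_labels:
                tp[label] += 1
            elif predicted and label not in gold_labels:
                fp[label] += 1
            elif not predicted and label in gold_labels:
                fn[label] += 1
    results = {}
    for label in set(tp) | set(fp) | set(fn):
        precision = tp[label] / (tp[label] + fp[label]) if (tp[label] + fp[label]) else 0.0
        recall = tp[label] / (tp[label] + fn[label]) if (tp[label] + fn[label]) else 0.0
        f_score = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
        results[label] = {"precision": precision, "recall": recall, "f_score": f_score}
    return results


# eval_data is assumed to be something like:
# [("some text", {"LABEL_A"}), ("other text", {"LABEL_B", "LABEL_C"}), ...]
# scores = per_label_scores(nlp, eval_data)
# for label, metrics in scores.items():
#     print(label, metrics)
```

That should give you numbers you can line up directly against the per-label output of your NLTK baseline.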