Add metrics like accuracy, recall, precision etc. to output

Are there existing tools in prodigy to easily output other measures than ROC AUC for textcat?

Sorry, I think I missed this question before. The built-in training currently outputs the main results from spaCy's Scorer, and for text classification, what gets reported depends on the type of task. Also see here:

- Binary exclusive: F-score on the positive label
- 3+ exclusive labels: macro-averaged F-score
- Multilabel: macro-averaged ROC AUC (-1 if undefined)

If you look at the source of the `train` recipe, the `scores` variable in the non-binary workflow holds the `Scorer` object. You can also add your own metrics here, but you'd have to modify the recipe script, or create your own recipe.
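If you do end up writing your own recipe, metrics like accuracy, precision and recall are simple to compute from the gold vs. predicted labels. A minimal sketch in plain Python (this doesn't use Prodigy's internals, and the example labels are made up):

```python
def binary_metrics(gold, pred):
    """Compute accuracy, precision, recall and F-score for binary decisions.

    gold and pred are parallel lists of booleans: True = positive label.
    """
    tp = sum(1 for g, p in zip(gold, pred) if g and p)
    fp = sum(1 for g, p in zip(gold, pred) if not g and p)
    fn = sum(1 for g, p in zip(gold, pred) if g and not p)
    tn = sum(1 for g, p in zip(gold, pred) if not g and not p)
    accuracy = (tp + tn) / len(gold) if gold else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f_score": f_score}

# Hypothetical gold annotations vs. model predictions
gold = [True, True, False, False, True]
pred = [True, False, False, True, True]
print(binary_metrics(gold, pred))
# accuracy 0.6, precision/recall/F-score all 2/3 here
```

You could call something like this at the end of a custom training loop and merge the results into the printed score table.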
