Hello, I see that the new (version 1.9) `train` recipe does not output an evaluation file, or anything like one, as older versions of Prodigy did. How would you recommend calculating accuracy for the model, for example?
Thanks!
After you train, Prodigy will output the detailed accuracy stats. If you want a stable evaluation, you typically want to use a dedicated evaluation set and pass it in via the `--eval-id` argument, instead of just holding back a random X% of examples every time. This also makes it easier to interpret and compare results, since you're always evaluating against the same data.
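For example, a call could look roughly like this. The dataset names (`ner_train`, `ner_eval`) and the output path are placeholders, and the exact positional arguments may differ in your version, so check `prodigy train --help`:

```bash
# Train an NER model from the "ner_train" dataset and always evaluate
# against the dedicated "ner_eval" dataset (both names are placeholders).
prodigy train ner ner_train en_core_web_sm --eval-id ner_eval --output ./trained_model
```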
If you want to train and evaluate with spaCy directly, you can also use the new `data-to-spacy` recipe to create a training data file in spaCy's format. It also supports splitting the data into a training and evaluation set.
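A rough sketch of that workflow, assuming spaCy v2 and placeholder dataset and file names; double-check `prodigy data-to-spacy --help` for the exact arguments and options available in your version:

```bash
# Export annotations to spaCy's training format, holding out a share of
# the examples as an evaluation set ("ner_train" is a placeholder name).
prodigy data-to-spacy ./train.json ./eval.json --lang en --ner ner_train --eval-split 0.2

# Train and evaluate directly with spaCy v2's CLI, using the exported files.
python -m spacy train en ./output ./train.json ./eval.json --pipeline ner
```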