ner.eval does not print the detailed stats after annotating the evaluation set

I ran ner.eval on my data and finished annotating all of the evaluation data in the web app, then closed the browser tab and pressed Ctrl+C in the Anaconda prompt. However, it only shows the output below, without detailed stats.

(base) C:\Users\i25112\Documents\Prodigy>python -m prodigy ner.eval barrel_eval model_text_barrel_1 barrel_eval.jsonl --label BARREL
Using 1 labels: BARREL

? Starting the web server at http://localhost:8080
Open the app in your browser and start annotating!

Saved 121 annotations to database SQLite
Dataset: barrel_eval
Session ID: 2019-01-14_11-52-18

Sorry if this was unclear or documented badly – I just checked, and the ner.eval recipe doesn't actually print any stats at the end. It just helps you build up an evaluation set, which you can then pass into the ner.batch-train command, for example via --eval-id name_of_your_eval_set.
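To illustrate, a training run that uses your annotated set as a dedicated evaluation set could look something like this. Note that barrel_train is a hypothetical name for a separate training dataset – the dataset and model names from your command above are reused here for the rest:

```shell
# Train from the "barrel_train" dataset (hypothetical name) and
# evaluate against the "barrel_eval" set you just annotated
python -m prodigy ner.batch-train barrel_train model_text_barrel_1 \
    --eval-id barrel_eval --label BARREL
```

With --eval-id set, the accuracy reported after training is computed against that evaluation dataset instead of a random held-out split.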

If you want to perform a live evaluation with results at the end, check out the ner.eval-ab recipe! It performs an A/B evaluation and asks you which output is better / which one you prefer. This gives you a quick estimate of which model performs better and produces more suitable results.
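For example, an A/B comparison of your trained model against a baseline might be invoked roughly like this – barrel_ab is a hypothetical dataset name for storing the A/B decisions, and en_core_web_sm just stands in for whatever second model you want to compare against:

```shell
# Compare two models' predictions on the same input stream;
# you'll be asked which output you prefer for each example
python -m prodigy ner.eval-ab barrel_ab model_text_barrel_1 en_core_web_sm \
    barrel_eval.jsonl --label BARREL
```

At the end of the session, the recipe reports how often each model's output was preferred.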

I just double-checked the Prodigy recipes website. What it states there is misleading to users. I'll try the ner.batch-train command with --eval-id. Thanks for your reply.

Sorry, this is a mistake in the docs then – thanks for pointing this out. Will fix :+1: