NER evaluation probability threshold

I'm sorry if this has already been asked, but after running prodigy train ner ..., the evaluation metrics appear. What I would like to know is: above which score are predictions considered valid during this evaluation?
Thank you.

Hi, I'm not 100% sure I understand your question! Prodigy's training command is mostly a thin wrapper around spaCy's API, and what you see are the results returned by nlp.evaluate. For named entities, that's the precision, recall and F-score.
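To make that concrete, here's a minimal sketch of what nlp.evaluate reports for a NER pipeline. This is an illustrative toy setup (a blank English pipeline trained on a single made-up example), not how prodigy train works internally; the metric keys ents_p, ents_r and ents_f are the entity-level precision, recall and F-score:

```python
# Sketch: what nlp.evaluate returns for NER (spaCy v3).
# Toy example only - a blank pipeline trained on one sentence.
import spacy
from spacy.training import Example

nlp = spacy.blank("en")
ner = nlp.add_pipe("ner")
ner.add_label("ORG")

# One hypothetical training example with a single ORG entity
train_text = "Apple is hiring engineers."
train_annots = {"entities": [(0, 5, "ORG")]}

optimizer = nlp.initialize()
for _ in range(20):
    example = Example.from_dict(nlp.make_doc(train_text), train_annots)
    nlp.update([example], sgd=optimizer)

# Evaluate: spaCy compares predicted entity spans against the gold spans
eval_example = Example.from_dict(nlp.make_doc(train_text), train_annots)
scores = nlp.evaluate([eval_example])
print(scores["ents_p"], scores["ents_r"], scores["ents_f"])
```

Note that the evaluation compares the single analysis the model outputs against the gold annotations; it isn't filtering a list of scored candidates by some probability cut-off.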

Thank you for the answer!
Yes, but at which score is a prediction counted as an answer during evaluation?
I mean, suppose the model predicts an entity with a score of 0.4: is it taken together with the other predictions and evaluated? Or does the score need to be higher? Or can it be lower?