Model management

Hello

I built a pipeline to retrain my models using new input data.

Here is an example of improving the French model on ['ORG', 'PRODUCT', 'PER'].

When I train, I get this output:

Loaded model fr_core_news_sm
Using 20% of accept/reject examples (292) for evaluation
Using 100% of remaining examples (1172) for training
Dropout: 0.2  Batch size: 16  Iterations: 10  

BEFORE     0.379     
Correct    153
Incorrect  251
Entities   885       
Unknown    279       

#          LOSS       RIGHT      WRONG      ENTS       SKIP       ACCURACY
01         56.563     275        129        463        0          0.681
02         53.317     296        108        428        0          0.733
03         58.704     318        86         437        0          0.787
04         53.754     327        77         468        0          0.809
05         56.246     334        70         488        0          0.827
06         54.256     339        65         539        0          0.839
07         58.559     340        64         541        0          0.842
08         74.089     338        66         672        0          0.837
09         62.757     337        67         729        0          0.834
10         63.274     345        59         1305       0          0.854

Correct    345
Incorrect  59
Baseline   0.379     
Accuracy   0.854     

Model: /Users/iero/models/temporary
Training data: /Users/iero/models/temporary/training.jsonl
Evaluation data: /Users/iero/models/temporary/evaluation.jsonl

First question: I was looking in the /Users/iero/models/temporary directory for the above information (i.e. the accuracy). Do you keep those numbers somewhere? I want to use them to run an A/B evaluation and see whether my model has improved since the last training run.

Second question: I use the fr_core_news_sm model.
In the meta.json file, I see a reference to core_news_sm. Is that normal?

{
  "lang":"fr",
  "pipeline":[
    "sbd",
    "tagger",
    "parser",
    "ner"
  ],
  "name":"core_news_sm",
  "license":"LGPL",
  "author":"Explosion AI",
  "url":"https://explosion.ai",
  "notes":"Because the model is trained on Wikipedia, it may perform inconsistently on many genres, such as social media text. The NER accuracy refers to the \"silver standard\" annotations in the WikiNER corpus. Accuracy on these annotations tends to be higher than correct human annotations.",
  "vectors":{
    "width":0,
    "vectors":0,
    "keys":0,
    "name":null
  },
  "sources":[
    "Sequoia Corpus (UD)",
    "Wikipedia"
  ],
  "version":"2.0.0",
  "spacy_version":">=2.0.0a19",
  "description":"French multi-task CNN trained on the French Sequoia (Universal Dependencies) and WikiNER corpus. Assigns context-specific token vectors, POS tags, dependency parse and named entities. Supports identification of PER, LOC, ORG and MISC entities.",
  "parent_package":"spacy",
  "email":"contact@explosion.ai",
  "accuracy":{
    "token_acc":100.0,
    "ents_p":82.053766045,
    "ents_r":81.3738441215,
    "uas":87.1639784946,
    "tags_acc":94.52,
    "ents_f":81.7123907145,
    "las":84.4310035842
  }
}

Third question: if the answer to the first question is 'no', can I update meta.json to keep this training information?

Thanks

I just checked, and it seems like Prodigy's training commands currently don't update the accuracy data in the meta.json. I think we initially decided against this because the accuracy number you get back from Prodigy usually only refers to one specific experiment and isn't always perfectly representative. But I guess it'd make sense to add it somewhere for reference.

If you check out the source of ner.batch-train (or the data returned by the ner_batch_train function, if you're calling it from Python), you'll see that it returns the best_stats, a dictionary of the results. You can then do something with that – for example, save it to a JSON file in the model directory. You could also write to model.nlp.meta["accuracy"] before the model is exported.
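For instance, here's a minimal sketch of that approach. The import path, the argument names and the dataset name are assumptions based on the CLI options shown in the output above, so check them against the recipe source shipped with your Prodigy version:

import json
from pathlib import Path

# Assumed import path for the recipe function behind ner.batch-train –
# verify against the recipe source for your Prodigy version.
from prodigy.recipes.ner import batch_train

model_dir = Path("/Users/iero/models/temporary")

# Keyword arguments mirror the CLI options from the run above; the exact
# names (and the dataset name) are assumptions.
best_stats = batch_train(
    dataset="my_french_dataset",
    input_model="fr_core_news_sm",
    output_model=str(model_dir),
    label=["ORG", "PRODUCT", "PER"],
    n_iter=10,
    batch_size=16,
    dropout=0.2,
    eval_split=0.2,
)

# Keep the results next to the exported model for later reference.
with (model_dir / "training_stats.json").open("w", encoding="utf8") as f:
    json.dump(best_stats, f, indent=2)

Keeping one such file per run gives you the numbers you need for the A/B comparison between training runs you mentioned.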

Yes, that's expected – the model name consists of the language and the name (so [lang]_[name]): fr_core_news_sm is the language fr plus the name core_news_sm, which is why the meta.json only shows core_news_sm. The name can be anything you want – for spaCy's pre-trained models, we chose the [type]_[data]_[size] convention to communicate what exactly the model is and what it includes.

When you update an existing pre-trained model, the name stays the same – but you can edit it in the meta.json and then use the spacy package command to create a Python package from your model.
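For example, a quick sketch of that renaming step – the new name and version below are placeholders, pick whatever fits your own convention:

import json
from pathlib import Path

# Path from the training output above.
meta_path = Path("/Users/iero/models/temporary/meta.json")
meta = json.loads(meta_path.read_text(encoding="utf8"))

# Placeholder values – choose your own name and version.
meta["name"] = "core_news_custom"
meta["version"] = "2.0.1"

meta_path.write_text(json.dumps(meta, indent=2), encoding="utf8")

Packaging that directory with python -m spacy package (pointing it at an output directory of your choice) will then produce a package named fr_core_news_custom, following the [lang]_[name] logic above.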


Thanks @ines!

If I wrap ner.batch-train in a try/except, I just get an exception and I can't get the best stats back from the return value!

What happens if you actually call the function instead of trying to serve it?

Sorry I didn't think of this explanation earlier – this is also how Prodigy does it under the hood.


Perfect!

Thanks @ines!
