Confused about --base-model in train recipe with output from ner.teach

My use case is fine-tuning a model trained with spacy for NER, using the dataset from ner.teach.

I am using the train recipe to do so. The documentation describes --base-model as:

--base-model, -m str Optional spaCy pipeline to update or use for tokenization and sentence segmentation.

However, the output seems to be that the base model (with tok2vec and NER components, in my case) is fine-tuned.
I just want to clarify whether what I am doing is correct.
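For reference, the command I'm running looks roughly like this (the dataset and model names below are placeholders):

```bash
prodigy train ./output_model --ner ner_teach_dataset --base-model ./my_spacy_ner_model
```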
Thanks

Hi Than,

The ner.teach recipe updates the model as you annotate, with the goal of surfacing the most informative examples for you to annotate first. That means that if you provide a base model, it will indeed be fine-tuned in the loop. However, once you're done annotating, that updated model is not saved to disk. If you want a fine-tuned model on disk, you'll want to run the train command after collecting some annotations. This trains a model on your new data and saves it to disk so that you can re-use it elsewhere.
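As a rough sketch (dataset, model, and path names below are placeholders, and the --label values are just illustrative), the end-to-end workflow could look like this:

```bash
# Annotate with the model in the loop; the updates made here stay in memory only
prodigy ner.teach ner_teach_dataset ./my_spacy_ner_model ./examples.jsonl --label ORG,PRODUCT

# Afterwards, train on the collected annotations and save the result to disk
prodigy train ./output_model --ner ner_teach_dataset --base-model ./my_spacy_ner_model
```

The pipeline saved under ./output_model is the artifact you can then load with spacy.load and re-use elsewhere.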

Does this help? Feel free to ask more questions if not.