Model Training for NER

Yes! 🙂

Exactly! Also keep in mind that recipes that do support active learning don't just modify the model in place. (That would be bad – the model should always be properly batch trained afterwards.) The active learning only helps create better annotations.
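
As a rough sketch of that two-step workflow (assuming Prodigy v1.11+; the dataset, source file, and label names here are just placeholders):

```bash
# Step 1: annotate with a model in the loop. The model is only
# updated in memory to suggest better examples – nothing is saved.
prodigy ner.teach ner_dataset en_core_web_sm ./news.jsonl --label ORG

# Step 2: afterwards, batch train a new pipeline from all annotations.
prodigy train ./output_model --ner ner_dataset
```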

Yes, the model passed into the training command is the model you want to update.
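
For example, with the `train` recipe (again assuming v1.11+ syntax and placeholder names), the model given via `--base-model` is the pipeline whose weights get updated:

```bash
# Update the weights of an existing pipeline with your annotations
prodigy train ./output_model --ner ner_dataset --base-model ./previous_model
```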

If you want to annotate with a model in the loop, you ideally want to be using the model you previously updated, yes. For recipes that don't use any active learning it usually doesn't matter – they only really use the model for tokenization etc., so you could theoretically also pass in the untrained base model.
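
A sketch contrasting the two cases (recipe arguments are placeholders):

```bash
# With active learning: use the pipeline you trained last time,
# so its suggestions reflect your previous annotations.
prodigy ner.teach ner_dataset ./output_model ./news.jsonl --label ORG

# Without active learning: ner.manual mostly uses the model for
# tokenization, so the untrained base pipeline is fine here.
prodigy ner.manual ner_dataset en_core_web_sm ./news.jsonl --label ORG
```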

We usually recommend starting with the blank base model every time and then updating it with all of the annotations. This helps prevent unexpected interactions between the existing weights and the new examples, and it also makes your results easier to compare, because they were all produced by the same process, just with different amounts of data.
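
So each retraining run would look something like this (assuming v1.11+, where omitting `--base-model` and setting `--lang` starts from a blank pipeline):

```bash
# Retrain from scratch on the full dataset every time, rather than
# updating the previously trained pipeline incrementally.
prodigy train ./output_model --ner ner_dataset --lang en
```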