how to write a model.update() function



My team wants to load and re-train some Chinese models through Prodigy, e.g. an NER model and a text-classification model. Since spaCy doesn't provide any pre-trained Chinese models, we are trying to implement these recipes ourselves.

'update': model.update,  # update model with annotations

I am still confused about the input, output and main processing logic of this bound method, model.update. Is there a template that can help me write a correct one?

(Matthew Honnibal) #2

Really glad to help you do this. I do hope we’ll be able to add some Chinese models for spaCy soon too.

You should actually be able to use the built-in recipes for NER and textcat, even with Chinese. But to answer your question about the update() function: there’s some documentation in the PRODIGY_README.html file that you might want to look at. The signature of the function is very simple. Example:

examples = [{"text": "some text",
             "spans": [{"start": 0, "end": 4, "label": "DT"}],
             "answer": "accept"}]

def update(answered_examples):
    # Receives a minibatch of annotated dicts like the one above,
    # updates the model's weights, and returns the loss.
    loss = 0.0
    return loss


The update() function must take a minibatch of dict objects, where each dict should have a key "answer" whose value is one of "accept", "reject" or "ignore". For the NER recipe, each example should also have a key "spans", which should be a list of dicts. Each span dict should have the keys "start", "end" and "label", where "start" and "end" are character offsets and "label" is a string.
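To make the format concrete, here's how you'd pull the annotated span text out of such an example using the character offsets (a toy illustration, not Prodigy code):

```python
example = {
    "text": "some text",
    "spans": [{"start": 0, "end": 4, "label": "DT"}],
    "answer": "accept",
}

# The character offsets slice directly into the text
for span in example["spans"]:
    entity = example["text"][span["start"]:span["end"]]
    print(entity, span["label"])  # prints: some DT
```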

To make the update function work well, there are a few things to consider. First, in the NER update, you're not going to have complete annotations for the inputs. You might only have one entity for the sentence. You also need a way to learn from "reject" examples. If the answer is "reject", it's easy to calculate the gradient of the error for the class you got wrong, but for other classes you probably want to zero the gradient. I'm not sure what the neatest way to express this in TensorFlow or PyTorch would be. Personally, I wouldn't bother trying to express it as a loss; I would just calculate the gradient and pass that in.
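A minimal sketch of that masking idea in plain Python (the function name and the probability/one-hot setup are assumptions for illustration, not Prodigy internals):

```python
def masked_gradient(probs, label, answer):
    """Per-class gradient for one example, given the annotator's answer.

    probs:  the model's predicted probabilities for each class
    label:  the index of the annotated class
    answer: "accept", "reject" or "ignore"
    """
    n = len(probs)
    if answer == "ignore":
        return [0.0] * n  # no learning signal at all
    if answer == "reject":
        # We only know this one class is wrong: push its probability
        # towards 0 and zero the gradient for every other class.
        grad = [0.0] * n
        grad[label] = probs[label]
        return grad
    # "accept": the usual (p - target) gradient against a one-hot target
    return [p - (1.0 if i == label else 0.0) for i, p in enumerate(probs)]
```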


Many thanks for your help.
I am curious about "You should actually be able to use the built-in recipes for NER and textcat, even with Chinese".
Since Prodigy takes a spaCy model as the default NER model to handle the cold-start problem, it works well with English (German, Spanish, etc.) texts. However, if we use a spaCy model as the default one for Chinese NER, I am afraid that all text spans would get the same probability of 0. As a consequence, the prefer function will not recommend valuable questions.

(Ines Montani) #4

Yes, to get over the cold start problem, you’ll have to start off with examples of the entity first to give the model something to learn from. The ner.teach recipe supports passing in a JSONL file containing match patterns (like the patterns used by spaCy’s Matcher). Prodigy will then start showing you matches of those patterns in your texts. As you annotate those examples, the model is updated and will eventually start suggesting examples as well, based on the updated weights. We actually just recorded a video tutorial that shows this workflow for training a new entity type.
There’s also more information in this thread and this comment.

In this example, we use the terms.teach recipe to bootstrap a terminology list from word vectors and then convert it to a patterns file. But you could also generate the list of patterns manually – see the PRODIGY_README.html for an example of what's possible.
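For reference, a patterns file is just JSONL with one match pattern per line, and a quick way to write one by hand looks like this (the labels and terms below are made up for illustration):

```python
import json

# Token-based patterns follow spaCy's Matcher syntax; a plain string
# value means an exact string match.
patterns = [
    {"label": "ORG", "pattern": [{"lower": "alibaba"}]},
    {"label": "GPE", "pattern": "上海"},  # exact string match
]

with open("patterns.jsonl", "w", encoding="utf8") as f:
    for p in patterns:
        f.write(json.dumps(p, ensure_ascii=False) + "\n")
```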


Many thanks!
I have another question. We are trying to train a Chinese NER model through spaCy. Will Prodigy be able to load this model seamlessly?

(Ines Montani) #6

Yes! Any model you export from spaCy, e.g. via .to_disk(), can be loaded directly into Prodigy. The model you specify on the command line can either be a path to a data directory containing the exported model, or a model Python package created with spacy package.

If you’ve made modifications to spaCy – for example, to the Chinese language data or other parts of the library – those will have to be available to Prodigy as well. So you can either use a custom recipe, run Prodigy with a fork of spaCy containing your modifications, or create a Python package for your model and include your custom code in the model's


I just tested our spaCy Chinese NER model, and it works well with Prodigy.
Now I'm turning to training a relation extractor based on that Chinese NER model – any suggestions?
Again, thank you all.

(Ines Montani) #8

Cool, that’s nice to hear! :+1:

Here’s a thread with some ideas for how to use Prodigy for relationship and dependency annotation – this might be helpful to figure out the best interface to use: