Training NER does not make any progress

Hmm, yeah, by this I mean more like a separate GPU machine. Transformer models are usually large and may not be a good fit for a laptop (I even had trouble on a gaming laptop). If you can set one up through a cloud service (e.g., a virtual machine with a GPU on GCP/AWS/Azure), you should be able to train one.
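
Once you have a machine with a GPU available, you can point training at it with the --gpu-id flag. A minimal sketch (my_ner_dataset is a placeholder dataset name):

prodigy train ./output --ner my_ner_dataset --gpu-id 0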

I believe you should be able to do this via the Prodigy CLI itself. You can override the values of training.batcher.size.start and training.batcher.size.stop. Something like this:

prodigy train output ... --training.batcher.size.start=35 --training.batcher.size.stop=50
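
For context, those overrides map onto the [training.batcher.size] block of the underlying spaCy config. The default compounding batch-size schedule looks roughly like this (the values shown are spaCy's defaults, not recommendations):

[training.batcher.size]
@schedules = "compounding.v1"
start = 100
stop = 1000
compound = 1.001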

I highly suggest checking out this thread (Flag --batch-size not recognized by prodigy train - #2 by SofieVL) for more info on the config!