Since updating Prodigy to the latest version (which uses spaCy 2.1), ner.batch-train no longer uses multiple cores on the SageMaker notebook instance I set up for hyperparameter tuning. This is a real problem: models now take 7-8x longer to train, which considerably slows down my experimentation.
Is there a flag for the number of cores/threads that I'm missing, or something I should be doing differently at install time? My current dataset isn't really large enough to warrant GPU acceleration, so I've been using the CPU installation of spaCy.
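Or is it just a question of BLAS thread counts? I.e., would exporting the usual threading environment variables before launching the recipe be expected to help here? Something like the following, where the dataset and model names are placeholders for my own, and where I'm not sure these variables even apply to spaCy 2.1's matrix kernels:

```bash
# OMP/OPENBLAS/MKL_NUM_THREADS control threading in the common BLAS
# backends; 8 matches the vCPU count on my instance. Whether spaCy 2.1
# still routes its matrix math through these is exactly what I'm unsure of.
export OMP_NUM_THREADS=8
export OPENBLAS_NUM_THREADS=8
export MKL_NUM_THREADS=8
prodigy ner.batch-train my_dataset en_core_web_lg
```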
This may be related to https://github.com/explosion/spaCy/issues/3820; however, neither updating numpy nor installing spaCy with conda instead of pip changes the behavior.
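For anyone trying to diagnose this, here is roughly how I've been comparing the two environments, checking which BLAS numpy is linked against and the installed spaCy setup:

```bash
# Show which BLAS/LAPACK numpy was built against in the active
# environment; pip and conda builds often differ here (e.g. OpenBLAS
# vs. MKL), which seemed relevant given the linked issue.
python -c "import numpy; print(numpy.__version__); numpy.__config__.show()"
python -m spacy info
```

Happy to post the output from the pip and conda environments if that would help.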