Hi,
I was wondering whether I could get some advice on saving memory/storage by reusing model parts across multiple related text classifiers.
I recently trained a set of models in the same domain (classifying unusual employment statuses from job titles), each using en_core_web_sm. These models are intended to filter out false positives from a sub-string extraction step. I want to keep the models separate, rather than turn them into a single multi-label classifier, as each has been trained on a narrow, pre-filtered selection of input data points and performs badly when that selection is widened.
However, it would be great if the four output models I've trained could reuse their common pieces (the parser, the tagger, etc.) through the same spaCy loader, as this would reduce memory and storage demands by roughly 4x. I was previously advised that this could be done by using the first output model as the spacy_model argument when training the subsequent models, as below, but this doesn't seem to have made the model directories any smaller.
python -m prodigy train textcat grads_final en_core_web_sm --output grad_model --eval-split 0.15
python -m prodigy train textcat intern_final grad_model --output intern_model --eval-split 0.15
python -m prodigy train textcat contractor_final grad_model --output contractor_model --eval-split 0.15
python -m prodigy train textcat trainee_final grad_model --output trainee_model --eval-split 0.15
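For context, this is roughly the kind of sharing I'm imagining at inference time, although I'm not sure whether applying one model's textcat to a Doc produced by another pipeline is actually supported. The directory names are just my output paths from the commands above, and this is only a sketch of the goal, not something I expect to be the right approach:

import spacy

# Load the shared components (tokenizer, tagger, parser) once.
base_nlp = spacy.load("en_core_web_sm")

# Load each trained model with everything except its textcat disabled.
classifier_dirs = ["grad_model", "intern_model", "contractor_model", "trainee_model"]
classifiers = {
    name: spacy.load(name, disable=["tagger", "parser", "ner"])
    for name in classifier_dirs
}

def score(text):
    # Run the shared pipeline once, then apply each textcat to the same Doc.
    doc = base_nlp(text)
    return {
        name: nlp.get_pipe("textcat")(doc).cats
        for name, nlp in classifiers.items()
    }

print(score("graduate software engineering intern"))

Even if something along these lines helps with run-time memory, I'd still be interested in whether the on-disk duplication of the tagger/parser weights across the four output directories can be avoided.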
Any suggestions appreciated!