NER training - high RAM usage - memory leak?

Hello,

I guess it’s more a spaCy issue, but I haven’t tried training directly with spaCy, so I’m posting here :). Anyway, I’m using the ner.batch-train recipe with the following command:

prodigy ner.batch-train ner-3 en_core_web_md --output /tmp/ner-3.model -es 0.3 -n 15 -b 32

I’m seeing HUGE RAM consumption, see the screenshot below (it’s even at 15 GB right now :open_mouth:)… Luckily macOS compresses most of it, which is why I didn’t notice at first. But it’s quite problematic…
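In case it’s useful for reproducing, here’s a minimal sketch of how to watch the process’s resident memory from another terminal using psutil (the PID argument and polling interval are placeholders, not anything Prodigy-specific):

```python
# Minimal sketch: poll the resident set size of a running process.
# Pass the Prodigy process's PID as the first argument.
import sys
import time

import psutil

def watch_rss(pid, interval=5.0):
    proc = psutil.Process(pid)
    while proc.is_running():
        rss_gb = proc.memory_info().rss / 1024 ** 3
        print(f"RSS: {rss_gb:.2f} GB")
        time.sleep(interval)

if __name__ == "__main__":
    watch_rss(int(sys.argv[1]))
```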

By the way, I’m using the latest spaCy, 2.0.5.

Hm! Sorry about this – thanks for the report.

Edit: What version of Thinc are you using?

thinc==6.10.2

Memory use in spaCy parser beam training looks stable, so either the memory leak is within Prodigy, or it’s something to do with serialising the vectors.
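If anyone wants to test the serialisation theory, a rough sketch: serialise the loaded pipeline (vectors included) in a loop and check whether resident memory keeps growing. The loop count here is arbitrary:

```python
# Rough sketch: repeatedly serialise the pipeline and watch RSS.
# If memory grows per round, vector serialisation is a suspect.
import psutil
import spacy

nlp = spacy.load("en_core_web_md")
proc = psutil.Process()
for i in range(10):
    data = nlp.to_bytes()  # includes the vocab and its vectors
    rss_gb = proc.memory_info().rss / 1024 ** 3
    print(f"round {i}: serialised {len(data) / 1024 ** 2:.1f} MB, rss={rss_gb:.2f} GB")
```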

Just to confirm: I switched to training directly with spaCy and memory usage is fine. The leak must be in Prodigy then. Maybe it’s related to the part of training that includes negative examples…
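For anyone who wants to reproduce the comparison, here’s a minimal sketch of a direct spaCy 2.x training loop with per-epoch memory logging, loosely following the spaCy training examples. TRAIN_DATA and the hyperparameters are just placeholders:

```python
# Sketch of a direct spaCy 2.x NER update loop with RSS logging per epoch.
import random

import psutil
import spacy

TRAIN_DATA = [
    ("Uber blew through $1 million a week", {"entities": [(0, 4, "ORG")]}),
    # ... more (text, annotations) pairs
]

nlp = spacy.load("en_core_web_md")
other_pipes = [p for p in nlp.pipe_names if p != "ner"]
proc = psutil.Process()
with nlp.disable_pipes(*other_pipes):  # only update the NER weights
    optimizer = nlp.begin_training()
    for epoch in range(15):
        random.shuffle(TRAIN_DATA)
        losses = {}
        for text, annotations in TRAIN_DATA:
            nlp.update([text], [annotations], sgd=optimizer, drop=0.2, losses=losses)
        rss_gb = proc.memory_info().rss / 1024 ** 3
        print(f"epoch {epoch}: ner loss={losses['ner']:.2f}, rss={rss_gb:.2f} GB")
```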

Actually, I’m wondering: is using negative examples something you added only for bootstrapping models in Prodigy (with few examples), or is it something you would recommend in general? I couldn’t find any documentation on this training scenario in spaCy. I guess the idea is to use negative examples to constrain the beam search, but I’m not sure…
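To make sure we’re talking about the same thing, here’s a simplified sketch of what I mean by negative examples, an accepted vs. a rejected task as they come out of the dataset (the texts and spans are made up, and real exports carry more fields than this):

```python
# Simplified Prodigy NER tasks: the "answer" field distinguishes
# positive (accept) from negative (reject) examples.
accepted = {
    "text": "Apple hired a new CFO",
    "spans": [{"start": 0, "end": 5, "label": "ORG"}],
    "answer": "accept",  # span is a correct ORG: positive example
}
rejected = {
    "text": "I ate an apple",
    "spans": [{"start": 9, "end": 14, "label": "ORG"}],
    "answer": "reject",  # span is not an ORG: negative example
}
```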

Is there an update on this?