train-curve command stuck for multilabel textcat model

Hello,

I am using Prodigy v1.11 and attempting to check the train-curve for a multi-label textcat classifier I have been training (in v1.10.7), to assess which labels would benefit from more annotation. The command seems to get stuck early on and never runs through. I tried enabling the verbose flag listed in the docs, but I don't think it's supported in the new version? It complained when I attempted to include it. I also tried running in a few different environments in case it was a Jupyter Lab issue (as I have had a few of those), but I see the same result in the conda prompt and the Spyder command line. I was able to use the train command for this model successfully, and had previously run a train-curve for it in my v1.10.7 setup. Should it be possible to work on a model in one version and then update it in a newer one, or will I need to start from scratch? Or is there something else I have missed that could be causing this problem?

Command I've been running:
python -m prodigy train-curve --textcat-multilabel more_issues --eval-split 0.4

Output I see:

========================= Generating Prodigy config =========================
ℹ Auto-generating config with spaCy
✔ Generated training config

=========================== Train curve diagnostic ===========================
Training 4 times with 25%, 50%, 75%, 100% of the data

% Score textcat_multilabel


and then just a blinking cursor. I don't think I'm being impatient, as I've left it running for 30+ minutes and seen no change. Any tips or ideas would be much appreciated!

Hi! 30+ minutes with no update definitely sounds suspicious and too long. It's expected that train-curve takes longer than regular training, since it trains multiple times, and it might take slightly longer in v1.11 than before. But it's surprising that the 25% run alone would already take over 30 minutes.

For comparison, how long does it take to train with the same settings using prodigy train? And how many examples are in your more_issues dataset?
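That would be the same dataset and settings as your train-curve call, so presumably something like this (assuming more_issues and the 0.4 eval split carry over unchanged):

python -m prodigy train --textcat-multilabel more_issues --eval-split 0.4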

The train and train-curve commands both support config overrides (just like spaCy's CLI), so for debugging, you could also add the following arguments to only train for one step/epoch:

--training.max_epochs=1 --training.max_steps=1  
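For example, based on the command you posted above (assuming the same dataset and eval split), the quick debugging run would look something like this:

python -m prodigy train-curve --textcat-multilabel more_issues --eval-split 0.4 --training.max_epochs=1 --training.max_steps=1

If that finishes quickly for each of the four runs, the slowness is just the full training time multiplied out, rather than the command hanging.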

Hi Ines,

Thanks a lot for the reply. I think it is a combination of the model containing a lot of labels and the system I am running it on, as modifying the numbers you suggested has got me some results. When I timed my train command it was actually taking about 30 minutes real time, I just hadn't realised. I have: Training: 942 | Evaluation: 235 with 22 labels but quite a few are rare so I know I definitely need more annotations focussing on these, which I'm experimenting with now. I'm running within Anaconda on a Windows machine which does tend to have other greedy processes so perhaps there isn't much I can do about the speed, but for quick tests this is now working sufficiently.