`prodigy train` not reading configuration file

I am having trouble getting Prodigy to read my config file.

My config file looks like this:

$ cat test_config.cfg
...
[training]
dev_corpus = "corpora.dev"
train_corpus = "corpora.train"
seed = ${system.seed}
gpu_allocator = ${system.gpu_allocator}
dropout = 0.1
accumulate_gradient = 1
patience = 100
max_epochs = 1
max_steps = 100
eval_frequency = 10
frozen_components = []
annotating_components = []
before_to_disk = null
...

and my train command is below:

prodigy train --textcat-multilabel my_data --label-stats True --base-model en_core_web_lg --lang "en" --eval-split 0.2 --training.max_epochs=0 --training.eval_frequency=10  --config test_config.cfg

Based on this setup, I expect training to stop after max_steps = 100 steps, but training continues beyond that point.

However, if I replace my train command with the one below, I get the desired result:

prodigy train --textcat-multilabel my_data --label-stats True --base-model en_core_web_lg --lang "en" --eval-split 0.2 --training.max_epochs=0 --training.eval_frequency=10  --training.max_steps=100

Based on this, I am fairly certain Prodigy is not reading the configuration file.

Can someone please point me to what I am doing wrong?

Hi @JBunr,

The problem might be a conflict with --base-model. Whenever you provide a value for that parameter, Prodigy uses the base model's config and settings to ensure that training "matches" it, which can override values from your own config file. You can try removing that part and see whether training then follows the config you've given. This might be related to your other post here about "training that doesn't stop."
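As a quick sanity check on the file itself (separate from the --base-model issue), you can confirm which max_steps value your config actually carries. This sketch uses Python's stdlib configparser rather than Prodigy/spaCy's own config loader, so treat it only as a rough check; the CONFIG_TEXT below is a trimmed stand-in for your test_config.cfg:

```python
# Rough sanity check: parse the [training] section with the stdlib
# configparser and read back max_steps. Note that Prodigy/spaCy use
# their own config system (with ${...} interpolation), so this only
# verifies the raw value in the file, not how Prodigy resolves it.
import configparser

# Trimmed stand-in for test_config.cfg (assumption: same [training] values)
CONFIG_TEXT = """
[training]
dev_corpus = "corpora.dev"
train_corpus = "corpora.train"
dropout = 0.1
max_epochs = 1
max_steps = 100
eval_frequency = 10
"""

def read_max_steps(text: str) -> int:
    """Return the max_steps value from the [training] section."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return parser.getint("training", "max_steps")

print(read_max_steps(CONFIG_TEXT))  # prints 100
```

If the printed value is 100 but training still runs past 100 steps, the file is fine and the override is coming from elsewhere, e.g. the base model's settings.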

Thank you LJ, removing --base-model seems to do the trick.