1.11.0: Incorrect generation of config files?

Yesterday I installed the latest Prodigy release 1.11.0 after coming from the nightly 1.11.0a11, running on Windows 10.

When trying to train on my dataset with python -m prodigy train -n dataset -m en_core_web_lg, I get the following error:
original_logger = config["training"]["logger"]
KeyError: 'logger'

I did not receive this error with version 1.11.0a11.
Also, this error does not occur when training without a model specified after -m.

When inspecting the generated .cfg file (generated with spacy-config instead of train) and comparing it with the one generated without a model specified (which worked), I noticed the following:

  • [training.logger] was missing (likely the cause of the problem)
  • [training.score_weights] is specified incorrectly: the components not trained are set to null, which is correct, but ents_f, ents_p, ents_r are set to 0.33, 0, 0, while they should be 1, 0, 0 respectively.
  • [training.optimizer] has use_averages = true instead of false
  • [training] many parameters, such as patience, are set to very different values (e.g. 5000)
  • [training] seed = ${system:seed} and gpu_allocator = ${system:gpu_allocator} use a : instead of a . after system
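For comparison, this is roughly what the relevant sections look like in a correctly generated config — a sketch based on spaCy v3 defaults for an NER-only pipeline, so exact values may differ per version. Note the dot-style interpolation and the default console logger:

```ini
[training]
seed = ${system.seed}
gpu_allocator = ${system.gpu_allocator}

[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = false

[training.score_weights]
ents_f = 1.0
ents_p = 0.0
ents_r = 0.0
ents_per_type = null
```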

Interestingly, these differences also appear when comparing a .cfg generated with 1.11.0 against one generated with 1.11.0a11, even with exactly the same command (i.e. including -m en_core_web_lg).

This leads me to believe something has changed in the auto-generation of the .cfg that does not work well when a model is specified. I also tried en_core_sci_lg, with similar results.

Is this a 1.11.0 issue or is something else going on?

Thanks!

Hi!

You're right that the functionality of this did change in the 1.11.0 release, but it sounds like there might be a few problems we need to look into.

The intended behaviour when specifying a base_model is that it copies the training settings from that base_model. If you don't have a base_model, it uses the default values. This explains the differences you saw with e.g. training.optimizer. In general it makes sense to copy the settings from the base model, because if it's e.g. a transformer-based pipeline, the default settings wouldn't work well.

I'd personally recommend using the data-to-spacy command, double-checking the config, and then running spacy train, which gives you more control than running only prodigy train.
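If it helps, a quick way to sanity-check an exported config before training is to parse it with thinc's Config class, which spaCy's config system is built on. The snippet below uses a minimal inline config so it's self-contained; in practice you'd call Config().from_disk() on your generated file (the path is a placeholder):

```python
from thinc.api import Config

# Minimal stand-in for a generated config; in practice:
#   config = Config().from_disk("config.cfg")
cfg_text = """
[system]
seed = 0

[training]
seed = ${system.seed}

[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = false
"""
config = Config().from_str(cfg_text)

# The KeyError in this thread happened because this block was missing:
assert "logger" in config["training"]

# Dot-style interpolation resolves ${system.seed} against the [system] block:
print(config["training"]["seed"])  # -> 0
```

Parsing with from_str also catches the colon-style interpolation issue early, since ${system:seed} would fail to resolve.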

I'll have a look into the issue of score_weights and the logger though - hopefully those are easy fixes.

Thanks for the reply Sofie, and for looking into it!
A starting point might be to look into what changed in the config generation between 1.11.0 and 1.11.0a11, since identical commands ("prodigy train -n dataset -m en_core_web_lg") give quite different results.

I attached the two configs below. I changed the file extension to .jsonl to bypass the upload filters; just change it back to .cfg of course.
main_weblg.jsonl (2.3 KB)
nightly_weblg.jsonl (2.2 KB)

Regards,
Tom

Hi Tom,

Sorry for not being entirely clear in my previous reply. We are aware that the functionality changed since the nightly version: we specifically implemented it so that parts of the base_model's config are copied over, including the training parameters. This is to better support transformer-based pipelines, which need different training parameters. In general, if you specify a base_model, Prodigy will try to copy as much as possible from that model instead of using default values. I think we didn't do this correctly in the nightly version, and I do think the behaviour makes more sense now, but I'm happy to discuss further if you've found otherwise.

So some of the differences you're seeing are expected. But the logger is not copied over, and it should be explicitly set to the default one instead. You can obtain this by running fill-config, but Prodigy should probably do that in the background for you.
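In the meantime, a manual workaround (assuming the standard spaCy CLI; the file names below are placeholders) is to let spaCy fill in the missing defaults, including [training.logger]:

```shell
# Fill missing sections of a generated config with spaCy's defaults
python -m spacy init fill-config config.cfg config_filled.cfg
```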

Update: we've got a fix in progress for the crash on the "logger" entry, which we'll hopefully release soon!


Just released v1.11.1, which should fix the underlying problem 🙂
