1.11.0: Incorrect generation of config files?

Hi Tom,

Sorry for not being entirely clear in my previous reply. We're aware that the behaviour changed since the nightly version: we specifically implemented it so that parts of the base_model's config are copied over, including the training parameters. This is to better support transformer-based pipelines, which need different training parameters. In general, if you specify a base_model, Prodigy will try to copy as much as possible from that model instead of using default values. I think we didn't do this correctly in the nightly version, and I do think the behaviour makes more sense now, but happy to discuss further if you've found otherwise.
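To make that concrete, here's a rough sketch of the kind of invocation where this matters (the dataset and output names are just placeholders, not from your setup):

```
# Hypothetical example: training on top of a transformer pipeline.
# With --base-model set, the training parameters are copied from that
# pipeline's config rather than filled in from the defaults.
prodigy train ./output --ner my_ner_dataset --base-model en_core_web_trf
```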

So some of the differences you're seeing are expected. The logger, however, is not meant to be copied over and should be explicitly set to the default one. You can obtain this by running fill-config, though Prodigy should probably do that in the background for you.
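For reference, something along these lines should fill in the remaining defaults, including the default console logger (file names here are just placeholders):

```
# A minimal sketch, assuming you've saved the config Prodigy generated
# as partial_config.cfg: spaCy's fill-config fills in all unset defaults,
# including the default [training.logger] (spacy.ConsoleLogger.v1).
python -m spacy init fill-config partial_config.cfg config.cfg
```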

Update: we've got a fix in progress for the crash on the "logger" entry, which we'll hopefully release soon!
