How textcat.teach works under the hood

Hi @qu-genesis,

It's odd that you're getting this error, because the config snippet you provided is valid and works as expected on my end.
Could you share your spaCy, spacy-llm and Prodigy versions (e.g. by running `python -m pip list | grep -E "spacy|spacy-llm|prodigy"`)?
Could you also share the entire config file?
Thanks!

As for recommendations to improve the LLM's performance, it's hard to say much without a deeper understanding of the use case, but it usually boils down to prompt engineering. You can experiment with providing label definitions and few-shot examples, for instance (see the sketch below). It might also be that you need an entirely custom prompt, which requires implementing a custom spacy-llm task. This post shows an example of a custom task in a Prodigy recipe. Not sure if you've seen it, but Prodigy also has recipes (for example `ab.llm.tournament`) that can help with selecting the best prompt for your use case by measuring preference in a systematic and structured way (rather than relying purely on impressions), which might be worth trying out.
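
To give you an idea, here's a minimal sketch of what adding label definitions and few-shot examples to the task block could look like. It assumes a spacy-llm version that provides `spacy.TextCat.v3`, and the label names, definitions and the examples file path are placeholders, so adapt it to your own config:

```ini
[components.llm.task]
@llm_tasks = "spacy.TextCat.v3"
labels = ["POSITIVE", "NEGATIVE"]
exclusive_classes = true

# Short natural-language definitions that get injected into the prompt
[components.llm.task.label_definitions]
POSITIVE = "The text expresses a favourable opinion or sentiment."
NEGATIVE = "The text expresses an unfavourable opinion or sentiment."

# Few-shot examples read from a local file via the built-in reader
[components.llm.task.examples]
@misc = "spacy.FewShotReader.v1"
path = "textcat_examples.json"
```

The point here is just that both `label_definitions` and the `examples` reader plug directly into the existing `[components.llm.task]` block, so you can iterate on the prompt without writing a custom task first.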
Finally, since you're using OpenAI, it's probably worth reviewing their guide to effective prompt writing.