We're excited to announce the release of Prodigy v1.13.2, which comes with two new spacy-llm recipes: `terms.llm.fetch` for terminology generation and `ab.llm.tournament` for prompt engineering.
These recipes are successors to our original *openai* recipes and are our recommended way to leverage LLMs for annotation going forward. To find out more about the advantages of spacy-llm recipes, please check our docs.
The `terms.llm.fetch` recipe offers similar functionality to `terms.openai.fetch`, but it allows you to plug in an LLM backend of your choice (including models running locally), as long as it can be configured via spacy-llm. You can use this recipe to generate phrases and terms and convert them into Prodigy patterns for bootstrapping NER and spancat annotations. If you're interested in this workflow, you can check an example here.
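The recipe produces the patterns file for you, but the format is simple enough to sketch. Here's a minimal example of turning a list of fetched terms into Prodigy-style match patterns (the topic, label, and term list are made up for illustration):

```python
import json

def terms_to_patterns(terms, label):
    """Convert a list of terms into Prodigy-style match patterns.

    Each pattern is a list of token dicts using the "lower" attribute,
    so multi-word terms match case-insensitively, token by token.
    """
    patterns = []
    for term in terms:
        tokens = [{"lower": tok.lower()} for tok in term.split()]
        patterns.append({"label": label, "pattern": tokens})
    return patterns

# Terms an LLM might return for a topic like "skateboard tricks"
terms = ["kickflip", "heelflip", "360 pop shove-it"]
patterns = terms_to_patterns(terms, "SKATE_TRICK")

# Write one pattern per line (JSONL), the format Prodigy expects
with open("patterns.jsonl", "w", encoding="utf8") as f:
    for pattern in patterns:
        f.write(json.dumps(pattern) + "\n")
```

The resulting `patterns.jsonl` can then be passed to recipes like `ner.manual` to pre-highlight candidate spans.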
The `ab.llm.tournament` recipe offers two improvements over its predecessor, `ab.openai.tournament`: since it's powered by spacy-llm, you can use a custom LLM backend, and you can include multiple LLM backends in the tournament as well! As you annotate, you can see the scores in the terminal, so you can make an informed decision about which combination of prompt and LLM works best!
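The details of the recipe's model are covered in the talk below, but the core Bayesian idea can be sketched roughly: treat each annotation as a win or loss in a head-to-head duel between two prompt/LLM combinations, and maintain a posterior over the win rate. The Beta-posterior sketch here is a deliberate simplification, not the model the recipe actually uses:

```python
from math import sqrt

class DuelScore:
    """Beta posterior over the probability that candidate A beats candidate B.

    A rough sketch of Bayesian tournament-style scoring -- NOT the actual
    model used by ab.llm.tournament.
    """

    def __init__(self, alpha=1.0, beta=1.0):
        # Beta(1, 1) is a uniform prior over the win rate
        self.alpha = alpha
        self.beta = beta

    def record(self, a_won):
        # Each annotation updates the posterior by one pseudo-count
        if a_won:
            self.alpha += 1
        else:
            self.beta += 1

    def mean(self):
        # Posterior mean win rate for candidate A
        return self.alpha / (self.alpha + self.beta)

    def std(self):
        # Posterior standard deviation: how settled the duel is
        a, b = self.alpha, self.beta
        return sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

score = DuelScore()
for a_won in [True, True, False, True]:  # annotator picked A 3 times out of 4
    score.record(a_won)
```

The appeal of the Bayesian framing is that the uncertainty shrinks as annotations accumulate, so you can stop once one candidate is clearly ahead rather than annotating a fixed number of examples.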
If you're interested in the algorithm behind the tournament recipe and happen to be at PyData Amsterdam, do check out our colleague Andy's talk on the topic. You can learn all about "Promptly Evaluating Prompts with Bayesian Tournaments"!
Talk link: https://amsterdam2023.pydata.org/cfp/talk/HZB8JU/.
In this release we have also added an `llm_io` annotation interface to facilitate the development of custom LLM recipes.
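The interface renders the prompt sent to the LLM alongside its response. As a rough sketch of how a custom recipe might stream such pairs, here's a helper that builds one task dict; note that the exact schema the `llm_io` interface expects is specified in the docs, and the `"llm"` field layout below is an assumption for illustration only:

```python
def make_llm_io_task(text, prompt, response):
    """Build an annotation task pairing an input text with an LLM exchange.

    NOTE: the "llm" field layout is an assumption used for illustration;
    check the Prodigy docs for the schema the llm_io interface expects.
    """
    return {
        "text": text,
        "llm": {"prompt": prompt, "response": response},
    }

task = make_llm_io_task(
    "Berlin is nice in June.",
    "Extract all location entities from: 'Berlin is nice in June.'",
    "Berlin",
)
```

A custom recipe would yield dicts like this from its stream and set the recipe's view to the new interface (optionally combined with other blocks).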
We also included a fix for a task router issue related to server restarts.
Finally, we have a treat for those who would like to explore these new features but have an old version of Prodigy.
You can get 10% off a new personal license, as well as off any upgrade. Make sure to use PRODIGY10 as the discount code at checkout. The code is valid all through September!
Looking forward to hearing what you think!