Slightly Customized Prompt Tournament Results in too many API Calls

Hello, I have a slightly modified prompt tournament recipe. The recipe works as intended for about 10-20 selections, but then fails: it enters an infinite loop of response generation. Although the API calls return string responses, Prodigy continues requesting additional responses without updating the interface. I verified that the responses are valid strings by adding a log statement inside PromptConfigPair.generate(). I am also not seeing any duplicate _task_hash values.

Happy to provide additional details. None of this work is sensitive. Some more details below:

PromptConfigPair.generate is this:

    def generate(self, **input_data):
        doc = self.nlp(self.template.render(**input_data))
        # log(doc._.response[0].strip())
        return doc._.response[0].strip()

After each annotation I see 3+ blocks of log messages like the one below, but I am expecting exactly two:

17:42:44: RECIPE: Generating new candidates.
17:42:44: RECIPE: Picked [additional_context_chat.jinja2 + config1.cfg] and [standard.jinja2 + config1.cfg].
17:42:48: RECIPE: Candidate generation took 3.309509038925171s.

Can anyone help me understand what might be causing this problem? I suspect it may have something to do with the database (I have encountered this problem with both a Postgres and a SQLite db). Once I reach this failure mode, wiping the database or creating a new named dataset seems to resolve the issue (temporarily).

I had a nearly identical recipe and the exact same spacy_llm config working with feed_overlap=false. I think that introducing named sessions may be related, but I'm not sure.

Hi @langdonholmes ,

Have you changed batch_size by any chance? The current version of the recipe requires batch_size to be 1, since what is served next depends on the current answer. Setting it to anything greater than 1 might result in exactly this kind of loopy behavior.
(The next release, with the fix to the response bug, is on the way btw.)
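
For reference, a minimal sketch of pinning the batch size via prodigy.json (batch_size is a standard Prodigy configuration key):

```json
{
  "batch_size": 1
}
```

If batch_size is also set elsewhere (e.g. in the "config" dict the recipe returns), the value in prodigy.json should take precedence, but it's worth checking both places to be sure no larger value sneaks in.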