UI or bug? Prodigy not highlighting the full response from the LLM for ner.llm.correct?

Hi, sorry to be such a spammer the past few days: I got ner.llm.correct working with some examples via spacy-llm and flaky, slow GPT-4. But I noticed something odd in the labeling UI vs. the LLM response. In short, the LLM's reply is correct for this case:

```
1. Seared Scallops with Ginger, Persimmon and
Heirloom Tomatoes | True | DISH | a full dish, with food ingredients and flavors
```

There is a newline in the middle of it, which the LLM can handle. I have set `"allow_newline_highlight": true` in the config so I can correct these things by hand, but it would be nice if Prodigy actually selected the right text span from the model's response.
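
For reference, this is the setting I mean in prodigy.json (just a sketch, the rest of my config omitted):

```json
{
  "allow_newline_highlight": true
}
```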

Am I missing another setting?
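
For context, my setup looks roughly like this. These are simplified sketches, not my exact files: the labels, dataset name and paths are stand-ins, and the task/model versions are just the ones I believe ship with recent spacy-llm:

```ini
# config.cfg (sketch): an llm pipeline with a spacy-llm NER task and a GPT-4 backend
[nlp]
lang = "en"
pipeline = ["llm"]

[components]

[components.llm]
factory = "llm"

[components.llm.task]
@llm_tasks = "spacy.NER.v3"
labels = ["DISH", "INGREDIENT", "EQUIPMENT"]

[components.llm.model]
@llm_models = "spacy.GPT-4.v2"
```

And I run it with something like:

```bash
# dataset name and paths are placeholders
prodigy ner.llm.correct recipes_ner config.cfg data/recipes.jsonl
```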

Update: OTOH, this response from the LLM is wrong, but again Prodigy isn't highlighting all the spans. Not sure what's going on... no doubt related to the newlines.

Lynn

Hi @arnicas!

Thanks for the background and the example. Yes, it seems like it could be an issue with how newlines pass through the llm-io interface. We'll make a ticket and investigate it. We have a few teammates out for the rest of the week, but we'll follow up as soon as we have an update. Thank you!

Hi Ryan! Thanks, can't wait for this, it will help a lot :slight_smile:

Hi @arnicas,

Apologies for taking so long to follow up on this! I've been testing your example with the latest Prodigy version, and it seems that after upgrading spacy-llm (currently at 0.7.1 in Prodigy 1.15.3) the issue is gone.
I know it's been a while, so I won't ask you to confirm, but if you're still running into these issues after upgrading to the latest Prodigy, do ping us!
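For reference, upgrading looks something like this (the XXXX string is a placeholder for your license key):

```bash
# Install the latest Prodigy from its own package index
pip install --upgrade prodigy -f https://XXXX-XXXX-XXXX-XXXX@download.prodi.gy
```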
Thanks and, again, apologies for the late reply!