Is there a "normal" way to use the textcat.llm.correct recipe with a single label? When I try this, the LLM is instructed to return POS or NEG to indicate whether the single label applies, but there's nothing visible in the UI to show what the LLM decided without expanding the raw LLM response.
I also tried textcat.llm.fetch, but the stored response from the LLM is just POS/NEG, which isn't that useful in a textcat.correct session AFAIK.
I'm sure I could modify the recipe, or add a second label that is the negation of the one I'm interested in, to get what I want out of this. But before expending that effort, I thought it was worth asking whether there's a "right way", or whether this is a bug or an unintended use case.
The config I used, with our label and description removed:
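Something along these lines, with MY_LABEL and its description as placeholders (the model block is illustrative, not our actual settings):

```ini
[nlp]
lang = "en"
pipeline = ["llm"]

[components]

[components.llm]
factory = "llm"

[components.llm.task]
@llm_tasks = "spacy.TextCat.v3"
labels = ["MY_LABEL"]
exclusive_classes = false

[components.llm.task.label_definitions]
MY_LABEL = "Placeholder description of what qualifies for the label."

[components.llm.model]
@llm_models = "spacy.GPT-3-5.v2"
config = {"temperature": 0.0}
```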
Do you have a screenshot that demonstrates that it does not appear?
I wonder ... wouldn't it make sense to treat POS and NEG as two separate binary labels? In theory a sentence can be neither positive nor negative, or even both at the same time. Does this line of thinking apply to your labels?
There's no visible difference in the UI between POS and NEG. I looked back at textcat.manual with a single label and now understand that I'm supposed to accept or reject the sentence depending on whether the label applies. So I think the real issue with textcat.llm.correct (with the current recipe as of 1.14.12) is that it doesn't display the LLM response in a helpful way.
Additionally, if you do textcat.llm.fetch, it doesn't store an answer of "accept" or "reject" that you can correct.
It's true that there's room for improvement in the UI of the binary textcat.llm.correct. We appreciate the feedback!
About textcat.llm.fetch: the model's POS/NEG response could be translated into an appropriate value for the answer field (just as is done in the non-binary case). We've put it on our TODO list, but in the meantime one workaround would be to db-out the binary textcat dataset and postprocess it outside Prodigy so that the answer field is populated correctly, as sketched below.
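A rough sketch of that workaround in Python, assuming the raw POS/NEG verdict ends up under a meta key in the exported tasks. Note that llm_output is a hypothetical key name and the file names are placeholders, so inspect your db-out export to find the actual field before running this:

```python
import srsly  # Explosion's serialization helpers; plain json would work too

# Export first with: prodigy db-out my_binary_textcat > fetched.jsonl
IN_PATH = "fetched.jsonl"              # placeholder file names
OUT_PATH = "fetched_with_answers.jsonl"
VERDICT_KEY = "llm_output"             # hypothetical key: check where your export keeps the raw POS/NEG verdict

def populate_answer(task: dict) -> dict:
    """Translate the raw POS/NEG verdict into Prodigy's binary answer field."""
    verdict = str(task.get("meta", {}).get(VERDICT_KEY, "")).strip().upper()
    if verdict.startswith("POS"):
        task["answer"] = "accept"
    elif verdict.startswith("NEG"):
        task["answer"] = "reject"
    # Tasks with a missing or unrecognized verdict are passed through unchanged
    return task

srsly.write_jsonl(OUT_PATH, (populate_answer(t) for t in srsly.read_jsonl(IN_PATH)))
# Re-import the corrected tasks with: prodigy db-in my_binary_textcat_fixed fetched_with_answers.jsonl
```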