Labels not being served

Using Prodigy 1.9. The custom classify recipe below serves the content just fine, but the labels are not showing up.

def _xml_to_conversations(source):
    ...  # parses the XML source into conversation objects (body omitted in this post)

def _doc_as_json(doc, keys=(TEXT, TOKENS)):
    return {key: doc.to_json().get(key) for key in keys}

def _get_tasks(source, model):
    nlp = spacy.load(model)
    conversations = _xml_to_conversations(source)
    for conversation in conversations:
        doc = nlp(conversation.content)
        yield _doc_as_json(doc)

@prodigy.recipe(
    "classify",
    dataset=("Dataset", "positional", None, str),
    model=("Model", "positional", None, str),
    source=("Source", "positional", None, str),
)
def classify(dataset, model, source):
    return {
        "dataset": dataset,
        "stream": _get_tasks(source, model),
        "view_id": "classification",
        "config": {"labels": ["Positive", "Negative"]},
    }

@prodigy.recipe(
    "relations",
    dataset=("Dataset", "positional", None, str),
    model=("Model", "positional", None, str),
    source=("Source", "positional", None, str),
)
def relations(dataset, model, source):
    return {
        DATASET: dataset,
        STREAM: _get_tasks(source, model),
        VIEW_ID: "ner_manual",
        # remaining keys omitted in the original post
    }

When I try the relations recipe above, which invokes the ner_manual view_id, the web page serves up an oops:

Oops, something went wrong :frowning:
You might have come across a bug in Prodigy's web app – sorry about that. We'd love to fix this, so feel free to open an issue on the Prodigy Support Forum and include the steps that led to this message.

Any help appreciated.

Hi! The problem in your classify recipe is that the classification interface doesn't take its labels from the "labels" setting in the config – it simply renders the given content together with a "label" set on each individual task. See here for examples:
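For reference, a minimal sketch of a single task in the classification format – the helper name and example text are made up, but the point is that the label lives on the task itself, not in the config:

```python
# Hypothetical helper: build one task for the "classification" interface.
# The label to display is attached to each individual task.
def make_classification_task(text, label):
    return {"text": text, "label": label}

task = make_classification_task("Great support, thanks!", "Positive")
# task is {"text": "Great support, thanks!", "label": "Positive"}
```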

If you want to add multiple choice options, you probably want to use the choice interface and add a set of "options" to your incoming tasks. See here for examples and the data format:
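As a rough sketch of that data format (the helper and the option ids here are assumptions, not part of your code), a choice task carries its options like this:

```python
# Hypothetical helper: build one task for the "choice" interface.
# Each option needs an "id" (what gets stored with the answer) and a
# display "text".
def make_choice_task(text):
    return {
        "text": text,
        "options": [
            {"id": "Positive", "text": "Positive"},
            {"id": "Negative", "text": "Negative"},
        ],
    }

task = make_choice_task("The delivery was quick.")
```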

(I assume you're defining the constants like DATASET, LABELS or RELATIONAL_LABELS somewhere else in your file?)

I think the problem here is that you're creating your annotation tasks by calling spaCy's doc.to_json. That's an okay way to get a JSON representation of a Doc – but it doesn't necessarily match the data Prodigy's interfaces expect.

For instance, your task will have a "tokens" property in spaCy's JSON format, which doesn't match the format Prodigy's manual interfaces expect – that's likely what's causing the problem here. Instead of processing the texts and then converting the doc, you could just make your tasks {"text": conversation.content} and then use Prodigy's add_tokens preprocessor to add "tokens" to each task. You can see an example of this here:
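Roughly, the stream could then be reduced to plain text tasks – a sketch only (in the real recipe the conversations would come from your _xml_to_conversations and expose .content):

```python
# Sketch: yield plain {"text": ...} tasks instead of converting Docs.
# Prodigy's add_tokens preprocessor can then attach "tokens" in the
# exact format the manual interfaces expect, roughly:
#     from prodigy.components.preprocess import add_tokens
#     stream = add_tokens(nlp, stream)
def get_tasks(conversations):
    for conversation in conversations:
        yield {"text": conversation}

stream = list(get_tasks(["Hello there.", "See you soon."]))
```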