Unable to use Prodigy annotations with spaCy CLI train

I am able to train NER models from Prodigy annotations using prodigy ner.batch-train and Python code. But when I try to use the same annotation dataset (exported with ner.gold-to-spacy) with the CLI "train" command, I get the following error. What am I doing wrong?
python -m spacy train en ner_system_model ner_system_train_manual_annotations.json ner_system_test_manual_annotations.json -b model -p ner
Training pipeline: ['ner']
Starting with base model 'model'
Counting training words (limit=0)

Itn    Dep Loss    NER Loss      UAS    NER P    NER R    NER F    Tag %  Token %  CPU WPS  GPU WPS
---  ----------  ----------  -------  -------  -------  -------  -------  -------  -------  -------
✔ Saved model to output directory

Traceback (most recent call last):
  File "C:\Users\ojustwin.naik\AppData\Local\Continuum\anaconda3\envs\tf-gpu\lib\site-packages\spacy\cli\train.py", line 281, in train
    scorer = nlp_loaded.evaluate(dev_docs, debug)
  File "C:\Users\ojustwin.naik\AppData\Local\Continuum\anaconda3\envs\tf-gpu\lib\site-packages\spacy\language.py", line 631, in evaluate
    docs, golds = zip(*docs_golds)
ValueError: not enough values to unpack (expected 2, got 0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\ojustwin.naik\AppData\Local\Continuum\anaconda3\envs\tf-gpu\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\ojustwin.naik\AppData\Local\Continuum\anaconda3\envs\tf-gpu\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\ojustwin.naik\AppData\Local\Continuum\anaconda3\envs\tf-gpu\lib\site-packages\spacy\__main__.py", line 35, in <module>
    plac.call(commands[command], sys.argv[1:])
  File "C:\Users\ojustwin.naik\AppData\Local\Continuum\anaconda3\envs\tf-gpu\lib\site-packages\plac_core.py", line 328, in call
    cmd, result = parser.consume(arglist)
  File "C:\Users\ojustwin.naik\AppData\Local\Continuum\anaconda3\envs\tf-gpu\lib\site-packages\plac_core.py", line 207, in consume
    return cmd, self.func(*(args + varargs + extraopts), **kwargs)
  File "C:\Users\ojustwin.naik\AppData\Local\Continuum\anaconda3\envs\tf-gpu\lib\site-packages\spacy\cli\train.py", line 368, in train
    best_model_path = _collate_best_model(meta, output_path, nlp.pipe_names)
  File "C:\Users\ojustwin.naik\AppData\Local\Continuum\anaconda3\envs\tf-gpu\lib\site-packages\spacy\cli\train.py", line 425, in _collate_best_model
    bests[component] = _find_best(output_path, component)
  File "C:\Users\ojustwin.naik\AppData\Local\Continuum\anaconda3\envs\tf-gpu\lib\site-packages\spacy\cli\train.py", line 444, in _find_best
    accs = srsly.read_json(epoch_model / "accuracy.json")
  File "C:\Users\ojustwin.naik\AppData\Local\Continuum\anaconda3\envs\tf-gpu\lib\site-packages\srsly\_json_api.py", line 49, in read_json
    file_path = force_path(location)
  File "C:\Users\ojustwin.naik\AppData\Local\Continuum\anaconda3\envs\tf-gpu\lib\site-packages\srsly\util.py", line 11, in force_path
    raise ValueError("Can't read file: {}".format(location))
ValueError: Can't read file: ner_system_model\model0\accuracy.json

We've been trying to make training with spaCy's CLI easier, so we've moved the conversion over into spaCy itself, so that non-Prodigy users can make use of it too. The easiest way is the spacy convert command, which supports Prodigy's JSONL format. It produces the full JSON training format used by spaCy, which you can then pass to the train CLI.

The ner.gold-to-spacy recipe produces a simpler format that's easier to work with if you're writing your own training loop. We should probably remove or update it now that spacy convert handles Prodigy's JSONL format.
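If you do want the simpler format for a hand-rolled training loop, loading it is straightforward. A minimal sketch using only the standard library — the sample records below are made up, but follow the [text, {"entities": [[start, end, label]]}] shape the recipe can emit:

```python
import json

# Hypothetical sample lines in the entity-offset style ner.gold-to-spacy
# can emit: each line is [text, {"entities": [[start, end, label]]}].
SAMPLE_JSONL = """\
["Apple is looking at buying U.K. startup", {"entities": [[0, 5, "ORG"], [27, 31, "GPE"]]}]
["San Francisco considers banning sidewalk robots", {"entities": [[0, 13, "GPE"]]}]
"""

def load_simple_format(jsonl_text):
    """Parse gold-to-spacy style JSONL into (text, annotations) pairs."""
    train_data = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        text, annotations = json.loads(line)
        train_data.append((text, annotations))
    return train_data

TRAIN_DATA = load_simple_format(SAMPLE_JSONL)
for text, annotations in TRAIN_DATA:
    for start, end, label in annotations["entities"]:
        print(label, repr(text[start:end]))
```

The resulting (text, annotations) pairs are the shape spaCy v2's nlp.update expects in a custom loop.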

I just tried the following on a dataset I happened to have in my DB:

python -m prodigy db-out tmp-per-manual-2 > ~/tmp/ner_per.jsonl
python -m spacy convert ~/tmp/ner_per.jsonl . --converter jsonl --lang en

The --lang argument was necessary because spaCy needs to tokenize the text to produce its JSON format, and no tokens were already set in the dataset.

This produced a file ner_per.json in my current directory, the start of which looks like this:

        "raw":"Thanks fellas",



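For anyone unfamiliar with spaCy v2's training JSON, the converted file is roughly shaped like the following. This is a hand-written sketch, not actual converter output; real files can also carry fields like "tag", "head" and "dep" on each token:

```json
[
  {
    "id": 0,
    "paragraphs": [
      {
        "raw": "Thanks fellas",
        "sentences": [
          {
            "tokens": [
              {"id": 0, "orth": "Thanks", "ner": "O"},
              {"id": 1, "orth": "fellas", "ner": "O"}
            ],
            "brackets": []
          }
        ]
      }
    ]
  }
]
```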
Thanks Matthew. The converter worked, but with a hiccup. When I first ran "train" on the converted annotations, I got the following error:

KeyError: "[E022] Could not find a transition with the name 'B-' in the NER model."

I found that the annotations contained a multi-word entity tagged with only "ner":"B-", "ner":"I-", "ner":"L-", i.e. without the entity label after the "B-" prefix. After I fixed this manually, the CLI "train" command ran successfully.
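In case it's useful to others, this kind of truncated tag can be caught before training. A small sketch that walks the paragraphs/sentences/tokens structure of the converted JSON and flags any "ner" value with a BILUO prefix but no label (the sample document below is made up):

```python
import json

def find_incomplete_ner_tags(docs):
    """Report tokens whose NER tag has a BILUO prefix but no label,
    e.g. "B-" instead of "B-ORG"."""
    bad = []
    for doc in docs:
        for para in doc.get("paragraphs", []):
            for sent in para.get("sentences", []):
                for token in sent.get("tokens", []):
                    tag = token.get("ner", "O")
                    prefix, dash, label = tag.partition("-")
                    if dash and prefix in {"B", "I", "L", "U"} and not label:
                        bad.append((token.get("orth"), tag))
    return bad

# Made-up document in the structure spacy convert produces.
sample_docs = json.loads("""
[{"id": 0, "paragraphs": [{"raw": "John Smith spoke",
  "sentences": [{"tokens": [
    {"id": 0, "orth": "John", "ner": "B-"},
    {"id": 1, "orth": "Smith", "ner": "L-"},
    {"id": 2, "orth": "spoke", "ner": "O"}]}]}]}]
""")

print(find_incomplete_ner_tags(sample_docs))
```

Running this over the converted file before training would have surfaced the broken tags instead of the E022 error.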