--eval-id in textcat.batch-train not working in 1.8

I updated Prodigy to 1.8 yesterday and can no longer pass a dataset to --eval-id in textcat.batch-train. Previously the experiment would run and report results, but now I get this error:

Traceback (most recent call last):
  File "/usr/local/Cellar/python/3.7.1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/local/Cellar/python/3.7.1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/Users/james/Personal/Prodigy/prodigy_venv/lib/python3.7/site-packages/prodigy/__main__.py", line 380, in <module>
    controller = recipe(*args, use_plac=True)
  File "cython_src/prodigy/core.pyx", line 212, in prodigy.core.recipe.recipe_decorator.recipe_proxy
  File "/Users/james/Personal/Prodigy/prodigy_venv/lib/python3.7/site-packages/plac_core.py", line 328, in call
    cmd, result = parser.consume(arglist)
  File "/Users/james/Personal/Prodigy/prodigy_venv/lib/python3.7/site-packages/plac_core.py", line 207, in consume
    return cmd, self.func(*(args + varargs + extraopts), **kwargs)
  File "/Users/james/Personal/Prodigy/prodigy_venv/lib/python3.7/site-packages/prodigy/recipes/textcat.py", line 256, in batch_train
    acc = model.evaluate(tqdm.tqdm(evals, leave=False))
  File "cython_src/prodigy/models/textcat.pyx", line 263, in prodigy.models.textcat.TextClassifier.evaluate
KeyError: 'cats'

For the evaluation set, I went through a bunch of examples and annotated them with textcat.mark. Has there been a change to the data format that I need to account for?

Damn, thanks. I made a mistake in the recipe there and will ship an update. In the meantime, you can apply the patch yourself, since the source of the recipe is included with the package. If you edit the file prodigy/recipes/textcat.py (the path shows up in your traceback):

You should be able to add the line evals = convert_options_to_cats(evals) directly under line 230. The result should look like this:

229     if eval_id:
230         evals = DB.get_dataset(eval_id)
231         evals = convert_options_to_cats(evals)
232         print_("Loaded {} evaluation examples from '{}'".format(len(evals), eval_id))
233     else:

Thanks! Made the change and it works fine now.