KeyError: 'score' in ner.teach

Hi:

I'm trying to run ner.teach with a custom NER model and some custom tags.

command line:

prodigy ner.teach <dataset> <model_name> data/<annotation_file>.jsonl
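
and, with explicit labels appended (the label names here stand in for my custom tags):

prodigy ner.teach <dataset> <model_name> data/<annotation_file>.jsonl --label CUSTOM_LABEL_ONE,CUSTOM_LABEL_TWO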

I've tried appending the explicit labels, as shown above. I've also loaded the model in a terminal and validated that it can successfully run nlp.evaluate() on some gold docs. The ner.correct recipe runs successfully with the same model and the same dataset.

Regardless of what I do, Prodigy ultimately throws the following stack trace when I first load the web server for ner.teach:

Task exception was never retrieved
future: <Task finished coro=<RequestResponseCycle.run_asgi() done, defined at /anaconda3/envs/prodigy/lib/python3.6/site-packages/uvicorn/protocols/http/httptools_impl.py:383> exception=KeyError('score',)>
Traceback (most recent call last):
File "/anaconda3/envs/prodigy/lib/python3.6/site-packages/uvicorn/protocols/http/httptools_impl.py", line 388, in run_asgi
self.logger.error(msg, exc_info=exc)
File "/anaconda3/envs/prodigy/lib/python3.6/logging/init.py", line 1337, in error
self._log(ERROR, msg, args, **kwargs)
File "/anaconda3/envs/prodigy/lib/python3.6/logging/init.py", line 1444, in _log
self.handle(record)
File "/anaconda3/envs/prodigy/lib/python3.6/logging/init.py", line 1453, in handle
if (not self.disabled) and self.filter(record):
File "/anaconda3/envs/prodigy/lib/python3.6/logging/init.py", line 720, in filter
result = f.filter(record)
File "cython_src/prodigy/util.pyx", line 120, in prodigy.util.ServerErrorFilter.filter
File "/anaconda3/envs/prodigy/lib/python3.6/site-packages/uvicorn/protocols/http/httptools_impl.py", line 385, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/anaconda3/envs/prodigy/lib/python3.6/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in call
return await self.app(scope, receive, send)
File "/anaconda3/envs/prodigy/lib/python3.6/site-packages/fastapi/applications.py", line 142, in call
await super().call(scope, receive, send) # pragma: no cover
File "/anaconda3/envs/prodigy/lib/python3.6/site-packages/starlette/applications.py", line 134, in call
await self.error_middleware(scope, receive, send)
File "/anaconda3/envs/prodigy/lib/python3.6/site-packages/starlette/middleware/errors.py", line 178, in call
raise exc from None
File "/anaconda3/envs/prodigy/lib/python3.6/site-packages/starlette/middleware/errors.py", line 156, in call
await self.app(scope, receive, _send)
File "/anaconda3/envs/prodigy/lib/python3.6/site-packages/starlette/middleware/cors.py", line 84, in call
await self.simple_response(scope, receive, send, request_headers=headers)
File "/anaconda3/envs/prodigy/lib/python3.6/site-packages/starlette/middleware/cors.py", line 140, in simple_response
await self.app(scope, receive, send)
File "/anaconda3/envs/prodigy/lib/python3.6/site-packages/starlette/exceptions.py", line 73, in call
raise exc from None
File "/anaconda3/envs/prodigy/lib/python3.6/site-packages/starlette/exceptions.py", line 62, in call
await self.app(scope, receive, sender)
File "/anaconda3/envs/prodigy/lib/python3.6/site-packages/starlette/routing.py", line 590, in call
await route(scope, receive, send)
File "/anaconda3/envs/prodigy/lib/python3.6/site-packages/starlette/routing.py", line 208, in call
await self.app(scope, receive, send)
File "/anaconda3/envs/prodigy/lib/python3.6/site-packages/starlette/routing.py", line 41, in app
response = await func(request)
File "/anaconda3/envs/prodigy/lib/python3.6/site-packages/fastapi/routing.py", line 129, in app
raw_response = await run_in_threadpool(dependant.call, **values)
File "/anaconda3/envs/prodigy/lib/python3.6/site-packages/starlette/concurrency.py", line 25, in run_in_threadpool
return await loop.run_in_executor(None, func, *args)
File "/anaconda3/envs/prodigy/lib/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "/anaconda3/envs/prodigy/lib/python3.6/site-packages/prodigy/app.py", line 374, in get_session_questions
return _shared_get_questions(req.session_id, excludes=req.excludes)
File "/anaconda3/envs/prodigy/lib/python3.6/site-packages/prodigy/app.py", line 345, in _shared_get_questions
tasks = controller.get_questions(session_id=session_id, excludes=excludes)
File "cython_src/prodigy/core.pyx", line 138, in prodigy.core.Controller.get_questions
File "cython_src/prodigy/components/feeds.pyx", line 68, in prodigy.components.feeds.SharedFeed.get_questions
File "cython_src/prodigy/components/feeds.pyx", line 73, in prodigy.components.feeds.SharedFeed.get_next_batch
File "cython_src/prodigy/components/feeds.pyx", line 153, in prodigy.components.feeds.SessionFeed.get_session_stream
File "cython_src/prodigy/components/feeds.pyx", line 135, in prodigy.components.feeds.SessionFeed.validate_stream
File "/anaconda3/envs/prodigy/lib/python3.6/site-packages/toolz/itertoolz.py", line 376, in first
return next(iter(seq))
File "cython_src/prodigy/components/sorters.pyx", line 98, in iter
File "cython_src/prodigy/components/sorters.pyx", line 10, in genexpr
File "cython_src/prodigy/models/ner.pyx", line 281, in call
File "cython_src/prodigy/models/ner.pyx", line 269, in get_tasks
File "cython_src/prodigy/models/ner.pyx", line 244, in prodigy.models.ner.EntityRecognizer.call.get_tasks.sort_by_entity
KeyError: 'score'

I'm almost certain I'm doing something dumb, but I can't figure out what it is. Help?

Also, for what it's worth, running validate against the model returns the green check mark.

Resolved!

The problem was in my *.jsonl file, which did not include "score" values for the pre-annotated spans. Once I used a regex to add default values, it worked.
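
For anyone who runs into the same thing, here's roughly what one line looked like after the fix (the text and label are made up for illustration, and the 0.5 default is arbitrary):

{"text": "Apple is looking at buying a U.K. startup", "spans": [{"start": 0, "end": 5, "label": "ORG", "score": 0.5}]}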

Thanks for updating, glad you got it working! It sounds like you were doing everything correctly – and the ner.teach recipe should be assigning the "score" values based on the model's predictions – however, if your data includes existing annotations, it's possible that this was throwing off the built-in annotation model.

@jschirmer @ines

In the JSONL file, where did you add the score value? Like ["meta"]["score"] = 0.9, or somewhere else?
It would be better if you explained with an example annotation.

I also tried adding ["meta"]["score"] = 0.9 to the input examples, but I'm still getting the same error.

Thanks in advance!

This is strange, because you shouldn't have to add scores to the incoming data. That should be taken care of by the recipe.

The only problem I see is if the data you're loading in already has pre-defined "spans". It's possible that Prodigy gets confused here, because it expects existing spans to have scores. So maybe have a look in your data – if there are "spans", just remove them. There's no need to have them set in ner.teach – if anything, it just makes the workflow more confusing.
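
For example, a one-off script along these lines would strip them (the file names are placeholders for your own):

import json

# Minimal sketch: drop pre-annotated "spans" so ner.teach can make and
# score its own suggestions. Adjust the file names to your setup.
with open("annotations.jsonl", encoding="utf8") as f_in, \
        open("annotations_no_spans.jsonl", "w", encoding="utf8") as f_out:
    for line in f_in:
        task = json.loads(line)
        task.pop("spans", None)  # remove existing spans if present
        f_out.write(json.dumps(task) + "\n")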

@ines Yes, after removing the spans from the input, it is working correctly.
Thank you so much!

And I have one more question: in ner.teach, is there any possibility of editing the results it shows? I have a use case where I want the input spaCy model to predict the spans, and if a prediction is wrong, I want to make the necessary correction and store it in the database. Basically, a combination of ner.correct and ner.teach.

Is there any way to do it?

Thanks in advance!

If you want to edit and correct the suggestions, you should use ner.correct. The ner.teach recipe will show you suggestions from different possible analyses, so you're not even necessarily seeing what the model would have predicted. You're seeing the suggestions where a yes / no answer might have a high impact on the model. So editing those and "correcting" them wouldn't really make sense here.

But what happens if most of the predictions given by ner.teach are wrong or need small corrections?

ner.teach being "wrong" doesn't exactly mean the same thing as the model's output being wrong. What you see in ner.teach is based on a number of different possible analyses of the text (beam search). So you might see suggestions that the model would have never actually produced, but where your binary feedback would have the biggest impact and produce the most relevant gradient for updating the model. So you need the binary feedback on the one particular span here, because otherwise, the recipe wouldn't make sense.

We haven't really experimented much with using ner.correct with a model in the loop, but it's something you could implement in a custom recipe if you want. It'd be more straightforward than the updating in ner.teach, because you really just need to update the model with the incoming annotations (nlp.update). I'm not sure how effective it'd be and if the updates make enough of a difference during annotation.
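
Untested, but here's a rough sketch of what such a custom recipe could look like – this uses the spaCy v2 API, and the recipe name, file names and labels are all placeholders, not anything built-in:

import spacy
import prodigy
from prodigy.components.loaders import JSONL

@prodigy.recipe(
    "ner.correct-with-update",
    dataset=("Dataset to save annotations to", "positional", None, str),
    spacy_model=("Loadable spaCy model", "positional", None, str),
    source=("Path to JSONL source", "positional", None, str),
    label=("Comma-separated labels", "option", "l", lambda s: s.split(",")),
)
def ner_correct_with_update(dataset, spacy_model, source, label):
    nlp = spacy.load(spacy_model)

    def make_tasks(stream):
        # Pre-highlight the model's current predictions so they can be
        # corrected manually in the ner_manual interface
        for eg in stream:
            doc = nlp(eg["text"])
            eg["spans"] = [
                {"start": ent.start_char, "end": ent.end_char, "label": ent.label_}
                for ent in doc.ents
                if not label or ent.label_ in label
            ]
            yield eg

    def update(answers):
        # Update the model in the loop with the corrected spans
        # (spaCy v2-style nlp.update with text / annotation-dict pairs)
        accepted = [eg for eg in answers if eg["answer"] == "accept"]
        if not accepted:
            return
        texts = [eg["text"] for eg in accepted]
        golds = [
            {"entities": [(s["start"], s["end"], s["label"])
                          for s in eg.get("spans", [])]}
            for eg in accepted
        ]
        nlp.update(texts, golds)

    return {
        "dataset": dataset,
        "view_id": "ner_manual",
        "stream": make_tasks(JSONL(source)),
        "update": update,
        "config": {"lang": nlp.lang, "labels": label},
    }

You'd then run it with something like prodigy ner.correct-with-update <dataset> <model_name> data.jsonl --label LABEL_ONE,LABEL_TWO -F recipe.py. Whether the in-loop updates actually help during annotation is exactly the open question above, so treat this as a starting point to experiment with.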

@ines Thanks for your response, I'll try the custom recipe approach.