Internal Server Error

I updated Prodigy from 1.9.0 to 1.9.7 and now I'm getting an Internal Server Error when trying to load the Prodigy web app.

The logs don't give much away either:

✨  Starting the web server at ...
Open the app in your browser and start annotating!
 INFO: - "GET /?session=simon HTTP/1.1" 500 Internal Server Error

How do I debug this?

Hi @simon.gurcke,

It's quite strange that you're not getting a traceback with the error.

I tried debugging it here by injecting an artificial error into the code to force an Internal Server Error, but when I trigger it I still get the full traceback leading to the point where I injected the error. The fact that you're not getting any extra information about the error, not even a traceback, seems like the first thing to debug.

Which version of Python are you using?

Also, are you running it behind a proxy/load balancer? If so, do you get an error when running locally, directly in your machine?

Thanks for your response @tiangolo. I'm running Prodigy in a Docker container orchestrated by Kubernetes on AWS, so in fact there is a load balancer in between. And I'm using Python 3.7.

I can't easily test this locally as I'm making requests to Kubernetes internal services to get predictions.

Rolling Prodigy back to 1.9.0 fixes the error and I can use Prodigy. If I inject an artificial error I never get a traceback. Always just 500 Internal Server Error.

Is there anything I can do to help resolve this?

Humm, do you know what load balancer is running on top? Can you get the logs from the load balancer?

It's because I think this log:

 INFO: - "GET /?session=simon HTTP/1.1" 500 Internal Server Error

wouldn't show up if you were running locally. So I suspect it could actually come from the load balancer and not Prodigy (even if the error is in code related to Prodigy), and that might be hiding the actual error underneath.

Also, given that it's a server error, it should show up as an ERROR log, not an INFO log.

Are you running Prodigy directly or do you have a custom recipe in Python code?

Do you control the Dockerfile for that container? Can you add more code to it?

If so, you could try adding some extra debugging setup to the container before Prodigy starts.
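For example, a minimal sketch of what that setup could look like (the module name `debug_setup.py` is hypothetical; `faulthandler` and `PYTHONUNBUFFERED` are standard Python/CPython features, not Prodigy-specific):

```python
# debug_setup.py -- import or run this before starting Prodigy in the
# container, to make silent failures more visible.
import faulthandler
import logging
import os
import sys

# Dump the tracebacks of all threads on fatal signals (stdlib faulthandler),
# so a crashing or hanging worker leaves some trace in the logs.
faulthandler.enable()

# Docker often buffers stdout/stderr; force unbuffered output so log lines
# show up immediately in `docker logs` / `kubectl logs`.
os.environ["PYTHONUNBUFFERED"] = "1"

# Raise the root logger to DEBUG so records propagated by uvicorn/starlette
# are not filtered out before they reach the console.
logging.basicConfig(level=logging.DEBUG, stream=sys.stderr)
```

That at least rules out the container's log handling as the reason you see no traceback.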

Another option is to run a small FastAPI app instead of Prodigy in that container and see if you can inject an error and get a traceback. That would help narrow down where the problem is: whether it's related to Prodigy's configuration, the environment, etc.

Unrelated note: just checked/reconfigured my settings here to make sure I get notifications for your replies :sweat_smile:

Okay, if I run Prodigy locally it gives me the expected traceback.

I don't understand how a load balancer makes a difference. Eventually the request hits the Prodigy code or my custom recipe code, and that's where the exception occurs and should be logged. Or am I missing something?

Yes, but if the INFO log is not shown locally, then it would probably mean that it's coming from somewhere else, not from Prodigy, possibly the load balancer. It could mean that, for example, you are getting the logs of the load balancer but not the logs of the container behind it. Or maybe the container with Prodigy could be configured with an internal load balancer, e.g. an Nginx with a proxy pass to Prodigy, on the same container.

Something else you could try is adding something like Sentry as part of your recipe to log the error with the traceback to a remote server, that could work and help you figure out what's happening independent of how the logs are being handled in your cluster.
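To illustrate the idea without depending on Sentry specifically (the `logged_stream` helper below is hypothetical; a `sentry_sdk.capture_exception()` call could replace the logging call), you could wrap the recipe's stream generator so the traceback is recorded at the moment the exception happens, before any async machinery gets a chance to swallow it:

```python
import logging
import traceback

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("recipe")

def logged_stream(stream):
    """Wrap a recipe's task stream so exceptions are logged with a full
    traceback where they occur (or reported to Sentry), then re-raised."""
    try:
        for eg in stream:
            yield eg
    except Exception:
        log.error("Stream failed:\n%s", traceback.format_exc())
        raise

# Toy stand-in for a recipe stream that fails mid-iteration:
def broken_stream():
    yield {"text": "ok"}
    raise KeyError("label")  # mimics the missing-key error in this thread

collected = []
try:
    for task in logged_stream(broken_stream()):
        collected.append(task)
except KeyError:
    pass  # the error still propagates, but was logged first
```

In a recipe you'd return `logged_stream(stream)` instead of `stream` from the recipe function, so the wrapper sits as close to the failure as possible.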

I see what you mean, however there are other log entries that clearly come from Prodigy. So if Prodigy was logging something related to the exception I assume it should appear in the same place.

Sentry is a good idea, I will try that tomorrow.


Prodigy has its own logger, and all other log statements are produced by uvicorn. I think uvicorn also outputs log statements from other applications if they just log via the standard logging module, so that could be what's happening here. But yeah, Sentry is probably the most systematic approach.


I've finally gotten around to doing more testing regarding this.

I'm now able to reproduce this locally when running Prodigy in a Docker container. @tiangolo you were right that the log entry stating the Internal Server Error is indeed coming from the nginx Ingress in Kubernetes, as this does not appear when running locally.

The frontend still receives an Internal Server Error and the logs don't show anything. Only when manually exiting Prodigy (using Ctrl-C in the terminal) does the traceback appear. I also tried Sentry, but it doesn't log anything at all.

I suspect this has something to do with the exception not bubbling up from an async worker until the worker is exited forcefully. Sometimes that can happen when the exception can't be pickled, but I'm just guessing here. It's weird, though, that this only happens in Docker (Linux) and not when running on macOS.
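As a toy illustration of that suspicion (this is plain asyncio, not Prodigy's actual internals): an exception raised in a task that nothing awaits is stored on the task object and only reported later, e.g. when the task is garbage-collected or the loop shuts down, which matches the "Task exception was never retrieved" message below:

```python
import asyncio

async def worker():
    # Mimics the missing-key error from the traceback in this thread.
    raise KeyError("label")

async def main():
    task = asyncio.ensure_future(worker())
    await asyncio.sleep(0)  # let the task run (and fail) once
    # Nothing awaits `task`, so the KeyError is never re-raised here.
    return task

loop = asyncio.new_event_loop()
task = loop.run_until_complete(main())
# The exception is stored on the task; it just never surfaced anywhere.
print(repr(task.exception()))
loop.close()
```

In the Prodigy case the failing coroutine is the ASGI request cycle, but the mechanism is the same: until something retrieves the task's exception, nothing is printed.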

Not sure if this is Prodigy-specific or something that's happening upstream (FastAPI / uvicorn), but it definitely needs to be resolved.

For reference, here's the traceback I get after pressing Ctrl-C. Maybe it helps? (The error itself is not the concern here, just that it doesn't appear in the logs at the time it occurs.)

✨  Starting the web server at ...
Open the app in your browser and start annotating!

^CException during reset or similar
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/pool/", line 693, in _finalize_fairy
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/pool/", line 880, in _reset
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/engine/", line 538, in do_rollback
psycopg2.OperationalError: SSL SYSCALL error: EOF detected

Task exception was never retrieved
future: <Task finished coro=<RequestResponseCycle.run_asgi() done, defined at /opt/conda/lib/python3.7/site-packages/uvicorn/protocols/http/> exception=KeyError('label')>
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/asyncio/", line 249, in __step
    result = coro.send(None)
File "/opt/conda/lib/python3.7/site-packages/uvicorn/protocols/http/", line 388, in run_asgi
    self.logger.error(msg, exc_info=exc)
File "/opt/conda/lib/python3.7/logging/", line 1407, in error
    self._log(ERROR, msg, args, **kwargs)
File "/opt/conda/lib/python3.7/logging/", line 1514, in _log
File "/opt/conda/lib/python3.7/logging/", line 1523, in handle
    if (not self.disabled) and self.filter(record):
File "/opt/conda/lib/python3.7/logging/", line 751, in filter
    result = f.filter(record)
File "cython_src/prodigy/util.pyx", line 120, in prodigy.util.ServerErrorFilter.filter
File "/opt/conda/lib/python3.7/site-packages/uvicorn/protocols/http/", line 385, in run_asgi
    result = await app(self.scope, self.receive, self.send)
File "/opt/conda/lib/python3.7/site-packages/uvicorn/middleware/", line 45, in __call__
    return await, receive, send)
File "/opt/conda/lib/python3.7/site-packages/fastapi/", line 140, in __call__
    await super().__call__(scope, receive, send)
File "/opt/conda/lib/python3.7/site-packages/starlette/", line 134, in __call__
    await self.error_middleware(scope, receive, send)
File "/opt/conda/lib/python3.7/site-packages/starlette/middleware/", line 178, in __call__
    raise exc from None
File "/opt/conda/lib/python3.7/site-packages/starlette/middleware/", line 156, in __call__
    await, receive, _send)
File "/opt/conda/lib/python3.7/site-packages/starlette/middleware/", line 84, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
File "/opt/conda/lib/python3.7/site-packages/starlette/middleware/", line 140, in simple_response
    await, receive, send)
File "/opt/conda/lib/python3.7/site-packages/starlette/middleware/", line 25, in __call__
    response = await self.dispatch_func(request, self.call_next)
File "/opt/conda/lib/python3.7/site-packages/prodigy/", line 183, in reset_db_middleware
    response = await call_next(request)
File "/opt/conda/lib/python3.7/site-packages/starlette/middleware/", line 45, in call_next
File "/opt/conda/lib/python3.7/asyncio/", line 181, in result
    raise self._exception
File "/opt/conda/lib/python3.7/asyncio/", line 251, in __step
    result = coro.throw(exc)
File "/opt/conda/lib/python3.7/site-packages/starlette/middleware/", line 38, in coro
    await, receive, send)
File "/opt/conda/lib/python3.7/site-packages/starlette/", line 73, in __call__
    raise exc from None
File "/opt/conda/lib/python3.7/site-packages/starlette/", line 62, in __call__
    await, receive, sender)
File "/opt/conda/lib/python3.7/site-packages/starlette/", line 590, in __call__
    await route(scope, receive, send)
File "/opt/conda/lib/python3.7/site-packages/starlette/", line 208, in __call__
    await, receive, send)
File "/opt/conda/lib/python3.7/site-packages/starlette/", line 41, in app
    response = await func(request)
File "/opt/conda/lib/python3.7/site-packages/fastapi/", line 129, in app
    raw_response = await run_in_threadpool(, **values)
File "/opt/conda/lib/python3.7/site-packages/starlette/", line 25, in run_in_threadpool
    return await loop.run_in_executor(None, func, *args)
File "/opt/conda/lib/python3.7/asyncio/", line 263, in __await__
    yield self  # This tells Task to wait for completion.
File "/opt/conda/lib/python3.7/asyncio/", line 318, in __wakeup
File "/opt/conda/lib/python3.7/asyncio/", line 181, in result
    raise self._exception
File "/opt/conda/lib/python3.7/concurrent/futures/", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
File "/opt/conda/lib/python3.7/site-packages/prodigy/", line 399, in get_session_questions
    return _shared_get_questions(req.session_id, excludes=req.excludes)
File "/opt/conda/lib/python3.7/site-packages/prodigy/", line 370, in _shared_get_questions
    tasks = controller.get_questions(session_id=session_id, excludes=excludes)
File "cython_src/prodigy/core.pyx", line 138, in prodigy.core.Controller.get_questions
File "cython_src/prodigy/components/feeds.pyx", line 68, in prodigy.components.feeds.SharedFeed.get_questions
File "cython_src/prodigy/components/feeds.pyx", line 78, in prodigy.components.feeds.SharedFeed.get_next_batch
File "cython_src/prodigy/components/feeds.pyx", line 79, in prodigy.components.feeds.SharedFeed.get_next_batch
File "/root/src/prodigy/", line 140, in get_stream
    for e in stream:
File "cython_src/prodigy/components/sorters.pyx", line 98, in __iter__
File "cython_src/prodigy/components/sorters.pyx", line 29, in genexpr
File "/root/src/prodigy/", line 248, in score_examples
    for score, example in model(stream):
File "/root/src/prodigy/", line 96, in __call__
    for eg in stream:
File "/root/src/prodigy/", line 220, in filter_existing
    for e in stream:
File "cython_src/prodigy/components/filters.pyx", line 37, in filter_duplicates
File "/root/src/prodigy/", line 241, in set_hashes
    e['_task_hash'] = xxh32_intdigest(dataset.lower() + '-' + e['digest'] + '-' + e['label'].lower())
KeyError: 'label'

@tiangolo, @ines please let me know if I can help resolve this. Also, maybe it’s worth removing the solved tag to better reflect that this issue persists?

Okay, so if I understand this correctly, we did find out that the underlying problem here is that something in the async stack swallows the exception and it only surfaces as an INFO log, right?

Yes, I think that's what's going on.
However, the exception doesn't even surface as an INFO log message. The INFO message seen above comes from the load balancer, not from anything in Prodigy. I verified that by running Prodigy in Docker without a load balancer: nothing showed up in the logs. Instead, the exception only surfaces once Prodigy is exited.