ner.llm.correct timeout for big dataset

I have configured a NER task that I wish to label with the help of GPT. Running the recipe with a source containing just a single JSONL entry works fine, but if I try to load the full source with around 4000 lines, I receive a timeout error from the OpenAI API after a few minutes.

My expectation for the behaviour of ner.llm.correct would be that it calls OpenAI on demand for each task. However, the timeout leads me to believe that this is not the case. Is there any documentation that can help me understand the workings of ner.llm.correct and the nature of the timeout?

My spacy-llm config:

[paths]
examples = "examples.json"

[nlp]
lang = "en"
pipeline = ["llm"]

[components]

[components.llm]
factory = "llm"

[components.llm.task]
@llm_tasks = "spacy.NER.v3"
labels = ["FOOD", "COMPOUND"]
description = Entity description

[components.llm.task.label_definitions]
LABELS

[components.llm.task.examples]
@misc = "spacy.FewShotReader.v1"
path = "${paths.examples}"

[components.llm.model]
@llm_models = "spacy.GPT-3-5.v1"

[components.llm.cache]
@llm_misc = "spacy.BatchCache.v1"
path = "local-ner-cache"
batch_size = 1
max_batches_in_mem = 10

I figured I might dive into the code a bit. In particular, notice this segment: it loops over each prompt in the batch and sends a separate request for each one. So unless I'm missing an important detail, it seems safe to say that each prompt is sent to OpenAI one by one.
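To make that concrete, here is a stand-alone paraphrase of what that segment boils down to (a sketch, not the actual spacy-llm source; post_chat is a hypothetical stand-in for the library's internal request helper):

import os
import requests

def post_chat(prompt: str) -> dict:
    # Hypothetical stand-in for spacy-llm's internal request helper.
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,  # spacy-llm's default max_request_time is 30s as well
    )
    response.raise_for_status()
    return response.json()

# The rendered NER prompts for one batch are sent one by one, synchronously,
# so each individual call is a fresh chance to hit a read timeout.
prompts = ["<rendered prompt for doc 1>", "<rendered prompt for doc 2>"]
responses = [post_chat(p) for p in prompts]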

It's still possible to see a timeout, though. If you're looping over a large batch, the odds of hitting at least one timeout do increase, and it could also be the case that there are some very big documents in your larger dataset. Could you check if this is the case?

I guess my advice at this point is to explore two remedies. You may already be aware of these, but I'm mentioning them just in case.

  1. You can use a cache to make sure that your pipeline doesn't call OpenAI if it already ran the same prompt before. It's explained in more detail here.
  2. You can run the fetch recipe first. That way, you can download a bunch of examples in batch mode upfront so that the flow does not break while you are annotating (see the example command after this list).
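For option 2, the invocation looks roughly like this (from memory, so double-check with prodigy ner.llm.fetch --help; the file names are placeholders):

prodigy ner.llm.fetch config.cfg ./source.jsonl ./fetched.jsonl

That saves the model's suggestions to disk upfront, so the annotation session itself never has to wait on OpenAI.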

Let me know if you'd appreciate more help and if you have more details to share.

Okay, here are some trials I did to get a better understanding of the issue:

I created a dataset with a single very long entry, which yields a max context length error from the OpenAI API, but no timeout.

ValueError: Request to OpenAI API failed: This model's maximum context length is 4097 tokens. However, your messages resulted in 9498 tokens. Please reduce the length of the messages.

If I cut it down below the token limit, it runs just fine. So I don't think the length of the dataset entries is the issue, since overly long entries would be called out by the OpenAI API.
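To double-check that across the whole file, something like this counts tokens per entry (a sketch; it assumes pip install tiktoken and that every JSONL line has a "text" field):

import json
import tiktoken

# gpt-3.5-turbo's tokenizer; these counts are a lower bound, since the
# rendered prompt adds the task description and few-shot examples on top.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
with open("source.jsonl", encoding="utf-8") as f:
    for i, line in enumerate(f):
        n_tokens = len(enc.encode(json.loads(line)["text"]))
        if n_tokens > 3000:  # leave headroom below the 4097-token limit
            print(f"line {i}: {n_tokens} tokens")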

The full timeout error message:

Task exception was never retrieved
future: <Task finished name='Task-8' coro=<RequestResponseCycle.run_asgi() done, defined at /home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py:402> exception=TimeoutError("Request time out. Check your network connection and the API's availability.")>
Traceback (most recent call last):
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/urllib3/connectionpool.py", line 537, in _make_request
    response = conn.getresponse()
               ^^^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/urllib3/connection.py", line 461, in getresponse
    httplib_response = super().getresponse()
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/http/client.py", line 1374, in getresponse
    response.begin()
  File "/usr/lib/python3.11/http/client.py", line 318, in begin
    version, status, reason = self._read_status()
                              ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/http/client.py", line 279, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/socket.py", line 706, in readinto
    return self._sock.recv_into(b)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/ssl.py", line 1278, in recv_into
    return self.read(nbytes, buffer)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/ssl.py", line 1134, in read
    return self._sslobj.read(len, buffer)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TimeoutError: The read operation timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
           ^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/urllib3/connectionpool.py", line 845, in urlopen
    retries = retries.increment(
              ^^^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/urllib3/util/retry.py", line 470, in increment
    raise reraise(type(error), error, _stacktrace)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/urllib3/util/util.py", line 39, in reraise
    raise value
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/urllib3/connectionpool.py", line 791, in urlopen
    response = self._make_request(
               ^^^^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/urllib3/connectionpool.py", line 539, in _make_request
    self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/urllib3/connectionpool.py", line 371, in _raise_timeout
    raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=30)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/spacy_llm/models/rest/base.py", line 120, in _call_api
    return call_method(url, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/requests/api.py", line 115, in post
    return request("post", url, data=data, json=json, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/requests/adapters.py", line 532, in send
    raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=30)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 409, in run_asgi
    self.logger.error(msg, exc_info=exc)
  File "/usr/lib/python3.11/logging/__init__.py", line 1518, in error
    self._log(ERROR, msg, args, **kwargs)
  File "/usr/lib/python3.11/logging/__init__.py", line 1634, in _log
    self.handle(record)
  File "/usr/lib/python3.11/logging/__init__.py", line 1643, in handle
    if (not self.disabled) and self.filter(record):
                               ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/logging/__init__.py", line 830, in filter
    result = f.filter(record)
             ^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/prodigy/__init__.py", line 21, in filter
    raise rec.exc_info[1]
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 404, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/fastapi/applications.py", line 1115, in __call__
    await super().__call__(scope, receive, send)
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/starlette/middleware/cors.py", line 91, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/starlette/middleware/cors.py", line 146, in simple_response
    await self.app(scope, receive, send)
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
    raise e
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/starlette/routing.py", line 66, in app
    response = await func(request)
               ^^^^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/fastapi/routing.py", line 274, in app
    raw_response = await run_endpoint_function(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/fastapi/routing.py", line 193, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/prodigy/app.py", line 501, in get_session_questions
    return _shared_get_questions(controller, req.session_id, excludes=req.excludes)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/prodigy/app.py", line 456, in _shared_get_questions
    tasks = controller.get_questions(session_id=session_id, excludes=excludes)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "cython_src/prodigy/core.pyx", line 528, in prodigy.core.Controller.get_questions
  File "cython_src/prodigy/core.pyx", line 529, in prodigy.core.Controller.get_questions
  File "cython_src/prodigy/components/session.pyx", line 129, in prodigy.components.session.Session.get_questions
  File "cython_src/prodigy/components/stream.pyx", line 298, in iter_queue
  File "cython_src/prodigy/components/stream.pyx", line 278, in prodigy.components.stream.Stream.get_next
  File "cython_src/prodigy/components/stream.pyx", line 317, in prodigy.components.stream.Stream._get_from_iterator
  File "cython_src/prodigy/components/preprocess.pyx", line 572, in make_ner_suggestions
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/tqdm/std.py", line 1170, in __iter__
    for obj in iterable:
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/spacy/language.py", line 1574, in pipe
    for doc in docs:
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/spacy/language.py", line 1618, in pipe
    for doc in docs:
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/spacy/util.py", line 1685, in _pipe
    yield from proc.pipe(docs, **kwargs)
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/spacy_llm/pipeline/llm.py", line 186, in pipe
    error_handler(self._name, self, doc_batch, e)
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/spacy/util.py", line 1704, in raise_error
    raise e
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/spacy_llm/pipeline/llm.py", line 184, in pipe
    yield from iter(self._process_docs(doc_batch))
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/spacy_llm/pipeline/llm.py", line 210, in _process_docs
    responses_iters = tee(self._model(prompts_iters[0]), n_iters)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/spacy_llm/models/rest/openai/model.py", line 115, in __call__
    responses = _request(
                ^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/spacy_llm/models/rest/openai/model.py", line 86, in _request
    r = self.retry(
        ^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/spacy_llm/models/rest/base.py", line 140, in retry
    response = _call_api(i + 1)
               ^^^^^^^^^^^^^^^^
  File "/home/user/.local/share/virtualenvs/prodigy-wI0GCwLa/lib/python3.11/site-packages/spacy_llm/models/rest/base.py", line 125, in _call_api
    raise TimeoutError(
TimeoutError: Request time out. Check your network connection and the API's availability.

I fear there's not much we can do about those timeouts, because OpenAI can rate-limit at their own discretion. A colleague did have one idea though: does it help to add a time.sleep(1) in between some of these requests? Mainly to confirm that OpenAI isn't punishing you for sending a lot of requests in quick succession.
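If you'd like to test that without patching Prodigy, a stand-alone loop like this should replay the same calls (a sketch; it assumes the config.cfg from above and a source.jsonl where each line has a "text" field):

import json
import time

from spacy_llm.util import assemble

# Builds an llm pipeline from the same config the recipe uses.
nlp = assemble("config.cfg")
with open("source.jsonl", encoding="utf-8") as f:
    for i, line in enumerate(f):
        doc = nlp(json.loads(line)["text"])
        print(i, [(ent.text, ent.label_) for ent in doc.ents])
        time.sleep(1)  # pause between requests to rule out rate-limiting

If the timeouts disappear with the sleep in place, that's a strong hint that request pacing is the problem.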

A colleague of mine suggested that the Azure GPT endpoints are more stable and may be a valid alternative here. I've not done a proper benchmark to confirm it, but it may be worth a try.

The Task-8 in the error makes me believe that Prodigy calls for more than just one prompt, or do I misunderstand the terminology here? So configuring Prodigy to call on demand should have the same effect as introducing a sleep into the loop, correct?

I've checked with a colleague who works on spacy-llm, and he confirms this feels like the "same old timeout issue over at OpenAI". In our experience the OpenAI API sometimes just stalls or rate-limits for no externally conclusive reason. I also imagine that it's worse now after their announcements.

The Task-8 that you see in the error message comes from the async web server that Prodigy runs on, not from the OpenAI calls themselves: the traceback starts in uvicorn's RequestResponseCycle, which is the machinery serving the annotation UI. The spacy-llm plugin itself doesn't do async requests, partly because of rate-limiting on OpenAI's end.
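One more knob that might be worth trying: the REST models in spacy-llm accept retry and timeout settings. I believe recent versions expose max_tries, interval and max_request_time (the read timeout=30 in your traceback matches the 30s default), but double-check the signature in your installed version. Something like:

[components.llm.model]
@llm_models = "spacy.GPT-3-5.v1"
max_tries = 5
interval = 3.0
max_request_time = 60.0

would retry more patiently and wait up to a minute per request before giving up.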

Okay, thanks for the reply. I will have a look at Azure then.