The specified model 'gpt-4-32k' is not available.

Error when running prodigy ner.llm.correct:

ValueError: The specified model 'gpt-4-32k' is not available. Choices are: ['ada-code-search-code', 'ada-code-search-text', 'ada-search-document', 'ada-search-query', 'babbage-002', 'babbage-search-document', 'babbage-search-query', 'curie-search-document', 'curie-search-query', 'dall-e-2', 'dall-e-3', 'davinci-002', 'davinci-search-document', 'davinci-search-query', 'gpt-3.5-turbo', 'gpt-3.5-turbo-0301', 'gpt-3.5-turbo-0613', 'gpt-3.5-turbo-1106', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-instruct', 'gpt-3.5-turbo-instruct-0914', 'gpt-4', 'gpt-4-0613', 'gpt-4-1106-preview', 'gpt-4-vision-preview', 'text-embedding-ada-002', 'tts-1', 'tts-1-1106', 'tts-1-hd', 'tts-1-hd-1106', 'whisper-1']

spacy-llm-config.cfg:

[nlp]
lang = "nl"
pipeline = ["llm"]

[components]

[components.llm]
factory = "llm"
save_io = true

[components.llm.task]
@llm_tasks = "spacy.NER.v2"
labels = ["ORG", "PERSON"]

[components.llm.task.label_definitions]
ORG = "Extract the names of the companies and their associated branding."
PERSON = "Extract the names of individuals."

[components.llm.model]
@llm_models = "spacy.GPT-4.v1"
name = "gpt-4-32k"
config = {"temperature": 0.3}

[components.llm.cache]
@llm_misc = "spacy.BatchCache.v1"
path = "local-ner-cache"
batch_size = 3
max_batches_in_mem = 10

Workaround: we can simply use @llm_models = "spacy.GPT-4.v3", which allows us to use any model name supported by the OpenAI API.
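A minimal sketch of the corrected model block under that workaround (the model name shown is an example; GPT-4.v3 passes the name through to the OpenAI API, so any model your account can access should work):

[components.llm.model]
@llm_models = "spacy.GPT-4.v3"
name = "gpt-4"
config = {"temperature": 0.3}

The rest of the config (task, cache) can stay unchanged; only the @llm_models registry entry and, if needed, the name value differ from the original.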

Hi @nikolaysm ,

Thanks for posting a workaround. This name-related ValueError comes from spacy-llm, so I'll forward it to the team.

Hi @nikolaysm ,

Thanks again for pointing out the outdated information in the docs. The vendor's API changes so often that it's become a challenge to keep up. The spaCy team will refactor the docs to make them more future-proof.