Error loading transformer model into ner.correct

Hi there,

I am currently stuck trying to load a custom NER model trained with spaCy 3.2.3 and spacy-transformers 1.1.6.

When loading the model into prodigy (prodigy-1.11.6-cp39-cp39-linux_x86_64.whl), I get the following error:

ner_correct_compose-prodigy | catalogue.RegistryError: [E893] Could not find function 'spacy-transformers.Tok2VecTransformer.v3' in function registry 'architectures'. If you're using a custom function, make sure the code is available. If the function is provided by a third-party package, e.g. spacy-transformers, make sure the package is installed in your environment.
ner_correct_compose-prodigy |
ner_correct_compose-prodigy | Available names: spacy-legacy.CharacterEmbed.v1, spacy-legacy.EntityLinker.v1, spacy-legacy.HashEmbedCNN.v1, spacy-legacy.MaxoutWindowEncoder.v1, spacy-legacy.MishWindowEncoder.v1, spacy-legacy.MultiHashEmbed.v1, spacy-legacy.Tagger.v1, spacy-legacy.TextCatBOW.v1, spacy-legacy.TextCatCNN.v1, spacy-legacy.TextCatEnsemble.v1, spacy-legacy.Tok2Vec.v1, spacy-legacy.TransitionBasedParser.v1, spacy-transformers.Tok2VecTransformer.v1, spacy-transformers.TransformerListener.v1, spacy-transformers.TransformerModel.v1, spacy.CharacterEmbed.v2, spacy.EntityLinker.v1, spacy.HashEmbedCNN.v2, spacy.MaxoutWindowEncoder.v2, spacy.MishWindowEncoder.v2, spacy.MultiHashEmbed.v2, spacy.PretrainCharacters.v1, spacy.PretrainVectors.v1, spacy.SpanCategorizer.v1, spacy.Tagger.v1, spacy.TextCatBOW.v2, spacy.TextCatCNN.v2, spacy.TextCatEnsemble.v2, spacy.TextCatLowData.v1, spacy.Tok2Vec.v2, spacy.Tok2VecListener.v1, spacy.TorchBiLSTMEncoder.v1, spacy.TransitionBasedParser.v2

despite the fact that my training config shows:

[components.transformer.model]
@architectures = "spacy-transformers.TransformerModel.v1"
name = "roberta-base"

I am running Prodigy inside a Docker container with Python 3.9, spaCy 3.1.3 (hopefully not a problem for a model trained with 3.2.3), and spacy-transformers 1.1.4 installed.
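
For reference, this is roughly how I'm comparing the installed versions inside the container with my training environment (the package list below is just my guess at which ones matter here):

import importlib.metadata as md  # Python 3.8+

# Print the installed version of each relevant package, or flag it as missing.
for pkg in ("spacy", "spacy-transformers", "spacy-legacy", "thinc", "prodigy"):
    try:
        print(f"{pkg}=={md.version(pkg)}")
    except md.PackageNotFoundError:
        print(f"{pkg} is not installed")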

Please help :pray:

See full error and config below:

ner_correct_compose-prodigy |   warnings.warn(warn_msg)
ner_correct_compose-prodigy | Traceback (most recent call last):
ner_correct_compose-prodigy |   File "/usr/local/lib/python3.9/runpy.py", line 197, in _run_module_as_main
ner_correct_compose-prodigy |     return _run_code(code, main_globals, None,
ner_correct_compose-prodigy |   File "/usr/local/lib/python3.9/runpy.py", line 87, in _run_code
ner_correct_compose-prodigy |     exec(code, run_globals)
ner_correct_compose-prodigy |   File "/usr/local/lib/python3.9/site-packages/prodigy/__main__.py", line 61, in <module>
ner_correct_compose-prodigy |     controller = recipe(*args, use_plac=True)
ner_correct_compose-prodigy |   File "cython_src/prodigy/core.pyx", line 329, in prodigy.core.recipe.recipe_decorator.recipe_proxy
ner_correct_compose-prodigy |   File "/usr/local/lib/python3.9/site-packages/plac_core.py", line 367, in call
ner_correct_compose-prodigy |     cmd, result = parser.consume(arglist)
ner_correct_compose-prodigy |   File "/usr/local/lib/python3.9/site-packages/plac_core.py", line 232, in consume
ner_correct_compose-prodigy |     return cmd, self.func(*(args + varargs + extraopts), **kwargs)
ner_correct_compose-prodigy |   File "/usr/local/lib/python3.9/site-packages/prodigy/recipes/ner.py", line 215, in correct
ner_correct_compose-prodigy |     nlp = spacy.load(spacy_model)
ner_correct_compose-prodigy |   File "/usr/local/lib/python3.9/site-packages/spacy/__init__.py", line 51, in load
ner_correct_compose-prodigy |     return util.load_model(
ner_correct_compose-prodigy |   File "/usr/local/lib/python3.9/site-packages/spacy/util.py", line 349, in load_model
ner_correct_compose-prodigy |     return load_model_from_path(Path(name), **kwargs)
ner_correct_compose-prodigy |   File "/usr/local/lib/python3.9/site-packages/spacy/util.py", line 415, in load_model_from_path
ner_correct_compose-prodigy |     nlp = load_model_from_config(config, vocab=vocab, disable=disable, exclude=exclude)
ner_correct_compose-prodigy |   File "/usr/local/lib/python3.9/site-packages/spacy/util.py", line 452, in load_model_from_config
ner_correct_compose-prodigy |     nlp = lang_cls.from_config(
ner_correct_compose-prodigy |   File "/usr/local/lib/python3.9/site-packages/spacy/language.py", line 1714, in from_config
ner_correct_compose-prodigy |     nlp.add_pipe(
ner_correct_compose-prodigy |   File "/usr/local/lib/python3.9/site-packages/spacy/language.py", line 776, in add_pipe
ner_correct_compose-prodigy |     pipe_component = self.create_pipe(
ner_correct_compose-prodigy |   File "/usr/local/lib/python3.9/site-packages/spacy/language.py", line 660, in create_pipe
ner_correct_compose-prodigy |     resolved = registry.resolve(cfg, validate=validate)
ner_correct_compose-prodigy |   File "/usr/local/lib/python3.9/site-packages/thinc/config.py", line 729, in resolve
ner_correct_compose-prodigy |     resolved, _ = cls._make(
ner_correct_compose-prodigy |   File "/usr/local/lib/python3.9/site-packages/thinc/config.py", line 778, in _make
ner_correct_compose-prodigy |     filled, _, resolved = cls._fill(
ner_correct_compose-prodigy |   File "/usr/local/lib/python3.9/site-packages/thinc/config.py", line 833, in _fill
ner_correct_compose-prodigy |     filled[key], validation[v_key], final[key] = cls._fill(
ner_correct_compose-prodigy |   File "/usr/local/lib/python3.9/site-packages/thinc/config.py", line 833, in _fill
ner_correct_compose-prodigy |     filled[key], validation[v_key], final[key] = cls._fill(
ner_correct_compose-prodigy |   File "/usr/local/lib/python3.9/site-packages/thinc/config.py", line 832, in _fill
ner_correct_compose-prodigy |     promise_schema = cls.make_promise_schema(value, resolve=resolve)
ner_correct_compose-prodigy |   File "/usr/local/lib/python3.9/site-packages/thinc/config.py", line 1023, in make_promise_schema
ner_correct_compose-prodigy |     func = cls.get(reg_name, func_name)
ner_correct_compose-prodigy |   File "/usr/local/lib/python3.9/site-packages/spacy/util.py", line 136, in get
ner_correct_compose-prodigy |     raise RegistryError(
ner_correct_compose-prodigy | catalogue.RegistryError: [E893] Could not find function 'spacy-transformers.Tok2VecTransformer.v3' in function registry 'architectures'. If you're using a custom function, make sure the code is available. If the function is provided by a third-party package, e.g. spacy-transformers, make sure the package is installed in your environment.
ner_correct_compose-prodigy |
ner_correct_compose-prodigy | Available names: spacy-legacy.CharacterEmbed.v1, spacy-legacy.EntityLinker.v1, spacy-legacy.HashEmbedCNN.v1, spacy-legacy.MaxoutWindowEncoder.v1, spacy-legacy.MishWindowEncoder.v1, spacy-legacy.MultiHashEmbed.v1, spacy-legacy.Tagger.v1, spacy-legacy.TextCatBOW.v1, spacy-legacy.TextCatCNN.v1, spacy-legacy.TextCatEnsemble.v1, spacy-legacy.Tok2Vec.v1, spacy-legacy.TransitionBasedParser.v1, spacy-transformers.Tok2VecTransformer.v1, spacy-transformers.TransformerListener.v1, spacy-transformers.TransformerModel.v1, spacy.CharacterEmbed.v2, spacy.EntityLinker.v1, spacy.HashEmbedCNN.v2, spacy.MaxoutWindowEncoder.v2, spacy.MishWindowEncoder.v2, spacy.MultiHashEmbed.v2, spacy.PretrainCharacters.v1, spacy.PretrainVectors.v1, spacy.SpanCategorizer.v1, spacy.Tagger.v1, spacy.TextCatBOW.v2, spacy.TextCatCNN.v2, spacy.TextCatEnsemble.v2, spacy.TextCatLowData.v1, spacy.Tok2Vec.v2, spacy.Tok2VecListener.v1, spacy.TorchBiLSTMEncoder.v1, spacy.TransitionBasedParser.v2
[paths]
train = null
dev = null
vectors = null
init_tok2vec = null

[system]
gpu_allocator = "pytorch"
seed = 0

[nlp]
lang = "en"
pipeline = ["transformer","tagger","parser","ner"]
batch_size = 128
disabled = []
before_creation = null
after_creation = null
after_pipeline_creation = null
tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}

[components]

[components.ner]
factory = "ner"
incorrect_spans_key = null
moves = null
scorer = {"@scorers":"spacy.ner_scorer.v1"}
update_with_oracle_cut_size = 100

[components.ner.model]
@architectures = "spacy.TransitionBasedParser.v2"
state_type = "ner"
extra_state_tokens = false
hidden_width = 64
maxout_pieces = 2
use_upper = false
nO = null

[components.ner.model.tok2vec]
@architectures = "spacy-transformers.TransformerListener.v1"
grad_factor = 1.0
pooling = {"@layers":"reduce_mean.v1"}
upstream = "*"

[components.parser]
source = "en_core_web_trf"
replace_listeners = ["model.tok2vec"]

[components.tagger]
source = "en_core_web_trf"
replace_listeners = ["model.tok2vec"]

[components.transformer]
factory = "transformer"
max_batch_items = 4096
set_extra_annotations = {"@annotation_setters":"spacy-transformers.null_annotation_setter.v1"}

[components.transformer.model]
@architectures = "spacy-transformers.TransformerModel.v1"
name = "roberta-base"

[components.transformer.model.get_spans]
@span_getters = "spacy-transformers.strided_spans.v1"
window = 128
stride = 96

[components.transformer.model.tokenizer_config]
use_fast = true

[corpora]

[corpora.dev]
@readers = "spacy.Corpus.v1"
path = ${paths.dev}
max_length = 0
gold_preproc = false
limit = 0
augmenter = null

[corpora.train]
@readers = "spacy.Corpus.v1"
path = ${paths.train}
max_length = 500
gold_preproc = false
limit = 0
augmenter = null

[training]
accumulate_gradient = 3
dev_corpus = "corpora.dev"
train_corpus = "corpora.train"
seed = ${system.seed}
gpu_allocator = ${system.gpu_allocator}
dropout = 0.1
patience = 1600
max_epochs = 0
max_steps = 20000
eval_frequency = 200
frozen_components = []
before_to_disk = null
annotating_components = []

[training.batcher]
@batchers = "spacy.batch_by_padded.v1"
discard_oversize = true
size = 2000
buffer = 256
get_length = null

[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = false

[training.optimizer]
@optimizers = "Adam.v1"
beta1 = 0.9
beta2 = 0.999
L2_is_weight_decay = true
L2 = 0.01
grad_clip = 1.0
use_averages = false
eps = 0.00000001

[training.optimizer.learn_rate]
@schedules = "warmup_linear.v1"
warmup_steps = 250
total_steps = 20000
initial_rate = 0.00005

[training.score_weights]
tag_acc = 0.33
dep_uas = 0.17
dep_las = 0.17
dep_las_per_type = null
sents_p = null
sents_r = null
sents_f = 0.0
ents_f = 0.33
ents_p = 0.0
ents_r = 0.0
ents_per_type = null

[pretraining]

[initialize]
vectors = null
init_tok2vec = ${paths.init_tok2vec}
vocab_data = null
lookups = null
before_init = null
after_init = null

[initialize.components]

[initialize.tokenizer]

Hi @BlakeList,

There were some changes between spaCy 3.1 and 3.2 (and the corresponding spacy-transformers releases) that may affect how your model is read, since the Docker container is running an older version than the one you trained with. Is it possible to make your Docker environment match your training environment exactly? I presume this is mostly a library compatibility error.
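
One way to see what the pipeline expects is to look at its meta.json, which records the spaCy version and package requirements it was saved with (the path below is just a placeholder for wherever your exported model directory lives):

import json
from pathlib import Path

# Placeholder path: point this at your exported pipeline directory.
meta = json.loads(Path("/path/to/your_model/meta.json").read_text())

# The pipeline records the spaCy version it was trained with and the
# packages it needs; these should match what the Docker image installs.
print("spacy_version:", meta.get("spacy_version"))
print("requirements:", meta.get("requirements"))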

Hi @ljvmiranda921,

The issue was the Prodigy version being 1.11.6 instead of 1.11.7. I will also upgrade spacy-transformers.
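
For anyone hitting the same thing, a quick sanity check after upgrading is just to load the pipeline directly (the model path is a placeholder):

import spacy

# Placeholder path: the exported transformer pipeline directory.
nlp = spacy.load("/path/to/your_model")

# If the registry error is gone, the pipeline loads and lists its components.
print(nlp.pipe_names)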

Thanks,
Blake
