TypeError: 'FullTransformerBatch' object is not iterable

Hi!

First of all, thanks very much for this great product and the support around it, which have been very useful and efficient for my research!

I am trying to use the en_core_web_trf base model, but I keep receiving the following error (there is no problem when I use en_core_web_lg):

TypeError("'FullTransformerBatch' object is not iterable")
Traceback (most recent call last):
  File "C:\Users\Asli\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\Asli\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\prodigy\__main__.py", line 61, in <module>
    controller = recipe(*args, use_plac=True)
  File "cython_src\prodigy\core.pyx", line 325, in prodigy.core.recipe.recipe_decorator.recipe_proxy
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\plac_core.py", line 367, in call
    cmd, result = parser.consume(arglist)
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\plac_core.py", line 232, in consume
    return cmd, self.func(*(args + varargs + extraopts), **kwargs)
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\prodigy\recipes\train.py", line 283, in train
    silent=silent,
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\prodigy\recipes\train.py", line 197, in _train
    spacy_train(nlp, output_path, use_gpu=gpu_id, stdout=stdout)
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\spacy\training\loop.py", line 122, in train
    raise e
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\spacy\training\loop.py", line 105, in train
    for batch, info, is_best_checkpoint in training_step_iterator:
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\spacy\training\loop.py", line 224, in train_while_improving
    score, other_scores = evaluate()
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\spacy\training\loop.py", line 281, in evaluate
    scores = nlp.evaluate(dev_corpus(nlp))
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\spacy\language.py", line 1385, in evaluate
    examples,
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\spacy\util.py", line 1488, in _pipe
    yield from proc.pipe(docs, **kwargs)
  File "spacy\pipeline\trainable_pipe.pyx", line 79, in pipe
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\spacy\util.py", line 1507, in raise_error
    raise e
  File "spacy\pipeline\trainable_pipe.pyx", line 75, in spacy.pipeline.trainable_pipe.TrainablePipe.pipe
  File "spacy\pipeline\tagger.pyx", line 111, in spacy.pipeline.tagger.Tagger.predict
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\thinc\model.py", line 315, in predict
    return self._func(self, X, is_train=False)[0]
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\thinc\layers\chain.py", line 54, in forward
    Y, inc_layer_grad = layer(X, is_train=is_train)
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\thinc\model.py", line 291, in __call__
    return self._func(self, X, is_train=is_train)
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\thinc\layers\chain.py", line 54, in forward
    Y, inc_layer_grad = layer(X, is_train=is_train)
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\thinc\model.py", line 291, in __call__
    return self._func(self, X, is_train=is_train)
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\spacy_transformers\layers\trfs2arrays.py", line 23, in forward
    for trf_data in trf_datas:
TypeError: 'FullTransformerBatch' object is not iterable

I use:
spacy==3.1.1
spacy-transformers==1.0.4
prodigy==1.11.1
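
For context, the training command was along these lines (the dataset and output directory names here are placeholders, not my real ones):

prodigy train ./model_output --ner my_dataset --base-model en_core_web_trf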

Thanks for the report! If you have the latest versions of spacy and spacy-transformers installed (which seems to be the case based on your details), this might be a spaCy bug that should have been fixed, but maybe still occurs in some situations :thinking: We'll look into it!
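
In the meantime, a quick way to confirm exactly which versions are installed in the environment (generic commands, nothing specific to this setup):

python -m spacy info
pip show spacy spacy-transformers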

Hi! This should now be fixed after upgrading to spacy-transformers 1.0.5.
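
For anyone else hitting this: upgrading inside the same virtualenv should be enough (pin the version more tightly if you need to), and then re-run the training command:

pip install -U spacy-transformers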

Hi @SofieVL,

Yes, I just installed spacy-transformers 1.0.5 and it works!

Thank you very much!

Happy to hear it, thanks for reporting back!

Hi All,

I'm trying to pretrain/fine-tune a transformer and I'm getting this error. I'm probably making some mistake in my configuration, but I'm at a loss as to what it would be. This is the only result I'm getting when searching for this error message. (I'm not using Prodigy, just spaCy; let me know if I need to go elsewhere.) Can you help?
-Kendra

Traceback

Some weights of the model checkpoint at roberta-base were not used when initializing RobertaModel: ['lm_head.dense.weight', 'lm_head.decoder.weight', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.bias', 'lm_head.dense.bias']
- This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Traceback (most recent call last):
...
  File \"/home/.../miniconda3/envs/default/lib/python3.9/site-packages/spacy/cli/pretrain.py\", line 70, in pretrain_cli
    pretrain(
  File \"/home/.../miniconda3/envs/default/lib/python3.9/site-packages/spacy/training/pretrain.py\", line 40, in pretrain
    model = create_pretraining_model(nlp, P)
  File \"/home/.../miniconda3/envs/default/lib/python3.9/site-packages/spacy/training/pretrain.py\", line 163, in create_pretraining_model
    model.initialize(X=[nlp.make_doc(\"Give it a doc to infer shapes\")])
  File \"/home/.../miniconda3/envs/default/lib/python3.9/site-packages/thinc/model.py\", line 299, in initialize
    self.init(self, X=X, Y=Y)
  File \"/home/.../miniconda3/envs/default/lib/python3.9/site-packages/spacy/ml/models/multi_task.py\", line 169, in mlm_initialize
    wrapped.initialize(X=X, Y=Y)
  File \"/home/.../miniconda3/envs/default/lib/python3.9/site-packages/thinc/model.py\", line 299, in initialize
    self.init(self, X=X, Y=Y)
  File \"/home/.../miniconda3/envs/default/lib/python3.9/site-packages/thinc/layers/chain.py\", line 88, in init
    layer.initialize(X=curr_input)
  File \"/home/.../miniconda3/envs/default/lib/python3.9/site-packages/thinc/model.py\", line 299, in initialize
    self.init(self, X=X, Y=Y)
  File \"/home/.../miniconda3/envs/default/lib/python3.9/site-packages/thinc/layers/chain.py\", line 90, in init
    curr_input = layer.predict(curr_input)
  File \"/home/.../miniconda3/envs/default/lib/python3.9/site-packages/thinc/model.py\", line 315, in predict
    return self._func(self, X, is_train=False)[0]
  File \"/home/.../miniconda3/envs/default/lib/python3.9/site-packages/thinc/layers/list2array.py\", line 22, in forward
    lengths = model.ops.asarray1i([len(x) for x in Xs])
TypeError: 'FullTransformerBatch' object is not iterable

Command and config excerpts

python -m spacy pretrain config_transformer.cfg ./pretrain_transformer

...
vectors = "en_core_web_md"
init_tok2vec = pretrain_transformer
verbose = true
raw_text = Data/pretrain.spacy

[system]
gpu_allocator = "pytorch"
seed = 0

[nlp]
lang = "en"
pipeline = ["transformer","textcat"]
batch_size = 128
disabled = []
before_creation = null
after_creation = null
after_pipeline_creation = null
tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}

[components]

[components.transformer]
factory = "transformer"
max_batch_items = 4096
set_extra_annotations = {"@annotation_setters":"spacy-transformers.null_annotation_setter.v1"}

[components.transformer.model]
@architectures = "spacy-transformers.TransformerModel.v1"
name = "roberta-base"

[components.transformer.model.get_spans]
@span_getters = "spacy-transformers.strided_spans.v1"
window = 128
stride = 96

[components.transformer.model.tokenizer_config]
use_fast = true


[components.textcat]
...
[components.textcat.model.tok2vec]
@architectures = "spacy-transformers.TransformerListener.v1"
grad_factor = 1.0
pooling = {"@layers":"reduce_mean.v1"}
upstream = "*"

[corpora]

...
[corpora.pretrain]
@readers = "spacy.Corpus.v1"
path = ${paths.raw_text}
gold_preproc = false
max_length = 500
limit = 0
augmenter = null


[initialize]
vectors = ${paths.vectors}
vocab_data = null
lookups = null
init_tok2vec = ${paths.init_tok2vec}
before_init = null
after_init = null

[initialize.components]

[initialize.tokenizer]


[pretraining]
max_epochs = 20
dropout = 0.2
n_save_every = null
component = "transformer"
layer = ""
corpus = corpora.pretrain

[pretraining.batcher]
@batchers = "spacy.batch_by_words.v1"
size = 3000
discard_oversize = false
tolerance = 0.2
get_length = null

[pretraining.objective]
@architectures = "spacy.PretrainVectors.v1"
maxout_pieces = 3
hidden_size = 300
loss = "cosine"

[pretraining.optimizer]
@optimizers = "Adam.v1"
beta1 = 0.9
beta2 = 0.999
L2_is_weight_decay = true
L2 = 0.01
grad_clip = 1.0
use_averages = true
eps = 1e-8
learn_rate = 0.001


[training]
...

Info about spaCy

  • spaCy version: 3.1.3
  • Platform: Linux-3.10.0-1160.42.2.el7.x86_64-x86_64-with-glibc2.17
  • Python version: 3.9.7
  • Pipelines: en_core_web_trf (3.1.0), en_core_web_md (3.1.0)
  • Spacy-transformers: 1.16.0

Hi @kchalkSGS! Can you double-check which version of spacy-transformers you have installed? There is no 1.16.0 release. As a temporary measure, can you try again with spacy-transformers==1.0.6?

Hi @ljvmiranda921, thank you! I got the 1.16.0 number from an install message that, I now realize, was not a valid way of checking this. pip says I actually have version 1.0.6.
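
For reference, the check was simply:

pip show spacy spacy-transformers

which reports the installed versions directly, unlike the messages printed during installation.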

Since posting, I've been wondering whether pretraining a transformer is simply not supported. https://spacy.io/usage/embeddings-transformers#pretraining does suggest pretraining only when you're not using a transformer. I have reasons for wanting to fine-tune the transformer, but maybe the functionality just isn't built in yet?

Note that I have since discovered the spaCy discussion board and posted this there as well: Finetuning transformer into TextCat · Discussion #9599 · explosion/spaCy · GitHub (including the full version of the config file).

Hi @kchalkSGS, thanks for reporting; we'll investigate whether there's a problem with spacy-transformers.
There should be support for transformers, as in this blog post. Also, "pretraining if you're not using a transformer" means that it's possible to match a transformer's accuracy by pretraining; it doesn't mean that pretraining isn't supported.

As there is a duplicate post over on spaCy's GitHub discussions, Finetuning transformer into TextCat · Discussion #9599 · explosion/spaCy · GitHub, we can continue that part of the discussion there.

Hi @ljvmiranda921

I tried that like this:

pip install spacy-transformers==1.0.6

with this config

[paths]
train = null
dev = null
vectors = null
init_tok2vec = null
raw_text = null

[system]
gpu_allocator = "pytorch"
seed = 0

[nlp]
lang = "en"
pipeline = ["transformer","tagger","parser","ner"]
batch_size = 128
disabled = []
before_creation = null
after_creation = null
after_pipeline_creation = null
tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}

[components]

[components.ner]
factory = "ner"
incorrect_spans_key = null
moves = null
scorer = {"@scorers":"spacy.ner_scorer.v1"}
update_with_oracle_cut_size = 100

[components.ner.model]
@architectures = "spacy.TransitionBasedParser.v2"
state_type = "ner"
extra_state_tokens = false
hidden_width = 64
maxout_pieces = 2
use_upper = false
nO = null

[components.ner.model.tok2vec]
@architectures = "spacy-transformers.TransformerListener.v1"
grad_factor = 1.0
pooling = {"@layers":"reduce_mean.v1"}
upstream = "*"

[components.parser]
factory = "parser"
learn_tokens = false
min_action_freq = 30
moves = null
scorer = {"@scorers":"spacy.parser_scorer.v1"}
update_with_oracle_cut_size = 100

[components.parser.model]
@architectures = "spacy.TransitionBasedParser.v2"
state_type = "parser"
extra_state_tokens = false
hidden_width = 128
maxout_pieces = 3
use_upper = false
nO = null

[components.parser.model.tok2vec]
@architectures = "spacy-transformers.TransformerListener.v1"
grad_factor = 1.0
pooling = {"@layers":"reduce_mean.v1"}
upstream = "*"

[components.tagger]
factory = "tagger"
neg_prefix = "!"
overwrite = false
scorer = {"@scorers":"spacy.tagger_scorer.v1"}

[components.tagger.model]
@architectures = "spacy.Tagger.v2"
nO = null
normalize = false

[components.tagger.model.tok2vec]
@architectures = "spacy-transformers.TransformerListener.v1"
grad_factor = 1.0
pooling = {"@layers":"reduce_mean.v1"}
upstream = "*"

[components.transformer]
factory = "transformer"
max_batch_items = 4096
set_extra_annotations = {"@annotation_setters":"spacy-transformers.null_annotation_setter.v1"}

[components.transformer.model]
@architectures = "spacy-transformers.TransformerModel.v3"
name = "roberta-base"
mixed_precision = false

[components.transformer.model.get_spans]
@span_getters = "spacy-transformers.strided_spans.v1"
window = 128
stride = 96

[components.transformer.model.grad_scaler_config]

[components.transformer.model.tokenizer_config]
use_fast = true

[components.transformer.model.transformer_config]

[corpora]

[corpora.dev]
@readers = "spacy.Corpus.v1"
path = ${paths.dev}
max_length = 0
gold_preproc = false
limit = 0
augmenter = null

[corpora.pretrain]
@readers = "spacy.JsonlCorpus.v1"
path = ${paths.raw_text}
min_length = 5
max_length = 500
limit = 0

[corpora.train]
@readers = "spacy.Corpus.v1"
path = ${paths.train}
max_length = 0
gold_preproc = false
limit = 0
augmenter = null

[training]
accumulate_gradient = 3
dev_corpus = "corpora.dev"
train_corpus = "corpora.train"
seed = ${system.seed}
gpu_allocator = ${system.gpu_allocator}
dropout = 0.1
patience = 1600
max_epochs = 0
max_steps = 20000
eval_frequency = 200
frozen_components = []
annotating_components = []
before_to_disk = null

[training.batcher]
@batchers = "spacy.batch_by_padded.v1"
discard_oversize = true
size = 2000
buffer = 256
get_length = null

[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = false

[training.optimizer]
@optimizers = "Adam.v1"
beta1 = 0.9
beta2 = 0.999
L2_is_weight_decay = true
L2 = 0.01
grad_clip = 1.0
use_averages = false
eps = 0.00000001

[training.optimizer.learn_rate]
@schedules = "warmup_linear.v1"
warmup_steps = 250
total_steps = 20000
initial_rate = 0.00005

[training.score_weights]
tag_acc = 0.33
dep_uas = 0.17
dep_las = 0.17
dep_las_per_type = null
sents_p = null
sents_r = null
sents_f = 0.0
ents_f = 0.33
ents_p = 0.0
ents_r = 0.0
ents_per_type = null

[pretraining]
max_epochs = 1000
dropout = 0.2
n_save_every = null
n_save_epoch = null
component = "transformer"
layer = ""
corpus = "corpora.pretrain"

[pretraining.batcher]
@batchers = "spacy.batch_by_words.v1"
size = 3000
discard_oversize = false
tolerance = 0.2
get_length = null

[pretraining.objective]
@architectures = "spacy.PretrainCharacters.v1"
maxout_pieces = 3
hidden_size = 300
n_characters = 4

[pretraining.optimizer]
@optimizers = "Adam.v1"
beta1 = 0.9
beta2 = 0.999
L2_is_weight_decay = true
L2 = 0.01
grad_clip = 1.0
use_averages = true
eps = 0.00000001
learn_rate = 0.001

[initialize]
vectors = ${paths.vectors}
init_tok2vec = ${paths.init_tok2vec}
vocab_data = null
lookups = null
before_init = null
after_init = null

[initialize.components]

[initialize.tokenizer]

and I'm still getting this:

  File "/home/ubuntu/.local/lib/python3.8/site-packages/spacy/util.py", line 139, in get
    raise RegistryError(
catalogue.RegistryError: [E893] Could not find function 'spacy-transformers.TransformerModel.v3' in function registry 'architectures'. If you're using a custom function, make sure the code is available. If the function is provided by a third-party package, e.g. spacy-transformers, make sure the package is installed in your environment.

Available names: spacy-legacy.CharacterEmbed.v1, spacy-legacy.EntityLinker.v1, spacy-legacy.HashEmbedCNN.v1, spacy-legacy.MaxoutWindowEncoder.v1, spacy-legacy.MishWindowEncoder.v1, spacy-legacy.MultiHashEmbed.v1, spacy-legacy.Tagger.v1, spacy-legacy.TextCatBOW.v1, spacy-legacy.TextCatCNN.v1, spacy-legacy.TextCatEnsemble.v1, spacy-legacy.Tok2Vec.v1, spacy-legacy.TransitionBasedParser.v1, spacy-transformers.Tok2VecTransformer.v1, spacy-transformers.TransformerListener.v1, spacy-transformers.TransformerModel.v1, spacy.CharacterEmbed.v2, spacy.EntityLinker.v2, spacy.HashEmbedCNN.v2, spacy.MaxoutWindowEncoder.v2, spacy.MishWindowEncoder.v2, spacy.MultiHashEmbed.v2, spacy.PretrainCharacters.v1, spacy.PretrainVectors.v1, spacy.SpanCategorizer.v1, spacy.Tagger.v2, spacy.TextCatBOW.v2, spacy.TextCatCNN.v2, spacy.TextCatEnsemble.v2, spacy.TextCatLowData.v1, spacy.Tok2Vec.v2, spacy.Tok2VecListener.v1, spacy.TorchBiLSTMEncoder.v1, spacy.TransitionBasedParser.v2
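
For reference, the registry listing above only includes spacy-transformers.TransformerModel.v1, which is what spacy-transformers 1.0.6 ships; TransformerModel.v3 only exists in later releases (presumably 1.1.0+). A sketch of what the transformer model block would look like with the v1 architecture, matching the first config in this thread and dropping the options v1 does not accept (mixed_precision, grad_scaler_config, transformer_config):

[components.transformer.model]
@architectures = "spacy-transformers.TransformerModel.v1"
name = "roberta-base"

[components.transformer.model.get_spans]
@span_getters = "spacy-transformers.strided_spans.v1"
window = 128
stride = 96

[components.transformer.model.tokenizer_config]
use_fast = true

The alternative would be to upgrade spacy-transformers (rather than pinning 1.0.6) so that the v3 architecture is registered.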