TypeError: 'FullTransformerBatch' object is not iterable

Hi!

First, thanks very much for this great product and the support — both have been very useful and efficient for my research!

I am trying to use the en_core_web_trf base model, but I keep receiving the following error (there is no problem when I use en_core_web_lg).

TypeError("'FullTransformerBatch' object is not iterable")
Traceback (most recent call last):
  File "C:\Users\Asli\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\Asli\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\prodigy\__main__.py", line 61, in <module>
    controller = recipe(*args, use_plac=True)
  File "cython_src\prodigy\core.pyx", line 325, in prodigy.core.recipe.recipe_decorator.recipe_proxy
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\plac_core.py", line 367, in call
    cmd, result = parser.consume(arglist)
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\plac_core.py", line 232, in consume
    return cmd, self.func(*(args + varargs + extraopts), **kwargs)
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\prodigy\recipes\train.py", line 283, in train
    silent=silent,
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\prodigy\recipes\train.py", line 197, in _train
    spacy_train(nlp, output_path, use_gpu=gpu_id, stdout=stdout)
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\spacy\training\loop.py", line 122, in train
    raise e
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\spacy\training\loop.py", line 105, in train
    for batch, info, is_best_checkpoint in training_step_iterator:
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\spacy\training\loop.py", line 224, in train_while_improving
    score, other_scores = evaluate()
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\spacy\training\loop.py", line 281, in evaluate
    scores = nlp.evaluate(dev_corpus(nlp))
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\spacy\language.py", line 1385, in evaluate
    examples,
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\spacy\util.py", line 1488, in _pipe
    yield from proc.pipe(docs, **kwargs)
  File "spacy\pipeline\trainable_pipe.pyx", line 79, in pipe
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\spacy\util.py", line 1507, in raise_error
    raise e
  File "spacy\pipeline\trainable_pipe.pyx", line 75, in spacy.pipeline.trainable_pipe.TrainablePipe.pipe
  File "spacy\pipeline\tagger.pyx", line 111, in spacy.pipeline.tagger.Tagger.predict
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\thinc\model.py", line 315, in predict
    return self._func(self, X, is_train=False)[0]
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\thinc\layers\chain.py", line 54, in forward
    Y, inc_layer_grad = layer(X, is_train=is_train)
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\thinc\model.py", line 291, in __call__
    return self._func(self, X, is_train=is_train)
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\thinc\layers\chain.py", line 54, in forward
    Y, inc_layer_grad = layer(X, is_train=is_train)
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\thinc\model.py", line 291, in __call__
    return self._func(self, X, is_train=is_train)
  File "C:\.....\01_Prodigy\prodigy-1.11.1\venv\lib\site-packages\spacy_transformers\layers\trfs2arrays.py", line 23, in forward
    for trf_data in trf_datas:
TypeError: 'FullTransformerBatch' object is not iterable
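For context, the last frame iterates over its input (`for trf_data in trf_datas:`), which assumes a list of per-document data, but here it receives a single batch object that defines no `__iter__`. The sketch below reproduces that failure mode with hypothetical stand-in classes (they are not the real spacy-transformers API, though the real `FullTransformerBatch` does expose a per-doc list via `doc_data`):

```python
# Stand-in classes illustrating the type mismatch behind the error.
class TransformerData:
    """Stand-in for the per-document transformer output."""
    pass

class FullTransformerBatch:
    """Stand-in: holds per-doc data but deliberately defines no __iter__."""
    def __init__(self, doc_data):
        self.doc_data = doc_data  # list of TransformerData

def trfs2arrays_forward(trf_datas):
    # Like the failing layer, this assumes an iterable of per-doc objects.
    return [type(d).__name__ for d in trf_datas]

batch = FullTransformerBatch([TransformerData(), TransformerData()])

try:
    trfs2arrays_forward(batch)  # iterating the batch object itself fails
except TypeError as e:
    print(e)  # → 'FullTransformerBatch' object is not iterable

print(trfs2arrays_forward(batch.doc_data))  # → ['TransformerData', 'TransformerData']
```

In the real pipeline the fix landed upstream (the layer receives the per-doc list rather than the whole batch), so this is only meant to make the traceback readable, not to suggest a workaround.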

I use:
spacy==3.1.1
spacy-transformers==1.0.4
prodigy==1.11.1

Thanks for the report! If you have the latest versions of spacy and spacy-transformers installed (which seems to be the case based on your details), this might be a spaCy bug that should have been fixed, but maybe still occurs in some situations :thinking: We'll look into it!

Hi! This should now be fixed after upgrading to spacy-transformers 1.0.5.
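For anyone hitting this later: a minimal, standard-library-only way to confirm that the installed spacy-transformers is at or above the release carrying the fix (1.0.5). This assumes the PyPI distribution name "spacy-transformers" and requires Python 3.8+ for `importlib.metadata`:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(dist_name):
    """Return the installed version string, or None if not installed."""
    try:
        return version(dist_name)
    except PackageNotFoundError:
        return None

def at_least(ver, minimum):
    """Compare dotted version strings numerically on their first three parts."""
    def parse(v):
        return tuple(int(p) for p in v.split(".")[:3])
    return parse(ver) >= parse(minimum)

v = installed_version("spacy-transformers")
if v is None:
    print("spacy-transformers is not installed")
else:
    print("spacy-transformers", v, "OK" if at_least(v, "1.0.5") else "needs upgrade")
```

(`pip show spacy-transformers` or `python -m spacy info` give the same answer without any code.)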

Hi @SofieVL ,

Yes, I just installed the spacy-transformers 1.0.5 and it works!

Thank you very much!


Happy to hear it, thanks for reporting back!

Hi All,

I'm trying to pretrain/fine-tune a transformer and getting this error. I'm probably making some mistake in my configuration, but I'm at a loss as to what it could be, and this thread is the only result I get when searching for the error message. (I'm not using Prodigy, just spaCy — let me know if I need to go elsewhere.) Can you help?
-Kendra

Traceback

Some weights of the model checkpoint at roberta-base were not used when initializing RobertaModel: ['lm_head.dense.weight', 'lm_head.decoder.weight', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.bias', 'lm_head.dense.bias']
- This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Traceback (most recent call last):
...
  File "/home/.../miniconda3/envs/default/lib/python3.9/site-packages/spacy/cli/pretrain.py", line 70, in pretrain_cli
    pretrain(
  File "/home/.../miniconda3/envs/default/lib/python3.9/site-packages/spacy/training/pretrain.py", line 40, in pretrain
    model = create_pretraining_model(nlp, P)
  File "/home/.../miniconda3/envs/default/lib/python3.9/site-packages/spacy/training/pretrain.py", line 163, in create_pretraining_model
    model.initialize(X=[nlp.make_doc("Give it a doc to infer shapes")])
  File "/home/.../miniconda3/envs/default/lib/python3.9/site-packages/thinc/model.py", line 299, in initialize
    self.init(self, X=X, Y=Y)
  File "/home/.../miniconda3/envs/default/lib/python3.9/site-packages/spacy/ml/models/multi_task.py", line 169, in mlm_initialize
    wrapped.initialize(X=X, Y=Y)
  File "/home/.../miniconda3/envs/default/lib/python3.9/site-packages/thinc/model.py", line 299, in initialize
    self.init(self, X=X, Y=Y)
  File "/home/.../miniconda3/envs/default/lib/python3.9/site-packages/thinc/layers/chain.py", line 88, in init
    layer.initialize(X=curr_input)
  File "/home/.../miniconda3/envs/default/lib/python3.9/site-packages/thinc/model.py", line 299, in initialize
    self.init(self, X=X, Y=Y)
  File "/home/.../miniconda3/envs/default/lib/python3.9/site-packages/thinc/layers/chain.py", line 90, in init
    curr_input = layer.predict(curr_input)
  File "/home/.../miniconda3/envs/default/lib/python3.9/site-packages/thinc/model.py", line 315, in predict
    return self._func(self, X, is_train=False)[0]
  File "/home/.../miniconda3/envs/default/lib/python3.9/site-packages/thinc/layers/list2array.py", line 22, in forward
    lengths = model.ops.asarray1i([len(x) for x in Xs])
TypeError: 'FullTransformerBatch' object is not iterable

Command and config excerpts

python -m spacy pretrain config_transformer.cfg ./pretrain_transformer

...
vectors = "en_core_web_md"
init_tok2vec = pretrain_transformer
verbose = true
raw_text = Data/pretrain.spacy

[system]
gpu_allocator = "pytorch"
seed = 0

[nlp]
lang = "en"
pipeline = ["transformer","textcat"]
batch_size = 128
disabled = []
before_creation = null
after_creation = null
after_pipeline_creation = null
tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}

[components]

[components.transformer]
factory = "transformer"
max_batch_items = 4096
set_extra_annotations = {"@annotation_setters":"spacy-transformers.null_annotation_setter.v1"}

[components.transformer.model]
@architectures = "spacy-transformers.TransformerModel.v1"
name = "roberta-base"

[components.transformer.model.get_spans]
@span_getters = "spacy-transformers.strided_spans.v1"
window = 128
stride = 96

[components.transformer.model.tokenizer_config]
use_fast = true


[components.textcat]
...
[components.textcat.model.tok2vec]
@architectures = "spacy-transformers.TransformerListener.v1"
grad_factor = 1.0
pooling = {"@layers":"reduce_mean.v1"}
upstream = "*"

[corpora]

...
[corpora.pretrain]
@readers = "spacy.Corpus.v1"
path = ${paths.raw_text}
gold_preproc = false
max_length = 500
limit = 0
augmenter = null


[initialize]
vectors = ${paths.vectors}
vocab_data = null
lookups = null
init_tok2vec = ${paths.init_tok2vec}
before_init = null
after_init = null

[initialize.components]

[initialize.tokenizer]


[pretraining]
max_epochs = 20
dropout = 0.2
n_save_every = null
component = "transformer"
layer = ""
corpus = corpora.pretrain

[pretraining.batcher]
@batchers = "spacy.batch_by_words.v1"
size = 3000
discard_oversize = false
tolerance = 0.2
get_length = null

[pretraining.objective]
@architectures = "spacy.PretrainVectors.v1"
maxout_pieces = 3
hidden_size = 300
loss = "cosine"

[pretraining.optimizer]
@optimizers = "Adam.v1"
beta1 = 0.9
beta2 = 0.999
L2_is_weight_decay = true
L2 = 0.01
grad_clip = 1.0
use_averages = true
eps = 1e-8
learn_rate = 0.001


[training]
...

Info about spaCy

  • spaCy version: 3.1.3
  • Platform: Linux-3.10.0-1160.42.2.el7.x86_64-x86_64-with-glibc2.17
  • Python version: 3.9.7
  • Pipelines: en_core_web_trf (3.1.0), en_core_web_md (3.1.0)
  • Spacy-transformers: 1.16.0

Hi @kchalkSGS! Can we double-check which version of spacy-transformers you have installed? There's no 1.16.0 release. As a temporary measure, can you try again with spacy-transformers==1.0.6?

Hi @ljvmiranda921, thank you! I got the .16 number from an install message that, I now realize, was not a valid shortcut to this info. pip says I do actually have version 1.0.6.

Since posting, I'm wondering if pretraining a transformer is just not supported? https://spacy.io/usage/embeddings-transformers#pretraining does suggest pretraining if you're not using a transformer. I have reasons for wanting to finetune the transformer, but maybe the functionality is just not built in yet?

Note that I also ended up discovering the spaCy discussion board and posted this there as well: Finetuning transformer into TextCat · Discussion #9599 · explosion/spaCy · GitHub (including the full version of the config file).

Hi @kchalkSGS, thanks for reporting, we'll investigate if there's any problem with spacy-transformers.
Transformers should be supported, as in this blog post. Also, "pretraining if you're not using a transformer" means that pretraining can help you match a transformer's accuracy without one; it doesn't mean that pretraining is unsupported.

As there is a duplicate post over on spaCy's forums, Finetuning transformer into TextCat · Discussion #9599 · explosion/spaCy · GitHub, we can continue that part of the discussion there.
