Model Architecture textcat.train-batch

I was wondering what the exact architecture of the CNN underlying textcat.train-batch is. I have seen it described as a 'simple cnn', and I assume it's the same one used by spaCy's textcat component. Thanks!

Yes, the architectures are the same ones used in spaCy. As of spaCy v2.1, you can control them by passing the "architecture" argument to the component's constructor (or via nlp.create_pipe()).

The simple_cnn architecture uses the same token-to-vector strategy as spaCy's NER, dependency parser, tagger, etc. components. You can find the source code in spacy._ml.Tok2Vec.
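To make the "simple CNN" idea concrete, here is a hedged NumPy sketch of the general pattern: embed each token, mix in a window of neighbouring tokens with a convolution-style layer, then pool the token vectors into one document vector for classification. The shapes, window size, and pooling choice are illustrative assumptions, not spaCy's exact Tok2Vec code:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, width, window, n_classes = 100, 16, 1, 2

E = rng.normal(size=(vocab, width))                      # embedding table
W = rng.normal(size=((2 * window + 1) * width, width))   # "conv" weights
C = rng.normal(size=(width, n_classes))                  # classifier weights

def simple_cnn(token_ids):
    x = E[token_ids]                                     # (n_tokens, width)
    # Zero-pad so every token has `window` neighbours on each side.
    pad = np.zeros((window, width))
    padded = np.vstack([pad, x, pad])
    # Concatenate each token with its neighbours, project back to `width`.
    n = len(token_ids)
    windows = np.hstack([padded[i:i + n] for i in range(2 * window + 1)])
    h = np.maximum(windows @ W, 0)                       # ReLU conv layer
    doc_vector = h.mean(axis=0)                          # pool tokens -> one vector
    logits = doc_vector @ C
    return np.exp(logits) / np.exp(logits).sum()         # softmax over classes

probs = simple_cnn(np.array([3, 14, 15, 92]))
print(probs)
```

The key point this illustrates is that the same token-to-vector layer (embed + windowed mixing) can feed any downstream head, which is why the textcat, NER, parser, and tagger components share it.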

thanks much!