I'm running a training with the command
prodigy train ner and I get the following error:
ValueError: [E088] Text of length 2227606 exceeds maximum of 1000000. The v2.x parser and NER models require roughly 1GB of temporary memory per 100,000 characters in the input. This means long texts may cause memory allocation errors. If you're not using the parser or NER, it's probably safe to increase the 'nlp.max_length' limit. The limit is in number of characters, so you can check whether your inputs are too long by checking 'len(text)'.
My question is: how can I increase this maximum length for training? Or is it better to remove the texts longer than 1,000,000 characters?
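For context, this is roughly how I would raise the limit in a plain spaCy script (assuming nlp.max_length is the same setting the error message refers to), and how I would check which of my texts are too long. I just don't know how to apply either of these to prodigy train, or whether raising the limit is even advisable for NER given the memory warning:

```python
import spacy

# a minimal sketch: raise the character limit on a blank pipeline
nlp = spacy.blank("en")
nlp.max_length = 3_000_000  # larger than my longest text (2,227,606 characters)

# the error suggests checking len(text); 'texts' stands in for my own input data
texts = ["some long document ...", "another document ..."]
too_long = [t for t in texts if len(t) > 1_000_000]
print(f"{len(too_long)} texts exceed the default 1,000,000 character limit")
```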