I tried to load a sample of texts from a CSV file (with a limited amount of metadata) through a Jupyter Notebook and annotated them with the following command:
! prodigy ner.manual my_set blank:en ./random_directives.csv --label ENTITY
The documents vary in length, but some of them are thousands of words long. Unfortunately, after just 10-15 documents Prodigy starts to slow down and even briefly freezes. Is the issue due to the excessive document length? How can I solve the problem, besides shortening and cleaning the documents as much as possible?
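For context, one workaround I'm considering is pre-splitting each long document into smaller chunks before loading them into Prodigy, keeping a pointer back to the source document in the task metadata. Below is a rough sketch of what I mean (the `max_words` cutoff, the output filename, and the toy input list are just placeholders for my real CSV rows). Would this be the right approach?

```python
import json

def split_into_chunks(text, max_words=200):
    """Split a long document into chunks of at most max_words words each."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

# Toy example standing in for the rows read from random_directives.csv
docs = ["word " * 450, "short document"]

records = []
for doc_id, text in enumerate(docs):
    for chunk_id, chunk in enumerate(split_into_chunks(text.strip())):
        # Keep a reference back to the source document in the task metadata,
        # so the chunks can be traced (or re-joined) after annotation.
        records.append({"text": chunk,
                        "meta": {"doc": doc_id, "chunk": chunk_id}})

# Write Prodigy-style JSONL: one task dict per line.
with open("random_directives_chunks.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```

The resulting JSONL file could then be passed to the same recipe in place of the CSV, e.g. `prodigy ner.manual my_set blank:en ./random_directives_chunks.jsonl --label ENTITY`.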