The problem: When I turn on word-wrap, either by clicking on the checkbox or with the --wrap flag, I get an error in the prodigy UI.

I ran prodigy as follows:

python -m prodigy rel.manual AcousticGuitars en_core_web_trf output\html2text\coref_resolved\AcousticGuitars_relations.jsonl --label CORE

To eliminate possible issues in the input file, I reduced it to just one line:
{"text": "Acoustic guitars of Spain.\n", "tokens": [{"text": "Acoustic", "start": 0, "end": 8, "id": 0, "ws": true}, {"text": "guitars", "start": 9, "end": 16, "id": 1, "ws": true}, {"text": "of", "start": 17, "end": 19, "id": 2, "ws": true}, {"text": "Spain", "start": 20, "end": 25, "id": 3, "ws": false}, {"text": ".", "start": 25, "end": 26, "id": 4, "ws": false}, {"text": "\n", "start": 26, "end": 27, "id": 5, "ws": false}], "_view_id": "relations", "relations": []}

Without wrapping, the UI looks like this: [screenshot]

When I enable wrapping, I get this error: [screenshot]

If I use a JSONL file generated via coref.manual and then db-out, I don't see this error. However, I want to use a JSONL file that I generate myself (from neuralcoref via spaCy) so that I can correct it.

I've seen something similar happen here. It seems to be a bug related to the newline (\n) characters in your text field. If you remove these the issue should go away.

It's a bug that we are aware of.

I got the exact same message while annotating with spans.manual, using input from a JSONL file that had no \n tokens. This happened after I had annotated, accepted, and saved 247 lines to the dataset. Restarting didn't help. I then split the file after line 247 and was able to annotate lines 248 onwards into a new dataset. Are there any limits on the number of lines or the number of labels (I used 12)?

Could you share that example? That way I might check and see if I can reproduce locally.

The last issue seems to have been due to a corrupt jsonl file. Sorry for the confusion. The first issue is real.
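For anyone else who suspects a corrupt JSONL file, one quick way to locate the offending line is to parse the file line by line and report whatever fails. This is a minimal sketch, not part of Prodigy itself; the function name `find_bad_lines` is my own:

```python
import json

def find_bad_lines(path):
    """Yield (line_number, error_message) for every line in a JSONL
    file that is not valid JSON; blank lines are skipped."""
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            if not line.strip():
                continue
            try:
                json.loads(line)
            except json.JSONDecodeError as err:
                yield lineno, str(err)
```

Running this over the input file before annotating would have pointed straight at line 248 in a case like the one above.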

The first issue is in our bug tracker; I'll notify this thread once it's fixed. For now, the workaround is to manually remove the \n characters. It's a bug in our front-end code. :slightly_frowning_face: Sorry about that!
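Until the fix lands, the cleanup can be scripted rather than done by hand. Here is a minimal sketch (the function names are my own; it assumes, as in the example above, that \n tokens only occur at the end of the text, so the surviving token ids, and any relations that reference them, stay valid):

```python
import json

def strip_newline_tokens(record):
    """Strip trailing "\n" characters from "text" and drop tokens whose
    text is "\n". Assumes newlines occur only at the end of the text,
    so the remaining token ids and offsets are left unchanged."""
    record = dict(record)
    record["text"] = record["text"].rstrip("\n")
    record["tokens"] = [t for t in record.get("tokens", []) if t["text"] != "\n"]
    return record

def clean_jsonl(src, dst):
    """Rewrite a JSONL file with newline tokens stripped from each record."""
    with open(src, encoding="utf-8") as fin, \
         open(dst, "w", encoding="utf-8") as fout:
        for line in fin:
            if line.strip():
                cleaned = strip_newline_tokens(json.loads(line))
                fout.write(json.dumps(cleaned) + "\n")
```

If newlines can appear mid-text, the token ids and character offsets would need to be recomputed as well, which this sketch deliberately does not attempt.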