Are you using the latest version? Prodigy's built-in token splitting should accept pre-defined tokenization (there used to be a problem with this, but it should be fixed now).
In the meantime, you could also just use the mark recipe if you already have pre-tokenized data. The recipe will stream in whatever is in the data and render it with a given interface. So your command could look like this:
prodigy mark your_dataset ./your_data.jsonl --view-id ner_manual --label FOO,BAR
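For reference, a pre-tokenized task is a JSONL record with a `"tokens"` list alongside the `"text"`, where each token carries its character offsets and an `"id"`. Here's a minimal sketch of producing such a file in Python (the example text and whitespace-splitting tokenizer are just placeholders for your own data):

```python
import json

# Build one pre-tokenized task: each token records its text,
# its character offsets into "text", and a running id.
text = "Apple updated Siri"
tokens = []
offset = 0
for i, word in enumerate(text.split(" ")):
    start = text.index(word, offset)
    end = start + len(word)
    tokens.append({"text": word, "start": start, "end": end, "id": i})
    offset = end

task = {"text": text, "tokens": tokens}

# Write one task per line (JSONL), ready to stream into the recipe.
with open("your_data.jsonl", "w", encoding="utf8") as f:
    f.write(json.dumps(task) + "\n")
```

The manual NER interface will then use those tokens as the selectable units instead of running its own tokenization.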
You can set PRODIGY_LOGGING=basic to see what's happening behind the scenes. If you don't see an error or a "No tasks available" message and it just gets stuck loading, double-check that you're passing in the input file correctly. If no input file is provided, Prodigy will try to read from standard input.