What are you using as the base model? CJK languages definitely require different tokenization, since a "word" isn't a whitespace-delimited unit. spaCy currently supports Chinese and Japanese via third-party libraries (see here), so you can use those language classes as the base model. See this thread for more details.
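To illustrate, here's a minimal sketch of loading a blank Chinese pipeline and inspecting its tokens. It assumes the third-party segmenter for your spaCy version is installed (for Chinese that's typically jieba), and the example sentence is just for demonstration – the same approach works for Japanese with `spacy.blank("ja")`:

```python
import spacy

# Blank Chinese pipeline – tokenization is delegated to a third-party
# word segmenter (which library depends on your spaCy version)
nlp = spacy.blank("zh")

doc = nlp("我喜欢自然语言处理。")  # "I like natural language processing."
# The tokens are real words, not whitespace-delimited chunks:
print([token.text for token in doc])
```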
If you use a base model that supports tokenization for the given language, you'll be able to annotate the tokens accordingly. (This is also one of the reasons Prodigy always asks for a base model – it lets you supply language-specific or even your own custom tokenization rules.)
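As a rough sketch of that workflow (the model path, dataset name and labels below are made up for illustration), you could save a blank pipeline for your language to disk and then pass that directory to a recipe like `ner.manual`:

```python
import spacy

# Save a blank Japanese pipeline so Prodigy can use it as the base model
# (requires the Japanese tokenizer dependencies for your spaCy version)
nlp = spacy.blank("ja")
nlp.to_disk("ja_base_model")

# Then start annotating with the saved model as the base, e.g.:
#   prodigy ner.manual my_dataset ja_base_model news.jsonl --label PERSON,ORG
```

Because the base model brings its own tokenizer, the token boundaries you see in the annotation UI will line up with the language's actual words.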
Btw, speaking of learning tokenization: in the upcoming version of spaCy, the parser will be able to learn to merge tokens, which will be very useful for training CJK models. The rule-based tokenizer can then simply split the text into individual characters, and the model will predict whether adjacent characters should be merged into one token. Depending on the data, this can improve accuracy significantly.
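The merging mechanics are already exposed via spaCy's retokenizer, so here's a toy example of what such a component would do under the hood: start from one token per character and merge spans back into words. This is just an illustration of the merge operation, not the trained model – in the real pipeline, the parser would predict which spans to merge:

```python
import spacy
from spacy.tokens import Doc

nlp = spacy.blank("xx")  # blank multi-language pipeline, no special tokenizer needed

# One token per character, as a character-level tokenizer would produce:
doc = Doc(nlp.vocab, words=list("今天天气很好"))  # "The weather is nice today."

# Merge character spans into word-level tokens:
with doc.retokenize() as retokenizer:
    retokenizer.merge(doc[0:2])  # 今 + 天 → 今天 ("today")
    retokenizer.merge(doc[2:4])  # 天 + 气 → 天气 ("weather")

print([token.text for token in doc])  # ['今天', '天气', '很', '好']
```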