Hi! What exactly are you trying to implement?
The upcoming version will allow custom global CSS and custom JavaScript across all interfaces, which should make it easy to customise the visual presentation and how the task is updated with the annotations.
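In case it's useful, here's a rough sketch of what a custom recipe passing those settings in via its `config` might look like. Since this is describing an upcoming feature, the `"global_css"` and `"javascript"` key names are my assumption and the details may differ:

```python
import prodigy
from prodigy.components.loaders import JSONL

@prodigy.recipe("text-with-custom-theme")
def text_with_custom_theme(dataset, source):
    # Stream in examples from a JSONL file of {"text": ...} tasks
    stream = JSONL(source)
    return {
        "dataset": dataset,   # dataset to save the annotations to
        "stream": stream,     # iterable of annotation tasks
        "view_id": "text",    # built-in text interface
        "config": {
            # Assumed config keys for the upcoming custom CSS/JS support
            "global_css": ".prodigy-content { font-size: 18px }",
            "javascript": "console.log('custom JS loaded')",
        },
    }
```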
One note about the manual interface and how the tokens are presented (also in case others come across this thread later): If you're planning to train a model on the labelled data later on, especially NLP models on the raw text, you usually want to make sure that you can always resolve the tokens back to their offsets in the original text, and that there's no mismatch between what the annotator sees and the true underlying text. That's also part of the reason Prodigy's manual interface uses raw text only, with an option to visualise whitespace characters. I've explained this in some more detail on this thread:
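As a quick illustration of the offset point, here's a minimal check you could run over your annotations to confirm that each span still resolves back to the raw text. The task layout (a dict with `"text"` and `"spans"` carrying `"start"`/`"end"` character offsets) follows Prodigy's JSON task format; the example values are made up:

```python
# One annotated example in Prodigy's JSON task format
example = {
    "text": "Apple is looking at buying a U.K. startup.",
    "spans": [{"start": 0, "end": 5, "label": "ORG", "text": "Apple"}],
}

for span in example["spans"]:
    # Slice the raw text using the stored character offsets
    resolved = example["text"][span["start"]:span["end"]]
    # If this fails, the span no longer matches the underlying text,
    # e.g. because the text was normalised or pre-tokenised inconsistently
    assert resolved == span["text"], (resolved, span["text"])
```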