Model explanation

Yes, that's a cool idea! Prodigy should already have all the building blocks for that – you just need to implement the process that computes the weights for the tokens/subtokens or whatever else you want to interpret.
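Here's a minimal sketch of what that could look like, where `get_token_weights` is a hypothetical placeholder for whatever your model or interpretability library actually produces (attention scores, gradient attributions, etc.):

```python
import html

def get_token_weights(text):
    # Hypothetical stand-in: replace with your model's real scoring logic
    tokens = text.split()
    return [(tok, 1.0 / (i + 1)) for i, tok in enumerate(tokens)]

def weights_to_html(text):
    parts = []
    for token, weight in get_token_weights(text):
        # Shade each token by its weight via the background alpha channel
        parts.append(
            f'<span style="background: rgba(255, 100, 100, {weight:.2f})">'
            f"{html.escape(token)}</span>"
        )
    return " ".join(parts)

print(weights_to_html("This movie was surprisingly good"))
```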

This thread is slightly older, but it has some custom recipes and ideas for visualizing attention during text classification annotation:

We'd love to have this more easily accessible for spaCy models! But otherwise, it really depends on your model, the framework you're using (both for ML and for interpretability) and what you're trying to do. There's no out-of-the-box answer for that. But Prodigy should provide the building blocks you need to incorporate model interpretability into your annotation workflow.

This looks great! :100: And this seems to already return formatted HTML, right? So I guess you could stream that in using the html interface, or add it as a separate block?
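For example, here's a rough sketch of a custom recipe that streams in tasks with a pre-rendered `"html"` key (reusing a helper like the `weights_to_html` sketch above, and assuming a JSONL source with a `"text"` field):

```python
import prodigy
from prodigy.components.loaders import JSONL

@prodigy.recipe("interpret.html")
def interpret_html(dataset: str, source: str):
    def get_stream():
        for eg in JSONL(source):
            # Attach the rendered visualization; the html interface
            # displays whatever is in the task's "html" key
            eg["html"] = weights_to_html(eg["text"])
            yield eg

    return {
        "dataset": dataset,
        "stream": get_stream(),
        # Could also be "blocks" with {"view_id": "html"} as one of the blocks
        "view_id": "html",
    }
```

You'd then run it like any custom recipe, e.g. `prodigy interpret.html your_dataset ./data.jsonl -F recipe.py`.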

(Also, a small detail that I want to add to the regular ner/spans interfaces: if individual spans could take a "color" value, you could easily implement the same visualization with just character offsets and different colour shades depending on the score, without having to assign distinct labels and label colours.)
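Just to illustrate the idea (this is a proposed format, not something the current interfaces support), a task could then look like this:

```python
def score_to_colour(score):
    # Map a 0-1 score to a shade: higher score, more opaque highlight
    return f"rgba(255, 215, 0, {score:.2f})"

task = {
    "text": "This movie was surprisingly good",
    "spans": [
        # Character offsets plus a hypothetical per-span "color" value
        {"start": 15, "end": 27, "color": score_to_colour(0.8)},
        {"start": 28, "end": 32, "color": score_to_colour(0.95)},
    ],
}
```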