Hi Ines,
I wanted to ask how we can use techniques like LIME and SHAP, or any other explainability method, with spaCy NER models. Once a model has been trained, is there any support for explaining how it performs on the data, both to technical and non-technical users? I have seen these techniques used for text classification, but I couldn't find any useful resources for named entity recognition models built with spaCy. There is the visualisation aspect, but I was hoping for an instance-level approach that explains why the model predicted a certain entity as, say, PERSON. Any resources on this would be very useful!
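To make "instance level" concrete, here is a rough, dependency-free sketch of the leave-one-word-out perturbation idea that LIME builds on: drop each context word in turn and see how much the model's confidence in the PERSON label changes. The `toy_ner_predicts_person` heuristic below is a made-up stand-in for the real model; with spaCy you would instead run `nlp(text)` and check `doc.ents` for the target span.

```python
# Perturbation-style explanation sketch for a single NER prediction.
# toy_ner_predicts_person is a stand-in "model" (NOT spaCy): it scores how
# likely `target` is to be tagged PERSON given its surrounding tokens.

def toy_ner_predicts_person(tokens, target):
    """Stand-in for the model: pseudo-probability that `target` is PERSON."""
    if target not in tokens:
        return 0.0
    i = tokens.index(target)
    score = 0.0
    if target[:1].isupper():                                  # capitalised word
        score += 0.5
    if i > 0 and tokens[i - 1].lower() in {"met", "saw", "mr", "mrs", "dr"}:
        score += 0.5                                          # verb/title cue
    return score

def explain_instance(tokens, target):
    """Leave-one-word-out: how much does dropping each word change the score?"""
    base = toy_ner_predicts_person(tokens, target)
    weights = {}
    for i, tok in enumerate(tokens):
        if tok == target:
            continue
        perturbed = tokens[:i] + tokens[i + 1:]
        weights[tok] = base - toy_ner_predicts_person(perturbed, target)
    return weights

tokens = "We met Alice in Paris".split()
print(explain_instance(tokens, "Alice"))
# "met" comes out as the influential context word for the PERSON prediction
```

Is something along these lines (or wrapping the pipeline in LIME's `LimeTextExplainer` as a per-entity classifier) a sensible direction, or is there a better-supported route?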
Thank you