Explainability for spaCy NER Models

Hi Ines,

I wanted to ask how we can use techniques like LIME and SHAP, or any other explainability technique, with spaCy NER models. Once a model has been trained, is there any support for explaining how it is performing on the data, both to technical and non-technical users? I have seen these techniques used for text classification, but I couldn't find any useful resources for named entity recognition models built with spaCy. There is the visualisation aspect, but I was hoping there is an instance-level approach that explains why the model predicted a certain entity type, PERSON for example. Any resources on this would be very useful!

Thank you


Hi @Andrew123 , thanks for patiently waiting!

Although we don't have native support for LIME and SHAP explainers, there is a spaCy Universe project that does exactly that (alibi). You can check that one out and let us know if it's been useful!
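
In the meantime, here's a rough, unofficial sketch of one way to get instance-level explanations with the generic `lime` package rather than alibi: wrap the spaCy pipeline in a binary "does this text contain a PERSON entity?" prediction function and let LIME perturb the input text to estimate which tokens push the model towards that prediction. The `predict_person` function and the class names are purely illustrative, not part of spaCy or LIME, and the 0/1 pseudo-probabilities are a simplification.

```python
# Sketch only: treat "model predicts a PERSON entity" as a binary classifier
# so a standard LIME text explainer can attribute the prediction to tokens.
import numpy as np
import spacy
from lime.lime_text import LimeTextExplainer

nlp = spacy.load("en_core_web_sm")

def predict_person(texts):
    """Return pseudo-probabilities [P(no PERSON), P(PERSON)] for each text."""
    probs = []
    for doc in nlp.pipe(texts):
        has_person = float(any(ent.label_ == "PERSON" for ent in doc.ents))
        probs.append([1.0 - has_person, has_person])
    return np.array(probs)

explainer = LimeTextExplainer(class_names=["no PERSON", "PERSON"])
text = "Barack Obama visited Berlin last week."
explanation = explainer.explain_instance(text, predict_person, num_features=6)

# Tokens with positive weights push the model towards predicting PERSON.
print(explanation.as_list())
```

Because the wrapper only returns hard 0/1 scores per perturbed sample, the explanation is coarser than with a real probability output, but it can still show non-technical users which words the entity prediction hinges on.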