Highlighting the matching words for text classification

Is it possible to highlight the matching words that were responsible for the classification results?

Currently there’s no option exposed to do that, so you’d have to write a bit of code to add spans to the example. If you’re looking to highlight words which matched your initial seed terms, this should be fairly easy to do.
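
For the seed-terms case, a rough sketch (untested; the helper name and exact task fields are just illustrative) could use a PhraseMatcher to find the seed terms and attach character-offset spans to each incoming example:

import spacy
from spacy.matcher import PhraseMatcher

def add_seed_spans(stream, nlp, seed_terms):
    '''Attach "spans" entries marking seed-term matches to each example.'''
    matcher = PhraseMatcher(nlp.vocab)
    matcher.add('SEED', None, *[nlp.make_doc(term) for term in seed_terms])
    for eg in stream:
        doc = nlp.make_doc(eg['text'])
        spans = []
        for match_id, start, end in matcher(doc):
            span = doc[start:end]
            spans.append({'start': span.start_char, 'end': span.end_char})
        eg['spans'] = spans
        yield eg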

If you want to highlight words or phrases which were prominent in the classification for the statistical model, a little bit more work is required. However, the text classifier does use an attention layer which assigns relevance weights to words and phrases. So, the scores are there — it’s just that there’s no really convenient API in spaCy to get them out yet. If these scores are what you need, I can give you a little function to extract them.

Sure, if you could share the function, I can make the adaptations or build a small UI for that purpose myself.

I meant that we want to highlight words or phrases which were prominent in the classification, so if you could provide a helper function, that would be useful.

I can open a pull request to add this function if you could point me to the part of the code I'd need to modify. Or is it something that would be part of the next release? Thanks.

This should work for now. I’ll add an API for this to Thinc, but for now I think this solution gives decent usability and doesn’t require you to run a different version of anything:


'''Get attention weights out of the ParametricAttention layer.'''
import spacy
from contextlib import contextmanager
from thinc.api import layerize
from spacy.gold import GoldParse


def find_attn_layer(model):
    '''Breadth-first search the Thinc model tree for the parametric attention
    layer. Returns (parent_layer, index) if found, else (None, -1).'''
    queue = [model]
    seen = set()
    for layer in queue:
        names = [child.name for child in layer._layers]
        if 'para-attn' in names:
            return layer, names.index('para-attn')
        if id(layer) not in seen:
            queue.extend(layer._layers)
        seen.add(id(layer))
    return None, -1

def create_attn_proxy(attn):
    '''Return a proxy to the attention layer which will fetch the attention
    weights on each call, appending them to the list 'output'. 
    '''
    output = []
    def get_weights(Xs_lengths, drop=0.):
        Xs, lengths = Xs_lengths
        output.append(attn._get_attention(attn.Q, Xs, lengths)[0])
        return attn.begin_update(Xs_lengths, drop=drop)
    return output, layerize(get_weights)

@contextmanager
def get_attention_weights(textcat):
    '''Wrap the attention layer of the textcat with a function to
    intercept the attention weights. We replace the attention component
    with our wrapper in the pipeline for the duration of the context manager.
    On exit, we put everything back.
    '''
    parent, i = find_attn_layer(textcat.model)
    if parent is not None:
        original = parent._layers[i]
        output_vars, wrapped = create_attn_proxy(original)
        parent._layers[i] = wrapped
        try:
            yield output_vars
        finally:
            # Put the original attention layer back on exit.
            parent._layers[i] = original
    else:
        yield None

def main():
    nlp = spacy.blank('en')
    textcat = nlp.create_pipe('textcat')
    textcat.add_label('SPAM')
    nlp.add_pipe(textcat)
    opt = nlp.begin_training()
    docs = [nlp.make_doc('buy viagra')]
    golds = [GoldParse(docs[0], cats={'SPAM':1})]
    # All calls to the attention model made during this block will append
    # the attention weights to the list attn_weights.
    # The weights for a batch of documents will be a single concatenated
    # array -- so if you pass in a batch of lengths 4, 5 and 7, you'll get
    # the weights in an array of shape (16,). The value at index 3 will be
    # the attention weight of the last word of the first document. The value
    # at index 4 will be the attention weight of the first word of the second
    # document.
    # The attention weights should be floats between 0 and 1, with 1 indicating
    # maximum relevance.
    # The attention layer is parametric (following Liang et al 2016's textcat
    # paper), which means the query vector is learned
    # jointly with the model. It's not too hard to substitute a different
    # attention layer instead, e.g. one which does attention by average of
    # the word vectors or something. See thinc/neural/_classes/attention.py
    with get_attention_weights(textcat) as attn_weights:
        loss = textcat.update(docs, golds, sgd=opt)
    print(attn_weights)


if __name__ == '__main__':
    main()
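
If you pass in a batch of documents, you can split the flat weight array back into per-document arrays using the doc lengths. Something like this (just illustrative, not part of spaCy or Thinc) would do it:

import numpy as np

def split_weights(flat_weights, docs):
    '''Split one batch's concatenated attention weights into per-doc arrays.'''
    lengths = [len(doc) for doc in docs]
    starts = np.cumsum([0] + lengths[:-1])
    return [np.squeeze(flat_weights[start:start + length])
            for start, length in zip(starts, lengths)]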

I am also curious about what the model sees as important to a class label, so I have been toying with visualizations of how much each token adds to or subtracts from the label probability.

[screenshot: per-token sensitivity visualization]

This approach could be modified to use attention weights pretty readily, but I’m not sure if you could visualize words that subtract from the class probability with it?

I created variants on textcat.teach and textcat.eval that render examples using a custom template and attention weight data from @honnibal’s example.

The code is on GitHub, and it renders text where items with more attention have larger fonts, and those that get a lot of attention are given a special color.

[screenshot: attention-weight visualization]

I transform the attention weights into metadata for use in the HTML template:

def attach_attention_data(input_stream, nlp, attn_weights):
    """Attach attention weights to token data with each example"""
    for item in input_stream:
        tokens_data = []
        attn_weights.clear()
        doc = nlp(item['text'])
        for index, token in enumerate(doc):
            weight = float(attn_weights[0][index][0])
            # If the weight is over some threshold, add color to draw the eye
            color = 'rgba(255,0,0,0.54)' if weight > 0.025 else 'inherit'
            tokens_data.append({
                't': token.text_with_ws,
                'c': color,
                's': min(2.5, 1 + weight * 2),
                'w': weight
            })
        item['tokens'] = tokens_data
        yield item

The template loops over tokens and renders each one with a span and custom styling:

<div>{{#tokens}}<span style="font-size:{{s}}em; color:{{c}};">{{t}}</span>{{/tokens}}</div>

It’s all wired up by attaching it to the stream inside the recipe:

@recipe('attncat.eval',...)
def evaluate(...):
    ...
    nlp = spacy.load(...)
    textcat = nlp.get_pipe('textcat')
    assert textcat is not None
    with get_attention_weights(textcat) as attn_weights:
        stream = ...
        stream = attach_attention_data(stream, nlp, attn_weights)
    return {
        'view_id': 'html',
        'stream': stream,
        ...
        'config': {..., 'html_template': template_text}
    }

    ...

Hope it helps. If the custom template does not appear to be working, make sure you do not have an html_template entry in your prodigy.json file, because it will override the one set by the recipe.


@justindujardin The idea of randomly masking or dropping tokens and measuring the effect on classification is definitely a respectable way to get per-token relevance. So, good instincts! I’ll try to find a citation for you.

Also, great work on wiring everything up, with the styling as well. Really cool! Now the big question is whether the attention weights give interesting feedback. I suppose it’s possible that they always hover near an even distribution. I haven’t investigated much.
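
If you want to sanity-check that, one quick option (just a rough idea, nothing built in) is to compare the entropy of each document's weight distribution to the maximum possible entropy for its length; values near 1.0 mean the attention is spread almost evenly:

import numpy as np

def attention_evenness(weights):
    '''Return the normalised entropy of a doc's attention weights, in [0, 1].'''
    weights = np.asarray(weights, dtype='float64').ravel()
    if len(weights) < 2:
        return 1.0
    probs = weights / weights.sum()
    entropy = -(probs * np.log(probs + 1e-12)).sum()
    return entropy / np.log(len(probs))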

I’ve also just realised a problem with my code. The text classifier actually stacks two models: the CNN, and also a linear bag-of-words model. We only have attention weights from the CNN, but we could be getting a significant score on some word from the bag-of-words, which isn’t being measured in the code I provided. I’ll have a look at extracting the bag-of-words weights as well.

Thanks again for sharing your code on this. Model interpretation is very important, especially with the GDPR coming into effect soon.


I was able to get it up and running. However, maybe because the bag-of-words weights are not there, it is not highlighting all the words accurately. I would love to see the updated code with the bag-of-words weights. That would be really awesome.

I added another visualizer that measures the “sensitivity” of the model to the presence of each token in an input by removing tokens one at a time and measuring the change in class probability. It is nearly identical to the attention recipe except for the different relevance calculation.

Words that contribute probability to the class label are rendered larger and in red, and ones that take away from the label probability are rendered smaller and in green. The intensity of the text scaling is based on the magnitude of the change:

[screenshot: token sensitivity visualization]

The code is available on GitHub, and is pretty straightforward:

def structural_sensitivity(nlp, spacy_doc, token_index, label):
    """Determine how sensitive the model is to a token's presence by removing it."""

    base_probability = spacy_doc.cats[label]
    before = spacy_doc[0:token_index].text
    after = spacy_doc[token_index + 1:len(spacy_doc)].text
    if token_index == 0 or token_index == len(spacy_doc) - 1:
        # First or last token: no separating space needed between the halves.
        minus_current = before + after
    else:
        minus_current = before + ' ' + after

    if minus_current != '':
        mc_prob = nlp(minus_current).cats[label]
    else:
        # Because the next op is a subtraction of base_probability, this will
        # result in 0 delta for an invalid input.
        mc_prob = base_probability

    mc_delta_prob = base_probability - mc_prob

    return mc_delta_prob, '{0:+6.2f}%'.format(mc_delta_prob * 100)


def attach_structural_sensitivity_data(input_stream, nlp, label):
    """Attach structural sensitivity information to token data with each example"""
    for item in input_stream:
        user_input = item['text']
        doc = nlp(user_input)
        tokens_data = []
        for index, token in enumerate(doc):
            (structure_delta, pretty_delta) = structural_sensitivity(nlp, doc, index, label)
            text_color = 'inherit'
            if structure_delta > 0.1:
                text_color = 'rgba(255,0,0,0.54)'
            elif structure_delta < -0.1:
                text_color = 'rgba(0,128,0,0.87)'
            tokens_data.append({
                't': token.text_with_ws,
                'c': text_color,
                's': max(0.7, 1 + structure_delta * 2),
                'w': structure_delta,
            })
        item['tokens'] = tokens_data
        yield item
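
It gets attached to the stream the same way as the attention version, e.g. (the label name here is just an example):

stream = attach_structural_sensitivity_data(stream, nlp, label='SPAM')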


I tested this code, but it does not produce highlights for a lot of the input data. Not sure if it's a bug.

If there are no highlights I think it means that the model does not care about any one particular token in the input.

If that is the case maybe the attention weights are a better visualization, because the attention should always go somewhere, right?

I'm not sure I can help much beyond that. Good luck!

That's nice to hear, thanks!

I also played with visualizing the sensitivity of particular words by looking up similar/dissimilar items, swapping them out, and measuring the class delta. I used the prebuilt large English vectors and tried searching/sorting the vocab items using similarity comparisons, as well as Brown cluster searches. The vocab search ended up being too expensive (without introducing a cache), and the Brown cluster searches were faster but didn't give the matches I wanted.

I later tried sense2vec, which gave much better results for most things. I never got around to a web visualization because of the relative computational cost, but my takeaway is that the vector approach has great potential for data generation. I'll explore it further later.
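
For reference, the swap-and-measure idea looks roughly like this (all names here are mine, not from the repo; it assumes an nlp object with word vectors and a trained textcat, and the brute-force vocab scan is the expensive part mentioned above):

def most_similar_lexeme(nlp, token):
    '''Nearest-neighbour lexeme by vector similarity (slow without a cache).'''
    candidates = [lex for lex in nlp.vocab
                  if lex.has_vector and lex.is_lower and lex.orth != token.orth]
    return max(candidates, key=token.similarity)

def swap_sensitivity(nlp, text, label):
    '''Swap each token for its nearest neighbour and measure the class delta.'''
    doc = nlp(text)
    base = doc.cats[label]
    deltas = []
    for i, token in enumerate(doc):
        swap = most_similar_lexeme(nlp, token).text
        new_text = ' '.join(swap if j == i else t.text for j, t in enumerate(doc))
        deltas.append(base - nlp(new_text).cats[label])
    return deltas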

I've been using the attention weights for annotating (in addition to the token dropping visualization) since you posted it, so I'll let you know if I see anything unusual.

I'm happy to update my examples with bag-of-words data whenever you post a snippet showing how to pull it out of the model.

You bet. Thank you and @ines for making awesome software. :clap:

I love reverse engineering a good black box. :sweat_smile: If there are other interesting parts of the model that I could potentially visualize, let me know.

I haven’t gotten around to using Prodigy yet, but I whipped up a quick way to visualize my documents’ relevant words from the trained model (in TensorFlow), based on this paper: http://arxiv.org/abs/1703.01365. It does require access to the gradients, however, so perhaps that’s not easy to do here. Hope the reference helps in some respect.
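
For anyone curious, the integrated gradients idea from that paper boils down to something like this (sketch only; gradients_fn is a placeholder for however you obtain d(class score)/d(embeddings) in your framework, e.g. via tf.GradientTape):

import numpy as np

def integrated_gradients(embeddings, gradients_fn, steps=50):
    '''Approximate integrated gradients along a straight path from a zero baseline.'''
    baseline = np.zeros_like(embeddings)
    accumulated = np.zeros_like(embeddings)
    for alpha in np.linspace(0.0, 1.0, steps):
        point = baseline + alpha * (embeddings - baseline)
        accumulated += gradients_fn(point)   # gradient at the interpolated input
    avg_gradients = accumulated / steps
    attributions = (embeddings - baseline) * avg_gradients
    return attributions.sum(axis=-1)         # one relevance score per token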


@honnibal I'm trying to use your original code snippet to get the attention layer for an existing SpaCy model. I'm finding that I can only use the with block once; if I try to use it again, I get nothing for the weights, e.g.

with get_attention_weights(textcat) as weights:
    textcat.predict(docs)
print("Calling main 1st time in textcat_weights.")
print(weights)
print(list(zip(docs[0], np.squeeze(weights[0]))))

with get_attention_weights(textcat) as weights2:
    textcat.predict(docs)
print("Calling main 2nd time in textcat_weights.")
print(weights2)

The second print call doesn't return anything. Can you tell me what I'm missing? I ask because I'm trying to use this in a streamlit demo to show which tokens received attention.

@oneextrafact Ah, that snippet is quite old now, so I think details of the model architecture have changed.

In general attention weighting often isn't a very reliable way to inspect which words or phrases were important. I haven't been following the model interpretability work very closely but I'm sure if you look around you'll be able to find some libraries that implement good techniques.