Loading gensim word2vec vectors for terms.teach?

I have a set of pre-trained word vectors I created with gensim word2vec that I’d like to use with the terms.teach recipe. These vectors are very domain-specific, which is why I’d like to use them instead of pretrained embeddings. I’ve also trained them on a pretty large corpus, so I’d like to reuse them rather than start from scratch if I can. The documentation says I’ll need to convert my gensim model to spaCy format to use it? Based on some googling, it seems like I’ll need to follow the instructions from this StackOverflow thread, followed by the modifications from this GitHub issue?

Should this work, or am I better off starting from scratch and building new embeddings (maybe with fastText vectors)?

You should definitely be able to load your pre-trained vectors. I’m not sure the code in that StackOverflow thread refers to the current version.

Fundamentally you can always add vectors to spaCy as follows. Let’s say you have a list of word strings, and some sequence of vectors. You can do:

# word_strings: a list of word strings; vectors: a 2D array of shape (n_words, n_dims)
nlp.vocab.reset_vectors(shape=(len(word_strings), len(vectors[0])))
for i, string in enumerate(word_strings):
    nlp.vocab.set_vector(string, vectors[i])

This might be slow for a large number of vectors, but you should only have to do it this way once. After loading in your vectors, you can save out the nlp object with nlp.to_disk(). Then you can pass that directory to Prodigy.
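
For example, a minimal sketch (the output path, dataset name and seed words here are just placeholders):

nlp.to_disk('/path/to/my_vectors_model')

and then, on the command line:

prodigy terms.teach my_dataset /path/to/my_vectors_model --seeds "word1, word2"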

If you’re using pre-trained vectors, take care not to use the md or lg spaCy data packs. These models use the pre-trained GloVe vectors as features. If you use your own pre-trained vectors, the activations will be different from what the model expects, and you’ll get terrible results. The sm model doesn’t use pre-trained vectors, to make it easy to swap in your own.

You might also be interested in the terms.train-vectors recipe. This uses Gensim to train on a text corpus, and saves out the model for use with spaCy. It should serve as a working example of how that’s done.
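
For example, something along these lines should work (the output path and corpus filename are placeholders):

prodigy terms.train-vectors /path/to/output-model my_corpus.jsonl --spacy-model en_core_web_sm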

Awesome, I think this is working! It’s still running through my 1 million word vectors, but it worked without any obvious errors on the first 100, so I’m guessing this will work out. Here’s the complete recipe:

from gensim import models

word2vec = models.Word2Vec.load('word2vec.model')
# save_word2vec_format defaults to binary=False, so despite the .bin extension
# this writes a plain-text file, which is what the loop below expects
word2vec.wv.save_word2vec_format('word2vec.bin')

import spacy
import numpy as np

nlp = spacy.load("en_core_web_sm", vectors=False)
rows, cols = 0, 0
for i, line in enumerate(open('word2vec.bin', 'r')):
    if i == 0:
        # the first line of the word2vec text format is "<n_vectors> <n_dims>"
        rows, cols = line.split()
        rows, cols = int(rows), int(cols)
        nlp.vocab.reset_vectors(shape=(rows, cols))
    else:
        word, *vec = line.split()
        vec = np.array([float(x) for x in vec])
        nlp.vocab.set_vector(word, vec)
        print(word)

nlp.to_disk('spacy_word2vec')
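
As a quick sanity check once it finishes (just a sketch, assuming the directory above; "coffee" is only an example term), you can reload the saved model and confirm the vectors are there:

import spacy

nlp = spacy.load('spacy_word2vec')
print(nlp.vocab.vectors.shape)      # should match the (rows, cols) header from word2vec.bin
print(nlp('coffee')[0].has_vector)  # True if "coffee" is in your vectors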

@beckerfuffle Unfortunately I suspect your script is still running – I’ve found a bug in the code in set_vector() that makes it super slow. See here for explanation and mitigation: https://github.com/explosion/spaCy/issues/2032

It took a while for sure :smiley: About 6 hours for 1 million vectors. The deed is done now though.

Aah, great — with 2 million it was projected to take like two days :(. Looking forward to getting that fixed.

Looking at the code for Vectors, unless I’m mistaken it looks to me like another workaround might be to pass in the row parameter? So in my code above, that would mean:

        vec = np.array([float(x) for x in vec])
        nlp.vocab.set_vector(word, vector=vec, row=i)

I haven’t tested this but it might work?
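
If I’m reading the Vectors code right, the row argument actually lives on Vectors.add rather than Vocab.set_vector, so another (also untested) variant would be to write into the table directly:

        vec = np.array([float(x) for x in vec])
        # write straight into the vectors table, keyed by the word's hash;
        # row is i - 1 because line 0 of the file is the dimensions header
        nlp.vocab.vectors.add(nlp.vocab.strings.add(word), vector=vec, row=i - 1)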

Can we not use some other Word2Vec model (like TensorFlow) in that case?

@akshitasood63 You can learn vectors with any algorithm. You just need to get the array into numpy, and the list of keys for it.

The only restriction is that the lookup must be ultimately keyed by the lex.orth attribute, so it can’t be context dependent. spaCy has its own way of getting context vectors. You can replace that too, but it’s a bit less convenient (involves subclassing).

I tried to do spacy.load after I converted to the spaCy model format, and I am getting the following error.

AttributeError: ‘FunctionLayer’ object has no attribute ‘vectors’

@honnibal
Does “terms.train-vectors” use a built-in Gensim model for word2vec generation, or do we need to use a Gensim model instead of a spaCy model in the following recipe?

prodigy terms.train-vectors [output_model] [source] [--loader] [--spacy-model] [--lang] [--size] [--window] [--min-count] [--negative] [--n-iter] [--n-workers] [--merge-ents] [--merge-nps]

@beckerfuffle
Can you please help me with the syntax to use a pre-trained Gensim model in Prodigy?

You need to use a spaCy model.

I posted the exact code that I used to convert my gensim word2vec model to a spaCy model earlier in this thread: Loading gensim word2vec vectors for terms.teach?

YMMV

You can find the source of this command in the prodigy/recipes/terms.py file. The steps go like this:

  1. Tokenize and pre-process the text using spaCy, with the model provided by the --spacy-model argument. If you don't set --merge-ents or --merge-nps, it's okay if the model just uses a tokenizer. If you want to start from an entirely blank model, you could do this:

python -c "import spacy; spacy.blank('hi').to_disk('/tmp/blank-hindi-model')"

This one-line shell command imports spaCy, creates a blank model for Hindi (using the language code 'hi') and saves the blank model to disk, in /tmp/blank-hindi-model.

  2. Train the word vectors. This step uses Gensim.

  3. Create a spaCy model directory, based on your input model, with the vectors you've just trained.

The output model from step 3 can then be passed as the spaCy model for other Prodigy commands.
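
Putting the steps together, a rough sketch (the paths, corpus filename, dataset name and seed terms are placeholders):

# step 1: create a blank input model (or use an existing spaCy model)
python -c "import spacy; spacy.blank('hi').to_disk('/tmp/blank-hindi-model')"

# steps 2 and 3: train vectors with Gensim and save a spaCy model with them
prodigy terms.train-vectors /tmp/hindi-vectors-model my_corpus.jsonl --spacy-model /tmp/blank-hindi-model

# the output model can then be used with other recipes, e.g. terms.teach
prodigy terms.teach my_dataset /tmp/hindi-vectors-model --seeds "your, seed, terms"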

If I am specifically looking for, say, ‘brand names’, then how do I specify that in ‘terms.train-vectors’?

The terms.train-vectors recipe takes the data source (ideally, lots of text) and will train vectors on that source, reflecting the use of the words in context. It doesn't really care what those words are – it will simply assign the meaning representations.

If you're interested in extracting brand names later on, you probably want to set the --merge-nps flag when you train the vectors. This will merge noun phrases into one token, so you'll end up with more meaningful vectors for names that consist of more than one token. For example, you'll want a vector for "Coca Cola", not two vectors for "Coca" and "Cola".

prodigy terms.train-vectors /path/to/brand-model your_data.jsonl --spacy-model en_core_web_sm --merge-nps

You can then run terms.teach using your trained vectors and seed terms, for example:

prodigy terms.teach brand_names /path/to/brand-model --seeds "Coca Cola, Nike, McDonalds"

Prodigy will look at the model's vocabulary, and will try to find other terms that are similar to your seed terms "Coca Cola, Nike, McDonalds". As you click through examples and accept and reject them, the target vector will be updated, so Prodigy can keep suggesting you other terms similar to the seed terms and the ones you've accepted (but not like the ones you've rejected). If your vectors were trained on enough representative text, you'll quickly be able to find other brand names, i.e. entries in the vocabulary with similar representations to your target vector.
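
To make that idea concrete, here's a toy sketch of the accept/reject logic (purely illustrative, not Prodigy's actual implementation; the vectors dict and the brand seeds are assumptions):

import numpy as np

def cosine(a, b):
    return a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

# vectors: dict mapping each vocabulary term to its vector (assumed given)
accepted = [vectors[seed] for seed in ("Coca Cola", "Nike", "McDonalds")]
rejected = []

def score(term):
    # similar to the accepted terms, dissimilar to the rejected ones
    s = cosine(vectors[term], np.mean(accepted, axis=0))
    if rejected:
        s -= cosine(vectors[term], np.mean(rejected, axis=0))
    return s

# suggest the highest-scoring terms; every accept/reject appends to the
# corresponding list above, which shifts the target for later suggestions
suggestions = sorted(vectors, key=score, reverse=True)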

@beckerfuffle @honnibal When I run Michael’s code on my Gensim-trained Chinese word vector model, I get the following error:

UnicodeDecodeError                        Traceback (most recent call last)
<ipython-input> in <module>()
      2 rows, cols = 0, 0
      3
----> 4 for i, line in enumerate(open('wiki.zh.text.simplified_jieba_seg_cbow_w8_mc3.bin', 'r')):
      5     if i == 0:
      6         rows, cols = line.split()

~/anaconda3/lib/python3.6/codecs.py in decode(self, input, final)
    319         # decode input (taking the buffer into account)
    320         data = self.buffer + input
--> 321         (result, consumed) = self._buffer_decode(data, self.errors, final)
    322         # keep undecoded input until the next call
    323         self.buffer = data[consumed:]

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xfd in position 18: invalid start byte

Can you guys help me resolve this?