We’re able to learn new vocabulary items without resizing the embedding table, which is one of the big advantages of the hash embeddings used in spaCy. I explain it here: Can you explain how exactly HashEmbed works?
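To make the idea concrete, here's a minimal sketch in plain NumPy, not spaCy's actual `HashEmbed` implementation: the class name, table size, number of hash functions, and hashing scheme are all invented for illustration. The core trick is that each word is hashed with several different seeds into a fixed-size table and the matching rows are summed, so any string, seen or unseen, maps to a vector without the table ever growing.

```python
import hashlib
import numpy as np


def _hash(word: str, seed: int, nrows: int) -> int:
    # Deterministic hash of (seed, word) into a row index of the fixed table.
    digest = hashlib.md5(f"{seed}:{word}".encode("utf8")).digest()
    return int.from_bytes(digest[:8], "little") % nrows


class HashEmbedSketch:
    """Toy hash embedding: a fixed-size table indexed by several hashes per word."""

    def __init__(self, nrows: int = 5000, width: int = 96, seeds=(0, 1, 2, 3)):
        rng = np.random.default_rng(0)
        self.table = rng.normal(scale=0.1, size=(nrows, width))
        self.seeds = seeds
        self.nrows = nrows

    def __call__(self, word: str) -> np.ndarray:
        # One row per hash function, summed. New words need no vocab update
        # or table resize -- they just hash to some existing rows.
        rows = [_hash(word, seed, self.nrows) for seed in self.seeds]
        return self.table[rows].sum(axis=0)


embed = HashEmbedSketch()
vec = embed("some-word-never-seen-in-training")  # still gets a vector
```

Individual words can collide on one row, but because each word is the sum of several independently hashed rows, it's unlikely two words share all of them, so the model can still learn distinct representations.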