Notes, 2018-11-01
- Find a Word2Vec tool, run it, and see how it performs
- Word2Vec(Google):
- Capture many linguistic regularities
For example, the vector operation vector('Paris') - vector('France') + vector('Italy') results in a vector that is very close to vector('Rome'); a sketch follows below
- From words to phrases and beyond
Example: a single vector representing 'san francisco'
- Word cosine distance
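A minimal sketch of all three behaviors, assuming gensim and its downloader are available (the pretrained GoogleNews model name "word2vec-google-news-300" and the example words are illustrative; the download is large):

```python
# Minimal sketch using gensim's pretrained GoogleNews vectors
# (assumes gensim is installed; the download is ~1.6 GB).
import gensim.downloader as api

kv = api.load("word2vec-google-news-300")  # returns KeyedVectors

# Linguistic regularities as vector arithmetic:
# vector('Paris') - vector('France') + vector('Italy') ~ vector('Rome')
print(kv.most_similar(positive=["Paris", "Italy"], negative=["France"], topn=1))

# Phrases are single tokens in this model, joined by underscores
print(kv.most_similar("San_Francisco", topn=3))

# Word cosine similarity (cosine distance = 1 - similarity)
print(kv.similarity("France", "Italy"))
```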
- Word clustering
Deriving word classes from huge data sets. This is achieved by performing K-means clustering on top of the word vectors. The output is a vocabulary file with words and their corresponding class IDs (see the sketch below).
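A hedged sketch of that clustering step, using scikit-learn's KMeans rather than the original tool's built-in `-classes` option; it reuses `kv` from the sketch above, and the cluster count and top-50k vocabulary cutoff are illustrative assumptions:

```python
# Sketch: K-means word classes on top of word vectors; reuses `kv`
# from the previous sketch. Cluster count and cutoff are assumptions.
from sklearn.cluster import KMeans

top_k = 50_000                       # cluster only the most frequent words
words = kv.index_to_key[:top_k]
km = KMeans(n_clusters=500, n_init=10, random_state=0)
labels = km.fit_predict(kv.vectors[:top_k])

# Vocabulary file: word <tab> class ID
with open("classes.txt", "w", encoding="utf-8") as f:
    for word, class_id in zip(words, labels):
        f.write(f"{word}\t{class_id}\n")
```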
- Performance (key hyperparameters; a gensim sketch follows this list)
- Architecture:
- Skip-Gram: slower, better for infrequent words
- CBOW: fast
- The training algorithm:
- hierarchical softmax: better for infrequent words
- negative sampling: better for frequent words, better with low dimensional vectors
- Sub-sampling of frequent words
- Dimensionality of the word vectors: usually more is better, but not always
- Context(window) size:
- skip-gram: around 10
- CBOW: around 5
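As a rough guide, the hyperparameters above map onto gensim's `Word2Vec` constructor as sketched below; the corpus file name and the specific values are assumptions for illustration, not recommendations from these notes:

```python
# Sketch: wiring the hyperparameters above into gensim's Word2Vec.
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

sentences = LineSentence("corpus.txt")  # assumed: one tokenized sentence per line
model = Word2Vec(
    sentences,
    sg=1,             # 1 = skip-gram (window ~10), 0 = CBOW (fast, window ~5)
    hs=0,             # 1 = hierarchical softmax (better for infrequent words)
    negative=10,      # negative sampling (better for frequent words / low dims)
    sample=1e-4,      # sub-sampling of frequent words
    vector_size=300,  # dimensionality: usually more is better, but not always
    window=10,        # context size: ~10 for skip-gram, ~5 for CBOW
)
model.wv.save("vectors.kv")
```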
- Obtaining training data (URLs for the bolded datasets are all listed on the reference site)
- First billion characters from wikipedia (use the pre-processing perl script from the bottom of Matt Mahoney’s page)
- Latest Wikipedia dump: use the same script as above to obtain clean text; should be more than 3 billion words.
- WMT11 site: text data for several languages (duplicate sentences should be removed before training the models; see the sketch after this list)
- Dataset from the "One Billion Word Language Modeling Benchmark": almost 1B words of already pre-processed text.
- UMBC webbase corpus: around 3 billion words; needs further processing (mainly tokenization).
- Text data for more languages can be obtained at statmt.org and from the Polyglot project (personally tested, works well).
- In short, Google's word2vec site has plenty more worth exploring
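A small sketch of the pre-processing these datasets call for: removing duplicate sentences (WMT11) and basic tokenization (UMBC). The file names and the crude lowercase, letters-and-digits-only tokenizer are assumptions:

```python
# Sketch: de-duplicate sentences and tokenize a raw text corpus.
import re

seen = set()
with open("raw.txt", encoding="utf-8") as src, \
     open("clean.txt", "w", encoding="utf-8") as dst:
    for line in src:
        # crude tokenization: lowercase, keep alphanumeric tokens only
        tokens = re.findall(r"[a-z0-9]+", line.lower())
        sentence = " ".join(tokens)
        if sentence and sentence not in seen:  # drop duplicate sentences
            seen.add(sentence)
            dst.write(sentence + "\n")
```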
- Factors that affect word-vector quality (the analogy test sketched after this list is one common yardstick)
- quantity and quality of the training data
- size (dimensionality) of the word vectors
- training algorithm
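One common way to compare quality across these factors is Google's analogy test set (questions-words.txt); a sketch using gensim's bundled copy and the `kv` vectors from the earlier sketches:

```python
# Sketch: score vectors on the Google analogy test set;
# gensim bundles a copy of questions-words.txt in its test data.
from gensim.test.utils import datapath

score, sections = kv.evaluate_word_analogies(datapath("questions-words.txt"))
print(f"analogy accuracy: {score:.3f}")  # re-run after changing data, size, or algorithm
```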