
[User Behavior Analysis] Training a word2vec Model on the Chinese Wikipedia Corpus

1. Preface

Recently I have been researching content-based user behavior analysis, and along the way I came across word2vec, a very useful algorithm. word2vec, as the name suggests, is a tool that turns words into vectors. It came out of Google and was open-sourced in 2013. With a vector model we can do computations based on similarity (vector distance / angle). In this model, the similarity between two vectors corresponds to the semantic similarity between the two words; roughly speaking, it reflects how likely the two words are to appear in the same semantic context. For example, feeding "java" and "程式設計師" (programmer) into the model returns 0.47, while "java" and "化妝" (makeup) returns 0.00014. The former pair is clearly far more semantically related than the latter, which opens up many possibilities. More word2vec examples appear later in this post.

word2vec has implementations in several languages: the official project is in C, and there are also Java, Python, and Spark implementations, with links to all of them available from the official project page. This post uses the Python version for training.

The word2vec algorithm builds its model by learning from a corpus, and Wikipedia is a comprehensive source of text, so let's walk through training a model on the wiki corpus.

2. Preparation

Download the Chinese Wikipedia dump (zhwiki-latest-pages-articles.xml.bz2) and install gensim, the Python library whose word2vec implementation is used below.

3. Corpus Processing

The downloaded wiki corpus is in XML format; we first convert it to plain text with the following Python script:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
 
import logging
import os.path
import sys
 
from gensim.corpora import WikiCorpus
 
if __name__ == '__main__':
    program = os.path.basename(sys.argv[0])
    logger = logging.getLogger(program)
 
    logging.basicConfig(format='%(asctime)s: %(levelname)s: %(message)s')
    logging.root.setLevel(level=logging.INFO)
    logger.info("running %s" % ' '.join(sys.argv))
 
    # check and process input arguments
    if len(sys.argv) < 3:
        print "Usage: python process_wiki.py <wiki-dump.xml.bz2> <output.txt>"
        sys.exit(1)
    inp, outp = sys.argv[1:3]
    space = " "
    i = 0
 
    output = open(outp, 'w')
    # WikiCorpus streams plain article text out of the compressed XML dump
    wiki = WikiCorpus(inp, lemmatize=False, dictionary={})
    for text in wiki.get_texts():
        # one article per line, tokens separated by single spaces
        output.write(space.join(text) + "\n")
        i = i + 1
        if (i % 10000 == 0):
            logger.info("Saved " + str(i) + " articles")
 
    output.close()
    logger.info("Finished Saved " + str(i) + " articles")


Save this as process_wiki.py and run it from the command line: python process_wiki.py zhwiki-latest-pages-articles.xml.bz2 wiki.zh.txt.

2015-03-07 15:08:39,181: INFO: running process_enwiki.py zhwiki-latest-pages-articles.xml.bz2 wiki.zh.txt
2015-03-07 15:11:12,860: INFO: Saved 10000 articles
2015-03-07 15:13:25,369: INFO: Saved 20000 articles
2015-03-07 15:15:19,771: INFO: Saved 30000 articles
2015-03-07 15:16:58,424: INFO: Saved 40000 articles
2015-03-07 15:18:12,374: INFO: Saved 50000 articles
2015-03-07 15:19:03,213: INFO: Saved 60000 articles
2015-03-07 15:19:47,656: INFO: Saved 70000 articles
2015-03-07 15:20:29,135: INFO: Saved 80000 articles
2015-03-07 15:22:02,365: INFO: Saved 90000 articles
2015-03-07 15:23:40,141: INFO: Saved 100000 articles
.....
2015-03-07 19:33:16,549: INFO: Saved 3700000 articles
2015-03-07 19:33:49,493: INFO: Saved 3710000 articles
2015-03-07 19:34:23,442: INFO: Saved 3720000 articles
2015-03-07 19:34:57,984: INFO: Saved 3730000 articles
2015-03-07 19:35:31,976: INFO: Saved 3740000 articles
2015-03-07 19:36:05,790: INFO: Saved 3750000 articles
2015-03-07 19:36:32,392: INFO: finished iterating over Wikipedia corpus of 3758076 documents with 2018886604 positions (total 15271374 articles, 2075130438 positions before pruning articles shorter than 50 words)
2015-03-07 19:36:32,394: INFO: Finished Saved 3758076 articles

My original console output is gone; it looked roughly like the above. On my machine (8 GB of RAM) the run took about 30 minutes.

When it finishes you get a txt file in the current directory, already in UTF-8 encoded plain text.

Opening it, the structure is roughly one article per line, with each line containing several short, space-separated segments.
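A quick way to spot-check the extracted text (a minimal sketch; wiki.zh.txt is the file produced by the script above):

# Peek at the first extracted article (one article per line, UTF-8 encoded).
import codecs

with codecs.open('wiki.zh.txt', encoding='utf-8') as f:
    first_article = f.readline()
print first_article[:200]  # first 200 characters of the first article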


Next we segment the text into words. For this step I wrote a short Java program; the pipeline is: traditional-to-simplified conversion -> word segmentation (with stopword removal) -> removal of unwanted tokens. I could not be sure whether the wiki text was all traditional Chinese or a mix of traditional and simplified, so I converted everything to simplified; otherwise the traditional and simplified forms of the same word would be treated by word2vec as two different words.

One thing to watch during segmentation: besides removing stopwords, we need a segmenter that recognizes person names. MMSeg was my initial choice, but I dropped it precisely because it does not recognize proper nouns, brands, or person names as single tokens; it splits each character into its own token. That behavior is fine for a search engine, but it is clearly wrong here and loses a lot of information. In the end I chose the word segmentation library (word分詞), and the results were reasonably satisfactory.

Which tokens to drop is a matter of taste; I removed purely numeric tokens and tokens mixing digits with Chinese characters, because I don't need them and to me they are just noise.
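My segmentation was done in a short Java program using the word segmenter, which isn't reproduced here. Purely as an illustration, below is a rough Python sketch of an equivalent pipeline using the opencc and jieba packages (these packages, the stopword file name, and the output file name are assumptions, not what I actually ran): traditional-to-simplified conversion, segmentation, stopword removal, and dropping numeric or digit-plus-Chinese tokens.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Hypothetical sketch of the preprocessing pipeline described above.
# Assumes the opencc and jieba packages and a stopword file stopwords.txt;
# my actual implementation was a Java program using the word segmenter.
import codecs
import re

import jieba
from opencc import OpenCC

cc = OpenCC('t2s')  # traditional -> simplified (config name may vary by opencc package version)

# load one stopword per line (hypothetical file)
with codecs.open('stopwords.txt', encoding='utf-8') as f:
    stopwords = set(line.strip() for line in f if line.strip())

# drop purely numeric tokens and tokens mixing digits with Chinese characters
unwanted = re.compile(u'^[0-9]+$|^[0-9]+[\u4e00-\u9fa5]+$')

def keep(token):
    return token.strip() and token not in stopwords and not unwanted.match(token)

with codecs.open('wiki.zh.txt', encoding='utf-8') as fin, \
     codecs.open('wiki.zh.seg.txt', 'w', encoding='utf-8') as fout:
    for line in fin:                   # one article per line
        simplified = cc.convert(line)  # normalize to simplified Chinese
        tokens = [t for t in jieba.cut(simplified) if keep(t)]
        fout.write(' '.join(tokens) + '\n')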

After segmentation, the file keeps the one-article-per-line format, now with the line split into individual space-separated tokens. Spot-checking proper nouns and person names, they are mostly segmented as expected.

4. Model Training

The code is as follows:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
 
import logging
import os.path
import sys
import multiprocessing
 
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence
 
if __name__ == '__main__':
    program = os.path.basename(sys.argv[0])
    logger = logging.getLogger(program)
 
    logging.basicConfig(format='%(asctime)s: %(levelname)s: %(message)s')
    logging.root.setLevel(level=logging.INFO)
    logger.info("running %s" % ' '.join(sys.argv))
 
    # check and process input arguments
    if len(sys.argv) < 4:
        print "Usage: python train_word2vec_model.py <segmented-corpus.txt> <model-output> <vector-output>"
        sys.exit(1)
    inp, outp1, outp2 = sys.argv[1:4]
 
    # 400-dimensional vectors, a context window of 5, words seen fewer than
    # 5 times discarded, and one worker per CPU core
    model = Word2Vec(LineSentence(inp), size=400, window=5, min_count=5,
            workers=multiprocessing.cpu_count())
 
    # trim unneeded model memory = use(much) less RAM
    #model.init_sims(replace=True)
    model.save(outp1)
    model.save_word2vec_format(outp2, binary=False)

Run the script on the segmented corpus, for example: python train_word2vec_model.py wiki.zh.text.jian.seg.utf-8 wiki.zh.text.model wiki.zh.text.vector

The run proceeds roughly as follows:

2015-03-11 18:50:02,586: INFO: running train_word2vec_model.py wiki.zh.text.jian.seg.utf-8 wiki.zh.text.model wiki.zh.text.vector
2015-03-11 18:50:02,592: INFO: collecting all words and their counts
2015-03-11 18:50:02,592: INFO: PROGRESS: at sentence #0, processed 0 words and 0 word types
2015-03-11 18:50:12,476: INFO: PROGRESS: at sentence #10000, processed 12914562 words and 254662 word types
2015-03-11 18:50:20,215: INFO: PROGRESS: at sentence #20000, processed 22308801 words and 373573 word types
2015-03-11 18:50:28,448: INFO: PROGRESS: at sentence #30000, processed 30724902 words and 460837 word types
...
2015-03-11 18:52:03,498: INFO: PROGRESS: at sentence #210000, processed 143804601 words and 1483608 word types
2015-03-11 18:52:07,772: INFO: PROGRESS: at sentence #220000, processed 149352283 words and 1521199 word types
2015-03-11 18:52:11,639: INFO: PROGRESS: at sentence #230000, processed 154741839 words and 1563584 word types
2015-03-11 18:52:12,746: INFO: collected 1575172 word types from a corpus of 156430908 words and 232894 sentences
2015-03-11 18:52:13,672: INFO: total 278291 word types after removing those with count<5
2015-03-11 18:52:13,673: INFO: constructing a huffman tree from 278291 words
2015-03-11 18:52:29,323: INFO: built huffman tree with maximum node depth 25
2015-03-11 18:52:29,683: INFO: resetting layer weights
2015-03-11 18:52:38,805: INFO: training model with 4 workers on 278291 vocabulary and 400 features, using 'skipgram'=1 'hierarchical softmax'=1 'subsample'=0 and 'negative sampling'=0
2015-03-11 18:52:49,504: INFO: PROGRESS: at 0.10% words, alpha 0.02500, 15008 words/s
2015-03-11 18:52:51,935: INFO: PROGRESS: at 0.38% words, alpha 0.02500, 44434 words/s
2015-03-11 18:52:54,779: INFO: PROGRESS: at 0.56% words, alpha 0.02500, 53965 words/s
2015-03-11 18:52:57,240: INFO: PROGRESS: at 0.62% words, alpha 0.02491, 52116 words/s
2015-03-11 18:52:58,823: INFO: PROGRESS: at 0.72% words, alpha 0.02494, 55804 words/s
2015-03-11 18:53:03,649: INFO: PROGRESS: at 0.94% words, alpha 0.02486, 58277 words/s
2015-03-11 18:53:07,357: INFO: PROGRESS: at 1.03% words, alpha 0.02479, 56036 words/s
......
2015-03-11 19:22:09,002: INFO: PROGRESS: at 98.38% words, alpha 0.00044, 85936 words/s
2015-03-11 19:22:10,321: INFO: PROGRESS: at 98.50% words, alpha 0.00044, 85971 words/s
2015-03-11 19:22:11,934: INFO: PROGRESS: at 98.55% words, alpha 0.00039, 85940 words/s
2015-03-11 19:22:13,384: INFO: PROGRESS: at 98.65% words, alpha 0.00036, 85960 words/s
2015-03-11 19:22:13,883: INFO: training on 152625573 words took 1775.1s, 85982 words/s
2015-03-11 19:22:13,883: INFO: saving Word2Vec object under wiki.zh.text.model, separately None
2015-03-11 19:22:13,884: INFO: not storing attribute syn0norm
2015-03-11 19:22:13,884: INFO: storing numpy array 'syn0' to wiki.zh.text.model.syn0.npy
2015-03-11 19:22:20,797: INFO: storing numpy array 'syn1' to wiki.zh.text.model.syn1.npy
2015-03-11 19:22:40,667: INFO: storing 278291x400 projection weights into wiki.zh.text.vector
Building the model took about 30 minutes.

Let's look at the contents of wiki.zh.text.vector:

Each line holds one word's vector: the word itself followed by the value of each of its 400 dimensions.
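As a sanity check, the text-format vectors can be loaded back with gensim. A minimal sketch, assuming the old gensim API used throughout this post (newer gensim versions expose this as gensim.models.KeyedVectors.load_word2vec_format):

# Load the text-format vectors back (old gensim API; newer versions use KeyedVectors).
from gensim.models import Word2Vec

vectors = Word2Vec.load_word2vec_format("wiki.zh.text.vector", binary=False)
print vectors[u"足球"][:10]  # first 10 of the 400 dimensions for this word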

Let's test our model:

Python 2.7.10 |Anaconda 2.3.0 (64-bit)| (default, May 28 2015, 17:02:03) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://binstar.org
>>> import gensim                                              # 1. import gensim
WARNING (theano.configdefaults): g++ not detected ! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. To remove this warning, set Theano flags cxx to an empty string.
>>> model = gensim.models.Word2Vec.load("wiki.zh.text.model")  # 2. load the model
>>> model.most_similar(u"足球")                                # 3. words most similar to 足球 (football)
[(u'\u7532\u7ea7', 0.72623610496521), (u'\u8054\u8d5b', 0.6967484951019287), (u'\u4e59\u7ea7', 0.6815086603164673), (u'\u5973\u5b50\u8db3\u7403', 0.6662559509277344), (u'\u8377\u5170\u8db3\u7403', 0.6462257504463196), (u'\u8db3\u7403\u961f', 0.6444228887557983), (u'\u5fb7\u56fd\u8db3\u7403', 0.6352497935295105), (u'\u8db3\u7403\u534f\u4f1a', 0.6313000917434692), (u'\u6bd4\u5229\u65f6\u676f', 0.6311478614807129), (u'\u4ff1\u4e50\u90e8', 0.6295265555381775)]

Decoded, the result is:

[(u'甲級', 0.72623610496521), (u'聯賽', 0.6967484951019287), (u'乙級', 0.6815086603164673), (u'女子足球', 0.6662559509277344), (u'荷蘭足球', 0.6462257504463196), (u'足球隊', 0.6444228887557983), (u'德國足球', 0.6352497935295105), (u'足球協會', 0.6313000917434692), (u'比利時杯', 0.6311478614807129), (u'俱樂部', 0.6295265555381775)]

More examples:

兔子 (rabbit)

[(u'一隻', 0.5688103437423706), (u'小狗', 0.5381371974945068), (u'來點', 0.5336571931838989), (u'貓', 0.5334546566009521), (u'老鼠', 0.5177739858627319), (u'青蛙', 0.4972209334373474), (u'狗', 0.4893607497215271), (u'狐狸', 0.48909318447113037), (u'貓咪', 0.47951626777648926), (u'兔', 0.4779919385910034)]

股票 (stock)

[(u'交易所', 0.7432450652122498), (u'證券', 0.7410451769828796), (u'ipo', 0.7379931211471558), (u'股價', 0.7343258857727051), (u'期貨', 0.7162749767303467), (u'每股', 0.700008749961853), (u'kospi', 0.6965476274490356), (u'換股', 0.6927754282951355), (u'創業板', 0.6897038221359253), (u'收盤報', 0.6847120523452759)]

音樂 (music)

[(u'流行', 0.6373772621154785), (u'錄影帶', 0.6004574298858643), (u'音樂演奏', 0.5833497047424316), (u'starships', 0.5774279832839966), (u'樂風', 0.5701465606689453), (u'流行搖滾', 0.5658928155899048), (u'流行樂', 0.5657000541687012), (u'雷鬼音樂', 0.5649317502975464), (u'後搖滾', 0.5644392371177673), (u'節目音樂', 0.5602964162826538)]

電影 (movie)

[(u'懸疑片', 0.7044901847839355), (u'本片', 0.6980072259902954), (u'喜劇片', 0.6958730220794678), (u'影片', 0.6780649423599243), (u'長片', 0.6747604608535767), (u'動作片', 0.6695533990859985), (u'該片', 0.6645846366882324), (u'科幻片', 0.6598016023635864), (u'愛情片', 0.6590006947517395), (u'此片', 0.6568557024002075)]

招標 (tendering / bidding)

[(u'批出', 0.6294339895248413), (u'工程招標', 0.6275389194488525), (u'標書', 0.613800048828125), (u'公開招標', 0.5919119715690613), (u'開標', 0.5917631387710571), (u'流標', 0.5791728496551514), (u'決標', 0.5737971067428589), (u'施工', 0.5641534328460693), (u'中標者', 0.5528303980827332), (u'建設期', 0.5518919825553894)]

女兒 (daughter)

[(u'妻子', 0.8230685591697693), (u'兒子', 0.8067737817764282), (u'丈夫', 0.7952680587768555), (u'母親', 0.7670352458953857), (u'小女兒', 0.7492039203643799), (u'親生', 0.7171255350112915), (u'生下', 0.7126915454864502), (u'嫁給', 0.7088421583175659), (u'孩子', 0.703400731086731), (u'弟弟', 0.7023398876190186)]

Comparing the similarity of two words:

>>> model.similarity(u"足球",u"運動")   
0.1059533903509985
>>> model.similarity(u"足球",u"球類")
0.26147174351997882
>>> model.similarity(u"足球",u"籃球")
0.56019543381248371
>>> model.similarity(u"IT",u"程式設計師")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/hermes/anaconda/lib/python2.7/site-packages/gensim/models/word2vec.py", line 1279, in similarity
    return dot(matutils.unitvec(self[w1]), matutils.unitvec(self[w2]))
  File "/home/hermes/anaconda/lib/python2.7/site-packages/gensim/models/word2vec.py", line 1259, in __getitem__
    return self.syn0[self.vocab[words].index]
KeyError: u'IT'
A quick analysis:

The similarity of "足球" (football) and "運動" (sports) is only 0.1, and "足球" vs. "球類" (ball games) is 0.26: hypernym-hyponym pairs do not score particularly high, although the narrower the category, the better the score. "足球" vs. "籃球" (basketball) scores 0.56, so words at the same level of a hierarchy are noticeably more similar. When I tried to compute the similarity between "IT" and "程式設計師" (programmer), it raised an error: the word "IT" is not in the model's vocabulary, presumably because it was treated as "it" and filtered out with the stopwords.
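To avoid that KeyError, the vocabulary can be checked before asking for a similarity. A small sketch, assuming the old gensim API where the vocabulary is exposed as model.vocab (newer versions put it under model.wv):

# Guard against out-of-vocabulary words before calling similarity.
def safe_similarity(model, w1, w2):
    if w1 not in model.vocab or w2 not in model.vocab:
        return None  # the word was pruned (min_count, stopword filtering, ...)
    return model.similarity(w1, w2)

print safe_similarity(model, u"IT", u"程式設計師")    # None: "IT" is not in the vocabulary
print safe_similarity(model, u"java", u"程式設計師")  # ~0.47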

>>> model.similarity("java",u"程式設計師")
0.47254847166534308
>>> model.similarity("java","asp")    
0.39323879449275156
>>> model.similarity("java","cpp")
0.50034941548721434
>>> model.similarity("java",u"介面")
0.4277060809963118
>>> model.similarity("java",u"運動")
0.017056843004573947
>>> model.similarity("java",u"化妝")
0.00014340386119819637
>>> model.similarity(u"眼線",u"眼影")  
0.46869071954910552
The similarities of "java" vs. "運動" (sports) and "java" vs. "化妝" (makeup) are tiny, so the further apart two words are in topic, the lower the score, as one would hope.

That wraps up model training. There is clearly plenty of room for improvement, tied to the size of the corpus, the quality of the segmenter, and so on, but this experiment ends here. With word2vec, what we really care about is how it performs in concrete application tasks; in our project we intend to use it to automatically label clustering results. If you have other applications in mind, I'd be happy to discuss them. If you want this model, contact me via comments or private message.

Final notes

a) For the many readers who messaged me about a Java implementation of the similarity method:

	/**
	 * Similarity between two words (dot product of their vectors).
	 * Note: this equals cosine similarity only if the vectors were
	 * normalized to unit length when the model was loaded; otherwise
	 * it is a raw dot product.
	 */
	public Float similarity(String word0, String word1) {
		float[] wv0 = getWordVector(word0);   // null if the word is out of vocabulary
		float[] wv1 = getWordVector(word1);

		if (wv1 == null || wv0 == null) {
			return null;
		}
		float score = 0;
		for (int i = 0; i < size; i++) {      // size = vector dimensionality (e.g. 400)
			score += wv1[i] * wv0[i];
		}
		return score;
	}
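For comparison, gensim's similarity (see the traceback above) is the dot product of the two unit-normalized vectors, i.e. cosine similarity. A small numpy sketch of that computation:

# Cosine similarity, matching gensim's dot(unitvec(v1), unitvec(v2)).
import numpy as np

def cosine_similarity(v1, v2):
    v1 = np.asarray(v1, dtype=np.float32)
    v2 = np.asarray(v2, dtype=np.float32)
    return float(np.dot(v1 / np.linalg.norm(v1), v2 / np.linalg.norm(v2)))

print cosine_similarity(model[u"java"], model[u"程式設計師"])  # matches model.similarity(u"java", u"程式設計師")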

b) On sharing the trained model, which is what most people asked about:

Password: j15o