
Scraping Weibo Comments and Extracting the Main Keywords (Part 1)

I was handed a natural language processing task: scrape comments from medical-industry Weibo posts, extract the main keywords, and classify them along the way. The ultimate goal is to reply to these comments automatically, but what I have been given so far is only the preliminary stage, so I will break the goal down and implement it step by step.

1. The first thing to build is the crawler. Weibo itself provides an API we can query, so all we need is to find a suitable medical Weibo account and, under that account, posts with a reasonably large number of comments.

Logging into Weibo on a phone or PC and searching for '醫生' (doctor), the top five results are 新浪愛問醫生 plus a cluster of its regional sub-accounts, and there is also an account called 婭醫生. I checked their comment counts and found that 婭醫生's posts attract the most commenters (perhaps because she is pretty), so that is the account I picked as my target.

Looking more closely at her timeline, most posts are about gynecology, and the most popular one has reached more than 10,000 comments and reposts, as shown in the screenshot below. Click the comments (on a PC you have to click '檢視更多' / 'view more') to reach the comment page for that post; its address is https://m.weibo.cn/api/comments/show?id=4187837902903300 (on the desktop site the post and its comments share one page and the URL is far too long, while the mobile comment URL follows a regular pattern). A small one-page probe of this endpoint is sketched after the full listing below. I originally planned to crawl several posts but, to keep things simple, ended up crawling just one; also, to crawl Weibo comments you must be logged in yourself (a cookie is required), so get this groundwork ready in advance. Here is the code:

import sys
import requests
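# Python 2 idiom: reload(sys) re-exposes setdefaultencoding so utf-8 text can be handled without explicit encoding everywhere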
reload(sys)
sys.setdefaultencoding('utf8')
import time
import random
import codecs

agents = [
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)"]

cookies = [
"SINAGLOBAL=6061592354656.324.1489207743838; un=18240343109; TC-V5-G0=52dad2141fc02c292fc30606953e43ef; wb_cusLike_2140170130=N; _s_tentry=login.sina.com.cn; Apache=5393750164131.485.1511882292296; ULV=1511882292314:55:14:7:5393750164131.485.1511882292296:1511789163477; TC-Page-G0=1e758cd0025b6b0d876f76c087f85f2c; TC-Ugrow-G0=e66b2e50a7e7f417f6cc12eec600f517; login_sid_t=7cbd20d7f5c121ef83f50e3b28a77ed7; cross_origin_proto=SSL; WBStorage=82ca67f06fa80da0|undefined; UOR=,,login.sina.com.cn; WBtopGlobal_register_version=573631b425a602e8; crossidccode=CODE-tc-1EjHEO-2SNIe8-y00Hd0Yq79mGw3l1975ae; SSOLoginState=1511882345; SCF=AvFiX3-W7ubLmZwXrMhoZgCv_3ZXikK7fhjlPKRLjog0OIIQzSqq7xsdv-_GhEe8XWdkHikzsFJyqtvqej6OkaM.; SUB=_2A253GQ45DeThGeRP71IQ9y7NyDyIHXVUb3jxrDV8PUNbmtAKLWrSkW9NTjfYoWTfrO0PkXSICRzowbfjExbQidve; SUBP=0033WrSXqPxfM725Ws9jqgMF55529P9D9WFaVAdSwLmvOo1VRiSlRa3q5JpX5KzhUgL.FozpSh5pS05pe052dJLoIfMLxKBLBonL122LxKnLB.qL1-z_i--fiKyFi-2Xi--fi-2fiKyFTCH8SFHF1C-4eFH81FHWSE-RebH8SE-4BC-RSFH8SFHFBbHWeEH8SEHWeF-RegUDMJ7t; SUHB=04W-u1HCo6armH; ALF=1543418344; wvr=6"]

def readfromtxt(filename):
    file = codecs.open(u'D:/pythondata/spider/web/'+filename, "r",'utf-8')
    text = file.read()
    file.close()
    return text

def writeintxt(comment_dict, filename):
    # Append one line per weibo id: id####comment1####comment2####...
    output = codecs.open(u"D:/pythondata/spider/web/"+filename, 'a+', 'utf-8')
    for weibo_id, comment_list in comment_dict.items():
        comment_str = ""
        for comment in comment_list:
            comment_str = comment_str + comment.__str__().replace('$$', '') + "####"
        output.write(weibo_id + "####" + comment_str + '\n')
    output.close()

user_agent = random.choice(agents)
cookie = random.choice(cookies)
headers = {
    'User-agent' : user_agent,
    'Host' : 'm.weibo.cn',
    'Accept' : 'application/json, text/plain, */*',
    'Accept-Language' : 'zh-CN,zh;q=0.8',
    'Accept-Encoding' : 'gzip, deflate, sdch, br',
    'Referer' : 'https://m.weibo.cn/u/**********',
    'Cookie' : cookie,
    'Connection' : 'keep-alive',
}
# *** replace with a usable Sina Weibo id of your own

base_url = 'https://m.weibo.cn/api/comments/show?id='
weibo_id_list = readfromtxt('weibo_id.txt').split('\n')
result_dict = {}
for weibo_id in weibo_id_list:
    try:
        record_list = []
        i=1
        SIGN = 1
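        # page through the comment API until a page comes back with ok != 1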
        while(SIGN):
            # url = base_url + weibo_id.split(',')[1] + '&page=' + str(i)
            url = base_url + str(weibo_id) + '&page=' + str(i)
            resp = requests.get(str(url), headers=headers, timeout=200)
            jsondata = resp.json()
            if jsondata.get('ok') == 1:
                SIGN = 1
                i = i + 1
                data = jsondata.get('data')
                for d in data.get('data'):
                    comment = d.get('text').replace('$$','')
#                    like_count = d.get('like_counts')
#                    user_id = d.get("user").get('id')
#                    user_name = d.get("user").get('screen_name').replace('$$','')
#                    one_record = user_id.__str__()+'$$'+like_count.__str__()+'$$'+user_name.__str__()+'$$'+ comment.__str__()
                    record_list.append(comment)
            else:
                SIGN = 0
        result_dict[weibo_id] = record_list
        time.sleep(random.randint(2,3))
    except:
        #print(traceback.print_exc())
        print(weibo_id)
        print('*'*100)
        pass
print("ok")writeintxt(result_dict,'comment1.txt')基本思路是將要爬的幾條微博放入txt檔案中,讓程式逐條自動爬取。

This gives us a dict of the scraped comments, result_dict, as well as the comment list for the last Weibo post, record_list.
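As a small preview of that step, here is a sketch of how comment1.txt can be read back into a dict. It assumes nothing beyond the id####comment####comment#### line format that writeintxt produces, plus a simple regex strip, since the text field returned by m.weibo.cn usually carries HTML markup (links, emoji spans); the helper name read_comments and the variable loaded_comments are my own.

# -*- coding: utf-8 -*-
# Minimal sketch: load comment1.txt back into a dict for the NLP step.
# Assumes the id####comment####comment#### line format written by writeintxt.
import codecs
import re

def read_comments(filename):
    comment_dict = {}
    f = codecs.open(u'D:/pythondata/spider/web/' + filename, 'r', 'utf-8')
    for line in f.read().split('\n'):
        if not line.strip():
            continue
        parts = line.split('####')
        weibo_id = parts[0]
        # drop empty trailing fields and strip any HTML tags from the comment body
        comments = [re.sub(r'<[^>]+>', '', p) for p in parts[1:] if p]
        comment_dict[weibo_id] = comments
    f.close()
    return comment_dict

loaded_comments = read_comments('comment1.txt')
print('loaded comments for %d weibo posts' % len(loaded_comments))

With the comments back in memory like this, the next part can move straight on to tokenisation and keyword extraction.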

In the next part I will walk through how to apply natural language processing to these comments. The full code is available in my Gitee snippets: https://gitee.com/goskiller/codes/0nfd3oxbcs41qarjtk7iy98

References:
http://www.cnblogs.com/zhzhang/p/7208928.html
http://www.cnblogs.com/chenyang920/p/7205597.html
https://blog.csdn.net/qq_21238927/article/details/80172619
https://blog.csdn.net/sinat_34022298/article/details/75943272
http://www.cnblogs.com/flippedkiki/p/7123549.html