
[Python crawler: Weibo] Scraping all of Wang Sicong's Weibo posts

1. Preparation:

  • Proxy IP. There are many free proxy lists online, e.g. Xici free proxies (http://www.xicidaili.com/); pick one and test that it actually works.
  • Request analysis. Capture the requests to find the URLs that serve the Weibo content. For the mobile web version, the API addresses can be read directly from the browser's developer tools.
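Before crawling, it is worth a quick check that the chosen proxy is usable. A minimal sketch of wiring a proxy into `urllib` (the address is the one used later in the article; whether it is still alive is not guaranteed):

```python
import urllib.request

def make_proxy_opener(proxy_addr):
    # Build an opener that routes plain-HTTP requests through the proxy.
    proxy = urllib.request.ProxyHandler({'http': proxy_addr})
    return urllib.request.build_opener(proxy, urllib.request.HTTPHandler)

opener = make_proxy_opener("122.241.72.191:808")
# To really test the proxy, fetch any page through it and check the status, e.g.:
# opener.open("http://httpbin.org/ip", timeout=10)
```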

The following endpoints were obtained through browser debugging (they also appear in the code below):

Profile endpoint: https://m.weibo.cn/api/container/getIndex?type=uid&value={uid}

Weibo list endpoint: https://m.weibo.cn/api/container/getIndex?type=uid&value={uid}&containerid={containerid}&page={page}
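The two endpoints differ only in their query parameters, so they can be assembled with `urllib.parse.urlencode`; a small sketch (parameter names taken from the code below):

```python
from urllib.parse import urlencode

BASE = 'https://m.weibo.cn/api/container/getIndex'

def profile_api(uid):
    # Profile endpoint: returns userInfo plus the tabsInfo block.
    return BASE + '?' + urlencode({'type': 'uid', 'value': uid})

def weibo_list_api(uid, containerid, page):
    # Weibo list endpoint: pages through the user's posts.
    return BASE + '?' + urlencode(
        {'type': 'uid', 'value': uid, 'containerid': containerid, 'page': page})

print(profile_api('1826792401'))
# https://m.weibo.cn/api/container/getIndex?type=uid&value=1826792401
```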

2. Complete code:

import urllib.request
import json
import time

id = '1826792401'  # Weibo uid to scrape: Wang Sicong, https://m.weibo.cn/u/1826792401
proxy_addr = "122.241.72.191:808"  # proxy IP (replace with a working one)


# Open the given URL through the proxy and return the decoded response body
def use_proxy(url, proxy_addr):
    req = urllib.request.Request(url)
    req.add_header("User-Agent", "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.221 Safari/537.36 SE 2.X MetaSr 1.0")
    proxy = urllib.request.ProxyHandler({'http': proxy_addr})
    opener = urllib.request.build_opener(proxy, urllib.request.HTTPHandler)
    urllib.request.install_opener(opener)
    data = urllib.request.urlopen(req).read().decode('utf-8', 'ignore')
    return data


# Get the containerid of the user's Weibo tab; it is required when fetching posts
def get_containerid(url):
    data = use_proxy(url, proxy_addr)
    content = json.loads(data).get('data')
    containerid = None  # stay defined even if no 'weibo' tab is found
    for tab in content.get('tabsInfo').get('tabs'):
        if tab.get('tab_type') == 'weibo':
            containerid = tab.get('containerid')
    return containerid


# Fetch the user's basic profile: nickname, profile URL, avatar,
# follow count, follower count, gender, level, etc.
def get_userInfo(id):
    url = 'https://m.weibo.cn/api/container/getIndex?type=uid&value='+id  # profile endpoint
    data = use_proxy(url, proxy_addr)
    userInfo = json.loads(data).get('data').get('userInfo')
    profile_image_url = userInfo.get('profile_image_url')
    description = userInfo.get('description')
    profile_url = userInfo.get('profile_url')
    verified = userInfo.get('verified')
    guanzhu = userInfo.get('follow_count')
    name = userInfo.get('screen_name')
    fensi = userInfo.get('followers_count')
    gender = userInfo.get('gender')
    urank = userInfo.get('urank')

    print("Nickname: "+name+"\n"
          + "Profile URL: "+profile_url+"\n"
          + "Avatar URL: "+profile_image_url+"\n"
          + "Verified: "+str(verified)+"\n"
          + "Description: "+description+"\n"
          + "Following: "+str(guanzhu)+"\n"
          + "Followers: "+str(fensi)+"\n"
          + "Gender: "+gender+"\n"
          + "Level: "+str(urank)+"\n")


# Fetch the user's posts and append them to a text file; each record holds
# the post text, detail-page URL, like count, comment count and repost count
def get_weibo(id, file):
    url = 'https://m.weibo.cn/api/container/getIndex?type=uid&value='+id
    containerid = get_containerid(url)  # constant per user, so fetch it once
    i = 1
    while True:
        weibo_url = url+'&containerid='+containerid+'&page='+str(i)
        print(weibo_url)
        try:
            data = use_proxy(weibo_url, proxy_addr)
            content = json.loads(data).get('data')
            cards = content.get('cards')
            if cards:  # an empty list means we are past the last page
                for j in range(len(cards)):
                    print("page "+str(i)+", post "+str(j))
                    card_type = cards[j].get('card_type')
                    if card_type == 9:  # card_type 9 is an ordinary post
                        mblog = cards[j].get('mblog')
                        attitudes_count = mblog.get('attitudes_count')
                        comments_count = mblog.get('comments_count')
                        created_at = mblog.get('created_at')
                        reposts_count = mblog.get('reposts_count')
                        scheme = cards[j].get('scheme')
                        text = mblog.get('text')
                        with open(file, 'a', encoding='utf-8') as fh:
                            fh.write("page "+str(i)+", post "+str(j)+"\n")
                            fh.write("URL: "+str(scheme)+"\n"
                                     + "Posted: "+str(created_at)+"\n"
                                     + "Text: "+text+"\n"
                                     + "Likes: "+str(attitudes_count)+"\n"
                                     + "Comments: "+str(comments_count)+"\n"
                                     + "Reposts: "+str(reposts_count)+"\n")
                i += 1
                time.sleep(0.05)
            else:
                break
        except Exception as e:
            print(e)
            break  # stop instead of retrying the same page forever


if __name__ == "__main__":
    print('Start---')
    file = id+".txt"
    get_userInfo(id)
    get_weibo(id, file)
    print('Done---')
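To see what get_containerid is doing without hitting the network, here is the same tab-scanning logic run against a hand-made sample of the response shape (the field names come from the code above; the values are invented for illustration):

```python
import json

# Trimmed, invented sample of the JSON the profile endpoint returns.
sample = json.dumps({
    "data": {
        "tabsInfo": {
            "tabs": [
                {"tab_type": "profile", "containerid": "2302831826792401"},
                {"tab_type": "weibo", "containerid": "1076031826792401"},
            ]
        }
    }
})

def extract_containerid(raw):
    # Same scan as get_containerid: keep the containerid of the 'weibo' tab.
    for tab in json.loads(raw)["data"]["tabsInfo"]["tabs"]:
        if tab.get("tab_type") == "weibo":
            return tab.get("containerid")
    return None

print(extract_containerid(sample))  # 1076031826792401
```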

3. The original author's view:

When scraping a site, the mobile (m) site is usually the first choice, then the WAP site, and the PC site last. This is not absolute: sometimes the PC site carries the most complete information, and if you need all of it, the PC site is the right choice. Mobile sites usually prefix the domain with m, which is why this article targets m.weibo.cn.

Thanks to the original author for working out the Weibo API endpoints and the overall approach.
