
(Latest) Using a Crawler to Inflate CSDN Blog View Counts — Personally Tested and Working

Note: This post was written by the blogger word by word and line by line — it took real effort. Please respect the original work, thank you!

1. Overview

Preface: Two days ago I published my first post, https://blog.csdn.net/qq_41782425/article/details/84934224, and found it was getting very few views. Annoyed, I immediately thought of using a crawler to boost the read count, and started hammering out code on the spot.

Analysis: At first I assumed CSDN had no anti-crawling measures, so I used the urllib2 library to hit my first article in an endless while True loop. But printing response.url showed that the response URL had become https://passport.csdn.net/login?xxxxxxx — the request never reached the target address at all. Repeatedly visiting the blog from a browser also triggers a login prompt, so a crawler gets bounced to the login page in exactly the same way. My fix was to include the cookie of a logged-in session in the request headers; with that cookie attached, requests are no longer redirected from the target address to the login page.
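
To make the redirect check concrete, here is a minimal sketch of the idea (my own illustration, not part of the final script; the cookie value is a placeholder you must copy from a logged-in browser session):

import urllib2

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36",
    "Cookie": "your cookie",  # cookie of a logged-in CSDN session
}
request = urllib2.Request("https://blog.csdn.net/qq_41782425/article/details/84934224", headers=headers)
response = urllib2.urlopen(request)
# urllib2 follows redirects, so if the final URL is the passport login page,
# the cookie is missing or has expired
if response.url.startswith("https://passport.csdn.net/login"):
    print "redirected to login - refresh your cookie"
else:
    print "reached the article page:", response.url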

Note: I originally planned to crawl with the Scrapy framework, but the job is too small — there is no need to crawl the whole site — so a single crawler .py file does it. To avoid having my IP banned for requesting too densely, I use proxy IPs from the free proxy site https://www.kuaidaili.com/free/inha/, plus a list of User-Agent strings to hide my machine's real UA.

Note: CSDN is nowhere near Taobao's level of anti-crawling defenses, so scraping it is easy.

2. Enough Chit-Chat, Straight to the Code

1. USER-AGENT code:

Dodging anti-crawler checks by hiding the local UA is an indispensable step (a quick search turns up plenty of User-Agent lists online).

USER_AGENTS = [
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36 OPR/26.0.1656.60',
        'Opera/8.0 (Windows NT 5.1; U; en)',
        'Mozilla/5.0 (Windows NT 5.1; U; en; rv:1.8.1) Gecko/20061208 Firefox/2.0.0 Opera 9.50',
        'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; en) Opera 9.50',
        'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:34.0) Gecko/20100101 Firefox/34.0',
        'Mozilla/5.0 (X11; U; Linux x86_64; zh-CN; rv:1.9.2.10) Gecko/20100922 Ubuntu/10.10 (maverick) Firefox/3.6.10',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.57.2 (KHTML, like Gecko) Version/5.1.7 Safari/534.57.2',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36',
        'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
        'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.133 Safari/534.16',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/2.0 Safari/536.11',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER',
        'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; LBBROWSER)',
        'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.84 Safari/535.11 SE 2.X MetaSr 1.0',
        'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SV1; QQDownload 732; .NET4.0C; .NET4.0E; SE 2.X MetaSr 1.0)',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.122 UBrowser/4.0.3214.0 Safari/537.36'
    ]

2. URL code:

Prepare the URLs to be visited (stored in the url_list list; these are the addresses of my own CSDN pages, including the downloads page, forum posts, etc.)

Note: This spreads the traffic out and keeps any single page from being detected as continuously accessed within a very short time.

url_list = [
        "https://blog.csdn.net/qq_41782425/article/details/84934224",
        "https://blog.csdn.net/qq_41782425/article/category/8519763",
        "https://me.csdn.net/qq_41782425",
        "https://me.csdn.net/download/qq_41782425",
        "https://me.csdn.net/bbs/qq_41782425"
    ]

3. Proxy code:

Note: I handle proxies a bit differently from most people: I wrote a get_proxy method that scrapes proxy IPs straight off the proxy site and stores them in a class attribute, ready for use elsewhere in the code — no need for the showy open-and-read file approach.

Note: The code is simple, so I originally left it uncommented (ask in the comments below if anything is unclear; this method scrapes the latest free proxies live, and it works brilliantly).

    def get_proxy(self):
        self.page += 1
        headers = {"User-Agent" : "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"}
        request = urllib2.Request("https://www.kuaidaili.com/free/inha/"+str(self.page), headers=headers)
        html = urllib2.urlopen(request).read()
        content = etree.HTML(html)
        ip = content.xpath('//td[@data-title="IP"]/text()')
        port = content.xpath('//td[@data-title="PORT"]/text()')
        # Pair each IP with its port, keeping only entries not already in the pool
        for addr, p in zip(ip, port):
            if addr + ':' + p not in self.proxy:
                self.proxy.append(addr + ':' + p)
        if self.proxy:
            print "Now using proxy IPs from page " + str(self.page)
            self.spider()
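
Free proxies die quickly; the spider below simply tolerates failures with an error counter, but if you prefer, you can weed out dead proxies as you scrape them. A small helper sketch (my own addition, not in the original code; the method name and test URL are hypothetical choices):

    def check_proxy(self, proxy, timeout=3):
        # Returns True if the proxy answers a simple request within the timeout
        try:
            opener = urllib2.build_opener(urllib2.ProxyHandler({"http": proxy}))
            opener.open("http://www.csdn.net", timeout=timeout)
            return True
        except Exception:
            return False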

4. Spider code:

Note: Build a handler object with urllib2's ProxyHandler class, then build a global opener from it with build_opener; once installed, every subsequent request can be sent with a plain urlopen() call.
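
In isolation, the pattern looks like this (a minimal sketch; the proxy address below is a made-up placeholder):

import urllib2

# Handler that routes http traffic through the given proxy server
httpproxy_handler = urllib2.ProxyHandler({"http": "123.57.76.102:8118"})  # placeholder proxy
opener = urllib2.build_opener(httpproxy_handler)
# Install the opener globally so every later urlopen() call goes through the proxy
urllib2.install_opener(opener)
response = urllib2.urlopen("http://www.csdn.net")
print response.url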

Note: To make things clearer, I added comments this time (if you think it's well written, give it a like — thanks).

    def spider(self):
        num = 0      # counts visits
        err_num = 0  # counts errors/exceptions
        while True:
            # Pick a random UA, proxy, and target URL for each request
            user_agent = random.choice(self.USER_AGENTS)
            proxy = random.choice(self.proxy)
            referer = random.choice(self.url_list)
            headers = {
                # "Host": "blog.csdn.net",
                "Connection": "keep-alive",
                "Cache-Control": "max-age=0",
                "Upgrade-Insecure-Requests": "1",
                # "User-Agent": user_agent,  # set below via add_header instead
                "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
                # "Referer": "https://blog.csdn.net/qq_41782425/article/details/84934224",
                # "Accept-Encoding": "gzip, deflate, br",
                "Accept-Language": "zh-CN,zh;q=0.9",
                "Cookie": "your cookie"
            }
            try:
                # Build a handler object from a dict of proxy type and proxy server IP:PORT
                httpproxy_handler = urllib2.ProxyHandler({"http": proxy})
                opener = urllib2.build_opener(httpproxy_handler)
                urllib2.install_opener(opener)
                request = urllib2.Request(referer, headers=headers)
                request.add_header("User-Agent", user_agent)
                response = urllib2.urlopen(request)
                html = response.read()
                # Parse the response string into an HTML document with etree.HTML
                content = etree.HTML(html)
                # Match the read count with XPath
                read_num = content.xpath('//span[@class="read-count"]/text()')
                # Join the result list into a string
                new_read_num = ''.join(read_num)
                # Only blog.csdn.net/qq_41782425/article/details/84934224 has this span,
                # so matching the other pages returns an empty string
                if len(new_read_num) != 0:
                    print new_read_num

                num += 1
                print 'Visit #' + str(num)
                print response.url + " proxy ip: " + str(proxy)
                time.sleep(1)
                # After 100 visits, leave the loop and call get_proxy for the next page of proxies
                if num > 100:
                    break
            except Exception as result:
                err_num += 1
                print "Error (%d): %s" % (err_num, result)
                # After 30 errors, reset the page counter so proxies are fetched
                # again from page 1, and leave the loop
                if err_num >= 30:
                    self.__init__()
                    break
        # Once the loop exits, fetch a fresh page of proxy IPs
        # (note that spider() and get_proxy() call each other, adding one stack
        # frame per ~100 visits - fine for a simple script like this)
        print "Fetching a new page of proxy IPs"
        self.get_proxy()

5. Launch code:

if __name__ == "__main__":
    CsdnSpider().get_proxy()

6. Results:

D:\PycharmProjects\Web_Crawler\venv\Scripts\python.exe D:/PycharmProjects/Web_Crawler/practice/csdnSpider.py
Now using proxy IPs from page 1
Visit #1
https://blog.csdn.net/qq_41782425/article/category/8519763 proxy ip: 123.162.168.192:8088
Visit #2
https://me.csdn.net/bbs/qq_41782425 proxy ip: 60.182.22.244:8118
Visit #3
https://me.csdn.net/qq_41782425 proxy ip: 222.186.45.144:9000
Visit #4
https://me.csdn.net/qq_41782425 proxy ip: 27.24.215.49:37644
Visit #5
https://me.csdn.net/qq_41782425 proxy ip: 101.76.209.69:9000
Visit #6
https://me.csdn.net/bbs/qq_41782425 proxy ip: 175.11.194.73:80
Visit #7
https://me.csdn.net/download/qq_41782425 proxy ip: 111.230.254.195:47891
Visit #8
https://blog.csdn.net/qq_41782425/article/category/8519763 proxy ip: 123.162.168.192:53281
Visit #9
https://me.csdn.net/bbs/qq_41782425 proxy ip: 180.118.134.103:8118
Read count: 419
Visit #10
https://blog.csdn.net/qq_41782425/article/details/84934224 proxy ip: 111.198.77.169:47891
Visit #11
https://me.csdn.net/qq_41782425 proxy ip: 27.24.215.49:8118
Read count: 419
Visit #12
https://blog.csdn.net/qq_41782425/article/details/84934224 proxy ip: 221.224.136.211:9000
Visit #13
https://me.csdn.net/qq_41782425 proxy ip: 221.224.136.211:8118
Visit #14
https://me.csdn.net/download/qq_41782425 proxy ip: 222.171.251.43:35101
Read count: 419
Visit #15
https://blog.csdn.net/qq_41782425/article/details/84934224 proxy ip: 111.230.254.195:8118
Read count: 419
Visit #16
https://blog.csdn.net/qq_41782425/article/details/84934224 proxy ip: 180.118.86.75:1080
Visit #17
https://me.csdn.net/qq_41782425 proxy ip: 101.76.209.69:47891
Visit #18
https://me.csdn.net/bbs/qq_41782425 proxy ip: 222.186.45.144:37644
Visit #19
https://me.csdn.net/qq_41782425 proxy ip: 123.162.168.192:42164
Visit #20
https://blog.csdn.net/qq_41782425/article/category/8519763 proxy ip: 180.118.134.103:47891

Note: Because the visits come in fast, the displayed read count lags a little — just refresh the page in the browser after a while. Also, the read count printed here is that of the first blog post only, while the crawler visits the five pages at random, so the count climbs gradually, which is also safer. Once the visit count passes 100, the crawler switches to the second page of proxies. Everything tested fine.

Note: When the printed response.url becomes https://passport.csdn.net/login, it's time to replace your cookie.

Read count: 420
Visit #98
https://blog.csdn.net/qq_41782425/article/details/84934224 proxy ip: 47.95.9.128:8118
Visit #99
https://blog.csdn.net/qq_41782425/article/category/8519763 proxy ip: 111.198.77.169:80
Visit #100
https://me.csdn.net/qq_41782425 proxy ip: 111.198.77.169:8088
Visit #101
https://me.csdn.net/qq_41782425 proxy ip: 124.235.135.210:8118
Fetching a new page of proxy IPs
Now using proxy IPs from page 2
Read count: 420
Visit #1
https://blog.csdn.net/qq_41782425/article/details/84934224 proxy ip: 221.224.136.211:35101
Read count: 420
Visit #2
https://blog.csdn.net/qq_41782425/article/details/84934224 proxy ip: 115.223.243.136:13289
Visit #3
https://me.csdn.net/download/qq_41782425 proxy ip: 180.104.107.46:9000
Visit #4
https://me.csdn.net/qq_41782425 proxy ip: 139.196.125.96:9000
Visit #5
https://me.csdn.net/download/qq_41782425 proxy ip: 180.118.128.250:8118
Visit #6
https://me.csdn.net/download/qq_41782425 proxy ip: 180.118.134.103:53281
Visit #7
https://me.csdn.net/bbs/qq_41782425 proxy ip: 182.107.13.217:9000
Visit #8
https://blog.csdn.net/qq_41782425/article/category/8519763 proxy ip: 115.223.243.136:9000
Read count: 421
Visit #9
https://blog.csdn.net/qq_41782425/article/details/84934224 proxy ip: 115.223.243.136:9000
Visit #10
https://me.csdn.net/qq_41782425 proxy ip: 121.232.194.69:9000
Visit #11
https://blog.csdn.net/qq_41782425/article/category/8519763 proxy ip: 139.196.125.96:9000
Visit #12
https://me.csdn.net/qq_41782425 proxy ip: 175.11.194.73:9000

3. Complete Code

Note: Try writing it yourself — the logic is simple and so is the implementation.

# coding:utf-8

import urllib2
from lxml import etree
import random
import time

class CsdnSpider():
    USER_AGENTS = [
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36 OPR/26.0.1656.60',
        'Opera/8.0 (Windows NT 5.1; U; en)',
        'Mozilla/5.0 (Windows NT 5.1; U; en; rv:1.8.1) Gecko/20061208 Firefox/2.0.0 Opera 9.50',
        'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; en) Opera 9.50',
        'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:34.0) Gecko/20100101 Firefox/34.0',
        'Mozilla/5.0 (X11; U; Linux x86_64; zh-CN; rv:1.9.2.10) Gecko/20100922 Ubuntu/10.10 (maverick) Firefox/3.6.10',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.57.2 (KHTML, like Gecko) Version/5.1.7 Safari/534.57.2',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36',
        'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
        'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.133 Safari/534.16',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/2.0 Safari/536.11',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER',
        'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; LBBROWSER)',
        'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.84 Safari/535.11 SE 2.X MetaSr 1.0',
        'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SV1; QQDownload 732; .NET4.0C; .NET4.0E; SE 2.X MetaSr 1.0)',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.122 UBrowser/4.0.3214.0 Safari/537.36'
    ]
    url_list = [
        "https://blog.csdn.net/qq_41782425/article/details/84934224",
        "https://blog.csdn.net/qq_41782425/article/category/8519763",
        "https://blog.csdn.net/qq_41782425/article/details/84993073",
        "https://me.csdn.net/qq_41782425",
        "https://me.csdn.net/download/qq_41782425",
        "https://me.csdn.net/bbs/qq_41782425"
    ]
    def __init__(self):
        self.page = 0    # current page number on the free-proxy site
        self.proxy = []  # pool of scraped "ip:port" strings
    def get_proxy(self):
        self.page += 1
        headers = {"User-Agent" : "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"}
        request = urllib2.Request("https://www.kuaidaili.com/free/inha/"+str(self.page), headers=headers)
        html = urllib2.urlopen(request).read()
        content = etree.HTML(html)
        ip = content.xpath('//td[@data-title="IP"]/text()')
        port = content.xpath('//td[@data-title="PORT"]/text()')
        # Pair each IP with its port, keeping only entries not already in the pool
        for addr, p in zip(ip, port):
            if addr + ':' + p not in self.proxy:
                self.proxy.append(addr + ':' + p)
        if self.proxy:
            print "Now using proxy IPs from page " + str(self.page)
            self.spider()

    def spider(self):
        num = 0      # counts visits
        err_num = 0  # counts errors/exceptions
        while True:
            # Pick a random UA, proxy, and target URL for each request
            user_agent = random.choice(self.USER_AGENTS)
            proxy = random.choice(self.proxy)
            referer = random.choice(self.url_list)
            headers = {
                # "Host": "blog.csdn.net",
                "Connection": "keep-alive",
                "Cache-Control": "max-age=0",
                "Upgrade-Insecure-Requests": "1",
                # "User-Agent": user_agent,  # set below via add_header instead
                "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
                # "Referer": "https://blog.csdn.net/qq_41782425/article/details/84934224",
                # "Accept-Encoding": "gzip, deflate, br",
                "Accept-Language": "zh-CN,zh;q=0.9",
                "Cookie": "your cookie"
            }
            try:
                # Build a handler object from a dict of proxy type and proxy server IP:PORT
                httpproxy_handler = urllib2.ProxyHandler({"http": proxy})
                opener = urllib2.build_opener(httpproxy_handler)
                urllib2.install_opener(opener)
                request = urllib2.Request(referer, headers=headers)
                request.add_header("User-Agent", user_agent)
                response = urllib2.urlopen(request)
                html = response.read()
                # Parse the response string into an HTML document with etree.HTML
                content = etree.HTML(html)
                # Match the read count with XPath
                read_num = content.xpath('//span[@class="read-count"]/text()')
                # Join the result list into a string
                new_read_num = ''.join(read_num)
                # Only blog.csdn.net/qq_41782425/article/details/84934224 has this span,
                # so matching the other pages returns an empty string
                if len(new_read_num) != 0:
                    print new_read_num

                num += 1
                print 'Visit #' + str(num)
                print response.url + " proxy ip: " + str(proxy)
                time.sleep(1)
                # After 100 visits, leave the loop and call get_proxy for the next page of proxies
                if num > 100:
                    break
            except Exception as result:
                err_num += 1
                print "Error (%d): %s" % (err_num, result)
                # After 30 errors, reset the page counter so proxies are fetched
                # again from page 1, and leave the loop
                if err_num >= 30:
                    self.__init__()
                    break
        # Once the loop exits, fetch a fresh page of proxy IPs
        print "Fetching a new page of proxy IPs"
        self.get_proxy()


if __name__ == "__main__":
    CsdnSpider().get_proxy()