
A Detailed Guide to the settings.py File in the Scrapy Crawler Framework

 
# -*- coding: utf-8 -*-

# Scrapy settings for demo1 project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# http://doc.scrapy.org/en/latest/topics/settings.html
# http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'demo1'  # Name of the Scrapy project. It is used to build the default User-Agent and for logging; it is filled in automatically when you create a project with the startproject command.

SPIDER_MODULES = ['demo1.spiders']  # List of modules where Scrapy looks for spiders. Default: ['xxx.spiders']
NEWSPIDER_MODULE = 'demo1.spiders'  # Module where new spiders are created by the genspider command. Default: 'xxx.spiders'

# Default User-Agent used when crawling, unless overridden
#USER_AGENT = 'demo1 (+http://www.yourdomain.com)'

# If enabled, Scrapy will obey the site's robots.txt policy
ROBOTSTXT_OBEY = True

# Maximum number of concurrent requests performed by the Scrapy downloader. Default: 16
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests to the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3  # Time (in seconds) the downloader waits before fetching the next page from the same website. Use it to throttle the crawl and reduce load on the server; fractional values such as 0.25 are supported.

# Only one of the following two delay settings takes effect
#CONCURRENT_REQUESTS_PER_DOMAIN = 16  # Maximum number of concurrent requests to any single domain.
#CONCURRENT_REQUESTS_PER_IP = 16  # Maximum number of concurrent requests to any single IP. If non-zero, CONCURRENT_REQUESTS_PER_DOMAIN is ignored and this setting is used instead; in other words, the concurrency limit applies per IP rather than per domain. It also affects DOWNLOAD_DELAY: if CONCURRENT_REQUESTS_PER_IP is non-zero, the download delay is enforced per IP rather than per domain.

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable the Telnet console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#    'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'demo1.middlewares.Demo1SpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'demo1.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'demo1.pipelines.Demo1Pipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True

# Initial download delay
#AUTOTHROTTLE_START_DELAY = 5

# Maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60

# Average number of requests Scrapy should send in parallel to each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0

# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

 

 

A few of the parameters explained:

ROBOTSTXT_OBEY = True ----------- whether to obey robots.txt

CONCURRENT_REQUESTS = 16 ----------- number of concurrent requests, default 16

AUTOTHROTTLE_START_DELAY = 3 ----------- initial throttling delay when downloading starts

AUTOTHROTTLE_MAX_DELAY = 60 ----------- maximum delay under high latency

The last few settings control the local HTTP cache. When enabled, Scrapy reads from the local cache first, which speeds up crawling; enable them as your situation requires:

HTTPCACHE_ENABLED = True

HTTPCACHE_EXPIRATION_SECS = 0

HTTPCACHE_DIR = 'httpcache'

HTTPCACHE_IGNORE_HTTP_CODES = []

HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

All of the above can be enabled as the project needs, but two settings are best enabled every time. Uncommenting them by hand in every project file gets tedious; ideally they would be switched on automatically as soon as the project is created.
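Rather than editing settings.py in every project, an individual spider can also override project settings through Scrapy's custom_settings class attribute. Conceptually the merge works like a plain dict update, with the per-spider values winning. Here is a minimal plain-Python sketch of that priority order (the setting names are real Scrapy settings; the dict merge below only illustrates the idea, it is not Scrapy's actual implementation):

```python
# Plain-Python sketch of Scrapy's settings priority: per-spider
# custom_settings override the project-wide values from settings.py.
project_settings = {
    'ROBOTSTXT_OBEY': True,
    'CONCURRENT_REQUESTS': 16,
    'HTTPCACHE_ENABLED': False,  # project default: cache disabled
}

spider_custom_settings = {
    'HTTPCACHE_ENABLED': True,   # this spider wants the cache on
    'HTTPCACHE_EXPIRATION_SECS': 0,
}

# Higher-priority settings win, exactly like a dict merge:
effective = {**project_settings, **spider_custom_settings}
print(effective['HTTPCACHE_ENABLED'])    # True
print(effective['CONCURRENT_REQUESTS'])  # 16
```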


#DEFAULT_REQUEST_HEADERS = {
#    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#    'Accept-Language': 'en',
#}

These are the browser request headers. Many sites inspect the client's headers; Douban, for example, checks the User-Agent on every request and returns a 403 otherwise, so this is worth enabling.
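To see what these headers amount to outside of Scrapy, here is a minimal sketch that attaches the same header dict to a plain stdlib urllib request (the URL is a placeholder; the request is only built, never sent):

```python
# Attach the same default headers to a plain urllib request.
import urllib.request

default_request_headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
}

req = urllib.request.Request('http://example.com/',
                             headers=default_request_headers)
# urllib normalizes header names to capitalized form internally
print(req.get_header('Accept-language'))  # en
```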

#USER_AGENT = 'Chirco (+http://www.yourdomain.com)'

This one is crucial. Most servers, when requests start arriving quickly, check the User-Agent first, and Scrapy's default identifies itself as Scrapy (e.g. Scrapy/1.1). We should enable this setting and change it to a real browser User-Agent, such as: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1

Better still, have the User-Agent rotate automatically at random.

The following code picks a User-Agent at random from a predefined list, so different pages are crawled under different identities.

Add the following to settings.py:

 

 
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'randoms.rotate_useragent.RotateUserAgentMiddleware': 400,
}

The code for rotate_useragent.py is:

 

 
# -*- coding: utf-8 -*-
import random
from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware


class RotateUserAgentMiddleware(UserAgentMiddleware):
    def __init__(self, user_agent=''):
        self.user_agent = user_agent

    def process_request(self, request, spider):
        # Pick a User-Agent at random from the list below
        ua = random.choice(self.user_agent_list)
        if ua:
            print('User-Agent:' + ua)
            request.headers.setdefault('User-Agent', ua)

    # The default user_agent_list covers Chrome, IE, Firefox, Mozilla, Opera and Netscape
    user_agent_list = [
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
        "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    ]
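The core of the middleware is nothing more than random.choice plus headers.setdefault. Here is a standalone sketch of that logic in plain Python, with a shortened three-entry list and an ordinary dict standing in for request.headers:

```python
# Standalone sketch of the rotation logic: pick a random User-Agent
# and set it only if the request doesn't already carry one.
import random

user_agent_list = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
]

headers = {}  # stands in for request.headers
ua = random.choice(user_agent_list)
headers.setdefault('User-Agent', ua)  # only sets the key if it is absent

print(headers['User-Agent'] in user_agent_list)  # True
```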


 

Run the spider and you will see output like this:

 

 
2017-04-16 00:07:40 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
User-Agent:Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3
2017-04-16 00:07:40 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET http://www.jianshu.com/robots.txt> from <GET http://jianshu.com/robots.txt>
User-Agent:Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1
2017-04-16 00:07:41 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.jianshu.com/robots.txt> (referer: None)
User-Agent:Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24
2017-04-16 00:07:41 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET http://www.jianshu.com/> from <GET http://jianshu.com/>
User-Agent:Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3


You can also move user_agent_list into the settings file. Add one line to the rotate_useragent file:

from randoms.settings import user_agent_list

Running it again produces the same effect.

Full example: http://download.csdn.net/detail/u011781521/9815390