Incremental Crawling with scrapy-deltafetch
By 阿新 · Published 2019-02-11
Preface
In previous posts we always crawled the target site in full: every time the spider runs, it fetches every link all over again. That is pretty wasteful, because in most cases there is no need to re-fetch links we have already crawled, unless you want to periodically refresh the data on those pages. Back to the point: this post shows how to do incremental crawling in Scrapy with the scrapy-deltafetch plugin, using recipe pages from 美食傑 (meishij.net) as the example.
Walkthrough
Installing scrapy-deltafetch
pip install scrapy-deltafetch
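Note: scrapy-deltafetch keeps its state in a Berkeley DB file via the bsddb3 package. If the install fails while building bsddb3, you probably need the Berkeley DB headers on your system first (e.g. libdb-dev on Debian/Ubuntu; the exact package name is an assumption, check for your platform).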
Creating the project and spider
$ scrapy startproject meishijie PycharmProjects/meishijie
$ cd PycharmProjects/meishijie
$ scrapy genspider meishi meishij.net
settings.py
Open the generated project in PyCharm, edit settings.py, and add the following:
SPIDER_MIDDLEWARES = {
    'scrapy_deltafetch.DeltaFetch': 100,
}
DELTAFETCH_ENABLED = True
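Two more settings may be useful here. Both names come from the plugin's README as I remember it, so treat this as a sketch and verify them for your version:

# Optional (names per the plugin's README; verify for your version):
DELTAFETCH_DIR = 'deltafetch'   # directory for the fingerprint database
                                # (by default it lives under the project's .scrapy data dir)
DELTAFETCH_RESET = False        # True wipes the stored fingerprints at startup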
meishi.py
Edit the spider file meishi.py. Since this is just a demo, I'll keep the logic minimal:
# -*- coding: utf-8 -*-
import scrapy


class MeishiSpider(scrapy.Spider):
    name = 'meishi'
    allowed_domains = ['meishij.net']
    start_urls = ['http://www.meishij.net/yaoshanshiliao/jibingtiaoli/weiyan/']

    def parse(self, response):
        # Each recipe card sits in a div whose class contains 'i_w'
        for ms in response.xpath("//div[contains(@class,'i_w')]"):
            item = {}
            title = ms.xpath("div/div/strong/text()").extract_first()
            hot = ms.xpath("div/div/span/text()").extract_first()
            item["title"] = title
            item["hot"] = hot
            yield item
        # Follow the pagination link, if any
        next_page = response.xpath("//a[@class='next']/@href").extract_first()
        print("next page:", next_page)
        if next_page:
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse)
That's it; items.py, pipelines.py, and middlewares.py need no changes, the defaults are fine.
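One detail worth knowing before we run it: DeltaFetch keys each request by its fingerprint, but the plugin's README describes a deltafetch_key entry in request meta that overrides the key. A minimal sketch of the pagination request using it (the meta key name is taken from that README, so verify it for your version):

if next_page:
    yield scrapy.Request(
        response.urljoin(next_page),
        callback=self.parse,
        # Dedup on the raw URL instead of the default request fingerprint
        meta={'deltafetch_key': next_page},
    )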
Running it
Run the following command and Scrapy happily gets to work:
$ scrapy crawl meishi
Before long, every link has been crawled. Check the run log:
2018-01-27 14:11:18 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'deltafetch/stored': 1500,
'downloader/request_bytes': 25530,
'downloader/request_count': 76,
'downloader/request_method_count/GET': 76,
'downloader/response_bytes': 1280237,
'downloader/response_count': 76,
'downloader/response_status_count/200': 76,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2018, 1, 27, 6, 11, 18, 772936),
'item_scraped_count': 1500,
'log_count/DEBUG': 1577,
'log_count/INFO': 7,
'memusage/max': 52551680,
'memusage/startup': 52551680,
'request_depth_max': 74,
'response_received_count': 76,
'scheduler/dequeued': 75,
'scheduler/dequeued/memory': 75,
'scheduler/enqueued': 75,
'scheduler/enqueued/memory': 75,
'start_time': datetime.datetime(2018, 1, 27, 6, 10, 55, 141746)}
2018-01-27 14:11:18 [scrapy.core.engine] INFO: Spider closed (finished)
As the stats show, Scrapy issued 76 requests in total (including the entry page) and scraped 1500 items, and DeltaFetch stored fingerprint state for the requests behind those 1500 items.
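If you are curious where those fingerprints live, you can peek at the database directly. A minimal sketch, assuming the plugin's default storage location (a Berkeley DB file named after the spider under the project's .scrapy/deltafetch directory; adjust the path if yours differs):

import bsddb3

# Path is an assumption based on the plugin's defaults
db = bsddb3.hashopen('.scrapy/deltafetch/meishi.db', 'r')
print(len(db), 'fingerprints stored')
for key in list(db.keys())[:5]:
    print(key)  # raw request-fingerprint bytes
db.close()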
Testing incremental crawling
Run scrapy crawl meishi again:
2018-01-27 14:27:39 [scrapy_deltafetch.middleware] INFO: Ignoring already visited: <GET http://www.meishij.net/shiliao.php?st=3&cid=178&sortby=update&page=2>
2018-01-27 14:27:39 [scrapy.core.engine] INFO: Closing spider (finished)
2018-01-27 14:27:39 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'deltafetch/skipped': 1,
'deltafetch/stored': 20,
'downloader/request_bytes': 471,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 17879,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2018, 1, 27, 6, 27, 39, 940365),
'item_scraped_count': 20,
'log_count/DEBUG': 23,
'log_count/INFO': 8,
'memusage/max': 52420608,
'memusage/startup': 52420608,
'request_depth_max': 1,
'response_received_count': 2,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2018, 1, 27, 6, 27, 39, 547330)}
2018-01-27 14:27:39 [scrapy.core.engine] INFO: Spider closed (finished)
As you can see, apart from the entry request, every previously crawled link is skipped. Done!
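To make the skipping less magical, here is a deliberately simplified sketch of the idea behind DeltaFetch (not the plugin's actual source): a spider middleware that drops outgoing requests whose fingerprint it has already seen, and records a response's request fingerprint whenever that response produces items.

from scrapy import Request
from scrapy.utils.request import request_fingerprint

class NaiveDeltaFetch:
    def __init__(self):
        # In-memory stand-in; the real plugin persists this to Berkeley DB
        self.seen = set()

    def process_spider_output(self, response, result, spider):
        for r in result:
            if isinstance(r, Request):
                if request_fingerprint(r) in self.seen:
                    continue  # already visited: drop the request
                yield r
            else:
                # This response produced an item: remember its request
                self.seen.add(request_fingerprint(response.request))
                yield r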
Addendum
If you want to re-crawl links that have already been fetched, reset DeltaFetch's cache by passing your spider a deltafetch_reset argument, for example:
$ scrapy crawl meishi -a deltafetch_reset=1
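Two alternatives that should have the same effect, though I'd verify both against the plugin's README for your version: set DELTAFETCH_RESET = True in settings.py (or pass -s DELTAFETCH_RESET=1 on the command line), or simply delete the fingerprint database file under the project's .scrapy/deltafetch directory.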