
Python 3 Web Crawler Development in Practice -- Scrapy

1. Architecture

Engine (Scrapy): handles the data flow of the entire system and triggers events (the core of the framework).
Scheduler: accepts requests sent over by the Engine, pushes them into a queue, and returns them when the Engine asks again. You can think of it as a priority queue of URLs (the addresses, or links, of the pages to crawl); it decides which URL to crawl next and also removes duplicate URLs.
Downloader: downloads page content and hands it back to the spiders (the Scrapy downloader is built on Twisted, an efficient asynchronous model).
Spiders: do the main work, extracting the information we need, i.e. the so-called entities (Items), from specific pages. They can also extract links so that Scrapy goes on to crawl the next page.
Item Pipeline: handles the entities the spiders extract from pages. Its main jobs are persisting items, validating them, and cleaning out unwanted information. After a page has been parsed by a spider, the items are sent to the Item Pipeline and processed through several specific steps in order.
Downloader Middlewares: a hook framework between the Scrapy Engine and the Downloader, mainly processing the requests and responses exchanged between them (see the sketch after this list).
Spider Middlewares: a hook framework between the Scrapy Engine and the Spiders, mainly processing the spiders' response input and request output.
Scheduler Middlewares: middleware between the Scrapy Engine and the Scheduler, handling the requests and responses sent from the Engine to the Scheduler.
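
As a concrete illustration of where a downloader middleware plugs in, here is a minimal sketch of one that rotates the User-Agent header of every outgoing request. The class name RandomUserAgentMiddleware and the agent strings are made up for the example; such a class would normally live in the project's middlewares.py and be registered under DOWNLOADER_MIDDLEWARES in settings.py.

import random


class RandomUserAgentMiddleware(object):
    # Illustrative User-Agent pool; replace with real browser strings as needed.
    user_agents = [
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)',
    ]

    def process_request(self, request, spider):
        # Called for every request on its way from the Engine to the Downloader.
        # Returning None lets the request continue through the middleware chain.
        request.headers['User-Agent'] = random.choice(self.user_agents)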

2. Data Flow

The data flow in Scrapy is controlled by the Engine:

(1) The Engine first opens a website, finds the Spider that handles it, and asks that Spider for the first URL to crawl.
(2) The Engine gets the first URL to crawl from the Spider and schedules it as a Request through the Scheduler.
(3) The Engine asks the Scheduler for the next URL to crawl.
(4) The Scheduler returns the next URL to the Engine, and the Engine forwards it to the Downloader through the Downloader Middlewares.
(5) Once the page is downloaded, the Downloader generates a Response for it and sends it back to the Engine through the Downloader Middlewares.
(6) The Engine receives the Response from the Downloader and sends it to the Spider for processing through the Spider Middlewares.
(7) The Spider processes the Response and returns the scraped Items and new Requests to the Engine.
(8) The Engine passes the Items returned by the Spider to the Item Pipeline and the new Requests to the Scheduler.
(9) Steps (2) through (8) repeat until there are no more Requests in the Scheduler; the Engine then closes the website and the crawl ends.

Through the cooperation of these components, the different responsibilities each one takes on, and their support for asynchronous processing, Scrapy makes the most of the available network bandwidth and greatly improves the efficiency of crawling and processing data.

3. Hands-On Practice

Creating a Project

scrapy startproject tutorial
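
Running this command generates a project skeleton roughly like the following (the exact set of files can vary slightly between Scrapy versions):

tutorial/
    scrapy.cfg            # deployment configuration
    tutorial/             # the project's Python module
        __init__.py
        items.py          # Item definitions
        middlewares.py    # spider / downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # directory where the spiders live
            __init__.py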

Creating a Spider

cd tutorial

scrapy genspider quotes quotes.toscrape.com

Creating an Item

An Item is a container for the scraped data, and it is used much like a dictionary. Compared with a dictionary, however, an Item adds an extra protection mechanism that guards against misspelled or undefined fields.

To create an Item, inherit from the scrapy.Item class and define fields of type scrapy.Field. Looking at the target website, the content we can extract is text, author, and tags.

Using the Item

items.py

import scrapy


class TutorialItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    text = scrapy.Field()
    author = scrapy.Field()
    tags = scrapy.Field()
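
A quick check in a Python shell illustrates the protection mechanism mentioned above: assigning to a declared field works like a dictionary, while an undeclared field is rejected (the misspelled key below is deliberate).

from tutorial.items import TutorialItem

item = TutorialItem()
item['text'] = 'Hello'    # fine: text was declared as a Field
item['txet'] = 'oops'     # raises KeyError: 'txet' is not a declared field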


quotes.py

# -*- coding: utf-8 -*-
import scrapy

from tutorial.items import TutorialItem


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    allowed_domains = ["quotes.toscrape.com"]
    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # Each .quote block on the page carries the text, author and tags.
        quotes = response.css('.quote')
        for quote in quotes:
            item = TutorialItem()
            item['text'] = quote.css('.text::text').extract_first()
            item['author'] = quote.css('.author::text').extract_first()
            item['tags'] = quote.css('.tags .tag::text').extract()
            yield item

        # Follow the "next" pagination link, if there is one.
        next_page = response.css('.pager .next a::attr(href)').extract_first()
        if next_page:
            url = response.urljoin(next_page)
            yield scrapy.Request(url=url, callback=self.parse)

The scraped results can be exported with Scrapy's feed exports by passing -o to the crawl command; the format is inferred from the file extension:

scrapy crawl quotes -o quotes.json
scrapy crawl quotes -o quotes.csv
scrapy crawl quotes -o quotes.xml
scrapy crawl quotes -o quotes.pickle
scrapy crawl quotes -o quotes.marshal
scrapy crawl quotes -o ftp://user:pass@ftp.example.com/path/to/quotes.csv
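
Besides running scrapy crawl from the shell, the same crawl can be started from a plain Python script with Scrapy's CrawlerProcess API. A minimal sketch, assuming it is saved as run.py in the project root and that the generated spider module is tutorial/spiders/quotes.py:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from tutorial.spiders.quotes import QuotesSpider

# get_project_settings() loads settings.py, including ITEM_PIPELINES.
process = CrawlerProcess(get_project_settings())
process.crawl(QuotesSpider)
process.start()  # blocks here until the crawl is finished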

4. Using the Item Pipeline

Item Pipeline means "project pipeline". After an Item is generated, it is automatically sent to the Item Pipeline for processing. We commonly use the Item Pipeline for the following operations:

Cleaning HTML data.
Validating the scraped data and checking the scraped fields.
Checking for and discarding duplicate content.
Storing the scraped results in a database.

To write a pipeline, just define a class that implements the process_item() method. Once the Item Pipeline is enabled, this method is called automatically. process_item() must either return a dict or Item object containing the data, or raise a DropItem exception.

process_item() takes two arguments. One is item: every Item generated by the Spider is passed in through it. The other is spider, which is the Spider instance itself.

Modify pipelines.py in the project:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

from scrapy.exceptions import DropItem
import pymongo


class TextPipeline(object):
    """Trim overly long quote text down to a fixed length limit."""

    def __init__(self):
        self.limit = 50

    def process_item(self, item, spider):
        if item['text']:
            if len(item['text']) > self.limit:
                item['text'] = item['text'][0:self.limit].rstrip() + '...'
            return item
        else:
            # Discard items that carry no text at all.
            raise DropItem('Missing Text')


class MongoPipeline(object):
    """Store every item in a MongoDB collection named after its item class."""

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        # Read the connection parameters (MONGO_URI / MONGO_DB) from settings.py.
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DB')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def process_item(self, item, spider):
        name = item.__class__.__name__
        # insert_one() replaces the deprecated insert() in recent pymongo versions.
        self.db[name].insert_one(dict(item))
        return item

    def close_spider(self, spider):
        self.client.close()

settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for tutorial project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'tutorial'

SPIDER_MODULES = ['tutorial.spiders']
NEWSPIDER_MODULE = 'tutorial.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'tutorial (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'tutorial.middlewares.TutorialSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'tutorial.middlewares.TutorialDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'tutorial.pipelines.TextPipeline': 300,
    'tutorial.pipelines.MongoPipeline': 400,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

MONGO_URI = 'localhost'
MONGO_DB = 'tutorial'
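
With the two pipelines registered in ITEM_PIPELINES (lower numbers run first, so TextPipeline at 300 processes each item before MongoPipeline at 400) and the MongoDB connection settings at the bottom, running the spider again stores the trimmed items in MongoDB:

scrapy crawl quotes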

5. Usage of Selector

Selector is a module that can also be used on its own. We can build a selector object directly from the Selector class and then call methods such as xpath() and css() on it to extract data.
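
For example, a selector can be built straight from an HTML string, without any running crawl; the markup below is a made-up snippet just for illustration:

from scrapy import Selector

html = '<html><body><h1>Hello</h1><ul><li>a</li><li>b</li></ul></body></html>'
sel = Selector(text=html)
print(sel.xpath('//h1/text()').extract_first())   # Hello
print(sel.css('li::text').extract())              # ['a', 'b']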