A Preliminary Look at URL Deduplication in Crawlers (Part 1)

Blog, Day 15

Test goal: write our own init_add_request(spider, url: str) helper to mark URLs as already crawled, implementing URL deduplication (this post only covers the test itself).

Tools: Python 3.6, PyCharm, Scrapy

Project contents:

     1. Preparation:

# spider.py

import scrapy
from scrapy.http import Request


class DuanDian(scrapy.Spider):
    name = 'duandian'
    allowed_domains = ['58.com']
    start_urls = ['http://cd.58.com/']

    def parse(self, response):
        # Yield two further requests; the scheduler's dupefilter decides
        # whether each one is actually crawled.
        yield Request('http://bj.58.com', callback=self.parse)
        yield Request('http://wh.58.com', callback=self.parse)
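
For completeness: Scrapy's Request also accepts a dont_filter argument, and passing dont_filter=True lets a single request bypass the scheduler's dupefilter entirely, which is the opposite of what this test sets up. Inside the spider's parse method, for example:

    def parse(self, response):
        # Standard Scrapy API: this request is never deduplicated.
        yield Request('http://bj.58.com', callback=self.parse, dont_filter=True)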

# pipelines.py

# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
from .init_utils import init_add_request


class DuandianPipeline(object):
    def process_item(self, item, spider):
        return item

    def open_spider(self, spider):
        # Register this URL as already seen before the crawl starts.
        init_add_request(spider, 'http://wh.58.com')
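
If several URLs should be pre-registered at once, the same helper can simply be looped over inside open_spider. This is a hypothetical extension of the approach above, not part of the original project; SEEN_URLS is an illustrative name:

    # Hypothetical variant of open_spider: pre-register a list of URLs.
    SEEN_URLS = ['http://wh.58.com', 'http://bj.58.com']

    def open_spider(self, spider):
        for url in self.SEEN_URLS:
            init_add_request(spider, url)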

# main.py  Note: running the crawl this way makes debugging easier

from scrapy.cmdline import execute
execute('scrapy crawl duandian'.split())
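
Running main.py this way is equivalent to invoking the crawler from the project directory on the command line:

scrapy crawl duandian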

# init_utils.py  Note: this helper performs the deduplication

from scrapy.http import Request

def init_add_request(spider, url: str):
    # Grab the dupefilter (df) from the running crawler's scheduler.
    rf = spider.crawler.engine.slot.scheduler.df
    # Passing a Request through request_seen() records its fingerprint,
    # so the scheduler will treat this URL as already crawled.
    request = Request(url)
    rf.request_seen(request)
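
For context, request_seen() comes from Scrapy's default dupefilter (RFPDupeFilter): it reduces a request to a fingerprint, reports whether that fingerprint was seen before, and records it. Below is a minimal sketch of such a filter with the hashing simplified; Scrapy's real fingerprint also covers the method and body, and the class here is illustrative, not Scrapy's code:

import hashlib

class SimpleDupeFilter:
    def __init__(self):
        self.fingerprints = set()

    def request_seen(self, request):
        # Hash the URL into a fingerprint; report True if it was seen
        # before, otherwise record it and report False.
        fp = hashlib.sha1(request.url.encode('utf-8')).hexdigest()
        if fp in self.fingerprints:
            return True
        self.fingerprints.add(fp)
        return False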

     2. Testing

# settings.py  Note: configures the pipelines; below is the default (commented-out) state

# ITEM_PIPELINES = {
#    'duandian.pipelines.DuandianPipeline': 300,
# }

Debug result at this point: all three addresses (cd, bj, and wh) were visited, because the pipeline, and with it init_add_request, never ran.

# Now re-enable the pipeline in settings.py:

ITEM_PIPELINES = {
   'duandian.pipelines.DuandianPipeline': 300,
}

Debug result now: the pre-registered address (http://wh.58.com) is not visited, since its fingerprint was already in the scheduler's dupefilter before the first request was scheduled.
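
Why this works: when the scheduler enqueues a request, it consults the same dupefilter and silently drops anything request_seen() reports as already known. A simplified sketch of that check, modeled on Scrapy's Scheduler.enqueue_request (illustrative, not the exact source):

def enqueue_request(self, request):
    # Drop requests the dupefilter already knows, unless the request
    # opts out of filtering with dont_filter=True.
    if not request.dont_filter and self.df.request_seen(request):
        return False  # dropped: the request is never downloaded
    self.queue.push(request)
    return True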