
Scraping photo galleries with a scrapy-redis distributed spider

Background:

My home connection is slow (around 500 KB/s, sigh), and the site serves large images, so pages load painfully slowly; late at night a single visit could mean a minute of waiting. Frustrating. So I decided to just crawl everything to my local disk and browse at leisure.

Crawl target: (you know the kind)

url: https://www.jpxgyw.com

Why scrapy-redis:

Personally, I only want to crawl galleries that match my taste. With scrapy-redis I can leave the spider running, and whenever I come across a gallery I like, I lpush its url into redis; the spider picks up the url and starts crawling it. This makes the crawl targeted: everything I end up with is hand-picked. What's not to like?

Crawling approach:

Open a gallery you like; our goal is to extract the next-page url and the image urls from each page.

I tried it, and neither response.css nor response.xpath can extract the image urls (likely because the images are injected by JavaScript), so here we use selenium to drive Chrome or PhantomJS and extract the urls we want from the rendered page source.
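As a standalone sketch of that extraction step: assuming the rendered page source contains <img> tags like the made-up snippet below (the real markup may differ), a regular expression can pull out the image paths and rebuild the full urls against the site's image host:

```python
import re

# Hypothetical fragment of rendered page source; not taken from the real site.
page_source = (
    '<img src="/uploadfile/2018/0901/abc.jpg">'
    '<img src="/uploadfile/2018/0901/def.jpg">'
)

# Capture the path between "/uploadfile" and ".jpg", then rebuild full urls.
paths = re.findall(r'src="/uploadfile(.*?)\.jpg"', page_source)
img_urls = ["http://img.xingganyouwu.com/uploadfile" + p + ".jpg" for p in paths]
print(img_urls)
```

This is the same regex idea used in the spider's parse method later on.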

Let's get started:

1. Environment:

My setup is python 3.6.4 + scrapy 1.5.1.

I won't go over setting up scrapy itself; install it with pip3, there are plenty of tutorials online. One extra note: since we are downloading images, the Pillow library is required, and so are selenium and redis. If any are missing, run pip3 install Pillow / pip3 install selenium / pip3 install redis.

Here is the package list from my virtualenv:

Scrapy           1.5.1
Pillow           5.2.0
pywin32          223
requests         2.19.1
selenium         3.14.0
redis

2. Creating the spider

First, activate the virtualenv and create a scrapy project:

 scrapy startproject ScrapyRedisTest

After downloading and extracting the scrapy-redis source, copy the entire scrapy_redis package from its src directory into the root of the ScrapyRedisTest project we just created.

In the ScrapyRedisTest package under the project root, create an images folder to hold the downloaded pictures.

This is the current directory structure:
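Since the original screenshot is unavailable, here is a rough sketch of what the layout should look like at this point (file names other than scrapy_redis and images follow scrapy's defaults; treat it as an approximation):

```
ScrapyRedisTest/              # project root
├── scrapy.cfg
├── scrapy_redis/             # copied in from the scrapy-redis src directory
└── ScrapyRedisTest/
    ├── __init__.py
    ├── images/               # downloaded pictures go here
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        └── __init__.py
```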

3. Writing the spider

With the environment sorted out, let's write the spider.

Editing settings.py (comments included):

You can replace the auto-generated settings.py with the code below as-is.

# -*- coding: utf-8 -*-

import os,sys

# Scrapy settings for the ScrapyRedisTest project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'ScrapyRedisTest'

SPIDER_MODULES = ['ScrapyRedisTest.spiders']
NEWSPIDER_MODULE = 'ScrapyRedisTest.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent

# Parameter read by scrapy's built-in UserAgentMiddleware; here we set a Chrome user agent
USER_AGENT = "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36"

# Obey robots.txt rules
ROBOTSTXT_OBEY = False          # do not obey robots.txt

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'ScrapyRedisTest.middlewares.ScrapyredistestSpiderMiddleware': 543,
#}
SCHEDULER = "scrapy_redis.scheduler.Scheduler"                  # replace the default scheduler with scrapy_redis's
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"      # replace the default dupefilter with scrapy_redis's
#ITEM_PIPELINES = {
#     'scrapy_redis.pipelines.RedisPipeline': 300,
#}

BASE_DIR=os.path.dirname(os.path.abspath(os.path.dirname(__file__)))
sys.path.insert(0,os.path.join(BASE_DIR,'ScrapyRedisTest'))

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'ScrapyRedisTest.middlewares.ScrapyredistestDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'ScrapyRedisTest.pipelines.ScrapyredistestPipeline': 300,
#}
IMAGES_URLS_FIELD="front_image_url"                     # the item field scrapy's built-in ImagesPipeline reads image urls from
project_dir=os.path.abspath(os.path.dirname(__file__))
IMAGES_STORE=os.path.join(project_dir,'images')         # folder where downloaded images are stored

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
AUTOTHROTTLE_ENABLED = True                             # let scrapy adjust the download speed automatically
# The initial download delay
AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
AUTOTHROTTLE_DEBUG = False

One more note: I like to set AUTOTHROTTLE_ENABLED = True. Crawling may be a bit slower, but it lowers the chance of triggering anti-crawler measures (when I crawled lagou.com without it, I got 302-redirected). Besides, it would be bad form to hammer the site at full speed and bring it down.

Editing middlewares.py:

We only want selenium to handle requests for specific urls, so we write a middleware for that.

Copy the code below into the auto-generated middlewares.py; do not overwrite what is already there.

import time
from scrapy.http import HtmlResponse

class JSPageMiddleware_for_jp(object):
    def process_request(self, request, spider):
        if request.url.startswith("https://www.jpxgyw.com"):        # only download these specific urls with selenium
            spider.driver.get(request.url)
            time.sleep(1)
            print("currentUrl", spider.driver.current_url)
            return HtmlResponse(url=spider.driver.current_url,body=spider.driver.page_source,encoding="utf-8",request=request)

Writing the spider file:

Create jp.py under the spiders folder as our spider file.

We initialize selenium.webdriver in __init__ so the browser does not have to be reopened for every new page. (I learned this trick from bobby, an instructor at imooc.com; great teacher, props to him.)

# -*- coding: utf-8 -*-
import re
import scrapy
from scrapy_redis.spiders import RedisSpider
from selenium import webdriver
from scrapy.loader import ItemLoader
from scrapy.xlib.pydispatch import dispatcher
from scrapy import signals

from ScrapyRedisTest.items import JPItem

class JpSpider(RedisSpider):
    name = 'jp'
    allowed_domains = ['www.jpxgyw.com','img.xingganyouwu.com']
    redis_key = 'jp:start_urls'                                             # the redis key the spider listens on

    custom_settings = {
        "AUTOTHROTTLE_ENABLED": True,                                       # enable the auto-throttle extension

        "DOWNLOADER_MIDDLEWARES": {
            'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 1,    # scrapy's built-in middleware that sets the user-agent
            'ScrapyRedisTest.middlewares.JSPageMiddleware_for_jp': 2            # the middleware we wrote ourselves
        },
        "ITEM_PIPELINES": {
            'scrapy.pipelines.images.ImagesPipeline': 1,                    # scrapy's built-in pipeline that downloads the images
        }
    }
    
    current_page=0        # class variable tracking which page we are currently on
    # max_page=17

    @staticmethod
    def judgeFinalPage(body):   # check whether we are past the last page
        ma = re.search(r'性感尤物提醒你,訪問頁面出錯了', body)     # the site's "page not found" message
        return not ma

    def __init__(self,**kwargs):
        # I use PhantomJS here; you can swap in chromedriver instead. executable_path is where
        # phantomjs lives on my machine, so adjust it for yours; on linux it can be omitted.
        self.driver=webdriver.PhantomJS(executable_path="D:\\phantomjs-2.1.1-windows\\bin\\phantomjs.exe")
        super(JpSpider,self).__init__()
        dispatcher.connect(self.spider_closed,signals.spider_closed)

    def spider_closed(self,spider): # quit PhantomJS when the spider exits
        print("spider closed")
        self.driver.quit()

    def parse(self, response):
        if "_" in response.url:                                             # the start url is not the first page
            ma_url_num = re.search(r'_(\d+?)\.html', response.url)          # extract the current page number
            self.current_page=int(ma_url_num.group(1))                      # store it on the spider
            self.current_page = self.current_page + 1


            ma_url = re.search(r'(.*)_\d+?\.html', response.url)            # extract the base url
            nextUrl=ma_url.group(1)+"_"+str(self.current_page)+".html"      # build the next-page url
            print("nextUrl", nextUrl)
        else:                                                               # the start url is the first page
            self.current_page=0                                             # reset

            next_page_num=self.current_page+1
            self.current_page=self.current_page+1

            nextUrl=response.url[:-5]+"_"+str(next_page_num)+".html"        # build the next-page url
            print("nextUrl",nextUrl)
        ma = re.findall(r'src="/uploadfile(.*?)\.jpg', bytes.decode(response.body))
        imgUrls=[]                                                          # collect every image url on this page
        for i in ma:
            imgUrl="http://img.xingganyouwu.com/uploadfile"+i+".jpg"
            imgUrls.append(imgUrl)
            print("imgUrl",imgUrl)

        item_loader = ItemLoader(item=JPItem(), response=response)
        item_loader.add_value("front_image_url", imgUrls)                   # store the urls in the item
        jp_item = item_loader.load_item()
        yield jp_item                                                       # hand the item to the pipeline to download the images

        if self.judgeFinalPage(bytes.decode(response.body)):                # not the last page yet: keep crawling
            yield scrapy.Request(nextUrl, callback=self.parse, dont_filter=True)
        else:
            print("Last page reached!")
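The pagination in parse boils down to a small url transformation; here is a minimal standalone sketch of it (the function name and the album.html path are mine, not part of the project):

```python
import re

def next_page_url(url):
    """.../album.html -> .../album_1.html; .../album_3.html -> .../album_4.html"""
    ma = re.search(r'_(\d+)\.html$', url)
    if ma:                                   # already on a numbered page: bump the number
        base = url[:ma.start()]
        return base + "_" + str(int(ma.group(1)) + 1) + ".html"
    return url[:-len(".html")] + "_1.html"   # first page: append _1

print(next_page_url("https://www.jpxgyw.com/album.html"))
```

The spider tracks the page number in current_page instead, but the url arithmetic is the same.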

Editing items.py:

Don't forget to add the code below to items.py; it declares the item field the scrapy image downloader reads from.

class JPItem(scrapy.Item):
    front_image_url=scrapy.Field()

Running the spider:

I run the project on my Aliyun server (because my home network is too slow...).

First start redis-server and redis-cli. They are simple to launch on both windows and linux, so I'll be lazy and skip the steps here; a quick search will turn them up.

Then cd into the spider's root directory and run scrapy crawl jp. You should see the spider start up and block, waiting for jp:start_urls to appear in redis.

That means the spider is running normally; now go to redis and lpush a url.
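For example, from the command line (the gallery path here is made up; use a real one from the site):

```shell
# push a gallery url onto the list the spider is blocking on
redis-cli LPUSH jp:start_urls "https://www.jpxgyw.com/some_album.html"
```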

After that, the spider gets to work.

Once a url has been crawled there is no need to stop the spider; it keeps listening to redis, so whenever you spot a gallery you like, just lpush its url.

Test results: 321 urls and 894 images crawled, no hiccups.

OK, the rest is up to you; enjoy the pictures in the images folder under the project root.

One last word: this post is really about learning the scrapy framework; everything else goes up in smoke~