
Python Web Scraping --- 2.5 Scrapy in Practice: An Autohome Crawler

Original article: https://www.fkomm.cn/article/2018/8/7/32.html

Purpose

The Scrapy framework provides two Item Pipelines dedicated to downloading files and images:

  • FilesPipeline

  • ImagesPipeline

This article focuses on ImagesPipeline.
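
For reference, the built-in ImagesPipeline can be used as-is with just two settings. A minimal sketch (note that ImagesPipeline depends on the Pillow library for image processing):

# settings.py: minimal setup for the built-in ImagesPipeline
ITEM_PIPELINES = {
    'scrapy.pipelines.images.ImagesPipeline': 1,
}
IMAGES_STORE = 'images'   # directory where downloaded images are saved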

Target analysis:

This time we are crawling Autohome: car.autohome.com.cn. I have recently taken a liking to the Geely Boyue, so I have been browsing a lot of material on this car.

Let's open the Boyue image gallery page:

https://car.autohome.com.cn/pic/series/3788.html
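
Before writing any code, it helps to verify the selectors interactively with scrapy shell (the XPath expressions below are the ones used in the spider later; they match the page structure at the time of writing and may break if the site changes):

$ scrapy shell https://car.autohome.com.cn/pic/series/3788.html
>>> response.xpath('//div[@class="uibox"]/div/text()').get()
>>> response.xpath('//div[contains(@class,"uibox-con")]/ul/li//img/@src').getall()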

Traditional image downloading with the Scrapy framework

Implementation with Scrapy:

1. Create the Scrapy project and spider:

$ scrapy startproject Geely
$ cd Geely
$ scrapy genspider BoYue car.autohome.com.cn
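
These commands generate the standard Scrapy project skeleton:

Geely/
├── scrapy.cfg
└── Geely/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        ├── __init__.py
        └── BoYue.py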

2. Write items.py:

import scrapy

class GeelyItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()

    # the gallery category the image belongs to
    catagory = scrapy.Field()

    # URLs of the images to download
    image_urls = scrapy.Field()

    # filled in by ImagesPipeline with the download results
    images = scrapy.Field()

3. Write the spider:

# -*- coding: utf-8 -*-
import scrapy

# Import CrawlSpider and Rule: a CrawlSpider crawls by rules, so the
# default def parse(self, response) method must be replaced by a
# callback with a different name
from scrapy.spiders import CrawlSpider, Rule

# Import the link extractor
from scrapy.linkextractors import LinkExtractor
from Geely.items import GeelyItem

class BoyueSpider(CrawlSpider):
    name = 'BoYue'
    allowed_domains = ['car.autohome.com.cn']
    start_urls = ['https://car.autohome.com.cn/pic/series/3788.html']

    # callback parses every matched page; because the gallery is
    # paginated we set follow=True so matching links keep being followed
    rules = (
        Rule(LinkExtractor(allow=r'https://car.autohome.com.cn/pic/series/3788.+'),
             callback='parse_page', follow=True),
    )

    def parse_page(self, response):
        catagory = response.xpath('//div[@class="uibox"]/div/text()').get()
        srcs = response.xpath('//div[contains(@class,"uibox-con")]/ul/li//img/@src').getall()

        # map(func, iterable) applies func to every element and we
        # collect the results into a list: dropping the 't_' prefix
        # turns thumbnail URLs into full-size ones, and urljoin() makes
        # the protocol-relative src attributes absolute
        srcs = list(map(lambda x: x.replace('t_', ''), srcs))
        srcs = list(map(lambda x: response.urljoin(x), srcs))
        yield GeelyItem(catagory=catagory, image_urls=srcs)
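
To make the two map() calls concrete, here is what they do to a single src value (the path below is a made-up example, not a real image on the site):

src = '//car3.autoimg.cn/cardfs/product/g26/t_autohomecar__sample.jpg'
src = src.replace('t_', '')   # '//car3.autoimg.cn/cardfs/product/g26/autohomecar__sample.jpg'
url = response.urljoin(src)   # 'https://car3.autoimg.cn/cardfs/product/g26/autohomecar__sample.jpg'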

4. Write the pipeline:

import os
from urllib import request

class GeelyPipeline(object):

    def __init__(self):
        # os.path.dirname() gives this file's directory;
        # os.path.join() appends 'images' to build the save path
        self.path = os.path.join(os.path.dirname(__file__), 'images')

        # create the directory if it does not exist yet
        if not os.path.exists(self.path):
            os.mkdir(self.path)

    def process_item(self, item, spider):

        # save images grouped by category
        catagory = item['catagory']
        urls = item['image_urls']

        catagory_path = os.path.join(self.path, catagory)

        # create the category directory if it does not exist yet
        if not os.path.exists(catagory_path):
            os.mkdir(catagory_path)

        for url in urls:
            # split on '_' and keep the last part as the file name
            image_name = url.split('_')[-1]
            request.urlretrieve(url, os.path.join(catagory_path, image_name))

        return item
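
One weakness of this hand-rolled pipeline is that request.urlretrieve downloads every image synchronously inside process_item, blocking Scrapy's asynchronous engine while it runs. The ImagesPipeline approach introduced below hands the downloads back to Scrapy's scheduler instead.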

5. Edit settings.py:

BOT_NAME = 'Geely'

SPIDER_MODULES = ['Geely.spiders']
NEWSPIDER_MODULE = 'Geely.spiders'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

ITEM_PIPELINES = {
    'Geely.pipelines.GeelyPipeline': 1,
}

6. Run the project:

$ scrapy crawl BoYue

7. Results: the images end up in per-category sub-folders under the images directory created by the pipeline.

Downloading images with ImagesPipeline

Usage steps:

  1. Define an item with the two fields image_urls and images. image_urls holds the URLs of the files to download and must be a list;

  2. When the downloads complete, the pipeline stores the download results in the item's images field, namely the local path, the source URL, and the file's checksum (see the sketch after this list);

  3. Configure IMAGES_STORE in settings.py to specify where the files are saved;

  4. Enable the pipeline by registering it in ITEM_PIPELINES.
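
After a successful run, each item's images field holds one dict per downloaded file, roughly like this (the keys follow the Scrapy docs; the values here are invented for illustration):

item['images']
# [{'url': 'https://car3.autoimg.cn/cardfs/product/g26/autohomecar__sample.jpg',
#   'path': 'full/0a79c54d.jpg',
#   'checksum': 'd41d8cd98f00b204e9800998ecf8427e'}]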

Detailed steps

We modify the project built above.

1. Modify settings.py:

import os

ITEM_PIPELINES = {
    # 'Geely.pipelines.GeelyPipeline': 1,
    # 'scrapy.pipelines.images.ImagesPipeline': 1,
    'Geely.pipelines.GeelyImagesPipeline': 1,
}

# project root directory (the directory containing settings.py)
project_dir = os.path.dirname(__file__)
# where downloaded images are stored
IMAGES_STORE = os.path.join(project_dir, 'images')

2. Rewrite pipelines.py:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

import os

from scrapy.pipelines.images import ImagesPipeline
from Geely import settings

# The urllib-based GeelyPipeline from step 4 above is no longer needed.

# Subclass ImagesPipeline so we can customize the save path
class GeelyImagesPipeline(ImagesPipeline):

    # Called to build the download requests for an item's image_urls
    def get_media_requests(self, item, info):

        # let the parent class build the requests, then attach the item
        # to each request so that file_path() can read the category
        request_objects = super(GeelyImagesPipeline, self).get_media_requests(item, info)
        for request_object in request_objects:
            request_object.item = item
        return request_objects

    # Called when an image is about to be saved; returns the save path
    # relative to IMAGES_STORE
    def file_path(self, request, response=None, info=None):

        # the default path looks like 'full/<sha1-hash>.jpg'
        path = super(GeelyImagesPipeline, self).file_path(request, response, info)

        catagory = request.item.get('catagory')

        # read IMAGES_STORE and make sure the category directory exists
        images_store = settings.IMAGES_STORE
        catagory_path = os.path.join(images_store, catagory)
        if not os.path.exists(catagory_path):
            os.makedirs(catagory_path)

        # save under '<catagory>/<filename>' instead of 'full/<filename>'
        image_name = path.replace('full/', '')
        image_path = os.path.join(catagory, image_name)

        return image_path
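
A version note: since Scrapy 2.4 the item is passed straight into file_path, so attaching it to each request in get_media_requests is no longer necessary. A minimal sketch under that assumption:

import os
from scrapy.pipelines.images import ImagesPipeline

class GeelyImagesPipeline(ImagesPipeline):

    # Scrapy >= 2.4 passes the item as a keyword argument
    def file_path(self, request, response=None, info=None, *, item=None):
        path = super().file_path(request, response, info, item=item)
        # save as '<catagory>/<filename>'; the file store creates any
        # missing directories when it persists the file
        return os.path.join(item['catagory'], os.path.basename(path))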

3. Run the project:

$ scrapy crawl BoYue

You will get the same result as before!