
Python crawling: downloading files with Scrapy


When writing an ordinary script, we take a file's download URL from a website, download it, and write the data to a file or otherwise save it. But all of that has to be written out by hand, bit by bit, and it is not very reusable. To avoid reinventing the wheel, Scrapy provides a very smooth way to download files: only a little glue code is needed to use it.
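If the default storage layout is acceptable, the stock pipeline needs nothing beyond two settings; a minimal sketch (the directory name here is just a placeholder):

```python
# Minimal wiring for Scrapy's built-in FilesPipeline: enable it and point
# FILES_STORE at a download directory. With only this, any URL listed in an
# item's file_urls field is fetched and saved under a hash-based path.
ITEM_PIPELINES = {'scrapy.pipelines.files.FilesPipeline': 1}
FILES_STORE = 'downloads'  # placeholder directory name
```

The rest of this post customizes that default: a spider that collects the URLs, and a pipeline subclass that chooses friendlier file names.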

mat.py

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from weidashang.items import matplotlib

class MatSpider(scrapy.Spider):
    name = "mat"
    allowed_domains = ["matplotlib.org"]
    start_urls = ["https://matplotlib.org/examples"]

    def parse(self, response):
        # Crawl the page listing the example scripts and follow each link
        le = LinkExtractor(restrict_css='div.toctree-wrapper.compound li.toctree-l2')
        for link in le.extract_links(response):
            yield scrapy.Request(url=link.url, callback=self.example)

    def example(self, response):
        # On each script's page, grab the source-file button's href and join
        # it with the base URL to form a complete download URL
        href = response.css('a.reference.external::attr(href)').extract_first()
        url = response.urljoin(href)
        example = matplotlib()
        example['file_urls'] = [url]
        return example

pipelines.py

from os.path import basename, dirname, join
from urllib.parse import urlparse
from scrapy.pipelines.files import FilesPipeline

class MyFilePlipeline(FilesPipeline):
    # Replace the default storage path (a SHA1 hash) with
    # "<parent directory>/<filename>" taken from the download URL
    def file_path(self, request, response=None, info=None):
        path = urlparse(request.url).path
        return join(basename(dirname(path)), basename(path))
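To see what `file_path` produces, here is how the same `urlparse`/`os.path` calls behave on a URL shaped like matplotlib's example-script links (the URL itself is only an illustration):

```python
from os.path import basename, dirname, join
from urllib.parse import urlparse

# Illustrative URL in the same shape as matplotlib's example-script links
url = "https://matplotlib.org/examples/animation/basic_example.py"
path = urlparse(url).path          # "/examples/animation/basic_example.py"
rel = join(basename(dirname(path)), basename(path))
print(rel)                         # animation/basic_example.py
```

So each script ends up under a subdirectory named after its section, rather than under an opaque hash.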

settings.py

ITEM_PIPELINES = {
    'weidashang.pipelines.MyFilePlipeline': 1,
}
FILES_STORE = 'examples_src'

items.py

from scrapy import Item, Field

class matplotlib(Item):
    file_urls = Field()  # read by FilesPipeline: the URLs to download
    files = Field()      # filled in by FilesPipeline with the results
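For reference, once FilesPipeline has processed an item, the download results land in its `files` field. The entry below only illustrates the shape (the values are made up, not real crawl output), with the path computed by the `file_path()` override above:

```python
# Illustrative shape of an item after FilesPipeline has run;
# the URL and checksum values here are made up.
item = {
    'file_urls': ['https://matplotlib.org/examples/animation/basic_example.py'],
    'files': [{
        'url': 'https://matplotlib.org/examples/animation/basic_example.py',
        'path': 'animation/basic_example.py',  # location under FILES_STORE
        'checksum': 'd41d8cd98f00b204e9800998ecf8427e',  # MD5 of the body (made-up value)
    }],
}
```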

run.py

from scrapy.cmdline import execute
execute(['scrapy', 'crawl', 'mat', '-o', 'example.json'])
