
Scraping Stock Quotes with Scrapy

Setting up the coding environment

Installing Scrapy directly can run into errors, so we use Anaconda3 as the environment and install Scrapy from there (troubleshoot any errors that come up on your own).

Create the Scrapy crawler project: open cmd, change to the target directory, and enter:

scrapy startproject stockstar


spiders/        directory holding the spider code (the crawlers are written here)

items.py        the project's item file (containers for the scraped data; items are stored much like Python dicts)

middlewares.py  the project's middleware (a lightweight mechanism for extending Scrapy by plugging in custom code)

pipelines.py    the project's pipelines file (the core processors for scraped items)

settings.py     the project's settings file

scrapy.cfg      the project's configuration file
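The files above correspond to a directory layout roughly like the following (a sketch; the exact contents vary slightly between Scrapy versions):

```
stockstar/
├── scrapy.cfg
└── stockstar/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        └── __init__.py
```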

技術分享圖片

After the project is created, settings.py contains this line:

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

Sometimes we need to turn this off by setting it to False.
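With the setting disabled, the corresponding lines in settings.py look like this:

```python
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
```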

Right-click the project folder and choose Mark Directory as → Sources Root from the context menu; this makes the import statements shorter.

1. Define an item container:

In items.py, write:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy
from scrapy.loader import ItemLoader
from scrapy.loader.processors import TakeFirst

class StockstarItemLoader(ItemLoader):
    # Custom ItemLoader used to store the field content the spider scrapes
    default_output_processor = TakeFirst()

class StockstarItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    code = scrapy.Field()            # stock code
    abbr = scrapy.Field()            # stock short name
    last_trade = scrapy.Field()      # latest price
    chg_ratio = scrapy.Field()       # price change percentage
    chg_amt = scrapy.Field()         # price change amount
    chg_ratio_5min = scrapy.Field()  # 5-minute change percentage
    volumn = scrapy.Field()          # trading volume
    turn_over = scrapy.Field()       # turnover
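The role of TakeFirst as the default output processor is to collapse the list of values a selector extracts into a single value. A minimal pure-Python sketch of that behavior (take_first here is an illustrative stand-in, not Scrapy's actual class):

```python
def take_first(values):
    # Like scrapy.loader.processors.TakeFirst: return the first
    # value that is neither None nor an empty string.
    for value in values:
        if value is not None and value != '':
            return value

print(take_first(['', None, '600000', '600028']))  # → 600000
```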

Add to settings.py:

from scrapy.exporters import JsonItemExporter
# By default, exported Chinese text shows up as hard-to-read Unicode escapes.
# Define a subclass that keeps the original character set
# (set the parent class's ensure_ascii argument to False).
class CustomJsonLinesItemExporter(JsonItemExporter):
    def __init__(self, file, **kwargs):
        super(CustomJsonLinesItemExporter, self).__init__(file, ensure_ascii=False, **kwargs)

# Enable the newly defined exporter class
FEED_EXPORTERS = {
    'json': 'stockstar.settings.CustomJsonLinesItemExporter',
}
DOWNLOAD_DELAY = 0.25
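The effect of ensure_ascii=False can be seen with the standard json module, which Scrapy's JsonItemExporter uses under the hood (the sample item below is made up for illustration):

```python
import json

item = {"abbr": "浦发银行"}  # hypothetical scraped item
print(json.dumps(item))                      # {"abbr": "\u6d66\u53d1\u94f6\u884c"}
print(json.dumps(item, ensure_ascii=False))  # {"abbr": "浦发银行"}
```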

From cmd, change into the project directory and enter:

scrapy genspider stock quote.stockstar.com

which generates the spider code.


stock.py

# -*- coding: utf-8 -*-
import scrapy
from items import StockstarItem, StockstarItemLoader

class StockSpider(scrapy.Spider):
    name = 'stock'  # spider name
    allowed_domains = ['quote.stockstar.com']  # crawl domain
    start_urls = ['http://quote.stockstar.com/stock/ranklist_a_3_1_1.html']  # start URL

    def parse(self, response):  # the crawling logic
        page = int(response.url.split("_")[-1].split(".")[0])  # extract the page number
        item_nodes = response.css('#datalist tr')
        for item_node in item_nodes:
            # scrape each field defined in the item file
            item_loader = StockstarItemLoader(item=StockstarItem(), selector=item_node)
            item_loader.add_css("code", "td:nth-child(1) a::text")
            item_loader.add_css("abbr", "td:nth-child(2) a::text")
            item_loader.add_css("last_trade", "td:nth-child(3) span::text")
            item_loader.add_css("chg_ratio", "td:nth-child(4) span::text")
            item_loader.add_css("chg_amt", "td:nth-child(5) span::text")
            item_loader.add_css("chg_ratio_5min", "td:nth-child(6) span::text")
            item_loader.add_css("volumn", "td:nth-child(7)::text")
            item_loader.add_css("turn_over", "td:nth-child(8)::text")
            stock_item = item_loader.load_item()
            yield stock_item
        if item_nodes:  # keep paging until a page comes back empty
            next_page = page + 1
            next_url = response.url.replace("{0}.html".format(page), "{0}.html".format(next_page))
            yield scrapy.Request(url=next_url, callback=self.parse)
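The page-number arithmetic in parse can be checked in isolation; the snippet below reuses the exact string operations from the spider on the start URL:

```python
url = "http://quote.stockstar.com/stock/ranklist_a_3_1_1.html"
page = int(url.split("_")[-1].split(".")[0])  # trailing "_1.html" gives page 1
next_url = url.replace("{0}.html".format(page), "{0}.html".format(page + 1))
print(next_url)  # → http://quote.stockstar.com/stock/ranklist_a_3_1_2.html
```

Note that str.replace substitutes the substring "1.html", which happens to occur only once in this URL; a regular expression on the final path segment would be more robust if the URL scheme ever changes.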

Add a main.py under stockstar:

from scrapy.cmdline import execute
execute(["scrapy","crawl","stock","-o","items.json"])
# equivalent to entering in cmd: scrapy crawl stock -o items.json

Run main.py to start the crawl; the scraped results are exported to items.json.
