Python Projects: Scrapy Framework (Part 2)

This post uses the Scrapy framework to crawl the Hot Questions (熱門問答) and Featured Questions (精彩問答) sections of Guokr Q&A (果殼問答).

Environment

Windows 8, Python 3.7, PyCharm

Walkthrough

1. Create the Scrapy project

Run the following command in cmd from any directory; it creates the GuoKeWenDa project under that directory:

scrapy startproject GuoKeWenDa
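
Scrapy generates a standard project skeleton, which should look like this:

GuoKeWenDa/
    scrapy.cfg
    GuoKeWenDa/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py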

2. Create the spider

Switch into the GuoKeWenDa directory in cmd and run:

cd GuoKeWenDa
scrapy genspider GuoKeWenDaSpider GuoKeWenDaSpider.toscrape.com

This creates GuoKeWenDaSpider.py under the spiders directory.
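
The generated file contains only a minimal skeleton, roughly like this (the exact class name genspider produces may differ):

import scrapy

class GuokewendaspiderSpider(scrapy.Spider):
    name = 'GuoKeWenDaSpider'
    allowed_domains = ['GuoKeWenDaSpider.toscrape.com']
    start_urls = ['http://GuoKeWenDaSpider.toscrape.com/']

    def parse(self, response):
        pass

Step 4 below replaces this skeleton entirely.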

3. Define the items to crawl

Inspecting the Guokr Hot Questions and Featured Questions pages shows that both share the same structure, so we extract the same six fields from each entry: title, introduction, follower count, answer count, tags, and article link.

Define them in items.py:

import scrapy
from scrapy.item import Item, Field

class GuokewendaItem(Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = Field()      # question title
    intro = Field()      # short introduction
    attention = Field()  # follower count
    answer = Field()     # answer count
    label = Field()      # tags, space-separated
    link = Field()       # article URL
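
Item instances support dict-style access, which is how the spider in the next step fills these fields. A quick illustration:

from GuoKeWenDa.items import GuokewendaItem

item = GuokewendaItem()
item['title'] = 'example'   # only declared Fields are valid keys
print(dict(item))           # {'title': 'example'}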

4. Write the spider

Edit GuoKeWenDaSpider.py:

import scrapy
from scrapy.spiders import CrawlSpider
from scrapy.selector import Selector
from scrapy.http import Request
from GuoKeWenDa.items import GuokewendaItem

class GuoKeWenDa(CrawlSpider):
    name = 'GuoKeWenDa'
    allowed_domains = ['GuoKeWenDaSpider.toscrape.com']  # genspider leftover; 'guokr.com' would match the real site
    urls = ['hottest', 'highlight']
    # build start_urls by iterating over both sections and pages 1-100
    start_urls = ['https://www.guokr.com/ask/{0}/?page={1}'.format(str(m), str(n)) for m in urls for n in range(1, 101)]

    def parse(self, response):
        item = GuokewendaItem()
        # wrap the response source in a Selector
        selector = Selector(response)
        # parse the question list with XPath
        infos = selector.xpath('//ul[@class="ask-list-cp"]/li')
        for info in infos:
            title = info.xpath('div[2]/h2/a/text()').extract()[0].strip()
            intro = info.xpath('div[2]/p/text()').extract()[0].strip()
            attention = info.xpath('div[1]/p[1]/span/text()').extract()[0]
            answer = info.xpath('div[1]/p[2]/span/text()').extract()[0]
            labels = info.xpath('div[2]/div/p/a/text()').extract()
            link = info.xpath('div[2]/h2/a/@href').extract()[0]
            if labels:
                label = " ".join(labels)  # join the tag list into a space-separated string
            else:
                label = ''
            item['title'] = title
            item['intro'] = intro
            item['attention'] = attention
            item['answer'] = answer
            item['label'] = label
            item['link'] = link
            yield item
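
If the selectors come back empty (Guokr's markup may have changed since this was written), the XPath expressions can be tested interactively with scrapy shell before running the full crawl:

scrapy shell "https://www.guokr.com/ask/hottest/?page=1"
>>> response.xpath('//ul[@class="ask-list-cp"]/li/div[2]/h2/a/text()').extract()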

5. Save to MongoDB

In pipelines.py:

import pymongo

class GuokewendaPipeline(object):
    def __init__(self):
        '''Connect to MongoDB'''
        client = pymongo.MongoClient(host='localhost')
        db = client['test']
        guokewenda = db['guokewenda']
        self.post = guokewenda

    def process_item(self, item, spider):
        '''Write the item to MongoDB'''
        info = dict(item)
        self.post.insert_one(info)  # insert() is deprecated in PyMongo 3 and removed in 4
        return item
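
After a crawl finishes, the stored records can be spot-checked with a few lines of PyMongo (assuming the same localhost instance and test/guokewenda names as above):

import pymongo

client = pymongo.MongoClient(host='localhost')
collection = client['test']['guokewenda']
print(collection.count_documents({}))   # total number of stored questions
for doc in collection.find().limit(3):  # peek at a few records
    print(doc['title'], doc['link'])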

6. Configure settings.py

Uncomment the following lines in settings.py (shortcut: Ctrl + /) and set their values as shown:

USER_AGENT = 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'
DOWNLOAD_DELAY = 5
DEFAULT_REQUEST_HEADERS = {
  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
  'Accept-Language': 'en',
}
ITEM_PIPELINES = {
   'GuoKeWenDa.pipelines.GuokewendaPipeline': 300,
}
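
The 300 in ITEM_PIPELINES is a priority: when several pipelines are enabled, Scrapy runs them in ascending order of this value (conventionally 0 to 1000). DOWNLOAD_DELAY = 5 throttles requests to one every five seconds, which keeps the crawl polite at the cost of speed.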

7. Create main.py

Create a new main.py in the GuoKeWenDa project root and add:

from scrapy import cmdline
cmdline.execute('scrapy crawl GuoKeWenDa'.split())

Run main.py to start the crawl.
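
This is equivalent to running the crawl directly from the command line inside the project directory; Scrapy's built-in -o flag will also dump the items to a file, which is a quick way to check the output without MongoDB:

scrapy crawl GuoKeWenDa -o results.json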

8. Crawl results

Summary

In practice the Hot Questions section has only 2 pages, so iterating it all the way to page 100 is wasteful. The original:

urls = ['hottest', 'highlight']
start_urls = ['https://www.guokr.com/ask/{0}/?page={1}'.format(str(m),str(n)) for m in urls for n in range(1, 101)]
can be replaced with per-section page ranges:

start_urls = ['https://www.guokr.com/ask/hottest/?page={}'.format(m) for m in range(1, 3)] + ['https://www.guokr.com/ask/highlight/?page={}'.format(n) for n in range(1, 101)]
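
A slightly tidier variant keeps each section's page count in one place; a minimal sketch, with the counts taken from the observation above:

# map each section to the number of pages it actually has
PAGES = {'hottest': 2, 'highlight': 100}
start_urls = [
    'https://www.guokr.com/ask/{0}/?page={1}'.format(section, page)
    for section, page_count in PAGES.items()
    for page in range(1, page_count + 1)
]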