
Scraping every poem in the Shiwen column of the Gushiwen website with Python

Preface

Once we all had dreams: of literature, of love, of a journey across the world. Now we drink late into the night, and every clink of our glasses is the sound of those dreams breaking.
Once, poems left me spellbound; now the passion has dimmed and the storms have passed. Alas! Let me gather up some verse to console these fleeting years.

Part One

A few days ago I discovered the Gushiwen website and treasured it at once. Seized by a momentary whim, I crawled every poem in its Shiwen (诗文) column. This post records how.

Part Two

The whole crawl went like a heist: methodical and quick. Let me walk through it.

  1. First fetch the URLs of every tag in the Shiwen column, then follow each tag to collect the URLs of all poem detail pages
  2. From each detail page, scrape the fields of interest: title, author, and content
  3. Save the scraped information to the database
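The three steps above can be sketched as one small pipeline. This is an illustration only: the fetch function is injected so the flow runs without network access, and none of these names appear in the actual code later in the post.

```python
# Minimal sketch of the three-step crawl flow; all names are illustrative.
def crawl(fetch, main_url, extract_tag_urls, extract_poem_urls, parse_poem, save):
    for tag_url in extract_tag_urls(fetch(main_url)):       # step 1: tag pages
        for poem_url in extract_poem_urls(fetch(tag_url)):  # step 2: detail pages
            save(parse_poem(fetch(poem_url)))               # step 3: persist

# A tiny in-memory "site" to exercise the flow without the network:
pages = {
    'main': 'tag1',
    'tag1': 'poem1 poem2',
    'poem1': 'Title1|Author1|Text1',
    'poem2': 'Title2|Author2|Text2',
}
saved = []
crawl(
    fetch=pages.get,
    main_url='main',
    extract_tag_urls=str.split,
    extract_poem_urls=str.split,
    parse_poem=lambda page: tuple(page.split('|')),
    save=saved.append,
)
# saved now holds one (title, author, content) tuple per poem
```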

Part Three

Import the required packages

# HTTP request library
import requests
# HTML parsing
from lxml import etree
# database helper class, implemented in a separate file (shown in Part Four)
from write_database import Write_databases

The class constructor

class GuShiWen():
    def __init__(self):
        self.main_url = 'https://www.gushiwen.org/'
        self.headers = {
            'User-Agent':'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
        }
        self.bash_url = 'https://so.gushiwen.org/'
        # initialize the database helper
        self.database = Write_databases()

First, fetch the URLs of every tag in the Shiwen column

    def get_all_shiw_urls(self):
        res = requests.get(self.main_url,headers=self.headers)
        html = etree.HTML(res.text)
        sons_div_lists = html.xpath(".//div[@class='sons'][1]/div[@class='cont']/a")[:-2]
        for a_info in sons_div_lists:
            a_href = a_info.xpath('./@href')[0]
            a_text = a_info.xpath('./text()')  # tag name (currently unused)
            self.get_all_content_urls(a_href)
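For readers unfamiliar with the `./@href` and `./text()` expressions above, here is the same link extraction done with the standard library's `html.parser` as a stand-in for lxml (so it runs without third-party packages). The markup is a simplified mock of the site's tag list, not the real page.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects (href, text) pairs, mirroring ./@href and ./text()."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self._href = dict(attrs).get('href')
    def handle_data(self, data):
        if self._href is not None and data.strip():
            self.links.append((self._href, data.strip()))
            self._href = None

html = '<div class="cont"><a href="/gushi/tangshi.aspx">唐诗三百首</a></div>'
p = LinkCollector()
p.feed(html)
# p.links == [('/gushi/tangshi.aspx', '唐诗三百首')]
```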

Fetch the URLs of every poem under a tag, and build usable absolute URLs

    def get_all_content_urls(self,urls):
        text_html = requests.get(urls,headers=self.headers)
        html = etree.HTML(text_html.text)
        text_title = html.xpath('.//div[@class="title"][1]/h1/text()')  # page title (unused)
        text_dev = html.xpath('.//div[@class="sons"][1]/div')
        for item in text_dev:
            text_span = item.xpath('./span')
            for span_item in text_span:
                try:
                    text_a_href = span_item.xpath('./a/@href')[0]
                except IndexError:
                    # span without a link (plain text); skip it
                    continue
                self.get_poetry(self.bash_url + text_a_href)
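The call above builds the absolute URL by string concatenation (`self.bash_url + text_a_href`), which works only while every href is a bare relative path. `urllib.parse.urljoin` handles leading slashes and relative paths robustly; the detail-page path below is a hypothetical value, not one taken from the site.

```python
from urllib.parse import urljoin

base = 'https://so.gushiwen.org/'
# a relative href resolves against the base:
full = urljoin(base, 'shiwenv_abc.aspx')
# an absolute path is handled correctly too, where naive
# concatenation would produce a double slash:
full_abs = urljoin(base, '/shiwenv_abc.aspx')
```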

Scrape a poem's details and write them to the database

    def get_poetry(self,url):
        poetry_html = requests.get(url,headers=self.headers)
        html = etree.HTML(poetry_html.text)
        poetry_div = html.xpath('.//div[@class="sons"]/div')[0]
        poetry_title = poetry_div.xpath('./h1/text()')[0]
        poetry_author = poetry_div.xpath('./p//text()')
        poetry_author = " ".join(poetry_author)
        poetry_cont = poetry_div.xpath('./div[2]//text()')
        poetry_cont = " ".join(poetry_cont)
        print('=' * 57)  # separator between poems
        print(poetry_title)
        print(poetry_author)
        print(poetry_cont)
        self.write_database(poetry_title,poetry_author,poetry_cont)

    def write_database(self,title,author,cont):
        self.database.insert_data(title,author,cont)

Finally, the main function

def main():
    gusw = GuShiWen()
    gusw.get_all_shiw_urls()

if __name__ == '__main__':
    main()

Part Four

Implement the database class. Its responsibilities: connect to the database, write scraped records, read out a random poem, and close the connection.

import pymysql
import random

class Write_databases():
    def __init__(self):
        self.db = pymysql.connect(
            host = '127.0.0.1',
            user = 'root',
            password = 'root',
            database = 'gushiw',
            port = 3306
        )
        self.cursor = self.db.cursor()

    def insert_data(self,title,author,cont):
        sql = '''
            insert into gushiw_table(id,poetry_title,poetry_author,poetry_cont)
            values(null,%s,%s,%s)
        '''
        self.cursor.execute(sql,(title,author,cont))
        self.db.commit()

    def read_data(self):
        # random id within the range present in the author's table
        row_id = random.randint(127, 4017)
        print(row_id)
        sql = 'select * from gushiw_table where id = %s'
        self.cursor.execute(sql, (row_id,))
        value = self.cursor.fetchall()
        print(value)
        title = value[0][1]
        author = value[0][2]
        cont = value[0][3]
        print(title,author,cont)
    def close_databases(self):
        self.db.close()
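The same insert/read logic can be sketched against the standard library's sqlite3, so it runs without a MySQL server. Note that sqlite3 uses `?` placeholders where pymysql uses `%s`; the table layout mirrors `gushiw_table`, and the sample poem is illustrative.

```python
import random
import sqlite3

db = sqlite3.connect(':memory:')
cur = db.cursor()
cur.execute('''
    create table gushiw_table (
        id integer primary key autoincrement,
        poetry_title text, poetry_author text, poetry_cont text
    )
''')
# parameterized insert, as in insert_data (sqlite3 uses ? instead of %s):
cur.execute(
    'insert into gushiw_table (poetry_title, poetry_author, poetry_cont) '
    'values (?, ?, ?)',
    ('静夜思', '李白', '床前明月光'),
)
db.commit()

# pick a random existing id, as in read_data, but bounded by the real row count:
count = cur.execute('select count(*) from gushiw_table').fetchone()[0]
row_id = random.randint(1, count)
cur.execute('select * from gushiw_table where id = ?', (row_id,))
row = cur.fetchone()
# row[1], row[2], row[3] are title, author, content
```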

To be continued: next I plan to write a small program that reads a random poem from the database and displays it.