
Scraping Web Pages with Python and Saving Them as PDF

1. Setting up the development environment
(1) Installing Python 2.7.13: see the instructions on "Liao Xuefeng's website".
(2) Installing pip, Python's package manager: see the "pip installation documentation".
Since we are on 2.7.13, and pip already ships with Python 2.7.9 and later, there is no need to install it separately, but it does need to be upgraded; the documentation above explains how.
(3) A PDF tool for Python: PyPDF2.
Install it with:

pip install PyPDF2

A simple PyPDF2 example:

from PyPDF2 import PdfFileMerger

merger = PdfFileMerger()
input1 = open("1.pdf", "rb")
input2 = open("2.pdf", "rb")
merger.append(input1)
merger.append(input2)
# Write the merged result to the output PDF file
output = open("hql_all.pdf", "wb")
merger.write(output)

(4) A Microsoft Word 2007 tool for Python:

pip install python-docx

(5) Installing the dependencies
requests and beautifulsoup are the two great crawler workhorses: requests handles network requests and beautifulsoup handles the HTML data. With these two tools the work goes quickly. We do not need a crawler framework such as scrapy; for a small program like this it would be overkill. Also, since we are converting HTML files to PDF, we need library support for that as well: wkhtmltopdf is an excellent tool that converts HTML to PDF across multiple platforms, and pdfkit is its Python wrapper. First install the following dependency packages:

pip install requests
pip install beautifulsoup4
pip install pdfkit

(6) Installing wkhtmltopdf manually
On Windows, download a stable wkhtmltopdf build from http://wkhtmltopdf.org/downloads.html and install it, then add the program's executable path to the system $PATH variable; otherwise pdfkit cannot find wkhtmltopdf and fails with the error "No wkhtmltopdf executable found". On Ubuntu and CentOS it can be installed directly from the command line:

$ sudo apt-get install wkhtmltopdf  # ubuntu
$ sudo yum install wkhtmltopdf # centos
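Whether pdfkit will actually find the binary can be checked up front; here is a minimal sketch using only the standard library (Python 3 shown; on Python 2.7 `distutils.spawn.find_executable` plays the same role — the function name below is my own):

```python
import shutil

def wkhtmltopdf_available():
    """Return the wkhtmltopdf executable's path if it is on $PATH, else None."""
    return shutil.which("wkhtmltopdf")

path = wkhtmltopdf_available()
if path is None:
    print("wkhtmltopdf not on $PATH; pdfkit will fail with "
          "'No wkhtmltopdf executable found'")
else:
    print("wkhtmltopdf found at: " + path)
```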

2. Source code

# coding=utf-8  
import os  
import re  
import time  
import logging  
import pdfkit  
import requests  
from bs4 import BeautifulSoup  
from PyPDF2 import PdfFileMerger  

html_template = """ 
<!DOCTYPE html> 
<html lang="en"> 
<head> 
    <meta charset="UTF-8"> 
</head> 
<body> 
{content} 
</body> 
</html> 

"""  


def parse_url_to_html(url, name):  
    """ 
    解析URL,返回HTML內容 
    :param url:解析的url 
    :param name: 儲存的html檔名 
    :return: html 
    """  
    try:  
        response = requests.get(url)  
        soup = BeautifulSoup(response.content, 'html.parser')  
        # Article body  
        body = soup.find_all(class_="x-wiki-content")[0]  
        # Title  
        title = soup.find('h4').get_text()  

        # Insert the title at the top of the body, centered  
        center_tag = soup.new_tag("center")  
        title_tag = soup.new_tag('h1')  
        title_tag.string = title  
        center_tag.insert(1, title_tag)  
        body.insert(1, center_tag)  
        html = str(body)  
        # Rewrite relative src paths in the body's img tags to absolute URLs  
        pattern = "(<img .*?src=\")(.*?)(\")"  

        def func(m):  
            # group(2) is the src value; group(3) is just the closing quote  
            if not m.group(2).startswith("http"):  
                rtn = m.group(1) + "http://www.liaoxuefeng.com" + m.group(2) + m.group(3)  
                return rtn  
            else:  
                return m.group(1)+m.group(2)+m.group(3)  
        html = re.compile(pattern).sub(func, html)  
        html = html_template.format(content=html)  
        html = html.encode("utf-8")  
        with open(name, 'wb') as f:  
            f.write(html)  
        return name  

    except Exception:  
        logging.error("Parse error", exc_info=True)  
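The src-rewriting regex above can be exercised in isolation. A minimal sketch with a made-up HTML snippet (the host prefix mirrors the one in the script; `absolutize` is my own name for the replacement function):

```python
import re

pattern = r'(<img .*?src=")(.*?)(")'

def absolutize(m, host="http://www.liaoxuefeng.com"):
    # Prefix relative src paths with the site host; leave absolute URLs alone.
    if not m.group(2).startswith("http"):
        return m.group(1) + host + m.group(2) + m.group(3)
    return m.group(0)

html = '<img alt="x" src="/files/pic.png">'
print(re.sub(pattern, absolutize, html))
# → <img alt="x" src="http://www.liaoxuefeng.com/files/pic.png">
```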


def get_url_list():  
    """ 
    Get the list of all URLs in the table of contents
    :return: list of URLs
    """  
    response = requests.get("http://www.liaoxuefeng.com/wiki/0014316089557264a6b348958f449949df42a6d3a2e542c000")  
    soup = BeautifulSoup(response.content, "html.parser")  
    menu_tag = soup.find_all(class_="uk-nav uk-nav-side")[1]  
    urls = []  
    for li in menu_tag.find_all("li"):  
        url = "http://www.liaoxuefeng.com" + li.a.get('href')  
        urls.append(url)  
    return urls  
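The same link extraction can be sketched without BeautifulSoup, using the standard library's html.parser on an inline fragment (the menu markup here is a made-up stand-in for the real page, and `LinkCollector` is my own name):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href values of all <a> tags in the parsed fragment."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

menu = ('<ul><li><a href="/wiki/intro">intro</a></li>'
        '<li><a href="/wiki/install">install</a></li></ul>')
collector = LinkCollector()
collector.feed(menu)
urls = ["http://www.liaoxuefeng.com" + href for href in collector.links]
print(urls)
```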


def save_pdf(htmls, file_name):  
    """ 
    Save the HTML files to a PDF file
    :param htmls: list of HTML files (pdfkit also accepts a single filename)
    :param file_name: PDF filename
    :return:
    """  
    options = {  
        'page-size': 'Letter',  
        'margin-top': '0.75in',  
        'margin-right': '0.75in',  
        'margin-bottom': '0.75in',  
        'margin-left': '0.75in',  
        'encoding': "UTF-8",  
        'custom-header': [  
            ('Accept-Encoding', 'gzip')  
        ],  
        'cookie': [  
            ('cookie-name1', 'cookie-value1'),  
            ('cookie-name2', 'cookie-value2'),  
        ],  
        'outline-depth': 10,  
    }  
    pdfkit.from_file(htmls, file_name, options=options)  
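Each key in the options dict maps onto a wkhtmltopdf command-line flag. A rough sketch of how such a dict flattens into flags — a simplified approximation for illustration, not pdfkit's actual code:

```python
def options_to_flags(options):
    """Flatten a pdfkit-style options dict into wkhtmltopdf CLI arguments."""
    args = []
    for key, value in options.items():
        if isinstance(value, list):
            # Repeatable options such as cookies and custom headers
            for name, val in value:
                args += ["--" + key, name, val]
        else:
            args += ["--" + key, str(value)]
    return args

print(options_to_flags({"page-size": "Letter", "cookie": [("sid", "abc")]}))
# → ['--page-size', 'Letter', '--cookie', 'sid', 'abc']
```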


def main():  
    start = time.time()  
    file_name = u"liaoxuefeng_Python3_tutorial"  
    urls = get_url_list()  
    for index, url in enumerate(urls):  
        parse_url_to_html(url, str(index) + ".html")  
    htmls = []  
    pdfs = []  
    # One HTML file was saved per URL, so iterate over the same count  
    for i in range(len(urls)):  
        htmls.append(str(i) + '.html')  
        pdfs.append(file_name + str(i) + '.pdf')  

        save_pdf(str(i) + '.html', file_name + str(i) + '.pdf')  

        print u"Converted HTML file %d" % i  

    merger = PdfFileMerger()  
    for pdf in pdfs:  
        merger.append(open(pdf, 'rb'))  
        print u"Merged PDF " + pdf  

    output = open(u"廖雪峰Python_all.pdf", "wb")  
    merger.write(output)  

    print u"PDF written successfully!"  

    for html in htmls:  
        os.remove(html)  
        print u"Removed temporary file " + html  

    for pdf in pdfs:  
        os.remove(pdf)  
        print u"Removed temporary file " + pdf  

    total_time = time.time() - start  
    print u"Total time: %f seconds" % total_time  


if __name__ == '__main__':  
    main()