
Python crawler study notes -- Pixiv crawler (2) -- scraping the international ranking images

Last time I wrote the Pixiv login in a procedural style...

I think object-oriented code is nicer...

so let's start by rewriting the login...

import urllib
import urllib2
import cookielib

class Pixiv_Spider:

    def __init__(self):
        self.p_id = ''
        self.p_pw = ''

    def Login(self):                        #build the request needed for login

        p_login_url = 'https://www.pixiv.net/login.php'

        data = {                                    #fields to POST for login
                'mode':'login',
                'skip':1
                }

        data['pixiv_id'] = self.p_id                #fill in the login id and password
        data['pass'] = self.p_pw

        p_login_data = urllib.urlencode(data)

        p_login_header = {                          #request headers
                'accept-language':'zh-cn,zh;q=0.8',
                'referer':'https://www.pixiv.net/login.php?return_to=0',
                'user-agent':'mozilla/5.0 (windows nt 10.0; win64; x64; rv:45.0) gecko/20100101 firefox/45.0'
                }

        request = urllib2.Request(
                url = p_login_url,
                data = p_login_data,
                headers = p_login_header
                )
        try:
            cookie_file = 'cookie.txt'                  #save the cookie here
            cookie = cookielib.MozillaCookieJar(cookie_file)
            opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie))
            response = opener.open(request)             #log in
            cookie.save(ignore_discard = True,ignore_expires = True)
        except urllib2.URLError as e:
            if hasattr(e,"reason"):
                print "Login failed???",e.reason

ps = Pixiv_Spider()
ps.p_id = raw_input('Please enter your pixiv id:')
ps.p_pw = raw_input('Please enter your pixiv password:')
ps.Login()
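As an aside (not part of the Python 2 program above): in Python 3, `urllib.urlencode` moved to `urllib.parse.urlencode`. A minimal sketch of encoding the same POST fields, with dummy credentials:

```python
from urllib.parse import urlencode, parse_qs

# The same fields the login form above posts; the id/password
# values here are dummies for illustration.
data = {'mode': 'login', 'skip': 1,
        'pixiv_id': 'someone', 'pass': 'secret'}
body = urlencode(data)

# parse_qs round-trips the encoded body back into a dict of lists.
decoded = parse_qs(body)
```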

With the login done, we can start batch-crawling the images we want...

First we need a method that reuses the cookie saved by the login above...

def Cookie_Login(self):                         #load the cookie saved by the earlier login
    cookie_login = cookielib.MozillaCookieJar()
    cookie_login.load('cookie.txt',ignore_discard = True,ignore_expires = True)
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie_login))
    return opener
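The cookie round-trip this relies on can be sketched on its own. In Python 3, `cookielib` is named `http.cookiejar`, but `MozillaCookieJar.save`/`load` work the same way (the cookie name and value below are made up for illustration):

```python
import http.cookiejar  # Python 3 name for cookielib
import os
import tempfile

# Save one dummy session cookie to a Mozilla-format file...
path = os.path.join(tempfile.mkdtemp(), 'cookie.txt')
jar = http.cookiejar.MozillaCookieJar(path)
jar.set_cookie(http.cookiejar.Cookie(
    0, 'PHPSESSID', 'dummy', None, False, 'www.pixiv.net', True, False,
    '/', True, False, None, True, None, None, {}))
jar.save(ignore_discard=True, ignore_expires=True)

# ...then load it back into a fresh jar, as Cookie_Login() does.
jar2 = http.cookiejar.MozillaCookieJar()
jar2.load(path, ignore_discard=True, ignore_expires=True)
names = [c.name for c in jar2]
```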

With future needs in mind...

let's put a small option menu here...

so more features can be added later...

def Choice_Pixiv(self,opener):     #choose which page to visit
    if (self.p_choice == '1'):
        try:
            p_page = opener.open(self.p_international_url)   #p_international_url = 'http://www.pixiv.net/ranking_area.php?type=detail&no=6'
            p_international = p_page.read().decode('utf-8')  #the page fetched with the cookie login
        except urllib2.URLError as e:
            if hasattr(e,"reason"):
                print "Connection error:",e.reason

Now we can write today's main part, the international-ranking method...

This time we need something very powerful, BeautifulSoup... it helps us dig the key details out of the html.

def Pixiv_International(self,opener,p_international,dl_dir):
    soup = BeautifulSoup(p_international)
    for i in range(1,101):                    #the pixiv international ranking has 100 entries, so loop over them
        get_information = str(soup.find(id=i))          #let bs roughly extract the block we need from the html

The international ranking mixes single-image uploads, multi-image uploads, a manga format, and animations (ugoira).

Let's look at what each of them looks like in the page source.

A single image:

<a class="work _work " href="member_illust.php?mode=medium&illust_id=56037267"> 
The href is part of the url we use when viewing a work normally.

so... let's extract it...

result_url = re.search(re.compile('<.*?work\s_work\s".*?href="(.*?)">',re.S),get_information)
Multiple images:

Animation (ugoira):

Manga (multiple pages):

Likewise, extract the href from each of them...

result_multiple = re.search(re.compile('<a.*?work\s_work\smultiple\s.*?href="(.*?)">',re.S),get_information)     #multiple images
result_video = re.search(re.compile('<a.*?work\s_work\sugoku-illust\s.*?href="(.*?)">',re.S),get_information)    #animation
result_manga_multiple = re.search(re.compile('<a.*?work\s_work\smanga\smultiple\s.*?href="(.*?)">',re.S),get_information)   #manga, multiple pages
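To see how the four patterns tell the modes apart, here they are run against hand-written anchor tags that mimic the class names above (the markup is simplified; real ranking pages carry more attributes):

```python
import re

# Minimal anchor tags mimicking the four thumbnail variants.
single = '<a class="work _work " href="member_illust.php?mode=medium&illust_id=1">'
multi  = '<a class="work _work multiple " href="member_illust.php?mode=medium&illust_id=2">'
ugoira = '<a class="work _work ugoku-illust " href="member_illust.php?mode=medium&illust_id=3">'
manga  = '<a class="work _work manga multiple " href="member_illust.php?mode=medium&illust_id=4">'

def classify(html):
    """Mirror the None-checks the crawler uses, most specific first."""
    if re.search(r'<a.*?work\s_work\sugoku-illust\s.*?href="(.*?)">', html, re.S):
        return 'ugoira'
    if re.search(r'<a.*?work\s_work\smanga\smultiple\s.*?href="(.*?)">', html, re.S):
        return 'manga'
    if re.search(r'<a.*?work\s_work\smultiple\s.*?href="(.*?)">', html, re.S):
        return 'multiple'
    if re.search(r'<.*?work\s_work\s".*?href="(.*?)">', html, re.S):
        return 'single'
    return None
```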

When re.search finds nothing it returns None... which lets us tell the image modes apart.

My skills aren't up to grabbing animations... so ugoira posts get skipped...

Single images, multi-image posts, and manga pages are handled differently later... so the check looks like this:

if result_video == None:                                #is it an animation?
    if result_manga_multiple == None:                   #is it manga?
        if result_multiple == None:                     #is it a multi-image post?
            p_url = 'http://www.pixiv.net/' + result_url.group(1)
        else:
            p_url = 'http://www.pixiv.net/' + result_multiple.group(1)
    else:
        p_url = 'http://www.pixiv.net/' + result_manga_multiple.group(1)
else:
    print "Oops! This one is an animation... nothing I can do...╮(╯▽╰)╭"

Now we have the urls for viewing these works...

But printing a few of them, some don't look the way we expected (╯‵□′)╯︵┴─┴

http://www.pixiv.net/member_illust.php?mode=medium&amp;illust_id=56039502

http://www.pixiv.net/member_illust.php?mode=medium&illust_id=56039502

Careful comparison shows the & in the middle of the url was escaped to &amp; during the earlier processing...

Let's write a small utility class to turn it back...

class Tools:

    remove = re.compile('amp;')

    def removesomething(self,x):
        x = re.sub(self.remove,"",x)
        return x.strip()

re.compile finds the extra amp;, and re.sub replaces it with the empty string...
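In Python 3, the standard library's `html.unescape` undoes this (and every other HTML entity) in one call; both approaches give the same result on our url:

```python
import re
import html

raw = 'http://www.pixiv.net/member_illust.php?mode=medium&amp;illust_id=56039502'

# The regex approach used above: strip the stray 'amp;'.
fixed_re = re.sub('amp;', '', raw)

# The stdlib alternative: undo all HTML entity escaping at once.
fixed_html = html.unescape(raw)
```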

To use this class, add this line to the crawler class Pixiv_Spider's __init__:

def __init__(self):
    self.tool = Tools()

After this bit of fiddling, the code now looks like this:

if result_video == None:
    if result_manga_multiple == None:                   #is it manga?
        if result_multiple == None:                     #is it a multi-image post?
            print "Report! Single image ahead..."
            p_url = self.tool.removesomething('http://www.pixiv.net/' + result_url.group(1))
        else:
            print "Report! Multiple images ahead..."
            p_url = self.tool.removesomething('http://www.pixiv.net/' + result_multiple.group(1))
    else:
        print "Report! Multiple manga pages ahead..."
        p_url = self.tool.removesomething('http://www.pixiv.net/' + result_manga_multiple.group(1))
else:
    print "Oops! This one is an animation... nothing I can do...╮(╯▽╰)╭"

Extracting the url alone isn't enough; I also want to save each work's info: title, pixiv id, author, and so on...

Look at that html again...

The title:

The pixiv id:

The author:

Now a method that prints this info to the screen and saves it to a file...

def Download_Data(self,i,get_information,p_url,dl_dir):
    #run the bs-processed html through regular expressions again to find the info we need (url,title,user)
    result_title = re.search(re.compile('<a href=".*?>(.*?)</a>',re.S),get_information)
    result_id = re.search(re.compile('<a class.*?illust_id=(.*?)">',re.S),get_information)
    result_user = re.search(re.compile('<span class.*?>(.*?)</span>',re.S),get_information)
    p_rank = str(i)                   #print the info to the screen
    print "RANK #" + p_rank
    p_id = result_id.group(1)
    print "Pixiv ID:" + p_id
    p_title = result_title.group(1)
    print "Title:" + p_title
    p_user = result_user.group(1)
    print "User:" + p_user
    file_data = open('E:/pixivdata/' + dl_dir + '/pixiv_' + p_id + '.txt','w')     #create the info file
    message = [                         #info to save
           'rank:' + p_rank +'\n',
           'id:' + p_id + '\n',
           'title:' + p_title + '\n',
           'user:' + p_user + '\n',
           'url:' + p_url
    ]
    file_data.writelines(message)
    file_data.close()
    print "pixiv info saved"           #the info is stored as a txt file
    return p_id
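The file-writing part can be exercised on its own; this sketch writes the same key:value lines into a temporary directory instead of the hard-coded E:/pixivdata path (the values are dummies standing in for what the regexes extract):

```python
import os
import tempfile

# Dummy values standing in for the extracted rank/id/title/user/url.
info = [('rank', '1'), ('id', '56037267'), ('title', 'demo'), ('user', 'someone'),
        ('url', 'http://www.pixiv.net/member_illust.php?mode=medium&illust_id=56037267')]

path = os.path.join(tempfile.mkdtemp(), 'pixiv_56037267.txt')
with open(path, 'w') as f:
    f.writelines('%s:%s\n' % (k, v) for k, v in info)

# Read it back to confirm the format.
with open(path) as f:
    lines = f.read().splitlines()
```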

After all this dawdling we have collected enough information... now for the download part...

First look at the pages...

A single image:

Multiple images (manga and ordinary multi-image posts):

From the multi-image page we can see how many images a post contains... so we first extract that count...

soup = BeautifulSoup(opener.open(p_url))
result_pic_more = re.search(re.compile('</li><li>.*?\s(.*?)P</li>',re.S),str(soup.find_all("ul",class_="meta")))
print "Report! Found " + result_pic_more.group(1) + " images..."
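The count extraction can be checked against a hand-written, simplified version of the `<ul class="meta">` block (the real markup has more attributes and list items):

```python
import re

# A simplified stand-in for the <ul class="meta"> block on a
# multi-image work page; '5P' means the post has 5 pages.
meta = '<ul class="meta"><li>2016-03-27</li><li>Manga 5P</li></ul>'

m = re.search(r'</li><li>.*?\s(.*?)P</li>', meta, re.S)
count = int(m.group(1))
```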

Clicking through a multi-image post... it jumps to another page...

Inspecting its source shows it points to a url like this...

Opening that url gives exactly the original image we want; the page parameter at the end controls the image's position within the post:

http://www.pixiv.net/member_illust.php?mode=manga_big&illust_id=56039502&page=0

Apart from page... this url still differs slightly from the url we just crawled... so let's build a matching one in Tools():

http://www.pixiv.net/member_illust.php?mode=medium&illust_id=56039502

make_m = re.compile('mode=medium')

def make_big_url(self,x):
    x = re.sub(self.make_m,"mode=manga_big",x)
    return x.strip()
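A quick check of the rewritten url, using Python 3's `urllib.parse` to read the query string back:

```python
import re
from urllib.parse import urlsplit, parse_qs

p_url = 'http://www.pixiv.net/member_illust.php?mode=medium&illust_id=56039502'

# The same substitution make_big_url performs, plus the page parameter.
big = re.sub('mode=medium', 'mode=manga_big', p_url) + '&page=0'

qs = parse_qs(urlsplit(big).query)
```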

Now we can crawl with the information at hand:

for j in range(0,int(result_pic_more.group(1))):
    make_url = self.tool.make_big_url(p_url)+'&page='+str(j)      #build the url
    m_soup = BeautifulSoup(opener.open(make_url))
    real_url = re.search(re.compile('<img.*?src="(.*?)"/>',re.S),str(m_soup.find_all("img")))
    print 'Found the full-size image link (ˉ﹃ˉ)...\n' + real_url.group(1)   #download and save the image
    d_url = opener.open(make_url)
    file_pic = open('E:/pixivdata/pixiv_' + p_id + '_' + str(j) + '.jpg','wb')
    file_pic.write(d_url.read())
    file_pic.close()

But at this point the program errors out...

Digging around, it turns out that when the browser requests the original-image url it sends a request header that differs from the earlier ones in one way: it carries an extra Referer field...

If we request the original-image url without it... our program sends no Referer... the server receives the request but refuses to answer... and we sadly get a 403.

So let's give it a Referer...

def Download_Request(self,opener,make_url,real_url):
    p_download_header = {                          #request headers
        'Accept-Language':'zh-CN,zh;q=0.8',
        'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:45.0) Gecko/20100101 Firefox/45.0'
    }
    p_download_header['Referer'] = make_url        #add the referer to the header; without it the server returns 403
    download_request = urllib2.Request(
        url = real_url.group(1),
        headers = p_download_header
        )
    decode_url = opener.open(download_request)
    return decode_url.read()
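The same request can be built (without sending it) in Python 3, where `urllib2.Request` became `urllib.request.Request`; the urls below are the example ones from this post:

```python
import urllib.request

referer = 'http://www.pixiv.net/member_illust.php?mode=manga_big&illust_id=56039502&page=0'
img_url = 'http://i4.pixiv.net/img-original/img/2016/03/27/16/00/01/56037267_p0.png'

# Build the download request carrying the Referer header;
# nothing goes over the network until the request is opened.
req = urllib.request.Request(img_url, headers={
    'User-Agent': 'Mozilla/5.0',
    'Referer': referer,
})
```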

And not just here... a careful reader will notice that the Referer we submit differs slightly from the one seen in the browser.

Let's add one more method to the same Tools() to handle it:

rmbig = re.compile('_big')

def removebig(self,x):
    x = re.sub(self.rmbig,"",x)
    return x.strip()

p_download_header['Referer'] = self.tool.removebig(make_url)

Now multi-image posts can be grabbed successfully...

While we're here, one more small feature... saving each image in the file format it actually has on the server... add the following method to Tools()...

def Pic_Type(self,real_url):                    #determine the image file type
    p_type = re.search(re.compile('png',re.S),real_url)
    if p_type == None:
        self.pic_type = 'jpg'
        return self.pic_type
    else:
        self.pic_type = 'png'
        return self.pic_type
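Note that Pic_Type only checks whether the substring png appears anywhere in the url; a stricter alternative takes the actual extension from the url path. Both are sketched here, with the png url from this post and a made-up jpg url for contrast:

```python
import posixpath
from urllib.parse import urlsplit

def pic_type(url):
    # Same idea as Pic_Type(): 'png' if the url mentions png, else 'jpg'.
    return 'png' if 'png' in url else 'jpg'

def pic_ext(url):
    # Stricter: split the real extension off the url path.
    return posixpath.splitext(urlsplit(url).path)[1].lstrip('.')

png_url = 'http://i4.pixiv.net/img-original/img/2016/03/27/16/00/01/56037267_p0.png'
jpg_url = 'http://i4.pixiv.net/img-original/img/2016/03/27/00/00/00/56039502_p0.jpg'  # made up
```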

With that, the multi-image download feature is complete:
for j in range(0,int(result_pic_more.group(1))):
    make_url = self.tool.make_big_url(p_url)+'&page='+str(j)       #build the per-page url
    m_soup = BeautifulSoup(opener.open(make_url))
    real_url = re.search(re.compile('<img.*?src="(.*?)"/>',re.S),str(m_soup.find_all("img")))
    p_type = self.tool.Pic_Type(real_url.group(1))
    print 'Found the full-size image link (ˉ﹃ˉ)...\n' + real_url.group(1)    #download and save the image
    file_pic = open('E:/pixivdata/pixiv_' + p_id + '_' + str(j) + '.' + p_type,'wb')
    file_pic.write(self.Download_Request(opener,make_url,real_url))
    file_pic.close()
    print 'Saved to disk (/≧▽≦)/...'


Open a single image... maximize it... and inspect the page source...

There we find the url of the single image's original file...

http://i4.pixiv.net/img-original/img/2016/03/27/16/00/01/56037267_p0.png

Much like the multi-image case:

soup = BeautifulSoup(opener.open(p_url))
real_url = re.search(re.compile('.*?data-src="(.*?)"',re.S),str(soup.find_all("img",class_="original-image")))
print 'Found the full-size image link (ˉ﹃ˉ)...\n' + real_url.group(1)
p_type = self.tool.Pic_Type(real_url.group(1))
file_pic = open('E:/pixivdata/' + dl_dir + '/pixiv_' + p_id + '.' + p_type,'wb')
file_pic.write(self.Download_Request(opener,p_url,real_url))
file_pic.close()
print 'Saved to disk (/≧▽≦)/...'

Tidy this up... and the download method is done:

def Download_Pic(self,p_num,i,opener,p_url,p_id,dl_dir):
    if p_num == '1':
        soup = BeautifulSoup(opener.open(p_url))
        real_url = re.search(re.compile('.*?data-src="(.*?)"',re.S),str(soup.find_all("img",class_="original-image")))
        print 'Found the full-size image link (ˉ﹃ˉ)...\n' + real_url.group(1)
        p_type = self.tool.Pic_Type(real_url.group(1))
        file_pic = open('E:/pixivdata/' + dl_dir + '/pixiv_' + p_id + '.' + p_type,'wb')
        file_pic.write(self.Download_Request(opener,p_url,real_url))
        file_pic.close()
        print 'Saved to disk (/≧▽≦)/...'

    if p_num == 'more':
        soup = BeautifulSoup(opener.open(p_url))
        result_pic_more = re.search(re.compile('</li><li>.*?\s(.*?)P</li>',re.S),str(soup.find_all("ul",class_="meta")))
        print "Found " + result_pic_more.group(1) + " images...⊙▽⊙"
        for j in range(0,int(result_pic_more.group(1))):
            make_url = self.tool.make_big_url(p_url)+'&page='+str(j)      #build the per-page url
            m_soup = BeautifulSoup(opener.open(make_url))
            real_url = re.search(re.compile('<img.*?src="(.*?)"/>',re.S),str(m_soup.find_all("img")))
            p_type = self.tool.Pic_Type(real_url.group(1))
            print 'Found the full-size image link (ˉ﹃ˉ)...\n' + real_url.group(1)     #download and save the image
            file_pic = open('E:/pixivdata/' + dl_dir + '/pixiv_' + p_id + '_' + str(j) + '.' + p_type,'wb')
            file_pic.write(self.Download_Request(opener,make_url,real_url))
            file_pic.close()
            print 'Saved to disk (/≧▽≦)/...'


After all this writing, putting everything together gives the little program we wanted:

#coding:UTF-8

__author__ = 'monburan'
__version__ = '0.10 only_international'

import os
import re
import urllib
import urllib2
import cookielib
from urllib2 import urlopen
from bs4 import BeautifulSoup

class Tools:

    remove = re.compile('amp;')
    rmbig = re.compile('_big')
    make_m = re.compile('mode=medium')
    
    def removebig(self,x):
        x = re.sub(self.rmbig,"",x)
        return x.strip()

    def removesomething(self,x):
        x = re.sub(self.remove,"",x)
        return x.strip()

    def make_big_url(self,x):
        x = re.sub(self.make_m,"mode=manga_big",x)
        return x.strip()

    def Pic_Type(self,real_url):                    #determine the image file type
        p_type = re.search(re.compile('png',re.S),real_url)
        if p_type == None:
            self.pic_type = 'jpg'
            return self.pic_type
        else:
            self.pic_type = 'png'
            return self.pic_type

class Pixiv_Spider:

    def __init__(self):
        self.tool = Tools()
        self.p_id = ''
        self.p_pw = ''
        self.p_choice = ''
        self.dl_dir = ''
        self.pic_type = ''
        self.p_international_url = 'http://www.pixiv.net/ranking_area.php?type=detail&no=6'     #international ranking url

    def Login(self):                        #build the request needed for login
        p_login_url = 'https://www.pixiv.net/login.php'
        data = {                                    #fields to POST for login
                'mode':'login',
                'skip':1
                }
        data['pixiv_id'] = self.p_id                #fill in the login id and password
        data['pass'] = self.p_pw
        p_login_data = urllib.urlencode(data)
        p_login_header = {                          #request headers
                'accept-language':'zh-cn,zh;q=0.8',
                'referer':'https://www.pixiv.net/login.php?return_to=0',
                'user-agent':'mozilla/5.0 (windows nt 10.0; win64; x64; rv:45.0) gecko/20100101 firefox/45.0'
                }
        request = urllib2.Request(
                url = p_login_url,
                data = p_login_data,
                headers = p_login_header
                )
        try:
            cookie_file = 'cookie.txt'                  #save the cookie here
            cookie = cookielib.MozillaCookieJar(cookie_file)
            opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie))
            response = opener.open(request)             #log in
            cookie.save(ignore_discard = True,ignore_expires = True)
        except urllib2.URLError as e:
            if hasattr(e,"reason"):
                print "Login failed???",e.reason
    
    def Download_Request(self,opener,make_url,real_url):
        p_download_header = {                          #request headers
            'Accept-Language':'zh-CN,zh;q=0.8',
            'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:45.0) Gecko/20100101 Firefox/45.0'
        }
        p_download_header['Referer'] = self.tool.removebig(make_url)        #add the cleaned referer to the header; without it the server returns 403
        download_request = urllib2.Request(
            url = real_url.group(1),
            headers = p_download_header
            )
        decode_url = opener.open(download_request)
        return decode_url.read()

    def Cookie_Login(self):                         #load the cookie saved by the earlier login
        cookie_login = cookielib.MozillaCookieJar()
        cookie_login.load('cookie.txt',ignore_discard = True,ignore_expires = True)
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie_login))
        return opener

    def Choice_Pixiv(self,opener):     #choose which page to visit
        if (self.p_choice == '1'):
            try:
                p_page = opener.open(self.p_international_url)
                p_international = p_page.read().decode('utf-8')
                dl_dir = 'international'
                self.Pixiv_International(opener,p_international,dl_dir)
            except urllib2.URLError as e:
                if hasattr(e,"reason"):
                    print "Connection error:",e.reason

    def Pixiv_International(self,opener,p_international,dl_dir):
        soup = BeautifulSoup(p_international)
        os.mkdir(r'E:/pixivdata/' + dl_dir + '/')          #create the download directory
        print "Created the "+dl_dir+" directory!"
        for i in range(1,101):                              #the pixiv international ranking has 100 entries, so loop over them
            get_information = str(soup.find(id=i))          #let bs roughly extract the block we need from the html
            result_url = re.search(re.compile('<.*?work\s_work\s".*?href="(.*?)">',re.S),get_information)
            result_multiple = re.search(re.compile('<a.*?work\s_work\smultiple\s.*?href="(.*?)">',re.S),get_information)
            result_video = re.search(re.compile('<a.*?work\s_work\sugoku-illust\s.*?href="(.*?)">',re.S),get_information)
            result_manga_multiple = re.search(re.compile('<a.*?work\s_work\smanga\smultiple\s.*?href="(.*?)">',re.S),get_information)
            if result_video == None:
                if result_manga_multiple == None:                   #is it manga?
                    if result_multiple == None:                     #is it a multi-image post?
                        p_num = '1'
                        p_url = self.tool.removesomething('http://www.pixiv.net/' + result_url.group(1))
                        print "Report! Single image ahead..."
                        p_id = self.Download_Data(i,get_information,p_url,opener,dl_dir)
                        self.Download_Pic(p_num,i,opener,p_url,p_id,dl_dir)
                    else:
                        p_num = 'more'
                        p_url = self.tool.removesomething('http://www.pixiv.net/' + result_multiple.group(1))
                        print "Report! Multiple images ahead..."
                        p_id = self.Download_Data(i,get_information,p_url,opener,dl_dir)
                        self.Download_Pic(p_num,i,opener,p_url,p_id,dl_dir)
                else:
                    p_num = 'more'
                    p_url = self.tool.removesomething('http://www.pixiv.net/' + result_manga_multiple.group(1))
                    print "Report! Multiple manga pages ahead..."
                    p_id = self.Download_Data(i,get_information,p_url,opener,dl_dir)
                    self.Download_Pic(p_num,i,opener,p_url,p_id,dl_dir)
            else:
                print "Report! This one is an animation... nothing I can do...╮(╯▽╰)╭"

    def Download_Data(self,i,get_information,p_url,opener,dl_dir):
        #run the bs-processed html through regular expressions again to find the info we need (url,title,user)
        result_title = re.search(re.compile('<a href=".*?>(.*?)</a>',re.S),get_information)
        result_id = re.search(re.compile('<a class.*?illust_id=(.*?)">',re.S),get_information)
        result_user = re.search(re.compile('<span class.*?>(.*?)</span>',re.S),get_information)
        p_rank = str(i)
        p_id = result_id.group(1)
        p_title = result_title.group(1)
        p_user = result_user.group(1)
        print "RANK #" + p_rank + "\nPixiv ID:" + p_id + "\nTitle:" + p_title +"\nUser:" + p_user
        file_data = open('E:/pixivdata/' + dl_dir + '/pixiv_' + p_id + '.txt','w')     #create the info file
        message = [                         #info to save
            'rank:' + p_rank +'\n',
            'id:' + p_id + '\n',
            'title:' + p_title + '\n',
            'user:' + p_user + '\n',
            'url:' + p_url
            ]
        file_data.writelines(message)
        file_data.close()
        print "Report! pixiv info saved..."           #the info is stored as a txt file
        return p_id

    def Download_Pic(self,p_num,i,opener,p_url,p_id,dl_dir):
        if p_num == '1':
            soup = BeautifulSoup(opener.open(p_url))
            real_url = re.search(re.compile('.*?data-src="(.*?)"',re.S),str(soup.find_all("img",class_="original-image")))
            print 'Found the full-size image link (ˉ﹃ˉ)...\n' + real_url.group(1)
            p_type = self.tool.Pic_Type(real_url.group(1))
            file_pic = open('E:/pixivdata/' + dl_dir + '/pixiv_' + p_id + '.' + p_type,'wb')
            file_pic.write(self.Download_Request(opener,p_url,real_url))
            file_pic.close()
            print 'Saved to disk (/≧▽≦)/...'
        if p_num == 'more':
            soup = BeautifulSoup(opener.open(p_url))
            result_pic_more = re.search(re.compile('</li><li>.*?\s(.*?)P</li>',re.S),str(soup.find_all("ul",class_="meta")))
            print "Found " + result_pic_more.group(1) + " images...⊙▽⊙"
            for j in range(0,int(result_pic_more.group(1))):
                make_url = self.tool.make_big_url(p_url)+'&page='+str(j)        #build the per-page url
                m_soup = BeautifulSoup(opener.open(make_url))
                real_url = re.search(re.compile('<img.*?src="(.*?)"/>',re.S),str(m_soup.find_all("img")))
                p_type = self.tool.Pic_Type(real_url.group(1))
                print 'Found the full-size image link (ˉ﹃ˉ)...\n' + real_url.group(1)     #download and save the image
                file_pic = open('E:/pixivdata/' + dl_dir + '/pixiv_' + p_id + '_' + str(j) + '.' + p_type,'wb')
                file_pic.write(self.Download_Request(opener,make_url,real_url))
                file_pic.close()
                print 'Saved to disk (/≧▽≦)/...'

    def Program_Start(self):
        self.Login()
        opener = self.Cookie_Login()
        self.Choice_Pixiv(opener)

ps = Pixiv_Spider()
ps.p_id = raw_input('Please enter your pixiv id:')
ps.p_pw = raw_input('Please enter your pixiv password:')
print '1.Enter the international ranking'
ps.p_choice = raw_input()
ps.Program_Start()
Let's look at a run (today the top three happen to be a multi-image post, a single image, and an animation).