
Python Web Scraping: Scraping Professor Information from Stanford University and Harvard University


Requirements:
Institution and B-school faculty directory:
Stanford University: https://www.gsb.stanford.edu/faculty-research/faculty
Harvard University: https://www.hbs.edu/faculty/Pages/browse.aspx
Massachusetts Institute of Technology: http://mitsloan.mit.edu/faculty-and-research/faculty-directory/
University of Cambridge: https://www.jbs.cam.ac.uk/faculty-research/faculty-a-z/
University of Oxford: https://www.sbs.ox.ac.uk/about-us/people?f[0]=department_facet%3A230
University of Chicago: https://www.chicagobooth.edu/faculty/directory
Columbia University: https://www8.gsb.columbia.edu/faculty-research/faculty-directory?full_time=y&division=All&op=Search&form_build_id=form-Gl3ByqgZuJU6goJNDyaIByhMhWNTlR8iWuhntfhsjf0&form_id=all_dept_form
Yale University: https://som.yale.edu/faculty-research/faculty-directory
University of California, Berkeley: http://facultybio.haas.berkeley.edu/faculty-photo/
University of Pennsylvania: https://www.wharton.upenn.edu/faculty-directory/
Scrape only the following titles: Professor, Associate Professor, Assistant Professor, Professor Emeritus.
Fields to collect: name, title, email, address, telephone, personal homepage URL,
plus background introduction, research areas, research output, and teaching (research output is usually extensive, so a link to the relevant page is enough).

The following walks through Stanford University as the example.
Because this site's anti-scraping checks are fairly aggressive, we need to route requests through a proxy via a ProxyHandler. Proxy IPs are a standard countermeasure in the scraping-versus-anti-scraping game: many sites count how often a given IP hits them within a time window, and if the traffic doesn't look human the IP gets blocked, at which point the code raises a URLError (remote host disconnected or connection refused). Rotating across several proxy servers keeps any single IP from being hit too often. In the urllib.request library, a proxy server is configured through ProxyHandler.
Free open proxies cost essentially nothing: collect them from proxy-listing sites, test them (see the checker sketch after the IPS list below), and keep the ones that work for the scraper.

Some free short-term proxy sites:
Xici (西刺) free proxy IPs
Kuaidaili (快代理) free proxies
Proxy360
Quanwang (全網) proxy IPs
With enough proxy IPs, you can pick one at random for each request, just as with random User-Agent selection.

import urllib.request
import urllib
import xlwt
from lxml import etree
import requests
from bs4 import BeautifulSoup
import time
import random

# Pool of User-Agent strings to rotate through
USER_AGENT = [
         "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
         "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
         "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0",
         "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; .NET4.0C; .NET4.0E; .NET CLR 2.0.50727; .NET CLR 3.0.30729; .NET CLR 3.5.30729; InfoPath.3; rv:11.0) like Gecko",
         "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)",
         "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)",
         "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)",
         "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
         "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
         "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
         "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.8.131 Version/11.11",
         "Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11",
         "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
         "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Maxthon 2.0)",
         "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; TencentTraveler 4.0)",
         "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)",
         "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; The World)",
         "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SE 2.X MetaSr 1.0; SE 2.X MetaSr 1.0; .NET CLR 2.0.50727; SE 2.X MetaSr 1.0)",
         "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; 360SE)",
         "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Avant Browser)",
         "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)",
         "Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5",
         "Mozilla/5.0 (iPod; U; CPU iPhone OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5",
         "Mozilla/5.0 (iPad; U; CPU OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5",
         "Mozilla/5.0 (Linux; U; Android 2.3.7; en-us; Nexus One Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
         "MQQBrowser/26 Mozilla/5.0 (Linux; U; Android 2.3.7; zh-cn; MB200 Build/GRJ22; CyanogenMod-7) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
         "Opera/9.80 (Android 2.3.4; Linux; Opera Mobi/build-1107180945; U; en-GB) Presto/2.8.149 Version/11.10",
         "Mozilla/5.0 (Linux; U; Android 3.0; en-us; Xoom Build/HRI39) AppleWebKit/534.13 (KHTML, like Gecko) Version/4.0 Safari/534.13",
         "Mozilla/5.0 (BlackBerry; U; BlackBerry 9800; en) AppleWebKit/534.1+ (KHTML, like Gecko) Version/6.0.0.337 Mobile Safari/534.1+",
         "Mozilla/5.0 (hp-tablet; Linux; hpwOS/3.0.0; U; en-US) AppleWebKit/534.6 (KHTML, like Gecko) wOSBrowser/233.70 Safari/534.6 TouchPad/1.0",
         "Mozilla/5.0 (SymbianOS/9.4; Series60/5.0 NokiaN97-1/20.0.019; Profile/MIDP-2.1 Configuration/CLDC-1.1) AppleWebKit/525 (KHTML, like Gecko) BrowserNG/7.1.18124",
         "Mozilla/5.0 (compatible; MSIE 9.0; Windows Phone OS 7.5; Trident/5.0; IEMobile/9.0; HTC; Titan)",
         "UCWEB7.0.2.37/28/999",
         "NOKIA5700/ UCWEB7.0.2.37/28/999",
         "Openwave/ UCWEB7.0.2.37/28/999",
         "Mozilla/4.0 (compatible; MSIE 6.0; ) Opera/UCWEB7.0.2.37/28/999",
         # iPhone 6:
         "Mozilla/6.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/8.0 Mobile/10A5376e Safari/8536.25",

     ]
# Sample proxy IPs (free proxies expire quickly; replace with live ones)
IPS = [
    {'http': '115.223.253.171:9000'},
    {'http': '58.254.220.116:53579'},
    {'http': '119.254.94.103:45431'},
    {'http': '117.35.57.196:80'},
    {'http': '221.2.155.35:8060'},
    {'http': '118.190.95.35:9001'},
    {'http': '124.235.181.175:80'},
    {'http': '110.73.6.70:8123'},
    {'http': '110.73.0.121:8123'},
    {'http': '222.94.145.158:808'},
    {'http': '112.230.247.164:8060'},
    {'http': '121.196.196.105:80'},
    {'http': '219.145.170.23'},
    {'http': '115.218.215.184:9000'},
    {'http': '47.106.92.90:8081'},
]
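Free proxies die quickly, so before scraping it is worth filtering the list down to proxies that still respond, as suggested above. A minimal sketch, assuming httpbin.org as the test endpoint (my choice, not part of the original code):

def check_proxy(proxy, test_url='http://httpbin.org/ip', timeout=5):
    """Return True if the proxy answers a simple request within the timeout."""
    opener = urllib.request.build_opener(urllib.request.ProxyHandler(proxy))
    try:
        opener.open(test_url, timeout=timeout)
        return True
    except Exception:
        return False

# Keep only the proxies that still work
IPS = [p for p in IPS if check_proxy(p)]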

Fetch the directory URL and scrape the faculty listed there (filtering by title and the like can be done afterwards):

k = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M',
     'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z']
def get_html(i):
    # Pick a random proxy and User-Agent for this request
    proxy = random.choice(IPS)
    header2 = {"User-Agent": random.choice(USER_AGENT)}
    httpproxy_handler = urllib.request.ProxyHandler(proxy)
    opener = urllib.request.build_opener(httpproxy_handler)
    # Directory page for faculty whose last name starts with k[i]
    url = 'https://www.gsb.stanford.edu/faculty-research/faculty?last_name=' + k[i]
    req = urllib.request.Request(url, headers=header2)
    response = opener.open(req)
    html = response.read().decode('utf-8')
    return html
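Since any single free proxy can fail mid-run, it helps to retry a request with a freshly drawn proxy instead of letting a URLError kill the crawl. A small wrapper sketch (the retry count and delay are my own choices):

def get_html_with_retry(i, retries=3):
    """Call get_html, drawing a new random proxy on each failure."""
    for attempt in range(retries):
        try:
            return get_html(i)
        except Exception as e:  # typically urllib.error.URLError from a dead proxy
            print('request failed (%s), retrying with a new proxy...' % e)
            time.sleep(2)
    return None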

The directory doesn't put every professor on a single page; it groups them by last-name initial, so we first scrape each professor's name and personal homepage URL.

def get_data(html):
    soup = BeautifulSoup(html, 'html.parser')
    div_people_list = soup.find_all('div', attrs={'class': 'field-item even'})
    div_job_list = soup.find_all('div', attrs={'class': 'person-position'})
    for person in div_people_list:
        for a in person.find_all('a'):
            # Build the absolute homepage URL, stripping stray whitespace
            href = a['href'].replace(' ', '').replace('\n', '')
            url = 'https://www.gsb.stanford.edu%s' % href
            names.append(a.get_text())       # global result lists, see the main routine below
            PersonalWebsites.append(url)

    for jlist in div_job_list:
        # .string is None when the div has nested children
        jobs.append(jlist.string if jlist.string is not None else 'null')
    time.sleep(2)
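The requirement is to keep only four ranks, so the collected jobs list can be filtered afterwards. A simple helper (my own, not in the original source; real position strings may carry suffixes such as "of Finance", so adjust the match as needed):

WANTED_TITLES = ('Professor', 'Associate Professor',
                 'Assistant Professor', 'Professor Emeritus')

def is_wanted(title):
    """True if a position string starts with one of the required ranks."""
    return title != 'null' and title.strip().startswith(WANTED_TITLES)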

Between URL switches, time.sleep(), response.close(), and the like help keep the access rate from looking too aggressive.
Next, visit each professor's personal homepage and scrape the fields we want:

def get_data2(url):
    proxy = random.choice(IPS)
    print(proxy)  # log which proxy is in use
    header2 = {"User-Agent": random.choice(USER_AGENT)}
    httpproxy_handler = urllib.request.ProxyHandler(proxy)
    opener = urllib.request.build_opener(httpproxy_handler)
    req = urllib.request.Request(url, headers=header2)
    response = opener.open(req)
    html = response.read().decode('utf-8')
    soup = BeautifulSoup(html, 'html.parser')  # html.parser is the parser
    s = etree.HTML(html)

    # Academic area
    aca = soup.find('div', attrs={'class': 'field-name-field-academic-area-single'})
    if aca is not None and aca.find('a') is not None:
        academic = aca.find('a').get_text()
    else:
        academic = 'null'
    Academics.append(academic)

    # Email
    emi = s.xpath('//*[@id="block-system-main"]/div/div/div[1]/fieldset/div/div[2]/div/div/span/a/@href')
    emails.append(emi[0] if emi else 'null')

    # Telephone
    te = s.xpath('//*[@id="block-system-main"]/div/div/div[1]/fieldset/div/div[1]/div[1]/div/div/a/text()')
    tels.append(te[0] if te else 'null')

    # Background introduction
    backgro = s.xpath('//*[@id="block-system-main"]/div/div/div[2]/div/div[1]/div[1]/p[1]/text()')
    Backgrounds.append(backgro[0] if backgro else 'null')

    # Research interests
    inter = s.xpath('//*[@id="block-system-main"]/div/div/div[2]/div[4]/div/ul/li/text()')
    InterestsAreas.append('  '.join(inter) if inter else 'null')

    response.close()
    time.sleep(2)

Finally, write the main routine that ties these functions together. The complete source code and the resulting data (for both Stanford and Harvard) can be found in my uploaded resources.
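As a rough sketch of what that main routine can look like (the global result lists are the ones the functions above append to; the xlwt column layout is my own choice, and the uploaded source may differ):

names, PersonalWebsites, jobs = [], [], []
Academics, emails, tels, Backgrounds, InterestsAreas = [], [], [], [], []

if __name__ == '__main__':
    for i in range(len(k)):          # one directory page per last-name initial
        get_data(get_html(i))
    for url in PersonalWebsites:     # then visit each personal homepage
        get_data2(url)

    # Write everything to an Excel sheet with xlwt
    workbook = xlwt.Workbook(encoding='utf-8')
    sheet = workbook.add_sheet('Stanford')
    headers = ['Name', 'Title', 'Homepage', 'Email', 'Telephone',
               'Academic Area', 'Background', 'Research Interests']
    for col, h in enumerate(headers):
        sheet.write(0, col, h)
    rows = zip(names, jobs, PersonalWebsites, emails, tels,
               Academics, Backgrounds, InterestsAreas)
    for r, row in enumerate(rows, start=1):
        for c, value in enumerate(row):
            sheet.write(r, c, value)
    workbook.save('stanford_faculty.xls')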
One problem I ran into: I originally used BeautifulSoup selectors for all the extraction, but nested tags trip it up.
For example, with a paragraph like <p>m,ssmn<br></br>nbmd<a href='www.baidu.com'>wewerw</a></p>, asking for all the text inside the <p> failed and returned an empty result, while locating the <p> with XPath yielded only the leading fragment of the text, not all of it.
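For reference, a likely cause (my reading, not verified against the original pages) is that BeautifulSoup's .string returns None as soon as a tag has more than one child, whereas .get_text() in BeautifulSoup and string(.) in lxml concatenate all nested text:

from bs4 import BeautifulSoup
from lxml import etree

fragment = "<p>m,ssmn<br/>nbmd<a href='www.baidu.com'>wewerw</a></p>"

p = BeautifulSoup(fragment, 'html.parser').p
print(p.string)      # None, because <p> has several children
print(p.get_text())  # 'm,ssmnnbmdwewerw': all nested text joined

node = etree.HTML(fragment).xpath('//p')[0]
print(node.xpath('string(.)'))  # same concatenated text via lxml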

If anyone knows a cleaner way around this, feel free to leave a comment.