
A Detailed Guide to Using the requests Library for Python Web Scraping

        Requests is a simple, easy-to-use HTTP library implemented in Python; it is much more concise to use than urllib. Requests is a third-party HTTP library written in Python, built on top of urllib, and released under the Apache2 Licensed open-source license.

       Official Requests documentation: requests official Chinese documentation

1. Sending GET requests with requests, and common attributes

1.1 Sending a GET request without parameters

1. Sending a basic GET request with requests
import requests

response = requests.get('http://httpbin.org/get')
print(response.text) # response.text shows the response body as text

'''Output:
{
  "args": {}, 
  "headers": {
    "Accept": "*/*", 
    "Accept-Encoding": "gzip, deflate", 
    "Connection": "close", 
    "Host": "httpbin.org", 
    "User-Agent": "python-requests/2.20.1"
  }, 
  "origin": "114.221.2.90", 
  "url": "http://httpbin.org/get"
}
'''

1.2 Sending a GET request with parameters

1. Sending a GET request with parameters
Method 1:
import requests
response = requests.get("http://httpbin.org/get?name=germey&age=22")
print(response.text)
Method 2:
import requests

data = {
    'name': 'germey',
    'age': 22
}
response = requests.get("http://httpbin.org/get", params=data)
print(response.text)
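Under the hood, the params dict is URL-encoded into the query string, so both methods request the same URL. The stdlib urllib.parse.urlencode produces the same encoding; this is a sketch of the idea, not requests' actual internals:

```python
from urllib.parse import urlencode

data = {
    'name': 'germey',
    'age': 22,
}
query = urlencode(data)                  # 'name=germey&age=22'
url = 'http://httpbin.org/get?' + query  # same URL as method 1 above
print(url)
```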

1.3 Parsing JSON with requests

1. Parsing JSON with requests
import requests
import json

response = requests.get("http://httpbin.org/get")
print(type(response.text))
print('-------------------------------')
print(response.json()) # parse the response body as JSON
print('-------------------------------')
print(json.loads(response.text)) # parse with the json module; same result as above
print('-------------------------------')
print(type(response.json()))
'''Output:
<class 'str'>
-------------------------------
{'args': {}, 'headers': {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Connection': 'close', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.20.1'}, 'origin': '114.221.2.90', 'url': 'http://httpbin.org/get'}
-------------------------------
{'args': {}, 'headers': {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Connection': 'close', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.20.1'}, 'origin': '114.221.2.90', 'url': 'http://httpbin.org/get'}
-------------------------------
<class 'dict'>
'''
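Since response.json() is essentially json.loads(response.text), it fails the same way when the body is not JSON (for example, an HTML error page). A stdlib-only sketch of both cases:

```python
import json

body = '{"args": {}, "url": "http://httpbin.org/get"}'
data = json.loads(body)  # what response.json() does for a JSON body
print(data['url'])       # http://httpbin.org/get

# A non-JSON body raises json.JSONDecodeError (a subclass of ValueError)
try:
    json.loads('<html>503 Service Unavailable</html>')
except json.JSONDecodeError:
    print('body was not JSON')
```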

1.4 Fetching page text or binary data with a GET request

1. Fetching page data in binary and text form with a GET request
import requests

response = requests.get("https://github.com/favicon.ico")
print(type(response.text), type(response.content)) # <class 'str'> <class 'bytes'>
print(response.text)    # text is a str
print(response.content) # content is the raw binary data

2. Saving data fetched by a GET request to a local file

import requests

response = requests.get("https://github.com/favicon.ico")
with open('./favicon.ico', 'wb') as f:  # save the response body to the current directory
    f.write(response.content)  # the with block closes the file automatically
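For large downloads, requests supports streaming with stream=True and response.iter_content(chunk_size), so the whole body never has to sit in memory at once. The chunked-copy loop itself can be sketched with stdlib streams; io.BytesIO stands in for the response here:

```python
import io

def copy_in_chunks(src, dst, chunk_size=1024):
    # Same loop shape as: for chunk in response.iter_content(chunk_size): f.write(chunk)
    while True:
        chunk = src.read(chunk_size)
        if not chunk:  # empty bytes means end of stream
            break
        dst.write(chunk)

src = io.BytesIO(b'\x00' * 2500)  # stands in for a 2500-byte response body
dst = io.BytesIO()                # stands in for the open output file
copy_in_chunks(src, dst)
print(len(dst.getvalue()))        # 2500
```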

1.5 Adding a headers parameter to a GET request

    Most crawlers need to send headers; otherwise many sites simply return an error such as not found. The key header is User-Agent. For example, scraping the Zhihu page below fails if no headers are added:

import requests

response = requests.get("https://www.zhihu.com/explore")
print(response.text)
'''
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<hr><center>openresty</center>
</body>
</html>

'''

Sending the GET request with a headers parameter allows normal access to Zhihu:

import requests

headers = {
'user-agent':'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'
}
response = requests.get("https://www.zhihu.com/explore", headers=headers)
print(response.text)

1.6 Extracting common attributes from a GET response

import requests

response = requests.get('https://www.baidu.com/')
print(type(response)) # response type
print('--------------------------------------')
print(response.status_code) # HTTP status code
print('--------------------------------------')
print(type(response.text))
print('--------------------------------------')
print(response.text)
print('--------------------------------------')
print(response.cookies)

'''Output:
<class 'requests.models.Response'> 
--------------------------------------
200
--------------------------------------
<class 'str'>
--------------------------------------
<!DOCTYPE html>
<!--STATUS OK--><html> <head><meta http-equiv=content-type content=text/html;charset=utf-8><meta http-equiv=X-UA-Compatible content=IE=Edge><meta content=always name=referrer><link rel=stylesheet type=text/css href=https://ss1.bdstatic.com/5eN1bjq8AAUYm2zgoY3K/r/www/cache/bdorz/baidu.min.css><title>ç¾åº¦ä¸ä¸ï¼ä½ å°±ç¥é</title></head> <body link=#0000cc> <div id=wrapper> <div id=head> <div class=head_wrapper> <div class=s_form> <div class=s_form_wrapper> <div id=lg> <img hidefocus=true src=//www.baidu.com/img/bd_logo1.png width=270 height=129> </div> <form id=form name=f action=//www.baidu.com/s class=fm> <input type=hidden name=bdorz_come value=1> <input type=hidden name=ie value=utf-8> <input type=hidden name=f value=8> <input type=hidden name=rsv_bp value=1> <input type=hidden name=rsv_idx value=1> <input type=hidden name=tn value=baidu><span class="bg s_ipt_wr"><input id=kw name=wd class=s_ipt value maxlength=255 autocomplete=off autofocus=autofocus></span><span class="bg s_btn_wr"><input type=submit id=su value=ç¾åº¦ä¸ä¸ class="bg s_btn" autofocus></span> </form> </div> </div> <div id=u1> <a href=http://news.baidu.com name=tj_trnews class=mnav>æ°é»</a> <a href=https://www.hao123.com name=tj_trhao123 class=mnav>hao123</a> <a href=http://map.baidu.com name=tj_trmap class=mnav>å°å¾</a> <a href=http://v.baidu.com name=tj_trvideo class=mnav>è§é¢</a> <a href=http://tieba.baidu.com name=tj_trtieba class=mnav>è´´å§</a> <noscript> <a href=http://www.baidu.com/bdorz/login.gif?login&amp;tpl=mn&amp;u=http%3A%2F%2Fwww.baidu.com%2f%3fbdorz_come%3d1 name=tj_login class=lb>ç»å½</a> </noscript> <script>document.write('<a href="http://www.baidu.com/bdorz/login.gif?login&tpl=mn&u='+ encodeURIComponent(window.location.href+ (window.location.search === "" ? "?" : "&")+ "bdorz_come=1")+ '" name="tj_login" class="lb">ç»å½</a>');
                </script> <a href=//www.baidu.com/more/ name=tj_briicon class=bri style="display: block;">æ´å¤äº§å</a> </div> </div> </div> <div id=ftCon> <div id=ftConw> <p id=lh> <a href=http://home.baidu.com>å³äºç¾åº¦</a> <a href=http://ir.baidu.com>About Baidu</a> </p> <p id=cp>&copy;2017&nbsp;Baidu&nbsp;<a href=http://www.baidu.com/duty/>使ç¨ç¾åº¦åå¿è¯»</a>&nbsp; <a href=http://jianyi.baidu.com/ class=cp-feedback>æè§åé¦</a>&nbsp;京ICPè¯030173å·&nbsp; <img src=//www.baidu.com/img/gs.gif> </p> </div> </div> </div> </body> </html>

--------------------------------------
<RequestsCookieJar[<Cookie BDORZ=27315 for .baidu.com/>]>
'''

Note: the garbled characters in response.text above appear because requests guessed the wrong encoding for the page; setting response.encoding = 'utf-8' before reading response.text fixes them.
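response.status_code is a plain integer, so it can be compared directly (e.g. response.status_code == 200) or mapped to its reason phrase with the stdlib http.HTTPStatus:

```python
from http import HTTPStatus

for code in (200, 301, 404):
    status = HTTPStatus(code)
    print(code, status.phrase)  # 200 OK / 301 Moved Permanently / 404 Not Found
```

requests also provides response.raise_for_status(), which raises an HTTPError for 4xx/5xx responses instead of leaving the check to you.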

2. Sending POST requests with requests (much like GET)

2.1 Sending a POST request; pass the parameters as a dict

import requests

data = {'name': 'germey', 'age': '22'}
response = requests.post("http://httpbin.org/post", data=data)
print(response.text)

'''Output:
{
  "args": {}, 
  "data": "", 
  "files": {}, 
  "form": {
    "age": "22", 
    "name": "germey"
  }, 
  "headers": {
    "Accept": "*/*", 
    "Accept-Encoding": "gzip, deflate", 
    "Connection": "close", 
    "Content-Length": "18", 
    "Content-Type": "application/x-www-form-urlencoded", 
    "Host": "httpbin.org", 
    "User-Agent": "python-requests/2.19.1"
  }, 
  "json": null, 
  "origin": "114.221.2.90", 
  "url": "http://httpbin.org/post"
}


'''
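The form field in the output shows that data= is form-encoded (Content-Type: application/x-www-form-urlencoded). requests also accepts json=payload, which serializes the dict to a JSON body instead. The two body formats can be compared with the stdlib:

```python
import json
from urllib.parse import urlencode

payload = {'name': 'germey', 'age': '22'}

form_body = urlencode(payload)   # what data=payload sends: name=germey&age=22
json_body = json.dumps(payload)  # what json=payload sends: {"name": "germey", "age": "22"}
print(form_body)
print(json_body)
```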

2.2 Sending a POST request with headers

import requests

data = {'name': 'germey', 'age': '22'}
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'
}
response = requests.post("http://httpbin.org/post", data=data, headers=headers)
print(response.json())

'''Output:
{'args': {}, 'data': '', 'files': {}, 'form': {'age': '22', 'name': 'germey'}, 'headers': {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Connection': 'close', 'Content-Length': '18', 'Content-Type': 'application/x-www-form-urlencoded', 'Host': 'httpbin.org', 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'}, 'json': None, 'origin': '114.221.2.90', 'url': 'http://httpbin.org/post'}
'''

2.3 Common response attributes

import requests

response = requests.get('http://www.jianshu.com')
print(type(response.status_code), response.status_code)
print(type(response.headers), response.headers)
print(type(response.cookies), response.cookies)
print(type(response.url), response.url)
print(type(response.history), response.history)

3. Other common uses of the requests library

3.1 File upload

1. Uploading the favicon.ico file in the current directory to a remote server
import requests

with open('./favicon.ico', 'rb') as f:  # open in binary mode; the with block closes the file
    files = {'file': f}
    response = requests.post("http://httpbin.org/post", files=files)
print(response.text)

'''Output:
{
  "args": {}, 
  "data": "", 
  "files": {
    "file": "data:application/octet-stream;base64,AAABAAIAEBAAAAEAIAAoBQAAJgAAACAgAAABACAAKBQAAE4FAAAoAAAAEAAAACAAAAABACAAAAA
....... content omitted ...................
  }, 
  "form": {}, 
  "headers": {
    "Accept": "*/*", 
    "Accept-Encoding": "gzip, deflate", 
    "Connection": "close", 
    "Content-Length": "6665", 
    "Content-Type": "multipart/form-data; boundary=64baa48fe6e9aa9985fd4758bc97f1e9", 
    "Host": "httpbin.org", 
    "User-Agent": "python-requests/2.20.1"
  }, 
  "json": null, 
  "origin": "114.221.2.90", 
  "url": "http://httpbin.org/post"
}

'''

3.2 Getting a site's cookie values

import requests

response = requests.get("https://www.baidu.com")
print(response.cookies)
for key, value in response.cookies.items():
    print(key + '=' + value)
'''Output:
<RequestsCookieJar[<Cookie BDORZ=27315 for .baidu.com/>]>
BDORZ=27315
'''
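response.cookies is a RequestsCookieJar built from the server's Set-Cookie headers, and it iterates like a dict, as above. The same header line can be parsed with the stdlib http.cookies.SimpleCookie, as a sketch of what the jar holds (the attribute values here are illustrative, taken from the example output):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie.load('BDORZ=27315; max-age=86400; domain=.baidu.com; path=/')
for key, morsel in cookie.items():
    print(key + '=' + morsel.value)  # BDORZ=27315
```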

3.3 Simulating a login state with Session()

A Session object persists cookies across requests: the cookie set by the first request below is automatically sent with the second.

import requests

s = requests.Session()
s.get('http://httpbin.org/cookies/set/number/123456789')
response = s.get('http://httpbin.org/cookies')
print(response.text)

3.4 Certificate verification

1. When accessing some sites whose certificate cannot be verified, the request raises an error. For example, requesting the 12306 site raises SSLError:

import requests

response = requests.get('https://www.12306.cn')
print(response.status_code)

2. In that case you can pass verify=False to skip verification, and call urllib3.disable_warnings() to suppress the resulting warning:

import requests
import urllib3  # requests.packages.urllib3 is a deprecated alias for urllib3
urllib3.disable_warnings()
response = requests.get('https://www.12306.cn', verify=False)  # disable certificate verification
print(response.status_code)

3. You can also supply a local client certificate when sending the request:

import requests

response = requests.get('https://www.12306.cn', cert=('./server.crt', './key'))
print(response.status_code)

3.5 Proxy settings in requests

1. Setting up a proxy
import requests

proxies = {
  "http": "http://127.0.0.1:9743",
  "https": "https://127.0.0.1:9743",
}

response = requests.get("https://www.taobao.com", proxies=proxies)
print(response.status_code)

2. Proxy setup, method 2 (proxy with authentication)
import requests

proxies = {
    "http": "http://user:[email protected]:9743/",
}
response = requests.get("https://www.taobao.com", proxies=proxies)
print(response.status_code)

3.6 Timeout settings and exception handling

import requests
from requests.exceptions import ReadTimeout
try:
    response = requests.get("http://httpbin.org/get", timeout=0.8)
    print(response.status_code)
    print('----------------- exception boundary -------------------')
except ReadTimeout:
    print('Timeout')

'''Test result 1 (the request completed in time):
200
----------------- exception boundary -------------------
'''
'''Test result 2 (the request timed out):
Timeout
'''

3.7 Authentication

When accessing some sites you first hit a login page and must supply a username and password before doing anything. In that case you can pass auth to log in with the account credentials, as shown below (requests also accepts a plain tuple, e.g. auth=('user', '123')):

import requests
from requests.auth import HTTPBasicAuth

r = requests.get('http://120.27.34.24:9001', auth=HTTPBasicAuth('user', '123'))
print(r.status_code)

3.8 Common request exception types

When you are not sure which error might occur, use try...except to catch all requests exceptions. Note that RequestException is the base class of the others, so it must come last in the except chain:

import requests
from requests.exceptions import ReadTimeout, ConnectionError, RequestException
try:
    response = requests.get("http://httpbin.org/get", timeout=0.5)
    print(response.status_code)
except ReadTimeout:
    print('Timeout')
except ConnectionError:
    print('Connection error')
except RequestException:
    print('Error')