
How powerful are Python coroutines?


  I scraped some stuff off a ×× website to measure how much of a speedup coroutines actually give. I won't post the site link...

import requests
from bs4 import BeautifulSoup as sb
import time

url = "http://www.××××.com/html/part/index27_"
url_list = []

start = time.time()

for i in range(2, 47):
    print("get page " + str(i))
    headers = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.78 Safari/537.36"}
    # headers must be passed by keyword; positionally requests would treat it as params
    res = requests.get(url + str(i) + ".html", headers=headers)
    res.encoding = "gb2312"  # the site is GB2312-encoded
    soup = sb(res.text, "lxml")
    div = soup.find("div", class_="box list channel")
    for li in div.find_all("li"):
        urls = "http://www.××××.com" + li.a.get("href")
        url_list.append(urls)
        print(urls)

print(url_list)
print(time.time() - start)

The whole scrape took 111.7 s. Each requests.get call blocks until its response comes back, so with 45 pages the total time is roughly the sum of all the network round-trip waits.
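A minimal sketch of why the sequential version is slow (the fake_download helper and the 1-second delay are my own stand-ins, not part of the scraper above): blocking waits executed one after another simply add up.

import time

def fake_download(i):
    time.sleep(1)  # stands in for one blocking network round trip
    print("done", i)

start = time.time()
for i in range(3):
    fake_download(i)  # each call waits for the previous one to finish
print("elapsed:", round(time.time() - start, 1))  # ~3.0: the waits add up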

Now let's try coroutines:


from gevent import monkey
monkey.patch_all()  # patch blocking I/O first, before requests is imported

import requests
from bs4 import BeautifulSoup as sb
import time
import gevent

url = "http://www.231ka.com/html/part/index27_"
url_list = []

for i in range(2, 47):
    url_list.append(url + str(i) + ".html")

def get(url):
    print("get data from: " + url)
    headers = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.78 Safari/537.36"}
    res = requests.get(url, headers=headers)
    res.encoding = "gb2312"
    soup = sb(res.text, "lxml")
    div = soup.find("div", class_="box list channel")
    for li in div.find_all("li"):
        ur = "http://www.231ka.com" + li.a.get("href")
        print(ur)

start = time.time()
task = []
for u in url_list:
    task.append(gevent.spawn(get, u))  # spawn one greenlet per page
gevent.joinall(task)                   # wait for all of them to finish
print(time.time() - start)

The result: 55.6 s.

In other words, still on a single thread, switching to coroutines roughly halved the runtime, and all it took was a third-party Python coroutine library (gevent). The gain comes from overlapping I/O: while one greenlet is waiting for a response, the others get to run.
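To see the mechanism in isolation, here is the same toy benchmark rewritten with gevent (again, fake_download is my own illustration, not from the scraper): gevent.sleep yields to the event loop instead of blocking, so all three waits overlap on a single thread.

import time
import gevent

def fake_download(i):
    gevent.sleep(1)  # yields to the event loop instead of blocking the thread
    print("done", i)

start = time.time()
gevent.joinall([gevent.spawn(fake_download, i) for i in range(3)])
print("elapsed:", round(time.time() - start, 1))  # ~1.0: the waits overlap

In the real scraper the waits are HTTP responses rather than sleeps, which is why monkey.patch_all() is needed: it swaps the blocking socket calls that requests uses for cooperative ones that yield the same way.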

Pretty awesome.
