
How to save scraped data to CSV in Python


Saving from the command line (Scrapy):
scrapy crawl ju -o ju.csv
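The `-o` flag uses Scrapy's feed exports. The same thing can be configured once in the project's settings.py so the format and path do not have to be repeated on every run. A minimal sketch (the `FEEDS` setting exists in Scrapy 2.1+; `ju.csv` is just this example's output path):

```python
# settings.py -- feed export configuration (Scrapy 2.1+)
FEEDS = {
    "ju.csv": {
        "format": "csv",
        "encoding": "utf-8",
    },
}
```

With this in place, plain `scrapy crawl ju` writes the CSV without any `-o` flag.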

 

Method 1:
with open("F:/book_top250.csv", "w") as f:
    f.write("{},{},{},{},{}\n".format(book_name, rating, rating_num, comment, book_link))


Method 2:
with open("F:/book_top250.csv", "w", newline="") as f:  # without newline="", a blank line appears between rows on Windows
    w = csv.writer(f)
    w.writerow([book_name, rating, rating_num, comment, book_link])
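The practical difference between the two methods shows up when a field itself contains a comma: method 1's hand-formatted string silently gains an extra column, while `csv.writer` quotes the field so it survives as one value. A small self-contained check (the sample title is made up):

```python
import csv
import io

book_name = "Me, Myself and I"  # hypothetical title containing a comma
rating = "8.8"

# Method 1 style: plain string formatting.
# The embedded comma makes two separators -> three columns on read-back.
manual = "{},{}\n".format(book_name, rating)

# Method 2 style: csv.writer quotes the field automatically.
buf = io.StringIO()
csv.writer(buf).writerow([book_name, rating])
row = next(csv.reader(io.StringIO(buf.getvalue())))
print(row)  # the title comes back as a single field
```

This is the main reason to prefer the `csv` module over manual string joining for real-world titles and comments.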


Full code for Method 1:
import requests
from lxml import etree
import time

urls = ['https://book.douban.com/top250?start={}'.format(i * 25) for i in range(10)]
with open("F:/book_top250.csv", "w") as f:
    for url in urls:
        r = requests.get(url)
        selector = etree.HTML(r.text)

        books = selector.xpath('//*[@id="content"]/div/div[1]/div/table/tr/td[2]')
        for book in books:
            book_name = book.xpath('./div[1]/a/@title')[0]
            rating = book.xpath('./div[2]/span[2]/text()')[0]
            # strip leading/trailing "(", ")", newlines and spaces
            rating_num = book.xpath('./div[2]/span[3]/text()')[0].strip('()\n ')
            try:
                comment = book.xpath('./p[2]/span/text()')[0]
            except IndexError:  # some books have no one-line comment
                comment = ""
            book_link = book.xpath('./div[1]/a/@href')[0]
            f.write("{},{},{},{},{}\n".format(book_name, rating, rating_num, comment, book_link))

        time.sleep(1)
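The `rating_num` cleanup above leans on the fact that `str.strip(chars)` treats its argument as a *set* of characters to remove from both ends, not as an exact substring. A quick illustration (the raw string is an invented stand-in for the xpath node text):

```python
# str.strip(chars) removes any run of the listed characters at both ends
raw = "(\n    123456人评价\n)"  # invented sample of the raw node text
print(raw.strip('()\n '))       # leaves only the inner text

# the argument is a character set, so order does not matter
print("--hello--".strip("-"))
```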


Full code for Method 2:
import requests
from lxml import etree
import time
import csv

urls = ['https://book.douban.com/top250?start={}'.format(i * 25) for i in range(10)]
with open("F:/book_top250.csv", "w", newline='') as f:  # newline='' prevents blank lines between rows on Windows
    w = csv.writer(f)  # create the writer once, not once per book
    for url in urls:
        r = requests.get(url)
        selector = etree.HTML(r.text)

        books = selector.xpath('//*[@id="content"]/div/div[1]/div/table/tr/td[2]')
        for book in books:
            book_name = book.xpath('./div[1]/a/@title')[0]
            rating = book.xpath('./div[2]/span[2]/text()')[0]
            # strip leading/trailing "(", ")", newlines and spaces
            rating_num = book.xpath('./div[2]/span[3]/text()')[0].strip('()\n ')
            try:
                comment = book.xpath('./p[2]/span/text()')[0]
            except IndexError:  # some books have no one-line comment
                comment = ""
            book_link = book.xpath('./div[1]/a/@href')[0]

            w.writerow([book_name, rating, rating_num, comment, book_link])
        time.sleep(1)
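Neither method writes a header row, and the platform's default encoding can garble Chinese text when the file is opened in Excel. One way to address both is `csv.DictWriter`; the field names and the `utf-8-sig` choice below are my own additions, not from the original post:

```python
import csv

FIELDS = ["book_name", "rating", "rating_num", "comment", "book_link"]

# utf-8-sig writes a BOM so Excel on Windows detects the encoding correctly
with open("book_top250.csv", "w", newline="", encoding="utf-8-sig") as f:
    w = csv.DictWriter(f, fieldnames=FIELDS)
    w.writeheader()
    # one sample row standing in for the scraped values
    w.writerow({
        "book_name": "Example Book",
        "rating": "9.0",
        "rating_num": "12345",
        "comment": "",
        "book_link": "https://book.douban.com/subject/0/",
    })
```

`DictWriter` also guards against column-order mistakes, since each value is bound to a named field rather than to a position in a list.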