Scraping property data with Python: buy wherever prices drop. You may not profit, but I'll never lose
By 阿新 • Published 2018-12-28
Hi folks, I'm back. This time we'll write a Python crawler that scrapes property data for Urumqi and plots it on a map. For the map I used BDP Personal Edition, a free online data-analysis and visualization tool that can import CSV or Excel data.
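Since BDP accepts CSV as well as Excel, here is a minimal standard-library sketch of writing rows in the shape this post collects (name, address, price, longitude, latitude). The sample row and the file name `jiwu.csv` are illustrative, not part of the original project:

```python
import csv

# Hypothetical rows in the shape scraped later in this post:
# (name, address, price, longitude, latitude).
rows = [
    ('Sample Estate', 'No. 1 Example Road', 8500, 87.6168, 43.8256),
]

# utf-8-sig adds a BOM so Excel/BDP detect the encoding correctly.
with open('jiwu.csv', 'w', newline='', encoding='utf-8-sig') as f:
    writer = csv.writer(f)
    writer.writerow(['name', 'address', 'price', 'lng', 'lat'])
    writer.writerows(rows)
```

A file like this can be uploaded to BDP directly as a data source.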
This time I used the Scrapy framework, which may be overkill for the job; I had just finished learning it and mainly wanted the practice. Before writing any code, I still recommend analyzing the site and its data first, because good analysis gets you twice the result with half the effort. The Urumqi Jiwu site (烏魯木齊吉屋網, wlmq.jiwu.com, with new-development listings for Urumqi) has fairly complete data. On each page we grab the list of properties and follow the pagination; each list entry links to a detail page, from which we extract the property details (name, address, price, longitude, and latitude). A pipeline then saves each item to Excel, and finally we generate the map report in BDP. Enough talk, here's the code:
JiwuspiderSpider.py
```python
# -*- coding: utf-8 -*-
import re

from scrapy import Spider, Request

from jiwu.items import JiwuItem


class JiwuspiderSpider(Spider):
    name = "jiwuspider"
    allowed_domains = ["wlmq.jiwu.com"]
    start_urls = ['http://wlmq.jiwu.com/loupan']

    def parse(self, response):
        """Parse one page of the property list."""
        # Follow every listing URL on this page to the detail parser.
        for url in response.xpath('//a[@class="index_scale"]/@href').extract():
            yield Request(url, self.parse_html)

        # If a "next page" link exists, follow it and parse it the same way.
        nextpage = response.xpath(
            '//a[@class="tg-rownum-next index-icon"]/@href').extract_first()
        if nextpage:
            yield Request(nextpage, self.parse)

    def parse_html(self, response):
        """Parse one property detail page and yield an item."""
        pattern = re.compile(
            '<script type="text/javascript">.*?lng = \'(.*?)\';'
            '.*?lat = \'(.*?)\';.*?bname = \'(.*?)\';'
            '.*?address = \'(.*?)\';.*?price = \'(.*?)\';',
            re.S)
        item = JiwuItem()
        results = re.findall(pattern, response.text)
        for result in results:
            item['name'] = result[2]
            item['address'] = result[3]
            # Keep only the digits of the price; fall back to 0 if empty.
            pricestr = result[4]
            s = re.findall(r'\d+', pricestr)
            if len(s) == 0:
                item['price'] = 0
            else:
                item['price'] = s[0]
            item['lng'] = result[0]
            item['lat'] = result[1]
            yield item
```
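The detail pages embed the coordinates and price in an inline `<script>` block, which is why the spider falls back to a regex instead of XPath. Here is a self-contained demo of that extraction against a made-up script snippet (the sample values are invented; only the field names `lng`, `lat`, `bname`, `address`, `price` come from the spider above):

```python
import re

# Illustrative only: a fabricated <script> block mimicking the detail
# pages, using the same variable names the spider's regex expects.
sample = '''<script type="text/javascript">
    var lng = '87.6168';
    var lat = '43.8256';
    var bname = 'Sample Estate';
    var address = 'No. 1 Example Road';
    var price = '8500 yuan/m2';
</script>'''

pattern = re.compile(
    '<script type="text/javascript">.*?lng = \'(.*?)\';'
    '.*?lat = \'(.*?)\';.*?bname = \'(.*?)\';'
    '.*?address = \'(.*?)\';.*?price = \'(.*?)\';',
    re.S)

lng, lat, name, address, price = re.findall(pattern, sample)[0]

# Same price cleanup as the spider: keep digits only, default to 0.
digits = re.findall(r'\d+', price)
price_value = int(digits[0]) if digits else 0
print(name, price_value)
```

Note `re.S` is essential: it lets `.*?` cross the line breaks between the script variables.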
items.py
```python
# -*- coding: utf-8 -*-

# Define here the models for your scraped items.
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html
import scrapy


class JiwuItem(scrapy.Item):
    # Fields for one property listing.
    name = scrapy.Field()
    price = scrapy.Field()
    address = scrapy.Field()
    lng = scrapy.Field()
    lat = scrapy.Field()
```
pipelines.py — note that the MongoDB write is commented out here; pick whichever storage method you prefer.
```python
# -*- coding: utf-8 -*-

# Define your item pipelines here.
# Don't forget to add your pipeline to the ITEM_PIPELINES setting.
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
import pymongo
from openpyxl import workbook
from scrapy.conf import settings


class JiwuPipeline(object):
    wb = workbook.Workbook()
    ws = wb.active
    ws.append(['Name', 'Address', 'Price', 'Longitude', 'Latitude'])

    def __init__(self):
        # Read the MongoDB connection info from the Scrapy settings.
        host = settings['MONGODB_URL']
        port = settings['MONGODB_PORT']
        dbname = settings['MONGODB_DBNAME']
        client = pymongo.MongoClient(host=host, port=port)
        db = client[dbname]
        self.table = db[settings['MONGODB_TABLE']]

    def process_item(self, item, spider):
        jiwu = dict(item)
        # self.table.insert(jiwu)  # uncomment to also save to MongoDB
        line = [item['name'], item['address'], str(item['price']),
                item['lng'], item['lat']]
        self.ws.append(line)
        self.wb.save('jiwu.xlsx')
        return item
```
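For the pipeline to run at all, it has to be registered in `ITEM_PIPELINES`, and it reads four MongoDB keys from the Scrapy settings. A minimal `settings.py` sketch follows; the key names match the pipeline code above, but every value here is an assumption to adapt to your own environment:

```python
# settings.py (sketch) -- the pipeline above reads these keys.
ITEM_PIPELINES = {
    'jiwu.pipelines.JiwuPipeline': 300,
}

MONGODB_URL = 'localhost'   # assumed MongoDB host
MONGODB_PORT = 27017        # default MongoDB port
MONGODB_DBNAME = 'jiwu'     # assumed database name
MONGODB_TABLE = 'loupan'    # assumed collection name
```

With this in place, `scrapy crawl jiwuspider` runs the spider and writes jiwu.xlsx in the project directory.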
The final report data:
The MongoDB database:
And the map report: a BDP shared dashboard showing the visualization.