

As shown in the figure, each rental listing mainly includes: price, layout, area, floor, decoration, building type, district, community, rental type, orientation, and nearby subway line.

Now for the code:
Create a Scrapy project (scrapy startproject Anjuke_Spider), which generates the directory shown in Figure 3. Then create a .py file under the spiders folder, named anjuke_zufang here, and finally add a run file. The final directory is shown in Figure 4 and sketched below.
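Since Figures 3 and 4 are not reproduced here, the resulting layout is roughly the following (a standard scrapy startproject tree plus the two files added by hand; exact contents may vary slightly with the Scrapy version):

Anjuke_Spider/
├── scrapy.cfg
├── run.py                      # added manually
└── Anjuke_Spider/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        ├── __init__.py
        └── anjuke_zufang.py    # added manually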


Next, skipping Scrapy's settings for now, let's write the main code.
Import the required libraries:
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from Anjuke_Spider.items import AnjukeSpiderItem
Then create the spider class:
class AnjukeSpider(CrawlSpider):
    name = 'anjuke'
Set the starting URL:
start_urls = ['https://bj.zu.anjuke.com/']
Now for the core of this article: LinkExtractor. First, the captured requests and the pages themselves need to be analyzed, because not every crawl can use LinkExtractor; the URLs to be crawled must follow a recognizable pattern.
Start with the URLs of the listing-index pages. Clicking through the page numbers at the bottom of the page, the URLs look like this:
https://bj.zu.anjuke.com/?from=navigation    (page 1)
https://bj.zu.anjuke.com/fangyuan/p2/    (page 2)
https://bj.zu.anjuke.com/fangyuan/p3/    (page 3)
https://bj.zu.anjuke.com/fangyuan/p4/    (page 4)
https://bj.zu.anjuke.com/fangyuan/p5/    (page 5)
Changing the page-1 URL to "https://bj.zu.anjuke.com/fangyuan/p1/" and visiting it also returns page 1. So the index pages follow a pattern: the URLs are identical except for the trailing page number, and they can be matched with:
https://bj.zu.anjuke.com/fangyuan/p\d+/
From an index page, clicking a listing opens its detail page with the information we want (Figure 5). The detail-page URLs look like Figure 6, of the form:
https://bj.zu.anjuke.com/fangyuan/1203399585?from=Filter_1&hfilter=filterlist
https://bj.zu.anjuke.com/fangyuan/1202089286?from=Filter_2&hfilter=filterlist
https://bj.zu.anjuke.com/fangyuan/1200258936?from=Filter_3&hfilter=filterlist
Across different listings, only the 10-digit number at the end of the URL changes. The part after the "?" is just a query string (tracking parameters) and can be dropped to simplify the URL; removing it still loads the listing page. So the detail pages can be matched with:
https://bj.zu.anjuke.com/fangyuan/\d{10}
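A quick sanity check of the two patterns against the sample URLs above (a small standalone snippet, not part of the spider itself):

import re

index_pattern = re.compile(r'fangyuan/p\d+/')
detail_pattern = re.compile(r'https://bj\.zu\.anjuke\.com/fangyuan/\d{10}')

# Index pages: only the trailing page number changes
assert index_pattern.search('https://bj.zu.anjuke.com/fangyuan/p2/')
# Detail pages: identified by the 10-digit listing id, query string ignored
assert detail_pattern.search('https://bj.zu.anjuke.com/fangyuan/1203399585?from=Filter_1')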


Now use Rule and LinkExtractor:
rules = (
    Rule(LinkExtractor(allow=r'fangyuan/p\d+/'), follow=True),
    Rule(LinkExtractor(allow=r'https://bj.zu.anjuke.com/fangyuan/\d{10}'), callback='parse_item'),
)
The first rule, Rule(LinkExtractor(allow=r'fangyuan/p\d+/'), follow=True), uses allow to select which extracted links to request: it matches the listing-index pages of the form https://bj.zu.anjuke.com/fangyuan/p\d+/. Setting follow=True tells the spider to keep extracting links from those index pages as well, so starting from start_urls the pagination is crawled page after page.
The second rule, Rule(LinkExtractor(allow=r'https://bj.zu.anjuke.com/fangyuan/\d{10}'), callback='parse_item'), matches the individual listing pages found on each index page; callback names the method that processes each of those responses.
Next, define a method with the same name as the one given in callback:
def parse_item(self, response):
    # Most fields live in the "house-info-zufang cf" list on the detail page
    price = int(response.xpath("//ul[@class='house-info-zufang cf']/li[1]/span[1]/em/text()").extract_first())
    house_type = response.xpath("//ul[@class='house-info-zufang cf']/li[2]/span[2]/text()").extract_first()
    area = int(response.xpath("//ul[@class='house-info-zufang cf']/li[3]/span[2]/text()").extract_first().replace('平方米', ''))
    rent_type = response.xpath("//ul[@class='title-label cf']/li[1]/text()").extract_first()
    towards = response.xpath("//ul[@class='house-info-zufang cf']/li[4]/span[2]/text()").extract_first()
    floor = response.xpath("//ul[@class='house-info-zufang cf']/li[5]/span[2]/text()").extract_first()
    decoration = response.xpath("//ul[@class='house-info-zufang cf']/li[6]/span[2]/text()").extract_first()
    building_type = response.xpath("//ul[@class='house-info-zufang cf']/li[7]/span[2]/text()").extract_first()
    district = response.xpath("//ul[@class='house-info-zufang cf']/li[8]/a[2]/text()").extract_first()
    station = response.xpath("//ul[@class='house-info-zufang cf']/li[8]/a[3]/text()").extract_first()
    community = response.xpath("//ul[@class='house-info-zufang cf']/li[8]/a[1]/text()").extract_first()
    subway_line = response.xpath("//ul[@class='title-label cf']/li[3]/text()").extract_first()
The response passed to parse_item is the HTML of the listing page; since there is no JSON data, the fields have to be extracted from the HTML. XPath is used here to get price (rent), house_type (layout), area, rent_type (rental type), towards (orientation), floor, decoration, building_type, district, station (nearest subway station), community, and subway_line.
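Note that extract_first() returns None when an XPath matches nothing, so the int() and replace() calls above will raise on listings that omit a field. A minimal defensive variant, using a hypothetical safe_text helper (a sketch, not the original code), could look like:

def safe_text(response, xpath, default=''):
    # Return the first text node matched by the XPath, or a default when the field is missing
    value = response.xpath(xpath).extract_first()
    return value.strip() if value else default

# e.g. inside parse_item: fall back to 0 when the area node is absent
area_text = safe_text(response, "//ul[@class='house-info-zufang cf']/li[3]/span[2]/text()")
area = int(area_text.replace('平方米', '')) if area_text else 0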
Finally, the complete code of anjuke_zufang.py:
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from Anjuke_Spider.items import AnjukeSpiderItem


class AnjukeSpider(CrawlSpider):
    name = 'anjuke'
    start_urls = ['https://bj.zu.anjuke.com/']

    rules = (
        # Follow the paginated listing-index pages
        Rule(LinkExtractor(allow=r'fangyuan/p\d+/'), follow=True),
        # Parse every individual listing page
        Rule(LinkExtractor(allow=r'https://bj.zu.anjuke.com/fangyuan/\d{10}'), callback='parse_item'),
    )

    def parse_item(self, response):
        price = int(response.xpath("//ul[@class='house-info-zufang cf']/li[1]/span[1]/em/text()").extract_first())
        house_type = response.xpath("//ul[@class='house-info-zufang cf']/li[2]/span[2]/text()").extract_first()
        area = int(response.xpath("//ul[@class='house-info-zufang cf']/li[3]/span[2]/text()").extract_first().replace('平方米', ''))
        rent_type = response.xpath("//ul[@class='title-label cf']/li[1]/text()").extract_first()
        towards = response.xpath("//ul[@class='house-info-zufang cf']/li[4]/span[2]/text()").extract_first()
        floor = response.xpath("//ul[@class='house-info-zufang cf']/li[5]/span[2]/text()").extract_first()
        decoration = response.xpath("//ul[@class='house-info-zufang cf']/li[6]/span[2]/text()").extract_first()
        building_type = response.xpath("//ul[@class='house-info-zufang cf']/li[7]/span[2]/text()").extract_first()
        district = response.xpath("//ul[@class='house-info-zufang cf']/li[8]/a[2]/text()").extract_first()
        station = response.xpath("//ul[@class='house-info-zufang cf']/li[8]/a[3]/text()").extract_first()
        community = response.xpath("//ul[@class='house-info-zufang cf']/li[8]/a[1]/text()").extract_first()
        subway_line = response.xpath("//ul[@class='title-label cf']/li[3]/text()").extract_first()

        item = AnjukeSpiderItem()
        item['price'] = price
        item['house_type'] = house_type
        item['area'] = area
        item['rent_type'] = rent_type
        item['towards'] = towards
        item['floor'] = floor
        item['decoration'] = decoration
        item['building_type'] = building_type
        item['district'] = district
        item['station'] = station
        item['community'] = community
        item['subway_line'] = subway_line
        yield item
For a Scrapy project, besides the main spider code, the settings, items, pipelines, middlewares, etc. also need to be configured, and the run file must be written. The items file is modified as follows:
import scrapy


class AnjukeSpiderItem(scrapy.Item):
    price = scrapy.Field()
    house_type = scrapy.Field()
    area = scrapy.Field()
    rent_type = scrapy.Field()
    towards = scrapy.Field()
    floor = scrapy.Field()
    decoration = scrapy.Field()
    building_type = scrapy.Field()
    district = scrapy.Field()
    community = scrapy.Field()
    station = scrapy.Field()
    subway_line = scrapy.Field()
This project also needs request headers configured in settings, as follows:
DEFAULT_REQUEST_HEADERS = {
    'Connection': 'keep-alive',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.75 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'zh-CN,zh;q=0.9',
}
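Two other settings are usually relevant for a project like this; the pipeline path below is an assumption based on the default class name generated by scrapy startproject, not taken from the original settings.py:

ROBOTSTXT_OBEY = False    # otherwise robots.txt may block the listing pages
ITEM_PIPELINES = {
    'Anjuke_Spider.pipelines.AnjukeSpiderPipeline': 300,   # enable the storage pipeline
}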
The run file is as follows:
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from Anjuke_Spider.spiders.anjuke_zufang import AnjukeSpider

# Load the settings from settings.py
settings = get_project_settings()
process = CrawlerProcess(settings=settings)
# More spiders can be added with additional process.crawl() calls
process.crawl(AnjukeSpider)
# Start crawling; this blocks until the crawl finishes
process.start()
The results then need to be written out in pipelines. This project wrote them to a MySQL database, which is not covered in detail here.
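As a placeholder for the MySQL pipeline (which the original does not show), a minimal sketch that appends each item to a JSON-lines file could look like this; the class name matches the hypothetical ITEM_PIPELINES entry above:

import json

class AnjukeSpiderPipeline(object):
    def open_spider(self, spider):
        # Open the output file once when the spider starts (file name is arbitrary)
        self.file = open('anjuke_zufang.jl', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # One JSON object per line, keeping Chinese characters readable
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        self.file.close()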

One thing to note: the site's anti-crawling measures can be triggered during the crawl, but setting a randomized request interval lets the crawl continue. In settings:
DOWNLOAD_DELAY = 1.5
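Scrapy randomizes this delay by default: with RANDOMIZE_DOWNLOAD_DELAY enabled, each request waits between 0.5 and 1.5 times DOWNLOAD_DELAY, which provides the random interval mentioned above.

RANDOMIZE_DOWNLOAD_DELAY = True   # default; actual wait is 0.5-1.5 x DOWNLOAD_DELAY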