I've been learning data analysis recently, so I tried scraping job postings from these two sites to use as analysis material. I hit a lot of pitfalls along the way, so here is a record of them.
For the framework I went with Scrapy, which is fairly simple, and created two files, one for each site.

Let's start with BOSS直聘 (Boss Zhipin):
I searched for plenty of BOSS直聘 examples online and assumed it would be easy: just send a fake login header and you're in... but once I got started, it turned out to be nothing of the sort.
As usual, first define the data to collect in items.py:
import scrapy

class PositionViewItem(scrapy.Item):
    # define the fields for your item here like:
    name: scrapy.Field = scrapy.Field()             # job title
    salary: scrapy.Field = scrapy.Field()           # salary
    education: scrapy.Field = scrapy.Field()        # education requirement
    experience: scrapy.Field = scrapy.Field()       # experience requirement
    jobjd: scrapy.Field = scrapy.Field()            # job ID
    district: scrapy.Field = scrapy.Field()         # district
    category: scrapy.Field = scrapy.Field()         # industry category
    scale: scrapy.Field = scrapy.Field()            # company size
    corporation: scrapy.Field = scrapy.Field()      # company name
    url: scrapy.Field = scrapy.Field()              # job posting URL
    createtime: scrapy.Field = scrapy.Field()       # posting time
    posistiondemand: scrapy.Field = scrapy.Field()  # job responsibilities
    cortype: scrapy.Field = scrapy.Field()          # company type
The above is the Item definition: work out the values you need; for now each one is just a plain scrapy.Field(). Next come the spider's basic attributes:
import scrapy
from typing import Dict
from urllib.parse import urljoin
from scrapy.http import Request

class DASpider(scrapy.Spider):  # class name is illustrative; the attributes below are from the post
    name: str = 'DA'
    # start URL: the BOSS直聘 search results page, querying "數(shù)據(jù)" (data) jobs nationwide
    url: str = 'https://www.zhipin.com/c100010000/?query=%E6%95%B0%E6%8D%AE&page=10'
    # cookies captured from a browser session
    cookies: Dict = {
        "__zp_stoken__": "bf79ElaZ4z7IK5JruWAX5j256l7CJf3k7Ag2A9mrsSPN%2FnLgjChK0LguCrB%2FtIEFMKdnysNhr4ilqIicjeHkCsCpBQ%3D%3D"
    }
    # request headers mimicking a real browser visit
    headers: Dict = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0',
        'Referer': 'https://www.zhipin.com/web/common/security-check.html?seed=6gkgYHovIokVntQcwXUH9KW3%2FbEZsqfeaoCctIp1rE8%3D&name=f2d51032&ts=1571623520634&callbackUrl=%2Fjob_detail%2F%3Fquery%3D%25E6%2595%25B0%25E6%258D%25AE%25E5%2588%2586%25E6%259E%2590%26city%3D100010000%26industry%3D%26position%3D&srcReferer=https%3A%2F%2Fwww.zhipin.com%2Fjob_detail%2F%3Fquery%3D%25E6%2595%25B0%25E6%258D%25AE%25E5%2588%2586%25E6%259E%2590%26city%3D100010000%26industry%3D%26position%3D'
    }
With the common parameters in place, define a start_requests method to issue the spider's initial request:
    def start_requests(self) -> Request:
        # yield a Request for the start URL with the custom headers and cookies;
        # with no callback given, Scrapy routes the response to the default callback
        yield Request(self.url, headers=self.headers, cookies=self.cookies)
Scrapy's default callback is parse, so define a parse method to receive the response and pick it apart with XPath:
    def parse(self, response) -> None:
        if response.status == 200:
            PositionInfos = response.selector.xpath(r'//div[@class="job-primary"]')
            for positioninfo in PositionInfos:
                pvi = PositionViewItem()
                pvi['name'] = ''.join(positioninfo.xpath(r'div[@class="info-primary"]/h3[@class="name"]/a/div[@class="job-title"]/text()').extract())
                pvi['salary'] = ''.join(positioninfo.xpath(r'div[@class="info-primary"]/h3[@class="name"]/a/span[@class="red"]/text()').extract())
                # the <p> under info-primary holds district / experience / education as separate text nodes
                pvi['education'] = ''.join(positioninfo.xpath(r'div[@class="info-primary"]/p/text()').extract()[2])
                pvi['experience'] = ''.join(positioninfo.xpath(r'div[@class="info-primary"]/p/text()').extract()[1])
                pvi['district'] = ''.join(positioninfo.xpath(r'div[@class="info-primary"]/p/text()').extract()[0])
                pvi['corporation'] = ''.join(positioninfo.xpath(r'div[@class="info-company"]/div[@class="company-text"]/h3[@class="name"]/a/text()').extract())
                pvi['category'] = ''.join(positioninfo.xpath(r'div[@class="info-company"]/div[@class="company-text"]/p/text()').extract()[0])
                # the company <p> sometimes has only two text nodes, so fall back to index 1
                try:
                    pvi['scale'] = ''.join(positioninfo.xpath(r'div[@class="info-company"]/div[@class="company-text"]/p/text()').extract()[2])
                except IndexError:
                    pvi['scale'] = ''.join(positioninfo.xpath(r'div[@class="info-company"]/div[@class="company-text"]/p/text()').extract()[1])
                pvi['url'] = ''.join(positioninfo.xpath(r'div[@class="info-primary"]/h3[@class="name"]/a/@href').extract())
                yield pvi
            # follow the next-page link, re-sending the same headers and cookies
            nexturl = response.selector.xpath(r'//a[@ka="page-next"]/@href').extract()
            if nexturl:
                nexturl = urljoin(self.url, ''.join(nexturl))
                print(nexturl)
                yield Request(nexturl, headers=self.headers, cookies=self.cookies, callback=self.parse)
Appending .extract() to an XPath selector returns a list of every element the selector matched. If nothing matches you get an empty list rather than an error; it is the subsequent positional indexing ([0], [1], [2]) that raises IndexError, which is why the scale lookup above is wrapped in a try/except.
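To make that failure mode concrete, here is a minimal stand-alone sketch (plain Python, no Scrapy needed); the sample strings stand in for what .extract() would return:

```python
# .extract() returns a list of matched strings, or an empty list when
# nothing matches. These stand-ins mimic the two cases.
matched = ['北京 海淀區(qū)', '3-5年', '本科']   # selector found three text nodes
missing = []                                  # selector matched nothing

# ''.join() is safe either way:
assert ''.join(missing) == ''

# ...but positional indexing is not, hence the try/except around 'scale':
try:
    first = missing[0]
except IndexError:
    first = ''
assert first == ''
```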
yield pvi hands the populated Item over to the pipelines, where the scraped data can be processed further.
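For completeness, here is a minimal pipelines.py sketch of the receiving side. This is hypothetical, not from the original project; the class name and the whitespace-stripping logic are purely illustrative:

```python
class PositionViewPipeline:
    """Receives every item the spider yields, one call per item."""

    def __init__(self):
        self.items = []

    def process_item(self, item, spider):
        # Scrapy items behave like dicts; strip stray whitespace and keep a copy
        cleaned = {k: v.strip() if isinstance(v, str) else v
                   for k, v in dict(item).items()}
        self.items.append(cleaned)
        return item  # returning the item lets later pipelines see it too
```

A pipeline like this would be enabled through the ITEM_PIPELINES setting in settings.py.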
After grabbing the next-page link with nexturl = response.selector.xpath(r'//a[@ka="page-next"]/@href').extract(), merge it with the source URL using urljoin from urllib.parse, because the extracted href is not a complete URL but a path like
/c101010100/?query=%E6%95%B0%E6%8D%AE%E5%88%86%E6%9E%90&page=2. The merge rule works like this:
with url = 'http://ip/' and path = 'api/user/login', urljoin(url, path) produces 'http://ip/api/user/login'.
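That merge rule is easy to check directly; this sketch uses the same kind of relative href the site returns:

```python
from urllib.parse import urljoin

base = 'https://www.zhipin.com/c100010000/?query=%E6%95%B0%E6%8D%AE&page=10'
next_href = '/c101010100/?query=%E6%95%B0%E6%8D%AE%E5%88%86%E6%9E%90&page=2'

# urljoin keeps the scheme and host from base and swaps in the new path/query,
# because next_href starts with '/' (an absolute path on the same host)
full = urljoin(base, next_href)
# full == 'https://www.zhipin.com/c101010100/?query=%E6%95%B0%E6%8D%AE%E5%88%86%E6%9E%90&page=2'
```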
I thought that would be the end of it, but running the spider with scrapy crawl DA produced no data: every request got 302-redirected to a security-check page.
Opening Fiddler to inspect the request flow:

You can see the spider faithfully reproduces the whole flow: it requests the URL, gets redirected to the security-check page, then bounces back to the result page. That looks fine, but a closer look shows that the __zp_stoken__ value in the cookies has changed:



So the picture is clear: after the security-check step the server writes back a fresh token and validates subsequent requests against it. It appears the token is encrypted and written back by a piece of JavaScript. Someone on Zhihu has published a way to decrypt it, but I don't know front-end work well enough, so I gave up...
Write-up here: https://zhuanlan.zhihu.com/p/83235220
The token can only be obtained by refreshing the page manually, and it usually survives just a handful of requests before it expires and has to be fetched again. Then again, even manual crawling only gets about 10 pages; beyond that nothing is shown without logging in, so it hardly matters.
A later attempt to get around it by simulating the browser with Selenium also failed.
All in all, not much of a success; I wouldn't recommend this approach for now...