Continuing from the previous chapter, we come to level 3, at http://www.heibanke.com/lesson/crawler_ex02/. The page says login is required, so register an account and log in. After logging in, the page looks like this:

It looks much like level 2, except for one extra line: "two more layers of protection than the previous level." So this is level 2 plus two extra restrictions. Ignoring that for now, I changed the url in the level-2 crawler code to http://www.heibanke.com/lesson/crawler_ex02/ and ran it, which produced a 403 error:
urllib.error.HTTPError: HTTP Error 403: FORBIDDEN
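For reference, urllib surfaces this as a `urllib.error.HTTPError`, whose `code` and `reason` attributes carry the status. A small sketch of catching it (the `fetch` helper name is mine, not from the original code):

```python
from urllib import request
from urllib.error import HTTPError


def fetch(url, data=None, headers=None):
    """Open url, surfacing the HTTP status code on failure."""
    req = request.Request(url, data, headers=headers or {})
    try:
        return request.urlopen(req).read().decode('utf-8')
    except HTTPError as e:
        # e.code is the numeric status (403 here), e.reason the server's message
        print('request failed: HTTP %d %s' % (e.code, e.reason))
        raise
```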
This looks like a login-cookie check. Press F12 to open the developer tools and look at the Network tab:

Since the guess is login verification, add the Cookie header and try again (note the header name is `Referer`, and the headers must actually be passed to the Request):

```python
header = {
    'User-Agent': r'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) '
                  r'Chrome/45.0.2454.85 Safari/537.36 115Browser/6.0.3',
    'Connection': 'keep-alive',
    'Cookie': r'Hm_lvt_74e694103cf02b31b28db0a346da0b6b=1514366315; csrftoken=VDdjKqyv39hMDXMaUW5SMkDAGRF1y85m; sessionid=0fd2tziqn8jhuzuxl5lramgd0swfb2wm; Hm_lpvt_74e694103cf02b31b28db0a346da0b6b=1514427240',
    'Referer': 'http://www.heibanke.com/lesson/crawler_ex02/'
}
req = request.Request(url, data, headers=header)
```
Still 403. Comparing the request parameters carefully, the value of csrfmiddlewaretoken had changed, so I copied the current token from the page into the code and ran it again. Success; the result:
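Rather than copying the token by hand every session, it could also be parsed out of the page itself: Django renders csrfmiddlewaretoken as a hidden input inside the form. A sketch using BeautifulSoup (the sample HTML fragment and token value below are made up for illustration):

```python
from bs4 import BeautifulSoup


def extract_csrf_token(html):
    """Return the csrfmiddlewaretoken value from a page, or None if absent."""
    soup = BeautifulSoup(html, 'html.parser')
    tag = soup.find('input', attrs={'name': 'csrfmiddlewaretoken'})
    return tag['value'] if tag else None


# sample form fragment similar to the one on the page (token value is fake)
sample = '<form><input type="hidden" name="csrfmiddlewaretoken" value="abc123"></form>'
print(extract_csrf_token(sample))  # → abc123
```

Fetching the page once with a GET, extracting the token this way, and reusing it in the POST body would remove the manual copy step.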

Screenshot of part of the results
Back on the web page, enter any nickname and the password obtained above, 13. Done.

Full code:
```python
from urllib import request
from urllib import parse
from bs4 import BeautifulSoup


def get_page(url, params):
    print('get url %s' % url)
    data = parse.urlencode(params).encode('utf-8')
    header = {
        'User-Agent': r'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) '
                      r'Chrome/45.0.2454.85 Safari/537.36 115Browser/6.0.3',
        'Connection': 'keep-alive',
        'Cookie': r'Hm_lvt_74e694103cf02b31b28db0a346da0b6b=1514366315; csrftoken=1yFgXVZtw2rACmTYDGABYKs9VWLWqbeH; sessionid=m4paft1uuvhm3thrwvdgwut2rvu8uz8d; Hm_lpvt_74e694103cf02b31b28db0a346da0b6b=1514428404',
        'Referer': 'http://www.heibanke.com/lesson/crawler_ex02/'
    }
    req = request.Request(url, data, headers=header)
    page = request.urlopen(req).read()
    page = page.decode('utf-8')
    return page


count = 0
url = "http://www.heibanke.com/lesson/crawler_ex02/"
token = '1yFgXVZtw2rACmTYDGABYKs9VWLWqbeH'
username = 'pkxutao'
password = -1
# build the POST parameters
data = {
    'csrfmiddlewaretoken': token,
    'username': username,
    'password': password
}
# the server's "wrong password, please try again" message, used as the loop condition
result = '您輸入的密碼錯誤, 請重新輸入'
while result == '您輸入的密碼錯誤, 請重新輸入':
    count += 1
    password += 1
    data['password'] = password
    print('attempt %d, password: %d' % (count, password))
    result = get_page(url, data)
    soup = BeautifulSoup(result, "html.parser")
    # the hint text lives in the first h3 element
    result = soup.find_all("h3")[0].text
print('success, username: %s, password: %d' % (username, password))
```
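As a variation, the hard-coded Cookie header could be replaced by an opener with a cookie jar, which stores csrftoken and sessionid from the server's Set-Cookie responses and resends them automatically. This is a sketch of the idea, not the code used above:

```python
from http import cookiejar
from urllib import request

# An opener backed by a CookieJar keeps cookies between requests,
# so the Cookie header no longer has to be pasted in by hand.
jar = cookiejar.CookieJar()
opener = request.build_opener(request.HTTPCookieProcessor(jar))

url = 'http://www.heibanke.com/lesson/crawler_ex02/'
# A first opener.open(url) would fill the jar from Set-Cookie headers;
# later POSTs through the same opener send those cookies automatically.
req = request.Request(url, headers={'Referer': url})
```

With this approach only the csrfmiddlewaretoken still has to be refreshed per session, since it lives in the form body rather than in a cookie alone.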
Summary

This level adds two layers of protection on top of the previous one, and since logging in is required, it was easy to guess that one of them is cookie verification. After adding the Cookie I still got 403 on several tries, so I kept hunting for the second layer. By capturing packets with Fiddler and comparing the browser's request against the crawler's, I found that apart from a few extra header fields, the only difference was the csrfmiddlewaretoken body parameter. Once the token matched, the level was cleared. The lesson: pay attention to detail and keep testing.