According to the Scrapy documentation on avoiding getting banned (http://doc.scrapy.org/en/master/topics/practices.html#avoiding-getting-banned), the main strategies for keeping a Scrapy crawler from being blocked are:
- Rotate the user agent dynamically
- Disable cookies (or enable them selectively)
- Add a download delay
- Use the Google cache (not covered here)
- Use a pool of IP addresses (the Tor project, VPNs, and proxy IPs)
- Use a third-party platform such as Crawlera to shield the crawler (not covered here)
Dynamically setting the user agent
# -*- coding:utf-8 -*-
import random

def get_headers():
    """Return a headers dict with a User-Agent picked at random from the pool."""
    useragent_list = [
        'Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1',
        'Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.8.131 Version/11.11',
        'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Maxthon/4.9.2.1000 Chrome/39.0.2146.0 Safari/537.36',
        'Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/532.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/532.3',
        'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5',
        'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.81 Safari/537.36',
    ]
    useragent = random.choice(useragent_list)
    header = {'User-Agent': useragent}
    return header
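As a quick self-check, the rotation logic can be exercised on a trimmed pool (the three entries below are copied from the list above); each call draws one entry at random, so successive requests present different browser identities:

```python
import random

# Trimmed pool for illustration; in practice use the full list above.
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1',
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36',
    'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.81 Safari/537.36',
]

def get_headers():
    # Each call picks one User-Agent at random from the pool.
    return {'User-Agent': random.choice(USER_AGENTS)}

headers = get_headers()
```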
Disabling/enabling cookies

Cookies (from Wikipedia): because HTTP is a stateless protocol, the server does not know what the user did on the previous request, which is a serious obstacle to interactive web applications. In a typical online-shopping scenario, a user browses several pages and puts a box of cookies and two drinks in the cart. At checkout, because HTTP is stateless, the server has no way of knowing what the user actually bought without some extra mechanism. Cookies are one such "extra mechanism" for working around HTTP's statelessness: the server can set and read the information a cookie carries, and thereby maintain state across the user's session with the server.
Another typical use of cookies is logging in to a website. The site asks for a username and password and may offer a "keep me signed in" checkbox. If it is checked, then on the next visit to the same site the user finds themselves already logged in without entering anything. This works because on the first login the server sent a cookie containing the login credential (some encrypted form of the username and password) to the user's disk. On the second visit, if that cookie has not yet expired, the browser sends it back, the server validates the credential, and the user is logged in without retyping the username and password.
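The round-trip described above can be sketched with nothing but the standard library; the cookie name and value here are made up for illustration:

```python
from http.cookies import SimpleCookie

# 1. The server responds with a Set-Cookie header carrying a session id.
set_cookie_header = 'session_id=abc123; Path=/; HttpOnly'
jar = SimpleCookie()
jar.load(set_cookie_header)

# 2. On the next request the browser echoes the stored value back in a
#    Cookie header, which is what lets the server recognize the same user.
cookie_header = '; '.join(
    '%s=%s' % (key, morsel.value) for key, morsel in jar.items()
)
```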
Using selenium + PhantomJS to simulate a browser login to Lagou and capture the cookies:
# -*- coding:utf-8 -*-
import sys
import time
import random
from selenium import webdriver

reload(sys)                          # Python 2 only: force UTF-8 as the
sys.setdefaultencoding('utf-8')      # default string encoding

def random_sleep_time():
    # Pause for a random number of seconds so the actions look human.
    time.sleep(random.randint(1, 10))

def get_headers_with_cookie():
    # Download PhantomJS and unpack it to this path; a raw string keeps
    # the Windows backslashes from being treated as escapes.
    driver = webdriver.PhantomJS(executable_path=r"D:\phantomjs-2.1.1-windows\bin\phantomjs.exe")
    url_login = 'https://passport.lagou.com/login/login.html'
    driver.get(url_login)
    driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[1]/input').clear()
    driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[1]/input').send_keys('username')  # replace with a valid account
    random_sleep_time()
    driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[2]/input').clear()
    driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[2]/input').send_keys('password')  # replace with a valid password
    random_sleep_time()
    driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[5]/input').click()
    random_sleep_time()
    cookies = "; ".join([item["name"] + "=" + item["value"] for item in driver.get_cookies()])
    headers = get_headers()  # get_headers() as defined in the previous section
    headers['cookie'] = cookies.encode('utf-8')
    return headers
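The Cookie-header assembly at the end of get_headers_with_cookie() is worth seeing on its own: Selenium's driver.get_cookies() returns a list of dicts, and the join turns them into the "name=value; name=value" format browsers send. The values below are made up for illustration:

```python
# Sample of what driver.get_cookies() returns (values are fabricated).
sample_cookies = [
    {'name': 'JSESSIONID', 'value': 'abc123', 'domain': '.lagou.com'},
    {'name': 'user_trace_token', 'value': 'xyz789', 'domain': '.lagou.com'},
]

# Join each name=value pair with "; ", the Cookie request-header format.
cookie_header = '; '.join(
    item['name'] + '=' + item['value'] for item in sample_cookies
)
```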
XPath parsing: the XPaths below were obtained with the browser's Copy XPath feature; see 向右奔跑-009 - Using XPath to parse web pages for the technique.
driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[1]/input')
driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[2]/input')
driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[5]/input')

Disabling cookies in Scrapy
Set in settings.py:
COOKIES_ENABLED = False
Setting up a proxy pool: PROXIES
Set in settings.py:
PROXIES = [
    {'ip_port': '111.11.228.75:80', 'user_pass': ''},
    {'ip_port': '120.198.243.22:80', 'user_pass': ''},
    {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
    {'ip_port': '101.71.27.120:80', 'user_pass': ''},
    {'ip_port': '122.96.59.104:80', 'user_pass': ''},
    {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
]
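A middleware consumes this list by drawing one entry at random and turning its ip_port into the URL form that Scrapy expects in request.meta['proxy']. A minimal sketch of that selection step, using two entries from the list above:

```python
import random

# Two entries from the PROXIES list above.
PROXIES = [
    {'ip_port': '111.11.228.75:80', 'user_pass': ''},
    {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
]

# Pick one proxy at random and build the scheme-prefixed URL
# that goes into request.meta['proxy'].
proxy = random.choice(PROXIES)
proxy_url = 'http://%s' % proxy['ip_port']
```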
Setting the download delay
Set in settings.py:
DOWNLOAD_DELAY = 3
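Note that with RANDOMIZE_DOWNLOAD_DELAY enabled (Scrapy's default), the actual wait between requests is a uniform random value between 0.5x and 1.5x DOWNLOAD_DELAY, which makes the request rhythm less machine-like. A sketch of that computation:

```python
import random

DOWNLOAD_DELAY = 3  # seconds, as configured above

def effective_delay(base=DOWNLOAD_DELAY):
    # Scrapy multiplies the base delay by a random factor in [0.5, 1.5].
    return random.uniform(0.5 * base, 1.5 * base)

delay = effective_delay()
```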
Creating the middleware (middlewares.py)
import random
import base64
from settings import PROXIES  # the PROXIES list defined in settings.py

class RandomUserAgent(object):
    """Randomly rotate user agents based on a list of predefined ones."""
    def __init__(self, agents):
        self.agents = agents

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings.getlist('USER_AGENTS'))

    def process_request(self, request, spider):
        request.headers.setdefault('User-Agent', random.choice(self.agents))

class ProxyMiddleware(object):
    def process_request(self, request, spider):
        proxy = random.choice(PROXIES)
        request.meta['proxy'] = "http://%s" % proxy['ip_port']
        # Truthiness check: an empty user_pass string means no authentication
        # (the original "is not None" test was always true for '').
        if proxy['user_pass']:
            # b64encode, unlike the deprecated encodestring, adds no trailing newline.
            encoded_user_pass = base64.b64encode(proxy['user_pass'])
            request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass
            print "**************ProxyMiddleware with auth************ " + proxy['ip_port']
        else:
            print "**************ProxyMiddleware no auth************ " + proxy['ip_port']
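The Proxy-Authorization value that ProxyMiddleware builds is standard HTTP Basic authentication: base64 over a "user:pass" string. Shown here with a made-up credential, using Python 3's base64.b64encode:

```python
import base64

# Hypothetical credential in the "user:pass" format a PROXIES entry
# would carry in its 'user_pass' field.
user_pass = 'user:pass'

# Basic auth token: base64 of the raw "user:pass" bytes.
token = base64.b64encode(user_pass.encode('ascii')).decode('ascii')
auth_header = 'Basic ' + token
```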
Registering the downloader middlewares
Set in settings.py. For process_request, middlewares with lower numbers run first, so RandomUserAgent (1) fires before the proxy middlewares (100, 110):
DOWNLOADER_MIDDLEWARES = {
    # 'myproject.middlewares.MyCustomDownloaderMiddleware': 543,
    'myproject.middlewares.RandomUserAgent': 1,
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 110,
    # on Scrapy 1.0+ use the new import path instead:
    # 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
    'myproject.middlewares.ProxyMiddleware': 100,
}