The official scrapy-redis documentation is fairly terse and says nothing about how it works internally, so to fully understand how the distributed crawler operates you have to read the scrapy-redis source code.
Official repository: https://github.com/rolando/scrapy-redis
The scrapy-redis project is built mainly on the redis and scrapy libraries; the project itself implements relatively little. It works like glue, binding those two pieces together. Below we look at what each scrapy-redis source file implements, and finally at how they add up to a distributed crawler system.
I. scrapy-redis architecture
Scrapy-redis provides the following four components.
Having four components means these four modules each need corresponding changes:
- Scheduler
- Duplication Filter
- Item Pipeline
- Base Spider
The four components are introduced one by one below.
1. Scheduler:
Scrapy reworks Python's collections.deque (double-ended queue) into its own Scrapy queue (https://github.com/scrapy/queuelib/blob/master/queuelib/queue.py), but multiple Scrapy spiders cannot share one pending-request Scrapy queue, i.e. Scrapy by itself does not support distributed crawling. scrapy-redis solves this by swapping the Scrapy queue for a redis database (used as a redis queue): with the pending requests stored on one shared redis-server, multiple spiders can all read from the same database.
The part of Scrapy that deals directly with the pending-request queue is the Scheduler: it enqueues new requests (into the Scrapy queue) and pops the next request to crawl (out of the Scrapy queue). It organizes the pending queue by priority into a dict-like structure, for example:
{
    priority 0: queue 0,
    priority 1: queue 1,
    priority 2: queue 2,
}
Which queue a request enters is then decided by the request's priority, and on dequeue the queues with smaller priority values are served first. To manage this fairly sophisticated dict of queues, the Scheduler has to provide a whole set of methods. The stock Scheduler can no longer do the job in a distributed setup, so the scheduler component from scrapy-redis is used instead.
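As a rough in-memory sketch of that idea (not Scrapy's actual code, just an illustration of a priority-keyed dict of queues):

import heapq

class PriorityScheduler:
    def __init__(self):
        self.queues = {}   # priority -> list of pending requests
        self.heap = []     # min-heap of priorities currently in use

    def enqueue(self, request, priority=0):
        if priority not in self.queues:
            self.queues[priority] = []
            heapq.heappush(self.heap, priority)
        self.queues[priority].append(request)

    def dequeue(self):
        # Smaller priority values leave the structure first, FIFO within
        # each priority level.
        while self.heap:
            p = self.heap[0]
            if self.queues[p]:
                return self.queues[p].pop(0)
            heapq.heappop(self.heap)   # discard the drained queue
            del self.queues[p]
        return None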
2. Duplication Filter:
Scrapy implements request deduplication with a Python set: the fingerprint of every request already sent is placed in a set, and each new request's fingerprint is checked against it. If the fingerprint is already in the set, the request has been sent before; if not, processing continues. The core dedup check is implemented like this:
def request_seen(self, request):
    # self.fingerprints is the in-memory set of seen request fingerprints
    fp = self.request_fingerprint(request)
    # The core dedup check
    if fp in self.fingerprints:
        return True
    self.fingerprints.add(fp)
    if self.file:
        # optionally persist fingerprints to disk (self.file is set when a
        # job directory is configured; `os` is imported at module level)
        self.file.write(fp + os.linesep)
In scrapy-redis, deduplication is done by the Duplication Filter component, which exploits the no-duplicates property of a redis set to implement the filter neatly. The scrapy-redis scheduler receives requests from the engine, checks each request's fingerprint against the redis set for duplicates, and pushes only the non-duplicate requests onto the redis request queue.
When the engine asks for a request (on the spider's behalf), the scheduler pops a request out of the redis request queue according to priority and returns it to the engine, which hands it to the spider for processing.
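At the redis level the dedup check boils down to the return value of SADD; a minimal standalone sketch (assuming a redis server on localhost with the default port, and a hypothetical key name):

import redis

r = redis.StrictRedis(host='localhost', port=6379)

def seen(fp, key='demo:dupefilter'):
    # sadd returns 1 if the member was newly added, 0 if it already existed
    return r.sadd(key, fp) == 0

print(seen('abc123'))   # False: first time this fingerprint appears
print(seen('abc123'))   # True: duplicate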
3. Item Pipeline:
The engine passes the scraped items (returned by the spider) to the Item Pipeline; the scrapy-redis Item Pipeline stores them in a redis items queue.
With this modified Item Pipeline it is easy to pull items out of the items queue by key, which makes a cluster of item-processing workers possible.
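A separate worker process can then drain the items queue independently of the crawl, e.g. (a sketch; '<spider>:items' is scrapy-redis's default key format and 'myspider' is a hypothetical spider name):

import json
import redis

r = redis.StrictRedis(host='localhost', port=6379)

while True:
    # blpop blocks until an item arrives and returns a (key, value) pair
    _, data = r.blpop('myspider:items')
    item = json.loads(data)   # items are JSON-serialized by default
    print(item)               # hand off to a database, search index, etc.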
4. Base Spider:
The stock scrapy Spider class is no longer used directly; the rewritten RedisSpider inherits from both Spider and RedisMixin, where RedisMixin is the class that reads urls from redis.
When a spider that inherits from RedisSpider starts up, it calls the setup_redis function, which connects to the redis database and then registers two signals:
- One for when the spider is idle: the spider_idle handler runs, calls schedule_next_request to keep the spider alive, and raises the DontCloseSpider exception.
- One for when an item is scraped: the item_scraped handler runs and calls schedule_next_request to fetch the next request.
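A minimal spider built on this class looks roughly as follows (the spider name, redis key, and parsing logic are placeholders):

from scrapy_redis.spiders import RedisSpider

class DemoSpider(RedisSpider):
    name = 'demo'
    # Instead of start_urls, the spider blocks on this redis key for input
    redis_key = 'demo:start_urls'

    def parse(self, response):
        yield {'url': response.url, 'title': response.css('title::text').get()}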
Summary of how the scrapy-redis framework executes:
To sum up the overall idea of scrapy-redis: by rewriting the scheduler and spider classes, this set of components implements scheduling, spider start-up, and the interaction with redis.
By providing new dupefilter and queue classes, both deduplication and the scheduling container talk to redis. Since the crawler processes on every host access the same redis database, scheduling and deduplication are managed centrally, which is what makes the crawler distributed.
When a spider is initialized, a matching scheduler object is initialized with it; the scheduler reads the settings and configures its scheduling container (queue) and its deduplication tool (dupefilter).
Whenever a spider yields a request, the scrapy engine hands the request to that spider's scheduler object for scheduling. The scheduler checks it against redis for duplicates, and if it is new, adds it to the scheduler queue in redis. When the scheduling conditions are met, the scheduler pops a request from the redis scheduler queue and sends it to the spider to crawl.
When the spider has crawled every url currently available and the scheduler finds that the spider's redis scheduler queue is empty, the spider_idle signal fires; on receiving it, the spider connects directly to redis, reads the start_urls pool, pulls in a new batch of entry urls, and then repeats the cycle above.
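Seeding that start_urls pool is done from outside the crawler; with the hypothetical key from the spider sketch above:

import redis

r = redis.StrictRedis(host='localhost', port=6379)
# Every idle worker blocks on this list; pushing a url wakes one of them up
r.lpush('demo:start_urls', 'https://example.com/')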
II. Source code walkthrough
The source below already carries comments throughout; I will only explain the important parts.
1. connection.py
This file handles the connection to redis. It is used by most of the other modules and is the most important file.
import six

from scrapy.utils.misc import load_object

from . import defaults


# Shortcut maps 'setting name' -> 'parameter name'.
SETTINGS_PARAMS_MAP = {
    'REDIS_URL': 'url',
    'REDIS_HOST': 'host',
    'REDIS_PORT': 'port',
    'REDIS_ENCODING': 'encoding',
}


def get_redis_from_settings(settings):
    """Returns a redis client instance from given Scrapy settings object.

    This function uses ``get_client`` to instantiate the client and uses
    ``defaults.REDIS_PARAMS`` global as defaults values for the parameters. You
    can override them using the ``REDIS_PARAMS`` setting.

    Parameters
    ----------
    settings : Settings
        A scrapy settings object. See the supported settings below.

    Returns
    -------
    server
        Redis client instance.

    Other Parameters
    ----------------
    REDIS_URL : str, optional
        Server connection URL.
    REDIS_HOST : str, optional
        Server host.
    REDIS_PORT : str, optional
        Server port.
    REDIS_ENCODING : str, optional
        Data encoding.
    REDIS_PARAMS : dict, optional
        Additional client parameters.
    """
    # Shallow copy, so that mutating params cannot change the
    # defaults.REDIS_PARAMS global.
    params = defaults.REDIS_PARAMS.copy()
    # Merge the REDIS_PARAMS setting into params.
    params.update(settings.getdict('REDIS_PARAMS'))
    # XXX: Deprecate REDIS_* settings.
    # Walk the map and pick up each individual REDIS_* setting.
    for source, dest in SETTINGS_PARAMS_MAP.items():
        # A value given in the settings takes priority.
        val = settings.get(source)
        # If the setting is absent, leave params unchanged.
        if val:
            params[dest] = val
    # Allow ``redis_cls`` to be a path to a class.
    if isinstance(params.get('redis_cls'), six.string_types):
        params['redis_cls'] = load_object(params['redis_cls'])
    return get_redis(**params)


# Backwards compatible alias.
from_settings = get_redis_from_settings


def get_redis(**kwargs):
    """Returns a redis client instance.

    Parameters
    ----------
    redis_cls : class, optional
        Defaults to ``redis.StrictRedis``.
    url : str, optional
        If given, ``redis_cls.from_url`` is used to instantiate the class.
    **kwargs
        Extra parameters to be passed to the ``redis_cls`` class.

    Returns
    -------
    server
        Redis client instance.
    """
    # Without an explicit redis_cls, fall back to the default client class.
    redis_cls = kwargs.pop('redis_cls', defaults.REDIS_CLS)
    # If a url was supplied, build the client from it...
    url = kwargs.pop('url', None)
    if url:
        return redis_cls.from_url(url, **kwargs)
    else:
        # ...otherwise (the common path) build it from the keyword arguments.
        return redis_cls(**kwargs)
connection.py provides one very important entry point, from_settings = get_redis_from_settings. It pulls its default parameters from defaults.py, and the pipelines, queue, and scheduler modules all call it to obtain their redis connection.
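As a quick usage sketch (the host and port values are illustrative):

from scrapy.settings import Settings
from scrapy_redis import connection

settings = Settings({'REDIS_HOST': 'localhost', 'REDIS_PORT': 6379})
server = connection.from_settings(settings)  # returns a redis.StrictRedis
server.ping()                                # check the connection is alive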
2. defaults.py
This file holds the default parameters:
import redis

# For standalone use.
# Key under which dedup fingerprints are stored.
DUPEFILTER_KEY = 'dupefilter:%(timestamp)s'

# Key under which scraped items are stored; %(spider)s is the spider's name.
PIPELINE_KEY = '%(spider)s:items'

# Redis client class used for connecting to redis.
REDIS_CLS = redis.StrictRedis

# Character encoding.
REDIS_ENCODING = 'utf-8'

# Sane connection defaults.
REDIS_PARAMS = {
    'socket_timeout': 30,
    'socket_connect_timeout': 30,
    'retry_on_timeout': True,
    'encoding': REDIS_ENCODING,
}

# Key of the queue that stores the pending requests.
SCHEDULER_QUEUE_KEY = '%(spider)s:requests'
# Queue class; determines the order in which requests enter and leave.
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.PriorityQueue'
# Dedup key: where the request fingerprints are stored.
SCHEDULER_DUPEFILTER_KEY = '%(spider)s:dupefilter'
# Class used to generate and check fingerprints.
SCHEDULER_DUPEFILTER_CLASS = 'scrapy_redis.dupefilter.RFPDupeFilter'
# Key holding the start urls.
START_URLS_KEY = '%(name)s:start_urls'
# Whether the start urls live in a set (default is a list).
START_URLS_AS_SET = False
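All of these defaults can be overridden from a project's settings.py; a typical distributed configuration looks roughly like this (values are illustrative):

# settings.py
SCHEDULER = 'scrapy_redis.scheduler.Scheduler'
DUPEFILTER_CLASS = 'scrapy_redis.dupefilter.RFPDupeFilter'
ITEM_PIPELINES = {'scrapy_redis.pipelines.RedisPipeline': 300}
SCHEDULER_PERSIST = True   # keep the queue and dupefilter between runs
REDIS_HOST = '127.0.0.1'
REDIS_PORT = 6379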
3. dupefilter.py
Scrapy's deduplication relies on an in-memory set; deduplication in distributed scrapy needs a shared set, and that is what the redis set data structure provides here.
import logging
import time

from scrapy.dupefilters import BaseDupeFilter
from scrapy.utils.request import request_fingerprint

from . import defaults
from .connection import get_redis_from_settings


logger = logging.getLogger(__name__)


# TODO: Rename class to RedisDupeFilter.
class RFPDupeFilter(BaseDupeFilter):
    """Redis-based request duplicates filter.

    This class can also be used with default Scrapy's scheduler.
    """

    logger = logger

    def __init__(self, server, key, debug=False):
        """Initialize the duplicates filter.

        Parameters
        ----------
        server : redis.StrictRedis
            The redis server instance.
        key : str
            Redis key where to store fingerprints.
        debug : bool, optional
            Whether to log filtered requests.
        """
        self.server = server
        self.key = key
        self.debug = debug
        self.logdupes = True

    @classmethod
    def from_settings(cls, settings):
        """Returns an instance from given settings.

        This uses by default the key ``dupefilter:<timestamp>``. When using the
        ``scrapy_redis.scheduler.Scheduler`` class, this method is not used as
        it needs to pass the spider name in the key.

        Parameters
        ----------
        settings : scrapy.settings.Settings

        Returns
        -------
        RFPDupeFilter
            A RFPDupeFilter instance.
        """
        # Get a redis connection instance.
        server = get_redis_from_settings(settings)
        # XXX: This creates one-time key. needed to support to use this
        # class as standalone dupefilter with scrapy's default scheduler
        # if scrapy passes spider on open() method this wouldn't be needed
        # TODO: Use SCRAPY_JOB env as default and fallback to timestamp.
        # Build the key under which the fingerprints are stored.
        key = defaults.DUPEFILTER_KEY % {'timestamp': int(time.time())}
        # Defaults to False when the setting is absent.
        debug = settings.getbool('DUPEFILTER_DEBUG')
        # Instantiate this class, forwarding the parameters to __init__.
        return cls(server, key=key, debug=debug)

    @classmethod
    def from_crawler(cls, crawler):
        """Returns instance from crawler.

        Parameters
        ----------
        crawler : scrapy.crawler.Crawler

        Returns
        -------
        RFPDupeFilter
            Instance of RFPDupeFilter.
        """
        return cls.from_settings(crawler.settings)

    def request_seen(self, request):
        """Returns True if request was already seen.

        Parameters
        ----------
        request : scrapy.http.Request

        Returns
        -------
        bool
        """
        # Generate a fingerprint for the request.
        fp = self.request_fingerprint(request)
        # This returns the number of values added, zero if already exists.
        # self.server is the redis connection instance; self.key names the
        # redis set that stores the fingerprints; fp is the fingerprint.
        # sadd returns 0 if fp already exists in the set, 1 if it was added.
        added = self.server.sadd(self.key, fp)
        # added == 0 means the fingerprint already existed: return True
        # (duplicate); otherwise return False.
        return added == 0

    def request_fingerprint(self, request):
        """Returns a fingerprint for a given request.

        Parameters
        ----------
        request : scrapy.http.Request

        Returns
        -------
        str
        """
        return request_fingerprint(request)

    @classmethod
    def from_spider(cls, spider):
        settings = spider.settings
        server = get_redis_from_settings(settings)
        dupefilter_key = settings.get("SCHEDULER_DUPEFILTER_KEY",
                                      defaults.SCHEDULER_DUPEFILTER_KEY)
        key = dupefilter_key % {'spider': spider.name}
        debug = settings.getbool('DUPEFILTER_DEBUG')
        return cls(server, key=key, debug=debug)

    def close(self, reason=''):
        """Delete data on close. Called by Scrapy's scheduler.

        Parameters
        ----------
        reason : str, optional
        """
        # Clear the fingerprints when the spider closes.
        self.clear()

    def clear(self):
        """Clears fingerprints data."""
        self.server.delete(self.key)

    def log(self, request, spider):
        """Logs given request.

        Parameters
        ----------
        request : scrapy.http.Request
        spider : scrapy.spiders.Spider
        """
        if self.debug:
            msg = "Filtered duplicate request: %(request)s"
            self.logger.debug(msg, {'request': request}, extra={'spider': spider})
        elif self.logdupes:
            msg = ("Filtered duplicate request %(request)s"
                   " - no more duplicates will be shown"
                   " (see DUPEFILTER_DEBUG to show all duplicates)")
            self.logger.debug(msg, {'request': request}, extra={'spider': spider})
            self.logdupes = False
request_seen() now stores its state in the database instead of an in-memory set. Duplicates are still identified by fingerprint, and the fingerprint is still obtained through request_fingerprint(). Once computed, the fingerprint is added straight to the redis set: if the add succeeds, the fingerprint was not previously in the set and sadd returns 1. The method's final result checks whether the add returned 0; if sadd just returned 1, the verdict is False, i.e. not a duplicate; otherwise the request is judged a duplicate.
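The fingerprint itself is a SHA1 hash over the request's method, canonicalized url, and body, so two urls that differ only in query-parameter order collapse to the same fingerprint; a quick check using Scrapy's utility directly:

from scrapy import Request
from scrapy.utils.request import request_fingerprint

r1 = Request('http://example.com/?a=1&b=2')
r2 = Request('http://example.com/?b=2&a=1')  # same url, reordered params
print(request_fingerprint(r1) == request_fingerprint(r2))  # True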
4. picklecompat.py
This file implements the loads and dumps functions, i.e. a serializer.
Because redis cannot store complex objects directly (keys must be strings, and values can only be strings, lists of strings, sets of strings, and hashes), everything we store has to be serialized to text first.
The tool used here is Python's pickle module, a serialization library compatible with both py2 and py3. This serializer is mainly used later by the scheduler to store request objects.
"""A pickle wrapper module with protocol=-1 by default."""
try:
import cPickle as pickle # PY2
except ImportError:
import pickle#PY3用的包
#反序列化就是將字符串轉(zhuǎn)換為json數(shù)據(jù)
def loads(s):
return pickle.loads(s)
#序列化就是將json數(shù)據(jù)轉(zhuǎn)換為字符串
def dumps(obj):
return pickle.dumps(obj, protocol=-1)
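A quick round trip through this serializer:

from scrapy_redis import picklecompat

blob = picklecompat.dumps({'url': 'http://example.com', 'priority': 0})
print(type(blob))                # bytes on py3 - safe to store in redis
print(picklecompat.loads(blob))  # {'url': 'http://example.com', 'priority': 0}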
5. pipelines.py
This pipeline takes the scraped data, serializes it, and stores it in redis.
from scrapy.utils.misc import load_object
from scrapy.utils.serialize import ScrapyJSONEncoder
from twisted.internet.threads import deferToThread

from . import connection, defaults


# Default serializer: encode items as JSON strings.
default_serialize = ScrapyJSONEncoder().encode


class RedisPipeline(object):
    """Pushes serialized item into a redis list/queue

    Settings
    --------
    REDIS_ITEMS_KEY : str
        Redis key where to store items.
    REDIS_ITEMS_SERIALIZER : str
        Object path to serializer function.
    """

    def __init__(self, server,
                 key=defaults.PIPELINE_KEY,
                 serialize_func=default_serialize):
        """Initialize pipeline.

        Parameters
        ----------
        server : StrictRedis
            Redis client instance.
        key : str
            Redis key where to store items.
        serialize_func : callable
            Items serializer function.
        """
        self.server = server
        self.key = key
        self.serialize = serialize_func

    # Builds the parameters and the redis connection instance,
    # then instantiates the class itself.
    @classmethod
    def from_settings(cls, settings):
        # connection.from_settings = get_redis_from_settings:
        # create the redis connection instance.
        params = {
            'server': connection.from_settings(settings),
        }
        # Prefer the items key from the settings, if one is set.
        if settings.get('REDIS_ITEMS_KEY'):
            params['key'] = settings['REDIS_ITEMS_KEY']
        # Prefer the serializer function from the settings, if one is set.
        if settings.get('REDIS_ITEMS_SERIALIZER'):
            params['serialize_func'] = load_object(
                settings['REDIS_ITEMS_SERIALIZER']
            )
        # Instantiate this class with the collected parameters.
        return cls(**params)

    @classmethod
    def from_crawler(cls, crawler):
        return cls.from_settings(crawler.settings)

    # Called automatically for every item the spider returns.
    def process_item(self, item, spider):
        # Store the item in a thread-pool thread, so the next item does not
        # have to wait for the previous one to finish being stored.
        return deferToThread(self._process_item, item, spider)

    # Does the actual storing.
    def _process_item(self, item, spider):
        # Build the redis key for this item.
        key = self.item_key(item, spider)
        # Serialize the item to a string with the configured serializer.
        data = self.serialize(item)
        # self.server is the redis connection instance.
        self.server.rpush(key, data)
        return item

    def item_key(self, item, spider):
        """Returns redis key based on given spider.

        Override this function to use a different key depending on the item
        and/or spider.
        """
        # self.key == '%(spider)s:items', so this yields '<spider.name>:items'.
        return self.key % {'spider': spider.name}
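Since item_key() is an ordinary method, sharding items is just an override away; a sketch that routes items into per-type lists (the 'kind' field is hypothetical):

from scrapy_redis.pipelines import RedisPipeline

class ShardedRedisPipeline(RedisPipeline):
    def item_key(self, item, spider):
        # e.g. 'demo:book:items' instead of the flat 'demo:items'
        kind = item.get('kind', 'default')
        return '%s:%s:items' % (spider.name, kind)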
6. queue.py
The crawl queue. There are three queue implementations; they share a parent class, Base, which provides the common methods and attributes:
from scrapy.utils.reqser import request_to_dict, request_from_dict

from . import picklecompat


class Base(object):
    """Per-spider base queue class"""

    def __init__(self, server, spider, key, serializer=None):
        """Initialize per-spider redis queue.

        Parameters
        ----------
        server : StrictRedis
            Redis client instance.
        spider : Spider
            Scrapy spider instance.
        key: str
            Redis key where to put and get messages.
        serializer : object
            Serializer object with ``loads`` and ``dumps`` methods.
        """
        if serializer is None:
            # Backward compatibility.
            # TODO: deprecate pickle.
            serializer = picklecompat
        # Raise if the serializer lacks loads/dumps; this forces any
        # serializer passed in to provide both functions.
        if not hasattr(serializer, 'loads'):
            raise TypeError("serializer does not implement 'loads' function: %r"
                            % serializer)
        if not hasattr(serializer, 'dumps'):
            raise TypeError("serializer does not implement 'dumps' function: %r"
                            % serializer)
        # Shared state available to every method of the class.
        self.server = server
        self.spider = spider
        self.key = key % {'spider': spider.name}
        self.serializer = serializer

    def _encode_request(self, request):
        """Encode a request object"""
        # Convert the request to a dict...
        obj = request_to_dict(request, self.spider)
        # ...then serialize the dict to a string for storage.
        return self.serializer.dumps(obj)

    def _decode_request(self, encoded_request):
        """Decode an request previously encoded"""
        # Deserialize the string back into a dict...
        obj = self.serializer.loads(encoded_request)
        # ...then rebuild a Request object the downloader can fetch directly.
        return request_from_dict(obj, self.spider)

    # Subclasses must override __len__, push and pop.
    def __len__(self):
        """Return the length of the queue"""
        raise NotImplementedError

    def push(self, request):
        """Push a request"""
        raise NotImplementedError

    def pop(self, timeout=0):
        """Pop a request"""
        raise NotImplementedError

    def clear(self):
        """Clear queue/stack"""
        # Delete this queue's redis key entirely.
        self.server.delete(self.key)
Look first at the _encode_request() and _decode_request() methods. We store request objects in the database, but the database cannot store objects directly, so a request must be serialized to a string first; these two methods implement serialization and deserialization respectively, built on the pickle module. When the queue's push() method stores a request into the database it calls _encode_request(), and when pop() takes a request out it calls _decode_request() to reverse the process.
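For reference, here is roughly how the default subclass, PriorityQueue (the SCHEDULER_QUEUE_CLASS default), builds push() and pop() on top of a redis sorted set, lightly annotated:

class PriorityQueue(Base):
    """Per-spider priority queue abstraction using redis' sorted set"""

    def __len__(self):
        return self.server.zcard(self.key)

    def push(self, request):
        data = self._encode_request(request)
        # Negate the priority so that higher-priority requests get the
        # lower scores that a redis sorted set returns first.
        score = -request.priority
        self.server.execute_command('ZADD', self.key, score, data)

    def pop(self, timeout=0):
        # Atomically read and remove the lowest-scored member
        # using a MULTI/EXEC transaction.
        pipe = self.server.pipeline()
        pipe.multi()
        pipe.zrange(self.key, 0, 0).zremrangebyrank(self.key, 0, 0)
        results, count = pipe.execute()
        if results:
            return self._decode_request(results[0])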