Using Selenium in Scrapy

Scrapy is a fine tool, and so is Selenium, but combining them is not so pleasant: stuffing a blocking call into a non-blocking program is enough to drive anyone mad. Even so, plenty of jobs still call for Selenium inside Scrapy (usually because the JavaScript cannot be handled any other way). Given that, it is worth figuring out how to use Selenium while playing to Scrapy's strengths. The rough plan:

  1. Write a dedicated SeleniumRequest class that encapsulates the Selenium-related options;
  2. Write a downloader middleware that launches the browser and drives it according to the attributes of each SeleniumRequest.

OK, the plan is clear, so let's roll up our sleeves and get to it.

Writing SeleniumRequest

This class will, of course, inherit from scrapy.Request, and we want it to carry a few attributes that drive the browser:

  1. First, wait_until, which holds the condition the browser should wait for before the page counts as loaded;
  2. script, which holds a JavaScript snippet to run once the page has loaded;
  3. handler, a function that takes a driver argument and is called after the page finishes loading.

The code:

import scrapy


class SeleniumRequest(scrapy.Request):
    """Selenium Request

    :param wait_until: wait condition
        structure: {by: condition}
        see selenium.webdriver.common.by.By for the accepted ``by`` values,
        e.g. By.ID, By.XPATH (only element-presence conditions are supported)
    :type wait_until: dict

    :param wait_time: maximum wait time in seconds
    :type wait_time: int

    :param script: JavaScript to execute after the page loads
        the result is stored in meta under the key js_result

    :param handler: function that receives the driver instance
        no return value is expected
    """
    def __init__(self, url, callback=None,
                 wait_until=None, wait_time=None,
                 script=None, handler=None, **kwargs):
        self.wait_until = wait_until
        self.script = script
        self.wait_time = wait_time
        self.handler = handler
        super().__init__(url, callback, **kwargs)

That completes the request class; next up, the downloader middleware.
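Before moving on, here is a quick sketch of how a spider might construct such a request. The Request base class below is a minimal stand-in so the snippet runs without scrapy installed; in a real project SeleniumRequest subclasses scrapy.Request exactly as above, and the URL, element id, and script here are purely illustrative.

```python
# Minimal stand-in for scrapy.Request so this sketch runs without scrapy
# installed; in a real project SeleniumRequest subclasses scrapy.Request.
class Request:
    def __init__(self, url, callback=None, **kwargs):
        self.url = url
        self.callback = callback
        self.meta = kwargs.get('meta', {})


class SeleniumRequest(Request):
    def __init__(self, url, callback=None, wait_until=None,
                 wait_time=None, script=None, handler=None, **kwargs):
        self.wait_until = wait_until
        self.wait_time = wait_time
        self.script = script
        self.handler = handler
        super().__init__(url, callback, **kwargs)


# In selenium, By.ID is simply the string "id", so plain strings also
# work as the wait_until keys.
req = SeleniumRequest(
    'https://example.com',               # illustrative URL
    wait_until={'id': 'content'},        # wait for an element with id="content"
    wait_time=10,                        # give up after 10 seconds
    script='return document.title;',     # its result lands in meta["js_result"]
    handler=lambda driver: None,         # e.g. scroll or click before scraping
)
print(req.wait_until)  # prints {'id': 'content'}
```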

Writing the downloader middleware

The downloader middleware receives each SeleniumRequest, actually launches and drives the browser, and finally wraps the page source the browser produced in an HtmlResponse. It therefore has a bit more to do, so let's write it step by step:

  1. First, define the class. The constructor takes a project settings instance, because the WebDriver path and other options are read from the project configuration (requiring them to live in the Scrapy project's settings file keeps usage consistent). The two settings are SELENIUM_DRIVER_PATH and SELENIUM_HEADLESS: the driver path and whether to run the browser without a visible window.
# modules and helpers needed by all the code below
import logging

from scrapy import signals
from scrapy.http import HtmlResponse

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

logger = logging.getLogger(__name__)


class SeleniumDownloadMiddleWare(object):

    def __init__(self, settings):
        driver_path = settings['SELENIUM_DRIVER_PATH']
        headless = settings.getbool('SELENIUM_HEADLESS', True)
        
        # only Chrome is supported for now
        options = webdriver.ChromeOptions()
        options.headless = headless

        # keep the User-Agent consistent with the project settings;
        # otherwise sites that set cookies based on this header may misbehave
        ua = settings['DEFAULT_REQUEST_HEADERS']['User-Agent']
        options.add_argument(f'user-agent={ua}')
        self._options = options
        self._driver_path = driver_path
        self._driver = None
  2. Next, define the from_crawler classmethod that instantiates the class. Here we also connect a handler to the spider-closed signal, so the browser is shut down properly when the spider finishes.
    @classmethod
    def from_crawler(cls, crawler):
        dm = cls(crawler.settings)
        crawler.signals.connect(dm.close, signal=signals.spider_closed)
        return dm
  3. Which brings us straight to the close method (note the name must match the dm.close bound to the signal above):
    def close(self):
        if self._driver is not None:
            self._driver.quit()
            logger.debug('Selenium closed')
  4. Add a driver property for convenient, lazy access:
    @property
    def driver(self):
        if self._driver is None:
            self._driver = webdriver.Chrome(
                executable_path=self._driver_path, options=self._options
            )
        return self._driver
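The property above follows the lazy-initialization pattern: the browser only starts on first access, so a spider that never issues a SeleniumRequest never pays the startup cost. Stripped of the Selenium specifics, the pattern looks like this (the Holder class and its counter are purely illustrative):

```python
class Holder:
    def __init__(self):
        self._resource = None
        self.started = 0  # counts how many times the resource was created

    @property
    def resource(self):
        # create the expensive object only on first access, then reuse it
        if self._resource is None:
            self.started += 1
            self._resource = object()  # stands in for webdriver.Chrome(...)
        return self._resource


h = Holder()
assert h.started == 0              # nothing created yet
first, second = h.resource, h.resource
assert first is second             # the same instance is reused
print(h.started)  # prints 1
```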
  5. And at last, the process_request method, through which each SeleniumRequest is handled:
    def process_request(self, request, spider):
        if not isinstance(request, SeleniumRequest):
            return

        self.driver.get(request.url)

        # handle the wait condition
        if request.wait_until:
            for k, v in request.wait_until.items():
                condition = EC.presence_of_element_located((k, v))
                WebDriverWait(self.driver, request.wait_time).until(
                    condition
                )

        # handle the js script
        if request.script:
            result = self.driver.execute_script(request.script)
            if result is not None:
                request.meta['js_result'] = result

        # call the handler function
        if request.handler is not None:
            request.handler(self.driver)

        # sync cookies between the request and the browser
        for cookie_name, cookie_value in request.cookies.items():
            self.driver.add_cookie(
                {
                    'name': cookie_name,
                    'value': cookie_value
                }
            )
        request.cookies = self.driver.get_cookies()
        request.meta['browser'] = self.driver

        # return a Response object
        body = str.encode(self.driver.page_source)
        return HtmlResponse(
            self.driver.current_url,
            body=body,
            encoding='utf-8',
            request=request
        )

That's it. Configure the middleware in the project settings and it is ready to use. The full code:

import logging

from scrapy import signals
from scrapy.http import HtmlResponse

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# change this import to wherever SeleniumRequest is actually defined
from utils.selenium import SeleniumRequest

logger = logging.getLogger(__name__)


class SeleniumDownloadMiddleWare(object):

    def __init__(self, settings):
        driver_path = settings['SELENIUM_DRIVER_PATH']
        headless = settings.getbool('SELENIUM_HEADLESS', True)
        ua = settings['DEFAULT_REQUEST_HEADERS']['User-Agent']
        options = webdriver.ChromeOptions()
        options.headless = headless
        options.add_argument(f'user-agent={ua}')
        self._options = options
        self._driver_path = driver_path
        self._driver = None

    @property
    def driver(self):
        if self._driver is None:
            self._driver = webdriver.Chrome(
                executable_path=self._driver_path, options=self._options
            )
        return self._driver

    @classmethod
    def from_crawler(cls, crawler):
        dm = cls(crawler.settings)
        crawler.signals.connect(dm.close, signal=signals.spider_closed)
        return dm

    def process_request(self, request, spider):
        if not isinstance(request, SeleniumRequest):
            return

        self.driver.get(request.url)

        # handle the wait condition
        if request.wait_until:
            for k, v in request.wait_until.items():
                condition = EC.presence_of_element_located((k, v))
                WebDriverWait(self.driver, request.wait_time).until(
                    condition
                )

        # handle the js script
        if request.script:
            result = self.driver.execute_script(request.script)
            if result is not None:
                request.meta['js_result'] = result

        # call the handler function
        if request.handler is not None:
            request.handler(self.driver)

        # sync cookies between the request and the browser
        for cookie_name, cookie_value in request.cookies.items():
            self.driver.add_cookie(
                {
                    'name': cookie_name,
                    'value': cookie_value
                }
            )
        request.cookies = self.driver.get_cookies()
        request.meta['browser'] = self.driver

        # return a Response object
        body = str.encode(self.driver.page_source)
        return HtmlResponse(
            self.driver.current_url,
            body=body,
            encoding='utf-8',
            request=request
        )

    def close(self):
        if self._driver is not None:
            self._driver.quit()
            logger.debug('Selenium closed')
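Finally, the middleware and its settings have to be registered in the project's settings.py. The module path, middleware priority, driver path, and User-Agent string below are all assumptions; adjust them to your own project layout:

```python
# settings.py -- module path and priority are illustrative; adjust to your project
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.SeleniumDownloadMiddleWare': 543,
}

SELENIUM_DRIVER_PATH = '/usr/local/bin/chromedriver'  # path to the chromedriver binary
SELENIUM_HEADLESS = True  # set to False to watch the browser while debugging

# the middleware reads the User-Agent from here to configure the browser
DEFAULT_REQUEST_HEADERS = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36',
}
```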