Crawling Google Play with Scrapy, Selenium, and Headless Chrome

Introduction

  • This post shows how to crawl static data with Scrapy and JavaScript-rendered data with Selenium + Headless Chrome, yielding complete app data for the Indonesian Google Play market.
  • Note that the page format differs from country to country, so the parsing differs too. To crawl another country's market, adjust the parsing code (in the GooglePlaySpider.py file below).
  • Project environment:
    • Platform: macOS
    • Python version: 3.6
    • IDE: Sublime Text

Installation

Create the project with Scrapy

  • $ scrapy startproject gp
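
  • For reference, scrapy startproject gp generates the standard Scrapy skeleton below; the files edited in the rest of this post (items.py, middlewares.py, pipelines.py, settings.py, and the spiders folder) all live in the inner gp package:

    ```text
    gp/
        scrapy.cfg          # deploy configuration
        gp/                 # the project's Python package ("the inner gp folder")
            __init__.py
            items.py
            middlewares.py
            pipelines.py
            settings.py
            spiders/
                __init__.py
    ```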

Define the crawler data Items

  • Add to items.py:

    # product
    class ProductItem(scrapy.Item):
        gp_icon = scrapy.Field()  # icon
        gp_name = scrapy.Field()  # app name on Google Play
        # ...
    
    # review
    class GPReviewItem(scrapy.Item):
        avatar_url = scrapy.Field()  # avatar URL
        user_name = scrapy.Field()  # user name
        # ...
    

Create the spider

  • Create GooglePlaySpider.py in the spiders folder:

    import scrapy
    from gp.items import ProductItem, GPReviewItem
    
    
    class GooglePlaySpider(scrapy.Spider):
        name = 'gp'
        allowed_domains = ['play.google.com']
    
        def __init__(self, *args, **kwargs):
            urls = kwargs.pop('urls', '')  # the -a urls argument, a comma-separated string
            super().__init__(*args, **kwargs)  # let Scrapy finish its own setup
            if urls:
                self.start_urls = urls.split(',')
            print('start urls = ', self.start_urls)
    
        def parse(self, response):
            print('Begin parse ', response.url)
    
            item = ProductItem()
    
            content = response.xpath('//div[@class="LXrl4c"]')
    
            try:
                item['gp_icon'] = response.urljoin(content.xpath('//img[@class="T75of ujDFqe"]/@src')[0].extract())
            except Exception as error:
                print('gp_icon except = ', error)
                item['gp_icon'] = ''
    
            try:
                item['gp_name'] = content.xpath('//h1[@class="AHFaub"]/span/text()')[0].extract()
            except Exception as error:
                print('gp_name except = ', error)
                item['gp_name'] = ''
            
            # ...
                
            yield item
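
  • The class names in these XPath expressions (LXrl4c, T75of ujDFqe, AHFaub) come from the Play Store markup at the time of writing and break whenever Google redesigns the page, which is why every field is wrapped in its own try/except that falls back to ''. A minimal sketch of that pattern, using the standard library's ElementTree on a made-up markup fragment instead of Scrapy selectors:

    ```python
    import xml.etree.ElementTree as ET

    # A made-up stand-in for a fragment of the store page.
    html = '<div class="LXrl4c"><h1 class="AHFaub"><span>Dana Rupiah</span></h1></div>'
    root = ET.fromstring(html)

    # Extract one field, falling back to '' when the node is missing,
    # mirroring the spider's per-field try/except above.
    try:
        gp_name = root.find(".//h1[@class='AHFaub']/span").text
    except AttributeError:
        gp_name = ''
    print(gp_name)
    ```

    If Google renames the class, find() returns None, the AttributeError is caught, and the field degrades to an empty string instead of killing the whole parse.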
    
  • Run the spider:

    $ scrapy crawl gp -a urls='https://play.google.com/store/apps/details?id=id.danarupiah.weshare.jiekuan&hl=id'
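
  • The -a option hands urls to the spider's kwargs as one comma-separated string, which __init__ then splits, so several store pages can be crawled in one run. A quick sketch of that splitting (the second URL is a made-up example):

    ```python
    # Two store URLs joined the way -a urls expects them;
    # the second one is a hypothetical placeholder.
    urls = ('https://play.google.com/store/apps/details?id=id.danarupiah.weshare.jiekuan&hl=id,'
            'https://play.google.com/store/apps/details?id=com.example.other&hl=id')
    start_urls = urls.split(',')
    print(len(start_urls))
    ```

    On the command line that would be scrapy crawl gp -a urls='url1,url2'.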

  • The review data:

    'gp_review': []
    
  • The reviews come back empty because they are generated dynamically by JavaScript, so a real browser has to load the page to get them.

Fetching the review data with Selenium + Headless Chrome

  • Create the configuration file configs.py in the inner gp folder and add the browser paths:

    # browser paths
    CHROME_PATH = r''  # absolute path to the Chrome binary; if empty, it is looked up in $PATH
    CHROME_DRIVER_PATH = r''  # absolute path to chromedriver; if empty, it is looked up in $PATH
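
  • As an illustration only, on macOS the two paths typically look like the following (assuming a chromedriver installed via Homebrew; adjust to your machine):

    ```python
    # Example macOS values; leave empty to fall back to the $PATH lookup.
    CHROME_PATH = r'/Applications/Google Chrome.app/Contents/MacOS/Google Chrome'
    CHROME_DRIVER_PATH = r'/usr/local/bin/chromedriver'
    ```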
    
  • Create ChromeDownloaderMiddleware in middlewares.py:

    from scrapy.http import HtmlResponse
    from selenium import webdriver
    from selenium.common.exceptions import TimeoutException
    from gp.configs import *
    
    
    class ChromeDownloaderMiddleware(object):
    
        def __init__(self):
            options = webdriver.ChromeOptions()
            options.add_argument('--headless')  # run Chrome without a UI
            if CHROME_PATH:
                options.binary_location = CHROME_PATH
            # note: newer Selenium (4.x) replaced chrome_options/executable_path with options=/service=
            if CHROME_DRIVER_PATH:
                self.driver = webdriver.Chrome(chrome_options=options, executable_path=CHROME_DRIVER_PATH)  # initialize the Chrome driver
            else:
                self.driver = webdriver.Chrome(chrome_options=options)  # initialize the Chrome driver
    
        def __del__(self):
            self.driver.quit()  # quit(), unlike close(), also shuts down the chromedriver process
    
        def process_request(self, request, spider):
            try:
                print('Chrome driver begin...')
                self.driver.get(request.url)  # load the page in the real browser
                return HtmlResponse(url=request.url, body=self.driver.page_source, request=request, encoding='utf-8',
                                    status=200)  # return the rendered HTML
            except TimeoutException:
                return HtmlResponse(url=request.url, request=request, encoding='utf-8', status=500)
            finally:
                print('Chrome driver end...')
    
  • Add to settings.py:

    DOWNLOADER_MIDDLEWARES = {
       'gp.middlewares.ChromeDownloaderMiddleware': 543,
    }
    
  • Run the spider again:

    $ scrapy crawl gp -a urls='https://play.google.com/store/apps/details?id=id.danarupiah.weshare.jiekuan&hl=id'

  • The review data:

    'gp_review': [{'avatar_url': 'https://lh3.googleusercontent.com/-RZM2NdsDoWQ/AAAAAAAAAAI/AAAAAAAAAAA/ACLGyWCJIbUq9MxjbT2dmsotE2knI_t1xQ/s48-c-rw-mo/photo.jpg',
                   'rating_star': '5',
                   'review_text': 'Euis Suharani',
                   'user_name': 'Euis Suharani'},
                  {'avatar_url': 'https://lh3.googleusercontent.com/-ppBNQHj5SUs/AAAAAAAAAAI/AAAAAAAAAAA/X8z6OBBBnwc/s48-c-rw/photo.jpg',
                   'rating_star': '3',
                   'review_text': 'Pengguna Google',
                   'user_name': 'Pengguna Google'},
                  {'avatar_url': 'https://lh3.googleusercontent.com/-lLkaJ4GjUhY/AAAAAAAAAAI/AAAAAAAABfA/UPoS4CbDOpQ/s48-c-rw/photo.jpg',
                   'rating_star': '5',
                   'review_text': 'novi anna',
                   'user_name': 'novi anna'},
                  {'avatar_url': 'https://lh3.googleusercontent.com/-XZDMrSc_pxE/AAAAAAAAAAI/AAAAAAAAAAA/awl5OkP7uR4/s48-c-rw/photo.jpg',
                   'rating_star': '4',
                   'review_text': 'Pengguna Google',
                   'user_name': 'Pengguna Google'}]
    

Storing the data in MySQL with SQLAlchemy

  • Add the database connection settings to configs.py:

    # database connection settings
    DATABASES = {
        'DRIVER': 'mysql+pymysql',
        'HOST': '127.0.0.1',
        'PORT': 3306,
        'NAME': 'gp',
        'USER': 'root',
        'PASSWORD': 'root',
    }
    
  • Create the database connection file connections.py in the inner gp folder:

    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker
    from sqlalchemy_utils import database_exists, create_database
    from gp.configs import *
    
    # base class for all SQLAlchemy models
    Base = declarative_base()
    
    
    # engine used to connect to the database
    def db_connect_engine():
        engine = create_engine("%s://%s:%s@%s:%s/%s?charset=utf8"
                               % (DATABASES['DRIVER'],
                                  DATABASES['USER'],
                                  DATABASES['PASSWORD'],
                                  DATABASES['HOST'],
                                  DATABASES['PORT'],
                                  DATABASES['NAME']),
                               echo=False)
    
        if not database_exists(engine.url):
            create_database(engine.url)  # create the database
            Base.metadata.create_all(engine)  # create the tables
    
        return engine
    
    
    # session factory used to operate on the tables
    def db_session():
        return sessionmaker(bind=db_connect_engine())
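
  • The engine URL built above is plain string formatting, so it can be sanity-checked without a running MySQL server:

    ```python
    # Same settings as in configs.py above.
    DATABASES = {
        'DRIVER': 'mysql+pymysql',
        'HOST': '127.0.0.1',
        'PORT': 3306,
        'NAME': 'gp',
        'USER': 'root',
        'PASSWORD': 'root',
    }

    # Assemble the URL exactly as db_connect_engine() does.
    url = "%s://%s:%s@%s:%s/%s?charset=utf8" % (DATABASES['DRIVER'],
                                                DATABASES['USER'],
                                                DATABASES['PASSWORD'],
                                                DATABASES['HOST'],
                                                DATABASES['PORT'],
                                                DATABASES['NAME'])
    print(url)  # mysql+pymysql://root:root@127.0.0.1:3306/gp?charset=utf8
    ```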
    
    
  • Create the SQLAlchemy model file models.py in the inner gp folder:

    from sqlalchemy import Column, ForeignKey
    from sqlalchemy.dialects.mysql import TEXT, INTEGER
    from sqlalchemy.orm import relationship
    from gp.connections import Base
    
    
    class Product(Base):
        # table name:
        __tablename__ = 'product'
    
        # table columns:
        id = Column(INTEGER, primary_key=True, autoincrement=True)  # ID
        updated_at = Column(INTEGER)  # last update timestamp
    
        gp_icon = Column(TEXT)  # icon
        gp_name = Column(TEXT)  # app name on Google Play
        # ...
    
    
    class GPReview(Base):
        # table name:
        __tablename__ = 'gp_review'
    
        # table columns:
        id = Column(INTEGER, primary_key=True, autoincrement=True)  # ID
        product_id = Column(INTEGER, ForeignKey(Product.id))
        avatar_url = Column(TEXT)  # avatar URL
        user_name = Column(TEXT)  # user name
        # ...
    
  • Add the database code to pipelines.py:

    from gp.connections import *
    from gp.items import ProductItem
    from gp.models import *
    
    
    class GoogleplayspiderPipeline(object):
    
        def __init__(self):
            self.session = db_session()
    
        def process_item(self, item, spider):
            print('process item from gp url = ', item['gp_url'])
    
            if isinstance(item, ProductItem):
    
                session = self.session()
    
                model = Product()
                model.gp_icon = item['gp_icon']
                model.gp_name = item['gp_name']
                # ...
    
                try:
                    m = session.query(Product).filter(Product.gp_url == model.gp_url).first()
    
                    if m is None:  # insert a new product
                        print('add model from gp url ', model.gp_url)
                        session.add(model)
                        session.flush()
                        product_id = model.id
                        for review in item['gp_review']:
                            r = GPReview()
                            r.product_id = product_id
                            r.avatar_url = review['avatar_url']
                            r.user_name = review['user_name']
                            # ...
    
                            session.add(r)
                    else:  # update the existing product
                        print("update model from gp url ", model.gp_url)
                        m.updated_at = item['updated_at']
                        m.gp_icon = item['gp_icon']
                        m.gp_name = item['gp_name']
                        # ...
    
                        product_id = m.id
                        session.query(GPReview).filter(GPReview.product_id == product_id).delete()
                        session.flush()
                        for review in item['gp_review']:
                            r = GPReview()
                            r.product_id = product_id
                            r.avatar_url = review['avatar_url']
                            r.user_name = review['user_name']
                            # ...
    
                            session.add(r)
    
                    session.commit()
                    print('spider_success')
                except Exception as error:
                    session.rollback()
                    print('gp error = ', error)
                    print('spider_failure_exception')
                    raise
                finally:
                    session.close()
            return item
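
  • The pipeline's insert-or-update flow (look the product up by URL; on update, delete the old child reviews and re-insert the fresh ones) can be sketched with the standard library's sqlite3 standing in for MySQL; the mini-schema below is a simplified stand-in for the real models:

    ```python
    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE product (id INTEGER PRIMARY KEY, gp_url TEXT, gp_name TEXT)')
    conn.execute('CREATE TABLE gp_review (id INTEGER PRIMARY KEY, product_id INTEGER, user_name TEXT)')

    def process(item):
        # Look the product up by URL, as the pipeline does with session.query(...).first().
        row = conn.execute('SELECT id FROM product WHERE gp_url = ?', (item['gp_url'],)).fetchone()
        if row is None:  # insert a new product
            cur = conn.execute('INSERT INTO product (gp_url, gp_name) VALUES (?, ?)',
                               (item['gp_url'], item['gp_name']))
            product_id = cur.lastrowid
        else:  # update it, and replace its reviews wholesale
            product_id = row[0]
            conn.execute('UPDATE product SET gp_name = ? WHERE id = ?', (item['gp_name'], product_id))
            conn.execute('DELETE FROM gp_review WHERE product_id = ?', (product_id,))
        conn.executemany('INSERT INTO gp_review (product_id, user_name) VALUES (?, ?)',
                         [(product_id, r['user_name']) for r in item['gp_review']])
        conn.commit()

    item = {'gp_url': 'u', 'gp_name': 'app', 'gp_review': [{'user_name': 'a'}, {'user_name': 'b'}]}
    process(item)  # first run inserts the product and its reviews
    process(item)  # second run updates; the review count stays the same
    print(conn.execute('SELECT COUNT(*) FROM gp_review').fetchone()[0])
    ```

    Deleting and re-inserting the child rows is simpler than diffing them, at the cost of churning review IDs on every update.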
    
  • Uncomment ITEM_PIPELINES in settings.py:

    ITEM_PIPELINES = {
       'gp.pipelines.GoogleplayspiderPipeline': 300,
    }
    
  • Run the spider again:

    $ scrapy crawl gp -a urls='https://play.google.com/store/apps/details?id=id.danarupiah.weshare.jiekuan&hl=id'

  • Inspect the crawled data stored in MySQL:

    • Connect to MySQL with $ mysql -u root -p, entering the password root
    • List all databases with mysql> show databases; — the newly created gp shows up
    • Switch to it with mysql> use gp;
    • List all tables with mysql> show tables; — the newly created product and gp_review show up
    • Query the product data: mysql> select * from product;
    • Query the review data: mysql> select * from gp_review;

Complete project code
