Scrapy Crawler in Practice: Scraping Zimuku


1. First, create the Scrapy project

Create the project:
scrapy startproject zimuku

Generate the spider:
cd zimuku
scrapy genspider zimu zimuku.cn
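
For orientation, the two commands above generate Scrapy's standard project template, which looks roughly like this (middlewares.py may or may not be present depending on your Scrapy version):

```
zimuku/
├── scrapy.cfg          # deployment configuration
└── zimuku/             # the project's Python module
    ├── __init__.py
    ├── items.py        # item definitions
    ├── middlewares.py  # spider/downloader middlewares
    ├── pipelines.py    # item pipelines
    ├── settings.py     # project settings
    └── spiders/
        ├── __init__.py
        └── zimu.py     # the spider created by genspider
```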

As shown:

[screenshot: snipaste_20181110_074005.png]

[screenshot: snipaste_20181110_074302.png]

You'll find that the whole project skeleton and its template files have already been generated. Let's go through them one by one:

zimu.py
# -*- coding: utf-8 -*-
import scrapy


class ZimuSpider(scrapy.Spider):
    name = 'zimu'
    allowed_domains = ['zimuku.cn']
    start_urls = ['http://zimuku.cn/']

    def parse(self, response):
        pass

items.py
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class ZimukuItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    pass

pipelines.py
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class ZimukuPipeline(object):
    def process_item(self, item, spider):
        return item

These three are the most important files; I won't go through the others one by one.
Next, let's analyze the page.

2. Page analysis

[screenshot: snipaste_20181110_074858.png]

Our goal is to save the highlighted content. Since Scrapy ships with its own selector tools, we'll use XPath to match it.
The XPath expression that extracts the content in the red box is: /html/body/div[2]/div/div/div[2]/table/tbody/tr[1]/td[1]/a/b/text()
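
To see how an XPath expression like this walks the document tree, here is a minimal, self-contained sketch using only the standard library. The HTML snippet is a made-up stand-in for zimuku.cn's listing table (not the real page), and since xml.etree supports only a subset of XPath (no text() node test), we locate the <b> element and read its .text attribute instead:

```python
# Stdlib-only illustration of selecting the subtitle title out of
# table-shaped markup like the listing page above.
import xml.etree.ElementTree as ET

# Made-up markup mirroring the structure the XPath expression targets.
html = """
<html><body>
  <div><div><div>
    <div>
      <table><tbody>
        <tr><td><a href="/detail/1.html"><b>Some.Movie.2018.srt</b></a></td></tr>
      </tbody></table>
    </div>
  </div></div></div>
</body></html>
"""

root = ET.fromstring(html)
# Descend to the <b> node inside the first table row and read its text.
title = root.find(".//table/tbody/tr/td/a/b").text
print(title)  # prints: Some.Movie.2018.srt
```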

3. Writing the code
(1) First, edit items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class ZimukuItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # field holding the content we want to scrape
    text = scrapy.Field()

(2) Edit zimu.py

# -*- coding: utf-8 -*-
import scrapy
# import the item class defined in items.py
from zimuku.items import ZimukuItem

class ZimuSpider(scrapy.Spider):
    name = 'zimu'
    allowed_domains = ['zimuku.cn']
    start_urls = ['http://zimuku.cn/']

    def parse(self, response):
        """
        :param response: the response for the downloaded page
        :return: yields an item containing the matched text
        """
        # extract() turns the selector list into a list of plain strings
        name = response.xpath("/html/body/div[2]/div/div/div[2]/table/tbody/tr[1]/td[1]/a/b/text()").extract()
        item = ZimukuItem()
        item['text'] = name
        yield item

(3) Edit pipelines.py (which processes the scraped content)

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class ZimukuPipeline(object):
    def process_item(self, item, spider):
        # append each scraped result to a text file; set the encoding
        # so non-ASCII titles are written correctly
        with open("F:\\python\\1.txt", 'a', encoding='utf-8') as fp:
            fp.write(str(item['text']))
        print(item['text'])
        # return the item so any later pipelines can keep processing it
        return item
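
The pipeline's job can be sketched without Scrapy at all: open the file in append mode, write the item's text, and hand the item back. The sketch below mimics the pipeline with a plain dict standing in for the item and a temporary path standing in for the hard-coded F:\python\1.txt, so it runs on any platform:

```python
# Stdlib-only sketch of the pipeline's behavior (not Scrapy itself).
import os
import tempfile

class ZimukuPipelineSketch:
    def __init__(self, path):
        self.path = path

    def process_item(self, item, spider):
        # 'a' keeps earlier results; utf-8 handles non-ASCII titles
        with open(self.path, "a", encoding="utf-8") as fp:
            fp.write(str(item["text"]) + "\n")
        return item  # later pipeline stages would receive this

# hypothetical demo path replacing F:\python\1.txt
path = os.path.join(tempfile.gettempdir(), "zimuku_demo.txt")
if os.path.exists(path):
    os.remove(path)

pipeline = ZimukuPipelineSketch(path)
pipeline.process_item({"text": "Some.Movie.2018.srt"}, spider=None)

with open(path, encoding="utf-8") as fp:
    print(fp.read().strip())  # prints: Some.Movie.2018.srt
```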


(4) settings.py

# tell Scrapy which pipeline will handle the scraped items
ITEM_PIPELINES = {'zimuku.pipelines.ZimukuPipeline': 300}

(5) Run

# run without printing the log
scrapy crawl zimu --nolog
# run with the log
scrapy crawl zimu

I recommend running with the log enabled; otherwise some errors go unnoticed, and when something does break you won't know where to look.
[screenshot: snipaste_20181110_094710.png]

The crawl runs successfully; now let's check whether the file was saved.

[screenshot: snipaste_20181110_094749.png]

OK!!! It finally works.
This Scrapy walkthrough was built step by step; it should be a useful reference for myself as well as for friends who are just getting started.

? Copyright belongs to the author. For reprints or content collaboration, please contact the author.
