Scraping Data from the OMIM Database

Introduction

OMIM, short for Online Mendelian Inheritance in Man, is a continuously updated database of human Mendelian disorders, focused on the relationship between human gene variants and phenotypic traits.

The official OMIM website is https://www.omim.org/

Registered OMIM users can download data or fetch it through the API. Here we instead try to scrape the Phenotype-Gene Relationships data with a crawler.

Scraping the Data with Scrapy

Create the Project

scrapy startproject omimScrapy
cd omimScrapy
scrapy genspider omim omim.org
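These commands create a project skeleton roughly like the following (the exact layout may vary slightly between Scrapy versions):

```text
omimScrapy/
├── scrapy.cfg            # deploy configuration
└── omimScrapy/
    ├── __init__.py
    ├── items.py          # item definitions
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py       # crawler settings
    └── spiders/
        └── omim.py       # the generated spider
```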

Configure the Item


import scrapy

class OmimscrapyItem(scrapy.Item):
    # Define the fields for your item here:
    geneSymbol = scrapy.Field()
    mimNumber = scrapy.Field()
    location = scrapy.Field()
    phenotype = scrapy.Field()
    phenotypeMimNumber = scrapy.Field()
    inheritance = scrapy.Field()
    mappingKey = scrapy.Field()
    descriptionFold = scrapy.Field()
    diagnosisFold = scrapy.Field()
    inheritanceFold = scrapy.Field()
    populationGeneticsFold = scrapy.Field()

Build the Spider

We crawl entries one by one based on the contents of the mim2gene.txt file, so we first need to parse that file.

    '''
        Parse the mim2gene.txt file from OMIM.
        Returns a list of [mimNumber, mimEntryType, geneSymbol] rows
        for entries of type "gene" or "gene/phenotype".
    '''
    def readMim2Gene(self, filename):
        filelist = []
        with open(filename, "r") as f:
            for line in f:
                if line.startswith("#"):  # skip header/comment lines
                    continue
                strs = line.split()
                if len(strs) < 2:
                    continue
                mimNumber = strs[0]
                mimEntryType = strs[1]
                geneSymbol = strs[3] if len(strs) >= 4 else "."
                if mimEntryType in ["gene", "gene/phenotype"]:
                    filelist.append([mimNumber, mimEntryType, geneSymbol])
        return filelist
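To sanity-check this parsing logic, the same steps can be exercised as a standalone function on a few sample lines in mim2gene.txt's tab-separated format (MIM number, entry type, Entrez gene ID, approved gene symbol, Ensembl gene ID). The sample values below are illustrative only:

```python
# A standalone version of the parsing logic above, run against a few
# sample lines mimicking mim2gene.txt's tab-separated layout.
sample = """# Comment line from the file header
100640\tgene\t216\tALDH1A1\tENSG00000165092
100650\tgene/phenotype\t217\tALDH2\tENSG00000111275
100500\tmoved/removed
100300\tphenotype
"""

def parse_mim2gene_lines(lines):
    filelist = []
    for line in lines:
        strs = line.split()
        if len(strs) < 2:          # skip comments and short lines
            continue
        mimNumber, mimEntryType = strs[0], strs[1]
        geneSymbol = strs[3] if len(strs) >= 4 else "."
        if mimEntryType in ("gene", "gene/phenotype"):
            filelist.append([mimNumber, mimEntryType, geneSymbol])
    return filelist

rows = parse_mim2gene_lines(sample.splitlines())
print(rows)  # only the gene and gene/phenotype rows survive
```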

After parsing the file, the crawl entry points need to be generated dynamically: we use the start_requests method to build the URLs to fetch, then crawl each URL and retrieve its content.

Note: at this stage you can either parse the HTML immediately and extract the fields you need, or save the raw HTML first and process everything later. Here we do not parse the fetched pages; each one is saved as an HTML file named after its mimNumber, with the .html extension.

# -*- coding: utf-8 -*-
import os
import scrapy
from omimScrapy.items import OmimscrapyItem

class OmimSpider(scrapy.Spider):
    name = 'omim'
    allowed_domains = ['omim.org']
    #start_urls = ['http://omim.org/']
    # readMim2Gene (shown above) is also defined as a method of this class.

    def start_requests(self):
        filelist = self.readMim2Gene("mim2gene.txt")
        for row in filelist:
            item = OmimscrapyItem()
            item['mimNumber'] = row[0]
            item['geneSymbol'] = row[2]
            url = "https://www.omim.org/entry/" + row[0]
            yield scrapy.Request(url, method='GET', callback=self.saveHtml, meta={'item': item})

    def saveHtml(self, response):
        item = response.meta['item']
        html = response.body.decode("utf-8")
        os.makedirs("/root/data/entry", exist_ok=True)  # ensure the output directory exists
        with open("/root/data/entry/" + item['mimNumber'] + ".html", 'w+') as f:
            f.write(html)

Spider Settings

OMIM's robots.txt only allows Microsoft's bingbot and Google's googlebot crawlers to fetch content under specified paths. The key settings to configure are the following.

BOT_NAME = 'bingbot'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'bingbot (+https://www.bing.com/bingbot.htm)'

# Configure a delay for requests for the same website (default: 0)
DOWNLOAD_DELAY = 4

# Disable cookies (enabled by default)
COOKIES_ENABLED = False
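One additional setting worth checking: recent Scrapy project templates enable robots.txt handling by default, and the robots middleware applies whichever rules match the configured user agent. The AutoThrottle lines below are an optional suggestion for a day-long crawl, not part of the original setup:

```python
# ROBOTSTXT_OBEY = True is the default in recent Scrapy project templates;
# the robots middleware then applies the rules matching USER_AGENT above.
# Verify in the crawl log that entry pages are not being silently filtered.
ROBOTSTXT_OBEY = True

# Optional: AutoThrottle adapts the download delay to server responsiveness
# on top of DOWNLOAD_DELAY, which is gentler for a crawl that runs all day.
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 4
```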

Run the Crawl

Now the crawl can be run. The process is slow and takes roughly a day; afterwards every entry page has been saved as a local HTML file.

scrapy crawl omim

Subsequent Extraction

Extraction from the local HTML files is straightforward with BeautifulSoup. The core extraction logic is:

from bs4 import BeautifulSoup

'''
    Parse the Phenotype-Gene Relationships table from a saved entry page.
    Returns (result, data): result is "SUCCESS" or "ERROR", and data maps
    each field name to its extracted value, with "." for missing values.
'''
def parseHtmlTable(html):
    soup = BeautifulSoup(html, "html.parser")
    table = soup.table
    location, phenotype, phenotypeMimNumber, inheritance, mappingKey = "", "", "", "", ""
    descriptionFold, diagnosisFold, inheritanceFold, populationGeneticsFold = ".", ".", ".", "."
    if not table:
        result = "ERROR"
    else:
        result = "SUCCESS"
        for tr in table.find_all('tr'):
            tds = tr.find_all('td')
            if len(tds) == 0:
                continue
            elif len(tds) == 4:
                # Continuation row: the Location cell spans several rows, so
                # additional phenotypes are appended with "|" separators.
                phenotype += "|" + (tds[0].get_text().strip() or '.')
                phenotypeMimNumber += "|" + (tds[1].get_text().strip() or '.')
                inheritance += "|" + (tds[2].get_text().strip() or '.')
                mappingKey += "|" + (tds[3].get_text().strip() or '.')
            elif len(tds) == 5:
                location = tds[0].get_text().strip() or '.'
                phenotype = tds[1].get_text().strip() or '.'
                phenotypeMimNumber = tds[2].get_text().strip() or '.'
                inheritance = tds[3].get_text().strip() or '.'
                mappingKey = tds[4].get_text().strip() or '.'
            else:
                result = "ERROR"

        # The free-text sections are identified by element id on the entry page.
        descriptionFoldList = soup.select("#descriptionFold")
        descriptionFold = "." if len(descriptionFoldList) == 0 else descriptionFoldList[0].get_text().strip()

        diagnosisFoldList = soup.select("#diagnosisFold")
        diagnosisFold = "." if len(diagnosisFoldList) == 0 else diagnosisFoldList[0].get_text().strip()

        inheritanceFoldList = soup.select("#inheritanceFold")
        inheritanceFold = "." if len(inheritanceFoldList) == 0 else inheritanceFoldList[0].get_text().strip()

        populationGeneticsFoldList = soup.select("#populationGeneticsFold")
        populationGeneticsFold = "." if len(populationGeneticsFoldList) == 0 else populationGeneticsFoldList[0].get_text().strip()

    return result, {
        'location': location, 'phenotype': phenotype,
        'phenotypeMimNumber': phenotypeMimNumber, 'inheritance': inheritance,
        'mappingKey': mappingKey, 'descriptionFold': descriptionFold,
        'diagnosisFold': diagnosisFold, 'inheritanceFold': inheritanceFold,
        'populationGeneticsFold': populationGeneticsFold,
    }

The final output format is up to you.
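For example, one simple choice is a tab-separated file with one row per entry, using the same field names as the item and "." for missing values. The sketch below is a minimal illustration of that format; the sample row is placeholder data, not real OMIM output:

```python
# Serialize parsed entries into TSV, with "." for any missing field,
# matching the convention used by the extraction code above.
import csv
import io

FIELDS = ["mimNumber", "geneSymbol", "location", "phenotype",
          "phenotypeMimNumber", "inheritance", "mappingKey"]

def rows_to_tsv(rows):
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS, delimiter="\t",
                            lineterminator="\n")
    writer.writeheader()
    for row in rows:
        writer.writerow({f: row.get(f, ".") for f in FIELDS})
    return buf.getvalue()

sample = [{"mimNumber": "100640", "geneSymbol": "ALDH1A1",
           "location": "9q21.13", "phenotype": "."}]
print(rows_to_tsv(sample))
```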
