[TOC]
## Goal

Write a spider with Scrapy that can download pages and parse static pages, with custom request headers and a link generator.
Parsing will probably use XPath plus BeautifulSoup.
## Installation

```
conda install scrapy
```

If Scrapy is already installed and needs upgrading, run

```
conda update scrapy
```
## Generating the initial spider

Create a folder named scrapy, then run `scrapy startproject tutorial` inside it to generate the demo project.
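For reference, the generated project layout looks roughly like this (the exact files vary slightly across Scrapy versions):

```
tutorial/
    scrapy.cfg            # deploy configuration
    tutorial/
        __init__.py
        items.py          # item definitions
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # spider modules live here
            __init__.py
```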
In the project's spiders directory (tutorial/spiders), create dmoz_spider.py with the following code:
```python
#!/usr/bin/python
# -*- coding: utf-8 -*-
# __author__ = "leisurem"
import sys

import scrapy
from bs4 import BeautifulSoup

# Python 2 workaround so implicit str/unicode conversion defaults to UTF-8
reload(sys)
sys.setdefaultencoding('utf-8')


class DmozSpider(scrapy.spiders.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        'http://jobs.51job.com/nanjing/76840759.html?s=0',
    ]

    def parse(self, response):
        # Use the last path segment of the URL as the output file name
        filename = response.url.split("/")[-1]
        soup = BeautifulSoup(response.body, "html5lib")
        company_name = soup.find('p', class_="cname").get_text().strip()
        job_title = soup.find('h1').get('title')
        job_describe = soup.find(
            'div', class_="bmsg job_msg inbox").get_text().split()[1]
        company_address = soup.find(
            'div', class_="bmsg inbox").get_text().split()[0]
        company_info = soup.find(
            'div', class_="tmsg inbox").get_text().split()[0]
        with open(filename, 'wb') as f:
            f.write('company_name is ' + company_name + '\n\n')
            f.write('job_title is ' + job_title + '\n\n')
            f.write('job_describe is ' + job_describe + '\n\n')
            f.write('company_address is ' + company_address + '\n\n')
            f.write('company_info is ' + company_info + '\n\n')
```
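To try it, run the spider from the project root; a file named after the last segment of the URL will appear in the working directory:

```
scrapy crawl dmoz
```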
## bs4

The parsing above uses BeautifulSoup, so here is a brief introduction to bs4.
bs4 offers a different way of locating elements than XPath (and if you really need it, bs also supports locating by regular expression).
### Installation

```
conda install beautifulsoup4
conda install html5lib
conda install lxml
```
### Parsing with bs

bs offers a lot; here are a few of its lookup methods.

- Tag position

If you know the content you want sits in an `a` tag, but there is usually more than one `a` tag, count which `a` tag it is in. For example, if it is in the sixth one: `soup.find_all('a')[5].get_text()`.
If you know it is simply the first `a` tag, use `soup.find('a').get_text()`.

- Regular expressions
The following finds every tag whose name starts with "b"; for example body, b, and b2 would all match:

```python
import re

for tag in soup.find_all(re.compile("^b")):
    print(tag.name)
```

And this finds every tag whose name contains "t":

```python
for tag in soup.find_all(re.compile("t")):
    print(tag.name)
```
- Class-based lookup

For markup like the following, you can search for the tag by class name and value:

```html
<p class="cname">
    <a target="_blank" title="萬(wàn)得信息技術(shù)股份有限公司(Wind資訊)">萬(wàn)得信息技術(shù)股份有限公司(Wind資訊)<em class="icon_b i_link"></em></a>
</p>
```

```python
soup.find('p', class_="cname").get_text().strip()
```
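Putting that together, a minimal self-contained check of the class lookup, using the fragment above as input:

```python
# -*- coding: utf-8 -*-
from bs4 import BeautifulSoup

html = '''
<p class="cname">
    <a target="_blank" title="萬(wàn)得信息技術(shù)股份有限公司(Wind資訊)">萬(wàn)得信息技術(shù)股份有限公司(Wind資訊)<em class="icon_b i_link"></em></a>
</p>
'''
soup = BeautifulSoup(html, "html5lib")
# Prints the company name, with surrounding whitespace stripped
print(soup.find('p', class_="cname").get_text().strip())
```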
## Adding request headers

Open Firefox, press F12, click the reload button, then click "Edit and Resend" next to it and copy the request headers.
Open settings.py in the project directory and add the following:
```python
DEFAULT_REQUEST_HEADERS = {
    'Host': 'jobs.51job.com',
    'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'zh-CN,en-US;q=0.7,en;q=0.3',
    'Accept-Encoding': 'gzip, deflate',
    'Cookie': 'guid=14676224113529970042; ps=us%3DWmZSMFUpBzYAZg99VzBXZw09U3wANAdmBTBVe1tgAjYPMVo5VDEDNlM3CGALZ1NpBD4BNAE1VXxWFQA4AHAEcVpg%26%7C%26nv_3%3D; 51job=cuid%3D67028249%26%7C%26to%3DDTMCblc1VGAAaQxiB2ZRawN8BWgHZFVpUidWcQtzUjcBZAUiBWZTYFc2WjYKbV1pAzFXYwQxB2M%253D%26%7C%26cusername%3Dleisurem%26%7C%26cpassword%3D%26%7C%26ccry%3D.02PaxjLxs3vQ%26%7C%26cresumeid%3D88438738%26%7C%26cresumeids%3D.0RBx5M27b7UU%257C%26%7C%26cname%3D%25C2%25ED%25CE%25C4%25BD%25DC%26%7C%26cemail%3Dleisurem%2540126.com%26%7C%26cemailstatus%3D3%26%7C%26cnickname%3D%26%7C%26cenglish%3D0%26%7C%26cautologin%3D1%26%7C%26sex%3D0%26%7C%26cconfirmkey%3DleDLv4rER1zlU%26%7C%26cnamekey%3DleVc9stx.CGiQ; slife=lastvisit%3D070200; search=jobarea%7E%60070200%7C%21ord_field%7E%600%7C%21recentSearch0%7E%601%A1%FB%A1%FA070200%2C00%A1%FB%A1%FA000000%A1%FB%A1%FA0000%A1%FB%A1%FA00%A1%FB%A1%FA9%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA%C4%A3%D0%CD+++%BB%D8%B2%E2%A1%FB%A1%FA2%A1%FB%A1%FA%A1%FB%A1%FA-1%A1%FB%A1%FA1469523105%A1%FB%A1%FA0%A1%FB%A1%FA%A1%FB%A1%FA%7C%21recentSearch1%7E%601%A1%FB%A1%FA070200%2C00%A1%FB%A1%FA000000%A1%FB%A1%FA0000%A1%FB%A1%FA00%A1%FB%A1%FA9%A1%FB%A1%FA99%A1%FB%A1%FA07%2C08%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA07%2C06%2C05%2C04%2C03%A1%FB%A1%FApython++++%B9%C9%C6%B1%A1%FB%A1%FA2%A1%FB%A1%FA%A1%FB%A1%FA-1%A1%FB%A1%FA1469518712%A1%FB%A1%FA0%A1%FB%A1%FA%A1%FB%A1%FA%7C%21recentSearch2%7E%601%A1%FB%A1%FA070200%2C00%A1%FB%A1%FA000000%A1%FB%A1%FA0000%A1%FB%A1%FA00%A1%FB%A1%FA9%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA07%2C06%2C05%2C04%2C03%A1%FB%A1%FApython++++%B9%C9%C6%B1%A1%FB%A1%FA2%A1%FB%A1%FA%A1%FB%A1%FA-1%A1%FB%A1%FA1469518699%A1%FB%A1%FA0%A1%FB%A1%FA%A1%FB%A1%FA%7C%21recentSearch3%7E%601%A1%FB%A1%FA070200%2C00%A1%FB%A1%FA000000%A1%FB%A1%FA0000%A1%FB%A1%FA00%A1%FB%A1%FA9%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FApython++++%B9%C9%C6%B1%A1%FB%A1%FA2%A1%FB%A1%FA%A1%FB%A1%FA-1%A1%FB%A1%FA1469518681%A1%FB%A1%FA0%A1%FB%A1%FA%A1%FB%A1%FA%7C%21recentSearch4%7E%601%A1%FB%A1%FA070200%2C00%A1%FB%A1%FA000000%A1%FB%A1%FA0000%A1%FB%A1%FA00%A1%FB%A1%FA9%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FA99%A1%FB%A1%FApython++%C1%BF%BB%AF%A1%FB%A1%FA2%A1%FB%A1%FA%A1%FB%A1%FA-1%A1%FB%A1%FA1469518666%A1%FB%A1%FA0%A1%FB%A1%FA%A1%FB%A1%FA%7C%21collapse_expansion%7E%601%7C%21; nsearch=jobarea%3D%26%7C%26ord_field%3D%26%7C%26recentSearch0%3D%26%7C%26recentSearch1%3D%26%7C%26recentSearch2%3D%26%7C%26recentSearch3%3D%26%7C%26recentSearch4%3D%26%7C%26collapse_expansion%3D',
    'Connection': 'keep-alive',
    'Cache-Control': 'max-age=0',
}
```
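Project-wide defaults like this apply to every request. If you ever need headers on a single request instead, Scrapy's Request also takes a headers argument; a small sketch (the Referer value here is just an illustrative assumption):

```python
import scrapy

# Inside the spider class: headers set on a Request take precedence over
# DEFAULT_REQUEST_HEADERS from settings.py
def start_requests(self):
    yield scrapy.Request(
        'http://jobs.51job.com/nanjing/76840759.html?s=0',
        headers={'Referer': 'http://search.51job.com/'},  # assumed value
        callback=self.parse,
    )
```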
## Defining the item

Open items.py and add the following code to define the scraped fields. For now there are six: pay, job title, job description, company type, company scale, and company industry.
```python
from scrapy import Field, Item


class TutorialItem(Item):
    job_pay = Field()
    job_title = Field()
    job_describe = Field()
    company_type = Field()
    company_scale = Field()
    company_industry = Field()
```
## The URL extractor

Because URL extraction was somewhat slow, I rewrote the URL extractor.
Create buildlink.py under tutorial and implement the extractor there.
The main idea is to first work out the list-page URL pattern for the target district and the number of positions updated there that day, then generate the URLs directly, rather than parsing the links out of each job-list page one page at a time.
For example, http://search.51job.com/list/070200,070211,0000,00,9,99,%2B,2,1.html is one job-list page for a district. From this page the program can parse how many list pages there are in total, then substitute the trailing 1 to generate every list-page URL for that district.
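A minimal sketch of that generation step, assuming the total page count has already been parsed from the first list page (the names here are illustrative, not the original buildlink.py):

```python
# buildlink.py -- illustrative sketch, not the original implementation.

# One district's list-page URL, with the trailing page number as a placeholder.
LIST_URL = 'http://search.51job.com/list/070200,070211,0000,00,9,99,%2B,2,{0}.html'


def build_links(total_pages):
    """Generate every list-page URL by substituting page numbers 1..total_pages."""
    return [LIST_URL.format(page) for page in range(1, total_pages + 1)]


# Example: build_links(87) if the pager on page 1 reports 87 pages in total
```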
## Running the spider

With the URL extractor wired into the spider, the crawler can be run, outputting JSON. The modified code is below.
```python
import scrapy
from bs4 import BeautifulSoup

from tutorial.items import TutorialItem


class DmozSpider(scrapy.spiders.Spider):
    name = "51job"
    allowed_domains = ["51job.com"]
    start_urls = [
        'http://jobs.51job.com/nanjing-xwq/77959226.html?s=0',
    ]

    def parse(self, response):
        item = TutorialItem()
        soup = BeautifulSoup(response.body, "html5lib")
        item['job_title'] = soup.find('h1').get('title')
        item['job_pay'] = soup.find('div', class_="cn").strong.get_text().strip()
        item['job_describe'] = soup.find(
            'div', class_="bmsg job_msg inbox").get_text().split()[1]
        # The "type | scale | industry" line is pipe-separated; split and strip it
        item['company_type'], item['company_scale'], item['company_industry'] = [
            x.strip() for x in soup.find('p', class_="msg ltype").get_text().split('|')]
        yield item
```
Run it with:

```
scrapy crawl 51job -o items.json
```
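One note: by default Scrapy's JSON exporter escapes non-ASCII characters, so the Chinese fields in items.json show up as \uXXXX sequences. If your Scrapy version is 1.2 or later, this setting in settings.py keeps them readable:

```python
FEED_EXPORT_ENCODING = 'utf-8'
```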
## Future work

Per the crawler design outline, the spider will now run for a while. Along the way, some problems are unavoidable:

- logging in and obtaining cookies
- captchas that come with logging in
- URL deduplication for pages to be crawled, which may involve a Bloom filter
- detecting duplicate pages among those to be parsed
- further optimizing data storage
- adding image storage
- handling JavaScript-rendered pages