Scraping Job Listings from Lagou.com

Analyzing the Page

Viewing the page source in the browser turns up no job information, so we open the F12 developer tools and capture the network traffic to see how the job data gets loaded into the page. The capture shows that the job data is loaded asynchronously by JavaScript and lives in the JSON of an XHR response. We can therefore analyze that Ajax request, copy its headers, and scrape the data ourselves.

Packet capture screenshot
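As the capture shows, the XHR response nests the listings under content → positionResult → result. A minimal sketch of pulling them out of such a payload (the sample field values here are invented for illustration):

```python
import json

# A trimmed-down sample of the XHR response body (field values invented for illustration)
sample = json.loads('''
{
  "content": {
    "positionResult": {
      "result": [
        {"positionName": "Python Developer", "salary": "15k-25k", "city": "Guangzhou"}
      ]
    }
  }
}
''')

# Drill down to the list of job dicts, exactly as the crawler below does
jobs = sample['content']['positionResult']['result']
for job in jobs:
    print(job['positionName'], job['salary'])
```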

Scraping Approach

  1. Capture the request headers, use requests to fetch the returned JSON, then inspect and process it to extract the job information
  2. Write the scraped information to an Excel file

Code Implementation

  1. Capture the request headers, use requests to fetch the returned JSON, then inspect and process it to extract the job information
import requests

def get_job_list(data):
    url = 'https://www.lagou.com/jobs/positionAjax.json?city=%E5%B9%BF%E5%B7%9E&' \
          'needAddtionalResult=false&isSchoolJob=0'
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64) '
                      'AppleWebKit/537.36 (KHTML, like Gecko) '
                      'Chrome/50.0.2661.102 Safari/537.36',
        'Referer': 'https://www.lagou.com/jobs/list_python?'
                   'city=%E5%B9%BF%E5%B7%9E&cl=false&fromSearch=true&labelWords=&suginput=',
        'Cookie': 'user_trace_token=20170828211503-'
                  'e7456f80-8bf2-11e7-8a6b-525400f775ce; '
                  'LGUID=20170828211503-e74571a4-8bf2-11e7-8a6b-525400f775ce; '
                  'index_location_city=%E5%B9%BF%E5%B7%9E; '
                  'Hm_lvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1503926479,1503926490,'
                  '1503926505,1505482427; Hm_lpvt_4233e74dff0ae5bd0a3d81c6ccf756e6'
                  '=1505482854; LGRID=20170915214052-7da8c649-9a1b-11e7-94ae-525400f775ce; '
                  '_ga=GA1.2.1531463847.1503926105; _gid=GA1.2.1479848580.1505482430; '
                  'Hm_lvt_9d483e9e48ba1faa0dfceaf6333de846=1503926253,1503926294,'
                  '1503926301,1505482781; Hm_lpvt_9d483e9e48ba1faa0dfceaf6333de846='
                  '1505482855; TG-TRACK-CODE=search_code; '
                  'JSESSIONID=ABAAABAACBHABBIEDE54BE195ADCD6F900E8C2AE4DE5008; '
                  'SEARCH_ID=1d353f6b121b419eaa0e511784e0042e'
    }
    response = requests.post(url, data=data, headers=headers)
    jobs = response.json()['content']['positionResult']['result']
    return jobs
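Indexing straight into ['content'] raises a KeyError if the site ever returns an error payload instead of job data (for example, when it throttles the crawler). A defensive variant could fall back to an empty list; the error-shaped payload below is hypothetical, for illustration only:

```python
def extract_jobs(payload):
    # Return the job list, or an empty list if the expected keys are missing
    return (payload.get('content', {})
                   .get('positionResult', {})
                   .get('result', []) or [])

# Two illustrative payloads: one with the expected shape, one error-shaped (hypothetical)
ok = {'content': {'positionResult': {'result': [{'positionName': 'Python Dev'}]}}}
err = {'success': False, 'msg': 'too many requests'}

print(len(extract_jobs(ok)))   # 1
print(len(extract_jobs(err)))  # 0
```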
  2. Write the scraped information to an Excel sheet
def excel_write(wb, style, jobs, name):
    ws = wb.add_sheet(name)
    headdata = ['positionName', 'salary', 'city', 'district', 'workYear',
                'education', 'companyFullName']
    datadict = {0: 'positionName', 1: 'salary', 2: 'city', 3: 'district',
                4: 'workYear', 5: 'education', 6: 'companyFullName'}
    for i in range(7):
        ws.write(0, i, headdata[i], style)
    index = 1
    for job in jobs:
        for j in range(7):
            ws.write(index, j, job[datadict[j]])
        index += 1
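Since headdata and datadict hold the same seven field names, the row-writing logic can be driven by a single list. A sketch of that same logic with the spreadsheet writing factored out (the sample job values are invented for illustration):

```python
fields = ['positionName', 'salary', 'city', 'district',
          'workYear', 'education', 'companyFullName']

def rows_for(jobs):
    # Yield the header row, then one row of field values per job
    yield fields
    for job in jobs:
        yield [job.get(f, '') for f in fields]

# Illustrative job dict with the fields the crawler keeps
jobs = [{'positionName': 'Python Dev', 'salary': '15k-25k', 'city': 'Guangzhou',
         'district': 'Tianhe', 'workYear': '3-5年', 'education': '本科',
         'companyFullName': 'Example Ltd'}]
for row in rows_for(jobs):
    print(row)
```

Each yielded row can then be written out with `ws.write(row_index, col_index, value)` as in the function above.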
  3. Full scraper source
#!/usr/bin/env python3.6
# coding:utf-8
# @Author : Natsume
# @Filename : lagoujob.py
'''
@Description:
Lagou.com job-listing crawler; change the POST data parameters to scrape listings for any position.
'''
import requests
import xlwt
import time


# Fetch the JSON data and extract the job listings
def get_job_list(data):
    url = 'https://www.lagou.com/jobs/positionAjax.json?' \
          'city=%E5%B9%BF%E5%B7%9E&needAddtionalResult=false&isSchoolJob=0'
    # Request headers captured from the browser's developer tools
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64) '
                      'AppleWebKit/537.36 (KHTML, like Gecko) '
                      'Chrome/50.0.2661.102 Safari/537.36',
        'Referer': 'https://www.lagou.com/jobs/list_python?'
                   'city=%E5%B9%BF%E5%B7%9E&cl=false&fromSearch=true&labelWords=&suginput=',
        'Cookie': 'user_trace_token=20170828211503-'
                  'e7456f80-8bf2-11e7-8a6b-525400f775ce; '
                  'LGUID=20170828211503-e74571a4-8bf2-11e7-8a6b-525400f775ce; '
                  'index_location_city=%E5%B9%BF%E5%B7%9E; '
                  'Hm_lvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1503926479,1503926490,'
                  '1503926505,1505482427; Hm_lpvt_4233e74dff0ae5bd0a3d81c6ccf756e6'
                  '=1505482854; LGRID=20170915214052-7da8c649-9a1b-11e7-94ae-525400f775ce; '
                  '_ga=GA1.2.1531463847.1503926105; _gid=GA1.2.1479848580.1505482430; '
                  'Hm_lvt_9d483e9e48ba1faa0dfceaf6333de846=1503926253,1503926294,'
                  '1503926301,1505482781; Hm_lpvt_9d483e9e48ba1faa0dfceaf6333de846='
                  '1505482855; TG-TRACK-CODE=search_code; '
                  'JSESSIONID=ABAAABAACBHABBIEDE54BE195ADCD6F900E8C2AE4DE5008; '
                  'SEARCH_ID=1d353f6b121b419eaa0e511784e0042e'
    }
    response = requests.post(url, data=data, headers=headers)
    jobs = response.json()['content']['positionResult']['result']   # extract the job listings
    return jobs


# Write the scraped job information to an Excel sheet
def excel_write(wb, style, jobs, name):
    ws = wb.add_sheet(name)
    headdata = ['positionName', 'salary', 'city', 'district', 'workYear',
                'education', 'companyFullName']
    datadict = {0: 'positionName', 1: 'salary', 2: 'city', 3: 'district',
                4: 'workYear', 5: 'education', 6: 'companyFullName'}
    for i in range(7):                       # write the header row
        ws.write(0, i, headdata[i], style)
    index = 1
    for job in jobs:                         # one row per job
        for j in range(7):
            ws.write(index, j, job[datadict[j]])
        index += 1


# Set the font style used when writing the header row
def set_style():
    style = xlwt.XFStyle()
    font = xlwt.Font()
    font.bold = True
    font.italic = False
    font.name = '宋體'
    style.font = font
    return style


# Build the POST form data: pn is the page number, kd the search keyword
def get_data(i, x):
    data = {
        'first': 'false',
        'pn': str(i),
        'kd': x
    }
    return data

# Entry point: scrape pages 1-8 and save one sheet per page
if __name__ == '__main__':
    wb = xlwt.Workbook(encoding='utf-8')
    style = set_style()
    kd = 'python'
    for i in range(1, 9):
        data = get_data(i, kd)
        jobs = get_job_list(data)
        excel_write(wb, style, jobs, data['kd'] + str(i))
        time.sleep(1)                        # pause between requests
        print(i)                             # progress indicator
    savepath = 'D:/pythonjob/{}拉勾網(wǎng).xls'.format(kd)
    wb.save(savepath)
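The main loop pages through results by varying pn from 1 to 8 while kd stays fixed at the search keyword. The source always sends first='false'; in typical browser captures the very first page sends first='true' instead, so a variant reflecting that (an assumption, not taken from the original) might look like:

```python
def get_data(page, keyword):
    # Form fields for the positionAjax endpoint:
    #   first - 'true' on page 1, 'false' afterwards (assumed from browser captures)
    #   pn    - page number, sent as a string
    #   kd    - the search keyword
    return {'first': 'true' if page == 1 else 'false',
            'pn': str(page),
            'kd': keyword}

print(get_data(1, 'python'))
print(get_data(3, 'python'))
```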

Scraping Results
