Scraping JD.com Product Information

Using BeautifulSoup + Requests to scrape JD.com product listings and save them to an Excel file.

1. Inspecting the page

Open JD.com and search for any product; let's go with solid-state drives (固态硬盘).


First, look at the URL pattern: the keyword we typed appears right after the keyword parameter.


Deleting some of the trailing parameters doesn't affect the request at all, so the URL can be simplified to this:


Copy the link: the simplified URL turns out to be https://search.jd.com/Search?keyword=%E5%9B%BA%E6%80%81%E7%A1%AC%E7%9B%98&enc=utf-8
The "固态硬盘" after keyword has become this:

%E5%9B%BA%E6%80%81%E7%A1%AC%E7%9B%98

That's because Chinese characters in a URL are percent-encoded as UTF-8: each character becomes three bytes, and each byte is prefixed with a % sign. Encoding and decoding work like this:

>>> import urllib.parse
>>> urllib.parse.unquote('%E5%9B%BA%E6%80%81%E7%A1%AC%E7%9B%98')
'固态硬盘'
>>> urllib.parse.quote('固态硬盘')
'%E5%9B%BA%E6%80%81%E7%A1%AC%E7%9B%98'
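To see the three-bytes-per-character rule concretely, we can check the raw UTF-8 bytes of the first character (just an illustration):

```python
import urllib.parse

# '固' is three bytes in UTF-8; quote() writes each byte as %XX.
raw = '固'.encode('utf-8')
print(raw)                        # b'\xe5\x9b\xba'
print(urllib.parse.quote('固'))   # %E5%9B%BA
```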

With that, we can write a function that builds the search URL for a product name:

def get_good_url(word):
    url_str = urllib.parse.quote(word)
    url = "https://search.jd.com/Search?keyword={}&enc=utf-8".format(url_str)
    return url
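A quick sanity check of the function (re-defined here so the snippet runs on its own):

```python
import urllib.parse

def get_good_url(word):
    # quote() percent-encodes the keyword so it is safe to embed in the URL
    url_str = urllib.parse.quote(word)
    url = "https://search.jd.com/Search?keyword={}&enc=utf-8".format(url_str)
    return url

print(get_good_url('固态硬盘'))
# https://search.jd.com/Search?keyword=%E5%9B%BA%E6%80%81%E7%A1%AC%E7%9B%98&enc=utf-8
```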

2. Scraping the information

Now let's look at the product information we want to scrape.
Select a product, right-click, and choose Inspect.



Inspecting a second product confirms the pattern: every product's information lives inside the class shown below. So we could use BeautifulSoup's find_all(class_="gl-i-wrap") to collect all the products into a list, then pull each product's fields out of it.



But expanding the tag shows, as in the screenshot below, that each field we want to scrape sits in its own consistently named tag.
So instead we can first collect all the names, prices, commits, and images into four lists, then pair them up element by element.
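As a self-contained illustration of the four-lists-plus-zip idea, here is the same extraction run on a tiny handwritten fragment that mimics one entry of the listing (the markup below is a simplified stand-in; the real page has more nesting):

```python
from bs4 import BeautifulSoup

# Simplified stand-in for a single search-result entry on the listing page.
html = '''
<div class="gl-i-wrap">
  <div class="p-img"><a href="//item.jd.com/1.html"><img src="//img.jd.com/1.jpg"></a></div>
  <div class="p-price">¥469.00</div>
  <div class="p-name p-name-type-2">Sample SSD 120G</div>
  <div class="p-commit">64000+ reviews</div>
</div>
'''

soup = BeautifulSoup(html, 'html.parser')   # 'lxml' also works if installed
titles = soup.find_all(class_="p-name p-name-type-2")
prices = soup.find_all(class_="p-price")
commits = soup.find_all(class_="p-commit")
imgs = soup.find_all(class_="p-img")

# zip() pairs the nth title with the nth price, commit, and image block
for title, price, commit, img in zip(titles, prices, commits, imgs):
    data = {
        'title': title.text.strip(),
        'price': price.text.strip(),
        'commit': commit.text.strip(),
        'link': img.find_all('a')[0].get('href'),
        'img': img.find_all('img')[0].get('src'),
    }
    print(data)
```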

3. Putting the code together

import requests
from bs4 import BeautifulSoup
import urllib.parse


headers = {                            # request headers to masquerade as a browser
    "User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36"
    }

def get_good_url(word):
    url_str = urllib.parse.quote(word)
    url = "https://search.jd.com/Search?keyword={}&enc=utf-8".format(url_str)
    return url


def get_html(url):
    html = requests.get(url, headers=headers)
    html.encoding = html.apparent_encoding    # apparent_encoding is the encoding inferred from the response content; here it ends up 'utf-8'
    soup = BeautifulSoup(html.text, 'lxml')
    return soup

#all_goods = soup.find_all(class_='gl-i-wrap')  # the alternative approach: grab each whole product block


def get_info(soup):
    titles = soup.find_all(class_="p-name p-name-type-2")
    prices = soup.find_all(class_="p-price")
    commits = soup.find_all(class_="p-commit")
    imgs = soup.find_all(class_="p-img")

    for title, price, commit, img in zip(titles, prices, commits, imgs):
        data = {
            'title' :   title.text.strip(),
            'price' :   price.text.strip(),
            'commit':   commit.text.strip(),
            'link'  :   img.find_all('a')[0].get("href"),
            'img'   :   img.find_all('img')[0].get("src")
            }
        print(data)

if __name__ == '__main__':
    good = input("Enter the product to search for\n")
    link = get_good_url(good)
    html = get_html(link)
    get_info(html)

Let's run it and see:

>>> 
=================== RESTART: C:/Users/Why Me/Desktop/jd.py ===================
Enter the product to search for
固态硬盘
{'commit': '已有6.4万+人评价', 'link': '//item.jd.com/2010277.html', 'price': '¥469.00', 'title': '三星(SAMSUNG) 750 EVO 120G SATA3 固态硬盘', 'img': '//img12.360buyimg.com/n7/jfs/t2212/266/1035221213/221087/773b0946/563977acNf0e20fa1.jpg'}
{'commit': '已有6.9万+人评价', 'link': '//item.jd.com/1279827.html', 'price': '¥699.00', 'title': '三星(SAMSUNG) 850 EVO 250G SATA3 固态硬盘', 'img': '//img12.360buyimg.com/n7/jfs/t3346/324/399270074/297766/3973b0ec/5809a884N64b7c922.jpg'}
{'commit': '已有7.4万+人评价', 'link': '//item.jd.com/2010278.html', 'price': '¥669.00', 'title': '三星(SAMSUNG) 750 EVO 250G SATA3 固态硬盘', 'img': '//img13.360buyimg.com/n7/jfs/t1927/358/970997561/221087/773b0946/563977f8Nfc78217b.jpg'}
{'commit': '已有10万+人评价', 'link': '//item.jd.com/779351.html', 'price': '¥419.00', 'title': '金士顿(Kingston)V300 120G SATA3 固态硬盘', 'img': '//img11.360buyimg.com/n7/jfs/t3631/219/2161004093/156337/8219df07/584623caNc6709dd6.jpg'}
{'commit': '已有434967人评价', 'link': '//item.jd.com/1652127.html', 'price': '¥', 'title': '金士顿(Kingston)DDR3 1600 4G台式机内存+V300 120G 固态硬盘套装', 'img': '//img12.360buyimg.com/n7/jfs/t1291/10/518608285/159481/aa443498/557ff074N2fb18be7.jpg'}
...

Try searching for another product:


QAQ

4. Saving the data

The data will be easier to analyze saved in Excel.
Here we use xlsxwriter.

import requests
from bs4 import BeautifulSoup
import urllib.parse
import xlsxwriter


headers = {
    "User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36"
    }

def get_good_url(word):
    url_str = urllib.parse.quote(word)
    url = "https://search.jd.com/Search?keyword={}&enc=utf-8".format(url_str)
    return url


def get_html(url):
    html = requests.get(url, headers=headers)
    html.encoding = html.apparent_encoding
    soup = BeautifulSoup(html.text, 'lxml')
    return soup


def get_info(soup, good):
    titles = soup.find_all(class_="p-name p-name-type-2")
    prices = soup.find_all(class_="p-price")
    commits = soup.find_all(class_="p-commit")
    imgs = soup.find_all(class_="p-img")

    workbook = xlsxwriter.Workbook(good + '.xlsx') # create a new workbook
    worksheet = workbook.add_worksheet()

    bold = workbook.add_format({'bold': True})  # define a bold format

    worksheet.write('A1', 'Title', bold)        # write the header row in bold
    worksheet.write('B1', 'Price', bold)
    worksheet.write('C1', 'Commit', bold)
    worksheet.write('D1', 'Link', bold)
    worksheet.write('E1', 'Img', bold)

    worksheet.set_column('A:A', 100)            # set column widths
    worksheet.set_column('B:B', 10)
    worksheet.set_column('C:C', 18)
    worksheet.set_column('D:D', 27)
    worksheet.set_column('E:E', 100)
    
    row = 1
    col = 0
    
    for title, price, commit, img in zip(titles, prices, commits, imgs):
        data = {
            'title' :   title.text.strip(),
            'price' :   price.text.strip(),
            'commit':   commit.text.strip(),
            'link'  :   img.find_all('a')[0].get("href"),  # the link also lives inside the p-img tag
            'img'   :   img.find_all('img')[0].get("src")
            }
        #print(data)
        worksheet.write(row, col, data['title'])    # write the data rows
        worksheet.write(row, col+1, data['price'])
        worksheet.write(row, col+2, data['commit'])
        worksheet.write(row, col+3, data['link'])
        worksheet.write(row, col+4, data['img'])
        row += 1
        
    workbook.close()
    
        

if __name__ == '__main__':
    good = input("Enter the product to search for\n")
    link = get_good_url(good)
    html = get_html(link)
    get_info(html, good)
    

Let's run it and see:

>>> 
=================== RESTART: C:/Users/Why Me/Desktop/jd.py ===================
Enter the product to search for
固态硬盘

Warning (from warnings module):
  File "D:\python3.52\lib\site-packages\xlsxwriter\worksheet.py", line 830
    force_unicode(url))
UserWarning: Ignoring URL 'https://ccc-x.jd.com/dsp/nc?ext=Y2xpY2sueC5qZC5jb20vSmRDbGljay8_eHVpZD01MjAwNyZ4c2l0ZWlkPTEwOTk1NDM5OF8xNDc2JnRvPWh0dHA6Ly9pdGVtLmpkLmNvbS8xNTkyNDQ4Lmh0bWw&log=7PpBMf6t87I6oM0VLPwEmWrd2SgyaWJjj6EC4vYhuh7iCsttJfv9TDfcAgTKqWbCLLeI1dEGfC09SoPIvPAKj4Xtbv-6jnX-qAWZKz46GdiJJNV2ZU3OWox54fbLzZ-TRTooveAkSRdWyaH0DE4M3DwxQts4PxqUQiiov99E20WKCLFpu4ncy0V6NR8PfTloBPGVKTUkAjLHnqzQzO0rb_ok9tZBsyXLPRoNUiqZcvB9ajEs8Zb6BCtHCzu5QDmD-yiaD25Tm_eS4DgkfGayyFFoMGx_y6FyO2E1zbDIUNcoF5G4ON1xMOaPciH2CptI6XSdUF8ViyV9SmzCEykWUrD9i2Ne0oi0qMyZNfsoDpHAx6f4UCdEHMfwu45XisbAnfj21UjheU7tzM3KuWk_0OLH-J77gHlUyuX72psI4dyUKGyEyYGgswvn_bLD3DX3&v=404' with link or location/anchor > 255 characters since it exceeds Excel's limit for URLS

Warning (from warnings module):
  File "D:\python3.52\lib\site-packages\xlsxwriter\worksheet.py", line 830
    force_unicode(url))
UserWarning: Ignoring URL 'https://ccc-x.jd.com/dsp/nc?ext=Y2xpY2sueC5qZC5jb20vSmRDbGljay8_eHVpZD01MjAwNyZ4c2l0ZWlkPTEwOTUzMzAwNF8xNDc2JnRvPWh0dHA6Ly9pdGVtLmpkLmNvbS80MTczODY0Lmh0bWw&log=7PpBMf6t87I6oM0VLPwEmWrd2SgyaWJjj6EC4vYhuh5LVKlnoUwiKskX7yp59hsaYbRCZqHPA7of0ku0pKD8yyMlENlDBmmWbYQSf5iudST1aW-kq4LWnzYSiXwquGa-lI_ZpBv3PQD6U_UWdQyYDLMCQ5bmriNRaHFpJosmkQU7RG-rXJZ98TaN_snWQixVUiEHC46VwrN9PqHlvkNnXAS-rvvda-_qloIbofbme2FqWymvkxzSlLYqS73YOQuiH4ugaFGdNOaP94Wt3MTWT5rkJfrZMWr33qDLS3JBvTa1tewqA8EbImCHaNbUT9tCbkEngyIMMT5emd-Q-GrEVwFHBSWTxhne-aSWEDzCR76612OabK1mfCVrtQefrh0I96hinm5qsYkb751issutBi9Yd325l7JJA3-0eLou0lw&v=404' with link or location/anchor > 255 characters since it exceeds Excel's limit for URLS
>>> 

We get warnings: it turns out some links exceed Excel's 255-character limit for hyperlink cells, so xlsxwriter skips them. Not a big deal.
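Incidentally, these warnings can be avoided: xlsxwriter's Workbook constructor accepts the option {'strings_to_urls': False}, which stores every string as plain text instead of a hyperlink. Alternatively, a small per-cell guard works; the helper below is a sketch I'm adding here, not part of the original script:

```python
# Excel caps hyperlinks at 255 characters; anything longer must be stored
# as plain text (worksheet.write_string) rather than as a hyperlink.
EXCEL_URL_LIMIT = 255

def write_link(worksheet, row, col, url):
    """Write url as a hyperlink when Excel allows it, else as plain text."""
    if len(url) <= EXCEL_URL_LIMIT:
        worksheet.write(row, col, url)         # short enough: keep it clickable
    else:
        worksheet.write_string(row, col, url)  # too long: store as plain text
```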
Open the generated 固态硬盘.xlsx and take a look:


Not bad!
The price column has some gaps, though; we'll look into why later.
Also, this only scrapes one page of products. What about more pages, or all of them? The first page is usually enough, but sometimes an analysis needs more.

5. Scraping multiple pages

Look at the URL again: the page field is what changes from page to page, and it increases by 2 each time. So we can build the URLs for every page we want to request:


def get_good_urls(word):
    url_str = urllib.parse.quote(word)
    urls = ("https://search.jd.com/Search?keyword={}&enc=utf-8&qrst=1&rt=1&stop=1&vt=2&offset=4&page={}&s=1&click=0".format(url_str, i) for i in range(1,12,2))
    return urls

When requesting many pages, a () generator expression saves memory; for a handful of pages a [] list is fine. Here range(1, 12, 2) yields page=1, 3, ..., 11, so we request the first six page values to try it out.
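The generator behavior and the page sequence can be verified quickly (the keyword here is arbitrary):

```python
import urllib.parse

def get_good_urls(word):
    url_str = urllib.parse.quote(word)
    # () makes this a generator: URLs are produced lazily, one per iteration
    urls = ("https://search.jd.com/Search?keyword={}&enc=utf-8&qrst=1&rt=1&stop=1&vt=2&offset=4&page={}&s=1&click=0".format(url_str, i) for i in range(1, 12, 2))
    return urls

urls = get_good_urls('ssd')
print(type(urls).__name__)                                 # generator
pages = [u.split('page=')[1].split('&')[0] for u in urls]
print(pages)                                               # ['1', '3', '5', '7', '9', '11']
```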

Since xlsxwriter can't read back what it has already written, all the data has to be written in one pass. Here's a rather crude first approach, again adapted from the code above:

import requests
from bs4 import BeautifulSoup
import urllib.parse
import xlsxwriter


headers = {
    "User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36"
    }


def get_good_urls(word):
    url_str = urllib.parse.quote(word)
    urls = ("https://search.jd.com/Search?keyword={}&enc=utf-8&qrst=1&rt=1&stop=1&vt=2&offset=4&page={}&s=1&click=0".format(url_str, i) for i in range(1,12,2))
    return urls


def get_html(url):
    html = requests.get(url, headers=headers)
    html.encoding = html.apparent_encoding
    soup = BeautifulSoup(html.text, 'lxml')
    return soup


def get_info(soup):
    all_titles = soup.find_all(class_="p-name p-name-type-2")
    all_prices = soup.find_all(class_="p-price")
    all_commits = soup.find_all(class_="p-commit")
    all_imgs = soup.find_all(class_="p-img")

    titles = []
    prices = []
    commits = []
    links = []
    imgs = []
    
    for title in all_titles:
        titles.append(title.text.strip())

    for price in all_prices:
        prices.append(price.text.strip())
        
    for commit in all_commits:
        commits.append(commit.text.strip())

    for link in all_imgs:
        links.append(link.find_all('a')[0].get("href"))

    for img in all_imgs:
        imgs.append(img.find_all('img')[0].get("src"))

    return titles, prices, commits, links, imgs

if __name__ == '__main__':
    good = input("Enter the product to search for\n")
    links = get_good_urls(good)

    workbook = xlsxwriter.Workbook(good + '.xlsx') # create a new workbook
    worksheet = workbook.add_worksheet()

    bold = workbook.add_format({'bold': True})  # define a bold format

    worksheet.write('A1', 'Title', bold)        # write the header row in bold
    worksheet.write('B1', 'Price', bold)
    worksheet.write('C1', 'Commit', bold)
    worksheet.write('D1', 'Link', bold)
    worksheet.write('E1', 'Img', bold)

    worksheet.set_column('A:A', 100)            # set column widths
    worksheet.set_column('B:B', 10)
    worksheet.set_column('C:C', 18)
    worksheet.set_column('D:D', 27)
    worksheet.set_column('E:E', 100)

    all_row = 1
    col = 0
    
    for link in links:
        html = get_html(link)
        ti, pr, co, li, im = get_info(html)

        row = all_row
        for t in ti:
            worksheet.write(row, col, t)
            row += 1

        row = all_row
        for p in pr:
            worksheet.write(row, col+1, p)
            row += 1

        row = all_row
        for c in co:
            worksheet.write(row, col+2, c)
            row += 1

        row = all_row     
        for l in li:
            worksheet.write(row, col+3, l)
            row += 1

        row = all_row     
        for i in im:
            worksheet.write(row, col+4, i)
            row += 1

        all_row += len(ti)
        print('Done One page')
    workbook.close()

It looks clumsy, but let's make do and try it:

>>> 
================== RESTART: C:/Users/Why Me/Desktop/爬京東2.py ==================
Enter the product to search for
固态硬盘

Warning (from warnings module):
  File "D:\python3.52\lib\site-packages\xlsxwriter\worksheet.py", line 830
    force_unicode(url))
UserWarning: Ignoring URL 'https://ccc-x.jd.com/dsp/nc?ext=Y2xpY2sueC5qZC5jb20vSmRDbGljay8_eHVpZD01MjAwNyZ4c2l0ZWlkPTE1MDEyNzQ1XzE0NzYmdG89aHR0cDovL2l0ZW0uamQuY29tLzM1MDA5NzQuaHRtbA&log=X8iXmZwdy8FrP784YxabEBovMCmgCc1tSMJf40elIqO5X09xjWDJrwbXJgDIu--hzdqLCdWvtuXToxiOC6fwtcQocJezn7MF1BIQ-O71yq2ZnJeNEqSqI6t6pJSSKmrbg3ZKkm-z_YHe04MrG_t1MSxvxPJqBTA8PpsJ3qhLXI3GZDAzT_vDqKnbr52l80NutEulONu-sKe5XxVPpIIZiDu8_PE1aXPJvRwC9EFb7VjlDw1FkOyc6ZgclyhIpWq-hEA3zNiKa7shBoDdCgprkm3a_RpUBhg7ak96p9XdlRS5gwK2cN-ByQ5DFYjCtzs4jo2x5HUShAcp74TdTpSgaiOMh4xwPqtE1Fs30VifVN5RvdNTxcGnbFsS_1MhfijzrJNMmuGMA3d1KN68w1cqPOqlN-o68u0Id4Wzt85e5Chc9EWXjZJVeOZdjgMRd1reOw657DT_zkQfWYkDGvlzjA&v=404' with link or location/anchor > 255 characters since it exceeds Excel's limit for URLS
...
...
...
Done One page

Same warnings as before; ignore them and check the result.
Still pretty good. I copied a few of the link URLs to verify that the rows line up correctly, and found no mismatches. OK.

6. Optimization

In the resulting Excel file, the price column is missing some values, and neither BeautifulSoup's CSS selectors nor re can find the missing data in what we fetched. Checking the page source for one of the missing entries,
the data does exist in the source...


...but it's gone from the response we get back through requests.


So the prices must be loaded via Ajax, which leaves two options:
one, find the request that fetches the prices (it should return a JSON string, which we then parse); two, mimic a browser with selenium.
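Had such a price request turned up, option one would have been simple: fetch it and parse the JSON. Purely as a sketch, with a made-up response shape that is not JD's actual API:

```python
import json

# Hypothetical body a per-SKU price endpoint might return.
response_text = '[{"id": "J_2010277", "p": "469.00"}]'

# Map SKU id -> price string for easy lookup when filling the table.
prices = {item['id']: item['p'] for item in json.loads(response_text)}
print(prices)   # {'J_2010277': '469.00'}
```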
Let's look through the XHR requests:



No related request shows up, so we'll go with the second option.
We could use the selenium library on its own, but since everything above is requests + BeautifulSoup, we'll use selenium + BeautifulSoup here, so the code above needs only minor changes. (Note that PhantomJS has since been deprecated; with newer Selenium releases you would use headless Chrome or Firefox instead.)

from bs4 import BeautifulSoup
import urllib.parse
import xlsxwriter
from selenium import webdriver


def get_good_urls(word):
    url_str = urllib.parse.quote(word)
    urls = ("https://search.jd.com/Search?keyword={}&enc=utf-8&qrst=1&rt=1&stop=1&vt=2&offset=4&page={}&s=1&click=0".format(url_str, i) for i in range(1,12,2))
    return urls


def get_html(url):
    driver = webdriver.PhantomJS()
    driver.get(url)
    web_data = driver.page_source
    driver.quit()    # quit the browser; a new one is spawned for every page
    soup = BeautifulSoup(web_data, 'lxml')
    return soup

def get_info(soup):
    all_titles = soup.find_all(class_="p-name p-name-type-2")
    all_prices = soup.find_all(class_="p-price")
    all_commits = soup.find_all(class_="p-commit")
    all_imgs = soup.find_all(class_="p-img")

    titles = []
    prices = []
    commits = []
    links = []
    imgs = []

    for title in all_titles:
        titles.append(title.text.strip())

    for price in all_prices:
        prices.append(price.text.strip())

    for commit in all_commits:
        commits.append(commit.text.strip())

    for link in all_imgs:
        links.append(link.find_all('a')[0].get("href"))

    for img in all_imgs:
        imgs.append(img.find_all('img')[0].get("src"))

    return titles, prices, commits, links, imgs

if __name__ == '__main__':
    good = input("Enter the product to search for\n")
    links = get_good_urls(good)

    workbook = xlsxwriter.Workbook(good + '.xlsx') # create a new workbook
    worksheet = workbook.add_worksheet()

    bold = workbook.add_format({'bold': True})  # define a bold format

    worksheet.write('A1', 'Title', bold)        # write the header row in bold
    worksheet.write('B1', 'Price', bold)
    worksheet.write('C1', 'Commit', bold)
    worksheet.write('D1', 'Link', bold)
    worksheet.write('E1', 'Img', bold)

    worksheet.set_column('A:A', 100)            # set column widths
    worksheet.set_column('B:B', 10)
    worksheet.set_column('C:C', 18)
    worksheet.set_column('D:D', 27)
    worksheet.set_column('E:E', 100)

    all_row = 1
    col = 0

    for link in links:
        html = get_html(link)
        ti, pr, co, li, im = get_info(html)

        row = all_row
        for t in ti:
            worksheet.write(row, col, t)
            row += 1

        row = all_row
        for p in pr:
            worksheet.write(row, col+1, p)
            row += 1

        row = all_row
        for c in co:
            worksheet.write(row, col+2, c)
            row += 1

        row = all_row     
        for l in li:
            worksheet.write(row, col+3, l)
            row += 1

        row = all_row     
        for i in im:
            worksheet.write(row, col+4, i)
            row += 1

        all_row += len(ti)
        print('Done One page')
    workbook.close()

Not much has changed: the requests call is simply swapped for selenium. Run it and see.

ok!