Writing a crawler: scraping a product's monthly sales on Taobao

First of all, thanks to 小甲魚 (Xiaojiayu) for the course 極客Python之效率革命 ("Geek Python: The Efficiency Revolution"). It is clearly explained, easy to follow, and well suited for beginners.

If you are interested, you can visit https://fishc.com.cn/forum-319-1.html to support Xiaojiayu. Thank you, everyone.
To learn about the requests library, see: https://fishc.com.cn/forum.php?mod=viewthread&tid=95893&extra=page%3D1%26filter%3Dtypeid%26typeid%3D701

1. Find the target URL

https://s.taobao.com/search?q=XXXX<the product's name>XXXXXX

Let's first fetch the page source and take a look:

# -*- coding:UTF-8 -*-
import requests

def open_url(keyword):
    payload = {'q': keyword, "sort": "sale-desc"}  # use the search keyword; sort by sales, descending
    url = "https://s.taobao.com/search"
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.87 Safari/537.36",
    }
    res = requests.get(url, params=payload, headers=headers)
    return res

def main():
    keyword = input("Please enter a search keyword: ")
    res = open_url(keyword)

    with open('items.txt', 'w', encoding='utf-8') as file:
        file.write(res.text)

if __name__ == '__main__':
    main()
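Taobao sometimes answers with a login or verification page instead of real results, in which case the embedded data will be missing from what we save. A minimal sanity check is possible before writing items.txt; the helper name `looks_like_result_page` below is my own, not part of the original script:

```python
def looks_like_result_page(html):
    # Heuristic: a normal search result page embeds its data in a
    # JavaScript variable named g_page_config; a login or anti-bot
    # page does not contain that variable.
    return "g_page_config" in html
```

Calling this on `res.text` before writing the file avoids silently saving a useless page.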

Looking through the saved source, the content we want seems to be right here! Next, let's use a regular expression to cut that block out.


(screenshot: page source, 源碼.png)

2. Use a regular expression to locate the data

# -*- coding:UTF-8 -*-
import re

def main():
    with open("items.txt", 'r', encoding="utf-8") as file1:
        # re.search(pattern, string, flags=0)
        g_page_config = re.search(r"g_page_config = (.*?);\n", file1.read())  # .*? is non-greedy: it matches as few characters as possible while still letting the whole pattern succeed
        with open("g_page_config.txt", 'w', encoding="utf-8") as file2:
            file2.write(g_page_config.group(1))

if __name__ == '__main__':
    main()
(screenshot: content extracted by the regex, 正則摳出來的內(nèi)容.png)

There is still a lot of content: dictionaries nested inside dictionaries, which are nested inside yet more dictionaries. Headache. What now?
The usual trick: change the file extension to .json and open it in Firefox, which renders the JSON as a collapsible tree.
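Firefox works well, but you can also pretty-print the blob directly in Python with the standard json module. This `prettify` helper is my own sketch, not part of the original script:

```python
import json

def prettify(blob):
    # Parse the extracted g_page_config string and re-serialize it with
    # indentation, so the nested dictionaries become readable in any editor.
    # ensure_ascii=False keeps Chinese text readable instead of \uXXXX escapes.
    return json.dumps(json.loads(blob), ensure_ascii=False, indent=2)
```

For example, read g_page_config.txt, pass its contents through `prettify`, and write the result to a .json file.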


(screenshot: locating the fields in the JSON tree, 定位.png)

3. Extract the data we want (sorted by sales, summing all sales across the first 3 pages)

# -*- coding:UTF-8 -*-
import re
import json
import requests

def open_url(keyword, page=1):
    # s=0 starts the listing at the 1st item; a page holds 44 items, so s=44 is page 2
    payload = {'q': keyword, 's': str((page - 1) * 44), "sort": "sale-desc"}
    url = "https://s.taobao.com/search"
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.87 Safari/537.36",
    }
    res = requests.get(url, params=payload, headers=headers)
    return res


# Get all items on a results page
def get_items(res):
    g_page_config = re.search(r"g_page_config = (.*?);\n", res.text)
    page_config_json = json.loads(g_page_config.group(1))  # decode the JSON string into a Python object
    page_items = page_config_json['mods']['itemlist']['data']['auctions']

    results = []  # collect just the fields we care about
    for each_item in page_items:
        dict1 = dict.fromkeys(('nid', 'title', 'detail_url', 'view_price', 'view_sales', 'nick'))
        dict1['nid'] = each_item['nid']
        dict1['title'] = each_item['title']
        dict1['detail_url'] = each_item['detail_url']
        dict1['view_price'] = each_item['view_price']
        dict1['view_sales'] = each_item['view_sales']
        dict1['nick'] = each_item['nick']
        results.append(dict1)

    return results


# Sum the sales of all matching items on the page
def count_sales(items):
    count = 0
    for each in items:
        if '小甲魚' in each['title']:  # only count listings whose title mentions 小甲魚
            count += int(re.search(r'\d+', each['view_sales']).group())
    return count


def main():
    keyword = input("Please enter a search keyword: ")
    page = 3  # 前三頁
    total = 0
    for each in range(page):
        res = open_url(keyword, each+1)
        items = get_items(res)
        total += count_sales(items)
    print("Total sales:", total)


if __name__ == '__main__':
    main()
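One fragile spot in `count_sales` is `re.search(r'\d+', ...)`: it grabs only the first run of digits, so a value like "1.2万人付款" would be counted as 1. Below is a hedged sketch of a more tolerant parser; the exact view_sales formats ("356人付款", "1.2万人付款") are an assumption based on typical Taobao pages, so adjust the pattern to what your saved data actually contains:

```python
import re

def parse_sales(text):
    # Assumed formats: "356人付款" or "1.2万人付款"; "万" means ×10,000.
    m = re.search(r'([\d.]+)(万?)', text)
    if not m:
        return 0
    value = float(m.group(1))
    if m.group(2):          # the "万" suffix scales the number by 10,000
        value *= 10000
    return int(round(value))
```

Swapping this in for the `int(re.search(...).group())` line keeps the totals correct when Taobao abbreviates large sales counts.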
(screenshot: program output, 輸出.png)