Recommender Systems (4): Applying word2vec in Recommender Systems

I. The Word2Vec Algorithm

Put simply, Word2Vec learns from text and represents the semantic information of words as word vectors: an Embedding maps words from their original space into a new space in which semantically similar words lie close to one another. Neural probabilistic language models built on traditional neural networks suffer above all from heavy computation, concentrated in two places: the matrix multiplication between the hidden layer and the output layer, and the Softmax normalization over the output layer. Word2Vec optimizes the neural probabilistic language model at precisely these two points. Its two key models are the CBOW model and the Skip-gram model.

1. The CBOW Model

CBOW (continuous bag-of-words) has a three-layer structure: an input layer, a projection layer, and an output layer.


Input layer: the word vectors of the 2c context words.
Projection layer: sums the 2c input word vectors.
Output layer: corresponds to a Huffman tree, built with the words occurring in the corpus as leaves and their corpus frequencies as weights. The tree has N leaf nodes, one for each word of the dictionary \mathbb{D}, and N-1 internal nodes (a sketch of the construction follows this list).
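To make the output layer concrete, here is a minimal Python sketch that builds Huffman codes from word frequencies (purely illustrative; the original word2vec C code builds the same tree with flat arrays). Frequent words end up with short codes, i.e. short paths from the root:

import heapq
from itertools import count

def build_huffman_codes(freqs):
    # Build Huffman codes from a {word: frequency} dict; returns {word: '0/1 code'}.
    tiebreak = count()  # unique tiebreaker so equal-frequency entries never compare dicts
    heap = [(f, next(tiebreak), {w: ''}) for w, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # pop the two lightest subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {w: '0' + c for w, c in left.items()}         # left branch -> code 0
        merged.update({w: '1' + c for w, c in right.items()})  # right branch -> code 1
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

# N = 4 leaf words -> N - 1 = 3 internal nodes; 'the' gets the shortest code
print(build_huffman_codes({'the': 50, 'bag': 20, 'red': 20, 'owl': 10}))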

Following the idea of Hierarchical Softmax: for any word w in the dictionary \mathbb{D}, there is exactly one path p^w in the Huffman tree from the root to the leaf of w. If p^w contains l^w nodes, it has l^w-1 branches. Treating each branch as a binary classification, every classification yields a probability, and multiplying these probabilities together gives p(w | \text{Context}(w)), i.e.
p(w | \text { Context }(w)) =\prod_{j=2}^{l^{w}} p\left(d_{j}^{w} | X_{w} ; \theta_{j-1}^{w}\right)

where d_{j}^{w} is the Huffman code of the j-th node on the path p^w (the root carries no code); \theta_{j}^{w} is the vector attached to the j-th internal (non-leaf) node on p^w; and X_{w} is the sum of the word vectors of all words in \text{Context}(w). Each branch probability is

\begin{aligned} p\left(d_{j}^{w} | X_{w} ; \theta_{j-1}^{w}\right) &= \begin{cases} \sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right), & \text{if } d_{j}^{w}=0 \\ 1-\sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right), & \text{otherwise} \end{cases} \\ &= \left[\sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right)\right]^{1-d_{j}^{w}} \cdot\left[1-\sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right)\right]^{d_{j}^{w}} \end{aligned}

where \sigma(x) = \frac{1}{1+e^{-x}}. Taking the log of the likelihood to be maximized yields the CBOW objective:
\begin{aligned} \mathcal{L} &=\sum_{w \in \mathcal{D}} \log \prod_{j=2}^{l^{w}}\left(\left[\sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right)\right]^{1-d_{j}^{w}} \cdot\left[1-\sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right)\right]^{d_{j}^{w}}\right) \\ &=\sum_{w \in \mathcal{D}} \sum_{j=2}^{l^{w}}\left(\left(1-d_{j}^{w}\right) \cdot \log \left[\sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right)\right]+d_{j}^{w} \cdot \log \left[1-\sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right)\right]\right) \\ &=\sum_{w \in \mathcal{D}} \sum_{j=2}^{l^{w}} \Phi\left(\theta_{j-1}^{w}, X_{w}\right) \end{aligned}
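As a concrete example (with a hypothetical path, purely for illustration): if the leaf of w sits at depth l^w = 4 and its Huffman code is (d_2^w, d_3^w, d_4^w) = (1, 0, 1), the path product becomes

p(w | \text{Context}(w)) = \left[1-\sigma\left(X_{w}^{T} \theta_{1}^{w}\right)\right] \cdot \sigma\left(X_{w}^{T} \theta_{2}^{w}\right) \cdot \left[1-\sigma\left(X_{w}^{T} \theta_{3}^{w}\right)\right]

and the term this word contributes to \mathcal{L} is the log of this three-factor product.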

word2vec maximizes this objective by stochastic gradient ascent. First consider the gradient of \Phi\left(\theta_{j-1}^{w}, X_{w}\right) with respect to \theta_{j-1}^{w}:
\begin{aligned} \frac{\partial \Phi\left(\theta_{j-1}^{w}, X_{w}\right)}{\partial \theta_{j-1}^{w}} &=\frac{\partial}{\partial \theta_{j-1}^{w}}\left(\left(1-d_{j}^{w}\right) \cdot \log \left[\sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right)\right]+d_{j}^{w} \cdot \log \left[1-\sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right)\right]\right) \\ &=\left(1-d_{j}^{w}\right)\left[1-\sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right)\right] X_{w}-d_{j}^{w} \sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right) X_{w} \\ &=\left(\left(1-d_{j}^{w}\right)\left[1-\sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right)\right]-d_{j}^{w} \sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right)\right) X_{w} \\ &=\left(1-d_{j}^{w}-\sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right)\right) X_{w} \end{aligned}

The update rule for \theta_{j-1}^{w} is therefore:
\theta_{j-1}^{w}:=\theta_{j-1}^{w}+\eta\left(1-d_{j}^{w}-\sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right)\right) X_{w}

Next consider the gradient of \Phi\left(\theta_{j-1}^{w}, X_{w}\right) with respect to X_{w}:
\begin{aligned} \frac{\partial \Phi\left(\theta_{j-1}^{w}, X_{w}\right)}{\partial X_{w}} &=\frac{\partial}{\partial X_{w}}\left(\left(1-d_{j}^{w}\right) \cdot \log \left[\sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right)\right]+d_{j}^{w} \cdot \log \left[1-\sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right)\right]\right) \\ &=\left(1-d_{j}^{w}\right)\left[1-\sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right)\right] \theta_{j-1}^{w}-d_{j}^{w} \sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right) \theta_{j-1}^{w} \\ &=\left(\left(1-d_{j}^{w}\right)\left[1-\sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right)\right]-d_{j}^{w} \sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right)\right) \theta_{j-1}^{w} \\ &=\left(1-d_{j}^{w}-\sigma\left(X_{w}^{T} \theta_{j-1}^{w}\right)\right) \theta_{j-1}^{w} \end{aligned}

Since \Phi\left(\theta_{j-1}^{w}, X_{w}\right) is symmetric in \theta_{j-1}^{w} and X_{w}, word2vec simply applies the following update to every context word vector v(\widetilde{w}):
v(\widetilde{w}):=v(\widetilde{w})+\eta \sum_{j=2}^{l^{w}} \frac{\partial \Phi\left(\theta_{j-1}^{w}, X_{w}\right)}{\partial X_{w}}, \widetilde{w} \in \text { Context }(w)
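The two update rules translate almost line by line into code. Below is a minimal NumPy sketch of a single CBOW training step under Hierarchical Softmax, assuming the Huffman codes d_j^w and internal-node vectors \theta_{j-1}^w along the path have already been looked up (the function name and signature are illustrative, not from any library):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbow_hs_step(context_vecs, path_thetas, path_codes, eta=0.025):
    # One stochastic update for a single (Context(w), w) pair.
    # context_vecs: word vectors v(w~) of the 2c context words, each of shape (m,)
    # path_thetas:  internal-node vectors theta_{j-1}^w for j = 2..l^w
    # path_codes:   Huffman codes d_j^w in {0, 1} for j = 2..l^w
    X_w = np.sum(context_vecs, axis=0)            # projection layer: sum the context vectors
    e = np.zeros_like(X_w)                        # accumulates eta * sum_j dPhi/dX_w
    for theta, d in zip(path_thetas, path_codes):
        g = eta * (1 - d - sigmoid(X_w @ theta))  # common factor of both gradients
        e += g * theta                            # gradient contribution w.r.t. X_w
        theta += g * X_w                          # update theta_{j-1}^w in place
    for v in context_vecs:                        # spread the X_w gradient over every v(w~)
        v += e
    return e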

2. The Skip-gram Model

The Skip-gram model likewise has an input layer, a projection layer, and an output layer.


Input layer: contains only the word vector v(w) \in \mathbb{R}^{m} of the current center word w (m being the embedding dimension);
Projection layer: an identity projection, so the layer is effectively optional;
Output layer: again a Huffman tree.

In the Skip-gram model the current word w is given and the words of its context \text{Context}(w) are to be predicted; the key quantity is the conditional probability p(\text{Context}(w) | w), i.e.:

p(\text {Context}(w) | w)=\prod_{u \in C \text {ontext}(w)} p(u | w)

By the same Hierarchical Softmax idea, we obtain:
\begin{aligned} p(u | w) &=\prod_{j=2}^{l^{u}} p\left(d_{j}^{u} | v(w) ; \theta_{j-1}^{u}\right) \\ &=\prod_{j=2}^{l^{u}}\left[\sigma\left(v(w)^{T} \theta_{j-1}^{u}\right)\right]^{1-d_{j}^{u}} \cdot\left[1-\sigma\left(v(w)^{T} \theta_{j-1}^{u}\right)\right]^{d_{j}^{u}} \end{aligned}

Taking the log of the likelihood to be maximized yields the Skip-gram objective:
\begin{aligned} \mathcal{L} &=\sum_{w \in \mathcal{D}} \log \prod_{u \in \text{Context}(w)} \prod_{j=2}^{l^{u}}\left(\left[\sigma\left(v(w)^{T} \theta_{j-1}^{u}\right)\right]^{1-d_{j}^{u}} \cdot\left[1-\sigma\left(v(w)^{T} \theta_{j-1}^{u}\right)\right]^{d_{j}^{u}}\right) \\ &=\sum_{w \in \mathcal{D}} \sum_{u \in \text{Context}(w)} \sum_{j=2}^{l^{u}}\left(\left(1-d_{j}^{u}\right) \cdot \log \left[\sigma\left(v(w)^{T} \theta_{j-1}^{u}\right)\right]+d_{j}^{u} \cdot \log \left[1-\sigma\left(v(w)^{T} \theta_{j-1}^{u}\right)\right]\right) \\ &=\sum_{w \in \mathcal{D}} \sum_{u \in \text{Context}(w)} \sum_{j=2}^{l^{u}} \mathcal{O}\left(\theta_{j-1}^{u}, v(w)\right) \end{aligned}

Consider the gradient of \mathcal{O}\left(\theta_{j-1}^{u}, v(w)\right) with respect to \theta_{j-1}^{u}:
\begin{aligned} \frac{\partial \mathcal{O}\left(\theta_{j-1}^{u}, v(w)\right)}{\partial \theta_{j-1}^{u}} &=\frac{\partial}{\partial \theta_{j-1}^{u}}\left(\left(1-d_{j}^{u}\right) \cdot \log \left[\sigma\left(v(w)^{T} \theta_{j-1}^{u}\right)\right]+d_{j}^{u} \cdot \log \left[1-\sigma\left(v(w)^{T} \theta_{j-1}^{u}\right)\right]\right) \\ &=\left(1-d_{j}^{u}\right)\left[1-\sigma\left(v(w)^{T} \theta_{j-1}^{u}\right)\right] v(w)-d_{j}^{u} \sigma\left(v(w)^{T} \theta_{j-1}^{u}\right) v(w) \\ &=\left(\left(1-d_{j}^{u}\right)\left[1-\sigma\left(v(w)^{T} \theta_{j-1}^{u}\right)\right]-d_{j}^{u} \sigma\left(v(w)^{T} \theta_{j-1}^{u}\right)\right) v(w) \\ &=\left(1-d_{j}^{u}-\sigma\left(v(w)^{T} \theta_{j-1}^{u}\right)\right) v(w) \end{aligned}

The update rule for \theta_{j-1}^{u} is:
\theta_{j-1}^{u}:=\theta_{j-1}^{u}+\eta\left(1-d_{j}^{u}-\sigma\left(v(w)^{T} \theta_{j-1}^{u}\right)\right) v(w)

Next consider the gradient of \mathcal{O}\left(\theta_{j-1}^{u}, v(w)\right) with respect to v(w):
\begin{aligned} \frac{\partial \mathcal{O}\left(\theta_{j-1}^{u}, v(w)\right)}{\partial v(w)} &=\frac{\partial}{\partial v(w)}\left(\left(1-d_{j}^{u}\right) \cdot \log \left[\sigma\left(v(w)^{T} \theta_{j-1}^{u}\right)\right]+d_{j}^{u} \cdot \log \left[1-\sigma\left(v(w)^{T} \theta_{j-1}^{u}\right)\right]\right) \\ &=\left(1-d_{j}^{u}\right)\left[1-\sigma\left(v(w)^{T} \theta_{j-1}^{u}\right)\right] \theta_{j-1}^{u}-d_{j}^{u} \sigma\left(v(w)^{T} \theta_{j-1}^{u}\right) \theta_{j-1}^{u} \\ &=\left(\left(1-d_{j}^{u}\right)\left[1-\sigma\left(v(w)^{T} \theta_{j-1}^{u}\right)\right]-d_{j}^{u} \sigma\left(v(w)^{T} \theta_{j-1}^{u}\right)\right) \theta_{j-1}^{u} \\ &=\left(1-d_{j}^{u}-\sigma\left(v(w)^{T} \theta_{j-1}^{u}\right)\right) \theta_{j-1}^{u} \end{aligned}

The update rule for v(w) is:
v(w):=v(w)+\eta \sum_{u \in \text {Context}(w)} \sum_{j=2}^{l^{u}} \frac{\partial \mathcal{O}\left(\theta_{j-1}^{u}, v(w)\right)}{\partial v(w)}
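For comparison, the same sketch for a single Skip-gram step, reusing sigmoid from the CBOW sketch above (again an illustrative signature, assuming each context word's Huffman path has been looked up):

def skipgram_hs_step(v_w, context_paths, eta=0.025):
    # One stochastic update for a single center word w.
    # v_w:           the center word vector v(w), shape (m,), updated in place
    # context_paths: one (path_thetas, path_codes) pair per context word u
    e = np.zeros_like(v_w)                # accumulates eta * sum_u sum_j dO/dv(w)
    for path_thetas, path_codes in context_paths:
        for theta, d in zip(path_thetas, path_codes):
            g = eta * (1 - d - sigmoid(v_w @ theta))
            e += g * theta                # gradient contribution w.r.t. v(w)
            theta += g * v_w              # update theta_{j-1}^u in place
    v_w += e                              # apply the accumulated update to v(w)
    return v_w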

II. Implementation

The data used here are the complete transactions of a UK-based online retail store from December 1, 2010 to December 9, 2011, a table of 541909 \times 8. Download: https://archive.ics.uci.edu/ml/datasets/Online+Retail . The fields are described as follows:

  • InvoiceNo: invoice number. Nominal, a 6-digit integer uniquely assigned to each transaction. If the code starts with the letter 'c', the transaction was cancelled.
  • StockCode: product (item) code. Nominal, a 5-digit integer uniquely assigned to each distinct product.
  • Description: product (item) name. Nominal.
  • Quantity: the quantity of each product (item) per transaction. Numeric.
  • InvoiceDate: invoice date and time. Numeric, the date and time at which each transaction was generated.
  • UnitPrice: unit price. Numeric, product price per unit in pounds sterling.
  • CustomerID: customer number. Nominal, a 5-digit integer uniquely assigned to each customer.
  • Country: country name. Nominal, the name of the country where each customer resides.
(Figure: sample rows from the dataset)
1. Imports, data loading, and dropping missing values
import numpy as np
import pandas as pd
import random
from tqdm import tqdm
from gensim.models import Word2Vec
import matplotlib.pyplot as plt
import umap

data = pd.read_excel('Online Retail.xlsx')
data.isnull().sum()        # inspect missing values (mostly in CustomerID)
data.dropna(inplace=True)  # drop rows with missing values
2. Get the list of unique customers and shuffle it randomly; there are 4372 customers in all
data['StockCode'] = data['StockCode'].astype(str)  # product codes become the word2vec "words"

customers = data['CustomerID'].unique().tolist()
random.shuffle(customers)

3. Split into training and validation sets and build per-customer purchase sequences
train_customers = [customers[i] for i in range(round(0.9*len(customers)))]
train_data = data[data['CustomerID'].isin(train_customers)]
validation_data = data[~data['CustomerID'].isin(train_customers)]

train_purchases = []  # purchase sequences of training-set customers
for i in tqdm(train_customers):
    temp = train_data[train_data['CustomerID']==i]['StockCode'].tolist()
    train_purchases.append(temp)

val_purchases = []  # purchase sequences of validation-set customers
for i in tqdm(validation_data['CustomerID'].unique()):
    temp = validation_data[validation_data['CustomerID']==i]['StockCode'].tolist()
    val_purchases.append(temp)
100%|██████████| 3935/3935 [00:04<00:00, 974.74it/s]
100%|██████████| 437/437 [00:00<00:00, 1613.25it/s]
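As an aside, the two loops above can be written as a single pandas groupby each; this sketch yields the same per-customer product lists, though ordered by sorted CustomerID rather than by the shuffled customer list:

train_purchases = train_data.groupby('CustomerID')['StockCode'].apply(list).tolist()
val_purchases = validation_data.groupby('CustomerID')['StockCode'].apply(list).tolist()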
4. Building and training the word2vec model
model = Word2Vec(window = 10, sg = 1, hs = 0, negative = 10, alpha = 0.03, min_alpha = 0.0007, seed = 14)  # skip-gram (sg=1) with negative sampling (hs=0, negative=10)
model.build_vocab(train_purchases, progress_per = 200)
model.train(train_purchases, total_examples = model.corpus_count, epochs = 10, report_delay = 1)

model.init_sims(replace=True)  # L2-normalize the vectors (gensim 3.x API)
X = model[model.wv.vocab]      # matrix of all product vectors (gensim 3.x API)
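Note that init_sims(replace=True) and the model[...] indexing were removed in gensim 4.0; on a recent gensim the equivalents are, to the best of my knowledge:

X = model.wv.get_normed_vectors()  # unit-length vectors; replaces init_sims(replace=True) + model[...]
vocab = model.wv.index_to_key      # vocabulary list; replaces model.wv.vocab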
5. Visualizing the product vectors
cluster_embedding = umap.UMAP(n_neighbors=30, min_dist=0.0, n_components=2, random_state=42).fit_transform(X)  # project the vectors to 2-D
plt.figure(1)
plt.scatter(cluster_embedding[:,0], cluster_embedding[:,1], s=3, cmap='Spectral')
plt.show()
6. Build a dictionary from product StockCode to Description and drop duplicates
products = train_data[["StockCode", "Description"]].copy()  # .copy() avoids a SettingWithCopyWarning
products.drop_duplicates(inplace=True, subset='StockCode', keep='last')
products_dict = products.groupby('StockCode')['Description'].apply(list).to_dict()
In [8]: products_dict['21931']
Out[8]: ['JUMBO STORAGE BAG SUKI']
7. Take a product vector as input and return the 6 most similar products
def similar_products(v, n=6):
    # fetch the n most similar products, skipping the query product itself
    ms = model.wv.similar_by_vector(v, topn = n+1)[1:]
    new_ms = []
    for j in ms:
        pair = (products_dict[j[0]][0], j[1])  # (description, similarity score)
        new_ms.append(pair)
    return new_ms
similar_products(model['84406B'])
In [9]: similar_products(model['21931'])
Out[9]: 
[('JUMBO BAG STRAWBERRY', 0.8357644081115723),
 ('JUMBO BAG OWLS', 0.8068020343780518),
 ('JUMBO  BAG BAROQUE BLACK WHITE', 0.7999265193939209),
 ('JUMBO BAG RED RETROSPOT', 0.7874696254730225),
 ('JUMBO BAG PINK POLKADOT', 0.7594423294067383),
 ('JUMBO STORAGE BAG SKULLS', 0.758986234664917)]
8. Recommend similar products from the average vector of multiple purchases
def aggregate_vectors(products):
    # average the vectors of all products in a purchase sequence
    product_vec = []
    for i in products:
        product_vec.append(model[i])
    return np.mean(product_vec, axis=0)

aggregate_vectors(val_purchases[1]).shape                    # one vector of the embedding dimension
similar_products(aggregate_vectors(val_purchases[1]))        # recommend from the full purchase history
similar_products(aggregate_vectors(val_purchases[1][-10:]))  # recommend from the last 10 purchases only
In [13]: similar_products(aggregate_vectors(val_purchases[1]))
Out[13]: 
[('LUNCH BAG RED RETROSPOT', 0.6403661370277405),
 ('ALARM CLOCK BAKELIKE RED ', 0.638660728931427),
 ('RED RETROSPOT PICNIC BAG', 0.6361196637153625),
 ('JUMBO BAG RED RETROSPOT', 0.6360040903091431),
 ('SET/5 RED RETROSPOT LID GLASS BOWLS', 0.6345535516738892),
 ('ALARM CLOCK BAKELIKE PINK', 0.6296969056129456)]

In [14]: similar_products(aggregate_vectors(val_purchases[1][-10:]))
Out[14]: 
[('ROUND SNACK BOXES SET OF 4 FRUITS ', 0.7854548692703247),
 ('LUNCH BOX WITH CUTLERY RETROSPOT ', 0.6739486455917358),
 ('SET OF 3 BUTTERFLY COOKIE CUTTERS', 0.6696499586105347),
 ('SET OF 3 REGENCY CAKE TINS', 0.6598889827728271),
 ('PICNIC BOXES SET OF 3 RETROSPOT ', 0.6580283641815186),
 ('POSTAGE', 0.6528887748718262)]


"In night rain we trim spring chives; the fresh-steamed rice is mixed with yellow millet." — Du Fu, "To Wei Ba, a Gentleman in Retirement"
