An Overview of Recurrent Neural Networks

Recurrent Neural Networks

The figure below shows how a language model can be built on top of a recurrent neural network. Our goal is to predict the next character of a sequence based on the current input and the past input sequence. The recurrent network introduces a hidden variable H, with H_{t} denoting the value of H at time step t. H_{t} is computed from X_{t} and H_{t-1}; it can be thought of as a record of the sequence information seen up to the current character, and it is used to predict the next character of the sequence.

[Figure: the RNN-based language model]

The Structure of a Recurrent Neural Network

Let us first look at the concrete construction of a recurrent neural network. Suppose \boldsymbol{X}_t \in \mathbb{R}^{n \times d} is the mini-batch input at time step t and \boldsymbol{H}_t \in \mathbb{R}^{n \times h} is the hidden variable at that time step. Then:

\boldsymbol{H}_t = \phi(\boldsymbol{X}_t \boldsymbol{W}_{xh} + \boldsymbol{H}_{t-1} \boldsymbol{W}_{hh} + \boldsymbol{b}_h).

Here \boldsymbol{W}_{xh} \in \mathbb{R}^{d \times h}, \boldsymbol{W}_{hh} \in \mathbb{R}^{h \times h}, \boldsymbol{b}_h \in \mathbb{R}^{1 \times h}, and \phi is a nonlinear activation function. Thanks to the term \boldsymbol{H}_{t-1} \boldsymbol{W}_{hh}, \boldsymbol{H}_t can capture the historical information of the sequence up to the current time step, much like a state or memory of the network at the current time step. Since \boldsymbol{H}_t is computed from \boldsymbol{H}_{t-1}, the computation above is recurrent, and a network built on such recurrent computation is called a recurrent neural network.

At time step t, the output of the output layer is:

\boldsymbol{O}_t = \boldsymbol{H}_t \boldsymbol{W}_{hq} + \boldsymbol{b}_q.

where \boldsymbol{W}_{hq} \in \mathbb{R}^{h \times q} and \boldsymbol{b}_q \in \mathbb{R}^{1 \times q}.
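
As a quick sanity check on these shapes, here is a minimal sketch of a single recurrent step with small hypothetical sizes (n=2, d=5, h=4, q=3; these values are for illustration only):

import torch

n, d, h, q = 2, 5, 4, 3                              # batch, input, hidden, output sizes (hypothetical)
X_t, H_prev = torch.randn(n, d), torch.randn(n, h)   # X_t and H_{t-1}
W_xh, W_hh = torch.randn(d, h), torch.randn(h, h)
b_h = torch.zeros(1, h)
H_t = torch.tanh(X_t @ W_xh + H_prev @ W_hh + b_h)   # the recurrence above, with phi = tanh
W_hq, b_q = torch.randn(h, q), torch.zeros(1, q)
O_t = H_t @ W_hq + b_q                               # the output layer
print(H_t.shape, O_t.shape)                          # torch.Size([2, 4]) torch.Size([2, 3])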

Implementing an RNN from Scratch

Let us first implement a character-level RNN language model from scratch. We use Jay Chou's (周杰倫) lyrics as the corpus. First, read in the data:

import torch
import torch.nn as nn
import time
import math
import sys
sys.path.append("/home/kesci/input")
import d2l_jay9460 as d2l
(corpus_indices, char_to_idx, idx_to_char, vocab_size) = d2l.load_data_jay_lyrics()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
One-hot Vectors

We need to represent each character as a vector, and here we use one-hot vectors. Suppose the dictionary size is N and each character corresponds to a unique index from 0 to N-1. The character's vector is then a vector of length N: if the character's index is i, position i of the vector is 1 and all other positions are 0. Below we show the one-hot vectors for indices 0 and 2; the vector length equals the dictionary size.

def one_hot(x, n_class, dtype=torch.float32):
    # x is a 1-D tensor of indices; n_class is the number of classes
    result = torch.zeros(x.shape[0], n_class, dtype=dtype, device=x.device)  # shape: (n, n_class)
    result.scatter_(1, x.long().view(-1, 1), 1)  # result[i, x[i, 0]] = 1
    return result
# An example
x = torch.tensor([0, 2])
x_one_hot = one_hot(x, vocab_size)
print(x_one_hot)
print(x_one_hot.shape)
print(x_one_hot.sum(axis=1))

Output:

tensor([[1., 0., 0.,  ..., 0., 0., 0.],
        [0., 0., 1.,  ..., 0., 0., 0.]])
torch.Size([2, 1027])
tensor([1., 1.])

The shape of each sampled mini-batch is (batch size, number of time steps). The function below transforms such a mini-batch into a list of matrices with shape (batch size, dictionary size); the number of matrices equals the number of time steps. That is, the input at time step t is \boldsymbol{X}_t \in \mathbb{R}^{n \times d}, where n is the batch size and d is the word-vector size, i.e. the one-hot vector length (the dictionary size).

def to_onehot(X, n_class):
    return [one_hot(X[:, i], n_class) for i in range(X.shape[1])]
X = torch.arange(10).view(2, 5)
inputs = to_onehot(X, vocab_size)
print(len(inputs), inputs[0].shape)
Output: 5 torch.Size([2, 1027])
Initializing Model Parameters
num_inputs, num_hiddens, num_outputs = vocab_size, 256, vocab_size
# num_inputs: d
# num_hiddens: h, the number of hidden units is a hyperparameter
# num_outputs: q

def get_params():
    def _one(shape):
        param = torch.zeros(shape, device=device, dtype=torch.float32)
        nn.init.normal_(param, 0, 0.01)
        return torch.nn.Parameter(param)

    # Hidden layer parameters
    W_xh = _one((num_inputs, num_hiddens))
    W_hh = _one((num_hiddens, num_hiddens))
    b_h = torch.nn.Parameter(torch.zeros(num_hiddens, device=device))
    # Output layer parameters
    W_hq = _one((num_hiddens, num_outputs))
    b_q = torch.nn.Parameter(torch.zeros(num_outputs, device=device))
    return (W_xh, W_hh, b_h, W_hq, b_q)
Defining the Model

# The function rnn computes the RNN one time step at a time, in a loop.
def rnn(inputs, state, params):
    # inputs and outputs are both lists of num_steps matrices with shape (batch_size, vocab_size)
    W_xh, W_hh, b_h, W_hq, b_q = params
    H, = state
    outputs = []
    for X in inputs:
        H = torch.tanh(torch.matmul(X, W_xh) + torch.matmul(H, W_hh) + b_h)
        Y = torch.matmul(H, W_hq) + b_q
        
        outputs.append(Y)
    return outputs, (H,)
# The function init_rnn_state initializes the hidden variable; note that it returns a tuple.
def init_rnn_state(batch_size, num_hiddens, device):
    return (torch.zeros((batch_size, num_hiddens), device=device), )

Gradient Clipping

Gradient decay and gradient explosion occur easily in recurrent neural networks, and can make the network nearly impossible to train. Gradient clipping is one technique for dealing with gradient explosion. Suppose we concatenate the gradients of all model parameters into a single vector \boldsymbol{g} and set a clipping threshold \theta. Then the clipped gradient

\min\left(\frac{\theta}{\|\boldsymbol{g}\|}, 1\right)\boldsymbol{g}

L_2范數(shù)不超過\theta。

def grad_clipping(params, theta, device):
    norm = torch.tensor([0.0], device=device)
    for param in params:
        norm += (param.grad.data ** 2).sum()
    norm = norm.sqrt().item()
    if norm > theta:
        for param in params:
            param.grad.data *= (theta / norm)

Defining the Prediction Function

The following function predicts the num_chars characters that follow the prefix prefix (a string containing several characters). The function is slightly involved: we make the recurrent unit rnn a function parameter, so that this function can be reused when other recurrent networks are introduced in later sections.

def predict_rnn(prefix, num_chars, rnn, params, init_rnn_state,
                num_hiddens, vocab_size, device, idx_to_char, char_to_idx):
    state = init_rnn_state(1, num_hiddens, device)
    output = [char_to_idx[prefix[0]]]   # output records prefix plus the num_chars predicted characters
    for t in range(num_chars + len(prefix) - 1):
        # Use the output of the previous time step as the input of the current time step
        X = to_onehot(torch.tensor([[output[-1]]], device=device), vocab_size)
        # Compute the output and update the hidden state
        (Y, state) = rnn(X, state, params)
        # The next input is the next character in prefix, or else the current best prediction
        if t < len(prefix) - 1:
            output.append(char_to_idx[prefix[t + 1]])
        else:
            output.append(Y[0].argmax(dim=1).item())
    return ''.join([idx_to_char[i] for i in output])

Putting It All Together

import torch
import torch.nn as nn
import time
import math
import sys
sys.path.append("/home/kesci/input")
import d2l_jay9460 as d2l
(corpus_indices, char_to_idx, idx_to_char, vocab_size) = d2l.load_data_jay_lyrics()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
def to_onehot(X, n_class):
    return [one_hot(X[:, i], n_class) for i in range(X.shape[1])]
def one_hot(x, n_class, dtype=torch.float32):
    result = torch.zeros(x.shape[0], n_class, dtype=dtype, device=x.device)  # shape: (n, n_class)
    result.scatter_(1, x.long().view(-1, 1), 1)  # result[i, x[i, 0]] = 1
    return result
num_inputs, num_hiddens, num_outputs = vocab_size, 256, vocab_size
# num_inputs: d
# num_hiddens: h, 隱藏單元的個(gè)數(shù)是超參數(shù)
# num_outputs: q

def get_params():
    def _one(shape):
        param = torch.zeros(shape, device=device, dtype=torch.float32)
        nn.init.normal_(param, 0, 0.01)
        return torch.nn.Parameter(param)

    # 隱藏層參數(shù)
    W_xh = _one((num_inputs, num_hiddens))
    W_hh = _one((num_hiddens, num_hiddens))
    b_h = torch.nn.Parameter(torch.zeros(num_hiddens, device=device))
    # 輸出層參數(shù)
    W_hq = _one((num_hiddens, num_outputs))
    b_q = torch.nn.Parameter(torch.zeros(num_outputs, device=device))
    return (W_xh, W_hh, b_h, W_hq, b_q)
def rnn(inputs, state, params):
    # inputs and outputs are both lists of num_steps matrices with shape (batch_size, vocab_size)
    W_xh, W_hh, b_h, W_hq, b_q = params
    H, = state
    outputs = []
    for X in inputs:
        H = torch.tanh(torch.matmul(X, W_xh) + torch.matmul(H, W_hh) + b_h)
        Y = torch.matmul(H, W_hq) + b_q
        
        outputs.append(Y)
    return outputs, (H,)
def init_rnn_state(batch_size, num_hiddens, device):
    return (torch.zeros((batch_size, num_hiddens), device=device), )
def grad_clipping(params, theta, device):
    norm = torch.tensor([0.0], device=device)
    for param in params:
        norm += (param.grad.data ** 2).sum()
    norm = norm.sqrt().item()
    if norm > theta:
        for param in params:
            param.grad.data *= (theta / norm)
def predict_rnn(prefix, num_chars, rnn, params, init_rnn_state,
                num_hiddens, vocab_size, device, idx_to_char, char_to_idx):
    state = init_rnn_state(1, num_hiddens, device)
    output = [char_to_idx[prefix[0]]]   # output records prefix plus the num_chars predicted characters
    for t in range(num_chars + len(prefix) - 1):
        # Use the output of the previous time step as the input of the current time step
        X = to_onehot(torch.tensor([[output[-1]]], device=device), vocab_size)
        # Compute the output and update the hidden state
        (Y, state) = rnn(X, state, params)
        # The next input is the next character in prefix, or else the current best prediction
        if t < len(prefix) - 1:
            output.append(char_to_idx[prefix[t + 1]])
        else:
            output.append(Y[0].argmax(dim=1).item())
    return ''.join([idx_to_char[i] for i in output])
params = get_params()
predict_rnn('分開', 10, rnn, params, init_rnn_state, num_hiddens, vocab_size,
            device, idx_to_char, char_to_idx)
Output: '分開蛛公疑虹不食其屬草好'

Perplexity

We usually use perplexity to evaluate how good a language model is. Recall the definition of the cross-entropy loss function from the "softmax regression" section.

\color{red}{\text{Perplexity is the value obtained by applying the exponential to the cross-entropy loss.}}

  • In the best case, the model always predicts the probability of the label class as 1, and the perplexity is 1;
  • In the worst case, the model always predicts the probability of the label class as 0, and the perplexity is positive infinity;
  • In the baseline case, the model predicts the same probability for every class, and the perplexity equals the number of classes.

Clearly, the perplexity of any useful model must be smaller than the number of classes. In this example, the perplexity must be smaller than the dictionary size, vocab_size.
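
To make the relationship concrete, here is a minimal sketch (with hypothetical logits and labels, not taken from the model above) showing that perplexity is the exponential of the cross-entropy loss, and that uniform predictions yield a perplexity equal to the number of classes:

import math
import torch
import torch.nn as nn

logits = torch.zeros(6, 1027)           # uniform scores over a 1027-character dictionary
labels = torch.randint(0, 1027, (6,))   # arbitrary target characters
ce = nn.CrossEntropyLoss()(logits, labels).item()
print(math.exp(ce))                     # ~1027.0: the baseline case, perplexity = number of classes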

Defining the Training Function

Compared with the training functions of earlier chapters, the training function here differs in the following ways:

  1. It uses perplexity to evaluate the model.
  2. It clips the gradients before updating the model parameters.
  3. Different sampling methods for the sequential data lead to different hidden-state initializations.
def train_and_predict_rnn(rnn, get_params, init_rnn_state, num_hiddens,
                          vocab_size, device, corpus_indices, idx_to_char,
                          char_to_idx, is_random_iter, num_epochs, num_steps,
                          lr, clipping_theta, batch_size, pred_period,
                          pred_len, prefixes):
    if is_random_iter:
        data_iter_fn = d2l.data_iter_random
    else:
        data_iter_fn = d2l.data_iter_consecutive
    params = get_params()
    loss = nn.CrossEntropyLoss()

    for epoch in range(num_epochs):
        if not is_random_iter:  # For consecutive sampling, initialize the hidden state at the start of the epoch
            state = init_rnn_state(batch_size, num_hiddens, device)
        l_sum, n, start = 0.0, 0, time.time()
        data_iter = data_iter_fn(corpus_indices, batch_size, num_steps, device)
        for X, Y in data_iter:
            if is_random_iter:  # For random sampling, initialize the hidden state before each mini-batch update
                state = init_rnn_state(batch_size, num_hiddens, device)
            else:  # Otherwise, detach the hidden state from the computation graph
                for s in state:
                    s.detach_()
            # inputs is a list of num_steps matrices with shape (batch_size, vocab_size)
            inputs = to_onehot(X, vocab_size)
            # outputs is a list of num_steps matrices with shape (batch_size, vocab_size)
            (outputs, state) = rnn(inputs, state, params)
            # After concatenation the shape is (num_steps * batch_size, vocab_size)
            outputs = torch.cat(outputs, dim=0)
            # Y has shape (batch_size, num_steps); transpose and flatten it into a vector of
            # shape (num_steps * batch_size,) so that it aligns row-by-row with outputs
            y = torch.flatten(Y.T)
            # Compute the average classification error with the cross-entropy loss
            l = loss(outputs, y.long())
            
            # Zero the gradients
            if params[0].grad is not None:
                for param in params:
                    param.grad.data.zero_()
            l.backward()
            grad_clipping(params, clipping_theta, device)  # Clip the gradients
            d2l.sgd(params, lr, 1)  # The loss is already averaged, so no need to average the gradients again
            l_sum += l.item() * y.shape[0]
            n += y.shape[0]

        if (epoch + 1) % pred_period == 0:
            print('epoch %d, perplexity %f, time %.2f sec' % (
                epoch + 1, math.exp(l_sum / n), time.time() - start))
            for prefix in prefixes:
                print(' -', predict_rnn(prefix, pred_len, rnn, params, init_rnn_state,
                    num_hiddens, vocab_size, device, idx_to_char, char_to_idx))

Training the Model and Composing Lyrics

num_epochs, num_steps, batch_size, lr, clipping_theta = 250, 35, 32, 1e2, 1e-2
pred_period, pred_len, prefixes = 50, 50, ['分開', '不分開']
# Train the model with random sampling
train_and_predict_rnn(rnn, get_params, init_rnn_state, num_hiddens,
                      vocab_size, device, corpus_indices, idx_to_char,
                      char_to_idx, True, num_epochs, num_steps, lr,
                      clipping_theta, batch_size, pred_period, pred_len,
                      prefixes)
# Train the model with consecutive sampling
train_and_predict_rnn(rnn, get_params, init_rnn_state, num_hiddens,
                      vocab_size, device, corpus_indices, idx_to_char,
                      char_to_idx, False, num_epochs, num_steps, lr,
                      clipping_theta, batch_size, pred_period, pred_len,
                      prefixes)
Output:

epoch 50, perplexity 60.294393, time 0.74 sec
 - 分開 我想要你想 我不要再想 我不要再想 我不要再想 我不要再想 我不要再想 我不要再想 我不要再想 我
 - 不分開 我想要你 你有了 別不我的可愛女人 壞壞的讓我瘋狂的可愛女人 壞壞的讓我瘋狂的可愛女人 壞壞的讓我
epoch 100, perplexity 7.141162, time 0.72 sec
 - 分開 我已要再愛 我不要再想 我不 我不 我不要再想 我不 我不 我不要 愛情我的見快就像龍卷風(fēng) 離能開
 - 不分開柳 你天黃一個(gè)棍 后知哈兮 快使用雙截棍 哼哼哈兮 快使用雙截棍 哼哼哈兮 快使用雙截棍 哼哼哈兮
epoch 150, perplexity 2.090277, time 0.73 sec
 - 分開 我已要這是你在著 不想我都做得到 但那個(gè)人已經(jīng)不是我 沒有你在 我卻多難熬 沒有你在我有多難熬多
 - 不分開覺 你已經(jīng)離 我想再好 這樣心中 我一定帶我 我的完空 不你是風(fēng) 一一彩縱 在人心中 我一定帶我媽走
epoch 200, perplexity 1.305391, time 0.77 sec
 - 分開 我已要這樣牽看你的手 它一定實(shí)現(xiàn)它一定像現(xiàn) 載著你 彷彿載著陽光 不管到你留都是晴天 蝴蝶自在飛力
 - 不分開覺 你已經(jīng)離開我 不知不覺 我跟了這節(jié)奏 后知后覺 又過了一個(gè)秋 后知后覺 我該好好生活 我該好好生
epoch 250, perplexity 1.230800, time 0.79 sec
 - 分開 我不要 是你看的太快了悲慢 擔(dān)心今手身會(huì)大早 其么我也睡不著 昨晚夢(mèng)里你來找 我才 原來我只想
 - 不分開覺 你在經(jīng)離開我 不知不覺 你知了有節(jié)奏 后知后覺 后知了一個(gè)秋 后知后覺 我該好好生活 我該好好生

Concise Implementation of the RNN

Defining the Model

We use nn.RNN from PyTorch to build the recurrent neural network. In this section we focus on the following constructor parameters of nn.RNN:

  • input_size – The number of expected features in the input x
  • hidden_size – The number of features in the hidden state h
  • nonlinearity – The non-linearity to use. Can be either 'tanh' or 'relu'. Default: 'tanh'
  • batch_first – If True, then the input and output tensors are provided as (batch_size, num_steps, input_size). Default: False

Here batch_first determines the input shape. We use the default value False, so the corresponding input shape is (num_steps, batch_size, input_size).

The parameters of the forward function are:

  • input of shape (num_steps, batch_size, input_size): tensor containing the features of the input sequence.
  • h_0 of shape (num_layers * num_directions, batch_size, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided. If the RNN is bidirectional, num_directions should be 2, else it should be 1.

The return values of the forward function are:

  • output of shape (num_steps, batch_size, num_directions * hidden_size): tensor containing the output features (h_t) from the last layer of the RNN, for each t.
  • h_n of shape (num_layers * num_directions, batch_size, hidden_size): tensor containing the hidden state for t = num_steps.

Now we construct an nn.RNN instance and use a simple example to inspect the output shapes.

rnn_layer = nn.RNN(input_size=vocab_size, hidden_size=num_hiddens)
num_steps, batch_size = 35, 2
X = torch.rand(num_steps, batch_size, vocab_size)
state = None
Y, state_new = rnn_layer(X, state)
print(Y.shape, state_new.shape)
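# Expected shapes (1 layer, unidirectional, num_hiddens = 256):
# Y: (num_steps, batch_size, hidden_size) -> torch.Size([35, 2, 256])
# state_new: (num_layers * num_directions, batch_size, hidden_size) -> torch.Size([1, 2, 256])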
# Next we define a complete RNN-based language model.
class RNNModel(nn.Module):
    def __init__(self, rnn_layer, vocab_size):
        super(RNNModel, self).__init__()
        self.rnn = rnn_layer
        self.hidden_size = rnn_layer.hidden_size * (2 if rnn_layer.bidirectional else 1) 
        self.vocab_size = vocab_size
        self.dense = nn.Linear(self.hidden_size, vocab_size)

    def forward(self, inputs, state):
        # inputs.shape: (batch_size, num_steps)
        X = to_onehot(inputs, vocab_size)
        X = torch.stack(X)  # X.shape: (num_steps, batch_size, vocab_size)
        hiddens, state = self.rnn(X, state)
        hiddens = hiddens.view(-1, hiddens.shape[-1])  # hiddens.shape: (num_steps * batch_size, hidden_size)
        output = self.dense(hiddens)
        return output, state
# Similarly, we implement a prediction function; it differs from the earlier one in the forward computation and the hidden-state initialization.
def predict_rnn_pytorch(prefix, num_chars, model, vocab_size, device, idx_to_char,
                      char_to_idx):
    state = None
    output = [char_to_idx[prefix[0]]]  # output records prefix plus the num_chars predicted characters
    for t in range(num_chars + len(prefix) - 1):
        X = torch.tensor([output[-1]], device=device).view(1, 1)
        (Y, state) = model(X, state)  # The forward computation does not require passing in model parameters
        if t < len(prefix) - 1:
            output.append(char_to_idx[prefix[t + 1]])
        else:
            output.append(Y.argmax(dim=1).item())
    return ''.join([idx_to_char[i] for i in output])
# Make one prediction with a randomly initialized model.
model = RNNModel(rnn_layer, vocab_size).to(device)
predict_rnn_pytorch('分開', 10, model, vocab_size, device, idx_to_char, char_to_idx)
# Next we implement the training function; only consecutive sampling is used here.
def train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                                corpus_indices, idx_to_char, char_to_idx,
                                num_epochs, num_steps, lr, clipping_theta,
                                batch_size, pred_period, pred_len, prefixes):
    loss = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device)
    for epoch in range(num_epochs):
        l_sum, n, start = 0.0, 0, time.time()
        data_iter = d2l.data_iter_consecutive(corpus_indices, batch_size, num_steps, device)  # consecutive sampling
        state = None
        for X, Y in data_iter:
            if state is not None:
                # Detach the hidden state from the computation graph
                if isinstance(state, tuple):  # LSTM state is a tuple (h, c)
                    state[0].detach_()
                    state[1].detach_()
                else:
                    state.detach_()
            (output, state) = model(X, state) # output.shape: (num_steps * batch_size, vocab_size)
            y = torch.flatten(Y.T)
            l = loss(output, y.long())
            
            optimizer.zero_grad()
            l.backward()
            grad_clipping(model.parameters(), clipping_theta, device)
            optimizer.step()
            l_sum += l.item() * y.shape[0]
            n += y.shape[0]
        

        if (epoch + 1) % pred_period == 0:
            print('epoch %d, perplexity %f, time %.2f sec' % (
                epoch + 1, math.exp(l_sum / n), time.time() - start))
            for prefix in prefixes:
                print(' -', predict_rnn_pytorch(
                    prefix, pred_len, model, vocab_size, device, idx_to_char,
                    char_to_idx))
# Train the model.
num_epochs, batch_size, lr, clipping_theta = 250, 32, 1e-3, 1e-2
pred_period, pred_len, prefixes = 50, 50, ['分開', '不分開']
train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                            corpus_indices, idx_to_char, char_to_idx,
                            num_epochs, num_steps, lr, clipping_theta,
                            batch_size, pred_period, pred_len, prefixes)

Problems with RNNs: gradients tend to decay or explode during backpropagation through time (BPTT).
Gated recurrent networks: designed to capture dependencies over large time-step distances in a time series.
RNN:

[Figure: RNN]

H_t = \phi(X_t W_{xh} + H_{t-1} W_{hh} + b_h)
GRU:

[Figure: GRU]

R_t = \sigma(X_t W_{xr} + H_{t-1} W_{hr} + b_r)\\ Z_t = \sigma(X_t W_{xz} + H_{t-1} W_{hz} + b_z)\\ \tilde{H}_t = \tanh(X_t W_{xh} + (R_t \odot H_{t-1}) W_{hh} + b_h)\\ H_t = Z_t \odot H_{t-1} + (1 - Z_t) \odot \tilde{H}_t
  • The reset gate helps capture short-term dependencies in a time series;
  • The update gate helps capture long-term dependencies in a time series (see the sketch after this list).
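
A minimal numeric sketch of the last equation above (hypothetical gate values, unrelated to the trained model): when the update gate Z is close to 1, H_t stays close to H_{t-1}, which is how the GRU carries information across many time steps.

import torch

H_prev = torch.tensor([[1.0, -2.0]])   # previous hidden state H_{t-1}
H_tilda = torch.tensor([[5.0, 5.0]])   # candidate hidden state computed from the current input
Z = torch.tensor([[0.9, 0.9]])         # update gate close to 1
H = Z * H_prev + (1 - Z) * H_tilda     # the last GRU equation above
print(H)                               # tensor([[ 1.4000, -1.3000]]): mostly the previous state survives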

Parameter Initialization

import numpy as np  # np.random.normal is used below

(corpus_indices, char_to_idx, idx_to_char, vocab_size) = d2l.load_data_jay_lyrics()
num_inputs, num_hiddens, num_outputs = vocab_size, 256, vocab_size
print('will use', device)
def get_params():  
    def _one(shape):
        ts = torch.tensor(np.random.normal(0, 0.01, size=shape), device=device, dtype=torch.float32)  # normal distribution
        return torch.nn.Parameter(ts, requires_grad=True)
    def _three():
        return (_one((num_inputs, num_hiddens)),
                _one((num_hiddens, num_hiddens)),
                torch.nn.Parameter(torch.zeros(num_hiddens, device=device, dtype=torch.float32), requires_grad=True))
     
    W_xz, W_hz, b_z = _three()  # Update gate parameters
    W_xr, W_hr, b_r = _three()  # Reset gate parameters
    W_xh, W_hh, b_h = _three()  # Candidate hidden state parameters
    
    # Output layer parameters
    W_hq = _one((num_hiddens, num_outputs))
    b_q = torch.nn.Parameter(torch.zeros(num_outputs, device=device, dtype=torch.float32), requires_grad=True)
    return nn.ParameterList([W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q])

def init_gru_state(batch_size, num_hiddens, device):   # initialize the hidden state
    return (torch.zeros((batch_size, num_hiddens), device=device), )

The GRU Model

def gru(inputs, state, params):
    W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q = params
    H, = state
    outputs = []
    for X in inputs:
        Z = torch.sigmoid(torch.matmul(X, W_xz) + torch.matmul(H, W_hz) + b_z)
        R = torch.sigmoid(torch.matmul(X, W_xr) + torch.matmul(H, W_hr) + b_r)
        H_tilda = torch.tanh(torch.matmul(X, W_xh) + R * torch.matmul(H, W_hh) + b_h)
        H = Z * H + (1 - Z) * H_tilda
        Y = torch.matmul(H, W_hq) + b_q
        outputs.append(Y)
    return outputs, (H,)

Training the Model

num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2
pred_period, pred_len, prefixes = 40, 50, ['分開', '不分開']
d2l.train_and_predict_rnn(gru, get_params, init_gru_state, num_hiddens,
                          vocab_size, device, corpus_indices, idx_to_char,
                          char_to_idx, False, num_epochs, num_steps, lr,
                          clipping_theta, batch_size, pred_period, pred_len,
                          prefixes)
Concise Implementation
num_hiddens=256
num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2
pred_period, pred_len, prefixes = 40, 50, ['分開', '不分開']

lr = 1e-2  # note the adjusted learning rate
gru_layer = nn.GRU(input_size=vocab_size, hidden_size=num_hiddens)
model = d2l.RNNModel(gru_layer, vocab_size).to(device)
d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                                corpus_indices, idx_to_char, char_to_idx,
                                num_epochs, num_steps, lr, clipping_theta,
                                batch_size, pred_period, pred_len, prefixes)

LSTM

  • Long short-term memory (LSTM):
    Forget gate: controls the memory cell from the previous time step
    Input gate: controls the input at the current time step
    Output gate: controls the flow from the memory cell to the hidden state
    Memory cell: a special kind of hidden state that carries the flow of information across time steps
[Figure: LSTM]

I_t = \sigma(X_t W_{xi} + H_{t-1} W_{hi} + b_i)\\ F_t = \sigma(X_t W_{xf} + H_{t-1} W_{hf} + b_f)\\ O_t = \sigma(X_t W_{xo} + H_{t-1} W_{ho} + b_o)\\ \tilde{C}_t = \tanh(X_t W_{xc} + H_{t-1} W_{hc} + b_c)\\ C_t = F_t \odot C_{t-1} + I_t \odot \tilde{C}_t\\ H_t = O_t \odot \tanh(C_t)
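
Analogously to the GRU sketch above, here is a minimal sketch of the cell update (hypothetical gate values): with the forget gate near 1 and the input gate near 0, the memory cell passes through almost unchanged, which is what lets the LSTM preserve information over long spans.

import torch

C_prev = torch.tensor([[2.0, -1.0]])   # previous memory cell C_{t-1}
C_tilda = torch.tensor([[5.0, 5.0]])   # candidate memory cell
F = torch.tensor([[0.95, 0.95]])       # forget gate near 1
I = torch.tensor([[0.05, 0.05]])       # input gate near 0
C = F * C_prev + I * C_tilda           # the cell update equation above
print(C)                               # tensor([[ 2.1500, -0.7000]]): close to the previous cell state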

num_inputs, num_hiddens, num_outputs = vocab_size, 256, vocab_size
print('will use', device)

def get_params():
    def _one(shape):
        ts = torch.tensor(np.random.normal(0, 0.01, size=shape), device=device, dtype=torch.float32)
        return torch.nn.Parameter(ts, requires_grad=True)
    def _three():
        return (_one((num_inputs, num_hiddens)),
                _one((num_hiddens, num_hiddens)),
                torch.nn.Parameter(torch.zeros(num_hiddens, device=device, dtype=torch.float32), requires_grad=True))
    
    W_xi, W_hi, b_i = _three()  # Input gate parameters
    W_xf, W_hf, b_f = _three()  # Forget gate parameters
    W_xo, W_ho, b_o = _three()  # Output gate parameters
    W_xc, W_hc, b_c = _three()  # Candidate memory cell parameters
    
    # Output layer parameters
    W_hq = _one((num_hiddens, num_outputs))
    b_q = torch.nn.Parameter(torch.zeros(num_outputs, device=device, dtype=torch.float32), requires_grad=True)
    return nn.ParameterList([W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o, W_xc, W_hc, b_c, W_hq, b_q])

def init_lstm_state(batch_size, num_hiddens, device):
    return (torch.zeros((batch_size, num_hiddens), device=device), 
            torch.zeros((batch_size, num_hiddens), device=device))
The LSTM Model
def lstm(inputs, state, params):
    [W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o, W_xc, W_hc, b_c, W_hq, b_q] = params
    (H, C) = state
    outputs = []
    for X in inputs:
        I = torch.sigmoid(torch.matmul(X, W_xi) + torch.matmul(H, W_hi) + b_i)
        F = torch.sigmoid(torch.matmul(X, W_xf) + torch.matmul(H, W_hf) + b_f)
        O = torch.sigmoid(torch.matmul(X, W_xo) + torch.matmul(H, W_ho) + b_o)
        C_tilda = torch.tanh(torch.matmul(X, W_xc) + torch.matmul(H, W_hc) + b_c)
        C = F * C + I * C_tilda
        H = O * C.tanh()
        Y = torch.matmul(H, W_hq) + b_q
        outputs.append(Y)
    return outputs, (H, C)
# Train the model
num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2
pred_period, pred_len, prefixes = 40, 50, ['分開', '不分開']

d2l.train_and_predict_rnn(lstm, get_params, init_lstm_state, num_hiddens,
                          vocab_size, device, corpus_indices, idx_to_char,
                          char_to_idx, False, num_epochs, num_steps, lr,
                          clipping_theta, batch_size, pred_period, pred_len,
                          prefixes)

Concise Implementation

num_hiddens=256
num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2
pred_period, pred_len, prefixes = 40, 50, ['分開', '不分開']

lr = 1e-2  # note the adjusted learning rate
lstm_layer = nn.LSTM(input_size=vocab_size, hidden_size=num_hiddens)
model = d2l.RNNModel(lstm_layer, vocab_size).to(device)
d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                                corpus_indices, idx_to_char, char_to_idx,
                                num_epochs, num_steps, lr, clipping_theta,
                                batch_size, pred_period, pred_len, prefixes)

Deep and Bidirectional Recurrent Networks

Deep Recurrent Neural Networks

[Figure: deep RNN]

\boldsymbol{H}_t^{(1)} = \phi(\boldsymbol{X}_t \boldsymbol{W}_{xh}^{(1)} + \boldsymbol{H}_{t-1}^{(1)} \boldsymbol{W}_{hh}^{(1)} + \boldsymbol{b}_h^{(1)})\\ \boldsymbol{H}_t^{(\ell)} = \phi(\boldsymbol{H}_t^{(\ell-1)} \boldsymbol{W}_{xh}^{(\ell)} + \boldsymbol{H}_{t-1}^{(\ell)} \boldsymbol{W}_{hh}^{(\ell)} + \boldsymbol{b}_h^{(\ell)})\\ \boldsymbol{O}_t = \boldsymbol{H}_t^{(L)} \boldsymbol{W}_{hq} + \boldsymbol{b}_q

num_hiddens=256
num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2
pred_period, pred_len, prefixes = 40, 50, ['分開', '不分開']

lr = 1e-2  # note the adjusted learning rate

gru_layer = nn.LSTM(input_size=vocab_size, hidden_size=num_hiddens, num_layers=2)  # a 2-layer deep RNN (an LSTM, despite the variable name)
model = d2l.RNNModel(gru_layer, vocab_size).to(device)
d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                                corpus_indices, idx_to_char, char_to_idx,
                                num_epochs, num_steps, lr, clipping_theta,
                                batch_size, pred_period, pred_len, prefixes)
gru_layer = nn.LSTM(input_size=vocab_size, hidden_size=num_hiddens, num_layers=6)  # the same model with 6 layers
model = d2l.RNNModel(gru_layer, vocab_size).to(device)
d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                                corpus_indices, idx_to_char, char_to_idx,
                                num_epochs, num_steps, lr, clipping_theta,
                                batch_size, pred_period, pred_len, prefixes)

Bidirectional Recurrent Neural Networks

[Figure: bidirectional RNN]

num_hiddens=128
num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e-2, 1e-2
pred_period, pred_len, prefixes = 40, 50, ['分開', '不分開']

lr = 1e-2  # note the adjusted learning rate

gru_layer = nn.GRU(input_size=vocab_size, hidden_size=num_hiddens,bidirectional=True)
model = d2l.RNNModel(gru_layer, vocab_size).to(device)
d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                                corpus_indices, idx_to_char, char_to_idx,
                                num_epochs, num_steps, lr, clipping_theta,
                                batch_size, pred_period, pred_len, prefixes)

epoch 40, perplexity 1.001741, time 0.91 sec
 - 分開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開
 - 不分開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開
epoch 80, perplexity 1.000520, time 0.91 sec
 - 分開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開
 - 不分開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開
epoch 120, perplexity 1.000255, time 0.99 sec
 - 分開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開
 - 不分開球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我
epoch 160, perplexity 1.000151, time 0.92 sec
 - 分開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開始開
 - 不分開球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我球我