Recurrent neural networks (RNNs) give a neural network memory; for sequential data, an RNN can achieve better results.
For more details, see the official site:
* PyTorch official website
MNIST handwritten digits
import torch
from torch import nn
import torch.utils.data as Data
from torch.autograd import Variable
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
torch.manual_seed(1) # reproducible
# Hyper Parameters
EPOCH = 1               # train over the whole dataset this many times; just once here to save time
BATCH_SIZE = 64
TIME_STEP = 28          # rnn time steps / image height
INPUT_SIZE = 28         # rnn input size per step / pixels per image row
LR = 0.01               # learning rate
DOWNLOAD_MNIST = True   # set to False if you have already downloaded MNIST
# MNIST handwritten digits
train_data = torchvision.datasets.MNIST(
    root='./mnist/',            # where to save / look for the dataset
    train=True,                 # this is training data
    transform=torchvision.transforms.ToTensor(),    # converts a PIL.Image or numpy.ndarray
                                                     # to torch.FloatTensor (C x H x W), normalized to [0.0, 1.0]
    download=DOWNLOAD_MNIST,    # download it if not present; skip if already downloaded
)

Here it is again: a sample image from the MNIST dataset.
Black pixels have value 0; white pixels have values greater than 0.
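To display one such sample with the matplotlib we imported above (a quick sketch; train_data / train_labels are the legacy torchvision attribute names, matching the test_data / test_labels used below):
plt.imshow(train_data.train_data[0].numpy(), cmap='gray')
plt.title('label: %i' % train_data.train_labels[0])
plt.show()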
As before, besides the training data we also load some test data to check how well the network has learned.
test_data = torchvision.datasets.MNIST(root='./mnist/', train=False)
# mini-batch training: BATCH_SIZE samples, 1 channel, 28x28 -> (64, 1, 28, 28)
train_loader = Data.DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)
# to save time, only test on the first 2000 samples
test_x = Variable(torch.unsqueeze(test_data.test_data, dim=1), volatile=True).type(torch.FloatTensor)[:2000]/255.   # shape from (2000, 28, 28) to (2000, 1, 28, 28), values in range [0, 1]
test_y = test_data.test_labels[:2000]
The RNN model
As before, we use a class to build the RNN model. The overall flow of this RNN is:
* (input0, state0) -> LSTM -> (output0, state1);
* (input1, state1) -> LSTM -> (output1, state2);
* ...
* (inputN, stateN) -> LSTM -> (outputN, stateN+1);
* outputN -> Linear -> prediction.
At each time step the LSTM analyzes the current input, merges it with its understanding of the previous time steps, and produces an updated understanding (memory) of everything seen so far, which is then passed on to the next time step.
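To make this per-time-step flow concrete, here is a minimal standalone sketch (not part of the tutorial code) that feeds one row at a time into an nn.LSTM and carries the (h_n, h_c) state forward by hand:
lstm = nn.LSTM(input_size=28, hidden_size=64, batch_first=True)
x = Variable(torch.randn(1, 28, 28))    # one fake "image": (batch, time_step, input_size)
state = None                            # None -> start from an all-zero state
for t in range(28):                     # feed one row (time step) at a time
    out, state = lstm(x[:, t:t+1, :], state)    # (output_t, state_{t+1})
# out[:, 0, :] from the last step equals r_out[:, -1, :] of a single full-sequence call lstm(x, None)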
class RNN(nn.Module):
    def __init__(self):
        super(RNN, self).__init__()

        self.rnn = nn.LSTM(         # an LSTM works much better here than nn.RNN()
            input_size=28,          # number of pixels in each image row
            hidden_size=64,         # rnn hidden unit
            num_layers=1,           # number of stacked RNN layers
            batch_first=True,       # input & output have batch size as the first dimension, e.g. (batch, time_step, input_size)
        )
        self.out = nn.Linear(64, 10)    # output layer

    def forward(self, x):
        # x shape (batch, time_step, input_size)
        # r_out shape (batch, time_step, output_size)
        # h_n shape (n_layers, batch, hidden_size)   the LSTM has two hidden states: h_n (hidden state) and h_c (cell state)
        # h_c shape (n_layers, batch, hidden_size)
        r_out, (h_n, h_c) = self.rnn(x, None)   # None means the initial hidden state is all zeros
        # take r_out at the last time step
        # here r_out[:, -1, :] has the same values as h_n
        out = self.out(r_out[:, -1, :])
        return out
rnn = RNN()
print(rnn)
"""
RNN (
(rnn): LSTM(28, 64, batch_first=True)
(out): Linear (64 -> 10)
)
"""
Training
We treat each image as data over time: each row of pixels is the input at one time step, so reading the whole image means reading its rows from top to bottom. The RNN's output at the last time step is then used to decide which digit the image shows.
optimizer = torch.optim.Adam(rnn.parameters(), lr=LR) # optimize all parameters
loss_func = nn.CrossEntropyLoss() # the target label is not one-hotted
# training and testing
for epoch in range(EPOCH):
    for step, (x, y) in enumerate(train_loader):   # gives batch data
        b_x = Variable(x.view(-1, 28, 28))         # reshape x to (batch, time_step, input_size)
        b_y = Variable(y)                          # batch y

        output = rnn(b_x)                          # rnn output
        loss = loss_func(output, b_y)              # cross entropy loss
        optimizer.zero_grad()                      # clear gradients for this training step
        loss.backward()                            # backpropagation, compute gradients
        optimizer.step()                           # apply gradients
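        # The "test accuracy" values in the log below come from an evaluation step
        # roughly like this (a sketch; the 50-step interval is an assumption):
        if step % 50 == 0:
            test_output = rnn(test_x.view(-1, 28, 28))     # (2000, 10)
            pred_y = torch.max(test_output, 1)[1].data.numpy().squeeze()
            accuracy = sum(pred_y == test_y.numpy()) / float(test_y.size(0))
            print('Epoch: ', epoch, '| train loss: %.4f' % loss.data[0], '| test accuracy: %.2f' % accuracy)   # use loss.item() on newer PyTorch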
"""
...
Epoch: 0 | train loss: 0.0945 | test accuracy: 0.94
Epoch: 0 | train loss: 0.0984 | test accuracy: 0.94
Epoch: 0 | train loss: 0.0332 | test accuracy: 0.95
Epoch: 0 | train loss: 0.1868 | test accuracy: 0.96
"""
Finally, take 10 samples from the test data and see whether the predictions are correct:
test_output = rnn(test_x[:10].view(-1, 28, 28))
pred_y = torch.max(test_output, 1)[1].data.numpy().squeeze()
print(pred_y, 'prediction number')
print(test_y[:10], 'real number')
"""
[7 2 1 0 4 1 4 9 5 9] prediction number
[7 2 1 0 4 1 4 9 5 9] real number
"""