Training a CIFAR-10 Neural-Network Classifier with PyTorch

What is the CIFAR-10 dataset?

CIFAR-10 is a dataset of 60,000 32x32-pixel color images in 10 classes.
[Figure: the CIFAR-10 dataset]

Each class has 6,000 images; the dataset is split into 50,000 training images and 10,000 test images. CIFAR-10 homepage: http://www.cs.toronto.edu/~kriz/cifar.html
The files are packaged as 5 training batches and 1 test batch, each containing 10,000 images.

[Figure: CIFAR-10 data batches]

Step 1: download the dataset and load it into memory. The images are converted to tensors and normalized. With Normalize(mean=0.5, std=0.5) applied per channel, pixel values in [0, 1] are mapped to [-1, 1]; centering the inputs around zero speeds up training and can improve the model's generalization.

Why normalize the input data?

ToTensor first rescales pixel values from the 0-255 range into 0-1, which speeds up the network's convergence. An image whose pixels lie in 0-1 is still a valid image and can be displayed normally.
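The two-stage mapping (0-255 → 0-1 → -1-1) can be checked with a few sample pixel values; this plain-NumPy sketch is illustrative and not part of the original training code:

```python
import numpy as np

# Three raw 8-bit pixel values spanning the full range
raw = np.array([0, 128, 255], dtype=np.float32)

# ToTensor rescales to [0, 1]
scaled = raw / 255.0

# Normalize with mean=0.5, std=0.5 then maps [0, 1] onto [-1, 1]
normalized = (scaled - 0.5) / 0.5

print(normalized)  # [-1.0, ~0.004, 1.0]
```

The extreme values 0 and 255 land exactly on -1 and 1, and a mid-gray pixel lands near 0, which is the centering effect described above.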

import torch
import torchvision # torchvision: image datasets and processing utilities
import torchvision.transforms as transforms 
N = 64
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)

trainloader = torch.utils.data.DataLoader(trainset, batch_size=N, shuffle=True, num_workers=0)

testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)

testloader = torch.utils.data.DataLoader(testset, batch_size=N, shuffle=False, num_workers=0)

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

Output:

Using downloaded and verified file: ./data\cifar-10-python.tar.gz
Extracting ./data\cifar-10-python.tar.gz to ./data
Files already downloaded and verified

Step 2: view a random batch of images.

import matplotlib.pyplot as plt 
import numpy as np 
# an image with pixel values in [0, 1] is still valid and displays normally
def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()


# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch; a second next() would also consume a different batch than the one shown
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(N)))

Output: [image: a grid of sample training images]

Step 3: define the convolutional neural network. Note that you are responsible for tracking how the spatial dimensions change: you must know exactly what size an image has after each layer. For example, after a 5x5 convolution with stride 1 and no padding, a 32x32 input becomes 28x28.
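The size bookkeeping follows the standard output-size formula floor((W - K + 2P) / S) + 1; a small helper (illustrative, not part of the original code) traces an input through the four spatial layers of the network defined below:

```python
def conv_out(size, kernel, stride=1, padding=0):
    # Standard output-size formula: floor((W - K + 2P) / S) + 1
    return (size - kernel + 2 * padding) // stride + 1

s = 32
s = conv_out(s, 5)     # conv1 (5x5, stride 1): 32 -> 28
s = conv_out(s, 2, 2)  # pool1 (2x2, stride 2): 28 -> 14
s = conv_out(s, 5)     # conv2 (5x5, stride 1): 14 -> 10
s = conv_out(s, 2, 2)  # pool2 (2x2, stride 2): 10 -> 5
print(s)               # 5, so the flattened size is 16 * 5 * 5 = 400
```

This is why the first fully connected layer takes 16*5*5 = 400 input features.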

import torch.nn as nn
import torch.nn.functional as F 

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)  # 32 -> 28
        self.pool1 = nn.MaxPool2d(2)     # 28 -> 14
        self.conv2 = nn.Conv2d(6, 16, 5) # 14 -> 10
        self.pool2 = nn.MaxPool2d(2)     # 10 -> 5
        self.fc1   = nn.Linear(16*5*5, 120) # flattened: 16*5*5 = 400
        self.fc2   = nn.Linear(120, 84)
        self.fc3   = nn.Linear(84, 10)   # 10 classes

    def forward(self, x):
        x = self.pool1(F.relu(self.conv1(x)))
        x = self.pool2(F.relu(self.conv2(x)))
        x = x.view(-1, 16*5*5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
print(net)

Output:

Net(
(conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
(pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=400, out_features=120, bias=True)
(fc2): Linear(in_features=120, out_features=84, bias=True)
(fc3): Linear(in_features=84, out_features=10, bias=True)
)

Step 4: define the loss function and train the network. For a classification task, use cross-entropy loss; Adam is chosen as the optimizer.
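Conceptually, cross-entropy loss applies a softmax to the network's raw logits and takes the negative log-probability of the true class. A minimal NumPy sketch of that computation for a single sample (illustrative only; nn.CrossEntropyLoss below does this for a whole batch):

```python
import numpy as np

def cross_entropy(logits, target):
    # Numerically stable softmax
    z = logits - logits.max()
    probs = np.exp(z) / np.exp(z).sum()
    # Negative log-likelihood of the true class
    return -np.log(probs[target])

logits = np.array([2.0, 1.0, 0.1])
print(cross_entropy(logits, 0))  # ~0.417: class 0 already dominates
print(cross_entropy(logits, 2))  # ~2.317: the true class got a low score
```

The loss is small when the network assigns high probability to the correct class and grows as that probability shrinks.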

import torch.optim as optim
criterion = nn.CrossEntropyLoss() # cross-entropy loss for classification
# torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)
optimizer = optim.Adam(net.parameters()) # remaining arguments left at their defaults

for epoch in range(3):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data # inputs: torch.Tensor of shape torch.Size([N, 3, 32, 32])
        optimizer.zero_grad() # clear the gradients from the previous step
        output = net(inputs)  # forward pass
        loss = criterion(output, labels) # compute the loss
        loss.backward()       # backward pass
        running_loss += loss.item() # accumulate the loss
        optimizer.step()      # update the network parameters

        if i % 200 == 199:    # with batch_size=64 an epoch has only ~782 batches, so report every 200
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 200)) # average loss over the interval
            running_loss = 0.0
print('Finished Training')

Output:

[1, 2000] loss: 1.644
[2, 2000] loss: 1.421
[3, 2000] loss: 1.211
Finished Training

Step 5: save the trained model. PyTorch supports two approaches:

  • save only the model parameters (the state_dict)
  • save the entire model (including the parameters)
WEIGHT = './cifar_net_weights.pth'
MODEL  = './cifar_net_model.pth'
torch.save(net.state_dict(), WEIGHT) # save only the model parameters
torch.save(net, MODEL)               # save the entire model (including parameters)
[Figure: saving the model]

Opening the model file and the weights file in Netron shows the difference:

[Figure: model file vs. weights file in Netron]

Step 6: run inference with the saved model file.

import torch,torchvision
import torch.nn as nn
import torch.nn.functional as F 
import torchvision.transforms as transforms 

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)  # 32 -> 28
        self.pool1 = nn.MaxPool2d(2)     # 28 -> 14
        self.conv2 = nn.Conv2d(6, 16, 5) # 14 -> 10
        self.pool2 = nn.MaxPool2d(2)     # 10 -> 5
        self.fc1   = nn.Linear(16*5*5, 120) # flattened: 16*5*5 = 400
        self.fc2   = nn.Linear(120, 84)
        self.fc3   = nn.Linear(84, 10)   # 10 classes

    def forward(self, x):
        x = self.pool1(F.relu(self.conv1(x)))
        x = self.pool2(F.relu(self.conv2(x)))
        x = x.view(-1, 16*5*5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

MODEL  = './cifar_net_model.pth'
net = torch.load(MODEL) # unpickling the full model requires the Net class definition above; on PyTorch >= 2.6 pass weights_only=False
net.eval()              # switch to inference mode
print(net)

N = 16
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=N, shuffle=False, num_workers=0)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)  # .data is unnecessary inside torch.no_grad()
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))

class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(labels.size(0)):  # count every sample; range(4) would skip most of a 16-image batch
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1

for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))

Output:

Net(
(conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
(pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=400, out_features=120, bias=True)
(fc2): Linear(in_features=120, out_features=84, bias=True)
(fc3): Linear(in_features=84, out_features=10, bias=True)
)
Files already downloaded and verified
Accuracy of the network on the 10000 test images: 58 %
Accuracy of plane : 53 %
Accuracy of car : 70 %
Accuracy of bird : 44 %
Accuracy of cat : 41 %
Accuracy of deer : 42 %
Accuracy of dog : 40 %
Accuracy of frog : 81 %
Accuracy of horse : 66 %
Accuracy of ship : 80 %
Accuracy of truck : 69 %

Puzzle: why does loading the full model file still require the definition of the Net class? That seems counterintuitive! The explanation: torch.save(net, PATH) serializes the model with pickle, and pickle records only a reference to the class (its module and name), not its source code, so the class must be importable when the file is loaded. This is one reason saving the state_dict is the recommended approach.
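The recommended state_dict workflow can be sketched with a toy module (TinyNet and its sizes are illustrative stand-ins for Net; an in-memory buffer stands in for the .pth file):

```python
import io
import torch
import torch.nn as nn

# A toy module standing in for Net; the class must exist at load time
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

net = TinyNet()

# Save only the parameters (recommended)
buf = io.BytesIO()
torch.save(net.state_dict(), buf)
buf.seek(0)

# To restore: instantiate the class first, then load the weights into it
restored = TinyNet()
restored.load_state_dict(torch.load(buf))
restored.eval()  # switch to inference mode before evaluating

x = torch.randn(1, 4)
print(torch.allclose(net(x), restored(x)))  # True: identical parameters
```

Either way the class definition is needed, but with the state_dict you create the object yourself instead of relying on pickle to reconstruct it.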

Step 7: train on the GPU.

  • net.to(device) # move the network onto the GPU
  • inputs, labels = data[0].to(device), data[1].to(device) # move each batch onto the GPU
    In testing, the GPU did not speed up training by much, because this network is shallow and narrow; widen and deepen the network and the GPU's speedup becomes apparent.