PyTorch Visualization with tensorboardX (unsorted notes)
pytorch tensorboard_tutorial
1. Model Visualization
torchsummary is a simple tool for visualizing a network's structure.
Installation:
pip install torchsummary
Source code repository
There is also an enhanced version: torchsummaryX.
Example 1: Visualizing VGG
>>> import torch, torchvision
>>> model = torchvision.models.vgg.vgg16()
>>> from torchsummary import summary
>>> summary(model, (3, 224, 224)) # (model, input_size, batch_size=-1, device="cuda")
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 64, 224, 224] 1,792
ReLU-2 [-1, 64, 224, 224] 0
Conv2d-3 [-1, 64, 224, 224] 36,928
ReLU-4 [-1, 64, 224, 224] 0
MaxPool2d-5 [-1, 64, 112, 112] 0
Conv2d-6 [-1, 128, 112, 112] 73,856
ReLU-7 [-1, 128, 112, 112] 0
Conv2d-8 [-1, 128, 112, 112] 147,584
ReLU-9 [-1, 128, 112, 112] 0
MaxPool2d-10 [-1, 128, 56, 56] 0
Conv2d-11 [-1, 256, 56, 56] 295,168
ReLU-12 [-1, 256, 56, 56] 0
Conv2d-13 [-1, 256, 56, 56] 590,080
ReLU-14 [-1, 256, 56, 56] 0
Conv2d-15 [-1, 256, 56, 56] 590,080
ReLU-16 [-1, 256, 56, 56] 0
MaxPool2d-17 [-1, 256, 28, 28] 0
Conv2d-18 [-1, 512, 28, 28] 1,180,160
ReLU-19 [-1, 512, 28, 28] 0
Conv2d-20 [-1, 512, 28, 28] 2,359,808
ReLU-21 [-1, 512, 28, 28] 0
Conv2d-22 [-1, 512, 28, 28] 2,359,808
ReLU-23 [-1, 512, 28, 28] 0
MaxPool2d-24 [-1, 512, 14, 14] 0
Conv2d-25 [-1, 512, 14, 14] 2,359,808
ReLU-26 [-1, 512, 14, 14] 0
Conv2d-27 [-1, 512, 14, 14] 2,359,808
ReLU-28 [-1, 512, 14, 14] 0
Conv2d-29 [-1, 512, 14, 14] 2,359,808
ReLU-30 [-1, 512, 14, 14] 0
MaxPool2d-31 [-1, 512, 7, 7] 0
Linear-32 [-1, 4096] 102,764,544
ReLU-33 [-1, 4096] 0
Dropout-34 [-1, 4096] 0
Linear-35 [-1, 4096] 16,781,312
ReLU-36 [-1, 4096] 0
Dropout-37 [-1, 4096] 0
Linear-38 [-1, 1000] 4,097,000
================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 218.59
Params size (MB): 527.79
Estimated Total Size (MB): 746.96
----------------------------------------------------------------
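The Param # column can be reproduced by hand: a convolution layer has one weight per (output channel × input channel × kernel element) plus one bias per output channel, and a linear layer has a weight matrix plus a bias vector. A small sanity check in pure Python (the helper names are my own):

```python
def conv2d_params(in_ch, out_ch, k):
    # weights (out_ch * in_ch * k * k) plus one bias per output channel
    return out_ch * in_ch * k * k + out_ch

def linear_params(in_f, out_f):
    # weight matrix plus bias vector
    return in_f * out_f + out_f

# Conv2d-1: 3 -> 64 channels, 3x3 kernel
print(conv2d_params(3, 64, 3))           # 1792
# Conv2d-3: 64 -> 64 channels, 3x3 kernel
print(conv2d_params(64, 64, 3))          # 36928
# Linear-32: flattened 512*7*7 features -> 4096
print(linear_params(512 * 7 * 7, 4096))  # 102764544
```

These match the first two Conv2d rows and the first Linear row in the table above.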
Example 2: Visualizing a custom network
import torch.nn as nn

def conv_block(in_channels, out_channels):
    # Conv-BN-ReLU-MaxPool block (structure inferred from the summary output below)
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, 3, padding=1),
        nn.BatchNorm2d(out_channels),
        nn.ReLU(),
        nn.MaxPool2d(2)
    )

class Convnet(nn.Module):  # subclass nn.Module
    def __init__(self, x_dim, hid_dim=64, z_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            # four convolutional blocks
            conv_block(x_dim, hid_dim),
            conv_block(hid_dim, hid_dim),
            conv_block(hid_dim, hid_dim),
            conv_block(hid_dim, z_dim)
        )

    def forward(self, x):
        x = self.encoder(x)
        return x.view(x.size(0), -1)  # flatten to (batch, features)

from torchsummary import summary
model = Convnet(x_dim=3)
summary(model=model, input_size=(3, 64, 64), device="cpu")  # run on CPU
# Result:
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 64, 64, 64] 1,792
BatchNorm2d-2 [-1, 64, 64, 64] 128
ReLU-3 [-1, 64, 64, 64] 0
MaxPool2d-4 [-1, 64, 32, 32] 0
Conv2d-5 [-1, 64, 32, 32] 36,928
BatchNorm2d-6 [-1, 64, 32, 32] 128
ReLU-7 [-1, 64, 32, 32] 0
MaxPool2d-8 [-1, 64, 16, 16] 0
Conv2d-9 [-1, 64, 16, 16] 36,928
BatchNorm2d-10 [-1, 64, 16, 16] 128
ReLU-11 [-1, 64, 16, 16] 0
MaxPool2d-12 [-1, 64, 8, 8] 0
Conv2d-13 [-1, 64, 8, 8] 36,928
BatchNorm2d-14 [-1, 64, 8, 8] 128
ReLU-15 [-1, 64, 8, 8] 0
MaxPool2d-16 [-1, 64, 4, 4] 0
================================================================
Total params: 113,088
Trainable params: 113,088
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.05
Forward/backward pass size (MB): 8.63
Params size (MB): 0.43
Estimated Total Size (MB): 9.11
----------------------------------------------------------------
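The size summary at the bottom follows directly from float32 element counts (4 bytes per element, divided by 1024² for MB). A quick check of the input and parameter sizes reported above, in pure Python:

```python
BYTES_PER_FLOAT32 = 4
MB = 1024 ** 2

# Input size: a single (3, 64, 64) float32 tensor
input_mb = 3 * 64 * 64 * BYTES_PER_FLOAT32 / MB
print(round(input_mb, 2))   # 0.05

# Params size: 113,088 float32 parameters
params_mb = 113088 * BYTES_PER_FLOAT32 / MB
print(round(params_mb, 2))  # 0.43
```

The forward/backward estimate is obtained the same way, by summing the element counts of every layer's output (counted twice, for activations and gradients).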
2. Image Visualization
1. Install the Visdom module
pip install visdom
2. Start the server
Keep the server running while your program executes (by default it serves at http://localhost:8097):
python -m visdom.server
3. Example 1 (single-channel 28×28 images)
from visdom import Visdom
vis = Visdom()
# Display a single image: a tensor of shape (C, H, W), e.g. (1, 28, 28)
vis.image(img)
# Display a batch of images: a tensor of shape (N, C, H, W), e.g. (5, 1, 28, 28)
vis.images(imgs)
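`vis.images` expects an N×C×H×W array or tensor, so a grayscale batch loaded as (N, H, W) needs a channel axis inserted first. A minimal sketch with NumPy (the variable names are my own; random data stands in for real images):

```python
import numpy as np

batch = np.random.rand(5, 28, 28)  # 5 grayscale images, no channel axis
imgs = batch[:, None, :, :]        # -> (5, 1, 28, 28), the layout vis.images expects
print(imgs.shape)                  # (5, 1, 28, 28)
# vis.images(imgs)  # would render a 5-image grid in the browser
```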