PyTorch TensorBoard Support
Created On: Nov 30, 2021 | Last Updated: May 29, 2024 | Last Verified: Nov 05, 2024
Follow along with the video below or on `youtube <https://www.youtube.com/watch?v=6CEld3hZgqc>`__.
Before You Start
To run this tutorial, you'll need to install PyTorch, TorchVision, Matplotlib, and TensorBoard.
With ``conda``:
conda install pytorch torchvision -c pytorch
conda install matplotlib tensorboard
With ``pip``:
pip install torch torchvision matplotlib tensorboard
Once the dependencies are installed, restart this notebook in the Python environment where you installed them.
Introduction
In this notebook, we'll be training a variant of LeNet-5 against the Fashion-MNIST dataset. Fashion-MNIST is a set of image tiles depicting various garments, with ten class labels indicating the type of garment depicted.
# PyTorch model and training necessities
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# Image datasets and image manipulation
import torchvision
import torchvision.transforms as transforms
# Image display
import matplotlib.pyplot as plt
import numpy as np
# PyTorch TensorBoard support
from torch.utils.tensorboard import SummaryWriter
# In case you are using an environment that has TensorFlow installed,
# such as Google Colab, uncomment the following code to avoid
# a bug with saving embeddings to your TensorBoard directory
# import tensorflow as tf
# import tensorboard as tb
# tf.io.gfile = tb.compat.tensorflow_stub.io.gfile
Showing Images in TensorBoard
Let's start by adding sample images from our dataset to TensorBoard:
# Gather datasets and prepare them for consumption
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5,), (0.5,))])

# Store separate training and validations splits in ./data
training_set = torchvision.datasets.FashionMNIST('./data',
                                                 download=True,
                                                 train=True,
                                                 transform=transform)
validation_set = torchvision.datasets.FashionMNIST('./data',
                                                   download=True,
                                                   train=False,
                                                   transform=transform)

training_loader = torch.utils.data.DataLoader(training_set,
                                              batch_size=4,
                                              shuffle=True,
                                              num_workers=2)
validation_loader = torch.utils.data.DataLoader(validation_set,
                                                batch_size=4,
                                                shuffle=False,
                                                num_workers=2)

# Class labels
classes = ('T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
           'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle Boot')

# Helper function for inline image display
def matplotlib_imshow(img, one_channel=False):
    if one_channel:
        img = img.mean(dim=0)
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    if one_channel:
        plt.imshow(npimg, cmap="Greys")
    else:
        plt.imshow(np.transpose(npimg, (1, 2, 0)))

# Extract a batch of 4 images
dataiter = iter(training_loader)
images, labels = next(dataiter)

# Create a grid from the images and show them
img_grid = torchvision.utils.make_grid(images)
matplotlib_imshow(img_grid, one_channel=True)

Above, we used TorchVision and Matplotlib to create a visual grid of a minibatch of our input data. Below, we use the ``add_image()`` call on ``SummaryWriter`` to log the image for consumption by TensorBoard, and we also call ``flush()`` to make sure it's written to disk right away.
# Default log_dir argument is "runs" - but it's good to be specific
# torch.utils.tensorboard.SummaryWriter is imported above
writer = SummaryWriter('runs/fashion_mnist_experiment_1')
# Write image data to TensorBoard log dir
writer.add_image('Four Fashion-MNIST Images', img_grid)
writer.flush()
# To view, start TensorBoard on the command line with:
# tensorboard --logdir=runs
# ...and open a browser tab to http://localhost:6006/
If you start TensorBoard at the command line and open it in a new browser tab (usually at `localhost:6006 <localhost:6006>`__), you should see the image grid under the IMAGES tab.
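Alternatively, if you're working in a Jupyter or Colab notebook, the TensorBoard notebook extension (shipped with the ``tensorboard`` package) can display the dashboard inline instead of launching it from the command line. A minimal sketch, assuming that extension is available in your environment:

# Optional: display TensorBoard inline in a Jupyter/Colab notebook
# (uses the notebook extension that ships with the tensorboard package)
%load_ext tensorboard
%tensorboard --logdir runs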
Graphing Scalars to Visualize Training
TensorBoard is useful for tracking the progress and efficacy of your training. Below, we'll run a training loop, track some metrics, and save the data for TensorBoard's consumption.
Let's define a model to categorize our image tiles, and an optimizer and loss function for training:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 4 * 4, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 4 * 4)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
Now let's train a single epoch, and evaluate the training vs. validation set losses every 1000 batches:
print(len(validation_loader))
for epoch in range(1):  # loop over the dataset multiple times
    running_loss = 0.0

    for i, data in enumerate(training_loader, 0):
        # basic training loop
        inputs, labels = data
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if i % 1000 == 999:    # Every 1000 mini-batches...
            print('Batch {}'.format(i + 1))
            # Check against the validation set
            running_vloss = 0.0

            # In evaluation mode some model-specific operations can be omitted, e.g. dropout layers
            net.train(False)  # Switching to evaluation mode, e.g. turning off regularisation
            for j, vdata in enumerate(validation_loader, 0):
                vinputs, vlabels = vdata
                voutputs = net(vinputs)
                vloss = criterion(voutputs, vlabels)
                running_vloss += vloss.item()
            net.train(True)  # Switching back to training mode, e.g. turning on regularisation

            avg_loss = running_loss / 1000
            avg_vloss = running_vloss / len(validation_loader)

            # Log the running loss averaged per batch
            writer.add_scalars('Training vs. Validation Loss',
                               {'Training': avg_loss, 'Validation': avg_vloss},
                               epoch * len(training_loader) + i)

            running_loss = 0.0

print('Finished Training')

writer.flush()
2500
Batch 1000
Batch 2000
Batch 3000
Batch 4000
Batch 5000
Batch 6000
Batch 7000
Batch 8000
Batch 9000
Batch 10000
Batch 11000
Batch 12000
Batch 13000
Batch 14000
Batch 15000
Finished Training
Switch to your open TensorBoard and have a look at the SCALARS tab.
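If you only need to track a single series (rather than grouping several series together as ``add_scalars()`` does above), ``add_scalar()`` logs one tag at a time. Below is an illustrative sketch, not part of the tutorial's loop above: it assumes you also want to record validation accuracy after training, and the ``correct``/``total`` bookkeeping is added purely for this example.

# Illustrative sketch: log a single scalar series with add_scalar(),
# e.g. validation accuracy computed after the training loop above.
correct, total = 0, 0
net.train(False)
with torch.no_grad():
    for vinputs, vlabels in validation_loader:
        voutputs = net(vinputs)
        predictions = voutputs.argmax(dim=1)
        correct += (predictions == vlabels).sum().item()
        total += vlabels.size(0)
net.train(True)

writer.add_scalar('Accuracy/validation', correct / total, epoch)
writer.flush()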
Visualizing Your Model
TensorBoard can also be used to examine the data flow within your model. To do this, call the ``add_graph()`` method with a model and sample input:
# Again, grab a single mini-batch of images
dataiter = iter(training_loader)
images, labels = next(dataiter)
# add_graph() will trace the sample input through your model,
# and render it as a graph.
writer.add_graph(net, images)
writer.flush()
When you switch over to TensorBoard, you should see a GRAPHS tab. Double-click the "NET" node to see the layers and data flow within your model.
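Beyond the graph itself, TensorBoard's HISTOGRAMS tab can show how weight distributions evolve during training. The following is a minimal sketch (an addition, not part of the original tutorial) that logs the distribution of every parameter of the model defined above; you would typically call it once per epoch inside the training loop so the histograms build up over time.

# Illustrative sketch: log the distribution of each parameter tensor.
# Calling this periodically (e.g. once per epoch) populates the
# HISTOGRAMS tab with per-layer weight distributions over time.
for name, param in net.named_parameters():
    writer.add_histogram(name, param, epoch)
writer.flush()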
Visualizing Your Dataset with Embeddings
The 28-by-28 image tiles we're using can be modeled as 784-dimensional vectors (28 * 28 = 784). It can be instructive to project this to a lower-dimensional representation. The ``add_embedding()`` method will automatically project a set of data onto the three dimensions with highest variance, and display them as an interactive 3D chart.
Below, we'll take a sample of our data and generate such an embedding:
# Select a random subset of data and corresponding labels
def select_n_random(data, labels, n=100):
    assert len(data) == len(labels)

    perm = torch.randperm(len(data))
    return data[perm][:n], labels[perm][:n]

# Extract a random subset of data
images, labels = select_n_random(training_set.data, training_set.targets)

# get the class labels for each image
class_labels = [classes[label] for label in labels]

# log embeddings
features = images.view(-1, 28 * 28)
writer.add_embedding(features,
                     metadata=class_labels,
                     label_img=images.unsqueeze(1))
writer.flush()
writer.close()
Now if you switch to TensorBoard and select the PROJECTOR tab, you should see a 3D representation of the projection. You can rotate and zoom the model. Examine it at large and small scales, and see whether you can spot patterns in the projected data and the clustering of labels.
For better visibility, it's recommended to:
Select "label" from the "Color by" drop-down on the left.
Toggle the night mode icon along the top to place the light-colored images on a dark background.
Other Resources
For more information, have a look at:
PyTorch documentation on `torch.utils.tensorboard.SummaryWriter <https://pytorch.org/docs/stable/tensorboard.html?highlight=summarywriter>`__
For more information about TensorBoard, see the `TensorBoard documentation <https://www.tensorflow.org/tensorboard>`__
Total running time of the script: (9 minutes 24.977 seconds)