.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "beginner/introyt/trainingyt.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here <sphx_glr_download_beginner_introyt_trainingyt.py>`
        to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_beginner_introyt_trainingyt.py:

`Introduction <introyt1_tutorial.html>`_ ||
`Tensors <tensors_deeper_tutorial.html>`_ ||
`Autograd <autogradyt_tutorial.html>`_ ||
`Building Models <modelsyt_tutorial.html>`_ ||
`TensorBoard Support <tensorboardyt_tutorial.html>`_ ||
**Training Models** ||
`Model Understanding <captumyt.html>`_

Training with PyTorch
=====================

Follow along with the video below or on `youtube <https://www.youtube.com/watch?v=jF43_wj_DCQ>`__.

.. raw:: html

   <div style="margin-top:10px; margin-bottom:10px;">
     <iframe width="560" height="315" src="https://www.youtube.com/embed/jF43_wj_DCQ" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
   </div>
Introduction
------------

In past videos, we’ve discussed and demonstrated:

- Building models with the neural network layers and functions of the ``torch.nn`` module
- The mechanics of automated gradient computation, which is central to gradient-based model training
- Using TensorBoard to visualize training progress and other activities

In this video, we’ll be adding some new tools to your inventory:

- We’ll get familiar with the dataset and dataloader abstractions, and how they ease the process of feeding data to your model during a training loop
- We’ll discuss specific loss functions and when to use them
- We’ll look at PyTorch optimizers, which implement algorithms to adjust model weights based on the outcome of a loss function

Finally, we’ll pull all of these together and see a full PyTorch
training loop in action.


Dataset and DataLoader
----------------------

The ``Dataset`` and ``DataLoader`` classes encapsulate the process of
pulling your data from storage and exposing it to your training loop in
batches.

The ``Dataset`` is responsible for accessing and processing single
instances of data.

The ``DataLoader`` pulls instances of data from the ``Dataset`` (either
automatically or with a sampler that you define), collects them in
batches, and returns them for consumption by your training loop. The
``DataLoader`` works with all kinds of datasets, regardless of the type
of data they contain.

For this tutorial, we’ll be using the Fashion-MNIST dataset provided by
TorchVision. We use ``torchvision.transforms.Normalize()`` to
zero-center and normalize the distribution of the image tile content,
and download both training and validation data splits.

.. GENERATED FROM PYTHON SOURCE LINES 65-96

.. code-block:: default

    import torch
    import torchvision
    import torchvision.transforms as transforms

    # PyTorch TensorBoard support
    from torch.utils.tensorboard import SummaryWriter
    from datetime import datetime


    transform = transforms.Compose(
        [transforms.ToTensor(),
        transforms.Normalize((0.5,), (0.5,))])

    # Create datasets for training & validation, download if necessary
    training_set = torchvision.datasets.FashionMNIST('./data', train=True, transform=transform, download=True)
    validation_set = torchvision.datasets.FashionMNIST('./data', train=False, transform=transform, download=True)

    # Create data loaders for our datasets; shuffle for training, not for validation
    training_loader = torch.utils.data.DataLoader(training_set, batch_size=4, shuffle=True)
    validation_loader = torch.utils.data.DataLoader(validation_set, batch_size=4, shuffle=False)

    # Class labels
    classes = ('T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle Boot')

    # Report split sizes
    print('Training set has {} instances'.format(len(training_set)))
    print('Validation set has {} instances'.format(len(validation_set)))

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Training set has 60000 instances
    Validation set has 10000 instances


The Model
---------

The model we’ll use in this example is a variant of LeNet-5 - it should
be familiar if you’ve watched the previous videos in this series.

.. code-block:: default

    import torch.nn as nn
    import torch.nn.functional as F

    # PyTorch models inherit from torch.nn.Module
    class GarmentClassifier(nn.Module):
        def __init__(self):
            super(GarmentClassifier, self).__init__()
            self.conv1 = nn.Conv2d(1, 6, 5)
            self.pool = nn.MaxPool2d(2, 2)
            self.conv2 = nn.Conv2d(6, 16, 5)
            self.fc1 = nn.Linear(16 * 4 * 4, 120)
            self.fc2 = nn.Linear(120, 84)
            self.fc3 = nn.Linear(84, 10)

        def forward(self, x):
            x = self.pool(F.relu(self.conv1(x)))
            x = self.pool(F.relu(self.conv2(x)))
            x = x.view(-1, 16 * 4 * 4)
            x = F.relu(self.fc1(x))
            x = F.relu(self.fc2(x))
            x = self.fc3(x)
            return x


    model = GarmentClassifier()


Loss Function
-------------

For this example, we’ll be using a cross-entropy loss. For
demonstration purposes, we’ll create batches of dummy output and label
values, run them through the loss function, and examine the result.

.. code-block:: default

    loss_fn = torch.nn.CrossEntropyLoss()

    # NB: Loss functions expect data in batches, so we're creating batches of 4
    # Represents the model's confidence in each of the 10 classes for a given input
    dummy_outputs = torch.rand(4, 10)
    # Represents the correct class among the 10 being tested
    dummy_labels = torch.tensor([1, 5, 3, 7])

    loss = loss_fn(dummy_outputs, dummy_labels)
    print('Total loss for this batch: {}'.format(loss.item()))


Optimizer
---------

For this example, we’ll be using simple `stochastic gradient descent
<https://pytorch.org/docs/stable/generated/torch.optim.SGD.html>`__ with
momentum.

It can be instructive to try some variations on this optimization
scheme:

- Learning rate determines the size of the steps the optimizer takes. What does a different learning rate do to your training results, in terms of accuracy and convergence time?
- Momentum nudges the optimizer in the direction of strongest gradient over multiple steps. What does changing this value do to your results?
- Try some different optimization algorithms, such as averaged SGD, Adagrad, or Adam. How do your results differ? (A sketch of how to construct these follows below.)
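Each of these experiments is a one-line change. As a minimal sketch
(the variable names and learning rates here are illustrative starting
points, not tuned values from the tutorial), the alternative optimizers
are constructed the same way as the SGD optimizer below:

.. code:: python

    # Drop-in alternatives to the SGD optimizer constructed below.
    # Each adjusts the model's weights from gradients, with a different update rule.
    avg_sgd_optimizer = torch.optim.ASGD(model.parameters(), lr=0.001)    # averaged SGD
    adagrad_optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)  # per-parameter adaptive rates
    adam_optimizer = torch.optim.Adam(model.parameters(), lr=0.0003)      # adaptive moment estimation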
.. GENERATED FROM PYTHON SOURCE LINES 200-205

.. code-block:: default

    # Optimizers specified in the torch.optim package
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

.. GENERATED FROM PYTHON SOURCE LINES 206-225

The Training Loop
-----------------

Below, we have a function that performs one training epoch. It
enumerates data from the DataLoader, and on each pass of the loop does
the following:

- Gets a batch of training data from the DataLoader
- Zeros the optimizer’s gradients
- Performs an inference - that is, gets predictions from the model for an input batch
- Calculates the loss for that set of predictions vs. the labels on the dataset
- Calculates the backward gradients over the learning weights
- Tells the optimizer to perform one learning step - that is, adjust the model’s learning weights based on the observed gradients for this batch, according to the optimization algorithm we chose
- Reports the loss once every 1000 batches
- Finally, reports the average per-batch loss over the last 1000 batches, for comparison with a validation run

.. GENERATED FROM PYTHON SOURCE LINES 225-262

.. code-block:: default

    def train_one_epoch(epoch_index, tb_writer):
        running_loss = 0.
        last_loss = 0.

        # Here, we use enumerate(training_loader) instead of
        # iter(training_loader) so that we can track the batch
        # index and do some intra-epoch reporting
        for i, data in enumerate(training_loader):
            # Every data instance is an input + label pair
            inputs, labels = data

            # Zero your gradients for every batch!
            optimizer.zero_grad()

            # Make predictions for this batch
            outputs = model(inputs)

            # Compute the loss and its gradients
            loss = loss_fn(outputs, labels)
            loss.backward()

            # Adjust learning weights
            optimizer.step()

            # Gather data and report
            running_loss += loss.item()
            if i % 1000 == 999:
                last_loss = running_loss / 1000 # loss per batch
                print('  batch {} loss: {}'.format(i + 1, last_loss))
                tb_x = epoch_index * len(training_loader) + i + 1
                tb_writer.add_scalar('Loss/train', last_loss, tb_x)
                running_loss = 0.

        return last_loss

.. GENERATED FROM PYTHON SOURCE LINES 263-276

Per-Epoch Activity
~~~~~~~~~~~~~~~~~~

There are a couple of things we’ll want to do once per epoch:

- Perform validation by checking our relative loss on a set of data that was not used for training, and report this
- Save a copy of the model

Here, we’ll do our reporting in TensorBoard. This will require going to
the command line to start TensorBoard, and opening it in another
browser tab.
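TensorBoard itself is launched from a shell, not from Python. Assuming
the default ``runs/`` log directory that the ``SummaryWriter`` below
writes into, starting it typically looks like this (it prints a
localhost URL to open in a browser tab):

.. code:: sh

    tensorboard --logdir=runs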
.. GENERATED FROM PYTHON SOURCE LINES 276-326

.. code-block:: default

    # Initializing in a separate cell so we can easily add more epochs to the same run
    timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
    writer = SummaryWriter('runs/fashion_trainer_{}'.format(timestamp))
    epoch_number = 0

    EPOCHS = 5

    best_vloss = 1_000_000.

    for epoch in range(EPOCHS):
        print('EPOCH {}:'.format(epoch_number + 1))

        # Make sure gradient tracking is on, and do a pass over the data
        model.train(True)
        avg_loss = train_one_epoch(epoch_number, writer)

        running_vloss = 0.0
        # Set the model to evaluation mode, disabling dropout and using population
        # statistics for batch normalization.
        model.eval()

        # Disable gradient computation and reduce memory consumption.
        with torch.no_grad():
            for i, vdata in enumerate(validation_loader):
                vinputs, vlabels = vdata
                voutputs = model(vinputs)
                vloss = loss_fn(voutputs, vlabels)
                running_vloss += vloss

        avg_vloss = running_vloss / (i + 1)
        print('LOSS train {} valid {}'.format(avg_loss, avg_vloss))

        # Log the running loss averaged per batch
        # for both training and validation
        writer.add_scalars('Training vs. Validation Loss',
                           { 'Training' : avg_loss, 'Validation' : avg_vloss },
                           epoch_number + 1)
        writer.flush()

        # Track best performance, and save the model's state
        if avg_vloss < best_vloss:
            best_vloss = avg_vloss
            model_path = 'model_{}_{}'.format(timestamp, epoch_number)
            torch.save(model.state_dict(), model_path)

        epoch_number += 1

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    EPOCH 1:
      batch 1000 loss: 1.859723868340254
      batch 2000 loss: 0.7959158919476904
      batch 3000 loss: 0.6875963613968342
      batch 4000 loss: 0.5840691136796958
      batch 5000 loss: 0.601861707421951
      batch 6000 loss: 0.5326420218963176
      batch 7000 loss: 0.5101273439126089
      batch 8000 loss: 0.4920893395068124
      batch 9000 loss: 0.45734640963794665
      batch 10000 loss: 0.4638966364953085
      batch 11000 loss: 0.4171614876713138
      batch 12000 loss: 0.4280265563330031
      batch 13000 loss: 0.4263071584069985
      batch 14000 loss: 0.4181580100507708
      batch 15000 loss: 0.42254892712844594
    LOSS train 0.42254892712844594 valid 0.4007205665111542
    EPOCH 2:
      batch 1000 loss: 0.3819776755711646
      batch 2000 loss: 0.3976921703386761
      batch 3000 loss: 0.37814622786745894
      batch 4000 loss: 0.34922556551004524
      batch 5000 loss: 0.34570265819173074
      batch 6000 loss: 0.3456719932517153
      batch 7000 loss: 0.34924780977396586
      batch 8000 loss: 0.35666742020787207
      batch 9000 loss: 0.33993772263024585
      batch 10000 loss: 0.3685673026727891
      batch 11000 loss: 0.3419286552860576
      batch 12000 loss: 0.33543308569528746
      batch 13000 loss: 0.33815630553881054
      batch 14000 loss: 0.3307116681025509
      batch 15000 loss: 0.3606875884762267
    LOSS train 0.3606875884762267 valid 0.3472764194011688
    EPOCH 3:
      batch 1000 loss: 0.30639552796969655
      batch 2000 loss: 0.3228924395108479
      batch 3000 loss: 0.31235727290814974
      batch 4000 loss: 0.3145854706429818
      batch 5000 loss: 0.29222741585151746
      batch 6000 loss: 0.31345307654022325
      batch 7000 loss: 0.3163325209467439
      batch 8000 loss: 0.31647060124657583
      batch 9000 loss: 0.3141732101450179
      batch 10000 loss: 0.299438458041157
      batch 11000 loss: 0.30074762070312866
      batch 12000 loss: 0.29664640293602135
      batch 13000 loss: 0.3008534389541455
      batch 14000 loss: 0.3202067443535416
      batch 15000 loss: 0.3115060192462843
    LOSS train 0.3115060192462843 valid 0.33710092306137085
    EPOCH 4:
      batch 1000 loss: 0.27951306609585846
      batch 2000 loss: 0.29738874051375025
      batch 3000 loss: 0.28281411081092667
      batch 4000 loss: 0.28956373607418934
      batch 5000 loss: 0.28590615158868604
      batch 6000 loss: 0.2835932457768722
      batch 7000 loss: 0.27401943222882985
      batch 8000 loss: 0.2870593838140994
      batch 9000 loss: 0.27542450427323456
      batch 10000 loss: 0.28151207812947177
      batch 11000 loss: 0.27685102666457534
      batch 12000 loss: 0.2891393224728527
      batch 13000 loss: 0.2877112180689146
      batch 14000 loss: 0.281157614742333
      batch 15000 loss: 0.2708676661506033
    LOSS train 0.2708676661506033 valid 0.30610325932502747
    EPOCH 5:
      batch 1000 loss: 0.2748342432942154
      batch 2000 loss: 0.2569611764227302
      batch 3000 loss: 0.24103483629236688
      batch 4000 loss: 0.2569761824092675
      batch 5000 loss: 0.2605063391393851
      batch 6000 loss: 0.2804330715298693
      batch 7000 loss: 0.25689922451737585
      batch 8000 loss: 0.2736012614666906
      batch 9000 loss: 0.2698981114967537
      batch 10000 loss: 0.26709053535146543
      batch 11000 loss: 0.2632426246944069
      batch 12000 loss: 0.27027270019240224
      batch 13000 loss: 0.2651329760397275
      batch 14000 loss: 0.280449686764925
      batch 15000 loss: 0.2526728889870938
    LOSS train 0.2526728889870938 valid 0.28390395641326904
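The loop above tracks only loss. If you’d also like a validation
accuracy figure, a minimal sketch along the same lines (reusing
``model`` and ``validation_loader`` from above; the variable names are
illustrative, not part of the tutorial script):

.. code:: python

    correct = 0
    total = 0
    model.eval()
    with torch.no_grad():
        for vinputs, vlabels in validation_loader:
            voutputs = model(vinputs)
            # The predicted class is the index of the highest-scoring output
            predictions = voutputs.argmax(dim=1)
            correct += (predictions == vlabels).sum().item()
            total += vlabels.size(0)

    print('Validation accuracy: {:.1f}%'.format(100 * correct / total))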
.. GENERATED FROM PYTHON SOURCE LINES 327-369

To load a saved version of the model:

.. code:: python

    saved_model = GarmentClassifier()
    saved_model.load_state_dict(torch.load(PATH))

Once you’ve loaded the model, it’s ready for whatever you need it for -
more training, inference, or analysis.

Note that if your model has constructor parameters that affect model
structure, you’ll need to provide them and configure the model
identically to the state in which it was saved.

Other Resources
---------------

- Docs on the `data utilities <https://pytorch.org/docs/stable/data.html>`__, including Dataset and DataLoader, at pytorch.org
- A `note on the use of pinned memory <https://pytorch.org/docs/stable/notes/cuda.html#use-pinned-memory-buffers>`__ for GPU training
- Documentation on the datasets available in `TorchVision <https://pytorch.org/vision/stable/datasets.html>`__, `TorchText <https://pytorch.org/text/stable/datasets.html>`__, and `TorchAudio <https://pytorch.org/audio/stable/datasets.html>`__
- Documentation on the `loss functions <https://pytorch.org/docs/stable/nn.html#loss-functions>`__ available in PyTorch
- Documentation on the `torch.optim package <https://pytorch.org/docs/stable/optim.html>`__, which includes optimizers and related tools, such as learning rate scheduling
- A detailed `tutorial on saving and loading models <https://pytorch.org/tutorials/beginner/saving_loading_models.html>`__
- The `Tutorials section of pytorch.org <https://pytorch.org/tutorials/>`__ contains tutorials on a broad variety of training tasks, including classification in different domains, generative adversarial networks, reinforcement learning, and more


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 27 minutes 16.034 seconds)


.. _sphx_glr_download_beginner_introyt_trainingyt.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: trainingyt.py <trainingyt.py>`

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: trainingyt.ipynb <trainingyt.ipynb>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_