.. DO NOT EDIT. .. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY. .. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE: .. "beginner/dcgan_faces_tutorial.py" .. LINE NUMBERS ARE GIVEN BELOW. .. only:: html .. note:: :class: sphx-glr-download-link-note Click :ref:`here <sphx_glr_download_beginner_dcgan_faces_tutorial.py>` to download the full example code .. rst-class:: sphx-glr-example-title .. _sphx_glr_beginner_dcgan_faces_tutorial.py: DCGAN Tutorial ============== **Author**: `Nathan Inkawhich `__ .. GENERATED FROM PYTHON SOURCE LINES 12-114 Introduction ------------ This tutorial will give an introduction to DCGANs through an example. We will train a generative adversarial network (GAN) to generate new celebrities after showing it pictures of many real celebrities. Most of the code here is from the DCGAN implementation in `pytorch/examples `__, and this document will give a thorough explanation of the implementation and shed light on how and why this model works. Don’t worry, no prior knowledge of GANs is required, but a first-timer may need to spend some time reasoning about what is actually happening under the hood. Also, for the sake of time, it will help to have a GPU, or two. Let’s start from the beginning. Generative Adversarial Networks ------------------------------- What is a GAN? ~~~~~~~~~~~~~~ GANs are a framework for teaching a deep learning model to capture the training data distribution so we can generate new data from that same distribution. GANs were invented by Ian Goodfellow in 2014 and first described in the paper `Generative Adversarial Nets `__. They are made of two distinct models, a *generator* and a *discriminator*. The job of the generator is to spawn ‘fake’ images that look like the training images. The job of the discriminator is to look at an image and output whether it is a real training image or a fake image from the generator. During training, the generator is constantly trying to outsmart the discriminator by generating better and better fakes, while the discriminator is working to become a better detective and correctly classify the real and fake images. The equilibrium of this game is reached when the generator is generating perfect fakes that look as if they came directly from the training data, and the discriminator is left to always guess at 50% confidence whether the generator output is real or fake. Now, let’s define some notation to be used throughout the tutorial, starting with the discriminator. Let :math:`x` be data representing an image. :math:`D(x)` is the discriminator network which outputs the (scalar) probability that :math:`x` came from the training data rather than the generator. Here, since we are dealing with images, the input to :math:`D(x)` is an image of CHW size 3x64x64. Intuitively, :math:`D(x)` should be HIGH when :math:`x` comes from the training data and LOW when :math:`x` comes from the generator. :math:`D(x)` can also be thought of as a traditional binary classifier. For the generator’s notation, let :math:`z` be a latent space vector sampled from a standard normal distribution. :math:`G(z)` represents the generator function which maps the latent vector :math:`z` to data-space. The goal of :math:`G` is to estimate the distribution that the training data comes from (:math:`p_{data}`) so it can generate fake samples from that estimated distribution (:math:`p_g`). So, :math:`D(G(z))` is the (scalar) probability that the output of the generator :math:`G` is a real image.
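To make this notation concrete, here is a shape-level sketch with random stand-ins for :math:`G` and :math:`D` (an illustration only; the real networks are defined later in this tutorial):

.. code-block:: default

    import torch

    # Stand-ins for G and D, for shape intuition only; the real networks come later.
    def G(z):
        # maps latent vectors to data-space: (N, 100, 1, 1) -> (N, 3, 64, 64)
        return torch.tanh(torch.randn(z.size(0), 3, 64, 64))

    def D(x):
        # maps images to a scalar probability per sample: (N, 3, 64, 64) -> (N,)
        return torch.sigmoid(torch.randn(x.size(0)))

    z = torch.randn(16, 100, 1, 1)   # a batch of 16 latent vectors
    fake = G(z)                      # shape (16, 3, 64, 64), values in [-1, 1]
    prob = D(fake)                   # shape (16,), each entry in (0, 1)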
As described in `Goodfellow’s paper `__, :math:`D` and :math:`G` play a minimax game in which :math:`D` tries to maximize the probability it correctly classifies reals and fakes (:math:`logD(x)`), and :math:`G` tries to minimize the probability that :math:`D` will predict its outputs are fake (:math:`log(1-D(G(z)))`). From the paper, the GAN loss function is .. math:: \underset{G}{\text{min}} \underset{D}{\text{max}}V(D,G) = \mathbb{E}_{x\sim p_{data}(x)}\big[logD(x)\big] + \mathbb{E}_{z\sim p_{z}(z)}\big[log(1-D(G(z)))\big] In theory, the solution to this minimax game is where :math:`p_g = p_{data}`, and the discriminator guesses randomly if the inputs are real or fake. However, the convergence theory of GANs is still being actively researched and in reality models do not always train to this point. What is a DCGAN? ~~~~~~~~~~~~~~~~ A DCGAN is a direct extension of the GAN described above, except that it explicitly uses convolutional and convolutional-transpose layers in the discriminator and generator, respectively. It was first described by Radford et al. in the paper `Unsupervised Representation Learning With Deep Convolutional Generative Adversarial Networks `__. The discriminator is made up of strided `convolution `__ layers, `batch norm `__ layers, and `LeakyReLU `__ activations. The input is a 3x64x64 image and the output is a scalar probability that the input is from the real data distribution. The generator is comprised of `convolutional-transpose `__ layers, batch norm layers, and `ReLU `__ activations. The input is a latent vector, :math:`z`, that is drawn from a standard normal distribution and the output is a 3x64x64 RGB image. The strided conv-transpose layers allow the latent vector to be transformed into a volume with the same shape as an image. In the paper, the authors also give some tips about how to set up the optimizers, how to calculate the loss functions, and how to initialize the model weights, all of which will be explained in the coming sections. .. GENERATED FROM PYTHON SOURCE LINES 114-141 .. code-block:: default #%matplotlib inline import argparse import os import random import torch import torch.nn as nn import torch.nn.parallel import torch.optim as optim import torch.utils.data import torchvision.datasets as dset import torchvision.transforms as transforms import torchvision.utils as vutils import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation from IPython.display import HTML # Set random seed for reproducibility manualSeed = 999 #manualSeed = random.randint(1, 10000) # use if you want new results print("Random Seed: ", manualSeed) random.seed(manualSeed) torch.manual_seed(manualSeed) torch.use_deterministic_algorithms(True) # Needed for reproducible results .. rst-class:: sphx-glr-script-out .. code-block:: none Random Seed: 999 .. GENERATED FROM PYTHON SOURCE LINES 142-176 Inputs ------ Let’s define some inputs for the run: - ``dataroot`` - the path to the root of the dataset folder. We will talk more about the dataset in the next section. - ``workers`` - the number of worker threads for loading the data with the ``DataLoader``. - ``batch_size`` - the batch size used in training. The DCGAN paper uses a batch size of 128. - ``image_size`` - the spatial size of the images used for training. This implementation defaults to 64x64. If another size is desired, the structures of D and G must be changed. See `here `__ for more details. - ``nc`` - number of color channels in the input images. For color images this is 3.
- ``nz`` - length of latent vector. - ``ngf`` - relates to the depth of feature maps carried through the generator. - ``ndf`` - sets the depth of feature maps propagated through the discriminator. - ``num_epochs`` - number of training epochs to run. Training for longer will probably lead to better results but will also take much longer. - ``lr`` - learning rate for training. As described in the DCGAN paper, this number should be 0.0002. - ``beta1`` - beta1 hyperparameter for Adam optimizers. As described in the paper, this number should be 0.5. - ``ngpu`` - number of GPUs available. If this is 0, the code will run in CPU mode. If this number is greater than 0 it will run on that number of GPUs. .. GENERATED FROM PYTHON SOURCE LINES 176-215 .. code-block:: default # Root directory for dataset dataroot = "data/celeba" # Number of workers for dataloader workers = 2 # Batch size during training batch_size = 128 # Spatial size of training images. All images will be resized to this # size using a transform. image_size = 64 # Number of channels in the training images. For color images this is 3 nc = 3 # Size of z latent vector (i.e. size of generator input) nz = 100 # Size of feature maps in generator ngf = 64 # Size of feature maps in discriminator ndf = 64 # Number of training epochs num_epochs = 5 # Learning rate for optimizers lr = 0.0002 # Beta1 hyperparameter for Adam optimizers beta1 = 0.5 # Number of GPUs available. Use 0 for CPU mode. ngpu = 1 .. GENERATED FROM PYTHON SOURCE LINES 216-245 Data ---- In this tutorial we will use the `Celeb-A Faces dataset `__ which can be downloaded at the linked site, or in `Google Drive `__. The dataset will download as a file named ``img_align_celeba.zip``. Once downloaded, create a directory named ``celeba`` and extract the zip file into that directory. Then, set the ``dataroot`` input for this notebook to the ``celeba`` directory you just created. The resulting directory structure should be: .. code-block:: sh /path/to/celeba -> img_align_celeba -> 188242.jpg -> 173822.jpg -> 284702.jpg -> 537394.jpg ... This is an important step because we will be using the ``ImageFolder`` dataset class, which requires there to be subdirectories in the dataset root folder. Now, we can create the dataset, create the dataloader, set the device to run on, and finally visualize some of the training data. .. GENERATED FROM PYTHON SOURCE LINES 245-271 .. code-block:: default # We can use an image folder dataset the way we have it set up. # Create the dataset dataset = dset.ImageFolder(root=dataroot, transform=transforms.Compose([ transforms.Resize(image_size), transforms.CenterCrop(image_size), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), ])) # Create the dataloader dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True, num_workers=workers) # Decide which device we want to run on device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu") # Plot some training images real_batch = next(iter(dataloader)) plt.figure(figsize=(8,8)) plt.axis("off") plt.title("Training Images") plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=2, normalize=True).cpu(),(1,2,0))) plt.show() .. image-sg:: /beginner/images/sphx_glr_dcgan_faces_tutorial_001.png :alt: Training Images :srcset: /beginner/images/sphx_glr_dcgan_faces_tutorial_001.png :class: sphx-glr-single-img
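Since ``Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))`` maps each channel from :math:`[0,1]` to :math:`[-1,1]`, a small optional check (not part of the original script) can confirm the shape and range of one batch; this matters because the generator’s final ``Tanh`` layer produces outputs in the same :math:`[-1,1]` range:

.. code-block:: default

    # Optional sanity check on a single batch from the dataloader above.
    imgs, _ = next(iter(dataloader))
    print(imgs.shape)                            # torch.Size([128, 3, 64, 64])
    print(imgs.min().item(), imgs.max().item())  # close to -1.0 and 1.0

..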
GENERATED FROM PYTHON SOURCE LINES 272-290 Implementation -------------- With our input parameters set and the dataset prepared, we can now get into the implementation. We will start with the weight initialization strategy, then talk about the generator, discriminator, loss functions, and training loop in detail. Weight Initialization ~~~~~~~~~~~~~~~~~~~~~ From the DCGAN paper, the authors specify that all model weights shall be randomly initialized from a Normal distribution with ``mean=0``, ``stdev=0.02``. The ``weights_init`` function takes an initialized model as input and reinitializes all convolutional, convolutional-transpose, and batch normalization layers to meet these criteria. This function is applied to the models immediately after initialization. .. GENERATED FROM PYTHON SOURCE LINES 290-301 .. code-block:: default # custom weights initialization called on ``netG`` and ``netD`` def weights_init(m): classname = m.__class__.__name__ if classname.find('Conv') != -1: nn.init.normal_(m.weight.data, 0.0, 0.02) elif classname.find('BatchNorm') != -1: nn.init.normal_(m.weight.data, 1.0, 0.02) nn.init.constant_(m.bias.data, 0) .. GENERATED FROM PYTHON SOURCE LINES 302-328 Generator ~~~~~~~~~ The generator, :math:`G`, is designed to map the latent space vector (:math:`z`) to data-space. Since our data are images, converting :math:`z` to data-space means ultimately creating an RGB image with the same size as the training images (i.e. 3x64x64). In practice, this is accomplished through a series of strided two-dimensional convolutional transpose layers, each paired with a 2d batch norm layer and a ReLU activation. The output of the generator is fed through a tanh function to return it to the input data range of :math:`[-1,1]`. It is worth noting the existence of the batch norm functions after the conv-transpose layers, as this is a critical contribution of the DCGAN paper. These layers help with the flow of gradients during training. An image of the generator from the DCGAN paper is shown below. .. figure:: /_static/img/dcgan_generator.png :alt: dcgan_generator Notice how the inputs we set in the input section (``nz``, ``ngf``, and ``nc``) influence the generator architecture in code. ``nz`` is the length of the z input vector, ``ngf`` relates to the size of the feature maps that are propagated through the generator, and ``nc`` is the number of channels in the output image (set to 3 for RGB images). Below is the code for the generator. .. GENERATED FROM PYTHON SOURCE LINES 328-362 .. code-block:: default # Generator Code class Generator(nn.Module): def __init__(self, ngpu): super(Generator, self).__init__() self.ngpu = ngpu self.main = nn.Sequential( # input is Z, going into a convolution nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False), nn.BatchNorm2d(ngf * 8), nn.ReLU(True), # state size. ``(ngf*8) x 4 x 4`` nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf * 4), nn.ReLU(True), # state size. ``(ngf*4) x 8 x 8`` nn.ConvTranspose2d( ngf * 4, ngf * 2, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf * 2), nn.ReLU(True), # state size. ``(ngf*2) x 16 x 16`` nn.ConvTranspose2d( ngf * 2, ngf, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf), nn.ReLU(True), # state size. ``(ngf) x 32 x 32`` nn.ConvTranspose2d( ngf, nc, 4, 2, 1, bias=False), nn.Tanh() # state size. ``(nc) x 64 x 64`` ) def forward(self, input): return self.main(input) .. GENERATED FROM PYTHON SOURCE LINES 363-367 Now, we can instantiate the generator and apply the ``weights_init`` function.
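As an aside, you can verify the ``state size`` comments in the generator code with the transposed-convolution output-size formula :math:`H_{out} = (H_{in} - 1) \times stride - 2 \times padding + kernel\_size` (valid for ``dilation=1`` and ``output_padding=0``); a minimal sketch:

.. code-block:: default

    # Spatial size after each ConvTranspose2d in the generator above.
    def convtranspose2d_out(h_in, kernel_size, stride, padding):
        return (h_in - 1) * stride - 2 * padding + kernel_size

    h = 1  # the latent vector enters as a 1x1 spatial map
    for kernel_size, stride, padding in [(4, 1, 0), (4, 2, 1), (4, 2, 1), (4, 2, 1), (4, 2, 1)]:
        h = convtranspose2d_out(h, kernel_size, stride, padding)
        print(h)  # prints 4, 8, 16, 32, 64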
Check out the printed model to see how the generator object is structured. .. GENERATED FROM PYTHON SOURCE LINES 367-383 .. code-block:: default # Create the generator netG = Generator(ngpu).to(device) # Handle multi-GPU if desired if (device.type == 'cuda') and (ngpu > 1): netG = nn.DataParallel(netG, list(range(ngpu))) # Apply the ``weights_init`` function to randomly initialize all weights # to ``mean=0``, ``stdev=0.02``. netG.apply(weights_init) # Print the model print(netG) .. rst-class:: sphx-glr-script-out .. code-block:: none Generator( (main): Sequential( (0): ConvTranspose2d(100, 512, kernel_size=(4, 4), stride=(1, 1), bias=False) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False) (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True) (6): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False) (7): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (8): ReLU(inplace=True) (9): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False) (10): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (11): ReLU(inplace=True) (12): ConvTranspose2d(64, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False) (13): Tanh() ) ) .. GENERATED FROM PYTHON SOURCE LINES 384-401 Discriminator ~~~~~~~~~~~~~ As mentioned, the discriminator, :math:`D`, is a binary classification network that takes an image as input and outputs a scalar probability that the input image is real (as opposed to fake). Here, :math:`D` takes a 3x64x64 input image, processes it through a series of Conv2d, BatchNorm2d, and LeakyReLU layers, and outputs the final probability through a Sigmoid activation function. This architecture can be extended with more layers if necessary for the problem, but there is significance to the use of the strided convolution, BatchNorm, and LeakyReLUs. The DCGAN paper mentions it is good practice to use strided convolution rather than pooling to downsample because it lets the network learn its own pooling function. Also, batch norm and leaky ReLU functions promote healthy gradient flow, which is critical for the learning process of both :math:`G` and :math:`D`. .. GENERATED FROM PYTHON SOURCE LINES 403-404 Discriminator Code .. GENERATED FROM PYTHON SOURCE LINES 404-434 .. code-block:: default class Discriminator(nn.Module): def __init__(self, ngpu): super(Discriminator, self).__init__() self.ngpu = ngpu self.main = nn.Sequential( # input is ``(nc) x 64 x 64`` nn.Conv2d(nc, ndf, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, inplace=True), # state size. ``(ndf) x 32 x 32`` nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False), nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2, inplace=True), # state size. ``(ndf*2) x 16 x 16`` nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False), nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2, inplace=True), # state size. ``(ndf*4) x 8 x 8`` nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False), nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2, inplace=True), # state size. ``(ndf*8) x 4 x 4`` nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False), nn.Sigmoid() ) def forward(self, input): return self.main(input) .. GENERATED FROM PYTHON SOURCE LINES 435-438 Now, as with the generator, we can create the discriminator, apply the ``weights_init`` function, and print the model’s structure. ..
GENERATED FROM PYTHON SOURCE LINES 438-454 .. code-block:: default # Create the Discriminator netD = Discriminator(ngpu).to(device) # Handle multi-GPU if desired if (device.type == 'cuda') and (ngpu > 1): netD = nn.DataParallel(netD, list(range(ngpu))) # Apply the ``weights_init`` function to randomly initialize all weights # to ``mean=0``, ``stdev=0.02``. netD.apply(weights_init) # Print the model print(netD) .. rst-class:: sphx-glr-script-out .. code-block:: none Discriminator( (main): Sequential( (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False) (1): LeakyReLU(negative_slope=0.2, inplace=True) (2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False) (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): LeakyReLU(negative_slope=0.2, inplace=True) (5): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False) (6): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (7): LeakyReLU(negative_slope=0.2, inplace=True) (8): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False) (9): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (10): LeakyReLU(negative_slope=0.2, inplace=True) (11): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), bias=False) (12): Sigmoid() ) ) .. GENERATED FROM PYTHON SOURCE LINES 455-486 Loss Functions and Optimizers ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ With :math:`D` and :math:`G` set up, we can specify how they learn through the loss functions and optimizers. We will use the Binary Cross Entropy loss (`BCELoss `__) function which is defined in PyTorch as: .. math:: \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right] Notice how this function provides the calculation of both log components in the objective function (i.e. :math:`log(D(x))` and :math:`log(1-D(G(z)))`). We can specify what part of the BCE equation to use with the :math:`y` input. This is accomplished in the training loop which is coming up soon, but it is important to understand how we can choose which component we wish to calculate just by changing :math:`y` (i.e. GT labels). Next, we define our real label as 1 and the fake label as 0. These labels will be used when calculating the losses of :math:`D` and :math:`G`, and this is also the convention used in the original GAN paper. Finally, we set up two separate optimizers, one for :math:`D` and one for :math:`G`. As specified in the DCGAN paper, both are Adam optimizers with learning rate 0.0002 and Beta1 = 0.5. For keeping track of the generator’s learning progression, we will generate a fixed batch of latent vectors that are drawn from a Gaussian distribution (i.e. ``fixed_noise``). In the training loop, we will periodically input this ``fixed_noise`` into :math:`G`, and over the iterations we will see images form out of the noise. .. GENERATED FROM PYTHON SOURCE LINES 486-503 .. code-block:: default # Initialize the ``BCELoss`` function criterion = nn.BCELoss() # Create batch of latent vectors that we will use to visualize # the progression of the generator fixed_noise = torch.randn(64, nz, 1, 1, device=device) # Establish convention for real and fake labels during training real_label = 1. fake_label = 0. # Setup Adam optimizers for both G and D optimizerD = optim.Adam(netD.parameters(), lr=lr, betas=(beta1, 0.999)) optimizerG = optim.Adam(netG.parameters(), lr=lr, betas=(beta1, 0.999))
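To see concretely how the label :math:`y` selects a term of the BCE objective, here is a standalone illustration with hypothetical discriminator outputs (not part of the training script): with :math:`y=1` the loss reduces to :math:`-log(x)`, and with :math:`y=0` it reduces to :math:`-log(1-x)`.

.. code-block:: default

    # The label picks out one term of the BCE loss.
    probs = torch.tensor([0.9, 0.1])          # hypothetical discriminator outputs
    print(criterion(probs, torch.ones(2)))    # mean of -log(x): the real-label term
    print(criterion(probs, torch.zeros(2)))   # mean of -log(1 - x): the fake-label term

..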
GENERATED FROM PYTHON SOURCE LINES 504-572 Training ~~~~~~~~ Finally, now that we have all of the parts of the GAN framework defined, we can train it. Be mindful that training GANs is somewhat of an art form, as incorrect hyperparameter settings lead to mode collapse with little explanation of what went wrong. Here, we will closely follow Algorithm 1 from `Goodfellow’s paper `__, while abiding by some of the best practices shown in `ganhacks `__. Namely, we will “construct different mini-batches for real and fake” images, and also adjust G’s objective function to maximize :math:`log(D(G(z)))`. Training is split up into two main parts. Part 1 updates the Discriminator and Part 2 updates the Generator. **Part 1 - Train the Discriminator** Recall, the goal of training the discriminator is to maximize the probability of correctly classifying a given input as real or fake. In Goodfellow’s terms, we wish to “update the discriminator by ascending its stochastic gradient”. Practically, we want to maximize :math:`log(D(x)) + log(1-D(G(z)))`. Due to the separate mini-batch suggestion from `ganhacks `__, we will calculate this in two steps. First, we will construct a batch of real samples from the training set, forward pass through :math:`D`, calculate the loss (:math:`log(D(x))`), then calculate the gradients in a backward pass. Second, we will construct a batch of fake samples with the current generator, forward pass this batch through :math:`D`, calculate the loss (:math:`log(1-D(G(z)))`), and *accumulate* the gradients with a backward pass. Now, with the gradients accumulated from both the all-real and all-fake batches, we call a step of the Discriminator’s optimizer. **Part 2 - Train the Generator** As stated in the original paper, we want to train the Generator by minimizing :math:`log(1-D(G(z)))` in an effort to generate better fakes. As mentioned, this was shown by Goodfellow to not provide sufficient gradients, especially early in the learning process. As a fix, we instead wish to maximize :math:`log(D(G(z)))`. In the code we accomplish this by: classifying the Generator output from Part 1 with the Discriminator, computing G’s loss *using real labels as GT*, computing G’s gradients in a backward pass, and finally updating G’s parameters with an optimizer step. It may seem counter-intuitive to use the real labels as GT labels for the loss function, but this allows us to use the :math:`log(x)` part of the ``BCELoss`` (rather than the :math:`log(1-x)` part) which is exactly what we want. Finally, we will do some statistics reporting and at the end of each epoch we will push our ``fixed_noise`` batch through the generator to visually track the progress of G’s training. The training statistics reported are: - **Loss_D** - discriminator loss calculated as the sum of losses for the all real and all fake batches (:math:`log(D(x)) + log(1 - D(G(z)))`). - **Loss_G** - generator loss calculated as :math:`log(D(G(z)))` - **D(x)** - the average output (across the batch) of the discriminator for the all real batch. This should start close to 1, then theoretically converge to 0.5 when G gets better. Think about why this is. - **D(G(z))** - average discriminator outputs for the all fake batch. The first number is before D is updated and the second number is after D is updated. These numbers should start near 0 and converge to 0.5 as G gets better. Think about why this is. **Note:** This step might take a while, depending on how many epochs you run and if you removed some data from the dataset. ..
GENERATED FROM PYTHON SOURCE LINES 572-656 .. code-block:: default # Training Loop # Lists to keep track of progress img_list = [] G_losses = [] D_losses = [] iters = 0 print("Starting Training Loop...") # For each epoch for epoch in range(num_epochs): # For each batch in the dataloader for i, data in enumerate(dataloader, 0): ############################ # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z))) ########################### ## Train with all-real batch netD.zero_grad() # Format batch real_cpu = data[0].to(device) b_size = real_cpu.size(0) label = torch.full((b_size,), real_label, dtype=torch.float, device=device) # Forward pass real batch through D output = netD(real_cpu).view(-1) # Calculate loss on all-real batch errD_real = criterion(output, label) # Calculate gradients for D in backward pass errD_real.backward() D_x = output.mean().item() ## Train with all-fake batch # Generate batch of latent vectors noise = torch.randn(b_size, nz, 1, 1, device=device) # Generate fake image batch with G fake = netG(noise) label.fill_(fake_label) # Classify all fake batch with D output = netD(fake.detach()).view(-1) # Calculate D's loss on the all-fake batch errD_fake = criterion(output, label) # Calculate the gradients for this batch, accumulated (summed) with previous gradients errD_fake.backward() D_G_z1 = output.mean().item() # Compute error of D as sum over the fake and the real batches errD = errD_real + errD_fake # Update D optimizerD.step() ############################ # (2) Update G network: maximize log(D(G(z))) ########################### netG.zero_grad() label.fill_(real_label) # fake labels are real for generator cost # Since we just updated D, perform another forward pass of all-fake batch through D output = netD(fake).view(-1) # Calculate G's loss based on this output errG = criterion(output, label) # Calculate gradients for G errG.backward() D_G_z2 = output.mean().item() # Update G optimizerG.step() # Output training stats if i % 50 == 0: print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f' % (epoch, num_epochs, i, len(dataloader), errD.item(), errG.item(), D_x, D_G_z1, D_G_z2)) # Save Losses for plotting later G_losses.append(errG.item()) D_losses.append(errD.item()) # Check how the generator is doing by saving G's output on fixed_noise if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)): with torch.no_grad(): fake = netG(fixed_noise).detach().cpu() img_list.append(vutils.make_grid(fake, padding=2, normalize=True)) iters += 1 .. rst-class:: sphx-glr-script-out .. code-block:: none Starting Training Loop... 
[0/5][0/1583] Loss_D: 1.9253 Loss_G: 4.2295 D(x): 0.4681 D(G(z)): 0.5867 / 0.0234 [0/5][50/1583] Loss_D: 0.1787 Loss_G: 19.3262 D(x): 0.9021 D(G(z)): 0.0000 / 0.0000 [0/5][100/1583] Loss_D: 0.8667 Loss_G: 11.6657 D(x): 0.8051 D(G(z)): 0.0001 / 0.0008 [0/5][150/1583] Loss_D: 1.3525 Loss_G: 12.1388 D(x): 0.9505 D(G(z)): 0.6232 / 0.0000 [0/5][200/1583] Loss_D: 0.2267 Loss_G: 3.0958 D(x): 0.9012 D(G(z)): 0.0884 / 0.0801 [0/5][250/1583] Loss_D: 0.5070 Loss_G: 5.4072 D(x): 0.8867 D(G(z)): 0.2253 / 0.0176 [0/5][300/1583] Loss_D: 1.6674 Loss_G: 3.8464 D(x): 0.3278 D(G(z)): 0.0075 / 0.0428 [0/5][350/1583] Loss_D: 0.4727 Loss_G: 3.7289 D(x): 0.7323 D(G(z)): 0.0545 / 0.0510 [0/5][400/1583] Loss_D: 0.6952 Loss_G: 2.7046 D(x): 0.6052 D(G(z)): 0.0549 / 0.0986 [0/5][450/1583] Loss_D: 0.5542 Loss_G: 3.0846 D(x): 0.7670 D(G(z)): 0.1568 / 0.0664 [0/5][500/1583] Loss_D: 1.0535 Loss_G: 2.1153 D(x): 0.4753 D(G(z)): 0.0334 / 0.1758 [0/5][550/1583] Loss_D: 0.4631 Loss_G: 3.7431 D(x): 0.7778 D(G(z)): 0.1181 / 0.0379 [0/5][600/1583] Loss_D: 0.8609 Loss_G: 3.8593 D(x): 0.5462 D(G(z)): 0.0152 / 0.0599 [0/5][650/1583] Loss_D: 0.4497 Loss_G: 3.3994 D(x): 0.8336 D(G(z)): 0.1825 / 0.0557 [0/5][700/1583] Loss_D: 1.1335 Loss_G: 6.6631 D(x): 0.9307 D(G(z)): 0.5644 / 0.0042 [0/5][750/1583] Loss_D: 0.9320 Loss_G: 2.4560 D(x): 0.5228 D(G(z)): 0.0341 / 0.1436 [0/5][800/1583] Loss_D: 0.3762 Loss_G: 3.7155 D(x): 0.7771 D(G(z)): 0.0367 / 0.0394 [0/5][850/1583] Loss_D: 0.4964 Loss_G: 3.8047 D(x): 0.8701 D(G(z)): 0.2378 / 0.0425 [0/5][900/1583] Loss_D: 0.8660 Loss_G: 2.5298 D(x): 0.5728 D(G(z)): 0.0613 / 0.1060 [0/5][950/1583] Loss_D: 0.6210 Loss_G: 5.1183 D(x): 0.8608 D(G(z)): 0.3201 / 0.0113 [0/5][1000/1583] Loss_D: 0.4514 Loss_G: 3.0069 D(x): 0.8198 D(G(z)): 0.1793 / 0.0816 [0/5][1050/1583] Loss_D: 0.9952 Loss_G: 2.4624 D(x): 0.5886 D(G(z)): 0.2386 / 0.1327 [0/5][1100/1583] Loss_D: 0.3753 Loss_G: 3.4336 D(x): 0.8538 D(G(z)): 0.1545 / 0.0491 [0/5][1150/1583] Loss_D: 1.2277 Loss_G: 8.3980 D(x): 0.9757 D(G(z)): 0.6354 / 0.0004 [0/5][1200/1583] Loss_D: 0.4791 Loss_G: 4.4621 D(x): 0.9104 D(G(z)): 0.2816 / 0.0174 [0/5][1250/1583] Loss_D: 0.9103 Loss_G: 2.8637 D(x): 0.6830 D(G(z)): 0.3030 / 0.0814 [0/5][1300/1583] Loss_D: 0.4706 Loss_G: 3.4780 D(x): 0.7263 D(G(z)): 0.0537 / 0.0564 [0/5][1350/1583] Loss_D: 0.6167 Loss_G: 2.9633 D(x): 0.7014 D(G(z)): 0.1246 / 0.0843 [0/5][1400/1583] Loss_D: 0.4402 Loss_G: 3.6633 D(x): 0.8045 D(G(z)): 0.1519 / 0.0405 [0/5][1450/1583] Loss_D: 0.6162 Loss_G: 3.2270 D(x): 0.7262 D(G(z)): 0.1551 / 0.0698 [0/5][1500/1583] Loss_D: 0.6604 Loss_G: 5.0311 D(x): 0.8329 D(G(z)): 0.3040 / 0.0142 [0/5][1550/1583] Loss_D: 0.6295 Loss_G: 2.5956 D(x): 0.7470 D(G(z)): 0.1827 / 0.1079 [1/5][0/1583] Loss_D: 0.4805 Loss_G: 3.0063 D(x): 0.7542 D(G(z)): 0.0939 / 0.0826 [1/5][50/1583] Loss_D: 0.5213 Loss_G: 3.1918 D(x): 0.7881 D(G(z)): 0.1799 / 0.0701 [1/5][100/1583] Loss_D: 0.8115 Loss_G: 6.1316 D(x): 0.8694 D(G(z)): 0.4134 / 0.0039 [1/5][150/1583] Loss_D: 0.5532 Loss_G: 4.4881 D(x): 0.8302 D(G(z)): 0.2354 / 0.0201 [1/5][200/1583] Loss_D: 0.3384 Loss_G: 4.1441 D(x): 0.9209 D(G(z)): 0.2018 / 0.0273 [1/5][250/1583] Loss_D: 0.4416 Loss_G: 3.1567 D(x): 0.7591 D(G(z)): 0.0979 / 0.0693 [1/5][300/1583] Loss_D: 0.6491 Loss_G: 4.7122 D(x): 0.9065 D(G(z)): 0.3628 / 0.0185 [1/5][350/1583] Loss_D: 0.4252 Loss_G: 3.0034 D(x): 0.8016 D(G(z)): 0.1348 / 0.0796 [1/5][400/1583] Loss_D: 0.5872 Loss_G: 4.5848 D(x): 0.9056 D(G(z)): 0.3416 / 0.0181 [1/5][450/1583] Loss_D: 0.5208 Loss_G: 3.1924 D(x): 0.6910 D(G(z)): 0.0759 / 0.0614 [1/5][500/1583] 
Loss_D: 0.6373 Loss_G: 2.3228 D(x): 0.6159 D(G(z)): 0.0194 / 0.1504 [1/5][550/1583] Loss_D: 0.6092 Loss_G: 2.8430 D(x): 0.7126 D(G(z)): 0.1594 / 0.0798 [1/5][600/1583] Loss_D: 2.6392 Loss_G: 1.6702 D(x): 0.1509 D(G(z)): 0.0036 / 0.2652 [1/5][650/1583] Loss_D: 0.6055 Loss_G: 4.3314 D(x): 0.9293 D(G(z)): 0.3700 / 0.0255 [1/5][700/1583] Loss_D: 0.6743 Loss_G: 4.4243 D(x): 0.9419 D(G(z)): 0.4105 / 0.0184 [1/5][750/1583] Loss_D: 0.6271 Loss_G: 2.9614 D(x): 0.7595 D(G(z)): 0.2399 / 0.0767 [1/5][800/1583] Loss_D: 0.4085 Loss_G: 3.9207 D(x): 0.8878 D(G(z)): 0.2179 / 0.0311 [1/5][850/1583] Loss_D: 0.4656 Loss_G: 2.7812 D(x): 0.7499 D(G(z)): 0.1113 / 0.0902 [1/5][900/1583] Loss_D: 0.6448 Loss_G: 4.0957 D(x): 0.8971 D(G(z)): 0.3655 / 0.0274 [1/5][950/1583] Loss_D: 0.5962 Loss_G: 4.6224 D(x): 0.9161 D(G(z)): 0.3504 / 0.0151 [1/5][1000/1583] Loss_D: 0.4554 Loss_G: 3.5027 D(x): 0.8349 D(G(z)): 0.2024 / 0.0460 [1/5][1050/1583] Loss_D: 0.3777 Loss_G: 3.6027 D(x): 0.8371 D(G(z)): 0.1483 / 0.0401 [1/5][1100/1583] Loss_D: 0.8256 Loss_G: 4.3474 D(x): 0.9606 D(G(z)): 0.4579 / 0.0239 [1/5][1150/1583] Loss_D: 0.6338 Loss_G: 1.8006 D(x): 0.6571 D(G(z)): 0.1016 / 0.2144 [1/5][1200/1583] Loss_D: 0.4544 Loss_G: 3.9609 D(x): 0.8648 D(G(z)): 0.2375 / 0.0275 [1/5][1250/1583] Loss_D: 0.4300 Loss_G: 3.2581 D(x): 0.8453 D(G(z)): 0.1992 / 0.0594 [1/5][1300/1583] Loss_D: 0.3428 Loss_G: 2.8011 D(x): 0.9327 D(G(z)): 0.2062 / 0.0917 [1/5][1350/1583] Loss_D: 0.6456 Loss_G: 1.5907 D(x): 0.6620 D(G(z)): 0.1341 / 0.2518 [1/5][1400/1583] Loss_D: 1.0552 Loss_G: 5.5392 D(x): 0.9393 D(G(z)): 0.5810 / 0.0081 [1/5][1450/1583] Loss_D: 0.5158 Loss_G: 3.9685 D(x): 0.9226 D(G(z)): 0.3171 / 0.0272 [1/5][1500/1583] Loss_D: 0.5365 Loss_G: 3.8893 D(x): 0.9286 D(G(z)): 0.3350 / 0.0297 [1/5][1550/1583] Loss_D: 1.7469 Loss_G: 7.0958 D(x): 0.9607 D(G(z)): 0.7453 / 0.0017 [2/5][0/1583] Loss_D: 0.4801 Loss_G: 2.5083 D(x): 0.7563 D(G(z)): 0.1414 / 0.1120 [2/5][50/1583] Loss_D: 0.8642 Loss_G: 4.0698 D(x): 0.8873 D(G(z)): 0.4655 / 0.0247 [2/5][100/1583] Loss_D: 0.5755 Loss_G: 3.8060 D(x): 0.9221 D(G(z)): 0.3580 / 0.0294 [2/5][150/1583] Loss_D: 0.5431 Loss_G: 2.7516 D(x): 0.7336 D(G(z)): 0.1651 / 0.0892 [2/5][200/1583] Loss_D: 0.5343 Loss_G: 3.0836 D(x): 0.8583 D(G(z)): 0.2747 / 0.0657 [2/5][250/1583] Loss_D: 0.4806 Loss_G: 2.7586 D(x): 0.8156 D(G(z)): 0.2104 / 0.0845 [2/5][300/1583] Loss_D: 1.3261 Loss_G: 0.8489 D(x): 0.3586 D(G(z)): 0.0284 / 0.4896 [2/5][350/1583] Loss_D: 0.5982 Loss_G: 3.3485 D(x): 0.8648 D(G(z)): 0.3249 / 0.0514 [2/5][400/1583] Loss_D: 0.6146 Loss_G: 3.5353 D(x): 0.9260 D(G(z)): 0.3638 / 0.0412 [2/5][450/1583] Loss_D: 0.6543 Loss_G: 2.2284 D(x): 0.6189 D(G(z)): 0.0859 / 0.1610 [2/5][500/1583] Loss_D: 0.4549 Loss_G: 2.8017 D(x): 0.7619 D(G(z)): 0.1257 / 0.0871 [2/5][550/1583] Loss_D: 0.5540 Loss_G: 1.4729 D(x): 0.6413 D(G(z)): 0.0471 / 0.2910 [2/5][600/1583] Loss_D: 2.1852 Loss_G: 5.1836 D(x): 0.9680 D(G(z)): 0.8271 / 0.0115 [2/5][650/1583] Loss_D: 0.6494 Loss_G: 2.2610 D(x): 0.7654 D(G(z)): 0.2731 / 0.1346 [2/5][700/1583] Loss_D: 0.8246 Loss_G: 1.7544 D(x): 0.5155 D(G(z)): 0.0515 / 0.2318 [2/5][750/1583] Loss_D: 0.5312 Loss_G: 1.7904 D(x): 0.7456 D(G(z)): 0.1756 / 0.1989 [2/5][800/1583] Loss_D: 0.6807 Loss_G: 3.6964 D(x): 0.8213 D(G(z)): 0.3498 / 0.0347 [2/5][850/1583] Loss_D: 0.5764 Loss_G: 3.3782 D(x): 0.8822 D(G(z)): 0.3260 / 0.0465 [2/5][900/1583] Loss_D: 0.5902 Loss_G: 1.6969 D(x): 0.7025 D(G(z)): 0.1623 / 0.2254 [2/5][950/1583] Loss_D: 0.7378 Loss_G: 2.8792 D(x): 0.7970 D(G(z)): 0.3450 / 0.0788 [2/5][1000/1583] Loss_D: 0.9063 
Loss_G: 0.9848 D(x): 0.5777 D(G(z)): 0.2088 / 0.4277 [2/5][1050/1583] Loss_D: 1.9781 Loss_G: 0.4740 D(x): 0.1853 D(G(z)): 0.0076 / 0.6719 [2/5][1100/1583] Loss_D: 0.5326 Loss_G: 3.6264 D(x): 0.9161 D(G(z)): 0.3214 / 0.0376 [2/5][1150/1583] Loss_D: 0.6537 Loss_G: 2.6539 D(x): 0.8052 D(G(z)): 0.3127 / 0.0880 [2/5][1200/1583] Loss_D: 0.4548 Loss_G: 2.6971 D(x): 0.9091 D(G(z)): 0.2740 / 0.0922 [2/5][1250/1583] Loss_D: 0.8103 Loss_G: 1.0119 D(x): 0.5446 D(G(z)): 0.0925 / 0.4174 [2/5][1300/1583] Loss_D: 0.4992 Loss_G: 2.3328 D(x): 0.7667 D(G(z)): 0.1788 / 0.1199 [2/5][1350/1583] Loss_D: 0.5945 Loss_G: 1.8714 D(x): 0.7370 D(G(z)): 0.2142 / 0.2019 [2/5][1400/1583] Loss_D: 0.5062 Loss_G: 2.9554 D(x): 0.8657 D(G(z)): 0.2759 / 0.0672 [2/5][1450/1583] Loss_D: 0.5050 Loss_G: 2.6050 D(x): 0.7379 D(G(z)): 0.1417 / 0.0925 [2/5][1500/1583] Loss_D: 0.4741 Loss_G: 2.5782 D(x): 0.8164 D(G(z)): 0.2093 / 0.0978 [2/5][1550/1583] Loss_D: 2.4340 Loss_G: 0.5105 D(x): 0.1405 D(G(z)): 0.0178 / 0.6642 [3/5][0/1583] Loss_D: 0.5847 Loss_G: 1.8185 D(x): 0.6761 D(G(z)): 0.1390 / 0.2010 [3/5][50/1583] Loss_D: 0.6756 Loss_G: 1.3954 D(x): 0.6495 D(G(z)): 0.1602 / 0.2877 [3/5][100/1583] Loss_D: 0.9389 Loss_G: 3.5586 D(x): 0.9247 D(G(z)): 0.5097 / 0.0413 [3/5][150/1583] Loss_D: 0.8383 Loss_G: 4.1223 D(x): 0.9423 D(G(z)): 0.4889 / 0.0257 [3/5][200/1583] Loss_D: 0.7028 Loss_G: 1.1357 D(x): 0.5806 D(G(z)): 0.0670 / 0.3769 [3/5][250/1583] Loss_D: 0.8205 Loss_G: 1.5882 D(x): 0.6002 D(G(z)): 0.1722 / 0.2446 [3/5][300/1583] Loss_D: 0.5772 Loss_G: 1.6588 D(x): 0.7126 D(G(z)): 0.1792 / 0.2325 [3/5][350/1583] Loss_D: 0.9131 Loss_G: 2.5469 D(x): 0.7485 D(G(z)): 0.4025 / 0.1120 [3/5][400/1583] Loss_D: 0.7285 Loss_G: 1.4736 D(x): 0.5603 D(G(z)): 0.0560 / 0.2728 [3/5][450/1583] Loss_D: 0.8201 Loss_G: 4.7116 D(x): 0.9017 D(G(z)): 0.4620 / 0.0132 [3/5][500/1583] Loss_D: 0.6197 Loss_G: 2.9266 D(x): 0.8219 D(G(z)): 0.3099 / 0.0695 [3/5][550/1583] Loss_D: 0.5623 Loss_G: 1.9462 D(x): 0.7680 D(G(z)): 0.2197 / 0.1803 [3/5][600/1583] Loss_D: 0.9292 Loss_G: 1.1519 D(x): 0.4686 D(G(z)): 0.0416 / 0.3883 [3/5][650/1583] Loss_D: 0.5886 Loss_G: 2.6010 D(x): 0.7573 D(G(z)): 0.2352 / 0.0955 [3/5][700/1583] Loss_D: 0.4422 Loss_G: 2.2618 D(x): 0.8097 D(G(z)): 0.1775 / 0.1356 [3/5][750/1583] Loss_D: 0.6118 Loss_G: 2.8917 D(x): 0.8792 D(G(z)): 0.3475 / 0.0695 [3/5][800/1583] Loss_D: 0.5473 Loss_G: 1.9801 D(x): 0.7403 D(G(z)): 0.1811 / 0.1653 [3/5][850/1583] Loss_D: 0.6400 Loss_G: 2.7699 D(x): 0.8352 D(G(z)): 0.3345 / 0.0804 [3/5][900/1583] Loss_D: 0.4683 Loss_G: 2.7304 D(x): 0.8466 D(G(z)): 0.2339 / 0.0828 [3/5][950/1583] Loss_D: 1.0093 Loss_G: 5.9043 D(x): 0.9404 D(G(z)): 0.5629 / 0.0047 [3/5][1000/1583] Loss_D: 0.5349 Loss_G: 2.1615 D(x): 0.7366 D(G(z)): 0.1640 / 0.1430 [3/5][1050/1583] Loss_D: 1.2765 Loss_G: 0.6246 D(x): 0.3708 D(G(z)): 0.0978 / 0.5693 [3/5][1100/1583] Loss_D: 0.5150 Loss_G: 3.0683 D(x): 0.8824 D(G(z)): 0.2883 / 0.0633 [3/5][1150/1583] Loss_D: 0.6427 Loss_G: 2.3221 D(x): 0.6846 D(G(z)): 0.1879 / 0.1345 [3/5][1200/1583] Loss_D: 1.4129 Loss_G: 0.7791 D(x): 0.3089 D(G(z)): 0.0284 / 0.5125 [3/5][1250/1583] Loss_D: 0.8410 Loss_G: 0.9022 D(x): 0.5329 D(G(z)): 0.1067 / 0.4592 [3/5][1300/1583] Loss_D: 0.6202 Loss_G: 2.1313 D(x): 0.6066 D(G(z)): 0.0584 / 0.1690 [3/5][1350/1583] Loss_D: 1.2004 Loss_G: 0.5458 D(x): 0.3802 D(G(z)): 0.0438 / 0.6172 [3/5][1400/1583] Loss_D: 0.4449 Loss_G: 3.0477 D(x): 0.8518 D(G(z)): 0.2230 / 0.0635 [3/5][1450/1583] Loss_D: 0.4845 Loss_G: 2.1626 D(x): 0.7261 D(G(z)): 0.1237 / 0.1463 [3/5][1500/1583] Loss_D: 0.6804 Loss_G: 
2.9004 D(x): 0.8089 D(G(z)): 0.3330 / 0.0715 [3/5][1550/1583] Loss_D: 0.5228 Loss_G: 2.0975 D(x): 0.7766 D(G(z)): 0.1999 / 0.1542 [4/5][0/1583] Loss_D: 0.5070 Loss_G: 3.0032 D(x): 0.8567 D(G(z)): 0.2696 / 0.0691 [4/5][50/1583] Loss_D: 0.9748 Loss_G: 4.4468 D(x): 0.9213 D(G(z)): 0.5354 / 0.0186 [4/5][100/1583] Loss_D: 0.6213 Loss_G: 2.1751 D(x): 0.6348 D(G(z)): 0.0899 / 0.1572 [4/5][150/1583] Loss_D: 0.6697 Loss_G: 1.7084 D(x): 0.6313 D(G(z)): 0.1257 / 0.2225 [4/5][200/1583] Loss_D: 0.8032 Loss_G: 1.5958 D(x): 0.5151 D(G(z)): 0.0426 / 0.2646 [4/5][250/1583] Loss_D: 0.9996 Loss_G: 1.2608 D(x): 0.4645 D(G(z)): 0.0573 / 0.3387 [4/5][300/1583] Loss_D: 0.4582 Loss_G: 2.7660 D(x): 0.8346 D(G(z)): 0.2170 / 0.0858 [4/5][350/1583] Loss_D: 0.3809 Loss_G: 3.5536 D(x): 0.8834 D(G(z)): 0.2013 / 0.0390 [4/5][400/1583] Loss_D: 0.6527 Loss_G: 2.2881 D(x): 0.7386 D(G(z)): 0.2494 / 0.1337 [4/5][450/1583] Loss_D: 0.5231 Loss_G: 1.8814 D(x): 0.7282 D(G(z)): 0.1458 / 0.1877 [4/5][500/1583] Loss_D: 0.5383 Loss_G: 1.5842 D(x): 0.7036 D(G(z)): 0.1342 / 0.2415 [4/5][550/1583] Loss_D: 0.9516 Loss_G: 1.0133 D(x): 0.4710 D(G(z)): 0.0501 / 0.4142 [4/5][600/1583] Loss_D: 1.0705 Loss_G: 0.8433 D(x): 0.4447 D(G(z)): 0.1189 / 0.4909 [4/5][650/1583] Loss_D: 0.6983 Loss_G: 1.8885 D(x): 0.6748 D(G(z)): 0.2142 / 0.1789 [4/5][700/1583] Loss_D: 0.3807 Loss_G: 2.7729 D(x): 0.8701 D(G(z)): 0.1975 / 0.0779 [4/5][750/1583] Loss_D: 0.4087 Loss_G: 2.6961 D(x): 0.8605 D(G(z)): 0.2062 / 0.0937 [4/5][800/1583] Loss_D: 0.6979 Loss_G: 4.2029 D(x): 0.8687 D(G(z)): 0.3796 / 0.0221 [4/5][850/1583] Loss_D: 0.6469 Loss_G: 3.2722 D(x): 0.8828 D(G(z)): 0.3646 / 0.0546 [4/5][900/1583] Loss_D: 0.5220 Loss_G: 1.9707 D(x): 0.7330 D(G(z)): 0.1533 / 0.1705 [4/5][950/1583] Loss_D: 0.4789 Loss_G: 2.9207 D(x): 0.8481 D(G(z)): 0.2463 / 0.0680 [4/5][1000/1583] Loss_D: 0.5314 Loss_G: 1.9446 D(x): 0.6925 D(G(z)): 0.1124 / 0.1768 [4/5][1050/1583] Loss_D: 0.5690 Loss_G: 3.1526 D(x): 0.8775 D(G(z)): 0.3241 / 0.0542 [4/5][1100/1583] Loss_D: 0.4210 Loss_G: 2.5976 D(x): 0.8832 D(G(z)): 0.2351 / 0.0944 [4/5][1150/1583] Loss_D: 0.4784 Loss_G: 2.1561 D(x): 0.7820 D(G(z)): 0.1739 / 0.1473 [4/5][1200/1583] Loss_D: 0.5640 Loss_G: 1.6350 D(x): 0.6674 D(G(z)): 0.0848 / 0.2409 [4/5][1250/1583] Loss_D: 1.1821 Loss_G: 5.9182 D(x): 0.9561 D(G(z)): 0.6136 / 0.0045 [4/5][1300/1583] Loss_D: 0.5865 Loss_G: 4.5427 D(x): 0.9453 D(G(z)): 0.3724 / 0.0155 [4/5][1350/1583] Loss_D: 1.4747 Loss_G: 5.4916 D(x): 0.9590 D(G(z)): 0.7031 / 0.0066 [4/5][1400/1583] Loss_D: 0.4061 Loss_G: 3.5037 D(x): 0.8808 D(G(z)): 0.2233 / 0.0375 [4/5][1450/1583] Loss_D: 0.5928 Loss_G: 1.4196 D(x): 0.6548 D(G(z)): 0.0989 / 0.2901 [4/5][1500/1583] Loss_D: 0.5381 Loss_G: 3.6485 D(x): 0.8691 D(G(z)): 0.2970 / 0.0338 [4/5][1550/1583] Loss_D: 0.7174 Loss_G: 2.1455 D(x): 0.7092 D(G(z)): 0.2569 / 0.1469 .. GENERATED FROM PYTHON SOURCE LINES 657-670 Results ------- Finally, let’s check out how we did. Here, we will look at three different results. First, we will see how D and G’s losses changed during training. Second, we will visualize G’s output on the ``fixed_noise`` batch for every epoch. And third, we will look at a batch of real data next to a batch of fake data from G. **Loss versus training iteration** Below is a plot of D & G’s losses versus training iterations. .. GENERATED FROM PYTHON SOURCE LINES 670-681 ..
code-block:: default plt.figure(figsize=(10,5)) plt.title("Generator and Discriminator Loss During Training") plt.plot(G_losses,label="G") plt.plot(D_losses,label="D") plt.xlabel("iterations") plt.ylabel("Loss") plt.legend() plt.show() .. image-sg:: /beginner/images/sphx_glr_dcgan_faces_tutorial_002.png :alt: Generator and Discriminator Loss During Training :srcset: /beginner/images/sphx_glr_dcgan_faces_tutorial_002.png :class: sphx-glr-single-img .. GENERATED FROM PYTHON SOURCE LINES 682-689 **Visualization of G’s progression** Remember how we saved the generator’s output on the ``fixed_noise`` batch after every epoch of training. Now, we can visualize the training progression of G with an animation. Press the play button to start the animation. .. GENERATED FROM PYTHON SOURCE LINES 691-699 .. code-block:: default fig = plt.figure(figsize=(8,8)) plt.axis("off") ims = [[plt.imshow(np.transpose(i,(1,2,0)), animated=True)] for i in img_list] ani = animation.ArtistAnimation(fig, ims, interval=1000, repeat_delay=1000, blit=True) HTML(ani.to_jshtml()) .. image-sg:: /beginner/images/sphx_glr_dcgan_faces_tutorial_003.png :alt: dcgan faces tutorial :srcset: /beginner/images/sphx_glr_dcgan_faces_tutorial_003.png :class: sphx-glr-single-img
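If you are running this as a plain script rather than in a notebook, the HTML animation may not display; one optional alternative (an aside, using ``matplotlib``’s ``PillowWriter``; the filename here is just an example) is to save the frames to a GIF:

.. code-block:: default

    # Optional: write the progression animation to a GIF file instead.
    ani.save("dcgan_progress.gif", writer=animation.PillowWriter(fps=1))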


.. GENERATED FROM PYTHON SOURCE LINES 700-705 **Real Images vs. Fake Images** Finally, let’s take a look at some real images and fake images side by side. .. GENERATED FROM PYTHON SOURCE LINES 705-724 .. code-block:: default # Grab a batch of real images from the dataloader real_batch = next(iter(dataloader)) # Plot the real images plt.figure(figsize=(15,15)) plt.subplot(1,2,1) plt.axis("off") plt.title("Real Images") plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=5, normalize=True).cpu(),(1,2,0))) # Plot the fake images from the last epoch plt.subplot(1,2,2) plt.axis("off") plt.title("Fake Images") plt.imshow(np.transpose(img_list[-1],(1,2,0))) plt.show() .. image-sg:: /beginner/images/sphx_glr_dcgan_faces_tutorial_004.png :alt: Real Images, Fake Images :srcset: /beginner/images/sphx_glr_dcgan_faces_tutorial_004.png :class: sphx-glr-single-img .. GENERATED FROM PYTHON SOURCE LINES 725-739 Where to Go Next ---------------- We have reached the end of our journey, but there are several places you could go from here. You could: - Train for longer to see how good the results get - Modify this model to take a different dataset and possibly change the size of the images and the model architecture - Check out some other cool GAN projects `here `__ - Create GANs that generate `music `__ .. rst-class:: sphx-glr-timing **Total running time of the script:** ( 5 minutes 49.207 seconds) .. _sphx_glr_download_beginner_dcgan_faces_tutorial.py: .. only:: html .. container:: sphx-glr-footer sphx-glr-footer-example .. container:: sphx-glr-download sphx-glr-download-python :download:`Download Python source code: dcgan_faces_tutorial.py ` .. container:: sphx-glr-download sphx-glr-download-jupyter :download:`Download Jupyter notebook: dcgan_faces_tutorial.ipynb ` .. only:: html .. rst-class:: sphx-glr-signature `Gallery generated by Sphinx-Gallery `_