# DCGAN Tutorial

**Author**: Nathan Inkawhich

## Introduction

This tutorial will give an introduction to DCGANs through an example. We will train a generative adversarial network (GAN) to generate new celebrities after showing it pictures of many real celebrities. Most of the code here is from the DCGAN implementation in pytorch/examples, and this document will give a thorough explanation of the implementation and shed light on how and why this model works. No prior knowledge of GANs is required, but a first-timer may need to spend some time reasoning about what is actually happening under the hood. Also, for the sake of time, it will help to have a GPU, or two. Let's start from the beginning.

## Generative Adversarial Networks

### What is a GAN?

GANs are a framework for teaching a DL model to capture the training
data’s distribution so we can generate new data from that same
distribution. GANs were invented by Ian Goodfellow in 2014 and first
described in the paper Generative Adversarial
Nets.
They are made of two distinct models, a *generator* and a
*discriminator*. The job of the generator is to spawn ‘fake’ images that
look like the training images. The job of the discriminator is to look
at an image and output whether or not it is a real training image or a
fake image from the generator. During training, the generator is
constantly trying to outsmart the discriminator by generating better and
better fakes, while the discriminator is working to become a better
detective and correctly classify the real and fake images. The
equilibrium of this game is when the generator is generating perfect
fakes that look as if they came directly from the training data, and the
discriminator is left to always guess at 50% confidence that the
generator output is real or fake.

Now, let's define some notation to be used throughout the tutorial, starting with the discriminator. Let \(x\) be data representing an image. \(D(x)\) is the discriminator network which outputs the (scalar) probability that \(x\) came from the training data rather than the generator. Here, since we are dealing with images, the input to \(D(x)\) is an image of CHW size 3x64x64. Intuitively, \(D(x)\) should be HIGH when \(x\) comes from the training data and LOW when \(x\) comes from the generator. \(D(x)\) can also be thought of as a traditional binary classifier.

For the generator’s notation, let \(z\) be a latent space vector sampled from a standard normal distribution. \(G(z)\) represents the generator function which maps the latent vector \(z\) to data-space. The goal of \(G\) is to estimate the distribution that the training data comes from (\(p_{data}\)) so it can generate fake samples from that estimated distribution (\(p_g\)).

So, \(D(G(z))\) is the probability (scalar) that the output of the generator \(G\) is a real image. As described in Goodfellow’s paper, \(D\) and \(G\) play a minimax game in which \(D\) tries to maximize the probability it correctly classifies reals and fakes (\(logD(x)\)), and \(G\) tries to minimize the probability that \(D\) will predict its outputs are fake (\(log(1-D(G(z)))\)). From the paper, the GAN loss function is

\[\underset{G}{\text{min}} \underset{D}{\text{max}}V(D,G) = \mathbb{E}_{x\sim p_{data}(x)}\big[logD(x)\big] + \mathbb{E}_{z\sim p_{z}(z)}\big[log(1-D(G(z)))\big]\]

In theory, the solution to this minimax game is where \(p_g = p_{data}\), and the discriminator guesses randomly if the inputs are real or fake. However, the convergence theory of GANs is still being actively researched and in reality models do not always train to this point.

### What is a DCGAN?

A DCGAN is a direct extension of the GAN described above, except that it explicitly uses convolutional and convolutional-transpose layers in the discriminator and generator, respectively. It was first described by Radford et al. in the paper Unsupervised Representation Learning With Deep Convolutional Generative Adversarial Networks. The discriminator is made up of strided convolution layers, batch norm layers, and LeakyReLU activations. The input is a 3x64x64 input image and the output is a scalar probability that the input is from the real data distribution. The generator is composed of convolutional-transpose layers, batch norm layers, and ReLU activations. The input is a latent vector, \(z\), that is drawn from a standard normal distribution and the output is a 3x64x64 RGB image. The strided conv-transpose layers allow the latent vector to be transformed into a volume with the same shape as an image. In the paper, the authors also give some tips about how to set up the optimizers, how to calculate the loss functions, and how to initialize the model weights, all of which will be explained in the coming sections.

```
from __future__ import print_function
#%matplotlib inline
import argparse
import os
import random
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML
# Set random seed for reproducibility
manualSeed = 999
#manualSeed = random.randint(1, 10000) # use if you want new results
print("Random Seed: ", manualSeed)
random.seed(manualSeed)
torch.manual_seed(manualSeed)
```

Out:

```
Random Seed: 999
```

## Inputs

Let’s define some inputs for the run:

- **dataroot** - the path to the root of the dataset folder. We will talk more about the dataset in the next section
- **workers** - the number of worker threads for loading the data with the DataLoader
- **batch_size** - the batch size used in training. The DCGAN paper uses a batch size of 128
- **image_size** - the spatial size of the images used for training. This implementation defaults to 64x64. If another size is desired, the structures of D and G must be changed. See here for more details
- **nc** - number of color channels in the input images. For color images this is 3
- **nz** - length of latent vector
- **ngf** - relates to the depth of feature maps carried through the generator
- **ndf** - sets the depth of feature maps propagated through the discriminator
- **num_epochs** - number of training epochs to run. Training for longer will probably lead to better results but will also take much longer
- **lr** - learning rate for training. As described in the DCGAN paper, this number should be 0.0002
- **beta1** - beta1 hyperparameter for Adam optimizers. As described in the paper, this number should be 0.5
- **ngpu** - number of GPUs available. If this is 0, code will run in CPU mode. If this number is greater than 0 it will run on that number of GPUs

```
# Root directory for dataset
dataroot = "/home/ubuntu/facebook/datasets/celeba"
# Number of workers for dataloader
workers = 4
# Batch size during training
batch_size = 128
# Spatial size of training images. All images will be resized to this
# size using a transformer.
image_size = 64
# Number of channels in the training images. For color images this is 3
nc = 3
# Size of z latent vector (i.e. size of generator input)
nz = 100
# Size of feature maps in generator
ngf = 64
# Size of feature maps in discriminator
ndf = 64
# Number of training epochs
num_epochs = 5
# Learning rate for optimizers
lr = 0.0002
# Beta1 hyperparam for Adam optimizers
beta1 = 0.5
# Number of GPUs available. Use 0 for CPU mode.
ngpu = 1
```

## Data

In this tutorial we will use the Celeb-A Faces
dataset which can
be downloaded at the linked site, or in Google
Drive.
The dataset will download as a file named *img_align_celeba.zip*. Once
downloaded, create a directory named *celeba* and extract the zip file
into that directory. Then, set the *dataroot* input for this notebook to
the *celeba* directory you just created. The resulting directory
structure should be:

```
/path/to/celeba
-> img_align_celeba
-> 188242.jpg
-> 173822.jpg
-> 284702.jpg
-> 537394.jpg
...
```

This is an important step because we will be using the ImageFolder dataset class, which requires there to be subdirectories in the dataset’s root folder. Now, we can create the dataset, create the dataloader, set the device to run on, and finally visualize some of the training data.

```
# We can use an image folder dataset the way we have it setup.
# Create the dataset
dataset = dset.ImageFolder(root=dataroot,
                           transform=transforms.Compose([
                               transforms.Resize(image_size),
                               transforms.CenterCrop(image_size),
                               transforms.ToTensor(),
                               transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                           ]))
# Create the dataloader
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
                                         shuffle=True, num_workers=workers)
# Decide which device we want to run on
device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu")
# Plot some training images
real_batch = next(iter(dataloader))
plt.figure(figsize=(8,8))
plt.axis("off")
plt.title("Training Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=2, normalize=True).cpu(),(1,2,0)))
```

## Implementation

With our input parameters set and the dataset prepared, we can now get into the implementation. We will start with the weight initialization strategy, then talk about the generator, discriminator, loss functions, and training loop in detail.

### Weight Initialization

From the DCGAN paper, the authors specify that all model weights shall
be randomly initialized from a Normal distribution with mean=0,
stdev=0.02. The `weights_init` function takes an initialized model as
input and reinitializes all convolutional, convolutional-transpose, and
batch normalization layers to meet this criterion. This function is
applied to the models immediately after initialization.

```
# custom weights initialization called on netG and netD
def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        nn.init.normal_(m.weight.data, 0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        nn.init.normal_(m.weight.data, 1.0, 0.02)
        nn.init.constant_(m.bias.data, 0)
```

### Generator

The generator, \(G\), is designed to map the latent space vector (\(z\)) to data-space. Since our data are images, converting \(z\) to data-space means ultimately creating an RGB image with the same size as the training images (i.e. 3x64x64). In practice, this is accomplished through a series of strided two dimensional convolutional transpose layers, each paired with a 2d batch norm layer and a ReLU activation. The output of the generator is fed through a tanh function to return it to the input data range of \([-1,1]\). It is worth noting the existence of the batch norm functions after the conv-transpose layers, as this is a critical contribution of the DCGAN paper. These layers help with the flow of gradients during training. An image of the generator from the DCGAN paper is shown below.

Notice how the inputs we set in the input section (*nz*, *ngf*, and
*nc*) influence the generator architecture in code. *nz* is the length
of the z input vector, *ngf* relates to the size of the feature maps
that are propagated through the generator, and *nc* is the number of
channels in the output image (set to 3 for RGB images). Below is the
code for the generator.

```
# Generator Code
class Generator(nn.Module):
    def __init__(self, ngpu):
        super(Generator, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            # input is Z, going into a convolution
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8),
            nn.ReLU(True),
            # state size. (ngf*8) x 4 x 4
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            # state size. (ngf*4) x 8 x 8
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            # state size. (ngf*2) x 16 x 16
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            # state size. (ngf) x 32 x 32
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh()
            # state size. (nc) x 64 x 64
        )

    def forward(self, input):
        return self.main(input)
```
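
Before moving on, it is worth verifying the shape arithmetic in the comments above: each kernel-4, stride-2, padding-1 conv-transpose doubles the spatial size, since \(H_{out} = (H_{in} - 1) \times 2 - 2 \times 1 + 4 = 2H_{in}\), and the first layer (stride 1, no padding) maps the 1x1 latent vector to 4x4. As a quick, optional sanity check (an addition to the original tutorial, not part of its code):

```
# Optional sanity check (not in the original tutorial): push a random latent
# batch through an untrained CPU Generator and confirm the 3x64x64 output shape.
tmpG = Generator(ngpu=0)
z = torch.randn(16, nz, 1, 1)    # batch of 16 latent vectors
print(tmpG(z).shape)             # expected: torch.Size([16, 3, 64, 64])
```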

Now, we can instantiate the generator and apply the `weights_init`
function. Check out the printed model to see how the generator object is
structured.

```
# Create the generator
netG = Generator(ngpu).to(device)
# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
netG = nn.DataParallel(netG, list(range(ngpu)))
# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.02.
netG.apply(weights_init)
# Print the model
print(netG)
```

Out:

```
Generator(
  (main): Sequential(
    (0): ConvTranspose2d(100, 512, kernel_size=(4, 4), stride=(1, 1), bias=False)
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace)
    (3): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (5): ReLU(inplace)
    (6): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (7): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (8): ReLU(inplace)
    (9): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (10): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (11): ReLU(inplace)
    (12): ConvTranspose2d(64, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (13): Tanh()
  )
)
```

### Discriminator

As mentioned, the discriminator, \(D\), is a binary classification network that takes an image as input and outputs a scalar probability that the input image is real (as opposed to fake). Here, \(D\) takes a 3x64x64 input image, processes it through a series of Conv2d, BatchNorm2d, and LeakyReLU layers, and outputs the final probability through a Sigmoid activation function. This architecture can be extended with more layers if necessary for the problem, but there is significance to the use of the strided convolution, BatchNorm, and LeakyReLUs. The DCGAN paper mentions it is a good practice to use strided convolution rather than pooling to downsample because it lets the network learn its own pooling function. Also, the batch norm and leaky relu functions promote healthy gradient flow, which is critical for the learning process of both \(G\) and \(D\).

Discriminator Code

```
class Discriminator(nn.Module):
    def __init__(self, ngpu):
        super(Discriminator, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            # input is (nc) x 64 x 64
            nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf) x 32 x 32
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 2),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*2) x 16 x 16
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 4),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*4) x 8 x 8
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 8),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*8) x 4 x 4
            nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
            nn.Sigmoid()
        )

    def forward(self, input):
        return self.main(input)
```
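
Note that the final kernel-4, stride-1, no-padding convolution collapses the 4x4 feature map to a single value, so \(D\) emits one probability per image with shape (N, 1, 1, 1); this is why the training loop below flattens the output with `.view(-1)`. A quick, optional check (again an addition, not original tutorial code):

```
# Optional sanity check: D maps a batch of 3x64x64 images to one scalar each.
tmpD = Discriminator(ngpu=0)
imgs = torch.randn(16, nc, 64, 64)
print(tmpD(imgs).shape)              # expected: torch.Size([16, 1, 1, 1])
print(tmpD(imgs).view(-1).shape)     # flattened: torch.Size([16])
```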

Now, as with the generator, we can create the discriminator, apply the
`weights_init` function, and print the model’s structure.

```
# Create the Discriminator
netD = Discriminator(ngpu).to(device)
# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
netD = nn.DataParallel(netD, list(range(ngpu)))
# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.02.
netD.apply(weights_init)
# Print the model
print(netD)
```

Out:

```
Discriminator(
  (main): Sequential(
    (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): LeakyReLU(negative_slope=0.2, inplace)
    (2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (4): LeakyReLU(negative_slope=0.2, inplace)
    (5): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (6): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (7): LeakyReLU(negative_slope=0.2, inplace)
    (8): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (9): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (10): LeakyReLU(negative_slope=0.2, inplace)
    (11): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), bias=False)
    (12): Sigmoid()
  )
)
```

### Loss Functions and Optimizers

With \(D\) and \(G\) set up, we can specify how they learn through the loss functions and optimizers. We will use the Binary Cross Entropy loss (BCELoss) function, which is defined in PyTorch as:

\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right]\]

Notice how this function provides the calculation of both log components in the objective function (i.e. \(log(D(x))\) and \(log(1-D(G(z)))\)). We can specify what part of the BCE equation to use with the \(y\) input. This is accomplished in the training loop which is coming up soon, but it is important to understand how we can choose which component we wish to calculate just by changing \(y\) (i.e. GT labels).
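
As a quick illustration of this (an addition to the original tutorial, not part of its code), setting the target \(y\) to 1 reduces BCELoss to \(-log(x)\), while setting it to 0 reduces it to \(-log(1-x)\):

```
# Illustrative only: the target y selects which log term BCELoss computes.
bce = nn.BCELoss()
p = torch.tensor([0.9])          # stand-in for a discriminator output D(.)
print(bce(p, torch.ones(1)))     # -log(0.9)     ~= 0.1054
print(bce(p, torch.zeros(1)))    # -log(1 - 0.9) ~= 2.3026
```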

Next, we define our real label as 1 and the fake label as 0. These labels will be used when calculating the losses of \(D\) and \(G\), and this is also the convention used in the original GAN paper. Finally, we set up two separate optimizers, one for \(D\) and one for \(G\). As specified in the DCGAN paper, both are Adam optimizers with learning rate 0.0002 and Beta1 = 0.5. For keeping track of the generator’s learning progression, we will generate a fixed batch of latent vectors that are drawn from a Gaussian distribution (i.e. fixed_noise). In the training loop, we will periodically input this fixed_noise into \(G\), and over the iterations we will see images form out of the noise.

```
# Initialize BCELoss function
criterion = nn.BCELoss()
# Create batch of latent vectors that we will use to visualize
# the progression of the generator
fixed_noise = torch.randn(64, nz, 1, 1, device=device)
# Establish convention for real and fake labels during training
real_label = 1
fake_label = 0
# Setup Adam optimizers for both G and D
optimizerD = optim.Adam(netD.parameters(), lr=lr, betas=(beta1, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=lr, betas=(beta1, 0.999))
```

### Training

Finally, now that we have all of the parts of the GAN framework defined, we can train it. Be mindful that training GANs is somewhat of an art form, as incorrect hyperparameter settings lead to mode collapse with little explanation of what went wrong. Here, we will closely follow Algorithm 1 from Goodfellow’s paper, while abiding by some of the best practices shown in ganhacks. Namely, we will “construct different mini-batches for real and fake” images, and also adjust G’s objective function to maximize \(logD(G(z))\). Training is split up into two main parts. Part 1 updates the Discriminator and Part 2 updates the Generator.

**Part 1 - Train the Discriminator**

Recall, the goal of training the discriminator is to maximize the
probability of correctly classifying a given input as real or fake. In
terms of Goodfellow, we wish to “update the discriminator by ascending
its stochastic gradient”. Practically, we want to maximize
\(log(D(x)) + log(1-D(G(z)))\). Due to the separate mini-batch
suggestion from ganhacks, we will calculate this in two steps. First, we
will construct a batch of real samples from the training set, forward
pass through \(D\), calculate the loss (\(log(D(x))\)), then
calculate the gradients in a backward pass. Secondly, we will construct
a batch of fake samples with the current generator, forward pass this
batch through \(D\), calculate the loss (\(log(1-D(G(z)))\)),
and *accumulate* the gradients with a backward pass. Now, with the
gradients accumulated from both the all-real and all-fake batches, we
call a step of the Discriminator’s optimizer.

**Part 2 - Train the Generator**

As stated in the original paper, we want to train the Generator by
minimizing \(log(1-D(G(z)))\) in an effort to generate better fakes.
As mentioned, this was shown by Goodfellow to not provide sufficient
gradients, especially early in the learning process. As a fix, we
instead wish to maximize \(log(D(G(z)))\). In the code we accomplish
this by: classifying the Generator output from Part 1 with the
Discriminator, computing G’s loss *using real labels as GT*, computing
G’s gradients in a backward pass, and finally updating G’s parameters
with an optimizer step. It may seem counter-intuitive to use the real
labels as GT labels for the loss function, but this allows us to use the
\(log(x)\) part of the BCELoss (rather than the \(log(1-x)\)
part) which is exactly what we want.
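
To see numerically why the flipped objective helps (a hedged aside, not part of the original code): early in training \(D(G(z))\) is close to 0, where \(log(1-x)\) is nearly flat but \(log(x)\) is steep, so the flipped loss provides a much stronger learning signal.

```
# Illustrative only: suppose D scores a fake at 0.01 early in training.
bce = nn.BCELoss()
d_of_gz = torch.tensor([0.01])
print(bce(d_of_gz, torch.ones(1)))    # flipped G loss:  -log(0.01) ~= 4.61 (steep)
print(bce(d_of_gz, torch.zeros(1)))   # original G loss: -log(0.99) ~= 0.01 (flat)
```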

Finally, we will do some statistic reporting and at the end of each epoch we will push our fixed_noise batch through the generator to visually track the progress of G’s training. The training statistics reported are:

- **Loss_D** - discriminator loss calculated as the sum of losses for the all-real and all-fake batches (\(log(D(x)) + log(1-D(G(z)))\))
- **Loss_G** - generator loss calculated as \(log(D(G(z)))\)
- **D(x)** - the average output (across the batch) of the discriminator for the all-real batch. This should start close to 1 then theoretically converge to 0.5 when G gets better. Think about why this is.
- **D(G(z))** - average discriminator outputs for the all-fake batch. The first number is before D is updated and the second number is after D is updated. These numbers should start near 0 and converge to 0.5 as G gets better. Think about why this is.

**Note:** This step might take a while, depending on how many epochs you
run and if you removed some data from the dataset.

```
# Training Loop

# Lists to keep track of progress
img_list = []
G_losses = []
D_losses = []
iters = 0

print("Starting Training Loop...")
# For each epoch
for epoch in range(num_epochs):
    # For each batch in the dataloader
    for i, data in enumerate(dataloader, 0):

        ############################
        # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
        ###########################
        ## Train with all-real batch
        netD.zero_grad()
        # Format batch
        real_cpu = data[0].to(device)
        b_size = real_cpu.size(0)
        label = torch.full((b_size,), real_label, dtype=torch.float, device=device)
        # Forward pass real batch through D
        output = netD(real_cpu).view(-1)
        # Calculate loss on all-real batch
        errD_real = criterion(output, label)
        # Calculate gradients for D in backward pass
        errD_real.backward()
        D_x = output.mean().item()

        ## Train with all-fake batch
        # Generate batch of latent vectors
        noise = torch.randn(b_size, nz, 1, 1, device=device)
        # Generate fake image batch with G
        fake = netG(noise)
        label.fill_(fake_label)
        # Classify all fake batch with D
        output = netD(fake.detach()).view(-1)
        # Calculate D's loss on the all-fake batch
        errD_fake = criterion(output, label)
        # Calculate the gradients for this batch
        errD_fake.backward()
        D_G_z1 = output.mean().item()
        # Add the gradients from the all-real and all-fake batches
        errD = errD_real + errD_fake
        # Update D
        optimizerD.step()

        ############################
        # (2) Update G network: maximize log(D(G(z)))
        ###########################
        netG.zero_grad()
        label.fill_(real_label)  # fake labels are real for generator cost
        # Since we just updated D, perform another forward pass of all-fake batch through D
        output = netD(fake).view(-1)
        # Calculate G's loss based on this output
        errG = criterion(output, label)
        # Calculate gradients for G
        errG.backward()
        D_G_z2 = output.mean().item()
        # Update G
        optimizerG.step()

        # Output training stats
        if i % 50 == 0:
            print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f'
                  % (epoch, num_epochs, i, len(dataloader),
                     errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))

        # Save Losses for plotting later
        G_losses.append(errG.item())
        D_losses.append(errD.item())

        # Check how the generator is doing by saving G's output on fixed_noise
        if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)):
            with torch.no_grad():
                fake = netG(fixed_noise).detach().cpu()
            img_list.append(vutils.make_grid(fake, padding=2, normalize=True))

        iters += 1
```

Out:

```
Starting Training Loop...
[0/5][0/1583] Loss_D: 1.7410 Loss_G: 4.7765 D(x): 0.5343 D(G(z)): 0.5771 / 0.0136
[0/5][50/1583] Loss_D: 0.0612 Loss_G: 5.6819 D(x): 0.9802 D(G(z)): 0.0280 / 0.0128
[0/5][100/1583] Loss_D: 0.3386 Loss_G: 5.5543 D(x): 0.8714 D(G(z)): 0.0753 / 0.0061
[0/5][150/1583] Loss_D: 0.6645 Loss_G: 4.1186 D(x): 0.6207 D(G(z)): 0.0232 / 0.0319
[0/5][200/1583] Loss_D: 0.6224 Loss_G: 5.0742 D(x): 0.6577 D(G(z)): 0.0180 / 0.0182
[0/5][250/1583] Loss_D: 1.1068 Loss_G: 6.0105 D(x): 0.8665 D(G(z)): 0.5356 / 0.0065
[0/5][300/1583] Loss_D: 0.6362 Loss_G: 3.0147 D(x): 0.6475 D(G(z)): 0.0693 / 0.0708
[0/5][350/1583] Loss_D: 0.5961 Loss_G: 6.0109 D(x): 0.9108 D(G(z)): 0.3481 / 0.0056
[0/5][400/1583] Loss_D: 0.6504 Loss_G: 4.7492 D(x): 0.7980 D(G(z)): 0.2629 / 0.0161
[0/5][450/1583] Loss_D: 0.4975 Loss_G: 3.4582 D(x): 0.7916 D(G(z)): 0.1714 / 0.0493
[0/5][500/1583] Loss_D: 0.6064 Loss_G: 5.9301 D(x): 0.6418 D(G(z)): 0.0154 / 0.0075
[0/5][550/1583] Loss_D: 0.5386 Loss_G: 3.0440 D(x): 0.7223 D(G(z)): 0.0623 / 0.0721
[0/5][600/1583] Loss_D: 0.5438 Loss_G: 5.0630 D(x): 0.8342 D(G(z)): 0.2186 / 0.0137
[0/5][650/1583] Loss_D: 0.3589 Loss_G: 5.3334 D(x): 0.7848 D(G(z)): 0.0512 / 0.0128
[0/5][700/1583] Loss_D: 1.2913 Loss_G: 1.9607 D(x): 0.4800 D(G(z)): 0.0237 / 0.2481
[0/5][750/1583] Loss_D: 1.1204 Loss_G: 8.0865 D(x): 0.9155 D(G(z)): 0.5475 / 0.0012
[0/5][800/1583] Loss_D: 0.5440 Loss_G: 7.0507 D(x): 0.9488 D(G(z)): 0.3400 / 0.0020
[0/5][850/1583] Loss_D: 1.2007 Loss_G: 5.7836 D(x): 0.9307 D(G(z)): 0.5358 / 0.0069
[0/5][900/1583] Loss_D: 0.6336 Loss_G: 5.1699 D(x): 0.7686 D(G(z)): 0.2360 / 0.0123
[0/5][950/1583] Loss_D: 0.7086 Loss_G: 2.4324 D(x): 0.6151 D(G(z)): 0.0500 / 0.1356
[0/5][1000/1583] Loss_D: 0.9949 Loss_G: 5.5314 D(x): 0.8959 D(G(z)): 0.5068 / 0.0094
[0/5][1050/1583] Loss_D: 1.0692 Loss_G: 7.2607 D(x): 0.9312 D(G(z)): 0.5539 / 0.0020
[0/5][1100/1583] Loss_D: 0.8563 Loss_G: 5.2237 D(x): 0.8543 D(G(z)): 0.4079 / 0.0124
[0/5][1150/1583] Loss_D: 1.0366 Loss_G: 8.4808 D(x): 0.9338 D(G(z)): 0.5539 / 0.0005
[0/5][1200/1583] Loss_D: 0.5036 Loss_G: 3.9316 D(x): 0.6914 D(G(z)): 0.0313 / 0.0474
[0/5][1250/1583] Loss_D: 0.4875 Loss_G: 4.1724 D(x): 0.9288 D(G(z)): 0.2873 / 0.0296
[0/5][1300/1583] Loss_D: 1.6240 Loss_G: 10.1444 D(x): 0.9512 D(G(z)): 0.7121 / 0.0002
[0/5][1350/1583] Loss_D: 0.3466 Loss_G: 4.3779 D(x): 0.8570 D(G(z)): 0.1347 / 0.0266
[0/5][1400/1583] Loss_D: 1.0945 Loss_G: 5.0242 D(x): 0.4271 D(G(z)): 0.0045 / 0.0198
[0/5][1450/1583] Loss_D: 0.4036 Loss_G: 4.8519 D(x): 0.9323 D(G(z)): 0.2431 / 0.0125
[0/5][1500/1583] Loss_D: 0.4211 Loss_G: 4.2635 D(x): 0.9202 D(G(z)): 0.2499 / 0.0235
[0/5][1550/1583] Loss_D: 0.6313 Loss_G: 2.1711 D(x): 0.6148 D(G(z)): 0.0301 / 0.1642
[1/5][0/1583] Loss_D: 0.4374 Loss_G: 3.2809 D(x): 0.8676 D(G(z)): 0.2115 / 0.0595
[1/5][50/1583] Loss_D: 0.3890 Loss_G: 4.2647 D(x): 0.8779 D(G(z)): 0.1878 / 0.0246
[1/5][100/1583] Loss_D: 0.5632 Loss_G: 2.1876 D(x): 0.6674 D(G(z)): 0.0582 / 0.1581
[1/5][150/1583] Loss_D: 0.7452 Loss_G: 1.4767 D(x): 0.5901 D(G(z)): 0.0515 / 0.3134
[1/5][200/1583] Loss_D: 0.5235 Loss_G: 2.4908 D(x): 0.7241 D(G(z)): 0.1153 / 0.1237
[1/5][250/1583] Loss_D: 0.6736 Loss_G: 2.4484 D(x): 0.6405 D(G(z)): 0.0768 / 0.1446
[1/5][300/1583] Loss_D: 0.3241 Loss_G: 4.4764 D(x): 0.9072 D(G(z)): 0.1844 / 0.0182
[1/5][350/1583] Loss_D: 0.7582 Loss_G: 1.5815 D(x): 0.5891 D(G(z)): 0.0633 / 0.2747
[1/5][400/1583] Loss_D: 0.4859 Loss_G: 4.9354 D(x): 0.9021 D(G(z)): 0.2738 / 0.0124
[1/5][450/1583] Loss_D: 0.4614 Loss_G: 3.2952 D(x): 0.8261 D(G(z)): 0.1851 / 0.0540
[1/5][500/1583] Loss_D: 0.8310 Loss_G: 1.7699 D(x): 0.5379 D(G(z)): 0.0368 / 0.2212
[1/5][550/1583] Loss_D: 0.6769 Loss_G: 5.8292 D(x): 0.9275 D(G(z)): 0.3921 / 0.0064
[1/5][600/1583] Loss_D: 0.6454 Loss_G: 4.1244 D(x): 0.8966 D(G(z)): 0.3720 / 0.0244
[1/5][650/1583] Loss_D: 0.3627 Loss_G: 2.1587 D(x): 0.7742 D(G(z)): 0.0538 / 0.1552
[1/5][700/1583] Loss_D: 0.4774 Loss_G: 4.0678 D(x): 0.9170 D(G(z)): 0.2917 / 0.0263
[1/5][750/1583] Loss_D: 0.6703 Loss_G: 4.9335 D(x): 0.9276 D(G(z)): 0.3900 / 0.0135
[1/5][800/1583] Loss_D: 0.9533 Loss_G: 3.2846 D(x): 0.7302 D(G(z)): 0.3869 / 0.0654
[1/5][850/1583] Loss_D: 0.5749 Loss_G: 3.1238 D(x): 0.7971 D(G(z)): 0.2428 / 0.0629
[1/5][900/1583] Loss_D: 0.6346 Loss_G: 3.8405 D(x): 0.8686 D(G(z)): 0.3390 / 0.0312
[1/5][950/1583] Loss_D: 0.3454 Loss_G: 3.0266 D(x): 0.8157 D(G(z)): 0.0986 / 0.0680
[1/5][1000/1583] Loss_D: 0.5232 Loss_G: 3.0917 D(x): 0.7159 D(G(z)): 0.1115 / 0.0649
[1/5][1050/1583] Loss_D: 0.4057 Loss_G: 3.1889 D(x): 0.8739 D(G(z)): 0.2137 / 0.0583
[1/5][1100/1583] Loss_D: 0.7644 Loss_G: 4.6229 D(x): 0.9007 D(G(z)): 0.4315 / 0.0152
[1/5][1150/1583] Loss_D: 0.5870 Loss_G: 4.2393 D(x): 0.8820 D(G(z)): 0.3301 / 0.0218
[1/5][1200/1583] Loss_D: 0.5409 Loss_G: 4.3346 D(x): 0.8950 D(G(z)): 0.3123 / 0.0188
[1/5][1250/1583] Loss_D: 0.5145 Loss_G: 2.7247 D(x): 0.7959 D(G(z)): 0.2113 / 0.0845
[1/5][1300/1583] Loss_D: 0.9762 Loss_G: 5.1848 D(x): 0.9335 D(G(z)): 0.5338 / 0.0111
[1/5][1350/1583] Loss_D: 0.4861 Loss_G: 1.9915 D(x): 0.7476 D(G(z)): 0.1443 / 0.1784
[1/5][1400/1583] Loss_D: 0.5126 Loss_G: 3.5335 D(x): 0.8686 D(G(z)): 0.2767 / 0.0437
[1/5][1450/1583] Loss_D: 0.6201 Loss_G: 4.0346 D(x): 0.8563 D(G(z)): 0.3323 / 0.0297
[1/5][1500/1583] Loss_D: 0.8325 Loss_G: 3.7802 D(x): 0.9116 D(G(z)): 0.4682 / 0.0337
[1/5][1550/1583] Loss_D: 0.5370 Loss_G: 3.2629 D(x): 0.8782 D(G(z)): 0.3033 / 0.0527
[2/5][0/1583] Loss_D: 0.4490 Loss_G: 2.6278 D(x): 0.8644 D(G(z)): 0.2286 / 0.0968
[2/5][50/1583] Loss_D: 0.5429 Loss_G: 2.4201 D(x): 0.7206 D(G(z)): 0.1421 / 0.1274
[2/5][100/1583] Loss_D: 0.6441 Loss_G: 1.8611 D(x): 0.6521 D(G(z)): 0.1372 / 0.1895
[2/5][150/1583] Loss_D: 0.7552 Loss_G: 1.9784 D(x): 0.5349 D(G(z)): 0.0259 / 0.1920
[2/5][200/1583] Loss_D: 0.6191 Loss_G: 2.8107 D(x): 0.8282 D(G(z)): 0.3087 / 0.0807
[2/5][250/1583] Loss_D: 0.6378 Loss_G: 0.7903 D(x): 0.6215 D(G(z)): 0.0737 / 0.5304
[2/5][300/1583] Loss_D: 0.8615 Loss_G: 4.1822 D(x): 0.9125 D(G(z)): 0.4813 / 0.0214
[2/5][350/1583] Loss_D: 0.7818 Loss_G: 2.1185 D(x): 0.5268 D(G(z)): 0.0236 / 0.1761
[2/5][400/1583] Loss_D: 0.5775 Loss_G: 2.4210 D(x): 0.7225 D(G(z)): 0.1640 / 0.1216
[2/5][450/1583] Loss_D: 0.6574 Loss_G: 1.9382 D(x): 0.6125 D(G(z)): 0.0717 / 0.1851
[2/5][500/1583] Loss_D: 0.7193 Loss_G: 1.3663 D(x): 0.6460 D(G(z)): 0.1790 / 0.2970
[2/5][550/1583] Loss_D: 0.5717 Loss_G: 1.1562 D(x): 0.6722 D(G(z)): 0.1172 / 0.3544
[2/5][600/1583] Loss_D: 0.5733 Loss_G: 1.7378 D(x): 0.7067 D(G(z)): 0.1535 / 0.2104
[2/5][650/1583] Loss_D: 0.5524 Loss_G: 2.5174 D(x): 0.7396 D(G(z)): 0.1827 / 0.1124
[2/5][700/1583] Loss_D: 0.5783 Loss_G: 3.1047 D(x): 0.8400 D(G(z)): 0.2942 / 0.0583
[2/5][750/1583] Loss_D: 0.7621 Loss_G: 3.5279 D(x): 0.8625 D(G(z)): 0.4185 / 0.0402
[2/5][800/1583] Loss_D: 0.5156 Loss_G: 2.2836 D(x): 0.7676 D(G(z)): 0.1933 / 0.1315
[2/5][850/1583] Loss_D: 0.5184 Loss_G: 1.9332 D(x): 0.6832 D(G(z)): 0.0822 / 0.1832
[2/5][900/1583] Loss_D: 0.6994 Loss_G: 1.4406 D(x): 0.6425 D(G(z)): 0.1765 / 0.2786
[2/5][950/1583] Loss_D: 0.4353 Loss_G: 2.2441 D(x): 0.7550 D(G(z)): 0.1131 / 0.1345
[2/5][1000/1583] Loss_D: 0.7991 Loss_G: 2.1826 D(x): 0.5514 D(G(z)): 0.0824 / 0.1587
[2/5][1050/1583] Loss_D: 0.5401 Loss_G: 2.7843 D(x): 0.7868 D(G(z)): 0.2256 / 0.0835
[2/5][1100/1583] Loss_D: 0.4664 Loss_G: 3.1416 D(x): 0.8352 D(G(z)): 0.2209 / 0.0596
[2/5][1150/1583] Loss_D: 0.6325 Loss_G: 2.8259 D(x): 0.8070 D(G(z)): 0.3063 / 0.0788
[2/5][1200/1583] Loss_D: 0.6430 Loss_G: 2.3213 D(x): 0.7069 D(G(z)): 0.2053 / 0.1288
[2/5][1250/1583] Loss_D: 0.6666 Loss_G: 2.3651 D(x): 0.6957 D(G(z)): 0.2232 / 0.1136
[2/5][1300/1583] Loss_D: 0.6898 Loss_G: 2.0959 D(x): 0.5888 D(G(z)): 0.0698 / 0.1618
[2/5][1350/1583] Loss_D: 0.4682 Loss_G: 2.3703 D(x): 0.7865 D(G(z)): 0.1815 / 0.1146
[2/5][1400/1583] Loss_D: 1.9241 Loss_G: 4.9826 D(x): 0.9611 D(G(z)): 0.8045 / 0.0120
[2/5][1450/1583] Loss_D: 0.5572 Loss_G: 3.0824 D(x): 0.8141 D(G(z)): 0.2645 / 0.0623
[2/5][1500/1583] Loss_D: 0.5475 Loss_G: 2.4658 D(x): 0.7895 D(G(z)): 0.2381 / 0.1056
[2/5][1550/1583] Loss_D: 0.6447 Loss_G: 2.0879 D(x): 0.7510 D(G(z)): 0.2605 / 0.1475
[3/5][0/1583] Loss_D: 0.6381 Loss_G: 2.2677 D(x): 0.5997 D(G(z)): 0.0589 / 0.1423
[3/5][50/1583] Loss_D: 0.5246 Loss_G: 2.4863 D(x): 0.8170 D(G(z)): 0.2533 / 0.0971
[3/5][100/1583] Loss_D: 0.6546 Loss_G: 2.0608 D(x): 0.5901 D(G(z)): 0.0482 / 0.1742
[3/5][150/1583] Loss_D: 0.6993 Loss_G: 2.1796 D(x): 0.7464 D(G(z)): 0.2891 / 0.1468
[3/5][200/1583] Loss_D: 0.4683 Loss_G: 3.4126 D(x): 0.8732 D(G(z)): 0.2604 / 0.0449
[3/5][250/1583] Loss_D: 1.1389 Loss_G: 2.0086 D(x): 0.4573 D(G(z)): 0.1672 / 0.1946
[3/5][300/1583] Loss_D: 0.9955 Loss_G: 2.6264 D(x): 0.8518 D(G(z)): 0.5098 / 0.1045
[3/5][350/1583] Loss_D: 0.6122 Loss_G: 2.3177 D(x): 0.7341 D(G(z)): 0.2195 / 0.1218
[3/5][400/1583] Loss_D: 0.4851 Loss_G: 2.3104 D(x): 0.7164 D(G(z)): 0.1048 / 0.1302
[3/5][450/1583] Loss_D: 0.9736 Loss_G: 3.0969 D(x): 0.8177 D(G(z)): 0.4831 / 0.0666
[3/5][500/1583] Loss_D: 0.4172 Loss_G: 3.0851 D(x): 0.8413 D(G(z)): 0.1882 / 0.0607
[3/5][550/1583] Loss_D: 1.0374 Loss_G: 4.5584 D(x): 0.9371 D(G(z)): 0.5627 / 0.0158
[3/5][600/1583] Loss_D: 1.4752 Loss_G: 4.2739 D(x): 0.9154 D(G(z)): 0.6914 / 0.0259
[3/5][650/1583] Loss_D: 0.5531 Loss_G: 3.0427 D(x): 0.8980 D(G(z)): 0.3317 / 0.0651
[3/5][700/1583] Loss_D: 0.5063 Loss_G: 3.2418 D(x): 0.9168 D(G(z)): 0.3194 / 0.0526
[3/5][750/1583] Loss_D: 0.9072 Loss_G: 2.6338 D(x): 0.8287 D(G(z)): 0.4476 / 0.1051
[3/5][800/1583] Loss_D: 0.5793 Loss_G: 3.0017 D(x): 0.8367 D(G(z)): 0.2952 / 0.0650
[3/5][850/1583] Loss_D: 0.7323 Loss_G: 1.5835 D(x): 0.7031 D(G(z)): 0.2542 / 0.2624
[3/5][900/1583] Loss_D: 0.8700 Loss_G: 0.7147 D(x): 0.5452 D(G(z)): 0.1443 / 0.5193
[3/5][950/1583] Loss_D: 0.6939 Loss_G: 3.8402 D(x): 0.8951 D(G(z)): 0.3957 / 0.0323
[3/5][1000/1583] Loss_D: 0.6877 Loss_G: 3.2708 D(x): 0.8961 D(G(z)): 0.4064 / 0.0479
[3/5][1050/1583] Loss_D: 0.5942 Loss_G: 2.1052 D(x): 0.7482 D(G(z)): 0.1943 / 0.1599
[3/5][1100/1583] Loss_D: 1.3440 Loss_G: 1.0975 D(x): 0.3225 D(G(z)): 0.0275 / 0.3898
[3/5][1150/1583] Loss_D: 0.9743 Loss_G: 0.7362 D(x): 0.4532 D(G(z)): 0.0377 / 0.5191
[3/5][1200/1583] Loss_D: 0.5449 Loss_G: 3.3160 D(x): 0.9286 D(G(z)): 0.3405 / 0.0470
[3/5][1250/1583] Loss_D: 1.4523 Loss_G: 4.5809 D(x): 0.9435 D(G(z)): 0.6834 / 0.0181
[3/5][1300/1583] Loss_D: 0.5323 Loss_G: 1.8969 D(x): 0.7139 D(G(z)): 0.1336 / 0.1756
[3/5][1350/1583] Loss_D: 0.8738 Loss_G: 1.4546 D(x): 0.5904 D(G(z)): 0.2019 / 0.2847
[3/5][1400/1583] Loss_D: 0.5638 Loss_G: 2.9446 D(x): 0.8311 D(G(z)): 0.2846 / 0.0680
[3/5][1450/1583] Loss_D: 0.6807 Loss_G: 2.0295 D(x): 0.7355 D(G(z)): 0.2727 / 0.1622
[3/5][1500/1583] Loss_D: 1.0942 Loss_G: 4.2318 D(x): 0.9379 D(G(z)): 0.5893 / 0.0220
[3/5][1550/1583] Loss_D: 0.7626 Loss_G: 1.1945 D(x): 0.5456 D(G(z)): 0.0699 / 0.3559
[4/5][0/1583] Loss_D: 0.8527 Loss_G: 1.5775 D(x): 0.5045 D(G(z)): 0.0592 / 0.2595
[4/5][50/1583] Loss_D: 0.6169 Loss_G: 2.1098 D(x): 0.6210 D(G(z)): 0.0715 / 0.1593
[4/5][100/1583] Loss_D: 0.6240 Loss_G: 4.0493 D(x): 0.8949 D(G(z)): 0.3613 / 0.0245
[4/5][150/1583] Loss_D: 1.2244 Loss_G: 4.4992 D(x): 0.9142 D(G(z)): 0.6345 / 0.0168
[4/5][200/1583] Loss_D: 0.8440 Loss_G: 3.9695 D(x): 0.9430 D(G(z)): 0.4920 / 0.0259
[4/5][250/1583] Loss_D: 0.4478 Loss_G: 2.2964 D(x): 0.8179 D(G(z)): 0.1943 / 0.1250
[4/5][300/1583] Loss_D: 2.2635 Loss_G: 0.5694 D(x): 0.1612 D(G(z)): 0.0444 / 0.6162
[4/5][350/1583] Loss_D: 0.6468 Loss_G: 2.0621 D(x): 0.7144 D(G(z)): 0.2285 / 0.1549
[4/5][400/1583] Loss_D: 0.5546 Loss_G: 2.5629 D(x): 0.7203 D(G(z)): 0.1549 / 0.0987
[4/5][450/1583] Loss_D: 0.5631 Loss_G: 2.6959 D(x): 0.8355 D(G(z)): 0.2860 / 0.0950
[4/5][500/1583] Loss_D: 0.3379 Loss_G: 3.3417 D(x): 0.9105 D(G(z)): 0.2015 / 0.0461
[4/5][550/1583] Loss_D: 0.4735 Loss_G: 2.0325 D(x): 0.7338 D(G(z)): 0.1263 / 0.1609
[4/5][600/1583] Loss_D: 2.0311 Loss_G: 0.2577 D(x): 0.1952 D(G(z)): 0.0265 / 0.7894
[4/5][650/1583] Loss_D: 0.7533 Loss_G: 1.8400 D(x): 0.5676 D(G(z)): 0.0874 / 0.1890
[4/5][700/1583] Loss_D: 0.9188 Loss_G: 4.6259 D(x): 0.9499 D(G(z)): 0.5365 / 0.0142
[4/5][750/1583] Loss_D: 2.2407 Loss_G: 4.3486 D(x): 0.9678 D(G(z)): 0.8384 / 0.0196
[4/5][800/1583] Loss_D: 0.4253 Loss_G: 2.5194 D(x): 0.7736 D(G(z)): 0.1336 / 0.1052
[4/5][850/1583] Loss_D: 0.6301 Loss_G: 3.4002 D(x): 0.9003 D(G(z)): 0.3777 / 0.0441
[4/5][900/1583] Loss_D: 0.6255 Loss_G: 3.0213 D(x): 0.8341 D(G(z)): 0.3128 / 0.0653
[4/5][950/1583] Loss_D: 0.4347 Loss_G: 2.4700 D(x): 0.8621 D(G(z)): 0.2262 / 0.1087
[4/5][1000/1583] Loss_D: 1.8125 Loss_G: 6.8850 D(x): 0.9794 D(G(z)): 0.7733 / 0.0017
[4/5][1050/1583] Loss_D: 0.6759 Loss_G: 3.5487 D(x): 0.9311 D(G(z)): 0.3988 / 0.0406
[4/5][1100/1583] Loss_D: 0.6292 Loss_G: 2.3590 D(x): 0.7340 D(G(z)): 0.2332 / 0.1170
[4/5][1150/1583] Loss_D: 0.5594 Loss_G: 3.1096 D(x): 0.8872 D(G(z)): 0.3207 / 0.0589
[4/5][1200/1583] Loss_D: 0.8286 Loss_G: 1.5944 D(x): 0.5454 D(G(z)): 0.1241 / 0.2445
[4/5][1250/1583] Loss_D: 0.5629 Loss_G: 2.0180 D(x): 0.6664 D(G(z)): 0.0998 / 0.1761
[4/5][1300/1583] Loss_D: 0.4862 Loss_G: 2.3706 D(x): 0.7676 D(G(z)): 0.1646 / 0.1231
[4/5][1350/1583] Loss_D: 0.6882 Loss_G: 3.9018 D(x): 0.8979 D(G(z)): 0.4070 / 0.0273
[4/5][1400/1583] Loss_D: 0.4564 Loss_G: 2.3660 D(x): 0.8441 D(G(z)): 0.2281 / 0.1200
[4/5][1450/1583] Loss_D: 0.4388 Loss_G: 2.3696 D(x): 0.7871 D(G(z)): 0.1470 / 0.1191
[4/5][1500/1583] Loss_D: 0.7701 Loss_G: 1.9558 D(x): 0.6080 D(G(z)): 0.1517 / 0.1852
[4/5][1550/1583] Loss_D: 0.4689 Loss_G: 2.2964 D(x): 0.8025 D(G(z)): 0.1943 / 0.1278
```

## Results

Finally, let's check out how we did. Here, we will look at three different results. First, we will see how D and G’s losses changed during training. Second, we will visualize G’s output on the fixed_noise batch for every epoch. And third, we will look at a batch of real data next to a batch of fake data from G.

**Loss versus training iteration**

Below is a plot of D & G’s losses versus training iterations.

```
plt.figure(figsize=(10,5))
plt.title("Generator and Discriminator Loss During Training")
plt.plot(G_losses,label="G")
plt.plot(D_losses,label="D")
plt.xlabel("iterations")
plt.ylabel("Loss")
plt.legend()
plt.show()
```

**Visualization of G’s progression**

Remember how we saved the generator’s output on the fixed_noise batch after every epoch of training. Now, we can visualize the training progression of G with an animation. Press the play button to start the animation.

```
#%%capture
fig = plt.figure(figsize=(8,8))
plt.axis("off")
ims = [[plt.imshow(np.transpose(i,(1,2,0)), animated=True)] for i in img_list]
ani = animation.ArtistAnimation(fig, ims, interval=1000, repeat_delay=1000, blit=True)
HTML(ani.to_jshtml())
```

**Real Images vs. Fake Images**

Finally, let's take a look at some real images and fake images side by side.

```
# Grab a batch of real images from the dataloader
real_batch = next(iter(dataloader))
# Plot the real images
plt.figure(figsize=(15,15))
plt.subplot(1,2,1)
plt.axis("off")
plt.title("Real Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=5, normalize=True).cpu(),(1,2,0)))
# Plot the fake images from the last epoch
plt.subplot(1,2,2)
plt.axis("off")
plt.title("Fake Images")
plt.imshow(np.transpose(img_list[-1],(1,2,0)))
plt.show()
```

## Where to Go Next

We have reached the end of our journey, but there are several places you could go from here. You could:

- Train for longer to see how good the results get
- Modify this model to take a different dataset and possibly change the size of the images and the model architecture
- Check out some other cool GAN projects here
- Create GANs that generate music

**Total running time of the script:** (28 minutes 32.605 seconds)