# WGAN—Understanding the Wasserstein Distance

Learn what Wasserstein distance is and get to know its benefits.


GANs are known to be hard to train, as anyone who has tried to build one from scratch can attest. In this lesson, we will talk about how to use a better distance measure to improve the training of GANs, namely, the Wasserstein GAN.

*Figure: The groundwork for Wasserstein GAN*

## Analyzing the problems with vanilla GAN loss

Let’s go over the commonly used loss functions for GANs:

- $\underset{real}{\mathbb E} [\log D(x)] + \underset{fake}{\mathbb E} [\log(1 - D(x))]$, the vanilla form of the GAN loss, which the discriminator maximizes.
- $\underset{fake}{\mathbb E} [\log(1 - D(x))]$, the original minimax generator loss, which the generator minimizes.
- $\underset{fake}{\mathbb E} [-\log D(x)]$, the non-saturating generator loss, an alternative that gives the generator stronger gradients early in training.
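To make the three losses concrete, here is a minimal pure-Python sketch (not tied to any particular framework; the sample discriminator outputs below are made up for illustration) that computes each loss from discriminator outputs on real and fake batches:

```python
import math

def vanilla_gan_loss(d_real, d_fake):
    # E_real[log D(x)] + E_fake[log(1 - D(x))] -- maximized by the discriminator
    return (sum(math.log(d) for d in d_real) / len(d_real)
            + sum(math.log(1.0 - d) for d in d_fake) / len(d_fake))

def minimax_generator_loss(d_fake):
    # E_fake[log(1 - D(x))] -- minimized by the generator;
    # its gradient saturates when D(x) is close to 0
    return sum(math.log(1.0 - d) for d in d_fake) / len(d_fake)

def non_saturating_generator_loss(d_fake):
    # E_fake[-log D(x)] -- minimized by the generator;
    # keeps strong gradients even when D(x) is close to 0
    return sum(-math.log(d) for d in d_fake) / len(d_fake)

# Hypothetical discriminator outputs in (0, 1)
d_real = [0.9, 0.8, 0.95]
d_fake = [0.1, 0.2, 0.05]

print(vanilla_gan_loss(d_real, d_fake))
print(minimax_generator_loss(d_fake))
print(non_saturating_generator_loss(d_fake))
```

Note how the last two functions differ only in how they penalize confidently rejected fakes: when $D(x) \approx 0$, the minimax loss barely changes, while the non-saturating loss grows without bound.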

Experimental results have shown that these loss functions work well in several applications. However, let’s dig deeper into these functions and see what could go wrong when they don’t work so well.

**Step 1:** Problems with the first loss function:

Assume that the generator network is fixed, and we need to find the optimal discriminator network $D$. We have the following:
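Although the derivation is truncated here, the standard result from the original GAN analysis is the following (writing $p_r$ for the real data distribution and $p_g$ for the generator's distribution; these symbol names are ours):

$D^*(x) = \dfrac{p_r(x)}{p_r(x) + p_g(x)}$

Substituting $D^*$ back into the vanilla loss shows that, with an optimal discriminator, the generator is effectively minimizing the Jensen–Shannon divergence between $p_r$ and $p_g$, which yields near-zero gradients whenever the two distributions have little overlap. This is precisely the problem the Wasserstein distance is meant to fix.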
