Another way to explain GANs is through the probabilistic formulation we used for variational autoencoders (VAEs).
GANs follow a different approach to finding the probability distribution of the data, $p_{data}(x)$.
Instead of explicitly computing the density $p(x)$ or an approximation of it, as we did with VAEs, we only care about the ability to sample data from the distribution.
But what does that actually mean?
If we assume that our data follow a probability distribution $p_{data}(x)$, we want to build a model that allows us to draw samples from $p_{data}(x)$.
As we did with the VAE, we again introduce a latent variable $z$ with a prior distribution $p(z)$. $p(z)$ is usually a simple distribution such as a uniform or a Gaussian (normal) distribution.
We then sample $z$ from $p(z)$ and pass it to the generator network $G$, which outputs a sample of data $\hat{x}$ with $\hat{x} = G(z)$.
$\hat{x}$ can be thought of as a sample from a third distribution, the generator’s distribution $p_g$. The generator will be trained to convert random noise $z$ into fake data or, in other words, to force $p_g$ to be as close as possible to $p_{data}$.
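As a minimal sketch of what sampling from $p_g$ looks like in practice (assuming PyTorch and a hypothetical MLP generator that maps a Gaussian latent vector to a flattened image), we simply draw $z \sim p(z)$ and run a forward pass through $G$:

```python
import torch
import torch.nn as nn

# Hypothetical generator: maps a latent vector z to a flattened 28x28 image.
class Generator(nn.Module):
    def __init__(self, latent_dim=100, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, img_dim),
            nn.Tanh(),  # outputs scaled to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

G = Generator()
z = torch.randn(16, 100)   # sample z from the prior p(z), here a standard Gaussian
x_fake = G(z)              # samples drawn from the generator's distribution p_g
```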
This is where the discriminator network $D$ comes into play. The discriminator is simply a binary classifier that produces a single probability $D(x)$, where 0 corresponds to a fake generated sample and 1 to a real sample from our data distribution.
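Concretely, the standard objective from the original GAN formulation is a minimax game on the value function $V(D, G)$:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

The discriminator $D$ tries to maximize this value by correctly separating real from fake samples, while the generator $G$ tries to minimize it by producing samples that $D$ classifies as real.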
These two networks are trained using this minimax game. Let’s take a closer look.
Training
One key insight is indirect training: the generator is not trained to minimize the distance to a specific image, but simply to fool the discriminator!
The loss used in this training scheme is called the adversarial loss.
Because it requires no labels other than real vs. fake, the adversarial loss enables the model to learn in an unsupervised manner.
When we train $D$, real images are labeled as 1 and fake generated images as 0. On the other hand, when training the generator, the ground-truth label for fake images is 1 (as if they were real), even though the examples are fake.
This happens because our objective is simply to fool $D$, as the image below illustrates.
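As a rough sketch of a single training step (again assuming PyTorch, hypothetical `G` and `D` modules where `D` ends in a sigmoid, and binary cross-entropy as the adversarial loss), note how the fake images get label 0 when updating the discriminator but label 1 when updating the generator:

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()  # binary cross-entropy on D's probability output

def train_step(G, D, opt_G, opt_D, real_imgs, latent_dim=100):
    batch_size = real_imgs.size(0)
    real_labels = torch.ones(batch_size, 1)   # 1 = real
    fake_labels = torch.zeros(batch_size, 1)  # 0 = fake

    # --- Train the discriminator: real images -> 1, generated images -> 0 ---
    z = torch.randn(batch_size, latent_dim)
    fake_imgs = G(z).detach()  # detach so this update does not touch G
    d_loss = bce(D(real_imgs), real_labels) + bce(D(fake_imgs), fake_labels)
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # --- Train the generator: label its fakes as 1 to fool D ---
    z = torch.randn(batch_size, latent_dim)
    g_loss = bce(D(G(z)), real_labels)  # ground truth is 1 even though the images are fake
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()

    return d_loss.item(), g_loss.item()
```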