Introducing Adversarial Learning
Get a brief overview of what adversarial learning is.
The training process in which two models try to defeat each other and, as a result, improve each other is called adversarial learning. As demonstrated in the following diagram, Model A and Model B have completely opposite agendas (for example, classification and generation). However, during each training step, the output of Model A improves Model B, and the output of Model B improves Model A.
Generator and discriminator networks
Here, we will show the basic components of GANs and explain how they work with and against each other to achieve our goal of generating realistic samples. A typical structure of a GAN is shown in the following diagram. It contains two different networks: a generator network and a discriminator network. The generator network typically takes random noise as input and generates fake samples. Our goal is for the fake samples to be as close to the real samples as possible. That’s where the discriminator comes in. The discriminator is, in fact, a classification network whose job is to tell whether a given sample is real or fake. The generator tries its best to trick the discriminator into making the wrong decision, while the discriminator tries its best to distinguish the fake samples from the real ones.
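To make the two roles concrete, here is a minimal sketch of a generator and a discriminator as small fully connected networks in PyTorch. The noise dimension, layer sizes, and the flattened 28×28 sample size are illustrative assumptions rather than values prescribed above; real GANs typically use deeper (often convolutional) architectures.

```python
import torch
import torch.nn as nn

NOISE_DIM = 64        # size of the random noise vector (illustrative choice)
SAMPLE_DIM = 28 * 28  # size of a flattened sample, e.g., a small grayscale image (illustrative choice)

# Generator: maps random noise to a fake sample.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128),
    nn.ReLU(),
    nn.Linear(128, SAMPLE_DIM),
    nn.Tanh(),            # outputs in [-1, 1], matching real samples normalized to that range
)

# Discriminator: maps a sample to the probability that it is real.
discriminator = nn.Sequential(
    nn.Linear(SAMPLE_DIM, 128),
    nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
    nn.Sigmoid(),         # probability of "real"
)

# Quick check: noise in -> fake samples out -> real/fake scores out.
noise = torch.randn(16, NOISE_DIM)       # a batch of 16 noise vectors
fake_samples = generator(noise)          # 16 fake samples
scores = discriminator(fake_samples)     # 16 probabilities in (0, 1)
print(fake_samples.shape, scores.shape)  # torch.Size([16, 784]) torch.Size([16, 1])
```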
In this process, the discriminator's feedback on the differences between fake and real samples is used to improve the generator. As a result, the generator gets better at producing realistic-looking samples, while the discriminator gets better at picking them out. Since real samples are used to train the discriminator, its training is supervised. Even though the generator only ever produces fake samples and never sees the ground truth, the overall training of a GAN is still supervised.
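Building on the networks sketched above, this alternating update can be written as a single training step: the discriminator is first trained on real samples labeled 1 and generated samples labeled 0, and the generator is then trained to make the discriminator output 1 for its fakes. The Adam optimizers, learning rate, and binary cross-entropy loss are common choices assumed for illustration, not requirements stated in the text.

```python
import torch
import torch.nn as nn

criterion = nn.BCELoss()  # binary cross-entropy: real vs. fake
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)

def train_step(real_samples):
    batch_size = real_samples.size(0)
    real_labels = torch.ones(batch_size, 1)   # label 1 = real
    fake_labels = torch.zeros(batch_size, 1)  # label 0 = fake

    # 1) Update the discriminator: separate real samples from generated ones.
    fake_samples = generator(torch.randn(batch_size, NOISE_DIM)).detach()
    d_loss = (criterion(discriminator(real_samples), real_labels)
              + criterion(discriminator(fake_samples), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Update the generator: make the discriminator label its fakes as real.
    g_loss = criterion(discriminator(generator(torch.randn(batch_size, NOISE_DIM))),
                       real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Example call with a stand-in batch (random tensors in place of real data).
d_loss, g_loss = train_step(torch.randn(16, SAMPLE_DIM))
```

In practice, this step is repeated over many batches of real data; the stand-in batch here only demonstrates the mechanics of the two alternating updates.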
Mathematical background of GANs
Let’s take a look at the math behind this process to get a better understanding of the mechanism. Let G denote the generator, D the discriminator, x a real sample drawn from the data distribution p_data, and z a noise vector drawn from a prior distribution p_z (for example, a uniform or Gaussian distribution).
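With this notation, the standard objective from the original GAN paper (Goodfellow et al., 2014) is the following two-player minimax game; the formulation below follows that paper and may differ slightly in notation from how the course continues:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator D maximizes this value by assigning high probability to real samples and low probability to generated ones, while the generator G minimizes it by producing samples that D scores as real.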