Generative Adversarial Networks in Detail

Understand the basic concepts behind GANs and their training process.


Another way to explain GANs is through the probabilistic formulation we used for variational autoencoders (VAEs).

GANs follow a different approach to finding the probability distribution of the data $p_{data}(x)$.

Instead of computing the exact or approximate $p_{data}(x)$, we only care about the ability to sample data from the distribution.

But what does that actually mean?

If we assume that our data $x_i$ follow a probability distribution $p_{data}(x)$, we will want to build a model that allows us to draw samples from $p_{data}(x)$.
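A minimal sketch of this sampling-based view, assuming a toy setup (the generator here is a fixed random linear map standing in for a trained neural network; all names are illustrative, not from a GAN library): rather than evaluating $p_{data}(x)$, we draw latent samples from a simple prior and map them into data space.

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim, data_dim = 8, 2

# Stand-in "generator": a fixed linear map. In a real GAN this would be
# a neural network whose weights are learned adversarially.
W = rng.normal(size=(latent_dim, data_dim))

def sample(n):
    # Draw z from a simple prior p(z) = N(0, I) ...
    z = rng.normal(size=(n, latent_dim))
    # ... and push it through the generator to obtain samples in data space.
    # We never compute a density for these samples; we only generate them.
    return z @ W

samples = sample(1000)
print(samples.shape)  # (1000, 2)
```

The key design point this illustrates: the model is defined entirely by the sampling procedure `z -> G(z)`, so we can generate data without ever writing down $p_{data}(x)$ in closed form.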

As we did with VAEs, we again introduce a latent variable $z$ with a prior distribution $p(z)$. $p(z)$ is usually a simple ...