
Variational Autoencoder: Theory

Understand the core principles of variational autoencoders: how they use probabilistic latent spaces to generate data, the components of the ELBO loss function, and the reparameterization trick that enables gradient backpropagation during training.

In simple terms, a variational autoencoder is a probabilistic version of the autoencoder.

Why?

Because we want to be able to sample from the latent vector ($z$) space to generate new data, which is not possible with vanilla autoencoders.
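To make this concrete, here is a minimal PyTorch sketch (the decoder architecture, `latent_dim`, and the 784-dimensional output are hypothetical stand-ins, and a real decoder would of course be trained first): because a VAE's latent space is shaped to match a known prior, generating new data amounts to sampling $z$ from that prior and decoding it.

```python
import torch

latent_dim = 32

# Stand-in decoder for illustration; a real one would be trained as part of a VAE.
decoder = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 784),   # e.g. a flattened 28x28 image
    torch.nn.Sigmoid(),
)

z = torch.randn(1, latent_dim)   # sample z ~ N(0, I) from the prior
x_new = decoder(z)               # decode the latent sample into new data
```

With a vanilla autoencoder there is no such prior to sample from: the latent codes of the training data can land anywhere in the space, so a randomly chosen $z$ usually decodes to garbage.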

Each latent variable $z$ that is generated from the input will now represent a probability distribution (or what we call the posterior distribution, denoted as $p(z|x)$).

All we need to do is find the posterior $p(z|x)$; in other words, solve the inference problem.
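Written out with Bayes' rule, the posterior we are after is:

$$
p(z|x) = \frac{p(x|z)\, p(z)}{p(x)}, \qquad p(x) = \int p(x|z)\, p(z)\, dz
$$

The catch is the evidence $p(x)$ in the denominator: it integrates over every possible latent configuration, which is intractable for any realistically sized model. This is exactly why we approximate the posterior instead of computing it directly.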

In fact, the encoder will try to approximate the true posterior $p(z|x)$ with a learned distribution $q(z|x)$.
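As a rough sketch of what that approximation looks like in code (PyTorch, with hypothetical layer sizes): the encoder outputs the mean and log-variance of a Gaussian $q(z|x)$, and the reparameterization trick $z = \mu + \sigma \cdot \epsilon$ rewrites sampling so that gradients can flow back through $\mu$ and $\sigma$ during training.

```python
import torch
import torch.nn as nn

class GaussianEncoder(nn.Module):
    """Maps an input x to the mean and log-variance of the Gaussian q(z|x)."""

    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=32):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, latent_dim)       # mean of q(z|x)
        self.log_var = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)

    def forward(self, x):
        h = self.hidden(x)
        return self.mu(h), self.log_var(h)

def reparameterize(mu, log_var):
    """z = mu + sigma * eps with eps ~ N(0, I), keeping the sample differentiable."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

encoder = GaussianEncoder()
x = torch.rand(8, 784)            # dummy batch standing in for real inputs
mu, log_var = encoder(x)
z = reparameterize(mu, log_var)   # one differentiable sample of z per input
```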