Wasserstein GAN with Gradient Penalty

Explore the Wasserstein GAN with gradient penalty.

Weight clipping can lead to problems with gradient stability. Instead, the authors of “Improved Training of Wasserstein GANs” suggest adding a gradient penalty to the critic’s loss function, which softly constrains the norm of the critic’s gradient with respect to its input to stay close to 1. Interestingly, as early as 2013, researchers had proposed a way to encourage neural networks to be k-Lipschitz by penalizing the objective function with the operator norm of each layer’s weights. With the gradient penalty, the critic loss from the paper becomes the following:
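
$$
L = \underset{\tilde{x} \sim \mathbb{P}_g}{\mathbb{E}}\big[D(\tilde{x})\big] \;-\; \underset{x \sim \mathbb{P}_r}{\mathbb{E}}\big[D(x)\big] \;+\; \lambda \,\underset{\hat{x} \sim \mathbb{P}_{\hat{x}}}{\mathbb{E}}\Big[\big(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\big)^2\Big]
$$

Here, $\mathbb{P}_r$ is the real data distribution, $\mathbb{P}_g$ is the generator’s distribution, and $\hat{x}$ is sampled uniformly along straight lines between pairs of real and generated samples. The coefficient $\lambda$ controls the strength of the penalty; the paper uses $\lambda = 10$.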

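As a concrete illustration, here is a minimal PyTorch sketch of how the penalty term might be computed. The function name `gradient_penalty`, the `critic` argument, and the assumption of 4-D image tensors are illustrative choices, not prescribed by the lesson.

```python
import torch


def gradient_penalty(critic, real, fake, device="cpu"):
    """Hypothetical helper: gradient penalty for a WGAN-GP critic."""
    batch_size = real.size(0)

    # Sample points uniformly along straight lines between real and fake samples
    # (epsilon shape assumes 4-D image tensors: batch x channels x height x width).
    epsilon = torch.rand(batch_size, 1, 1, 1, device=device)
    interpolated = epsilon * real + (1 - epsilon) * fake
    interpolated.requires_grad_(True)

    # Critic scores on the interpolated samples.
    scores = critic(interpolated)

    # Gradients of the scores with respect to the interpolated inputs.
    gradients = torch.autograd.grad(
        outputs=scores,
        inputs=interpolated,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
        retain_graph=True,
    )[0]

    # Penalize the deviation of the gradient norm from 1.
    gradients = gradients.reshape(batch_size, -1)
    gradient_norm = gradients.norm(2, dim=1)
    return ((gradient_norm - 1) ** 2).mean()
```

The returned penalty, scaled by $\lambda$, is added to the usual Wasserstein critic loss before backpropagating through the critic.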