# Appendix D: Unstable Learning

Learn about unstable learning and gradient descent for adversarial training.

## Is gradient descent suitable for training GANs?

When training neural networks, we use gradient descent to **find a path down the loss function to the combination of learnable parameters that minimizes the error**. This is a well-researched area, and today's techniques are very sophisticated; the **Adam optimiser** is a good example.

The dynamics of a GAN are different from those of a simple neural network. The generator and discriminator networks are trying to achieve opposing objectives. There are parallels between a GAN and adversarial games in which one player tries to maximize an objective while the other tries to minimize it, each undoing the benefit of the opponent's previous move.

Is the gradient descent method suitable for such adversarial games? This might seem like an unnecessary question, but the answer is rather interesting.

## Simple adversarial example

The following is a very simple objective function:

$f = x \cdot y$

One player has control over the values of $x$ and is trying to maximize the objective $f$. A second player has control over $y$ and is trying to minimize the objective $f$.
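To see why this game is hard for gradient descent, we can write out each player's update explicitly. The gradients are $\partial f / \partial x = y$ and $\partial f / \partial y = x$, so if both players take simultaneous gradient steps with a fixed learning rate $\eta$ (an assumption for this sketch; other update schemes behave differently), the updates are

$$x_{t+1} = x_t + \eta\, y_t, \qquad y_{t+1} = y_t - \eta\, x_t.$$

A quick calculation then gives

$$x_{t+1}^2 + y_{t+1}^2 = (1 + \eta^2)\,(x_t^2 + y_t^2),$$

so every step moves the players *farther* from the equilibrium at $(0, 0)$, no matter how small the learning rate is.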

Let’s visualize this function to get a feel for it. The following picture shows a surface plot of $f = x \cdot y$ from three different angles.
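We can also check the adversarial dynamics numerically. The following is a minimal sketch, assuming both players take simultaneous gradient steps with a fixed learning rate: player $x$ performs gradient *ascent* to maximize $f = x \cdot y$, while player $y$ performs gradient *descent* to minimize it. The function name and parameters are illustrative, not from the text.

```python
def simultaneous_updates(x, y, lr=0.1, steps=50):
    """Simultaneous gradient ascent on x and descent on y for f(x, y) = x * y."""
    trajectory = [(x, y)]
    for _ in range(steps):
        grad_x = y  # df/dx = y  (x-player ascends)
        grad_y = x  # df/dy = x  (y-player descends)
        x, y = x + lr * grad_x, y - lr * grad_y  # both step at once
        trajectory.append((x, y))
    return trajectory

traj = simultaneous_updates(1.0, 1.0)

# Distance from the equilibrium (0, 0) grows on every step:
# each update multiplies x^2 + y^2 by exactly (1 + lr^2).
start = traj[0][0] ** 2 + traj[0][1] ** 2
end = traj[-1][0] ** 2 + traj[-1][1] ** 2
print(end > start)  # True: the players spiral outward instead of converging
```

Rather than settling at the saddle point $(0, 0)$, the iterates orbit it and drift outward, which is exactly the kind of unstable, oscillating behaviour that plain gradient descent can produce in adversarial training.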
