Defining Activation Functions and The Loss Function
Understand the concepts of activation functions and the loss function, and practice generating samples using a basic GAN.
We will only use NumPy to build and train our GAN model (and optionally Matplotlib to visualize the signals). All of the following code can be placed in a single .py file (such as simple_gan.py). Let's look at the code step by step:
Import the numpy library:
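The import itself is a single line, using the conventional np alias:

```python
import numpy as np
```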
Define a few constant variables that are needed in our model:
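The original listing isn't reproduced in this excerpt; a plausible set of constants, with hypothetical names such as Z_DIM (noise vector size) and X_DIM (sample length), might look like this:

```python
# Hypothetical constants; names and values are illustrative,
# not necessarily the lesson's exact listing.
Z_DIM = 1           # size of the random noise vector fed to the generator
G_HIDDEN = 10       # width of the generator's hidden layer(s)
X_DIM = 10          # length of each (real or fake) sine sample
D_HIDDEN = 10       # width of the discriminator's hidden layer(s)
STEP_SIZE_G = 0.01  # learning rate for the generator
STEP_SIZE_D = 0.01  # learning rate for the discriminator
ITER_NUM = 50000    # number of training iterations
```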
Define the real sine samples (with numpy.sin) that we want to estimate:
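The listing is omitted here; a sketch of such a sampler, assuming the hypothetical X_DIM constant above and a get_samples name, could be:

```python
def get_samples(random=True):
    """Return one real sine sample of length X_DIM.

    When random is True, the phase, frequency, and amplitude are drawn
    at random so that every real sample looks slightly different.
    """
    if random:
        x0 = np.random.uniform(0, 1)        # random phase
        freq = np.random.uniform(1.2, 1.5)  # random frequency
        mult = np.random.uniform(0.5, 0.8)  # random amplitude
    else:
        x0, freq, mult = 0.0, 0.2, 1.0      # one fixed, deterministic wave
    return np.array([mult * np.sin(x0 + freq * i) for i in range(X_DIM)])
```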
In the previous snippet, we use a bool variable named random to introduce randomness into the real samples, just as real-life data has. The real samples look like this (50 samples with random=True):
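The lesson shows a plot of the real samples at this point. A short Matplotlib snippet (assuming the hypothetical get_samples above) would reproduce it:

```python
import matplotlib.pyplot as plt

# Draw and overlay 50 random real samples.
for _ in range(50):
    plt.plot(get_samples(random=True))
plt.title('Real sine samples (random=True)')
plt.show()
```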
Define the activation functions and their derivatives. If you're not familiar with the concept of activation functions, just remember that their job is to adjust the outputs of a layer so that the next layer can make better sense of these output values:
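A typical set for a small GAN like this one (ReLU and LeakyReLU for hidden layers, Tanh for the generator output, Sigmoid for the discriminator output), together with their derivatives, might be defined as follows; the exact choice in the lesson may differ:

```python
def ReLU(x):
    return np.maximum(x, 0.0)

def dReLU(x):
    # Derivative of ReLU: 1 where x > 0, else 0.
    return np.where(x > 0, 1.0, 0.0)

def LeakyReLU(x, k=0.2):
    return np.where(x >= 0, x, x * k)

def dLeakyReLU(x, k=0.2):
    return np.where(x >= 0, 1.0, k)

def Tanh(x):
    return np.tanh(x)

def dTanh(x):
    return 1.0 - np.tanh(x) ** 2

def Sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dSigmoid(x):
    s = Sigmoid(x)
    return s * (1.0 - s)
```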
Define a helper function to initialize the layer parameters:
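One common choice, sketched below, is Xavier/Glorot-style uniform initialization, which scales the random weights by the layer's fan-in and fan-out:

```python
def weight_initializer(in_channels, out_channels):
    # Xavier/Glorot-style uniform initialization: keeps the variance of
    # activations roughly constant from layer to layer.
    scale = np.sqrt(2.0 / (in_channels + out_channels))
    return np.random.uniform(-scale, scale, (in_channels, out_channels))
```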
Define the loss function (both forward and backward):
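A small class with a forward pass (the loss value) and a backward pass (its gradient with respect to the prediction) is enough. The following sketch assumes the discriminator emits a single sigmoid probability and clamps it away from 0 and 1, for the reason explained below:

```python
class LossFunc:
    def __init__(self, eps=1e-7):
        self.eps = eps
        self.logit = None
        self.label = None

    def forward(self, logit, label):
        # Clamp the sigmoid output away from exactly 0 or 1 so that
        # np.log never receives 0 and produces -inf.
        logit = np.clip(logit, self.eps, 1.0 - self.eps)
        self.logit = logit
        self.label = label
        return -(label * np.log(logit) + (1 - label) * np.log(1 - logit))

    def backward(self):
        # dL/d(logit) of binary cross-entropy.
        return (1 - self.label) / (1 - self.logit) - self.label / self.logit
```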
This is called binary cross-entropy, which is typically used in binary classification problems (where a sample belongs to either class A or class B). Sometimes, one of the networks is trained so well that the sigmoid output of the discriminator gets too close to 0 or to 1. Both scenarios lead to numerical errors in the log function. Therefore, we need to restrain the maximum and minimum values of the output value.
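For reference, writing the discriminator's sigmoid output as y-hat and the label as y in {0, 1}, the loss computed above is:

```latex
L(\hat{y}, y) = -\left[\, y \log \hat{y} + (1 - y) \log (1 - \hat{y}) \,\right]
```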
Working on the forward pass and backpropagation
Now, let's create our generator and discriminator networks. We'll put the code in the same simple_gan.py file.