Cross Entropy

Learn about cross-entropy loss in detail and play with its code.

What is the cross-entropy loss?

So far, we have used the log loss formula for our binary classifiers. We even used the log loss when we bundled ten binary classifiers into a multiclass classifier (in The Final Challenge). In that case, we added together the losses of the ten classifiers to get a total loss.

While the log loss has served us well so far, it’s time to switch to a simpler formula, one that’s specific to multiclass classifiers. It’s called the cross-entropy loss: it measures the distance between the classifier’s predictions and the labels (the lower the loss, the better the classifier), and it looks like this:

L = -\frac{1}{m}\sum_{i}{y_i \cdot \log(\hat{y}_i)}

Here’s the cross-entropy loss in code form.
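The snippet below is a minimal NumPy sketch (the function and variable names are illustrative): `Y` is the matrix of one-hot encoded labels, with one row per example, and `Y_hat` is the matrix of the classifier’s predicted probabilities.

```python
import numpy as np

def cross_entropy_loss(Y, Y_hat):
    """Average cross-entropy between one-hot labels Y and predictions Y_hat."""
    m = Y.shape[0]                          # number of examples
    return -np.sum(Y * np.log(Y_hat)) / m   # -1/m * sum(y_i * log(y_hat_i))

# A small made-up example: 2 examples, 3 classes.
Y = np.array([[1, 0, 0],
              [0, 1, 0]])
Y_hat = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
print(cross_entropy_loss(Y, Y_hat))  # ≈ 0.29
```

Because the labels are one-hot encoded, each row of `Y * np.log(Y_hat)` keeps only the log of the probability assigned to the correct class, so the loss shrinks as those probabilities approach 1.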
