Loss Functions

Learn the basic concepts of loss functions.

The loss function evaluates how well our algorithm models our dataset. It calculates the difference between the predicted output and the ground truth of a single training example. It’s also known as the error function.

It represents how far off the predicted output is from the expected output. We can think of the loss function as the penalty for failing to get the expected output.

Most optimization problems use loss functions. The main goal is to minimize the loss, which generally improves the model’s accuracy. The lower the loss, the better our model is.
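As a quick illustration, here is a minimal sketch in plain Python, assuming absolute error as the loss: the penalty on a single training example grows as the prediction drifts further from the ground truth.

```python
def loss(predicted, actual):
    """Absolute error: the penalty for missing the ground truth."""
    return abs(predicted - actual)

print(f"{loss(1995.80, 1998.30):.2f}")  # 2.50 -> closer prediction, smaller penalty
print(f"{loss(2003.12, 1998.30):.2f}")  # 4.82 -> further prediction, larger penalty
```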

Given the following synthetic data, we can calculate the total loss for metal price predictions ($/troy ounce) by:

  • Subtracting each prediction from its actual value.
  • Taking the absolute value of the difference and summing across metals (a Python sketch after the table reproduces this arithmetic).
Metals      Predictions    Actual value    Total loss
Gold        1995.80        1998.30         3.52 (2.50 for Gold, 1.02 for Platinum)
Platinum    998.32         999.34
Gold        2003.12        1998.30         4.82 (4.82 for Gold, 0 for Platinum)
Platinum    999.34         999.34
Gold        1998.05        1998.30         3.78 (0.25 for Gold, 3.53 for Platinum)
Platinum    995.81         999.34

Note: This loss function is direction-agnostic: because we take the absolute value, overestimates and underestimates are penalized equally.
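The arithmetic in the table can be reproduced with a short sketch in plain Python; the prediction sets and actual values are the synthetic figures from the table above.

```python
# Reproduce the table: per-metal absolute error, summed into a total loss.
actual = {"Gold": 1998.30, "Platinum": 999.34}

prediction_sets = [
    {"Gold": 1995.80, "Platinum": 998.32},
    {"Gold": 2003.12, "Platinum": 999.34},
    {"Gold": 1998.05, "Platinum": 995.81},
]

for preds in prediction_sets:
    losses = {m: abs(preds[m] - actual[m]) for m in preds}
    detail = ", ".join(f"{v:.2f} for {m}" for m, v in losses.items())
    print(f"Total loss: {sum(losses.values()):.2f} ({detail})")
```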

The PyTorch Image Models library (timm) provides the following standard loss functions:

  • LabelSmoothingCrossEntropy
  • SoftTargetCrossEntropy
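As a minimal sketch (assuming the timm package is installed alongside torch; the batch size, class count, and values here are arbitrary), both losses can be instantiated and applied to a batch of logits:

```python
import torch
from timm.loss import LabelSmoothingCrossEntropy, SoftTargetCrossEntropy

logits = torch.randn(4, 10)  # batch of 4 examples, 10 classes

# LabelSmoothingCrossEntropy takes integer class indices as targets.
hard_targets = torch.randint(0, 10, (4,))
smooth_loss = LabelSmoothingCrossEntropy(smoothing=0.1)
print(smooth_loss(logits, hard_targets))

# SoftTargetCrossEntropy takes a full probability distribution per example.
soft_targets = torch.softmax(torch.randn(4, 10), dim=-1)
soft_loss = SoftTargetCrossEntropy()
print(soft_loss(logits, soft_targets))
```

SoftTargetCrossEntropy expects a target distribution rather than a class index, which is why timm typically pairs it with augmentations such as mixup that produce soft targets.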

The LabelSmoothingCrossEntropy function

The LabelSmoothingCrossEntropy function works the same way as the negative log-likelihood loss (NLL loss), with an extra smoothing argument that moves a small amount of probability mass from the target class to the remaining classes. The formula for NLL loss is:

L(y) = -log(y)
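To make the formula concrete, here is a small worked example in plain Python, reading y as the predicted probability assigned to the correct class:

```python
import math

# L(y) = -log(y): the loss for the predicted probability y of the correct class.
for y in (0.99, 0.5, 0.1, 0.01):
    print(f"y = {y:<4}: loss = {-math.log(y):.4f}")
# y = 0.99: loss = 0.0101
# y = 0.5 : loss = 0.6931
# y = 0.1 : loss = 2.3026
# y = 0.01: loss = 4.6052
```

As y approaches 1, the loss approaches 0; as y approaches 0, the loss grows without bound. This is the range the graph below illustrates.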

The following graph shows the range of negative log-likelihood:
