The L2-norm loss function, also known as the least squares error (LSE), is used to minimize the sum of the squares of the differences between the target value, y_i, and the estimated value, f(x_i).
The mathematical representation of the L2-norm is:

S = \sum_{i=1}^{n} (y_i - f(x_i))^2
As an error function, the L2-norm is less robust to outliers than the L1-norm: an outlier inflates the error to a much larger number because the difference between the actual and predicted values gets squared.
However, the L2-norm always provides one stable solution (unlike the L1-norm).
The L1-norm loss function is known as the least absolute error (LAE) and is used to minimize the sum of the absolute differences between the target value, y_i, and the estimated value, f(x_i):

S = \sum_{i=1}^{n} |y_i - f(x_i)|
The code to implement the L2-norm is given below:
import numpy as np

actual_value = np.array([1, 2, 3])
predicted_value = np.array([1.1, 2.1, 5])

# take the square of the differences and sum them
l2 = np.sum(np.power((actual_value - predicted_value), 2))
print(l2)
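For comparison, the L1-norm (LAE) can be computed the same way, replacing the squared differences with absolute differences. This is a minimal sketch reusing the same example arrays as the L2 code above:

```python
import numpy as np

actual_value = np.array([1, 2, 3])
predicted_value = np.array([1.1, 2.1, 5])

# take the absolute value of the differences and sum them
l1 = np.sum(np.abs(actual_value - predicted_value))
print(l1)
```

Note how the outlier at the last position (3 vs. 5) contributes only 2 to the L1 error, but 4 to the L2 error, illustrating why the L2-norm is less robust to outliers.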