Evaluation Metrics
Explore how to evaluate time series forecast models using key metrics like mean absolute error, mean squared error, mean absolute percentage error, and R-squared. Understand their calculation and interpretation, and how to implement them in Python for better model assessment.
Overview
Although looking at actual vs. predicted graphs can give us a general idea of whether models work well, that's not enough to properly evaluate and compare models. To do that, we need some sort of objective metric that allows us to quantify the quality of the forecasts. That is where evaluation metrics can help us.
These metrics usually measure how far our predictions are from the actual values at any given moment in time. The bigger that distance, the greater the error.
Let's now look at some of the main metrics used for time series forecasting.
Mean absolute error
The mean absolute error (MAE) calculates the absolute value of the error for each prediction and then averages them out:

MAE = (1/n) * Σᵢ |yᵢ - ŷᵢ|

Where yᵢ is the actual value at time step i, ŷᵢ is the corresponding predicted value, and n is the number of observations.
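As a quick sketch of the calculation, here is a minimal NumPy implementation (the array values below are made up for illustration; the article's own code setup isn't shown in this excerpt):

```python
import numpy as np

# Toy series: actual observations and model forecasts (illustrative values).
actual = np.array([112.0, 118.0, 132.0, 129.0, 121.0])
predicted = np.array([110.0, 120.0, 128.0, 133.0, 119.0])

def mean_absolute_error(y_true, y_pred):
    """Average of the absolute differences between actuals and forecasts."""
    return np.mean(np.abs(y_true - y_pred))

mae = mean_absolute_error(actual, predicted)
print(mae)  # absolute errors are 2, 2, 4, 4, 2 -> MAE = 2.8
```

Because every error contributes its absolute value, the MAE is expressed in the same units as the series itself, which makes it easy to interpret.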