Having constructed forecasting models such as ARIMA and SARIMA in previous chapters, you now need to assess their performance. Building a model is only part of the process; we need objective methods to determine how accurately it predicts future values and how it stacks up against alternatives.
In this chapter, you will learn techniques for effective model evaluation specific to time series data. We will start with how to properly split your data into training and testing sets, respecting the temporal order so the model never looks into the future. You will then learn to compute and interpret common forecast accuracy metrics, including the mean absolute error (MAE), the mean squared error (MSE), the root mean squared error (RMSE), and the mean absolute percentage error (MAPE).
For example, the MAE gives the average absolute difference between predicted and actual values: $\text{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\text{Actual}_i - \text{Forecast}_i\right|$
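As a preview of the hands-on material later in the chapter, here is a minimal sketch of a chronological train-test split and the four accuracy metrics. It assumes pandas and NumPy are available and uses a hypothetical monthly series with a naive forecast standing in for a fitted model:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly series used only for illustration.
rng = pd.date_range("2020-01-01", periods=48, freq="MS")
series = pd.Series(
    100 + np.arange(48) + np.random.default_rng(0).normal(0, 5, 48),
    index=rng,
)

# Chronological split: the test set is always the most recent observations.
train, test = series.iloc[:-12], series.iloc[-12:]

# A naive forecast (repeat the last training value) stands in for a fitted model.
forecast = pd.Series(np.repeat(train.iloc[-1], len(test)), index=test.index)

errors = test - forecast
mae = np.mean(np.abs(errors))                 # mean absolute error
mse = np.mean(errors ** 2)                    # mean squared error
rmse = np.sqrt(mse)                           # root mean squared error
mape = np.mean(np.abs(errors / test)) * 100   # mean absolute percentage error, in %

print(f"MAE={mae:.2f}  MSE={mse:.2f}  RMSE={rmse:.2f}  MAPE={mape:.2f}%")
```

Note that the split is never shuffled: shuffling would leak future observations into the training set and inflate the apparent accuracy.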
We will also examine information criteria like the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC), which assist in model selection by balancing model fit with complexity. By the end of this chapter, you will be able to apply these metrics and criteria to compare different forecasting models and visualize their performance against actual data.
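The sketch below shows how this comparison typically looks in code. It assumes the statsmodels library is installed and again uses a hypothetical monthly training series; the fitted ARIMA results expose AIC and BIC directly:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical monthly training series, used only for illustration.
rng = pd.date_range("2020-01-01", periods=36, freq="MS")
train = pd.Series(
    100 + np.arange(36) + np.random.default_rng(1).normal(0, 5, 36),
    index=rng,
)

# Fit two candidate specifications and compare their information criteria.
# Lower AIC/BIC values indicate a better balance of fit and complexity.
for order in [(1, 1, 1), (2, 1, 2)]:
    fitted = ARIMA(train, order=order).fit()
    print(f"ARIMA{order}: AIC={fitted.aic:.1f}  BIC={fitted.bic:.1f}")
```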
6.1 Need for Model Evaluation
6.2 Train-Test Split for Time Series
6.3 Common Evaluation Metrics (MAE, MSE, RMSE, MAPE)
6.4 Information Criteria (AIC, BIC)
6.5 Comparing Forecasts from Different Models
6.6 Visualizing Forecast Performance
6.7 Hands-on Practice: Evaluating Forecasts