
Can an Excel trendline calculate MAD or MAPE?

Some authors (e.g., Zellner, 1986) argue that the criterion by which we evaluate forecasts should correspond to the criterion by which the forecasts were optimized. The advantages of using MAE instead of MSE are explained in Davydenko and Fildes (2016); see Section 3.1.
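For reference, here is a minimal Python sketch of the three error measures discussed below (MAE/MAD, MSE, MAPE). The data values are made up purely for illustration.

```python
import numpy as np

def mae(actual, forecast):
    """Mean Absolute Error (often called MAD in the forecasting literature)."""
    return np.mean(np.abs(actual - forecast))

def mse(actual, forecast):
    """Mean Squared Error."""
    return np.mean((actual - forecast) ** 2)

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent (undefined if any actual is zero)."""
    return np.mean(np.abs((actual - forecast) / actual)) * 100

# Illustrative numbers only
actual = np.array([112.0, 118.0, 132.0, 129.0])
forecast = np.array([110.0, 120.0, 128.0, 133.0])

print(mae(actual, forecast), mse(actual, forecast), mape(actual, forecast))
```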
To decide which point forecast error measure to use, we need to take a step back. Note that we don't know the future outcome perfectly, nor will we ever. So the future outcome follows a probability distribution. Some forecasting methods explicitly output such a full distribution, and some don't - but it is always there, if only implicitly.

Now, we want to have a good error measure for a point forecast. Such a point forecast $F_t$ is our attempt to summarize what we know about the future distribution (i.e., the predictive distribution) at time $t$ using a single number, a so-called functional of the future density. The error measure then is a way to assess the quality of this single number summary. So you should choose an error measure that rewards "good" one-number summaries of (unknown, possibly forecasted, but possibly only implicit) future densities.

The challenge is that different error measures are minimized by different functionals. The expected MSE is minimized by the expected value of the future distribution. The expected MAD (or MAE) is minimized by the median of the future distribution. Thus, if you calibrate your forecasts to minimize the MAE, your point forecast will be the future median, not the future expected value, and your forecasts will be biased if your future distribution is not symmetric. This is most relevant for count data, which are typically skewed. In extreme cases (say, Poisson distributed sales with a mean below $\log 2\approx 0.69$), your MAE will be lowest for a flat zero forecast.

I give some more information and an illustration in "What are the shortcomings of the Mean Absolute Percentage Error (MAPE)?" That thread considers the MAPE, but also other error measures, and it contains links to other related threads. In the end, which error measure to use really depends on your Cost of Forecast Error, i.e., which kind of error is most painful. Without looking at the actual implications of forecast errors, any discussion about "better criteria" is basically meaningless.

Measures of forecast accuracy were a big topic in the forecasting community some years back, and they still pop up now and then. One very good article to look at is Hyndman & Koehler, "Another look at measures of forecast accuracy" (2006). Finally, one alternative is to calculate full predictive densities and assess these using proper scoring rules.
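To make the point about functionals concrete, here is a small simulation sketch. It assumes Poisson-distributed sales with a mean of 0.5 (a value I picked because it lies below $\log 2$): the flat zero forecast wins on MAE, while the mean forecast wins on MSE.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical intermittent demand: Poisson with mean 0.5 < log(2) ~ 0.69,
# so the median of the demand distribution is 0.
demand = rng.poisson(lam=0.5, size=100_000)

for label, point_forecast in [("flat zero (median)", 0.0), ("mean forecast", 0.5)]:
    mae = np.mean(np.abs(demand - point_forecast))
    mse = np.mean((demand - point_forecast) ** 2)
    print(f"{label:20s}  MAE={mae:.3f}  MSE={mse:.3f}")

# Expected pattern: the zero forecast has the lower MAE,
# while the mean forecast (0.5) has the lower MSE.
```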

Forecasting functions in Excel

The FORECAST.LINEAR function predicts a future value along a linear trend. Explanation: when we drag the FORECAST.LINEAR function down, the absolute references ($B$2:$B$11 and $A$2:$A$11) stay the same, while the relative reference (A12) changes to A13 and A14. Enter the value 89 into cell C11, select the range A1:C14 and insert a scatter plot with straight lines and markers. Note: when you add a trendline to an Excel chart, Excel can display the equation in the chart. This equation predicts the same future values.

The FORECAST.ETS function in Excel 2016 or later can detect a seasonal pattern.
1. The FORECAST.ETS function predicts a future value using Exponential Triple Smoothing (ETS).
2. The fourth argument indicates the length of the seasonal pattern. The default value of 1 indicates that seasonality is detected automatically.
3. Enter the value 49 into cell C13, select the range A1:C17 and insert a scatter plot with straight lines and markers.
You can use the FORECAST.ETS.SEASONALITY function to find the length of the seasonal pattern. After seeing the chart, you probably already know the answer. Conclusion: in this example, when using the FORECAST.ETS function, you can also use the value 4 for the fourth argument.
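If you want to reproduce these two kinds of Excel forecasts outside a spreadsheet, here is a rough Python sketch: a least-squares line as a stand-in for FORECAST.LINEAR (or the chart trendline), and statsmodels' Holt-Winters exponential smoothing with a seasonal period of 4 as an approximation of FORECAST.ETS. Excel's AAA ETS algorithm differs in detail, and the series below is made up.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Made-up quarterly series with an upward trend and a period-4 seasonal pattern.
y = np.array([20, 35, 28, 24, 26, 42, 33, 29, 31, 48, 39, 34], dtype=float)
t = np.arange(len(y))

# 1) Linear-trend forecast, analogous to FORECAST.LINEAR / a linear trendline.
slope, intercept = np.polyfit(t, y, deg=1)
t_future = np.arange(len(y), len(y) + 4)
linear_forecast = intercept + slope * t_future

# 2) Additive seasonal exponential smoothing, roughly analogous to
#    FORECAST.ETS with a seasonal length of 4 as the fourth argument.
ets_fit = ExponentialSmoothing(y, trend="add", seasonal="add",
                               seasonal_periods=4).fit()
ets_forecast = ets_fit.forecast(4)

print("linear:", np.round(linear_forecast, 1))
print("ets:   ", np.round(ets_forecast, 1))
```

The linear fit ignores seasonality entirely, while the seasonal smoothing model carries the period-4 pattern into the forecast, which mirrors the difference you see between the trendline equation and FORECAST.ETS in the charts above.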
