QUESTION:

What measures does AUTOBOX provide for actually measuring forecast performance?

ANSWER:

AUTOBOX provides a comprehensive summary of forecasting performance and allows forecast errors to be tracked both by origin and by lead time. In a sentence: one 12-period forecast error is not the same as twelve one-period forecast errors. Kind of obvious, but most people never think of tracking performance from different origins.

 

ACTUAL DATA & FOUR-PERIOD-OUT FORECASTS FROM SIX ORIGINS (LEAD TIME = 4; ORIGINS = 6)

ORIGIN\DATE    1984/3   1984/4   1984/5   1984/6   1984/7   1984/8
ACTUAL            397      378      472      370      395      427

1984/2            308      328      399      355
1984/3                     347      472      387      431
1984/4                              396      404      426      439
1984/5                                       421      436      441
1984/6                                                444      448
1984/7                                                         444

Note: We have 6 estimates of a one-period forecast error

Note: We have 5 estimates of a two-period forecast error

Note: We have 4 estimates of a three-period forecast error

Note: We have 3 estimates of a four-period forecast error
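
To make the accounting concrete, here is a minimal Python sketch (not AUTOBOX code; the numbers are transcribed from the table above) that rebuilds the origin-by-lead error matrix and confirms the counts in the notes:

```python
# Actuals and forecasts transcribed from the table above.
actuals = {"1984/3": 397, "1984/4": 378, "1984/5": 472,
           "1984/6": 370, "1984/7": 395, "1984/8": 427}
dates = list(actuals)

# origin -> its forecasts, in lead-time order, for as many dates as the
# actuals cover
forecasts = {
    "1984/2": [308, 328, 399, 355],
    "1984/3": [347, 472, 387, 431],
    "1984/4": [396, 404, 426, 439],
    "1984/5": [421, 436, 441],
    "1984/6": [444, 448],
    "1984/7": [444],
}

# errors_by_lead[k] holds every available error (actual - forecast) at lead k+1
errors_by_lead = [[] for _ in range(4)]
for i, fcsts in enumerate(forecasts.values()):
    for lead, f in enumerate(fcsts, start=1):
        target = dates[i + lead - 1]        # the date this forecast points at
        errors_by_lead[lead - 1].append(actuals[target] - f)

for lead, errs in enumerate(errors_by_lead, start=1):
    print(f"lead {lead}: {len(errs)} error estimates -> {errs}")
# -> 6, 5, 4, and 3 estimates for leads 1 through 4, matching the notes above.
```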

 

Measures To Assess Forecast Model Performance

The above table contains all the raw data necessary to assess how predictable the future is at alternative lead times. The point is simple but profound: forecast errors generated from a single launch point are correlated, so forecast error analysis should use a number of different origins and a number of lead times. To reiterate: a set of historical values is used to perform some modeling activity, automatic or not, in order to arrive at a model and a set of coefficients. These are then fixed, and each of the withheld observations is used in turn as the launch point for a new set of forecasts. With this approach the withheld observations do not fully participate in the modeling process; they affect the forecasts but not the model form or its parameters. A sketch of this procedure follows below.
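
As a purely illustrative sketch of that procedure (AUTOBOX's model identification is far more elaborate; the AR(1) model, the hypothetical series, and the six withheld points below are all assumptions made for the example), the coefficients are estimated once on the fit sample, frozen, and then each withheld observation serves as a new launch point:

```python
import numpy as np

def fit_ar1(y):
    """Least-squares AR(1): (y[t] - mu) = phi * (y[t-1] - mu) + noise."""
    mu = y.mean()
    z = y - mu
    phi = (z[1:] @ z[:-1]) / (z[:-1] @ z[:-1])
    return mu, phi

def forecast(history, mu, phi, leads=4):
    """Recursive forecasts from the last observed value, coefficients FIXED."""
    f, last = [], history[-1]
    for _ in range(leads):
        last = mu + phi * (last - mu)
        f.append(round(last, 1))
    return f

# Hypothetical series standing in for real data; the last 6 points are withheld.
rng = np.random.default_rng(0)
series = 400 + 30 * np.sin(np.arange(60) / 3.0) + rng.normal(0, 10, 60)
n_withheld = 6

mu, phi = fit_ar1(series[:-n_withheld])   # model built ONCE, then frozen

# Each withheld observation becomes the launch point of a fresh forecast set;
# it moves the forecasts but never the model form or its parameters.
for k in range(1, n_withheld + 1):
    origin = len(series) - n_withheld + k
    print(f"origin {origin}:", forecast(series[:origin], mu, phi))
```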

 

Typical Output Tables From AUTOBOX

 

VALUES ARE IN TERMS OF THE ORIGINAL METRIC

Number of Actuals                              3
Forecast Mean Deviation (Bias)           469.327
Forecast Mean Percent Error              1.20294
Forecast Mean Absolute Deviation         1601.47
Forecast Mean Absolute % Error           4.26474
Forecast Variance (Precision)            .39E+07
Forecast Bias Squared (Reliability)       220268
Forecast Mean Square Error (Accuracy)    .41E+07
Relative Absolute Error                    .3969

 

Typical Output Tables From AUTOBOX (MORE)

 

   

Lead Time   MEAN DEVIATION (BIAS)   MEAN % ERROR   MEAN ABSOLUTE DEVIATION   MEAN ABSOLUTE % ERROR
1           .18E+04                 4.49           .41E+04                   10.75
2           .22E+04                 5.49           .36E+04                   12.35
3           .38E+04                 5.49           .48E+04                   15.75
4           .15E+04                 2.29           .21E+04                    5.75

 

Typical Output Tables From AUTOBOX (MORE)

 

   

Lead Time   VARIANCE (PRECISION)   BIAS SQUARED (RELIABILITY)   MEAN SQUARE ERROR (ACCURACY)   RELATIVE ABSOLUTE ERROR
1           .23E+08                .32E+07                      .26E+08                        .54
2           .24E+08                .36E+07                      .24E+08                        .62
3           .24E+08                .56E+07                      .34E+08                        .39
4           .14E+08                .56E+07                      .44E+08                        .34

 


 

ACCURACY = PRECISION + RELIABILITY

We now define each of these terms so that you know exactly how they were computed.

Notation: R = A - F, where A is the actual and F is the forecast; N denotes the naive (random-walk) forecast.

a) Forecast Mean Deviation (Bias)

The simple average of the errors, where each error (the bias) is the actual less the forecast.
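
In symbols, over n forecast errors (a direct transcription of the definition above):

```latex
\text{Bias} \;=\; \bar{R} \;=\; \frac{1}{n}\sum_{t=1}^{n} R_t
```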

b) Forecast Mean Percent Error

Expressing each error as a percentage of the corresponding actual gives the percent error; averaging these percentages gives the mean percent error.
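
That is, expressed in percent:

```latex
\text{MPE} \;=\; \frac{100}{n}\sum_{t=1}^{n} \frac{R_t}{A_t}
```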

c) Forecast Mean Absolute Deviation

Individual errors can cancel or offset one another. This statistic removes that flaw by averaging the absolute values of the errors, so that no cancellation can occur.
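
In symbols:

```latex
\text{MAD} \;=\; \frac{1}{n}\sum_{t=1}^{n} \left| R_t \right|
```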

d) Forecast Mean Absolute % Error

If we take the absolute magnitude of each percent error and average, we obtain the mean absolute percent error.
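
In symbols:

```latex
\text{MAPE} \;=\; \frac{100}{n}\sum_{t=1}^{n} \left| \frac{R_t}{A_t} \right|
```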

e) Forecast Variance

The squared deviations of the errors around their average are summed and averaged. This is often called Precision.
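
In symbols (the divisor n is an assumption here; a sample variance with n - 1 would differ slightly):

```latex
\text{Variance} \;=\; \frac{1}{n}\sum_{t=1}^{n}\left( R_t - \bar{R} \right)^2
```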

f) Forecast Bias Squared

The overall average error is squared to compute this statistic. This is often called Reliability.
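
In symbols:

```latex
\text{Bias}^2 \;=\; \bar{R}^{\,2}
```

As a check against the first output table above: 469.327 squared is approximately 220268, the Bias Squared entry.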

g) Forecast Mean Square Error

The squared errors are summed and averaged; the result is often called Accuracy.
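
In symbols, with the algebraic identity that ties the three squared measures together:

```latex
\text{MSE} \;=\; \frac{1}{n}\sum_{t=1}^{n} R_t^2 \;=\; \text{Variance} + \text{Bias}^2
```

This is precisely the statement ACCURACY = PRECISION + RELIABILITY; in the first output table, .39E+07 + 220268 is approximately .41E+07.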

h) Relative Absolute Error

Performance vis-à-vis a random-walk prediction is often a useful measure. Here we sum the absolute errors from the model and divide by the sum of the absolute errors from the random-walk (naive) model.
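
In symbols, with F the model forecast and N the naive forecast:

```latex
\text{RAE} \;=\; \frac{\sum_{t}\left| A_t - F_t \right|}{\sum_{t}\left| A_t - N_t \right|}
```

A value below 1, such as the .3969 in the table above, means the model beats the random walk.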

ACCURACY = PRECISION + RELIABILITY
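
Putting the definitions together, here is a small self-contained sketch (again, not AUTOBOX code; the actual/forecast pairs are the three lead-4 pairs from the table at the top, and the naive-forecast alignment is one simple convention) that computes all eight measures and verifies the identity numerically:

```python
import numpy as np

def forecast_measures(actual, forecast):
    """The eight summary measures, computed exactly as defined above.
    R = A - F; the naive forecast of each actual is the previous actual
    (one simple convention -- AUTOBOX's exact alignment may differ)."""
    A = np.asarray(actual, dtype=float)
    F = np.asarray(forecast, dtype=float)
    R = A - F
    naive_err = A[1:] - A[:-1]                 # random-walk forecast errors

    bias     = R.mean()                        # a) Mean Deviation (Bias)
    mpe      = 100 * (R / A).mean()            # b) Mean Percent Error
    mad      = np.abs(R).mean()                # c) Mean Absolute Deviation
    mape     = 100 * np.abs(R / A).mean()      # d) Mean Absolute % Error
    variance = ((R - bias) ** 2).mean()        # e) Variance (Precision)
    bias_sq  = bias ** 2                       # f) Bias Squared (Reliability)
    mse      = (R ** 2).mean()                 # g) Mean Square Error (Accuracy)
    # h) Relative Absolute Error: drop the first period, where no naive
    # forecast exists, so the two sums cover the same dates.
    rae      = np.abs(R[1:]).sum() / np.abs(naive_err).sum()

    assert np.isclose(mse, variance + bias_sq) # ACCURACY = PRECISION + RELIABILITY
    return dict(bias=bias, mpe=mpe, mad=mad, mape=mape,
                variance=variance, bias_sq=bias_sq, mse=mse, rae=rae)

# The three lead-4 pairs from the table at the top of this answer:
print(forecast_measures(actual=[370, 395, 427], forecast=[355, 431, 439]))
```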