Matching Theoretical Autocovariance to Actual Autocovariance

The variance (autocovariance) of a stationary time series can be decomposed into three distinct components:

(1) the part due to the innovation variance of the series (random shocks). Innovations, sometimes called "shocks," are the part of a series that is "new," i.e. uncorrelated with the past. A large innovation variance means that shocks to the series are large.

(2) the part due to the pattern of autocorrelation in the series. Strong autocorrelation means that shocks are relatively persistent.

(3) the part due to outliers (deterministic shocks), i.e. pulses, seasonal pulses, level shifts, or local time trends.

All three components can produce a large variance.

In terms of what is predictable (i.e. part of the signal) and what is not:

Predictable:

(A) That part due to the pattern of autocorrelation in the series.

(B) The continuance of seasonal pulses, level shifts and local time trends

Not Predictable:

(C) That part which is due to the innovation variance of the series (random shock). Innovations, sometimes called "shocks," are the part of a series which is "new" or uncorrelated with the past.

(D) The emergence of deterministic pulses

(E) The discontinuance of seasonal pulses, level shifts, and local time trends

 

In order to identify the recurring pattern in the autocorrelation function (component 2), we will initially assume that no outliers exist (component 3). This assumption will be relaxed when the tentative model's residuals are examined for the presence of these effects. Thus we can focus on the first two components:

(1) the part due to the innovation variance of the series (random shocks).

(2) the part due to the pattern of autocorrelation in the series.

Model identification, i.e. identifying the part due to the pattern of autocorrelation, can be done in two ways:

1. The Time Domain or

2. The Frequency Domain

AUTOBOX uses the time domain, but we include a discussion of both for completeness.

Time Domain

AUTOBOX computes the actual autocovariance from the data and then computes theoretical autocovariances for a number of initial candidate models. Each of these candidates has its own shape, driven in part by the model form and by parameters estimated from the actual data. This joining of theoretical forms to the actual correlative structure is the key to the pattern recognition. After assessing the degree of conformance, a winner is selected and used as the initially identified model. This model is then tested for necessity, sufficiency, outliers, variance change, parameter change, etc. in order to evolve the final model. (A minimal sketch of the matching step follows the note below.)

  1. If the initially identified parameters are not invertible, the candidate is ignored.
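
AUTOBOX's internal procedure is proprietary, but the matching idea can be sketched in a few lines of Python. The sketch below assumes only AR(1) and MA(1) candidates, moment-based parameter estimates, and a sum-of-squared-differences conformance score (all illustrative choices, not AUTOBOX's actual criteria); it also shows the invertibility screen from the note above, since no invertible MA(1) exists when the lag-1 autocorrelation exceeds 0.5 in absolute value.

    import numpy as np
    from statsmodels.tsa.stattools import acovf
    from statsmodels.tsa.arima_process import arma_acovf

    rng = np.random.default_rng(0)
    phi_true, n = 0.7, 500                    # simulate an AR(1) so the winner is known
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi_true * y[t - 1] + rng.standard_normal()

    K = 10                                    # lags to compare
    gamma = acovf(y, nlag=K, fft=True)        # actual autocovariance, lags 0..K
    r1 = gamma[1] / gamma[0]                  # lag-1 autocorrelation

    candidates = {}

    # AR(1) candidate: phi estimated by Yule-Walker (phi = r1).
    phi = r1
    sigma2 = gamma[0] * (1 - phi ** 2)        # implied innovation variance
    candidates["AR(1)"] = arma_acovf([1, -phi], [1], nobs=K + 1, sigma2=sigma2)

    # MA(1) candidate: theta solves r1 = theta / (1 + theta**2). An
    # invertible root exists only if |r1| < 0.5; otherwise the candidate
    # is ignored, as in note 1 above.
    if 0 < abs(r1) < 0.5:
        theta = (1 - np.sqrt(1 - 4 * r1 ** 2)) / (2 * r1)
        sigma2 = gamma[0] / (1 + theta ** 2)
        candidates["MA(1)"] = arma_acovf([1], [1, theta], nobs=K + 1, sigma2=sigma2)

    # Degree of conformance: sum of squared autocovariance differences.
    scores = {name: float(np.sum((g - gamma) ** 2)) for name, g in candidates.items()}
    print("initially identified model:", min(scores, key=scores.get))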

Frequency Domain

We could simulate a particular autoregression, compute the theoretical log spectrum for a known model, and then compare the actual log spectrum to the theoretical candidate in order to assess its reasonableness.
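
Here is a minimal sketch of that comparison, assuming an AR(1) model; the Welch-averaged periodogram and the mean absolute log-gap are illustrative choices, not a specific published procedure.

    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(1)
    phi, sigma2, n = 0.7, 1.0, 2048           # known AR(1) model
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal(scale=np.sqrt(sigma2))

    # Actual spectrum: averaged (smoothed) periodogram, one-sided,
    # frequencies in cycles per sample (0 to 0.5).
    freq, pxx = welch(y, fs=1.0, nperseg=256)
    freq, pxx = freq[1:], pxx[1:]             # drop the zero frequency

    # Theoretical one-sided AR(1) spectrum at the same frequencies:
    # S(f) = 2 * sigma2 / (1 - 2*phi*cos(2*Pi*f) + phi**2)
    s_theory = 2 * sigma2 / (1 - 2 * phi * np.cos(2 * np.pi * freq) + phi ** 2)

    # A small average gap between the log spectra marks a reasonable candidate.
    gap = np.mean(np.abs(np.log(pxx) - np.log(s_theory)))
    print("mean |log actual - log theoretical|:", round(float(gap), 3))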

The integral of the spectrum provides a frequency decomposition of variance. The spectrum can be thought of as measuring the contribution to variance, or "power", from each frequency. Since frequency is inversely related to periodicity (periodicity = 2*Pi/frequency), the low frequency components of variance are associated with long-run changes in the series, and the high frequency components of variance are associated with short-run changes in the series. For example, the highest observable frequency, Pi, corresponds to a periodicity of 2; the middle frequency Pi/2 corresponds to a periodicity of 4; Pi/4 to a periodicity of 8, and so on.
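
A quick arithmetic check of the periodicity = 2*Pi/frequency relation for the frequencies above:

    import numpy as np

    for freq in (np.pi, np.pi / 2, np.pi / 4):
        print(f"frequency {freq:.4f} -> periodicity {2 * np.pi / freq:.0f}")
    # frequency 3.1416 -> periodicity 2
    # frequency 1.5708 -> periodicity 4
    # frequency 0.7854 -> periodicity 8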

By measuring the shape of the spectrum, it is possible to infer the "sign" and strength of the autocorrelation component of variance. A flat spectrum corresponds to no autocorrelation, or white noise. Loosely speaking, a negatively sloped spectrum is "positively autocorrelated," with most of its power in the low frequencies, and a positively sloped spectrum is "negatively autocorrelated," with most of its power in the high frequencies.
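
This can be made concrete with the standard theoretical spectral density of an AR(1) process (an illustration, not anything specific to AUTOBOX): a positive coefficient concentrates power at low frequencies, a negative coefficient at high frequencies.

    import numpy as np

    def ar1_spectrum(w, phi, sigma2=1.0):
        # Theoretical AR(1) spectral density at angular frequency w.
        return sigma2 / (2 * np.pi * (1 - 2 * phi * np.cos(w) + phi ** 2))

    for phi in (0.8, -0.8):
        low, high = ar1_spectrum(0.1, phi), ar1_spectrum(np.pi - 0.1, phi)
        shape = ("negatively sloped: power at low frequencies" if low > high
                 else "positively sloped: power at high frequencies")
        print(f"phi = {phi:+.1f}: S(low) = {low:.3f}, S(high) = {high:.3f} -> {shape}")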

The smoothness coefficient [Froeb & Koyak, 1994] is designed to measure the degree of autocorrelation, or "smoothness," in a series by measuring the shape of the log spectrum, which is a monotonic transform of the spectrum. Any frequency between zero and Pi partitions the spectrum into long-run (low frequency) and short-run (high frequency) components. By measuring the relative size of the two components, we measure the "shape" of the log spectrum and thus the "strength" of the autocorrelation. A sketch of this idea follows the notes below.

  1. The area under the log spectrum is equal to the log of the innovation variance.
  2. The shape of the spectrum is determined by the "sign" and "strength" of the autocorrelation in the series.
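
The exact Froeb & Koyak statistic is not reproduced here, but the idea behind it can be sketched: split the log spectrum at a cutoff frequency and compare the low-frequency and high-frequency areas. The cutoff (Pi/2, i.e. 0.25 cycles per sample) and the simple area difference below are illustrative choices.

    import numpy as np
    from scipy.integrate import trapezoid
    from scipy.signal import welch

    def smoothness(y, cutoff=0.25):
        freq, pxx = welch(y, fs=1.0, nperseg=256)
        freq, logp = freq[1:], np.log(pxx[1:])       # drop the zero frequency
        lo = freq <= cutoff
        low_area = trapezoid(logp[lo], freq[lo])     # long-run component
        high_area = trapezoid(logp[~lo], freq[~lo])  # short-run component
        return low_area - high_area                  # > 0: positive autocorrelation

    rng = np.random.default_rng(2)
    n = 2048
    white = rng.standard_normal(n)                   # flat spectrum
    smooth = np.zeros(n)                             # AR(1) with phi = 0.9
    for t in range(1, n):
        smooth[t] = 0.9 * smooth[t - 1] + rng.standard_normal()

    print("white noise   :", round(smoothness(white), 2))    # near zero
    print("AR(1) phi=0.9 :", round(smoothness(smooth), 2))   # clearly positive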