Q & A

Frequently Asked Statistical Questions

INTERVENTION/OUTLIERS



QUESTION:

Most of your competitors market products that do outlier detection and some sort of adjustment. What is the big deal with AUTOBOX in this regard?

ANSWER:

The answer to this question hinges on how we compute the standard deviation for the purposes of determining statistical significance. Consider that the standard deviation of the series could be used to standardize the series to mean 0 and variance 1. The standard deviation as computed by our competitors, save one, is based upon the sum of squares of the values around the MEAN. The correct formula requires that one compute the sum of squares around the EXPECTED VALUE. The MEAN is the expected value if and only if the series is UNCORRELATED, i.e. independent of previous values. AUTOBOX correctly computes the standard deviation and thus correctly assesses unusual values. Consider the series 1,9,1,9,1,9,9,9,1,9,.... The usual, and in this case incorrect, computation of the standard deviation around the MEAN would cause a failure to detect the anomaly. Replacing the MEAN with the EXPECTED VALUE (here, the value implied by the alternating pattern) results in a much smaller standard deviation, and the anomaly at the 7th value, where a 1 was expected but a 9 was observed, is immediately identified.
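To make the distinction concrete, here is a minimal Python sketch of the idea (not AUTOBOX's actual algorithm), using the alternating series above. Standardizing around the overall MEAN makes the 7th value look perfectly ordinary; standardizing the residuals around a fitted EXPECTED VALUE (a simple lag-one regression, itself still somewhat contaminated by the anomaly) makes the stretch around the 7th value carry the largest standardized residuals.

import numpy as np

y = np.array([1, 9, 1, 9, 1, 9, 9, 9, 1, 9], dtype=float)

# Naive approach: standardize around the overall mean.
z_mean = (y - y.mean()) / y.std(ddof=1)
print(np.round(z_mean, 2))       # nothing exceeds 1.2 in absolute value; the 7th value looks ordinary

# Model-based approach: the series is strongly (negatively) autocorrelated,
# so the expected value of y[t] is a function of y[t-1] (a simple lag-one fit).
x, t = y[:-1], y[1:]
phi = np.polyfit(x - x.mean(), t - t.mean(), 1)[0]     # slope of the lag-one regression
expected = t.mean() + phi * (x - x.mean())
resid = t - expected
z_model = resid / resid.std(ddof=1)
print(np.round(z_model, 2))      # aligned to periods 2..10; the anomalous 9 at period 7
                                 # (and its echo at period 8) now has the largest |z|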

QUESTION:

All right, so your computation of the standard deviation is more correct. Explain why I need AUTOBOX to detect level shifts, seasonal pulses and time trends.

ANSWER:

If one restricts OUTLIER DETECTION to detecting unusual values that are INDEPENDENTLY flagged, one can miss important information. One man's signal is another man's noise is the best way that I can put it. If you have an "UNUSUAL" value every December, this is in fact "USUAL" since it represents a predictable process. Software that INDEPENDENTLY evaluates each and every value based only upon its own characteristics can severely miss the boat in picking up quality information and predictable structure. This is the argument for Seasonal Pulse identification, a unique feature of AUTOBOX. A pulse tests for an instantaneous and immediately disappearing change in the mean of the errors; a seasonal pulse is such a change that recurs at the same period each year. If the mean of the errors changes to another "regime" and stays there, this is called a level or step shift. For example, the series 1,1,1,1,1,2,2,2,2,2,2,2 exhibits a STEP SHIFT, i.e. a change in the mean. AUTOBOX would identify the need for a dummy variable of the form 0,0,0,0,0,1,1,1,1,1,1,1; when estimated, the coefficient of response would be 1 with an intercept of 1. A step or level shift can be seen as the integration of a pulse: the pulse 0,0,0,0,0,1,0,0,... when integrated (cumulatively summed) yields the step 0,0,0,0,0,1,1,1,.... In the same spirit, a trend can be seen as the integration of a step: the step 0,0,0,1,1,1,... integrates to the trend 0,0,0,1,2,3,.... AUTOBOX can then detect the need for a TIME TREND VARIABLE, which of course is nothing more than the integration of a step, or the double integration of a pulse.
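As a concrete illustration of the step-shift example just given (a hedged sketch, not AUTOBOX itself), the small regression below recovers the intercept of 1 and the step coefficient of 1, and the cumulative sums show how a pulse integrates to a step and a step to a trend.

import numpy as np

# Step-shift example from the text: the mean moves from 1 to 2 at period 6.
y = np.array([1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2], dtype=float)
step = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1], dtype=float)

# Least-squares fit of y = intercept + coef * step (a level-shift "dummy" regression).
X = np.column_stack([np.ones_like(step), step])
intercept, coef = np.linalg.lstsq(X, y, rcond=None)[0]
print(intercept, coef)               # -> 1.0 and 1.0, as described in the answer

# Integration (cumulative summation) links the three deterministic forms:
pulse = np.array([0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0])
print(np.cumsum(pulse))              # a pulse integrates to a step
print(np.cumsum(np.cumsum(pulse)))   # a step integrates to a (local) time trend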

QUESTION:

I am intrigued by how AUTOBOX performs intervention detection or, as it is commonly known, outlier detection. It is clear that the correct computation of the standard deviation, using the correct EXPECTED VALUE rather than the simple MEAN, would drive the identification scheme. In fact I probably could have it programmed here at my company, saving the expense of AUTOBOX, but how does it work when it detects SEASONAL PULSES, LEVEL (STEP) SHIFTS and LOCAL TIME TRENDS? On the surface I need to explain this to my boss, but I would also like to know how it's done so that I could program it myself.

ANSWER:

The heart of the matter is the General Linear Model, where Y is the variable to be predicted and the form of the X matrix is to be determined empirically. Consider a case where the noise process is uncorrelated. One could construct simple regression models where the trial X variable was initially 1,0,0,0,0,0,0,...,0,0,0,0 and evaluate the error sum of squares. You could then create yet a second model where the X variable was 0,1,0,0,0,0,0,0,0,0,0,...,0,0,0,0 and evaluate it in terms of the resultant error sum of squares. If you had 100 observations you would then run 100 regressions, and the regression that generated the smallest error sum of squares would then be the MAXIMUM LIKELIHOOD ESTIMATE. This process would be repeated until the most important NEW variable was not statistically significant. This is essentially STEP-WISE forward regression, where the X variable is found by a search method (a small sketch of this search appears after the example below). Two important generalizations are needed before you go running off to become our new competitor: 1. the search for the X variable has to be extended to include SEASONAL PULSES, LEVEL SHIFTS and TIME TRENDS, and 2. the error process may be other than white noise, thus one has to iteratively construct TRANSFER FUNCTIONS rather than multiple regressions. The process gets a little more sticky when you have pre-defined user inputs in the model. Consider the case where

TIME    Y      X (user-defined variable)
1       .1     0
2       .2     0
3       .1     0
4       2.0    1
5       .1     0
6       .3     0
7       2.4    1
8       .2     0
9       .1     0
10      .1     0
11      .2     1

In the case where X is not suggested by the user, there are 2 outliers or 2 unusual values (time periods 4 and 7); thus the model would contain 2 dummy variables. However, in the case where the user knows about a causal variable (the point in time of a promotion, for example), AUTOBOX would detect an outlier at time period 11. This would lead to a totally different set of coefficients and a different comprehension of the nature of the data.
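For the uncorrelated-error case described above, here is a minimal sketch of the forward search on the hypothetical Y column of the table. It is illustrative only: the F-type cut-off is an arbitrary value chosen for this example, not AUTOBOX's actual significance test.

import numpy as np

y = np.array([.1, .2, .1, 2.0, .1, .3, 2.4, .2, .1, .1, .2])
n = len(y)

def sse_with_pulses(y, pulse_times):
    """Error sum of squares from regressing y on an intercept plus one pulse dummy per time point."""
    X = np.ones((len(y), 1))
    for t in pulse_times:
        d = np.zeros(len(y)); d[t] = 1.0
        X = np.column_stack([X, d])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

found, current_sse = [], sse_with_pulses(y, [])
while True:
    # Try every remaining time point as the next pulse and keep the best one.
    trials = {t: sse_with_pulses(y, found + [t]) for t in range(n) if t not in found}
    best_t = min(trials, key=trials.get)
    # Crude F-type check: does the best new pulse reduce the SSE "enough"?
    f = (current_sse - trials[best_t]) / (trials[best_t] / (n - len(found) - 2))
    if f < 10.0:                      # illustrative cut-off, not AUTOBOX's actual test
        break
    found.append(best_t); current_sse = trials[best_t]

print(sorted(t + 1 for t in found))   # -> [4, 7]: pulses at periods 4 and 7, as in the table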

QUESTION:

I have a problem in testing the significance of an event. Through a panel survey conducted on 115 points of sale (POS) I gather weekly sales data for 3 products. A summary table follows, where Xi = weekly total sales (115 POS) per product and Yi = number of POS (counts) which in the corresponding week had the product available on shelf:

            Week 1       Week 2       ...   Week n
            X1     Y1    X2     Y2          Xn     Yn
Product A   136    90    142    86          156    99
Product B   645    76    538    84          552    81
Product C   318    103   346    108         301    92

My questions are: 1. How can I test whether the effect of, e.g., a promotional action caused sales in week 2 for product A (142 US$) to be significantly different from the corresponding level in week 1 (136 US$)? 2. How can I test whether the number of POS which in week 2 had product A in stock (86 POS) is significantly smaller than that in week 1 (90 POS)?

ANSWER:

At one level, your problem is a Transfer Function where you are specifying two causal variables: 1. the Number of Stores and 2. an indicator variable Z (all zeroes save the week in question, which would be indicated by a one). This variable is called an Intervention Variable and is hypothesized in advance. If it were found empirically it would still be called an Intervention Variable, but it would have been detected by Intervention Detection or Outlier Analysis. Briefly, the reason that you have to construct a Transfer Function is that Sales in a particular week may be affected by sales in the previous week or weeks, sales at this point in time a year ago, the number of sites carrying the product last week or the week before that, and/or a recent level shift due to some omitted variable, or even a trend in sales. All of these things, and more, may be operational; thus, to identify and measure the increment or lift due to this unique activity, one has to allow for the other effects. Failing this, one can incorrectly assign significance to what is otherwise caused by lag effects, seasonal processes or local changes in the mean.
A Transfer Function in this setting can be written as:

Y(t) = CONSTANT + c(1) Y(t-1) + ... + c(j) Y(t-j) + d(0) X(t) + d(1) X(t-1) + ... + d(k) X(t-k) + A(t)

where A(t) is a white noise process, X is the input series and Y is the output series. The transfer model is summarized by the lag polynomials (the c and d weights). The problem is to IDENTIFY and ESTIMATE those polynomials.
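What follows is a rough sketch of fitting a model of this form with generic off-the-shelf tools rather than AUTOBOX. The series names (sales, stores, promo) and the simulated numbers are hypothetical stand-ins for your panel data, and a simple AR(1) error term stands in for whatever noise structure identification would actually select; the t-value reported for the promo column is the kind of significance test your first question asks for.

import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical weekly data for one product; replace with the real panel figures.
rng = np.random.default_rng(0)
n = 104
stores = 90 + rng.integers(-10, 11, n).astype(float)   # number of POS stocking the product (X)
promo = np.zeros(n); promo[50] = 1.0                    # intervention dummy Z: promotion in week 51
noise = np.zeros(n)
eps = rng.normal(0, 5, n)
for t in range(1, n):                                   # AR(1) noise, so the errors are not white
    noise[t] = 0.4 * noise[t - 1] + eps[t]
sales = 1.5 * stores + 20 * promo + noise               # output series Y

# Transfer-function-style model: Y on X and Z with an AR(1) noise component.
exog = np.column_stack([stores, promo])
fit = SARIMAX(sales, exog=exog, order=(1, 0, 0), trend="c").fit(disp=False)
print(fit.summary())   # the coefficient and t-value on the promo column measure that week's lift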

 


But before continuing I think that your problem could be slightly restated.

The Series

The data could be laid out as follows, using Y as the variable to be predicted.

 

Time Series Data For Product A

Time Period   # of Stores (X)   Indicator Variable for Special Promotion (Z)   $ Sales (Y)
Week 1        90                0                                              136
Week 2        86                1                                              142
Week n        99                0                                              156

 

Time Series Data For Product B

Time Period   # of Stores (X)   Indicator Variable for Special Promotion (Z)   $ Sales (Y)
Week 1        76                0                                              645
Week 2        84                0                                              538
Week n        81                0                                              552

Construct a Transfer Function between the output (Y) and the two exogenous variables (X and Z), making sure that the error process is correctly modelled, that is, the ARIMA component. Test to see if there are violations of the Gaussian assumptions, viz.:

1. The mean of the errors is zero everywhere. This can be tested via Outlier Detection. Various model augmentations may be required (Pulse, Seasonal Pulse, Level Shift, Time Trend) to reach this objective.

2. The variance of the errors may not be constant; i.e. the variance may be proportional to the level, or it may have had regime changes, where for some period of time the variance may have doubled or whatever.

All of the above is premised on the assumption that the Number of Stores and the Sales for Product 2 have no effect on the Sales of Product 1. If this were not true, or had to be tested, then the required tool would be Vector ARIMA, where in this case there would be two endogenous variables (Y1, Y2) and three exogenous series: X1, X2 and the hypothesized Z1 variable.

Conclusion
----------

To the best of my knowledge AFS is the sole provider of software to deal with either solution. AUTOBOX allows one to identify and model outliers in a Transfer Function; I don't know of any other piece of software that does that. Secondly, MTS, which is a product of AFS, deals with the Vector formulation. Again I believe that it is a unique solution, because the only other Vector ARIMA program requires all variables, both endogenous and exogenous, to be simultaneously predicted; thus there are no purely exogenous variables, and all variables in that system are treated as endogenous.

References:

Box, G.E.P. and Jenkins, G.M. (1976). Time Series Analysis: Forecasting and Control, 2nd ed. San Francisco: Holden-Day.

Reilly, D.P. (1980). "Experiences with an Automatic Box-Jenkins Modeling Algorithm," in Time Series Analysis, ed. O.D. Anderson. Amsterdam: North-Holland, pp. 493-508.

Reilly, D.P. (1987). "Experiences with an Automatic Transfer Function Algorithm," in Computer Science and Statistics: Proceedings of the 19th Symposium on the Interface, ed. R.M. Heiberger. Alexandria, VA: American Statistical Association, pp. 128-135.

Tiao, G.C., and Box, G.E.P. (1981). "Modeling Multiple Time Series with Applications," Journal of the American Statistical Association, Vol. 76, pp. 802-816.

Tsay, R.S. (1986). "Time Series Model Specification in the Presence of Outliers," Journal of the American Statistical Association, Vol. 81, pp. 132-141.


QUESTION:

I want to analyze whether there is a significant trend over time in the annual failure rate of a product. I have 20 years of measurements (i.e., n = 20). As I understand it, an ordinary regression analysis would be inappropriate because the residuals are not independent (i.e., the error associated with a failure rate for 1974 is more highly correlated with the 1975 failure rate than the 1994 failure rate). Is it appropriate to simply divide the data into two groups (the 1st 10 years vs. the 2nd 10 years) and do a between-groups ANOVA? Or is there some other (better) way to analyze these data? Should anyone be so inclined as to do the analysis, here are the data:

 

Year Failure Rate
1974 3.3
1975 2.5
1976 2.7
1977 2.4
1978 5.7
1979 3.2
1980 1.6
1981 5.2
1982 2.8
1983 2.4
1984 2.7
1985 1.3
1986 4.5
1987 4.5
1988 1.4
1989 3.6
1990 1.5
1991 1.4
1992 1.6
1993 1.6

ANSWER:

"The Heart of the Matter" Your question regarding "testing the differences between two means" where the breakpoint , if any , is unknown and to be found is an application of intervention detection. The problem can , and is in your case, compounded by unusal one-time only values. Another major complication , not in evidence in your problem, is the affect of autocorrelated errors and the need to account for that in computing the "t value or F value" for the test statistic. The answer to your question is

1. Don't assume that you need to break the data into two halves or at some other guessed breakpoint. Employ Intervention Detection to search for the breakpoint that provides the greatest distinction between the two resulting groups. It is possible to pre-specify the test point, but it is clear from your question that you were just struggling with where to put it. In some cases where the analyst knows of an event (a law change, a regime change) it is advisable to pre-specify, but most of the time this is a dangerous proposition, as delay or lead effects may be in the data.

2. Adjusting for one-time-only values, analogous to Robust Regression, allows one to develop a Gaussian process free of one-time anomalies and thus more correctly point to a breakpoint. In this case:

OBSERVED VALUE TIME POINT ESTIMATED PULSE VALUE
5.7 1978 3.2
5.2 1981 2.7
4.5 1986 2.0
4.5 1987 2.0

3. After adjusting for these four "unusual values", the autocorrelation of the modified series exhibited the following:

LAG ACF VALUE STD. ERROR T-RATIO CHI-SQUARE PROBABILITY
1 -.362 .224 -1.62 3.03 NA
2 -.020 .251 -.08 3.04 NA
3 .267 .251 1.06 4.88 NA
4 -.329 .265 -1.24 7.86 NA
5 .337 .285 1.18 11.18 NA
6 -.297 .304 -.98 13.95 NA
7 .032 .318 .10 13.99 .0002
8 .217 .318 .68 15.71 .0004

The analysis concludes that there is no evidence of any ARIMA structure. Note that if there were ARIMA structure, that identified model, not some assumed model like Durbin's, would be used to model the noise process for our generalized least squares estimator of the test for a significant difference between the two means of a specified or unspecified group size. Early researchers in the 1950's (Durbin, Watson, Hildreth and Lu, to name a few) had to ASSUME the form of the error structure. More modern approaches skillfully IDENTIFY the form of the error process, leading to correct model specification. Dated procedures should be treated very carefully, as their simplicity can be your demise! One final comment: your question had to do with things changing. It is possible that the mean might have changed, or that the trend might have changed. AUTOBOX speaks to both questions and in this case concluded that the mean had indeed changed.

4. In the final analysis a "t test" is developed for the hypothesis of a difference in mean between:

  1. Group 1: Observations 1-16 (1974-1989)
  2. Group 2: Observations 17-20 (1990-1993)

AUTOBOX estimation output for the level-shift input series (X5, I~AL0017, LEVEL):

Omega (input) - Factor # 5, lag 0: coefficient -.9666675, standard error .316667, t value -3.053

The conclusion, based on a t value of -3.053, is that yes, there is a statistically significant difference between the two failure rates. This conclusion is obvious when one looks at a plot of the series.
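For readers who want to reproduce the flavour of this analysis, here is a rough numpy/scipy sketch: it substitutes the four pulse-adjusted values listed above and then searches every admissible breakpoint for the split with the largest two-sample t statistic. It uses a plain pooled-variance t test rather than AUTOBOX's generalized least squares estimate, so the statistic will not match -3.053 exactly, but it points to the same 1990 break.

import numpy as np
from scipy import stats

years = np.arange(1974, 1994)
rate = np.array([3.3, 2.5, 2.7, 2.4, 5.7, 3.2, 1.6, 5.2, 2.8, 2.4,
                 2.7, 1.3, 4.5, 4.5, 1.4, 3.6, 1.5, 1.4, 1.6, 1.6])

# Replace the four flagged one-time values with the pulse-adjusted estimates listed above.
adjusted = rate.copy()
for yr, est in [(1978, 3.2), (1981, 2.7), (1986, 2.0), (1987, 2.0)]:
    adjusted[years == yr] = est

# Search every admissible breakpoint for the split that best separates the two means.
candidates = range(2, len(adjusted) - 1)
best = max(candidates, key=lambda k: abs(stats.ttest_ind(adjusted[:k], adjusted[k:])[0]))
t, p = stats.ttest_ind(adjusted[:best], adjusted[best:])
print(years[best], round(t, 2), round(p, 4))   # -> break at 1990; |t| near 2.8 with this simple
                                               #    pooled test (AUTOBOX's GLS estimate gave -3.053)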


QUESTION:

Please explain INTERVENTION MODELING when I know a priori the timing and the duration of an event.

ANSWER:

In this section we discuss a class of models and a model development process for time series influenced by identifiable isolated events. More commonly known as intervention analysis, this type of modeling is transfer function modeling with a stochastic output series and a deterministic input variable. The values of the input variable are usually set to either zero or one, indicating off or on. For instance, a time series disturbed by a single event, such as a strike or a price change, could be modeled as a function of its own past values and a "dummy" variable. The dummy input would be represented by a series of zeroes, with a value of one at the time period of the intervention. A model of this form may better represent the time series under study. Intervention modeling can also be useful for "what if" analysis - that is, assessing the effect of possible deterministic changes to a stochastic time series.

There is a major flaw associated with theory-based models. It is called specification bias, and a pre-specified intervention model can suffer from it. Consider the assumption of a level shift variable starting at time period T. The modeler knows that this is the de jure date of the intervention, for example the date that a gun law went into effect. If the true state of nature is that it took a few periods for the existence of the law to affect behavior, then no noticeable effect could be measured during this period of delay. If you split the data into two mutually exclusive groups based upon theory, the test results will be biased towards no difference or no effect. This is because observations that the user is placing in the second group rightfully belong in the first group; the two means are then closer than they otherwise would be, and a false conclusion can arise. This specification error has to be traded off against the potential for finding spurious significance when faced with testing literally thousands and thousands of hypotheses.
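A small made-up illustration of that bias: in the hypothetical series below the mean changes at period 14, three periods after a de jure date of period 11, and a split at the de jure date reports a markedly weaker difference than a split at the true break.

import numpy as np
from scipy import stats

# Hypothetical series: mean about 10.0 for periods 1-13 and about 10.8 from period 14 on,
# even though the "law" nominally took effect at period 11 (a three-period delayed response).
y = np.array([10.2, 9.7, 10.4, 9.8, 10.1, 9.9, 10.3, 9.6, 10.0, 10.2,
              9.9, 10.1, 9.8,                 # periods 11-13: law passed, behaviour unchanged
              10.9, 10.7, 11.0, 10.6, 10.8, 10.9, 10.7])

de_jure_split = 10   # split after period 10, the date the law was passed
true_split = 13      # split after period 13, when behaviour actually changed

for k in (de_jure_split, true_split):
    t, p = stats.ttest_ind(y[:k], y[k:])
    print(k, round(t, 2), round(p, 4))
# The de jure split places three "old regime" periods into the second group, pulling the
# group means together and reporting a much weaker t statistic than the true break does.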


QUESTION:

Give me the formal presentation of the IMPACT of outliers.

ANSWER:

Outliers and structure changes are commonly encountered in time series data analysis. The presence of such extraordinary events can mislead, and has misled, conventional time series analysts, resulting in erroneous conclusions. The impact of these events is often overlooked, however, for lack of a simple yet effective means of incorporating them. Several approaches have been considered in the literature for handling outliers in a time series. We will first illustrate the effect of unknown events which cause simple model identification to go awry. We will then illustrate what to do in the case when one knows a priori the date and nature of the isolated event. We will also point out a major flaw that arises when one assumes an incorrect model specification. Then we introduce the notion of finding the intervention variables through a sequence of alternative regression models, yielding maximum likelihood estimates of both the form and the effect of the isolated event. Standard identification of ARIMA models uses the sample ACF as one of the two vehicles for model identification. The ACF is computed from the covariance and the variance. An outlier distorts both of these and in effect dampens the ACF, because it inflates the variance (the denominator) more than the covariance. Another problem with outliers is that they can distort the sample ACF and PACF by introducing spurious structure or correlations. For example, consider the circumstance where the outlier dampens the ACF:

ACF = COVARIANCE/VARIANCE

Thus the net effect is to conclude that the ACF is flat, and the resulting conclusion is that no information from the past is useful. These are the results of incorrectly using statistics without validating the parametric requirements. It is necessary to check that no isolated event has inflated either of these measures, leading to an "Alice in Wonderland" conclusion. Various researchers have concluded that the history of stock market prices is information-less. Perhaps the conclusion should have been that the analysts were statistic-less. Another way to understand this is to derive the estimator of the coefficient from a simple model and to evaluate the effect of a distortion. Consider the true model as an AR(1) with the following familiar form:

[ 1 - PHI1 B ] Y(t) = A(t)

or equivalently

Y(t) = A(t) / [ 1 - PHI1 B ]   i.e.   Y(t) = PHI1 Y(t-1) + A(t)

The variance of Y can be derived as:

variance(Y) = PHI1*PHI1 variance(Y) + variance(A)

thus

PHI1 = SQRT( 1 - variance(A)/variance(Y) )

 

Now if the true state of nature is where an intervention of form I(t) occurs at time period t with a magnitude of W we have:

Y(t) = { A(t) / [ 1 - PHI1 B ] } + W I(t)

with

variance(Y) = [ PHI1*PHI1 variance(Y) + variance(A) ] + [W I(t)]*[W I(t)]
            = true variance(Y) + distortion

thus

PHI1 = SQRT( 1 - [ variance(A) + [W I(t)]*[W I(t)] ] / variance(Y) )

 

The inaccuracy or bias due to the intervention is not predictable, due to the complex nature of the relationship. At one extreme, the addition of the squared bias to variance(A) increases the numerator and drives the ratio toward 1 and the estimate of PHI1 toward zero. The rate at which this happens depends on the relative size of the variances and on the magnitude and duration of the isolated event. Thus the presence of an outlier can hide the true model. Now consider another option, where variance(Y) is large relative to variance(A). The effect of the bias is then to drive the ratio toward zero and the estimate of PHI1 toward unity. A shift in the mean, for example, generates an ACF that dies out very slowly, if at all, thus leading to a misidentified first-difference model. In conclusion, the effects of the outlier depend on the true state of nature: an isolated event can both incorrectly hide the true model form and incorrectly generate evidence of a bogus model.
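The argument can be seen numerically with a small simulation (illustrative only): starting from a clean AR(1) series with PHI1 = 0.6, a single large additive pulse drags the lag-one sample autocorrelation down toward zero, while an unmodeled shift in the mean drags it up toward one.

import numpy as np

rng = np.random.default_rng(1)
n, phi = 300, 0.6

# Simulate a clean AR(1): Y(t) = PHI1 * Y(t-1) + A(t)
a = rng.normal(0, 1, n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + a[t]

def lag1_acf(x):
    """Sample lag-one autocorrelation: lag-one covariance divided by the variance."""
    d = x - x.mean()
    return float(np.dot(d[:-1], d[1:]) / np.dot(d, d))

pulse = y.copy(); pulse[150] += 20        # one large additive outlier (W * I(t))
shift = y.copy(); shift[150:] += 5        # an unmodeled shift in the mean

print(round(lag1_acf(y), 2))       # close to the true PHI1 of 0.6
print(round(lag1_acf(pulse), 2))   # dragged toward 0: the outlier inflates the variance
print(round(lag1_acf(shift), 2))   # dragged toward 1: looks as if the series needs differencing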


QUESTION:

Please explain INTERVENTION MODELING/DETECTION when I don't know a priori the timing and the duration of an event. Be as formal as you can.

ANSWER:

 

Early work restricted outlier detection to the identification of three types of outliers in a time series. These outliers were represented as intervention variables of the forms pulse, level shift and seasonal pulse. The procedure for detecting the outlier variables is as follows. Develop the appropriate ARIMA model for the series. Test the hypothesis that there is an outlier via a series of regressions at each time period. Modify the residuals for any potential outlier and repeat the search until all possible outliers are discovered. These outliers can then be included as intervention variables in a multiple-input B-J model. The noise model can be identified from the original series modified for the outliers. AFS has extended outlier detection to detecting the presence of local time trends.

This option to the program provides a more complete method for the development of a model to forecast a univariate time series. The basic premise is that a univariate time series may not be homogeneous and, therefore, the modeling procedure should account for this. By homogeneous, we mean that the underlying noise process of a univariate time series is random about a constant mean. If a series is not homogeneous, then the process driving the series has undergone a change in structure and an ARIMA model is not sufficient. The AUTOBOX heuristic that is in place checks the series for homogeneity and modifies the model if it finds any such changes in structure. The point is that the mean of the residuals must be close enough to zero that it can be assumed to be zero for all intents and purposes. That requirement is necessary, but it is not sufficient: the mean of the errors (residuals) must be near zero for all time slices or sections. This is a more stringent requirement for model adequacy and is at the heart of intervention detection. Note that some inferior forecasting programs use standardized residuals as the vehicle for identifying outliers. This is inadequate when the ARIMA model is non-null. Consider the case where the observed series exhibits a change in level at a particular point in time.

If you try to identify outliers or interventions in this series via classical standardized residuals you get one outlier or one unusual value. The problem is that if you "fix" the bad observation at the identified time point, the subsequent value is identified as an outlier due to the recursive process. The simple-minded approach of utilizing standardized residuals is in effect identification of innovative outliers and not additive outliers.

The logic behind the automatic intervention procedure has its roots in the technique proposed by Chang and Tiao (1983) and programmed by Bell (1983). It starts by developing an ARIMA model for the univariate time series (using the automatic ARIMA algorithm). A series of regressions on the residuals from the ARIMA model checks for any underlying changes in structure. If the series is found to be homogeneous, then the ARIMA model is used to forecast. If the series is found to be nonhomogeneous, then the various changes in structure are represented in a transfer function model by dummy (intervention) input variables and the ARIMA model becomes the tentative noise model. The program then estimates the transfer function-noise model and performs all of the diagnostic checks for sufficiency, necessity and invertibility. The model is updated as needed, and the diagnostic checking stage ends when all of the criteria for an acceptable model are met. The final step is to generate the forecast values. The user controls the level of detail that the output report is to contain, as well as some key options for modeling precision (lambda search and backcasting, for example). The user can also elect to have this process start with an examination of the original time series. This may be necessary for those cases where the series is overwhelmingly influenced by outlier variables.

We now present a summary of the mathematical properties underlying this procedure. This is taken from the Downing and McLaughlin (1986) paper (with permission!). For purposes of this discussion, we present, in their notation, the following equation, which is the general ARIMA model:

 

P(B) (N(t) - MEAN) = CONSTANT + T(B) A(t)     (eq. 1)

where

N(t)     = the discrete time series,
MEAN     = the average of the time series,
P(B)     = the autoregressive factor(s),
CONSTANT = the deterministic trend,
T(B)     = the moving average factor(s),
A(t)     = the noise series, and
B        = the backshift operator.

Outliers can occur in many ways. They may be the result of a gross error, for example a recording or transcription error. They may also occur through the effect of some exogenous intervention. These can be described by two different, but related, generating models discussed by Chang and Tiao (1983) and by Tsay (1986). They are termed the innovational outlier (IO) and additive outlier (AO) models. An additive outlier can be defined as,

 

Y(t) = N(t) + W E(to) (eq. 2)

while an innovational outlier is defined as,

Y(t) = N(t) + [T(B)/P(B)] W E(to)     (eq. 3)

where

Y(t)  = the observed time series,
W     = the magnitude of the outlier,
E(to) = 1 if t = to, and 0 if t <> to,

that is, E(to) is a time indicator signifying the time of occurrence to of the outlier, and N(t) is an unobservable outlier-free time series that follows the model given by (eq. 1). Expressing Equation (eq. 2) in terms of the white noise series A(t) in Equation (eq. 1), we find that for the AO model

Y(t) = [T(B)/P(B)] A(t) + W E(to)     (eq. 4)

while for the IO model

Y(t) = [T(B)/P(B)] [ A(t) + W E(to) ]     (eq. 5)

Equation (eq. 4) indicates that the additive outlier appears simply as a level change in the to-th observation and is described as a "gross error" model by Tiao (1985). The innovational outlier represents an extraordinary shock at time period to, since it influences observations Y(to), Y(to+1), ... through the memory of the system described by T(B)/P(B).

The reader should note that the residual outlier analysis as conducted in the course of diagnostic checking is an AO type. Also note that AO and IO models are relatable. In other words, a single IO model is equivalent to a potentially infinite AO model and vice versa. To demonstrate this, we expand equation (eq.5) to

Y(t) = [T(B)/P(B)] A(t) + [T(B)/P(B)] W E(to) , (eq. 6)

and then express (eq. 6) in terms of (eq. 4)

Y(t) = [T(B)/P(B)] A(t) + WW E(to) , (eq. 7)

where WW = [T(B)/P(B)] W .

Due to estimation considerations, the following discussion will be concerned with the additive outlier case only. Those interested in the estimation, testing, and subsequent adjustment for innovative outliers should read Tsay (1986). Note that while the above models indicate a single outlier, in practice several outliers may be present.

The estimation of the AO can be obtained by forming

II(B) = [P(B)/T(B)]     (eq. 8)

and calculating the residuals e(t) by

e(t) = II(B) Y(t)     (eq. 9)

     = II(B) [ [T(B)/P(B)] A(t) + W E(to) ]

     = A(t) + W II(B) E(to) .

By least squares theory (regressing the residuals e(t) on x(t) = II(B) E(to), the filtered pulse), the magnitude W of the additive outlier can be estimated by

EST W(to) = [ SUM over t of x(t) e(t) ] / [ SUM over t of x(t) x(t) ]     (eq. 10)

The variance of EST W(to) is given by:

Var( EST W(to) ) = var(A) / [ SUM over t of x(t) x(t) ]     (eq. 11)

where var(A) is the variance of the white noise process A(t).

Based on the above results, Chang and Tiao (1983) proposed the following test statistic for outlier detection:

TAU(to) = EST W(to) / SQRT( Var( EST W(to) ) )     (eq. 12)

If the null hypothesis of no outlier is true, then TAU(to) has the standard normal distribution. Usually, in practice, the true parameters II(B) and var(A) are unknown, but consistent estimates exist. Even more important is the fact that to, the time of the outlier, is unknown, but every time point may be checked. In this case one uses the statistic:

TAU = max over to = 1, ..., n of | TAU(to) |     (eq. 13)

and declares an outlier at time to if the maximum occurs at to and is greater than some critical value C. Chang and Tiao (1983) suggest values of 3.0, 3.5 and 4.0 for C.

The outlier model given by Equation (eq. 4) indicates a pulse change in the series at time to. A step change can also be modeled simply by replacing E(to) with S(to), where:

S(to) = 1 if t is greater than or equal to to, and 0 if not     (eq. 14)

We note that (1-B)S(to) = E(to) . Using S(to) one can apply least squares to estimate the step change and perform the same tests of hypothesis reflected in Equations (eq. 12) and (eq. 13). In this way, significant pulse and/or step changes in the time series can be detected.
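As a rough illustration of equations (eq. 9)-(eq. 13), the sketch below specializes the search to a known AR(1) noise model, so that the filter II(B) is simply 1 - PHI1*B. The function name, the robust noise-scale estimate and the simulated example are illustrative choices of this write-up, not AUTOBOX output.

import numpy as np

def detect_additive_outlier(y, phi, c=3.5):
    """Search every time point for an additive outlier, assuming a known AR(1) noise model
    so that the filter of (eq. 8) is simply II(B) = 1 - phi*B."""
    n = len(y)
    y = y - y.mean()
    e = y.copy()
    e[1:] = y[1:] - phi * y[:-1]                     # (eq. 9): filtered residuals e(t) = II(B) Y(t)
    sigma_a = np.median(np.abs(e)) / 0.6745          # rough, outlier-resistant scale for A(t)
    tau = np.zeros(n)
    for to in range(n):
        x = np.zeros(n)                              # x(t) = II(B) E(to): 1 at to, -phi at to+1
        x[to] = 1.0
        if to + 1 < n:
            x[to + 1] = -phi
        w_hat = (x @ e) / (x @ x)                    # least-squares estimate of W (eq. 10)
        tau[to] = w_hat * np.sqrt(x @ x) / sigma_a   # standardized statistic (eq. 12)
    to = int(np.argmax(np.abs(tau)))                 # largest statistic over all time points (eq. 13)
    return (to, tau[to]) if abs(tau[to]) > c else None

# Example: an AR(1) series with one additive outlier planted at t = 60.
rng = np.random.default_rng(2)
a = rng.normal(0, 1, 120)
clean = np.zeros(120)
for t in range(1, 120):
    clean[t] = 0.5 * clean[t - 1] + a[t]
y = clean.copy(); y[60] += 6.0
print(detect_additive_outlier(y, phi=0.5))           # expected: (60, a statistic well above 3.5)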

A straightforward extension of this approach to transfer functions has also been introduced in this version of AUTOBOX. This, of course, implies that the outliers or interventions are identified not only on the basis of the noise filter but also on the basis of the form and nature of the individual transfer functions.

 

 

 