Chapter 6
Autocorrelation or Serial Correlation Section 6.1
Introduction
2 Evaluating Econometric Work How does an analyst know when the econometric work is completed?
3 Evaluating Econometric Work Econometric Results – indeed a term that for many economists conjures up horrifying visions of well-meaning but perhaps marginally skilled, likely nocturnal, individuals sorting through endless piles of computer print-outs. One “final” print-out is then chosen for seemingly mysterious reasons and the rest discarded to be recycled through a local paper processor and another computer printer for other “econometricians” to repeat the process ad infinitum. Besides supplying a lucrative business for the paper recyclers, what useful output, if any, results from such a process? This question lies at the heart of the so-called “science of econometrics as currently applied,” a practice which has been called “data-mining”, “number crunching”, “model sifting”, “data grubbing”, “fishing”, “data massaging”, and even “alchemy”, among other less palatable terms. All of these euphemisms describe basically the same process: choosing an econometric model based on repeated experimentation with available sample data.
- Ziemer (1984)

4 Evaluating Econometric Work “Econometrics may not have the everlasting charm of Holmesian characters and adventures, or even a famous resident of Baker Street, but there is much in his methodological approach to the solving of criminal cases that is of relevance to applied econometric modeling. Holmesian detection may be interpreted as accommodating the relationship between data theory, modeling procedures, deductions and inferences, analysis of biases, testing of theories, re-evaluation and reformulation of theories, and finally reaching a solution to the problem at hand. With this in mind, can applied econometricians learn anything from the master of detection?” - McAleer (1994)

5 Key Diagnostics

Diagnostic                              Component of Econometric Model Under Consideration
Serial Correlation (Autocorrelation)    Error (or Disturbance) Term
Heteroscedasticity                      Error (or Disturbance) Term
Collinearity Diagnostics                Explanatory Variables
Influence Diagnostics                   Observations
Structural Change                       Structural Parameters
See Beggs (1988).
6 Section 6.2
Autocorrelation or Serial Correlation

Definition
Consequences
Formal Tests
– Durbin-Watson Test
– Nonparametric Runs Test
– Durbin's h-Test
– Durbin's m-Test
– Lagrange Multiplier (LM) Test
– Box-Pierce Test (Q Statistic)
– Ljung-Box Test (Q* Statistic) – a small-sample modification of the Box-Pierce Q Statistic
Solution
– Generalized Least Squares

8 Formal Definition of Autocorrelation or Serial Correlation Autocorrelation or serial correlation refers to the lack of independence of the error (or disturbance) terms; the two terms refer to the same phenomenon. Simply put, a systematic pattern exists in the residuals of the econometric model. Ideally, the residuals, which represent a composite of all factors not embedded in the model, should exhibit no pattern. That is, the residuals should follow a white-noise (random) pattern.
9 Prevalence of Serial Correlation With the use of time-series data in econometric applications, serial correlation is “public enemy number one.” Systematic patterns in the error terms commonly arise due to the (inadvertent) omission of explanatory variables in econometric models. These variables might come from disciplines other than economics, finance, or business; for example, psychology and sociology. Alternatively, these variables might represent factors that simply are difficult to quantify, such as tastes and preferences of consumers or technological innovation on the part of producers.
10 Consequences of Serial Correlation (Bishop, 1981)
– Errors “contaminated” with autocorrelation or serial correlation
– Potential of discovering “spurious” relationships due to problems with autocorrelated errors (Granger and Newbold, 1974)
– Difficulties with structural analysis and forecasting

If the error structure is autoregressive, then OLS estimates of the regression parameters are (1) unbiased, (2) consistent, but (3) inefficient in small and in large samples.
11 continued... Consequences of Serial Correlation The estimates of the standard errors of the coefficients in any econometric model are biased downward if the residuals are positively autocorrelated, and biased upward if the residuals are negatively autocorrelated. Therefore, the calculated t statistic is biased in the opposite direction of the bias in the estimated standard error of that coefficient. Granger and Newbold (1974) further suggest that the econometric results can be deemed “nonsense” if R² > DW(d).
12 continued... Consequences of Serial Correlation Positive autocorrelation of the errors generally tends to make the estimate of the error variance too small, so confidence intervals are too narrow and null hypotheses are rejected with a higher probability than the stated significance level. Negative autocorrelation of the errors generally tends to make the estimate of the error variance too large, so confidence intervals are too wide; also, the power of significance tests is reduced. With either positive or negative autocorrelation, least-squares parameter estimates usually are not as efficient as generalized least-squares parameter estimates.
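The downward bias in the naive standard errors can be seen in a small Monte Carlo sketch. This is illustrative Python with simulated data; the model, parameter values, and function names are hypothetical, not from the chapter's examples. It compares the usual OLS standard error of the slope with the actual spread of the slope estimates across replications when the errors follow a positive AR(1) process.

```python
import math
import random
import statistics

def simulate_slope(n=50, rho=0.8, n_reps=500, seed=42):
    """OLS slope and its naive standard error under AR(1) errors."""
    rng = random.Random(seed)
    x = list(range(n))                          # trending regressor
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slopes, naive_ses = [], []
    for _ in range(n_reps):
        e, errs = 0.0, []
        for _ in range(n):                      # e_t = rho * e_{t-1} + v_t
            e = rho * e + rng.gauss(0, 1)
            errs.append(e)
        y = [2.0 + 0.5 * xi + ei for xi, ei in zip(x, errs)]
        ybar = sum(y) / n
        b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
        b0 = ybar - b1 * xbar
        resid = [yi - b0 - b1 * xi for xi, yi in zip(x, y)]
        s2 = sum(r * r for r in resid) / (n - 2)
        slopes.append(b1)
        naive_ses.append(math.sqrt(s2 / sxx))   # the usual OLS formula
    return statistics.stdev(slopes), statistics.mean(naive_ses)

true_sd, avg_naive_se = simulate_slope()
# With rho = 0.8 the usual formula understates the true variability severely;
# the ratio of actual spread to reported standard error is well above 1.
print(round(true_sd / avg_naive_se, 2))
```

Rerunning the sketch with a negative rho reverses the direction of the bias, matching the negative-autocorrelation case described above.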
13 Regression with Autocorrelated Errors Ordinary regression analysis is based on several statistical assumptions. One key assumption is that the errors are independent of each other. However, with time-series data, the ordinary regression residuals usually are correlated over time. Violation of the independent-errors assumption has three important consequences for ordinary regression.
– First, statistical tests of the significance of the parameters and the confidence limits for the predicted values are not correct.
– Second, the estimates of the regression coefficients are not as efficient as they would be if the autocorrelation were taken into account.
– Third, because the ordinary regression residuals are not independent, they contain information that can be used to improve the prediction of future values.

14 Solution to the Serial Correlation Problem Generalized Least Squares (GLS) The AUTOREG procedure solves this problem by augmenting the regression model with an autoregressive model for the random error, thereby accounting for the systematic pattern of the errors. Instead of the usual regression model, the following autoregressive error model is used:

y_t = x_t'β + ε_t
ε_t = −φ_1 ε_{t−1} − φ_2 ε_{t−2} − ... − φ_m ε_{t−m} + v_t
v_t ~ IN(0, σ²)

The notation v_t ~ IN(0, σ²) indicates that each v_t is normally and independently distributed with mean 0 and variance σ².
15 continued... Solution to the Serial Correlation Problem Generalized Least Squares (GLS) By simultaneously estimating the regression coefficients β and the autoregressive error model parameters φ_i, the AUTOREG procedure corrects the regression estimates for autocorrelation. Thus, this kind of regression analysis is often called autoregressive error correction or serial correlation correction. This technique is also called the use of generalized least squares (GLS).
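Outside of SAS, the same GLS idea can be sketched in Python with the Cochrane-Orcutt iteration for a single regressor and an AR(1) error. This is a simplified sketch under stated assumptions: PROC AUTOREG itself uses Yule-Walker or maximum likelihood estimation rather than this iteration, and the function names and data below are hypothetical.

```python
def ols(x, y):
    """Simple OLS of y on x with an intercept; returns (b0, b1)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
          / sum((xi - xbar) ** 2 for xi in x))
    return ybar - b1 * xbar, b1

def cochrane_orcutt(x, y, n_iter=20):
    """Iteratively estimate (b0, b1) and the AR(1) parameter rho."""
    b0, b1 = ols(x, y)
    rho = 0.0
    for _ in range(n_iter):
        resid = [yi - b0 - b1 * xi for xi, yi in zip(x, y)]
        # Step 1: estimate rho from the lagged OLS residuals
        rho = (sum(resid[t] * resid[t - 1] for t in range(1, len(resid)))
               / sum(r * r for r in resid))
        # Step 2: quasi-difference the data and re-estimate by OLS
        ystar = [y[t] - rho * y[t - 1] for t in range(1, len(y))]
        xstar = [x[t] - rho * x[t - 1] for t in range(1, len(x))]
        a, b1 = ols(xstar, ystar)
        b0 = a / (1.0 - rho)        # recover the original intercept
    return b0, b1, rho
```

The quasi-differenced regression has a serially uncorrelated error term, which is why its OLS estimates are (apart from the treatment of the first observation) the GLS estimates of the original model.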
16 Predicted Values and Residuals The AUTOREG procedure can produce two kinds of predicted values and corresponding residuals and confidence limits. The first kind of predicted value is obtained from only the structural part of the model. This predicted value is an estimate of the unconditional mean of the dependent variable at time t. The second kind of predicted value includes both the structural part of the model and the predicted value of the autoregressive error process. Both the structural part and the autoregressive error process of the model (termed the full model) are used to forecast future values.

17 continued... Predicted Values and Residuals Use the OUTPUT statement to store predicted values and residuals in a SAS data set and to output other values such as confidence limits and variance estimates. The P= option specifies an output variable to contain the full-model predicted values. The PM= option names an output variable for the predicted (unconditional) mean. The R= and RM= options specify output variables for the corresponding residuals, computed as the actual value minus the predicted value.
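The two kinds of predicted values can be mimicked in Python for a hypothetical one-regressor model with an AR(1) error. Here `phi` is the coefficient in e_t = phi * e_{t-1} + v_t; note that PROC AUTOREG reports AR coefficients with the opposite sign convention. The function and variable names are illustrative, not SAS options.

```python
def predictions(b0, b1, phi, x, y):
    """Return (structural, full_model) one-step-ahead predictions."""
    structural, full = [], []
    prev_err = 0.0                      # structural residual from t-1
    for xt, yt in zip(x, y):
        pm = b0 + b1 * xt               # structural part only (akin to PM=)
        p = pm + phi * prev_err         # plus forecast AR(1) error (akin to P=)
        structural.append(pm)
        full.append(p)
        prev_err = yt - pm              # structural residual (akin to RM=)
    return structural, full

# Tiny worked example (made-up numbers):
structural, full = predictions(0.0, 1.0, 0.5, [1, 2, 3], [1.2, 2.4, 3.0])
print(structural)                       # [1.0, 2.0, 3.0]
print([round(v, 2) for v in full])      # [1.0, 2.1, 3.2]
```

The full-model prediction adjusts each structural prediction by half of the previous period's structural residual, which is exactly the "information in the residuals" that slide 13 says can improve forecasts.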
18 Serial Correlation Disturbance terms are not independent.

Y_t = β_0 + β_1 X_{1t} + β_2 X_{2t} + ... + β_k X_{kt} + ε_t,  t = 1, 2, ..., T
E(ε_i ε_j) ≠ 0,  i ≠ j

The correlation between ε_t and ε_{t−k} is called an autocorrelation of order k.
19 continued... Serial Correlation Recommend a graphical analysis of plotting the residuals over time to determine the existence of a non-random or systematic pattern.

[Figures: residuals plotted against time, one panel illustrating positive correlation and one illustrating negative correlation]
20 Section 6.3
Tests for Serial Correlation The Durbin-Watson Test

AR(1) process: ε_t = ρ ε_{t−1} + v_t
H_0: ρ = 0
H_1: ρ ≠ 0

DW or d statistic:

DW(d) = Σ_{t=2}^{n} (ê_t − ê_{t−1})² / Σ_{t=1}^{n} ê_t²,  0 ≤ DW ≤ 4

DW(d) ≈ 2(1 − ρ̂), where ρ̂ = Σ_{t=2}^{n} ê_t ê_{t−1} / Σ_{t=1}^{n} ê_t²

If ρ = 0, then d = 2; if ρ = 1, then d = 0; if ρ = −1, then d = 4.

22 continued... The Durbin-Watson Test The critical values d_L and d_U depend on α, k, and n. DW is invalid for models that contain no intercept and for models that contain lagged dependent variables. The distribution of DW(d) is reported by Durbin and Watson (1950, 1951).
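The d statistic and the approximation DW ≈ 2(1 − ρ̂) are easy to compute directly from a residual series; a minimal pure-Python sketch (helper names are hypothetical):

```python
def durbin_watson(resid):
    """d = sum_{t=2}^n (e_t - e_{t-1})^2 / sum_{t=1}^n e_t^2; 0 <= d <= 4."""
    num = sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
    return num / sum(r * r for r in resid)

def rho_hat(resid):
    """First-order autocorrelation estimate used in DW ~ 2(1 - rho_hat)."""
    num = sum(resid[t] * resid[t - 1] for t in range(1, len(resid)))
    return num / sum(r * r for r in resid)

# A perfectly alternating series illustrates the negative-correlation extreme:
alternating = [(-1) ** t for t in range(50)]
print(round(durbin_watson(alternating), 2))       # → 3.92, near the limit of 4
print(round(2 * (1 - rho_hat(alternating)), 2))   # → 3.96, the approximation
```

The small gap between 3.92 and 3.96 comes from the end terms of the numerator sum; the approximation tightens as n grows.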
23 continued...

[Figure: sampling distribution f(d) of the Durbin-Watson statistic with decision regions: reject H_0 (evidence of positive autocorrelation) for 0 ≤ d < d_L; zone of indecision for d_L ≤ d ≤ d_U; accept H_0 for d_U < d < 4−d_U; zone of indecision for 4−d_U ≤ d ≤ 4−d_L; reject H_0 (evidence of negative autocorrelation) for 4−d_L < d ≤ 4]
24 continued... The Durbin-Watson Test The sampling distribution of d depends on the values of the exogenous variables; hence Durbin and Watson derived upper (d_U) and lower (d_L) limits for the significance levels of d. Tables of the distribution are found in most econometric textbooks. The Durbin-Watson test perhaps is the most used procedure in econometric applications.
25 The Durbin-Watson Statistic

26 continued... Appendix G: Statistical Table
27 Limitations of the Durbin-Watson Test Although the Durbin-Watson test is the most commonly used test for serial correlation, there are limitations:
1. The test is for first-order serial correlation only.
2. The test might be inconclusive.
3. The test cannot be applied in models with lagged dependent variables.
4. The test cannot be applied in models without intercepts.
28 Additional Tables There are other tables for the DW test that have been prepared to take care of special situations. Some of these are:
1. R.W. Farebrother (1980) provides tables for regression models with no intercept term.
2. Savin and White (1977) present tables for the DW test for samples with 6 to 200 observations and for as many as 20 regressors.
29 continued... Additional Tables 3. Wallis (1972) gives tables for regression models with quarterly data. Here you want to test for fourth-order autocorrelation rather than first-order autocorrelation. In this case, the DW statistic is

d_4 = Σ_{t=5}^{n} (û_t − û_{t−4})² / Σ_{t=1}^{n} û_t²
Wallis provides 5% critical values d_L and d_U for two situations: one where the k regressors include an intercept (but not a full set of seasonal dummy variables) and another where the regressors include four quarterly seasonal dummy variables. In each case the critical values are for testing H_0: ρ = 0 against H_1: ρ > 0. For the hypothesis H_1: ρ < 0, Wallis suggests that the appropriate critical values are (4 − d_U) and (4 − d_L). King and Giles (1978) give further significance points for Wallis's tests.

30 continued... Additional Tables
4. King (1981) gives the 5% points of d_L and d_U for quarterly time-series data with trend and/or seasonal dummy variables. These tables are for testing first-order autocorrelation.
5. King (1983) gives tables for the DW test for monthly data. In the case of monthly data, you want to test for twelfth-order autocorrelation.
31 Nonparametric Runs Test (Gujarati, 1978) More general than the DW test. Interest in H_0: ρ = 0 – a test of an AR(1) process in the error terms.

N⁺ = number of positive residuals
N⁻ = number of negative residuals
N = number of observations
N_r = number of runs

E[N_r] = 2N⁺N⁻/N
VAR[N_r] = 2N⁺N⁻(2N⁺N⁻ − N) / (N²(N − 1))

Test Statistic: Z = (N_r − E[N_r]) / √VAR(N_r) ≈ N(0, 1)

Reject H_0 (non-autocorrelation) if the test statistic is too large in absolute value.
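The runs test is simple to compute directly; a Python sketch with hypothetical helper names. The expected-runs formula below follows the slides; note that some textbooks, including Gujarati, add 1 to E[N_r].

```python
import math

def count_runs(resid):
    """N+, N-, and the number of runs in the sign sequence of the residuals."""
    signs = [r > 0 for r in resid]
    n_runs = 1 + sum(signs[t] != signs[t - 1] for t in range(1, len(signs)))
    return sum(signs), len(signs) - sum(signs), n_runs

def runs_z(n_pos, n_neg, n_runs):
    """Z statistic for the runs test; approximately N(0, 1) under H0."""
    n = n_pos + n_neg
    e_runs = 2.0 * n_pos * n_neg / n
    var_runs = (2.0 * n_pos * n_neg * (2.0 * n_pos * n_neg - n)
                / (n * n * (n - 1)))
    return (n_runs - e_runs) / math.sqrt(var_runs)

# Counts from the Greene gasoline problem later in this section:
print(round(runs_z(19, 17, 11), 2))   # → -2.36; |Z| > 1.96, so reject H0
```

The slides report −2.35 rather than −2.36 because they round E[N_r] and VAR[N_r] before forming Z.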
32 The REG Procedure Model: MODEL1 Dependent Variable: lnpcg
Number of Observations Read 36 Number of Observations Used 36
Analysis of Variance
Sum of Mean Source DF Squares Square F Value Pr > F
Model 6 0.78035 0.13006 150.85 <.0001 Error 29 0.02500 0.00086218 Corrected Total 35 0.80535
Root MSE           0.02936    R-Square    0.9690
Dependent Mean    -0.00371    Adj R-Sq    0.9625
Coeff Var      -791.75086
33 continued... Parameter Estimates

                  Parameter    Standard
Variable    DF    Estimate     Error      t Value    Pr > |t|

Intercept    1    -17.27084    1.71977    -10.04     <.0001
lny          1      1.94035    0.19785      9.81     <.0001
lnpg         1     -0.11398    0.03409     -3.34     0.0023
lnpnc        1     -0.11773    0.07654     -1.54     0.1349
lnpuc        1      0.21874    0.15677      1.40     0.1735
lnppt        1     -0.03906    0.08444     -0.46     0.6471
t            1     -0.01747    0.00701     -2.49     0.0188
34 The REG Procedure Model: MODEL1 Dependent Variable: lnpcg

Durbin-Watson D               0.786
Pr < DW                      <.0001
Pr > DW                      1.0000
Number of Observations           36
1st Order Autocorrelation     0.601

NOTE: Pr<DW is the p-value for testing positive autocorrelation, and Pr>DW is the p-value for testing negative autocorrelation.
35 Obs year lnpcgres (residuals listing)

 1  1960   0.017118      19  1978   0.024483
 2  1961   0.010675      20  1979  -0.007387
 3  1962   0.019701      21  1980  -0.039505
 4  1963   0.025370      22  1981  -0.014226
 5  1964  -0.019348      23  1982   0.022968
 6  1965  -0.043199      24  1983   0.042105
 7  1966  -0.045077      25  1984  -0.011793
 8  1967  -0.056039      26  1985  -0.028349
 9  1968  -0.032779      27  1986  -0.007728
10  1969   0.003223      28  1987   0.016606
11  1970   0.008482      29  1988  -0.019070
12  1971   0.021400      30  1989  -0.005426
13  1972   0.016612      31  1990  -0.008096
14  1973  -0.015336      32  1991  -0.025680
15  1974   0.006500      33  1992  -0.015855
16  1975   0.039297      34  1993   0.013422
17  1976   0.048282      35  1994   0.007171
18  1977   0.050095      36  1995   0.001383

36 The Greene Problem In the Greene problem for gasoline, DW = 0.786 and ρ̂ = 0.601.
Use of the Nonparametric Runs Test

N = 36, N⁺ = 19, N⁻ = 17, N_r = 11
E[N_r] = 2N⁺N⁻/N = 17.94
VAR[N_r] = 2N⁺N⁻(2N⁺N⁻ − N) / (N²(N − 1)) = 8.69

Z = (N_r − E[N_r]) / √VAR(N_r) = (11 − 17.94)/√8.69 = −6.94/2.95 = −2.35

Z_crit = 1.96 at α = .05, so at α = .05, reject H_0: ρ = 0.

37 Analysis Limitations Analysts must recognize that a “good” Durbin-Watson statistic is insufficient evidence upon which to conclude that the error structure is “contamination free” in terms of autocorrelation. The Durbin-Watson test is applicable only to first-order autocorrelation, and there is little reason to suppose that the correct model for the residuals is AR(1). A mixed autoregressive moving-average (ARMA) structure is much more likely to be correct, especially with quarterly, monthly, and weekly frequencies of time-series data. Modeling of the residuals can be employed following the methodology of Box and Jenkins (1976). Owing to the higher frequencies of time-series data used in applied econometrics in recent years, the pattern of the error structure generally is more complex than the common AR(1) pattern.

38 A General Test for Higher-Order Serial Correlation The LM Test (Breusch and Pagan, 1980) LM - Lagrange multiplier

y_t = β_0 + β_1 X_{1t} + ... + β_k X_{kt} + u_t,  t = 1, 2, ..., n
u_t = ρ_1 u_{t−1} + ρ_2 u_{t−2} + ... + ρ_p u_{t−p} + e_t,  e_t ~ IN(0, σ²)
H_0: ρ_1 = ρ_2 = ... = ρ_p = 0

The Xs might or might not include lagged dependent variables.
1. Estimate by OLS and obtain the least squares residuals.
2. Estimate the auxiliary regression

û_t = γ_0 + γ_1 X_{1t} + ... + γ_k X_{kt} + Σ_{i=1}^{p} ρ_i û_{t−i} + v_t

3. Test whether the coefficients of û_{t−i} are all zero. Use the conventional F statistic.
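A pure-Python sketch of the LM idea for p = 1. Caveat: the full auxiliary regression above includes the X's; dropping them, as this simplified sketch does, is a shortcut that is asymptotically equivalent only when the X's are strictly exogenous. The data below are simulated and all names are hypothetical.

```python
import random

def lm_ar1(resid):
    """n * R^2 from regressing u_t on u_{t-1} (no intercept); ~ chi^2(1) under H0."""
    u, ulag = resid[1:], resid[:-1]
    b = sum(a * c for a, c in zip(u, ulag)) / sum(c * c for c in ulag)
    ss_res = sum((a - b * c) ** 2 for a, c in zip(u, ulag))
    ss_tot = sum(a * a for a in u)
    return len(u) * (1.0 - ss_res / ss_tot)

# White-noise residuals should stay below the chi^2(1) 5% critical value
# 3.84 most of the time; a strong AR(1) series gives a very large statistic.
rng = random.Random(7)
white = [rng.gauss(0, 1) for _ in range(300)]
ar_resid, e = [], 0.0
for _ in range(300):
    e = 0.9 * e + rng.gauss(0, 1)
    ar_resid.append(e)
print(round(lm_ar1(white), 2), round(lm_ar1(ar_resid), 2))
```

For p > 1 the same n·R² logic applies, with the auxiliary regression extended to p lagged residuals and the statistic compared against a chi-square with p degrees of freedom.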
39 Box-Pierce or Ljung-Box Tests Check the serial correlation pattern of the residuals. You must be sure that there is no serial correlation (desire white noise). H_0: no pattern in the residuals (the residuals are white noise). Box and Pierce (1970) suggest looking at not only the first-order autocorrelation but autocorrelations of all orders of the residuals.

Calculate Q = N Σ_{k=1}^{m} r_k², where r_k is the autocorrelation of lag k and N is the number of observations in the series. If the model fitted is appropriate, Q is approximately distributed χ²_m.

Ljung and Box (1978) suggest a modification of the Q statistic for moderate sample sizes:

Q* = N(N + 2) Σ_{k=1}^{m} (N − k)⁻¹ r_k²

40 Box-Pierce or Ljung-Box Tests With the Box-Pierce or Ljung-Box tests, you examine the interface of structural models with time-series models. Use the correlations and partial correlations of the residuals over time. The idea is to determine the appropriate pattern in the error structure from the autocorrelation and partial autocorrelation functions associated with the residuals. Autocorrelation functions tell you about moving average (MA) patterns. Partial autocorrelation functions tell you about autoregressive (AR) patterns. Anticipate ARMA error structures, particularly higher-order AR patterns in residuals of econometric models.
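Both statistics can be computed directly from the residual autocorrelations; a self-contained Python sketch (illustrative, not the PROC ARIMA implementation, with simulated data):

```python
import random

def autocorr(x, k):
    """Lag-k autocorrelation r_k of a series (mean-corrected)."""
    n = len(x)
    xbar = sum(x) / n
    den = sum((xi - xbar) ** 2 for xi in x)
    return sum((x[t] - xbar) * (x[t - k] - xbar) for t in range(k, n)) / den

def box_pierce_q(x, m):
    """Q = N * sum of r_k^2, k = 1..m; ~ chi^2(m) if the model is adequate."""
    n = len(x)
    return n * sum(autocorr(x, k) ** 2 for k in range(1, m + 1))

def ljung_box_q(x, m):
    """Q* = N(N+2) * sum of r_k^2 / (N-k); the small-sample modification."""
    n = len(x)
    return n * (n + 2) * sum(autocorr(x, k) ** 2 / (n - k)
                             for k in range(1, m + 1))

# Hypothetical white-noise residuals: both statistics should sit near the
# chi^2(m) expectation of m, well below the rejection region.
rng = random.Random(3)
resid = [rng.gauss(0, 1) for _ in range(100)]
q = box_pierce_q(resid, 6)
qstar = ljung_box_q(resid, 6)
print(round(q, 2), round(qstar, 2))
```

Since (N + 2)/(N − k) > 1 at every lag, Q* always exceeds Q in finite samples; the two converge as N grows.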
41 Section 6.4
Sample Problem: The Demand for Shrimp The REG Procedure Model: MODEL1 Dependent Variable: QSHRIMP

Number of Observations Read 97
Number of Observations Used 97
Analysis of Variance
Sum of Mean Source DF Squares Square F Value Pr > F
Model 6 2064.71370 344.11895 19.59 <.0001 Error 90 1580.90382 17.56560 Corrected Total 96 3645.61751
Root MSE          4.19113    R-Square    0.5664
Dependent Mean    6.85409    Adj R-Sq    0.5374
Coeff Var        61.14789
43 continued... Parameter Estimates
                  Parameter    Standard
Variable    DF    Estimate     Error      t Value    Pr > |t|

Intercept    1    29.69939     8.11798      3.66     0.0004
PSHRIMP      1    -0.03223     0.00346     -9.33     <.0001
PFIN         1    -0.02731     0.01362     -2.00     0.0480
PSHELL1      1     0.01590     0.00719      2.21     0.0294
ADSHRIMP     1    -0.00101     0.00708     -0.14     0.8865
ADFIN        1    -0.01398     0.00480     -2.91     0.0046
ADSHELL1     1     0.00848     0.02078      0.41     0.6843
44 The REG Procedure Output
The REG Procedure Model: MODEL1
Dependent Variable: QSHRIMP
Durbin-Watson D 2.092 Pr < DW 0.6443 Pr > DW 0.3557 Number of Observations 97 1st Order Autocorrelation -0.049
Conclusion: No AR(1) pattern in the residuals
45 [Figure: plot of RESID for the QSHRIMP model across observations 10-90]

46 The ARIMA Procedure Output
Name of Variable = resQSHRIMP
Mean of Working Series     -12E-16
Standard Deviation        4.037075
Number of Observations          97

The autocorrelation function (ACF): a plot of the correlation of the residuals at various lags, Corr(e_t, e_{t−k}), k = 0, 1, 2, ..., 24. Note the MA(3) pattern.

Autocorrelations

Lag    Covariance    Correlation    Std Error
  0    16.297977       1.00000
  1    -0.803782       -.04932      0.101535
  2     1.365554       0.08379      0.101781
  3     5.065873       0.31083      0.102490
  4     1.279257       0.07849      0.111786
  5     0.847853       0.05202      0.112353
  6     1.553722       0.09533      0.112601
  7     2.748583       0.16865      0.113430
  8     0.060094       0.00369      0.115986
  9     1.246364       0.07647      0.115988
 10     2.816294       0.17280      0.116506
47 Partial Autocorrelations

The partial autocorrelation function (PACF): a plot of the correlation of the residuals at various lags after netting out intermittent lags. Note the AR(3) pattern.

Lag    Correlation
  1     -0.04932
  2      0.08155
  3      0.32156
  4      0.12215
  5      0.01467
  6     -0.01941
  7      0.12325
  8     -0.00334
  9      0.02246
 10      0.09677
 11      0.06154
 12     -0.09759
 13      0.04673
 14     -0.15242
 15     -0.01645
 16     -0.19457
 17      0.08137
 18     -0.07893
 19     -0.07409
 20     -0.01932
 21      0.06747
 22     -0.19432

48 PROC ARIMA Output Partial Autocorrelations
Lag    Correlation
 23     -0.06713
 24     -0.03305

Autocorrelation Check for White Noise

 To      Chi-            Pr >
Lag    Square    DF     ChiSq    ------------Autocorrelations------------
  6     12.70     6    0.0480    -0.049   0.084   0.311   0.078   0.052   0.095
 12     20.18    12    0.0637     0.169   0.004   0.076   0.173   0.056  -0.037
 18     26.80    18    0.0828     0.159  -0.090   0.001  -0.099   0.067  -0.093
 24     36.18    24    0.0527    -0.130   0.065  -0.044  -0.218  -0.051  -0.031
The Ljung-Box Q* statistic reveals that the residual series is not white noise.
49 Correlogram of RESID
Presence of MA(3), AR(3) pattern
50 Durbin-Watson Statistics
Order DW Pr < DW Pr > DW
  1     2.0921    0.6443    0.3557
  2     1.8221    0.1974    0.8026
  3     1.3585    0.0007    0.9993
Godfrey's Serial Correlation Test
Alternative LM Pr > LM
AR(1)     0.2799    0.5967
AR(2)     0.9989    0.6069
AR(3)    11.6932    0.0085
Presence of AR(3) pattern
51 OLS Estimates
Standard Approx Variable DF Estimate Error t Value Pr > |t|
Intercept 1 29.6994 8.1180 3.66 0.0004 PSHRIMP 1 -0.0322 0.003456 -9.33 <.0001 PFIN 1 -0.0273 0.0136 -2.00 0.0480 PSHELL1 1 0.0159 0.007185 2.21 0.0294 ADSHRIMP 1 -0.001014 0.007085 -0.14 0.8865 ADFIN 1 -0.0140 0.004805 -2.91 0.0046 ADSHELL1 1 0.008479 0.0208 0.41 0.6843
52 Breusch-Godfrey Serial Correlation LM Test:
F-statistic 3.975111 Prob. F(3,87) 0.0105 Obs*R-squared 11.69324 Prob. Chi-Square(3) 0.0085
Test Equation: Dependent Variable: RESID Method: Least Squares Sample: 1 97 Included observations: 97 Presample missing value lagged residuals set to zero.
            Coefficient    Std. Error    t-Statistic    Prob.

C            -0.064917      7.783149      -0.008341    0.9934
PSHRIMP       0.000324      0.003415       0.094983    0.9245
PFIN          0.003214      0.013142       0.244517    0.8074
PSHELL1      -0.002645      0.006911      -0.382712    0.7029
ADSHRIMP      0.003714      0.007144       0.519862    0.6045
ADSHELL1      0.002087      0.020725       0.100694    0.9200
ADFIN        -0.000244      0.004692      -0.052095    0.9586
RESID(-1)    -0.074724      0.109910      -0.679863    0.4984
RESID(-2)     0.091318      0.107147       0.852272    0.3964
RESID(-3)     0.351359      0.106391       3.302520    0.0014

Presence of AR(3) pattern
R-squared              0.120549    Mean dependent var     -3.13E-15
Adjusted R-squared     0.029571    S.D. dependent var      4.058047
S.E. of regression     3.997597    Akaike info criterion   5.706646
Sum squared resid      1390.328    Schwarz criterion       5.972081
Log likelihood        -266.7724    Hannan-Quinn criter.    5.813975
F-statistic            1.325037    Durbin-Watson stat      2.066485
Prob(F-statistic)      0.235832

53
Correcting for serial correlation through the use of PROC AUTOREG
Estimates of Autoregressive Parameters

                      Standard
Lag    Coefficient    Error       t Value
  1     0.071520     0.101517      0.70
  2    -0.096118     0.101283     -0.95
  3    -0.321561     0.101517     -3.17

Use of Yule-Walker estimates of φ_1, φ_2, and φ_3

Yule-Walker Estimates

SSE                 1376.71794    DFE                   87
MSE                   15.82434    Root MSE         3.97798
SBC                 578.680834    AIC           552.933724
Regress R-Square        0.5618    Total R-Square    0.6224
Log Likelihood       -551.8663    Observations          97
54
Durbin-Watson Statistics

Order      DW      Pr < DW    Pr > DW
  1      2.0340     0.5135     0.4865
  2      2.0047     0.5319     0.4681
  3      1.8530     0.2997     0.7003

Now, no serial correlation exists in the residuals.
55 The AUTOREG Procedure
Godfrey's Serial Correlation Test
Alternative LM Pr > LM
AR(1)    2.4131    0.1203
AR(2)    2.5993    0.2726
AR(3)    2.6009    0.4573

GLS Estimates

                  Standard                Approx
Variable    DF    Estimate     Error      t Value    Pr > |t|

Intercept    1     29.6938     7.2391       4.10     <.0001
PSHRIMP      1     -0.0324     0.003706    -8.73     <.0001
PFIN         1     -0.0177     0.0130      -1.36     0.1763
PSHELL1      1    0.009815     0.006848     1.43     0.1554
ADSHRIMP     1    0.000814     0.006389     0.13     0.8989
ADFIN        1     -0.0151     0.004708    -3.21     0.0019
ADSHELL1     1    0.001010     0.0193       0.05     0.9584

Statistically significant estimated coefficients for PSHRIMP and ADFIN

56 Partial Autocorrelations

  1    -0.049318
  2     0.081553
  3     0.321561

Starting values of the estimates of φ_1, φ_2, and φ_3 in the ML procedure. Preliminary MSE 14.4802
Estimates of Autoregressive Parameters

                      Standard
Lag    Coefficient    Error       t Value
  1     0.071520     0.101517      0.70
  2    -0.096118     0.101283     -0.95
  3    -0.321561     0.101517     -3.17
Algorithm converged.
57 Use of the Maximum Likelihood procedure to produce estimates of φ_1, φ_2, and φ_3. Maximum Likelihood Estimates

SSE                 1365.10117    DFE                   87
MSE                   15.69082    Root MSE         3.96116
SBC                 578.057305    AIC           552.310196
Regress R-Square        0.5711    Total R-Square    0.6256
Log Likelihood       -266.1551    Observations          97
Durbin -Watson Statistics Order DW Pr < DW Pr > DW
1 2.1042 0.6574 0.3426 2 2.0571 0.6368 0.3632 3 1.9762 0.54 81 0.4519
Alternative      LM       Pr > LM
AR(1)          2.0021      0.1571
AR(2)          2.2920      0.3179
AR(3)          2.2951      0.5135

ML Estimates
                  Standard                Approx
Variable    DF    Estimate     Error      t Value    Pr > |t|

Intercept    1     30.4400     7.2224       4.21     <.0001
PSHRIMP      1     -0.0341     0.004134    -8.24     <.0001
PFIN         1     -0.0138     0.0135      -1.02     0.3093
PSHELL1      1    0.007794     0.007022     1.11     0.2701
ADSHRIMP     1    0.001561     0.006500     0.24     0.8108
ADFIN        1     -0.0155     0.004768    -3.25     0.0016
ADSHELL1     1    -0.003312    0.0196      -0.17     0.8660
AR1          1      0.0152     0.1060       0.14     0.8861
AR2          1     -0.1256     0.1037      -1.21     0.2290
AR3          1     -0.3915     0.1042      -3.76     0.0003
59 continued... Autoregressive parameters assumed given.
Standard Approx Variable DF Estimate Error t Value Pr > |t|
Intercept 1 30.4400 7.1700 4.25 <.0001 PSHRIMP 1 -0.0341 0.003944 -8.64 <.0001 PFIN 1 -0.0138 0.0132 -1.05 0.2970 PSHELL1 1 0.007794 0.006865 1.14 0.2594 ADSHRIMP 1 0.001561 0.006240 0.25 0.8030 ADFIN 1 -0.0155 0.004694 -3.31 0.0014 ADSHELL1 1 -0.003312 0.0189 -0.18 0.8614
60 Depiction of the Estimated Model for the Qshrimp Problem The estimated model is based on the Maximum Likelihood estimates.

Qshrimp_t = 30.4400 − .0341*Pshrimp_t − .0138*Pfin_t + .007794*Pshell1_t + .001561*Adshrimp_t − .0155*Adfin_t − .003312*Adshell1_t + v_t
v_t = −.0152*v_{t−1} + .1256*v_{t−2} + .3915*v_{t−3} + ε_t

MSE = 15.69082 (estimate of residual variance). This estimate is smaller than the OLS estimate of 17.5656. The total R-square statistic computed from the residuals of the autoregressive model is 0.6256, reflecting the improved fit from the use of past residuals to help predict the next value of Qshrimp_t. The Regress R-Square value is 0.5711, which is the R-square statistic for a regression of transformed variables adjusted for the estimated autocorrelation.

61 Comparison of Diagnostic Statistics and Parameter Estimates from the Qshrimp Problem
Explanatory      OLS                       Yule-Walker               Maximum Likelihood
Variables        Parameter    Standard     Parameter    Standard     Parameter    Standard
                 Estimate     Error        Estimate     Error        Estimate     Error
Intercept 29.6994 8.1180 29.6938 7.2391 30.4400 7.2224
Pshrimp -0.0322 0.003456 -0.0324 0.003706 -0.0341 0.004134
Pfin -0.0273 0.0136 -0.0177 0.0130 -0.0138 0.0135
Pshell1 0.0159 0.007185 0.009815 0.006848 0.007794 0.007022
Adshrimp -0.001014 0.007085 0.000814 0.006389 0.001561 0.006500
Adfin -0.0140 0.004805 -0.0151 0.004708 -0.0155 0.004768
Adshell1 0.008479 0.0208 0.001010 0.0193 -0.003312 0.0196
62 continued... Comparison of Diagnostic Statistics and Parameter Estimates (standard errors in parentheses)

                       OLS            Yule-Walker            Maximum Likelihood
MSE                  17.5656          15.82434               15.69082
Root MSE              4.19113          3.97798                3.96116
SBC                 578.028031       578.680834             578.057305
AIC                 560.005054       552.933724             552.310196
Regress R-Square      0.5664           0.5618                 0.5711
Total R-Square        0.5664           0.6224                 0.6256
Log Likelihood     -273.00253        -551.8663              -266.151

AR 1                    NA          0.07152 (0.101517)      0.0152 (0.1060)
AR 2                    NA         -0.096118 (0.101283)    -0.1256 (0.1037)
AR 3                    NA         -0.321561 (0.101517)    -0.3915 (0.1042)

DW order 1            2.0921           2.0340                 2.1042
DW order 2            1.8221           2.0047                 2.0571
DW order 3            1.3585           1.8530                 1.9762

63 Section 6.5
Sample Problem: The Demand for Gasoline Model: MODEL1 Dependent Variable: lnpcg
Number of Observations Read 36
Number of Observations Used 36
Analysis of Variance
Sum of Mean
Source DF Squares Square F Value Pr > F Model 6 0.78035 0.13006 150.85 <.0001 Error 29 0.02500 0.00086218 Corrected Total 35 0.80535
Root MSE           0.02936    R-Square    0.9690
Dependent Mean    -0.00371    Adj R-Sq    0.9625
Coeff Var      -791.75086
65 continued... OLS Estimates
Parameter Estimates

                  Parameter    Standard
Variable    DF    Estimate     Error      t Value    Pr > |t|

Intercept    1    -17.27084    1.71977    -10.04     <.0001
lny          1      1.94035    0.19785      9.81     <.0001
lnpg         1     -0.11398    0.03409     -3.34     0.0023
lnpnc        1     -0.11773    0.07654     -1.54     0.1349
lnpuc        1      0.21874    0.15677      1.40     0.1735
lnppt        1     -0.03906    0.08444     -0.46     0.6471
t            1     -0.01747    0.00701     -2.49     0.0188
66 The REG Procedure Output
Model: MODEL1 Dependent Variable: lnpcg
Durbin-Watson D               0.786
Pr < DW                      <.0001
Pr > DW                      1.0000
Number of Observations           36
1st Order Autocorrelation     0.601
What is the conclusion regarding serial correlation based on the Durbin-Watson statistic?
67 [Figure: plot of RESID for the lnpcg model over 1960-1995]

68
Name of Variable = lnpcgres
Mean of Working Series     9.68E-16
Standard Deviation         0.026354
Number of Observations           36

Autocorrelations (note the MA(1) pattern)

Lag    Covariance     Correlation    Std Error
  0    0.00069453       1.00000
  1    0.00041753       0.60117      0.166667
  2    0.00004865       0.07004      0.218760
  3    -0.0001150       -.16552      0.219382
  4    -0.0001336       -.19242      0.222824
  5    -0.0000673       -.09685      0.227393
  6    0.00004871       0.07014      0.228536
  7    -8.1403E-7       -.00117      0.229133
  8    -0.0001372       -.19760      0.229133
  9    -0.0002564       -.36917      0.233818

Statistically significant autocorrelation and partial autocorrelation coefficients

69 AR(1), AR(2) Partial Autocorrelations
Lag    Correlation
  1      0.60117
  2     -0.45626
  3      0.09383
  4     -0.11796
  5      0.07910
  6      0.10048
  7     -0.33312
  8     -0.02557
  9     -0.31678
70 continued...
Autocorrelation Check for White Noise
 To      Chi-            Pr >
Lag    Square    DF     ChiSq    ------------Autocorrelations------------
  6     17.68     6    0.0071     0.601   0.070  -0.166  -0.192  -0.097   0.070
Statistically significant Ljung-Box Q* statistic. Thus, the pattern in the residuals is not random or white noise.
72 The AUTOREG Procedure
Dependent Variable lnpcg
Ordinary Least Squares Estimates
SSE              0.02500325    DFE                   29
MSE               0.0008622    Root MSE         0.02936
SBC              -134.55346    AIC            -145.6381
Regress R-Square     0.9690    Total R-Square    0.9690
Normal Test          0.5997    Pr > ChiSq        0.7409
Log Likelihood   79.8190475    Observations          36
Durbin-Watson Statistics
Order DW Pr < DW Pr > DW
  1     0.7859    <.0001    1.0000
  2     1.8415    0.1645    0.8355
73 continued... Godfrey's Serial Correlation Test (AR(1), AR(2) terms)

Alternative       LM       Pr > LM
AR(1)          15.0740      0.0001
AR(2)          18.5985      <.0001
                  Standard               Approx
Variable    DF    Estimate    Error      t Value    Pr > |t|

Intercept    1    -17.2708    1.7198     -10.04     <.0001
lny          1      1.9404    0.1979       9.81     <.0001
lnpg         1     -0.1140    0.0341      -3.34     0.0023
lnpnc        1     -0.1177    0.0765      -1.54     0.1349
lnpuc        1      0.2187    0.1568       1.40     0.1735
lnppt        1     -0.0391    0.0844      -0.46     0.6471
t            1     -0.0175    0.007014    -2.49     0.0188
74 The AUTOREG Procedure
Estimates of Autocorrelations

Lag    Covariance    Correlation
  0     0.000695      1.000000
  1     0.000418      0.601169
  2     0.000049      0.070040
Partial Autocorrelations

  1     0.601169
  2    -0.456258

Estimates of φ_1, φ_2. Preliminary MSE 0.000351
Estimates of Autoregressive Parameters
Standard
Lag Coefficient Error t Value
  1    -0.875457    0.171251    -5.11
  2     0.456258    0.171251     2.66

75 continued... Yule-Walker Estimates
SSE 0.01185535 DFE 27 MSE 0.0004391 Root MSE 0.02095 SBC -153.33526 AIC -167.58693 Regress R-Square 0.9558 Total R-Square 0.9853 Log Likelihood -27.396156 Observations 36
Durbin-Watson Statistics

Order      DW      Pr < DW    Pr > DW
  1      1.7378     0.0773     0.9227
  2      2.2225     0.5771     0.4229

After you take into account this pattern in the residuals, the serial correlation problem no longer is evident.
NOTE: Pr<DW is the p-value for testing positive autocorrelation, and Pr>DW is the p-value for testing negative autocorrelation.
Godfrey's Serial Correlation Test

Alternative          LM    Pr > LM
AR(1)           1.08521     0.2975
AR(2)           2.32482     0.3127
                                 Standard                   Approx
Variable     DF     Estimate        Error     t Value     Pr > |t|

Intercept     1     -15.7683       1.8017       -8.75       <.0001
lny           1       1.7663       0.2067        8.55       <.0001
lnpg          1      -0.0863       0.0353       -2.44       0.0214
lnpnc         1      -0.1398       0.0863       -1.62       0.1171
lnpuc         1       0.1872       0.1874        1.00       0.3267
lnppt         1      -0.0733       0.1007       -0.73       0.4729
t             1      -0.0110     0.007215       -1.53       0.1387
Estimates of Autocorrelations
Lag    Covariance    Correlation
  0      0.000695       1.000000
  1      0.000418       0.601169
  2      0.000049       0.070040
Partial Autocorrelations

Lag    Correlation
  1       0.601169     Starting values in the ML procedure
  2      -0.456258     to obtain estimates of φ1, φ2

Preliminary MSE    0.000351
Estimates of Autoregressive Parameters
                       Standard
Lag    Coefficient        Error    t Value

  1      -0.875457     0.171251      -5.11
  2       0.456258     0.171251       2.66
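The autoregressive parameters above can be reproduced from the sample autocorrelations by solving the order-2 Yule-Walker equations. A minimal Python sketch follows; note that PROC AUTOREG prints the coefficients with the opposite sign under its parameterization, and that φ2 is also the lag-2 partial autocorrelation shown above:

```python
def yule_walker_ar2(r1, r2):
    """Solve the order-2 Yule-Walker equations
         r1 = phi1 + phi2 * r1
         r2 = phi1 * r1 + phi2
    for (phi1, phi2); phi2 equals the lag-2 partial autocorrelation."""
    denom = 1.0 - r1 ** 2
    phi1 = r1 * (1.0 - r2) / denom
    phi2 = (r2 - r1 ** 2) / denom
    return phi1, phi2

# Sample autocorrelations at lags 1 and 2, transcribed from the output above.
phi1, phi2 = yule_walker_ar2(0.601169, 0.070040)
print(round(phi1, 5), round(phi2, 5))  # 0.87546 -0.45626
```

These magnitudes match the table (0.875457 and 0.456258); SAS's sign convention flips them because it writes the AR polynomial with negated coefficients.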
Maximum Likelihood Estimates
SSE                0.01174976    DFE                      27
MSE                 0.0004352    Root MSE            0.02086
SBC               -153.49238     AIC             -167.74405
Regress R-Square       0.9580    Total R-Square       0.9854
Log Likelihood     92.8720272    Observations             36
Durbin-Watson Statistics
Order DW Pr < DW Pr > DW
    1    1.8195     0.1255     0.8745
    2    2.2171     0.5664     0.4336
NOTE: Pr<DW is the p-value for testing positive autocorrelation, and Pr>DW is the p-value for testing negative autocorrelation.
Godfrey's Serial Correlation Test
Alternative          LM    Pr > LM
AR(1)            0.7457     0.3878
AR(2)            2.0349     0.3615

ML Estimates
                                 Standard                   Approx
Variable     DF     Estimate        Error     t Value     Pr > |t|

Intercept     1     -15.8652       1.9232       -8.25       <.0001
lny           1       1.7770       0.2202        8.07       <.0001
lnpg          1      -0.0816       0.0355       -2.30       0.0295
lnpnc         1      -0.1494       0.0878       -1.70       0.1001
lnpuc         1       0.2007       0.2035        0.99       0.3328
lnppt         1      -0.0808       0.1069       -0.76       0.4561
t             1      -0.0109     0.007223       -1.50       0.1445
AR1           1      -0.9159       0.1693       -5.41       <.0001
AR2           1       0.5194       0.1732        3.00       0.0058
Autoregressive parameters assumed given.
                                 Standard                   Approx
Variable     DF     Estimate        Error     t Value     Pr > |t|

Intercept     1     -15.8652       1.7796       -8.92       <.0001
lny           1       1.7770       0.2040        8.71       <.0001
lnpg          1      -0.0816       0.0347       -2.35       0.0264
lnpnc         1      -0.1494       0.0866       -1.73       0.0957
lnpuc         1       0.2007       0.1889        1.06       0.2976
lnppt         1      -0.0808       0.1006       -0.80       0.4285
t             1      -0.0109     0.007092       -1.53       0.1375
Depiction of the Estimated Model for the Demand for Gasoline Problem (Greene)

The estimated model is based on the Maximum Likelihood estimates:

ln pcg_t = −15.8652 + 1.7770 ln y_t − 0.0816 ln pg_t − 0.1494 ln pnc_t + 0.2007 ln puc_t − 0.0808 ln ppt_t − 0.0109 t + v_t

v_t = 0.9159 v_{t−1} − 0.5194 v_{t−2} + ε_t

Regress R-Square = 0.9580    Total R-Square = 0.9854
MSE = 0.0004352 = σ̂²
DW order 1 = 1.8195    DW order 2 = 2.2171
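To make the depiction concrete, the fitted equation can be coded directly. This Python sketch simply transcribes the ML coefficients above; the example inputs are arbitrary illustrations, not data from the Greene problem:

```python
def structural_fit(lny, lnpg, lnpnc, lnpuc, lnppt, t):
    """Structural part of the estimated demand equation; returns ln pcg_t."""
    return (-15.8652 + 1.7770 * lny - 0.0816 * lnpg - 0.1494 * lnpnc
            + 0.2007 * lnpuc - 0.0808 * lnppt - 0.0109 * t)

def ar2_error(v1, v2):
    """One step of the AR(2) error recursion: v_t = 0.9159 v_{t-1} - 0.5194 v_{t-2}."""
    return 0.9159 * v1 - 0.5194 * v2

# With every regressor at zero, the fit collapses to the intercept:
print(structural_fit(0, 0, 0, 0, 0, 0))    # -15.8652
# Error forecast from two hypothetical lagged disturbances:
print(round(ar2_error(0.1, 0.2), 5))       # -0.01229
```

A one-step-ahead forecast would add the AR(2) error forecast to the structural fit, which is exactly how the two displayed equations combine.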
Section 6.6
A Test for Serial Correlation in the Presence of a Lagged Dependent Variable

Durbin's h-Test

A large-sample test for autocorrelation when lagged dependent variables are present.

H0: ρ = 0, against an AR(1) process in the error terms

ρ̂ ≈ 1 − (1/2)d,  where d is the DW statistic

h = ρ̂ √( n / (1 − n·V(β̂)) ),  where V(β̂) is the estimated variance of the coefficient associated with Yt−1

h ~ N(0, 1) approximately.

The test breaks down if n·V(β̂) ≥ 1.
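The breakdown condition motivates an auxiliary-regression fallback based on regressing the residual on its own lag. The sketch below is deliberately simplified and uses a toy residual series: it includes only the lagged residual, whereas Durbin's full auxiliary regression also includes Yt−1 and the exogenous variables.

```python
def lag1_slope(resid):
    """OLS slope from regressing resid[t] on resid[t-1] with an intercept.
    Simplified stand-in for the auxiliary regression (the full version
    also includes Y_{t-1} and the exogenous variables)."""
    y, x = resid[1:], resid[:-1]
    n = len(y)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    return sxy / sxx

# Toy residuals built to satisfy u_t = 0.5 u_{t-1} exactly recover slope 0.5:
u = [1.0]
for _ in range(9):
    u.append(0.5 * u[-1])
print(lag1_slope(u))  # 0.5
```

A significant slope on the lagged residual is evidence of ρ ≠ 0, which is the logic of the m-test described next.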
If Durbin's h-test breaks down, compute the OLS residuals ût. Then regress ût on ût−1, Yt−1, and the set of exogenous variables. The test of ρ = 0 is carried out by testing the significance of the coefficient on ût−1. (Durbin's m-test)

The REG Procedure Output

Model: MODEL1
Dependent Variable: lnpcg
Number of Observations Read 36 Number of Observations Used 35 Number of Observations with Missing Values 1
Analysis of Variance
                           Sum of         Mean
Source            DF      Squares       Square    F Value    Pr > F

Model              7      0.68226      0.09747     210.45    <.0001
Error             27      0.01250   0.00046312
Corrected Total   34      0.69476
Root MSE            0.02152    R-Square    0.9820
Dependent Mean      0.00566    Adj R-Sq    0.9773
Coeff Var         380.20921
OLS Estimates

Parameter Estimates
                    Parameter     Standard
Variable     DF      Estimate        Error     t Value     Pr > |t|

Intercept     1      -6.81147      2.38667       -2.85       0.0082
laglnpcg      1       0.63554      0.12456        5.10       <.0001
lny           1       0.77175      0.26878        2.87       0.0079
lnpg          1      -0.11817      0.02525       -4.68       <.0001
lnpnc         1       0.12623      0.07379        1.71       0.0986
lnpuc         1      -0.13859      0.13861       -1.00       0.3263
lnppt         1       0.05087      0.06490        0.78       0.4400
t             1      -0.01057      0.00541       -1.95       0.0612

Presence of a lagged dependent variable (laglnpcg).
The REG Procedure Output
Model: MODEL1 Dependent Variable: lnpcg
Durbin-Watson D                  1.639
Pr < DW                         0.0079
Pr > DW                         0.9921
Number of Observations              35
1st Order Autocorrelation        0.180
NOTE: Pr<DW is the p-value for testing positive autocorrelation, and Pr>DW is the p-value for testing negative autocorrelation.
The DW statistic reveals positive serial correlation, AR(1) pattern.
Name of Variable = lnpcgres
Mean of Working Series      -66E-17
Standard Deviation         0.018902
Number of Observations           35
Autocorrelations
Lag    Covariance    Correlation    Std Error
  0    0.00035727        1.00000            0
  1    0.00006445        0.18040     0.169031
  2    -0.0001012       -0.28328     0.174445
  3    -0.0000410       -0.11466     0.187127
  4    -0.0000612       -0.17122     0.189124
  5    0.00002013        0.05634     0.193502
  6    0.00007480        0.20936     0.193970
  7    0.00002450        0.06858     0.200323
  8    -0.0000229       -0.06416     0.200992
From the ACF and PACF plots, no pattern in the residuals is revealed.

Partial Autocorrelations

Lag    Correlation
  1        0.18040
  2       -0.32645
  3        0.01390
  4       -0.27677
  5        0.15471
  6        0.02360
  7        0.07670
  8       -0.06233
The ARIMA Procedure Autocorrelation Check for White Noise
 To        Chi-              Pr >
Lag      Square     DF      ChiSq    ------------------Autocorrelations------------------

  6        8.24      6     0.2211    0.180   -0.283   -0.115   -0.171   0.056   0.209
The Ljung-Box Q* statistic suggests a white-noise residual series. Durbin's h-test, at least at the 0.10 level of significance, suggests a non-random residual series.

The AUTOREG Procedure

Dependent Variable    lnpcg
Ordinary Least Squares Estimates
SSE                0.01250434    DFE                      27
MSE                 0.0004631    Root MSE            0.02152
SBC               -150.02748     AIC             -162.47027
Regress R-Square       0.9820    Total R-Square       0.9820
Durbin h               1.5788    Pr > h               0.0572
Normal Test            1.3963    Pr > ChiSq           0.4975
Log Likelihood     89.2351336    Observations             35
Durbin-Watson          1.6391
Godfrey's Serial Correlation Test
Alternative LM Pr > LM
AR(1)            1.7213     0.1895

                                 Standard                   Approx
Variable     DF     Estimate        Error     t Value     Pr > |t|

Intercept     1      -6.8115       2.3867       -2.85       0.0082
laglnpcg      1       0.6355       0.1246        5.10       <.0001
lny           1       0.7717       0.2688        2.87       0.0079
lnpg          1      -0.1182       0.0252       -4.68       <.0001
lnpnc         1       0.1262       0.0738        1.71       0.0986
lnpuc         1      -0.1386       0.1386       -1.00       0.3263
lnppt         1       0.0509       0.0649        0.78       0.4400
t             1      -0.0106     0.005411       -1.95       0.0612
The AUTOREG Procedure Output

Partial Autocorrelations

  1    0.180402     Starting value of φ1 in the ML procedure
Preliminary MSE 0.000346
Estimates of Autoregressive Parameters
                       Standard
Lag    Coefficient        Error    t Value

  1      -0.180402     0.192898      -0.94
Algorithm converged.

Maximum Likelihood Estimates
SSE                0.00983118    DFE                      26
MSE                 0.0003781    Root MSE            0.01945
SBC               -153.04594     AIC             -167.04408
Regress R-Square       0.8412    Total R-Square       0.9858
Log Likelihood     92.5220382    Observations             35
Durbin-Watson          1.7302
Godfrey's Serial Correlation Test
Alternative LM Pr > LM
AR(1)            2.3826     0.1227
GLS (ML) Estimates
                                 Standard                   Approx
Variable     DF     Estimate        Error     t Value     Pr > |t|

Intercept     1      -8.3477       2.5227       -3.31       0.0027
laglnpcg      1       0.2081       0.1299        1.60       0.1213
lny           1       0.9295       0.2890        3.22       0.0035
lnpg          1      -0.2157       0.0407       -5.31       <.0001
lnpnc         1       0.0259       0.0797        0.33       0.7476
lnpuc         1       0.0213       0.2016        0.11       0.9167
lnppt         1       0.0425       0.0933        0.46       0.6525
t             1    -0.002830       0.0122       -0.23       0.8187
AR1           1      -0.9175       0.1324       -6.93       <.0001
The AUTOREG Procedure Output
Autoregressive parameters assumed given.
                                 Standard                   Approx
Variable     DF     Estimate        Error     t Value     Pr > |t|

Intercept     1      -8.3477       2.2511       -3.71       0.0010
laglnpcg      1       0.2081       0.1223        1.70       0.1008
lny           1       0.9295       0.2567        3.62       0.0012
lnpg          1      -0.2157       0.0373       -5.79       <.0001
lnpnc         1       0.0259       0.0766        0.34       0.7377
lnpuc         1       0.0213       0.1728        0.12       0.9029
lnppt         1       0.0425       0.0905        0.47       0.6426
t             1    -0.002830     0.009249       -0.31       0.7621
Calculation of Durbin's h-Test for the Greene Problem
In the Greene Problem for gasoline demand:

ρ̂ = 1 − (1.639/2) = 0.1805

n = 35

V(β̂) = (0.12456)² = 0.0155

h = 0.1805 × √( 35 / (1 − 35 × 0.0155) ) = 1.5788
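The arithmetic above can be replicated in a few lines of Python. This is an illustrative check using the same rounded inputs as the slide (d = 1.639 and V(β̂) = 0.0155), not SAS output:

```python
from math import sqrt

def durbin_h(d, n, v_beta):
    """Durbin's h: rho_hat * sqrt(n / (1 - n*V(beta_hat))), with rho_hat ~ 1 - d/2.
    The test breaks down (h is undefined) when n*V(beta_hat) >= 1."""
    if n * v_beta >= 1:
        raise ValueError("h-test breaks down; use Durbin's m-test instead")
    return (1 - d / 2) * sqrt(n / (1 - n * v_beta))

h = durbin_h(d=1.639, n=35, v_beta=0.0155)
print(round(h, 4))  # 1.5788, matching the SAS "Durbin h" value above
```

Since h is approximately standard normal under H0, the one-sided p-value of 0.0572 in the AUTOREG output follows directly from h = 1.5788.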
Analysis of Variance
                           Sum of         Mean
Source            DF      Squares       Square    F Value    Pr > F

Model              7    0.00061193   0.00008742      0.19    0.9848
Error             26       0.01189   0.00045736
Corrected Total   33       0.01250
Root MSE              0.02139    R-Square     0.0489
Dependent Mean    -0.00002950    Adj R-Sq    -0.2071
Coeff Var              -72488
Parameter Estimates

                                  Parameter     Standard
Variable     Label        DF       Estimate        Error    t Value    Pr > |t|

Intercept    Intercept     1       -1.10845      2.12226      -0.52      0.6059
lnpcgres1                  1        0.26893      0.23259       1.16      0.2581
laglnpcg                   1       -0.08946      0.13979      -0.64      0.5278
lny                        1        0.12258      0.23732       0.52      0.6098
lnpg                       1        0.00625      0.02344       0.27      0.7918
lnpnc                      1       -0.02989      0.05719      -0.52      0.6056
lnppt                      1       -0.00223      0.06428      -0.03      0.9726
t                          1     0.00038584      0.00450       0.09      0.9324

Section 6.7
Summary Remarks about the Issue of Serial Correlation

Final Considerations

With time-series data, in most cases this problem will surface. Analysts must examine the error structure carefully. Minimally, do the following:
– Graph the residuals over time.
– Consider the significance of the Durbin-Watson statistic.
– Consider higher-order autocorrelation structure via PROC ARIMA.
– Consider the Godfrey LM test.
– Consider the Box-Pierce or Ljung-Box tests (Q statistics).
– Re-estimate econometric models with AR(p) error structures via PROC AUTOREG.
  – Use the Yule-Walker or Maximum Likelihood method to obtain estimates of the AR(p) error structure.
  – A preference exists for the Maximum Likelihood method.
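As a starting point for the residual checks listed above, the Durbin-Watson statistic itself is easy to compute outside SAS. A minimal Python sketch with made-up residual series (not data from this chapter):

```python
def durbin_watson(e):
    """DW = sum_{t=2}^{n} (e_t - e_{t-1})^2 / sum_{t=1}^{n} e_t^2.
    Values near 2 suggest no first-order autocorrelation; values near 0
    suggest positive autocorrelation; values near 4, negative."""
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    den = sum(et ** 2 for et in e)
    return num / den

# Alternating residuals (negative autocorrelation) push DW toward 4:
print(durbin_watson([1.0, -1.0, 1.0, -1.0]))          # 3.0
# A smooth, slowly drifting series (positive autocorrelation) pushes DW toward 0:
print(round(durbin_watson([1.0, 1.1, 1.2, 1.3]), 4))  # 0.0056
```

This is the statistic PROC AUTOREG and PROC REG report; the procedures additionally supply the Pr<DW and Pr>DW p-values seen throughout this chapter.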