
WHAT HAPPENED TO MACROECONOMETRIC MODELS?†

Testing Macroeconometric Models

By RAY C. FAIR*

†Discussants: Olivier Blanchard, Massachusetts Institute of Technology; William Brainard, Yale University.

*Cowles Foundation, Yale University, Box 2125, Yale Station, New Haven, CT 06520.

Interest in research topics in different fields fluctuates over time, and the field of macroeconomics is no exception. From Jan Tinbergen's (1939) model-building in the late 1930's through work in the 1960's, there was considerable interest in the construction of structural macroeconomic models. The dominant methodology of this period was what I will call the "Cowles Commission" approach. Structural econometric models were specified, estimated, and then analyzed and tested in various ways. One of the major macroeconometric efforts of the 1960's, building on the earlier work of Lawrence Klein (1950) and Klein and Arthur Goldberger (1955), was the Brookings model (James Duesenberry et al., 1965, 1969). This model was a joint effort of many individuals, and at its peak it contained nearly 400 equations. Although much was learned from this exercise, the model never achieved the success that was initially expected, and it was laid to rest around 1972.

Two important events in the 1970's contributed to the decline in popularity of the Cowles Commission approach. The first was the commercialization of macroeconometric models. This changed the focus of research on the models. Basic research gave way to the day-to-day needs of keeping the models up-to-date, of subjectively adjusting the forecasts to make them "reasonable," and of meeting the special needs of clients. The second event was Robert Lucas's (1976) critique, which argued that the models are not likely to be useful for policy purposes. The Lucas critique led to a line of research that culminated in real-business-cycle theories, which in turn generated a counter-response in the form of new Keynesian economics.

When I was asked to organize a session entitled "What Happened to Macroeconometric Models?" it was left ambiguous (at least to me) whether or not the premise of the session should be that macroeconometric models had died, with the task of the session being to examine why. My interest in structural macroeconomic model-building began when I was a graduate student at M.I.T. in the mid-1960's. This was a period when there was still interest in the Brookings model project and when intensive work was being carried out on the MPS (M.I.T.-Penn-SSRC) model. Many hours were spent by many students in the basement of the Sloan building at M.I.T. working on various macroeconometric equations using an IBM 1620 computer (punch cards and all). This was also the beginning of the development of TSP (Time Series Processor), a computer program that provided an easy way of using various econometric techniques. The program was initiated by Robert Hall, and it soon attracted many others to help in its development. I played a minor role in this work.

Perhaps because of fond memories of my time in the basement of Sloan, I have never lost interest in structural models. I continue to believe that the Cowles Commission approach is the best way of trying to learn how the macroeconomy works, and I have continued to try to make progress using this approach. My view is thus that macroeconometric models have not died, even though there has been limited academic interest in these models in the last 20 years. I have argued elsewhere (Fair, 1992) that macroeconomics has not been well served by the real-business-cycle approach, which is not interested in testing models in a serious way, and by new Keynesian economics, which has moved macroeconomics away from its econometric base.

This paper is a brief review of the progress that I feel has been made in the development of macroeconometric techniques in the last two decades. It is written for those who have paid little attention to the field for many years and would like a general idea of what has been going on. Because of space limitations, this is not an extensive review, and only a few references are given. More extensive discussions and lists of references are contained in Fair (1984, 1993). I argue that progress has been made in the last two decades in improving the ability of researchers to estimate, test, and analyze macroeconometric models. In particular, progress has been made in testing, and this is emphasized below. I hope in the next two decades that the Cowles Commission approach will attract more academic interest and that more attention will be given to testing and improving structural models.

I. Estimation, Stochastic Simulation, and Rational Expectations

The following notation is used. The (nonlinear) model is written as

(1)  f_i(y_t, x_t, α_i) = u_it,   i = 1,...,n;  t = 1,...,T,

where y_t is an n-dimensional vector of endogenous variables, x_t is a vector of predetermined variables (including lagged endogenous variables), α_i is a vector of unknown coefficients, and u_it is the error term for equation i for observation t. It is assumed that the first m equations are stochastic, with the remaining u_it (i = m+1,...,n) identically zero for all t. The T-dimensional vector (u_i1,...,u_iT) will be denoted by u_i, and Σ will denote the m × m covariance matrix of u_t.

Advances in computational techniques and computer hardware have considerably lessened the computational burden of working with large-scale models. William Parke's (1982) algorithm opened up the possibility of estimating large-scale nonlinear models by three-stage least squares (3SLS) and full-information maximum likelihood (FIML), and with current personal computers like the 486's, estimation of a large-scale model by 3SLS or FIML requires at most a few hours of computer time. Estimation by two-stage least squares (2SLS) is almost instantaneous, and estimation using a robust estimator like two-stage least absolute deviations (2SLAD) is also very fast with the use of a computational trick.

The availability of fast, inexpensive computers has made stochastic simulation of macroeconometric models routine, and as discussed below, this has greatly expanded the kinds of research that can be done on these models. Stochastic simulation requires that an assumption be made about the distribution of u_t. It is usually assumed that u_t is an independently and identically distributed multivariate normal N(0, Σ), although other assumptions can be used. Given consistent estimates of α_i for all i (denoted α̂_i), the covariance matrix Σ can be estimated as Σ̂ = (1/T)ÛÛ′, where Û is the m × T matrix of values of û_it, with û_it = f_i(y_t, x_t, α̂_i). Given the estimate Σ̂, error terms can be drawn from the N(0, Σ̂) distribution.

Coefficients can also be drawn in stochastic-simulation work. Let α̂ denote the vector of all the coefficient estimates in the model, and let V̂ denote the estimated covariance matrix of α̂. (V̂ obviously depends on the estimation technique used.) Given V̂ and given, say, the normality assumption, coefficients can be drawn from the N(α̂, V̂) distribution. Exogenous-variable values can also be drawn for stochastic simulations once an assumption is made about the stochastic nature of the exogenous variables.

It is now possible to handle the rational-expectations (RE) assumption in macroeconometric models. Expected values of endogenous variables for future periods can appear as explanatory variables in the stochastic equations. If expectations are rational, they are based on the model and on information up to the beginning of the current period. In other words, under the RE assumption, the expected values are the predicted values from the model: the expectations are "model consistent." Single-equation estimation of models with RE is possible using Lars Hansen's (1982) method-of-moments estimator, which is a modified version of 2SLS. Solution and FIML estimation are possible using techniques discussed in Fair and John Taylor (1983). Solution of models with RE is difficult because future predicted values affect current predicted values. An iterative technique is needed that iterates over solution paths of the endogenous variables. Even given this difficulty, however, most techniques are computationally feasible for models with RE, including stochastic simulation.

II. Testing Single Equations

Testing macroeconometric equations and models is very difficult, which is one of the main reasons why there is so much disagreement in macroeconomics about how the economy works. Lurking everywhere is the potential problem of "data mining": finding an equation or model that fits well within the estimation period but is in fact a poor approximation of the data-generating process. Another difficulty is that models can be based on different sets of exogenous variables, and controlling for these differences in making comparisons across models is not straightforward. Nevertheless, there are many tests available, both for single equations and for complete models.

First, it is possible to examine whether the asymptotic approximations of the distributions of the estimators that are used for hypothesis-testing are accurate. If some of the variables are not stationary, the asymptotic approximations may not be very good. In fact, much of the recent literature in time-series econometrics has been concerned with the consequences of nonstationary variables. The procedure for examining accuracy is to use stochastic simulation and reestimation to get a good approximation of the exact distribution of the estimates and then to compare this distribution to the asymptotic distribution. Take, say, the 2SLS estimates as the base coefficient values, and compute Σ̂ using these estimates. From the N(0, Σ̂) distribution, draw a vector of the m error terms for each of the T observations. Given these error terms and the 2SLS coefficient estimates, solve the model for the entire period 1 through T. This is a dynamic simulation (i.e., one in which the lagged values of the endogenous variables are updated as the solution proceeds). The predicted values from this solution form a new data set. Estimate the model by 2SLS using this data set, and record the set of estimates. This is one repetition. Repeat the draws, solution, and estimation for many repetitions, and record each set of estimates. If J repetitions are done (where J is a number like 500 or 1,000), one has J values of each coefficient estimate, which are likely to be a good approximation of the exact distribution. This distribution can then be compared to the asymptotic distribution.

Using this procedure, I have found that the estimates of the coefficients of lagged dependent variables are usually biased downward, something that has been known for simple equations since the late 1940's. It is possible to correct for this bias by obtaining "median unbiased estimates" using a modified version of the procedure discussed in Donald Andrews (1993). I have found after correcting for this bias that the exact-distribution approximations are close to the asymptotic distributions. In this sense nonstationarity does not appear to be a problem in macroeconometric models.

A straightforward way of testing the specification of an equation is to add variables to it and test their significance. For the 2SLS estimator, a chi-square test can be used. For example, a test of the dynamic specification of an equation is to add lagged values of the left-hand-side and all right-hand-side variables and test whether they are significant. David Hendry et al. (1984) show that adding these lagged values is quite general in that it encompasses many different types of dynamic specifications. If the lagged values are not significant, this is strong support for the dynamic specification.

Another test of the structure of an equation is to add a time trend to the equation. Long before unit roots and cointegration became popular, model-builders worried about picking up spurious correlation from common trending variables. If adding a time trend substantially changes some of the coefficient estimates, this is cause for concern.

A third test is to estimate an equation under the assumption that its error term follows an autoregressive process of order n, where n is a number around 4. Many equations are estimated assuming a first-order process, and if adding a fourth-order process results in a significant increase in explanatory power, this is evidence that the serial-correlation properties of the error term have not been properly accounted for.

A fourth test is to add values led one or more times to the equation, estimate the equation using Hansen's (1982) method, and test whether the led values are significant. Again, a chi-square test is available for this purpose. If the led values are not statistically significant, this is evidence against the RE hypothesis. If the led values are significant, this suggests that expectations have not been adequately accounted for.

A fifth test is simply to add variables that might belong in the equation (according to some theories), and test for their significance. For example, I have found age-distribution variables to be significant in aggregate equations and have added these variables to the equations.

One of the most important issues to examine about an equation is whether its coefficients change over time (i.e., whether the structure is stable over time). A common test of structural stability is to pick a date at which the structure is hypothesized to have changed and then test the hypothesis that a change occurred at this date. The test is usually an F or chi-square test. Recently, however, Donald Andrews and Werner Ploberger (1992; henceforth, AP) have proposed a test that does not require that the date of the structural change be chosen a priori, and I have found this test to be very useful. The hypothesis tested is that a structural change occurred between observations T_1 and T_2, where T_1 is close to 1 and T_2 is close to T. The AP test statistic is a weighted average of the chi-square values for each possible split in the sample period between T_1 and T_2. Asymptotic critical values for this statistic are provided in the AP paper. If the AP value is significant, which means that the hypothesis of structural stability is rejected, it may be of interest to examine the individual chi-square values to see at what observation the largest value occurred. This is likely to give one a general idea of where the structural change occurred, even though the AP test itself does not pin down the exact date.

I have found that few macroeconomic equations pass all the above tests. If any equation does not pass a test, it is not always clear what should be done. If, for example, the hypothesis of structural stability is rejected, one possibility is to divide the sample period into two parts and estimate two separate equations. The resulting coefficient estimates, however, are not always sensible in terms of what one would expect from theory. Similarly, when the additional lagged values are significant, the equation with the additional lagged values does not always have what one would consider sensible dynamic properties. In other words, when an equation fails a test, the change in the equation that the test results suggest may not produce what seem to be sensible results. In many cases one may stay with the original equation even though it failed the test. My feeling (being optimistic) is that much of this difficulty is due to small-sample problems, which will lessen over time as the sample sizes increase, but this is an important area for future research.

III. Testing Complete Models

When testing complete structural models, it is useful to have benchmark models to use for comparison purposes. Vector autoregressive (VAR) models provide useful benchmarks. If the interest is in GDP predictions, however, I have found "autoregressive components" (AC) models to be better benchmarks than VAR models in the sense of being more accurate. An AC model is one in which each component of GDP is regressed on its own lagged values and lagged values of GDP. GDP is then determined from the GDP identity as the sum of the components. AC models do not have the problem, as VAR models do, of adding large numbers of parameters as the number of variables (components in the AC case) is increased.

Stochastic simulation allows one to compute forecast-error variances. Each repetition consists of draws of the structural error terms and (possibly) the coefficients. Given these draws, given the values of the exogenous variables, and given the initial conditions, the model is solved dynamically over the period of interest, say an eight-quarter period. This gives a solution value of each endogenous variable for each of the eight quarters. If this is done J times (where again J is a number like 500 or 1,000), one has J solution values for each variable and quarter. From these values one can compute means and variances for each variable and quarter.

One might think that forecast-error variances computed in this way could simply be compared across models to see which variances are smaller. There are, however, two additional problems. The first is controlling for different sets of exogenous variables across models (VAR and AC models, for example, have no exogenous variables). This can be done in a variety of ways. One is to estimate autoregressive equations for each exogenous variable and to add these equations to the model. The expanded model can then be stochastically simulated to get the variances. Another way is to estimate in some manner the forecast-error variance for each exogenous variable (perhaps using past errors made by forecasting services in forecasting the variable) and then to use these estimates and the normality assumption to draw exogenous-variable values for the stochastic simulation.

The second problem is the possibility of data mining. A model may have small estimated variances of the structural error terms and small estimated variances of the coefficient estimates (which lead to small forecast-error variances from the stochastic simulation) because it has managed spuriously to fit the sample well. A further step is needed to handle this problem, which is to compare variances computed from outside-sample forecast errors with variances computed from stochastic simulation. If this is done over a number of sample periods, it is possible to estimate adjustments to the forecast-error variances.

If both of these problems are taken care of, the final estimated forecast-error variances have accounted for the four main sources of uncertainty of a forecast: the error terms, coefficient estimates, exogenous variables, and possible misspecification of the model (i.e., possible data mining). They can therefore be compared across models. More details are given in Fair (1980).

Another way to compare models, discussed in Fair and Robert Shiller (1990), is to regress the actual value of a variable on a constant and the predicted values of the variable from different models. If one model's forecast contains all the information in another model's forecast plus some, then its forecast should be significant in this regression, and the other model's forecast should not. If both forecasts contain independent information, then both should be significant. If neither forecast contains useful information, then neither should be significant. This test is related to the literature on encompassing tests and the literature on the optimal combination of forecasts.

Stochastic simulation can be used to calculate the probability of various events happening. Say one is interested in the probability that within an eight-quarter period at least two successive quarters have negative GDP growth. Draw a set of error terms for the period and solve the model using these draws. Record whether or not there were at least two successive quarters of negative growth for the solution values. This is one repetition. Do J repetitions, and calculate the percentage of the J repetitions in which the event occurred. This percentage is the estimated probability, an estimate that is consistent with the probability structure of the model.

This procedure can be used for testing purposes. It is possible for a given event to compute a series of probability estimates and to compare these estimates to the actual outcomes (which are either 0 or 1). Various measures are available for computing the accuracy of the probabilities, and these measures can be compared across models to see which model's estimated probabilities best reflect the actual outcomes.

I have found that structural models generally do better than VAR and AC models in the above tests. Only limited work has been done, however, on comparing one structural model against another. Much might be learned in the future if more testing of structural models were done.

Note that in the process of testing models one is in effect testing the quantitative importance of the Lucas critique. If coefficients in a model change considerably when a policy variable changes and the model has not accounted for this, the model is misspecified and should not do well in tests.

IV. Analyzing Complete Models

A common way of analyzing macroeconometric models is to compute "multipliers." One or more exogenous variables are changed, and the effects of this change on the endogenous variables are computed. Stochastic simulation can be used to compute standard errors of these multipliers, and this is now a computationally routine matter. If J repetitions are made of a given change, one has J values of the change in each endogenous variable for each observation, and means and standard errors can be computed from these values. Computing standard errors of multipliers is useful because it allows one to gauge how much confidence to place on the results.

Stochastic simulation can be used to examine what a macroeconometric model says about the sources of economic fluctuations. One first computes a forecast-error variance drawing all the error terms and then computes the variance after taking one or more of the structural error terms as fixed. If the fixed error terms are uncorrelated with the other error terms, the difference between the two estimated variances is the amount of variation attributed to the fixed error terms. In practice the correlation of the error terms across equations is usually small, and so the assumption of no correlation is usually not very restrictive.

The optimal choice of monetary-policy instruments is another issue that can be examined using stochastic simulation. William Poole (1970) examined the optimal choice analytically in a stochastic IS-LM model, and stochastic simulation allows this to be done in larger models. Forecast-error variances of, say, GDP can be computed first fixing the short-term interest rate and second fixing the money supply, and then the variances can be compared.

Finally, optimal control problems are now fairly easy to solve using large-scale models. If one is willing to assume certainty equivalence even though the model is nonlinear, the optimal control problem can be set up as a standard unconstrained optimization problem, which can be solved by a number of numerical algorithms.

REFERENCES

Andrews, Donald W. K., "Exactly Median-Unbiased Estimation of First Order Autoregressive/Unit Root Models," Econometrica, January 1993, 61, 139-65.

Andrews, Donald W. K. and Ploberger, Werner, "Optimal Tests When a Nuisance Parameter Is Present Only Under the Alternative," Cowles Foundation Paper No. 1015, Yale University, February 1992.

Duesenberry, James S., Fromm, Gary, Klein, Lawrence R. and Kuh, Edwin, eds., The Brookings Quarterly Econometric Model of the United States, Chicago: Rand McNally, 1965.

Duesenberry, James S., Fromm, Gary, Klein, Lawrence R. and Kuh, Edwin, eds., The Brookings Model: Some Further Results, Chicago: Rand McNally, 1969.

Fair, Ray C., "Estimating the Expected Predictive Accuracy of Econometric Models," International Economic Review, June 1980, 21, 355-78.

Fair, Ray C., Specification, Estimation, and Analysis of Macroeconometric Models, Cambridge, MA: Harvard University Press, 1984.

Fair, Ray C., "The Cowles Commission Approach, Real Business Cycle Theories, and New-Keynesian Economics," in Michael T. Belongia and Michelle R. Garfinkel, eds., The Business Cycle: Theories and Evidence, Boston: Kluwer, 1992, pp. 133-47.

Fair, Ray C., "Testing Macroeconometric Models," unpublished manuscript, 1993.

Fair, Ray C. and Shiller, Robert J., "Comparing Information in Forecasts from Econometric Models," American Economic Review, June 1990, 80, 375-89.

Fair, Ray C. and Taylor, John B., "Solution and Maximum Likelihood Estimation of Dynamic Rational Expectations Models," Econometrica, July 1983, 51, 1169-85.

Hansen, Lars Peter, "Large Sample Properties of Generalized Method of Moments Estimators," Econometrica, July 1982, 50, 1029-54.

Hendry, David F., Pagan, Adrian R. and Sargan, J. Denis, "Dynamic Specification," in Z. Griliches and M. D. Intriligator, eds., Handbook of Econometrics, Amsterdam: North-Holland, 1984, pp. 1023-1100.

Klein, Lawrence R., Economic Fluctuations in the United States, 1921-1941, New York: Wiley, 1950.

Klein, Lawrence R. and Goldberger, Arthur S., An Econometric Model of the United States, 1929-1952, Amsterdam: North-Holland, 1955.

Lucas, Robert E., Jr., "Econometric Policy Evaluation: A Critique," in K. Brunner and A. H. Meltzer, eds., The Phillips Curve and Labor Markets, Amsterdam: North-Holland, 1976, pp. 19-46.

Parke, William R., "An Algorithm for FIML and 3SLS Estimation of Large Nonlinear Models," Econometrica, January 1982, 50, 81-95.

Poole, William, "Optimal Choice of Monetary Policy Instruments in a Simple Stochastic Macro Model," Quarterly Journal of Economics, May 1970, 84, 197-216.

Tinbergen, Jan, Statistical Testing of Business Cycle Theories, Geneva: League of Nations, 1939.
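The error-term and coefficient draws described here can be sketched in a few lines of numpy. All dimensions and numbers below are invented for illustration; in practice U_hat would hold the residuals from an estimated model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical residual matrix: m stochastic equations by T observations,
# as would come from u_hat[i, t] = f_i(y_t, x_t, alpha_hat_i).
m, T = 3, 120
U_hat = rng.standard_normal((m, T))

# Estimate the m x m covariance matrix as (1/T) * U_hat @ U_hat.T.
sigma_hat = (U_hat @ U_hat.T) / T

# Draw error terms for all T observations from N(0, sigma_hat).
error_draws = rng.multivariate_normal(np.zeros(m), sigma_hat, size=T)  # T x m

# Coefficients can be drawn the same way from N(alpha_hat, V_hat), where
# V_hat is the estimated covariance matrix of the coefficient estimates
# (both are made-up placeholder values here).
alpha_hat = np.array([0.5, -0.2])
V_hat = np.array([[0.04, 0.0], [0.0, 0.01]])
coef_draw = rng.multivariate_normal(alpha_hat, V_hat)
```

Each stochastic-simulation repetition would use one such set of error (and possibly coefficient) draws to solve the model.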
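The draw/solve/reestimate loop for approximating the exact distribution of the estimates can be illustrated with a deliberately tiny stand-in model: a single OLS-estimated equation with a lagged dependent variable. The full procedure uses 2SLS on a complete model; this sketch only shows the shape of one repetition:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny stand-in "model": y_t = a * y_{t-1} + u_t, estimated by OLS.
def estimate(y):
    return float(y[1:] @ y[:-1] / (y[:-1] @ y[:-1]))

def simulate(a, sigma, T, y0, rng):
    # Dynamic simulation: lagged values are updated as the solution proceeds.
    y = np.empty(T + 1)
    y[0] = y0
    for t in range(1, T + 1):
        y[t] = a * y[t - 1] + sigma * rng.standard_normal()
    return y

a_base, sigma_base, T = 0.9, 1.0, 50  # illustrative base coefficient, error s.d.

# One repetition = draw errors, solve the model over 1..T, reestimate.
# J = 1000 repetitions approximate the exact small-sample distribution.
reps = [estimate(simulate(a_base, sigma_base, T, 0.0, rng)) for _ in range(1000)]

# The median of the J estimates typically falls below the base value,
# reflecting the small-sample downward bias of lagged-dependent-variable
# coefficient estimates discussed in the text.
median_est = float(np.median(reps))
```

Comparing the empirical distribution of `reps` with the asymptotic normal approximation is the accuracy check described above.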
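A test of structural stability over all candidate break dates can be sketched as follows. This computes a Wald-type statistic for a slope change at every split in a trimmed sample and records the average value and the location of the largest value; it is only a simplified stand-in for the AP statistic, whose exact form and critical values are given in the AP paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data with a break in the slope halfway through the sample
# (coefficient 1.0 in the first half, 1.5 in the second; all invented).
T = 200
x = rng.standard_normal(T)
b = np.where(np.arange(T) < T // 2, 1.0, 1.5)
y = b * x + 0.5 * rng.standard_normal(T)

def split_stat(y, x, split):
    """Wald-type statistic for a slope change at `split` (simplified
    stand-in for the per-split chi-square value)."""
    b1 = x[:split] @ y[:split] / (x[:split] @ x[:split])
    b2 = x[split:] @ y[split:] / (x[split:] @ x[split:])
    e = np.concatenate([y[:split] - b1 * x[:split], y[split:] - b2 * x[split:]])
    s2 = e @ e / (T - 2)  # residual variance under the two-regime fit
    var = s2 * (1.0 / (x[:split] @ x[:split]) + 1.0 / (x[split:] @ x[split:]))
    return (b1 - b2) ** 2 / var

# Evaluate at every split between T1 and T2 (15% trimming at each end).
T1, T2 = int(0.15 * T), int(0.85 * T)
stats = np.array([split_stat(y, x, s) for s in range(T1, T2)])

avg_stat = float(stats.mean())          # average form of the statistic
best_split = T1 + int(stats.argmax())   # where the largest value occurred
```

As the text notes, the location of the largest per-split value gives a general idea of where the structural change occurred.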
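An AC benchmark model of this kind is easy to sketch. Here each of three synthetic "components" is regressed on a constant, its own lags, and lags of GDP, and the GDP forecast is obtained from the identity as the sum of the component forecasts (all data are simulated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: three GDP components; GDP is their sum by identity.
T, p = 80, 2  # observations, number of lags
comps = np.cumsum(rng.standard_normal((T, 3)), axis=0) + 100.0
gdp = comps.sum(axis=1)

def fit_ac_equation(c, gdp, p):
    """OLS of one component on a constant, its own lags, and lags of GDP."""
    rows = []
    for t in range(p, len(c)):
        rows.append(np.r_[1.0, c[t - p:t][::-1], gdp[t - p:t][::-1]])
    X, yv = np.array(rows), c[p:]
    beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
    return beta

# One-step-ahead AC forecast of GDP: forecast each component, then sum.
forecasts = []
for j in range(comps.shape[1]):
    beta = fit_ac_equation(comps[:, j], gdp, p)
    x_last = np.r_[1.0, comps[-p:, j][::-1], gdp[-p:][::-1]]
    forecasts.append(float(x_last @ beta))

gdp_forecast = sum(forecasts)  # GDP from the identity, as the sum of components
```

Note that each equation here has a fixed, small number of parameters regardless of how many components are added, which is the advantage over VAR models mentioned above.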
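The repetition loop for forecast-error variances can be sketched with a one-equation stand-in model: J dynamic solution paths are generated over an eight-quarter period, and per-quarter means and variances are computed across repetitions (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy one-equation "model": y_t = 0.8 * y_{t-1} + u_t, u_t ~ N(0, 0.5^2).
a, sigma, y0 = 0.8, 0.5, 2.0
horizon, J = 8, 1000  # eight-quarter period, number of repetitions

paths = np.empty((J, horizon))
for j in range(J):
    y = y0
    for q in range(horizon):      # dynamic solution: lags updated as we go
        y = a * y + sigma * rng.standard_normal()
        paths[j, q] = y

means = paths.mean(axis=0)        # per-quarter means across repetitions
variances = paths.var(axis=0)     # per-quarter forecast-error variances
```

In a full model the same loop runs over every stochastic equation at once, with the error terms drawn jointly from N(0, Σ̂).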