<<

Essays in Econometrics

This book, and its companion volume in the Econometric Society Monographs series (ESM No. 32), present a collection of papers by Clive W. J. Granger. His contributions to economics and econometrics, many of them seminal, span more than four decades and touch on all aspects of time series analysis. The papers assembled in this volume explore topics in causality, integration and cointegration, and long memory. Those in the companion volume investigate themes in spectral analysis, seasonality, nonlinearity, methodology, and forecasting. The two volumes contain the original articles as well as an introduction written by the editors.

Eric Ghysels is Edward M. Bernstein Professor of Economics and Professor of Finance at the University of North Carolina, Chapel Hill. He previously taught at the University of Montreal and Pennsylvania State University. Professor Ghysels’s main research interests are time series econometrics and finance. He has served on the editorial boards of several academic journals and has published more than sixty articles in leading economics, finance, and statistics journals.

Norman R. Swanson is Associate Professor of Economics at Texas A&M University and formerly taught at Pennsylvania State University. He received his doctorate at the University of California, San Diego, where he studied theoretical, financial, and macroeconomics under Granger’s tutelage. Professor Swanson is an associate editor for numerous academic journals and is the author of more than thirty refereed articles and research papers.

Mark W. Watson is Professor of Economics and Public Affairs at the Woodrow Wilson School, Princeton University. He previously served on the faculties of Harvard and Northwestern Universities and as associate editor of Econometrica, the Journal of Monetary Economics, and the Journal of Applied Econometrics, and currently is a Research Associate of the National Bureau of Economic Research (NBER) and a consultant at the Federal Reserve Bank of Richmond. Professor Watson is a Fellow of the Econometric Society and currently holds research grants from the NBER and the National Science Foundation.

Econometric Society Monographs No. 33

Editors:
Peter Hammond, Stanford University
Alberto Holly, University of Lausanne

The Econometric Society is an international society for the advancement of economic theory in relation to statistics and mathematics. The Econometric Society Monograph Series is designed to promote the publication of original research contributions of high quality in mathematical economics and theoretical and applied econometrics.

Other titles in the series:
G. S. Maddala, Limited-dependent and qualitative variables in econometrics, 0 521 33825 5
Gerard Debreu, Mathematical economics: Twenty papers of Gerard Debreu, 0 521 33561 2
Jean-Michel Grandmont, Money and value: A reconsideration of classical and neoclassical monetary economics, 0 521 31364 3
Franklin M. Fisher, Disequilibrium foundations of equilibrium economics, 0 521 37856 7
Andreu Mas-Colell, The theory of general economic equilibrium: A differentiable approach, 0 521 26514 2, 0 521 38870 8
Cheng Hsiao, Analysis of panel data, 0 521 38933 X
Truman F. Bewley, Editor, Advances in econometrics – Fifth World Congress (Volume I), 0 521 46726 8
Truman F. Bewley, Editor, Advances in econometrics – Fifth World Congress (Volume II), 0 521 46725 X
Hervé Moulin, Axioms of cooperative decision making, 0 521 36055 2, 0 521 42458 5
L. G. Godfrey, Misspecification tests in econometrics: The Lagrange multiplier principle and other approaches, 0 521 42459 3
Tony Lancaster, The econometric analysis of transition data, 0 521 43789 X
Alvin E. Roth and Marilda A. Oliveira Sotomayor, Editors, Two-sided matching: A study in game-theoretic modeling and analysis, 0 521 43788 1
Wolfgang Härdle, Applied nonparametric regression, 0 521 42950 1
Jean-Jacques Laffont, Editor, Advances in economic theory – Sixth World Congress (Volume I), 0 521 48459 6
Jean-Jacques Laffont, Editor, Advances in economic theory – Sixth World Congress (Volume II), 0 521 48460 X
Halbert White, Estimation, inference and specification analysis, 0 521 25280 6, 0 521 57446 3
Christopher Sims, Editor, Advances in econometrics – Sixth World Congress (Volume I), 0 521 56610 X
Christopher Sims, Editor, Advances in econometrics – Sixth World Congress (Volume II), 0 521 56609 6
Roger Guesnerie, A contribution to the pure theory of taxation, 0 521 23689 4, 0 521 62956 X
David M. Kreps and Kenneth F. Wallis, Editors, Advances in economics and econometrics – Seventh World Congress (Volume I), 0 521 58011 0, 0 521 58983 5
David M. Kreps and Kenneth F. Wallis, Editors, Advances in economics and econometrics – Seventh World Congress (Volume II), 0 521 58012 9, 0 521 58982 7
David M. Kreps and Kenneth F. Wallis, Editors, Advances in economics and econometrics – Seventh World Congress (Volume III), 0 521 58013 7, 0 521 58981 9
Donald P. Jacobs, Ehud Kalai, and Morton I. Kamien, Editors, Frontiers of research in economic theory: The Nancy L. Schwartz Memorial Lectures, 1983–1997, 0 521 63222 6, 0 521 63538 1
A. Colin Cameron and Pravin K. Trivedi, Regression analysis of count data, 0 521 63201 3, 0 521 63567 5
Steinar Strøm, Editor, Econometrics and economic theory in the 20th century: The Ragnar Frisch Centennial Symposium, 0 521 63323 0, 0 521 63365 6
Eric Ghysels, Norman R. Swanson, and Mark Watson, Editors, Essays in econometrics: Collected papers of Clive W. J. Granger (Volume I), 0 521 77297 4, 0 521 80401 8, 0 521 77496 9, 0 521 79697 0
Eric Ghysels, Norman R. Swanson, and Mark Watson, Editors, Essays in econometrics: Collected papers of Clive W. J. Granger (Volume II), 0 521 79207 X, 0 521 80401 8, 0 521 79649 0, 0 521 79697 0

CLIVE WILLIAM JOHN GRANGER

Essays in Econometrics
Collected Papers of Clive W. J. Granger

Volume II: Causality, Integration and Cointegration, and Long Memory

Edited by
Eric Ghysels, University of North Carolina at Chapel Hill
Norman R. Swanson, Texas A&M University
Mark W. Watson, Princeton University

Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo

Cambridge University Press
The Edinburgh Building, Cambridge, United Kingdom

Published in the United States of America by Cambridge University Press, New York

www.cambridge.org
Information on this title: www.cambridge.org/9780521792073

© Cambridge University Press 2001

This book is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published in print format 2001

ISBN-13 978-0-511-06938-3 eBook (EBL)
ISBN-10 0-511-06938-3 eBook (EBL)
ISBN-13 978-0-521-79207-3 hardback
ISBN-10 0-521-79207-X hardback
ISBN-13 978-0-521-79649-1 paperback
ISBN-10 0-521-79649-0 paperback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this book, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

To Clive W. J. Granger: Mentor, Colleague, and Friend. We are honored to present this selection of his research papers.

E. G., N. R. S., M. W. W.

Contents

Acknowledgments page xiii
List of Contributors xvii

Introduction 1
eric ghysels, norman r. swanson, and mark watson

PART ONE: CAUSALITY
1. Investigating Causal Relations by Econometric Models and Cross-Spectral Methods, c. w. j. granger, Econometrica, 37, 1969, pp. 424–38. Reprinted in Rational Expectations, edited by t. sargent and r. lucas, 1981, University of Minnesota Press. 31
2. Testing for Causality: A Personal Viewpoint, c. w. j. granger, Journal of Economic Dynamics and Control, 2, 1980, pp. 329–52. 48
3. Some Recent Developments in a Concept of Causality, c. w. j. granger, Journal of Econometrics, 39, 1988, pp. 199–211. 71
4. Advertising and Aggregate Consumption: An Analysis of Causality, r. ashley, c. w. j. granger and r. schmalensee, Econometrica, 48, 1980, pp. 1149–67. 84

PART TWO: INTEGRATION AND COINTEGRATION
5. Spurious Regressions in Econometrics, c. w. j. granger and p. newbold, Journal of Econometrics, 2, 1974, pp. 111–20. 109
6. Some Properties of Time Series Data and Their Use in Econometric Model Specification, c. w. j. granger, Journal of Econometrics, 16, 1981, pp. 121–30. 119

7. Time Series Analysis of Error Correction Models, c. w. j. granger and a. a. weiss, in Studies in Econometrics: Time Series and Multivariate Statistics, edited by s. karlin, t. amemiya, and l. a. goodman, Academic Press, New York, 1983, pp. 255–78. 129
8. Co-Integration and Error-Correction: Representation, Estimation, and Testing, r. engle and c. w. j. granger, Econometrica, 55, 1987, pp. 251–76. 145
9. Developments in the Study of Cointegrated Economic Variables, c. w. j. granger, Oxford Bulletin of Economics and Statistics, 48, 1986, pp. 213–28. 173
10. Seasonal Integration and Cointegration, s. hylleberg, r. f. engle, c. w. j. granger, and b. s. yoo, Journal of Econometrics, 44, 1990, pp. 215–38. 189
11. A Cointegration Analysis of Treasury Bill Yields, a. d. hall, h. m. anderson, and c. w. j. granger, Review of Economics and Statistics, 74, 1992, pp. 116–26. 212
12. Estimation of Common Long Memory Components in Cointegrated Systems, j. gonzalo and c. w. j. granger, Journal of Business and Economic Statistics, 13, 1995, pp. 27–35. 232
13. Separation in Cointegrated Systems and Persistent-Transitory Decompositions, c. w. j. granger and n. haldrup, Oxford Bulletin of Economics and Statistics, 59, 1997, pp. 449–64. 254
14. Nonlinear Transformations of Integrated Time Series, c. w. j. granger and j. hallman, Journal of Time Series Analysis, 12, 1991, pp. 207–24. 269
15. Long Memory Series with Attractors, c. w. j. granger and j. hallman, Oxford Bulletin of Economics and Statistics, 53, 1991, pp. 11–26. 286
16. Further Developments in the Study of Cointegrated Variables, c. w. j. granger and n. r. swanson, Oxford Bulletin of Economics and Statistics, 58, 1996, pp. 374–86. 302

PART THREE: LONG MEMORY
17. An Introduction to Long-Memory Time Series Models and Fractional Differencing, c. w. j. granger and r. joyeux, Journal of Time Series Analysis, 1, 1980, pp. 15–29. 321

18. Long Memory Relationships and the Aggregation of Dynamic Models, c. w. j. granger, Journal of Econometrics, 14, 1980, pp. 227–38. 338
19. A Long Memory Property of Stock Market Returns and a New Model, z. ding, c. w. j. granger and r. f. engle, Journal of Empirical Finance, 1, 1993, pp. 83–106. 349

Index 373

Acknowledgments

Grateful acknowledgment is made to the following publishers and sources for permission to reprint the articles cited here.

ACADEMIC PRESS
“Non-Linear Time Series Modelling,” with A. Andersen, Applied Time Series Analysis, edited by David F. Findley, 1978, Academic Press, 25–38.
“Time Series Analysis of Error Correction Models,” with A. A. Weiss, in Studies in Econometrics: Time Series and Multivariate Statistics, edited by S. Karlin, T. Amemiya, and L. A. Goodman, Academic Press, New York, 1983, 255–78.

AMERICAN STATISTICAL ASSOCIATION
“Is Seasonal Adjustment a Linear or Nonlinear Data-Filtering Process?” with E. Ghysels and P. L. Siklos, Journal of Business and Economic Statistics, 14, 1996, 374–86.
“Semiparametric Estimates of the Relation Between Weather and Electricity Sales,” with R. F. Engle, J. Rice, and A. Weiss, Journal of the American Statistical Association, 81, 1986, 310–20.
“Estimation of Common Long-Memory Components in Cointegrated Systems,” with J. Gonzalo, Journal of Business and Economic Statistics, 13, 1995, 27–35.

BLACKWELL PUBLISHERS
“Time Series Modelling and Interpretation,” with M. J. Morris, Journal of the Royal Statistical Society, Series A, 139, 1976, 246–57.
“Forecasting Transformed Series,” with P. Newbold, Journal of the Royal Statistical Society, Series B, 38, 1976, 189–203.
“Developments in the Study of Cointegrated Economic Variables,” Oxford Bulletin of Economics and Statistics, 48, 1986, 213–28.

“Separation in Cointegrated Systems and Persistent-Transitory Decompositions,” with N. Haldrup, Oxford Bulletin of Economics and Statistics, 59, 1997, 449–64.
“Nonlinear Transformations of Integrated Time Series,” with J. Hallman, Journal of Time Series Analysis, 12, 1991, 207–24.
“Long Memory Series with Attractors,” with J. Hallman, Oxford Bulletin of Economics and Statistics, 53, 1991, 11–26.
“Further Developments in the Study of Cointegrated Variables,” with N. R. Swanson, Oxford Bulletin of Economics and Statistics, 58, 1996, 374–86.
“An Introduction to Long-Memory Time Series Models and Fractional Differencing,” with R. Joyeux, Journal of Time Series Analysis, 1, 1980, 15–29.

BUREAU OF THE CENSUS
“Seasonality: Causation, Interpretation and Implications,” in Seasonal Analysis of Economic Time Series, Economic Research Report, ER-1, edited by A. Zellner, 1979, Bureau of the Census, 33–46.
“Forecasting White Noise,” in Applied Time Series Analysis of Economic Data, Proceedings of the Conference on Applied Time Series Analysis of Economic Data, October 1981, edited by A. Zellner, U.S. Department of Commerce, Bureau of the Census, Government Printing Office, 1983, 308–14.

CAMBRIDGE UNIVERSITY PRESS
“The ET Interview: Professor Clive Granger,” Econometric Theory, 13, 1997, 253–303.
“Implications of Aggregation with Common Factors,” Econometric Theory, 3, 1987, 208–22.

CHARTERED INSTITUTION OF WATER AND ENVIRONMENTAL MANAGEMENT
“Estimating the Probability of Flooding on a Tidal River,” Journal of the Institution of Water Engineers, 13, 1959, 165–74.

THE ECONOMETRIC SOCIETY
“The Typical Spectral Shape of an Economic Variable,” Econometrica, 34, 1966, 150–61.
“Modelling Nonlinear Relationships Between Extended-Memory Variables,” Econometrica, 63, 1995, 265–79.
“Near Normality and Some Econometric Models,” Econometrica, 47, 1979, 781–4.

“Investigating Causal Relations by Econometric Models and Cross-Spectral Methods,” Econometrica, 37, 1969, 424–38. Reprinted in Rational Expectations, edited by T. Sargent and R. Lucas, 1981, University of Minnesota Press, Minneapolis.
“Advertising and Aggregate Consumption: An Analysis of Causality,” with R. Ashley and R. Schmalensee, Econometrica, 48, 1980, 1149–67.
“Co-Integration and Error-Correction: Representation, Estimation and Testing,” with R. Engle, Econometrica, 55, 1987, 251–76.

ELSEVIER
“Testing for Neglected Nonlinearity in Time Series Models: A Comparison of Neural Network Methods and Alternative Tests,” with T.-H. Lee and H. White, Journal of Econometrics, 56, 1993, 269–90.
“On the Invertibility of Time Series Models,” with A. Andersen, Stochastic Processes and Their Applications, 8, 1978, 87–92.
“Comments on the Evaluation of Policy Models,” with M. Deutsch, Journal of Policy Modelling, 14, 1992, 397–416.
“Invited Review: Combining Forecasts – Twenty Years Later,” Journal of Forecasting, 8, 1989, 167–73.
“The Combination of Forecasts Using Changing Weights,” with M. Deutsch and T. Teräsvirta, International Journal of Forecasting, 10, 1994, 47–57.
“Short-Run Forecasts of Electricity Loads and Peaks,” with R. Ramanathan, R. F. Engle, F. Vahid-Araghi, and C. Brace, International Journal of Forecasting, 13, 1997, 161–74.
“Some Recent Developments in a Concept of Causality,” Journal of Econometrics, 39, 1988, 199–211.
“Spurious Regressions in Econometrics,” with P. Newbold, Journal of Econometrics, 2, 1974, 111–20.
“Some Properties of Time Series Data and Their Use in Econometric Model Specification,” Journal of Econometrics, 16, 1981, 121–30.
“Seasonal Integration and Cointegration,” with S. Hylleberg, R. F. Engle, and B. S. Yoo, Journal of Econometrics, 44, 1990, 215–38.
“Long-Memory Relationships and the Aggregation of Dynamic Models,” Journal of Econometrics, 14, 1980, 227–38.
“A Long Memory Property of Stock Market Returns and a New Model,” with Z. Ding and R. F. Engle, Journal of Empirical Finance, 1, 1993, 83–106.

FEDERAL RESERVE BANK OF MINNEAPOLIS
“The Time Series Approach to Econometric Model Building,” with P. Newbold, in New Methods in Business Cycle Research, edited by C. Sims, 1977, Federal Reserve Bank of Minneapolis.

HELBING AND LICHTENHAHN VERLAG
“Spectral Analysis of New York Stock Market Prices,” with O. Morgenstern, Kyklos, 16, 1963, 1–27. Reprinted in Random Character of Stock Market Prices, edited by P. H. Cootner, 1964, MIT Press, Cambridge, MA.

JOHN WILEY & SONS, LTD.
“Using the Correlation Exponent to Decide Whether an Economic Series is Chaotic,” with T. Liu and W. P. Heller, Journal of Applied Econometrics, 7, 1992, S25–40. Reprinted in Nonlinear Dynamics, Chaos, and Econometrics, edited by M. H. Pesaran and S. M. Potter, Wiley, Chichester.
“Can We Improve the Perceived Quality of Economic Forecasts?” Journal of Applied Econometrics, 11, 1996, 455–73.

MACMILLAN PUBLISHERS, LTD.
“Prediction with a Generalized Cost of Error Function,” Operational Research Quarterly, 20, 1969, 199–207.
“The Combination of Forecasts Using Changing Weights,” with M. Deutsch and T. Teräsvirta, International Journal of Forecasting, 10, 1994, 45–57.

MIT PRESS
“Testing for Causality: A Personal Viewpoint,” Journal of Economic Dynamics and Control, 2, 1980, 329–52.
“A Cointegration Analysis of Treasury Bill Yields,” with A. D. Hall and H. M. Anderson, Review of Economics and Statistics, 74, 1992, 116–26.
“Spectral Analysis of New York Stock Market Prices,” with O. Morgenstern, Kyklos, 16, 1963, 1–27. Reprinted in Random Character of Stock Market Prices, edited by P. H. Cootner, 1964, MIT Press, Cambridge, MA.

TAYLOR & FRANCIS, LTD.
“Some Comments on the Evaluation of Economic Forecasts,” with P. Newbold, Applied Economics, 5, 1973, 35–47.

Contributors

A. Andersen, Department of Economic Statistics, University of Sydney, Sydney, Australia
H. M. Anderson, Department of Econometrics, Monash University, Australia
R. Ashley, University of California, San Diego, La Jolla, CA, U.S.A.
J. M. Bates, Bramcote, Nottingham, United Kingdom
C. Brace, Puget Sound Power and Light Company, Bellevue, WA, U.S.A.
M. Deutsch, Department of Economics, University of California, San Diego, La Jolla, CA, U.S.A.
Z. Ding, Frank Russell Company, Tacoma, WA, U.S.A.
R. F. Engle, Department of Economics, University of California, San Diego, La Jolla, CA, U.S.A.
E. Ghysels, Department of Economics, University of North Carolina at Chapel Hill, Chapel Hill, NC, U.S.A.
J. Gonzalo, Department of Economics, University Carlos III, Madrid, Spain
C. W. J. Granger, Department of Economics, University of California, San Diego, La Jolla, CA 92093
N. Haldrup, Department of Economics, University of Aarhus, Aarhus, Denmark
A. D. Hall, School of Finance and Economics, University of Technology, Sydney, Australia
J. Hallman, Federal Reserve Board, Washington, DC, U.S.A.
W. P. Heller, University of California, San Diego, La Jolla, CA, U.S.A.
S. Hylleberg, Department of Economics, University of Aarhus, Aarhus, Denmark
R. Joyeux, School of Economics and Financial Studies, Macquarie University, Sydney, Australia
T.-H. Lee, Department of Economics, University of California, Riverside, Riverside, CA, U.S.A.
T. Liu, Department of Economics, Ball State University, Muncie, IN, U.S.A.
O. Morgenstern (deceased), Princeton University, Princeton, NJ, U.S.A.
M. J. Morris, University of East Anglia, United Kingdom
P. Newbold, Department of Economics, Nottingham University, Nottingham, United Kingdom
P. C. B. Phillips, Cowles Foundation for Research in Economics, Yale University, New Haven, CT, U.S.A.
R. Ramanathan, Department of Economics, University of California, San Diego, La Jolla, CA, U.S.A.
J. Rice, Department of Statistics, University of California, Berkeley, Berkeley, CA, U.S.A.
R. Schmalensee, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, U.S.A.
P. L. Siklos, Department of Economics, Wilfrid Laurier University, Waterloo, Ontario, Canada
N. R. Swanson, Department of Economics, Texas A&M University, College Station, TX, U.S.A.
T. Teräsvirta, School of Finance and Economics, University of Technology, Sydney, Australia
F. Vahid-Araghi, Department of Econometrics, Monash University, Australia
M. Watson, Department of Economics, Princeton University, Princeton, NJ, U.S.A.
A. A. Weiss, Department of Economics, University of Southern California, Los Angeles, CA, U.S.A.
H. White, Department of Economics, University of California, San Diego, La Jolla, CA, U.S.A.
B. S. Yoo, Yonsei University, Seoul, South Korea

Introduction

Volume I

At the beginning of the twentieth century, there was very little fundamental theory of time series analysis and surely very few economic time series data. Autoregressive models and moving average models were introduced more or less simultaneously and independently by the British statistician Yule (1921, 1926, 1927) and the Russian statistician Slutsky (1927). The mathematical foundations of stationary stochastic processes were developed by Wold (1938), Kolmogorov (1933, 1941a, 1941b), Khintchine (1934), and Mann and Wald (1943). Thus, modern time series analysis is a mere eight decades old. Clive W. J. Granger has been working in the field for nearly half of its young life. His ideas and insights have had a fundamental impact on statistics, econometrics, and dynamic economic theory. Granger summarized his research activity in a recent ET Interview (Phillips 1997), which appears as the first reprint in this volume, by saying, “I plant a lot of seeds, a few of them come up, and most of them do not.” Many of the seeds that he planted now stand tall and majestic like the Torrey Pines along the California coastline just north of the University of California, San Diego, campus in La Jolla, where he has been an economics faculty member since 1974. Phillips notes in the ET Interview that “It is now virtually impossible to do empirical work in time series econometrics without using some of his [Granger’s] methods or being influenced by some of his ideas.” Indeed, applied time series econometricians come across at least one of his path-breaking ideas almost on a daily basis. For example, many of his contributions in the areas of spectral analysis, long memory, causality, forecasting, spurious regression, and cointegration are seminal. His influence on the profession continues with no apparent signs of abatement.

SPECTRAL METHODS

In his ET Interview, Granger explains that early in his career he was confronted with many applied statistical issues from various disciplines because he was the only statistician on the campus of the University of Nottingham, where he completed his PhD in statistics and served as lecturer for a number of years. This led to his first publications, which were not in the field of economics. Indeed, the first reprint in Volume II of this set contains one of his first published works, a paper in the field of hydrology. Granger’s first influential work in time series econometrics emerged from his research with Michio Hatanaka. Both were working under the supervision of Oskar Morgenstern at Princeton and were guided by John Tukey. Cramér (1942) had developed the spectral decomposition of weakly stationary processes, and the 1950s and early 1960s were marked by intense research efforts devoted to spectral analysis. Many prominent scholars of the time, including Milton Friedman, John von Neumann, and Oskar Morgenstern, saw much promise in the application of Fourier analysis to economic data. In 1964, Princeton University Press published a monograph by Granger and Hatanaka, which was the first systematic and rigorous treatment of spectral analysis in the field of economic time series. Spectral methods have the appealing feature that they do not require the specification of a model but instead follow directly from the assumption of stationarity. Interestingly, more than three decades after its initial publication, the book remains a basic reference in the field.

The work of Granger and Hatanaka was influential in many dimensions. The notion of business cycle fluctuations had been elaborately discussed in the context of time series analysis for some time. Spectral analysis provided new tools and yielded fundamental new insights into this phenomenon. Today, macroeconomists often refer to business cycle frequencies, and a primary starting point for the analysis of business cycles is still the application of frequency domain methods. In fact, advanced textbooks in macroeconomics, such as Sargent (1987), devote an entire chapter to spectral analysis. The dominant feature of the spectrum of most economic time series is that most of the power is at the lower frequencies. There is no single pronounced business cycle peak; instead there are a large number of moderately sized peaks over a large range of cycles between four and eight years in length. Granger (1966) dubbed this shape the “typical spectral shape” of an economic variable.

A predecessor to Granger’s 1966 paper entitled “The Typical Spectral Shape of an Economic Variable” is his joint paper with Morgenstern published in 1963, which is entitled “Spectral Analysis of New York Stock Market Prices.” Both papers are representative of Granger’s work in the area of spectral analysis and are reproduced as the first set of papers following the ET Interview.

The paper with Morgenstern took a fresh look at the random walk hypothesis for stock prices, which had been advanced by the French mathematician M. L. Bachelier (1900). Granger and Morgenstern estimated spectra of return series of several major indices of stocks listed on the New York Stock Exchange. They showed that business cycle and seasonal variations were unimportant for return series, as in every case the spectrum was roughly flat at almost all frequencies.
However, they also documented evidence that did not support the random walk model. In particular, they found that very long-run movements were not adequately explained by the model. This is interesting because the random walk hypothesis was associated with definitions of efficiency of financial markets for many years (e.g., see the classic work of Samuelson 1965 and Fama 1970). The Granger and Morgenstern paper is part of a very important set of empirical papers written during the early part of the 1960s, which followed the early work of Cowles (1933). Other related papers include Alexander (1961, 1964), Cootner (1964), Fama (1965), Mandelbrot (1963), and Working (1960). Today, the long-term predictability of asset returns is a well-established empirical stylized fact, and research in the area remains very active (e.g., see Campbell, Lo, and MacKinlay 1997 for recent references).
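The “typical spectral shape” is easy to see in simulated data. The sketch below is ours, not drawn from the reprinted papers; the AR(1) coefficient and sample size are invented for illustration. A highly persistent series concentrates its estimated spectral power at the lowest frequencies.

```python
# A minimal sketch (ours): the "typical spectral shape" in simulation.
# A near-unit-root AR(1) stands in for a persistent economic series.
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
T, phi = 2048, 0.95  # invented sample size and AR coefficient
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.standard_normal()

freqs, power = periodogram(x, detrend="constant")
# Compare average power in the lowest and highest tenths of frequencies
k = len(power) // 10
print(f"low/high frequency power ratio: {power[1:k].mean() / power[-k:].mean():.0f}")
# The ratio is large: most of the power sits at the low frequencies.
```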

SEASONALITY

Seasonal fluctuations were also readily recognized from the spectrum, and the effect of seasonal adjustment on economic data was therefore straightforward to characterize. Nerlove (1964, 1965) used spectral techniques to analyze the effects of various seasonal adjustment procedures. His approach was to compute spectra of unadjusted and adjusted series and to examine the cross spectrum of the two series. Nerlove’s work took advantage of the techniques Granger and Hatanaka had so carefully laid out in their monograph. Since then, many papers that improve these techniques have been written. They apply the techniques to the study of seasonal cycles and the design of seasonal adjustment filters. For example, many significant insights have been gained by viewing seasonal adjustment procedures as optimal linear signal extraction filters (e.g., see Hannan 1967; Cleveland and Tiao 1976; Pierce 1979; and Bell 1984, among others). At the same time, there has been a perpetual debate about the merits of seasonal adjustment, and since the creation of the X-11 program, many improvements have been made and alternative procedures have been suggested. The Census X-11 program was the product of several decades of research. Its development was begun in the early 1930s by researchers at the National Bureau of Economic Research (NBER) (see, for example, Macaulay 1931), and it emerged as a fully operational procedure in the mid-1960s, in large part due to the work by Julius Shiskin and his collaborators at the U.S. Bureau of the Census (see Shiskin et al. 1967). During the 1960s and 1970s, numerous papers were written on the topic of seasonality, including important papers by Sims

(1974) and Wallis (1974). Granger’s (1979) paper, “Seasonality: Causation, Interpretation and Implications,” is the first of two papers on the topic of seasonality included in this volume. It was written for a major conference on seasonality, which took place in the late 1970s, and appeared in a book edited by Zellner (1979). In this paper, he asks the pointed question, “Why adjust?” and gives a very balanced view of the merits and drawbacks of seasonal adjustment. The paper remains one of the best reflections on the issue of seasonality and seasonal adjustment.

The second paper in this subsection, “Is Seasonal Adjustment a Linear or a Nonlinear Data-Filtering Process?,” written with Ghysels and Siklos (1996), also deals with a pointed question that was initially posed by Young (1968). The question is: Are seasonal adjustment procedures (approximately) linear data transformations? The answer to this question touches on many fundamental issues, such as the treatment of seasonality in regression (cf. Sims 1974; Wallis 1974) and the theory of seasonal adjustment. The paper shows that the widely applied X-11 program is a highly nonlinear filter.
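To make the linear-versus-nonlinear question concrete, the sketch below is ours, not the paper's code, with all series invented. It shows what a purely linear adjustment-style filter is: a centered 2x12 moving average for monthly data, which satisfies the additivity property filter(a + b) = filter(a) + filter(b). The paper's point is that X-11, with its outlier corrections and multiplicative options, does not behave like such a filter.

```python
# A minimal sketch (ours) of a purely linear monthly filter: the
# centered 2x12 moving average often used to extract a trend before
# estimating a seasonal component. Data are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
T = 240
t = np.arange(T)
series = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12) + rng.standard_normal(T)

w = np.r_[0.5, np.ones(11), 0.5] / 12        # 2x12 moving-average weights
trend = np.convolve(series, w, mode="same")  # linear, time-invariant filter
detrended = series - trend                   # seasonal + irregular estimate

# The defining property of a linear filter: additivity
a, b = rng.standard_normal(T), rng.standard_normal(T)
lhs = np.convolve(a + b, w, mode="same")
rhs = np.convolve(a, w, mode="same") + np.convolve(b, w, mode="same")
print(np.allclose(lhs, rhs))  # True here; the paper shows X-11 fails this
```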

NONLINEARITY

The book by Box and Jenkins (1970) pushed time series analysis into a central role in economics. At the time of its publication, the theory of stationary linear time series processes was well understood, as evidenced by the flurry of textbooks written during the late 1960s and the 1970s, such as Anderson (1971), Fuller (1976), Granger and Newbold (1977), Hannan (1970), Nerlove et al. (1979), and Priestley (1981). However, many areas of time series analysis fell beyond the scope of linear stationary processes and were not well understood. These areas included nonstationarity and long memory (covered in Volume II) and nonlinear models. Four papers on nonlinearity in time series analysis are reproduced in Volume I and are representative of Granger’s important work in this area. Because the class of nonlinear models is virtually without bound, one is left with the choice of either letting the data speak (and suffering the obvious dangers of overfitting) or relying on economic theory to yield the functional form of nonlinear economic relationships. Unfortunately, most economic theories provide only partial descriptions, with blanks that need to be filled in by exploratory statistical techniques. The papers in this section address the statistical foundations of nonlinear modeling and some of the classical debates in the literature on nonlinear modeling.

The first paper, “Non-Linear Time Series Modeling,” describes the statistical underpinnings of a particular class of nonlinear models. This paper by Granger and Andersen predates their joint monograph on bilinear models (Granger and Andersen 1978). This class of models is not as popular today as it once was, although bilinear models are connected in interesting ways to models of more recent vintage, such as the class of ARCH models introduced by Engle (1982).

One of the classical debates in the literature on nonlinear models pertains to the use of deterministic versus stochastic processes to describe economic phenomena. Granger has written quite extensively on the subject of chaos (a class of deterministic models) and has expressed some strong views on its use in economics, characterizing the theory of chaos as fascinating mathematics but not of practical relevance in econometrics (see Granger 1992, 1994). Liu, Granger, and Heller (1992), in the included paper entitled “Using the Correlation Exponent to Decide Whether an Economic Series Is Chaotic,” study the statistical properties of two tests designed to distinguish deterministic time series from stochastic white noise. The tests are the Grassberger-Procacia correlation exponent test and the Brock, Dechert, and Scheinkman test. Along the same lines, Lee, White, and Granger (1993), in the paper entitled “Testing for Neglected Nonlinearity in Time Series Models,” examine a battery of tests for nonlinearity. Both papers are similar in that they consider basic questions of nonlinear modeling and provide useful and practical answers.

The fourth paper in this section, “Modeling Nonlinear Relationships Between Extended-Memory Variables,” is the Fisher-Schultz lecture delivered at the 1993 European Meetings of the Econometric Society in Uppsala. The lecture coincided with the publication of the book by Granger and Teräsvirta (1993) on modeling nonlinear economic relationships. This book is unique in the area because it combines a rich collection of topics ranging from testing for linearity, chaos, and long memory to aggregation effects and forecasting.
In his Fisher-Schultz lecture, Granger addresses the difficult area of nonlinear modeling of nonstationary processes. The paper shows that the standard classification of I(0) and I(1) processes in linear models is not sufficient for nonlinear functions. This observation also applies to fractional integration. As is typical, Granger makes suggestions for new areas of research, advancing the notions of short memory in mean and extended memory, and relates these ideas to earlier concepts of mixing conditions, as discussed for instance in McLeish (1978), Gallant and White (1988), and Davidson (1994). At this point, it is too early to tell whether any of these will give us the guidance toward building a unified theory of nonlinear nonstationary processes.

The final paper in this section is entitled “Semiparametric Estimates of the Relation Between Weather and Electricity Sales.” This paper with Engle, Rice, and Weiss is a classic contribution to the nonparametric and semiparametric literature and stands out as the first application of semiparametric modeling techniques to economics (previous work had been done on testing). Other early work includes Robinson (1988) and Stock (1989). Recent advances in the area are discussed in Bierens (1990), Delgado and Robinson (1992), Granger and Teräsvirta (1993),

Härdle (1990), Li (1998), Linton and Nielsen (1995), and Teräsvirta, Tjostheim, and Granger (1994), to name but a few. In this classic paper, Granger and his coauthors use semiparametric models, which include a linear part and a nonparametric cubic spline function, to model electricity demand. The variable that they use in the nonparametric part of their model is temperature, which is known to have an important nonlinear effect on demand.
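A minimal sketch of this kind of partially linear specification follows. It is ours rather than the authors' code: the response is linear in one regressor while temperature enters through a small cubic spline basis, and all data, knots, and coefficients are invented for illustration.

```python
# A minimal sketch (ours) of a partially linear model: linear in price,
# flexible in temperature via a truncated-power cubic spline basis.
import numpy as np

rng = np.random.default_rng(2)
n = 500
price = rng.uniform(0.5, 1.5, n)
temp = rng.uniform(-5, 35, n)
# Invented "demand": linear in price, U-shaped in temperature
demand = 10 - 2 * price + 0.02 * (temp - 15) ** 2 + rng.standard_normal(n)

def spline_basis(x, knots):
    """Columns 1, x, x^2, x^3 and (x - k)_+^3 for each knot."""
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

X = np.column_stack([price, spline_basis(temp, knots=[5, 15, 25])])
beta, *_ = np.linalg.lstsq(X, demand, rcond=None)
print(f"estimated price effect: {beta[0]:+.2f} (true value -2)")
```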

METHODOLOGY

The title of this subsection could cover most of Granger’s work; however, we use it to discuss a set of six important papers that do not fit elsewhere. The first paper is Granger and Morris’s 1976 paper “Time Series Modelling and Interpretation.” This is a classic in the literatures on aggregation and measurement error. The paper contains an important theorem on the time series properties of the sum of two independent series, say ARMA(p,m) + ARMA(q,n), and considers a number of special cases of practical interest, like the sum of an AR(p) and a white noise process. A key insight in the paper is that complicated time series models might arise from aggregation. The paper also contains the seeds of Granger’s later paper (Granger 1987) on aggregation with common factors, which is discussed later.

The next paper, Granger and Anderson’s “On the Invertibility of Time Series Models,” also deals with a fundamental issue in time series. Invertibility is a familiar concept in linear models. When interpreted mechanically, invertibility refers to conditions that allow the inverse of a lag polynomial to be expressed in positive powers of the backshift operator. More fundamentally, it is a set of conditions that allows the set of shocks driving a stochastic process to be recovered from current and lagged realizations of the observed data. In linear models, the two sets of conditions are the same, but in nonlinear models they are not. Granger and Anderson make this point, propose the relevant definition of invertibility appropriate for both linear and nonlinear models, and discuss conditions that ensure invertibility for some specific examples.

The third paper in this section is Granger’s “Near Normality and Some Econometric Models.” This paper contains exact small sample versions of the central limit theorem. Granger’s result is really quite amazing: Suppose x and y are two independent and identically distributed (i.i.d.) random variables and let z be a linear combination of x and y. Then the distribution of z is closer to the normal than the distribution of x and y (where the notion of “closer” is defined in terms of cumulants of the random variables). The univariate version of this result is contained in Granger (1977), and the multivariate generalization is given in the paper included here. The theorem in this paper shows that a bivariate process formed by a weighted sum of bivariate vectors whose components are i.i.d. is generally nearer-normal than its constituents, and the components of the vector will be nearer-uncorrelated.

The fourth paper, “The Time Series Approach to Econometric Model Building,” is a paper joint with Paul Newbold. It was published in 1977, a time when the merits of Box-Jenkins-style time series analysis versus classical econometric methods were being debated among econometricians. Zellner and Palm (1974) is a classic paper in the area. Both papers tried to combine the insights of the Box-Jenkins approach with the structural approach to simultaneous equations modeling advocated by the Cowles Foundation. The combination of time series techniques with macroeconomic modeling received so much attention in the 1970s that it probably seems a natural approach to econometricians trained over the last two decades.
Work by Sims (1980) on vector autoregression (VAR) models, the rational expectations approach in econometrics pursued by Hansen and Sargent (1980), and numerous other papers are clearly a result of and in various ways a synthesis of this debate.

Of much more recent vintage is the next paper in this subsection, entitled “Comments on the Evaluation of Policy Models,” joint with Deutsch (1992). In this paper, the authors advocate the use of rigorous econometric analysis when constructing and evaluating policy models and note that this approach has been largely neglected both by policy makers and by econometricians.

The final paper in this section is Granger’s 1987 paper, “Implications of Aggregation with Common Factors.” This paper concerns the classic problem of aggregation of microeconomic relationships into aggregate relationships. The paper deals almost exclusively with linear microeconomic models so that answers to the standard aggregation questions are transparent. (For example, the aggregate relationship is linear, with coefficients representing averages of the coefficients across the micropopulation.) The important lessons from this paper don’t deal with these questions but rather with the implications of approximate aggregation. Specifically, Granger postulates a microeconomic environment in which individuals’ actions are explained by both idiosyncratic and common factors. Idiosyncratic factors are the most important variables explaining the microeconomic data, but these factors are averaged out when the microrelations are aggregated so that the aggregated data depend almost entirely on the common factors. Because the common factors are not very important for the microdata, an econometrician using microdata could quite easily decide that these factors are not important and not include them in the micromodel. In this case, the aggregate model constructed from the estimated micromodel would be very misspecified. Because macroeconomists are now beginning to rely on microdatasets in their empirical work, this is a timely lesson.
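The Granger–Morris aggregation theorem mentioned above is easy to verify in its simplest case. The following worked example is ours, under standard assumptions: the sum of an AR(1) and an independent white noise is ARMA(1,1).

```latex
% Let x_t be AR(1) and v_t an independent white noise measurement error:
\[
  x_t = \phi x_{t-1} + u_t, \qquad y_t = x_t + v_t .
\]
% Substituting x_t = y_t - v_t and x_{t-1} = y_{t-1} - v_{t-1} gives
\[
  y_t - \phi y_{t-1} = u_t + v_t - \phi v_{t-1} .
\]
% The right-hand side is serially correlated at lag one but at no longer
% lag, so it is an MA(1) process; hence y_t follows an ARMA(1,1) model,
% as the theorem predicts for ARMA(1,0) + ARMA(0,0).
```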

FORECASTING

By the time this book is published, Granger will be in his sixth decade of active research in the area of forecasting.1 In essence, forecasting is based on the integration of three tasks: model specification and construction, model estimation and testing, and model evaluation and selection. Granger has contributed extensively to all three, including classics in the areas of forecast evaluation, forecast combination, data transformation, aggregation, seasonality and forecasting, and causality and forecasting. Some of these are reproduced in this section.2

One of Granger’s earliest works on forecasting serves as a starting point for this section of Volume I. This is his 1959 paper, “Estimating the Probability of Flooding on a Tidal River,” which could serve as the benchmark example in a modern cost-benefit analysis text because the focus is on predicting the number of floods per century that can be expected on a tidal stretch. This paper builds on earlier work by Gumbel (1958), where estimates for nontidal flood plains are provided. The paper illustrates the multidisciplinary flavor of much of Granger’s work.

The second paper in this section is entitled “Prediction with a Generalized Cost of Error Function” (1969). This fundamental contribution highlights the restrictive nature of quadratic cost functions and notes that practical economic and management problems may call instead for the use of nonquadratic and possibly nonsymmetric loss functions. Granger illuminates the potential need for such generalized cost functions and proposes an appropriate methodology for implementing them. For example, the paper discusses the importance of adding a bias term to predictors, a notion that is particularly important for model selection. This subject continues to receive considerable attention in economics (see, for example, Christoffersen and Diebold 1996, 1997; Hoffman and

1 His first published paper in the field was in the prestigious Astrophysical Journal in 1957 and was entitled “A Statistical Model for Sunspot Activity.”
2 A small sample of important papers not included in this section are Granger (1957, 1967); Granger, Kamstra, and White (1989); Granger, King, and White (1995); Granger and Sin (1997); Granger and Nelson (1979); and Granger and Thompson (1987). In addition, Granger has written seven books on the subject, including Spectral Analysis of Economic Time Series (1964, joint with M. Hatanaka), Predictability of Stock Market Prices (1970, joint with O. Morgenstern), Speculation, Hedging and Forecasts of Commodity Prices (1970, joint with W. C. Labys), Trading in Commodities (1974), Forecasting Economic Time Series (1977, joint with P. Newbold), Forecasting in Business and Economics (1980), and Modelling Nonlinear Dynamic Relationships (1993, joint with T. Teräsvirta). All these books are rich with ideas. For example, Granger and Newbold (1977) discuss a test for choosing between two competing forecasting models based on an evaluation of prediction errors. Recent papers in the area that propose tests similar in design and purpose to that discussed by Granger and Newbold include Corradi, Swanson, and Olivetti (1999); Diebold and Mariano (1995); Fair and Shiller (1990); Kolb and Stekler (1993); Meese and Rogoff (1988); Mizrach (1991); West (1996); and White (1999).

Rasche 1996; Leitch and Tanner 1991; Lin and Tsay 1996; Pesaran and Timmermann 1992, 1994; Swanson and White 1995, 1997; Weiss 1996). A related and subsequent paper entitled “Some Comments on the Evaluation of Economic Forecasts” (1973, joint with Newbold) is the third paper in this section. In this paper, generalized cost functions are elucidated, forecast model selection tests are outlined, and forecast efficiency in the sense of Mincer and Zarnowitz (1969) is discussed. The main focus of the paper, however, is the assertion that satisfactory tests of model performance should require that a “best” model produce forecasts that cannot be improved upon by combination with (multivariate) Box-Jenkins-type forecasts. This notion is a precursor to so-called forecast encompassing and is related to Granger’s ideas about forecast combination, a subject to which we now turn our attention.

Three papers in this section focus on forecast combination, a subject that was introduced in the 1969 Granger and Bates paper, “The Combination of Forecasts.” This paper shows that the combination of two separate sets of airline passenger forecasts yields predictions that mean-square-error dominate each of the original sets of forecasts. That combined forecasts yield equal or smaller error variance is shown in an appendix to the paper. This insight has led to hundreds of subsequent papers, many of which concentrate on characterizing data-generating processes for which this feature holds, and many of which generalize the framework of Granger and Bates. A rather extensive review of the literature in this area is given in Clemen (1989) (although many papers have been subsequently published). The combination literature also touches on issues such as structural change, loss function design, model misspecification and selection, and forecast evaluation tests. These topics are all discussed in the two related papers that we include in this section – namely, “Invited Review: Combining Forecasts – Twenty Years Later” (1989) and “The Combination of Forecasts Using Changing Weights” (1994, joint with M. Deutsch and T. Teräsvirta). The former paper has a title that is self-explanatory, while the latter considers changing weights associated with the estimation of switching and smooth transition regression models – two types of nonlinear models that are currently receiving considerable attention.

The literature on data transformation in econometrics is extensive, and it is perhaps not surprising that one of the early forays in the area is Granger and Newbold’s “Forecasting Transformed Series” (1976). In this paper, general autocovariance structures are derived for a broad class of stationary Gaussian processes, which are transformed via some function that can be expanded by using Hermite polynomials. In addition, Granger and Newbold point out that the Box and Cox (1964) transformation often yields variables that are “near-normal,” for example, making subsequent analysis more straightforward. (A more recent paper in this area, which is included in Volume II, is Granger and Hallman

1991). The sixth paper in this part of Volume I is entitled “Forecasting White Noise.” Here Granger illustrates the potential empirical pitfalls associated with loose interpretation of theoretical results. His main illustration focuses on the commonly believed fallacy that “the objective of time series analysis is to find a filter which, when applied to the series being considered, results in white noise.” Clearly such a statement is oversimplistic, and Granger illustrates this by considering three different types of white noise and blending in causality, data transformation, Markov chains, deterministic chaos, nonlinear models, and time-varying parameter models.

The penultimate paper in this section, “Can We Improve the Perceived Quality of Economic Forecasts?” (1996), focuses on some of the fundamental issues currently confronting forecasters. In particular, Granger expounds on what sorts of loss functions we should be using, what sorts of information and information sets may be useful, and how forecasts can be improved in quality and presentation (for example, by using 50% rather than 95% confidence intervals). The paper is dedicated to the path-breaking book by Box and Jenkins (1970) and is a lucid piece that is meant to encourage discussion among practitioners of the art.

The final paper in Volume I is entitled “Short-Run Forecasts of Electricity Loads and Peaks” (1997) and is meant to provide the reader of this volume with an example of how to correctly use current forecasting methodology in economics. In this piece, Ramanathan, Engle, Granger, Vahid-Araghi, and Brace implement a short-run forecasting model of hourly electrical utility system loads, focusing on model design, estimation, and evaluation.
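The arithmetic behind the Bates–Granger combination result discussed above fits in a few lines. The sketch below is ours, with invented forecasts: for two unbiased forecasts with error variances s1 and s2 and error covariance c, the MSE-minimizing weight on the first forecast is w* = (s2 − c)/(s1 + s2 − 2c), and the combination is never worse than the better of the two.

```python
# A minimal sketch (ours) of Bates-Granger forecast combination with
# estimated optimal weights. The target and forecasts are simulated.
import numpy as np

rng = np.random.default_rng(3)
n = 10000
target = rng.standard_normal(n)
f1 = target + rng.normal(0, 1.0, n)   # noisier forecast
f2 = target + rng.normal(0, 0.7, n)   # more accurate forecast

e1, e2 = target - f1, target - f2
s1, s2, c = e1.var(), e2.var(), np.cov(e1, e2)[0, 1]
w = (s2 - c) / (s1 + s2 - 2 * c)      # optimal weight on forecast 1
combined = w * f1 + (1 - w) * f2

for name, f in [("f1", f1), ("f2", f2), ("combined", combined)]:
    print(f"MSE({name}) = {np.mean((target - f) ** 2):.3f}")
# The combined MSE is no larger than the better individual forecast's.
```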

Volume II

CAUSALITY

Granger’s contributions to the study of causality and causal relationships in economics are without a doubt among some of his most well known. One reason for this may be the importance in so many fields of research of answering questions of the sort: What will happen to Y if X falls? Another reason is that Granger’s answers to these questions are elegant mathematically and simple to apply empirically. Causality had been considered in economics before Granger’s 1969 paper entitled “Investigating Causal Relations by Econometric Models and Cross-Spectral

Methods” (see, for example, Granger 1963; Granger and Hatanaka 1964; Hosoya 1977; Orcutt 1952; Simon 1953; Wiener 1956). In addition, papers on the concept of causality and on causality testing also appeared (and continue to appear) after Granger’s classic work (see, for example, Dolado and Lütkepohl 1994; Geweke 1982; Geweke et al. 1983; Granger and Lin 1994; Hoover 1993; Sims 1972; Swanson and Granger 1997; Toda and Phillips 1993, 1994; Toda and Yamamoto 1995; Zellner 1979, to name but a very few). However, Granger’s 1969 paper is a cornerstone of modern empirical causality analysis and testing. For this reason, Volume II begins with his 1969 contribution. In the paper, Granger uses cross-spectral methods as well as simple bivariate time series models to formalize and to illustrate a simple, appealing, and testable notion of causality. Much of his insight is gathered in formal definitions of causality, feedback, instantaneous causality, and causality lag. These four definitions have formed the basis for virtually all the research in the area in the last thirty years and will probably do so for the next thirty years. His

first definition says that “. . . Yt causes Xt if we are able to better predict Xt using all available information than if the information apart from Yt had been used” (Granger 1969, p. 428). It is, thus, not surprising that many forecasting papers post Granger (1969) have used the “Granger causality test” as a basic tool for model specification. It is also not surprising that economic theories are often compared and evaluated using Granger causality tests. In the paper, Granger also introduces the important concept of instantaneous causality and stresses how crucial sampling frequency and aggregation are, for example. All this is done within the framework of techniques of spectral analysis recently introduced into economics by Granger and Hatanaka (1964).

The next paper in this part of Volume II, “Testing for Causality: A Personal Viewpoint” (1980), contains a number of important additional contributions that build on Granger (1969) and outlines further directions for modern time series analysis (many of which have subsequently been adopted by the profession). The paper begins by axiomatizing a concept of causality. This leads to a formal probabilistic interpretation of Granger (1969), in terms of conditional distribution functions, which is easily operationalized to include universal versus not universal information sets (for example, “data inadequacies”), and thus leads to causality tests based on conditional expectation and/or variance, for example. In addition, Granger discusses the philosophical notion of causality and the roots of his initial interest and knowledge in the area. His discussion culminates with careful characterizations of so-called instantaneous and spurious causality. Finally, Granger emphasizes the use of post-sample data to confirm causal relationships found via in-sample Wald and Lagrange multiplier tests.

Continuing with his methodological contributions, the third paper, “Some Recent Developments in a Concept of Causality” (1988), shows that if two I(1) series are cointegrated, then there must be Granger causation in at least one direction. He also discusses the use of causality tests for policy evaluation and revisits the issue of instantaneous causality, noting that three obvious explanations for apparent instantaneous causality are that: (i) variables react without any measurable time delay, (ii) the time interval over which data are collected is too large to capture causal relations properly, so that temporal aggregation leads to apparent instantaneous causation, and (iii) the information set is incomplete, thus leading to apparent instantaneous causality. It is argued that (ii) and (iii) are more plausible, and examples are provided.

This section closes with a frequently cited empirical investigation entitled “Advertising and Aggregate Consumption: An Analysis of Causality” (1980). The paper is meant to provide the reader with an example of how to correctly use the concept of causality in economics. In this piece, Ashley, Granger, and Schmalensee stress the importance of out-of-sample forecasting performance in the evaluation of alternative causal systems and provide interesting evidence that advertising does not cause consumption but that consumption may cause advertising.
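In modern software, the 1969 definition is operationalized as an F-test on lags of the candidate causal variable. The sketch below is ours, not from the reprinted papers: it simulates a bivariate system in which lagged x helps predict y (all coefficients invented) and applies the grangercausalitytests helper from statsmodels.

```python
# A minimal sketch (ours) of a Granger causality test on simulated data
# in which lagged x enters the equation for y, but not conversely.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(4)
T = 500
x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + rng.standard_normal()

# The helper tests whether the SECOND column helps predict the first:
# here, "does x Granger-cause y?"
res = grangercausalitytests(np.column_stack([y, x]), maxlag=2, verbose=False)
f_stat, p_value, _, _ = res[1][0]["ssr_ftest"]
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")  # small p: reject non-causality
```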

INTEGRATION AND COINTEGRATION

Granger’s “typical spectral shape” implies that most economic time series are dominated by low-frequency variability. Because this variability can be modeled by a unit root in a series’ autoregressive polynomial, the typical spectral shape provides the empirical motivation for work on integrated, long memory, and cointegrated processes. Granger’s contributions in this area are usefully organized into four categories. The first contains research focused on the implications of this low-frequency variability for standard econometric methods, and the Granger and Newbold work on spurious regressions is the most notable contribution in this category. The second includes Granger’s research on linear time series models that explain the joint behavior of low-frequency components for a system of economic time series. His development of the idea of cointegration stands out here. The third category contains both empirical contributions and detailed statistical issues arising in cointegrated systems (like “trend” estimation). Finally, the fourth category contains his research on extending cointegration in time-invariant linear systems to nonlinear and time-varying systems. Papers representing his work in each of these categories are included in this section of Volume II.

The first paper in this section is the classic 1974 Granger and Newbold paper “Spurious Regressions in Econometrics,” which contains what is arguably the most influential Monte Carlo study in econometrics. (The closest competitor that comes to our mind is the experiment reported in Slutsky 1927.) The Granger-Newbold paper shows that linear regressions involving statistically independent, but highly persistent, random variables will often produce large “t-statistics” and sample R²s. The results reported in this paper showed that serial correlation in the regression error together with serial correlation in the regressor have disastrous effects on the usual procedures of statistical inference. The basic result was known (Yule 1926), but the particulars of Granger and Newbold’s experiments were dramatic and unexpected. Indeed, in his ET Interview, Granger reminisces about giving a seminar on the topic at the London School of Economics (LSE), where some of the most sophisticated time-series econometricians of the time found the Granger-Newbold results incredible and suggested that he check his computer code. The paper had a profound impact on empirical work because, for example, researchers could no longer ignore low Durbin-Watson statistics. One of the most insightful observations in the paper is that, when considering the regression y = xb + e, the null hypothesis b = 0 implies that e has the same serial properties as y, so that it makes little sense to construct a t-statistic for this null hypothesis without worrying about serial correlation. The basic insight that both sides of an equation must have the same time series properties shows up repeatedly in Granger’s work and forms the basis of what he calls “consistency” in his later work.

The Granger-Newbold spurious regression paper touched off a fertile debate on how serial correlation should be handled in regression models. Motivated by the typical spectral shape together with the likelihood of spurious regressions in levels regressions, Granger and Newbold suggested that applied researchers specify regressions using the first differences of economic time series. This advice met with skepticism.
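The experiment itself is easy to re-create. The sketch below is ours rather than the original code (sample size and replication count invented): regressing one random walk on another, independent one produces nominally “significant” t-statistics far more often than the 5% that standard theory promises.

```python
# A minimal sketch (ours) of the Granger-Newbold spurious regression
# experiment: OLS of one independent random walk on another.
import numpy as np

rng = np.random.default_rng(5)
T, reps, rejections = 100, 1000, 0
for _ in range(reps):
    y = np.cumsum(rng.standard_normal(T))   # two independent random walks
    x = np.cumsum(rng.standard_normal(T))
    X = np.column_stack([np.ones(T), x])
    b = np.linalg.solve(X.T @ X, X.T @ y)
    e = y - X @ b
    s2 = (e @ e) / (T - 2)                   # OLS error-variance estimate
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    rejections += abs(b[1] / se) > 1.96
print(f"nominal 5% test rejects in {100 * rejections / reps:.0f}% of samples")
# Far more than 5%: persistence, not a true relation, drives the t-statistics.
```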
There was an uneasy feeling that even though first-differencing would guard against the spurious regression problem, it would also eliminate the dominant low-frequency components of economic time series, and it was the interaction of these components that researchers wanted to measure with regression analysis. In this sense, first-differencing threw the baby out with the bath water. Hendry and Mizon (1978) provided a constructive response to the Granger-Newbold spurious regression challenge with the suggestion that time series regression models be specified as autoregressive distributed lags in levels (that is, $a(B)y_t = c(B)x_t + \epsilon_t$). In this specification, the first-difference restriction could be viewed as a common factor of $(1 - B)$ in the $a(B)$ and $c(B)$ lag polynomials, and this restriction could be investigated empirically. These autoregressive distributed lag models could also be rewritten in error-correction form, which highlighted their implied relationship between the levels of the series (useful references for this include Sargan 1964; Hendry, Pagan, and Sargan 1984; and Hendry 1995).

This debate led to Granger’s formalization of cointegration (see ET Interview, page 274). His ideas on the topic were first exposited in his 1981 paper “Some Properties of Time Series Data and Their Use in Econometric Model Specification,” which is included as the second paper in this section of Volume II. The paper begins with a discussion of consistency between the two sides of the previously mentioned equation. Thus, if $y = x\beta + \epsilon$ and $x$ contains important seasonal variation and $\epsilon$ is white noise that is unrelated to $x$, then $y$ must also contain important seasonal variation. The paper is most notable for its discussion of consistency with regard to the order of integration of the variables and the development of “co-integration,” which appears in Section 4 of the paper. (As it turns out, the term was used so much in the next five years that by the mid-1980s the hyphen had largely disappeared and co-integration became cointegration.) The relationship between error-correction models and cointegration is mentioned, and it is noted that two cointegrated variables have a unit long-run correlation. The paper probably contains Granger’s most prescient statements. For example, in discussing the “special case” of the autoregressive distributed lag that gives rise to a cointegrating relation, he states: “Although it may appear to be very special, it also seems to be potentially important.” And after giving some examples of cointegrated variables, he writes: “It might be interesting to undertake a wide-spread study to find out which pairs of economic variables are co-integrated.”

Granger expanded on his cointegration ideas in his 1983 paper “Time Series Analysis of Error Correction Models” with Weiss, which is included as the third paper in this section. This paper makes three important contributions. First, it further explores the link between error-correction models and cointegration (focusing primarily on bivariate models). Second, it introduces methods for testing for cointegration. These include the residual-based tests developed in more detail in Engle and Granger’s later paper and the tests that were analyzed several years later by Horvath and Watson (1995).
The paper does not tackle the unit-root distribution problems that arise in the tests (more on this later) and instead suggests practical “identification” procedures analogous to those used in Box-Jenkins model building. The final contribution of the paper is an application of cointegration to three classic economic relations, each of which was studied in more detail by later researchers using “modern” cointegration methods. The first application considered employee income and national income (in logarithms) and, thus, focused on labor’s share of national income, one of the “Great Ratios” investigated earlier by Kosobud and Klein (1961) using other statistical methods. The second application considered money and nominal income, where Granger and Weiss found little evidence supporting cointegration. Later researchers added nominal interest rates to this system, producing a long-run money demand relation, and found stronger evidence of cointegration (Baba, Hendry, and Star 1992; Hoffman and Rasche 1991; Stock and Watson 1993). The third application considered the trivariate system of nominal wages, prices, and productivity, which was studied in more detail a decade later by Campbell and Rissman (1994).
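The residual-based testing idea that Granger and Weiss introduce, and that Engle and Granger later refine, can be sketched in a few lines. The example below is a hypothetical illustration using statsmodels: the first step estimates the cointegrating regression by least squares, and the second step asks whether its residuals contain a unit root (the coint routine implements an Engle-Granger style test with appropriate critical values).

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
n = 500
# x is a random walk; y shares its stochastic trend, so (y, x) are cointegrated.
x = np.cumsum(rng.standard_normal(n))
y = 1.0 + 0.5 * x + rng.standard_normal(n)

# Step one: least-squares estimate of the cointegrating relation y = a + b*x + u.
step1 = sm.OLS(y, sm.add_constant(x)).fit()
print(step1.params)            # roughly (1.0, 0.5)

# Step two: residual-based test of the null of no cointegration.
t_stat, p_value, _ = coint(y, x)
print(t_stat, p_value)         # a small p-value rejects "no cointegration"
```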

The now-classic reference on cointegration, Engle and Granger’s “Co-Integration and Error-Correction: Representation, Estimation and Testing,” is included as the fourth paper in this section. This paper is so well known that, literally, it needs no introduction. The paper includes “Granger’s Representation Theorem,” which carefully lays out the connection between moving average and vector error correction representations for cointegrated models involving I(1) variables. It highlights the nonstandard statistical inference issues that arise in cointegrated models, including unit roots and unidentified parameters. Small sample critical values for residual-based cointegration tests are given, and asymptotically efficient estimators for I(0) parameters are developed (subsequently known as Engle-Granger two-step estimators). The paper also contains a short, but serious, empirical section investigating cointegration between consumption and income, long-term and short-term interest rates, and money and nominal income.

Granger’s 1986 “Developments in the Study of Cointegrated Economic Variables” is the next entry in the section and summarizes the progress made during the first five years of research on the topic. Representation theory for I(1) processes was well understood by this time, and several implications had been noted; perhaps the most surprising was the relationship between cointegration and causality discussed in the last subsection. (If x and y are cointegrated, then either x must Granger-cause y or the converse, and thus cointegration of asset prices is at odds with the martingale property.) Work had begun on the representation theory for I(2) processes (Johansen 1988a; Yoo 1987). Inference techniques were still in their infancy, but great strides would be made in the subsequent five years. A set of stylized cointegration facts was developing (consumption and income are cointegrated, money and nominal interest rates are not, for example). The paper ends with some new ideas on cointegration in nonlinear models and in models with time-varying coefficients. This is an area that has not attracted a lot of attention (a notable exception being Balke and Fomby 1997), primarily because of the difficult problems in statistical inference.

Cointegration is one of those rare ideas in econometrics that had an immediate effect on empirical work. It crystallized a notion that earlier researchers had tried to convey as, for example, “true regressions” (Frisch 1934), low-frequency regressions (Engle 1974), or the most predictable canonical variables from a system (Box and Tiao 1977). There is now an enormous body of empirical work utilizing Granger’s cointegration framework. Some of the early work was descriptive in nature (asking, like Granger and Weiss, whether a set of variables appeared to be cointegrated), but it soon became apparent that cointegration was an implication of important economic theories, and this insight allowed researchers to test separately both the long-run and short-run implications of the specific theories. For example, Campbell and Shiller (1987)
The connection with error correction models meant that coin- tegration was easily incorporated into vector autoregressions, and researchers exploited this restriction to help solve the identification problem in these models (see Blanchard and Quah 1989; King et al. 1991, for example). Development of empirical work went hand in hand with development of inference procedures that extended the results for univariate autore- gressions with unit roots to vector systems (for example, see Chan and Wei 1987; and Phillips and Durlauf 1986). Much of this work was focused directly on the issues raised by Granger in the papers reproduced here. For example, Phillips (1986) used these new techniques to help explain the Granger-Newbold spurious regression results. Stock (1987) derived the limiting distribution of least squares estimators of cointegrating vectors, showing that the estimated coefficients were T-consistent. Phillips and Ouliaris (1990) derived asymptotic distributions of residual- based tests for cointegration. Using the vector error-correction model, Johansen (1988b) and Ahn and Reinsel (1990) developed Gaussian maximum likelihood estimators and derived the asymptotic properties of the estimators. Johansen (1988b) derived likelihood-based tests for cointegration. Many refinements of these procedures followed during the late 1980s and early 1990s (Phillips 1991; Saikkonen 1991; Stock and Watson 1993, to list a few examples from a very long list of contribu- tions), and by the mid 1990s a rather complete guide to specification, estimation, and testing in cointegrated models appeared in textbooks such as Hamilton (1994) and Hendry (1995). During this period, Granger and others were extending his cointe- gration analysis in important directions. One particularly useful exten- sion focused on seasonality, and we include Hylleberg, Engle, Granger, and Yoo’s “Seasonal Integration and Cointegration,” as the next paper in this section. A common approach to univariate modeling of seasonal series is to remove the seasonal and trend components by taking sea- sonal differences. For example, for quarterly data, this involves filtering the data using (1 - B4). This operation explicitly incorporates (1 - B4) into the series’ autoregressive polynomial and implies that the autore- gression will contain four unit roots: two real roots associated with fre- quencies 0 and p and a complex conjugate pair associated with frequency p/2. Standard cointegration and unit-root techniques focus only on the zero-frequency unit root; the Hyllberg et al. paper discusses the compli- cations that arise from the remaining three unit roots. Specifically, the paper develops tests for unit roots and seasonal cointegration at Introduction 17 frequencies other than zero. This is done in a clever way by first expand- ing the autoregressive polynomial in a partial fraction expansion with terms associated with each of the unit roots. This simplifies the testing problem because it makes it possible to apply standard regression-based tests to filtered versions of the series. This paper has led to the so-called HEGY approach of testing for seasonal roots separately. It has been extended in several ways notably by Ghysels et al. (1994) who built joint tests, such as testing for the presence of all seasonal unit roots, based on the HEGY regressions. Many of Granger’s papers include empirical examples of the pro- posed techniques, but only occasionally is the empirical analysis the heart of the paper. 
One notable exception is “A Cointegration Analysis of Treasury Bill Yields,” with Hall and Anderson, which is included as the sixth paper in this section. The framework for the paper is the familiar expectations theory of the term structure. There are two novelties: first, the analysis is carried out using a large number of series (that is, twelve series), and second, the temporal stability of the cointegrating relation is investigated. The key conclusion is that interest-rate spreads on 1–12 month U.S. Treasury Bills appear to be I(0) except during the turbulent 1979–82 time period.

A natural way to think about cointegrated systems is in terms of underlying, but unobserved, persistent and transitory components. The persistent factors capture the long-memory or low-frequency variability in the observed series, and the transitory factors explain the shorter memory or high-frequency variation. In many situations, the persistent components correspond to interesting economic concepts (“trend” or “permanent” income, aggregate productivity, “core” inflation, and so on). Thus, an important question is how to estimate these components from the observed time series, and this is difficult because there is no unique way to carry out the decomposition. One popular decomposition associates the persistent component with the long-run forecasts in the observed series and the transitory component with the corresponding residual (Beveridge and Nelson 1981). This approach has limitations: notably, the persistent component is, by construction, a martingale, and the innovations in the persistent and the transitory components are correlated. In the next two papers included in this section, Granger takes up this issue. The first paper, “Estimation of Common Long-Memory Components in Cointegrated Systems,” was written with Gonzalo. They propose a decomposition that has two important characteristics: first, both components are a function only of the current values of the series, and second, innovations in the persistent components are uncorrelated with the innovations in the transitory component. In the second paper, “Separation in Cointegrated Systems and Persistent-Transitory Decompositions” (with N. Haldrup), Granger takes up the issue of estimation of these components in large systems. The key question is whether the components might be computed separately for groups of series so that the components could then be analyzed separately without having to model the entire system of variables. Granger and Haldrup present conditions under which this is possible. Unfortunately, the conditions are quite stringent, so that few simplifications surface for applied researchers.

The final three papers in this section focus on nonlinear generalizations of cointegration. The first two of these are joint works with Hallman. In “Nonlinear Transformations of Integrated Time Series,” Granger and Hallman begin with integrated and cointegrated variables and ask whether nonlinear functions of the series will also appear to be integrated and cointegrated. The problem is complex, and few analytic results are possible. However, the paper includes several approximations and simulations that are quite informative. One of the most interesting results in the paper is a simulation that suggests that Dickey-Fuller tests applied to the ranks of Gaussian random walks have well-behaved limiting distributions.
This is important, of course, because statistics based on ranks are invariant to all monotonic transformations applied to the data. In their second paper, “Long Memory Series with Attractors,” Granger and Hallman discuss nonlinear attractors (alternatively, I(0) nonlinear functions of stochastically trending variables) and experiment with semiparametric methods for estimating these nonlinear functions. The last paper, “Further Developments in the Study of Cointegrated Variables,” with Swanson, is a fitting end to this section. It is one of Granger’s “seed” papers – overflowing with ideas and, as stated in the first paragraph, raising “more questions than it solves.” Specifically, the paper not only discusses time-varying parameter models for cointegration and their implications for time variation in vector error-correction models, how nonlinear cointegrated models can arise as solutions to nonlinear optimization problems, and models for nonlinear leading indicator analysis but also contains a nonlinear empirical generalization of the analysis in King et al. (1991). No doubt, over the next decade, a few of these seeds will germinate and create their own areas of active research.
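The rank-invariance point behind the Granger-Hallman simulations is easy to see numerically. A minimal sketch (ours, assuming scipy and statsmodels): the ranks of a Gaussian random walk are unchanged by any monotonic transformation, so a Dickey-Fuller statistic computed from the ranks is unaffected as well.

```python
import numpy as np
from scipy.stats import rankdata
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
w = np.cumsum(rng.standard_normal(1000))   # a Gaussian random walk
ranks = rankdata(w)

# A monotonic transformation such as exp() leaves the ranks unchanged,
# so any statistic built from the ranks is invariant to it.
assert (ranks == rankdata(np.exp(w / w.std()))).all()

print(adfuller(ranks)[0])   # Dickey-Fuller statistic computed on the ranks
```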

LONG MEMORY

Even though integrated variables have been widely used in empirical work, they represent a fairly narrow class of models capable of generating Granger’s typical spectral shape. In particular, it has been noted that the autocorrelation functions of many time series exhibit a slow hyperbolic decay rate. This phenomenon, called long memory, or sometimes long-range dependence, is observed in geophysical data, such as river flow data (see Hurst 1951, 1956; Lawrence and Kottegoda 1977) and in climatological series (see Hipel and McLeod 1978a, 1978b; Mandelbrot and Wallis 1968), as well as in economic time series

(Adelman 1965; Mandelbrot 1963). In two important papers, Granger extends these processes to provide more flexible low-frequency or long-memory behavior by considering I(d) processes with noninteger values of d. The first of these papers, Granger and Joyeux’s (1980) “An Introduction to Long-Memory Time Series Models and Fractional Differencing,” is related to earlier work by Mandelbrot and Van Ness (1968) describing fractional Brownian motion. Granger and Joyeux begin by introducing the I(d) process $(1 - B)^d y_t = \epsilon_t$ for noninteger $d$. They show that the process is covariance stationary when $d < \frac{1}{2}$ and derive the autocorrelations and spectrum of the process. Interestingly, the autocorrelations die out at a rate $\tau^{2d-1}$ for large $\tau$, showing that the process has a much longer memory than stationary finite-order ARMA processes (whose autocorrelations die out at rate $\rho^{\tau}$, where $|\rho| < 1$). In the second of these papers, “Long Memory Relationships and the Aggregation of Dynamic Models,” Granger shows how this long-memory process can be generated by a large number of heterogeneous AR(1) processes. This aggregation work continues to intrigue researchers, as evidenced by recent extensions by Lippi and Zaffaroni (1999).

Empirical work investigating long-memory processes was initially hindered by a lack of statistical methods for estimation and testing, but methods now have been developed that are applicable in fairly general settings (for example, see Robinson 1994, 1995; Lobato and Robinson 1998). In addition, early empirical work in macroeconomics and finance found little convincing evidence of long memory (see Lo 1991, for example). However, a new flurry of empirical work has found strong evidence for long memory in the absolute value of asset returns. One of the most important empirical contributions is the paper by Ding, Granger, and Engle, “A Long Memory Property of the Stock Market Returns and a New Model,” which is included as the last paper in this section. Using daily data on S&P 500 stock returns from 1928 to 1991, this paper reports autocorrelations of the absolute values of returns that die out very slowly and remain significantly greater than zero beyond lags of 100 periods. This finding seems to have become a stylized fact in empirical finance (see Andersen and Bollerslev 1998; Lobato and Savin 1998) and serves as the empirical motivation for a large number of recent papers.
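The fractional difference operator has a concrete representation: expanding $(1 - B)^d$ binomially gives lag weights that decay like a power of the lag rather than geometrically, which is the source of the long memory. A minimal sketch of the weight recursion (our illustration):

```python
import numpy as np

def fracdiff_weights(d, n):
    """First n coefficients of (1 - B)^d from the binomial expansion:
    w[0] = 1 and w[k] = w[k-1] * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

# For d = 0.3 the weights shrink hyperbolically, not geometrically,
# mirroring the slow decay of the autocorrelations of an I(d) process.
print(fracdiff_weights(0.3, 10))
```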

REFERENCES

Adelman, I., 1965, Long Cycles: Fact or Artifact?, American Economic Review, 55, 444–63.
Ahn, S. K., and G. C. Reinsel, 1990, Estimation of Partially Nonstationary Autoregressive Models, Journal of the American Statistical Association, 85, 813–23.
Alexander, S., 1961, Price Movements in Speculative Markets: Trends or Random Walks, Industrial Management Review, 2, 7–26.
1964, Price Movements in Speculative Markets: Trends or Random Walks, No. 2, in P. Cootner, ed., The Random Character of Stock Market Prices, Cambridge, MA: Massachusetts Institute of Technology Press.
Andersen, T., and T. Bollerslev, 1998, Heterogeneous Information Arrivals and Return Volatility Dynamics: Uncovering the Long-run in High Frequency Returns, Journal of Finance, 52, 975–1005.
Anderson, T. W., 1971, The Statistical Analysis of Time Series, New York: Wiley.
Baba, Y., D. F. Hendry, and R. M. Star, 1992, The Demand for M1 in the U.S.A., 1960–1988, Review of Economic Studies, 59, 25–61.
Bachelier, L., 1900, Theory of Speculation, in P. Cootner, ed., The Random Character of Stock Market Prices, Cambridge, MA: Massachusetts Institute of Technology Press, 1964; Reprint.
Balke, N., and T. B. Fomby, 1997, Threshold Cointegration, International Economic Review, 38, No. 3, 627–45.
Bell, W. R., 1984, Signal Extraction for Nonstationary Time Series, The Annals of Statistics, 12, 646–64.
Beveridge, S., and C. R. Nelson, 1981, A New Approach to Decomposition of Time Series into Permanent and Transitory Components with Particular Attention to Measurement of the “Business Cycle,” Journal of Monetary Economics, 7, 151–74.
Bierens, H., 1990, Model-free Asymptotically Best Forecasting of Stationary Economic Time Series, Econometric Theory, 6, 348–83.
Blanchard, O. J., and D. Quah, 1989, The Dynamic Effects of Aggregate Demand and Supply Disturbances, American Economic Review, 79, 655–73.
Box, G. E. P., and D. R. Cox, 1964, An Analysis of Transformations, Journal of the Royal Statistical Society Series B, 26, 211–43.
Box, G. E. P., and G. M. Jenkins, 1970, Time Series Analysis, Forecasting and Control, San Francisco: Holden-Day.
Box, G. E. P., and G. Tiao, 1977, A Canonical Analysis of Multiple Time Series, Biometrika, 64, 355–65.
Burns, A. F., and W. C. Mitchell, 1947, Measuring Business Cycles, New York: National Bureau of Economic Research.
Campbell, J. Y., 1987, Does Saving Anticipate Declining Labor Income?, Econometrica, 55, 1249–73.
Campbell, J. Y., A. W. Lo, and A. C. MacKinlay, 1997, The Econometrics of Financial Markets, Princeton, NJ: Princeton University Press.
Campbell, J. Y., and R. J. Shiller, 1987, Cointegration and Tests of the Present Value Models, Journal of Political Economy, 95, 1062–88. Reprinted in R. F. Engle and C. W. J. Granger, eds., Long-Run Economic Relationships, Readings in Cointegration, Oxford: Oxford University Press.
Chan, N. H., and C. Z. Wei, 1987, Limiting Distributions of Least Squares Estimators of Unstable Autoregressive Processes, The Annals of Statistics, 16, 367–401.
Christoffersen, P., and F. X. Diebold, 1996, Further Results on Forecasting and Model Selection Under Asymmetric Loss, Journal of Applied Econometrics, 11, 651–72.
1997, Optimal Prediction Under Asymmetric Loss, Econometric Theory, 13, 808–17.
Clemen, R. T., 1989, Combining Forecasts: A Review and Annotated Bibliography, International Journal of Forecasting, 5, 559–83.
Cleveland, W. P., and G. C. Tiao, 1976, Decomposition of Seasonal Time Series: A Model for the X-11 Program, Journal of the American Statistical Association, 71, 581–7.
Cootner, P. (ed.), 1964, The Random Character of Stock Market Prices, Cambridge, MA: Massachusetts Institute of Technology Press.
Corradi, V., N. R. Swanson, and C. Olivetti, 1999, Predictive Ability With Cointegrated Variables, Working Paper, Texas A&M University.
Cowles, A., 1933, Can Stock Market Forecasters Forecast?, Econometrica, 1, 309–24.
1960, A Revision of Previous Conclusions Regarding Stock Price Behavior, Econometrica, 28, 909–15.
Cramér, H., 1942, On Harmonic Analysis of Certain Function Spaces, Arkiv. Mat. Astron. Fysik, 28B, No. 12, 1–7.
Davidson, J., 1994, Stochastic Limit Theory, Oxford: Oxford University Press.
Delgado, M. A., and P. M. Robinson, 1992, Nonparametric and Semiparametric Methods for Economic Research, Journal of Economic Surveys, 6, 201–49.
Diebold, F. X., and R. S. Mariano, 1995, Comparing Predictive Accuracy, Journal of Business and Economic Statistics, 13, 253–63.
Dolado, J. J., and H. Lütkepohl, 1994, Making Wald Tests Work for Cointegrated VAR Systems, Econometric Reviews.
Engle, R. F., 1974, Band Spectrum Regression, International Economic Review, 15, 1–11.
1982, Autoregressive Conditional Heteroskedasticity with Estimates of UK Inflation, Econometrica, 50, 987–1007.
Fair, R. C., and R. J. Shiller, 1990, Comparing Information in Forecasts from Econometric Models, American Economic Review, 80, 375–89.
Fama, E., 1965, The Behavior of Stock Market Prices, Journal of Business, 38, 34–105.
1970, Efficient Capital Markets: A Review of Theory and Empirical Work, Journal of Finance, 25, 383–417.
Frisch, R., 1934, Statistical Confluence Analysis by Means of Complete Regression Systems, Oslo: Universitets Økonomiske Institut.
Fuller, W. A., 1976, Introduction to Statistical Time Series, New York: John Wiley.
Gallant, A. R., and H. White, 1988, A Unified Theory of Estimation and Inference for Nonlinear Dynamic Models, New York: Basil Blackwell.
Geweke, J., 1982, Measures of Linear Dependence and Feedback Between Time Series, Journal of the American Statistical Association, 77, 304–24.
Geweke, J., R. Meese, and W. Dent, 1983, Comparing Alternative Tests of Causality in Temporal Systems, Journal of Econometrics, 21, 161–94.
Ghysels, E., C. W. J. Granger, and P. L. Siklos, 1996, Is Seasonal Adjustment a Linear or Nonlinear Data-Filtering Process?, Journal of Business and Economic Statistics, 14, 374–86.
Ghysels, E., H. S. Lee, and J. Noh, 1994, Testing for Unit Roots in Seasonal Time Series – Some Theoretical Extensions and a Monte Carlo Investigation, Journal of Econometrics, 62, 415–42.
Granger, C. W. J., 1957, A Statistical Model for Sunspot Activity, The Astrophysical Journal, 126, 152–8.
1963, Economic Processes Involving Feedback, Information and Control, 6, 28–48.
1966, The Typical Spectral Shape of an Economic Variable, Econometrica, 34, 150–61.
1967, Simple Trend-Fitting for Long-Range Forecasting, Management Decision, Spring, 29–34.
1974, Trading in Commodities, Cambridge, England: Woodhead-Faulkner.
1977, Tendency Towards Normality of Linear Combinations of Random Variables, Metrika, 23, 237–48.
1979, Seasonality: Causation, Interpretation and Implications, in A. Zellner, ed., Seasonal Analysis of Economic Time Series, Economic Research Report, ER-1, Bureau of the Census.
1980, Forecasting in Business and Economics, San Diego: Academic Press.
1992, Comment on Two Papers Concerning Chaos and Statistics by S. Chatterjee and M. Yilmaz and by M. Berliner, Statistical Science, 7, 69–122.
1994, Is Chaotic Economic Theory Relevant for Economics?, Journal of International and Comparative Economics, forthcoming.
Granger, C. W. J., and A. P. Andersen, 1978, An Introduction to Bilinear Time Series Models, Göttingen: Vandenhoeck and Ruprecht.
Granger, C. W. J., and M. Hatanaka, 1964, Spectral Analysis of Economic Time Series, Princeton, NJ: Princeton University Press.
Granger, C. W. J., M. Kamstra, and H. White, 1989, Interval Forecasting: An Analysis Based Upon ARCH-Quantile Estimators, Journal of Econometrics, 40, 87–96.
1995, Comments on Testing Economic Theories and the Use of Model Selection Criteria, Journal of Econometrics, 67, 173–87.
Granger, C. W. J., and W. C. Labys, 1970, Speculation, Hedging and Forecasts of Commodity Prices, Lexington, MA: Heath and Company.
Granger, C. W. J., and J.-L. Lin, 1994, Causality in the Long-Run, Econometric Theory, 11, 530–6.
Granger, C. W. J., and O. Morgenstern, 1963, Spectral Analysis of New York Stock Market Prices, Kyklos, 16, 1–27. Reprinted in P. H. Cootner, ed., The Random Character of Stock Market Prices, Cambridge, MA: MIT Press, 1964.
1970, Predictability of Stock Market Prices, Lexington, MA: Heath and Company.
Granger, C. W. J., and M. Morris, 1976, Time Series Modeling and Interpretation, Journal of the Royal Statistical Society Series A, 139, 246–57.
Granger, C. W. J., and H. L. Nelson, 1979, Experience with Using the Box-Cox Transformation When Forecasting Economic Time Series, Journal of Econometrics, 9, 57–69.
Granger, C. W. J., and P. Newbold, 1977, Forecasting Economic Time Series, New York: Academic Press.
Granger, C. W. J., and C.-Y. Sin, 1997, Estimating and Forecasting Quantiles with Asymmetric Least Squares, Working Paper, University of California, San Diego.
Granger, C. W. J., and T. Teräsvirta, 1993, Modeling Nonlinear Dynamic Relationships, Oxford: Oxford University Press.
Granger, C. W. J., and P. Thompson, 1987, Predictive Consequences of Using Conditioning on Causal Variables, Economic Theory, 3, 150–2.
Gumbel, D., 1958, Statistical Theory of Floods and Droughts, Journal I.W.E., 12, 157–67.
Hamilton, J. D., 1994, Time Series Analysis, Princeton, NJ: Princeton University Press.
Hannan, E. J., 1967, Measurement of a Wandering Signal Amid Noise, Journal of Applied Probability, 4, 90–102.
1970, Multiple Time Series, New York: Wiley.
Hansen, L. P., and T. J. Sargent, 1980, Formulating and Estimating Dynamic Linear Rational Expectations Models, Journal of Economic Dynamics and Control, 2, No. 1, 7–46.
Härdle, W., 1990, Applied Nonparametric Regression, Cambridge: Cambridge University Press.
Hendry, D. F., 1995, Dynamic Econometrics, Oxford, England: Oxford University Press.
Hendry, D. F., and G. E. Mizon, 1978, Serial Correlation as a Convenient Simplification, Not a Nuisance: A Comment on a Study of the Demand for Money by the Bank of England, Economic Journal, 88, 549–63.
Hendry, D. F., A. R. Pagan, and J. D. Sargan, 1984, Dynamic Specification, Chapter 18, in M. D. Intriligator and Z. Griliches, eds., Handbook of Econometrics, Vol. II, Amsterdam: North Holland.
Hipel, K. W., and A. I. McLeod, 1978a, Preservation of the Rescaled Adjusted Range, 2: Simulation Studies Using Box-Jenkins Models, Water Resources Research, 14, 509–16.
1978b, Preservation of the Rescaled Adjusted Range, 3: Fractional Gaussian Noise Algorithms, Water Resources Research, 14, 517–18.
Hoffman, D. L., and R. H. Rasche, 1991, Long-Run Income and Interest Elasticities of Money Demand in the United States, Review of Economics and Statistics, 73, 665–74.
1996, Assessing Forecast Performance in a Cointegrated System, Journal of Applied Econometrics, 11, 495–517.
Hoover, K. D., 1993, Causality and Temporal Order in Macroeconomics or Why Even Economists Don’t Know How to Get Causes from Probabilities, British Journal for the Philosophy of Science, December.
Horvath, M. T. K., and M. W. Watson, 1995, Testing for Cointegration When Some of the Cointegrating Vectors Are Prespecified, Econometric Theory, 11, No. 5, 952–84.
Hosoya, Y., 1977, On the Granger Condition for Non-Causality, Econometrica, 45, 1735–6.
Hurst, H. E., 1951, Long-term Storage Capacity of Reservoirs, Transactions of the American Society of Civil Engineers, 116, 770–99.
1956, Methods of Using Long Term Storage in Reservoirs, Proceedings of the Institute of Civil Engineers, 1, 519–43.
Johansen, S., 1988a, The Mathematical Structure of Error Correction Models, in N. U. Prabhu, ed., Contemporary Mathematics, Vol. 80: Structural Inference for Stochastic Processes, Providence, RI: American Mathematical Society.
1988b, Statistical Analysis of Cointegrating Vectors, Journal of Economic Dynamics and Control, 12, 231–54.
Khintchine, A., 1934, Korrelationstheorie der stationären stochastischen Prozesse, Mathematische Annalen, 109, 604–15.
King, R., C. I. Plosser, J. H. Stock, and M. W. Watson, 1991, Stochastic Trends and Economic Fluctuations, American Economic Review, 81, No. 4, 819–40.
Kolb, R. A., and H. O. Stekler, 1993, Are Economic Forecasts Significantly Better Than Naive Predictions? An Appropriate Test, International Journal of Forecasting, 9, 117–20.
Kolmogorov, A. N., 1933, Grundbegriffe der Wahrscheinlichkeitsrechnung, Ergebnisse der Mathematik. Published in English in 1950 as Foundations of the Theory of Probability, Bronx, NY: Chelsea.
1941a, Stationary Sequences in Hilbert Space (Russian), Bull. Math. Univ. Moscow, 2, No. 6, 40.
1941b, Interpolation und Extrapolation von stationären zufälligen Folgen (Russian, German summary), Bull. Acad. Sci. U.R.S.S. Ser. Math., 5, 3–14.
Kosobud, R., and L. Klein, 1961, Some Econometrics of Growth: Great Ratios of Economics, Quarterly Journal of Economics, 25, 173–98.
Lawrence, A. J., and N. T. Kottegoda, 1977, Stochastic Modeling of River Flow Time Series, Journal of the Royal Statistical Society Series A, 140, 1–47.
Lee, T.-H., H. White, and C. W. J. Granger, 1993, Testing for Neglected Nonlinearity in Time Series Models: A Comparison of Neural Network Methods and Alternative Tests, Journal of Econometrics, 56, 269–90.
Leitch, G., and J. E. Tanner, 1991, Economic Forecast Evaluation: Profits Versus the Conventional Error Measures, American Economic Review, 81, 580–90.
Li, Q., 1998, Efficient Estimation of Additive Partially Linear Models, International Economic Review, forthcoming.
Lin, J.-L., and R. S. Tsay, 1996, Co-integration Constraint and Forecasting: An Empirical Examination, Journal of Applied Econometrics, 11, 519–38.
Linton, O., and J. P. Nielsen, 1995, A Kernel Method of Estimating Structured Nonparametric Regression Based on Marginal Integration, Biometrika, 82, 91–100.
Lippi, M., and P. Zaffaroni, 1999, Contemporaneous Aggregation of Linear Dynamic Models in Large Economies, Mimeo, Universita La Sapienza and Banca d’Italia.
Liu, T., C. W. J. Granger, and W. Heller, 1992, Using the Correlation Exponent to Decide whether an Economic Series Is Chaotic, Journal of Applied Econometrics, 7S, 525–40. Reprinted in M. H. Pesaran and S. M. Potter, eds., Nonlinear Dynamics, Chaos, and Econometrics, Chichester: Wiley.
Lo, A., 1991, Long-Term Memory in Stock Prices, Econometrica, 59, 1279–313.
Lobato, I., and P. M. Robinson, 1998, A Nonparametric Test for I(0), Review of Economic Studies, 65, 475–95.
Lobato, I., and N. E. Savin, 1998, Real and Spurious Long-Memory Properties of Stock-Market Data, Journal of Business and Economic Statistics, 16, No. 3, 261–7.
Lütkepohl, H., 1991, Introduction to Multiple Time Series Analysis, New York: Springer-Verlag.
Macaulay, F. R., 1931, The Smoothing of Time Series, New York, NY: National Bureau of Economic Research.
Mandelbrot, B., 1963, The Variation of Certain Speculative Prices, Journal of Business, 36, 394–419.
Mandelbrot, B. B., and J. W. Van Ness, 1968, Fractional Brownian Motions, Fractional Brownian Noises and Applications, SIAM Review, 10, 422–37.
Mandelbrot, B. B., and J. Wallis, 1968, Noah, Joseph and Operational Hydrology, Water Resources Research, 4, 909–18.
Mann, H. B., and A. Wald, 1943, On the Statistical Treatment of Linear Stochastic Difference Equations, Econometrica, 11, 173–220.
McLeish, D. L., 1978, A Maximal Inequality and Dependent Strong Laws, Annals of Probability, 3, 829–39.
Meese, R. A., and K. Rogoff, 1983, Empirical Exchange Rate Models of the Seventies: Do They Fit Out of Sample?, Journal of International Economics, 14, 3–24.
Mincer, J., and V. Zarnowitz, 1969, The Evaluation of Economic Forecasts, in J. Mincer, ed., Economic Forecasts and Expectations, New York: National Bureau of Economic Research.
Mizrach, B., 1991, Forecast Comparison in L2, Working Paper, Rutgers University.
Nerlove, M., 1964, Spectral Analysis of Seasonal Adjustment Procedures, Econometrica, 32, 241–86.
1965, A Comparison of a Modified Hannan and the BLS Seasonal Adjustment Filters, Journal of the American Statistical Association, 60, 442–91.
Nerlove, M., D. Grether, and J. Carvalho, 1979, Analysis of Economic Time Series – A Synthesis, New York: Academic Press.
Orcutt, G. H., 1952, Actions, Consequences and Causal Relations, Review of Economics and Statistics, 34, 305–13.
Pesaran, M. H., and A. G. Timmerman, 1992, A Simple Nonparametric Test of Predictive Performance, Journal of Business and Economic Statistics, 10, 461–5.
1994, A Generalization of the Nonparametric Henriksson-Merton Test of Market Timing, Economics Letters, 44, 1–7.
Phillips, P. C. B., 1986, Understanding Spurious Regressions in Econometrics, Journal of Econometrics, 33, No. 3, 311–40.
1991, Optimal Inference in Cointegrated Systems, Econometrica, 59, 283–306.
1997, ET Interview: Clive Granger, Econometric Theory, 13, 253–304.
Phillips, P. C. B., and S. N. Durlauf, 1986, Multiple Time Series Regression with Integrated Processes, Review of Economic Studies, 53, No. 4, 473–96.
Phillips, P. C. B., and S. Ouliaris, 1990, Asymptotic Properties of Residual Based Tests for Cointegration, Econometrica, 58, No. 1, 165–93.
Pierce, D. A., 1979, Signal Extraction Error in Nonstationary Time Series, The Annals of Statistics, 7, 1303–20.
Priestley, M. B., 1981, Spectral Analysis and Time Series, New York: Academic Press.
Rissman, E., and J. Campbell, 1994, Long-run Labor Market Dynamics and Short-run Inflation, Economic Perspectives.
Robinson, P. M., 1988, Root N-consistent Semiparametric Regression, Econometrica, 56, 931–54.
1994, Semiparametric Analysis of Long Memory Time Series, The Annals of Statistics, 22, 515–39.
1995, Gaussian Semiparametric Estimation of Long Range Dependence, The Annals of Statistics, 23, 1630–61.
Saikkonen, P., 1991, Asymptotically Efficient Estimation of Cointegrating Regressions, Econometric Theory, 7, 1–21.
Samuelson, P., 1965, Proof that Properly Anticipated Prices Fluctuate Randomly, Industrial Management Review, 6, 41–9.
Sargan, J. D., 1964, Wages and Prices in the United Kingdom: A Study in Econometric Methodology, in P. E. Hart, G. Mills, and J. N. Whittaker, eds., Econometric Analysis of National Economic Planning, London: Butterworths.
Sargent, T. J., 1987, Macroeconomic Theory, 2nd ed., New York: Academic Press.
Shiskin, J., A. H. Young, and J. C. Musgrave, 1967, The X-11 Variant of the Census Method II Seasonal Adjustment Program, Technical Paper 15, U.S. Bureau of the Census, Washington, DC.
Simon, H. A., 1953, Causal Ordering and Identifiability, in W. C. Hood and T. C. Koopmans, eds., Studies in Econometric Method, Cowles Commission Monograph 14, New York.
Sims, C. A., 1972, Money, Income, and Causality, American Economic Review, 62, 540–52.
1974, Seasonality in Regression, Journal of the American Statistical Association, 69, 618–26.
1980, Macroeconomics and Reality, Econometrica, 48, No. 1, 1–48.
Slutzky, E., 1927, The Summation of Random Causes as the Source of Cyclic Processes, Econometrica, 5, 105–46, 1937. Translated from the earlier paper of the same title in Problems of Economic Conditions, Moscow: Conjuncture Institute.
Stock, J. H., 1987, Asymptotic Properties of Least Squares Estimators of Cointegrating Vectors, Econometrica, 55, 1035–56.
1989, Nonparametric Policy Analysis, Journal of the American Statistical Association, 84, 567–75.
Stock, J. H., and M. W. Watson, 1993, A Simple Estimator of Cointegrating Vectors in Higher Order Integrated Systems, Econometrica, 61, No. 4, 783–820.
Swanson, N. R., and C. W. J. Granger, 1997, Impulse Response Functions Based on a Causal Approach to Residual Orthogonalization in Vector Autoregressions, Journal of the American Statistical Association, 92, 357–67.
Swanson, N. R., and H. White, 1995, A Model Selection Approach to Assessing the Information in the Term Structure Using Linear Models and Artificial Neural Networks, Journal of Business and Economic Statistics, 13, 265–75.
1997, A Model Selection Approach to Real-Time Macroeconomic Forecasting Using Linear Models and Artificial Neural Networks, Review of Economics and Statistics, 79, 540–50.
Teräsvirta, T., D. Tjostheim, and C. W. J. Granger, 1994, Aspects of Modeling Nonlinear Time Series, in Handbook of Econometrics, Vol. IV, Amsterdam: Elsevier.
Toda, H. Y., and P. C. B. Phillips, 1993, Vector Autoregressions and Causality, Econometrica, 61, 1367–93.
1994, Vector Autoregression and Causality: A Theoretical Overview and Simulation Study, Econometric Reviews, 13, 259–85.
Toda, H. Y., and T. Yamamoto, 1995, Statistical Inference in Vector Autoregressions with Possibly Integrated Processes, Journal of Econometrics, 66, 225–50.
Wallis, K. F., 1974, Seasonal Adjustment and Relations between Variables, Journal of the American Statistical Association, 69, 18–32.
Wiener, N., 1956, The Theory of Prediction, in E. F. Beckenbach, ed., Modern Mathematics for Engineers, Series 1.
Weiss, A. A., 1996, Estimating Time Series Models Using the Relevant Cost Function, Journal of Applied Econometrics, 11, 539–60.
Wold, H., 1938, A Study in the Analysis of Stationary Time Series, Stockholm: Almqvist and Wiksell.
Working, H., 1960, Note on the Correlation of First Differences of Averages in a Random Chain, Econometrica, 28, 916–18.
Yoo, B. S., 1987, Co-integrated Time Series Structure, Ph.D. Dissertation, UCSD.
Young, A. H., 1968, Linear Approximations to the Census and BLS Seasonal Adjustment Methods, Journal of the American Statistical Association, 63, 445–71.
Yule, G. U., 1921, On the Time-Correlation Problem, with Especial Reference to the Variate-Difference Correlation Method, Journal of the Royal Statistical Society, 84, 497–526.
1926, Why Do We Sometimes Get Nonsense Correlations Between Time Series? A Study in Sampling and the Nature of Time Series, Journal of the Royal Statistical Society, 89, 1–64.
1927, On a Method of Investigating Periodicities in Disturbed Series, with Special Reference to Wolfer’s Sunspot Numbers, Philosophical Transactions, 226A.
Zellner, A., 1979, Causality and Econometrics, in K. Brunner and A. H. Meltzer, eds., Three Aspects of Policy and Policymaking, Carnegie-Rochester Conference Series, Vol. 10, Amsterdam: North Holland.
Zellner, A., and F. Palm, 1974, Time Series Analysis and Simultaneous Equation Econometric Models, Journal of Econometrics, 2, 17–54.

PART ONE

CAUSALITY

CHAPTER 1

Investigating Causal Relations by Econometric Models and Cross-Spectral Methods*

C. W. J. Granger

There occurs on some occasions a difficulty in deciding the direction of causality between two related variables and also whether or not feedback is occurring. Testable definitions of causality and feedback are proposed and illustrated by use of simple two-variable models. The important problem of apparent instantaneous causality is discussed and it is suggested that the problem often arises due to slowness in recording information or because a sufficiently wide class of possible causal variables has not been used. It can be shown that the cross spectrum between two variables can be decomposed into two parts, each relating to a single causal arm of a feedback situation. Measures of causal lag and causal strength can then be constructed. A generalization of this result with the partial cross spectrum is suggested.

The object of this paper is to throw light on the relationships between certain classes of econometric models involving feedback and the functions arising in spectral analysis, particularly the cross spectrum and the partial cross spectrum. Causality and feedback are here defined in an explicit and testable fashion. It is shown that in the two-variable case the feedback mechanism can be broken down into two causal relations and that the cross spectrum can be considered as the sum of two cross spectra, each closely connected with one of the causations. The next three sections of the paper briefly introduce those aspects of spectral methods, model building, and causality which are required later. Section IV presents the results for the two-variable case and Section V generalizes these results for three variables.

* Econometrica, 37, 1969, 424–438. Reprinted in Rational Expectations, edited by T. Sargent and R. Lucas, 1981, University of Minnesota Press.

I. SPECTRAL METHODS

If $X_t$ is a stationary time series with mean zero, there are two basic spectral representations associated with the series: (i) the Cramer representation,

$$X_t = \int_{-\pi}^{\pi} e^{it\omega} \, dz_x(\omega), \tag{1}$$

where $z_x(\omega)$ is a complex random process with uncorrelated increments, so that

$$E[dz_x(\omega)\,\overline{dz_x(\lambda)}] = 0, \quad \omega \neq \lambda, \qquad = dF_x(\omega), \quad \omega = \lambda; \tag{2}$$

(ii) the spectral representation of the covariance sequence,

$$\mu_\tau^{xx} = E[X_t X_{t-\tau}] = \int_{-\pi}^{\pi} e^{i\tau\omega} \, dF_x(\omega). \tag{3}$$

If $X_t$ has no strictly periodic components, $dF_x(\omega) = f_x(\omega)\,d\omega$, where $f_x(\omega)$ is the power spectrum of $X_t$. The estimation and interpretation of power spectra have been discussed in Granger and Hatanaka (1964) and Nerlove (1964). The basic idea underlying the two spectral representations is that the series can be decomposed as a sum (i.e., integral) of uncorrelated components, each associated with a particular frequency. It follows that the variance of the series is equal to the sum of the variances of the components. The power spectrum records the variances of the components as a function of their frequencies and indicates the relative importance of the components in terms of their contribution to the overall variance.
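For a concrete illustration of these ideas (ours, not the paper's), the power spectrum of a simulated series can be estimated with smoothed-periodogram methods; a persistent AR(1) puts most of its variance at low frequencies, the "typical spectral shape" emphasized in the editors' introduction.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
# A persistent AR(1), x_t = 0.9 x_{t-1} + e_t, simulated as a linear filter.
x = signal.lfilter([1.0], [1.0, -0.9], rng.standard_normal(8192))

# Welch's smoothed-periodogram estimate of the power spectrum f_x(w).
freq, pxx = signal.welch(x, nperseg=512)
print(freq[np.argmax(pxx)])                      # the peak sits at (or near) zero
print(pxx.sum() * (freq[1] - freq[0]), x.var())  # spectrum integrates to about the variance
```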

If $X_t$ and $Y_t$ are a pair of stationary time series, so that $Y_t$ has the spectrum $f_y(\omega)$ and Cramer representation

$$Y_t = \int_{-\pi}^{\pi} e^{it\omega} \, dz_y(\omega),$$

then the cross spectrum (strictly, power cross spectrum) $Cr(\omega)$ between $X_t$ and $Y_t$ is a complex function of $\omega$ and arises both from

$$E[dz_x(\omega)\,\overline{dz_y(\lambda)}] = 0, \quad \omega \neq \lambda, \qquad = Cr(\omega)\,d\omega, \quad \omega = \lambda,$$

and

$$\mu_\tau^{xy} = E[X_t Y_{t-\tau}] = \int_{-\pi}^{\pi} e^{i\tau\omega} Cr(\omega) \, d\omega.$$

It follows that the relationship between two series can be expressed only in terms of the relationships between corresponding frequency components. Two further functions are defined from the cross spectrum as being more useful for interpreting relationships between variables: (i) the coherence,

$$C(\omega) = \frac{|Cr(\omega)|^2}{f_x(\omega)\, f_y(\omega)},$$

which is essentially the square of the correlation coefficient between corresponding frequency components of $X_t$ and $Y_t$, and (ii) the phase,

$$\phi(\omega) = \tan^{-1}\!\left[\frac{\text{imaginary part of } Cr(\omega)}{\text{real part of } Cr(\omega)}\right],$$

which measures the phase difference between corresponding frequency components. When one variable is leading the other, $\phi(\omega)/\omega$ measures the extent of the time lag. Thus, the coherence is used to measure the degree to which two series are related, and the phase may be interpreted in terms of time lags. Estimation and interpretation of the coherence and phase function are discussed in Granger and Hatanaka (1964, chaps. 5 and 6). It is worth noting that $\phi(\omega)$ has been found to be robust under changes in the stationarity assumption (Granger and Hatanaka 1964, chap. 9).
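These quantities are straightforward to estimate with smoothed cross-periodogram methods. A minimal sketch (ours; the scipy routines are assumptions, not the paper's tools): with $X_t$ a noisy five-period-lagged copy of $Y_t$, the coherence is high and the phase slope recovers the lag.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
n, lag = 4096, 5
y = rng.standard_normal(n)
x = np.roll(y, lag) + 0.5 * rng.standard_normal(n)   # X_t = Y_{t-5} + noise

f, Cxy = signal.coherence(x, y, nperseg=256)   # squared coherence C(w)
f, Pxy = signal.csd(x, y, nperseg=256)         # cross spectrum Cr(w)
phase = np.angle(Pxy)                          # phase phi(w)

# Away from frequency zero, phase / (2*pi*f) is close to the five-period lag
# (up to sign conventions), while the coherence is well above zero.
print(phase[1:8] / (2 * np.pi * f[1:8]))
print(Cxy[1:8])
```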

If $X_t$, $Y_t$, and $Z_t$ are three time series, the problem of possibly misleading correlation and coherence values between two of them, due to the influence on both of the third variable, can be overcome by the use of partial cross-spectral methods.

The spectral, cross-spectral matrix $[f_{ij}(\omega)] = S(\omega)$ between the three variables is given by

$$E\left\{\begin{bmatrix} dz_x(\omega) \\ dz_y(\omega) \\ dz_z(\omega) \end{bmatrix} \left[\, \overline{dz_x(\omega)} \;\; \overline{dz_y(\omega)} \;\; \overline{dz_z(\omega)} \,\right]\right\} = [f_{ij}(\omega)]\, d\omega,$$

where

$$f_{ij}(\omega) = f_x(\omega) \quad \text{when } i = j = x, \qquad = Cr_{xy}(\omega) \quad \text{when } i = x,\; j = y, \text{ etc.}$$

The partial spectral, cross-spectral matrix between $X_t$ and $Y_t$ given $Z_t$ is found by partitioning $S(\omega)$ into components:

$$S = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix}.$$

The partitioning lines are between the second and third rows, and second and third columns. The partial spectral matrix is then

$$S_{xy,z} = S_{11} - S_{12} S_{22}^{-1} S_{21}.$$

Interpretation of the components of this matrix is similar to that involving partial correlation coefficients. Thus, the partial cross spectrum can be used to find the relationship between two series once the effect of a third series has been taken into account. The partial coherence and phase are defined directly from the partial cross spectrum as before. Interpretation of all of these functions and generalizations to the n-variable case can be found in Granger and Hatanaka (1964, chap. 5).
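The partial cross spectrum is just a frequency-by-frequency Schur complement, so it can be computed directly from estimated cross spectra. A rough sketch under the same scipy assumptions as above; the helper names are ours:

```python
import numpy as np
from scipy import signal

def cross_spectral_matrix(series, nperseg=256):
    """Estimate S(w): a (n_freq, k, k) array with S[:, i, j] the cross
    spectrum between series i and series j."""
    k = len(series)
    f, _ = signal.csd(series[0], series[0], nperseg=nperseg)
    S = np.empty((len(f), k, k), dtype=complex)
    for i in range(k):
        for j in range(k):
            _, S[:, i, j] = signal.csd(series[i], series[j], nperseg=nperseg)
    return f, S

def partial_cross_spectral_matrix(S):
    """S_xy,z = S11 - S12 S22^{-1} S21 at every frequency, partitioning the
    first two variables (X, Y) against the remaining ones (Z)."""
    S11, S12 = S[:, :2, :2], S[:, :2, 2:]
    S21, S22 = S[:, 2:, :2], S[:, 2:, 2:]
    return S11 - S12 @ np.linalg.inv(S22) @ S21

# Example: Z drives both X and Y; conditioning on Z removes most of the
# apparent X-Y relationship.
rng = np.random.default_rng(0)
z = rng.standard_normal(4096)
x = z + 0.5 * rng.standard_normal(4096)
y = z + 0.5 * rng.standard_normal(4096)
f, S = cross_spectral_matrix([x, y, z])
Sxy_z = partial_cross_spectral_matrix(S)
```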

II. FEEDBACK MODELS

Consider initially a stationary random vector $X_t = \{X_{1t}, X_{2t}, \ldots, X_{kt}\}$, each component of which has zero mean. A linear model for such a vector consists of a set of linear equations by which all or a subset of the components of $X_t$ are “explained” in terms of present and past values of components of $X_t$. The part not explained by the model may be taken to consist of a white-noise random vector $\epsilon_t$, such that

$$E[\epsilon_t \epsilon_s'] = 0, \quad t \neq s, \qquad = I, \quad t = s, \tag{4}$$

where $I$ is a unit matrix and $0$ is a zero matrix. Thus, the model may be written as

$$A_0 X_t = \sum_{j=1}^{m} A_j X_{t-j} + \epsilon_t, \tag{5}$$

where $m$ may be infinite and the $A$'s are matrices. The completely general model as defined does not have unique matrices $A_j$, as an orthogonal transformation $Y_t = \Lambda X_t$ can be performed which leaves the form of the model the same, where $\Lambda$ is an orthogonal matrix, i.e., a square matrix having the property $\Lambda\Lambda' = I$. This is seen to be the case as $\eta_t = \Lambda\epsilon_t$ is still a white-noise vector. For the model to be determined, sufficient a priori knowledge is required about the values of the coefficients of at least one of the $A$'s, in order for constraints to be set up so that such transformations are not possible. This is the so-called identification problem of classical econometrics. In the absence of such a priori constraints, $\Lambda$ can always be chosen so that $A_0$ is a triangular matrix, although not uniquely, thus giving a spurious causal-chain appearance to the model.

Models for which $A_0$ has nonvanishing terms off the main diagonal will be called “models with instantaneous causality.” Models for which $A_0$ has no nonzero term off the main diagonal will be called “simple causal models.” These names will be explained later. Simple causal models are uniquely determined if orthogonal transforms such as $\Lambda$ are not possible without changing the basic form of the model. It is possible for a model apparently having instantaneous causality to be transformed using an orthogonal $\Lambda$ to a simple causal model.

These definitions can be illustrated simply in the two-variable case. Suppose the variables are $X_t$, $Y_t$. Then the model considered is of the form

$$X_t + b_0 Y_t = \sum_{j=1}^{m} a_j X_{t-j} + \sum_{j=1}^{m} b_j Y_{t-j} + \epsilon_t',$$
$$Y_t + c_0 X_t = \sum_{j=1}^{m} c_j X_{t-j} + \sum_{j=1}^{m} d_j Y_{t-j} + \epsilon_t''. \tag{6}$$

If $b_0 = c_0 = 0$, then this will be a simple causal model. Otherwise it will be a model with instantaneous causality.

Whether or not a model involving some group of economic variables can be a simple causal model depends on what one considers to be the speed with which information flows through the economy and also on the sampling period of the data used. It might be true that when quarterly data are used, for example, a simple causal model is not sufficient to explain the relationships between the variables, while for monthly data a simple causal model would be all that is required. Thus, some nonsimple causal models may be constructed not because of the basic properties of the economy being studied but because of the data being used. It has been shown elsewhere (Granger 1963; Granger and Hatanaka 1964, chap. 7) that a simple causal mechanism can appear to be a feedback mechanism if the sampling period for the data is so long that details of causality cannot be picked out.
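That last point is easy to verify by simulation. In the sketch below (a hypothetical design of ours, not from the paper), $Y$ causes $X$ with a one-period lag and there is no instantaneous link; once every second observation is discarded, however, adding the current value of $Y$ still lowers the prediction error for $X$, so the coarsely sampled data suggest instantaneous causality.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 20_000
e1, e2 = rng.standard_normal(n), rng.standard_normal(n)
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + e2[t]
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + e1[t]   # one-period causal lag only

xs, ys = x[::2], y[::2]                 # record only every second observation
past = sm.add_constant(np.column_stack([xs[:-1], ys[:-1]]))
both = sm.add_constant(np.column_stack([xs[:-1], ys[:-1], ys[1:]]))

v_past = sm.OLS(xs[1:], past).fit().resid.var()
v_both = sm.OLS(xs[1:], both).fit().resid.var()
print(v_past, v_both)   # v_both < v_past: apparent instantaneous causality
```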

III. CAUSALITY

Cross-spectral methods provide a useful way of describing the relationship between two (or more) variables when one is causing the other(s). In many realistic economic situations, however, one suspects that feedback is occurring. In these situations the coherence and phase diagrams become difficult or impossible to interpret, particularly the phase diagram. The problem is how to devise definitions of causality and feedback which permit tests for their existence. Such a definition was proposed in earlier papers (Granger 1963; Granger and Hatanaka 1964, chap. 7). In this section, some of these definitions will be discussed and extended. Although later sections of this paper will use this definition of causality, they will not completely depend upon it. Previous papers concerned with causality in economic systems (Basman 1963; Orcutt 1952; Simon 1953; Strotz and Wold 1960) have been particularly concerned with the problem of determining a causal interpretation of simultaneous equation systems, usually with instantaneous causality. Feedback is not explicitly discussed. This earlier work has concentrated on the form that the parameters of the equations should take in order to discern definite causal relationships. The stochastic elements and the natural time ordering of the variables play relatively minor roles in the theory. In the alternative theory to be discussed here, the stochastic nature of the variables and the direction of the flow of time will be central features. The theory is, in fact, not relevant for nonstochastic variables and will rely entirely on the assumption that the future cannot cause the past. This theory will not, of course, be contradictory to previous work, but there appears to be little common ground. Its origins may be found in a suggestion by Wiener (1956). The relationship between the definition discussed here and the work of Good (1962) has yet to be determined.

If $A_t$ is a stationary stochastic process, let $\overline{A}_t$ represent the set of past values $\{A_{t-j}, j = 1, 2, \ldots, \infty\}$ and $\overline{\overline{A}}_t$ represent the set of past and present values $\{A_{t-j}, j = 0, 1, \ldots, \infty\}$. Further, let $\overline{A}(k)$ represent the set $\{A_{t-j}, j = k, k + 1, \ldots, \infty\}$.

Denote the optimum, unbiased, least-squares predictor of $A_t$ using the set of values $B_t$ by $P_t(A|B)$. Thus, for instance, $P_t(X|\overline{X})$ will be the optimum predictor of $X_t$ using only past $X_t$. The predictive error series will be denoted by $\epsilon_t(A|B) = A_t - P_t(A|B)$. Let $\sigma^2(A|B)$ be the variance of $\epsilon_t(A|B)$.

The initial definitions of causality, feedback, and so forth, will be very general in nature. Testable forms will be introduced later. Let $U_t$ be all the information in the universe accumulated since time $t - 1$ and let $U_t - Y_t$ denote all this information apart from the specified series $Y_t$. We then have the following definitions.

Definition 1: Causality. If $\sigma^2(X|\overline{U}) < \sigma^2(X|\overline{U - Y})$, we say that $Y$ is causing $X$, denoted by $Y_t \Rightarrow X_t$. We say that $Y_t$ is causing $X_t$ if we are better able to predict $X_t$ using all available information than if the information apart from $Y_t$ had been used.

Definition 2: Feedback. If $\sigma^2(X|\overline{U}) < \sigma^2(X|\overline{U - Y})$ and $\sigma^2(Y|\overline{U}) < \sigma^2(Y|\overline{U - X})$, we say that feedback is occurring, which is denoted $Y_t \Leftrightarrow X_t$; i.e., feedback is said to occur when $X_t$ is causing $Y_t$ and also $Y_t$ is causing $X_t$.

Definition 3: Instantaneous Causality. If $\sigma^2(X|\overline{U}, \overline{\overline{Y}}) < \sigma^2(X|\overline{U})$, we say that instantaneous causality $Y_t \Rightarrow X_t$ is occurring. In other words, the current value of $X_t$ is better “predicted” if the present value of $Y_t$ is included in the “prediction” than if it is not.

Definition 4: Causality Lag. If $Y_t \Rightarrow X_t$, we define the (integer) causality lag $m$ to be the least value of $k$ such that $\sigma^2[X|\overline{U} - \overline{Y}(k)] < \sigma^2[X|\overline{U} - \overline{Y}(k + 1)]$. Thus, knowing the values $Y_{t-j}$, $j = 0, 1, \ldots, m - 1$, will be of no help in improving the prediction of $X_t$.
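Definition 1 can be operationalized with finite-order linear predictors, in the spirit of the testable forms introduced below: compare the residual variance of the best linear predictor of $X_t$ from its own past with the variance obtained when past $Y_t$ is added. A minimal sketch (our own construction, with hypothetical coefficients):

```python
import numpy as np
import statsmodels.api as sm

def predictive_variance(target, predictors, lags=4):
    """Residual variance of the least-squares linear predictor of target[t]
    built from `lags` past values of each series in `predictors`."""
    n = len(target)
    cols = [s[lags - j : n - j] for s in predictors for j in range(1, lags + 1)]
    design = sm.add_constant(np.column_stack(cols))
    return sm.OLS(target[lags:], design).fit().resid.var()

rng = np.random.default_rng(0)
n = 2000
y = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + rng.standard_normal()

v_own = predictive_variance(x, [x])        # sigma^2(X | past X)
v_add = predictive_variance(x, [x, y])     # sigma^2(X | past X, past Y)
print(v_own, v_add)   # v_add is clearly smaller, so Y => X by Definition 1
```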

The definitions have assumed that only stationary series are involved. In the nonstationary case, $\sigma(X|\overline{U})$ etc. will depend on time $t$ and, in general, the existence of causality may alter over time. The definitions can clearly be generalized to be operative for a specified time $t$. One could then talk of causality existing at this moment of time. Considering nonstationary series, however, takes us further away from testable definitions and this tack will not be discussed further.

The one completely unreal aspect of the above definitions is the use of the series $U_t$, representing all available information. The large majority of the information in the universe will be quite irrelevant, i.e., will have no causal consequence. Suppose that all relevant information is numerical in nature and belongs to the vector set of time series $Y_t^D = \{Y_t^i, i \in D\}$ for some integer set $D$. Denote the set $\{i \in D, i \neq j\}$ by $D(j)$ and $\{Y_t^i, i \in D(j)\}$ by $Y_t^{D(j)}$, i.e., the full set of relevant information except one particular series. Similarly, we could leave out more than one series with the obvious notation. The previous definitions can now be used but with $U_t$ replaced by $Y_t^D$ and $U_t - Y_t$ by $Y_t^{D(j)}$. Thus, for example, suppose that the vector set consists only of two series, $X_t$ and $Y_t$, and that all other information is irrelevant. Then $\sigma^2(X|\overline{X})$ represents the minimum predictive error variance of $X_t$ using only past $X_t$, and $\sigma^2(X|\overline{X}, \overline{Y})$ represents this minimum variance if both past $X_t$ and past $Y_t$ are used to predict $X_t$. Then $Y_t$ is said to cause $X_t$ if $\sigma^2(X|\overline{X}) > \sigma^2(X|\overline{X}, \overline{Y})$. The definition of causality is now relative to the set $D$. If relevant data has not been included in this set, then spurious causality could arise. For instance, if the set $D$ was taken to consist only of the two series $X_t$ and $Y_t$, but in fact there was a third series $Z_t$ which was causing both within the enlarged set $D' = (X_t, Y_t, Z_t)$, then for the original set $D$, spurious causality between $X_t$ and $Y_t$ may be found. This is similar to spurious correlation and partial correlation between sets of data that arise when some other statistical variable of importance has not been included.

In practice it will not usually be possible to use completely optimum predictors, unless all sets of series are assumed to be normally distributed, since such optimum predictors may be nonlinear in complicated ways. It seems natural to use only linear predictors, and the above definitions may again be used under this assumption of linearity. Thus, for instance, the best linear predictor of $X_t$ using only past $X_t$ and past $Y_t$ will be of the form

$$P_t(X|\overline{X}, \overline{Y}) = \sum_{j=1}^{\infty} a_j X_{t-j} + \sum_{j=1}^{\infty} b_j Y_{t-j},$$

where the $a_j$'s and $b_j$'s are chosen to minimize $\sigma^2(X\mid \bar X, \bar Y)$.

It can be argued that the variance is not the proper criterion to use to measure the closeness of a predictor $P_t$ to the true value $X_t$. Certainly if some other criterion were used it may be possible to reach different conclusions about whether one series is causing another. The variance does seem to be a natural criterion to use in connection with linear predictors as it is mathematically easy to handle and simple to interpret. If one uses this criterion, a better name might be "causality in mean."
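A minimal numerical sketch of this linear "causality in mean" comparison follows. It is entirely our own illustration, not a procedure from the paper: the simulated system, the lag length p, and the helper names are arbitrary choices. It fits the two least-squares predictors and compares their residual variances:

```python
import numpy as np

rng = np.random.default_rng(0)

def lagmat(z, p):
    """Matrix whose columns are z lagged 1..p (rows aligned with z[p:])."""
    return np.column_stack([z[p - k:len(z) - k] for k in range(1, p + 1)])

# Simulate a system in which Y causes X with lag 1:
# X_t = 0.5 X_{t-1} + 0.8 Y_{t-1} + e_t.
n, p = 2000, 4
y = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + rng.standard_normal()

def pred_error_var(target, regressors):
    """Residual variance of the least-squares predictor of target."""
    beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
    return (target - regressors @ beta).var()

xt = x[p:]
X_own = lagmat(x, p)                       # information set: past X only
X_full = np.hstack([X_own, lagmat(y, p)])  # past X and past Y

print(pred_error_var(xt, X_own), pred_error_var(xt, X_full))
# The second variance is clearly smaller, so Y "causes" X
# in the sense of Definition 1.
```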

The original definition of causality has now been restricted in order to reach a form which can be tested. Whenever the word causality is used in later sections it will be taken to mean "linear causality in mean with respect to a specified set $D$."

It is possible to extend the definitions to the case where a subset of series $D^*$ of $D$ is considered to cause $X_t$. This would be the case if $\sigma^2(X\mid Y^D) < \sigma^2(X\mid Y^{D-D^*})$, and then $Y^{D^*} \Rightarrow X_t$. Thus, for instance, one could ask if past $X_t$ is causing present $X_t$. Because new concepts are necessary in the consideration of such problems, they will not be discussed here in any detail.

It has been pointed out already (Granger 1963) that instantaneous causality, in which knowledge of the current value of a series helps in predicting the current value of a second series, can occasionally arise spuriously in certain cases. Suppose $Y_t \Rightarrow X_t$ with lag one unit but that the series are sampled every two time units. Then although there is no real instantaneous causality, the definitions will appear to suggest that such causality is occurring. This is because certain relevant information, the missing readings in the data, has not been used. Due to this effect, one might suggest that in many economic situations an apparent instantaneous causality would disappear if the economic variables were recorded at more frequent time intervals.

The definition of causality used above is based entirely on the predictability of some series, say $X_t$. If some other series $Y_t$ contains information in past terms that helps in the prediction of $X_t$, and if this information is contained in no other series used in the predictor, then $Y_t$ is said to cause $X_t$. The flow of time clearly plays a central role in these definitions. In the author's opinion there is little use in the practice of attempting to discuss causality without introducing time, although philosophers have tried to do so.

It also follows from the definitions that a purely deterministic series, that is, a series which can be predicted exactly from its past terms such as a nonstochastic series, cannot be said to have any causal influences other than its own past. This may seem to be contrary to common sense in certain special cases but it is difficult to find a testable alternative definition which could include the deterministic situation. Thus, for instance, if $X_t = bt$ and $Y_t = c(t+1)$, then $X_t$ can be predicted exactly by $b + X_{t-1}$ or by $(b/c)Y_{t-1}$. There seems to be no way of deciding if $Y_t$ is a causal factor of $X_t$ or not. In some cases the notion of the "simplest rule" might be applied. For example, if $X_t$ is some complicated polynomial in $t$ and $Y_t = X_{t+1}$, then it will be easier to predict $X_t$ from $Y_{t-1}$ than from past $X_t$. In some cases this rule cannot be used, as the previous example showed. In any case, experience does not indicate that one should expect economic laws to be simple in nature.

Even for stochastic series, the definitions introduced above may give apparently silly answers. Suppose $X_t = A_{t-1} + \varepsilon_t$, $Y_t = A_t + \eta_t$, and $Z_t = A_t + \gamma_t$, where $\varepsilon_t$, $\eta_t$, and $\gamma_t$ are all uncorrelated white-noise series with equal

variances and $A_t$ is some stationary series. Within the set $D = (X_t, Y_t)$ the definition gives $Y_t \Rightarrow X_t$. Within the set $D' = (X_t, Z_t)$, it gives $Z_t \Rightarrow X_t$. But within the set $D'' = (X_t, Y_t, Z_t)$, neither $Y_t$ nor $Z_t$ causes $X_t$, although the sum of $Y_t$ and $Z_t$ would do so. How is one to decide if either $Y_t$ or $Z_t$ is a causal series for $X_t$? The answer, of course, is that neither is. The causal series is $A_t$, and both $Y_t$ and $Z_t$ contain equal amounts of information about $A_t$. If the set of series within which causality was discussed was expanded to include $A_t$, then the above apparent paradox vanishes. It will often be found that constructed examples which seem to produce results contrary to common sense can be resolved by widening the set of data within which causality is defined.

IV. TWO-VARIABLE MODELS

In this section, the definitions introduced above will be illustrated using two-variable models, and results will be proved concerning the form of the cross spectrum for such models.

Let $X_t$, $Y_t$ be two stationary time series with zero means. The simple causal model is
$$X_t = \sum_{j=1}^{m} a_j X_{t-j} + \sum_{j=1}^{m} b_j Y_{t-j} + \varepsilon_t,$$
$$Y_t = \sum_{j=1}^{m} c_j X_{t-j} + \sum_{j=1}^{m} d_j Y_{t-j} + \eta_t, \tag{7}$$
where $\varepsilon_t$, $\eta_t$ are taken to be two uncorrelated white-noise series, i.e., $E[\varepsilon_t\varepsilon_s] = 0 = E[\eta_t\eta_s]$ for $s \neq t$, and $E[\varepsilon_t\eta_s] = 0$ for all $t$, $s$. In (7) $m$ can equal infinity but in practice, of course, due to the finite length of the available data, $m$ will be assumed finite and shorter than the given time series.

The definition of causality given above implies that $Y_t$ is causing $X_t$ provided some $b_j$ is not zero. Similarly $X_t$ is causing $Y_t$ if some $c_j$ is not zero. If both of these events occur, there is said to be a feedback relationship between $X_t$ and $Y_t$. It will be shown later that this new definition of causality is in fact identical to that introduced previously.

The more general model with instantaneous causality is
$$X_t + b_0 Y_t = \sum_{j=1}^{m} a_j X_{t-j} + \sum_{j=1}^{m} b_j Y_{t-j} + \varepsilon_t,$$
$$Y_t + c_0 X_t = \sum_{j=1}^{m} c_j X_{t-j} + \sum_{j=1}^{m} d_j Y_{t-j} + \eta_t. \tag{8}$$
If the variables are such that this kind of representation is needed, then instantaneous causality is occurring and a knowledge of $Y_t$ will improve the "prediction" or goodness of fit of the first equation for $X_t$.

Consider initially the simple causal model (7). In terms of the time shift operator $U$, that is, $UX_t = X_{t-1}$, these equations may be written
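As a numerical illustration of (8), and again only a hedged sketch with made-up coefficients rather than a procedure from the paper, one can check whether adding the current value $Y_t$ to a regression of $X_t$ on lagged values reduces the residual variance:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
eps, eta = rng.standard_normal(n), rng.standard_normal(n)

# A system with instantaneous causality: X_t depends on the current Y_t
# (the b0 term moved to the right-hand side), plus one lag of each series.
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    y[t] = 0.3 * x[t - 1] + 0.2 * y[t - 1] + eta[t]
    x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + 0.6 * y[t] + eps[t]

def resid_var(target, cols):
    Z = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(Z, target, rcond=None)
    return (target - Z @ beta).var()

xt = x[1:]
lagged_only = resid_var(xt, [x[:-1], y[:-1]])
with_current = resid_var(xt, [x[:-1], y[:-1], y[1:]])
print(lagged_only, with_current)  # the drop in variance signals
                                  # instantaneous causality
```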

$$X_t = a(U)X_t + b(U)Y_t + \varepsilon_t,$$
$$Y_t = c(U)X_t + d(U)Y_t + \eta_t, \tag{9}$$
where $a(U)$, $b(U)$, $c(U)$, and $d(U)$ are power series in $U$ with the coefficient of $U^0$ zero, i.e., $a(U) = \sum_{j=1}^{m} a_j U^j$, etc. Using the Cramer representations of the series, i.e.,
$$X_t = \int_{-\pi}^{\pi} e^{it\omega}\, dZ_x(\omega), \qquad Y_t = \int_{-\pi}^{\pi} e^{it\omega}\, dZ_y(\omega),$$
and similarly for $\varepsilon_t$ and $\eta_t$, expressions such as $a(U)X_t$ can be written as
$$a(U)X_t = \int_{-\pi}^{\pi} e^{it\omega} a(e^{-i\omega})\, dZ_x(\omega).$$
Thus, equations (9) may be written
$$\int_{-\pi}^{\pi} e^{it\omega}\left\{[1 - a(e^{-i\omega})]\,dZ_x(\omega) - b(e^{-i\omega})\,dZ_y(\omega) - dZ_\varepsilon(\omega)\right\} = 0,$$
$$\int_{-\pi}^{\pi} e^{it\omega}\left\{-c(e^{-i\omega})\,dZ_x(\omega) + [1 - d(e^{-i\omega})]\,dZ_y(\omega) - dZ_\eta(\omega)\right\} = 0,$$
from which it follows that

$$A\begin{bmatrix} dZ_x \\ dZ_y \end{bmatrix} = \begin{bmatrix} dZ_\varepsilon \\ dZ_\eta \end{bmatrix}, \tag{10}$$
where
$$A = \begin{bmatrix} 1-a & -b \\ -c & 1-d \end{bmatrix}$$

and where $a$ is written for $a(e^{-i\omega})$, etc., and $dZ_x$ for $dZ_x(\omega)$, etc. Thus, provided the inverse of $A$ exists,

$$\begin{bmatrix} dZ_x \\ dZ_y \end{bmatrix} = A^{-1}\begin{bmatrix} dZ_\varepsilon \\ dZ_\eta \end{bmatrix}. \tag{11}$$

As the spectral, cross-spectral matrix for Xt, Yt is directly obtainable from

$$E\begin{bmatrix} dZ_x \\ dZ_y \end{bmatrix}\begin{bmatrix} d\bar Z_x & d\bar Z_y \end{bmatrix},$$
where the bar denotes the complex conjugate, these functions can quickly be found from (11) using the known properties of $dZ_\varepsilon$ and $dZ_\eta$. One finds that the power spectra are given by

$$f_x(\omega) = \frac{1}{2\pi\Delta}\left[\,|1-d|^2\sigma_\varepsilon^2 + |b|^2\sigma_\eta^2\,\right],$$

$$f_y(\omega) = \frac{1}{2\pi\Delta}\left[\,|c|^2\sigma_\varepsilon^2 + |1-a|^2\sigma_\eta^2\,\right], \tag{12}$$
where $\Delta = |(1-a)(1-d) - bc|^2$. Of more interest is the cross spectrum, which has the form

$$Cr(\omega) = \frac{1}{2\pi\Delta}\left[(1-d)\,\bar c\,\sigma_\varepsilon^2 + \overline{(1-a)}\,b\,\sigma_\eta^2\right].$$
Thus, the cross spectrum may be written as the sum of two components

$$Cr(\omega) = C_1(\omega) + C_2(\omega), \tag{13}$$
where
$$C_1(\omega) = \frac{\sigma_\varepsilon^2}{2\pi\Delta}(1-d)\,\bar c$$
and
$$C_2(\omega) = \frac{\sigma_\eta^2}{2\pi\Delta}\overline{(1-a)}\,b.$$

If $Y_t$ is not causing $X_t$, then $b \equiv 0$ and so $C_2(\omega)$ vanishes. Similarly, if $X_t$ is not causing $Y_t$ then $c \equiv 0$ and so $C_1(\omega)$ vanishes. It is thus clear that the cross spectrum can be decomposed into the sum of two components – one which depends upon the causality of $X$ by $Y$ and the other on the causality of $Y$ by $X$.

If, for example, $Y$ is not causing $X$ so that $C_2(\omega)$ vanishes, then $Cr(\omega) = C_1(\omega)$ and the resulting coherence and phase diagrams will be interpreted in the usual manner. This suggests that in general $C_1(\omega)$ and $C_2(\omega)$ can each be treated separately as cross spectra connected with the two arms of the feedback mechanism. Thus, coherence and phase diagrams can be defined for $X \Rightarrow Y$ and $Y \Rightarrow X$. For example,
$$C_{xy}(\omega) = \frac{|C_1(\omega)|^2}{f_x(\omega)\, f_y(\omega)}$$
may be considered to be a measure of the strength of the causality $X \Rightarrow Y$ plotted against frequency and is a direct generalization of coherence.

We call $C_{xy}(\omega)$ the causality coherence. Further,

$$\phi_{xy}(\omega) = \tan^{-1}\frac{\text{imaginary part of } C_1(\omega)}{\text{real part of } C_1(\omega)}$$
will measure the phase lag against frequency of $X \Rightarrow Y$ and will be called the causality phase diagram.

Similarly, such functions can be defined for $Y \Rightarrow X$ using $C_2(\omega)$. These functions are usually complicated expressions in $a$, $b$, $c$, and $d$; for example,

$$C_{xy}(\omega) = \frac{\sigma_\varepsilon^4\,|1-d|^2\,|c|^2}{\left(\sigma_\varepsilon^2|1-d|^2 + \sigma_\eta^2|b|^2\right)\left(\sigma_\varepsilon^2|c|^2 + \sigma_\eta^2|1-a|^2\right)}.$$

Such formulae merely illustrate how difficult it is to interpret econometric models in terms of frequency decompositions. It should be noted that $0 \le C_{xy}(\omega) \le 1$, and similarly for $C_{yx}(\omega)$.

As an illustration of these definitions, we consider the simple feedback system

$$X_t = bY_{t-1} + \varepsilon_t,$$

$$Y_t = cX_{t-2} + \eta_t, \tag{14}$$
where $\sigma_\varepsilon^2 = \sigma_\eta^2 = 1$. In this case $a(\omega) = 0$, $b(\omega) = be^{-i\omega}$, $c(\omega) = ce^{-2i\omega}$, and $d(\omega) = 0$. The spectra of the series $\{X_t\}$, $\{Y_t\}$ are
$$f_x(\omega) = \frac{1+b^2}{2\pi\left|1 - bce^{-3i\omega}\right|^2}$$
and
$$f_y(\omega) = \frac{1+c^2}{2\pi\left|1 - bce^{-3i\omega}\right|^2},$$
and thus are of similar shape.

The usual coherence and phase diagrams derived from the cross spectrum between these two series are
$$C(\omega) = \frac{c^2 + b^2 + 2bc\cos 3\omega}{(1+b^2)(1+c^2)}$$
and
$$\phi(\omega) = \tan^{-1}\frac{c\sin 2\omega - b\sin\omega}{c\cos 2\omega + b\cos\omega}.$$
These diagrams are clearly of little use in characterizing the feedback relationship between the two series. When the causality coherence and phase diagrams are considered, however, we get
$$C_{xy}(\omega) = \frac{c^2}{(1+b^2)(1+c^2)}, \qquad C_{yx}(\omega) = \frac{b^2}{(1+b^2)(1+c^2)}.$$

Both are constant for all $\omega$, and, if $b \neq 0$, $c \neq 0$, $\phi_{xy}(\omega) = 2\omega$ (time lag of two units),¹ $\phi_{yx}(\omega) = \omega$ (time lag of one unit). The causality lags are thus seen to be correct and the causality coherences to be reasonable. In particular, if $b = 0$ then $C_{yx}(\omega) = 0$, i.e., no causality is found when none is present. (Further, in this new case, $\phi_{yx}(\omega) = 0$.)
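The constant causality coherences for model (14) are easy to confirm numerically. The sketch below is our own illustration (the values b = 0.8 and c = 0.5 and all helper names are arbitrary); it evaluates the general formula for $C_{xy}(\omega)$ on a frequency grid and compares it with $c^2/[(1+b^2)(1+c^2)]$:

```python
import numpy as np

b, c = 0.8, 0.5            # illustrative coefficients for model (14)
s2_eps = s2_eta = 1.0
w = np.linspace(-np.pi, np.pi, 501)

# Frequency-domain coefficients of (14): a = d = 0,
# b(w) = b e^{-iw}, c(w) = c e^{-2iw}.
bw = b * np.exp(-1j * w)
cw = c * np.exp(-2j * w)
aw = dw = np.zeros_like(bw)

delta = np.abs((1 - aw) * (1 - dw) - bw * cw) ** 2
fx = (np.abs(1 - dw) ** 2 * s2_eps + np.abs(bw) ** 2 * s2_eta) / (2 * np.pi * delta)
fy = (np.abs(cw) ** 2 * s2_eps + np.abs(1 - aw) ** 2 * s2_eta) / (2 * np.pi * delta)
C1 = s2_eps * (1 - dw) * np.conj(cw) / (2 * np.pi * delta)   # X => Y arm
Cxy = np.abs(C1) ** 2 / (fx * fy)                            # causality coherence

print(Cxy.min(), Cxy.max())                  # flat across frequencies ...
print(c**2 / ((1 + b**2) * (1 + c**2)))      # ... and equal to this constant
```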

¹ A discussion of the interpretation of phase diagrams in terms of time lags may be found in Granger and Hatanaka (1964, chap. 5).

Other particular cases are also found to give correct results. If, for example, we again consider the same simple model (14) but with $\sigma_\varepsilon^2 = 1$, $\sigma_\eta^2 = 0$, i.e., $\eta_t \equiv 0$ for all $t$, then one finds $C_{xy}(\omega) = 1$, $C_{yx}(\omega) = 0$, i.e., $X$ is "perfectly" causing $Y$ and $Y$ is not causing $X$, as is in fact the case.

If one now considers the model (8) in which instantaneous causality is allowed, it is found that the cross spectrum is given by

$$Cr(\omega) = \frac{1}{2\pi\Delta'}\left[(1-d)\,\overline{(c-c_0)}\,\sigma_\varepsilon^2 + \overline{(1-a)}\,(b-b_0)\,\sigma_\eta^2\right], \tag{15}$$
where $\Delta' = |(1-a)(1-d) - (b-b_0)(c-c_0)|^2$. Thus, once more, the cross spectrum can be considered as the sum of two components, each of which can be associated with a "causality," provided that this includes instantaneous causality. It is, however, probably more sensible to decompose

$Cr(\omega)$ into three parts, $Cr(\omega) = C_1(\omega) + C_2(\omega) + C_3(\omega)$, where $C_1(\omega)$ and $C_2(\omega)$ are as in (13) but with $\Delta$ replaced by $\Delta'$, and

$$C_3(\omega) = -\frac{1}{2\pi\Delta'}\left[(1-d)\,c_0\,\sigma_\varepsilon^2 + \overline{(1-a)}\,b_0\,\sigma_\eta^2\right] \tag{16}$$
representing the influence of the instantaneous causality.

Such a decomposition may be useful but it is clear that when instantaneous causality occurs, the measures of causal strength and phase lag will lose their meaning.

It was noted in Section II that instantaneous causality models such as (8) in general lack uniqueness of their parameters, as an orthogonal transformation $L$ applied to the variables leaves the general form of the model unaltered. It is interesting to note that such transformations do not have any effect on the cross spectrum given by (15) or the decomposition. This can be seen by noting that equations (8) lead to

$$A\begin{bmatrix} dZ_x \\ dZ_y \end{bmatrix} = \begin{bmatrix} dZ_\varepsilon \\ dZ_\eta \end{bmatrix}$$
with appropriate $A$. Applying the transformation $L$ gives

$$LA\begin{bmatrix} dZ_x \\ dZ_y \end{bmatrix} = L\begin{bmatrix} dZ_\varepsilon \\ dZ_\eta \end{bmatrix},$$
so that

$$\begin{bmatrix} dZ_x \\ dZ_y \end{bmatrix} = (LA)^{-1} L\begin{bmatrix} dZ_\varepsilon \\ dZ_\eta \end{bmatrix}$$

$$= A^{-1}\begin{bmatrix} dZ_\varepsilon \\ dZ_\eta \end{bmatrix},$$
which is the same as if no such transformation had been applied. From its definition, $L$ will possess an inverse. This result suggests that spectral methods are more robust in their interpretation than are simultaneous equation models.

Returning to the simple causal model (9),

$$X_t = a(U)X_t + b(U)Y_t + \varepsilon_t,$$

$$Y_t = c(U)X_t + d(U)Y_t + \eta_t,$$
throughout this section it has been stated that $Y_t \nRightarrow X_t$ if $b \equiv 0$. On intuitive grounds this seems to fit the definition of no causality introduced in

Section III, within the set $D$ of series consisting only of $X_t$ and $Y_t$. If $b \equiv 0$ then $X_t$ is determined from the first equation, and the minimum variance of the predictive error of $X_t$ using past $X_t$ will be $\sigma_\varepsilon^2$. This variance cannot be reduced using past $Y_t$. It is perhaps worthwhile proving this result formally. In the general case, it is clear that $\sigma^2(X\mid \bar X, \bar Y) = \sigma_\varepsilon^2$, i.e., the variance of the predictive error of $X_t$, if both past $X_t$ and past $Y_t$ are used, will be $\sigma_\varepsilon^2$ from the top equation. If only past $X_t$ is used to predict $X_t$, it is a well-known result that the minimum variance of the predictive error is given by

$$\sigma^2(X\mid \bar X) = 2\pi\exp\left[\frac{1}{2\pi}\int_{-\pi}^{\pi}\log f_x(\omega)\,d\omega\right]. \tag{17}$$
It was shown above in equation (12) that

$$f_x(\omega) = \frac{1}{2\pi\Delta}\left[\,|1-d|^2\sigma_\varepsilon^2 + |b|^2\sigma_\eta^2\,\right],$$
where $\Delta = |(1-a)(1-d) - bc|^2$. To simplify this equation, we note that

$$\int_{-\pi}^{\pi}\log\left|1 - a(e^{i\omega})\right|^2 d\omega = 0$$
by symmetry. Thus if

$$2\pi f_x(\omega) = a_0\,\frac{\prod_j\left|1 - \alpha_j e^{i\omega}\right|^2}{\prod_j\left|1 - \beta_j e^{i\omega}\right|^2},$$
then $\sigma^2(X\mid \bar X) = a_0$. For there to be no causality, we must have $a_0 = \sigma_\varepsilon^2$. It is clear from the form of $f_x(\omega)$ that in general this could only occur if $|b| \equiv 0$, in which case $2\pi f_x(\omega) = \sigma_\varepsilon^2/|1-a|^2$ and the required result follows.
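Formula (17) is Kolmogorov's prediction-error formula, and it can be checked numerically. The sketch below is our own illustration (the AR(1) coefficient and innovation variance are arbitrary); it evaluates the right-hand side of (17) for the AR(1) spectrum $f_x(\omega) = \sigma_\varepsilon^2/(2\pi|1-ae^{-i\omega}|^2)$ and recovers $\sigma_\varepsilon^2$:

```python
import numpy as np

a, s2_eps = 0.7, 2.0                     # AR(1): X_t = a X_{t-1} + eps_t
w = np.linspace(-np.pi, np.pi, 100001)
fx = s2_eps / (2 * np.pi * np.abs(1 - a * np.exp(-1j * w)) ** 2)

# Right-hand side of (17), with the integral taken as a Riemann sum.
dw = w[1] - w[0]
rhs = 2 * np.pi * np.exp(np.log(fx).sum() * dw / (2 * np.pi))
print(rhs, s2_eps)                       # both approximately 2.0
```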

V. THREE-VARIABLE MODELS

The above results can be generalized to the many-variables situation, but the only case which will be considered is that involving three variables. Consider a simple causal model generalizing (7):

$$X_t = a_1(U)X_t + b_1(U)Y_t + c_1(U)Z_t + \varepsilon_{1,t},$$

$$Y_t = a_2(U)X_t + b_2(U)Y_t + c_2(U)Z_t + \varepsilon_{2,t},$$

$$Z_t = a_3(U)X_t + b_3(U)Y_t + c_3(U)Z_t + \varepsilon_{3,t},$$

where $a_1(U)$, etc., are polynomials in $U$, the shift operator, with the coefficient of $U^0$ zero. As before, $\varepsilon_{i,t}$, $i = 1, 2, 3$, are uncorrelated white-noise series, and denote the variance of $\varepsilon_{i,t}$ by $\sigma_i^2$. Let $\alpha = a_1 - 1$, $\beta = b_2 - 1$, $\gamma = c_3 - 1$, and

$$A = \begin{bmatrix} \alpha & b_1 & c_1 \\ a_2 & \beta & c_2 \\ a_3 & b_3 & \gamma \end{bmatrix},$$
where $b_1 = b_1(e^{-i\omega})$, etc., as before. Using the same method as before, the spectral, cross-spectral matrix $S(\omega)$ is found to be given by $S(\omega) = A^{-1}K(\bar A')^{-1}$, where

$$K = \begin{bmatrix} \sigma_1^2 & 0 & 0 \\ 0 & \sigma_2^2 & 0 \\ 0 & 0 & \sigma_3^2 \end{bmatrix}.$$

One finds, for instance, that the power spectrum of Xt is

$$f_x(\omega) = |\Delta|^{-2}\left[\sigma_1^2\,|\beta\gamma - c_2 b_3|^2 + \sigma_2^2\,|b_1\gamma - c_1 b_3|^2 + \sigma_3^2\,|b_1 c_2 - c_1\beta|^2\right],$$
where $\Delta$ is the determinant of $A$.
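The matrix formula for $S(\omega)$ and the cofactor expression for $f_x(\omega)$ can be cross-checked numerically at a single frequency. The sketch below is our own illustration: the complex entries are arbitrary stand-ins for the filtered coefficients $\alpha, b_1, c_1, \ldots$ at one fixed $\omega$, and the variances are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
# Arbitrary complex stand-ins for alpha, b1, c1, a2, beta, c2, a3, b3, gamma.
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
K = np.diag([1.5, 0.8, 2.0])             # sigma_1^2, sigma_2^2, sigma_3^2

S = np.linalg.inv(A) @ K @ np.linalg.inv(A.conj().T)   # S = A^{-1} K (A-bar')^{-1}

# Cofactor form of f_x: |Delta|^{-2} [ s1^2 |beta*gamma - c2*b3|^2 + ... ].
(alpha, b1, c1), (a2, beta, c2), (a3, b3, gamma) = A
det = np.linalg.det(A)
fx = (K[0, 0] * abs(beta * gamma - c2 * b3) ** 2
      + K[1, 1] * abs(b1 * gamma - c1 * b3) ** 2
      + K[2, 2] * abs(b1 * c2 - c1 * beta) ** 2) / abs(det) ** 2

print(np.allclose(S[0, 0].real, fx), abs(S[0, 0].imag) < 1e-12)
```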

The cross spectrum between Xt and Yt is

$$Cr^{xy}(\omega) = |\Delta|^{-2}\left[\sigma_1^2(\beta\gamma - c_2 b_3)(\bar c_2\bar a_3 - \bar\gamma\bar a_2) + \sigma_2^2(c_1 b_3 - b_1\gamma)(\bar\alpha\bar\gamma - \bar c_1\bar a_3) + \sigma_3^2(b_1 c_2 - c_1\beta)(\bar c_1\bar a_2 - \bar c_2\bar\alpha)\right].$$
Thus, this cross spectrum is the sum of three components, but it is not clear that these can be directly linked with causalities. More useful results arise, however, when partial cross spectra are considered. After some algebraic manipulation it is found that, for instance, the partial cross spectrum between $X_t$ and $Y_t$ given $Z_t$ is

$$Cr^{xy,z}(\omega) = -\frac{\sigma_1^2\sigma_2^2\,\bar a_3 b_3 + \sigma_1^2\sigma_3^2\,\bar a_2\beta + \sigma_2^2\sigma_3^2\,\bar\alpha b_1}{f_z'(\omega)},$$
where

$$f_z'(\omega) = \sigma_1^2\,|a_2 b_3 - \beta a_3|^2 + \sigma_2^2\,|\alpha b_3 - b_1 a_3|^2 + \sigma_3^2\,|\alpha\beta - b_1 a_2|^2,$$
so that $f_z'(\omega) = |\Delta|^2 f_z(\omega)$.

Thus, the partial cross spectrum is the sum of three components
$$Cr^{xy,z}(\omega) = C_1^{xy,z} + C_2^{xy,z} + C_3^{xy,z},$$
where

$$C_1^{xy,z} = -\frac{\sigma_1^2\sigma_2^2\,\bar a_3 b_3}{f_z'(\omega)}, \qquad \text{etc.}$$

These can be linked with causalities. The component $C_1^{xy,z}(\omega)$ represents the interrelationships of $X_t$ and $Y_t$ through $Z_t$, and the other two components are direct generalizations of the two causal cross spectra which arose in the two-variable case and can be interpreted accordingly.

In a similar manner one finds that the power spectrum of Xt, given Zt is

$$f_{x,z}(\omega) = \frac{\sigma_1^2\sigma_2^2\,|b_3|^2 + \sigma_1^2\sigma_3^2\,|\beta|^2 + \sigma_2^2\sigma_3^2\,|b_1|^2}{f_z'(\omega)}.$$

The causal and feedback relationship between $X_t$ and $Y_t$ can be investigated in terms of the coherence and phase diagrams derived from the second and third components of the partial cross spectrum, i.e.,
$$\text{coherence}(xy, z) = \frac{\left|C_2^{xy,z}\right|^2}{f_{x,z}\, f_{y,z}}, \qquad \text{etc.}$$

VI. CONCLUSION

The fact that a feedback mechanism may be considered as the sum of two causal mechanisms, and that these causalities can be studied by decomposing cross or partial cross spectra, suggests methods whereby such mechanisms can be investigated. I hope to discuss the problem of estimating the causal cross spectra in a later publication. There are a number of possible approaches, and accumulated experience is needed to indicate which is best. Most of these approaches are via the model-building method by which the above results were obtained. It is worth investigating, however, whether a direct method of estimating the components of the cross spectrum can be found.

REFERENCES

Basmann, R. L. "The Causal Interpretation of Non-Triangular Systems of Economic Relations." Econometrica 31 (1963): 439–48.
Good, I. J. "A Causal Calculus, I, II." British J. Philos. Sci. 11 (1961): 305–18, and 12 (1962): 43–51.
Granger, C. W. J. "Economic Processes Involving Feedback." Information and Control 6 (1963): 28–48.
Granger, C. W. J., and Hatanaka, M. Spectral Analysis of Economic Time Series. Princeton, N.J.: Princeton Univ. Press, 1964.
Nerlove, M. "Spectral Analysis of Seasonal Adjustment Procedures." Econometrica 32 (1964): 241–86.
Orcutt, G. H. "Actions, Consequences and Causal Relations." Rev. Econ. and Statis. 34 (1952): 305–13.

Simon, H. A. "Causal Ordering and Identifiability." In Studies in Econometric Method, edited by W. C. Hood and T. C. Koopmans. Cowles Commission Monograph 14. New York: Wiley, 1953.
Strotz, R. H., and Wold, H. "Recursive versus Non-Recursive Systems: An Attempt at Synthesis." Econometrica 28 (1960): 417–27.
Wiener, N. "The Theory of Prediction." In Modern Mathematics for Engineers, Series 1, edited by E. F. Beckenbach. New York: McGraw-Hill, 1956.

CHAPTER 2

Testing for Causality*
A Personal Viewpoint

C. W. J. Granger

A general definition of causality is introduced and then specialized to become operational. By considering simple examples, a number of advantages, and also difficulties, with the definition are discussed. Tests based on the definitions are then considered, and the use of post-sample data is emphasized, rather than relying on the same data to fit a model and to test causality. It is suggested that a Bayesian viewpoint should be taken in interpreting the results of these tests. Finally, the results of a study relating advertising and consumption are briefly presented.

1. THE PROBLEM AND A DEFINITION

Most statisticians meet the concept of causality early in their careers as, when discussing the interpretation of a correlation coefficient or a regression, most textbooks warn that an observed relationship does not allow one to say anything about causation between the variables. Of course this warning has much to recommend it, but consider the following special situation: Suppose that X and Y are the only two random variables in the universe and that a strong correlation is observed between them. Further suppose that God, or an acceptable substitute, tells one that X does not cause Y, leaving open the possibility of Y causing X. In the circumstances, the strong observed correlation might lead to acceptance of the proposition that Y does cause X. This possibility occurs because of the extra structure imposed on the situation by the knowledge that X does not cause Y. As will be seen, the way structure is imposed will be important in definitions of causality.

The textbooks, having given a cautionary warning about causality, virtually never then go on with a positive statement of the form "the procedure to test for causality is . . .", although a few do say that causality can be detected from a properly conducted experiment. The obvious

* Journal of Economic Dynamics and Control, 2, 1980, 329–352.

reason for the lack of such positive statements is that there is no generally accepted procedure for testing for causality, partially because of the lack of a definition of this concept that is universally liked. Attitudes towards causality differ widely, from the defeatist one that it is impossible to define causality, let alone test for it, to the populist viewpoint that everyone has their own personal definition and so it is unlikely that a generally acceptable definition exists. It is clearly a topic in which individual tastes predominate, and it would be improper to try to force research workers to accept a definition with which they feel uneasy. My own experience is that, unlike art, causality is a concept about which people know what definition they do not like, but few know what they do like. It might therefore be helpful to present a definition that some of us appear to think has some acceptable features so that it can be publicly debated and compared with alternative definitions.

For ease of exposition, a universe is considered in which all variables are measured just at prespecified time points at constant intervals $t = 1, 2, \ldots$ When at time $n$, let all the knowledge in the universe available at that time be denoted $\Omega_n$, and denote by $\Omega_n - Y_n$ this information except the values taken by a variable $Y_t$ up to time $n$, where $Y_n \in \Omega_n$. $\Omega_n$ includes no variates measured at time points $t > n$, although it may well contain expectations or forecasts of such values. However, these expectations will simply be functions of $\Omega_n$. $\Omega_n$ will certainly be multivariate and $Y_n$ could be, and both will be stochastic variables. To provide structure to the situation, the following axioms will be assumed to hold:

Axiom A: The past and present may cause the future, but the future cannot cause the past.

Axiom B: $\Omega_n$ contains no redundant information, so that if some variable $Z_n$ is functionally related to one or more other variables, in a deterministic fashion, then $Z_n$ should be excluded from $\Omega_n$.

Thus, for example, if temperature is measured hourly at some location both in degrees Fahrenheit and degrees Centigrade, there is no point in including both of these variables in the universal information set.

Suppose that we are interested in the proposition that the variable Y causes the variable X. At time $n$, the value $X_{n+1}$ will be, in general, a random variable and so can be characterized by probability statements of the form $\text{Prob}(X_{n+1} \in A)$ for a set $A$. This suggests the following:

General Definition: Yn is said to cause Xn+1 if

$$\text{Prob}(X_{n+1} \in A \mid \Omega_n) \neq \text{Prob}(X_{n+1} \in A \mid \Omega_n - Y_n) \quad \text{for some } A.$$

For causation to occur, the variable $Y_n$ needs to have some unique information about what value $X_{n+1}$ will take in the immediate future.
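The general definition compares whole conditional distributions, not just means. A hedged sketch of the distinction, entirely our own construction: below, $Y_n$ carries information about the variance of $X_{n+1}$ but not about its mean, so Y causes X under the general definition even though no "causality in mean" (defined in section 3) would be detected.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200000
y = rng.standard_normal(n)

# Y_n scales the noise of X_{n+1} but leaves E[X_{n+1} | Y_n] = 0 unchanged.
x_next = (1.0 + 2.0 * np.abs(y)) * rng.standard_normal(n)

lo, hi = np.abs(y) < 0.2, np.abs(y) > 1.5    # small versus large |Y_n|
print(x_next[lo].mean(), x_next[hi].mean())  # conditional means both near 0
print(x_next[lo].var(), x_next[hi].var())    # conditional variances differ sharply
```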

The ultimate objective is to produce an operational definition, which this is certainly not, by adding sufficient limitations. This process will be discussed in section 3, and the definition will also be defended there. In the following section some more general background material will be introduced which will, hopefully, make the defence a little easier.

2. A VARIETY OF VIEWPOINTS ON CAUSALITY

The obvious place to look for definitions of causality and discussions of the concept is the writings of philosophers on the topic, of which there have been plenty from Aristotle onwards. A useful discussion of parts of this literature can be found in Bunge (1963). I think that it is fair to say that the philosophers have not reached a consensus of opinion on the topic, have not found a definition that a majority can accept and, in particular, have not produced much that is useful to practicing scientists. Most of the examples traditionally used by philosophers come from classical physics or chemistry, such as asking what causes the flame when a match is struck, or noting that applying heat to a metal rod causes it to become longer. Much of the literature attempts to discuss unique causes in deterministic situations, so that if A occurs then B must occur. Although most writers have seemed to agree with Axiom A, that causes must precede effects, even this is not universally accepted. Quite a few philosophers, at least in the past, seem to believe that causes and effects should be contiguous both in time and space, which undoubtedly reflects the preoccupation with classical physics. Social scientists would surely want to consider the possibility that an event occurring in one part of the world could cause an event elsewhere at a later time. The philosophers are not constrained to look for operational definitions and can end up asking questions of the ilk: "If two people at separate pianos each strike the same key at the same time and I hear a note, which person caused the note that I hear?" The answer to such questions is, of course: "Who cares?" For an interesting discussion of the lack of usefulness of the philosophers' contribution by a pair of lawyers, another group which clearly requires an operational definition of causation, see Hart and Honore (1959). They take the viewpoint that "the cause is a difference to the normal course which accounts for the difference in the outcome." They also point out that legally this difference can be not doing something, "as the driver did not put on the brakes, the train crashed." One interesting aspect of the philosophers' contribution is that they often try to discuss what the term causality means in "common usage", although they make no attempt to use common usage terms in their discussion. Rather than trying to decide what the public thinks it means by such a difficult concept as causality, it may be preferable to try to influence common usage towards a sounder definition.

The philosophers and others have provided a variety of definitions, but no attempt to review them will be made here, as most are of little relevance to statisticians. Once a definition has been presented, it is very easy for someone to say "but that is not what I mean by causation." Such a remark has to be taken as a vote against the particular definition, but it is entirely destructive rather than constructive. To be constructive, the critic needs to continue and provide an alternative definition. What is surely required is a menu of definitions that can be discussed and criticized but at least defended by someone. Only by providing such a menu can a debate be undertaken which, hopefully, will result in one, or a few, definitions that can receive widespread support. I believe that definitions should be allowed to evolve due to debate rather than be judged solely on a truth or not scale. It is possible, as has been suggested, that everyone has their own definition so that no convergence will occur, but this outcome does seem to be unlikely.

Before proceeding further, it is worthwhile asking if there is any need, or demand, for a testable definition of causality. It is worth noting that the Social Science Citation Index lists over 1000 papers with words such as causal, causation or causality in their titles, and in a recent five-year period the Science Citation Index lists over 3000 such articles. Papers mentioning such words in the body of the paper, not in the title, are vastly more numerous. There does therefore seem to be a need for a widely accepted definition. Statisticians already have methods for measuring relationships between variables, but causal relations may be thought of as being in some sense deeper than the ordinarily observed kind. Consider the following three time series:

Xt = number of patients entering a maternity hospital in day t,
Yt = number of patients leaving the same hospital in day t,
Zt = ice cream sales in the same city in day t.

It seems very likely that the series Xt is useful in forecasting Yt, and it is also possible that Zt may appear to be useful in forecasting Yt, as both variables contain seasonal components. However, most people would surely expect that if a more careful analysis was conducted, using perhaps longer data series, larger information sets including more explanatory variables or more sophisticated techniques, then the observed (forecasting) relationship between Xt and Yt is likely to continue to be found, whereas that between Zt and Yt may well disappear. The deeper relationship is a candidate for the title causal. There thus appears to be both a need, and a demand, for techniques to investigate causality. The possible uses of a causal relationship, if found, will be discussed below.

It has been suggested that although such deeper relations need to be named, that name should not involve words like "cause" or "causality", as these words are too emotion-laden, involve too much preconception and have too long a history. Alternative phrases such as "due to", "temporally interrelated", "temporally prior" and "feedback free" have been proposed, for example. To my mind, this suggestion reflects a basic misunderstanding about language and its use. Most of the components of a language are just a notation, with generally agreed meanings. If I use words such as "apple" or "fear", I will not need to define them first, as it is understood that most people mean approximately the same thing by them. Occasionally, with unusual or technical words, such as "therm" or "temperature", I might need to add a definition. If I start a piece of written work, or a lecture, by carefully defining something, then I can use this as a notation throughout, such as distribution, mean, or variance. If my definition is quite different from general usage, then I may be unpopular but will not be logically incorrect, as, for example, if I write cos x for what is usually denoted by x³. As causation has no generally accepted definition, this criticism cannot apply. Provided I define what I personally mean by causation, I can use the term. I could, if I so wish, replace the word cause throughout my lecture by some other words, such as "oshkosh" or "snerd", but what would be gained? It is like saying that whenever I use x, you would prefer me to use z. If others wanted to refer to my definition, they can just call it "Granger causality" to distinguish it from alternative definitions. There already exist many papers in economics which do just that, some of which are referenced later, and no misunderstanding occurs. If it is later observed that what is called "Granger causality" is identical to the definition introduced by some earlier writer, then the name should be altered. In fact, I would be very surprised if the definition to be discussed in the next section has not been suggested many times in the past. Part of the definition was certainly proposed by Norbert Wiener (1958). It would not be a telling argument to appeal to "common usage" in connection with the words cause or causality, as statisticians continually use words in ways different from common usage, examples being mean, variance, moments, probable, significant, normal, regression and distribution. These remarks made so far in this section are designed to defuse certain criticisms that can be made of what is to follow. My experience suggests that I will be unsuccessful in this aim.
When discussing deterministic causation, philosophers distinguish two cases: (a) Necessity – if A occurs, then B must occur. (b) Sufficiency – if I observe B did occur, this means that A must have occurred. For example, if one has a metal rod, then event A might be that one heats the rod and event B is that the rod expands. Although causality is defined for pairs of sequences, or functions, obeying Axiom A in parts of mathematical science, any statistician or any worker dealing with data generated by an animal body, a person's behavior, part of an economy or an atmosphere, for example, will not be happy with these deterministic definitions. Rather than saying "If A occurs, then B must occur", they would probably be happier with statements such as "If A occurs, then the probability of B occurring increases (or changes)." For example, if a person smokes, he does not necessarily get cancer, but he does increase the probability of cancer. If a person goes sailing, he does not necessarily get wet, but he does increase the probability of getting wet. It is therefore important for a useful definition to deal with stochastic events or processes. It is interesting to note that the advent of quantum physics had a big impact on the philosophical writings about causality, which had relied heavily on classical physics for examples. Bertrand Russell, in particular, dramatically changed his views of causality at that time.

There have, of course, been several attempts to introduce probabilistic theories of causality. A particularly convincing attempt, well worth reading, is that by Suppes (1970). One of his definitions is:

An event $B_{t'}$ (occurring at time $t'$) is a prima facie cause of the event $A_t$ if and only if (i) $t' < t$, (ii) $\text{Prob}(B_{t'}) > 0$, and (iii) $\text{Prob}(A_t \mid B_{t'}) > \text{Prob}(A_t)$.

One might observe a large African population, for example, and find that the probability of not getting cholera is 0.91, but that of those inoculated against the disease, the probability of not getting cholera is 0.98. If $A_t$ is not getting cholera and $B_{t'}$ is inoculation, then the evidence suggests that "inoculation is a Suppes prima facie cause of not getting cholera." Note that, by replacing A and B with their complements, the same evidence is also likely to lead to the conclusion "not having inoculation is a Suppes prima facie cause of getting cholera." There is obvious arbitrariness in practice in defining an event. If the inequality in (iii) is reversed, Suppes talks of negative causation. Nevertheless, for probabilistic events, rather than variables or processes, the discussion by Suppes is very useful and is certainly potentially applicable to a series of properly conducted random experiments.

Good (1961, 1962) has a somewhat similar definition, although he effectively hides it amongst 24 assumptions and 17 theorems combined with very little interpretation. If E and F are two events with F occurring before E, then he says that there is a tendency for F to cause E, given some state of the universe, if $\text{Prob}(E \mid F) > \text{Prob}(E \mid \text{not } F)$. It would be a lengthy task to critically discuss and compare such definitions, and so I will not attempt it at this time.

At the very start of this paper, the case where random variables X and Y are correlated, but God tells you that causation in one direction is impossible, was briefly discussed. Virtually all definitions of causality require some imposed structure, such as that provided here by God. In many definitions, Axiom A provides this structure, but not all definitions follow this route. The causality concepts discussed by Simon, Wold and Blalock [see Blalock (1964)] and others do not require Axiom A but do presume special knowledge about the structure of relations between two or more variables. Given this structure, the possibility of causal relationships can then be discussed, usually in terms of the vanishing or not of correlation or partial correlation coefficients. Because these definitions require a number of assumptions about structure to be true, they will be called conditional causation definitions. If the assumptions are correct, or can be accepted as being correct, these definitions may have some value. However, if the assumptions are somewhat doubtful, these definitions do not prove to be useful. Sims (1977) has discussed the Simon and Wold approach and found it not operational in practice. Certainly there has been little use made of these definitions in recent years, at least in economics. The "path analysis" of Sewell Wright (1964) has similarities with the Wold and Simon approach, but he does state that he would prefer to use his analysis together with Axiom A, which would bring it nearer to the definition discussed in the next section. The full question of priority in these matters is a complex one and, I think, need not detain us here.

The question of whether any real statement can be made about causality based just on statistical data is clearly an important one. Naturally, as a statistician, I think that proper statements can be made, if they are carefully phrased. The link between smoking and cancer provides an example.
So far the only convincing link has been a statistical one, but it is now generally accepted. The real question for most people is not "Does smoking cause cancer?" but rather "How does smoking cause cancer?" Before the accumulation of statistical evidence, people could be thought of as having subjective, personal probabilities that the statement "Smoking causes cancer, in a statistical sense" is true. Since the evidence has been presented, for most people these subjective probabilities have greatly increased and may well be near one. The weight-of-evidence is certainly in favor of this causality. Smoking is certainly a prima facie cause of cancer and is probably more than that, in the opinion of the majority. A decision, such as for an individual to stop smoking or for a government to ban it, could be a wrong one, but statisticians are used to making decisions under uncertainty and realize that decisions properly based on the statistical evidence can be wrong but are usually correct.

There is one problem with the statistical approach which was pointed out by the philosopher Hume as applying to any testing procedure. It is always possible that the evidence from the past may be irrelevant, as causation can change from the past to the future. It is therefore necessary to introduce:

Axiom C: All causal relationships remain constant in direction throughout time.

The strength, and lags, of these relationships may change, but causal laws are not allowed to change from positive strength to zero, or go from zero to positive strength, through time. This axiom is, of course, central to the applicability of all scientific laws and so is generally accepted, even though it is not necessarily true.

3. AN OPERATIONAL DEFINITION

The general definition introduced above is not operational, in that it cannot be used with actual data. To become operational, a number of constraints need to be introduced. To do this, it is convenient to first re-state the general definition. Suppose that one is interested in the possibility that a vector series $Y_t$ causes another vector $X_t$. Let $J_n$ be an information set available at time $n$, consisting of terms of the vector series $Z_t$, i.e.,

$$J_n : Z_{n-j},\quad j \ge 0.$$

Jn is said to be a proper information set with respect to Xt if Xt is included within Zt. Further, suppose that Zt does not include any components of Yt, so that the intersection of Zt and Yt is zero. Further, define

$$J_n' : Z_{n-j},\ Y_{n-j},\quad j \ge 0,$$
so that $J_n'$ is the information set $J_n$ plus the past and present values of $Y_t$.

Denote by $F(X_{n+1} \mid J_n)$ the conditional distribution function of $X_{n+1}$ given $J_n$, so that this distribution has mean $E[X_{n+1} \mid J_n]$. The notation using other information sets is obvious. These expressions are used in the following definitions:

Definition 1: $Y_n$ does not cause $X_{n+1}$ with respect to $J_n'$ if

$$F(X_{n+1} \mid J_n) = F(X_{n+1} \mid J_n'),$$
so that the extra information in $J_n'$ has not affected the conditional distribution. A necessary condition is that

$$E[X_{n+1} \mid J_n] = E[X_{n+1} \mid J_n'].$$

Definition 2: If $J_n' \equiv \Omega_n$, the universal information set, and if

$$F(X_{n+1} \mid \Omega_n) \neq F(X_{n+1} \mid \Omega_n - Y_n),$$
then $Y_n$ is said to cause $X_{n+1}$.

Definition 3: If

$$F(X_{n+1} \mid J_n') \neq F(X_{n+1} \mid J_n),$$

then $Y_n$ is said to be a prima facie cause of $X_{n+1}$ with respect to the information set $J_n'$.

Definition 4: $Y_n$ is said not to cause $X_{n+1}$ in mean with respect to $J_n'$ if

$$\delta_{n+1}(J_n') = E[X_{n+1} \mid J_n'] - E[X_{n+1} \mid J_n]$$
is identically zero.

Definition 5: If $\delta_{n+1}(\Omega_n)$ is not zero, then $Y_n$ is said to cause $X_{n+1}$ in mean.

Definition 6: If $\delta_{n+1}(J_n')$ is not identically zero, then $Y_n$ is said to be a prima facie cause in mean of $X_{n+1}$ with respect to $J_n'$.

Definition 2 is equivalent to the general definition introduced in the first section, which was discussed in Granger and Newbold (1977). If a less general information set than the universal set is available, $J_n'$, then a prima facie cause can occur, as in Definitions 1 and 3. These definitions can be strengthened by adding phrases such as "almost surely", or "except on sets of measure zero" at appropriate points, but as these will not help towards the eventual aim of an operational definition capable of being tested, such niceties are ignored.

If, rather than discussing the whole distribution of $X_{n+1}$, one is content with just point forecasts using a least squares criterion, then the final three definitions become relevant. To ask for causality in mean is much less stringent than asking for full causality, but it does provide a definition much nearer to being operational. If one wishes to use some criterion other than least squares, this can be done, but point forecasts will be made much more difficult to obtain. Definition 6 can be rephrased: Let $\sigma^2(X \mid J_n)$ be the variance of the one-step forecast error of $X_{n+1}$ given $J_n$, and similarly for $\sigma^2(X \mid J_n, Y) \equiv \sigma^2(X \mid J_n')$; then $Y$ is a prima facie cause of $X$, with respect to $J'$, if $\sigma^2(X \mid J_n, Y) < \sigma^2(X \mid J_n)$. Thus knowledge of $Y_n$ increases one's ability to forecast $X_{n+1}$, in a least squares sense. This corresponds to a definition hinted at by Wiener (1958), introduced specifically in Granger (1964) and Granger and Hatanaka (1964), re-introduced in Granger (1969), amplified and applied by Sims (1972, 1977), and then used by numerous authors since, including Black (1978), Williams, Goodhart and Gowland (1976), Skoog (1976), Sargent (1976), Mehra (1977), Gordon (1977), Feige and Pearce (1976a, b), Ciccolo (1978), and Caines, Sethi and Brotherton (1977). However, it should be said that some of the recent writers on this topic, because they have not looked at the original papers, have evolved somewhat unclear and incorrect forms of this definition. It is rather like the party game where a phrase or rumor is whispered around the room, ending up quite differently from how it started.
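In applied work this rephrased Definition 6 is usually examined with a restricted-versus-unrestricted regression comparison. The sketch below is one conventional way to do it, not a procedure prescribed by this paper; the lag length, the F-statistic form, and all names are our own illustrative choices.

```python
import numpy as np
from scipy import stats

def prima_facie_in_mean(x, y, p=4):
    """F-test: do lags of y reduce the one-step forecast error variance of x?"""
    n = len(x)
    lags = lambda z: np.column_stack([z[p - k:n - k] for k in range(1, p + 1)])
    target = x[p:]
    R = np.column_stack([np.ones(n - p), lags(x)])   # J_n: past x only
    U = np.column_stack([R, lags(y)])                # J_n': past x and y

    rss = lambda Z: np.sum((target - Z @ np.linalg.lstsq(Z, target, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(R), rss(U)
    dof = len(target) - U.shape[1]
    F = ((rss_r - rss_u) / p) / (rss_u / dof)
    return F, stats.f.sf(F, p, dof)                  # statistic, p-value

# Illustrative use on simulated data in which y genuinely helps predict x:
rng = np.random.default_rng(4)
y = rng.standard_normal(1000)
x = np.zeros(1000)
for t in range(1, 1000):
    x[t] = 0.4 * x[t - 1] + 0.5 * y[t - 1] + rng.standard_normal()
print(prima_facie_in_mean(x, y))   # large F, tiny p-value
```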

In this newer formulation, Axiom B becomes:

Axiom B′: $F[Y_n \mid J_n]$ is not a singular distribution, that is, not the distribution of a variable taking only a constant value. This implies that $Y_n$ is not deterministically related to the contents of $J_n$.

If purely time-series techniques are used to generate one-step forecasts, these forecasts will usually be linear functions of the information set because of the present state of the art, although some progress in the use of certain non-linear models is occurring; see, for instance, Granger and Andersen (1978) and Swamy and Tinsley (1980). However, if forecasts are made from reduced form equations derived from a possibly non-linear, structural econometric model, then the contents of the information set may be utilized non-linearly. The definitions discussed here do not require that only linear models are used, although most of the actual applications so far and much of the theoretical discussions have concentrated on the linear case. If the available information in $J_n$ is used only linearly, then it may be possible to observe that $Y_n$ is a linear prima facie cause in mean of $X_{n+1}$ with respect to $J_n'$, and with the available modeling and forecasting techniques this provides the operational definition that is being sought. For the remainder of this paper the phrase "linear prima facie cause" will be replaced simply by "cause" for convenience, unless a more general case is being considered.

The definition as given relates a pair of vectors, $Y_n$ and $X_{n+1}$, but the usual case will be concerned with just a pair of individual series, $Y_n$ and $X_{n+1}$. Further, to actually model data it will usually be necessary to assume either that the series are stationary or that they belong to some simple class of models with time-varying parameters. Again, this is not strictly necessary for the definition but is required for practical implementation.

There are a number of important implications of the definition of cause here developed. If, for example, it is found that $Y_n$ causes $X_{n+1}$ with respect to some information set, then this implies no restrictions on whether or not $X_n$ causes $Y_{n+1}$; this second causation may occur but need not. If both causations occur, one may say that there is feedback between the two series $X_t$ and $Y_t$. A simple example is

$$X_t = \varepsilon_t + \eta_{t-1}, \qquad Y_t = \eta_t + \varepsilon_{t-1},$$
where $\varepsilon_t$, $\eta_t$ are a pair of independent white noise series. Further, if there are three series $X_t$, $Y_t$ and $Z_t$ and it is observed that X causes Y and Y causes Z, then it is not necessarily true that X causes Z, although it can occur.

Example 1: $X_t = \varepsilon_t$, $Y_t = \varepsilon_{t-1} + \eta_t$, $Z_t = \eta_{t-1}$, where again $\varepsilon_t$, $\eta_t$ are independent white noises. There are four information sets that need to be considered: $J_n(X, Y)$ – consisting of past and

present $X_{n-j}$, $Y_{n-j}$ $(j \ge 0)$; and similarly, $J_n(X, Z)$, $J_n(Y, Z)$, and $J_n(X, Y, Z)$ – consisting of past and present $X_{n-j}$, $Y_{n-j}$, $Z_{n-j}$ $(j \ge 0)$. Then clearly X causes Y with respect to either $J_n(X, Y)$ or $J_n(X, Y, Z)$, and Y causes Z with respect to $J_n(Y, Z)$ and to $J_n(X, Y, Z)$, but X does not cause Z with respect to $J_n(X, Z)$, although it does cause Z with respect to $J_n(X, Y, Z)$. This last result occurs because $Z_{n+1}$ is completely predetermined from $Y_{n-j}$, $X_{n-j}$ $(j \ge 0)$ but not from just $Y_{n-j}$ $(j \ge 0)$. The importance of stating the information set being utilized is well illustrated by this example. A further example shows a different situation:

Example 2: $X_t = \varepsilon_t + \omega_t$, $Y_t = \varepsilon_{t-1}$, $Z_t = \varepsilon_{t-2} + \eta_t$,

where $\varepsilon_t$, $\eta_t$ and $\omega_t$ are three independent white noises. Here X causes Z in $J_n(X, Z)$ but not in $J_n(X, Y, Z)$.

One thing that is immediately clear from the definition is that if $Y_n$ causes $X_{n+1}$, then $Y_n' = a(B)Y_n$ causes $X_{n+1}' = b(B)X_{n+1}$ if $a(B)$ and $b(B)$ are each one-sided filters of the form $a(B) = \sum_{j=0}^{\infty} a_j B^j$. However, if two-sided filters are used, as occurs for example in some seasonal adjustment procedures, then causality can obviously be lost because Axiom A is disrupted.

The use of proper information sets, that is, sets including the past and present values of the series to be forecast $X_t$, does have the following important implication: It is impossible to find a cause for a series that is self-deterministic, that is, a series that can be forecast without error from its own past. The basic idea of the causal definition being discussed is that knowledge of the causal variable helps forecast the variable being caused. If a variable is perfectly forecastable from its own past, clearly no other variable can improve matters.
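Example 1 is easy to reproduce by simulation. The following hedged sketch (the helper names and the lag choice p = 2 are ours) compares residual variances of least-squares predictors of $Z_{n+1}$ under the different information sets:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 20000, 2
eps, eta = rng.standard_normal(n), rng.standard_normal(n)

# Example 1: X_t = eps_t, Y_t = eps_{t-1} + eta_t, Z_t = eta_{t-1}.
x = eps
y = np.concatenate(([0.0], eps[:-1])) + eta
z = np.concatenate(([0.0], eta[:-1]))

def resid_var(target, series_list):
    cols = [s[p - k:len(s) - k] for s in series_list for k in range(1, p + 1)]
    Z = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(Z, target, rcond=None)
    return (target - Z @ beta).var()

zt = z[p:]
print(resid_var(zt, [z, x]))     # J_n(X, Z): no improvement from X alone
print(resid_var(zt, [z, y]))     # J_n(Y, Z): Y helps
print(resid_var(zt, [z, x, y]))  # J_n(X, Y, Z): X and Y jointly predict Z exactly
```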

Example 3: $X_t = a + bt + ct^2$ and $Y_t = dX_{t+1}$.

Then the following three equations generate $X_t$ exactly, without error:
$$X_t = a + bt + ct^2, \qquad X_t = d^{-1}Y_{t-1}, \qquad X_t = 2X_{t-1} - X_{t-2} + 2c,$$

so that at first sight $X_t$ is "caused" by time, or by $Y_{t-1}$, or by its own past. If all three equations fit equally well, that is perfectly, it is clear that no kind of data analysis can distinguish between them. It is therefore obvious that in this circumstance a statistical test for causality is impossible, unless some extra structure is imposed on the situation. It may be noted that causality tests can be made with variables that contain deterministic components, as proved formally by Hosoya (1977), but with this definition one cannot say that the deterministic component of one variable causes the deterministic component of another variable.

4. SOME DIFFICULTIES

Virtually any sophisticated statistical procedure has some problems associated with it, and there is every reason to expect that this will be true also of any operational definition of causality. These difficulties can either be intrinsic to the definition itself or be associated with its practical implementation. Some of the difficulties will arise because of data inadequacies. One obvious problem arises when the data is gathered insufficiently frequently. Suppose that a change in wood prices causes a change in furniture prices one week later, but prices are only recorded monthly; then the true causal relationship will appear to be instantaneous. It is perhaps worth defining "prima facie apparent instantaneous causality in mean", henceforth instantaneous causality, between $X_{n+1}$ and $Y_{n+1}$ with respect to $J_n'$ if

$$E[X_{n+1} \mid J_n, Y_{n+1}] \neq E[X_{n+1} \mid J_n].$$
Although the phrase "instantaneous causality" is somewhat useful on occasions, the concept is a weak one, partly because Axiom A is not being applied and because, at least in the linear case, it is not possible to differentiate between instantaneous causation of X by Y, of Y by X, or of feedback between X and Y, as simple examples show. If extra structure is imposed, it may be possible to distinguish between these possibilities, as will be discussed below. If one totally accepts Axiom A, then instantaneous causality will either occur because of the data collection problem just mentioned or because both series have a common cause which is not included in the information set $J_n'$ being used.

The problem of missing variables, and the consequential misinterpretation of one's results, is a familiar one in those parts of statistics which consider relationships between variables. A simple example of apparent causation due to a common cause is:

Example 4: $Z_t = \eta_t$, $X_t = \eta_{t-1} + \delta_t$, $Y_t = \eta_{t-2} + \varepsilon_t$, where $\varepsilon_t$, $\eta_t$ and $\delta_t$ are independent white noises. Here $Z_t$ is causing both $X_t$ and $Y_t$ with respect to the information sets $J_n(X, Z)$, $J_n(Y, Z)$ and $J_n(X, Y, Z)$, but $X_t$ is causing $Y_t$ in $J_n(X, Y)$ but not in $J_n(X, Y, Z)$. This apparent causation of Y by X in $J_n(X, Y)$ may be thought of as spurious because it vanishes when the information set is expanded, something one would not expect with a true cause. Sims (1977) has studied the system

$$Y_t = c(B)Z_t + \varepsilon_t, \qquad X_t = d(B)Z_t + \eta_t,$$
and found that it is unlikely to give rise to a spurious one-way causation between X and Y based on $J_n(X, Y)$, although presumably a feedback relationship between X and Y is more likely to be found.
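Example 4's spurious causality is simple to reproduce. In the sketch below (our own construction; the lag length and names are illustrative), lags of X help predict Y when Z is omitted from the information set, and stop helping once lags of Z are included:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 50000, 3
eta = rng.standard_normal(n)

# Example 4: Z_t = eta_t, X_t = eta_{t-1} + delta_t, Y_t = eta_{t-2} + eps_t.
z = eta
x = np.concatenate(([0.0], eta[:-1])) + rng.standard_normal(n)
y = np.concatenate(([0.0, 0.0], eta[:-2])) + rng.standard_normal(n)

def resid_var(target, series_list):
    cols = [s[p - k:len(s) - k] for s in series_list for k in range(1, p + 1)]
    Z = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(Z, target, rcond=None)
    return (target - Z @ beta).var()

yt = y[p:]
print(resid_var(yt, [y]), resid_var(yt, [y, x]))        # X appears to cause Y ...
print(resid_var(yt, [y, z]), resid_var(yt, [y, x, z]))  # ... but not once Z is added
```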

An important case where missing variables can lead to misleading interpretations is when one variable is measured with an error having time structure. The following example illustrates the difficulty:

Example 5: $X_t = \eta_t$, $Y_t = \delta_t$, $Z_t = X_t + \varepsilon_t + b\varepsilon_{t-1}$, where $\eta_t$ and $\delta_t$ are white noises with $\text{corr}(\eta_t, \delta_s) = 0$ for $t \neq s$, but this correlation equals $\lambda$ when $t = s$, and $\varepsilon_t$ is a white noise independent of $\eta_t$ and $\delta_t$. $Z_t$ may be thought of as $X_t$ with an MA(1) measurement error. There is no causation between $X_t$ and $Y_t$ apart from instantaneous causation. As $Z_t$ is the sum of a white noise and an MA(1) term, it will be MA(1), so that there exists a constant $\theta$ with $|\theta| < 1$ and a white noise series $e_t$ such that

$$Z_t = (1 + \theta B)e_t.$$
It follows that
$$e_t = (1 + \theta B)^{-1}\eta_t + (1 + \theta B)^{-1}(\varepsilon_t + b\varepsilon_{t-1}).$$

The one-step forecast of $Z_{n+1}$ using $Z_{n-j}$ $(j \ge 0)$ is just $\theta e_n$, with error $e_{n+1}$, but this error is in part a function of $\eta_{n-j}$ $(j \ge 0)$, which is correlated with $\delta_{n-j}$ $(j \ge 0)$, which is equal to $Y_{n-j}$. It thus follows that the $Y_{n-j}$ will help forecast $Z_{n+1}$, so that apparently $Y_n$ causes $Z_{n+1}$ with respect to $J_n(Y, Z)$, but this would not be the case if $X_{n-j}$ were observable, so that $J_n(X, Y, Z)$ could be considered. This result is at first sight quite worrying, as in many disciplines, such as economics, variables are almost inevitably observed with error, so that $X_t$ – the missing variable – will always be missing. However, as the results of Sims (1977) and Newbold (1978) show, by no means does the addition of measurement error to variables necessarily produce spurious causations, as the error has to have a particular time-series structure compared to the original series. Nevertheless, the possibility of misleading results arising from a common type of situation has to be kept in mind when interpreting results.

Another situation which needs care in interpretation is when the time that a variable is recorded is different from the time at which the event occurred that led to the variable's value. For example, March unemployment figures in New York City and New York State may not become known to the public until April 1 and April 15, respectively. The values must be associated with March, not the time of their release, otherwise spurious causation may well occur. A further example of this problem is the relationship between lightning and thunder. As the lightning is usually observed before the thunder, because light travels faster than sound, it might seem that lightning causes thunder. However, both are manifestations of what is essentially the same event, and if the observations are placed at the time of the original electrical discharge, the spurious causation disappears. If one is being pedantic, the light-producing part of the discharge does occur before the sound-producing part, but both lightning and thunder do have a common cause.

A further interpretation problem can arise because of Axiom B. Suppose one has three variables which are related through some linear identity, such as Work force = Unemployed + Employed. It is clear that all three variables cannot be in the information set to be used, but it is not necessarily obvious which one should be excluded. If, for example, total consumption is caused by size of work force, but this latter variable is excluded, one may expect to find that the numbers of both unemployed and employed appear to cause consumption. Once one is aware of such interpretational difficulties, it is not difficult to invent strategies for analyzing them, such as excluding different variables and repeating the analysis, or by testing equality of certain coefficients in the model, for example.

One apparently serious problem of interpretation, which is suggested by the thunder and lightning example, arises from the idea of a leading indicator. Suppose that X causes both Y and Z, but that the causal lag is shorter from X to Y than from X to Z. If now X is not observed, Y will appear to cause Z. Example 4 shows such a situation. The search for such leading indicators occurs in various fields. In economics, for example, the Bureau of the Census publishes a list of such indicators, plus an index, which are supposed to help indicate when the economy is about to experience a down-turn or an up-turn.
A number of possible leading indicators for earthquakes are also being considered, an example being unusual animal behavior. If leading indicators are included in an information set, tests may well indicate prima facie causality. In most cases this will be just another example of the missing variable problem. Sometimes the missing variable will be available and, when added to the information set, the leading indicator will no longer appear to cause. In other cases the missing variable is not observable and, when this occurs, it will not always be obvious whether a variable is a cause or merely a leading indicator. This relates to the question of how to interpret the outcomes of the causality tests, which are discussed below. If a theory is correct it is helpful to use it; if it is incorrect one may well be worse off by its use. This is certainly true of any causality test that is conditional on the truth of some very specific theory. Whereas in many fields there may be theories, specific or not, that are generally accepted as being true, such theories are much more difficult to find in economics. It is interesting to note that Zellner in his paper never gives a single example of what he would consider to be a "well thought out economic theory" nor even of a specific theory or law that is generally accepted by the majority of economists. Again, one returns to the personal belief aspect of causality testing; an individual may strongly believe some theory and is happy to test causality conditional on this theory, whereas someone else would not want to do that. One obvious place where a good theory would be particularly useful would be where extra structure is required to resolve causal directions in what appears to be instantaneous causality/feedback. For example, if sufficient structure can be put on a model to ensure identifiability – in the econometrician's sense of having a unique model – then a conditional causal test can be constructed. This is very much in the spirit of the Simon and Wold approach to causality, which is very well summarized in Zellner's article. However, it must be emphasized that only conditional causality can result, and this is potentially very much weaker than the unconditional causality definition discussed earlier.

The definitions of causation introduced in the previous section admittedly have a number of arbitrary aspects, some of which are potentially removable, others perhaps not. The data is assumed to be measurable on a cardinal scale, whereas actual data often occur on different scales. If the data is intrinsically ordinal, I consider that it may be difficult to use these definitions, because of the lack of suitable distribution functions. However, it may be possible to build and evaluate forecasting models for such data, and so one aspect of the definitions will go through. With attribute data, without any natural order to the categories, the general definition remains usable, but clearly the "causality in mean" definitions are not relevant. This type of data is much nearer to the situation of one event causing another that was discussed by Suppes (1970) and Good (1961, 1962) and may often occur as the outcome of designed experiments. To be relevant to statisticians, a sequence of experiments will be required, as there seems to be no possibility of investigating causal relationships between unique events using statistical procedures. A further arbitrary feature of the definitions is the use of one-step forecasts rather than h-step for any h.
It is usually by no means clear what the natural length of the step is, and the pragmatic procedure is to use the observation period of the publicly available data, which can lead to the apparent instantaneous causation problem mentioned above. In the bivariate information set case, where one asks if Y causes X with respect to Jn(X, Y), Pierce (1975) has shown that if Y causes X using an h-step forecasting criterion, with h > 1, then it will necessarily be found that Y causes X with a one-step criterion. However, this does not seem to be true in the multivariate case:

Example 6: Xt = εt, Yt = εt+2 + ηt, Zt = εt+1 + θt, where εt, ηt, θt are independent zero-mean, white noise series.

Here Zn causes Xn+1 with respect to Jn(X, Z) and Jn(X, Y, Z), Yn causes Xn+2 with respect to Jn(X, Y) and Jn(X, Y, Z), and Yn (or Yn-1) causes Xn+1 with respect to Jn(X, Y) but not with respect to Jn(X, Y, Z). Although some justification can be made that one-step forecasts are the most natural to consider, it will remain an arbitrary aspect of the definitions.

It is, on occasion, possible to distinguish between different types of causes by considering alternative information sets. For example, one might call Y a primary cause of X if tests show this to be so for Jn(X, Y), Jn(X, Y, Z) and for all other information sets containing X and Y and any other series. A secondary cause might be one such that X causes Z in Jn(X, Y, Z) but not in Jn(X, Z), as illustrated in Example 1 above. This example shows that X can cause Z, according to the definition, even though X and Z are statistically independent, provided that X can add further information to the primary cause, which in Example 1 is Y. The existence of such secondary causes may be upsetting to some readers, and so it might be relevant to alter the basic definition to deal only with primary causes. However, I personally would not, at this time, wish to emphasize such a change.

Most of the problems and difficulties discussed in this section relate not to the basic definition but to making it operational, in my opinion. Some are inherent to any statistical study using an incomplete or finite data set. Many of the difficulties become considerably reduced in importance once care is taken with interpretation of test results. In the following section a brief discussion of actual test procedures is presented, and in the final section some further important interpretational questions are considered, such as the relevance of control variables and the meaning of exogeneity.

5. TEST PROCEDURES

There has been a lot of thought given in recent years to the question of how the above definitions can actually be tested, although the major attention has been given to the case of whether X causes Y with respect to Jn(X, Y), that is, just the two-variable case. Although most empirical studies have considered this case, it is probably not a particularly important one in economics, as it is easy to suggest relevant missing variables. It is clear that more attention is needed on how to utilize bigger information sets. As the two-variable case has been well summarized recently by Pierce and Haugh (1977), only a few of the more important aspects will be discussed here. To give some structure to the discussion, consider the pair of zero-mean, jointly stationary series xt, yt, which are purely non-deterministic. The moving-average, or Wold, representation can be denoted, following Pierce and Haugh, by

$$
\begin{pmatrix} x_t \\ y_t \end{pmatrix}
=
\begin{bmatrix} \psi_{11}(B) & \psi_{12}(B) \\ \psi_{21}(B) & \psi_{22}(B) \end{bmatrix}
\begin{pmatrix} a_t \\ b_t \end{pmatrix},
\qquad (1)
$$

where each ψij(B) is a power-series, possibly infinite in length, in the backward operator B and (at, bt)′ is a two-element white noise vector, with zero correlation between at and bs, except possibly when t = s. Assuming that the moving average matrix operator is invertible, the corresponding autoregressive model can be denoted by

$$
\begin{bmatrix} A(B) & H(B) \\ C(B) & D(B) \end{bmatrix}
\begin{pmatrix} x_t \\ y_t \end{pmatrix}
=
\begin{pmatrix} a_t \\ b_t \end{pmatrix}.
\qquad (2)
$$

Rather than considering models for the actual series, one can equally well consider relationships between prewhitened series. If the filters

F(B)xt = ut and G(B)yt = vt produce a pair of series ut and vt that are individually white noises, then moving average and autoregressive models will exist of the form

$$
\begin{pmatrix} u_t \\ v_t \end{pmatrix}
=
\begin{bmatrix} \theta_{11}(B) & \theta_{12}(B) \\ \theta_{21}(B) & \theta_{22}(B) \end{bmatrix}
\begin{pmatrix} a_t \\ b_t \end{pmatrix},
\qquad (3)
$$

and

$$
\begin{bmatrix} \alpha(B) & \beta(B) \\ \gamma(B) & \delta(B) \end{bmatrix}
\begin{pmatrix} u_t \\ v_t \end{pmatrix}
=
\begin{pmatrix} a_t \\ b_t \end{pmatrix}.
\qquad (4)
$$

There are obviously relationships between the various operators, as described by Pierce and Haugh.

Denote the correlation between ut-k and vt by ρuv(k) and consider the regression

$$
v_t = \sum_{j=-\infty}^{\infty} \omega_j u_{t-j} + f_t,
\qquad (5)
$$

where ρuv(k) = (σu/σv)ωk. Similarly, one can consider the regression

yt = V(B)xt + ηt. (6)

Here V(B) = (F(B)/G(B))ω(B) and ft, ηt are residuals which are uncorrelated with ut-j, xt-j, respectively, but are not necessarily white noises. Using this notation, Pierce and Haugh (1977) prove the following two theorems, amongst others:

Theorem 1: Instantaneous (prima facie) causality (in mean) exists if and only if the following equivalent conditions hold:

(i) at least one of cov(at, bt), γ(0), β(0) in (4) is non-zero, or
(ii) at least one of cov(at, bt), H(0), C(0) in (2) is non-zero.

In their 1977 paper, Pierce and Haugh had further conditions, such as ρuv(0) ≠ 0 or ω0 ≠ 0, but Price (1979) and Pierce and Haugh (1979) show that these conditions are not necessarily correct when there is feedback between x and y.

Theorem 2: y is not a (prima facie) cause (in mean) of x if and only if the following equivalent conditions hold:

(1) ψ12(B) [equivalently θ12(B)] can be chosen zero.
(2) θ12(B) is either 0 or a constant.
(3) ψ12(B) is either 0 or proportional to ψ11(B).
(4) Vj = 0 (j < 0) in (6).
(5) β(B) is either 0 or a constant.
(6) H(B) is either 0 or proportional to A(B).

(7) ρuv(k) = 0, or equivalently ωk = 0, for all k < 0.
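Condition (7) is perhaps the easiest of these to inspect directly from data. A minimal sketch (an illustration only, not a full Haugh–Pierce test: simple least-squares AR(p) filters stand in for properly identified ARIMA models, and the ±2/√n bands are only approximate):

```python
import numpy as np

def prewhiten(series, p=4):
    """Residuals of a least-squares AR(p) fit -- an approximate whitening filter."""
    y = series[p:]
    lags = [series[p - k:len(series) - k] for k in range(1, p + 1)]
    X = np.column_stack([np.ones(len(y))] + lags)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def cross_corr(u, v, k):
    """Correlation between u_{t-k} and v_t (k may be negative)."""
    if k >= 0:
        a, b = u[:len(u) - k], v[k:]
    else:
        a, b = u[-k:], v[:len(v) + k]
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(1)
n = 4000
x = rng.standard_normal(n)
y = np.r_[0.0, x[:-1]] + 0.5 * rng.standard_normal(n)  # x causes y, not vice versa

u, v = prewhiten(x), prewhiten(y)
for k in range(-3, 4):
    r = cross_corr(u, v, k)
    flag = "*" if abs(r) > 2 / np.sqrt(len(u)) else ""
    print(f"k = {k:+d}   r_uv(k) = {r:+.3f} {flag}")
# r_uv(k) ~ 0 for k < 0 is consistent with 'y does not cause x';
# the spike at k = +1 reflects the causation of y by lagged x.
```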

If any of these conditions do not hold, then y will be a prima facie cause of x in mean with respect to Jn(x, y). (1) and (4) were pointed out by Sims (1972), the first part of (6) was mentioned in Granger (1969), and that of (7) was emphasized in Granger and Newbold (1977). Multivariate generalizations of these conditions, concerning the possibility that the vector y may cause the vector x, have been discussed by Caines and Chan (1975) and elsewhere. Because of this variety of equivalent conditions, there are clearly numerous statistical tests that can be devised based on these conditions. The performance of these tests needs further investigation, either using statistical theory or Monte Carlo study, especially as some are suspected to be occasionally biased or to be lacking in power.

My own experience has largely been with the autoregressive form (2), first fitting the bivariate model with H(B) constrained to be zero and then refitting without this constraint, to see if a significant decrease in the variance of the residual for the xt equation can be achieved. This experience, using both simulated and actual data as, for example, in Chiang (1978), suggests that misleading results do not occur but that the power is not particularly satisfactory. However, these tests are not of considerable importance for two basic reasons: (i) they deal only with the bivariate case, whereas the more important applications are likely to involve more variables; and (ii) they are not properly based on the definitions presented above. This latter point arises because these definitions are explicitly based on the extra forecasting ability achieved from one information set over another, whereas the equivalent conditions given in Theorem 2, for example, make no mention of forecasts. This makes no difference for populations, as the definition of non-causation in mean and the conditions in Theorem 2 are then equivalent. However, if only a finite sample is available, as will always occur in practice, the equivalence disappears. Suppose that a sample is used to model the relationship between xt and yt in the autoregressive form (2) and the estimate of H(B) is found to be significantly different from zero. Then the result is essentially saying that if this fact were known at the start of the sample, it could have been used to improve forecasts of xt. This is quite different from actually producing improved forecasts. It is generally accepted that to find a model that apparently fits better than another is much easier than to find one that forecasts better. Thus tests based on the "equivalent conditions" in Theorem 2 are just tests of goodness of fit, whereas the original definition requires evidence of improved forecasts. To satisfy this requirement, alternative models, based on different information, can be identified and estimated using the first part of the sample and then their respective forecasting abilities compared on the later part of the sample. The best way to actually test for differences in "post-sample" forecasting ability and the optimum way to divide the sample into a modeling part and a forecast evaluation part need further investigation, but at least a test that is in sympathy with the basis of the definition would result.

An application of these ideas, in a two-variable case, is provided by Ashley, Granger and Schmalensee (1979), who consider possible causal relationships between aggregate advertising expenditures and consumption spending.
They use a five-step procedure:

(i) Using a block of data, which is called the sample, each series is prewhitened by building ARIMA models, to get ut, vt as above.
(ii) The cross-correlations ρuv(k) are examined to see if there is evidence of possible causal relationships.
(iii) For each indicated possible causal relationship, a model is built on these residuals ut, vt. If a one-way cause is suggested, the transfer function methods of Box and Jenkins (1970) may be utilized, but if a two-way causality appears to be present, the method for modeling this situation suggested in Granger and Newbold (1977) can be used.
(iv) The models in stages (i) and (iii) are then put together to suggest a model for the original data, in differenced form where necessary. This model is estimated, insignificant terms are dropped and a final model achieved.
(v) The forecasting abilities, in terms of mean-squared one-step forecast error, of the bivariate model and the single-series ARIMA model are then compared using post-sample data. If the bivariate model forecasts significantly better, then evidence of causation is found.

These stages are somewhat biased against finding causation, as, if in stage (ii) no evidence of causes is found, then no bivariate models will be constructed. The separation of the modeling period and the evaluation period does prevent evidence for spurious causation occurring because of data mining. However, a weakness is that if an important structural change occurs between the sample and the post-sample, the test will lose power. The relevance of Axiom C is evident. Ashley, Granger and Schmalensee, using quarterly data, find evidence that consumption causes advertising, but that advertising does not cause consumption except instantaneously. These results agree with parts of the advertising literature that find advertising expenditure is determined by management from previous sales figures and that advertising has little or no long-memory effect. On the other hand, these results might well be the opposite of the pre-conceptions of many economists, which illustrates both the relevance of performing a test and also of not relying on some partly formed theory.

6. DISCUSSION AND CONCLUSIONS

The definition of causation proposed and defended above essentially says that Xn+1 will consist of a part that can be explained by some proper information set, excluding Yn-j (j ≥ 0), plus an unexplained part. If the Yn-j can be used to partly forecast the unexplained part of Xn+1, then Y is said to be a prima facie cause of X. It is clear that in practice the quality of the answer one gets from a test is related to the sophistication of the analysis used in deciding what is explained and by what. The definition also relies very heavily on Axiom A, that the future cannot cause the past, as using the "arrow of time" imposes the structure necessary for the definition to hold. It also means that the definition does emphasize forecasting. If one does not accept Axiom A, the rest of the work connected with the definition becomes irrelevant. It is important to realize that the truth of Axiom A cannot be tested using the methods discussed in this paper. I should point out that the work by physicists on "time-reversibility" does not seem to contradict Axiom A, as a careful reading of the review article by Overseth (1967) will show. Because of the way the definition is framed, and the tests based on it are organized, it is only appropriate for use with sequences of data. It cannot say anything about unique events or contribute to topics such as whether there exists an ultimate or first cause. Such topics have to remain the province of philosophers and theologians.

In interpreting the test results it has been suggested above that one thinks in terms of changing personal beliefs about whether Y causes X. There is nothing essentially new in this suggestion, as it is certainly what occurs in practice. The definition and tests based on it provide a way to organize the available data that some workers will feel gives an appropriate basis for possibly changing their prior probabilities. I leave to others the discussion of the effect of this procedure, and of the whole causation testing methodology, on scientific methodology.

Some of the economists writing about what is called Granger causality have related this concept to the more familiar one of exogeneity; see, for example, Sims (1977) and Geweke (1978). When econometric models are constructed it is usual to divide variables into exogenous (Z) and endogenous (Y), and it is assumed that components of Z may cause components of Y but not vice versa. There is thus assumed to be a one-way causal relationship from Z to Y. For estimation and econometric identification purposes, it is important that this classification be correct where questions of efficiency, model uniqueness and model specification are concerned. Tests for exogeneity are with respect not only to the information set used but also to the division of variables picked. One may find, for instance, that Z minus W is exogenous to Y plus W, for some variable W. It is also possible that missing variables can disrupt the exogenous interpretation, as when Z is exogenous to Y but not to some extended Y. The possibility of "instantaneous causality" obviously greatly complicates the problem of how to test for exogeneity. Some of these problems have been discussed elsewhere [Granger (1980)] and so will not be followed up here.

Some variables are such that prior beliefs will be strong that they are exogenous: an example is that weather is probably exogenous to the economy. However, other variables have often been considered to be exogenous yet need to be tested, the best examples being the control variables. One can argue that a government-controlled interest rate is in fact partly determined by previous movements elsewhere in the economy, and so is not strictly exogenous. The true exogenous part of such a variable is that which cannot be forecast from other variables and its own past, and it follows that it is only this part that has any policy impact. The theory of rational expectations, currently attracting a lot of attention in economics, is relevant here but its discussion is not really appropriate.

The effect of the presence of control variables on causal relationships was considered by Sims (1977). It is certainly possible that the actions of a controller can lead to what appears to be a causal relationship between two variables. Equally, it is possible that two variables that would be causally related if no controls were used would seem to be unrelated in the presence of a control. It is also worth pointing out that controllability is a much deeper property than causality, in my opinion, although some writers have confused the two concepts. If Y causes X, it does not necessarily mean that Y can be used to control X. An example is if one observes that the editorial recommendations of the New York Times about which candidates to support cause some voters to change their votes. However, if one started controlling these editorials, and this became known, the previously observed causality may well disappear. The reason is clearly that the structure has been altered by changing a previously uncontrolled variable to one that is controlled. If causation is found between a controlled variable and something else, this could be useful in deciding how to control, provided movements are kept near those observed in the past. It seems quite possible that some variables used in the past by governments to control may be so ineffectual that causation will not be found, so testing is worthwhile. The relationship between control, causation and the recent rational expectations literature is potentially an interesting one, but is too large a topic to be considered here.

There is clearly much more discussion required of this and other definitions and more experience required with the various methods of testing that have been suggested. It is my personal belief that the topic is of sufficient importance, and of interest, to justify further work in this field.

REFERENCES

Ashley, R., C.W.J. Granger and R. Schmalensee, 1979, Advertising and aggregate consumption: An analysis of causality, Research report (Department of Economics, University of California, San Diego, CA).
Black, H., 1978, Inflation and the issue of unidirectional causality, Journal of Money, Credit and Banking X, Feb., 99–101.
Blalock, H.M. Jr., 1964, Causal inferences in non-experimental research (University of North Carolina Press, Chapel Hill, NC).
Box, G.E.P. and G.M. Jenkins, 1970, Time series analysis, forecasting and control (Holden-Day, San Francisco, CA).
Bunge, M., 1963, Causality (Meridian Books, Cleveland, OH).
Caines, P.E. and C.W. Chan, 1975, Feedback between stationary stochastic processes, IEEE Transactions on Automatic Control AC-20, 498–508.
Caines, P.E., S.P. Sethi and T. Brotherton, 1977, Impulse response identification and causality detection for the Lydia–Pinkham data, Annals of Economic and Social Measurement 6, Spring, 147–164.
Chiang, C., 1978, An investigation of the relationship between price series, Ph.D. thesis (University of California, San Diego, CA).
Ciccolo, J.H. Jr., 1978, Money, equity values and income: Tests for exogeneity, Journal of Money, Credit and Banking X, Feb., 45–64.
Feige, E.L. and D.K. Pearce, 1976a, Inflation and incomes policy: An application of time series models, Journal of Monetary Economics, Supplementary Series 2, 273–302.
Feige, E.L. and D.K. Pearce, 1976b, Economically rational expectations: Are innovations in the rate of inflation independent of innovations in measures of monetary and fiscal policy?, Journal of Political Economy 84, 499–522.
Geweke, J., 1978, Testing the exogeneity specification in the complete dynamic simultaneous equation model, Journal of Econometrics 7, no. 2, 163–186.
Good, I.J., 1961/62, A causal calculus I/II, British Journal for the Philosophy of Science 11, 305–318, and 12, 43–51.
Gordon, R.J., 1977, World inflation and monetary accommodation in eight countries, Brookings Papers on Economic Activity, Part 3, 409–478.
Granger, C.W.J., 1963, Economic processes involving feedback, Information and Control 6, 28–48.
Granger, C.W.J., 1969, Investigating causal relations by econometric models and cross-spectral methods, Econometrica 37, 424–438.
Granger, C.W.J., 1980, Generating mechanisms, models and causality, Paper presented to the World Econometrics Congress, Aix-en-Provence, Sept. 1980.

Granger, C.W.J. and A. Andersen, 1978, An introduction to bilinear time series models (Vandenhoeck and Ruprecht, Göttingen).
Granger, C.W.J. and M. Hatanaka, 1964, Spectral analysis of economic time series (Princeton University Press, Princeton, NJ).
Granger, C.W.J. and P. Newbold, 1977, Forecasting economic time series (Academic Press, New York).
Hart, H.L.A. and A.M. Honore, 1959, Causation in the law (Oxford University Press, Oxford).
Hosoya, Y., 1977, On the Granger condition for non-causality, Econometrica 45, no. 7, 1735–1736.
Mehra, Y.P., 1977, Money wages, prices and causality, Journal of Political Economy 85, Dec., 1227–1244.
Newbold, P., 1978, Feedback induced by measurement errors, International Economic Review 19, 787–791.
Overseth, O.E., 1967, Experiments in time reversal, Scientific American 221, Oct., 88–101.
Pierce, D.A., 1975, Forecasting in dynamic models with stochastic regression, Journal of Econometrics 3, 349–374.
Pierce, D.A. and L.D. Haugh, 1977, The assessment and detection of causality in temporal systems, Journal of Econometrics 5, 265–293.
Pierce, D.A. and L.D. Haugh, 1979, Comment on Price, Journal of Econometrics 10, 257–260.
Price, J.M., 1979, A characterization of instantaneous causality: A correction, Journal of Econometrics 10, 253–256.
Sargent, T.J., 1976, A classical macroeconometric model for the U.S., Journal of Political Economy 84, 207–238.
Sims, C.A., 1972, Money, income and causality, American Economic Review 62, 540–552.
Sims, C.A., 1977, Exogeneity and causal ordering in macroeconomic models, in: New methods in business cycle research, Proceedings of a conference (Federal Reserve Bank, Minneapolis, MN).
Skoog, G.R., 1976, Causality characterizations: Bivariate, trivariate and multivariate propositions, Staff Report no. 14 (Federal Reserve Bank, Minneapolis, MN).
Suppes, P., 1970, A probabilistic theory of causality (North-Holland, Amsterdam).
Wiener, N., 1956, The theory of prediction, in: E.F. Beckenbach, ed., Modern mathematics for engineers, Series 1, Ch. 8 (McGraw-Hill, New York).
Williams, D., C.A.E. Goodhart and D.H. Gowland, 1976, Money, income and causality: The U.K. experience, American Economic Review 66, 417–423.
Wright, S., 1964, The interpretation of multivariate systems, in: O. Kempthorne et al., Statistics and mathematics in biology, Ch. 2 (Hafner, New York).
Zellner, A., 1978, Causality and econometrics, in: Proceedings of conference held at the University of Rochester, NY, forthcoming.

CHAPTER 3

Some Recent Developments in a Concept of Causality*

C. W. J. Granger

The paper considers three separate but related topics. (i) What is the relationship between causation and co-integration? If a pair of I(1) series are co-integrated, there must be causation in at least one direction. An implication is that some tests of causation based on differenced series may have missed one source of causation. (ii) Is there a need for a definition of 'instantaneous causation' in a decision science? It is argued that no such definition is required. (iii) Can causality tests be used for policy evaluation? It is suggested that these tests are useful, but that they should be evaluated with care.

1. INTRODUCTION

Suppose that one is interested in the question of whether or not a vector of economic time series yt 'causes' another vector xt. There will also exist a further vector of variables wt which provide a context within which the causality question is being asked. Two information sets are of interest:

Jt: xt-j, yt-j, wt-j, j ≥ 0, and

Jt′: xt-j, wt-j, j ≥ 0, so that Jt uses all of the available information but Jt′ excludes the information in past and present yt. It is important to assume that components of yt are not perfect functions of the other components of Jt, so that there does not exist a function g( ) such that yt = g(wt-j, j ≥ 0), for example. Let f(x|J) be the conditional distribution of x given J and E[x|J] be the corresponding conditional mean; then the following definitions of causality and non-causality will be used in the following discussion:

* Journal of Econometrics, 39, 1988, 199–211.

(i) yt does not cause xt+1 with respect to Jt if

f(xt+1|Jt) = f(xt+1|Jt′).

(ii) If

f(xt+1|Jt) ≠ f(xt+1|Jt′), then yt is a 'prima facie' cause of xt+1 with respect to Jt.

(iii) If

E[xt+1|Jt] = E[xt+1|Jt′], then yt does not cause xt+1 in mean, with respect to Jt.

(iv) If

E[xt+1|Jt] ≠ E[xt+1|Jt′], then yt is a prima facie cause in mean of xt+1 with respect to Jt.

The 'in mean' definitions were introduced in Granger (1963), based on a suggestion by Wiener (1956), and the general definition was discussed in Granger (1980) and elsewhere. The definitions are based on two fundamental principles:

(a) The cause occurs before the effect.
(b) The causal series contains special information about the series being caused that is not available in the other available series,

here wt. It follows immediately that there are forecasting implications of the definitions. The ‘in mean’ definition implies

that, if yt causes xt, then xt+1 is better forecast if the information in yt-j is used than if it is not used, where 'better' means a smaller variance of forecast error, or the matrix equivalent of variance. The general definition (ii) implies that if one is trying to forecast any function g(xt+1) of xt+1, using any cost function, then one will frequently be better off using the information in yt-j, j ≥ 0, and never worse off. This has recently been proved formally by Granger and Thomson (1987) and indicates the considerably greater depth of the more general definition (ii) compared to (iv).
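The extra content of (ii) over (iv) is easily illustrated with a case in which yt affects only the conditional variance of xt+1: the conditional means coincide, so there is no causation in mean, yet the conditional distributions differ. A small numerical sketch (a hypothetical process with illustrative values, not from the original paper):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
y = rng.standard_normal(n)
# x_{t+1} has conditional mean zero whatever y_t is, but its
# conditional variance increases with |y_t|.
x = np.sqrt(1.0 + 2.0 * np.abs(y[:-1])) * rng.standard_normal(n - 1)

big = np.abs(y[:-1]) > 1.0                                  # split on the lagged y
print("mean of x | small |y|:", x[~big].mean().round(3))    # ~0
print("mean of x | large |y|:", x[big].mean().round(3))     # ~0: (iv) detects nothing
print("var  of x | small |y|:", x[~big].var().round(3))
print("var  of x | large |y|:", x[big].var().round(3))      # differ: (ii) detects causation
```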

If Jt contained all the information available in the universe at time t, then yt could be said to cause xt+1. In practice Jt will contain considerably less information and so the phrase 'prima facie' has to be used. (ii) is a weaker definition than (i), but it is a definition of a type of causality which is given a specific name. The name is chosen to include the unstated assumption that possible causation is not considered for any arbitrarily selected group of variables, but only for variables for which the researcher has some prior belief that causation is, in some sense, likely. Because the question of possible causality is being asked, yt would have been considered a candidate for a cause before the definition was applied. Thus, one may start with a 'degree of belief' that yt causes xt+1, measured as a probability, and after using a causality test based on these definitions, one's 'degree of belief' may change. For example, before the test the degree of belief could be 0.3 and after the test this could increase to 0.6. The extent to which the belief probability changes will depend on the perceived quality and quantity of the data, the size and relevance of wt and the perceived relevance, quality or power of the test. Naturally, using just statistical techniques, it is unlikely that the probability will go to one, or to zero, and if one does not like the definitions being used, then the tests are irrelevant and the degree of belief cannot change.

For a given yt and Jt, the definition in (ii) is a general one and not specific to a particular investigator. However, interpretations of tests based on the definition do depend on the degrees of belief of the investigator and so are specific. Further, going back a step, as the choice of variables to be considered in a causality analysis is in the hands of the investigator, the definition can also be thought of as being specific in this respect. It would be interesting to try to give a more formal Bayesian viewpoint to these ideas, incorporating the dynamics of prior beliefs as new information becomes available, but I do not feel competent to undertake such an analysis. There are many tests for causation that have been suggested and some are discussed in Geweke (1984) and will not be considered here. Many empirical papers in economics and in other fields have used definitions (iii) and (iv), although usually just with a pair of univariate series xt, yt, so that wt is empty. A few papers have considered wider information sets. The definitions have proved useful in various theoretical contexts, including rational expectations [Lucas and Sargent (1981)], exogeneity [Engle, Hendry and Richard (1983)] and econometric modeling strategy [Hendry and Richard (1983)]. There has also been some interest by philosophers in the definitions [Spohn (1984)]. Criticisms of the definitions have ranged from the inconsequential (the word causation cannot be used) to the more substantial. As examples of the latter, Zellner (1979) believes that causation cannot be securely established except in the context of a confirmed subject matter theory, and Holland (1986) believes that tests of causation can only be carried out within the context of controlled experiments. As I have attempted to answer various criticisms elsewhere [Granger (1980, 1986a)], I will not discuss them further here. The present paper considers three separate but related questions:

(i) What is the effect of the relationship between the concepts of co-integration and causality on tests of causality?

(ii) Is there need for a definition of instantaneous causality in a decision science such as economics?
(iii) Can causality tests potentially be used for policy evaluation?

It should be noted that in the definitions only discrete time series are considered and that a time lag of one is involved. The size of this unit is not defined. It is merely assumed that a relevant, positive unit does exist for the definitions to hold. There is, of course, no reason for the available data to be measured on the same unit of time that the definitions would require for a proper test of causation. The data may be available monthly and the actual causal lag be only a couple of days, for example. The relevance of this difference in units is also discussed briefly in the consideration of the second of the above questions. The paper is mostly concerned with bringing various results together for econometricians to see. Section 3 presents some new ideas on instantaneous causality and its interpretation, and section 4 is largely a summary of an unpublished paper.

2. CO-INTEGRATION AND CAUSATION

Define an I(0) series as one that has a spectrum that is bounded above and also is positive at all frequencies. If the first and second moments of the series are time-invariant, then xt will be second-order stationary and it can be assumed that the autocorrelations ρk decline exponentially (in magnitude) for k large. In practical terms, xt may be thought to have a generating mechanism that can be well approximated by a stationary, invertible ARMA(p, q) model, with finite p and q. A series will be said to be integrated of order one, denoted I(1), if its changes are I(0). I(1) series are sometimes called 'non-stationary' because their variance increases linearly with time, provided they started a finite number of time units earlier. There is plenty of empirical evidence that macroeconomic series often appear to be I(1). The causality definitions make no assumptions about whether the series being considered are I(0) or I(1), but if they are I(1), some care has to be taken with their empirical analysis.
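The behaviour that motivates the distinction can be seen in a few lines. The sketch below (an illustration only; the sample sizes are arbitrary) simulates many independent random walks and confirms that the variance of an I(1) level grows linearly with time while its changes have constant variance:

```python
import numpy as np

rng = np.random.default_rng(4)
reps, n = 2000, 400
walks = np.cumsum(rng.standard_normal((reps, n)), axis=1)  # I(1) series

for t in (100, 200, 400):
    print(f"t = {t:3d}   variance across replications: {walks[:, t - 1].var():7.1f}")
print("variance of the changes (I(0)):", np.diff(walks, axis=1).var().round(2))
```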

Suppose that xt, yt are both I(1), without trends in mean, so that their changes are both I(0) and with zero means. Then it will typically be true that any linear combination of xt, yt will also be I(1). However, it is possible that there will exist a constant A such that zt = xt - Ayt is I(0). This would happen, for instance, if

xt = Aqt + x1t, (1a) and

yt = qt + y1t, (1b) Some Recent Developments in a Concept of Causality 75

where qt ~ I(1) and x1t, y1t are both I(0). When this occurs xt, yt are said to be 'co-integrated'. Clearly, not all pairs of I(1) series have this property. It was shown in Granger (1983) that, if xt, yt are both I(1) but are co-integrated, then they will be generated by an 'error-correction' model taking the form

Δxt = γ1zt-1 + lagged (Δxt, Δyt) + ε1t, and

Δyt = γ2zt-1 + lagged (Δxt, Δyt) + ε2t, where γ1 and γ2 are not both zero and ε1t, ε2t are finite-order moving averages. Thus, changes in the variables xt, yt are partly driven by the previous value of zt. It can be shown that the line x - Ay = 0 can be considered to be an 'equilibrium' or 'attractor' for the system in the phase-space, where xt is plotted against yt, so that zt can be interpreted as the extent to which the system is out of equilibrium. Further interpretations, methods of testing for and examples of co-integration can be found in the special issue of the Oxford Bulletin of Economics and Statistics, August 1986, which includes a survey article by Granger (1986b). A consequence of the error-correction model is that either Δxt or Δyt (or both) must be caused by zt-1, which is itself a function of xt-1, yt-1. Thus, either xt+1 is caused in mean by yt or yt+1 by xt if the two series are co-integrated. This is a somewhat surprising result, when taken at face value, as co-integration is concerned with the long run and equilibrium, whereas the causality in mean is concerned with short-run forecastability. However, what it essentially says is that for a pair of series to have an attainable equilibrium, there must be some causation between them to provide the necessary dynamics. The various concepts can be easily generalized to vectors of economic series, as in Granger (1986b). It is also possible to generalize the results to non-linear equilibria, as discussed in Granger (1986c). It should be noted that in the error-correction model, there are two possible sources of causation of xt by yt-j, either through the zt-1 term, if γ1 ≠ 0, or through the lagged Δyt terms, if they are present in the equation. To see what form the causation through zt-1 takes, consider again the factor model (1) where, for simplicity, qt is taken to be a random walk, so that Δqt = at, zero-mean white noise, and x1t, y1t are white noises. Then,

zt = xt - Ayt = x1t - Ay1t.

Now consider Δxt, which is given by

Δxt = Aat + Δx1t = -x1,t-1 + Aat + x1t.

The first term is the forecastable part of Δxt and the final two terms will constitute the one-step forecast error of a forecast made at time t - 1 based on xt-j, yt-j, j ≥ 1. However, the forecast, -x1,t-1, is not directly observable but is, generally, correlated with zt-1, which results in the causation in mean. This causation fails to occur only if zt has zero variance.

If zt is not used in the modeling, then x1,t-1 will be related only to the sum of many lagged Δxt, but this sum will also include the sum of at-j, which will give a 'noise' term of possibly large variance. Thus, x1,t-1 will be little correlated with the sum of lagged Δxt. If classical, multivariate time-series modeling techniques are used, as discussed in Box and Jenkins (1970) and in the first edition of Granger and Newbold (1977), then once it is realized that xt, yt are I(1), their changes will be modelled using a bivariate ARMA(p, q) model, with finite p, q. Without zt being explicitly used, the model will be mis-specified and the possible value of lagged yt in forecasting xt will be missed. Thus, many of the papers discussing causality tests based on the traditional time-series modeling techniques could have missed some of the forecastability and hence reached incorrect conclusions about non-causality in mean. On some occasions, causation could be present but would not be detected by the testing procedures used. This problem only arises when the series are I(1) and co-integrated, but this could be a common situation when causality questions are asked. It does seem that many of the causality tests that have been conducted should be re-considered. It would also be interesting to try to relate the causal impact of the zt-1 terms to the frequency-domain causation decompositions considered by Granger (1969) and by Geweke (1982). It is tempting to think that the main impact will be at very low frequencies, but this is not clear.
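The point can be checked by simulation. The sketch below (a minimal illustration; the value of A, the sample size and the single lag used are arbitrary choices) generates a co-integrated pair from the factor model (1) and shows that adding the lagged equilibrium error zt to a model in differences reduces the one-step forecast error variance for Δxt:

```python
import numpy as np

rng = np.random.default_rng(5)
n, A = 4000, 1.0
q = np.cumsum(rng.standard_normal(n))   # q_t: random walk, so its change is a_t
x = A * q + rng.standard_normal(n)      # x_t = A q_t + x_1t   (1a)
y = q + rng.standard_normal(n)          # y_t = q_t + y_1t     (1b)
z = x - A * y                           # co-integrating combination, I(0)

dx, dy = np.diff(x), np.diff(y)         # dx[t] = x[t+1] - x[t]

def resid_var(target, regressors):
    """Residual variance of an OLS regression with intercept."""
    X = np.column_stack([np.ones(len(target))] + regressors)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.var(target - X @ beta)

t = np.arange(1, len(dx))
# Bivariate model in differences only, as traditional practice would fit:
v1 = resid_var(dx[t], [dx[t - 1], dy[t - 1]])
# The same model with the equilibrium error at the start of the interval added:
v2 = resid_var(dx[t], [dx[t - 1], dy[t - 1], z[t]])
print(f"differences only : {v1:.3f}")
print(f"adding z         : {v2:.3f}")
```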

3. INSTANTANEOUS CAUSALITY

One of the earliest, and most telling, criticisms of causality tests based on statistical techniques is that correlation cannot be equated to causality. A major difficulty with looking at a correlation is that it gives no indication about the direction of relationship. If X, Y are correlated random variables, then Y can be used to explain X but X can also be used to explain Y. For a definition of causation to be useful for statistical testing, it must contain an assumption on structure that allows such dual relationships to be disentangled. In the definitions given in the first section this assumption is that the cause occurs before the effect, and so the 'arrow of time' can be used to help distinguish between cause and effect. Other definitions use alternative methods of performing this distinction. Holland (1986), for instance, considers only situations in which the cause is the input to an experiment and the effect is found from the results of the experiment. Strict application of a time gap requirement means that the definitions given above can make no statements about instantaneous causality. Suppose that xt, yt are a pair of series and let

ext = xt - E[xt|Jt-1],   eyt = yt - E[yt|Jt-1],

where Jt-1: xt-j, yt-j, j ≥ 1, and suppose that ρ = corr(ext, eyt) ≠ 0; then one may suppose that there is an apparent instantaneous causality in these series. At the very least, the question of whether or not a causal explanation can be given to this finding deserves consideration. Pierce and Haugh (1977) discuss whether yt causes xt instantaneously by using the information sets Jt′: (Jt-1, yt) and Jt″: (Jt-1, xt). If xt is better 'forecast' using Jt′ rather than Jt-1, one could say that yt instantaneously causes xt (in mean), a necessary condition being just ρ ≠ 0. However, this same condition is necessary for the statement that xt instantaneously causes yt. One is back to the symmetry problem, and this definition of instantaneous causality is therefore unsatisfactory, as no direction of relationship can be deduced just from the data. It is possible, on occasions, to add further information and to reach a conclusion. If, for example, one 'knows' that xt cannot cause yt (instantaneously or otherwise), then the symmetry is broken. This extra 'knowledge' can come from some economic theory or a belief in exogeneity (the economy cannot cause weather), but the conclusion, or the change in the degree of belief about causation, will depend on the correctness of the extra knowledge. Three possible explanations for the apparent instantaneous causality will be discussed.

(i) There is true instantaneous causality in an economic system, so that some elements in the system react without any measurable time delay to changes in some other elements.
(ii) There is no true instantaneous causality, but the finite time delay between cause and effect is small compared to the time interval over which data is collected. Thus, the apparent causation is due to temporal aggregation.

(iii) There is a jointly causal variable wt-1 that causes both xt and yt but is not included in the information set, possibly because it is not observed.

It can be argued that true instantaneous causality will never occur in economics, or any other decision science, and that the missing variable explanation is always a possible one, so that a definition of instantaneous causality is never actually needed. In this discussion, it is assumed that the cause can never occur after the effect, so that the causal lag is either zero, giving instantaneous causality, or positive, giving the causal definition used throughout this paper. It follows that one cannot have instantaneous causality between a pair of flow variables, such as imports and exports, or a pair of production series, as these variables are available only for discrete time, and part of one variable must almost inevitably occur after part of the other. Similarly, a stock variable, such as a price measured at time t, cannot instantaneously cause a flow variable, most of which occurs before t. This will be true however short the time interval used, provided it is finite and positive. Thus, instantaneous causality can only strictly be discussed for pairs or groups of stock variables. If one also believes that economic variables are the outcomes of large numbers of decisions made by economic agents or institutions, that each agent can only concentrate on a single decision at a time, so that their brains are single-track decision makers, that there is always a delay in making a decision, as new information is assimilated, analyzed and a decision rule applied, and that there is then a further delay until the decision is implemented and becomes observable, then the presence of true instantaneous causality in economics becomes very unlikely. The true causal lag may be very small but never actually zero. The observed or apparent instantaneous causality can then be explained by either temporal aggregation or missing causal variables. Temporal aggregation is a realistic, plausible and well-known reason for observing apparent instantaneous causation and so needs no further discussion.

It is common practice in statistics in general, and in econometrics in particular, to discuss a pair of random variables, say X and Y, that have a joint distribution function. The residuals ext, eyt introduced above provide an example. However, there is virtually no discussion about the mechanism that produces this joint distribution. How are the values of the variables X and Y, observed at time t, say, actually generated such that they are also characterised by having a joint distribution? Clearly, these values have to be generated simultaneously. For example, if Xt, Yt are respectively stock market closing price indices from the Pacific Stock Exchange and the Sydney Stock Exchange, and suppose both exchanges close at the identical time, then, if xt, yt have a joint distribution, a mechanism has to be described that can lead to this simultaneous generation of a pair of price indices at sites separated by several thousand miles. Of course, the physical locational difference is irrelevant as the 'electronic distance' is negligible, provided the members of one exchange pay very close or constant attention to what is happening at the other exchange.
If the two variables are statistically independent, so that their joint distribution is the product of the two marginal distributions, the joint generation is easily understood, as all that is needed is two generation mechanisms operating independently of each other. However, the concept of independence is not one that is always well understood, as it depends on the set of variables within which independence is being discussed. For example, if X, Y, Z are three variables, then X and Y can be independent if only this pair is considered, but X|Z and Y|Z need not be independent, where X|Z is the conditional variable X given Z. This is easily seen by taking X, Y, Z to be jointly Gaussian with a covariance matrix having cov(X, Y) = 0 but other covariances non-zero. A theorem can be proved that is the reverse of this example, so that if X, Y are not independent, there always could exist another variable Z such that X|Z, Y|Z are independent. Thus, the apparent joint distribution between X and Y occurs because there are really three variables; Z is affecting each of X and Y, which are independent within the group of three variables, but as Z is unobserved, and thus is marginalized out, the observed joint distribution occurs. Formally, the theorem takes the form: For any bivariate probability density function (p.d.f.) f(x, y), there exists a trivariate p.d.f. f(x, y, z) such that

(i) f(x, y) = ∫ f(x, y, z) dz,

and

(ii) f(x, y, z) = f1(x|z) f2(y|z) f3(z).

A necessary and sufficient condition for (ii) is that

f(x|z) = f(x|y, z).

Here f(x|z) is the conditional distribution of X given Z, after Y has been marginalized out. The theorem states that, if X and Y are a pair of continuous random variables, there potentially could exist a third variable Z such that the joint distribution of X, Y, Z, f(x, y, z), has the property

f(x, y, z) = f(x|z) f(y|z) f(z), so that X|Z and Y|Z are independent. The result is given as Theorem 1 of Holland and Rosenbaum (1986) and originates with Suppes and Zanotti (1981). In private correspondence, Peter Thomson (Victoria University, New Zealand) proved the result for the case when X, Y, Z are jointly Gaussian. In this case, Thomson shows that, if X, Y, Z are Gaussian with zero means, unit variances and correlations

ρ = corr(X, Y), ρ1 = corr(X, Z), ρ2 = corr(Y, Z), then the joint distribution has the required property provided only that ρ1·ρ2 = ρ. The theorem can be expanded to any group of random variables X1, X2, ..., XN such that, if they are conditional on Z1, Z2, ..., Zm, m ≤ N - 1, they will be independent.

In the causality context, if the theorem is correct, then any apparent instantaneous causal relationship can be explained by the possible existence of an unobserved variable that causes both (or all) the variables of interest. As the missing variable is unobserved, it could occur at an earlier time. It follows that the concept of (real) instantaneous causality is not required, as the present definition of causation (with a lag between cause and effect) can be used to explain all joint distributions and thus any apparent instantaneous causality or joint distribution. The question remains of how one can disentangle the actual causal structure between variables that have here been called apparently instantaneously causal. I suspect that this cannot be achieved by purely statistical means, although this important question deserves further consideration. One natural approach is to add extra structure, as mentioned above, such as suggested by theory, 'common sense' or by beliefs that 'small' cannot cause 'big', for example. The relevance of conclusions based on such ideas will depend on how correct the assumptions made are; the tests will be of 'conditional causality' and the interpretation of the test results will depend on the degree of belief that one has in the assumptions being made. As it stands, the discussion in this section probably has no implications for practical econometrics but should have relevance for the interpretation of the results obtained from empirical work.
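Thomson's Gaussian case is easy to verify numerically. The sketch below (illustrative correlation values, with ρ = ρ1·ρ2 imposed by construction) draws from the implied trivariate normal and checks that X and Y are correlated unconditionally but uncorrelated once Z is regressed out:

```python
import numpy as np

rho1, rho2 = 0.8, 0.5
rho = rho1 * rho2                        # corr(X, Y) required by the theorem
cov = np.array([[1.0,  rho,  rho1],
                [rho,  1.0,  rho2],
                [rho1, rho2, 1.0]])

rng = np.random.default_rng(6)
x, y, z = rng.multivariate_normal(np.zeros(3), cov, size=200_000).T

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing out c."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

print("corr(X, Y)    :", np.corrcoef(x, y)[0, 1].round(3))   # ~0.4
print("corr(X, Y | Z):", partial_corr(x, y, z).round(3))     # ~0.0
```

For Gaussian variables zero partial correlation is conditional independence, so the unobserved Z fully accounts for the apparent contemporaneous link between X and Y.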

4. CAUSALITY AND CONTROL VARIABLES

Although the definitions of causality discussed in the first section are very simple, there can be problems with their use and interpretation. Tests based on the definition can also give some apparently surprising results. Some of these questions can be illustrated with a discussion of the potential usefulness of causality tests on control variables. The illustration can be based on a very simple case. Suppose that yt is some economic variable which the government is trying to control. yt will be called the target series, at will be the desired value for yt and the cost function is the expected square of the difference between yt and at. Let xt be the variable controlled by the government and suppose that yt is generated by what is called in the engineering literature the 'plant equation',

yt = αyt-1 + cxt + ut, (2)

where ut is zero-mean white noise. It is easily seen that optimum control is achieved by taking

xt = -c⁻¹[αyt-1 - at], (3)

so that

yt = at + ut, (4)

under the assumption that the specification and parameters of the plant equation are unchanged, however xt is generated. There is a difficulty with this specification, as there is an apparent instantaneous causation in (2). However, this is largely illusory, as one can take the time interval in this generating process to be the decision lag (rather than the period between data observations), and then note that at will be determined by the government at time t - 1 as a desired value for yt to achieve during the period from t - 1 to t. Thus, one can put at = āt-1, placing at at the time at which its value is determined. It is seen from (3) that xt is also determined at time t - 1. Thus, the control variable may be denoted wt-1 = xt and is also associated with the time at which it is determined. The government will observe wt-1 at time t - 1. The public will observe xt at time t but should equate it with wt-1. It is important in causality discussions to associate a variable with the time at which it occurs, rather than when it is observed. This problem is discussed in Granger (1980), particularly concerning the temporal relationship between thunder and lightning. The equations now are

yt = αyt-1 + cwt-1 + ut, (5)
wt = -c⁻¹(αyt - āt), (6)

yt = āt-1 + ut. (7)

If an economic theory gives the plant equation (5), it may appear that wt-1 should cause yt, but this would be an incorrect interpretation, as the whole system has to be considered jointly rather than one equation at a time. From the government's perspective, the question asked is: Is yt better forecast using the information set Jt-1: yt-j, wt-j, āt-j, j ≥ 0, rather than the information set J′t-1: yt-j, āt-j, j ≥ 0? However, from (6), clearly these information sets contain the same information, as wt is exactly explained by yt and āt. Thus, the government would not find wt-1 causing yt in this case. This result was proved by Sargent (1976). The same conclusion would hold if wt were selected sub-optimally, but still exactly a function of other variables, such as

wt = g1yt + g2āt, (8)

as pointed out by Buiter (1984).

The situation for the public is somewhat different if āt is not publicly announced and is also stochastic, such as if āt is generated by

āt = βyt + γāt-1 + et. (9)

Now, wt-1 would seem to cause yt in that yt is better forecast by yt-j, wt-j, j ≥ 0, than by yt-j, j ≥ 0, alone. The Sargent and Buiter results are not very robust if the very stringent conditions of the model are relaxed. For example, if a white-noise error term, vt, is added to (6), the government is no longer able to perfectly control its variable, as is surely the case in most instances. The vector autoregressive or reduced form representation for the structural system (5), (6) and (9) is

yt = αyt-1 + cwt-1 + ut,
wt = -c⁻¹(α - β)(αyt-1 + cwt-1) + c⁻¹γāt-1 + vt + c⁻¹[et - (α - β)ut],

āt = αβyt-1 + γāt-1 + βcwt-1 + et + βut.

In general, it is easy to use such a VAR model to ask causality questions. Any left-hand-side variable is caused by any right-hand-side variable having a non-zero coefficient. Thus, for example, wt-1 will cause āt if βc ≠ 0. However, the results above indicate that this type of result only holds true if there is no linear combination of the residuals of the various equations that has zero variance. No such linear combination exists if ut, vt and et all have positive variances, but this is not true if vt = 0, all t, as assumed by Sargent and Buiter.

There is one other case where the public finds wt-1 causing yt, but the government will find no causation. If vt = 0, all t, but the plant equation includes a stochastic variable zt,

yt = αyt-1 + cwt-1 + dzt-1 + ut,

but zt is observed by the government and not by the public. The optimal value of the control variable will then be

wt = -c⁻¹[αyt + dzt - āt].

However, as the public does not observe zt, but does observe wt, which is related to it, again the public will find the control variable causing the target variable. It is thus seen that the public and the government, if performing causality tests of yt by wt-1, can reach different conclusions, depending on who is doing the test and what information set is available. The timing of variables is also clearly important. Some care has to be taken in interpreting causality tests, as this exercise clearly shows. These questions are discussed in more detail in Granger (1987), where the I(1) case and co-integration aspects are also considered.
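A simulation makes the asymmetry concrete. The sketch below (a hypothetical parameterization of equations (5), (6) and (9), with vt = 0) shows that adding wt-1 to the government's information set leaves the forecast of yt unchanged, while for the public, which does not observe āt, wt-1 appears to cause yt:

```python
import numpy as np

rng = np.random.default_rng(7)
n, alpha, c, beta, gamma = 5000, 0.7, 1.0, 0.5, 0.3
y, w, abar = np.zeros(n), np.zeros(n), np.zeros(n)
for t in range(1, n):
    y[t] = alpha * y[t - 1] + c * w[t - 1] + rng.standard_normal()       # plant (5)
    abar[t] = beta * y[t] + gamma * abar[t - 1] + rng.standard_normal()  # target rule (9)
    w[t] = -(alpha * y[t] - abar[t]) / c                                 # control rule (6)

def resid_var(target, regressors):
    X = np.column_stack([np.ones(len(target))] + regressors)
    b, *_ = np.linalg.lstsq(X, target, rcond=None)  # lstsq tolerates the exact collinearity
    return np.var(target - X @ b)

t = np.arange(2, n)
# Government: w_{t-1} is an exact function of y_{t-1} and abar_{t-1}, so it adds nothing.
gov = (resid_var(y[t], [y[t - 1], abar[t - 1]]),
       resid_var(y[t], [y[t - 1], abar[t - 1], w[t - 1]]))
# Public: abar is unobserved, and w_{t-1} reveals it, so w appears to cause y.
pub = (resid_var(y[t], [y[t - 1]]),
       resid_var(y[t], [y[t - 1], w[t - 1]]))
print(f"government: {gov[0]:.3f} -> {gov[1]:.3f}")
print(f"public    : {pub[0]:.3f} -> {pub[1]:.3f}")
```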

REFERENCES

Box, G.E.P. and G.M. Jenkins, 1970, Time series analysis, forecasting and control (Holden-Day, San Francisco, CA).
Buiter, W.H., 1984, Granger-causality and policy effectiveness, Economica 51, 151–162.
Geweke, J., 1982, Measurement of linear dependence and feedback between time series, Journal of the American Statistical Association 77, 304–324.
Geweke, J., 1984, Inference and causality in economic time series models, Ch. 19 in: Z. Griliches and M.D. Intriligator, eds., Handbook of econometrics II (North-Holland, Amsterdam).
Engle, R.F., D.F. Hendry and J.F. Richard, 1983, Exogeneity, Econometrica 51, 277–304.
Granger, C.W.J., 1963, Economic processes involving feedback, Information and Control 6, 28–48.
Granger, C.W.J., 1969, Investigating causal relations by econometric models and cross-spectral methods, Econometrica 37, 424–438.
Granger, C.W.J., 1980, Testing for causality: A personal viewpoint, Journal of Economic Dynamics and Control 2, 329–352.
Granger, C.W.J., 1983, Co-integrated variables and error-correcting models, Economics Department discussion paper no. 83-13 (University of California, San Diego, CA).

Granger, C.W.J., 1986a, Comment on Holland (1986).
Granger, C.W.J., 1986b, Developments in the study of co-integrated economic variables, Oxford Bulletin of Economics and Statistics 48, 213–228.
Granger, C.W.J., 1986c, Economic stochastic processes with simple attractors, Economics Department discussion paper no. 86-20 (University of California, San Diego, CA).
Granger, C.W.J., 1987, Causality testing of control variables, Economics Department discussion paper (University of California, San Diego, CA).
Granger, C.W.J. and P. Newbold, 1977, Forecasting economic time series, 1st ed. (Academic Press, New York).
Granger, C.W.J. and P.J. Thomson, 1987, Predictive consequences of using conditioning or causal variables, Econometric Theory, forthcoming.
Hendry, D.F. and J.F. Richard, 1983, The econometric analysis of economic time series, International Statistical Review 51, 111–163.
Holland, P.W., 1986, Statistics and causal inference, Journal of the American Statistical Association 81, 945–960.
Holland, P.W. and P.R. Rosenbaum, 1986, Conditional association and unidimensionality in monotone latent variable models, Annals of Statistics 14, 1523–1543.
Lucas, R.E. and T.J. Sargent, 1981, Rational expectations and econometric practice (University of Minnesota Press, Minneapolis, MN).
Machina, M.J. and W.S. Neilson, 1980, The Ross characterization of risk aversion: Strengthening and extension, Economics Department discussion paper, Aug. (University of California, San Diego, CA).
Pierce, D.A. and L.D. Haugh, 1977, Causality in temporal systems: Characterizations and a survey, Journal of Econometrics 5, 265–293.
Sargent, T.J., 1976, The observational equivalence of natural and unnatural rate theories of macroeconomics, Journal of Political Economy 84, 631–670.
Spohn, W., 1984, Probabilistic causality: From Hume via Suppes to Granger, in: M.C. Galavotti and G. Gambetta, eds., Causalità e modelli probabilistici (CLUEB Editrice, Bologna) 64–87.
Suppes, P. and M. Zanotti, 1981, When are probabilistic explanations possible?, Synthese 48, 191–199.
Wiener, N., 1956, The theory of prediction, in: E.F. Beckenbach, ed., Modern mathematics for engineers (McGraw-Hill, New York).
Zellner, A., 1979, Causality and econometrics, in: K. Brunner and A. Meltzer, eds., Carnegie–Rochester conference series on public policy, Vol. 10 (North-Holland, Amsterdam).

CHAPTER 4

Advertising and Aggregate Consumption: An Analysis of Causality*1

R. Ashley, C. W. J. Granger, and R. Schmalensee

This paper is concerned with testing for causation, using the Granger definition, in a bivariate time-series context. It is argued that a sound and natural approach to such tests must rely primarily on the out-of-sample forecasting performance of models relating the original (non-prewhitened) series of interest. A specific technique of this sort is presented and employed to investigate the relation between aggregate advertising and aggregate consumption spending. The null hypothesis that advertising does not cause consumption cannot be rejected, but some evidence suggesting that consumption may cause advertising is presented.

1. INTRODUCTION

This paper is concerned with two related questions. The first is empirical: do short-run variations in aggregate advertising affect the level of consumption spending?2 Many studies find that advertising spending varies pro-cyclically.3 But firms often use sales- or profit-based decision rules in fixing advertising budgets,4 so that observed correlation might

* Econometrica, 48, 1980, 1149–1167. 1 An earlier version of this paper was written while all three authors were at the University of California, San Diego. Financial support was provided by the Academic Senate of that institution and by National Science Foundation Grant SOC76–14326. The authors are indebted to Robert J. Coen of McCann-Erickson, Dee Ellison of the Federal Trade Commission, Joseph Boorstein and Jonathan Goldberg of the Columbia Broadcasting System, and Robert Parker of the U.S. Department of Commerce for assistance in data preparation, and to Christopher A. Sims and two referees for useful comments. Final responsibility for errors and omissions of course remains with the authors. 2 The techniques we employ in this study are not well-suited to the detection of very long-run effects that advertising might have on spending patterns, via induced cultural change, for instance. 3 See, for instance, Simon [16, pp. 67–74] and the references he cites. 4 See, for instance, Kotler [11, pp. 350–351], Schmalensee [15, pp. 17–18], and the references they cite.

reflect the effect of advertising on consumers' spending decisions, the effect of aggregate demand on firms' advertising decisions, or some combination of both effects. Previous studies of this empirical question, surveyed in Section 2, do not adequately deal with the problem of determining the direction of causation between consumption and advertising.

The second question with which we are concerned is methodological: how should one test hypotheses about causation in a bivariate time series context? Section 3 proposes a natural approach to such tests that is a direct application of the definition of causality introduced by Granger [8]. We argue that it is appropriate to use Box-Jenkins [2] techniques to pre-whiten the original series of interest and to use cross-correlograms and bivariate modeling of the pre-whitened series to identify models relating the original series. In our view the out-of-sample forecasting performance of the latter models provides the best information bearing on hypotheses about causation.

The data employed in our study of the advertising/consumption question are described in Section 4, and the results of applying our testing procedure are presented in Section 5. Our main findings are briefly summarized in Section 6.

2. PREVIOUS STUDIES

Some evidence against the view that variations in aggregate advertising affect aggregate demand is provided by numerous studies of advertising behavior at cyclical turning points; aggregate advertising generally lags the rest of the economy at such points.5 Turning point studies do not use much of the information in the time series examined, however, and they do not provide formal tests of hypotheses.

Four relatively recent studies have applied statistical techniques to study the relation between advertising and aggregate demand. In the first of these, Verdon, McConnell, and Roesler [23] employed the Printer's Ink monthly index of advertising spending (hereinafter referred to as PII). They de-trended PII, GNP, and the Federal Reserve index of industrial production, smoothed all three series with a weighted moving average, and examined correlations between the transformed PII series and the other two transformed series at various leads and lags and for various periods. The correlations obtained showed no clear patterns.

In a critique of this study, Ekelund and Gramm [7] argued that consumption spending, rather than GNP or the index of industrial production, should be used in tests of this sort. They regressed de-trended quarterly advertising data from Blank [1] on de-trended consumption spending, and all regressions were insignificant.

5 See Simon [16, pp. 67–74] and Schmalensee [15, pp. 17–18] for surveys of these studies.

Taylor and Weiserbs [21] considered four elaborations of the Houthakker–Taylor [10] consumption function that included contemporaneous advertising. Annual data were employed, consumption and income were expressed in 1958 dollars, and advertising spending was used both in current dollars and deflated by the GNP deflator. One of their models performed well, and it had a significant advertising coefficient even when re-estimated by a two-stage least squares procedure that treated advertising as endogenous. Taylor and Weiserbs concluded that aggregate advertising had a significant effect on aggregate consumption.

There are at least four serious problems with this study, however. First, as the authors acknowledge, their conclusion rests on the somewhat restrictive maintained hypothesis that the Houthakker–Taylor framework is correct. Second, the GNP deflator is not a particularly good proxy for the price of advertising messages.6 Third, their two-stage least squares procedure may not deal adequately with advertising's probable endogeneity. It rests on a rather ad hoc structural equation for advertising spending. Further, all structural equations have lagged endogenous variables, so that the consistency of the estimators depends critically on the disturbances being serially uncorrelated.7 Fourth, annual data are likely to be inappropriate here. In a survey of econometric studies of the effects of advertising on the demand for individual products, Clarke [4] finds that between 95 per cent and 100 per cent of the sales response to a maintained increase in advertising occurs within one year. Similarly, Schmalensee's [15, Ch. 3] estimates of aggregate advertising spending functions indicate that between 75 per cent and 85 per cent of the advertising response to a maintained increase in sales occurs within one year. These findings suggest that in this context so much information is lost by aggregation over time that annual data simply cannot contain much information about the direction of causation.

Finally, Schmalensee [15, pp. 49–58] employed an extension of Blank's [1] quarterly advertising series, deflated to allow for changes in media cost and effectiveness, in connection with several standard aggregate consumption equations specified in constant dollars per capita. Using instrumental variables estimators, the previous quarter's advertising, the current quarter's advertising, and the following quarter's advertising were added one at a time to the consumption equations. It was found

6 Using the sources described in the Appendix, an implicit deflator for the six media considered there was constructed for the period 1950–1975. Over that period, it grew at 2.2 per cent per year, while the GNP deflator increased an average of 3.5 per cent per year. The simple correlation between the first differences of the two series was only .60. 7 We are told that Durbin's [6] test did not reject the null hypothesis of no serial correlation, but that test explicitly considers only the alternative of first-order autoregression. Moreover, the small sample properties of Durbin's test are not well understood [12], and Taylor and Weiserbs have only 35 residuals.

that current advertising generally outperformed lagged advertising, and future advertising generally outperformed current advertising in fitting the data. Schmalensee took this pattern to imply that causation ran from consumption to advertising, reasoning that if advertising were causing consumption, past advertising would have outperformed future advertising. Schmalensee's study has at least two major weaknesses. First, no tests of significance are applied to the observed performance differences. Second, nothing rules out the possibility that advertising is causing consumption as well as being caused by it. If both effects are present, both affect observed performance differentials, and these can in principle go in either direction.

It seems clear that in order to go beyond these studies, one must employ a statistical procedure explicitly designed to test hypotheses about causality in a time-series context. Accordingly, we now present such a procedure.

3. TESTING FOR CAUSALITY

The phrase 'X causes Y' must be handled with considerable delicacy, as the concept of causation is a very subtle and difficult one. A universally acceptable definition of causation may well not be possible, but a definition that seems reasonable to many is the following: Let Wn represent all the information available in the universe at time n. Suppose that at time n optimum forecasts are made of Xn+1 using all of the information in Wn and also using all of this information apart from the past and present values Yn−j, j ≥ 0, of the series Yt. If the first forecast, using all the information, is superior to the second, then the series Yt has some special information about Xt, not available elsewhere, and Yt is said to cause Xt.

Before applying this definition, an agreement has to be reached on a criterion to decide if one forecast is superior to another. The usual procedure is to compare the relative sizes of the variances of forecast errors. It is more in keeping with the spirit of the definition, however, to compare the mean-square errors of post-sample forecasts.

To make the suggested definition suitable for practical use a number of simplifications have to be made. Linear forecasts only will be considered, together with the usual least-squares loss function, and the information set Wn has to be replaced by the past and present values of some set of time series, Rn: {Xn−j, Yn−j, Zn−j, . . . , j ≥ 0}. Any causation now found will only be relative to Rn and spurious results can occur if some vital series is not in this set.
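A minimal simulation sketch makes this operational definition concrete. The data-generating process below is hypothetical (Y feeds into X at one lag by construction), and plain OLS stands in for the optimum linear forecasts:

```python
# Hedged sketch of the forecasting definition of causality: Y is said to cause
# X if adding past Y to past X lowers the post-sample one-step forecast MSE.
# The data-generating process is hypothetical; OLS approximates the optimum
# linear forecasts.
import numpy as np

rng = np.random.default_rng(0)
n, n_post = 240, 60
y = np.zeros(n)
x = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()
    x[t] = 0.4 * x[t - 1] + 0.8 * y[t - 1] + rng.standard_normal()

def post_sample_mse(lagged_regressors, target):
    """Fit OLS on the sample period, then score one-step forecasts post-sample."""
    Z = np.column_stack([np.ones(len(target))] + lagged_regressors)
    beta, *_ = np.linalg.lstsq(Z[:-n_post], target[:-n_post], rcond=None)
    return np.mean((target[-n_post:] - Z[-n_post:] @ beta) ** 2)

target = x[1:]                                       # X_{n+1}
mse_x = post_sample_mse([x[:-1]], target)            # information: past X only
mse_xy = post_sample_mse([x[:-1], y[:-1]], target)   # past X and past Y
print(f"MSE(X) = {mse_x:.3f}, MSE(X, Y) = {mse_xy:.3f}")
# Here MSE(X, Y) < MSE(X), so Y causes X relative to this information set.
```

Were the coefficient on past Y in the data-generating process set to zero, the two mean-square errors would differ only by sampling noise.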

The simplest case is when Rn consists of just values from the series Xt and Yt, where now the definition reduces to the following.

Let MSE(X) be the population mean-square of the one-step forecast error of Xn+1 using the optimum linear forecast based on Xn−j, j ≥ 0, and let MSE(X, Y) be the population mean-square of the one-step forecast error of Xn+1 using the optimum linear forecast based on Xn−j, Yn−j, j ≥ 0. Then Y causes X if MSE(X, Y) < MSE(X). With a finite data set, some test of significance could be used to test if the two mean-square errors are significantly different; one such test is presented below and employed in Section 5. As the scope of this definition has been greatly circumscribed by the simplifications used, the possibility of incorrect conclusions being reached is expanded,8 but at least a useable form of the definition has been obtained. This definition of causation (stated in terms of variances rather than mean-square errors) was introduced into the economic literature by Granger [8]; it has been applied by Sims [17] and numerous subsequent authors employing a variety of techniques. (See [14] for a survey.)

The next several paragraphs present the five-step approach to the analysis of causality (as defined above) between a pair of time series Xt and Yt that is employed in Section 5, below. The remainder of this Section then discusses the rationale for our approach. (A code sketch of steps (i) and (ii) appears at the end of this Section.)

(i) Each series is pre-whitened by building single-series ARIMA models using the Box-Jenkins [2] procedure. Denote the resulting residuals by ext and eyt.

(ii) Form the cross-correlogram between these two residual series, i.e., compute

$$r_k = \mathrm{corr}(e_{x,t},\, e_{y,t-k})$$

for positive and negative values of k. If any rk for k > 0 are significantly different from zero, there is an indication that Yt may be causing Xt, since the correlogram indicates that past Yt may be useful in forecasting Xt. Similarly, if any rk is significantly non-zero for k < 0, Xt appears to be causing Yt. If both occur, two-way causality, or feedback, between the series is indicated.

Unfortunately, the sampling distribution of the rk depends on the exact relationship between the series. On the null hypothesis of no relationship, it is well known that the rk are asymptotically distributed as independent normal with means zero and variances 1/n, where n is the number of observations employed [9, p. 238], but experience shows that the test suggested by this result must be used with extreme caution in finite samples.9

8 Sims [20] provides a discussion of possible spurious sources of apparent causation in applications of this definition. In Section 6, below, we consider the likely importance of these in our empirical analysis.

In practice, we also use a priori judgement about the forms of plausible relations between economic time series. Thus, for example, a value of r1 well inside the interval [−2/√n, +2/√n] might be tentatively treated as significant, while a substantially larger value of r7 might be ignored if r5, r6, r8, and r9 are all negligible. This step is perfectly analogous to the univariate Box-Jenkins identification step, where a tentative specification is obtained by judgmental analysis of a correlogram. The key word is "tentative"; the indicated direction of causation is only tentative at this stage and may be modified or rejected on the basis of subsequent modeling and forecasting results.10

(iii) For every indicated causation, a bivariate model relating the residuals is identified, estimated, and diagnostically checked. If only one-way causation is present, the appropriate model is unidirectional and can be identified directly from the shape of the cross-correlogram, at least in theory. However, if the series are related in a feedback fashion, the cross-correlogram has to be unraveled into a pair of transfer functions to help with model identification, by a procedure developed by Granger and Newbold [9, Ch. 7].

(iv) From the fitted model for residuals, after dropping insignificant terms, the corresponding model for the original series is derived, by combining the univariate models with the bivariate model for the residuals. It is then checked for common factors, estimated, and diagnostic checks applied.11

(v) Finally, the bivariate model for the original series is used to generate a set of one-step forecasts for a post-sample period. The corresponding errors are then compared to the post-sample one-step forecast errors produced by the univariate model developed in step (i) to see if the bivariate model actually does

9 One must apparently be even more careful with the Box-Pierce [3] test on sums of squared rk; see [5]. 10 See Granger and Newbold [9, pp. 230–266] for a fuller discussion of this approach. Unpublished simulations performed at UCSD (e.g., C. Chiang, "An Investigation of Relationships Between Price Series," unpublished dissertation, Department of Economics, 1978) find that it rarely signals non-existent causations but lacks power in that subtle causations are not always detected. 11 OLS estimation suffices to produce unbiased estimates, since all the bivariate models considered are reduced forms. It also allows one to consider variants of one equation without disturbing the forecasting results from the other, and it is computationally simpler. On the other hand, where substantial contemporaneous correlation occurs between the residuals, seemingly-unrelated regressions GLS estimation can be expected to yield noticeably better parameter estimates and post-sample forecasts. All estimation in this study is OLS; a re-estimation of our final bivariate model using GLS might strengthen our conclusions somewhat.

forecast better.12 The use of sequential one-step forecasts follows directly from the definition above and avoids the problem of error build-up that would otherwise occur as the forecast horizon is lengthened. Because of specification and sampling error (and perhaps some structural change) the two forecast error series thus produced are likely to be cross-correlated and autocorrelated and to have non-zero means. In light of these problems, no direct test for the significance of improvements in mean-squared forecasting error appears to be available. Consequently, we have developed the following indirect procedure.

For some out-of-sample observation, t, let e1t and e2t be the forecast errors made by the univariate and bivariate models, respectively, of some time series. Elementary algebra then yields the following relation among sample statistics for the entire out-of-sample period:

$$\mathrm{MSE}(e_1) - \mathrm{MSE}(e_2) = \bigl[s^2(e_1) - s^2(e_2)\bigr] + \bigl[m(e_1)^2 - m(e_2)^2\bigr], \tag{1}$$

where MSE denotes sample mean-squared error, s² denotes sample variance, and m denotes sample mean. Letting

$$D_t = e_{1t} - e_{2t} \quad\text{and}\quad \Sigma_t = e_{1t} + e_{2t}, \tag{2}$$

equation (1) can be re-written as follows, even if e1t and e2t are correlated [9, p. 281]:

$$\mathrm{MSE}(e_1) - \mathrm{MSE}(e_2) = \mathrm{cov}(D, \Sigma) + \bigl[m(e_1)^2 - m(e_2)^2\bigr], \tag{3}$$

where cov denotes the sample covariance over the out-of-sample period. Let us assume that both error means are positive; the modifications necessary in the other cases should become clear. Consider the analogue of (3) relating population parameters instead of sample statistics, and let cov denote the population covariance and m denote the population mean. From (3), it is then clear that we can conclude that the bivariate model outperforms the univariate model if we can reject the joint null hypothesis cov(D, Σ) = 0 and m(D) = 0 in favor of the alternative hypothesis that both quantities are nonnegative and at least one is positive. Now consider the regression equation

$$D_t = \beta_1 + \beta_2\bigl[\Sigma_t - m(\Sigma)\bigr] + u_t, \tag{4}$$

12 Alternatively, one might fit both models to the sample period, produce forecasts of the first post-sample observation, re-estimate both models with that observation added to the sample, forecast the second post-sample observation, and so on until the end of the post-sample period. This would, of course, be more expensive than the approach in the text.

where ut is an error term with mean zero that can be treated as independent of Σt.13 From the algebra of regression, the test outlined in the preceding paragraph is equivalent to testing the null hypothesis β1 = β2 = 0 against the alternative that both are nonnegative and at least one is positive. If either of the two least squares estimates, β̂1 and β̂2, is significantly negative, the bivariate model clearly cannot be judged a significant improvement. If one estimate is negative but not significant, a one-tailed t test on the other estimated coefficient can be used. If both estimates are positive, an F test of the null hypothesis that both population values are zero can be employed. But this test is, in essence, four-tailed; it does not take into account the signs of the estimated coefficients. If the estimates were independent, it is clear that the probability of obtaining an F statistic greater than or equal to F0, say, and having both estimates positive is equal to one-fourth the significance level associated with F0. Consideration of the possible shapes of isoprobability curves for (β̂1, β̂2) under the null hypothesis that both population values are zero establishes that the true significance level is never more than half the probability obtained from tables of the F distribution. If both estimates are positive, then one can perform an F test and report a significance level equal to half that obtained from the tables.

The approach just described differs from others that have been employed to analyze causality in its stress on models relating the original variables and on post-sample forecasting performance. We now discuss these two differences.

Many applications of the causality definition considered here (e.g., [13]) essentially stop at our stage (ii) and thus consider only the sample cross-correlogram of the prewhitened series. For a variety of reasons, it seems to us unwise to rest causality conclusions entirely on correlations between estimated residuals. Sims [19], for instance, has argued that there may be a tendency for such correlations to be biased toward zero because of specification error. To see the nature of the argument, suppose Y causes X, so that the appropriate model for X is bivariate. Estimation of such a model on the original series would allow the data to indicate the relative importance of "past X" and "past Y" in forecasting X. Prewhitening X, on the other hand, involves use of a misspecified model in this case, since "past Y" should be included.

13 In fact, this independence assumption must be violated; a bit of algebra shows that in the population, $\mathrm{cov}(\Sigma_t, u_t) = \mathrm{cov}(\Sigma_t, D_t) - \beta_2\,\mathrm{var}(\Sigma_t),$

where var denotes the population variance. On the other hand, it is clear that β1 is estimated without bias, and it can be shown that the bias in β̂2 is equal to the difference between the sample and population values of cov(Σt, ut)/var(Σt). This bias should thus be of negligible importance in moderate samples.

As in standard discussions of omitted variable bias, correlation between "past X" and "past Y" will tend to lead the misspecified univariate model to overstate the importance of "past X" in forecasting current X. The correlation between the residual series from this model and (original or prewhitened) "past Y" will accordingly be biased toward zero. Thus, models directly relating the original variables provide a sounder, as well as a more natural, basis for conclusions about causality.

As has been argued in detail by Granger and Newbold [9, Sect. 7.6], however, prewhitening and analysis of the cross-correlogram of the prewhitened series are useful steps in the identification of models relating the original series, since the cross-correlogram of the latter is likely to be impossible to interpret sensibly. Because the correlations between the prewhitened series (the rk) have unknown sampling distributions, this analysis involves subjective judgements, as does the identification step in univariate Box-Jenkins analysis. In neither case is an obviously better approach available, and in both cases the tentative conclusions reached are subjected to further tests.

It is somewhat less clear how out-of-sample data are optimally employed in an analysis of causality. This question is closely related to fundamental problems of model evaluation and validation and is complicated by sampling error and possible specification error and time-varying coefficients. An attempt to sort all this out would clearly carry us well beyond the bounds of the present essay.

However, we think the riskiness of basing conclusions about causality entirely on within-sample performance is reasonably clear. Since the basic definition of causality is a statement about forecasting ability, it follows that tests focusing directly on forecasting are most clearly appropriate. Indeed, it can be argued that goodness-of-fit tests (as opposed to tests of forecasting ability) are contrary in spirit to the basic definition.14 Moreover, within-sample forecast errors have doubtful statistical properties in the present context when the Box-Jenkins methodology is employed. While the power of that methodology has been demonstrated in numerous applications and rationalizes our use of it here, it must be noted that the identification (model specification) procedures in steps (i)–(iv) above involve consideration and evaluation of a wide variety of model formulations. A good deal of sample information is thus employed in specification choice, and there is a sense in which most of the sample's real degrees of freedom are used up in this process. It thus seems both safer and more natural to place considerable weight on out-of-sample forecasting performance.
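The regression test built on equation (4) is straightforward to mechanize. In the sketch below the forecast-error arrays e1 and e2 are hypothetical placeholders for the post-sample univariate and bivariate errors; Dt is regressed on a constant and the demeaned Σt, and the resulting statistics feed the sign-dependent logic described above:

```python
# Sketch of the post-sample MSE comparison via regression (4):
#   D_t = beta1 + beta2 [Sigma_t - m(Sigma)] + u_t,
# with D = e1 - e2 and Sigma = e1 + e2. The arrays e1, e2 below are
# hypothetical stand-ins for univariate and bivariate forecast errors.
import numpy as np

def mse_comparison(e1, e2):
    D, S = e1 - e2, e1 + e2
    Z = np.column_stack([np.ones(len(S)), S - S.mean()])
    beta, (rss,), *_ = np.linalg.lstsq(Z, D, rcond=None)
    dof = len(D) - 2
    se = np.sqrt(rss / dof * np.diag(np.linalg.inv(Z.T @ Z)))
    rss0 = D @ D                        # restricted model: beta1 = beta2 = 0
    F = ((rss0 - rss) / 2) / (rss / dof)
    return beta, beta / se, F

rng = np.random.default_rng(1)
e1 = rng.standard_normal(20) + 0.6                  # hypothetical errors
e2 = 0.8 * e1 + 0.3 * rng.standard_normal(20) + 0.3
beta, t_stats, F = mse_comparison(e1, e2)
print(beta, t_stats, F)
# If both estimates are positive, the tabulated F significance level is
# halved, as argued in the text; a significantly negative estimate rules
# out any claimed improvement.
```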

14 If one finds that one model (using a wider information set, say) fits better than another, one is really saying "If I had known that at the beginning of the sample period, I could have used that information to construct better forecasts during the sample period." But this is not strictly operational and thus seems somewhat contrary in spirit to the basic definition of causality that we employ.

The approach outlined above uses the post-sample data only in the final step, as a test track over which the univariate and bivariate models are run in order to compare their forecasting abilities. This approach is of course vulnerable to undetected specification error or structural change. Partly as a consequence of this, the likely characteristics of post-sample forecast errors render testing for performance improvement somewhat delicate, as we noted above. Finally, the appropriate division of the total data set into sample and post-sample periods in this approach is unclear. (We say a bit about this in light of our advertising/consumption results in Section 6.) These are nontrivial problems. But at present, we see no way to make more use of the post-sample data that does not encounter apparently equally severe problems.15

We do not want to seem overly dogmatic on this issue. Our basic point is simply that model specification (perhaps especially within the Box-Jenkins framework) may well be infected by sampling error and polluted by data mining, so that it is unwise to perform tests for causality on the same data set used to select the models to be tested. The procedure outlined above seems to handle this problem sensibly.
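The code sketch of steps (i) and (ii) referred to earlier follows. An OLS-fitted AR filter is a crude stand-in for full Box-Jenkins ARIMA identification, which is judgmental, and the two simulated series are hypothetical:

```python
# Sketch of steps (i)-(ii): prewhiten each series with an OLS-fitted AR(p)
# filter (a stand-in for Box-Jenkins ARIMA modeling) and inspect the
# residual cross-correlogram r_k = corr(e_x[t], e_y[t-k]).
import numpy as np

def ar_residuals(z, p):
    """Residuals from an OLS-fitted AR(p) model with intercept."""
    lags = [z[p - j - 1:len(z) - j - 1] for j in range(p)]
    Z = np.column_stack([np.ones(len(z) - p)] + lags)
    beta, *_ = np.linalg.lstsq(Z, z[p:], rcond=None)
    return z[p:] - Z @ beta

def cross_correlogram(ex, ey, max_k):
    """r_k = corr(ex[t], ey[t-k]) for k = -max_k, ..., +max_k."""
    ex = (ex - ex.mean()) / ex.std()
    ey = (ey - ey.mean()) / ey.std()
    r = {}
    for k in range(-max_k, max_k + 1):
        if k >= 0:
            r[k] = np.mean(ex[k:] * ey[:len(ey) - k])
        else:
            r[k] = np.mean(ex[:k] * ey[-k:])
    return r

rng = np.random.default_rng(2)
n = 80
dy = rng.standard_normal(n)
dx = 0.5 * np.r_[0.0, dy[:-1]] + rng.standard_normal(n)  # past y-shocks move x
ex, ey = ar_residuals(dx, p=4), ar_residuals(dy, p=4)
band = 2 / np.sqrt(len(ex))        # rough 95 per cent band for a single r_k
r = cross_correlogram(ex, ey, max_k=7)
print({k: round(v, 2) for k, v in r.items() if abs(v) > band})
# A large r_k at k > 0 tentatively indicates that y may be causing x.
```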

4. THE DATA

In light of the evidence on the lengths of the relevant lags noted in Section 2, above, the use of quarterly data seems necessary if defensible judgements are to be reached about the causal relation, if any, between aggregate advertising and aggregate consumption. This section discusses the time series variables used to study that relation. All variables are

15 Two possibilities have been suggested. Both involve goodness-of-fit tests, about which we have some misgivings as footnote 14 indicates. (i) One could use asymptotic variants of covariance analysis ("Chow tests") to investigate the appropriateness of the sample specification for the post-sample period. Assuming this test is passed by both univariate and bivariate models, goodness-of-fit in the pooled sample could be used to compare model performance. However, depending on the sample/post-sample split, final conclusions may be inordinately influenced by the same sample data that guided specification choice. Moreover, it is not clear what should be done if either model fails the stability test. Simply concluding that no inferences about causality can be made seems unsatisfactory, but any other alternative must run the risk of "mining" the post-sample data. Similar problems arise if the post-sample data are used for any critical diagnostic tests on the models selected. (In addition, appropriate testing procedures are unclear, since sampling error implies likely non-whiteness of post-sample errors.) (ii) One could simply re-estimate the univariate and bivariate models derived from the sample using only the post-sample data and compare fits for this period. Depending on the sample/post-sample split, again, these estimates may be unreliable. However, this approach avoids mining the post-sample data, and it yields error series with zero means. But these series will not necessarily be white. Moreover, it seems odd to carry over the specification from the sample period but otherwise to ignore the data on which it is based. Still, if very long time series are available, this second approach may be a viable alternative to the one discussed in the text.

computed for the period 1956–1975, yielding a total of 80 quarterly observations. A logarithmic transformation of all series is employed to reduce observed heteroscedasticity. We know of two series of U.S. quarterly advertising spending estimates: the PII and its successors,16 and extensions by the Columbia Broadcasting System (CBS) of Blank's [1] series. The Appendix indicates why we elect to use the CBS figures here and describes their employment in the computation of ADN: national advertising in major media, current dollars per capita, seasonally adjusted.

In [15, Ch. 3] it is argued that percentage-of-sales decision rules for advertising spending have the strongest theoretical rationale when both advertising and sales are in nominal (current dollar) terms. On the other hand, one might expect the impact of advertising on consumer spending to be most apparent when both quantities are in real terms. Real advertising data are obtained by adjusting expenditure figures to take into account changes in both rates and audience sizes; real advertising per capita must measure the number of messages to which an average person is exposed. There apparently exist no quarterly advertising cost or price indices that could be used directly to obtain real advertising, however. One must either deflate nominal spending totals by some arbitrarily chosen alternative quarterly price indices or use interpolated values of annual advertising price indices. Since the cost of advertising messages has changed relative to prices of other goods and services (see footnote 6, above), it seems safest to interpolate.
The Appendix describes the use of interpolated annual indices to calculate ADR: national advertising in major media, 1972 dollars per capita, seasonally adjusted. The following consumption series were based on data from the January and March, 1976 issues of the Survey of Current Business:

CTN: total personal consumption expenditure, thousands of current dollars per capita, seasonally adjusted;
CGN: personal consumption expenditure on goods, thousands of current dollars per capita, seasonally adjusted;
CTR: total personal consumption expenditure, thousands of 1972 dollars per capita, seasonally adjusted;
CGR: personal consumption expenditure on goods, thousands of 1972 dollars per capita, seasonally adjusted.

The main reason for considering consumption spending on goods only is that the bulk of services consumption is devoted to items that are not heavily nationally advertised, though they may be locally advertised [15, pp. 62–64]. Moreover, services consumption is notoriously stable about its trend. It is relatively well known [18, 24] that the standard methods of seasonal adjustment, which have been applied to the series discussed thus far,

16 These are the Marketing/Communications Index and, beginning in 1971, the McCann-Erickson Index. In recent years, all these estimates have been prepared by McCann-Erickson and reported monthly in the Survey of Current Business.

can lead to sizeable biases in contexts such as ours.17 We would have preferred to begin with a set of time series that had not been seasonally adjusted, and some of the results reported below would seem to support this prejudice. Of the series discussed so far, however, it was only possible to obtain unadjusted numbers corresponding to CTN and CGN. Based on unpublished data supplied by the U.S. Department of Commerce, we assembled

UCTN: total personal consumption expenditure, thousands of current dollars per capita, not seasonally adjusted; and
UCGN: personal consumption expenditure on goods, thousands of current dollars per capita, not seasonally adjusted.

All series employed are natural logarithms (as noted above) of quarterly totals at annual rates. All are available from the authors on request.

5. EMPIRICAL RESULTS

We initially considered only the first six (seasonally adjusted) series described in Section 4. It was decided to retain the last 20 observations to evaluate out-of-sample forecasting performance, since we reached the judgement that fewer than 60 data points would not permit adequate identification and estimation in this case. As per step (i) of the approach outlined in Section 3, univariate time series models were identified and estimated for the six series considered, using the sixty quarterly observations from 1956 through 1970.18 None of the six residual (prewhitened) series showed significant serial correlation. Proceeding to step (ii), cross-correlograms of the appropriate pairs of residual series were computed. Letting ext denote the residual from a univariate model for the variable xt, this involved computation of

corr(eadnt, ectnt−k), corr(eadnt, ecgnt−k), corr(eadrt, ectrt−k), and corr(eadrt, ecgrt−k) for k between −10 and +10. All four cross-correlograms were strikingly similar, indicating that it made little difference whether we worked in nominal or real terms, or whether we used total or goods consumption. All four showed a strong contemporaneous correlation (k = 0), which, however, provides no information on the direction of causation. Sizeable positive correlations for k = −1 suggested that advertising might be causing consumption, while similar correlations for k = +1, +2, and +3 suggested consumption causing advertising. All four of these cross-correlograms showed substantial negative values at k = +7 and k = −5. Since the neighboring correlations were

17 See the Appendix, especially footnote 29. Since the Census X-11 procedure used on these data involves a two-sided filter for most of the sample period, its employment in an investigation of causation is particularly worrisome. 18 Descriptions of these models and other statistical results not reported here are contained in an earlier version of this essay, available as Discussion Paper 77–9 from the Department of Economics, University of California, San Diego (La Jolla, CA 92093).

clearly negligible, we found it difficult to interpret these in causal terms. Suspecting that the correlations at k = −5 and, possibly, k = +7 were artifacts of the seasonal adjustment procedures applied to the data, we obtained the unadjusted consumption expenditure series UCTNt and UCGNt defined above. In light of the discussion of services consumption in Section 4 and the similarity of the cross-correlograms discussed above, it was decided to confine our attention initially to UCGNt, current dollar consumption spending on goods. Proceeding as before, the following univariate model was identified, estimated, and checked:

$$(1 - B)(1 - B^4)\,UCGN_t = \underset{(.00043)}{.00086} + \bigl(1 - \underset{(.082)}{.204}B^2 - \underset{(.075)}{.747}B^4\bigr)e_{ucgn,t} \tag{C.1}$$

where B is the lag or backward shift operator, numbers in parentheses are standard errors, and eucgnt is a residual series, as above. (The presence of (1 − B⁴) reflects the use of seasonal differencing.) The corresponding univariate model for advertising was the following:

$$(1 - B)\,ADN_t = \underset{(.0022)}{.00911} + \bigl(1 - \underset{(.13)}{.256}B^5\bigr)e_{adn,t} \tag{A.1}$$

The cross-correlogram between the residual series from these models is given as row 1 in Table 4.1. Use of unadjusted consumption substantially reduced the anomalous correlations at k = −5 and k = +7. (An approximate 95 per cent confidence interval for any single correlation here is [−.27, +.27].) This suggests that these correlations were in fact artifacts of the use of standard seasonal adjustment procedures. In light of these results, it was decided to restrict further attention to the relation between ADNt and UCGNt.19 The sample and post-sample performance of the univariate models (A.1) and (C.1) are shown in Table 4.2.

As per Section 3, we now proceed to step (iii), modeling the relation between the univariate residual (i.e., prewhitened) series eadnt and eucgnt. Examination of row 1 of Table 4.1 shows that the contemporaneous (k = 0) correlation is large compared to 1/√n, which is .14 here. The correlation at k = +1 is not significant on the usual test, but it and the k = 0 term together suggest a sensible lag structure that deserves further examination. In contrast, the k = −1 and k = −2 terms are clearly negligible. The correlations at k = −3, −4, and −5 are nonnegligible, but it is hard to put them together with the k = 0 term (and the negligible terms in

19 Note that this means that, as mentioned in footnotes 17 and 29, the advertising series has been put through a two-sided filter, while the consumption series has not been. In general, one would expect this to bias our results toward a finding that advertising causes consumption, if the series are actually causally related.

Table 4.1 Auto and cross-correlograms for residual series

                                          Correlation for k =
Row   Residual Series        -7    -6    -5    -4    -3    -2    -1     0    +1    +2    +3    +4    +5    +6    +7
1     eadnt, eucgnt-k       .05   .06  -.14  -.13  -.19   .09   .04   .50   .18  -.02   .16  -.13   .16  -.13  -.13
2     hadnt, hadnt-k       -.13   .09  -.20   .00   .19  -.03   .01  1.0    .01  -.03   .19   .00  -.20   .09  -.13
3     eucgn′t, eucgn′t-k   -.15  -.12   .08  -.05  -.09   .01  -.03  1.0   -.03   .01  -.09  -.05   .08  -.12  -.15
4     eucgn′t, hadnt-k     -.14  -.13   .10   .05   .16   .18  -.10   .50   .13   .05  -.08  -.14  -.09  -.01  -.01

Table 4.2 Performance of univariate and bivariate models

Row   Model    Model Type                     Error Term   Sample Variance(a)   Post-Sample MSE(b)
1     (A.1)    Univariate                     eadn         454                  722
2     (AC.1)   Bivariate on Residuals         gadn         435                  600
3     (AC.2)   Bivariate on Original Series   hadn         416                  533
4     (C.1)    Univariate                     eucgn        245                  261
5     (CA.1)   Bivariate on Residuals         gucgn        213                  290
6     (C.2)    Univariate                     eucgn′       268                  234
7     (CA.2)   Bivariate on Original Series   hucgn        263                  222

(a) Sample period (1956–70) variance ×10⁶; not corrected for degrees of freedom.
(b) Post-sample period (1971–75) mean squared error of one-step-ahead forecasts ×10⁶.

between) to form a plausible lag structure. Hence the cross-correlogram tentatively suggests that a unidirectional model, in which eucgnt causes, but is not caused by, eadnt is appropriate. Before proceeding on this assumption, however, it seems appropriate to test it by constructing a forecasting model for eucgnt employing lagged values of eadnt. The best model obtained, called (CA.1) in Table 4.2, includes eadnt−k for k = 3, 4, and 5 only. A comparison of rows 4 and 5 of Table 4.2 shows that this model performs quite badly in the post-sample period. These findings support the tentative identification of unidirectional causation.

Accordingly, we now consider the impact of prewhitened consumption on prewhitened advertising. The form of the cross-correlogram suggests that an appropriate identification for a model of this relationship is

$$(1 - \alpha B)\,e_{adn,t} = (b_1 + b_2 B)\,e_{ucgn,t} + g_{adn,t}.$$

The αB term is included because it is necessary to have polynomials in the lag operator, B, of the same order on both sides of the equation, since the model represents a unidirectional relationship between two white noise series [9, Ch. 7]. If a purely forecasting model is constructed using this identification (by omitting the contemporaneous term), one obtains

$$(1 + \underset{(.15)}{.200}B)\,e_{adn,t} = \underset{(.21)}{.382}B\,e_{ucgn,t} + g_{adn,t}, \tag{AC.1}$$

where gadnt appears to be white noise. The within-sample variance of gadnt is only 4 per cent less than that of eadnt, as a comparison of rows 1 and 2 of Table 4.2 indicates. On the other hand, the form of model (AC.1) is economically plausible. Moreover, (AC.1) forecasts well in the post-sample period, yielding a 17 per cent improvement over the performance of (A.1).

We are now in a position to perform step (iv) of the procedure outlined in Section 3, the construction of models relating the original series. The evidence so far suggests that a unidirectional bivariate model is appropriate, with UCGNt causing ADNt, but not the reverse. Substituting for eadnt and eucgnt in (AC.1) from (A.1) and (C.1), appropriate forms for the final forecasting model can be identified. Estimation and deletion of insignificant higher-order terms yields the following bivariate model:

$$(1 + \underset{(.13)}{.327}B - \underset{(.16)}{.625}B^2)(1 - B)\,ADN_t = \underset{(.0025)}{.00665} + \bigl(\underset{(.21)}{.636}B + \underset{(.19)}{.317}B^5\bigr)UCGN_t + \bigl(1 - \underset{(.19)}{.686}B^2\bigr)h_{adn,t}, \tag{AC.2}$$

$$(1 - B)(1 - B^4)\,UCGN_t = \underset{(.00055)}{.00126} + \bigl(1 - \underset{(.12)}{.223}B^2 - \underset{(.13)}{.659}B^4\bigr)e'_{ucgn,t}. \tag{C.2}$$

Note that (C.2) is not identical to the univariate model (C.1) presented earlier. This is because (C.1) was estimated using a standard univariate Box-Jenkins program that used backforecasting to produce unconditional estimates, whereas all bivariate models had to be estimated with a more general (but less convenient) nonlinear least squares program that produces conditional, single-equation estimates [2, Sect. 7.1].20 For most models, these procedures yield virtually identical estimates. Rows 4 and 6 in Table 4.2 indicate that (C.1) is slightly better than (C.2) within the sample, but it produces slightly worse forecasts in the post-sample period. Model (C.2) thus appears to be the appropriate one to use for post-sample comparisons.

The auto-correlograms of the residual series hadnt and eucgn′t are given in rows 2 and 3 of Table 4.1. Both pass the standard single-series tests for whiteness. The cross-correlogram between these two series is given as row 4 in Table 4.1. Several of the correlations for negative k suggest that further lagged values of UCGNt should be added to the right-hand side of (AC.2). A variety of experiments of this sort were performed in the course of identifying the model, however, and no significant or suggestive results were obtained. An examination of the correlations for positive k in row 4 of Table 4.1 shows that none exceeds one asymptotic standard error, 1/√n = .14. The correlation at k = +1 is nonnegligible, however, and its size and location are suggestive. If the large contemporaneous correlation between the residual series is partly due to advertising causing consumption, one would expect the previous quarter's advertising to have some effect on current consumption. This effect should show up as a nonzero correlation between eucgn′t and hadnt−1. On the other hand, it is hard to rationalize taking the isolated nonnegligible correlation at k = +4 seriously. Thus the marginal term at k = +1 led us to identify and estimate the following model as a check on the (AC.2)/(C.2) structure:

$$(1 - B)(1 - B^4)\,UCGN_t = -\underset{(.00090)}{.001885} - \underset{(.076)}{.121}(1 - B)ADN_{t-1} + \bigl(1 - \underset{(.15)}{.162}B^2 - \underset{(.11)}{.684}B^4\bigr)h_{ucgn,t}. \tag{CA.2}$$

The series hucgnt passes the standard tests. A comparison of rows 6 and 7 in Table 4.2 indicates that (CA.2) performs slightly better than (C.2) in both sample and post-sample periods.

We now turn to step (v) of our procedure, the evaluation of the post-sample forecasting performance of models fitted to the original series.

20 See footnote 11.

Let us first consider models (C.2) and (CA.2). Use of the formal comparison test presented in Section 3 is ruled out here because, while the bivariate model (CA.2) had a smaller forecast error variance at the 18 per cent level of significance, its mean forecast error was larger at the .1 per cent level. (These significance levels are based on one-tailed t tests on regression equation (4) in Section 3.) The overall post-sample mean-squared error for the bivariate model is only 5.1 per cent lower than for the univariate model, and neither of these tests suggests that this difference is significant at any reasonable level. We conclude, therefore, that the bivariate model (CA.2) is not an improvement on the univariate model for aggregate consumption (C.2); past advertising does not seem to be helpful in forecasting consumption.21 We must accordingly retain the null hypothesis that aggregate advertising does not cause aggregate consumption.

In contrast, Table 4.2 indicates that our bivariate model for aggregate advertising (AC.2) forecast noticeably better than the univariate model (A.1), reducing the post-sample MSE by some 26 per cent.22 The post-sample forecast error series from both models had positive sample means. The Durbin-Watson statistic for equation (4), in Section 3, was 2.35 (20 observations), so no autocorrelation correction was indicated. Both coefficient estimates were positive, and the F statistic (with 2 and 18 degrees of freedom) corresponding to the null hypothesis that both population values are zero was 1.86, significant at the 18.4 per cent level.23 In light of the discussion in Section 3, this means that we can reject the null hypothesis that the two models have equal mean-squared errors in favor of the superiority of the bivariate model at something less than the 9.2 per cent level of significance. This is hardly overwhelming evidence, but it does suggest that aggregate consumption is useful in forecasting aggregate advertising, and this indicates that consumption does cause advertising.
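The closed form given in footnote 23 makes these significance levels easy to verify; a two-line check with the values quoted above (F = 1.86, n = 18) reproduces both figures:

```python
# Significance level of an F statistic with 2 and n degrees of freedom,
# using the exact formula of footnote 23: P(F >= F0) = [n/(n + 2 F0)]^(n/2).
def f_sig(F0, n):
    return (n / (n + 2 * F0)) ** (n / 2)

p = f_sig(1.86, 18)
print(round(p, 3), round(p / 2, 3))   # 0.184 and 0.092, as in the text
```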

6. CONCLUSIONS

Applying the definition of causality discussed in Section 3, the analysis of Section 5 provides evidence that fluctuations in aggregate consumption cause fluctuations in aggregate advertising. No significant statistics suggesting that advertising changes affect consumption were encountered.

21 In earlier versions of this paper, we argued that this conclusion was strengthened because the negative coefficient of (1 − B)ADNt−1 in (CA.2) made no economic sense. Chris Sims has pointed out to us, however, that a negative coefficient is not all that implausible. Suppose that the main effect of aggregate advertising is to increase current spending on durables at the expense of future spending. Then, all else equal, a "high" value of past advertising would lead one to expect a "low" value of current consumption spending. 22 It is worth noting that the model built on the original variables, (AC.2), outperforms the model built on the prewhitened series (AC.1). This is consistent with specification error in the latter, as discussed toward the end of Section 3. 23 From M. Abramowitz and I. Stegun, Handbook of Mathematical Functions (Dover, 1972), equation (26.6.4), the significance level of an F-statistic with 2 and n degrees of freedom is given exactly by [n/(n + 2F)]^{n/2}.

Our empirical results are thus consistent with a model in which causation runs only from consumption to advertising. Of course, any set of empirical results is in principle consistent with an infinite number of alternative models. In order to establish the value of the evidence we have presented, it is necessary to consider whether our results could have arisen from plausible alternative models with different causal structures.

As we noted in Section 5, our results are consistent with "instantaneous" causation from advertising to consumption.24 All cross-correlograms between pairs of prewhitened series show high contemporaneous correlations. This suggests the possibility of an instantaneous or very short-term (within one quarter) relationship between advertising and consumption. But there is no way to tell if this relationship involves consumption causing advertising, advertising causing consumption, or a feedback structure involving both directions of causation. Thus, sudden unexpected changes in aggregate advertising may affect consumption within a quarter, but the finding that past advertising does not help in forecasting consumption indicates that such effects, if they exist, do not persist over time intervals that are substantial relative to a calendar quarter. It seems implausible to us that advertising affects consumption in this fashion.

As Sims [20] has pointed out, if one variable, Xt, is used to stabilize another, Yt, optimally over time, the resultant time series can show spurious causation from Yt to Xt. But this does not seem likely to be a problem here. It is somewhat implausible to think that uncoordinated advertising decisions lead the business sector to act "as if" accurately stabilizing aggregate consumption. But more importantly, if the structural effect of advertising on consumption were positive, and if the exogenous disturbances to consumption were positively serially correlated, the optimal control hypothesis would imply negative, not positive coefficients on lagged consumption in model (AC.2).

Though our data set was superior to those previously employed to study the aggregate advertising/consumption relation, it was not entirely satisfactory. First, it would have been preferable to have worked with advertising data that had not been seasonally adjusted. On the other hand, as pointed out in footnote 19, seasonality problems here should have biased our estimates toward finding causation from advertising to consumption. Second, it is at least plausible that ADN is more infected with measurement error than UCGN. As Sims [20] has shown, this can lead to a spurious causal ordering in the direction we find. However, it seems unlikely to us that measurement error in ADNt is sufficiently large relative to its quarter-to-quarter variation to have significantly affected the results reported here.

24 It should be clear that the difficulty of interpreting contemporaneous correlations in causal terms is not particular to our approach to testing for causality or to our data set.

Finally, the total sample of 80 observations was not as large as would have been desirable. Given the importance of post-sample testing in our approach, a post-sample period of more than 20 observations might have permitted more precise inferences. Were we to do this study again, we would probably divide the data more evenly between sample and post-sample periods for this reason. Of course, this problem relates to the strength of our conclusions, not directly to the pattern of causation we detect.25

In short, causality testing with typical economic data remains at the frontier of econometric work and is hence a rather non-routine affair. Nevertheless, we believe that the results discussed above showing that fluctuations in past aggregate consumption appear to influence aggregate advertising, but not vice-versa, are valid at the significance level quoted. Moreover, our experience with the test for causality proposed in Section 3 has left us confident of its utility. Its first desirable feature is the focus on the original variables rather than the pre-whitened (residual) series. In the application in Section 5, steps (iv) and (v) yielded much stronger evidence than did the analysis of pre-whitened series in steps (ii) and (iii). The second desirable feature of our approach is its stress on out-of-sample forecasting performance. We discussed the complexities involved in optimal use of out-of-sample data in Section 3. Sample data mining (leading to specification error) and structural instability can lead to difficulty in obtaining useful causal inferences with the methodology proposed here. However, we find this possibility distinctly preferable to the spurious inferences that these problems can easily produce when out-of-sample verification is not employed. Similarly, restricting causal hypothesis testing to a separate out-of-sample period clearly decreases the number of degrees of freedom available for such testing; on the other hand, only then can one be really sure that none of those degrees of freedom have been "used up" in the model identification and estimation process.

APPENDIX

The CBS advertising spending estimates are used here instead of the PII for two reasons. First, changes in media coverage in the PII cause a break in 1971.26 Second, within the 1953–1970 period, the media covered by the PII become increasingly unrepresentative over time.27

25 In addition to these problems, we cannot rule out the possibility that our results were generated by a structure in which advertising and consumption both depend on some omitted third variable. But Sims [20] has shown that conditions under which spurious causal orderings can arise in this fashion are rather implausible. 26 See the May and June, 1971 issues of the Survey of Current Business. A similar break occurred between 1952 and 1953 [23, p. 8]. 27 PII covered network radio and television but did not cover the spot markets in these media. (Spot television was added in 1971.) By 1966, national advertising spending for

In [15, App. B], CBS estimates of quarterly movements of national advertising spending in newspapers, magazines, business papers, outdoor media, network television, spot television, network radio, and spot radio were employed to extend Blank's [1] series through 1967.28 For this study, we obtained more recent CBS estimates of quarterly spending in all these media except business papers and outdoor media for the 1966–1975 subperiod,29 along with current McCann-Erickson estimates of annual spending totals in these media for the entire 1956–1975 period.30 The quarterly totals reported in [15, App. B] were used for the 1956–1965 subperiod. The quarterly flows for each medium were rescaled, where necessary, so that annual averages equaled the McCann-Erickson annual totals. The six resultant series were used, along with quarterly population from various issues of the Survey of Current Business, to obtain ADN.

A set of annual cost-per-million (CPM) indices, which reflect changes in both media costs and audience sizes, was obtained from McCann-Erickson for the media covered by ADN for the 1960–1975 subperiod. These were linked to the Printer's Ink indices reported in [15, App. A] at 1960. These six CPM indices were then interpolated, using a linear method that ensured that the averages of the quarterly indices equaled the annual value.31 The six current dollar spending series were

spot television was two-thirds that for network television, while spending in spot radio was more than four times that for network radio [15, p. 8]. 28 National advertising is prepared centrally and disseminated to several localities, while local advertising is prepared and disseminated in the same locality. Local advertising is largely done by retailers, while national manufacturers are the dominant national advertisers. 29 Spending in business papers was excluded because we did not expect it to be causally related to household consumption spending. Outdoor media had to be dropped because CBS had stopped preparing quarterly estimates. The CBS series were seasonally adjusted at the source using (basically) the Census X-11 program. The sources used by CBS in preparing the earlier data are discussed in [1; 15, App. B]. The more recent estimates of quarterly movements are based on information from the Television Bureau of Advertising, Broadcast Advertisers Reports, Television/Radio Age, the Radio Advertising Bureau, the Newspaper Advertising Bureau, Publishers' Information Bureau, and a cooperative service commissioned by the major radio networks. 30 The McCann-Erickson totals include both media charges and production costs. These estimates appear at intervals in Advertising Age. See also [22, Ch. I] and recent numbers of the Statistical Abstract of the United States. 31 Let X(t) be the value of some series in year t, and let x(i, t) be the interpolated value for quarter i of that year. In the interpolation method employed, x(1, t) and x(2, t) were found by linear interpolation between X(t − 1) and an adjusted number X′(t), and x(3, t) and x(4, t) were similarly based on X′(t) and X(t + 1). X′(t) was selected for each year for each series so that the average of the x(i, t) equaled X(t). This makes all the x(i, t) linear functions of X(t − 1), X(t), and X(t + 1). (Ordinary linear interpolation was used for 1975.) Using standard tests for homogeneity of means and variances of percentage changes ending in each of the four quarters, this method performed well relative to a variety of alternative average-preserving interpolation techniques.

deflated by the resultant quarterly CPM indices, and the deflated totals were used, along with the population series, to obtain ADR.
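One possible reconstruction of footnote 31's average-preserving scheme in code is given below. The placement of annual anchors at mid-year and of quarterly values at quarter midpoints is our assumption; the footnote does not specify the grid:

```python
# Reconstruction of the footnote 31 interpolation under stated assumptions:
# annual values anchored at mid-year, quarterly values at quarter midpoints
# (fractions 1/8, 3/8, 5/8, 7/8 of the year), and X'(t) chosen in closed form
# so that the four quarterly values of year t average exactly to X(t).
def interpolate_year(X_prev, X_t, X_next):
    X_adj = (4 * X_t - (X_prev + X_next) / 2) / 3            # X'(t)
    first_half = [X_prev + (f + 0.5) * (X_adj - X_prev) for f in (1/8, 3/8)]
    second_half = [X_adj + (f - 0.5) * (X_next - X_adj) for f in (5/8, 7/8)]
    return first_half + second_half

q = interpolate_year(100.0, 104.0, 110.0)
print(q, sum(q) / 4)   # the quarterly values average to 104.0 by construction
```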

REFERENCES [1] Blank, D. M.: “Cyclical Behavior of National Advertising,” Journal of Business, 35 (1962), 14–27. [2] Box,G.E.P.,and G. M. Jenkins: Time Series Analysis. San Francisco: Holden-Day, 1970. [3] Box,G.E.P.,and D. A. Pierce:“Distribution of Residual Autocorrelations in Autoregressive-Integrated Moving Average Time Series Models,” Journal of the American Statistical Association, 65 (1970), 1509–1526. [4] Clarke,D.G.:“Econometric Measurement of the Duration of the Adver- tising Effect on Sales,” Journal of Marketing Research, 13 (1976), 345–357. [5] Davies, N., C. M.Triggs, and P. N ewbold:“Significance Levels of the Box- Pierce Portmanteau Statistic in Finite Samples,” Biometrica, 64 (1977), 517–522. [6] Durbin,J.:“Testing for Serial Correlation in Least-Squares Regression When Some of the Regressors are Lagged Dependent Variables,” Econo- metrica, 38 (1970), 410–421. [7] Ekelund,R.G.,and W. P. Gramm: “A Reconsideration of Advertising Expenditures,Aggregate Demand, and Economic Stabilization,” Quarterly Review of Economics and Business, 9 (1969), 71–77. [8] Granger,C.W.J.:“Investigating Causal Relations by Econometric Models and Cross-Spectral Methods,” Econometrica, 37 (1969), 424–438. [9] Granger,C.W.J.,and P. N ewbold: Forecasting Economic Time Series. New York: Academic Press, 1977. [10] Houthakker,H.S.and L. D. Taylor: Consumer Demand in the United States, 2nd Ed. Cambridge: Press, 1970. [11] Kotler,P.:Marketing Management, 3rd Ed. Englewood Cliffs: Prentice- Hall, 1976. [12] Maddala,G.S.,and A. S. Rao: “Tests for Serial Correlation in Regression Models with Lagged Dependent Variables and Serially Correlated Errors,” Econometrica, 41 (1973), 761–764. [13] Pierce, D.A.:“Relationships – and the Lack Thereof – Between Economic Time Series, with Special Reference to Money and Interest Rates,” Journal of the American Statistical Association, 72 (1977), 11–22. [14] Pierce, D. A., and L. D. Haugh: “Causality in Temporal Systems: Charac- terizations and a Survey,” Journal of Econometrics, 5 (1977), 265–293. [15] Schmalensee, R.: The Economics of Advertising. Amsterdam: North- Holland, 1972. [16] Simon,J.L.:Issues in the Economics of Advertising. Urbana: University of Illinois Press, 1970. [17] Sims, C. A.: “Money, Income, and Causality,” American Economic Review, 62 (1972), 540–552. Advertising and Aggregate Consumption 105

[18] ———: "Seasonality in Regression," Journal of the American Statistical Association, 69 (1974), 618–626.
[19] ———: "Comment," Journal of the American Statistical Association, 72 (1977), 23–24.
[20] ———: "Exogeneity and Causal Ordering in Macroeconomic Models," in New Methods in Business Cycle Research, ed. by C. A. Sims. Minneapolis: Federal Reserve Bank of Minneapolis, 1977.
[21] Taylor, L. D., and D. Weiserbs: "Advertising and the Aggregate Consumption Function," American Economic Review, 62 (1972), 642–655.
[22] U.S. Bureau of the Census: Historical Statistics of the United States: Colonial Times to 1970. Washington: U.S. Government Printing Office, 1975.
[23] Verdon, W. A., C. R. McConnell, and T. W. Roesler: "Advertising Expenditures as an Economic Stabilizer, 1954–64," Quarterly Review of Economics and Business, 8 (1968), 7–18.
[24] Wallis, K. F.: "Seasonal Adjustment and Relations Between Variables," Journal of the American Statistical Association, 69 (1974), 18–31.

PART TWO

INTEGRATION AND COINTEGRATION

CHAPTER 5

Spurious Regressions in Econometrics* C. W. J. Granger and P. Newbold

1. INTRODUCTION It is very common to see reported in applied econometric literature time series regression equations with an apparently high degree of fit, as measured by the coefficient of multiple correlation $R^2$ or the corrected coefficient $\bar{R}^2$, but with an extremely low value for the Durbin–Watson statistic. We find it very curious that whereas virtually every textbook on econometric methodology contains explicit warnings of the dangers of autocorrelated errors, this phenomenon crops up so frequently in well-respected applied work. Numerous examples could be cited, but doubtless the reader has met sufficient cases to accept our point. It would, for example, be easy to quote published equations for which $R^2 = 0.997$ and the Durbin–Watson statistic (d) is 0.53. The most extreme example we have met is an equation for which $R^2 = 0.99$ and d = 0.093. However, we shall suggest that cases with much less extreme values may well be entirely spurious. The recent experience of one of us [see Box and Newbold (1971)] has indicated just how easily one can be led to produce a spurious model if sufficient care is not taken over an appropriate formulation for the autocorrelation structure of the errors from the regression equation. We felt, then, that we should undertake a more detailed enquiry seeking to determine what, if anything, could be inferred from those regression equations having the properties just described. There are, in fact, as is well-known, three major consequences of autocorrelated errors in regression analysis:

(i) Estimates of the regression coefficients are inefficient.
(ii) Forecasts based on the regression equations are sub-optimal.
(iii) The usual significance tests on the coefficients are invalid.

The first two points are well documented. For the remainder of this paper, we shall concentrate on the third point, and, in particular, examine

* Journal of Econometrics, 2, 1974, 111–120.

the potentialities for "discovering" spurious relationships which appear to us to be inherent in a good deal of current econometric methodology. The point of view we intend to take is that of the statistical time series analyst, rather than the more classic econometric approach. In this way it is hoped that we might be able to illuminate the problem from a new angle, and hence perhaps present new insights. Accordingly, in the following section we summarize some relevant results in time series analysis. In sect. 3 we indicate how nonsense regressions relating economic time series can arise, and illustrate these points in sect. 4 with the results of a simulation study. Finally, in sect. 5, we re-emphasize the importance of error specification and draw a distinction between the philosophy of time series analysis and econometric methodology, which we feel to be of great importance to practitioners of the latter.

2. SOME RESULTS IN TIME SERIES ANALYSIS

Let $W_t$ denote a time series which is stationary (it could represent deviation from some deterministic trend). Then the so-called mixed autoregressive moving average process,

$$W_t - \phi_1 W_{t-1} - \cdots - \phi_p W_{t-p} = a_t - \theta_1 a_{t-1} - \cdots - \theta_q a_{t-q}, \qquad (1)$$

where $a_t$ represents a sequence of uncorrelated deviates, each with the same variance, is commonly employed to model such series. The sequence $a_t$ is referred to as "white noise". For brevity, eq. (1) can be written as

$$\phi(B)W_t = \theta(B)a_t, \qquad (2)$$

where $\phi(B)$ and $\theta(B)$ are polynomial lag operators with appropriate roots to ensure stationarity of $W_t$ and uniqueness of representation. Suppose, now, that one has a given time series $X_t$. Box and Jenkins (1970) urge that, while this series itself may not be stationary, it can often be reduced to stationarity by differencing a sufficient number of times; that is, there exists an integer d such that

$$\nabla^d X_t = W_t \qquad (3)$$

is a stationary time series. Combining eqs. (2) and (3), the series $X_t$ can be represented by the model,

$$\phi(B)\nabla^d X_t = \theta(B)a_t. \qquad (4)$$

Eq. (4) is said to represent an autoregressive integrated moving average process of order (p, d, q), denoted as A.R.I.M.A. (p, d, q). As regards economic time series, one typically finds a very high serial correlation between adjacent values, particularly if the sampling interval is small, such as a week or a month. This is because many economic series are rather "smooth", with changes being small in magnitude compared to the current level. There is thus a good deal of evidence to suggest that the appropriate value for d in eq. (4) is very often one. [See, for example, Granger (1966), Reid (1969) and Newbold and Granger (1974).] Alternatively, if d = 0 in eq. (4) we would expect $\phi(B)$ to have a root $(1 - \phi B)$ with $\phi$ very close to unity. The implications of this statement are extremely important, as will be seen in the following section. The simplest example of the kind of series we have in mind is the random walk,

$$\nabla X_t = a_t.$$

This model has been found to represent well certain price series, particularly in speculative markets. For many other series, the integrated moving average process,

$$\nabla X_t = a_t - \theta a_{t-1},$$

has been found to provide good representation. A consequence of this behaviour of economic time series is that a naive "no change" model will often provide adequate, though by no means optimal, forecasts. Such models are often employed as benchmarks against which the forecast performance of econometric models can be judged. [For a criticism of this approach to evaluation, see Granger and Newbold (1973).]
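The "smoothness" just described is easy to see in simulation. The sketch below is our addition, not part of the original article; it generates a random walk and compares the first-order sample autocorrelation of its levels and of its changes (the helper name acf1 is ours).

```python
import numpy as np

rng = np.random.default_rng(0)


def acf1(x):
    """First-order sample autocorrelation of a series."""
    x = x - x.mean()
    return (x[1:] * x[:-1]).sum() / (x * x).sum()


T = 500
x = np.cumsum(rng.standard_normal(T))   # random walk: (1 - B)X_t = a_t

print(f"levels:  r1 = {acf1(x):.3f}")           # very near 1: a 'smooth' series
print(f"changes: r1 = {acf1(np.diff(x)):.3f}")  # near 0: white noise
```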

3. HOW NONSENSE REGRESSIONS CAN ARISE Let us consider the usual linear regression model with stochastic regressors:
$$Y = X\beta + \varepsilon, \qquad (5)$$
where Y is a T × 1 vector of observations on a "dependent" variable, $\beta$ is a K × 1 vector of coefficients whose first member $\beta_0$ represents a constant term, and X is a T × K matrix containing a column of ones and T observations on each of (K - 1) "independent" variables which are stochastic, but distributed independently of the T × 1 vector of errors $\varepsilon$. It is generally assumed that

$$E(\varepsilon) = 0, \qquad (6)$$

and
$$E(\varepsilon\varepsilon') = \sigma^2 I. \qquad (7)$$
A test of the null hypothesis that the "independent" variables contribute nothing towards explaining variation in the dependent variable can be framed in terms of the coefficient of multiple correlation $R^2$. The null hypothesis is

$$H_0: \beta_1 = \beta_2 = \cdots = \beta_{K-1} = 0, \qquad (8)$$

and the test statistic
$$F = \frac{T - K}{K - 1}\,\frac{R^2}{1 - R^2} \qquad (9)$$
is compared with tabulated values of Fisher's F distribution with (K - 1) and (T - K) degrees of freedom, normality being assumed. Of course, it is entirely possible that, whatever the properties of the individual time series, there does exist some $\beta$ so that $\varepsilon = Y - X\beta$ satisfies the conditions (6) and (7). However, to the extent that the $Y_t$'s do not constitute a white noise process, the null hypothesis (8) cannot be true, and tests of it are inappropriate. Next, let us suppose that the null hypothesis is correct and one attempts to fit a regression of the form (5) to the levels of economic time series. Suppose, further, that, as we have argued in the previous section is often the case, these series are non-stationary or, at best, highly autocorrelated. In such a situation the test procedure just described breaks down, since the quantity F in eq. (9) will not follow Fisher's F distribution under the null hypothesis (8). This follows since under that hypothesis the residuals from eq. (5),

$$\varepsilon_t = Y_t - \beta_0, \qquad t = 1, 2, \ldots, T,$$

will have the same autocorrelation properties as the $Y_t$ series. Some idea of the distributional problems involved can be obtained from consideration of the case:

$$Y_t = \beta_0 + \beta_1 X_t + \varepsilon_t,$$

where it is assumed that $Y_t$ and $X_t$ follow the independent first order autoregressive processes,

$$Y_t = \phi Y_{t-1} + a_t, \qquad X_t = \phi^* X_{t-1} + b_t, \qquad (10)$$

with $a_t$ and $b_t$ independent white noise series. In this case, $R^2$ is simply the square of the ordinary sample correlation between $Y_t$ and $X_t$. Kendall (1954) gives
$$\mathrm{var}(R) = T^{-1}\,\frac{1 + \phi\phi^*}{1 - \phi\phi^*}.$$
Since R is constrained to lie in the region (-1, 1), if its variance is greater than 1/3 then its distribution cannot have a single mode at zero. The necessary condition is $\phi\phi^* > (T - 3)/(T + 3)$. Thus, for example, if T = 20 and $\phi = \phi^*$, a distribution which is not unimodal at the origin will arise if $\phi > 0.86$, and if $\phi = 0.9$, $E(R^2) = 0.47$. Thus a high value of $R^2$ should not, on the grounds of traditional tests, be regarded as evidence of a significant relationship between autocorrelated series. Also, a low value of d strongly suggests that there does not exist a $\beta$ such that $\varepsilon$ in eq. (5) satisfies eq. (7).

Table 5.1 Regressing two independent random walks.

S:          0–1  1–2  2–3   3–4   4–5   5–6   6–7   7–8
Frequency:   13   10   11    13    18     8     8     5
S:          8–9  9–10 10–11 11–12 12–13 13–14 14–15 15–16
Frequency:    3    3    1     5     0     1     0     1

Thus, the phenomenon we have described might well arise from an attempt to fit regression equations relating the levels of independent time series. To examine this possibility, we conducted a number of simulation experiments which are reported in the following section.
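Before turning to those experiments, Kendall's variance formula quoted above is easy to check directly; the sketch below is our illustration, not the article's. For T = 20 and φ = φ* = 0.9 it should reproduce E(R²) ≈ 0.47.

```python
import numpy as np

rng = np.random.default_rng(1)


def ar1(phi, T, rng):
    """Simulate a stationary AR(1) series of length T."""
    x = np.empty(T)
    x[0] = rng.standard_normal() / np.sqrt(1 - phi**2)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x


T, phi, phistar, nrep = 20, 0.9, 0.9, 20000
R = np.array([np.corrcoef(ar1(phi, T, rng), ar1(phistar, T, rng))[0, 1]
              for _ in range(nrep)])

kendall = (1 + phi * phistar) / ((1 - phi * phistar) * T)
print(f"E(R^2) simulated = {(R ** 2).mean():.2f}")   # near 0.47
print(f"var(R): simulated {R.var():.2f} vs Kendall approximation {kendall:.2f}")
```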

4. SOME SIMULATION RESULTS As a preliminary, we looked at the regression

$$Y_t = \beta_0 + \beta_1 X_t,$$

where $Y_t$ and $X_t$ were, in fact, generated as independent random walks each of length 50. Table 5.1 shows values of
$$S = \frac{|\hat{\beta}_1|}{\mathrm{S.E.}(\hat{\beta}_1)},$$
the customary statistic for testing the significance of $\beta_1$, for 100 simulations. Using the traditional t test at the 5% level, the null hypothesis of no relationship between the two series would be rejected (wrongly) on approximately three-quarters of all occasions. If $\hat{\beta}_1 / \mathrm{S.E.}(\hat{\beta}_1)$ were distributed as N(0, 1), then the expected value of S would be $\sqrt{2/\pi} \approx 0.8$. In fact, the observed average value of S was 4.5, suggesting that the standard deviation of $\hat{\beta}_1$ is being underestimated by the multiple factor 5.6. Thus, instead of using a t-value of approximately 2.0, one should use a value of 11.2, when attributing a coefficient value to be "significant" at the 5% level.

$$\varepsilon_t = \phi\varepsilon_{t-1} + u_t,$$

so that, under the null hypothesis, $Y_t$ will also follow this process, where $u_t$ and $b_t$ are independent white noise series. In the case $\phi = \phi^* = 0.8$, it is shown that the estimated variance of $\hat{\beta}_1$ should be multiplied by a factor 5.8, when the length of the series is T = 50. The approximations on which this result is based break down as both $\phi$ and $\phi^*$ tend to unity, but our simulation indicates that the estimated variance of $\hat{\beta}_1$ should be multiplied by $(5.6)^2 \approx 31.4$ when T = 50 and random walks are involved.
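A minimal re-creation of this first experiment is sketched below; it is our code, not the original (which used 100 replications). It regresses one simulated random walk on another and records the statistic S of Table 5.1.

```python
import numpy as np

rng = np.random.default_rng(2)
T, nrep = 50, 1000
S = np.empty(nrep)

for i in range(nrep):
    # two independent random walks, as in Table 5.1
    y = np.cumsum(rng.standard_normal(T))
    x = np.cumsum(rng.standard_normal(T))
    X = np.column_stack([np.ones(T), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    s2 = e @ e / (T - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    S[i] = abs(beta[1]) / se            # |t|-ratio for the slope

print(f"average S           = {S.mean():.1f}")          # near 4.5
print(f"5% 'rejection' rate = {(S > 2.01).mean():.2f}")  # near 0.75
```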

Our second simulation was more comprehensive. A series $Y_t$ was regressed on m independent series $X_{j,t}$, j = 1, 2, ..., m, with m taking values from one to five. All of the series involved obey the same model, the models being (i) random walks, (ii) white noises, (iii) A.R.I.M.A. (0, 1, 1), (iv) changes in A.R.I.M.A. (0, 1, 1), i.e., first order moving averages. All error terms were distributed as N(0, 1) and the A.R.I.M.A. (0, 1, 1) series was derived as the sum of a random walk and independent white noise. The results of the simulations, with 100 replications and series of length 50, are shown in table 5.2.

It is seen that the probability of accepting $H_0$, the hypothesis of no relationship, becomes very small indeed for m ≥ 3 when regressions involve independent random walks. The average $\bar{R}^2$ steadily rises with m, as does the average d, in this case. Similar conclusions hold for the A.R.I.M.A. (0, 1, 1) process. When white noise series, i.e., changes in random walks, are related, classical regression yields satisfactory results, since the error series will be white noise and least squares fully efficient. However, in the case where changes in the A.R.I.M.A. (0, 1, 1) series are considered – that is, first order moving average processes – the null hypothesis is rejected, on average, twice as often as it should be. It is quite clear from these simulations that if one's variables are random walks, or near random walks, and one includes in regression equations variables which should in fact not be included, then it will be the rule rather than the exception to find spurious relationships. It is also clear that a high value for $R^2$ or $\bar{R}^2$, combined with a low value of d, is no indication of a true relationship.

5. DISCUSSION AND CONCLUSION It has been well known for some time now that if one performs a regression and finds the residual series is strongly autocorrelated, then there are serious problems in interpreting the coefficients of the equation. Despite this, many papers still appear with equations having such symptoms and these equations are presented as though they have some worth. It is possible that earlier warnings have been stated insufficiently strongly. From our own studies we would conclude that if a regression equation relating economic variables is found to have strongly autocorrelated

Table 5.2 Regressions of a series on m independent "explanatory" series.

Series either all random walks or all A.R.I.M.A. (0, 1, 1) series, or changes in these. $Y_0 = 100$, $Y_t = Y_{t-1} + a_t$, $Y'_t = Y_t + kb_t$; $X_{j,0} = 100$, $X_{j,t} = X_{j,t-1} + a_{j,t}$, $X'_{j,t} = X_{j,t} + kb_{j,t}$; $a_{j,t}$, $a_t$, $b_t$, $b_{j,t}$ sets of independent N(0, 1) white noises. k = 0 gives random walks, k = 1 gives A.R.I.M.A. (0, 1, 1) series. $H_0$: no relationship, is true. Series length = 50, number of simulations = 100, $\bar{R}^2$ = corrected $R^2$.

                      % times H0      Average            Average   % times
                      rejected(a)     Durbin–Watson d    R̄²        R̄² > 0.7
Random walks
  Levels   m = 1          76               0.32            0.26        5
           m = 2          78               0.46            0.34        8
           m = 3          93               0.55            0.46       25
           m = 4          95               0.74            0.55       34
           m = 5          96               0.88            0.59       37
  Changes  m = 1           8               2.00            0.004       0
           m = 2           4               1.99            0.001       0
           m = 3           2               1.91           -0.007       0
           m = 4          10               2.01            0.006       0
           m = 5           6               1.99            0.012       0
A.R.I.M.A. (0, 1, 1)
  Levels   m = 1          64               0.73            0.20        3
           m = 2          81               0.96            0.30        7
           m = 3          82               1.09            0.37       11
           m = 4          90               1.14            0.44        9
           m = 5          90               1.26            0.45       19
  Changes  m = 1           8               2.58            0.003       0
           m = 2          12               2.57            0.01        0
           m = 3           7               2.53            0.005       0
           m = 4           9               2.53            0.025       0
           m = 5          13               2.54            0.027       0

(a) Test at 5% level, using an overall test on $\bar{R}^2$.

residuals, equivalent to a low Durbin–Watson value, the only conclusion that can be reached is that the equation is mis-specified, whatever the value of $R^2$ observed. If such a conclusion is accepted, the question then arises of what to do about the mis-specification. The form of the mis-specification can be considered to be either (i) the omission of relevant variables or (ii) the inclusion of irrelevant variables or (iii) autocorrelated residuals. In general, the mis-specification is best considered to be a combination of these possibilities. The usual recommendations are to either include a lagged dependent variable or take first differences of the variables involved in the equation or to assume a simple first-order autoregressive form for the residual of the equation. Although any of these methods will undoubtedly alleviate the problem in general, it is doubtful if they will completely remove it. It is not our intention in this paper to go deeply into the problem of how one should estimate equations in econometrics, but rather to point out the difficulties involved. In our opinion the econometrician can no longer ignore the time series properties of the variables with which he is concerned – except at his peril. The fact that many economic "levels" are near random walks or integrated processes means that considerable care has to be taken in specifying one's equations. One method we are currently considering is to build single series models for each variable, using the methods of Box and Jenkins (1970) for example, and then searching for relationships between series by relating the residuals from these single models. The rationale for such an approach is as follows. In building a forecasting model, the time series analyst regards the series to be forecast as containing two components. The first is that part of the series which can be explained in terms of its own past behaviour and the second is the residual part [$a_t$ in eq. (4)] which cannot. Thus, in order to explain this residual element one must look for other sources of information – related time series, or perhaps considerations of a non-quantitative nature. Hence, in building regression equations, the quantity to be explained is variation in $a_t$ – not variation in the original series. This study is, however, still in its formative stages. Until a really satisfactory procedure is available, we recommend taking first differences of all variables that appear to be highly autocorrelated. Once more, this may not completely remove the problem but should considerably improve the interpretability of the coefficients. Perhaps at this point we should make it clear that we are not advocating first differencing as a universal sure-fire solution to any problem encountered in applied econometric work. One cannot propose universal rules about how to analyse a group of time series as it is virtually always possible to find examples that could occur for which the rule would not apply. However, one can suggest a rule that is useful for a class of series that very frequently occur in practice. As has been noted, very many economic series are rather smooth, in that the first serial correlation coefficient is very near unity and the other low-order serial correlations are also positive and large. Thus, if one has a small sample, of say twenty terms, the addition of a further term adds very little to the information available, as this term is so highly correlated with its predecessor.
It follows that the total information available is very limited and the estimates of parameters associated with this data will have high variance values. However, a simple calculation shows that the first differences of such a series will necessarily have serial correlations that are small in magnitude, so that a new term of the differenced series adds information that is almost uncorrelated with that already available, and this means that estimates are more efficient. One is much less likely to be misled by efficient estimates. The suggested rule perhaps should be to build one's models both with levels and also with changes, and then interpret the combined results so obtained. As an example (admittedly extreme) of the changes that can occur in one's results from differencing, Sheppard (1971) regressed U.K. consumption on autonomous expenditure and mid-year money stock, both for levels and changes, for the time period 1947–1962. The regression on levels yielded a corrected $R^2$ of 0.99 and a d of 0.59, whilst for changes these quantities were -0.03 and 2.21 respectively. This provides an indication of just how one can be misled by regressions involving levels if the message of the d statistic is unheeded. It has been suggested by a referee that our results have relevance to the structural model – unrestricted reduced form controversy, the feeling being that the structural model is less vulnerable to the problems we have described since its equations are in the main based on well developed economic theory and contain relatively few variables on the right-hand side. There is some force to this argument, in theory at least, although we believe that in practice things are much less clear-cut. When considering this problem the question immediately arises of what is meant by a good theory. To the time series analyst a good theory is one that provides a structure to a model such that the errors or residuals of the fitted equations are white noises that cannot be explained or forecast from other economic variables. On the other hand, some econometricians seem to view a good theory as one that appears inherently correct and thus does not need testing. We would suggest that in fact most economic theories are insufficient in these respects, as even if the variables to be included in a model are well specified, the theory generally is imprecise about the lag structure to be used and typically says nothing about the time-series properties of the residuals. There are also data problems in that the true lags need not necessarily be integer multiples of the sampling interval of the available data and there will almost certainly be added measurement errors to the true values of the variables being considered. All of these considerations suggest that a simple-minded application of regression techniques to levels could produce unacceptable results. If one does obtain a very high $R^2$ value from a fitted equation, one is forced to rely on the correctness of the underlying theory, as testing the significance of adding further variables becomes impossible. It is one of the strengths of using changes, or some similar transformations, that typically lower $R^2$ values result and so more experimentation and testing can be contemplated. In any case, if a "good" theory holds for levels, but is unspecific about the time-series properties of the residuals, then an equivalent theory holds for changes so that nothing is lost by model
building with both levels and changes. However, much could be gained from this strategy as it may prevent the presentation in econometric literature of possibly spurious regressions, which we feel is still prevalent despite the warnings given in the text books about this possibility.
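The "simple calculation" referred to above can be reconstructed as follows (our addition, not in the original text). If the levels have serial correlations $\rho_j$, the first serial correlation of the differenced series is
$$\rho_1(\Delta x) = \frac{2\rho_1 - 1 - \rho_2}{2(1 - \rho_1)},$$
so for an AR(1) series with $\rho_j = \phi^j$ this becomes
$$\frac{2\phi - 1 - \phi^2}{2(1 - \phi)} = -\frac{1 - \phi}{2},$$
which tends to zero as $\phi \to 1$: the differences of a very smooth series are almost uncorrelated, exactly as claimed.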

REFERENCES
Box, G. E. P. and G. M. Jenkins, 1970, Time series analysis, forecasting and control (Holden-Day, San Francisco).
Box, G. E. P. and P. Newbold, 1971, Some comments on a paper of Coen, Gomme and Kendall, J. R. Statist. Soc. A 134, 229–240.
Granger, C. W. J., 1966, The typical spectral shape of an economic variable, Econometrica 34, 150–161.
Granger, C. W. J. and P. Newbold, 1973, Some comments on the evaluation of economic forecasts, Applied Economics 5, 35–47.
Kendall, M. G., 1954, Exercises in theoretical statistics (Griffin, London).
Malinvaud, E., 1966, Statistical methods of econometrics (North-Holland, Amsterdam).
Newbold, P. and C. W. J. Granger, 1974, Experience with forecasting univariate time series and the combination of forecasts, J. R. Statist. Soc. A 137, forthcoming.
Reid, D. J., 1969, A comparative study of time series prediction techniques on economic data, Ph.D. Thesis (University of Nottingham, U.K.).
Sheppard, D. K., 1971, The growth and role of U.K. financial institutions 1880–1962 (Methuen, London).

CHAPTER 6

Some Properties of Time Series Data and Their Use in Econometric Model Specification* C. W. J. Granger

1. INTRODUCTION It is well known that time-series analysts have a rather different approach to the analysis of economic data than does the remainder of the econometric profession. One aspect of this difference is that we admit more readily to looking at the data before finally specifying a model; in fact, we greatly encourage looking at the data. Although econometricians trained in a more traditional manner are still very much inhibited in the use of summary statistics derived from the data to help model selection, or identification, it could be to their advantage to change some of these attitudes. In fact, I have heard rumors that econometricians do data-mine in the privacy of their own offices and I am merely suggesting that some aspects, at least, of this practice should be brought out into the open. The type of equations to be considered are generating equations, so that a simulation of the explanatory side should produce the major properties of the variable being explained. If an equation has this property, it will be said to be consistent, reverting to the original meaning of this term. As a simple example of a generally non-consistent model, suppose that one has

$$y_t = a + bx_t + \varepsilon_t,$$

where $y_t$ is positive, but $x_t$ is unbounded in both directions. A more specific example is when $y_t$ is exponentially distributed and $x_t$ normally distributed. The only case when such a model is consistent is when b is zero. Although it would be ridiculous to suggest that econometricians would actually propose such models, it might be noted that two models that appear in the finance literature have

$$D_t = a + bD_{t-1} + cE_t + \varepsilon_t,$$

* Journal of Econometrics, 16, 1981, 121–130.

and

$$P_t = d + eD_t + fE_t + \varepsilon_t,$$

where $D_t$ represents dividends, $E_t$ is earnings and $P_t$ is share price. Note that $P_t$ and $D_t$ are necessarily positive, but that $E_t$ can be both positive and negative, as Chrysler and other companies can testify. A further example arises from consideration of the question of whether or not a series is seasonal. For the purposes of this discussion, a time series will be said to be seasonal if its spectrum contains prominent peaks round the seasonal frequencies, which are $2\pi j/12$, j = 1, 2, ..., 6, if the data are recorded monthly. In practice, this will just mean that a plot of the series through time will display the presence of a fairly regular twelve-month repeating shape. Without looking at the data, one may not know if a given series is seasonal or not and economic theory by itself may well not be up to the task of deciding. If now we look at a group of variables which are to be modelled, how does the presence, or lack, of seasonality help with model specification? Considering just single-equation models, which are suitable for the simple point to be made, of the form

$$y_t = a + bx_t + cz_t + \varepsilon_t, \qquad (1.1)$$

then it would clearly be inconsistent to require

(i) if $y_t$ were seasonal and $x_t$, $z_t$ not seasonal, that $\varepsilon_t$ be white noise (or non-seasonal), or

(ii) generally, if $y_t$ were not seasonal, but just one of $x_t$ or $z_t$ was seasonal, that $\varepsilon_t$ be white noise or AR(1) or follow any non-seasonal model.

Clearly, we may have information about the time-series properties of the data, in terms of spectral shapes, that will put constraints on the form of models that can be built or proposed. As the point is a very simple one, and ways of dealing with the seasonality are well understood, or are at least currently thought to be so, this case will not be pursued further. There is, however, a special case that is worth mentioning at this point.

Suppose that $y_t$ is not seasonal, but that both $x_t$ and $z_t$ are seasonal; then is model (1.1) a possible one, with $\varepsilon_t$ not seasonal? In general, the answer is no, but if it should happen that the term $bx_t + cz_t$ is non-seasonal, then the model (1.1) is not ruled out. This could only happen if there is a constraint $f = c/b$ such that the seasonal component in $x_t$ is exactly the reverse of f times the seasonal component in $z_t$. A simple case where this would occur is if the seasonal components in $x_t$ and $z_t$ were identical and f takes the value minus one. This does at first sight appear to be a highly unlikely occurrence, but an example will be given later, in a very different context, where such cancellations could occur.
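Written out (our reconstruction of the cancellation), decompose $x_t = s_{x,t} + u_t$ and $z_t = s_{z,t} + v_t$ into seasonal and non-seasonal parts, with $s_{x,t} = -f\,s_{z,t}$ and $f = c/b$. Then
$$bx_t + cz_t = b\,s_{x,t} + c\,s_{z,t} + bu_t + cv_t = (c - bf)\,s_{z,t} + bu_t + cv_t = bu_t + cv_t,$$
so the seasonal components cancel exactly and the right-hand side of (1.1) can indeed be non-seasonal.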

It is obvious that the spectrum of one side of a generating equation, such as (1.1), must be identical to the spectrum of the other side. If the spectrum of one side has a distinctive feature, it must be reproduced in the spectrum of the other side, obvious examples being periodic components, such as the seasonal and trend terms. The majority of this paper will be concerned with discussions of this point in connection with a generalized version of the trend term.

2. INTEGRATED SERIES AND FILTERS To proceed further, it is necessary to introduce a class of time series models that have been popular in parts of electrical and hydraulic engineering for some years, but which have so far had virtually no impact in econometrics.

Suppose that $x_t$ is a zero-mean time series generated from a white noise series $\varepsilon_t$ by use of the linear filter a(B), where B is the backward operator, so that

$$x_t = a(B)\varepsilon_t, \qquad (2.1)$$

with

$$B^k\varepsilon_t = \varepsilon_{t-k}.$$

Further suppose that
$$a(B) = (1 - B)^{-d} a'(B), \qquad (2.2)$$
where $a'(z)$ has no poles or roots at z = 1. Then $x_t$ will be said to be "integrated of order d" and denoted

$$x_t \sim I(d).$$

Further, defining
$$x'_t = (1 - B)^d x_t = a'(B)\varepsilon_t,$$
then

$$x'_t \sim I(0)$$

from the assumed properties of $a'(B)$. a(B) will be called an "integrating filter of order d." If $a'(B)$ is the ratio of a polynomial in B of order m divided by a polynomial of order l, then $x_t$ will be ARIMA (l, d, m) in the usual Box and Jenkins (1970) notation. However, unlike the vast majority of the literature on ARIMA models, the class of models here considered allows the order of integration, d, to be possibly non-integer. Clearly, not constraining d to be an integer generalizes the class of models before considered, but to be relevant, the generalization has to be shown to be potentially important. Some earlier accounts of similar models may be found in Hipel and McLeod (1978), Lawrence and

Kottegoda (1977), Mandelbrot and Van Ness (1968) and Mandelbrot and Taqqu (1979), although some details in the form of the models are different than those here considered, which were first introduced in Granger and Joyeux (1981). Some of the main properties of these models may be summarized as follows: The spectrum of $x_t$, generated by (2.1) and (2.2), may be thought of as
$$f_x(\omega) = |1 - z|^{-2d}\,|a'(z)|^2, \qquad z = e^{i\omega},$$
if $\mathrm{var}(\varepsilon_t) = 1$, from analogy with the usual results from filtering theory. It is particularly important to note that for small $\omega$,
$$f_x(\omega) \simeq c\,\omega^{-2d}. \qquad (2.3)$$

It was shown in Granger and Joyeux (1981) that the variance of $x_t$ increases as d increases, and that this variance is infinite for $d \ge \tfrac{1}{2}$, but is finite for $d < \tfrac{1}{2}$. Further, writing
$$x_t = \sum_{j=0}^{\infty} b_j \varepsilon_{t-j},$$
and denoting
$$\rho_j = \mathrm{correlation}(x_t, x_{t-j}),$$
then, for j large,

$$\rho_j = A_1 j^{2d-1}, \qquad d < \tfrac{1}{2},\; d \ne 0,$$

and

$$b_j = A_2 j^{d-1}, \qquad d < 1,\; d \ne 0,$$

where $A_1$ and $A_2$ are appropriate constants. When d = 0, both $\rho_j$ and $b_j$ decrease exponentially in magnitude as j increases, but with $d \ne 0$, it is seen that these quantities decline much more slowly. Because of this property the integrated series, when $d \ne 0$, have been called "long-memory". For long-term forecasting, the low frequency component is of paramount importance and (2.3) shows that if d is not an integer, this component cannot be well approximated by an ARIMA (l, d, m) model with integer d and low order for l and m. It is not clear at this time if integrated models with non-integer d occur in practice and only extensive empirical research can resolve this issue. However, some aggregation results presented in Granger (1980) do suggest that these models may be expected to be relevant for actual economic variables. It is proved there, for example, that if $x_{jt}$, j = 1, ..., N, are a set of independent series, each generated by an AR(1) model, so that

$$x_{jt} = \alpha_j x_{j,t-1} + \varepsilon_{jt}, \qquad j = 1, \ldots, N,$$

where the $\varepsilon_{jt}$ are independent, zero-mean white noises and if the $\alpha_j$'s are values independently drawn from a beta distribution on (0, 1), where

$$dF(\alpha) = \frac{2}{B(p, q)}\,\alpha^{2p-1}(1 - \alpha^2)^{q-1}\,d\alpha, \qquad 0 \le \alpha \le 1,\; p, q > 0, \qquad (2.4)$$

then, if $x_t = \sum_{j=1}^{N} x_{jt}$, for N large,

$$x_t \sim I(1 - q/2). \qquad (2.5)$$

The shape of the distribution from which the $\alpha$'s are drawn is only critical near 1 for this result to hold.
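A rough simulation check of (2.5) is sketched below; the code and parameter values are ours, not the paper's. With q = 1.2 the aggregate should behave roughly like I(0.4), so its sample autocorrelations should decay far more slowly than those of any single AR(1) component.

```python
import numpy as np

rng = np.random.default_rng(3)

# Aggregate N independent AR(1) series whose coefficients satisfy
# alpha^2 ~ Beta(p, q); by (2.5) the sum behaves like I(1 - q/2).
N, T, p, q = 1000, 4000, 1.0, 1.2
alpha = np.sqrt(rng.beta(p, q, size=N))

x = np.zeros(N)
agg = np.empty(T)
for t in range(T):
    x = alpha * x + rng.standard_normal(N)
    agg[t] = x.sum()

z = agg[T // 2:]                 # drop the first half as burn-in
z = z - z.mean()
acf = [(z[k:] * z[:-k]).mean() / (z * z).mean() for k in (1, 10, 50, 100)]
print("acf at lags 1, 10, 50, 100:", np.round(acf, 2))
# Hyperbolic decay (roughly j**(2d - 1)) rather than the exponential
# decay of an individual AR(1) is the long-memory signature.
```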

A more general result arises from considering $x_{jt}$ generated by

$$x_{jt} = \alpha_j x_{j,t-1} + y_{j,t} + b_j W_t + \varepsilon_{jt}, \qquad (2.6)$$

where the series $y_{j,t}$, $W_t$ and $\varepsilon_{jt}$ are all independent of each other for all j, the $\varepsilon_{jt}$ are white noises with variances $\sigma_j^2$, and $y_{j,t}$ has spectrum $f_y(\omega, \theta_j)$ and is at least potentially observable for each micro-component. It is assumed that there is no feedback in the system and the various parameters $\alpha_j$, $\theta_j$, $b_j$ and $\sigma_j^2$ are all assumed to be drawn from independent populations, and the distribution function for the $\alpha$'s is (2.4). Thus, the $x_{j,t}$ are generated by an AR(1) model, plus an independent causal series $y_{j,t}$ and a common factor causal series $W_t$. With these assumptions, it is shown in Granger (1980) that (i)

$$x_t \sim I(d_x),$$

where $d_x$ is the largest of the three terms $(1 - q/2 + d_y)$, $(1 - q + d_w)$ and $(1 - q/2)$, where $\bar{y}_t \sim I(d_y)$, $W_t \sim I(d_w)$, and (ii) if a transfer function model of the form

$$x_t = a_1(B)\bar{y}_t + a_2(B)W_t + \varepsilon_t$$

is fitted, then both $a_1(B)$ and $a_2(B)$ are integrating filters of order 1 - q. It should be noted from (2.6) that, if $\alpha_j < 1$, then the spectrum of $x_{j,t}$ is

$$f_{x_j}(\omega) = |1 - \alpha_j z|^{-2}\,\big[f_{y_j}(\omega) + b_j^2 f_w(\omega) + f_{\varepsilon_j}(\omega)\big], \qquad z = e^{i\omega},$$

so that if one assumes that $x_{j,t} \sim I(0)$ it necessarily follows that $y_{j,t}$ and $W_t$ are both I(0). In Granger (1980) it was shown that integrated models may arise from micro-feedback models and also from large-scale dynamic econometric models that are not too sparse. Thus, at the very least, it seems that integrated series can occur from realistic aggregation situations, and so do deserve further consideration.

3. THE ALGEBRA OF INTEGRATED SERIES AND ITS IMPLICATIONS

The algebra of integrated series is quite simple. If $x_t \sim I(d_x)$ and a(B) is an integrating filter of order d', then $a(B)x_t$ will be $I(d_x + d')$. Thus, $d_x$ is unchanged if $x_t$ is operated on by a filter of order zero. Further, if $x_t \sim I(d_x)$, $y_t \sim I(d_y)$, then $z_t = bx_t + cy_t \sim I(\max(d_x, d_y))$ in general. This result is proved by noting that the spectrum of $z_t$ is
$$f_z(\omega) = b^2 f_x(\omega) + c^2 f_y(\omega) + bc\,[cr(\omega) + \overline{cr(\omega)}], \qquad (3.1)$$
where $cr(\omega)$ is the cross-spectrum between $x_t$ and $y_t$ and has the property that $|cr(\omega)|^2 \le f_x(\omega) f_y(\omega)$. For small $\omega$,
$$f_x(\omega) \simeq A\,\omega^{-2d_x} \quad \text{and} \quad f_y(\omega) \simeq A_3\,\omega^{-2d_y},$$
and clearly the term with the largest d value will dominate at low frequencies. There is, however, one special case where this result does not hold, and this will be discussed in the following section. Suppose now that one is considering the relationship between a pair of series $x_t$ and $y_t$, and where $d_x$ and $d_y$ are known, or at least have been estimated from the data. For the moment, it will be assumed that $d_x$ and $d_y$ are both non-integer. If a model of the form

$$b(B)y_t = c(B)x_t + h(B)\varepsilon_t \qquad (3.2)$$

is considered, where all of the polynomials are of finite order, and will usually be of low order, and $\varepsilon_t$ is white noise, independent of $x_t$, then this model is consistent, from consideration of spectral shapes at low frequencies, only if $d_x = d_y$. If one knows that $d_x < d_y$, then to make the model consistent, either c(B) must be an integrating filter of order $d_y - d_x$ or h(B) must be an integrating filter of order $d_y$, or both. In either case, the polynomials cannot be of finite order. Similarly, if $d_x > d_y$, then necessarily c(B) must be an integrating filter of order $d_y - d_x$, and so cannot be of finite order. As an extreme case of model (3.2) inconsistency, suppose that $d_x < \tfrac{1}{2}$, so the variance of $x_t$ is finite, but $1 > d_y > \tfrac{1}{2}$, so the variance of $y_t$ is infinite. Using just finite polynomials in the filters, clearly $y_t$ cannot be explained by the model, if the variance of $\varepsilon_t$ is finite, which is generally taken to be true. Similarly, if $d_y < \tfrac{1}{2}$ but $1 > d_x > \tfrac{1}{2}$, then one is attempting to explain a finite variance series by an infinite variance one. This same problem occurs when the d's can take integer values, of course. Suppose that one knows that change in employment has d = 0, and that level of production has d = 1; then one would not expect to build a model of the form

$$\text{change in employment} = a + b\,(\text{level of production}) + f(B)\varepsilon_t.$$

However, replacing b by b(1 - B) would produce a consistent model, in the sense in which this term is being used here. Only with integer d values can a

filter, which is a polynomial in B of finite length, be applied to a series to reduce the order of integration to zero. Naturally, similar constraints can be derived for models involving more than one explanatory variable, although these constraints can become rather complicated if many variables are involved. As an illustration, suppose one has a single-equation model of the form

$$b(B)y_t = c(B)x_t + g(B)z_t + h(B)\varepsilon_t, \qquad (3.3)$$

where $\varepsilon_t$ is white noise independent of $x_t$ and $z_t$, and $d_x$, $d_y$ and $d_z$ are assumed known and non-integer. If all of the polynomials are of

finite order, then necessarily $d_y = \max(d_x, d_z)$. If this condition does not hold then, generally, at least one of the polynomials has to correspond to an integrating filter and hence to be of infinite order. When all of the d's are integer, rather simpler rules apply. However, care has to be taken in the model specification so that infinite variance variables are not used to explain finite variance variables, or vice versa. In practice, it is still not uncommon to see this type of misspecification in published research.

4. CO-INTEGRATED SERIES This section considers a very special case where some of the previously stated rules do not hold. Although it may appear to be very special, it also seems to be potentially very important. Start with model (3.3) and ask again, is it possible for $d_y < \max(d_x, d_z)$? For convenience, initially the no-lag case c(B) = c, g(B) = g is considered, so that

$$b(B)y_t = cx_t + gz_t + h(B)\varepsilon_t, \qquad (4.1)$$

where $d_y > 0$, $h(B)\varepsilon_t$ is $I(d_y)$ and $\mathrm{var}(\varepsilon) = 1$. The spectrum of the right-hand side will be
$$\big\{c^2 f_x(\omega) + g^2 f_z(\omega) + gc\,[cr(\omega) + \overline{cr(\omega)}]\big\} + |h(z)|^2, \qquad z = e^{i\omega}, \qquad (4.2)$$
where now $cr(\omega)$ is the cross-spectrum between $x_t$ and $z_t$. The special case of interest has:

(i) $f_x(\omega) = a^2 f_z(\omega)$, $\omega$ small, so $d_x = d_z$;
(ii) $cr(\omega) = a f_z(\omega)$, $\omega$ small, so that the coherence $C(\omega) = 1$ and the phase $\phi(\omega) = 0$ for $\omega$ small.

A pair of series obeying (i) and (ii) will be called co-integrated. If, further, $g = -ca$, the part of the spectrum (4.2) inside the main brackets will vanish at low frequencies and so a model of the form (4.1) will be appropriate even when $d_y < \max(d_x, d_z)$ in this special case. It is seen that in this case the difference between two co-integrated series can result in an I(0) series. A slightly more general result arises from considering $x_t = z_t + q_t$, where $d_x = d_z$, $d_q < d_x$, and $z_t$, $q_t$ are independent; then

$x_t$ and $z_t$ will be co-integrated, but the difference $x_t - z_t$ will be $I(d_q)$. It should be noted that if a pair of series $x_t$, $z_t$ are co-integrated, then so will be $a(B)x_t$, $b(B)z_t$, where a(B), b(B) are any pair of finite lag filters; thus, in particular, if $x_t$ and $z_t$ are co-integrated then so will be $x_t$ and $z_{t-k}$ for all k, although the approximation that the phase is zero at low frequencies may become unacceptable for large values of k. Co-integrated pairs of series may arise in a number of ways, for example:

(i) If $x_t$ is the input and $z_t$ the output of a black box of limited capacity, or of finite memory, then $x_t$, $z_t$ will be co-integrated. For instance the series might be births and deaths in an area with no immigration or emigration, cars entering and leaving the Lincoln Tunnel, patients entering and leaving a maternity hospital, or houses started and houses completed in some region. For these

examples to hold, it is necessary to have $d_x > 0$. (ii) Series for which a market ensures that they cannot drift too far apart, for example interest rates in different parts of a country or gold prices in London and New York.

(iii) If $f_{n,h}(J_n)$ is an optimal forecast of $x_{n+h}$ based on a proper information set $J_n$, so that $J_n$ includes $x_{n-j}$, $j \ge 0$, then $x_{n+h}$ and $f_{n,h}$ are co-integrated if $d_x > 0$. Thus, if "unanticipated money supply",

$x_{n+1} - f_{n,1}$, is used in a model, this can be appropriate if the variable being explained has $d_x > 0$. It should be emphasized that for

this result to hold, $f_{n,h}$ must be an optimal forecast and, if $d_x$ is not an integer, then this means that in theory the forecast has to

be based on an infinity of lagged x's. If $1 > d_x > 0$, but an ARIMA (l, d, m) model is used to form forecasts, with integer d, the forecasts will not be optimal and the series and its forecast will not be co-integrated. There obviously are pairs of economic series, such as prices and wages, which may or may not be co-integrated, and a decision on this has to be determined by an appropriate theory or an empirical investigation. It might be interesting to undertake a wide-spread study to find out which pairs of economic variables are co-integrated. In the frequency domain, the conditions for co-integration of two series state that the two series move in a similar way, ignoring lags, over the long swings of the economy and in "trend", although the idea of trend is rarely carefully defined and will here mean just the very low frequency component. Although the two series may be unequal in the short term, they are tied together in the long run. The use of the difference between two series to explain the change in a series has been suggested by Sargan (1964) and Hendry (1978) and implemented in a number of models, particularly in Britain. An example is a model of the form

$$a(B)\Delta y_t = b(B)\Delta x_t + \beta(y_{t-1} - x_{t-1}) + \varepsilon_t,$$

and the use of the term $\beta(y_{t-1} - x_{t-1})$ has been found in some cases to produce a better model, in terms of goodness of fit. The form of the model has an important property. The difference equation without the innovations $\varepsilon_t$,

$$a(B)\Delta y_t = b(B)\Delta x_t + \beta(y_{t-1} - x_{t-1}),$$

is such that if $x_t$ and $y_t$ each tend to equilibrium, so that $\Delta x_t \to 0$ and $\Delta y_t \to 0$, then $x_t$ and $y_t$ tend to the same equilibrium level. When the stochastic elements $\varepsilon_t$ are present, equilibrium becomes much less meaningful, and is replaced by $x_t$ and $y_t$ tending to have identical means, assuming the means exist. However, if the d value of $x_t$ is greater than $d_\varepsilon$, this generating model ensures that $x_t$ and $y_t$ will be co-integrated. They, therefore, will move closely together in the long run, which is possibly the property that most naturally replaces the concept of equilibrium for stochastic processes. It is important to note that this property does not hold if $d_x = d_y = d_\varepsilon$, as then the coherence between $x_t$ and $y_t$ need not be high at low frequencies, depending on the relative variances of $\varepsilon_t$ and $y_t$.
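A small simulation of this generating model may make the point concrete. The sketch below is ours, not the paper's: it takes the simplest case a(B) = 1, b(B) = b, with $x_t$ a random walk, and the parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(4)

T, b, beta = 2000, 0.5, -0.2            # beta < 0 pulls y back toward x
x = np.cumsum(rng.standard_normal(T))   # x is I(1)

y = np.empty(T)
y[0] = x[0]
for t in range(1, T):
    # Delta y_t = b * Delta x_t + beta * (y_{t-1} - x_{t-1}) + eps_t
    y[t] = y[t - 1] + b * (x[t] - x[t - 1]) \
        + beta * (y[t - 1] - x[t - 1]) + rng.standard_normal()

z = y - x                                # the "equilibrium error"
print(f"std(x) = {x.std():.1f}, std(z) = {z.std():.1f}")  # z stays bounded
z0 = z - z.mean()
r1 = (z0[1:] * z0[:-1]).sum() / (z0 * z0).sum()
print(f"first autocorrelation of z = {r1:.2f}")  # about 1 + beta = 0.8: I(0)
```

In this set-up $z_t$ follows an AR(1) with coefficient $1 + \beta$, so $x_t$ and $y_t$ move together in the long run exactly as described in the text.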

5. CONCLUSION Having, I hope, made a case for the prior analysis of time series data before model specification inter-relating the variables, it has now to be admitted that the practical implementation of the rules suggested above is not simple. Obviously, one can obtain satisfactory estimates of the spectrum of a series, but it is not clear at this time how d values should be estimated. In the references given earlier, a variety of ways of estimating d are suggested, and a number of sensible modifications to these can easily be proposed, but the statistical properties of these d estimates need to be established. It is possible that too much data is required for practical use of the specification rules or that d values for real economic variables are all integers. Only further analysis, both theoretical and empirical, can answer these questions.

REFERENCES
Box, G. E. P. and G. M. Jenkins, 1970, Time series analysis, forecasting and control (Holden-Day, San Francisco, CA).
Davidson, J. E. H., D. F. Hendry, F. Srba and S. Yeo, 1978, Econometric modelling of the aggregate time-series relationship between consumers' expenditure and income in the United Kingdom, Economic Journal 88, 661–692.

Granger, C. W. J., 1980, Long memory relationships and the aggregation of dynamic models, Journal of Econometrics 14, 227–238.
Granger, C. W. J. and R. Joyeux, 1981, An introduction to long-memory time series models and fractional differencing, Journal of Time Series Analysis 1, 15–29.
Hipel, K. W. and A. I. McLeod, 1978, Preservation of the rescaled adjusted range, Part 1, Water Resources Research 14, 491–518.
Lawrence, A. J. and N. T. Kottegoda, 1977, Stochastic modelling of river-flow time series, Journal of the Royal Statistical Society A 140, 1–47.
Mandelbrot, B. B. and J. W. Van Ness, 1968, Fractional Brownian motions, fractional noises and applications, SIAM Review 10, 422–437.
Mandelbrot, B. B. and M. S. Taqqu, 1979, Robust R/S analysis of long-run serial correlation, Research report RC 7936 (IBM, Yorktown Heights, NY).
Sargan, J. D., 1964, Wages and prices in the United Kingdom: A study in econometric methodology, in: P. E. Hart, G. Mills and J. K. Whitaker, eds., Econometric analysis for national economic planning (Butterworths, London).

CHAPTER 7

Time Series Analysis of Error-Correction Models* C. W. J. Granger and A. A. Weiss

1. INTRODUCTION The error-correction model considered in the first sections of this paper takes the form
$$(1 - B)^d a_1(B) y_t = m_1 + \beta(y_{t-1} - Ax_{t-1}) + (1 - B)^d b_1(B) x_t + c_1(B)\varepsilon_{1t}, \qquad (1)$$

$$(1 - B)^d a_2(B) x_t = m_2 + c_2(B)\varepsilon_{2t}, \qquad (2)$$

where $\varepsilon_{1t}$, $\varepsilon_{2t}$ are a pair of independent, zero-mean white noise series with finite variances, so that $E[\varepsilon_{jt}\varepsilon_{js}] = 0$, $t \ne s$, j = 1, 2, $m_1$, $m_2$ are constants, B is the lag operator so that $B^k z_t = z_{t-k}$, and $a_1(B)$, $b_1(B)$, etc. are finite polynomials in B with the property that $a_1(1) \ne 0$, $b_1(1) \ne 0$, etc. and $a_1(0) = a_2(0) = c_1(0) = c_2(0) = 1$. In the main body of the paper, d will take either the value 0 or 1, so that if d = 0 the model is on levels of $x_t$, $y_t$, and if d = 1 the model uses differenced data except in the error-correcting term $\beta(y_{t-1} - Ax_{t-1})$. In an appendix to the paper, other values of d are briefly considered, including fractional values. The model in (1), (2) has a one-way causal structure, $x_t$ causing $y_{t+1}$ but $y_t$ not causing $x_{t+1}$. By allowing $b_1(0)$ to be non-zero, simultaneity between $x_t$ and $y_t$ is a possibility. It might be noted that there is little point in including terms such as $\beta_2(y_{t-2} - Ax_{t-2})$ in (1) as the resulting model can always be rewritten in the present form. It is assumed that $(1 - B)^d x_t$, $(1 - B)^d y_t$ are stationary. The main purpose of error-correction models is to capture the time-series properties of variables, through the complex lag-structures allowed, whilst at the same time incorporating an economic theory of an equilibrium type. To see this, consider the case when d = 1 and suppose that for all t > T, $\varepsilon_{1t} = \varepsilon_{2t} = 0$, and with $m_1 = m_2 = 0$. Then eventually, after short-term dynamics have worked themselves out, $(1 - B)x_t = (1 - B)y_t$

* Studies in Econometrics: Time Series and Multivariate Statistics, edited by S. Karlin, T. Amemiya, and L. A. Goodman, Academic Press, New York, 1983, 255–278.

= 0, and $y_t = Ax_t$, so the variables have an equilibrium relationship. If the constants $m_1$, $m_2$ are non-zero, then eventually $x_t$, $y_t$ will be linear trends but still related by $y_t = Ax_t$. If d = 0, the equilibria are of a rather trivial kind: $x_t$ = constant, $y_t$ = constant. By using error-correction models, a link is formed with classical econometric models that rely heavily on such theory but do not utilize a rich lag-structure. They may be thought of as capturing the true dynamics of the system whilst incorporating the equilibrium suggested by economic theory. This paper will consider the time-series properties of series generated by models such as (1), (2) and by various generalizations of this model.

It will be assumed that m1 = m2 = 0. A time-series identification test will be proposed for series obeying such models and empirical examples presented.

In what follows, a series $x_t$ will be called integrated of order d, denoted $x_t \sim I(d)$, if it has a univariate ARIMA (p, d, q) model of the form
$$(1 - B)^d g_p(B) x_t = h_q(B) a_t,$$
where $g_p(B)$, $h_q(B)$ are finite polynomials in B of orders p, q respectively, and $a_t$ is white noise. In particular, it follows that if $x_t \sim I(d)$, then $(1 - B)^d x_t \sim I(0)$. If $x_t \sim I(d)$, then at low frequencies the spectrum of $x_t$ will take the form
$$A(1 - \cos\omega)^{-d} \simeq A'\,\omega^{-2d},$$
and this gives a distinctive characteristic of the series that has to be reproduced by any model for $x_t$. A number of empirical papers have used error-correction models, including Sargan (1964), Davidson, Hendry, Srba and Yeo (1978), Hendry and von Ungern-Sternberg (1980), Currie (1981), and Dawson (1981).

2. THE ONE-WAY CAUSAL MODEL Consider the model (1), (2), the first equation of which may be written as

$$\alpha_1(B) y_t = \alpha_2(B) x_t + c_1(B)\varepsilon_{1t}, \qquad (3)$$

with the notation $\alpha_1(B) = (1 - B)^d a_1(B) - \beta B$ and $\alpha_2(B) = (1 - B)^d b_1(B) - \beta AB$. Eliminating $x_t$ from (3) using (2) gives
$$(1 - B)^d a_2(B)\alpha_1(B) y_t = \alpha_2(B) c_2(B)\varepsilon_{2t} + c_1(B) a_2(B)(1 - B)^d \varepsilon_{1t}. \qquad (4)$$
As, if d = 0 or 1, the right-hand side can always be written as a finite moving average, it follows that $y_t \sim I(d)$ regardless of the value of $\beta$ in (1). If $\beta \ne 0$, this follows from (4); if $\beta = 0$, from (1), since $(1 - B)^d x_t \sim I(0)$. However, if d = 1, the value of $\beta$ does have a dramatic impact on the low frequency component of $y_t$. If $\beta \ne 0$, it is seen from (4), essentially replacing B by $e^{i\omega}$ and letting $\omega$ be small so that the term $(1 - e^{i\omega})$ is negligible, that when considered in the frequency domain, the second term on the right-hand side of (4) is negligible. Thus, the low frequency component of $y_t$ is determined largely by the low frequency component of $\varepsilon_{2t}$, which, through (2), also determines the low frequency component of $x_t$. However, if $\beta = 0$, substitution for $x_t$ from (2) into (1) indicates that the low frequency components of both $\varepsilon_{1t}$ and $\varepsilon_{2t}$ will jointly determine the low frequency component of $y_t$. Now consider the series $z_t = y_t - Ax_t$, which has the univariate model

$$a_2(B)\alpha_1(B) z_t = c_2(B)[\,b_1(B) - Aa_1(B)\,]\varepsilon_{2t} + c_1(B) a_2(B)\varepsilon_{1t}. \qquad (5)$$

It follows immediately that zt ~ I(0) even if xt, yt are both I(1). As this is rather a special property it was given a specific name in Granger (1981) as:

Definition: If $x_t \sim I(d)$, $y_t \sim I(d)$ and there exists a constant A such that $z_t = y_t - Ax_t \sim I(0)$, then $x_t$, $y_t$ will be said to be co-integrated. A will be unique.

One reason why this property is special is that if d = 1, then both $x_t$ and $y_t$ will have infinite variance but there exists a constant A so that $z_t$ has finite variance. In general, for a pair of infinite variance series $x_t$, $y_t$, the combination $x_t - Cy_t$ will have infinite variance for all C. It has been shown that if $x_t$, $y_t$ are generated by (1), (2) with d = 1, then these series are necessarily co-integrated. Equally, if $x_t$, $y_t$ are not co-integrated, then an error-correction model with d = 1 would be inappropriate. This is clear because if $x_t$, $y_t$ were not co-integrated, the left-hand side of (1) would have finite variance but the error-correction term on the right-hand side of this equation would have infinite variance, and thus the model would be obviously mis-specified. If d = 1, it easily follows from the definition that the differenced series $(1 - B)x_t$, $(1 - B)y_t$ will have a coherence of one at very low frequencies and zero phase at these frequencies. Thus, $Ax_t$ and $y_t$ will have identical low frequency components but may differ over higher frequencies. As this is a long-run property, it can be thought of as a natural generalization for integrated stochastic processes of one form of the equilibrium property considered by economists. It also follows immediately from the definition that if $x_t$, $y_t$ are co-integrated, then so will be series produced from them by application of linear transformations and finite-length filters, so that for example $x'_t = a + bx_{t-s}$, $y'_t = c + fy_{t-k}$ will be co-integrated for any finite, not too large, values of s and k and for any constants a, b, c, f.

When d = 0, the model is much less interesting. Obviously, if xt and yt are both I(0) then clearly yt - Axt is I(0) for any A. Suppose that xt and yt are related as in (3), then this model can always be written in the form 132 C. W. J. Granger and A. A. Weiss

(1) with d = 1 but xt will be given by (2) with d = 0. Thus, for I(0) series the error-correction model has no special implications.

Returning to the case where $x_t$ and $y_t$ are both I(1), it is interesting to ask what model a time-series analyst is likely to build, given enough data. Looking at the series individually, differencing will be suggested, and then bivariate models of the differenced series considered. Presumably, the model identified will then be

$$\alpha_1(B)(1 - B) y_t = \alpha_2(B)(1 - B) x_t + c_1(B)(1 - B)\varepsilon_{1t},$$

derived from (3), plus (2), assuming one-way causality is determined and polynomials of the correct order are identified. The model is over-differenced but it is quite likely that the fact that the moving-average term $c_1(B)(1 - B)\varepsilon_{1t}$ has a unit root may not be realized, given estimation difficulties in this case, especially if one is not looking for it. When an error-correction model is a possibility it would be convenient to have an easy way of identifying this. Looking at the coherence function for low frequencies is neither cheap nor easy due to estimation problems. The obvious method is to perform the regression

$$y_t = m + Ax_t + u_t,$$

giving $\hat{A}$, and then asking if $z_t = y_t - \hat{A}x_t$ is I(0). This test, and generalizations of it, are discussed in Section VI.
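In outline, this two-step procedure is easy to sketch in code. The example below is our illustration on artificial data (x a random walk, y = 2x plus stationary noise, so the true A = 2); the informal I(0) check via a first autocorrelation merely stands in for the formal tests referred to above.

```python
import numpy as np

rng = np.random.default_rng(5)

T = 500
x = np.cumsum(rng.standard_normal(T))     # I(1)
y = 2.0 * x + rng.standard_normal(T)      # co-integrated with x, A = 2

# Step 1: least-squares regression y_t = m + A x_t + u_t
X = np.column_stack([np.ones(T), x])
m_hat, A_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# Step 2: ask whether z_t = y_t - A_hat x_t looks I(0)
z = y - m_hat - A_hat * x
z0 = z - z.mean()
r1 = (z0[1:] * z0[:-1]).sum() / (z0 * z0).sum()
print(f"A_hat = {A_hat:.2f}, first autocorrelation of z = {r1:.2f}")
# A_hat should be near 2 and r1 well below 1; for two independent
# random walks r1 would instead be close to 1.
```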

3. MULTI-COMPONENT CO-INTEGRATED SERIES

An obvious and potentially important generalization is when $y_t$ and $x_t$ are co-integrated, as in equation (1), but where $x_t$ has several distinguishable and observable components, so that $x_t = x_{1t} + gx_{2t}$, for example. The error-correction term in (1) now becomes, in the two-component case, $\beta(y_{t-1} - A_1x_{1,t-1} - A_2x_{2,t-1})$. If $y_t \sim I(1)$, then a necessary condition for both components to belong in the error-correction term is $x_{1t}$ and $x_{2t} \sim I(1)$. If, say, $x_{1t} \sim I(d)$ with d > 1, then the error-correction term cannot be I(0); if d < 1, then $x_{1t}$ cannot contribute to the coherence, at low frequencies, between $(1 - B)x_t$ and $(1 - B)y_t$. Thus, it is supposed that $y_t$, $x_{1t}$, $x_{2t}$ are all I(1). Denoting the $\omega$-frequency component of $y_t$ by $y_t(\omega)$, and similarly for other series, for $x_t$ and $y_t$ to be co-integrated a sufficient condition is

$$y_t(\omega) = A_1 x_{1t}(\omega) + A_2 x_{2t}(\omega)$$

for small $\omega$ and some constants $A_1$ and $A_2$. Multiplying this equation by $\overline{y_t(\omega)}$ and taking expectations, and similarly using $\overline{x_{1t}(\omega)}$ and $\overline{x_{2t}(\omega)}$ and expectations, gives three equations. Solving out for $A_1$ and $A_2$ gives a relationship between the spectra and cross-spectra of the series at low frequencies. A little algebra then produces the following relationship between coherences at low frequencies:

$$1 - C_{12}^2 - C_{1y}^2 - C_{2y}^2 + 2C_{12}C_{1y}C_{2y} = 0, \qquad (6)$$

where $C_{12}^2$ = coherence between $x_{1t}$, $x_{2t}$, at low frequencies, and $C_{jy}^2$ = coherence between $x_{jt}$, $y_t$, j = 1, 2, at low frequencies. Some consequences of (6) are:

(i) If any one pair of the series $y_t$, $x_{1t}$, $x_{2t}$ are co-integrated, then the remaining pairs must be equally related at low frequencies; e.g., if $C_{12} = 1$, then $C_{1y} = C_{2y}$.

(ii) If any two pairs are co-integrated, then the remaining pair must also be co-integrated, as if $C_{1y} = C_{2y} = 1$, then $C_{12} = 1$.

(iii) Neither pair $y_t$, $x_{1t}$ nor $y_t$, $x_{2t}$ need be co-integrated. For example, if $C_{12} = 0$, then (6) gives merely $1 = C_{1y}^2 + C_{2y}^2$. Thus, if $y_t$ and $x_t$ are co-integrated it does not necessarily mean that $y_t$ is co-integrated with any component of $x_t$.
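Consequences (i) and (ii) can be verified directly from (6) by completing the square (our check, not part of the original):
$$C_{12} = 1:\quad 1 - 1 - C_{1y}^2 - C_{2y}^2 + 2C_{1y}C_{2y} = -(C_{1y} - C_{2y})^2 = 0 \;\Rightarrow\; C_{1y} = C_{2y};$$
$$C_{1y} = C_{2y} = 1:\quad 1 - C_{12}^2 - 2 + 2C_{12} = -(1 - C_{12})^2 = 0 \;\Rightarrow\; C_{12} = 1.$$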

This last property does make a search for co-integrated series more difficult, particularly if one of the necessary components is not observed and no satisfactory proxy is available. For example, if $y_t$ is the output price series for some industry, a co-integrated series could have as components input prices, wages, and possibly a productivity measure, provided all series are I(1). One cannot test for co-integratedness in pairs, but one has to look at $z_t = y_t - A_1x_{1t} - A_2x_{2t}$ and see if $z_t \sim I(0)$. Clearly, if one vital component is missing, then co-integration may not be determined. The existence of a relevant theory to indicate a full list of relevant components is obviously particularly useful.

The model can be further generalized to have a vector $\mathbf{x}_t$, with several components $x_{jt}$, causing a vector $\mathbf{y}_t$ with components $y_{jt}$. One or more of the equations for the y components could contain a lagged $z_t$ term, where $z_t = \sum f_j y_{jt} - \sum A_j x_{jt}$. Discovering the correct specification of $z_t$, such that all $y_{jt}$, $x_{jt}$ are I(1) but $z_t$ is I(0), is likely to be rather difficult without the use of a specific, and correct, equilibrium theory.

4. THE BIVARIATE FEEDBACK CASE

Now consider the bivariate feedback model

(1 − B)^d a_1(B)y_t = −β_1(y_{t−1} − A_1x_{t−1}) + (1 − B)^d b_1(B)x_t + c_1(B)ε_{1t}   (7a)

(1 − B)^d a_2(B)x_t = −β_2(y_{t−1} − A_2x_{t−1}) + (1 − B)^d b_2(B)y_t + c_2(B)ε_{2t}   (7b)

which may be conveniently rewritten as

α_1(B)y_t = α_2(B)x_t + c_1(B)ε_{1t}   (8a)

α_3(B)x_t = α_4(B)y_t + c_2(B)ε_{2t}   (8b)

where

α_1(B) = (1 − B)^d a_1(B) + β_1B,   α_2(B) = (1 − B)^d b_1(B) + β_1A_1B,
α_3(B) = (1 − B)^d a_2(B) − β_2A_2B,   α_4(B) = (1 − B)^d b_2(B) − β_2B.

To make the model identified, a recursive scheme will be assumed, so that corr(ε_{1t}, ε_{2s}) = 0 for all s, t, including s = t, and b_2(0) = 0, but b_1(0) need not be zero. It is also assumed that d = 1. The univariate model for y_t takes the form

D(B)y_t = c_1(B)α_3(B)ε_{1t} + c_2(B)α_2(B)ε_{2t}

where

D(B) = α_1(B)α_3(B) − α_2(B)α_4(B).

The univariate model for x_t has D(B) on its left-hand side. For x_t, y_t to be both I(1), so that D(B) has a factor (1 − B), requires either β_1β_2 = 0 or A_1 = A_2. Some further algebra finds that the model for z_t = y_t − Ax_t takes the form

D(B)z_t = f_1(B)ε_{1t} + f_2(B)ε_{2t}

and if A_1 = A_2 = A or if β_1β_2 = 0, then f_1(B), f_2(B) have a factor (1 − B), which therefore cancels through the equation for z_t, giving z_t ~ I(0). Thus, for x_t, y_t to be co-integrated and for an error-correction term to be present in each equation of (7), necessarily A_1 = A_2 = A. If only one error-correction term occurs in the model, for instance if β_2 = 0, β_1 ≠ 0, then x_t, y_t will be co-integrated and I(1), with the low frequency component of ε_{2t} driving the low frequency components of both x_t and y_t. If both β_1 and β_2 are non-zero, the low frequency components of x_t and y_t are driven by a mixture of the low frequency components of ε_{1t} and ε_{2t}. The model is thus different when two error-correction components are present.

The only unusual special case seems to be when β_1 = 0, β_2 ≠ 0, and b_1(B) = Aa_1(B), as then x_t, y_t are both I(2) but z_t = y_t − Ax_t is I(0). The series are thus still co-integrated.

5. AGGREGATION

If x_t, y_t are I(1) and co-integrated, so that z_t = y_t − Ax_t is I(0), then changing the sampling interval of the series will not change the situation. If x_t is measured weekly, say, and is I(1), then if recorded every k weeks the new data set will still be I(1). The model for the change in x_t will be different, but the change will remain I(0). Similarly, z_t will stay I(0) and so co-integration is unchanged. Here, x_t, y_t have been considered as stock variables; the same remarks hold if they are both flow variables, but accumulated over k weeks rather than one week, say. If x_t is a flow variable and y_t a stock variable, temporal aggregation relationships are less clear. It seems doubtful that it is logical to suppose that a stock and a flow variable are co-integrated, given the arbitrariness of sampling intervals.

Suppose now that a pair x_{1t}, y_{1t} are co-integrated, both I(1), with z_{1t} = y_{1t} − A_1x_{1t} ~ I(0), and similarly for a second pair x_{2t}, y_{2t}, with z_{2t} = y_{2t} − A_2x_{2t}. The variables could be income and consumption in two different regions. Now suppose that data for the individual regions are not available, the observable data being ȳ_t = y_{1t} + y_{2t} and x̄_t = x_{1t} + x_{2t}. ȳ_t, x̄_t are both I(1), but z̄_t = ȳ_t − Ax̄_t will not be I(0) unless A_1 = A_2 (= A), or unless x_{1t}, x_{2t} are co-integrated, with (A_1 − A)x_{1t} + (A_2 − A)x_{2t} ~ I(0), so that y_{1t} and y_{2t} will necessarily be co-integrated, with (A_1 − A)A_2y_{1t} + (A_2 − A)A_1y_{2t} ~ I(0). This may seem an unlikely condition for variables from different regions. If many regions are involved in the aggregation, it seems highly unlikely that the aggregates are co-integrated even if the regional components are. It thus seems that rather stringent conditions are required to find error-correction models relevant for some of the most important aggregates of the economy. On the other hand, it is possible for aggregate series with many components to be co-integrated but for regional components not to be, generalizing some of the results of Section 3. For some equilibrium theories in economics the value of A is determined; for instance, if the ratio of y_t to x_t is thought to tend to a constant in equilibrium, then building models on the log variables suggests that A = 1. This could apply to various “regions” and aggregation will then lead to the same error-correction models.
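The aggregation point can be checked by simulation. The sketch below, with entirely hypothetical “regional” series and parameter values, builds two co-integrated pairs with A_1 ≠ A_2 and shows that the equilibrium error of the aggregates behaves like an I(1) series:

```python
# Two regional pairs are each co-integrated, with A1 != A2 and
# independent regional incomes; the aggregates fail to co-integrate.
import numpy as np

rng = np.random.default_rng(1)
T = 500
x1 = np.cumsum(rng.normal(size=T))         # region 1 income, I(1)
x2 = np.cumsum(rng.normal(size=T))         # region 2 income, I(1)
A1, A2 = 1.0, 3.0
y1 = A1 * x1 + rng.normal(size=T)          # z1 = y1 - A1*x1 is I(0)
y2 = A2 * x2 + rng.normal(size=T)          # z2 = y2 - A2*x2 is I(0)

ybar, xbar = y1 + y2, x1 + x2              # only aggregates observed

A_hat = np.polyfit(xbar, ybar, 1)[0]       # co-integrating regression
z = ybar - A_hat * xbar

def acf1(u):                               # lag-1 autocorrelation
    u = u - u.mean()
    return np.dot(u[:-1], u[1:]) / np.dot(u, u)

# z from the aggregates behaves like I(1) (lag-1 autocorrelation near
# one), while the regional equilibrium error is clearly I(0).
print(round(acf1(z), 3), round(acf1(y1 - A1 * x1), 3))
```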

6. TESTING FOR CO-INTEGRATION

There are a number of ways that the error-correction specification, or equivalently co-integration, could be tested. Assuming that x_t, y_t are both I(1), one way would be to look at estimates of the cross-spectrum between these series at low frequencies. Other ways would be to build the relevant model, such as (1), (2), and test if β is non-zero, or to build

(1), (2) taking β = 0 and then testing if the moving average term c_1(B)(1 − B)^d ε_{1t} has a root on the unit circle. These methods are not simple to use, and the latter two require complete specification of the lags in the model. Under the null hypothesis of no error-correction mechanism, the first test is likely to have unsatisfactory properties with medium-sized samples, and the same is likely to be so for the third test if the alternative hypothesis is true. It would be useful to have a simple test to identify error-correction, using the time-series sense of the word, meaning a simple test of specification to be used prior to the full data analysis. One possible way to do this is to form the regression

y_t = m + Ax_t + e_t   (9)

using least squares and then to ask if ê_t = y_t − m̂ − Âx_t is I(0) or I(1). The standard time-series method for doing this is to look at the correlogram of ê_t and decide, by eye, if it is declining fast enough for I(0) to be appropriate. This same procedure presumably will have been used to decide that x_t, y_t are both I(1), assuming that only integer values of d are being considered. There are two obvious difficulties with this identification procedure: the estimate of A will be inefficient in general, as there is no reason to suppose that e_t is white noise, and no strict test of e_t ~ I(0) is being used. Take

H_0: no error correction mechanism,
H_A: x_t, y_t are co-integrated.

If H_A is true, there will be a single value of Â which, in theory, makes the variance of e_t finite, so that if this value is discovered by the search procedure it should be very distinctive, regardless of the temporal structure of ê_t. This argument only holds strictly for large samples, and it is less clear what happens for ordinary sized samples. What is clear is that the frequently used assumption that a better estimate of A is obtained by assuming e_t to be AR(1) is not appropriate in this case. As y_t is I(1), the model one is inclined to get is just y_t = y_{t−1} + error, with A = 0, as a simple simulation study showed. A more complete procedure is to build models of the form

y_t = m + Ax_t + Σ_{k=1}^{p} a_k(y_{t−k} − y_{t−k−1}) + Σ_{k=0}^{q} b_k(x_{t−k} − x_{t−k−1}) + ε_t   (10)

where ε_t should be white noise if p and q are chosen in an ad hoc fashion but are “large enough” to pick up any temporal structure in the I(0) variable e_t in (9), assuming H_A is correct. This form does not require an identification of the complete model, will give efficient estimates of parameters if H_A is true, and is still easily performed. A test based on (9) will be called the “inefficient test”; that based on (10) will be called the efficient test. If H_A is true, ê_t from (9) should be I(0), which may be judged from its correlogram, and ε̂_t from (10) should be near white noise if p, q are chosen large enough. In the applications presented in the following section, equations (9) and (10) were estimated using least squares. It should be noted that error-correction cannot be tested by estimating models such as (9) or (10) and asking if the estimate of A is significant, because of the spurious regression possibilities, as discussed in

Granger and Newbold (1977). If H_0 is true, spurious regression can obviously occur, but this is not a problem when H_A is true.

Equation (10) does not correspond to equation (1) with d = 1, and so tests based on it are not equivalent to building the system (1), (2). Consider the simple error correcting model y_t − y_{t−1} = β(y_{t−1} − Ax_{t−1}) + ε_t. This can be rewritten y_t − Ax_t = (β + 1)(y_{t−1} − Ax_{t−1}) − A(x_t − x_{t−1}) + ε_t, which suggests that models of the form

y_t − Ax_t = m + γ(y_{t−1} − Ax_{t−1}) + Σ_{k=1}^{p} a_k(y_{t−k} − y_{t−k−1}) + Σ_{k=1}^{q} b_k(x_{t−k} − x_{t−k−1}) + ε_t   (11)

should be estimated, where |γ| < 1 and ε_t should be white noise. Equations (9), (10), and (11) were fitted to various data sets and the results are presented below. As an experiment, model (10) was also fitted with k going from 1 to q in the last summation, but little difference in conclusions occurred, and so these results will not always be presented.
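As an illustration of how the “efficient test” regression (10) can be computed, here is a sketch in Python using ordinary least squares; the function name and default lag lengths are our own choices, not the paper's:

```python
# Fit y_t = m + A x_t + sum_k a_k dy_{t-k} + sum_k b_k dx_{t-k} + eps_t
# with k = 1..p for dy and k = 0..q for dx, as in equation (10).
import numpy as np

def efficient_test_fit(y, x, p=3, q=2):
    """Return the estimate of A and the residuals eps-hat."""
    dy, dx = np.diff(y), np.diff(x)          # dy[i] = y[i+1] - y[i]
    start = max(p, q) + 1                    # first usable t in levels
    T = len(y)
    cols = [np.ones(T - start), x[start:]]
    for k in range(1, p + 1):                # dy_{t-k}
        cols.append(dy[start - 1 - k:T - 1 - k])
    for k in range(0, q + 1):                # dx_{t-k}
        cols.append(dx[start - 1 - k:T - 1 - k])
    Z = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(Z, y[start:], rcond=None)
    resid = y[start:] - Z @ beta
    return beta[1], resid
```

Under H_A, and with p and q large enough, the returned residuals should be close to white noise, which can be judged from their correlogram as in the applications that follow.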

7. APPLICATION 1: EMPLOYEES' INCOME AND NATIONAL INCOME

The series considered:

y_t = compensation of employees (logs), and
x_t = national income (logs),

both measured in current dollars. The data are quarterly, starting 1947-I, and the series have 138 terms. In this and the other applications, the data were taken from the Citibank Economic Database. The fitted version of equation (9) was

y_t = −0.680 + 1.041x_t + e_t
      (−18.1)  (177.6)

(t-values are shown in brackets, assuming e_t to be white noise) and, similarly, the fitted version of equation (10), with p = q = 3, was

y_t = −0.754 + 1.068x_t − 1.26Δy_{t−1} − 0.028Δy_{t−2} − 0.23Δy_{t−3}
      (−43.7)  (353.7)   (−6.3)       (−0.11)       (−1.10)
         − 1.03Δx_t − 1.09Δx_{t−1} − 1.62Δx_{t−2} + ε_t   (12)
           (−7.27)   (−7.02)       (−11.64)

where Δx_{t−k} = x_{t−k} − x_{t−k−1}. Table 7.1 shows the autocorrelations of Δy_t, Δx_t, e_t, Δe_t, e*_t, ε_t, and ε*_t for lags 1 to 12, where e*_t is the residual from the fitted (9) with all of the coefficients increased by 10%, and similarly ε*_t from (12). The correlograms for x_t, y_t (not shown) stay high and suggest differencing. Δx_t and Δy_t still have positive serial correlation at low lags, but d = 1 appears to be an appropriate identification (columns 1, 2). The residual series e_t from the fitted (9) has a correlogram appropriate for an I(0) series, column (3), but if the parameters are changed upwards by 10%,

Table 7.1. Autocorrelations.

Lag   Δy_t   Δx_t    e_t    Δe_t   e*_t    ε_t    ε*_t
       (1)    (2)    (3)    (4)    (5)    (6)    (7)
 1     .65    .51    .89    .61    .95    .45    .92
 2     .34    .22    .65    .22    .85    .49    .90
 3     .13   −.01    .38   −.13    .74    .41    .85
 4    −.08   −.19    .13   −.48    .65    .37    .83
 5    −.22   −.25   −.02   −.50    .59    .22    .78
 6    −.12   −.17   −.08   −.33    .56    .29    .77
 7    −.06   −.02   −.06   −.13    .55    .30    .75
 8    −.02   −.01   −.02    .01    .55    .13    .71
 9     .02    .12    .03    .16    .55    .17    .69
10     .06    .18    .06    .16    .53    .13    .66
11     .02    .13    .05    .05    .49    .06    .65
12    −.05   −.00    .04   −.10    .47   −.01    .61

(approx. twice standard error is 0.17)

The estimated variances of the residuals are V(e_t) = .00226, V(e*_t) = 0.025, V(ε_t) = .42E−03, V(ε*_t) = .05E−02.

the resulting residuals e*_t have a correlogram, column (5), suggesting that e*_t is I(1). Thus the results of the inefficient test suggest that an error-correction model is appropriate. However, the more complete model (12) does not produce residuals that are white noise; in fact ε_t has considerable temporal structure, suggesting either that the model fails this test or that further lagged values of the differenced series are required. However, it was found that adding further lags made little difference. Changing parameters upwards by 10% again produced errors that appear to be I(1), column 7. The estimates of A in both models are near one, but seem to be statistically greater than one. The tests thus seem somewhat inconclusive; the error correction model is not rejected but neither is it strongly supported. Using GNP instead of national income gave similar results.

The model in (12) was re-estimated using Δx_{t−j}, j = 1, 2, 3, instead of Δx_{t−j}, j = 0, 1, 2, but the results in Table 7.1 changed very little. The estimated model became

y_t = −0.743 + 1.064x_t − 0.173Δy_{t−1} − 3.529Δy_{t−2} + 0.001Δy_{t−3}
      (32.7)   (−0.8)      (−2.0)                        (0.004)
         − 1.60Δx_{t−1} − 1.43Δx_{t−2} − 1.13Δx_{t−3} + ε_t
           (−10.5)        (−6.8)        (−7.6)

A form of equation (11) was also fitted, giving

(y_t − 0.901x_t) = 0.002 + 1.002(y_{t−1} − 0.901x_{t−1}) − 1.054(x_t − x_{t−1}) + ε_t
        (39.1)     (8.8)   (103.0)                        (−15.7)

The t-statistics are seen to be very large, and the estimated model can effectively be rewritten

y_t − y_{t−1} = −1.10(x_t − x_{t−1}) + ε_t

which does not support the error-correction formulation. The residual ε_t has variance 0.14E−03 and estimated serial correlations r_1 = 0.63, r_2 = 0.36, r_3 = 0.21, |r_k| ≤ 0.1 for k > 3. This is the best-fitting model of those estimated; it gives a stationary model in differences, but does not find error-correction relevant.

8. APPLICATION 2. M3 AND GNP

Here

y_t = M3, money supply (logs), and
x_t = GNP (logs).

The data are quarterly, starting in 1959-I, and there are 90 observations. The simple model estimated was

y_t = 0.028 + 1.097x_t + e_t   (13)
      (.73)   (198.6)

and the more complicated model, using p = 3, q = 2, was

y_t = 0.09 + 1.081x_t + 0.688Δy_{t−1} + 0.536Δy_{t−2} + 1.41Δy_{t−3}
      (2.6)  (189.4)    (1.46)         (0.91)         (2.98)
         − 0.322Δx_t − 0.226Δx_{t−1} + ε_t   (14)
           (−1.18)     (−0.796)

Once more, the estimates of A are near, but significantly greater than, one.

Table 7.2 shows the estimated autocorrelations of Δy_t, Δx_t, e_t, e*_t, ε_t, and ε*_t, where the starred values are residuals from equations like (13), (14) but with parameter values increased by ten percent.

The evidence of the correlograms suggests that x_t and y_t are both I(1). Residuals from perturbing the coefficients by ten percent, columns 4 and 6, again appear to be I(1). However, the residuals from equations (13) and (14) are not clearly I(0); their correlograms decline, but somewhat slowly, as seen in columns 3 and 5. Adding further lagged differenced variables helps very little. If p = q = 5 is used, the coefficient on Δy_{t−5} is the only one on a differenced variable to have a t-value over two, and the residuals η_t from this equation have the correlogram shown in column 7 of Table 7.2. Thus, the residuals are far from white noise and there is little evidence that an error-correction mechanism is appropriate in this case.

Table 7.2. Autocorrelations.

Lag   Δy_t   Δx_t    e_t    e*_t    ε_t    ε*_t    η_t
       (1)    (2)    (3)    (4)     (5)    (6)     (7)
 1     .77    .25    .90    .95    .88    .95    .89
 2     .51    .15    .77    .89    .74    .89    .76
 3     .26    .11    .66    .84    .63    .85    .67
 4     .13    .16    .55    .80    .54    .81    .56
 5     .06    .07    .43    .78    .47    .77    .46
 6     .05    .08    .31    .70    .37    .72    .39
 7     .12    .12    .22    .65    .29    .67    .30
 8    −.05    .03    .12    .60    .20    .63    .17
 9    −.05    .07    .05    .55    .11    .58    .07
10    −.05    .17   −.01    .50    .01    .54   −.05
11    −.03    .22   −.07    .46   −.11    .49   −.17
12    −.02    .01   −.15    .41   −.23    .45   −.29

(approx. twice standard error is .21)

Variances of residuals: V(e_t) = .79E−03, V(e*_t) = .16E−01, V(ε_t) = .49E−03, V(ε*_t) = .15E−01, V(η_t) = .11E−03.

A model of form (11) was also estimated, giving

(y_t − 1.095x_t) = 0.0015 + 1.035(y_{t−1} − 1.095x_{t−1}) − 0.79(x_t − x_{t−1}) + ε_t
        (90.3)      (7.8)    (33)                          (−9.3)

As 1.035 is not significantly different from 1, this model does not support error-correction. ε_t has variance 0.57E−04, which is the smallest residual variance of the equations fitted, and has serial correlations r_1 = 0.65, r_2 = 0.31, |r_k| < 0.12 for k > 2.

9. APPLICATION 3. PRICES, WAGES AND PRODUCTIVITY IN THE TRANSPORTATION INDUSTRY

Here

y_t = price index, U.S. transportation industry,
x_{1t} = hourly earnings, workers in the transport industry,
x_{2t} = productivity measure, transportation industry.

The data are monthly, starting in 1969, and there are 151 observations. Analysis of the individual series strongly suggested that they are I(1), but the first differences had no temporal structure other than seasonal effects in y_t and x_{1t}.

The simple models fitted were

y_t = 18.58 + 20.04x_{1t} + e_{1t}   (15)
      (15.4)  (109.73)

and

y_t = 54.3 + 21.8x_{1t} − 787.69x_{2t} + e_{2t}   (16)
      (17.9)  (120.90)   (−12.30)

More complicated models are

y_t = 18.8 + 20.0x_{1t} + 0.70Δy_{t−1} + 0.42Δy_{t−2}
      (17.9)  (100.6)    (3.4)          (2.21)
         − 13.8Δx_{1,t} − 8.69Δx_{1,t−1} + ε_{1t}   (17)
           (−3.3)        (−1.9)

and

y_t = 55.08 + 21.95x_{1t} − 810.6x_{2t} + 0.53Δy_{t−1} + 0.25Δy_{t−2}
      (20)    (115)       (−13.9)       (3.87)         (2.01)
         − 17.4Δx_{1,t} − 9.6Δx_{1,t−1} + 673Δx_{2,t} + 599Δx_{2,t−1} + ε_{2t}   (18)
           (−5.38)

It seems that the models relating just prices to wages produce residuals with slowly declining correlograms (columns 1, 4), and so this pair of variables appears not to be co-integrated. Using the three variables produces models that appear to be error-correcting according to the inefficient test (column 2), especially compared to residuals from the perturbed model (column 3). However, adding lagged differences does little to drive the residuals towards white noise, as seen by comparing columns 2 and 5. Adding further differences altered this very little. Unfortunately, the results are again inconclusive. The inefficient procedure suggests an error-correction model could be appropriate if industrial prices are explained by wages and productivity, but the more complicated procedures do not fully support such a conclusion. However, when an equation of form (11) was fitted, a clearer picture occurs. The equation is

(y_t − 24.8x_{1,t} − 94.6x_{2,t}) = −0.199 + 0.941(y_{t−1} − 24.8x_{1,t−1} − 94.6x_{2,t−1})
        (9.0)      (0.44)          (−0.6)   (38.2)
         − 22.4Δx_{1,t} − 104.6Δx_{2,t} + ε_t
           (−6.2)        (−0.44)

ε_t has variance 2.998, which is the smallest achieved by the various models, and has r_1 = −0.2 and all other r_k small, except for r_6 = 0.29 and r_12 = 0.49, suggesting a seasonal component. Here, the terms involving x_{2,t} are no longer significant and 0.941 is significantly less than 1, suggesting that

Table 7.3. Autocorrelations.

Lag   e_{1t}   e_{2t}   e*_{2t}   ε_{1t}   ε_{2t}
       (1)      (2)      (3)      (4)      (5)
 1     .84      .71      .87      .95      .90
 2     .78      .61      .83      .88      .78
 3     .76      .61      .84      .80      .63
 4     .64      .45      .78      .72      .50
 5     .58      .33      .72      .64      .38
 6     .57      .33      .72      .59      .29
 7     .46      .19      .68      .55      .23
 8     .44      .16      .66      .51      .18
 9     .47      .20      .67      .49      .16
10     .38      .08      .61      .45      .15
11     .36      .10      .58      .42      .16
12     .38      .18      .60      .37      .15

(approx. twice standard error is .16)

Variance of residuals: 18.1, 9.04, 23.6, 15.7, 6.74.

an error-correction model may be appropriate. If the model is re-estimated using just x_{1,t}, the same conclusion is reached. On the other hand, if the same model, involving just y_t (price) and x_{1t} (wages), is estimated using logs of the series, the equation achieved is

(log y_t − 3.42 log x_{1t}) = 0.997(log y_{t−1} − 3.42 log x_{1,t−1}) − 3.36Δlog x_{1t} + ε_t
                              (89.5)                                 (−0.49)

where ε_t has r_1 = −0.14, r_6 = 0.15, r_12 = 0.51, and all other r_k small. Thus, error correction is not supported in logs of the variables.

10. CONCLUSIONS

The error-correction mechanism is an interesting way of possibly bringing economic theory into time-series modeling, but in the applications presented here, and also in some others that have not been presented, the “theory” being applied does seem to be too simplistic. The temporal structure of and relationships between the series do not fit simply into the class of models being considered, which are restricted to linear forms (possibly in logs) with time invariant parameters. The tests suggested to help identify error-correction models do appear to have some difficulties and require further study. If the economic theory is believed strongly enough, it may be worth building a model inserting the error-correction term and comparing its results to a model built just on first differences. One further reason for the unsatisfactory results of the applications is that only integer d values were considered. Other d values are briefly discussed in Appendix 1, but a full discussion and the necessary associated empirical work are too lengthy to report here.

APPENDIX 1. FRACTIONAL INTEGRATED SERIES

The results of the first five sections go through without change if d is allowed to take any value, rather than just the values zero and one considered there. The case where d is a fraction has been considered by Granger and Joyeux (1980) and Hosking (1981): x_t ~ I(d) if (1 − B)^d x_t can be modeled as an ARMA(p, q) process with finite, integer p, q. If d is a fraction, (1 − B)^d can only be realized as a specific power series in B. Such models can arise from the aggregation of dynamic components with different parameters; see Granger (1981). It can be shown that x_t has finite variance if d < 1/2, but infinite variance if d ≥ 1/2.

If x_t, y_t are both I(d) and generated by (1), (2) for any d, then z_t = x_t − Ay_t will be I(0), and so x_t, y_t will be co-integrated. The identification test based on the cross-spectrum, discussed in Section 6, is still relevant in this more general case.
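For concreteness, the power series for (1 − B)^d with fractional d can be generated with the standard binomial recursion. The following short sketch is our illustration, not part of the original appendix:

```python
# Weights pi_k in (1 - B)^d = sum_k pi_k B^k, via the recursion
# pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k.
import numpy as np

def frac_diff_weights(d, n):
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

# For d = 0.4 the weights decay hyperbolically rather than truncating,
# which is why an I(d) series cannot be produced by integer differencing.
print(frac_diff_weights(0.4, 8))
```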

APPENDIX 2. ERROR CORRECTION AND SEASONALITY

A popular class of univariate models for series with seasonal components is that introduced by Box and Jenkins (1970), of the form

(1 − B)^d (1 − B^s)^{d_s} a_1(B) a_2(B^s) x_t = b_1(B) b_2(B^s) ε_t   (A2.1)

where ε_t is white noise, a_1(B), b_1(B) are polynomials in B, and a_2(B^s), b_2(B^s) are polynomials in B^s, where s is the length of the seasonal, so that s = 12 if monthly data are used. The model is completed by adding appropriate starting-up values, containing the typical seasonal shape. One problem with this model is that if it is used to generate a series, although the series will have the correct seasonal shape in early years, it will eventually drift away from this shape. As many economic series have a varying seasonal, but one that varies about a fairly consistent shape, the model is clearly not completely satisfactory, except in the short run. A method of improving the model is to add an error-correcting term such as A(x_t − S_t), where S_t is a strongly seasonal series having the correct constant underlying shape.
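A small simulation sketch of this suggestion, with all settings being illustrative assumptions: a seasonally differenced series drifts away from its initial seasonal shape, while the same series with an added error-correcting pull A(x_t − S_t) stays anchored to the shape S_t.

```python
import numpy as np

rng = np.random.default_rng(2)
s, years = 12, 40
T = s * years
S = np.tile(np.sin(2 * np.pi * np.arange(s) / s), years)  # fixed shape

x_plain = np.zeros(T)          # (1 - B^s)x_t = e_t: seasonal random walk
x_ec = np.zeros(T)             # same innovations, plus pull toward S_t
x_plain[:s] = S[:s]
x_ec[:s] = S[:s]
A = 0.3                        # strength of the error-correcting term
for t in range(s, T):
    e = rng.normal(scale=0.3)
    x_plain[t] = x_plain[t - s] + e
    x_ec[t] = x_ec[t - s] - A * (x_ec[t - s] - S[t - s]) + e

# Root mean square deviation from the seasonal shape in the final year:
# the plain model has drifted, the error-corrected one has not.
print(np.std(x_plain[-s:] - S[-s:]), np.std(x_ec[-s:] - S[-s:]))
```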

REFERENCES

Box, G. E. P., and Jenkins, G. M. (1970). Time Series Analysis, Forecasting and Control. Holden Day, San Francisco.
Currie, D. (1981). The Economic Journal 363.
Davidson, J., Hendry, D., Srba, F., and Yeo, S. (1978). Economic Journal 88, 661.
Dawson, A. (1981). Applied Economics 3, 351.
Granger, C. W. J. (1981). Journal of Econometrics 16, 121.
Granger, C. W. J., and Joyeux, R. (1980). Journal of Time Series Analysis 1, 15.
Granger, C. W. J., and Newbold, P. (1977). Forecasting Economic Time Series. Academic Press, New York.
Hendry, D., and von Ungern-Sternberg, T. (1980). In A. Deaton (ed.), Essays in the Theory and Measurement of Consumers' Behaviour. Cambridge University Press.
Hosking, J. R. M. (1981). Biometrika 68, 165.
Sargan, J. D. (1964). In P. E. Hart, G. Mills, and J. K. Whittaker (eds.), Econometric Analysis for National Economic Planning. Butterworth, London.

CHAPTER 8

Co-Integration and Error Correction: Representation, Estimation, and Testing*

Robert F. Engle and C. W. J. Granger**

The relationship between co-integration and error correction models, first suggested in Granger (1981), is here extended and used to develop estimation procedures, tests, and empirical examples.

If each element of a vector of time series x_t first achieves stationarity after differencing, but a linear combination α′x_t is already stationary, the time series x_t are said to be co-integrated with co-integrating vector α. There may be several such co-integrating vectors, so that α becomes a matrix.

Interpreting α′x_t = 0 as a long run equilibrium, co-integration implies that deviations from equilibrium are stationary, with finite variance, even though the series themselves are nonstationary and have infinite variance. The paper presents a representation theorem based on Granger (1983), which connects the moving average, autoregressive, and error correction representations for co-integrated systems. A vector autoregression in differenced variables is incompatible with these representations. Estimation of these models is discussed and a simple but asymptotically efficient two-step estimator is proposed. Testing for co-integration combines the problems of unit root tests and tests with parameters unidentified under the null. Seven statistics are formulated and analyzed. The critical values of these statistics are calculated based on a Monte Carlo simulation. Using these critical values, the power properties of the tests are examined and one test procedure is recommended for application. In a series of examples it is found that consumption and income are co-integrated, wages and prices are not, short and long interest rates are, and nominal GNP is co-integrated with M2, but not M1, M3, or aggregate liquid assets.

* Econometrica, 55, 1987, 251–276.
** The authors are indebted to David Hendry and Sam Yoo for many useful conversations and suggestions as well as to Gene Savin, David Dickey, Alok Bhargava, and Marco Lippi. Two referees provided detailed constructive criticism, and thanks go to Yoshi Baba, Sam Yoo, and Alvaro Escribano who creatively carried out the simulations and examples. Financial support was provided by NSF SES-80-08580 and SES-82-08626. A previous version of this paper was entitled “Dynamic Model Specification with Equilibrium Constraints: Co-integration and Error Correction.”

Keywords: Co-integration, vector autoregression, unit roots, error correction, multivariate time series, Dickey-Fuller tests.

1. INTRODUCTION

An individual economic variable, viewed as a time series, can wander extensively and yet some pairs of series may be expected to move so that they do not drift too far apart. Typically economic theory will propose forces which tend to keep such series together. Examples might be short and long term interest rates, capital appropriations and expenditures, household income and expenditures, and prices of the same commodity in different markets or close substitutes in the same market. A similar idea arises from considering equilibrium relationships, where equilibrium is a stationary point characterized by forces which tend to push the economy back toward equilibrium whenever it moves away. If x_t is a vector of economic variables, then they may be said to be in equilibrium when the specific linear constraint

α′x_t = 0

occurs. In most time periods, x_t will not be in equilibrium and the univariate quantity

z_t = α′x_t

may be called the equilibrium error. If the equilibrium concept is to have any relevance for the specification of econometric models, the economy should appear to prefer a small value of z_t rather than a large value. In this paper, these ideas are put onto a firm basis and it is shown that a class of models, known as error-correcting, allows long-run components of variables to obey equilibrium constraints while short-run components have a flexible dynamic specification. A condition for this to be true, called co-integration, was introduced by Granger (1981) and Granger and Weiss (1983) and is precisely defined in the next section. Section 3 discusses several representations of co-integrated systems, Section 4 develops estimation procedures, and Section 5 develops tests. Several applications are presented in Section 6 and conclusions are offered in Section 7. A particularly simple example of this class of models is shown in Section 4, and it might be useful to examine it for motivating the analysis of such systems.

2. INTEGRATION, CO-INTEGRATION, AND ERROR CORRECTION

It is well known from Wold's theorem that a single stationary time series with no deterministic components has an infinite moving average representation which is generally approximated by a finite autoregressive moving average process. See, for example, Box and Jenkins (1970) or

Granger and Newbold (1977). Commonly, however, economic series must be differenced before the assumption of stationarity can be presumed to hold. This motivates the following familiar definition of integration:

Definition: A series with no deterministic component which has a stationary, invertible ARMA representation after differencing d times is said to be integrated of order d, denoted x_t ~ I(d).

For ease of exposition, only the values d = 0 and d = 1 will be considered in much of the paper, but many of the results can be generalized to other cases, including the fractional difference model. Thus, for d = 0, x_t will be stationary, and for d = 1 the change is stationary. There are substantial differences in appearance between a series that is I(0) and another that is I(1). For more discussion see, for example, Feller (1968) or Granger and Newbold (1977).

(a) If x_t ~ I(0) with zero mean, then (i) the variance of x_t is finite; (ii) an innovation has only a temporary effect on the value of x_t; (iii) the spectrum of x_t, f(ω), has the property 0 < f(0) < ∞; (iv) the expected length of time between crossings of x = 0 is finite; (v) the autocorrelations, r_k, decrease steadily in magnitude for large enough k, so that their sum is finite.

(b) If x_t ~ I(1) with x_0 = 0, then (i) the variance of x_t goes to infinity as t goes to infinity; (ii) an innovation has a permanent effect on the value of x_t, as x_t is the sum of all previous changes; (iii) the spectrum of x_t has the approximate shape f(ω) ~ Aω^{−2} for small ω, so that in particular f(0) = ∞; (iv) the expected time between crossings of x = 0 is infinite; (v) the theoretical autocorrelations r_k → 1 for all k as t → ∞.

The theoretical infinite variance for an I(1) series comes completely from the contribution of the low frequencies, or long run part of the series. Thus an I(1) series is rather smooth, having dominant long swings, compared to an I(0) series. Because of the relative sizes of the variances, it is always true that the sum of an I(0) and an I(1) will be I(1). Further, if a and b are constants, b ≠ 0, and if x_t ~ I(d), then a + bx_t is also I(d). If x_t and y_t are both I(d), then it is generally true that the linear combination

z_t = x_t − ay_t

will also be I(d). However, it is possible that z_t ~ I(d − b), b > 0. When this occurs, a very special constraint operates on the long-run components of the series. Consider the case d = b = 1, so that x_t, y_t are both I(1) with dominant long run components, but z_t is I(0) without especially strong low frequencies. The constant a is therefore such that the bulk of the long run components of x_t and y_t cancel out. For a = 1, the vague idea that x_t and y_t cannot drift too far apart has been translated into the more precise statement that “their difference will be I(0).” The use of the constant a merely suggests that some scaling needs to be used before the

I(0) difference can be achieved. It should be noted that it will not generally be true that there is an a which makes z_t ~ I(0). An analogous case, considering a different important frequency, is when x_t and y_t are a pair of series, each having an important seasonal component, yet there is an a so that the derived series z_t has no seasonal. Clearly this could occur, but it might be considered unlikely. To formalize these ideas, the following definition, adapted from Granger (1981) and Granger and Weiss (1983), is introduced:

Definition: The components of the vector x_t are said to be co-integrated of order d, b, denoted x_t ~ CI(d, b), if (i) all components of x_t are I(d); (ii) there exists a vector α (≠ 0) so that z_t = α′x_t ~ I(d − b), b > 0. The vector α is called the co-integrating vector.

Continuing to concentrate on the d = 1, b = 1 case, co-integration would mean that if the components of x_t were all I(1), then the equilibrium error would be I(0); z_t will rarely drift far from zero if it has zero mean, and z_t will often cross the zero line. Putting this another way, it means that equilibrium will occasionally occur, at least to a close approximation, whereas if x_t were not co-integrated, then z_t can wander widely and zero-crossings would be very rare, suggesting that in this case the equilibrium concept has no practical implications. The reduction in the order of integration implies a special kind of relationship with interpretable and testable consequences. If, however, all the elements of x_t are already stationary, so that they are I(0), then the equilibrium error z_t has no distinctive property if it is I(0). It could be that z_t ~ I(−1), so that its spectrum is zero at zero frequency, but if any of the variables have measurement error this property in general cannot be observed, and so this case is of little realistic interest. When interpreting the co-integration concept it might be noted that in the N = 2, d = b = 1 case, Granger and Weiss (1983) show that a necessary and sufficient condition for co-integration is that the coherence between the two series is one at zero frequency.

If x_t has N components, then there may be more than one co-integrating vector α. It is clearly possible for several equilibrium relations to govern the joint behavior of the variables. In what follows, it will be assumed that there are exactly r linearly independent co-integrating vectors, with r ≤ N − 1, which are gathered together into the N × r array α. By construction the rank of α will be r, which will be called the “co-integrating rank” of x_t. The close relationship between co-integration and error correcting models will be developed in the balance of the paper. Error correction mechanisms have been used widely in economics. Early versions are Sargan (1964) and Phillips (1957). The idea is simply that a proportion of the disequilibrium from one period is corrected in the next period.

For example, the change in price in one period may depend upon the degree of excess demand in the previous period. Such schemes can be derived as optimal behavior with some types of adjustment costs or incomplete information. Recently, these models have seen great interest following the work of Davidson, Hendry, Srba, and Yeo (1978) (DHSY), Hendry and von Ungern-Sternberg (1980), Currie (1981), Dawson (1981), and Salmon (1982), among others. For a two variable system a typical error correction model would relate the change in one variable to past equilibrium errors, as well as to past changes in both variables. For a multivariate system we can define a general error correction representation in terms of B, the backshift operator, as follows.

Definition: A vector time series x_t has an error correction representation if it can be expressed as:

A(B)(1 − B)x_t = −γz_{t−1} + u_t

where u_t is a stationary multivariate disturbance, with A(0) = I, A(1) having all elements finite, z_t = α′x_t, and γ ≠ 0. In this representation, only the disequilibrium in the previous period is an explanatory variable. However, by rearranging terms, any set of lags of the z can be written in this form; it therefore permits any type of gradual adjustment toward a new equilibrium. A notable difference between this definition and most of the applications which have occurred is that this is a multivariate definition which does not rest on exogeneity of a subset of the variables. The notion that one variable may be weakly exogenous in the sense of Engle, Hendry, and Richard (1983) may be investigated in such a system, as briefly discussed below. A second notable difference is that α is taken to be an unknown parameter vector rather than a set of constants given by economic theory.

3. PROPERTIES OF CO-INTEGRATED VARIABLES AND THEIR REPRESENTATIONS

Suppose that each component of x_t is I(1), so that the change in each component is a zero mean purely nondeterministic stationary stochastic process. Any known deterministic components can be subtracted before the analysis is begun. It follows that there will always exist a multivariate Wold representation:

(1 − B)x_t = C(B)ε_t,   (3.1)

taken to mean that both sides will have the same spectral matrix. Further, C(B) will be uniquely defined by the conditions that the function det[C(z)], z = e^{iω}, have all zeroes on or outside the unit circle, and that

C(0) = I_N, the N × N identity matrix (see Hannan (1970, p. 66)). In this representation the ε_t are zero mean white noise vectors with

E[ε_tε_s′] = 0, t ≠ s,
          = G, t = s,

so that only contemporaneous correlations can occur. The moving average polynomial C(B) can always be expressed as

C(B) = C(1) + (1 − B)C*(B)   (3.2)

by simply rearranging the terms. If C(B) is of finite order, then C*(B) will be of finite order. If C*(1) is identically zero, then a similar expression involving (1 − B)² can be defined. The relationship between error correction models and co-integration was first pointed out in Granger (1981). A theorem showing precisely that co-integrated series can be represented by error correction models was originally stated and proved in Granger (1983). The following version is therefore called the Granger Representation Theorem. Analysis of related but more complex cases is covered by Johansen (1985) and Yoo (1985).

Granger Representation Theorem: If the N × 1 vector x_t given in (3.1) is co-integrated with d = 1, b = 1 and with co-integrating rank r, then:

(1) C(1) is of rank N − r.

(2) There exists a vector ARMA representation

A(B)x_t = d(B)ε_t   (3.3)

with the properties that A(1) has rank r, d(B) is a scalar lag polynomial with d(1) finite, and A(0) = I_N. When d(B) = 1, this is a vector autoregression.

(3) There exist N × r matrices α, γ, of rank r, such that

α′C(1) = 0,   C(1)γ = 0,   A(1) = γα′.

(4) There exists an error correction representation with z_t = α′x_t, an r × 1 vector of stationary random variables:

A*(B)(1 − B)x_t = −γz_{t−1} + d(B)ε_t   (3.4)

with A*(0) = I_N.

(5) The vector zt is given by

z_t = K(B)ε_t,   (3.5)

(1 − B)z_t = −α′γz_{t−1} + J(B)ε_t,   (3.6)

where K(B) is an r × N matrix of lag polynomials given by α′C*(B), with all elements of K(1) finite and of rank r, and det(α′γ) > 0.

(6) If a finite vector autoregressive representation is possible, it will have the form given by (3.3) and (3.4) above, with d(B) = 1 and both A(B) and A*(B) matrices of finite polynomials.

In order to prove the Theorem, the following lemma on determinants and adjoints of singular matrix polynomials is needed.

Lemma 1: If G(λ) is a finite valued N × N matrix polynomial on λ ∈ [0, 1], with rank G(0) = N − r for 0 ≤ r ≤ N, and if G*(0) ≠ 0 in

G(λ) = G(0) + λG*(λ),

then

(i) det(G(λ)) = λ^r g(λ), with g(0) finite;
(ii) Adj(G(λ)) = λ^{r−1}H(λ),

where I_N is the N × N identity matrix, rank(H(0)) ≤ r, and H(0) is finite.

Proof: The determinant of G can be expressed as a power series in λ:

det(G(λ)) = Σ_{i=0}^{∞} d_i λ^i.

Each d_i is a sum of a finite number of products of elements of G(λ) and therefore is itself finite valued. Each has some terms from G(0) and some from λG*(λ). Any product with more than N − r terms from G(0) will be zero, because this would be the determinant of a submatrix of larger order than the rank of G(0). The only possible non-zero terms will have r or more terms from λG*(λ) and therefore will be associated with powers of λ of r or more. The first possible nonzero d_i is d_r. Defining

g(λ) = Σ_{i=r}^{∞} d_i λ^{i−r}

establishes the first part of the lemma, since d_r must be finite. To establish the second statement, express the adjoint matrix of G in a power series in λ:

Adj G(λ) = Σ_{i=0}^{∞} H_i λ^i.

Since the adjoint is a matrix composed of elements which are determinants of order N − 1, the above argument establishes that the first r − 1 terms must be identically zero. Thus

Adj G(λ) = Σ_{i=r−1}^{∞} H_i λ^i = λ^{r−1}H(λ).

Because the elements of H_{r−1} are products of finitely many finite numbers, H(0) must be finite. The product of a matrix and its adjoint will always give the determinant, so:

λ^r g(λ)I_N = (G(0) + λG*(λ))λ^{r−1}H(λ),

that is,

λg(λ)I_N = G(0)H(λ) + λG*(λ)H(λ).

Equating powers of λ, we get

G(0)H(0) = 0.

Thus the rank of H(0) must be less than or equal to r, as it lies entirely in the column null space of the rank N − r matrix G(0). If r = 1, the first term in the expression for the adjoint will simply be the adjoint of G(0), which will have rank 1 since G(0) has rank N − 1. Q.E.D.

Proof of Granger Representation Theorem: The conditions of the Theorem suppose the existence of a Wold representation as in (3.1) for an N vector of random variables x_t which are co-integrated. Suppose the co-integrating vector is α, so that

z_t = α′x_t

is an r-dimensional stationary purely nondeterministic time series with invertible moving average representation. Multiplying α′ into the moving average representation in (3.1) gives

(1 − B)z_t = α′C(1)ε_t + (1 − B)α′C*(B)ε_t.

For z_t to be I(0), α′C(1) must equal 0. Any vector with this property will be a co-integrating vector; therefore C(1) must have rank N − r, with a null space containing all co-integrating vectors. It also follows that α′C*(B) must be an invertible moving average representation, and in particular α′C*(1) ≠ 0; otherwise the co-integration would be with b = 2 or higher. Statement (2) is established using Lemma 1, letting λ = (1 − B), G(λ) = C(B), H(λ) = A(B), and g(λ) = d(B). Since C(B) has full rank and equals I_N at B = 0, its inverse is A(0), which is also I_N. Statement (3) follows from recognition that A(1) has rank between 1 and r and lies in the null space of C(1). Since α spans this null space, A(1) can be written as linear combinations of the co-integrating vectors:

A(1) = γα′.

Statement (4) follows by manipulation of the autoregressive structure. Writing A(B) = Ã(B)(1 − B) + A(1)B and rearranging terms in (3.3) gives:

[Ã(B) + A(1)](1 − B)x_t = −A(1)x_{t−1} + d(B)ε_t,

A*(B)(1 − B)x_t = −γz_{t−1} + d(B)ε_t,

A*(0) = A(0) = I_N. The fifth condition follows from direct substitution in the Wold representation. The definition of co-integration implies that this moving average be stationary and invertible. Rewriting the error correction representation with A*(B) = I + A**(B), where A**(0) = 0, and premultiplying by α′ gives:

(1 − B)z_t = −α′γz_{t−1} + [α′d(B) − α′A**(B)C(B)]ε_t

= −α′γz_{t−1} + J(B)ε_t.

For this to be equivalent to the stationary moving average representation, the autoregression must be invertible. This requires that det(α′γ) > 0. If the determinant were zero then there would be at least one unit root, and if the determinant were negative, then for some value of w between zero and one,

det(I_r − (I_r − α′γ)w) = 0,

implying a root inside the unit circle. Condition six follows by repeating the previous steps, setting d(B) = 1. Q.E.D.

Stronger results can be obtained by further restrictions on the multiplicity of roots in the moving average representations. For example, Yoo (1985), using Smith–Macmillan forms, finds conditions which establish that d(1) ≠ 0, that A*(1) is of full rank, and that facilitate the transformation from error correction models to co-integrated models. However, the results given above are sufficient for the estimation and testing problems addressed in this paper. The autoregressive and error correction representations given by (3.3) and (3.4) are closely related to the vector autoregressive models so commonly used in econometrics, particularly in the case when d(B) can reasonably be taken to be 1. However, each differs in an important fashion from typical VAR applications. In the autoregressive representation

A(B)x_t = ε_t,

the co-integration of the variables x_t generates a restriction which makes A(1) singular. For r = 1, this matrix will only have rank 1. The analysis of such systems from an innovation accounting point of view is treacherous, as some numerical approaches to calculating the moving average representation are highly unstable. The error correction representation

A*(B)(1 − B)x_t = −γα′x_{t−1} + ε_t

looks more like a standard vector autoregression in the differences of the data. Here the co-integration is implied by the presence of the levels of the variables, so a pure VAR in differences will be misspecified if the variables are co-integrated. Thus vector autoregressions estimated with co-integrated data will be misspecified if the data are differenced, and will have omitted important constraints if the data are used in levels. Of course, these constraints will be satisfied asymptotically, but efficiency gains and improved multistep forecasts may be achieved by imposing them.

As x_t ~ I(1) and z_t ~ I(0), it should be noted that all terms in the error correction models are I(0). The converse also holds; if x_t ~ I(1) are generated by an error correction model, then x_t is necessarily co-integrated. It may also be noted that if x_t ~ I(0), the generation process can always be written in the error correction form, and so in this case the equilibrium concept has no impact. As mentioned above, typical empirical examples of error correcting behavior are formulated as the response of one variable, the dependent variable, to shocks of another, the independent variable. In this paper all the variables are treated as jointly endogenous; nevertheless the structure of the model may imply various Granger causal orderings and weak and strong exogeneity conditions, as in Engle, Hendry, and Richard (1983). For example, a bivariate co-integrated system must have a causal ordering in at least one direction. Because the z's must include both variables and γ cannot be identically zero, they must enter into one or both of the equations. If the error correction term enters into both equations, neither variable can be weakly exogenous for the parameters of the other equation because of the cross equation restriction. The notion of co-integration can in principle be extended to series with trends or explosive autoregressive roots. In these cases the co-integrating vector would still be required to reduce the series to stationarity. Hence the trends would have to be proportional and any explosive roots would have to be identical for all the series. We do not consider these cases in this paper and recognize that they may complicate the estimation and testing problems.

4. ESTIMATING CO-INTEGRATED SYSTEMS

In defining different forms for co-integrated systems, several estimation procedures have been implicitly discussed. Most convenient is the error correction form (particularly if it can be assumed that there is no moving average term). There remain cross-equation restrictions involving the parameters of the co-integrating vectors, and therefore the maximum likelihood estimator, under Gaussian assumptions, requires an iterative procedure. In this section, we will propose another estimator, which is a two step estimator. In the first step the parameters of the co-integrating vector are estimated, and in the second these are used in the error correction form. Both steps require only single equation least squares, and it will be shown that the result is consistent for all the parameters. The procedure is far more convenient because the dynamics do not need to be specified until the error correction structure has been estimated. As a byproduct we obtain some test statistics useful for testing for co-integration. From (3.5) the sample moment matrix of the data can be directly expressed. Let the moment matrix, divided by T², be denoted by:

M_T = T^{−2} Σ_t x_t x_t′.

Recalling that z_t = α′x_t, (3.5) implies that

α′M_T = T^{−2} Σ_t [K(B)ε_t] x_t′.

Following the argument of Dickey and Fuller (1979) or Stock (1984), it can be shown that for processes satisfying (3.1),

lim_{T→∞} E(M_T) = M, a finite nonzero matrix,   (4.1)

and

α′M = 0, or (I ⊗ M) vec α = 0.   (4.2)

Although the moment matrix of data from a co-integrated process will be nonsingular for any sample, in the limit it will have rank N − r. This accords well with the common observation that economic time series data are highly collinear, so that moment matrices may be nearly singular even when samples are large. Co-integration appears to be a plausible hypothesis from a data analytic point of view. Equations (4.2) do not uniquely define the co-integrating vectors unless arbitrary normalizations are imposed. Let q and Q be arrays which incorporate these normalizations by reparametrizing α into θ, a j × 1 matrix of unknown parameters which lie in a compact subset of R^j:

vec α = q + Qθ.   (4.3)

Typically q and Q will be all zeros and ones, thereby defining one coefficient in each column of α to be unity and defining rotations if r > 1. The parameters θ are said to be “identified” if there is a unique solution to (4.2), (4.3). This solution is given by

(I ⊗ M)Qθ = −(I ⊗ M)q,   (4.4)

where, by the assumption of identification, (I ⊗ M)Q has a left inverse even though M does not.

As the moment matrix M_T will have full rank for finite samples, a reasonable approach to estimation is to minimize the sum of squared deviations from equilibrium. In the case of a single co-integrating vector, α̂ will minimize α′M_Tα subject to any restrictions such as (4.3), and the result will be simply ordinary least squares. For multiple co-integrating vectors, define α̂ as the minimizer of tr(α′M_Tα). The estimation problem becomes:

Min_{s.t. (4.3)} tr(α′M_Tα) = Min_{s.t. (4.3)} (vec α)′(I ⊗ M_T)(vec α)
                            = Min_θ (q + Qθ)′(I ⊗ M_T)(q + Qθ),

which implies the solution

θ̂ = −[Q′(I ⊗ M_T)Q]^{−1} Q′(I ⊗ M_T)q,   vec α̂ = q + Qθ̂.   (4.5)

This approach to estimation should provide a very good approximation to the true co-integrating vector, because it is seeking vectors with minimal residual variance, and asymptotically all linear combinations of x_t will have infinite variance except those which are co-integrating vectors. When r = 1 this estimate is obtained simply by regressing the variable normalized to have a unit coefficient upon the other variables. This regression will be called the “co-integrating regression,” as it attempts to fit the long run or equilibrium relationship without worrying about the dynamics. It will be shown to provide an estimate of the elements of the co-integrating vector. Such a regression has been pejoratively called a “spurious” regression by Granger and Newbold (1974), primarily because the standard errors are highly misleading. They were particularly concerned about the non-co-integrated case where there was no relationship, but the unit root in the error process led to a low Durbin-Watson, a high R², and apparently high significance of the coefficients. Here we only seek coefficient estimates to use in the second stage and for tests of the equilibrium relationship. The distribution of the estimated coefficients is investigated in Stock (1984). When N = 2, there are two possible regressions depending on the normalization chosen. The nonuniqueness of the estimate derives from the well known fact that the least squares fit of a reverse regression will not give the reciprocal of the coefficient in the forward regression. In this case, however, the normalization matters very little. As the moment matrix approaches singularity, the R² approaches 1, which is the product of the forward and reverse regression coefficients. This would be exactly true if there were only two data points, which, of course, defines a singular matrix. For variables which are trending together, the correlation approaches one as each variance approaches infinity. The regression line passes nearly through the extreme points, almost as if there were just two observations. Stock (1984) in Theorem 3 proves the following proposition:

Proposition 1: Suppose that x_t satisfies (3.1) with C*(B) absolutely summable, that the disturbances have finite fourth absolute moments, and that x_t is co-integrated (1, 1) with r co-integrating vectors satisfying (4.3) which identify θ. Then, defining θ̂ by (4.5),

T^{1−δ}(θ̂ − θ) →^p 0, for δ > 0.   (4.6)

The proposition establishes that the estimated parameters converge very rapidly to their probability limits. It also establishes that the estimates are consistent, with a finite sample bias of order 1/T. Stock presents some Monte Carlo examples to show that these biases may be important for small samples and gives expressions for calculating the limiting distribution of such estimates. The two step estimator proposed for this co-integrated system uses the estimate of α from (4.5) as a known parameter in estimating the error correction form of the system of equations. This substantially simplifies the estimation procedure by imposing the cross-equation restrictions and allows specification of the individual equation dynamic patterns separately. Notice that the dynamics did not have to be specified in order to estimate α. Surprisingly, this two-step estimator has excellent properties; as shown in the Theorem below, it is just as efficient as the maximum likelihood estimator based on the known value of α.

Theorem 2: The two-step estimator of a single equation of an error correction system, obtained by taking α̂ from (4.5) as the true value, will have the same limiting distribution as the maximum likelihood estimator using the true value of α. Least squares standard errors will be consistent estimates of the true standard errors.

Proof: Rewrite the first equation of the error correction system (3.4) as

y_t = γẑ_{t−1} + W_tβ + ε_t + γ(z_{t−1} − ẑ_{t−1}),

z_t = X_tα,   ẑ_t = X_tα̂,

where X_t = x_t′, W is an array with selected elements of Δx_{t−i}, and y is an element of Δx_t, so that all regressors are I(0). Then, letting the same variables without subscripts denote data arrays,

√T [(γ̂ − γ), (β̂ − β)′]′ = [(ẑ, W)′(ẑ, W)/T]^{−1} [(ẑ, W)′(ε + γ(z − ẑ))/√T].

This expression simplifies because ẑ′(z − ẑ) = 0. From Fuller (1976) or Stock (1984), X′X/T² and X′W/T are both of order 1. Rewriting,

W′(z − ẑ)/T = [W′X/T][T(α − α̂)][1/T],

and therefore the first and second factors to the right of the equal sign are of order 1 and the third goes to zero, so that the entire expression vanishes asymptotically. Because the terms in (z − ẑ)/T vanish asymptotically, least squares standard errors will be consistent. Letting S = plim [(ẑ, W)′(ẑ, W)/T],

√T [(γ̂ − γ), (β̂ − β)′]′ →^A D(0, σ²S^{−1}),

where D represents the limiting distribution. Under additional but standard assumptions, this could be guaranteed to be normal. To establish that the estimator using the true value of α has the same limiting distribution, it is sufficient to show that the probability limit of [(z, W)′(z, W)/T] is also S and that z′ε/√T has the same limiting distribution as ẑ′ε/√T. Examining the off diagonal terms of S first,

ẑ′W/T − z′W/T = −[T(α − α̂)′][X′W/T](1/T).

The first and second factors are of order 1 and the third is 1/T, so the entire expression vanishes asymptotically. Similarly,

(z − ẑ)′(z − ẑ)/T = z′z/T − ẑ′ẑ/T = [T(α̂ − α)′][X′X/T²][T(α̂ − α)](1/T).

Again, the first three factors are of order 1 and the last is 1/T, so even though the difference between these covariance matrices is positive definite, it will vanish asymptotically. Finally,

(z − ẑ)′ε/√T = −[T(α̂ − α)′][X′ε/T](1/√T),

which again vanishes asymptotically. Under standard conditions the estimator using knowledge of α will be asymptotically normal, and therefore the two-step estimator will also be asymptotically normal under these conditions. This completes the proof. Q.E.D.

A simple example will illustrate many of these points and motivate the approach to testing described in the next section. Suppose there are two series, x_{1t} and x_{2t}, which are jointly generated as a function of possibly correlated white noise disturbances ε_{1t} and ε_{2t} according to the following model:

x_{1t} + βx_{2t} = u_{1t},   u_{1t} = u_{1t−1} + ε_{1t},   (4.7)

x_{1t} + αx_{2t} = u_{2t},   u_{2t} = ρu_{2t−1} + ε_{2t},   |ρ| < 1.   (4.8)

Clearly the parameters α and β are unidentified in the usual sense, as there are no exogenous variables and the errors are contemporaneously correlated. The reduced form for this system will make x_{1t} and x_{2t} linear combinations of u_{1t} and u_{2t}, and therefore both will be I(1). The second equation describes a particular linear combination of the random variables which is stationary. Hence x_{1t} and x_{2t} are CI(1, 1), and the question is whether it would be possible to detect this and estimate the parameters from a data set.

Surprisingly, this is easy to do. A linear least squares regression of x_{1t} on x_{2t} produces an excellent estimate of α. This is the “co-integrating regression.” All linear combinations of x_{1t} and x_{2t} except that defined in equation (4.8) will have infinite variance, and therefore least squares is easily able to estimate α. The correlation between x_{2t} and u_{2t}, which causes the simultaneous equations bias, is of a lower order in T than the variance of x_{2t}. In fact the reverse regression of x_{2t} on x_{1t} has exactly the same property and thus gives a consistent estimate of 1/α. These estimators converge even faster to the true value than standard econometric estimates.

Δx1t = βδx1,t-1 + αβδx2,t-1 + η1t, (4.9)

Δx2t = −δx1,t-1 − αδx2,t-1 + η2t, (4.10)

where the η's are linear combinations of the ε's. The error correction representation becomes:

Δx1t = βδzt-1 + η1t, (4.11)

Δx2t = −δzt-1 + η2t, (4.12)

where zt = x1t + αx2t. There are three unknown parameters but the autoregressive form apparently has four unknown coefficients while the error correction form has two. Once α is known there are no longer constraints in the error correction form, which motivates the two-step estimator. Notice that if ρ → 1, the series are correlated random walks but are no longer co-integrated.
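To make the example concrete, the following simulation sketch (in Python; it is our illustration, not part of the original paper) generates the system (4.7)-(4.8) under the hypothetical parameter choices α = 1, β = 2, ρ = 0.8 and runs the co-integrating regression. The slope estimate settles very close to −α despite the simultaneity, illustrating the fast convergence discussed above.

    import numpy as np

    rng = np.random.default_rng(0)
    T = 10_000
    alpha, beta, rho = 1.0, 2.0, 0.8     # illustrative values, not from the paper

    e1, e2 = rng.standard_normal(T), rng.standard_normal(T)
    u1 = np.cumsum(e1)                   # u1 is a random walk, so I(1)
    u2 = np.zeros(T)                     # u2 is a stationary AR(1), so I(0)
    for t in range(1, T):
        u2[t] = rho * u2[t - 1] + e2[t]

    # Solve x1 + beta*x2 = u1 and x1 + alpha*x2 = u2 for the observables.
    x2 = (u1 - u2) / (beta - alpha)
    x1 = u1 - beta * x2

    # Co-integrating regression of x1 on x2: the slope estimates -alpha,
    # since x1 + alpha*x2 = u2 is the unique stationary combination.
    a_hat = np.polyfit(x2, x1, 1)[0]
    print(a_hat)                         # close to -1.0 in large samples

The reverse regression of x2 on x1 gives a slope near −1/α in the same way.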

5. TESTING FOR CO-INTEGRATION

It is frequently of interest to test whether a set of variables is co-integrated. This may be desired because of the economic implications, such as whether some system is in equilibrium in the long run, or it may be sensible to test such hypotheses before estimating a multivariate dynamic model. Unfortunately the set-up is nonstandard and cannot simply be viewed as an application of Wald, likelihood ratio, or Lagrange multiplier tests. The testing problem is closely related to tests for unit roots in observed series as initially formulated by Fuller (1976) and Dickey and Fuller (1979, 1981) and more recently by Evans and Savin (1981), Sargan and Bhargava (1983), and Bhargava (1984), and applied by Nelson and Plosser (1982). It also is related to the problem of testing when some parameters are unidentified under the null, as discussed by Davies (1977) and Watson and Engle (1985).

To illustrate the problems in testing such an hypothesis, consider the simple model in (4.7) and (4.8). The null hypothesis is taken to be no co-integration, or ρ = 1. If α were known, then a test for the null hypothesis could be constructed along the lines of Dickey and Fuller, taking zt as the series which has a unit root under the null. The distribution in this case is already nonstandard and was computed through a simulation by Dickey (1976). However, when α is not known, it must be estimated from the data. But if the null hypothesis that ρ = 1 is true, α is not identified. Thus only if the series are co-integrated can α be simply estimated by the "co-integrating regression," but a test must be based upon the distribution of a statistic when the null is true. OLS seeks the value of α which minimizes the residual variance and therefore is most likely to make the residuals appear stationary, so the distribution of the Dickey-Fuller test will reject the null too often if α must be estimated.

In this paper a set of seven test statistics is proposed for testing the null of non-co-integration against the alternative of co-integration. It is maintained that the true system is a bivariate linear vector autoregression with Gaussian errors where each of the series is individually I(1). As the null hypothesis is composite, similar tests will be sought so that the probability of rejection will be constant over the parameter set included in the null. See, for example, Cox and Hinkley (1974, pp. 134–136). Two cases may be distinguished. In the first, the system is known to be of first order and therefore the null is defined by

Δyt = ε1t,  Δxt = ε2t,  (ε1t, ε2t)′ ~ N(0, Ω). (5.1)

This is clearly the model implied by (4.11) and (4.12) when ρ = 1, which implies that δ = 0. The composite null thus includes all positive definite covariance matrices Ω. It will be shown below that all the test statistics are similar with respect to the matrix Ω, so without loss of generality we take Ω = I. In the second case, the system is assumed merely to be a stationary linear system in the changes. Consequently, the null is defined over a full set of stationary autoregressive and moving average coefficients as well as Ω. The "augmented" tests described below are designed to be asymptotically similar for this case, just as established by Dickey and Fuller for their univariate tests.

The seven test statistics proposed are all calculable by least squares. The critical values are estimated for each of these statistics by simulation using 10,000 replications. Using these critical values, the powers of the test statistics are computed by simulations under various alternatives. A brief motivation of each test is useful.

1. CRDW. After running the co-integrating regression, the Durbin-Watson statistic is tested to see if the residuals appear stationary. If they are nonstationary, the Durbin-Watson will approach zero and thus the test rejects non-co-integration (finds co-integration) if DW is too big. This was proposed recently by Bhargava (1984) for the case where the series is observed and the null and alternative are first order models.

2. DF. This tests the residuals from the co-integrating regression by running an auxiliary regression as described by Dickey and Fuller and outlined in Table 8.1. It also assumes that the first order model is correct.

3. ADF. The augmented Dickey-Fuller test allows for more dynamics in the DF regression and consequently is over-parametrized in the first order case but correctly specified in the higher order cases.

4. RVAR. The restricted vector autoregression test is similar to the two-step estimator. Conditional on the estimate of the co-integrating vector from the co-integrating regression, the error correction representation is estimated. The test is whether the error correction term is significant. This test requires specification of the full system dynamics. In this case a first order system is assumed. By making the system triangular, the disturbances are uncorrelated, and under normality the t statistics are independent. The test is based on the sum of the squared t statistics.

5. ARVAR. The augmented RVAR test is the same as RVAR except that a higher order system is postulated.

6. UVAR. The unrestricted VAR test is based on a vector autoregression in the levels which is not restricted to satisfy the co-integration constraints. Under the null, these are not present anyway, so the test is simply whether the levels would appear at all, or whether the model can be adequately expressed entirely in changes. Again by triangularizing the coefficient matrix, the F tests from the two regressions can be made independent, and the overall test is the sum of the two F's times their degrees of freedom, 2. This assumes a first order system again.

7. AUVAR. This is an augmented or higher order version of the above test.

To establish the similarity of these tests for the first order case for all positive definite symmetric matrices Ω, it is sufficient to show that the residuals from the regression of y on x for general Ω will be a scalar

multiple of the residuals for Ω = I. To show this, let ε1t and ε2t be drawn as independent standard normals. Then

yt = Σi=1,...,t ε1i,

xt = Σi=1,...,t ε2i, (5.2)

and

ut = yt − [Σxtyt/Σxt²]xt. (5.3)

To generate y* and x* from Ω, let

ε*2t = cε2t,  ε*1t = aε2t + bε1t, (5.4)

where

c² = ωxx,  ac = ωyx,  b² = ωyy − ω²yx/ωxx.

Then substituting (5.4) in (5.2),

x*t = cxt,  y*t = axt + byt,

and

u*t = y*t − [Σx*ty*t/Σx*t²]x*t = b(yt − [Σxtyt/Σxt²]xt) = but,

thus showing the exact similarity of the tests. If the same random numbers are used, the same test statistics will be obtained regardless of Ω. In the more complicated but realistic case that the system is of infinite order but can be approximated by a p order autoregression, the statistics will only be asymptotically similar. Although exact similarity is achieved in the Gaussian fixed regressor model, this is not possible in time series models where one cannot condition on the regressors; similarity results are only asymptotic. Tests 5 and 7 are therefore asymptotically similar if the p order model is true, but tests 1, 2, 4 and 6 definitely are not even asymptotically similar, as these tests omit the lagged regressors. (This is analogous to the biased standard errors resulting from serially correlated errors.) It is on this basis that we prefer not to suggest the latter tests except in the first order case. Test 3 will also be asymptotically similar under the assumption that u, the residual from the co-integration regression, follows a p order process. This result is proven in Dickey and Fuller (1981, pp. 1065–1066). While the assumption that the system is p order allows the residuals to be of infinite order, there is presumably a finite autoregressive model, possibly of order less than p, which will be a good approximation.

Table 8.1 The test statistics: reject for large values.

1. The Co-integrating Regression Durbin-Watson: yt = αxt + c + ut.

x1 = DW. The null is DW = 0.

2. Dickey-Fuller Regression: Δut = −φut-1 + εt.

x2 = tφ: the t statistic for φ.

3. Augmented DF Regression: Δut = −φut-1 + b1Δut-1 + ... + bpΔut-p + εt.

x3 = tφ.

4. Restricted VAR: Δyt = b1ut-1 + ε1t, Δxt = b2ut-1 + γΔyt + ε2t. x4 = tb1² + tb2², the sum of the squared t statistics for b1 and b2.

5. Augmented Restricted VAR: Same as (4) but with p lags of Δyt and Δxt in each equation. x5 = tb1² + tb2².

6. Unrestricted VAR: Δyt = b1yt-1 + b2xt-1 + c1 + ε1t, Δxt = b3yt-1 + b4xt-1 + γΔyt + c2 + ε2t.

x6 = 2[F1 + F2] where F1 is the F statistic for testing b1 and b2 both equal to zero

in the first equation, and F2 is the comparable statistic in the second.

7. Augmented Unrestricted VAR: The same as (6) except with p lags of Δxt and Δyt in each equation.

x7 = 2[F1 + F2].

Notes: yt and xt are the original data sets and ut are the residuals from the co-integrating regression.

One might therefore suggest some experimentation to find the appropriate value of p in either case. An alternative strategy would be to let p be a slowly increasing nonstochastic function of T, which is closely related to the test proposed by Phillips (1985) and Phillips and Durlauf (1985). Only substantial simulation experimentation will determine whether it is preferable to use a data-based selection of p for this testing procedure, although the evidence presented below shows that estimation of extraneous parameters will decrease the power of the tests.

In Table 8.1, the seven test statistics are formally stated. In Table 8.2, the critical values and powers of the tests are considered when the system is first order. Here the augmented tests would be expected to be less powerful because they estimate parameters which are truly zero under both the null and alternative. The other four tests estimate no extraneous parameters and are correctly specified for this experiment.

From Table 8.2 one can perform a 5 per cent test of the hypothesis of non-co-integration with the co-integrating regression Durbin-Watson test, by simply checking DW from this regression and, if it exceeds 0.386, rejecting the null and finding co-integration. If the true model is Model II with ρ = .9 rather than 1, this will only be detected 20 per cent of the time; however, if the true ρ = .8 this rises to 66 per cent. Clearly, test 1 is the best in each of the power calculations and should be preferred for this set-up, while test 2 is second in almost every case. Notice also that the augmented tests have practically the same critical values as the basic tests.
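As a concrete reference, here is a minimal Python sketch of statistics 1 and 2 from Table 8.1, written directly from the regressions stated there; the helper names are ours, not the paper's, and statistic 3 differs only in adding lagged Δu terms to the second regression.

    import numpy as np

    def ols(y, X):
        # OLS coefficients, residuals, and conventional standard errors.
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ b
        s2 = resid @ resid / (len(y) - X.shape[1])
        se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
        return b, resid, se

    def crdw_and_df(y, x):
        # Statistic 1: co-integrating regression y_t = a x_t + c + u_t, then DW.
        X = np.column_stack([x, np.ones_like(x)])
        _, u, _ = ols(y, X)
        dw = np.sum(np.diff(u) ** 2) / np.sum(u ** 2)   # x1 = DW
        # Statistic 2: Dickey-Fuller regression Delta u_t = -phi u_{t-1} + e_t.
        du, ulag = np.diff(u), u[:-1]
        b, _, se = ols(du, ulag[:, None])
        t_phi = -b[0] / se[0]                           # x2 = t_phi; reject if large
        return dw, t_phi

Compared against the tabulated 5 per cent values below (.386 for CRDW, 3.37 for DF), large values of either statistic reject non-co-integration in the first order case.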

Table 8.2 Critical values and power.

I. Model: Δy, Δx independent standard normal; 100 observations, 10,000 replications, p = 4.

Critical Values

Statistic Name 1% 5% 10%
1 CRDW .511 .386 .322
2 DF 4.07 3.37 3.03
3 ADF 3.77 3.17 2.84
4 RVAR 18.3 13.6 11.0
5 ARVAR 15.8 11.8 9.7
6 UVAR 23.4 18.6 16.0
7 AUVAR 22.6 17.9 15.5

II. Model: yt + 2xt = ut, Δut = (ρ − 1)ut-1 + εt; xt + yt = vt, Δvt = ηt; ρ = .8, .9; 100 observations, 1000 replications, p = 4.

Rejections per 100: ρ = .9

Statistic Name 1% 5% 10%
1 CRDW 4.8 19.9 33.6
2 DF 2.2 15.4 29.0
3 ADF 1.5 11.0 22.7
4 RVAR 2.3 11.4 25.3
5 ARVAR 1.0 9.2 17.9
6 UVAR 4.3 13.3 26.1
7 AUVAR 1.6 8.3 16.3

Rejections per 100: ρ = .8

Statistic Name 1% 5% 10%
1 CRDW 34.0 66.4 82.1
2 DF 20.5 59.2 76.1
3 ADF 7.8 30.9 51.6
4 RVAR 15.8 46.2 67.4
5 ARVAR 4.6 22.4 39.0
6 UVAR 19.0 45.9 63.7
7 AUVAR 4.8 18.3 33.4

However, as expected, they have slightly lower power. Therefore, if it is known that the system is first order, the extra lags should not be introduced. Whether a pre-test of the order would be useful remains to be established.

In Table 8.3 both the null and alternative hypotheses have fourth order autoregressions. Therefore the basic unaugmented tests now are misspecified while the augmented ones are correctly specified (although some of the intervening lags could be set to zero if this were known).

Table 8.3 Critical values and power with lags.

Model I: Δyt = .8Δyt-4 + εt, Δxt = .8Δxt-4 + ηt; 100 observations, 10,000 replications, p = 4; εt, ηt independent standard normal.

Critical Values

Statistic Name 1% 5% 10%
1 CRDW .455 .282 .209
2 DF 3.90 3.05 2.71
3 ADF 3.73 3.17 2.91
4 RVAR 37.2 22.4 17.2
5 ARVAR 16.2 12.3 10.5
6 UVAR 59.0 40.3 31.4
7 AUVAR 28.0 22.0 19.2

Model II: yt + 2xt = ut, Δut = (ρ − 1)ut-1 + .8Δut-4 + εt; yt + xt = vt, Δvt = .8Δvt-4 + ηt; ρ = .9, .8; 100 observations, 1000 replications, p = 4.

Rejections per 100: ρ = .9

Statistic Name 1% 5% 10%
1 CRDW 15.6 39.9 65.6
2 DF 9.4 25.5 37.8
3 ADF 36.0 61.2 72.2
4 RVAR .3 4.4 10.9
5 ARVAR 26.4 48.5 62.8
6 UVAR .0 .5 3.5
7 AUVAR 9.4 26.8 40.3

Rejections per 100: ρ = .8

Statistic Name 1% 5% 10%
1 CRDW 77.5 96.4 98.6
2 DF 66.8 89.7 96.0
3 ADF 68.9 90.3 94.4
4 RVAR 7.0 42.4 62.5
5 ARVAR 57.2 80.5 89.3
6 UVAR 2.5 10.8 25.9
7 AUVAR 32.2 53.0 67.7

Notice now the drop in the critical values of tests 1, 4, and 6 caused by their nonsimilarity. Using these new critical values, test 3 is the most powerful for the local alternative, while at ρ = .8, test 1 is the best, closely followed by 2 and 3. The misspecified or unaugmented tests 4 and 6 perform very badly in this situation. Even though they were moderately powerful in Table 8.2, the performance here dismisses them from consideration.

Although test 1 has the best performance overall, it is not the recommended choice from this experiment, because the critical value is so sensitive to the particular parameters within the null. For most types of economic data the differences are not white noise and, therefore, one could not in practice know what critical value to use. Test 3, the augmented Dickey-Fuller test, has essentially the same critical value for both finite sample experiments, has theoretically the same large sample critical value for both cases, has nearly as good observed power properties in most comparisons, and is therefore the recommended approach. Because of its simplicity, the CRDW might be used for a quick approximate result. Fortunately, none of the best procedures requires the estimation of the full system, merely the co-integrating regression and then perhaps an auxiliary time series regression.

This analysis leaves many questions unanswered. The critical values have only been constructed for one sample size and only for the bivariate case, although recently Engle and Yoo (1986) have calculated critical values for more variables and sample sizes using the same general approach. There is still no optimality theory for such tests and alternative approaches may prove superior. Research on the limiting distribution theory by Phillips (1985) and Phillips and Durlauf (1985) may lead to improvements in test performance. Nevertheless, it appears that the critical values for ADF given in Table 8.2 can be used as a rough guide in applied studies at this point. The next section will provide a variety of illustrations.

6. EXAMPLES

Several empirical examples will be presented to show the performance of the tests in practice. The relationship between consumption and income will be studied in some detail as it was analyzed from an error correction point of view in DHSY and a time series viewpoint in Hall (1978) and others. Briefer analyses of wages and prices, short and long term interest rates, and the velocity of money will conclude this section.

DHSY have presented evidence for the error correction model of consumption behavior from both empirical and theoretical points of view. Consumers make plans which may be frustrated; they adjust next period's plans to recoup a portion of the error between income and consumption. Hall finds that U.S. consumption is a random walk and that past values of income have no explanatory power, which implies that income and consumption are not co-integrated, at least if income does not depend on the error correction term. Neither of these studies models income itself, and it is taken as exogenous in DHSY.

Using U.S. quarterly real per capita consumption on nondurables and real per capita disposable income from 1947-I to 1981-II, it was first checked that the series were I(1). Regressing the change in consumption on its past level and two past changes gave a t statistic of +.77, which is even the wrong sign for consumption to be stationary in the levels. Running the same model with second differences on lagged first differences and two lags of second differences, the t statistic was -5.36, indicating that the first difference is stationary. For income, four past lags were used and the two t statistics were -.01 and -6.27 respectively, again establishing that income is I(1).

The co-integrating regression of consumption (C) on income (Y) and a constant was run. The coefficient of Y was .23 (with a t statistic of 123 and an R² of .99). The DW was however .465, indicating that by either table of critical values one rejects the null of "non-co-integration" or accepts co-integration, at least at the 5 per cent level. Regressing the change in the residuals on past levels and four lagged changes, the t statistic on the level is 3.1, which is essentially the critical value for the 5 per cent ADF test. Because the lags are not significant, the DF regression was run, giving a test statistic of 4.3 which is significant at the 1 per cent level, illustrating that when it is appropriate, it is a more powerful test. In the reverse regression of Y on C, the coefficient is 4.3, which has reciprocal .23, the same as the coefficient in the forward regression. The DW is now .463 and the t statistic from the ADF test is 3.2. Again the first order DF appears appropriate and gives a test statistic of 4.4. Whichever way the regression is run, the data reject the null of non-co-integration at any level above 5 per cent.

To establish that the joint distribution of C and Y is an error correction system, a series of models was estimated. An unrestricted vector autoregression of the change in consumption on four lags of consumption and income changes plus the lagged levels of consumption and income is given next in Table 8.4. The lagged levels are of the appropriate signs and sizes for an error correction term and are individually significant or nearly so. Of all the lagged changes, only the first lag of income change is significant. Thus the final model has the error correction term estimated from the co-integrating regression and one lagged change in income.
The standard error of this model is even lower than the VAR, suggesting the efficiency of the parameter restrictions. The final model passes a series of diagnostic tests for serial correlation, lagged dependent variables, non-linearities, ARCH, and omitted variables such as a time trend and other lags. One might notice that an easy model building strategy in this case would be to estimate the simplest error correction model first and then test for added lags of C and Y, proceeding in a "simple to general" specification search.

The model building process for Y produced a similar model. The same unrestricted VAR was estimated and distilled to a simple model with the error correction term, first and fourth lagged changes in C, and a fourth lagged change in Y. The error correction term is not really significant, with a t statistic of -1.1, suggesting that income may indeed be weakly exogenous even though the variables are co-integrated. In this case the standard error of the regression is slightly higher in the restricted model, but the difference is not significant. The diagnostic tests are again generally good.
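A sketch of the two-step procedure just described, written in Python, may clarify the mechanics; the variable names and the single included lag are our illustrative choices for hypothetical user-supplied data, not a transcription of Table 8.4.

    import numpy as np

    def two_step_ecm(c, y):
        # Step 1: co-integrating regression c_t = a y_t + const + z_t.
        X = np.column_stack([y, np.ones_like(y)])
        coef, *_ = np.linalg.lstsq(X, c, rcond=None)
        z = c - X @ coef                    # estimated equilibrium error
        # Step 2: regress Delta c_t on z_{t-1} and the lagged income change,
        # mirroring the final consumption model described in the text.
        dc, dy = np.diff(c), np.diff(y)
        X2 = np.column_stack([z[1:-1], dy[:-1], np.ones(len(dc) - 1)])
        b2, *_ = np.linalg.lstsq(X2, dc[1:], rcond=None)
        return coef, b2                     # (a, const) and ECM coefficients

By the two-step estimation result proved earlier in the paper, the second-stage coefficients can be treated as if the co-integrating parameter were known.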

Campbell (1985) uses a similar structure to develop a test of the permanent income hypothesis which incorporates "saving for a rainy day" behavior. In this case the error correction term is approximately saving, which should be high when income is expected to fall (such as when current income is above permanent income). Using a broader measure of consumption and a narrower measure of income, he finds the error correction term significant in the income equation.

The second example examines monthly wages and prices in the U.S. The data are logs of the consumer price index and production worker wage in manufacturing over the three decades of the 50's, 60's, and 70's. Again, the test is run in both directions to show that there is little difference in the result. For each of the decades there are 120 observations, so the critical values as tabulated should be appropriate.

For the full sample period the Durbin-Watson from the co-integrating regression in either direction is a notable .0054. One suspects that this will be insignificantly different from zero even for samples much larger than this. Looking at the augmented Dickey-Fuller test statistic, for p on w we find -.6 and for w on p we find +.2. Adding a twelfth lag in the ADF tests improves the fit substantially and raises the test statistics to .88 and 1.50 respectively. In neither case do these approach the critical values of 3.2. The evidence accepts the null of non-co-integration for wages and prices over the thirty year period. For individual decades none of the ADF tests are significant at even the 10 per cent level. The largest of these six test statistics is for the 50's regressing p on w, which reaches 2.2, still below the 10 per cent level of 2.8. Thus we find evidence that wages and prices in the U.S. are not co-integrated. Of course, if a third variable such as productivity were available (and were I(1)), the three might be co-integrated.

The next example tests for co-integration between short and long term interest rates. Using monthly yields to maturity of 20 year treasury bonds as the long term rate (Rt) and the one month treasury bill rate rt as the short rate, co-integration was tested with data from February, 1952 to December, 1982. With the long rate as the dependent variable, the co-integrating regression gave:

Rt = 1.93 + .785rt + ERt,  DW = .126,  R² = .866,

with a t ratio of 46 on the short rate. The DW is not significantly different from zero, at least by Tables 8.2 and 8.3; however, the correct critical value depends upon the dynamics of the errors (and of course the sample size is 340 – much greater than for the tabulated values). The ADF test with four lags gives:

ΔERt = −.06ERt-1 + .25ΔERt-1 − .24ΔERt-2 + .23ΔERt-3 − .09ΔERt-4,
        (−3.27)     (4.55)      (−4.15)      (4.15)       (−1.48)

When the twelfth lag is added instead of the fourth, the test statistic rises to 3.49. Similar results were found with the reverse regression, where the statistics were 3.61 and 3.89 respectively. Each of these test statistics exceeds the 5 per cent critical values from Table 8.3. Thus these interest rates are apparently co-integrated. This finding is entirely consistent with the efficient market hypothesis. The one-period excess holding yield on long bonds as linearized by Shiller and Campbell (1984) is:

EHYt = DRt − (D − 1)Rt+1 − rt,

where D is the duration of the bond, which is given by

D = [1 − (1 + c)^-i]/[1 − (1 + c)^-1],

with c as the coupon rate and i the number of periods to maturity. The efficient market hypothesis implies that the expectation of the EHY is a constant representing a risk premium if agents are risk averse. Setting EHY = k + ε and rearranging terms gives the error correction form:

ΔRt = (D − 1)^-1(Rt-1 − rt-1) + k′ + εt,

implying that R and r are co-integrated with a unit coefficient and that, for long maturities, the coefficient of the error correction term is c, the coupon rate. If the risk premium is varying over time but is I(0) already, then it need not be included in the test of co-integration.

The final example is based upon the quantity theory equation: MV = PY. Empirical implications stem from the assumption that velocity is constant or at least stationary. Under this condition, log M, log P, and log Y should be co-integrated with known unit parameters. Similarly, nominal money and nominal GNP should be co-integrated. A test of this hypothesis was constructed for four measures of money: M1, M2, M3, and L, total liquid assets. In each case the sample period was 1959-I through 1981-II, quarterly. The ADF test statistics were:

M1 1.81 1.90
M2 3.23 3.13
M3 2.65 2.55
L 2.15 2.13

where in the first column the log of the monetary aggregate was the dependent variable while in the second it was log GNP. For only one of the M2 tests is the test statistic significant at the 5 per cent level, and none of the other aggregates are significant even at the 10 per cent level. (In several cases it appears that the DF test could be used and would therefore be more powerful.) Thus the most stable relationship is between M2 and nominal GNP, but for the other aggregates we reject co-integration and the stationarity of velocity.

7. CONCLUSION

If each element of a vector of time series xt is stationary only after differencing, but a linear combination α′xt need not be differenced, the time series xt have been defined to be co-integrated of order (1, 1) with co-integrating vector α. Interpreting α′xt = 0 as a long run equilibrium, co-integration implies that equilibrium holds except for a stationary, finite variance disturbance, even though the series themselves are nonstationary and have infinite variance.

The paper presents several representations for co-integrated systems, including an autoregressive representation and an error-correction representation. A vector autoregression in differenced variables is incompatible with these representations because it omits the error correction term. The vector autoregression in the levels of the series ignores cross equation constraints and will give a singular autoregressive operator. Consistent and efficient estimation of error correction models is discussed and a two-step estimator proposed. To test for co-integration, seven statistics are formulated which are similar under various maintained hypotheses about the generating model. The critical values of these statistics are calculated based on a Monte Carlo simulation. Using these critical values, the power properties of the tests are examined, and one test procedure is recommended for application. In a series of examples it is found that consumption and income are co-integrated, wages and prices are not, short and long interest rates are, and nominal GNP is not co-integrated with M1, M3, or total liquid assets, although it possibly is with M2.

REFERENCES

Bhargava, Alok (1984): "On the Theory of Testing for Unit Roots in Observed Time Series," manuscript, ICERD, London School of Economics.
Box, G. E. P., and G. M. Jenkins (1970): Time Series Analysis, Forecasting and Control. San Francisco: Holden Day.
Campbell, John Y. (1985): "Does Saving Anticipate Declining Labor Income? An Alternative Test of the Permanent Income Hypothesis," manuscript, Princeton University.
Cox, D. R., and D. V. Hinkley (1974): Theoretical Statistics. London: Chapman and Hall.
Currie, D. (1981): "Some Long-Run Features of Dynamic Time-Series Models," The Economic Journal, 91, 704–715.

Davidson, J. E. H., David F. Hendry, Frank Srba, and Steven Yeo (1978): "Econometric Modelling of the Aggregate Time-Series Relationship Between Consumers' Expenditure and Income in the United Kingdom," Economic Journal, 88, 661–692.
Davies, R. B. (1977): "Hypothesis Testing When a Nuisance Parameter is Present Only Under the Alternative," Biometrika, 64, 247–254.
Dawson, A. (1981): "Sargan's Wage Equation: A Theoretical and Empirical Reconstruction," Applied Economics, 13, 351–363.
Dickey, David A. (1976): "Estimation and Hypothesis Testing for Nonstationary Time Series," Ph.D. Thesis, Iowa State University, Ames.
Dickey, David A., and Wayne A. Fuller (1979): "Distribution of the Estimators for Autoregressive Time Series with a Unit Root," Journal of the American Statistical Association, 74, 427–431.
——— (1981): "The Likelihood Ratio Statistics for Autoregressive Time Series with a Unit Root," Econometrica, 49, 1057–1072.
Engle, Robert F., David F. Hendry, and J. F. Richard (1983): "Exogeneity," Econometrica, 51, 277–304.
Engle, Robert F., and Byung Sam Yoo (1986): "Forecasting and Testing in Co-integrated Systems," U.C.S.D. Discussion Paper.
Evans, G. B. A., and N. E. Savin (1981): "Testing for Unit Roots: 1," Econometrica, 49, 753–779.
Feller, William (1968): An Introduction to Probability Theory and Its Applications, Volume I. New York: John Wiley.
Fuller, Wayne A. (1976): Introduction to Statistical Time Series. New York: John Wiley.
Granger, C. W. J. (1981): "Some Properties of Time Series Data and Their Use in Econometric Model Specification," Journal of Econometrics, 16, 121–130.
——— (1983): "Co-Integrated Variables and Error-Correcting Models," unpublished UCSD Discussion Paper 83-13.
Granger, C. W. J., and P. Newbold (1977): Forecasting Economic Time Series. New York: Academic Press.
——— (1974): "Spurious Regressions in Econometrics," Journal of Econometrics, 2, 111–120.
Granger, C. W. J., and A. A. Weiss (1983): "Time Series Analysis of Error-Correcting Models," in Studies in Econometrics, Time Series, and Multivariate Statistics. New York: Academic Press, 255–278.
Hall, Robert E. (1978): "A Stochastic Life Cycle Model of Aggregate Consumption," Journal of Political Economy, 86, 971–987.
Hannan, E. J. (1970): Multiple Time Series. New York: Wiley.
Hendry, David F., and T. von Ungern-Sternberg (1981): "Liquidity and Inflation Effects on Consumers' Expenditure," in Essays in the Theory and Measurement of Consumers' Behaviour, ed. by A. S. Deaton. Cambridge: Cambridge University Press.
Johansen, Soren (1985): "The Mathematical Structure of Error Correction Models," manuscript, University of Copenhagen.
Nelson, C. R., and Charles Plosser (1982): "Trends and Random Walks in Macroeconomic Time Series," Journal of Monetary Economics, 10, 139–162.
Pagan, A. R. (1984): "Econometric Issues in the Analysis of Regressions with Generated Regressors," International Economic Review, 25, 221–248.

Phillips, A. W. (1957): "Stabilization Policy and the Time Forms of Lagged Responses," Economic Journal, 67, 265–277.
Phillips, P. C. B. (1985): "Time Series Regression with Unit Roots," Cowles Foundation Discussion Paper No. 740, Yale University.
Phillips, P. C. B., and S. N. Durlauf (1985): "Multiple Time Series Regression with Integrated Processes," Cowles Foundation Discussion Paper 768.
Salmon, M. (1982): "Error Correction Mechanisms," The Economic Journal, 92, 615–629.
Sargan, J. D. (1964): "Wages and Prices in the United Kingdom: A Study in Econometric Methodology," in Econometric Analysis for National Economic Planning, ed. by P. E. Hart, G. Mills, and J. N. Whittaker. London: Butterworths.
Sargan, J. D., and A. Bhargava (1983): "Testing Residuals from Least Squares Regression for Being Generated by the Gaussian Random Walk," Econometrica, 51, 153–174.
Shiller, R. J., and J. Y. Campbell (1984): "A Simple Account of the Behaviour of Long-Term Interest Rates," American Economic Review, 74, 44–48.
Stock, James H. (1984): "Asymptotic Properties of Least Squares Estimators of Co-Integrating Vectors," manuscript, Harvard University.
Watson, Mark W., and Robert Engle (1985): "A Test for Regression Coefficient Stability with a Stationary AR(1) Alternative," forthcoming in Review of Economics and Statistics.
Yoo, Sam (1985): "Multi-co-integrated Time Series and Generalized Error Correction Models," manuscript in preparation, U.C.S.D.

CHAPTER 9

Developments in the Study of Cointegrated Economic Variables*

C. W. J. Granger**

1. INTRODUCTION

At the least sophisticated level of economic theory lies the belief that certain pairs of economic variables should not diverge from each other by too great an extent, at least in the long run. Thus, such variables may drift apart in the short run or according to seasonal factors, but if they continue to be too far apart in the long-run, then economic forces, such as a market mechanism or government intervention, will begin to bring them together again. Examples of such variables are interest rates on assets of different maturities, prices of a commodity in different parts of the country, income and expenditure by local government and the value of sales and production costs of an industry. Other possible examples would be prices and wages, imports and exports, market prices of substitute commodities, money supply and prices and spot and future prices of a commodity. In some cases an economic theory involving equilibrium concepts might suggest close relations in the long-run, possibly with the addition of yet further variables. However, in each case the correctness of the beliefs about long-term relatedness is an empirical question. The idea underlying cointegration allows specification of models that capture part of such beliefs, at least for a particular type of variable that is frequently found to occur in macroeconomics. Since a concept such as the long-run is a dynamic one, the natural area for these ideas is that of time-series theory and analysis. It is thus necessary to start by introducing some relevant time series models.

Consider a single series xt, measured at equal intervals of time. Time series theory starts by considering the generating mechanism for the series. This mechanism should be able to generate all of the statistical properties of the series, or at very least the conditional mean, variance and temporal autocorrelations, that is the “linear properties” of the

* Oxford Bulletin of Economics and Statistics, 48, 1986, 213–228.
** I would like to acknowledge the excellent hospitality that I enjoyed at Nuffield College and the Institute of Economics and Statistics, Oxford, whilst this paper was prepared.

series, conditional on past data. Some series appear to be "stationary", which essentially implies that the linear properties exist and are time-invariant. Here we are concerned with the weaker but more technical requirement that the series has a spectrum which is finite but non-zero at all frequencies. Such a series will be called I(0), denoting "integrated of order zero." Some series need to be differenced to achieve these properties and these will be called integrated of order one, denoted xt ~ I(1). More generally, if a series needs differencing d times to become I(0), it is called integrated of order d, denoted xt ~ I(d). Let Δ^b denote application of the difference operator b times; if xt ~ I(d) then the bth difference series Δ^b xt is I(d − b). Sometimes a series needs to be integrated (summed) to become I(0); for example, the difference of an I(0) series is I(−1) and its integral is again I(0). Most of this paper will concentrate on the practically important cases when d = 0 or 1. The simplest example of an I(0) series is a white noise εt, so that ρk = corr(εt, εt-k) = 0 for all k ≠ 0. Another example is a stationary AR(1) series, xt generated by

xt = axt-1 + εt (1.1)

where |a| < 1 and εt is white noise with zero mean. The simplest example of an I(1) series is a random walk, where xt is generated by

xt = xt-1 + εt (1.2)

as would theoretically occur for a speculative price generated by an informationally efficient market. Here, the first differenced series is white noise. The most general I(1) series replaces εt in equation (1.2) by any I(0) series not necessarily having zero mean. Many macroeconomic series appear to be I(1), as suggested by the "typical spectral shape" (see Granger (1966)), by analysis using Box-Jenkins (1970) modelling techniques, or by direct testing, as in Nelson and Plosser (1982). Throughout the paper all error processes, such as those in (1.1), (1.2), are assumed to have finite first and second moments.

There are many substantial differences between I(0) and I(1) series. An I(0) series has a mean and there is a tendency for the series to return to the mean, so that it tends to fluctuate around the mean, crossing that value frequently and with rare extensive excursions. Autocorrelations decline rapidly as lag increases and the process gives low weights to events in the medium to distant past, and thus effectively has a finite memory. An I(1) process without drift will be relatively smooth, will wander widely and will only rarely return to an earlier value. In fact, for a random walk, for a fixed arbitrary value the expected time until the process again passes through this value is infinite. This does not mean that returns do not occur, but that the distribution of the time to return is very long-tailed. Autocorrelations {ρk} are all near one in magnitude even for large k; an innovation to the process affects all later values and so the process has indefinitely long memory. To see this, note that the pure random walk I(1) solves to give

xt = εt + εt-1 + εt-2 + ... + ε1 (1.3)

assuming the process starts at time t = 0, with x0 = 0. Note that the variance of xt is tσε² and becomes indefinitely large as t increases, and ρk = 1 − |k|/t. If xt is a random walk with "drift", (1.2) becomes

xt = xt-1 + m + εt

where εt is zero-mean white noise. The solution is now

xt = mt + Σj=0,...,t-1 εt-j (1.4)

so that xt consists of a linear trend plus a drift-free I(1) process (random walk), being the process in (1.3). The only more general univariate process considered in this section is

xt = m(t) + xt′,

where xt′ is a drift-free random walk, such as generated by (1.3), and m(t) is some deterministic function of time, being the "trend in mean" of xt.
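The contrast between I(0) and I(1) behaviour described above is easy to see by simulation; the following Python sketch (our illustration, with an arbitrary AR coefficient) compares autocorrelations of a stationary AR(1) with those of a random walk.

    import numpy as np

    rng = np.random.default_rng(1)
    T, a = 5_000, 0.5
    e = rng.standard_normal(T)

    x_i0 = np.zeros(T)                  # I(0): x_t = a x_{t-1} + e_t, as in (1.1)
    for t in range(1, T):
        x_i0[t] = a * x_i0[t - 1] + e[t]
    x_i1 = np.cumsum(e)                 # I(1): random walk, the sum in (1.3)

    def acf(x, k):
        x = x - x.mean()
        return (x[:-k] @ x[k:]) / (x @ x)

    for k in (1, 10, 100):
        print(k, round(acf(x_i0, k), 3), round(acf(x_i1, k), 3))
    # The I(0) autocorrelations die out quickly; the random walk's stay
    # near one, matching rho_k = 1 - |k|/t in the text.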

2. COINTEGRATION

Consider initially a pair of series xt, yt, each of which is I(1) and having no drift or trend in mean. It is generally true that any linear combina- tion of these series is also I(1). However, if there exists a constant A, such that

zt = xt − Ayt (2.1)

is I(0), then xt, yt will be said to be cointegrated, with A called the cointegrating parameter. If it exists, A will be unique in the situation now being considered. As zt has such different temporal properties from those of either of its components, it follows that xt and yt must have a very special relationship. Both xt and yt have dominating low-frequency or "long wave" components, and yet zt does not. Thus, xt and Ayt must have low-frequency components which virtually cancel out to produce zt. A good analogy is two series each of which contain a prominent seasonal component. Generally, any linear combination of these series will also contain a seasonal, but if the seasonals are identical in shape there could exist a linear combination which has no seasonal. The relationship

xt = Ayt (2.2)

might be considered a long-run or "equilibrium" relationship, perhaps as suggested by some economic theory, and zt given by (2.1) thus measures

the extent to which the system xt, yt is out of equilibrium, and can thus be called the "equilibrium error". The term "equilibrium" is used in many ways by economists. Here the term is not used to imply anything about the behaviour of economic agents but rather describes the tendency of an economic system to move towards a particular region of the possible outcome space. If xt and yt are I(1) but "move together in the long-run", it is necessary that zt be I(0), as otherwise the two series will drift apart without bound. Thus, for a pair of I(1) series, cointegration is a necessary condition for the ideas discussed in the first section of this paper to hold. In some circumstances, an even stronger condition may be required, such as putting complete bounds on zt, which will guarantee that it is I(0), but such cases are not considered here. The extension to series having trends in their means is straightforward. Consider

xt = mx(t) + xt′,  yt = my(t) + yt′ (2.3)

where xt′, yt′ are both I(1) but without trends in mean, and let

zt = xt − Ayt

= mx(t) − Amy(t) + xt′ − Ayt′.

For zt to be I(0), and xt, yt not to drift too far apart, it is necessary both that zt have no trend in mean, so that

mx(t) = Amy(t) (2.4)

for all t, and that xt′, yt′ be cointegrated with the same value of A as the cointegrating parameter. It is seen that if the two trends in mean are different functions of time, such as an exponential and a cubic, then (2.4) cannot hold. One thing that should be noted is that a model of the form

xt = byt + εt

where xt is I(0) and yt is I(1), makes no sense, as the independent and dependent variables have such vastly different temporal properties. Theoretically the only plausible value for b in this regression is b = 0.
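A quick simulation illustrates the point (the series and sample sizes are arbitrary choices of ours): the sample variance of the I(1) regressor grows with T while its covariance with an I(0) variable stays bounded, so the fitted b collapses towards zero.

    import numpy as np

    rng = np.random.default_rng(2)
    for T in (100, 1_000, 10_000):
        x = rng.standard_normal(T)                 # I(0) dependent variable
        y = np.cumsum(rng.standard_normal(T))      # I(1) regressor
        b = np.polyfit(y, x, 1)[0]
        print(T, round(b, 4))                      # shrinks towards b = 0 as T grows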

If xt, yt are both I(1) without trends in mean and are cointegrated, it has been proved in Granger (1983) and Granger and Engle (1985) that there always exists a generating mechanism having what is called the "error-correcting" form:

Δxt = −ρ1zt-1 + lagged(Δxt, Δyt) + d(B)ε1t

Δyt = −ρ2zt-1 + lagged(Δxt, Δyt) + d(B)ε2t (2.5)

where

zt = xt − Ayt,

d(B) is a finite polynomial in the lag operator B (so that B^k xt = xt-k) and is the same in each equation, and ε1t, ε2t are joint white noise, possibly contemporaneously correlated, and with |ρ1| + |ρ2| ≠ 0. Not only must cointegrated variables obey such a model but the reverse is also true; data generated by an error-correction model such as (2.5) must be cointegrated (a simulation sketch of this converse appears after the list of implications below). The reason for this is easily seen: if xt, yt are I(1) their changes will be I(0), and so every term in the equations (2.5) is I(0) provided zt is also I(0), meaning that xt, yt are cointegrated. If zt is not I(0), i.e., if xt, yt are not cointegrated, then the zt term does not belong in these equations, given that the dependent variables are I(0) and that at least one of ρ1, ρ2 does not vanish.

These models were introduced into economics by Sargan (1964) and Phillips (1957) and have generated a lot of interest following the work of Davidson, Hendry, Srba and Yeo (1978), Hendry and von Ungern-Sternberg (1980), Currie (1981), Dawson (1981) and Salmon (1982) amongst others. The models are seen to incorporate equilibrium relationships, perhaps suggested by an economic theory of the long-run, with the type of dynamic model favoured by time-series econometricians. The equilibrium relationships are allowed to enter the model but are not forced to do so.

The title "error-correcting" for equations such as (2.5) is a little optimistic. The absolute value of zt is the distance that the system is away from equilibrium. Equation (2.5) indicates that the amount and direction of change in xt and yt take into account the size and sign of the previous equilibrium error, zt-1. The series zt does not, of course, certainly reduce in size from one time period to another, but is a stationary series and thus is inclined to move towards its mean. A constant should be included in the equilibrium equation (2.2) and in (2.1) if needed, to make the mean of zt zero.

There are a number of theoretical implications of cointegratedness that are easily derived from the results so far presented:

(i) If xt, yt are cointegrated, so will be xt and byt-k + wt, for any k where wt ~ I(0), with a possible change in cointegrating parameter. Formally, if xt is I(1) then xt and xt-k will be cointegrated for any k, but this is not an interesting property as it is true for any I(1) process and so does not suggest a special relationship,

unlike cointegration of a pair of I(1) series. It follows that if xt, yt are cointegrated but are only observed with measurement error, then the two observed series will also be cointegrated if all measurement errors are I(0).

(ii) If xt is I(1) and fn,h(Jn) is the optimal forecast of xn+h based on the information set Jn available at time n, then xn+h and fn,h(Jn) are cointegrated if Jn is a proper information set, that is if it includes xn-j, j ≥ 0. If Jn is not a proper information set, xn+h and its optimum forecast are only cointegrated if xt is cointegrated with variables in Jt.

(iii) If xt, yt are cointegrated series with parameter A and are optimally forecast using the information set Jn: xn-j, yn-j, j ≥ 0, then the h-step forecasts f^x_n,h, f^y_n,h will obey

f^x_n,h = A f^y_n,h as h → ∞

(proved by S. Yoo (1986)). Thus, long-term optimum forecasts of xt, yt will be tied together by the equilibrium relationships. Forecasts formed without cointegration terms, such as univariate forecasts, will not necessarily have this property.

(iv) If Tt is an I(1) target variable and xt is an I(1) controllable variable, then Tt, xt will be cointegrated if optimum control is applied. (See Nickell (1985).)

(v) If xt, yt are I(1) and cointegrated, there must be Granger causality in at least one direction, as one variable can help forecast the other. This follows directly from the error-correction model and the condition that |ρ1| + |ρ2| ≠ 0, as zt-1 must occur in at least one equation, and thus knowledge of zt must improve forecastability of at least one of xt, yt. Here causality is with respect to the information set Jt defined in (iii).

(vi) If xt, yt are a pair of prices from a jointly efficient, speculative market, they cannot be cointegrated. This follows directly from (v), as if the two prices were cointegrated, one can be used to help forecast the other and this would contradict the efficient market assumption. Thus, for example, gold and silver prices, if generated by an efficient market, cannot move closely together in the long-run. Tests of this idea have been conducted by Granger and Escribano (1986).
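To illustrate the converse result referred to above, that data generated by an error-correction model such as (2.5) must be cointegrated, the following Python sketch (ours; the parameter values are arbitrary, with signs chosen so that the system is stable) simulates a bivariate ECM and checks that zt stays I(0) while the levels wander.

    import numpy as np

    rng = np.random.default_rng(3)
    T, A = 5_000, 1.0
    x = np.zeros(T)
    y = np.zeros(T)
    for t in range(1, T):
        z = x[t - 1] - A * y[t - 1]              # lagged equilibrium error
        x[t] = x[t - 1] - 0.2 * z + rng.standard_normal()
        y[t] = y[t - 1] + 0.1 * z + rng.standard_normal()

    z = x - A * y
    # z behaves like I(0): similar dispersion in both halves of the sample;
    # x behaves like I(1): its dispersion grows markedly over the sample.
    print(np.var(z[: T // 2]), np.var(z[T // 2:]))
    print(np.var(x[: T // 2]), np.var(x[T // 2:]))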

3. TESTING FOR COINTEGRATION

This topic has been discussed at some length by Granger and Engle (1985) and so only an outline of their conclusions is presented here. It is necessary to start with a test for whether a series xt is I(0), and a useful test has been provided by Dickey and Fuller (1981). The following regression is formed

Δxt = βxt-1 + Σj=1,...,p γjΔxt-j + εt

where p is selected to be large enough to ensure that the residual εt is empirically white noise. The test statistic is the ratio of β̂ to its calculated standard error obtained from an ordinary least squares (OLS) regression. The null hypothesis is H0: xt ~ I(1). This is rejected if β̂ is negative and significantly different from zero. However, the test statistic does not have a t-distribution, but tables of significance levels have been provided by Dickey and Fuller (1979).
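In practice this regression and its pseudo t-statistic are available in standard software. For instance, assuming the statsmodels Python library, adfuller carries out the augmented regression above; in this usage sketch x is an artificial random walk, so the null should not be rejected.

    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    x = np.cumsum(np.random.default_rng(0).standard_normal(200))
    stat, pvalue, usedlag, nobs, crit, icbest = adfuller(x, autolag="AIC")
    # Reject H0: x ~ I(1) when stat is more negative than the critical values.
    print(stat, crit)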

To test for cointegration between a pair of series that are expected to be I(1), one method is to first form the "cointegration regression"

xt = c + ayt + at (3.1)

and then to test if the residual at appears to be I(0) or not. It might be noted that when xt and yt are cointegrated, this regression, when estimated using, say, OLS, should give an excellent estimate of the true cointegrating coefficient A, in large samples. Note that at will have a finite (or small) variance only if a = A; otherwise at will be I(1) and thus have theoretically a very large variance in a large sample. Stock (1984) has shown that when series are cointegrated, OLS estimates of A are highly efficient with variances O(T^-2) compared to more usual situations where the variances are O(T^-1), T being the sample size. Stock also shows that the estimates are consistent with an O(T^-1) bias. However, some recent Monte Carlo simulations by Banerjee et al. (1986) suggest that these bias terms can be very substantial in some cases. Two simple tests of the null hypothesis

H0: xt, yt not cointegrated

are based either on the Durbin-Watson statistic (D/W) for (3.1), testing if D/W is significantly greater than zero (see Sargan and Bhargava (1983), who provide critical values), or on the previously mentioned Dickey-

Fuller test for ât. The latter test was found by Granger and Engle (1985) to have more stable critical values from a small simulation study, and with T = 100 observations approximate significance levels for the pseudo t-statistic testing β = 0 are: 10 per cent ~ 2.88, 5 per cent ~ 3.17, 1 per cent ~ 3.75. A great deal more experience with these tests, and more extensive simulation studies, are required before confidence in the quality of these, or alternative, testing procedures is assured. Some estimates of power for this test were found to be quite satisfactory for a sample size of 100.

Applying this test, some examples of the outcomes of empirical analysis are (mostly from Granger and Engle, 1985):

apparently cointegrated
US national income and consumption
US non-durables, production and sales
US short and long-term interest rates
UK W, P, H, U, T (Hall, this issue)
UK velocity and short-term interest rates (Hendry and Ericsson, 1983)

apparently not cointegrated
US wages and prices
US durables, production and sales
US money and prices.
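Assuming the statsmodels Python library, this residual-based procedure (cointegration regression with a constant, followed by a Dickey-Fuller test on the residuals) is packaged as coint; a usage sketch with artificial cointegrated data:

    import numpy as np
    from statsmodels.tsa.stattools import coint

    rng = np.random.default_rng(1)
    w = np.cumsum(rng.standard_normal(500))   # shared I(1) component
    y = w + rng.standard_normal(500)
    x = 2.0 * w + rng.standard_normal(500)
    t_stat, pvalue, crit = coint(x, y)        # default trend='c' matches (3.1)
    print(t_stat, pvalue, crit)               # reject non-cointegration when
                                              # t_stat is below the critical values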

Of course, some of the examples where cointegration was not found strongly suggest that further variables should be included in the investi- gation, such as the addition of productivity to wages and prices. This extension is considered next.

4. GENERALISATION: MANY VARIABLES AND GENERAL COINTEGRATION

Let xt be a vector of N component time series, each without trend in mean and each I(d), d > 0. For the moment, it is assumed that the d-differenced vector series is a zero mean, purely non-deterministic stationary process, so that there is a Wold representation

(1 − B)^d xt = C(B)εt (4.1)

where this is taken to mean that both sides have the same spectral matrix, and εt is an N × 1 zero-mean white noise vector with

E[εtεs′] = 0, t ≠ s,  E[εtεt′] = G,

so that only contemporaneous correlations can occur. Equation (4.1) is normalized by taking C(0) = IN, the unit matrix. Then xt will be said to be cointegrated CI(d, b) if there exists a vector α such that

zt = α′xt is I(d − b), b > 0. The case considered in earlier sections has N = 2, d = b = 1. Moving to general values for N, d, b adds a large number of possible interrelationships and models. In particular it is clear that α need no longer be unique, as there can be several "equilibrium" relationships linking N > 2 variables. If there are r vectors α, each of which produces z's integrated of order less than d, then r is called the "order of cointegration" and it is easily seen that r ≤ N − 1. For the practically important case d = b = 1, it is shown in Granger (1983) and in Granger and Engle (1985) that

(i) C(1) is of rank N − r;

(ii) there exists a vector autoregressive (VAR) representation

A(B)xt = d(B)εt

where A(1) is of rank r, with A(0) = IN and d(B) a scalar stable lag polynomial. If a finite order VAR model exists, it takes this form but with d(B) = 1.

(iii) there exist N × r matrices α, γ of rank r such that

α′C(1) = 0,  C(1)γ = 0,  A(1) = γα′;

(iv) there exists an error-correction representation with

zt = α′xt, an r × 1 stationary vector, of the form

A*(B)(1 − B)xt = −γzt-1 + d(B)εt (4.2)

where A*(0) = IN, A*(1) is of full rank, and |A*(w)| = 0 has all its roots outside the unit circle. It should be noted that the first term on the right hand side can be written as (given (iii) and (iv))

γzt-1 = A(1)xt-1

and so, for all terms in (4.2) to be I(0), it is necessary that A(1) does not have a row consisting of just one non-zero term. A resulting condition on α is mentioned below. Commenting on these results, (i) concerning the rank of C(1) is a necessary and sufficient condition for cointegration, and all other results are derived from it. In (ii) concerning the VAR, A(B) is the adjoint matrix of C(B) and d(B) is proportional to the determinant of C(B) after dividing out unit roots. It follows from (ii) that if a VAR model is estimated for cointegrated variables, efficiency will be lost unless A(1) is restricted to being of rank r. In (iii) it should be noted that the matrices γ, α are not uniquely defined by the set of equations shown. If θ is an r × r matrix of full rank, then γ can be replaced by γθ and α′ by θ^-1α′ and the equations will still hold. This lack of uniqueness leads to some interpretational problems in the error-correction model (4.2), which are similar to the identification problems of classical simultaneous equations models. To illustrate the problem, suppose that N = 3 and r = 2 and that α1, α2 are a pair of cointegrating vectors, giving

zt(α1) = α11x1t + α12x2t + α13x3t

zt(α2) = α21x1t + α22x2t + α23x3t

as a pair of I(0) variables corresponding to equilibrium relationships α1′xt = 0, α2′xt = 0. However, generally any combination of a pair of I(0) variables will also be I(0) and so

zt(λ) = (1 − λ)zt(α1) + λzt(α2)

will also be I(0) [it is assumed that for no λ will zt(λ) consist of just one component of xt: this is a constraint on the matrix α preventing

zt(λ) = x1t, for example, which would make zt ~ I(1)]. Thus, the equilibrium relations are not uniquely identified, and the error-correction models cannot be strictly interpreted as "correcting" for deviations from a particular pair of equilibrium relationships. The only invariant relationship is the line in the (x1, x2, x3) space defined by

zt(α1) = 0,  zt(α2) = 0.

This same line is given by

zt(λ1) = 0,  zt(λ2) = 0

for any λ1 ≠ λ2 and will be called the "equilibrium sub-space". The error-correction model might thus be interpreted as Δxt being influenced by the distance the system is from the equilibrium sub-space. For general N, r, the equilibrium sub-space will be a hyper-plane of dimension N − r. It is unclear if the identification question can be solved in ways similar to those used with simultaneous equations, that is by adding sufficient zeros to A(1) or by appeals to "exogeneity." For the N = 3, r = 2 case, λ's can be chosen to give

zt = a1x1t + a2x2t and

zt = a3x1t + a4x3t

and these seem to provide a natural way for testing for cointegration. For more general N and r, the number of possible combinations becomes extensive and testing will be more difficult, particularly when r is unknown, as will be usual in practice.

Turning briefly to the most general case, with any N, d, b and r, the error-correction model becomes

A*(B)(1 − B)^d xt = −γ[1 − (1 − B)^b](1 − B)^(d-b) zt + d(B)εt (4.3)

where d(B) is a scalar polynomial in B. It should be noted that [1 − (1 − B)^b], if expanded in powers of B, has no term in B^0, and so only lagged zt occur on the right hand side. Again, every term in (4.3) is I(0) when cointegration is present. It is possible to define fractional differencing, as in Granger and Joyeux (1980), and equation (4.3) still holds in this case, although its practical importance has yet to be established.

In the general case (with integer N, b, d, r) Yoo (1986) has considered alternative ways of defining the zt's, possibly using lagged xt components, for a given C(B) matrix but with some added assumptions about its form. Johansen (1985) has also found some mathematically exact and attractive results for the general case, which do not rely on the assumption that all components of xt are integrated of the same order. He points out, for

example, that if x1t is I(1) and x2t is I(0), then x1t and x̃2t = Σj=0,...,t x2,t-j could be cointegrated, thus expanding the class of variables that might be tested. The work of Yoo and Johansen suggests a more general definition of cointegration. Let α(B) be an N × 1 vector of functions of the lag operator B, such that each component αj(B) has the property that αj(1) ≠ 0. Then if xt is a vector of I(d) series such that

zt = α(B)′xt

is I(d − b), xt may be called cointegrated. If a cointegrating vector α occurs, as defined in earlier sections, there will be many α(B) that also cointegrate, and so uniqueness is lost but extra flexibility is gained. Consideration of these possibilities does allow for a generalisation that is potentially very important in economics. Suppose that N = 2, so that xt has just two components, and let α be a cointegrating vector, with α′ = (1, −A). In this case α will be unique, if it does not depend on B, so that r = 1. [Generally, one would expect r < N.] However, there may exist another cointegrating vector of quite a different form,

a′(B) = (1 - A′/D, AA′/D),

where a′ = (1, -A′) and D = 1 - B. An example of this possibility is where xt = (xt, yt)′, and xt, yt are cointegrated with vector a, giving equilibrium error:

zt = xt - Ayt

and xt, Szt = Σ_{j=0}^{t} zt-j are cointegrated, so that xt - A′Szt is I(0). This would correspond to a cointegrating vector of the form a(B)′ = (1 - A′S, AA′S), where S = 1/D and D = 1 - B.

For example, xt, yt could be sales and production of some industry, zt = change in inventory, Szt inventory and xt, yt could be cointegrated as well as xt, Szt. Another example might be xt = income, yt = expenditure, zt = savings, Szt = wealth. Such series might be called “multicointegrated.” Throughout this section, if the series involved have deterministic trends in mean, these need to be estimated and removed before the concepts discussed can be applied. One method of removing trends of general shape is discussed in Granger (1985).
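As an informal check of the multicointegration idea, the following simulation (a minimal Python/NumPy sketch; the loadings, the AR coefficient, and the crude AR(1)-based stationarity indicator are invented for the illustration and are not part of the paper) builds a pair of series that are cointegrated in levels while one level is also cointegrated with the cumulated equilibrium error:

    import numpy as np

    rng = np.random.default_rng(0)
    T = 5000

    # Stationary equilibrium error z_t (e.g. "change in inventory" or "savings")
    z = np.zeros(T)
    for t in range(1, T):
        z[t] = 0.5 * z[t - 1] + rng.normal()

    Sz = np.cumsum(z)                    # cumulated error ("inventory"/"wealth"), I(1)
    x = 0.2 * Sz + rng.normal(size=T)    # "sales": cointegrated with Sz (A' = 0.2)
    y = x - z                            # "production": x - y = z, so (1, -1) cointegrates

    def ar1(u):
        # First-order autocorrelation as a crude stationarity indicator
        return (u[:-1] @ u[1:]) / (u[:-1] @ u[:-1])

    print("AR(1) of x - y      :", round(ar1(x - y), 3))         # well below 1: I(0)
    print("AR(1) of x - 0.2*Sz :", round(ar1(x - 0.2 * Sz), 3))  # well below 1: I(0)
    print("AR(1) of x          :", round(ar1(x), 4))             # close to 1: I(1)

In practice the two cointegrating coefficients would of course have to be estimated rather than assumed known.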

5. FURTHER GENERALIZATIONS

The processes considered so far have been linear and with time-invariant parameters. Clearly more general models, and possibly more realistic ones, are achieved by removing these restrictions.

As institutions, technology and society change, so may any equilibrium relationships. In the bivariate case, the cointegrating parameter may be slowly changing with time, for instance. To proceed with analysis of this idea, it is necessary to define time-varying parameter (TVP) I(0) and I(1) processes. Using concepts introduced by Priestley (1981), it is possible to define a time-varying spectrum ft(w) for a process such as one generated by an ARMA model with TVP. For example, consider

xt = b(t)xt-1 + et, where b(t) is a deterministic function of time, obeying the restriction that

|b(t)| < 1 for all t. If ft(w) is bounded above and also is positive for all t, w, the process may be called TVP I(0). If the change of xt is TVP I(0), then xt can be called TVP I(1). For a vector process xt that is TVP I(d) and has no deterministic components, Cramer (1961) has shown that there exists a generalised Wold representation

(1 - B)^d xt = Ct(B)et, (5.1)

where

E[et] = 0,

E[et es′] = 0, s ≠ t,

E[et et′] = Wt,

Ct(0) = IN, and if

Ct(B) = Σ_j Cjt B^j, it will be assumed that

Σ_j Cjt Wt Cjt′ < ∞,

so that the variance of (1 - B)^d xt is finite. Assume now that Ct(1) has rank N - 1 for all t, so that the cointegration rank is 1; then there will exist N × 1 vectors a(t), g(t) such that

a′(t)Ct(1) = 0,

Ct(1)g(t) = 0. The TVP equilibrium error process will then be

zt = a′(t)xt. (5.2) The corresponding error-correction models will be as (4.2) but with A*(B), g, d(B) all functions of time. A testing procedure would involve estimating the equilibrium regression (5.2) using some TVP techniques, such as a Kalman filter procedure, probably assuming that the components of a(t) are stochastic but slowly changing. It might be thought that allowing a(t) to change with time can always produce an I(0) zt. For example, suppose that N = 2 and consider

zt = xt - A(t)yt.

Taking A(t) = xt/yt clearly gives zt = 0, which is an uninteresting I(0) situation. However, it is also clear that taking A(t) = xt/yt + d will produce a zt that is I(1) in general. Interpretation of any TVP cointegration test will have to consider this possible difficulty. Turning to the possibility of non-linear cointegration, it might be noted that in the basic error-correction model (2.5) or (4.2) the zt-1 terms appear linearly, so that changes in dependent variables are related to zt-1 whatever its size. In the actual economy, a more realistic behaviour is to ignore small equilibrium errors but to react substantially to large ones, suggesting a non-linear relationship. An error-correction model that captures this idea is, in the bivariate case,

Dxt = f1(zt-1) + lagged(Dxt, Dyt) + e1t,

Dyt = f2(zt-1) + lagged(Dxt, Dyt) + e2t, (5.3) where

zt = xt - Ayt.

It is generally true that if zt is I(0) with constant variance, then f(zt) will also be I(0). Similarly, if zt is I(1) then generally f(zt) is also I(1), provided f(z) has a linear component for large z, i.e. f(z)/z → Σ_{j=0}^{∞} aj z^{-j} with a0 ≠ 0. A rigorous treatment of these results is provided by Escribano (1986).
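A small simulation can make the point concrete. The sketch below (Python/NumPy; the threshold form of f, the band, and the adjustment speed are hypothetical choices made for the illustration, not taken from the paper) generates the bivariate system (5.3) with an f that ignores small equilibrium errors and corrects large ones; the resulting zt remains I(0) even though correction is only intermittent:

    import numpy as np

    rng = np.random.default_rng(1)
    T = 3000

    def f(z, band=1.0, speed=0.5):
        # Ignore small equilibrium errors, react to large ones (threshold response)
        return -speed * z * (abs(z) > band)

    x = np.zeros(T)
    y = np.zeros(T)
    for t in range(1, T):
        z_lag = x[t - 1] - y[t - 1]            # z = x - Ay with A = 1
        x[t] = x[t - 1] + f(z_lag) + rng.normal(scale=0.3)
        y[t] = y[t - 1] - f(z_lag) + rng.normal(scale=0.3)

    z = x - y
    print("std of z           :", round(float(z.std()), 2))    # bounded: z is I(0)
    print("share with |z| > 1 :", round(float(np.mean(np.abs(z) > 1)), 2))
    print("std of x           :", round(float(x.std()), 2))    # large: x drifts, I(1)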

As generally zt and f(zt) will be integrated of the same order, if a test suggests that a pair of series are cointegrated, then a non-linear error-correction model of form (5.3) is a possibility. Of course, most of the other results of previous sections do not hold, as they are based on the linear Wold representation. Equation (5.3) can be estimated by one of the many currently available non-linear, non-parametric estimation techniques, such as that employed in Engle, Granger, Rice and Weiss (1986). Error-correction models essentially consider processes whose components drift widely but where the joint process has a generalised preference towards a certain part of the process space. In the cases so far considered this preferred sub-space is a hyper-plane, but more general preferred sub-spaces could be considered, although with considerably increased mathematical difficulty.

6. CONCLUSION

This paper has attempted to expand the discussion about differencing macro-economic series when model building by emphasizing the use of a further factor, the “equilibrium error”, that arises from the concept of cointegration. This factor allows the introduction of the impact of long-run or “equilibrium” economic theories into the models used by time-series analysts to explain the short-run dynamics of economic data. The resulting error-correction models should produce better short-run forecasts and will certainly produce long-run forecasts that hold together in economically meaningful ways. If long-run economic theories are to have useful impact on econometric models they must be helpful in model specification and yet not distract from the short-run aspects of the model. Historically, many econometric models were based on equilibrium relationships suggested by a theory, such as

xt = Ayt + et, (6.1)

without any consideration of the levels of integratedness of the observed variables xt, yt or of the residual series et. If xt is I(0) but yt is I(1), for example, the value of A in the resulting regression is forced to be near zero. If et is I(1), standard estimation techniques are not appropriate. A test for cointegration can thus be thought of as a pre-test to avoid “spurious regression” situations. Even if xt and yt are cointegrated, an equation such as (6.1) can only provide a start for the modeling process, as et may be explainable by lagged changes in xt and yt, eventually resulting in an error-correction model of the form (2.5). However, there must be two such equations, which again makes the equation (2.5) a natural form.

Ignoring the process of properly modeling the et can lead to forecasts from (6.1) that can be beaten by simple time-series models, at least in the short term. Whilst the paper has not attempted to link error-correction models with optimizing economic theory, through control variables for example, there is doubtless much useful work to be done in this area. Testing for cointegration in general situations is still in an early stage of development. Whether or not cointegration occurs is an empirical question, but the beliefs of economists do appear to support its existence, and the usefulness of the concept appears to be rapidly gaining acceptance.

REFERENCES

Box, G. E. P. and Jenkins, G. M. (1970). Time Series Analysis, Forecasting and Control, San Francisco, Holden Day.
Cramer, H. (1961). “On Some Classes of Non-Stationary Processes”, Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, pp. 157–78, University of California Press.

Currie, D. (1981). “Some Long-Run Features of Dynamic Time-Series Models”, The Economic Journal, Vol. 91, pp. 704–15.
Davidson, J. E. H., Hendry, D. F., Srba, F. and Yeo, S. (1978). “Econometric Modeling of the Aggregate Time-Series Relationship Between Consumer’s Expenditure and Income in the United Kingdom”, The Economic Journal, Vol. 88, pp. 661–92.
Dawson, A. (1981). “Sargan’s Wage Equation: A Theoretical and Empirical Reconstruction”, Applied Economics, Vol. 13, pp. 351–63.
Dickey, D. A. and Fuller, W. A. (1979). “Distributions of the Estimators for Autoregressive Time Series with a Unit Root”, Journal of the American Statistical Association, Vol. 74, pp. 427–31.
Dickey, D. A. and Fuller, W. A. (1981). “The Likelihood Ratio Statistics for Autoregressive Time Series with a Unit Root”, Econometrica, Vol. 49, pp. 1057–72.
Engle, R. F., Granger, C. W. J., Rice, J. and Weiss, A. (1986). “Non-Parametric Estimation of the Relationship Between Weather and Electricity Demand”, Journal of the American Statistical Association (forthcoming).
Escribano, A. (1986). Ph.D. thesis, Economics Department, University of California, San Diego.
Granger, C. W. J. (1966). “The Typical Spectral Shape of an Economic Variable”, Econometrica, Vol. 34, pp. 150–61.
Granger, C. W. J. (1983). “Co-Integrated Variables and Error-Correcting Models”, UCSD Discussion Paper 83-13a.
Granger, C. W. J. and Engle, R. F. (1985). “Dynamic Specification with Equilibrium Constraints: Cointegration and Error-Correction” (forthcoming, Econometrica).
Granger, C. W. J. and Escribano, A. (1986). “Limitation on the Long-Run Relationship Between Prices from an Efficient Market”, UCSD Discussion Paper.
Granger, C. W. J. and Joyeux, R. (1980). “An Introduction to Long-Memory Time Series and Fractional Differencing”, Journal of Time Series Analysis, Vol. 1, pp. 15–29.
Hendry, D. F. and Ericsson, N. R. (1983). “Assertion without Empirical Basis: An Econometric Appraisal of ‘Monetary Trends in ... the United Kingdom’ by Milton Friedman and Anna Schwartz”, Bank of England Academic Panel Paper No. 22.
Hendry, D. F. and von Ungern-Sternberg, T. (1981). “Liquidity and Inflation Effects on Consumer’s Expenditure”, in Deaton, A. S. (ed.), Essays in the Theory and Measurement of Consumers’ Behaviour, Cambridge University Press.
Johanssen, S. (1985). “The Mathematical Structure of Error-Correction Models”, Discussion Paper, Maths Department, University of Copenhagen.
Nelson, C. R. and Plosser, C. I. (1982). “Trends and Random Walks in Macroeconomic Time Series”, Journal of Monetary Economics, Vol. 10, pp. 139–62.
Nickell, S. (1985). “Error-Correction, Partial Adjustment and All That: An Expository Note”, Oxford Bulletin of Economics and Statistics, Vol. 47, pp. 119–29.
Phillips, A. W. (1957). “Stabilization Policy and the Time Forms of Lagged Responses”, Economic Journal, Vol. 67, pp. 265–77.

Priestley, M. B. (1981). Spectral Analysis of Time Series, Academic Press, New York.
Salmon, M. (1982). “Error Correction Mechanisms”, The Economic Journal, Vol. 92, pp. 615–29.
Sargan, J. D. (1964). “Wages and Prices in the United Kingdom: A Study in Economic Methodology”, in Hart, P., Mills, G. and Whittaker, J. N. (eds.), Econometric Analysis for National Economic Planning, Butterworths, London.
Sargan, J. D. and Bhargava, A. (1983). “Testing Residuals from Least Squares Regression for Being Generated by the Gaussian Random Walk”, Econometrica, Vol. 51, pp. 153–74.
Stock, J. H. (1984). “Asymptotic Properties of a Least Squares Estimator of Co-Integrating Vectors”, Manuscript, Harvard University.
Yoo, S. (1986). Ph.D. thesis, Economics Department, University of California, San Diego.

CHAPTER 10

Seasonal Integration and Cointegration* S. Hylleberg, R. F. Engle, C. W. J. Granger, and B. S. Yoo**

This paper develops tests for roots in linear time series which have a modulus of one but which correspond to seasonal frequencies. Critical values for the tests are generated by Monte Carlo methods or are shown to be available from Dickey–Fuller or Dickey–Hasza–Fuller critical values. Representations for multivariate processes with combinations of seasonal and zero-frequency unit roots are developed, leading to a variety of autoregressive and error-correction representations. The techniques are used to examine cointegration at different frequencies between consumption and income in the U.K.

* Journal of Econometrics, 44, 1990, 215–238.
** The research was carried out while the first author was on sabbatical at UCSD and the last author was completing his dissertation. The authors are indebted to the University of Aarhus, NSF SES87-05884, and SES87-04669 for financial support. The data will be made available through the Inter-university Consortium for Political and Social Research at the University of Michigan.

1. INTRODUCTION

The rapidly developing time-series analysis of models with unit roots has had a major impact on econometric practice and on our understanding of the response of economic systems to shocks. Univariate tests for unit roots were first proposed by Fuller (1976) and Dickey and Fuller (1979) and were applied to a range of macroeconomic data by Nelson and Plosser (1982). Granger (1981) proposed the concept of cointegration, which recognized that even though several series all had unit roots, a linear combination could exist which would not. Engle and Granger (1987) present a theorem giving several representations of cointegrated series and tests and estimation procedures. The testing is a direct generalization of Dickey and Fuller to the hypothesized linear combination. All of this work assumes that the root of interest not only has a modulus of one, but is precisely one. Such a root corresponds to a zero-frequency peak in the spectrum. Furthermore, it assumes that there are no other unit roots in the system. Because many economic time series exhibit substantial seasonality, there is a definite possibility that there may be unit roots at other frequencies such as the seasonals. In fact, Box and Jenkins (1970) and the many time-series analysts influenced by their work implicitly assume that there are seasonal unit roots by using the seasonal differencing filter. This paper describes in section 2 various classes of seasonal processes and in section 3 sets out to test for seasonal unit roots in time-series data both in the presence of other unit roots and other seasonal processes. Section 4 defines seasonal cointegration and derives several representations. Section 5 gives an empirical example and section 6 concludes.

2. SEASONAL TIME-SERIES PROCESSES

Many economic time series contain important seasonal components and there are a variety of possible models for seasonality which may differ across series. A seasonal series can be described as one with a spectrum having distinct peaks at the seasonal frequencies ws ≡ 2pj/s, j = 1, ..., s/2, where s is the number of time periods in a year, assuming s to be an even number and that a spectrum exists. In this paper, quarterly data will be emphasised so that s = 4, but the results can be extended in a straightforward fashion to monthly data, for example. Three classes of time-series models are commonly used to model seasonality. These can be called:

(a) Purely deterministic seasonal processes,
(b) Stationary seasonal processes,
(c) Integrated seasonal processes.

Each is frequently used in empirical work, often with an implicit assumption that they are all equivalent. The first goal of this paper is to develop a testing procedure which will determine what class of seasonal processes is responsible for the seasonality in a univariate process. Subsequently this approach will deliver multivariate results on cointegration at seasonal frequencies. A purely deterministic seasonal process is a process generated by seasonal dummy variables such as the following quarterly series:

xt = mt, where mt = m0 + m1S1t + m2S2t + m3S3t. (2.1) Notice that this process can be perfectly forecast and will never change its shape. A stationary seasonal process can be generated by a potentially infinite autoregression

j(B)xt = et, et i.i.d., with all of the roots of j(B) = 0 lying outside the unit circle but where some are complex pairs with seasonal periodicities. More precisely, the spectrum of such a process is given by

f(w) = s^2 |j(e^{iw})|^{-2}, which is assumed to have peaks at some of the seasonal frequencies ws. An example for quarterly data is

xt = rxt-4 + et, which has a peak at both the seasonal periodicities p/2 (one cycle per year) and p (two cycles per year) as well as at zero frequency (zero cycles per year).

A series xt is an integrated seasonal process if it has a seasonal unit root in its autoregressive representation. More generally, it is integrated of order d at frequency q if the spectrum of xt takes the form

f(w) = c(w - q)^{-2d}

for w near q. This is conveniently denoted by

xt ~ Iq(d). The paper will concentrate on the case d = 1. An example of an integrated quarterly process at two cycles per year is

xt = -xt-1 + et, (2.2) and at one cycle per year it is

xt = -xt-2 + et. (2.3) The very familiar seasonal differencing operator, advocated by Box and Jenkins (1970) and used as a seasonal process by Grether and Nerlove (1970) and Bell and Hillmer (1984) for example, can be written as

(1 - B^4)xt = et, with (1 - B^4) = (1 - B)(1 + B + B^2 + B^3) = (1 - B)(1 + B)(1 + B^2) = (1 - B)S(B), (2.4)

which therefore has four roots with modulus one: one at zero frequency, one at two cycles per year, and a complex pair at one cycle per year. The properties of seasonally integrated series are not immediately obvious but are quite similar to the properties of ordinary integrated processes as established, for example, by Fuller (1976). In particular they have ‘long memory’, so that shocks last forever and may in fact change the seasonal patterns permanently. They have variances which increase linearly since the start of the series and are asymptotically uncorrelated with processes having unit roots at other frequencies.
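The root structure in (2.4) is easy to verify numerically. The following NumPy fragment (added purely as an illustration) recovers the four unit-modulus roots of the seasonal differencing polynomial and their frequencies:

    import numpy as np

    # 1 - B**4, coefficients listed from the constant term upwards
    coefs_increasing = [1, 0, 0, 0, -1]

    # numpy.roots expects decreasing powers, so reverse the list
    roots = np.roots(coefs_increasing[::-1])
    print(np.sort_complex(roots))          # -1, -i, +i, +1
    print(np.abs(roots))                   # all moduli equal 1

    # Frequencies in cycles per quarter: 0, 1/2, and the +/- 1/4 pair
    print(np.angle(roots) / (2 * np.pi))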

The generating mechanisms being considered, such as (2.2) or (2.4), are stochastic difference equations. They generalize the ordinary I(1), or

I0(1) in the present notation, process. It is well known that an equation of the form

(1 - B)xt = et (2.5) has two components to its solution: the homogeneous solution x1t, where

(1 - B)x1t = 0, and the particular solution x2t given by

x2t = (1 - B)^{-1}et.

Thus xt = x1t + x2t, where x1t = x0 (the starting value) and x2t = Σ_{j=0}^{t-1} et-j. Clearly, if E[et] = m ≠ 0, then x2t will contain a linear trend mt. The equation with S(B) = (1 + B)(1 + B^2),

S(B)xt = et, (2.6) also has a solution with two components. The homogeneous solution is

x1t = c1(-1)^t + c2(i)^t + c3(-i)^t, where c1, c2, c3 are determined from the starting conditions, plus the requirement that x1t is a real series, i.e., c2 and c3 are complex conjugates. If x-2 = x-1 = x0 = 0, so that the starting values contain no seasonal, then x1t ≡ 0. The particular solution is

x2t = [S(B)]^{-1}et, and noting that

[S(B)]^{-1} = (1/2)[(1 + B)^{-1} + (1 - B)(1 + B^2)^{-1}], some algebra gives

x2t = (1/2)[Σ_{j=0}^{t-1} (-1)^j et-j + Σ_{j=0}^{int[(t-1)/2]} (-1)^j Det-2j], where D = 1 - B and int[z] is the largest integer in z. The two parts of this solution correspond to the two seasonal roots and to eqs. (2.2) and (2.3). The particular solutions to eqs. (2.5), (2.2), and (2.3) are given, respectively, by

s1t = Σ_{j=0}^{t-1} et-j for the zero-frequency root,
s2t = Σ_{j=0}^{t-1} (-1)^j et-j for the two-cycle-per-year root,
s3t = Σ_{j=0}^{int[(t-1)/2]} (-1)^j Det-2j for the one-cycle-per-year root.

The variances of these series are given by

V(s1t) = V(s2t) = V(s3t) = ts^2, so that all of the unit roots have the property that the variance tends to infinity as the process evolves. When the series are excited by the same

{et} and t is divisible by four, the covariances are all zero. At other values of t the covariances are at most s^2, so the series are asymptotically uncorrelated as well as being uncorrelated in finite samples for complete years of data.
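These variance and orthogonality claims are easy to reproduce by simulation. The sketch below (Python/NumPy; the sample sizes and seed are arbitrary choices for the illustration) builds s1t, s2t, and s3t recursively from a common white noise and checks them at a date divisible by four:

    import numpy as np

    rng = np.random.default_rng(2)
    n_rep, T = 5000, 200                    # T divisible by four: complete years
    e = rng.normal(size=(n_rep, T))

    s1 = np.zeros((n_rep, T))               # (1 - B) s1_t = e_t
    s2 = np.zeros((n_rep, T))               # (1 + B) s2_t = e_t
    s3 = np.zeros((n_rep, T))               # (1 + B^2) s3_t = (1 - B) e_t
    for t in range(T):
        e_lag = e[:, t - 1] if t >= 1 else 0.0
        s1[:, t] = (s1[:, t - 1] if t >= 1 else 0.0) + e[:, t]
        s2[:, t] = -(s2[:, t - 1] if t >= 1 else 0.0) + e[:, t]
        s3[:, t] = -(s3[:, t - 2] if t >= 2 else 0.0) + e[:, t] - e_lag

    series = {"s1": s1, "s2": s2, "s3": s3}
    for name, s in series.items():
        # variance at the last date should be roughly T * sigma^2 = 200 here
        print(name, "variance at t = T:", round(float(s[:, -1].var()), 1))
    for a, b in (("s1", "s2"), ("s1", "s3"), ("s2", "s3")):
        c = np.corrcoef(series[a][:, -1], series[b][:, -1])[0, 1]
        print("corr(%s, %s) at a complete year: %.3f" % (a, b, c))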

It should be noted that, if E[et] = m ≠ 0 for all t, then the first term in x2t will involve an oscillation of period 2. The complete solution to (2.6) contains both cyclical deterministic terms, corresponding to “seasonal dummies”, plus long nondeclining sums of past innovations or their changes. Thus, a series generated by (2.6) will have a component that is seasonally integrated and may also have a deterministic seasonal component, largely depending on the starting values. A series generated by (2.6) will be inclined to have a seasonal pattern whose peak varies slowly through time, but if the initial deterministic component is large, it may not appear to drift very fast.

If xt is generated by

(1 - B^4)xt = et, (2.7) the equation will have solutions that are linear combinations of those for (2.5) and (2.6). A series with a clear seasonal may be seasonally integrated, have a deterministic seasonal, a stationary seasonal, or some combination. A general class of linear time-series models which exhibit potentially complex forms of seasonality can be written as

d(B)a(B)(xt - mt) = et, (2.8) where all the roots of a(z) = 0 lie outside the unit circle, all the roots of d(z) = 0 lie on the unit circle, and mt is given as above. Stationary seasonality and other stationary components of x are absorbed into a(B), while deterministic seasonality is in mt when there are no seasonal unit roots in d(B). Section 3 of this paper considers how to test for seasonal unit roots and zero-frequency unit roots when other unit roots are possibly present and when deterministic or stochastic seasonals may be present. A pair of series, each of which is integrated at frequency w, are said to be cointegrated at that frequency if a linear combination of the series is not integrated at w. If the linear combination is labeled a, then we use the notation

xt ~ CIw with cointegrating vector a. This will occur if, for example, each of the series contains the same factor which is Iw(1). In particular, if

xt = avt + x̄t, yt = vt + ȳt, where vt is Iw(1) and x̄t and ȳt are not, then zt ≡ xt - ayt is not Iw(1), although it could still be integrated at other frequencies. If a group of series are cointegrated, there are implications about their joint generating mechanism. These are considered in section 4 of this paper.
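To illustrate the common-factor case, here is a toy simulation (Python/NumPy; the loading a = 0.7 and the variance-ratio diagnostic are invented for the illustration) in which both series inherit a biannual root from vt but the combination xt - ayt does not:

    import numpy as np

    rng = np.random.default_rng(5)
    T = 2000

    # Common factor integrated at frequency 1/2: (1 + B) v_t = e_t
    v = np.zeros(T)
    for t in range(1, T):
        v[t] = -v[t - 1] + rng.normal()

    a = 0.7
    x = a * v + rng.normal(size=T)     # x_t = a v_t + stationary part
    y = v + rng.normal(size=T)         # y_t = v_t + stationary part
    z = x - a * y                      # the root at -1 cancels

    def ratio(s):
        # var((1 + B)s)/var(s): tiny when s has a root at -1, around 2 otherwise
        return round(float((s[1:] + s[:-1]).var() / s.var()), 3)

    print("x:", ratio(x), " y:", ratio(y), " z:", ratio(z))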

3. TESTING FOR SEASONAL UNIT ROOTS

It is the goal of the testing procedure proposed in this paper to determine whether or not there are any seasonal unit roots in a univariate series. The test must take seriously the possibility that seasonality of other forms may be present. At the same time, the tests for conventional unit roots will be examined in seasonal settings. In the literature there exist a few attempts to develop such tests. Dickey, Hasza, and Fuller (1984), following the lead suggested by Dickey and Fuller for the zero-frequency unit-root case, propose a test of the hypothesis a = 1 against the alternative a < 1 in the model xt = axt-s + et. The asymptotic distribution of the least-squares estimator is found and the small-sample distribution obtained for several values of s by Monte Carlo methods. In addition the test is extended to the case of higher-order stationary dynamics. A major drawback of this test is that it does not allow for unit roots at some but not all of the seasonal frequencies, and that the alternative has a very particular form, namely that all the roots have the same modulus. Exactly the same problems are encountered by the tests proposed by Bhargava (1987). In Ahtola and Tiao (1987) tests are proposed for the case of complex roots in the quarterly case, but their suggestion may at best be a part of a more comprehensive test strategy. In this paper we propose a test and a general framework for a test strategy that looks at unit roots at all the seasonal frequencies as well as the zero frequency. The test follows the Dickey–Fuller framework and in fact has a well-known distribution, possibly on transformed variables, in some special cases.

For quarterly data, the polynomial (1 - B^4) can be expressed as

(1 - B^4) = (1 - B)(1 + B)(1 - iB)(1 + iB) = (1 - B)(1 + B)(1 + B^2), (3.1)

so that the unit roots are 1, -1, i, and -i, which correspond to zero frequency, 1/2 cycle per quarter or two cycles per year, and 1/4 cycle per quarter or one cycle per year. The last root, -i, is indistinguishable from the one at i with quarterly data (the aliasing phenomenon) and is therefore also interpreted as the annual cycle. To test the hypothesis that the roots of j(B) lie on the unit circle against the alternative that they lie outside the unit circle, it is convenient to rewrite the autoregressive polynomial according to the following proposition, which is originally due to Lagrange and is used in approximation theory.

Proposition: Any (possibly infinite or rational) polynomial j(B), which is finite-valued at the distinct, nonzero, possibly complex points q1,...,qp, can be expressed in terms of elementary polynomials and a remainder as follows:

j(B) = Σ_{k=1}^{p} lk D(B)/dk(B) + D(B)j**(B), (3.2)

where the lk are a set of constants, j**(B) is a (possibly infinite or rational) polynomial, and

dk(B) = 1 - (1/qk)B, D(B) = Π_{k=1}^{p} dk(B).

Proof: Let lk be defined to be

lk = j(qk)/Π_{j≠k} dj(qk), which always exists since all the roots of the d’s are distinct and the polynomial is bounded at each value by assumption. The polynomial

j(B) - Σ_{k=1}^{p} lk D(B)/dk(B) = j(B) - Σ_{k=1}^{p} j(qk) Π_{j≠k} [dj(B)/dj(qk)]

will have zeroes at each point B = qk. Thus it can be written as the product of a polynomial, say j**(B), and D(B). QED

An alternative and very useful form of this expression is obtained by adding and subtracting D(B)Σk lk to (3.2) to get

j(B) = Σ_{k=1}^{p} lk D(B)[1 - dk(B)]/dk(B) + D(B)j*(B), (3.3)

where j*(B) = j**(B) + Σk lk. In this representation j(0) = j*(0), which is normalized to unity.
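As a worked illustration of the proposition (added here; it is not part of the original text), take the simplest case p = 2 with q1 = 1 and q2 = -1, so that d1(B) = 1 - B, d2(B) = 1 + B, and D(B) = 1 - B^2:

    \varphi(B) \;=\; \lambda_1 (1+B) \;+\; \lambda_2 (1-B) \;+\; \varphi^{**}(B)\,(1-B^2),
    \qquad
    \lambda_1 = \frac{\varphi(1)}{\delta_2(1)} = \tfrac{1}{2}\varphi(1),
    \quad
    \lambda_2 = \frac{\varphi(-1)}{\delta_1(-1)} = \tfrac{1}{2}\varphi(-1).
    % Setting B = 1 annihilates the \lambda_2 and remainder terms, giving
    % \varphi(1) = 2\lambda_1; setting B = -1 gives \varphi(-1) = 2\lambda_2.
    % Hence \varphi has a unit root at +1 exactly when \lambda_1 = 0,
    % and a root at -1 exactly when \lambda_2 = 0.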

It is clear that the polynomial j(B) will have a root at qk if and only if lk = 0. Thus testing for unit roots can be carried out equivalently by testing for parameters l = 0 in an appropriate expansion. To apply this proposition to testing for seasonal unit roots in quarterly data, expand the polynomial j(B) about the roots +1, -1, i, and -i as qk, k = 1, ..., 4. Then, from (3.3),

j(B) = l1 B(1 + B)(1 + B^2) - l2 B(1 - B)(1 + B^2)
       - l3 iB(1 - B)(1 + B)(1 - iB) + l4 iB(1 - B)(1 + B)(1 + iB)
       + j*(B)(1 - B^4).

Clearly, l3 and l4 must be complex conjugates since j(B) is real. Simplifying and substituting p1 = -l1, p2 = -l2, 2l3 = -p3 + ip4, and 2l4 = -p3 - ip4 gives

j(B) = -p1 B(1 + B + B^2 + B^3) - p2(-B)(1 - B + B^2 - B^3) - (p4 + p3B)(-B)(1 - B^2) + j*(B)(1 - B^4). (3.4)

The testing strategy is now apparent. The data are assumed to be generated by a general autoregression

j(B)xt = et, (3.5) and (3.4) is used to replace j(B), giving

j*(B)y4t = p1 y1t-1 + p2 y2t-1 + p3 y3t-2 + p4 y3t-1 + et, (3.6)

where

y1t = (1 + B + B^2 + B^3)xt = S(B)xt,
y2t = -(1 - B + B^2 - B^3)xt,
y3t = -(1 - B^2)xt,
y4t = (1 - B^4)xt = D4xt. (3.7)

Eq. (3.6) can be estimated by ordinary least squares, possibly with additional lags of y4 to whiten the errors. To test the hypothesis that j(qk) = 0, where qk is either 1, -1, or ±i, one needs simply to test that lk is zero.
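The mechanics of (3.6)-(3.7) can be illustrated by the following sketch (Python/NumPy on simulated data; no augmentation lags or deterministic terms are included, and the printed ‘t’ ratios must be judged against the nonstandard critical values tabulated in tables 10.1a and 10.1b below, so this is illustrative only):

    import numpy as np

    rng = np.random.default_rng(3)
    T = 400
    e = rng.normal(size=T)

    # Simulate the null model (1 - B^4) x_t = e_t
    x = np.zeros(T)
    for t in range(4, T):
        x[t] = x[t - 4] + e[t]

    t_ix = np.arange(5, T)
    y1 = lambda t: x[t] + x[t - 1] + x[t - 2] + x[t - 3]     # (1+B+B^2+B^3)x
    y2 = lambda t: -(x[t] - x[t - 1] + x[t - 2] - x[t - 3])  # -(1-B+B^2-B^3)x
    y3 = lambda t: -(x[t] - x[t - 2])                        # -(1-B^2)x

    # Regression (3.6): y4_t on y1_{t-1}, y2_{t-1}, y3_{t-2}, y3_{t-1}
    X = np.column_stack([y1(t_ix - 1), y2(t_ix - 1), y3(t_ix - 2), y3(t_ix - 1)])
    Y = x[t_ix] - x[t_ix - 4]                                # y4_t = (1-B^4)x_t
    pi_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

    resid = Y - X @ pi_hat
    s2 = resid @ resid / (len(Y) - X.shape[1])
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    print("pi estimates:", pi_hat.round(3))                  # near zero under the null
    print("'t' ratios  :", (pi_hat / se).round(2))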

For the root 1 this is simply a test for p1 = 0, and for -1 it is p2 = 0. For the complex roots, l3 will have absolute value of zero only if both p3 and p4 equal zero, which suggests a joint test. There will be no seasonal unit roots if p2 and either p3 or p4 are different from zero, which therefore requires the rejection of both a test for p2 and a joint test for p3 and p4. To find that a series has no unit roots at all and is therefore stationary, we must establish that each of the p’s is different from zero (save

possibly either p3 or p4). A joint test will not deliver the required evidence. The natural alternative for these tests is stationarity. For example, the alternative to j(1) = 0 should be j(1) > 0, which means p1 < 0. Similarly, the stationary alternative to j(-1) = 0 is j(-1) > 0, which corresponds to p2 < 0. Finally, the alternative to |j(i)| = 0 is |j(i)| > 0. Since the null is two-dimensional, it is simplest to compute an F-type statistic for the joint null p3 = p4 = 0 against the alternative that they are not both equal to zero. An alternative strategy is to compute a two-sided test of p4 = 0 and, if this is accepted, continue with a one-sided test of p3 = 0 against the alternative p3 < 0. If we restrict our attention to alternatives where it is assumed that p4 = 0, a one-sided test for p3 would be appropriate, with rejection for p3 < 0. Potentially this could lack power if the first-step assumption is not warranted. In the more complex setting where the alternative includes the possibility of deterministic components, it is necessary to allow mt ≠ 0. The testable model becomes

j*(B)y4t = p1 y1t-1 + p2 y2t-1 + p3 y3t-2 + p4 y3t-1 + mt + et, (3.8)

which can again be estimated by OLS and the statistics on the p’s used for inference. The asymptotic distributions of the t-statistics from this regression were analyzed by Chan and Wei (1988). The basic finding is that the asymptotic distribution theory for these tests can be extracted from that of Dickey and Fuller (1979) and Fuller (1976) for p1 and p2, and from Dickey, Hasza, and Fuller (1984) for p3 if p4 is assumed to be zero. The tests are asymptotically similar or invariant with respect to nuisance parameters. Furthermore, the finite-sample results are well approximated by the asymptotic theory and the tests have reasonable power against each of the specific alternatives. It is clear that several null hypotheses will be tested for each case of interest. These can all be computed from the same least-squares regression (3.6) or (3.8) unless the sequential testing of p3 and p4 is desired. To show intuitively how these limiting distributions relate to the standard unit-root tests, consider (3.6) with j(B) = 1. The test for p1 = 0 will have the familiar Dickey–Fuller distribution if p2 = p3 = p4 = 0, since the model can be written in the form

y1t = (1 + p1)y1t-1 + et. Similarly,

y2t = -(1 + p2)y2t-1 + et, if the other p’s are zero. This is a test for a root of -1, which was shown by Dickey and Fuller to be the mirror of the Dickey–Fuller

distribution. If y2t is regressed on -y2t-1, the ordinary DF distribution will be appropriate. The third test can be written as

y3t = -(1 + p3)y3t-2 + et, assuming p4 = 0, which is therefore the mirror of the Dickey–Hasza–Fuller distribution for biannual seasonality. The inclusion of y3t-1 in the regression recognizes potential phase shifts in the annual component. Since the null is that p3 = p4 = 0, the assumption that p4 = 0 may merely reduce the power of the test against some alternatives. To show that the same distributions are obtained when it is not known a priori that some of the p’s are zero, two cases must be considered. First, if the p’s other than the one being tested are truly nonzero, then the process does not have unit roots at these frequencies and the corresponding y’s are stationary. The regression is therefore equivalent to a standard augmented unit-root test. If however some of the other p’s are zero, there are other unit roots in the regression. However, it is exactly under this condition that it is shown in section 2 that the corresponding y’s are asymptotically uncorrelated. The distribution of the test statistic will not be affected by the inclusion of a variable with a zero coefficient which is orthogonal to the included variables. For example, when testing p1 = 0, suppose p2 = 0 but y2 is still included in the regression. Then y1 and y2 will be asymptotically uncorrelated since they have unit roots at different frequencies, and both will be asymptotically uncorrelated with lags of y4, which is stationary. The test for p1 = 0 will have the same limiting distribution regardless of whether y2 is included in the regression. Similar arguments follow for the other cases. When deterministic components are present in the regression, even if not in the data, the distributions change. Again, the changes can be anticipated from this general approach. The intercept and trend portions of the deterministic mean influence only the distribution of p1 because they have all their spectral mass at zero frequency. Once the intercept is included, the remaining three seasonal dummies do not affect the limiting distribution of p1. The seasonal dummies, however, do affect the distribution of p2, p3, and p4. Table 10.1a gives the Monte Carlo critical values for the one-sided ‘t’ tests on p1, p2, and p3 in the most important cases. These are very close to the Monte Carlo values from Dickey–Fuller and Dickey–Hasza–Fuller for the situations in which they tabulated the statistics. In Table 10.1b we present the critical values of the two-sided ‘t’ test on p4 = 0 and the critical values for the ‘F’ test of p3 ∩ p4 = 0. Notice that the distribution of the ‘t’ statistic is very similar to a standard normal except when the auxiliary regression contains seasonal dummies, in which case it becomes fatter-tailed.
Table 10.1a. Critical values from the small-sample distributions of test statistics for seasonal unit roots, based on 24,000 Monte Carlo replications; data-generating process D4xt = et, et ~ nid(0, 1). Entries are the 0.01, 0.025, 0.05, and 0.10 fractiles of the one-sided ‘t’ statistics for p1, p2, and p3, for auxiliary regressions containing no deterministic terms; an intercept; an intercept and seasonal dummies; an intercept and trend; or an intercept, seasonal dummies, and trend; T = 48, 100, 136, 200.

Table 10.1b. Critical values from the small-sample distributions of test statistics for seasonal unit roots, based on 24,000 Monte Carlo replications; data-generating process D4xt = et, et ~ nid(0, 1). Entries are the 0.01 to 0.99 fractiles of the two-sided ‘t’ statistic for p4 and the 0.90, 0.95, 0.975, and 0.99 fractiles of the ‘F’ statistic for p3 ∩ p4, for the same auxiliary regressions and sample sizes as in table 10.1a.

The distribution for the ‘F’ statistic also looks like an F distribution with degrees of freedom equal to two

4. ERROR-CORRECTION REPRESENTATION

In this section, an error-correction representation is derived which explicitly takes the cointegrating restrictions at the zero and at the seasonal frequencies into account. As the time series being considered have poles at different locations on the unit circle, various cointegrating situations are possible. This naturally makes the general treatment mathematically complex and notationally involved. Although we treat the general case, we will present the special cases considered to be of most interest.

Let xt be an N × 1 vector of quarterly time series, each of which potentially has unit roots at zero and all seasonal frequencies, so that each component of (1 - B^4)xt is a stationary process but may have a zero on the unit circle. The Wold representation will thus be

(1 - B^4)xt = C(B)et, (4.1) where et is a vector white noise process with zero mean and covariance matrix W, a positive definite matrix. There are a variety of possible types of cointegration for such a set of series. To initially examine these, apply the decomposition of (3.2) to each element of C(B). This gives

C(B) = Σ_{k=1}^{p} Lk D(B)/dk(B) + C**(B)D(B),

where dk(B) = 1 - (1/qk)B and D(B) is the product of all the dk(B). For quarterly data the four relevant roots, qk, are 1, -1, i, and -i, which after solving for the L’s becomes

C(B) = Y1[1 + B + B^2 + B^3] + Y2[1 - B + B^2 - B^3] + (Y3 + Y4B)[1 - B^2] + C**(B)(1 - B^4), (4.2)

where Y1 = C(1)/4, Y2 = C(-1)/4, Y3 = Re[C(i)]/2, and Y4 = Im[C(i)]/2. Multiplying (4.1) by a vector a′ gives

(1 - B^4)a′xt = a′C(B)et.

Suppose for some a = a1, a1′C(1) = 0 = a1′Y1; then there is a factor of (1 - B) in all terms, which will cancel out giving

(1 + B + B^2 + B^3)a1′xt = a1′{Y2[1 + B^2] + (Y3 + Y4B)[1 + B] + C**(B)[1 + B + B^2 + B^3]}et,

so that a1′xt will have unit roots at the seasonal frequencies but not at zero frequency. Thus x is cointegrated at zero frequency with cointegrating vector a1 if a1′C(1) = 0. Denote these as

xt ~ CI0 with cointegrating vector a1.

Notice that the vector y1t = S(B)xt is I0(1) since (1 - B)y1t = C(B)et, while a1′y1t is stationary whenever a1′C(1) = 0, so that y1t is cointegrated in exactly the sense described in Engle and Granger (1987). Since y1t is essentially a seasonally adjusted xt, it follows that one strategy for estimation and testing for cointegration at zero frequency in seasonal series is to first seasonally adjust the series. Similarly, letting y2t = -(1 - B)(1 + B^2)xt, (1 + B)y2t = -C(B)et, so that y2t has a unit root at -1. If a2′C(-1) = 0, then a2′Y2 = 0 and a2′y2t will not have a unit root at -1. We say then that xt is cointegrated at frequency w = 1/2, which is denoted

xt ~ CI1/2 with cointegrating vector a2.

Finally, denote y3t = -(1 - B^2)xt, which satisfies (1 + B^2)y3t = -C(B)et and therefore includes unit roots at frequency 1/4. If a3′C(i) = 0, which implies that a3′Y3 = a3′Y4 = 0, then a3′y3t will not have a unit root at frequency 1/4, implying that

xt ~ CI1/4 with cointegrating vector a3.

Cointegration at frequency 1/4 can also occur under weaker conditions. Consider the bivariate system:

(1 + B^2)xt = [ 1       0      ] et,
              [ B    1 + B^2   ]

in which both series are I1/4(1) and there is no fixed cointegrating vector. However, the polynomial cointegrating vector (PCIV), as introduced by Yoo (1987), of (-B, 1) will generate a stationary series. It is not surprising, with seasonal unit roots, that the timing could make a difference. We now show that the need for PCIV is a result purely of the fact that one vector is sought to eliminate two roots (±i) and that one lag in the cointegrating polynomial is sufficient. Expanding the PCIV a(B) about the two roots (±i) using (3.2) gives

a(B) = Re[a(i)] + Im[a(i)]B + a**(B)(1 + B^2)
     ≡ (a3 + a4B) + a**(B)(1 + B^2),

so that the condition that a′(B)C(B) have a common factor of (1 + B^2) depends only on a3 and a4. The general statement of cointegration at frequency 1/4 then becomes

xt ~ CI1/4 with polynomial cointegrating vector a3 + a4B,

if and only if (a3′ + ia4′)(Y3 + iY4) = 0, which is equivalent to a(i)′C(i) = 0.
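For the bivariate example above, the PCIV claim can be verified in two lines (a check added here for illustration). The second row of the system reads (1 + B^2)x2t = Be1t + (1 + B^2)e2t, so applying a(B)′ = (-B, 1) gives:

    (1+B^2)\,(x_{2t} - B\,x_{1t})
      \;=\; B\,\varepsilon_{1t} + (1+B^2)\,\varepsilon_{2t} \;-\; B\,(1+B^2)\,x_{1t}
      \;=\; B\,\varepsilon_{1t} + (1+B^2)\,\varepsilon_{2t} - B\,\varepsilon_{1t}
      \;=\; (1+B^2)\,\varepsilon_{2t},
    % so x_{2t} - B x_{1t} = \varepsilon_{2t} (given zero starting values) is
    % stationary, although no fixed vector free of B achieves this.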

There is no guarantee that xt will have any type of cointegration or that these cointegrating vectors will be the same. It is however possible that a1 = a2 = a3, a4 = 0, and therefore one cointegrating vector could reduce the integration of the x series at all frequencies. Similarly, if a2 = a3, a4 = 0, one cointegrating vector will eliminate the seasonal unit roots. This might be expected if the seasonality in two series is due to the same source. A characterization of the cointegrating possibilities has now been given in terms of the moving-average representation. More useful are the autoregressive representations and in particular the error-correction representation. Therefore, if C(B) is a rational matrix in B, it can be written [using the Smith–McMillan decomposition (Kailath (1980)), as adapted by Yoo (1987), and named the Smith–McMillan–Yoo decomposition by Engle (1987)] as follows:

C(B) = U^{-1}(B)M(B)V^{-1}(B), (4.3)

where M(B) is a diagonal matrix whose determinant has roots only on the unit circle, and the roots of the determinants of U^{-1}(B) and V^{-1}(B) lie outside the unit circle. This diagonal could contain various combinations of the unit roots. However, assuming that the cointegrating rank at each frequency is r, the matrix can be written without loss of generality as

M(B) = [ I_{N-r}     0    ]
       [ 0       D4 I_r   ], (4.4)

where Ik is a k × k unit matrix. The following derivation of the error-correction representation is easily adapted for other forms of M(B). Substituting (4.3) into (4.1) and multiplying by U(B) gives

D4U(B)xt = M(B)V^{-1}(B)et. (4.5)

The first N - r equations have a D4 on the left side only, while the final r equations have D4 on both sides, which therefore cancels. Thus (4.5) can be written as

M̄(B)U(B)xt = V^{-1}(B)et, (4.6)

with

M̄(B) = [ D4 I_{N-r}    0   ]
        [ 0           I_r   ]. (4.7)

Finally, the autoregressive representation is obtained by multiplying by V(B) to obtain

A(B)xt = et, (4.8) where

A(B) = V(B)M̄(B)U(B). (4.9)

Notice that at the seasonal and zero-frequency roots, det[A(q)] = 0, since A(B) has rank r at those frequencies. Now, partition U(B) and V(B) as

U(B) = [ U1(B)′ ]
       [ a(B)′  ],    V(B) = [V1(B), g(B)],

where a(B) and g(B) are N × r matrices and U1(B) and V1(B) are N × (N - r) matrices. Expanding the autoregressive matrix using (3.3) gives

A(B) = P1B[1 + B + B^2 + B^3] - P2B[(1 - B)(1 + B^2)] + (P4 - P3B)B[(1 - B)(1 + B)] + A*(B)[1 - B^4],

with P1 = -g(1)a′(1)/4 ≡ -g1a1′, P2 = -g(-1)a(-1)′/4 ≡ -g2a2′, P3 = Re[g(i)a(i)′]/2, and P4 = Im[g(i)a(i)′]/2. Letting a1 = a(1)/4, a2 = a(-1)/4, a3 = Re[a(i)]/2, and a4 = Im[a(i)]/2, while g1 = g(1), g2 = g(-1), g3 = Re[g(i)], and g4 = Im[g(i)], the general error-correction model can be written

A*(B)D4xt = g1a1′y1t-1 + g2a2′y2t-1 - (g3a3′ - g4a4′)y3t-2 + (g4a3′ + g3a4′)y3t-1 + et, (4.10)

where A*(0) = C(0) = IN in the standard case. This expression is an error-correction representation where both a, the cointegrating vector, and g, the coefficients of the error-correction term, may be different at different frequencies and, in one case, even at different lags. This can be written in a more transparent form by allowing more than two lags in the error-correction term. Add D4(g3a4′ + g4a3′ + g4a4′B)xt-1 to both sides and rearrange terms to get

Ã*(B)D4xt = g1a1′y1t-1 + g2a2′y2t-1 - (g3 + g4B)(a3′ + a4′B)y3t-2 + et, (4.11)

where Ã*(B) is a slightly different autoregressive matrix from A*(B). The error-correction term at the annual seasonal enters potentially with two lags and is potentially a polynomial cointegrating vector. When a4 = 0 or g4 = 0 or both, the model simplifies so that, respectively, cointegration is contemporaneous, the error correction needs only one lag, or both. Notice that all the terms in (4.11) are stationary. Estimation of the system is easily accomplished if the a’s are known a priori. If they must be estimated, it appears that a generalization of the two-step estimation procedure proposed by Engle and Granger (1987) is available. Namely, estimate the a’s using prefiltered variables y1, y2, and y3, respectively, and then estimate the full model using the estimates of the a’s. In the PCIV case this regression would include a single lag. It is conjectured that the least-squares estimates of the remaining parameters would have the same limiting distribution as the estimator knowing the true a’s, just as in the Engle–Granger two-step estimator. The analysis by Stock (1987) suggests that although the inference on the a’s can be tricky due to their nonstandard limiting distributions, inference on the estimates of A*(B) and the g’s can be conducted in the standard way. The following generalizations of the above analysis are discussed formally in Yoo (1987). First, if r > 1 but all other assumptions remain as before, the error-correction representation (4.11) remains the same but the a’s and g’s now become N × r matrices. Second, if the cointegrating rank at the long-run frequency is r0, which is different from the cointegrating rank at the seasonal frequency, rs, (4.11) is again legitimate with the sizes of the matrices on the right-hand side appropriately redefined. Thirdly, if the cointegrating vectors a1, a2, and a3 coincide, equalling say a, and a4 = 0, a simpler error-correction model occurs:

A*(B)D4xt = g(B)a′xt-1 + et, (4.12)

where the degree of g(B) is at most 3, as can be seen either from (4.10) or from an expansion of g(B) using (3.2). For four roots there are potentially four coefficients and three lags. Finally, some of the cointegrating vectors may coincide but some do not. A particularly interesting case is where a single linear combination eliminates all seasonal unit roots. Thus suppose a2 = a3 ≡ as and a4 = 0. Then (4.10) becomes

A*(B)D4xt = g1a1′S(B)xt-1 + gs(B)as′D1xt-1 + et, (4.13)

where gs(B) has potentially two lags. Thus zero-frequency cointegration occurs between the elements of seasonally adjusted x, while seasonal cointegration occurs between the elements of differenced x. This is the case examined by Engle, Granger, and Hallman (1989) for electricity demand. There, monthly electricity sales were modeled as cointegrated with economic variables such as customers and income at zero frequency, and possibly at seasonal frequencies with the weather. The first relation is used in long-run forecasting, while the second is mixed with the short-run dynamics for short-run forecasting. Although an efficiency gain in the estimates of the cointegrating vectors is naturally expected by checking and imposing the restrictions between the cointegrating vectors, there should be no efficiency gain in the estimates of the “short-run parameters”, namely A*(B) and the g’s, given the superconsistency of the estimates of the cointegrating vectors. Hence, the representation (4.11) is considered relatively general, and the important step of the model-building procedure is then to identify the cointegratedness at the different frequencies. This question is considered in the next section.
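As a toy version of the estimation problem for the common-vector case (4.12), the following sketch (Python/NumPy; the data-generating process, the known cointegrating vector (1, -1), and the choice of four lags of z are all assumptions made for the illustration) regresses D4x1t on lagged values of the known equilibrium error:

    import numpy as np

    rng = np.random.default_rng(6)
    T = 800

    # x2 has unit roots at all four quarterly frequencies; x1 = x2 + I(0) error,
    # so a = (1, -1) cointegrates at every frequency (the case of eq. (4.12))
    x2 = np.zeros(T)
    for t in range(4, T):
        x2[t] = x2[t - 4] + rng.normal()
    z = np.zeros(T)
    for t in range(1, T):
        z[t] = 0.6 * z[t - 1] + 0.5 * rng.normal()
    x1 = x2 + z

    # Here D4 x1_t = e_t + 0.6 z_{t-1} - z_{t-4} + the innovation of z, so the
    # regression on z_{t-1}, ..., z_{t-4} should recover roughly (0.6, 0, 0, -1)
    t_ix = np.arange(8, T)
    X = np.column_stack([z[t_ix - k] for k in (1, 2, 3, 4)])
    Y = x1[t_ix] - x1[t_ix - 4]
    g_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
    print("gamma(B) coefficients:", g_hat.round(2))

In practice the cointegrating vector would come from a first-stage regression, as in the two-step procedure conjectured above.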

5. TESTING FOR COINTEGRATION: AN APPLICATION

In this section it is assumed that there are two series of interest, x1t and x2t, both integrated at some of the zero and seasonal frequencies, and the question to be studied is whether or not the series are cointegrated at some frequency. Of course, if the two series do not have unit roots at corresponding frequencies, the possibility of cointegration does not exist. The tests discussed in section 3 can be used to detect which unit roots are present. Suppose for the moment that both series contain unit roots at the zero frequency and at least some of the seasonal frequencies. If one is interested in the possibility of cointegration at the zero frequency, a strategy could be to form the static O.L.S. regression

x1t = Ax2t + residual, and then test if the residual has a unit root at zero frequency, which is the procedure in Engle and Granger (1987). However, the presence of seasonal unit roots means that A may not be consistently estimated, in sharp contrast to the case when there are no seasonal roots, when Â is estimated superefficiently. This lack of consistency is proved in Engle,

Granger, and Hallman (1989). If, in fact, x1t and x2t are cointegrated at both the zero and the seasonal frequencies, with cointegrating vectors a1 and as and with a1 ≠ as, it is unclear what value of A would be chosen by the static regression. Presumably, if a1 = as, then Â will be an estimate of this common value. These results suggest that the standard procedure for testing for cointegration is inappropriate. An alternative strategy would be to filter out unit-root components other than the one of interest and to test for cointegration with the filtered series. For example, to remove seasonal roots, one could form

x̃1t = S(B)x1t, x̃2t = S(B)x2t, where S(B) = (1 - B^s)/(1 - B), and then perform a standard cointegration test, such as those discussed in Engle and Granger (1987), on x̃1t and x̃2t. If some seasonal unit roots were thought to be present in x1t and x2t, this procedure could be done without testing for which roots were present, but the filtered series could have spectra with zeros at some seasonal frequencies, and this may introduce problems with the tests. Alternatively, the tests of section 3 could be used, appropriate filters applied just to remove the seasonal roots indicated by these tests, and then the standard cointegration tests applied. For zero-frequency cointegration, this procedure is probably appropriate, although the implications of the pretesting for seasonal roots have not yet been investigated.
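A minimal sketch of this filter-then-test idea (Python/NumPy; the data-generating design, the crude AR(1) shortcut standing in for a proper Dickey–Fuller residual test, and all parameter values are assumptions for the illustration):

    import numpy as np

    rng = np.random.default_rng(4)
    T = 600

    # Two series sharing one zero-frequency trend, plus independent annual roots
    trend = np.cumsum(rng.normal(size=T))
    u1 = np.zeros(T)
    u2 = np.zeros(T)
    for t in range(2, T):
        u1[t] = -u1[t - 2] + rng.normal()      # (1 + B^2) u = e: annual unit roots
        u2[t] = -u2[t - 2] + rng.normal()
    x1 = 2.0 * trend + u1 + rng.normal(size=T)
    x2 = trend + u2 + rng.normal(size=T)

    def S(x):
        # S(B) = (1 - B^4)/(1 - B) = 1 + B + B^2 + B^3 removes the seasonal roots
        return x[3:] + x[2:-1] + x[1:-2] + x[:-3]

    xf1, xf2 = S(x1), S(x2)

    A = (xf2 @ xf1) / (xf2 @ xf2)              # static cointegrating regression
    resid = xf1 - A * xf2
    rho = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
    print("A-hat:", round(float(A), 2), " residual AR(1):", round(float(rho), 2))
    # rho well below one suggests zero-frequency cointegration; formal inference
    # needs the Engle-Granger critical values.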

To test for seasonal cointegration the corresponding procedure would be to difference the series to remove a zero-frequency unit root, if required, then run a regression of the form

s-2 DDxx12tjtj=+Âa - residual, j=0 and test if the residual has any seasonal unit roots. The tests devel- oped in section 3 could be applied, but will not have the same distribu- tion as they involve estimates of the aj. The correct test has yet to be developed. A situation where the tests of section 3 can be applied directly is where a1 = as and some theory suggests a value for this a, so that no esti- mation is required. One merely forms x1t - ax2t and tests for unit roots at the zero and seasonal frequencies. An example comes from the permanent income hypothesis where the log of income and the log of consumption may be thought to be cointe- grated with a = 1. Thus c - y should have no unit roots using a simplis- tic form of this theory, as discussed by Davidson, Hendry, Srba, and Yeo (1978), for instance. To illustrate the tests, quarterly United Kingdom data for the period 1955.1 to 1984.4 were used with y = log of personal disposable income and c = log of consumption expenditures on nondurables. The data are shown in Fig. 10.1. From the figure, it is seen that both series may have a random-walk character implying that we would expect to find a unit root at the zero frequency. However, the two series seem to drift apart whereby cointe- gration at the zero frequency with cointegrating vector (1, -1) is less likely. For the seasonal pattern, it is clear that c contains a much stronger and less changing seasonal pattern than y, although even the seasonal consumption pattern changes over the sample period. Based on these preliminary findings, one may or may not find seasonal unit roots in c and y or both, but cointegration at the seasonal frequencies cannot be expected. The tests are based on the auxillary regression (3.6) where f(B) is a polynomial in B. The deterministic term is a zero, an intercept (I), an intercept and seasonal dummies (I, SD), an intercept and a trend (I, Tr), or an intercept, seasonal dummies, and a trend (I, SD, Tr). In the augmented regressions nonsignificant lags were removed, and 4 for c and y this implied a lag polynomial of the form 1 - f1B - f4B - 5 f5B , where f1 was around 0.85, f4 around -0.32, and f5 around 0.25. For c - y the lag polynomial was approximately 1 - 0.29B - 0.22B2 + 0.21B4. The “t” statistics from these augmented regressions are shown in Table 10.2. Figure 10.1. Income and consumption in the UK. Seasonal Integration and Cointegration 209

Table 10.2 Tests for seasonal unit roots in the log of UK consumption expenditure on nondurables c, in the log of personal disposable income y, and in the difference c - y; 1955.1–1984.4.

VAR    Auxiliary regression    ‘t’: p1 (zero frequency)    ‘t’: p2 (biannual)    ‘t’: p3 (annual)    ‘t’: p4    ‘F’: p3 ∩ p4

c      —            2.45    -0.31    0.22    -0.84    0.38
c      I           -1.62    -0.32    0.22    -0.87    0.40
c      I, SD       -1.64    -2.22   -1.47    -1.77    2.52
c      I, Tr       -2.43    -0.35    0.18    -0.85    0.38
c      I, SD, Tr   -2.33    -2.16   -1.53    -1.65    2.43
y      —            2.61    -1.44   -2.35b   -2.51b   5.68b
y      I           -1.50    -1.46   -2.38b   -2.51b   5.75b
y      I, SD       -1.56    -2.38   -4.19b   -3.89b  14.74b
y      I, Tr       -2.73    -1.46   -2.52b   -2.23b   5.46b
y      I, SD, Tr   -2.48    -2.30   -4.28b   -3.46b  13.74b
c - y  —            1.54    -1.10   -0.98    -1.17    1.19
c - y  I           -1.24    -1.10   -1.03    -1.16    1.22
c - y  I, SD       -1.19    -2.64   -2.55    -2.79b   8.25b
c - y  I, Tr       -2.40    -1.21   -1.05    -1.04    1.12
c - y  I, SD, Tr   -2.48    -2.84   -2.72    -2.48b   7.87b

a The auxiliary regressions were augmented by significant lagged values of the fourth difference of the regressand.
b Significant at the 5% level.

The results strongly indicate a unit root at the zero frequency in c, y, and c - y, implying that there is no cointegration between c and y at the long-run frequency, at least not for the cointegrating vector [1, -1].

Similarly, the hypothesis that c, y, and c - y are I1/2(1) cannot be rejected, implying that c and y are not cointegrated at the biannual cycle either. The results also indicate that the log of consumption expenditures on nondurables is I1/4(1), as neither the “F” test nor the two “t” tests can reject the hypothesis that both p4 and p3 are zero. Such hypotheses are, however, firmly rejected for the log of personal disposable income, and conditional on these results, c and y cannot possibly be cointegrated at this frequency or at the frequency corresponding to the complex conjugate root, irrespective of the forms of the cointegrating vectors. In fact, conditional on p4 being zero, the “t” test on p3 cannot reject a unit root in c - y at the annual frequency in any of the auxiliary regressions. The assumption that p4 = 0 is not rejected when seasonal dummies are absent, and the joint “F” test cannot reject in these cases either. When the auxiliary regression contains deterministic seasonals, both p4 = 0 and p3 ∩ p4 = 0 are rejected, leading to a theoretical conflict, which can of course happen with finite samples.

6. CONCLUSION

The theory of integration and cointegration of time series is extended to cover series with unit roots at frequencies different from the long-run frequency. In particular, seasonal series are studied, with a focus upon the quarterly periodicity. It is argued that the existence of unit roots at the seasonal frequencies has similar implications for the persistence of shocks as a unit root at the long-run frequency. However, a seasonal pattern generated by a model characterized solely by unit roots seems unlikely, as the seasonal pattern becomes too volatile, allowing “summer to become winter.” A proposition on the representation of rational polynomials allows reformulation of an autoregression isolating the key unit-root parameters. Based on least-squares fits of univariate autoregressions on transformed variables, similar to the well-known augmented Dickey–Fuller regression, tests for the existence of seasonal as well as zero-frequency unit roots in quarterly data are presented, and tables of the critical values are provided. By extending the definition of cointegration to occur at separate frequencies, the error-correction representation is developed by use of the Smith–McMillan lemma and the proposition on rational lag polynomials. The error-correction representation is shown to be a direct generalization of the well-known form, but on properly transformed variables. The theory is applied to the UK consumption function, and it is shown that the unit-elasticity error-correction model is not valid at any frequency as long as we confine ourselves to only the consumption and income data.

REFERENCES

Ahtola, J. and G. C. Tiao, 1987, Distributions of least squares estimators of autoregressive parameters for a process with complex roots on the unit circle, Journal of Time Series Analysis 8, 1–14.
Barsky, R. B. and J. A. Miron, 1989, The seasonal cycle and the business cycle, Journal of Political Economy 97, 503–534.
Bell, W. R. and S. C. Hillmer, 1984, Issues involved with the seasonal adjustment of economic time series, Journal of Business and Economic Statistics 2, 291–320.
Bhargava, A., 1987, On the specification of regression models in seasonal differences, Mimeo. (Department of Economics, University of Pennsylvania, Philadelphia, PA).
Box, G. E. P. and G. M. Jenkins, 1970, Time series analysis, forecasting and control (Holden-Day, San Francisco, CA).

Chan, N. H. and C. Z. Wei, 1988, Limiting distributions of least squares estimates of unstable autoregressive processes, Annals of Statistics 16, 367–401.
Davidson, J. E., D. F. Hendry, F. Srba, and S. Yeo, 1978, Econometric modelling of the aggregate time-series relationship between consumers' expenditure and income in the United Kingdom, Economic Journal 88, 661–692.
Dickey, D. A. and W. A. Fuller, 1979, Distribution of the estimators for autoregressive time series with a unit root, Journal of the American Statistical Association 74, 427–431.
Dickey, D. A., H. P. Hasza, and W. A. Fuller, 1984, Testing for unit roots in seasonal time series, Journal of the American Statistical Association 79, 355–367.
Engle, R. F., 1987, On the theory of cointegrated economic time series, U.C.S.D. discussion paper no. 87-26, presented to the European meeting of the Econometric Society, Copenhagen, 1987.
Engle, R. F. and C. W. J. Granger, 1987, Co-integration and error correction: Representation, estimation and testing, Econometrica 55, 251–276.
Engle, R. F., C. W. J. Granger, and J. Hallman, 1989, Merging short- and long-run forecasts: An application of seasonal co-integration to monthly electricity sales forecasting, Journal of Econometrics 40, 45–62.
Fuller, W. A., 1976, Introduction to statistical time series (Wiley, New York, NY).
Grether, D. M. and M. Nerlove, 1970, Some properties of optimal seasonal adjustment, Econometrica 38, 682–703.
Hylleberg, S., 1986, Seasonality in regression (Academic Press, New York, NY).
Kailath, T., 1980, Linear systems (Prentice-Hall, Englewood Cliffs, NJ).
Nelson, C. R. and C. I. Plosser, 1982, Trends and random walks in macroeconomic time series, Journal of Monetary Economics 10, 129–162.
Nerlove, M., D. M. Grether, and J. L. Carvalho, 1979, Analysis of economic time series: A synthesis (Academic Press, New York, NY).
Stock, J. H., 1987, Asymptotic properties of least squares estimates of cointegrating vectors, Econometrica 55, 1035–1056.
Yoo, S., 1987, Co-integrated time series: Structure, forecasting and testing, Ph.D. dissertation, University of California, San Diego, CA.

CHAPTER 11

A Cointegration Analysis of Treasury Bill Yields*

Anthony D. Hall, Heather M. Anderson, and Clive W. J. Granger**

Abstract

This paper shows that yields to maturity of U.S. Treasury bills are cointegrated, and that during periods when the Federal Reserve specifically targeted short-term interest rates, the spreads between yields of different maturity define the cointegrating vectors. This cointegrating relationship implies that a single non-stationary common factor underlies the time series behavior of each yield to maturity and that risk premia are stationary. An error correction model which uses spreads as the error correction terms is unstable over the Federal Reserve's policy regime changes, but a model using post-1982 data is stable and is shown to be useful for forecasting changes in yields.

1. INTRODUCTION

A topic which is discussed frequently in the term structure literature is that of the relationships between yields associated with bonds of different maturities. Arbitrage arguments, often augmented by considerations about risk, are generally used to justify such relationships; the underlying problem is to explain the empirical observation that yields of different maturity appear to move together over time.
Formal empirical analysis of the relationships between yields of different maturities is not straightforward because nominal yields are not generally considered to be stochastically stationary. It has long been

* Review of Economics and Statistics, 74, 1992, 116–126.
** Australian National University, University of California, San Diego, and University of California, San Diego, respectively. The authors wish to thank David Hendry and the referees for helpful comments on an earlier draft of this paper. Financial support for Hall from the Australian Research Council, for Anderson from the P.E.O. International Peace Scholarship Fund, and for Granger from the National Science Foundation Grant SES 8902950 is gratefully acknowledged.

recognised that it is possible for sets of nonstationary variables to move together over time. Granger (1981) formalised this concept, defining such sets of variables as cointegrated variables, and since then various tests for cointegration and techniques for working with cointegrated variables have been developed.
The literature which relates cointegration to the theory of the term structure is currently small. A few authors have tested for (and found) cointegration between the yield on a long-term bond and that on a short-term bond,1 but the question of how one might further apply the theory of cointegration to study the term structure is largely unanswered. This study suggests that the term structure for U.S. Treasury bills is well modelled as a cointegrated system.
The organization of the paper is as follows. Section 2 relates the theory of cointegration, error correction models and common factors to well-known models of the term structure. Here, it is shown that if yields to maturity are integrated processes, then the term structure data are theoretically cointegrated. The cointegration expected here is of a special type, and the theoretical restrictions which should characterize this cointegration are derived and explored. Section 3 describes the data which have been used in this study. The empirical evidence that yields are cointegrated according to the predictions made in Section 2 is presented in Section 4. An estimated error correction model is presented to illustrate how this information can be utilised. The estimated model is statistically significant and is shown to be potentially useful for forecasting yields of Treasury bills. Section 5 concludes.

2. THEORETICAL FRAMEWORK

A. Theory of the Term Structure

Let R(k, t) be the continuously compounded yield to maturity of a k-period pure discount bond (k = 1, 2, 3, ...), and let the forward rate F(k, t) be the rate of return from contracting at time t to buy a one-period pure discount bond which matures at time t + k. Then F(1, t) = R(1, t), and forward rates can be recursively calculated from the Fisher–Hicks formulae,

    R(k, t) = \frac{1}{k} \sum_{j=1}^{k} F(j, t), \qquad k = 1, 2, 3, \ldots    (1)

1 See, for instance, Campbell and Shiller (1987) or Engle and Granger (1987).

Forward rates F(j, t) typically differ from the yield R(1, t + j − 1) actually realised, so that investors may be assumed to rely on their expectations of R(1, t + j − 1) when they choose between investing now or later. The relationship between forward and expected rates is assumed to be

    F(j, t) = E_t[R(1, t + j - 1)] + L(j, t),    (2)

where E_t denotes expectations based on information available at time t and the L(j, t) are premia, which may account for risk considerations or for investors' preferences about liquidity. Substitution of equation (2) into equation (1) leads to a very general relationship between yields of different maturities, i.e.,

    R(k, t) = \frac{1}{k} \sum_{j=1}^{k} E_t[R(1, t + j - 1)] + L(k, t),
    \quad \text{where} \quad
    L(k, t) = \frac{1}{k} \sum_{j=1}^{k} L(j, t).    (3)

This equation indicates that the yields of bonds with similar maturities will move together. Many of the traditional theories of the term structure focus on the properties of the premia L(k, t). The pure expectations hypothesis asserts that the L(k, t) are zero, while other versions of the expectations hypothesis assert that the premia are constant over time. Other assumptions about the premia would lead to different theories about the term structure, many of which are consistent with the framework described here.
Despite its simplicity, equation (3) does not provide an immediately useful basis for empirical studies of the term structure. None of the variables on the right hand side of this equation is directly measurable, and there is considerable empirical evidence that yields to maturity are integrated rather than stationary processes, so that conventional statistical analysis is not necessarily appropriate in this context.

B. Integration and Cointegration within the Term Structure

A series X(t), which needs to be differenced d times before it has a stationary invertible ARMA representation, is said to be integrated of order d, and this property is represented by the notation X(t) ~ I(d). It is generally accepted that interest rates, and Treasury bill yields in particular, are well described as I(1) processes.2

2 See, for instance, Campbell and Shiller (1988), Stock and Watson (1988) or Engle and Granger (1987). For a formal analysis of Treasury bill yields, see Anderson, Granger and Hall (1990).

Given that the vector series X(t) has only I(1) components, it is sometimes possible to find vectors of constants α1, α2, ..., αr such that the linear combinations αi′X(t) are all I(0). In this case we say that X(t) is cointegrated, and we define the vectors α1, α2, ..., αr to be cointegrating vectors. The space spanned by the cointegrating vectors is called the cointegration space. Assuming that yields to maturity are I(1) processes, the possibility that they might be cointegrated is seen by rearranging equation (3) to obtain

    R(k, t) - R(1, t) = \frac{1}{k} \sum_{i=1}^{k-1} \sum_{j=1}^{i} E_t[\Delta R(1, t + j)] + L(k, t),    (4)

where ΔR(k, s) = R(k, s) − R(k, s − 1). The right hand side of equation (4) is stationary provided that ΔR(1, t) and the premia L(k, t) are stationary. Given these conditions, it follows that the left hand side of equation (4) is stationary and that (1, −1)′ is a cointegrating vector for X(t) = [R(k, t), R(1, t)]′. This implies that each yield R(k, t) is cointegrated with R(1, t), and that the spreads between R(k, t) and R(1, t) are the stationary linear combinations of X(t) which result from the cointegration of X(t). We define the spread between the yields R(i, t) and R(j, t) as S(i, j, t) = R(i, t) − R(j, t).
The cointegration implied by the above considerations is of a very special type. Specifically, the model predicts that any yield series is cointegrated with the one period yield, so that if we were to consider a set of n yield series (which included the one period yield), then each of the (n − 1) n-dimensional spread vectors contained in the set

    {(-1, 1, 0, \ldots, 0)', (-1, 0, 1, 0, \ldots, 0)', \ldots, (-1, 0, \ldots, 0, 1)'}

is cointegrating for the (now augmented) vector X(t) = [R(1, t), R(k2, t), R(k3, t), ..., R(kn, t)]′ (in which k2, k3, ..., kn are the maturities of the other (n − 1) bills). As these spread vectors are linearly independent, the cointegration space has rank (n − 1).
Given the above arguments, it is straightforward to show that the spread between any two yields will be cointegrating. The spread vector associated with any two yields is just a linear combination of two of the spread vectors defined using the one period yield, and since linear combinations of stationary variables are also stationary, it follows that this more general spread vector is cointegrating. An implication of the finding that any spread is cointegrating is that any set of (n − 1) linearly independent spread vectors defined in an n-dimensional space will comprise a basis for the cointegrating space associated with X(t) = [R(1, t),

R(k2, t), R(k3, t), ..., R(kn, t)]′. Thus any set of n yields will have a cointegrating rank of (n − 1).

This cointegration between yields of different maturity implies analogous cointegration between the one-month holding returns associated with Treasury bills of different maturities. If H(k, t + 1) is the continuously compounded rate of return from t to t + 1 (one month) on a Treasury bill with k months to maturity at t, then it is straightforward to demonstrate that

    H(1, t + 1) = R(1, t),
    H(k, t + 1) = k[R(k, t) - R(k - 1, t)] - (k - 1)\Delta R(k - 1, t + 1) + R(k - 1, t) \quad \text{for } k \ge 2,

and that the return in excess of the one-month rate will be

    H(k, t + 1) - H(1, t + 1) = k[R(k, t) - R(k - 1, t)] - (k - 1)\Delta R(k - 1, t + 1) + [R(k - 1, t) - R(1, t)] \quad \text{for } k \ge 2.

It follows from the properties of the yields that the holding returns are also I(1) processes, and that any set of n holding returns will have a cointegrating rank of (n − 1). If this set includes the one-month holding return, the (n − 1) "excess returns" will form a basis for the cointegrating space.

C. Modeling Cointegrated Data

It was shown in Engle and Granger (1987) that cointegration implies and is implied by an error correction representation, which in the case of the series X(t)′ = [R(1, t), R(2, t), ..., R(k, t)] can be expressed by the equation

    \Delta X(t) = \delta[S(t - 1) - \mu] + c(B)\Delta X(t - 1) + d(B)\epsilon(t),    (5)

where δ is a non-zero n × (n − 1) matrix, S(t) is an (n − 1) × 1 vector of spreads, c(B) and d(B) are polynomials in the lag operator B, and ε(t) is a vector of white noise, which may be contemporaneously correlated. The vector [S(t − 1) − μ] is called the error correction term, while δ is a matrix of adjustment coefficients. Statistical significance of the δ will show that the error correction model is a valid representation of the data, and support the hypothesis that the spreads contained in S(t) are cointegrating.
The error correction model has a very sensible economic interpretation in this context. Equation (5) shows that although yields on bonds of different maturity may diverge in the short run, the yields will adjust when the spreads between them deviate from the equilibrium value μ, so that in the long run yields of different maturity will move together.
The error correction model does not necessarily imply that yields adjust because the spreads between them are out of equilibrium. As Campbell and Shiller (1987, 1988) point out in the context of their present value models of the term structure, the spreads might measure anticipated changes in yields. Using the short yields as an example, this merely implies that agents have more information in the spread for forecasting changes in short yields than is available in the history of short yields alone. Thus the spreads are useful for forecasting changes in short yields, and the error correction model arises because of agents' forward looking behavior.
An alternative interpretation of the cointegration between yields of different maturities arises from the relationship between cointegration and common factors. Stock and Watson (1988) show that when there are (n − p) linearly independent cointegrating vectors for a set of n I(1) variables, then each of these n variables can be expressed as a linear combination of p I(1) common factors and an I(0) component. Applying this result to the current context, we expect that there will be a single nonstationary common factor in yields of different maturity. Denoting the I(1) common factor by W(t), a simple representation of how it links the yield curve is given by

    R(1, t) = A(1, t) + b_1 W(t)
    R(2, t) = A(2, t) + b_2 W(t)
        ...
    R(n, t) = A(n, t) + b_n W(t),

in which the A(i, t) are I(0) variables. Since W(t) is I(1) while the A(i, t) are I(0), the observed long-run movement in each yield series is primarily due to movement in the common factor. Thus W(t) "drives" the time series behavior of each yield and determines how the entire yield curve changes over time. There may be a number of additional factors that explain the variation in the I(0) variables A(i, t), but these factors will be stationary and dominated by the nonstationary factor W(t).
The assertion that the same common variable underlies the time series behavior of each yield to maturity is not new to the literature on the term structure. Cox, Ingersoll and Ross (1985) build a continuous time general equilibrium model of real yields to maturity in which the instantaneous interest rate is common to all yields. In the discrete time model developed in this paper it is emphasized that there is only one nonstationary I(1) common variable. Here, one could interpret this nonstationary common factor as the one period yield, or for that matter, any of the other period yields. It is also appropriate to think of this common factor as something exogenous to the system of yields, such as inflation, measures of monetary growth, or measures of investment.

3. THE DATA

The analysis has been conducted on the nominal yield to maturity data from the FAMA Twelve Month Treasury Bill Term Structure File of the U.S. Government Securities File of the Center for Research in Security Prices (CRSP) at the University of Chicago. The file contains twelve yield series on Treasury bills: one series for bills with one month to maturity, another for bills with two months to maturity, and so on to a series with twelve months to maturity. Full details of how the file has been constructed are given in the CRSP documentation. These data are particularly appropriate for an investigation of the term structure. The observed yield on each bill has been derived from the price of that bill on a given day (the last trading day of the month), so that the data relate to bills which are identical in all respects other than term, and unlike many yield data sets, the raw data have been neither interpolated over time nor interpolated over maturities.
The nominal yield series studied here have been derived by taking the average of bid and asked quotes. The yields are standardized to a 30.4-day basis and are expressed in percentages. The sample used consists of 228 observations for each series, dating from January 1970 until December 1988, but the series on yields to maturity for twelve-month bills has not been used because many of the observations were missing.3
The sample covers three monetary regimes which are distinguished by the degree of interest rate targeting undertaken by the Federal Reserve. The first regime, covering the period up to and including September 1979, corresponds to a period during which the Federal Reserve was targeting interest rates. The period from October 1979 to September 1982 covers the Federal Reserve's "new operating procedures," when it ceased targeting interest rates. The final regime, from October 1982 onwards, corresponds to the abandonment of the "new operating procedures" and the resumption of partial interest rate targeting.
Plots of the yield data and differenced yield data for the four yields of shortest term are provided in Figs. 11.1 and 11.2. These are representative of all the yields to maturity, and they illustrate the similar behavior of the yields over the sample period. In particular, they illustrate that the yields were considerably more volatile during the "new operating procedures" regime than they have been at other times. Most of the analysis is based on the full sample, but in view of the regime changes described above, and

3 Two observations for the eleven-month bill and one for the ten-month bill were also missing. These missing values were interpolated from the observed movements in the yield of the nine-month bill.

Figure 11.1. Yields to maturity (% per month).

Figure 11.2. Differenced yields to maturity (% per month).

of empirical evidence that these caused structural changes in the term structure,4 three subsets corresponding to the monetary regimes have also been analyzed. The SHAZAM (White (1978)) and PC-GIVE (Hendry (1989)) computer packages were used for the computations.

4. THE EMPIRICAL EVIDENCE

A. Time Series Properties of Individual Yields

Augmented Dickey–Fuller unit root test statistics were computed for each of the eleven yield series, and the details of this analysis can be

4 See, for instance, Huizinga and Mishkin (1986) or Hardouvelis (1988).

found in Anderson, Granger and Hall (1990). The full sample test statistics show no evidence against the null hypothesis that there is a unit root in yield levels, but the data clearly reject the null hypothesis that there is a unit root in the differences. When the three subsamples are examined, the same pattern emerges for each of the eleven yield series. A reasonable conclusion is that each yield to maturity is an I(1) process over each of the Federal Reserve's monetary regimes.5

B. Cointegration Analysis

We now consider the hypotheses of interest, namely, that the yields are cointegrated with (n − 1) cointegrating vectors corresponding to any set of n yields, and that the cointegrating vectors are the spread vectors.6
Johansen (1988) and Johansen and Juselius (1990) have developed likelihood based procedures which test for cointegration, estimate the cointegrating vectors and permit the testing of restrictions on the cointegrating vectors. These techniques have been applied to test the hypotheses of interest.
The results for the analysis which uses the eleven yields to maturity are presented in Table 11.1. Johansen's λ-max and trace statistics accept the restriction that the rank of the cointegrating space is not more than ten, but strongly reject the hypothesis that the rank is not more than nine. This supports the proposition suggested by the theory that there are ten cointegrating vectors for the set of eleven yields. Conditional on there being ten cointegrating vectors, the null hypothesis that ten linearly independent spreads formed from the eleven yields comprise a basis for the cointegration space is rejected. This likelihood ratio test statistic is distributed as a chi-squared random variable with ten degrees of freedom under the null hypothesis. The value of the test statistic is 30.28,

5 The conclusion that yields to maturity are integrated processes cannot be true in a very strict sense because integrated series are unbounded, while nominal yields are bounded below by zero. Nevertheless, it is evident from the data that the statistical characteristics of yields are closer to those of I(1) series than they are to I(0) series, so that for the purposes of building models of the term structure it is appropriate to treat these yield series as if they were I(1).
6 A unit root analysis of spreads provides indirect evidence on these hypotheses, and such an analysis of each spread between all of the yields can be found in Anderson, Granger and Hall (1990). The spreads are found to be stationary over the full sample and in the first and third subsamples, consistent with the proposal that each spread is cointegrating. In the second subsample, many of the spreads are found to be nonstationary. The yields may still be cointegrated in this subsample, but the spreads do not define the cointegrating vectors. While relevant in this context, a unit root test on a spread tests the null hypothesis that the vector [−1, 1]′ is not cointegrating, rather than the required null that the vector [−1, 1]′ is cointegrating. As well, for sets of more than two yields, the unit root tests do not test the joint hypothesis that the spread vectors are cointegrating.

Table 11.1 Hypothesis tests to determine the cointegrating rank for the set of yields R(1, t),...,R(11, t) full sample (1970:3–1988:12).

Null Hypothesis       λ-max Test    5% Critical    trace Test    5% Critical
about Rank r          Statistic     Value          Statistic     Value

r ≤ 10                  6.33          8.08           6.33          8.08
r ≤ 9                  29.38         14.10          35.71         17.84

Note: The critical values are from Johansen and Juselius (1990).

compared to its 5% critical value of 18.31. There are two plausible explanations for this rejection; either the spreads are not cointegrating, contradicting the theory, or the rejection has been caused by problems associated with the changes in the monetary regimes.
To investigate the first possibility, subsets of the spreads were tested to see whether singly or jointly they are contained in the cointegrating space. A selection of all the possible tests of hypothesis involving subsets of the various spreads between the eleven yields is summarized in Table 11.2. In this table, the first column lists m yields, and the null hypothesis in each case is that the (m − 1) linearly independent spreads formed from these yields are contained in the cointegration space. These likelihood ratio test statistics are all conditional on the rank of the cointegration space being ten. The first row reports the test statistic that the ten linearly independent spreads span the cointegration space. The next block of test statistics considers the null hypotheses that an individual spread belongs to the cointegrating space. We report the tests for all spreads involving the one-month yield and all spreads involving adjacent maturities. For the tests involving the one-month yield, the null is accepted for seven out of the ten spreads. We find that the spreads S(2, 1, t), S(3, 1, t) and S(4, 1, t) are not cointegrating. For the tests involving adjacent maturities, the null is accepted for six of the ten spreads, and in this instance the spreads S(2, 1, t), S(3, 2, t), S(7, 6, t) and S(11, 10, t) are not cointegrating. The next block of Table 11.2 reports the test statistics obtained when we progressively increase the number of yields (k) in the subset and test the null hypotheses that a set of (k − 1) linearly independent spreads formed from these yields belongs to the cointegrating space. All of these joint hypotheses are rejected. The final block of statistics reports test statistics on subsets of spreads involving the four shortest yields to maturity. Again we find a mixture of acceptances and rejections of the null hypotheses. In general, rejections seem to occur when the spread involves either the one-month, two-month, three-month or eleven-month yields.
A subsample analysis has not been performed in the eleven variable case due to degrees of freedom considerations. In order to analyze the

Table 11.2 Tests that spread vectors are cointegrating full sample (1970:3–1988:12).

Spreads between          Test Statistic    DF    5% Critical Value

R(1) through R(11)           30.28         10         18.31
R(1), R(2)                    4.39          1          3.84
R(1), R(3)                    6.56          1          3.84
R(1), R(4)                    4.36          1          3.84
R(1), R(5)                    2.48          1          3.84
R(1), R(6)                    1.54          1          3.84
R(1), R(7)                    0.65          1          3.84
R(1), R(8)                    0.34          1          3.84
R(1), R(9)                    0.21          1          3.84
R(1), R(10)                   0.11          1          3.84
R(1), R(11)                   0.01          1          3.84
R(2), R(3)                    6.95          1          3.84
R(3), R(4)                    0.00          1          3.84
R(4), R(5)                    0.26          1          3.84
R(5), R(6)                    0.73          1          3.84
R(6), R(7)                    5.49          1          3.84
R(7), R(8)                    2.51          1          3.84
R(8), R(9)                    0.72          1          3.84
R(9), R(10)                   1.15          1          3.84
R(10), R(11)                  4.57          1          3.84
R(1) through R(3)             7.27          2          5.99
R(1) through R(4)            13.31          3          7.81
R(1) through R(5)            14.34          4          9.49
R(1) through R(6)            14.34          5         11.07
R(1) through R(7)            21.96          6         12.59
R(1) through R(8)            21.96          7         14.07
R(1) through R(9)            21.97          8         15.51
R(1) through R(10)           22.58          9         18.31
R(2), R(4)                    2.30          1          3.84
R(1), R(2), R(4)              4.81          2          5.99
R(1), R(3), R(4)              7.84          2          5.99
R(2), R(3), R(4)             12.81          2          5.99

Note: R(k) is the yield to maturity of a k period bill. Column one lists m yields. The null hypothesis in each case is that (m − 1) linearly independent spreads formed from these yields belong in the cointegration space. The tests are conditional on the rank of the cointegration space being 10, and the test statistics have a chi-squared distribution with DF degrees of freedom.

possible effects of the changes in the Federal Reserve's operating procedures, a detailed analysis of the four shortest yields has been performed. Table 11.3 reports the results of the tests to determine the cointegrating rank of these four yields. Over the full sample, the tests accept the null hypothesis that the rank of the cointegrating space is not

Table 11.3 Hypothesis tests to determine the cointegrating rank for the set of yields R(1, t), R(2, t), R(3, t), and R(4, t).

                  Null Hypothesis    λ-max Test    5% Critical    trace Test    5% Critical
Sample            about Rank r       Statistic     Value          Statistic     Value

Full Sample       r ≤ 3                6.42          8.08           6.42          8.08
70:3–88:12        r ≤ 2               50.79         14.60          57.20         17.84
First Sample      r ≤ 3                0.31          8.08           0.31          8.08
70:3–79:9         r ≤ 2               39.27         14.60          39.58         17.84
Second Sample     r ≤ 3                3.22          8.08           3.22          8.08
79:10–82:9        r ≤ 2               10.50         14.60          13.72         17.84
                  r ≤ 1               20.14         21.28          33.87         31.26
                  r = 0               28.56         27.34          62.43         48.42
Third Sample      r ≤ 3                1.64          8.08           1.64          8.08
82:10–88:12       r ≤ 2               16.97         14.60          18.61         17.84

Note: The critical values are from Johansen and Juselius (1990).

more than three, but reject the null that the rank is not more than two. This confirms, as the theory predicts, that the four shortest yields are cointegrated and that the cointegrating rank is three. This result is repeated in the first and third subsamples, but in the sample during which the new procedures were operating the tests suggest that the cointegrating rank is two.
Hypothesis tests that the spreads are contained in the cointegration space are reported in Table 11.4. For the full sample, conditional on there being three cointegrating vectors, the hypothesis that three linearly independent spreads span the cointegrating space is rejected, and an analysis of subsets of these spreads also leads to some rejections. These results are consistent with the results of testing the same hypotheses in the eleven yield model. However, in the first and third subsamples, we can accept the hypothesis that the spreads form a basis for the cointegration space. With only one exception, each of the hypotheses that subsets of these spreads are contained in the cointegration space is not rejected. These results are consistent with the predictions of the theory. On the other hand, the results from an analysis of the second subsample are not consistent with the theory. Conditional on there being two cointegrating vectors, the tests indicate that none of the possible spread vectors are cointegrating.
On the basis of this evidence, we conclude that during periods in which the Federal Reserve has targeted interest rates as an instrument of monetary policy, the tests broadly support the predictions of the theory. We find (n − 1) cointegrating vectors among each set of n yields to maturity, and it is reasonable to conclude that the spreads form a basis for the cointegrating space. This cointegrating relationship has the

Table 11.4 Tests that spread vectors are cointegrating.

                              Sample Period                                       5%
Spreads                                                                           Critical
Between             70:3–88:12   70:3–79:9   79:10–82:9   82:10–88:12   DF   Value

R(1) through R(4)     14.66       5.56        ...          1.87         3     7.81
R(1), R(2)             3.77       5.39        8.08         0.24         1     3.84
R(1), R(3)             5.49       2.83        8.04         0.02         1     3.84
R(1), R(4)             3.44       2.13        8.57         0.00         1     3.84
R(2), R(3)             6.20       0.00        7.29         0.15         1     3.84
R(2), R(4)             1.69       0.02       10.02         0.23         1     3.84
R(3), R(4)             0.45       0.05       12.92         0.29         1     3.84
R(1), R(2), R(3)       6.32       5.56       13.92         1.11         2     5.99
R(1), R(2), R(4)       3.86       5.48       21.50         1.66         2     5.99
R(1), R(3), R(4)       5.98       4.20       24.04         1.86         2     5.99
R(2), R(3), R(4)      14.09       0.05       23.21         0.30         2     5.99

Note: R(k) is the yield to maturity of a k period bill. Column one lists m yields. The null hypothesis in each case is that (m − 1) linearly independent spreads formed from these yields belong in the cointegration space. For the full sample and subsamples (70:3–79:9) and (82:10–88:12), the test statistics are conditional on there being 3 cointegrating vectors. For the subsample (79:10–82:9), the test statistics are conditional on 2 cointegrating vectors. All test statistics have a chi-squared distribution with DF degrees of freedom.

important implication that the risk or liquidity premia of Treasury bills are stationary I(0) variables. This conclusion follows directly from consideration of equation (4), the empirical evidence that yields are I(1) and cointegrated processes, and the findings that the spreads between the yields define the cointegrating relationships.
These relationships appear to have broken down during the period of the new operating procedures. During this time, the Federal Reserve placed primary emphasis on controlling the growth of reserves available to depository institutions while greatly expanding the allowable range of fluctuations in the federal funds rate. This period experienced wide gyrations in quarterly monetary growth rates despite the announced policy of controlling the growth in monetary aggregates, unusually high real interest rates, changing inflation and deteriorating economic conditions. Short-term interest rates were influenced almost exclusively by the private sector. There was a marked increase in the short-run volatility of interest rates, a conventional measure of the risk of holding long-term debt, presumably with a substantial impact on risk or liquidity premia. Over this period we observe a change in the cointegrating relationships between yields on Treasury bills. Yields are still cointegrated, but the spreads no longer define the cointegrating relationships, and there appears to be at least one extra nonstationary common factor over this period. A reasonable explanation is that because of the uncertainty caused by the enhanced volatility in monetary growth, interest rates and economic activity resulting from the introduction of the new procedures, the risk or liquidity premia became nonstationary over this period, causing a breakdown of the cointegrating relationships.

C. Error Correction Models

In this section we present an estimated error correction model using the four shortest yields to illustrate how these cointegration results might be utilised. The spreads are used to define the cointegrating vectors, but because this is not consistent with the data over the whole sample, estimation of the model is restricted to the period after the new operating procedures were abandoned.7
The error correction model presented here was derived by sequentially reducing a general unrestricted model that contained four lags of each of the differenced series. Details of the advantages of modeling by reduction may be found in Hendry (1989). Sets of (n − 1) linearly independent spread vectors are not unique for n > 2, so it was necessary to choose which spreads were to be used in the estimation of the error correction model. In theory it should not matter which spreads are used; in practice, we used the spreads S(2, 1, t), S(3, 2, t) and S(4, 3, t), since these were the least correlated. In estimating the model, it was necessary to include two dummy variables, D84 and D87, to account for outliers which occurred in October 1984 and October 1987 (the second outlier is presumably due to the effects of the stock market crash).
Ordinary least squares (OLS) and full information maximum likelihood (FIML) model reductions lead to the same model specification, and these estimates are presented in Tables 11.5 and 11.6. Diagnostic statistics reveal little in-sample evidence of misspecification. The diagnostic test statistics are those produced by PC-GIVE, and details of each test statistic can be found in Hendry (1989). Forecast Chow tests of the null hypothesis that there is no change in any parameter between the sample period (January 1983 until December 1987) and the forecast period (January 1988 until December 1988) show weak evidence of change in the equation for R(1, t). This is apparently due to an outlying observation for R(1, t) in December 1988, which was more than five standard deviations away from the sample mean. Disregarding the effects of

7 As expected, error correction models estimated over the full sample showed evidence of instability. It may have been possible to obtain stable models of yields over longer samples by including (exogenous) volatility variables in the error correction model, or by introducing ARCH errors into the models, but these approaches were not tried here.

Table 11.5 OLS error correction model for the four variable system (1983:1–1987:12).

                      Model for ΔR(1, t)   Model for ΔR(2, t)   Model for ΔR(3, t)   Model for ΔR(4, t)
Explanatory Variable  Coefficient   S.E.   Coefficient   S.E.   Coefficient   S.E.   Coefficient   S.E.

S(2, 1, t − 1)            .644     .205        —           —         —          —         —          —
S(3, 2, t − 1)            .665     .296       .994       .272        —          —         —          —
S(4, 3, t − 1)           -.682     .438      -.268       .406       .664      .382        —          —
ΔR(2, t − 1)              .293     .108       .224       .100        —          —         —          —
Constant                 -.014     .006      -.012       .005      -.004      .004      .0002      .004
D84                      -.127     .030      -.113       .028      -.105      .029      -.098      .030
D87                      -.176     .030      -.125       .028      -.126      .029      -.100      .030

Diagnostic Statistics

                      Model for ΔR(1, t)   Model for ΔR(2, t)   Model for ΔR(3, t)   Model for ΔR(4, t)
Type                  Distribution  Test   Distribution  Test   Distribution  Test   Distribution  Test

Dep. Var. S.D.                   0.0444              0.0375              0.0350              0.0345
R²                               0.6050              0.5185              0.3812              0.2723
Standard Error                   0.0295              0.0272              0.0283              0.0299
Serial Correlation    F(12, 41)   1.35    F(12, 42)   0.75    F(12, 44)   0.31    F(12, 45)   0.49
ARCH                  F(4, 45)    2.40    F(4, 46)    1.45    F(4, 48)    0.41    F(4, 49)    0.19
Normality             χ²(2)       1.39    χ²(2)       0.41    χ²(2)       1.00    χ²(2)       1.51
Heteroskedasticity    F(11, 41)   1.00    F(9, 44)    0.51    F(5, 50)    0.52    F(3, 53)    0.45
Reset                 F(1, 52)    0.60    F(1, 53)    0.21    F(1, 55)    0.85    F(1, 56)    0.11
Functional Form       n.a.                F(11, 42)   0.45    F(4, 51)    0.52    F(2, 54)    0.48
Chow                  F(12, 53)   2.75a   F(12, 54)   0.89    F(12, 56)   0.64    F(12, 57)   0.61

Note: — implies that in the reduction process the estimated coefficient was found to be insignificant; n.a. means that the statistic was not computed.
a Significant at the 5% critical level.

Table 11.6 FIML error correction model for the four variable system.

                      Model for ΔR(1, t)   Model for ΔR(2, t)   Model for ΔR(3, t)   Model for ΔR(4, t)
Explanatory Variable  Coefficient   S.E.   Coefficient   S.E.   Coefficient   S.E.   Coefficient   S.E.

S(2, 1, t − 1)            .864     .114        —           —         —          —         —          —
S(3, 2, t − 1)            .602     .183       .983       .094        —          —         —          —
S(4, 3, t − 1)           -.873     .283      -.462       .188       .558      .106        —          —
ΔR(2, t − 1)              .186     .067       .092       .035        —          —         —          —
Constant                 -.016     .005      -.011       .004       .003      .004      .0002      .004
D84                      -.128     .029      -.116       .027      -.105      .028      -.098      .029
D87                      -.171     .029      -.119       .027      -.126      .028      -.100      .029

Diagnostic Statistics

Type                    Model for ΔR(1, t)   Model for ΔR(2, t)   Model for ΔR(3, t)   Model for ΔR(4, t)

r (actual, predicted)         .7675                .7088                .6168                .5218
Standard Error                .0283                .0262                .0273                .0292

Note: — implies that in the reduction process the estimated coefficient was found to be insignificant.

this outlying observation, further Chow tests provide no evidence that the estimated models are unstable.
Error correction terms have statistically significant coefficients, thereby confirming the cointegration found earlier and the validity of the error correction representation. It is interesting to note the manner in which the cointegrating vectors enter into each equation; the spreads are not relevant in the model for changes in the yield of longest maturity, but successively more spreads are needed to "explain" changes in yields as the term to maturity becomes shorter. This pattern is also found in other error correction models (not reported in this paper) estimated with different sets of yields. This type of model suggests that yields of longer maturities "drive" the term structure, with short-term yields adjusting to movements in the longer term yields. One interpretation of this observation is based on an expectations argument. The spreads between yields at the longer end of the term structure contain information about future shorter-term rates, and current short-term rates adjust according to this information.

D. Forecasts

The existence of an error correction model implies some Granger-causality in the system, which in turn suggests that the error correction model may be a useful forecasting tool. The error correction model estimated by FIML has been used to obtain 12 one-step-ahead forecasts over the period 1988:1 to 1988:12, to illustrate its use for this purpose. These forecasts are compared with a set of naive no-change forecasts and the forecasts from an unrestricted second order vector autoregression (VAR). The dummy variables discussed above are included in the VAR model.
Table 11.7 provides the summary statistics for these forecasts. The biases are all of the same order of magnitude and all forecast standard deviations are high, but the error correction model has smaller forecast standard deviations, leading to consistently smaller root mean square errors. The error correction model gives between a 4% and 16% reduction in root mean square error over the naive model, and smaller gains over the VAR. The improvements in forecasts from using the error correction model are small (and not statistically significant), but they illustrate the potential of this type of model.

5. CONCLUSION

This paper shows that it is appropriate to model the term structure of U.S. Treasury bills as a cointegrated system. During monetary regimes

Table 11.7 Summary statistics for one-step-ahead forecast errors (1988:1–1988:12).

                                 Statistic

Variable    Method     Mean      St. Dev.    RMSE

ΔR(1)       Naive     .0144      .0568      .0586
            VAR       .0070      .0543      .0548
            ECM       .0033      .0491      .0492
ΔR(2)       Naive     .0188      .0239      .0304
            VAR       .0111      .0268      .0290
            ECM       .0151      .0228      .0274
ΔR(3)       Naive     .0184      .0164      .0247
            VAR       .0130      .0195      .0235
            ECM       .0152      .0169      .0228
ΔR(4)       Naive     .0164      .0182      .0245
            VAR       .0091      .0227      .0245
            ECM       .0151      .0181      .0236

RMSE Ratios

Ratio        ΔR(1)     ΔR(2)     ΔR(3)     ΔR(4)

ECM/Naive    .8406     .9006     .9220     .9626
ECM/VAR      .8987     .9421     .9696     .9631

characterized by stabilizing the short-run fluctuations in the federal funds rate, the spreads between yields of different maturity define the cointegrating vectors in this system. An error correction model implied by this cointegration is estimated, found to be statistically significant, and seems to provide more accurate forecasts of yields than naive no-change forecasts or forecasts based on a VAR. During the period of the new operating procedures, yields are still cointegrated, but the spreads no longer define the cointegrating vectors.
The type of cointegration found for monetary regimes that emphasize controlling short-term interest rates has the important implications that the term or liquidity premia of Treasury bills are stationary processes and that a single nonstationary common factor underlies the time series behavior of each yield to maturity. The common factor cannot be uniquely identified, and it could be a linear combination of several I(1) variables. It is worth emphasizing that this is a nonstationary factor, and it may be possible to find a number of common stationary factors that are useful in explaining the behavior of Treasury bill yields.8 Further research may suggest a useful way of identifying the common nonstationary factor so that it can then be estimated and studied. Much might be learned about the term structure if this common factor can be related to economic variables such as monetary growth and/or inflation, and further research on the common factor interpretation of cointegration in the term structure will undoubtedly improve our understanding of how the term structure changes over time.

REFERENCES

Anderson, Heather M., Clive W. J. Granger, and Anthony D. Hall (1990), "Treasury Bill Yield Curves and Cointegration," University of California, San Diego Discussion Paper Number 90-24.
Campbell, John Y., and Robert J. Shiller (1987), "Cointegration and Tests of Present Value Models," Journal of Political Economy 95, 1062–1088.
—— (1988), "Interpreting Cointegrated Models," Journal of Economic Dynamics and Control 12, 505–522.
Cox, John C., Jonathan E. Ingersoll, and Stephen A. Ross (1985), "A Theory of the Term Structure of Interest Rates," Econometrica 53, 385–407.
Engle, Robert F., and Clive W. J. Granger (1987), "Co-integration and Error Correction: Representation, Estimation, and Testing," Econometrica 55, 251–276.
Granger, Clive W. J. (1981), "Some Properties of Time Series Data and Their Use in Econometric Model Specification," Journal of Econometrics 16, 121–130.
Hardouvelis, Gikas A. (1988), "The Predictive Power of the Term Structure during Recent Monetary Regimes," The Journal of Finance 43, 339–356.
Hendry, David F. (1989), "PC-GIVE: An Interactive Econometric Modeling System," Institute of Economics and Statistics and Nuffield College, University of Oxford.
Huizinga, John, and Frederic S. Mishkin (1986), "Monetary Policy Regime Shifts and the Unusual Behavior of Real Interest Rates," Carnegie-Rochester Conference Series on Public Policy 24, 231–274.
Johansen, Soren (1988), "Statistical Analysis of Cointegration Vectors," Journal of Economic Dynamics and Control 12, 231–254.
Johansen, Soren, and Katarina Juselius (1990), "Maximum Likelihood Estimation and Inference on Cointegration, with Applications to the Demand for Money," Oxford Bulletin of Economics and Statistics 52, 169–210.
Knez, Peter, Robert Litterman, and José Scheinkman (1989), "Explorations into Factors Explaining Money Market Returns," Goldman Sachs & Co., Discussion Paper No. 6.

8 For this reason, our analysis is consistent with estimated factor models that use the stationary excess holding returns of Treasury bills. With these data, Stambaugh (1988) finds two common factors, while Knez, Litterman and Scheinkman (1989) report estimated models with three and four factors.

Stambaugh, Robert F. (1988), "The Information in Forward Rates: Implications for Models of the Term Structure," Journal of Financial Economics 21, 41–70.
Stock, James H., and Mark W. Watson (1988), "Testing for Common Trends," Journal of the American Statistical Association 83, 1097–1107.
White, Kenneth J. (1978), "A General Computer Program for Econometric Methods-SHAZAM," Econometrica 46, 239–240.

CHAPTER 12

Estimation of Common Long-Memory Components in Cointegrated Systems*

Jesus Gonzalo and Clive Granger

The study of cointegration in large systems requires a reduction of their dimensionality. To achieve this, we propose to obtain the I(1) common factors in every subsystem and then analyze cointegration among them. In this article, a new way of estimating common long-memory components of a cointegrated system is proposed. The identification of these I(1) common factors is achieved by imposing that they be linear combinations of the original variables Xt, and that the error-correction terms do not cause the common factors at low frequencies. Estimation is done from a fully specified error-correction model, which makes it possible to test hypotheses on the common factors using standard chi-squared tests. Several empirical examples illustrate the procedure.

Keywords: common factors; cointegration; error-correction model; permanent–transitory decomposition.

If xt and yt are both integrated of order 1, denoted I(1), so that their changes are stationary, denoted I(0), they are said to be cointegrated if there exists a linear combination zt = yt − Axt which is I(0). Several useful generalizations can be made of this definition, but this simple form is sufficient for the points proposed in this article. The basic ideas of cointegration were discussed by Granger (1986) and in the book of readings edited by Engle and Granger (1991). A simple constraint that results in cointegration involves an I(1) common factor ft:

    \begin{bmatrix} y_t \\ x_t \end{bmatrix} =
    \begin{bmatrix} A \\ 1 \end{bmatrix} f_t +
    \begin{bmatrix} \tilde{y}_t \\ \tilde{x}_t \end{bmatrix},    (1)

where ỹt and x̃t are both I(0). Clearly zt = ỹt − Ax̃t, being a linear combination of I(0) series, will never be I(1) and usually will be I(0). The reverse is also true – if (xt, yt) are cointegrated, there must exist a

* Journal of Business and Economic Statistics, 13, 1995, 27–35.

common factor representation of the form (1), as proved by Stock and Watson (1988).
A natural question that arises is how to estimate the common factor ft, which might be an unobserved factor and is the driving force that results in cointegration. It has been suggested in the literature quoted previously that cointegration can be equated with certain types of equilibrium in that, in the long-run future, the pair of series is expected to lie on the attractor line yt = Axt. Although much attention has been given to estimation of the cointegrating vector (1, −A), relatively little attention has been given to estimation of ft. Notice that when the long-run equilibrium is estimated, the common factor ft is eliminated. There are several reasons why it is interesting to recover ft – for example, situations in which the model of the complete set of variables appears very complex, although in fact, if we are interested in the long-run behavior, a simpler representation, using a small set of common long-memory factors, could be adequate. This is the case for cointegration in large systems. Economists often conduct research on what might be considered to be natural subdivisions of the macroeconomy. The analysis of the long-run behavior of the whole macrosystem can be conducted by first finding the common factors in every subdivision of the economy and then studying cointegration among them. Another reason for singling out the ft is that the estimation of this common factor allows one to decompose (yt, xt) into two components (ft, (ỹt, x̃t)) that convey different kinds of information. For example, policymakers may be primarily interested in the trend (permanent component ft) behavior, but those concerned with business cycles are more interested in the cyclical component (transitory component). Moreover, singling out the common factors allows us to investigate how they are related to other variables. The final goal of any factor model is to be able to identify the common factors with some observable variable. This article proposes a way of achieving this.
The situation studied here has analogies with the decomposition of an I(1) series into permanent and transitory components, where these components are considered to be I(1) and I(0), respectively. This question was considered by Quah (1992). Because the sum of an I(1) and an I(0) series is I(1), it is easily seen that the question, as posed, does not completely identify the I(1) permanent components. To achieve identification, a further condition has to be imposed, such as maintaining that the permanent component is a random walk, or requiring the two components to be orthogonal at all leads and lags. In this article a different condition is used. This is possible because the situation being studied here involves more than one series, and this extra dimension allows a different type of condition to be considered. Basically the conditions imposed are that ft be a linear combination of (yt, xt) and that the part that is left,

(ỹt, x̃t), not have any permanent effect on (yt, xt). The first condition

makes ft observable; the second one makes ft a good candidate to summarize the long-run behavior of the original variables. By these two conditions, we identify ft up to a nonsingular matrix multiplication to the left. The linear combination is easily estimated from a fully specified error-correction model (ECM). This makes the suggested decomposition very convenient, mainly because the ECM takes care of the unit-root problem (see Johansen 1988; Phillips 1991), and therefore hypothesis testing on the linear combination ft can be conducted using standard chi-squared tests. Another advantage is that any extension (nonlinearities, time-varying parameters, etc.) that could be incorporated in the ECM can be easily taken into account in this decomposition.
This article is organized as follows. Section 1 describes the factor model (1) for p variables and proposes a way to identify the common long-memory factors ft. Section 2 shows how to estimate the linear combinations that form the common factors and how to test hypotheses on these linear combinations. Section 3 is an application of the method. Section 4 concludes. Proofs of the main results are in the Appendix.

1. FACTOR MODEL

Let Xt be a (p × 1) vector of I(1) time series with mean 0, for simplicity, and assume that the rank of cointegration is r [there exists a (p × r) matrix α of rank r, such that α′Xt is I(0)]. It follows that

1. The vector Xt has an ECM representation

    \Delta X_t = \gamma_{p \times r} \alpha'_{r \times p} X_{t-1} + \sum_{i=1}^{\infty} \Gamma_i \Delta X_{t-i} + \epsilon_t,    (2)

where Δ = I − L, with L the lag operator.

2. The elements of Xt can be explained in terms of a smaller number (p − r) of I(1) variables, ft, called (common) factors, plus some I(0) components:

    X_t = A_1 f_t + \tilde{X}_t,    (3)

where A1 is (p × k), ft is (k × 1), and k = p − r.

In the standard factor analysis, mostly oriented to cross-section data [for time series, see Peña and Box (1987)], the main objective is to estimate the loading matrix A1 and the number k of common factors from (3). In our case, these two things are already known once the cointegrating vectors, α, have been estimated: k = p − r and A1 is any basis of the null space of α′ (α′A1 = 0). The goal of this article is to estimate ft. In factor analysis this is done from (3), after imposing constraints on ft and X̃t that are not adequate in time series. Even dynamic factor analysis

(see Geweke 1977) needs the assumption of stationarity, which does not hold here. As will be shown in Section 2, the common factors can be estimated from the ECM (2) instead of from (3).

One of the conditions that will identify the common factors, ft, is to impose that ft be linear combinations of the variables Xt:

    f_t = B_1 X_t,    (4)

where B1 is a (k × p) matrix.

This condition not only helps to identify ft but also to associate the common factors with some observable variables, which is always advisable in factor analysis. The other condition that will identify ft (up to a nonsingular matrix multiplication to the left) is to impose that A1ft and X̃t form the permanent and transitory components of Xt, respectively, according to the following definition of a permanent–transitory (P–T) decomposition [part of this definition follows Quah (1992)].

Definition 1: Let Xt be a difference-stationary sequence. A P–T decomposition for Xt is a pair of stochastic processes Pt, Tt such that

1. Pt is difference-stationary and Tt is covariance stationary,
2. var(ΔPt) > 0 and var(Tt) > 0,
3. Xt = Pt + Tt,
4. if we let

    H^*(L) \begin{bmatrix} \Delta P_t \\ T_t \end{bmatrix} =
    \begin{bmatrix} u_{Pt} \\ u_{Tt} \end{bmatrix}    (5)

be the autoregressive (AR) representation of (ΔPt, Tt), where H*(L) is a (2p × 2p) matrix of lag polynomials, with uPt and uTt uncorrelated, then

    (a) \lim_{h \to \infty} \frac{\partial E_t(X_{t+h})}{\partial u_{Pt}} \neq 0

and

    (b) \lim_{h \to \infty} \frac{\partial E_t(X_{t+h})}{\partial u_{Tt}} = 0,

where Et is the conditional expectation with respect to the past history.
According to Condition 4, the only shocks that can affect the long-run forecast of Xt are those coming from the innovation term, uPt, of the permanent component, Pt. Condition (4) is not included in Quah's definition, and it is this that makes Pt and Tt permanent and transitory components, respectively. The next proposition clarifies this condition.

Proposition 1: Let

    \begin{bmatrix} H_{11}(L) & H_{12}(L) \\ H_{21}(L) & H_{22}(L) \end{bmatrix}
    \begin{bmatrix} \Delta P_t \\ T_t \end{bmatrix} =
    \begin{bmatrix} u_{1t} \\ u_{2t} \end{bmatrix}    (6)

be the AR representation of (ΔPt, Tt). Condition (4) in Definition 1 is satisfied iff the total multiplier of ΔPt with respect to Tt is 0; equivalently,

    H_{12}(1) = 0.    (7)

Apart from the instantaneous causality between the innovations

(u1t, u2t) of both components that is likely to occur in economics because of temporal aggregation (see Granger 1980), Condition (4) says that

Tt does not Granger-cause Pt in the long run or at frequency 0 [see Geweke (1982) and Granger and Lin (1992) for a formal definition of causality at different frequencies]. Let us consider the following example:

Xt = Pt + Tt, (8) where

    \Delta P_t = a_1 T_{t-1} + a_2 \Delta T_{t-1} + u_{1t}    (9)

and

    T_t = b_1 \Delta P_{t-1} + u_{2t}.    (10)

This is a P–T decomposition according to Definition 1 iff a1 = 0. When a1 ≠ 0, even though Tt is I(0), this term cannot be called transitory because it will have a permanent effect on Xt (i.e., an effect on the long-run forecast of Xt). Notice that changes in the permanent component can affect the transitory component and also that changes in the transitory component could have an impact on the changes of the permanent component (a transitory impact on the levels of Pt and therefore on Xt).
There are decompositions that do not satisfy Condition (4). For instance, in the decomposition proposed by Aoki (1989), based on a dynamic factor (state-space) model, the I(0) component may have a permanent effect on the levels of the I(1) component and therefore on Xt. Another example is the decomposition of Kasa (1992):

    X_t = \alpha_\perp (\alpha_\perp' \alpha_\perp)^{-1} f_t + \alpha (\alpha' \alpha)^{-1} z_t,    (11)

where ft = α⊥′Xt and zt = α′Xt. In general (see the proof of the next proposition), X̃t = α(α′α)^{−1}zt will not be "transitory" according to Condition (4) in Definition 1.
The next proposition shows that the two conditions required for the common factors are enough to identify them up to a nonsingular transformation.

Proposition 2: In the factor model (2) the following conditions are sufficient to identify the common factors f_t:

1. f_t are linear combinations of X_t.
2. A_1 f_t and X̃_t form a P–T decomposition.

Substituting (4) in (3), we obtain X̃_t = (I − A_1B_1)X_t = A_2α′X_t = A_2z_t, where z_t = α′X_t. Then, from the ECM (2), it is clear that the only linear combinations of X_t such that X̃_t has no long-run impact on X_t are

$$\underset{k\times 1}{f_t} = \underset{k\times p}{\gamma_\perp'}\,\underset{p\times 1}{X_t}, \qquad (12)$$
where γ⊥′γ = 0 and k = p − r. These are the linear combinations of ΔX_t that have the "common feature" (see Engle and Kozicki 1990) of not containing the levels of the error correction term z_{t−1} in them. Once the common factors f_t are identified, inverting the matrix

(γ⊥, α)′, we obtain the P–T decomposition of X_t proposed in this article:

$$\underset{p\times 1}{X_t} = \underset{p\times k}{A_1}\,\underset{k\times p}{\gamma_\perp'}X_t + \underset{p\times r}{A_2}\,\underset{r\times p}{\alpha'}X_t, \qquad (13)$$

where A_1 = α⊥(γ⊥′α⊥)^{−1} and A_2 = γ(α′γ)^{−1}. The next proposition shows when this common-factor decomposition (13) exists.

Proposition 3: If the matrix $\Pi = \underset{p\times r}{\gamma}\,\underset{r\times p}{\alpha'}$ has no more than k = p − r eigenvalues equal to 0 – that is, if det(α′γ) ≠ 0 – then (γ⊥, α)′ is nonsingular and the factor model (13) exists.
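A minimal numerical sketch of the decomposition (13) may be helpful. In the fragment below (Python with numpy), α and γ are simply assumed known, with values mirroring the consumption–GNP example of Section 3.1; it builds A_1, A_2, the common factor f_t, and the P–T components, and checks that they add back up to X_t.

```python
import numpy as np

# Assumed cointegrating and adjustment vectors for a p = 2, r = 1 system.
alpha = np.array([[1.0], [-1.0]])   # z_t = c_t - y_t
gamma = np.array([[0.0], [1.0]])    # only the second equation error-corrects

def orth_complement(M):
    """Basis of the orthogonal complement of the column space of M."""
    u, s, _ = np.linalg.svd(M, full_matrices=True)
    rank = int(np.sum(s > 1e-10))
    return u[:, rank:]

gamma_perp = orth_complement(gamma)                          # p x (p - r)
alpha_perp = orth_complement(alpha)

A1 = alpha_perp @ np.linalg.inv(gamma_perp.T @ alpha_perp)   # p x k
A2 = gamma @ np.linalg.inv(alpha.T @ gamma)                  # p x r

X_t = np.array([[2.0], [1.5]])      # one observation (c_t, y_t)'
f_t = gamma_perp.T @ X_t            # common I(1) factor, eq. (12)
z_t = alpha.T @ X_t                 # error-correction term
P_t = A1 @ f_t                      # permanent component
T_t = A2 @ z_t                      # transitory component
assert np.allclose(X_t, P_t + T_t)  # decomposition (13)
```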

Even though f_t is not estimated from the factor model (3), the assumptions made to identify the common factors imply certain constraints on the P–T components that are the counterpart of assumptions imposed in standard factor analysis.

Proposition 4: The factor model

$$X_t = A_1 f_t + A_2 z_t, \qquad (14)$$
where f_t = γ⊥′X_t and z_t = α′X_t, satisfies the following properties:

1. The common factors ft are not cointegrated.

2. Cov(Δf*_it, z*_{j,t−s}) = 0 (i = 1, …, k; j = 1, …, p − k; s ≥ 0), where Δf*_it = Δf_it − E(Δf_it | lags(ΔX_{t−1})) and z*_jt = z_jt − E(z_jt | lags(ΔX_{t−1})).

The first property follows from Proposition 3 and the second from the

ECM (2). This second property is another way of expressing that z_t does not cause f_t in the long run.

Properties (1) and (2) are equivalent to the assumptions made in standard factor analysis on the uncorrelatedness of the factors and the orthogonality between the factors and the error term (A_2z_t). As mentioned before, most of the P–T decompositions have been designed and used in a univariate framework. Stock and Watson (1988) proposed a common-trends decomposition that basically extends the univariate decomposition proposed by Beveridge and Nelson (1981) to cointegrated systems. The next proposition shows the connection between the common-trends decomposition of Stock and Watson and decomposition (14).

Proposition 5: The random-walk component (in the Beveridge–Nelson sense) of the I(1) common factor f_t in the decomposition (14) corresponds to the common trend of the Stock–Watson decomposition.

The advantage of our decomposition with respect to the common-trends model of Stock and Watson is that in our case it is easier to estimate the common long-memory components and to test hypotheses on them, as is shown in Section 2.

Notice that alternative definitions of f_t will vary only by I(0) components and therefore will be cointegrated. In the univariate case, part of the literature has been oriented toward obtaining orthogonal P–T decompositions (see Bell 1984; Quah 1992; Watson 1986). To the best of our knowledge, nothing has been written about the multivariate case. From the factor model (14) an orthogonal decomposition can be obtained such that the corresponding Δf_t and z_t are uncorrelated at all leads and lags. First, project z_t on Δf_{t−s} for all s and get the residuals

$$\tilde z_t = z_t - P[z_t \mid \Delta f_{t-s}\ \forall s]. \qquad (15)$$
Then define the new I(1) common factors f̃_t as
$$\tilde f_t = (A_1'A_1)^{-1}A_1'(X_t - A_2\tilde z_t). \qquad (16)$$
It is clear that Δf̃_t and z̃_t are uncorrelated at all leads and lags, but notice that, unless the z̃_t are linear combinations of current X_t, f̃_t will not be a linear combination of contemporaneous X_t. This is what is lost if orthogonality is required. To obtain an orthogonal P–T decomposition (according to Definition 1), one has to allow the common factors to be linear combinations of future, present, and past values of X_t.
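In practice the projection in (15) must be truncated. A rough sketch of (15)–(16) under that assumption follows; the truncation lag m and the least-squares implementation are choices of ours, not of the article.

```python
import numpy as np

def orthogonal_factors(f, z, A1, A2, m=4):
    """Sketch of (15)-(16): orthogonalise z_t against leads/lags of Delta f_t.

    f (T x k) and z (T x r) come from decomposition (13); A1, A2 as defined
    there. The projection on Delta f_{t-s} is truncated at s = -m, ..., m.
    """
    df = np.vstack([np.zeros((1, f.shape[1])), np.diff(f, axis=0)])
    T = len(z)
    t = np.arange(m, T - m)
    X = np.hstack([df[t + s] for s in range(-m, m + 1)])       # leads and lags
    beta, *_ = np.linalg.lstsq(X, z[t], rcond=None)
    z_tilde = z[t] - X @ beta                                  # eq. (15)
    X_obs = f[t] @ A1.T + z[t] @ A2.T                          # rebuild X_t
    f_tilde = (X_obs - z_tilde @ A2.T) @ np.linalg.pinv(A1).T  # eq. (16)
    return f_tilde, z_tilde
```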

2. ESTIMATION AND TESTING

In this section it is shown how to estimate and test hypotheses on γ⊥. Most of the proofs in this section are based on Johansen and Juselius (1990).

Consider a finite ECM with Gaussian errors,

$$H_1: \Delta X_t = \Pi X_{t-1} + \Gamma_1\Delta X_{t-1} + \cdots + \Gamma_{q-1}\Delta X_{t-q+1} + \epsilon_t, \quad t = 1,\ldots,T, \qquad (17)$$
where ε_1, …, ε_T are IIN_p(0, Λ), X_{−q+1}, …, X_0 are fixed, and
$$\underset{p\times p}{\Pi} = \underset{p\times r}{\gamma}\,\underset{r\times p}{\alpha'}. \qquad (18)$$
Following Johansen (1988), we can concentrate the model with respect to Π, eliminating the other parameters. This is done by regressing ΔX_t and X_{t−1} on (ΔX_{t−1}, …, ΔX_{t−q+1}). This gives residuals R_0t and R_1t and residual product matrices

$$S_{ij} = T^{-1}\sum_{t=1}^{T}R_{it}R_{jt}', \qquad i, j = 0, 1. \qquad (19)$$
The remaining analysis will be performed using the concentrated model

$$R_{0t} = \gamma\alpha'R_{1t} + \epsilon_t. \qquad (20)$$
The estimate of α is determined by reduced-rank regression in (20) (see Ahn and Reinsel 1990; Anderson 1951; Johansen 1988) and is found by solving the eigenvalue problem

$$|\lambda S_{11} - S_{10}S_{00}^{-1}S_{01}| = 0 \qquad (21)$$
for eigenvalues λ̂_1 > ··· > λ̂_p and eigenvectors V̂ = (v̂_1, …, v̂_p). The maximum likelihood estimators are given by α̂ = (v̂_1, …, v̂_r), γ̂ = S_01α̂, and Λ̂ = S_00 − γ̂γ̂′. Finally the maximized likelihood function becomes

$$L_{\max}^{-2/T} = |S_{00}|\prod_{i=1}^{r}(1-\hat\lambda_i) = |S_{00.1}|\left[\prod_{i=r+1}^{p}(1-\hat\lambda_i)\right]^{-1}, \qquad (22)$$
where S_{00.1} = S_00 − S_01S_{11}^{−1}S_10. The next theorem shows how to estimate γ⊥.

Theorem 1: Under the hypothesis of cointegration H_2: Π = γα′, the maximum likelihood estimator of γ⊥ is found by the following procedure: First solve the equation

$$|\lambda S_{00} - S_{01}S_{11}^{-1}S_{10}| = 0, \qquad (23)$$
giving the eigenvalues λ̂_1 > ··· > λ̂_p and eigenvectors M̂ = (m̂_1, …, m̂_p), normalized such that M̂′S_00M̂ = I. The choice of γ̂⊥ is now
$$\hat\gamma_\perp = (\hat m_{r+1},\ldots,\hat m_p), \qquad (24)$$
which gives the maximized likelihood function (22).
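Computationally, Theorem 1 amounts to one generalized symmetric eigenvalue problem. A sketch in Python with numpy/scipy follows; the residual matrices R0, R1 from the concentration step are taken as given, and the function name is ours.

```python
import numpy as np
from scipy.linalg import eigh, inv

def gamma_perp_mle(R0, R1, r):
    """ML estimate of gamma_perp as in Theorem 1 (a sketch).

    R0, R1 : (T x p) residuals from regressing dX_t and X_{t-1} on the
             lagged differences (the concentration step before (20)).
    r      : cointegration rank.
    """
    T = R0.shape[0]
    S00 = R0.T @ R0 / T
    S01 = R0.T @ R1 / T
    S11 = R1.T @ R1 / T
    # Eigenvalue problem (23): |lambda*S00 - S01 S11^{-1} S10| = 0, solved
    # as eigh(A, B) with A = S01 S11^{-1} S10 and B = S00; eigh enforces
    # the normalization M' S00 M = I.
    lam, M = eigh(S01 @ inv(S11) @ S01.T, S00)
    order = np.argsort(lam)[::-1]          # descending: lam_1 > ... > lam_p
    lam, M = lam[order], M[:, order]
    return lam, M[:, r:]                   # (24): the last p - r eigenvectors
```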

Notice, as Johansen (1989) pointed out, the duality between γ⊥ and α. This is the idea of the proof of the preceding theorem. Both estimates come from the canonical correlation analysis between R_0t and R_1t. They are the canonical vectors and can be found by solving the following equations:

$$\begin{bmatrix}-\lambda_i S_{00} & S_{01}\\ S_{10} & -\lambda_i S_{11}\end{bmatrix}\begin{bmatrix}\hat m_i\\ \hat v_i\end{bmatrix} = 0, \quad i = 1,\ldots,p, \qquad (25)$$
with the normalizations M̂′S_00M̂ = I_p and V̂′S_11V̂ = I_p. From (25) and the preceding normalizations, it is clear that

$$\hat m_j'S_{01}\hat v_i = 0, \quad i \neq j. \qquad (26)$$

Because α̂ = (v̂_1, …, v̂_r) and γ̂ = S_01α̂, then γ̂⊥ = (m̂_{r+1}, …, m̂_p). If for any reason α is not estimated by maximum likelihood or simultaneous reduced-rank least squares [see Gonzalo (1994) for different methods of estimation], the way to estimate γ⊥ is the following: Insert the estimate of α, α̃, into the ECM (17), use this to estimate γ̃, and then solve

$$|\lambda S_{00} - \underset{p\times r}{\tilde\gamma}\,\underset{r\times p}{\tilde\gamma'}| = 0, \qquad (27)$$
giving the eigenvalues λ̃_1 > ··· > λ̃_p (λ̃_{r+j} = 0, j = 1, …, p − r) and eigenvectors M̃ = (m̃_1, …, m̃_p) normalized such that M̃′S_00M̃ = I. The choice of γ̃⊥ is now γ̃⊥ = (m̃_{r+1}, …, m̃_p), the eigenvectors corresponding to the eigenvalues equal to 0.

To find the asymptotic distribution of γ̂⊥, it is convenient to decompose γ̂⊥ as follows: γ̂⊥ = γ⊥d̂ + γâ, where d̂ = (γ⊥′γ⊥)^{−1}γ⊥′γ̂⊥ and â = (γ′γ)^{−1}γ′γ̂⊥.

Theorem 2: When T → ∞,
$$T^{1/2}(\hat\gamma_\perp\hat d^{-1} - \gamma_\perp) \Rightarrow N(0, V), \qquad (28)$$
where ⇒ means convergence in distribution, $V = \gamma(\gamma'(S_{00}-\Lambda)\gamma)^{-1}\gamma' \otimes \gamma_\perp'\Lambda\gamma_\perp$, and S_00 = var(ΔX_t | ΔX_{t−1}, …, ΔX_{t−q+1}).

As mentioned earlier, one of the advantages of our decomposition is that one can test whether or not certain linear combinations of X_t can be common factors. Johansen (1991) showed how to test the hypotheses on α and γ:
$$H_3: \underset{p\times r}{\alpha} = \underset{p\times s}{J}\,\underset{s\times r}{\varphi}, \quad r \le s \le p,$$
and
$$H_{4a}: \underset{p\times r}{\gamma} = \underset{p\times n}{Q}\,\underset{n\times r}{\psi}, \quad r \le n \le p.$$

In the next theorem it is shown how to test the hypotheses on γ⊥:
$$H_{4b}: \underset{p\times k}{\gamma_\perp} = \underset{p\times m}{G}\,\underset{m\times k}{\theta}, \quad \text{with } k = p - r \text{ and } k \le m \le p.$$

Theorem 3: Under the hypothesis H_4b: γ⊥ = Gθ, one can find the maximum likelihood estimator of γ⊥ as follows:

First solve

$$|\lambda G'S_{00}G - G'S_{01}S_{11}^{-1}S_{10}G| = 0 \qquad (29)$$
for λ̂_{4b.1} > ··· > λ̂_{4b.m} and M̂_4b = (m̂_{4b.1}, …, m̂_{4b.m}) normalized by M̂′_4b(G′S_00G)M̂_4b = I. Choose
$$\underset{m\times(p-r)}{\hat\theta} = (\hat m_{4b.m+1-(p-r)},\ldots,\hat m_{4b.m}) \quad\text{and}\quad \hat\gamma_\perp = G\hat\theta. \qquad (30)$$
The maximized likelihood function becomes

$$L_{\max}^{-2/T}(H_{4b}) = |S_{00.1}|\left(\prod_{i=r+1}^{p}\bigl(1-\hat\lambda_{4b.i+(m-p)}\bigr)\right)^{-1}, \qquad (31)$$
which gives the likelihood ratio test of the hypothesis H_4b in H_2 as

$$-2\ln(H_{4b}; H_2) = -T\sum_{i=r+1}^{p}\ln\!\left\{\frac{1-\hat\lambda_{4b.i+(m-p)}}{1-\hat\lambda_i}\right\} \sim \chi^2_{(p-r)\times(p-m)}. \qquad (32)$$
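Under the same assumptions as the earlier sketch (given concentrated residuals R0, R1), the test (29)–(32) can be coded directly; the helper below is ours, not part of the article.

```python
import numpy as np
from scipy.linalg import eigh, inv
from scipy.stats import chi2

def lr_test_H4b(R0, R1, G, r):
    """LR test (32) of H_4b: gamma_perp = G*theta (a sketch).

    R0, R1 : (T x p) concentrated residuals; G : p x m restriction matrix;
    r : cointegration rank. Returns the statistic and its asymptotic
    p-value from chi^2 with (p - r)(p - m) degrees of freedom.
    """
    T, p = R0.shape
    m = G.shape[1]
    S00 = R0.T @ R0 / T; S01 = R0.T @ R1 / T; S11 = R1.T @ R1 / T
    A = S01 @ inv(S11) @ S01.T
    lam = np.sort(eigh(A, S00, eigvals_only=True))[::-1]            # (23)
    lam4b = np.sort(eigh(G.T @ A @ G, G.T @ S00 @ G,
                         eigvals_only=True))[::-1]                  # (29)
    # (32): compare the p - r smallest eigenvalues of each problem.
    stat = -T * np.sum(np.log((1 - lam4b[m - (p - r):]) / (1 - lam[r:])))
    df = (p - r) * (p - m)
    return stat, chi2.sf(stat, df)
```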

Finally one may be interested in estimating α and γ⊥ under H_3 and H_4b. The way to proceed is to convert H_4b into H_4a. Notice that Q (the matrix in H_4a) is formed by the p − m eigenvectors of GG′ corresponding to the eigenvalues equal to 0. Following theorem 3.1 of Johansen

(1991), α and γ can be estimated under H_3 and H_4a. Once γ is estimated, we are in the situation described in (27).

3. APPLICATIONS

In the first two examples (consumption and gross national product (GNP), dividends and stock prices), it is shown how to obtain the common factors directly from an ECM. The third application (interest rates in Canada and the United States) shows, step by step, how to estimate the common factors and how to decompose these variables into permanent and transitory components.

3.1 Consumption and GNP, Dividends and Stock Prices

The vector autoregression (VAR) (ECM) models of Tables 12.1 and 12.2 are reproduced from Cochrane (1991). Focusing our attention on the

Table 12.1 Consumption and GNP Regressions (Cochrane 1991).

1. Vector autoregression

Left variable        const.   c_{t-1} − y_{t-1}   Δc_{t-1}   Δc_{t-2}   Δy_{t-1}   Δy_{t-2}    R²
Δc_t   coeff.         −.43        −.02               .07       −.02        .09       −.02     .06
       t stat.        −.49       −1.23               .90       −.19       1.91       −.40
Δy_t   coeff.         5.19         .08               .52        .16        .22        .14     .27
       t stat.        3.49        3.45              3.81       1.12       2.74       1.89

2. P–T decomposition

γ⊥′ = (1, 0); α′ = (1, −1).

$$\begin{bmatrix}c_t\\ y_t\end{bmatrix} = \begin{bmatrix}1\\ 1\end{bmatrix}f_t + \begin{bmatrix}0\\ -1\end{bmatrix}z_t,$$

where f_t = γ⊥′(c_t, y_t)′ = c_t and z_t = α′(c_t, y_t)′ = c_t − y_t.

Note: y_t denotes real GNP and c_t denotes log (nondurable + services consumption). Δ denotes first differences, Δy_t = y_t − y_{t−1}. Data sample: 1947:1–1989:3.

Table 12.2 Dividend and Price Regressions (Cochrane 1991).

1. Vector autoregression

Left variable        const.   d_{t-1} − p_{t-1}   Δd_{t-1}   Δd_{t-2}   Δp_{t-1}   Δp_{t-2}    R²
Δd_t   coeff.        20.01        .038              .046       −.06       −.08       −.04    .038
       t stat.         .78        .47               .25         .34       −.65        .32
Δp_t   coeff.        78.65        .225              .06        −.08       .114       −.09     .14
       t stat.        2.34       2.11               .25        −.36        .68       −.55

2. P–T decomposition

γ⊥′ = (1, 0); α′ = (1, −1).

$$\begin{bmatrix}d_t\\ p_t\end{bmatrix} = \begin{bmatrix}1\\ 1\end{bmatrix}f_t + \begin{bmatrix}0\\ -1\end{bmatrix}z_t,$$

where f_t = d_t and z_t = d_t − p_t.

Note: d_t denotes log dividends and p_t denotes log price (cumulated returns) on the value-weighted New York Stock Exchange portfolio. Δ denotes first differences; Δp_t is the log return. Data sample: 1927–1988.

consumption–GNP example, it can be seen from the VAR of Table 12.1 that the error-correction term (c_{t−1} − y_{t−1}) does not appear to be significant in the consumption equation; therefore, γ′ = (0, 1) and γ⊥′ = (1, 0). In other words, the I(1) common factor (permanent component) in our decomposition is

$$f_t = (1, 0)\begin{bmatrix}c_t\\ y_t\end{bmatrix} = c_t,$$
a multiple of the consumption variable. This means that, if consumption is kept fixed, any change in income is going to affect (c_t, y_t) only through z_t (the transitory component) and therefore will only have transitory effects (see the factor model in Table 12.1). This is exactly the conclusion reached by Cochrane (1991) through the impulse-response functions:

GNP's response to a consumption shock is partly permanent but also partly temporary. More importantly, GNP's response to a GNP shock holding consumption constant is almost entirely transitory. This finding has a natural interpretation: If consumption does not change, permanent income must not have changed, so any change in GNP must be entirely transitory. (p. 2)

The same kind of conclusion is obtained in the second example with dividends and stock prices in Table 12.2. From the factor model it can be seen that a shock in dividends has a permanent (long-run) effect on prices and dividends, but a shock in prices, with no movements in dividends, is completely transitory.

3.2 Interest Rates in Canada and the United States

The main purpose of this application is to find the permanent component that is driving the interest rates of Canada and the United States in the long run. To do that, three interest rates with different maturities have been considered in each country – short-term, medium-term, and long-term interest rates. In Canada (see Fig. 12.1), the short-term rate is the weighted average of the yields on successful bids for three-month treasury bills (x1c), the medium-term rate refers to government bonds with original maturity of 3 to 5 years (x2c), and the long-term rate refers to bonds with original maturity of 10 years and over (x3c). In the United States (see Fig. 12.2), the short-term rate is an annual average of the discount rate on new issues of three-month treasury bills (x1u), the medium-term rate refers to 3-year constant-maturity government bonds (x2u), and the long-term rate refers to 10-year constant-maturity bonds (x3u). The data consist of 240 monthly observations from 1969:1 to 1988:12 and were obtained from the IMF data base. To show the potential of our decomposition as a dimension-reduction method, two different approaches have been followed to obtain the common permanent component of the whole set of interest rates. In the first approach, the interest rates are considered within countries, and in each country the I(1) common factor is estimated. The


Figure 12.1. Canada Interest Rates (1969:1–1988:12): ——, Short-Term (ST); ...., Medium-Term (MT); –·–·, Long-Term (LT).

Figure 12.2. U.S. Interest Rates (1969:1–1988:12): ——, Short-Term (ST); ...., Medium-Term (MT); –·–·, Long-Term (LT).

common permanent component between these two I(1) country factors will be the factor that is driving the whole system of interest rates in the long run. In this process the number of variables involved at every step is at most 3. This is what makes this first approach very convenient for analyzing cointegration in big systems. The second approach consists of analyzing the cointegration of the whole system (6 variables) without any a priori partition. This second way becomes unfeasible when the number of variables is large (greater than 10). The conclusion obtained by these two different approaches

Table 12.3 Augmented Dickey–Fuller Statistics for Tests of a Unit Root.

        ADF(0)   ADF(1)   ADF(2)   ADF(3)   ADF(4)
x1ct    −1.46    −2.18    −2.10    −1.97    −1.93
x2ct    −1.67    −1.93    −1.80    −1.86    −1.63
x3ct    −1.64    −1.75    −1.67    −1.73    −1.6
x1ut    −2.05    −2.7     −2.17    −2.15    −2.07
x2ut    −1.72    −2.35    −1.78    −1.83    −1.79
x3ut    −1.55    −1.88    −1.60    −1.66    −1.62

Note: ADF(q) is the t statistic of δ̂ in the regression $\Delta x_t = c + \delta x_{t-1} + \sum_{j=1}^{q}\phi_j\Delta x_{t-j} + e_t$. The critical values (from MacKinnon 1991) for n = 240 are 1% (−3.46), 5% (−2.87), and 10% (−2.57). x_ijt denotes the i term interest rate in country j at time t, for i = 1 (short), i = 2 (medium), i = 3 (long), j = c (Canada), and j = u (U.S.). Data are from the IMF. Sample period: 1969:1–1988:12.
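For readers who want to reproduce this kind of unit-root step on their own data, a minimal sketch using statsmodels is given below; the placeholder series stands in for the IMF data, which are not bundled here.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Placeholder for one interest-rate series (the IMF data are not included);
# a random walk mimics the I(1) behaviour reported in Table 12.3.
rng = np.random.default_rng(0)
x1c = np.cumsum(rng.standard_normal(240))

for q in range(5):
    # regression="c" (constant only) matches the ADF regression in the note;
    # maxlag=q with autolag=None forces exactly q lagged differences.
    stat = adfuller(x1c, maxlag=q, autolag=None, regression="c")[0]
    print(f"ADF({q}) = {stat:.2f}")
```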

matches perfectly. There is only one common long-memory factor in the whole system formed by the six interest rates, and that factor is the U.S. common permanent component. To reach the preceding conclusion these steps have been followed:

(1) Unit-root tests (Table 12.3): Using the augmented Dickey–Fuller test, the null of the unit root is not rejected for any of the six interest rates.

(2) Cointegration tests (Table 12.4): Using the Johansen likelihood ratio (LR) test, for a VAR of order 3 (order suggested by the Akaike information criterion), it is found that Canada, as well as the United States, has two cointegrating vectors, and the whole system has five cointegrating vectors. Therefore there is one common I(1) factor in each country, and they are cointegrated, implying that there is only one common permanent component in the whole system.

(3) Estimation of the cointegration structure: In Table 12.5 we provide the estimates of the cointegrating vectors and of the linear combinations that define our common permanent components. From these estimates, following Section 1, all interest rates can be decomposed into permanent and transitory components. Some examples are shown in Figures 12.3 and 12.4.

(4) Testing hypotheses on the long-memory common factors: From Table 12.5, the I(1) common factor of the whole system

is f1 = −.006x1c + .034x2c − .003x3c + .112x1u − .22x2u + .26x3u. Following Theorem 3, we tested that the U.S. interest rates are the only variables driving the whole system in the long run; that is,

Table 12.4 Testing for Cointegration.

H2                 Trace    Trace (.90)   λmax    λmax (.90)
Canada
r ≤ 2               3.52       6.50        3.52      6.50
r ≤ 1              25.22      15.66       21.70     12.91
r = 0              56.63      28.71       31.40     18.90
United States
r ≤ 2               3.98       6.50        3.95      6.50
r ≤ 1              29.18      15.66       25.23     12.91
r = 0              61.98      28.71       32.79     18.90
Canada and United States
r ≤ 5               3.79       6.50        3.79      6.50
r ≤ 4              16.49      15.66       12.70     12.91
r ≤ 3              36.59      28.71       20.10     18.90
r ≤ 2              68.89      45.23       32.29     24.78
r ≤ 1             104.11      66.49       35.23     30.84
r = 0             153.87      90.39       49.75     36.35

Note: The critical values have been obtained from Osterwald-Lenum (1992). Test statistics for the hypothesis H2 are for several values of r versus r + 1 (λmax) and versus the general alternative H1 (trace) for Canadian and U.S. interest rate data (1969:1–1988:12).
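The Johansen statistics in Table 12.4 can be reproduced in outline with statsmodels; the snippet below uses placeholder data again, with parameter names taken from the statsmodels API.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# Placeholder (240 x 3) system standing in for one country's rates.
rng = np.random.default_rng(0)
X = np.cumsum(rng.standard_normal((240, 3)), axis=0)

# A VAR of order 3 in levels corresponds to k_ar_diff = 2 lagged
# differences in the ECM; det_order=0 includes a constant term.
res = coint_johansen(X, det_order=0, k_ar_diff=2)
print("trace statistics:  ", res.lr1)   # r <= 0, 1, 2
print("max-eig statistics:", res.lr2)   # r vs. r + 1
print("eigenvalues:       ", res.eig)
```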

Figure 12.3. Canada: P–T Decomposition of Short-Term Interest Rates (x1c); f1 = −.006x1c + .034x2c − .003x3c + .112x1u − .22x2u + .26x3u; z1 = .008x1c + .037x2c − .081x3c − .083x1u + .075x2u + .032x3u; z2 = −.007x1c + .046x2c − .074x3c + .073x1u − .275x2u + .242x3u; z3 = .068x1c − .100x2c − .075x3c − .053x1u + .101x2u + .039x3u; z4 = −.019x1c + .181x2c − .239x3c − .011x1u − .014x2u + .089x3u; z5 = −.041x1c − .022x2c + .007x3c + .023x1u + .019x2u + .030x3u; Px1c = 7.86f1; Tx1c = −3.58z1 − 6.92z2 + 4.52z3 + 1.31z4 − 18.16z5: ——, x1c; ......, Px1c; –·–·–·, Tx1c.

Table 12.5 Estimation of the Cointegration Structure.

Eigenvalues λ̂
Canada:                    (.123, .086, .014)
United States:             (.128, .10, .016)
Canada and United States:  (.187, .136, .126, .080, .051, .016)

Eigenvectors V̂ a

Canada                     United States              Canada and United States
x1c  −.066   .009   .008   x1u   .091  −.045   .016   x1c   .008  −.007   .068  −.019  −.041   .001
x2c   .109  −.149  −.031   x2u  −.275  −.004  −.062   x2c   .037   .046  −.100   .181  −.022   .015
x3c  −.030   .148   .051   x3u   .191   .046   .076   x3c  −.081  −.074  −.075  −.239   .007  −.008
                                                      x1u  −.083   .073  −.053  −.011   .023   .012
                                                      x2u   .075  −.275   .101  −.014   .019  −.063
                                                      x3u   .032   .242   .039   .089   .030   .070

Eigenvectors M̂ b

Canada                     United States              Canada and United States
x1c  −.016   .018   .079   x1u  −.100  −.079   .107   x1c   .034   .015   .035  −.040  −.143  −.006
x2c   .058  −.380   .004   x2u   .324  −.123  −.189   x2c  −.160   .076  −.202   .296  −.099   .034
x3c   .095  −.466   .059   x3u  −.179   .273   .243   x3c   .225  −.023   .031  −.417   .189  −.003
                                                      x1u  −.110  −.053  −.045  −.047   .029   .112
                                                      x2u   .033   .270   .221  −.026   .128  −.220
                                                      x3u   .095  −.276   .004   .190  −.103   .260

Note: The eigenvalues λ̂ and eigenvectors V̂, M̂ are based on the normalizations V̂′S11V̂ = I and M̂′S00M̂ = I for Canada and U.S. interest rate data (1969:1–1988:12).
a The first r columns form α̂.
b The last p − r columns form γ̂⊥.

Figure 12.4. United States: P–T Decomposition of Long-Term Interest Rates (x3u); see definitions of variables in Figure 12.3. Px3u = 5.91f1; Tx3u = 6.01z1 − 2.81z2 + .252z3 − 1.27z4 + 1.79z5. ——, x3u; ......, Px3u; –·–·–·, Tx3u.

$$H_{4b}: \gamma_\perp = G\theta \quad\text{with}\quad G = \begin{bmatrix}0&0&0\\ 0&0&0\\ 0&0&0\\ 1&0&0\\ 0&1&0\\ 0&0&1\end{bmatrix}.$$
Under H_4b, θ̂ = (.2, −.25, .27). This hypothesis is not rejected, with a p value of .86. The same conclusion was obtained when the analysis was done by countries. The common long-memory factor in Canada is f1c = .08x1c + .004x2c + .06x3c, and in the United States f1u = .11x1u − .19x2u + .24x3u. These two common factors are cointegrated, and the hypothesis that in the long run the driving force of these two common factors is f1u has a p value of .45. Results in more detail can be found in the paper by Gonzalo and Granger (1992).
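In terms of the earlier hypothetical lr_test_H4b sketch, the restriction matrix for this test would be built as follows (illustrative only; R0 and R1 would come from the fitted six-variable ECM):

```python
import numpy as np

# G restricts gamma_perp to load only on the three U.S. rates, with the
# variable ordering (x1c, x2c, x3c, x1u, x2u, x3u); p = 6, m = 3.
G = np.vstack([np.zeros((3, 3)), np.eye(3)])
print(G)
# stat, pval = lr_test_H4b(R0, R1, G, r=5)   # hypothetical helper from Section 2
```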

4. CONCLUSION

The results of this article have implications on three fronts. In the first place, they provide a new form of estimating the I(1) common factors that ensure that a set of variables is cointegrated, thus allowing us to gain more understanding of the nature of economic time series. Second, they show a new method for estimating the permanent component ("trend") of a time series using multivariate information, and third, they provide a new way of studying cointegration in large systems by using the common long-memory factors of every "natural" subsystem. Further research needs to be done on the small-sample properties of γ̂⊥ and on how to incorporate different characteristics of the ECM (nonlinearities, time-varying parameters, etc.) in the estimation of the common factors and therefore in the estimation of P–T decompositions.

ACKNOWLEDGMENTS

This research was partially supported by the Sloan Foundation and U.S. National Science Foundation Grant SES-9023037. We thank Chor-Yiu Sin and two anonymous referees for helpful comments.

APPENDIX: PROOFS OF THE MAIN RESULTS

Proof of Proposition 1: Inverting (6),

$$\begin{bmatrix}\Delta P_t\\ T_t\end{bmatrix} = \begin{bmatrix}H^{11}(L) & H^{12}(L)\\ H^{21}(L) & H^{22}(L)\end{bmatrix}\begin{bmatrix}u_{1t}\\ u_{2t}\end{bmatrix}, \qquad (A.1)$$
we obtain the moving average representation of ΔP_t

$$\Delta P_t = H^{11}(1)u_{1t} + H^{12}(1)u_{2t} + (1-L)\{\tilde H^{11}(L)u_{1t} + \tilde H^{12}(L)u_{2t}\}, \qquad (A.2)$$
where
$$H^{1j}(L) = H^{1j}(1) + (1-L)\tilde H^{1j}(L), \quad j = 1, 2, \qquad (A.3)$$
and u_t = (u_1t, u_2t)′ is a vector white noise with covariance matrix

$$\Sigma = \begin{bmatrix}\Sigma_{11} & \Sigma_{12}\\ \Sigma_{21} & \Sigma_{22}\end{bmatrix}.$$

Assuming that u_1t and u_2t are not perfectly correlated, they can be decomposed as

$$u_{1t} = u_{Pt} \quad\text{and}\quad u_{2t} = \Sigma_{21}\Sigma_{11}^{-1}u_{Pt} + u_{Tt}. \qquad (A.4)$$
From (A.2) and (A.4),

$$\lim_{h\to\infty}\frac{\partial E_t(P_{t+h})}{\partial u_{Pt}} = H^{11}(1) + H^{12}(1)\Sigma_{21}\Sigma_{11}^{-1} \qquad (A.5)$$
and

$$\lim_{h\to\infty}\frac{\partial E_t(P_{t+h})}{\partial u_{Tt}} = H^{12}(1). \qquad (A.6)$$
Noticing that

$$\lim_{h\to\infty}E_t(X_{t+h}) = \lim_{h\to\infty}E_t(P_{t+h}), \qquad (A.7)$$

(P_t, T_t) will be a P–T decomposition according to Definition 1 iff $H^{12}(1) = H_{11}(1)^{-1}H_{12}(1)\bigl[H_{21}(1)H_{11}(1)^{-1}H_{12}(1) - H_{22}(1)\bigr]^{-1} = 0$. In other words, iff the total multiplier of ΔP_t with respect to T_t is 0,
$$H_{11}(1)^{-1}H_{12}(1) = 0. \qquad (A.8)$$

Proof of Proposition 3: If γα′ has only p − r eigenvalues equal to 0, then rank(α′γ) = r. Taking determinants on the right side of the matrix multiplication

$$\begin{bmatrix}\alpha'\\ \gamma_\perp'\end{bmatrix}\bigl[\gamma\ \ \gamma_\perp\bigr] = \begin{bmatrix}\alpha'\gamma & \alpha'\gamma_\perp\\ 0 & \gamma_\perp'\gamma_\perp\end{bmatrix}, \qquad (A.9)$$
it follows that this matrix has full rank and therefore

$$\operatorname{rank}\begin{bmatrix}\alpha'\\ \gamma_\perp'\end{bmatrix} = p. \qquad (A.10)$$

Proof of Proposition 5: In this proof, for simplicity it is assumed that X_t follows an AR(q) as in (17).

Multiplying the ECM (17) by γ⊥′ and substituting X_t = A_1f_t + A_2z_t into (17), we get the AR representation of the common factors f_t

$$\Delta f_t = \sum_{i=1}^{q-1}\gamma_\perp'\Gamma_i A_1\Delta f_{t-i} + \sum_{i=1}^{q-1}\gamma_\perp'\Gamma_i A_2\Delta z_{t-i} + \gamma_\perp'\epsilon_t. \qquad (A.11)$$
From (A.11), the random-walk part [in the Beveridge–Nelson (1981) sense] of f_t is

$$\left(1 - \sum_{i=1}^{q-1}\gamma_\perp'\Gamma_i A_1\right)^{-1}\gamma_\perp'(I-L)^{-1}\epsilon_t. \qquad (A.12)$$
The common trend decomposition of Stock and Watson (1988) is obtained from the Wold representation of ΔX_t,
$$\Delta X_t = C(L)\epsilon_t = C(1)\epsilon_t + \tilde C(L)\Delta\epsilon_t, \qquad (A.13)$$
where
$$C(1) = \alpha_\perp(\gamma_\perp'\Psi\alpha_\perp)^{-1}\gamma_\perp' \qquad (A.14)$$
with

$$\Psi = \text{mean lag matrix in } H_1 = I - \Gamma_1 - \cdots - \Gamma_{q-1} + \Pi. \qquad (A.15)$$
Therefore,

$$C(1) = \alpha_\perp(\gamma_\perp'\alpha_\perp)^{-1}\times\left[I - \gamma_\perp'\left(\sum_{i=1}^{q-1}\Gamma_i\right)\alpha_\perp(\gamma_\perp'\alpha_\perp)^{-1}\right]^{-1}\gamma_\perp'. \qquad (A.16)$$
The result follows from noticing that A_1 = α⊥(γ⊥′α⊥)^{−1}.

Proof of Theorem 1: Johansen (1989) showed that the likelihood function of Model (20) can be expressed as

$$L_{\max}^{-2/T} = |S_{00.1}|\,|\gamma_\perp'S_{00}\gamma_\perp|\,|\gamma_\perp'(S_{00} - S_{01}S_{11}^{-1}S_{10})\gamma_\perp|^{-1}. \qquad (A.17)$$
Therefore L is maximized by maximizing

$$|\gamma_\perp'(S_{00} - S_{01}S_{11}^{-1}S_{10})\gamma_\perp|\,|\gamma_\perp'S_{00}\gamma_\perp|^{-1}. \qquad (A.18)$$

This is accomplished by choosing γ⊥ to be the eigenvectors corresponding to the p − r smallest eigenvalues of $S_{01}S_{11}^{-1}S_{10}$ with respect to S_00, and the maximal value is

$$\prod_{i=r+1}^{p}(1-\hat\lambda_i). \qquad (A.19)$$
The result follows from substituting (A.19) in (A.17).

Proof of Theorem 2: The proof follows from proposition 3.11 of Johansen and Juselius (1990).

Proof of Theorem 3: Substituting γ⊥ by Gθ in (A.17), it is clear that θ can be estimated as the eigenvectors corresponding to the (p − r) smallest eigenvalues of $G'S_{01}S_{11}^{-1}S_{10}G$ with respect to G′S_00G.

The distribution of the LR test follows from proposition (3.13) of Johansen and Juselius (1990).

REFERENCES

Ahn, S. K., and Reinsel, G. C. (1990), "Estimation for Partially Nonstationary Multivariate Autoregressive Models," Journal of the American Statistical Association, 85, 813–823.

Anderson, T. W. (1951), "Estimating Linear Restrictions on Regression Coefficients for Multivariate Normal Distributions," The Annals of Mathematical Statistics, 22, 327–351.
Aoki, M. (1989), "A Two-Step Space Time Series Modeling Method," Computer Mathematical Applications, 17, 1165–1176.
Bell, W. R. (1984), "Signal Extraction for Nonstationary Time Series," The Annals of Statistics, 12, 646–664.
Beveridge, S., and Nelson, C. R. (1981), "A New Approach to Decomposition of Economic Time Series Into Permanent and Transitory Components With Particular Attention to Measurement of the 'Business Cycle'," Journal of Monetary Economics, 7, 151–174.
Cochrane, J. (1991), "Univariate vs. Multivariate Forecasts of GNP Growth and Stock Returns: Evidence and Implications for the Persistence of Shocks, Detrending Methods, and Tests of the Permanent Income Hypothesis," Working Paper 3427, National Bureau of Economic Research, Cambridge, MA.
Engle, R. F., and Granger, C. W. J. (eds.) (1991), Long-Run Economic Relationships: Readings in Cointegration (Advanced Texts in Econometrics), Oxford, U.K.: Oxford University Press.
Engle, R. F., and Kozicki, S. (1990), "Testing for Common Feature," Discussion Paper 9023, University of California, San Diego, Dept. of Economics.
Geweke, J. (1977), "The Dynamic Factor Analysis of Economic Time Series Models," in Latent Variables in Socioeconomic Models, eds. D. Aigner and A. Goldberger, Amsterdam: North-Holland, pp. 365–383.
(1982), "Measurement of Linear Dependence and Feedback Between Multiple Time Series," Journal of the American Statistical Association, 378, 304–324.
Gonzalo, J. (1994), "Five Alternative Methods of Estimating Long-Run Equilibrium Relationships," Journal of Econometrics, 60, 203–233.
Gonzalo, J., and Granger, C. W. J. (1992), "Estimation of Common Long-Memory Components in Cointegrated Systems," Discussion Paper 4, Boston University, Dept. of Economics.
Granger, C. W. J. (1980), "Testing for Causality: A Personal Viewpoint," Journal of Economic Dynamics and Control, 2, 329–352.
(1986), "Developments in the Study of Cointegrated Economic Variables," Oxford Bulletin of Economics and Statistics, 48, 213–228.
Granger, C. W. J., and Lin, J. (1992), "Causality in the Long-Run," Discussion Paper 9215, Academica Sinica.
Johansen, S. (1988), "Statistical Analysis of Cointegrating Vectors," Journal of Economic Dynamics & Control, 12, 231–254.
(1989), "Likelihood Based Inference on Cointegration. Theory and Applications," unpublished lecture notes, University of Copenhagen, Institute of Mathematical Statistics.
(1991), "Estimation and Hypothesis Testing of Cointegration Vectors in Gaussian Vector Autoregressive Models," Econometrica, 59, 1551–1580.
Johansen, S., and Juselius, K. (1990), "Maximum Likelihood Estimation and Inference on Cointegration – With Applications to the Demand for Money," Oxford Bulletin of Economics and Statistics, 52, 169–210.

Kasa, K. (1992), "Common Stochastic Trends in International Stock Markets," Journal of Monetary Economics, 29, 95–124.
MacKinnon, J. G. (1991), "Critical Values for Cointegration Tests," in Long-Run Economic Relationships: Readings in Cointegration, eds. R. Engle and C. Granger, Oxford, U.K.: Oxford University Press, pp. 267–276.
Osterwald-Lenum, M. (1992), "A Note With Quantiles of the Asymptotic Distribution of the Maximum Likelihood Cointegration Rank Test Statistics," Oxford Bulletin of Economics and Statistics, 54, 461–472.
Peña, D., and Box, G. E. P. (1987), "Identifying a Simplifying Structure in Time Series," Journal of the American Statistical Association, 82, 836–843.
Phillips, P. C. B. (1991), "Optimal Inference in Cointegrated Systems," Econometrica, 59, 283–306.
Quah, D. (1992), "The Relative Importance of Permanent and Transitory Components: Identification and Some Theoretical Bounds," Econometrica, 60, 107–118.
Stock, J. H., and Watson, M. W. (1988), "Testing for Common Trends," Journal of the American Statistical Association, 83, 1097–1107.
Watson, M. W. (1986), "Univariate Detrending Methods With Stochastic Trends," Journal of Monetary Economics, 18, 1–27.

CHAPTER 13

Separation in Cointegrated Systems and Persistent-Transitory Decompositions*

Clive W. J. Granger and Niels Haldrup**

1. INTRODUCTION

It is a frequent empirical finding in macroeconomics that several cointegration relations may exist amongst economic variables but in the particular way that the single relations appear to have no variables in common. It is also sometimes found in such systems that the error correction terms or other stationary variables from one set of variables may have important explanatory power for variables in another set. For example Konishi et al. (1993) considered three types of variables of US data: real, financial and interest rate variables. They found that cointegration existed between variables in each subset but not across the variables, such that the different sectors did not share a common stochastic trend. On the other hand, it was also found that the error correction terms of the interest rate relation and the sector of financial aggregates had predictive power with respect to the real variables of the system. As argued by Konishi et al. (1993) the situation sketched above may extend the usual "partial equilibrium" cointegration set-up to a more "general equilibrium" setting, although in a limited sense. The notion of separation initially developed by Konishi and Granger (1992) and Konishi (1993) provides a useful way of describing formally the above possibility: Consider two groups of I(1) variables, X_1t and X_2t of dimension p_1 and p_2, respectively. X_1 and X_2 are assumed to have no variables in common, and in each sub-system there is cointegration with the cointegration ranks being r_1 < p_1 and r_2 < p_2. Hence it follows that the dimensions of the associated common stochastic trends of each system are p_1 − r_1 and p_2 − r_2. Denote the two sets of I(1) stochastic trends W_1t and W_2t. It follows from Stock and Watson (1988) that each sub-system can be given the representation:

* Oxford Bulletin of Economics and Statistics, 59, 1997, 449–464.
** The first author acknowledges support from NSF grant SBR 93-08295. The research was undertaken while the second author was visiting the UCSD during fall, 1995. We would like to thank Namwon Hyung, the Editor and an anonymous referee for helpful comments.

$$X_{1t} = G_1W_{1t} + \tilde X_{1t} \qquad (1)$$
$$X_{2t} = G_2W_{2t} + \tilde X_{2t}$$
where G_i, i = 1, 2, are p_i × (p_i − r_i) matrices and the X̃_it components are stationary I(0) relations. Separate cointegration across sub-systems means that the components W_1t and W_2t are not cointegrated, so that there is no long-run relationship between the X_1t and X_2t variables. As a consequence the stacked time series X_t = (X_1t′, X_2t′)′ will be of dimension p = p_1 + p_2 and have cointegration rank r = r_1 + r_2. The full system stochastic trend component will have the dimension p − r. Despite this separation of variables it can easily occur that a relationship exists between X_1t and X_2t in the short run. Essentially there are two ways this can happen: ΔX_2t (ΔX_1t) may appear in the transitory I(0) component X̃_1t (X̃_2t), and/or error correction terms from one system may enter the second. Absence of these two sorts of interactions will be referred to as separation of types A and B, respectively. Although it will be possible to distinguish between short-run and long-run separation of Type A, long-run separation appears to be the most interesting for the present purpose, as we shall demonstrate. Presence of both types of separation is denoted complete separation, and if only one of these is present we refer to partial separation. What will be of concern in this paper is also to consider a decomposition of the vector time series X_t in persistent-transitory (P-T) components for separated cointegration models in order to see how the single components interact across systems. Identification of the P-T components is generally non-unique since any I(1) process can be contaminated with an I(0) process and still have the I(1) property. As a result various additional requirements have been suggested in the literature to identify the components, and more recently Gonzalo and Granger (1995), using a factor model approach, suggest that the temporary component be defined in terms of the error correction relations such that it will have no explanatory power on the series in the long run. Moreover, the single factors can be measured in terms of the observed variables X_t. One of the findings of the present paper (which in some respects actually goes beyond the particular decomposition suggested by Gonzalo and

Granger), is that if the decomposition Xt = P(Xt) + T(Xt) is considered where P(Xt) and T(Xt) are the persistent (long-memory) and transitory

(short-memory) components, respectively, the persistent component P_1t associated with the X_1-system, for instance, can be expressed as P_1t = P_1(X_1t, X_2t) unless some sort of separation is present. Hence, in order to extract observable persistent components in a separated system, it is not generally sufficient to consider each sub-system in isolation, since all system variables may be needed to define the components. We also demonstrate that if one (wrongly) treats a partially separated system as completely separated, both the persistent and the transitory components of the other system may turn out to affect the persistent component of the sub-system considered. Only when the entire system is completely separated will it be sufficient to look at the sub-models to find the long-memory components and the associated common stochastic trends. This result is interesting because it suggests that cointegration analysis, and especially common stochastic trends analysis and P-T decompositions, may suffer from only looking at small partial models.1 In the interpretation of common stochastic trends the idea of "general equilibrium" cointegration is therefore relevant since persistent and transitory components may interact across systems. On the other hand, estimation is another (if not the prime) important issue in cointegration analysis, and it is certainly not costless to consider larger systems in a "general equilibrium" setting. For systems of increasing size practical problems are common with respect to estimation and difficulties easily arise in interpreting and identifying cointegration relations. Moreover, as emphasized by e.g. Abadir et al. (1996) significant finite sample inaccuracies may appear in cointegrated VAR models as the number of variables increases. Therefore, once estimation is brought into the picture, there is a size-precision trade-off that must be addressed as well. Hence there seem to be conflicting suggestions for empirical practice, and obviously the size of the system at hand should reflect the purpose of the analysis. Testing for complete separation may be useful in bringing together these diverging opinions. The plan of the paper is the following. In Section II we provide a formal definition of the various separation concepts and we briefly review some of the literature concerned with decomposition of a series into persistent and transitory components. The following section focuses on the decomposition in the context of separated cointegrated models. We demonstrate that if a partially separated system is treated as if it is complete, both the long- and short-memory components of the neglected system may potentially affect the persistent component of the system being analyzed. However, the problem can be avoided by considering the full system, but with the implication that the (true) long- and short-memory factors may depend upon all the model variables of each sub-system. In Section IV possible extensions to non-linear error correction models are considered and we demonstrate that fairly strong restrictions need to be imposed on the functional forms across systems in order to ensure stability. In the final section we conclude. We should emphasize that although this paper is strictly on representations, we will briefly provide discussions of the implications for estimation where appropriate.

1 Similar problems arise for instance in impulse response analysis where it is well known that the impulse responses are quite sensitive to the information set of the econometrician.

2. DEFINITION OF THE CONCEPTS

We shall here define formally the different concepts that will be used in the sequel.

2.1 Notions of Separation in Cointegrated Systems

The definition of separation provided below extends Konishi and Granger (1992) and Konishi (1993).

Definition 1: Consider the p-dimensional cointegrated vector time series X_t = (X_1t′, X_2t′)′ where X_1t and X_2t are of dimension p_1 and p_2 (p = p_1 + p_2) and have no variables in common. Then the associated error correction model reads

$$\Delta X_t = \underset{p\times r}{\gamma}\,\underset{r\times p}{\alpha'}X_{t-1} + \underset{p\times p}{\Gamma(L)}\Delta X_{t-1} + \epsilon_t \qquad (2)$$
where r is the cointegration rank and ε_t is i.i.d. with covariance matrix Ω. If the matrix of cointegration parameters can be factored as

$$\alpha' = \begin{pmatrix}\alpha_{11}' & 0\\ 0 & \alpha_{22}'\end{pmatrix} \qquad (3)$$
where α_ii is p_i × r_i, i = 1, 2, the system is said to have separate cointegration with cointegration ranks for each sub-system given by r_1 and r_2, respectively. Conformably with this partitioning, consider also the matrices

$$\gamma = \begin{pmatrix}\gamma_{11} & \gamma_{12}\\ \gamma_{21} & \gamma_{22}\end{pmatrix} \quad\text{and}\quad \Gamma(L) = \begin{pmatrix}\Gamma_{11}(L) & \Gamma_{12}(L)\\ \Gamma_{21}(L) & \Gamma_{22}(L)\end{pmatrix}. \qquad (4)$$
Given separate cointegration, we define type A separation (separation in dynamic adjustment in the long run) when Γ_12(1), Γ_21(1) = 0. Type B separation (separation in error correction) occurs when γ_12, γ_21 = 0. Partial separation is present when either type A or type B separation is present, and finally there is complete separation when both type A and type B separation are present.
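Definition 1 reduces to simple zero restrictions on blocks of γ and Γ(1), which a small helper can check mechanically; the sketch below is ours and purely illustrative.

```python
import numpy as np

def separation_type(gamma, Gamma1, p1, r1, tol=1e-8):
    """Classify separation in a two-block ECM with separate cointegration.

    gamma  : p x r adjustment matrix, blocks ordered (system 1, system 2)
    Gamma1 : Gamma(1), the p x p sum of the short-run lag matrices
    p1, r1 : variables and cointegrating relations in the first block
    """
    type_A = (np.all(np.abs(Gamma1[:p1, p1:]) < tol) and
              np.all(np.abs(Gamma1[p1:, :p1]) < tol))  # Gamma_12(1) = Gamma_21(1) = 0
    type_B = (np.all(np.abs(gamma[:p1, r1:]) < tol) and
              np.all(np.abs(gamma[p1:, :r1]) < tol))   # gamma_12 = gamma_21 = 0
    if type_A and type_B:
        return "complete separation"
    if type_A or type_B:
        return "partial separation (type {})".format("A" if type_A else "B")
    return "no separation"
```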

2.2 P-T Decomposition of a Vector Time Series It is frequently of interest to decompose a time series into components that may have different characteristics, for instance a Persistent- Transitory (P-T) decomposition may be relevant, see e.g. Beveridge and Nelson (1981) and Quah (1992). For a vector time series similar decom- positions may be considered, see e.g. Stock and Watson (1988), Kasa (1992), Mellander et al. (1992), Gonzalo and Granger (1995), and Proietti (1995). However, since identification of such factors is generally non-unique,2 additional identifying requirements are needed. Gonzalo and Granger have suggested that the persistent I(1) factors should (1) be observable, i.e. such that the persistent components be expressed in terms of the original variables Xt,and (2),the shocks to the transitory part should have no impact on the persistent components in the long-run.3 Essentially, this is why the two types of factors for this particular decomposition may be given the economic interpretation of long-memory and short-memory components.The second condition stated above says that if we let Xt = Pt + Tt be factorization of Xt into a persistent and a transitory component, then the components can be given the VAR representation

Ê HL11 () HL12 ()ˆ ÊDPt ˆ Êe pt ˆ = Ë ¯ Ë ¯ Ë ¯ (5) HL21 () HL22 () Tt eTt such that Tt does not cause DPt in the long-run if H12(1) = 0.

2 Recently Abadir et al. (1996) have suggested to use the Jordan decomposition of the first order companion matrix of the VAR as a vehicle to extract the common stochastic trends. 3 See e.g. Hosoya (1991) and Granger and Lin (1995) for a definition of causality at dif- ferent frequencies. Cointegrated Systems and Persistent-Transitory Decompositions 259

Observability of the factors can be achieved by considering the expression

XPXTXtt= ()+ () t (6) where P(Xt) = A1ft and T(Xt) = A2zt with ft = g ^¢ Xt and zt = a¢Xt and -1 -1 where A1 = a^(g ^¢) and A2 = g(a¢g) . The matrices a^ and g^ are orthog- onal complements of a and g, i.e. such that g ^¢g = 0 and a^¢a = 0. Through- out the symbol “^” will indicate the orthogonal complement of the associated matrix. Notice that the orthogonal matrices in the present case are both p ¥ (p - r) and that the factorization of the vector process exists since a¢g is invertible by definition of the cointegration rank r. The per- sistent (or long-memory) component is given by P(Xt) which can be seen to be expressed in terms of the (p - r) common stochastic trends ft, and similarly the temporary (or short-memory) component can be expressed by the r error correction terms in a particular way. Observe that P(Xt) and T(Xt) do not necessarily constitute an orthogonal factorization; this will only happen in special situations. The Gonzalo–Granger decompo- sition has similarities with other decompositions in the literature.

For instance the ft term is identical to the common stochastic trends of Stock and Watson (1988) which is a multivariate generalization of the Beveridge–Nelson decomposition of a univariate time series. In a recent paper Proietti (1995) compares the various representations in a common set-up and he demonstrates that the Gonzalo–Granger decomposition can be obtained from the Beveridge–Nelson decomposition by adding a particular distributed lag polynomial of the first differences of the series to the long-memory component. The reason why this can be done is, of course, that any stationary component can be added to the sto- chastic trend (or I(1)) component without altering the dominant I(1) characteristics.

3. PERSISTENT-TRANSITORY DECOMPOSITION IN SEPARATED COINTEGRATING SYSTEMS In this section we focus our attention on different types of separated models to see how the long- and short-memory components in their P- T factorizations will depend upon the particular type of separation.

3.1 Erroneously Treating Non- and Partially-separated Systems as Completely Separated In order to interpret the outcome of cointegration analysis it is fre- quently considered advantageous to consider systems of low dimension. Assume that the econometrician correctly considers a separated cointe- grated system, but wrongly assumes that separation is complete rather 260 C. W. J. Granger and N. Haldrup than partial. The difference is, naturally, that the feedback from other cointegrating relations through the error correction terms and/or the first differenced variables from the other system are ignored in the analysis. With no loss of generality we assume for simplicity that the model is recursive to make the subsequent arguments more intelligible. P-T decompositions are considered for both systems, i.e. X1t = P1t + T1t and X2t = P2t + T2t.

Proposition 2: Let Xt = (X1¢t ,X 2¢t) be generated according to (2)–(4) with the additional requirement that g21 = 0 and G21(L) = 0, such that X2t is recursively determined compared to X1t. Then, if the econometrician considers the Xt system in isolation, that is

DGDXXLXu11111111111tt=¢ga ,,-- +() itt+ (7) it follows that:

(1) T1t Æ/ DP1t (2a) Partial separation of type A: DP2t Æ/ DP1t. (2b) No partial separation of type A: DP2t ÆDP1t, unless G12(1) Œ space(g11). (3a) Partial separation of type B: T2t Æ/ DP1t. (3b) No partial separation of type B: T2t ÆDP1t, unless g12 Œ space(g11). The notation “Æ” and “Æ/ ” signifies the influence or non-influence, respectively, of one component on the other in the long-run.

Proof: The error term given in (7) captures what has been left out from the analysis, so (8) uXLX112222112211tt=¢ga,,-- +GD() tt+ e.

The X2 system reads (9) DGDXXLX222222122212tt=¢ga,,-- +() tt+ e. By treating (7) as an isolated system the common stochastic trends are given by premultiplication of the error correction model (7) by the p1 ¥ ^¢ ^¢ (p1 - r1) orthogonal complement of g11, i.e. g 11 where g 11g11 = 0.This yields x ^¢ ^¢ ^¢ fX1tt==gg 11 1 11G 11() LXu 1,, t- 1+ g 11Â 1 tj- . (10) j=0 In accordance with the Gonzalo–Granger decomposition we can define (with an obvious notation)

^¢ X1t = P1t + T1t = A11g 11X1t + A21a11¢ X1t = A11f1t + A21XZ1t (11)

^ ^¢ ^ -1 -1 where A11 = a 11(g 11a 11) , and A21 = g11(a11¢ g11) . Similarly we can define X2t = P2t + T2t. The difference of the system 1 permanent component is Cointegrated Systems and Persistent-Transitory Decompositions 261

^¢ now given by DP1t = A11g 11DX1t and by using (8)–(11) and the fact that g12Z2,t-1 = g12a 22¢ T2,t-1 it follows that

^¢ DGDDPA11111111111ttt= g () LP(),,--+ T ^¢ (12) +¢+ATLPT11gga 11{} 12 22 2,,,,tttt--- 1GD 12()() 2 1+ D 2 1+ e 1 . Now result (1) follows directly. With respect to the results (2a) and (2b) it is seen that partial separation of type A, G12(1) = 0, implies that DP2,t has no influence on DP1t in the long-run. On the other hand this result does not hold it separation of type A is absent. Observe, though, the ^¢ exception when g 11G12(1) = 0, that is, when G12(1) is in the column space of g11. The results (3a) and (3b) follow accordingly. Separation of type B means that g12 = 0 so in this case T2t has no influence on DP1t in the long- run while the reverse result applies if type B separation is absent. The ^¢ latter result is modified, however, if g 11g12 = 0, i.e. when g12 is in the column space of g11. First, notice that the simplifying assumption g21, G21(L) = 0 means that the variables X2t are strongly exogenous w.r.t. the long-run parameters of system 1. This has no implications for the qualitative results presented but it simplifies the algebra considerably and makes it more clear how the interaction across systems works. The result (1) is seen to be fully in accordance with the Gonzalo–Granger decomposition such that in the long-run the system 1 temporary component will have no impact on the persistent component of the same system. More interestingly, (2b) and

(3b) demonstrate how (apart from the cases where γ_12 and Γ_12(1) are in the space spanned by γ_11) the components of the second system affect the first. In fact, both components from system 2 will have an influence on the persistent component of system 1 by the absence of type A or type B separation. It demonstrates that by looking at small partial models and ignoring information from other systems, P-T decompositions will be produced which differ from the "true" factorizations that rely on a correct specification of the VAR model. Similarly, common stochastic trends analysis will be affected more generally (see also equation 10), which mirrors the influence of the information set in e.g. impulse response analysis.

We have here put the main emphasis on the ΔP_1t component. The way that the temporary component T_1t is affected by T_2t and ΔP_2t is straightforward due to its residual nature, and hence the properties mirror the above discussion. Previously we have noted that partial separation of type B is closely related to the notion of weak exogeneity. Observe, however, that following the discussion given in Section 2.1, treating a sub-system as completely separated when, in fact, this is not the case, full efficiency will be lost by analysing the sub-system in isolation. Both type A and type B separation is needed to obtain efficiency in a partial system, unless the excluded variables from the other system are taken into account in the sub-system analysis.

3.2 Partial Separation and P-T Decomposition of the Full System

The proper way to proceed, in order to avoid the caveat emphasized in the previous section, is to consider the two sub-systems jointly. Again we assume for simplicity that γ_21 = 0 and let Γ_21(L) = 0.

Proposition 3: Let X_t = (X_1t′, X_2t′)′ be generated according to (2)–(4) with the additional requirement that γ_21 = 0 and Γ_21 = 0, such that X_2t is recursively determined compared to X_1t. Then, if the econometrician considers the X_1 and X_2 systems jointly, persistent-temporary factorizations of the system can be characterized as follows:

$$X_{1t} = P_1(X_{1t}, X_{2t}) + T_1(X_{1t}, X_{2t})$$

$$X_{2t} = P_2(X_{2t}) + T_2(X_{2t}). \qquad (13)$$

It also follows that

(1) ΔP_2t → ΔP_1t, apart from the conditions given in (19) below.
(2) T_1t, T_2t ↛ ΔP_1t, as required.

Proof: Define the matrix
$$\gamma^{\perp\prime} = \begin{pmatrix}\gamma_{11}^{\perp\prime} & \gamma_{12}^{*\prime}\\ 0 & \gamma_{22}^{\perp\prime}\end{pmatrix} \qquad (14)$$

such that γ⊥′γ = 0, whereby $\gamma_{12}^{*}$ will satisfy $\gamma_{11}^{\perp\prime}\gamma_{12} + \gamma_{12}^{*\prime}\gamma_{22} = 0$. Notice that if γ_12 ∈ space(γ_11) this could imply that $\gamma_{12}^{*} = 0$ or, more generally, that $\gamma_{12}^{*}$ ∈ nullspace(γ_22). The common stochastic trends of the full system read

$$f_{1t} = \gamma_{11}^{\perp\prime}X_{1t} + \gamma_{12}^{*\prime}X_{2t}$$
$$f_{2t} = \gamma_{22}^{\perp\prime}X_{2t}. \qquad (15)$$

From the definition of X_1t given in (11) the decomposition (13) follows. The result for X_2t is trivially given. Consider now the interaction of persistent and temporary components across sub-systems. The long-memory components read

$$\Delta P_t = A_1\gamma^{\perp\prime}\Delta X_t \qquad (16)$$

where $A_1 = \alpha_\perp(\gamma^{\perp\prime}\alpha_\perp)^{-1}$. Define now the matrices $A_{11} = \alpha_{11}^{\perp}(\gamma_{11}^{\perp\prime}\alpha_{11}^{\perp})^{-1}$, $A_{12} = \alpha_{22}^{\perp}(\gamma_{22}^{\perp\prime}\alpha_{22}^{\perp})^{-1}$, and $A_{22} = \gamma_{22}(\alpha_{22}'\gamma_{22})^{-1}$, where it is noted that $I - A_{12}\gamma_{22}^{\perp\prime} = A_{22}\alpha_{22}'$. By straightforward matrix operations, using rules of partitioned inverse, it can be shown that

$$\Delta P_{1t} = A_{11}\gamma_{11}^{\perp\prime}\Delta X_{1t} + A_{11}\gamma_{12}^{*\prime}A_{22}\alpha_{22}'\Delta X_{2t}$$

$$\Delta P_{2t} = A_{12}\gamma_{22}^{\perp\prime}\Delta X_{2t}. \qquad (17)$$

By using the error correction model (2)–(4) for ΔX_1t and ΔX_2t in the present set-up, and using the fact that $\gamma_{11}^{\perp\prime}\gamma_{12} + \gamma_{12}^{*\prime}\gamma_{22} = 0$, it follows that the single components are related in the following way:

$$\Delta P_{1t} = A_{11}\gamma_{11}^{\perp\prime}\Gamma_{11}(L)(\Delta P_{1,t-1} + \Delta T_{1,t-1}) + \{A_{11}\gamma_{11}^{\perp\prime}\Gamma_{12}(L) + A_{11}\gamma_{12}^{*\prime}A_{22}\alpha_{22}'\Gamma_{22}(L)\}(\Delta P_{2,t-1} + \Delta T_{2,t-1}) + A_{11}\gamma_{11}^{\perp\prime}\epsilon_{1t} + A_{11}\gamma_{12}^{*\prime}A_{22}\alpha_{22}'\epsilon_{2t}$$
$$\Delta P_{2t} = A_{12}\gamma_{22}^{\perp\prime}\Gamma_{22}(L)(\Delta P_{2,t-1} + \Delta T_{2,t-1}) + A_{12}\gamma_{22}^{\perp\prime}\epsilon_{2t}. \qquad (18)$$

This proves the second part of the Proposition. Note the particular cases where ΔP_2t does not influence ΔP_1t, i.e. when
$$A_{11}\gamma_{11}^{\perp\prime}\Gamma_{12}(1) + A_{11}\gamma_{12}^{*\prime}A_{22}\alpha_{22}'\Gamma_{22}(1) = 0. \qquad (19)$$

A special case where this occurs is when both γ_12 and Γ_12(1) lie in the space spanned by γ_11, because then also $\gamma_{12}^{*}$ will lie in the nullspace spanned by γ_22. This case includes complete separation of the X_1- and X_2-systems. However, generally the condition in (19) is not satisfied. From the above Proposition it follows that by considering the two sub-systems jointly the common stochastic trends and the Gonzalo–Granger decomposition effectively separate the adjustment of error correction errors from the long-memory component as intended. However, it is interesting to observe that generally the variables of the full system will be needed in both the long- and the short-memory components of the

X_1-system. Note that P_1t and P_2t are not cointegrated. Since P_1t is I(1) plus I(0) in a particular way, it can also be seen that P_1t, which essentially is determined by f_1t given in (15), will have X_1t as the only factor if $\gamma_{12}^{*} = 0$. In particular this is the case if separation is of type B. In general, however,

X_2t will contribute to both the I(1) and the I(0) components. Concerning the second part of the Proposition, the autoregressive representation of the components as they are given in (18) demonstrates that in the long run T_1t and T_2t will not have any explanatory power with respect to the long-memory components in either system. This is fully consistent with their definition, of course. However, the long-memory component of the X_2-system will cause the corresponding component of the X_1-system, but without being cointegrated. An exception occurs for instance when γ_12 = 0 and Γ_12(1) = 0. Then the condition (19) is satisfied, so not surprisingly the P_1t and P_2t components do not interact in the long run due to complete separation of the sub-systems. The analysis of the past two sections demonstrates the importance of considering whether error correction terms and other short-run dynamics from other systems may have an impact on the system of interest when cointegration is separate. Although it is not going to affect the cointegration properties of the data, it clearly becomes of importance in extracting and interpreting the common stochastic trends and the long- and short-memory components of the multivariate system. In this sense it is of interest to consider the notion of cointegration in a general (rather than a partial) equilibrium framework. After all, it can be seen that examinations including common stochastic trends analysis should be done with care due to the dependence of such trends with respect to the information set.

4. EXTENSIONS TO NON-LINEAR ERROR CORRECTION MODELS

Cointegrated models with non-linear error correction mechanisms have recently attracted much attention in the literature, compare e.g. Granger and Swanson (1996), and Granger and Teräsvirta (1993) and the references therein. The types of non-linearity entering such systems need to be restricted, however, in order to ensure stability of the model. In this section we demonstrate how the restrictions required in one system may or may not restrict the other system when cointegration is separate. Non-linear error correction models may take many different forms. Consider, for example, a simple system with the non-linear error correction mechanism entering as follows:

$$\Delta X_t = \gamma\,\theta(\beta'Z_{t-1}) + \Gamma(L)\Delta X_{t-1} + \epsilon_t, \qquad (20)$$
where Z_t = α′X_t. As usual X_t is a p-vector time series and we let θ(β′Z_{t−1}) be an r × 1 vector of non-linear functions of the lagged error correction terms; notice that since β is r × 1, β′Z_t is assumed to be a scalar variable. Here we want to emphasize the non-linear property and assume for simplicity that Γ(L) = 0. Multiplying (20) by α′ we obtain

$$\Delta Z_t = \alpha'\gamma\,\theta(\beta'Z_{t-1}) + \alpha'\epsilon_t, \qquad (21)$$
which is a non-linear VAR(1) process. In defining

$$Z_t = h(\beta'Z_{t-1}) + \eta_t \qquad (22)$$
where
$$h(Z) = Z + \alpha'\gamma\,\theta(\beta'Z), \qquad (23)$$
the admissible class of functions ensuring stability should satisfy the necessary and sufficient stability conditions, see Tweedie (1975), Lasota and Mackey (1989), and Granger and Teräsvirta (1993),

hZ()£< aZ for Z c and a 1 (24) Cointegrated Systems and Persistent-Transitory Decompositions 265 and hZ() is finite for all finite Z . (25)

ΩΩ.ΩΩ can be any norm, not necessarily the Euclidean norm. It follows that for the case of one dimension, the functions satisfying stability must be dominated by a linear function with slope less than one. For instance, if q(Z) is one dimensional the function could be logistic in Z or log(Z).The stability condition above applies to the vector Z. If this is stable, so are the single components, but it is not generally possible to provide condi- tions on the stability of each element in q(Z). The restrictions above can be weakened in some cases meaning that only a subset of the functions in q(Z) need to be restricted, i.e. the func- tions for which the adjustments lie in the space spanned by a^ we need no restrictions to be imposed to ensure stability. Assume for simplicity that this space is empty such that each element of q(Z) should be con- sidered in derivation of the stability conditions. Despite non-linearity in the adjustment and error correction terms, the common stochastic trends ft, in the Stock–Watson and Gonzalo– -1 Granger sense, turn out to behave linearly since ft = g ^¢Xt = g ^¢D et in the present situation. In other words, the common stochastic trends will have no non-linear feature. Assume now that cointegration is separate, using the terminology of Section II, and that error correction is non-linear in the following way,

Ê DX it ˆ Êg 11 g 12 ˆÊ qba111111()¢¢X ,t- ˆ Ê e1t ˆ = + (26) Ë ¯ Ë ¯Ë ()X ¯ Ë ¯ DX 2 t 0 g 22 qba222221¢¢ ,t- e 2 t using an obvious notation. In case of complete separation, which in the present set-up means that g12 = 0, the common stochastic trends (with no non-linear feature) are easily calculated for each sub-system. This case is rather trivial. So is the situation where Xt = (X 1¢t , X 2¢t)¢ is treated jointly and separation is partial (g12 ≠ 0). In this case g ^¢ effectively kills both the non-linear error correction terms. Consider instead the case where system 1 is treated as completely separated although it is only partially separated. In this case the common stochastic trends of the X1-system read ^¢ ^¢ ^¢ DDfX1tt==gggqbage 11 1 11 12 2() 2¢¢ 22 X 2, tt- 1+ 11 1 . (27)

Hence, although the common stochastic trends of the X2-system are linear, the corresponding trends of the X1-system will generally have a non-linear feature.

What restrictions are needed on q1(.) and q2(.) in the partially sepa- rated system to ensure stability? We have that

DZZ11111111111122221111tt=¢agqb() ¢,,--+¢ agq() b ¢ Z tt+¢ ae (28) DZZ222222221tt=¢agqb() ¢ , - . 266 C. W. J. Granger and N. Haldrup

So, the stability requirements in this case are not affected: As long as the stability conditions of system 2 are satisfied, the stability conditions that are necessary for system 1 will be unaffected by system 2. Observe, however, that if we introduce g21 ≠ 0 such that a 22¢ g21q1(Z1,t-1) will appear in the expression for DZ2t in (28), the stability condistions for the single systems cannot be calculated in isolation.The systems have to be treated jointly in this case, i.e. by letting Zt = (Z1¢t ,Z¢2t)¢ and considering the system (21). The joint stability requirements of q1(.) and q2(.) are given by (24) and (25). It is clearly a restriction implied by the particular non-linear model considered above, that the functional forms of the error correction terms associated with the X2-system, and entering in the X1-system, must be the same as those arising in the X2-system with respect to the same error correction terms. Many other model constructions could be considered. For instance, the model

Ê DX1t ˆ Ê gq11 11() b 11¢ ZZ 1,,tt-- 1+¢ gq 12 12() b 12 2 1 ˆ Ê e1t ˆ = + (29) Ë ¯ Ë ()ZZ()¯ Ë ¯ DX 2 t gq21 21 b 21¢ 1,,tt-- 1+¢ gq 22 22 b 22 2 1 e 2 t could be analyzed.This class of model is probably more relevant in prac- tice, but its increased flexibility adds to the complexity of deriving common stochastic trends and P-T decompositions. No results are presently available for this type of non-linear error correction models, but it is certainly a class of dynamical models that will be of interest for future research.

5. CONCLUSION Separation in cointegrated systems is a useful notion which helps to reduce the complexity of large systems and eases their interpretation. Within a cointegrated VAR set-up, cf. Johansen (1988, 1991), both par- tially and completely separated models can be easily tested by con- sidering particular hypotheses on the cointegration vectors and the adjustment coefficients, and hence this should become an integral part of cointegration analysis (see Konishi and Granger, 1992). Moreover, looking at small models clearly has advantages with respect to the para- meter accuracy that can be obtained in finite samples as has been demon- strated by e.g. Abadir et al. (1996). However, although it increases the dimension of the model, the absence of error correction or short-run separation, i.e. where error cor- rection terms and stationary variables from other systems may enter the model, is an important possibility to consider as well, not only because it may improve the model for forecasting purposes, but also, as we have demonstrated, because the implied short-tun dynamics actually may add to our understanding of the stochastic trends driving the system as well Cointegrated Systems and Persistent-Transitory Decompositions 267 as the complex dynamical interaction that may exist across systems. It is therefore our suggestion for empirical practice that the applied econo- metrician is aware of such important links rather than just focusing on the long-run properties of the data in terms of cointegration. More generally it is our suggestion to consider VAR models with a size that reflects the purpose of the analysis. Too large models give rise to degrees of freedom problems with respect to estimation and inference and it complicates the interpretation of empirical results. On the other hand, common stochastic trends analysis, persistent-temporary decom- positions and impulse response analysis is rather sensitive to the infor- mation set of the econometrician, and looking at small models may thus have very misleading implications. Generalizations to non-linear models, and in particular, non-linear error correction models, are still in their infancy, but potentially a rich class of dynamical systems can be analyzed within this set-up. However, much more research needs to be done in order to obtain results that are useful for the practitioner.

REFERENCES Abadir, M., Hadri, K. and Tzavalis, E. (1996). “The influence of VAR dimensions on estimator biases”, Discussion paper, University of York. Beveridge, S. and Nelson, C. R. (1981). “A New Approach to the Decomposition of Economic Time Series into Permanent and Transitory Components with Particular Attention to the Measurement of the Business Cycle”, Journal of Monetary Economics, Vol. 7, pp. 151–74. Ericsson, N. R. (1992). “Cointegration, Exogeneity, and Policy Analysis: An Overview”, Journal of Policy Modeling, Vol. 14, pp. 251–80. Gonzalo, J. and Granger, C. W. J. (1995). “Estimation of Common Long-Memory Components in Cointegrated Systems”, Journal of Business and Economic Sta- tistics, Vol. 13, pp. 27–35. Granger, C. W. J. and Lin, J. (1995). “Causality in the Long-Run”, Econometric Theory, Vol. 11, pp. 530–36. Granger, C. W. J. and Swanson, N. (1996). “Further Developments in the Study of Cointegrated Variables”, BULLETIN, Vol. 58, pp. 537–53. Granger, C. W. J. and Teräsvirta, T. (1993). Modeling Nonlinear Economic Rela- tionships, Oxford University Press. Hosoya, Y. (1991). “The Decomposition and Measurement of the Interdepen- dence between Second-order Stationary Processes”, Probability Theory and Related Fields, Vol. 88, pp. 429–44. Johansen, S. (1988). “Statistical Analysis of Cointegration Vectors”, Journal of Economic Dynamics and Control, Vol. 12, pp. 231–54. Johansen, S. (1992).“Cointegration in Partial Systems and the Efficiency of Single Equation Analysis”, Journal of Econometrics, Vol. 52, pp. 389–402. 268 C. W. J. Granger and N. Haldrup

Johansen, S. (1991).“Estimation and Hypothesis Testing of Cointegration Vectors in Gaussian Vector Autoregressive Models’, Econometrica, Vol. 59, pp. 1551–80. Kasa, K. (1992). “Common Stochastic Trends in International Stock Markets”, Journal of Monetary Economics, Vol. 29, pp. 95–124. Konishi, T. (1993). Separation and Long-Run Non-causality in a Cointegrated System, PhD Dissertation, UCSD. Konishi, T. and Granger, C. W. J. (1992). “Separation in Cointegrated Systems”, Manuscript, Department of Economics, UCSD. Konishi, T., Ramey, V. A. and Granger, C. W. J. (1994), “Stochastic Trends and Short-Run Relationships between Financial Variables and Real Activity”. Manuscript, Department of Economics, UCSD. Lasota, A. and MacKey, M. C. (1989). “Stochastic Perturbation of Dynamical Systems: the Weak Convergence of Measures”, Journal of Mathematical Analy- sis and Applications, Vol. 138, pp. 232–48. Mellander, E.,Vredin,A. and Warne,A. (1992).“Stochastic Trends and Economic Fluctuations in a Small Open Economy”, Journal of Applied Econometrics, Vol. 7, pp. 369–94. Proietti,T. (1997),“Short Run Dynamics in Cointegrated Systems”, BULLETIN, Vol. 59, pp. 405–22. Quah, D. (1992). “The Relative Importance of Permanent and Transitory Components: Identification and Some Theoretical Bounds’, Econometrica, Vol. 60, pp. 107–18. Stock, J. H. and Watson, M. W. (1988). “Testing for Common Trends”, Journal of the American Statistical Association, Vol. 83, pp. 1097–107. Tweedie, R. L. (1975). “Sufficient Conditions for Ergodicity of Spectra”, in Grenander, U. (ed.), Probability and Statistics, New York, Wiley. CHAPTER 14

Nonlinear Transformations of Integrated Time Series* C. W. J. Granger and Jeff Hallman

Abstract

In this paper we consider the effects of nonlinear transformations on inte- grated processes and unit root tests performed on such series. A test that is invariant to monotone data transformations is proposed. It is shown that series are generally not cointegrated with nonlinear transformations of themselves, but the same transformation applied to a pair of cointe- grated series can result in cointegration between the transformed series.

Keywords: Nonlinear transformations; integrated processes; unit root tests; cointegrated series; monotone data transformations; autocorrela- tions; Dickey–Fuller statistics.

1. INTRODUCTION In this paper we are concerned with the effects of nonlinear transfor- mations on integrated, particularly I(1), processes. Three questions are considered.

(i) If xt is integrated and zt = f(xt), will zt appear to be integrated as well?

(ii) Are xt and zt cointegrated? (iii) If xt, yt are I(1) and cointegrated, will g(xt), g(yt) also be cointegrated? These questions arise naturally when considering regressions of the form

wtttt=+ a bx + cz()or y + residuals where wt is stationary. The terms xt, zt or yt can only occur on the right- hand side if they are either I(0) or cointegrated. For example, a

* Journal of Time Series Analysis, 12, 1991, 207–224. 270 C. W. J. Granger and J. Hallman

researcher may try to explain the unemployment rate in terms of rt and logrt, where rt is an interest rate. The outline of the paper is as follows. Following this introduction, in Section 2 we address question (i) by contrasting the effects of several nonlinear transformations on the empirical autocorrelations and Dickey–Fuller (DF) statistics of a random walk. The DF test appears to be much more sensitive to nonlinear transformation than is the empiri- cal autocorrelation function.A simple modification of the DF is proposed which works correctly for a large class of transformations. In Section 3 we consider questions (ii) and (iii); the answers are generally no and yes, respectively, although the DF test is again somewhat misleading. The topics considered are relevant because of the current interest in integrated, or ‘unit root’ series in econometrics and macroeconomics, and in nonlinear time series models. Properties of nonlinearly transformed series are examined in greater detail by Granger and Hallman (1988), while nonlinear theoretical relationships between integrated series are considered by Granger (1988) and Hallman (1989).

2. UNIT ROOT TESTS ON TRANSFORMED SERIES There is now a substantial literature on the topic of testing for unit roots in linear time series models. The result obtained by Phillips (1987) forms the basis for the distributional theory of the various tests. Phillips assumes that a series yt is generated by

yt = yt-1 + ut where y0 = 0 and ut is assumed to satisfy the following assumptions.

Assumption 2.1: (Phillips)

(a) Eut = 0 b (b) supt E|ut| <•for some b > 2.

2 Ï 1 2 ¸ (c) s = limEÌ ()Âut ˝ TÆ• ÓT ˛ exists and is greater than zero. • • 1-2/b (d) {ut}1 is strong mixing with coefficients am that satisfy S1 am <•. Given these assumptions, Phillips shows that Dyy ˆ Â tt-1 TT()a - 1 ∫ 2 Â yt-1 2 22 W()1 -ssu Æ 2 1 2Ú 0 Wrdr() Nonlinear Transformations of Integrated Time Series 271 and 2 assˆ - 111()()u W - ta ∫ 12 Æ 12 sy˜ 2 12 ()Â t-1 2()Ú 0 Wrdr() where W(r) is a standard Brownian motion and s˜ 2 is the usual estimate of the variance of the residuals from the regression. The statistic -ta is called the Dickey–Fuller (DF) test statistic and its distribution is known by the same name. If y0 π 0 it is subtracted from the other terms in the series. Models with more complicated serial correlation but still only a single unit root can be handled by including lags of Dyt in the regres- sion; this is the augmented Dickey–Fuller (ADF) test. For example, the ADF test using four lags is minus the t statistic of the coefficient a in the regression

4 ˆ ˆ DDyytt=+a --1 Â by iti. (2.1) i=1 Both the simple and augmented versions of the test have the same lim- iting distribution. The test given by (2.1) is designed to have power against the alterna- tive hypothesis that yt is generated by a stationary AR model with zero mean. A test for the more general alternative where the mean of the series may be nonzero is constructed by performing the same regression with the addition of a constant term, i.e.

4 ˆ ˆ DDycttiti=+a y--1 +Â by. (2.2) i=1 It is interesting to ask how the DF and ADF tests work with nonlin- early transformed series. Suppose that

xt = xt-1 + et, where et meets the requirements of Assumption 2.1, and let

yfxtt= (). A mean value expansion has

yt = yt-1 + ht with

hetttt=¢fx()-1 + r where rt lies in the interval [yt-1, yt]. There is no reason to expect ht to meet the requirements of Phillips’ assumption unless f(·) is affine. As examples, consider the following transformations, noting that the term 2 2 (Sut) in (c) is just y1. 2 2 yt = xt : here ht = et + 2xt-1et and this violates all four parts of Assump- tion 2.1. 272 C. W. J. Granger and J. Hallman

3 2 3 yt = xt : here ht = 3xt-1et + 3xt-1et + et also violates all four parts of Assumption 2.1. T 2 2 2 yt = sgn(xt): this violates (c) since (S1 ht) = yT = 1, so that s = 0. yt = sinxt: Granger and Hallman (1988) show this to be a stationary 1 4t AR(1) process with variance –2 + ca , implying that the limit in (c) is 12+ ca 4T 1 lim== lim 0 TÆ•2TTT Æ• 2

yt = exp(xt): in this case ht = {exp(et) - Eexp(et)}yt-1 has a variance exploding faster than t, thus violating (b) and (c). It is also clear from the expression for ht that it is not mixing. yt = 1/xt: to avoid problems associated with xt taking nonpositive 2 values, assume that x0 is large. Then yt will be bounded and limTÆ• (yt /T) will be zero, violating (c). (d) also fails as

223 hetttt=-yy--1 +() es - t1 . As a simple example of what can happen to the DF test when the series tested is a transformation of a random walk, let xt be the simplest type of random walk given by

xt = xt-1 + et (2.3) with x0 = 0 and where et (t = 1,2,...,T) is an independent identically 1 distributed (i.i.d.) series with prob(et = 1) = prob(et =-1) = –2 . xt meets the conditions of Assumption 2.1, and so the DF test when performed on it will have the DF distribution. Considering the transformed series yt = sgn(xt), it is seen that the change series Dyt is just

Ï 20if xxtt>< and -1 0 Ô Dyt = Ì-<20if xxtt and -1 > 0 Ô Ó 0 otherwise so that Dytyt-1 is -2 if xt crosses zero between time t - 1 and t, and is zero 2 otherwise. yt-1 = 1 for all t, of course, and so

T T 12 ˆ Ê 2 ˆ DF()yyyyttt∫-ÂÂD -1 sÁ t-1 ˜ t=1 Ë t=1 ¯

-¥2 no. of zero crossings of xt =- 12 . ()Tsˆ 2 2 Since sˆ is just the mean square error (MSE) of the regression of Dyt on yt-1, 14T ˆ 22 s £=¥Â Dyt no. of zero crossings of xt T t=1 T so that Nonlinear Transformations of Integrated Time Series 273

Table 14.1 Dickey–Fuller Empirical Distribution.

Transformation 1% 5% 10% 25% 50% 75% 90% 95% 99% x -0.68 0.06 0.48 1.05 1.59 2.15 2.62 2.90 3.54 x2 -2.61 -0.87 0.02 1.15 1.84 2.46 3.21 3.74 4.86 x3 -4.06 -1.58 -0.11 1.23 1.99 2.65 3.33 3.78 4.78 ΩxΩ -0.48 0.34 0.80 1.40 2.01 2.60 3.24 3.70 4.76 sgn(x) 1.45 2.16 2.67 3.58 4.58 6.05 8.37 11.31 14.25 sinx 5.75 6.17 6.34 6.70 7.07 7.46 7.82 8.00 8.50 exp(x) -11.6 2.99 4.04 5.05 6.03 7.22 8.68 10.13 36.06 ln(x + 75) -0.74 0.03 0.50 1.06 1.59 2.14 2.63 2.94 3.50 1/(x + 75) -0.96 0.01 0.49 1.07 1.59 2.14 2.66 2.97 3.56

Table 14.2 Augmented Dickey–Fuller Empirical Distribution.

Transformation 1% 5% 10% 25% 50% 75% 90% 95% 99% x -0.83 0.03 0.40 1.03 1.57 2.14 2.64 2.95 3.58 x2 -2.63 -1.20 -0.25 1.07 1.82 2.45 3.06 3.44 4.23 x3 -4.0 -2.03 -0.61 1.04 1.87 2.48 3.04 3.39 3.95 ΩxΩ -0.78 0.12 0.59 1.29 1.89 2.47 3.05 3.35 4.23 sgn(x) 0.53 1.24 1.56 2.08 2.82 4.01 6.08 6.90 10.92 sinx 3.71 4.08 4.27 4.63 5.06 5.49 5.89 6.15 6.70 exp(x) -10.5 -2.29 1.88 3.25 4.07 4.72 5.39 7.76 39.1 ln(x + 75) -0.85 -0.03 0.42 1.03 1.58 2.14 2.64 2.94 3.60 1/(x + 75) -1.02 -0.07 0.43 1.04 1.59 2.15 2.65 2.97 3.64

2 ¥ no. of crossings DF()yt ≥ 12 ()4¥ no. of crossings 12 = ()no. of zero crossings of xt . Feller (1968) shows that for the simple random walk given by (3.3) the number of returns to the origin divided by T1/2 is asymptotically distrib- uted as a truncated normal random variable. As the probability of a zero crossing is just half the probability of a return to the origin, it follows that twice the number of crossings divided by T1/2 has the same distri- 1/4 bution. The DF test statistic for yt is at least O(T ) and will become infinitely large as the sample size grows. Tables 14.1 and 14.2 show the empirical distributions of the DF and ADF tests on several transformations of a Gaussian random walk. These were found by creating 2000 random walks of length 200, making the indicated transformations and recording the values of the test statistics. Four lags of the dependent variable were used for the ADF statistic, and constants were included in both the ADF and DF regressions. 274 C. W. J. Granger and J. Hallman

Table 14.3 Autocorrelations.

Transformation Lag 1 234567 8 9 10 x 0.96 0.92 0.87 0.83 0.79 0.75 0.72 0.69 0.65 0.62 x2 0.93 0.87 0.80 0.74 0.68 0.63 0.58 0.54 0.49 0.45 x3 0.92 0.85 0.79 0.71 0.64 0.60 0.55 0.50 0.46 0.42 ΩxΩ 0.93 0.87 0.81 0.75 0.70 0.66 0.62 0.58 0.54 0.50 sgn(x) 0.94 0.81 0.76 0.73 0.68 0.65 0.62 0.60 0.58 0.54 sinx 0.59 0.33 0.20 0.12 0.09 0.06 0.01 -0.03 -0.06 -0.05 exp(x) 0.60 0.42 0.30 0.23 0.19 0.18 0.17 0.15 0.12 0.12 ln(x + 50) 0.96 0.91 0.87 0.83 0.79 0.75 0.72 0.68 0.65 0.61 1/(x + 50) 0.96 0.91 0.87 0.83 0.78 0.75 0.71 0.68 0.65 0.61

In these tests the null hypothesis is that the series is I(1) and the alter- native is that it is I(0). The first row of Table 14.1 shows that the test sta- tistic is less than 2.90 95% of the time when H0 is true. The results show that not only is H0 always (correctly) rejected for sinxt, but it is also usually rejected for the long-memory processes sgn(xt) and exp(xt). It would certainly be incorrect to accept the latter two series as being I(0). The other transformations are also rejected too often, except for the last two. This is misleading, however, because the effect of adding 75 to the random walk before transforming it is to reduce the curvature of the transformation greatly, making both ln(xt + 75) and 1/(xt + 75) nearly linear transformations of xt over this range. Adding a smaller constant than 75 would undoubtedly move the DF and ADF distributions to the right. Only those realizations in which xt crossed the zero axis between observations 5 and 195 were used in obtaining the statistics for the trans- formation sgn(xt). About 85% of the realizations in the simulation had at least one such crossing. In the Box–Jenkins modeling strategy, the shape of the correlogram is used to decide whether a series seems to be I(0) or I(1). If the autocorrelations decline slowly with lag length, an I(1) model is chosen. Table 14.3 presents the means across replications of the first ten auto- correlations obtained in an experiment similar to the one generating Tables 14.1 and 14.2. For most of the transformed series, the correlogram closely resembles that of a random walk, even for the bounded series sgn(x). The exceptions are the stationary series sinxt and the explosive series exp(xt). The DF and ADF unit root tests appear to be more sensitive than the autocorrelations to series transformations. Economists often transform their variables by taking logarithms, using Box–Cox transformations etc., before building models and making inferences. It can easily happen that a unit root exists in the original series, but the usual tests reject a unit root in the transformed series despite a high degree of autocorrelation Nonlinear Transformations of Integrated Time Series 275 in the latter. A test for unit roots that is invariant to a broad class of transformations would avoid this outcome. It is not possible to construct a test which is invariant to every possi- ble transformation of the series being tested. As a trivial example, con- sider the transformation T(xt) = 568.3. No test on the transformed series can possibly yield any information about the original series. More inter- estingly, Granger and Hallman (1988) show that the sine (or cosine) of a random walk yields a stationary AR(1). This suggests that any periodic transformation will result in a stationary series, since periodic transfor- mations can be arbitrarily well approximated by Fourier transforms. Given such a series with no long memory properties, it will not be pos- sible to detect that it resulted from transforming a random walk. Many nonparametric tests are based on notions of rank and ordering. Since the ordering of a series is unaffected by strictly monotone trans- formations, tests based on these notions have distributions that are unaf- fected by monotone transformations of the data. The ranks Rt of a time series xt are defined by

Rt = the rank of xt among x1, x2,...,xT. A simple test for unit roots in a (possibly) transformed series is to cal- culate the DF or ADF statistic of the ranks of the series rather than of the original series itself. Here the null hypothesis is that there exists a strictly monotone transformation of the time series being tested which has a unit root. The question immediately arises: what is the distribution of what will be called the rank Dickey–Fuller (RDF) statistic and its augmented cousin (RADF)? Unfortunately we have not obtained an analytical answer to this question. Phillips’ distributional results can be extended via the continuous mapping theorem to find the distribution of a test for a given transformation, but this will not solve the problem since the rank transformation is different for every sample. Rank statistics are usually applied in situations where it is known that the normalized sample ranks R(xi)/N converge to the population distribution function F(xi). For the null hypothesis here, there is no well-defined distribution function for the ranks to converge to, as xt is a nonstationary random walk. Despite the fact that ‘nice’ analytic representations of their distribu- tion functions are not available, the RDF and RADF statistics are random variables which are easily computed for any given time series. The approach taken here is to investigate their usefulness as tests in some specific cases by means of computer simulation. Figure 14.1 shows esti- mated densities for the RDF statistic (with constant) for sample sizes 25, 50, 100, 200, 400 and 800. The plots were constructed by generating 5000 independent random walks of the indicated sample sizes, calculating and recording the RDF test statistics and finally estimating the density with 276 C. W. J. Granger and J. Hallman

RDF Densities

0.6 n = 25 n = 50 n = 100 n = 200 n = 400 0.4 n = 800

0.2

0.0

01234 Figure 14.1. Rank Dickey–Fuller densities. a kernel estimator. As the Figure indicates, the density does not change much as the sample size changes, except for the smallest sample size of 25. Figure 14.2 shows the corresponding densities for the augmented version of the test, RADF. Here there is marked variation in the density as the sample size varies, but this is also true of the ADF test as seen in Figure 14.3. The fact that an elegant asymptotic theory is available for the ADF but not for the RADF does not seem to make much difference to their small-sample behavior. Tables 14.4 and 14.5 give percentiles of the RDF and RADF tests under the null hypothesis that xt is a mono- tone transformation of a pure random walk. Figures 14.4 and 14.5 compare the power of the RDF test with that of the DF. The upper left panel of Figure 14.4, for example, shows the fraction of rejections of the hypothesis H0 :r = 0 in the model

Dxt =-rxt-1 + et for several values of r when the sample size is 50. The four lines on the plot show the rejection percentages for the DF and RDF at the 5% and 10% significance levels. Figure 14.4 compares the tests when there is no constant allowed in the regression, while Figure 14.5 compares the tests with a constant included. Similar power comparisons between the ADF and RADF statistics are a topic for further research. As the DF test without a constant is equivalent to a likelihood ratio test, it is not surprising that it is more powerful than its rank counter- Nonlinear Transformations of Integrated Time Series 277

RADF Densities

0.6 n = 25 n = 50 n = 100 n = 200 0.4 n = 400 n = 800

0.2

0.0

01234 Figure 14.2. Rank augmented Dickey–Fuller densities.

ADF Densities

0.5

0.4 n = 25 n = 50 n = 100 n = 200 0.3 n = 400 n = 800

0.2

0.1

0.0

–1 01234 Figure 14.3. Augmented Dickey–Fuller densities. 278 C. W. J. Granger and J. Hallman

Table 14.4 Rank Dickey–Fuller Percentiles.

Without constant Constant included Sample size 10% 5% 1% 10% 5% 1%

25 1.70 2.03 2.71 2.63 2.98 3.70 50 1.77 2.13 2.79 2.63 2.93 3.49 100 1.82 2.14 2.76 2.68 2.95 3.60 200 1.87 2.18 2.80 2.71 3.00 3.53 400 1.88 2.18 2.82 2.75 3.01 3.57 800 1.97 2.28 2.83 2.78 3.06 3.59

Table 14.5 Rank Augmented Dickey–Fuller Percentiles.

Without constant Constant included Sample size 10% 5% 1% 10% 5% 1%

25 1.67 2.05 2.87 2.39 2.72 3.48 50 1.57 1.91 2.56 2.37 2.66 3.25 100 1.61 1.92 2.52 2.41 2.68 3.24 200 1.66 1.95 2.57 2.48 2.75 3.27 400 1.70 2.04 2.61 2.55 2.82 3.42 800 1.79 2.08 2.73 2.65 2.92 3.51

part.What is surprising is that the RDF is apparently more powerful than the DF when constants are allowed into the regression.A possible expla- nation is as follows. When there are no lags of Dyt involved, regression (2.2) is equivalent to (2.1) using the mean-corrected y˜t = yt - y. When this is done with the original data, the mean is a parameter that has to be estimated, using up a degree of freedom. For ranks, however, the mean is almost completely determined by the number of observations – it is just NN()+ 1 rank() y - N . 2 N Since it is nearly deterministic, having to estimate it has little effect on the power of the test. Finally, Tables 14.6 and 14.7 show the empirical distributions of the RDF and RADF tests from a simulation in which the statistics were com- puted for 200 observations of the indicated transformations of a pure random walk. 500 trials were performed to obtain the percentiles shown. Rank statistics are invariant to monotone transformations, and so the Nonlinear Transformations of Integrated Time Series 279

Power of DF and RDF Tests without Intercept

100

80 80 % rejected % rejected

60 60

40 40

20 20

0.85 0.85 0.90 0.95 1.00 0.85 0.85 0.90 0.95 1.00 n = 50 n = 100

100 100

80 80 % rejected % rejected

60 60

40 40

RDF 90% 20 20 RDF 95% DF 90% DF 95%

0.85 0.85 0.90 0.95 1.00 0.90 0.92 0.94 0.960.98 1.00 n = 200 n = 400 Figure 14.4. Power of Dickey–Fuller and rank Dickey-Fuller tests without intercept.

computed statistics for x, x3, exp(x), ln(x + 75), and 1/(x + 75) are all identical. Since |x| = (x2)1/2, their statistics are also identical. For the strictly monotone transformations in the tables, RDF and RADF have the correct size by construction. For the other transforma- tions, a comparison of Tables 14.6 and 14.7 with Tables 14.1 and 14.2 indi- cates that the RDF and RADF distributions appear considerably more robust than the DF and ADF distributions. Only for the sinx transfor- 280 C. W. J. Granger and J. Hallman

Figure 14.5. Power of Dickey–Fuller and rank Dickey–Fuller tests with intercept. mation do the RDF and RADF tests consistently reject the null hypoth- esis, but this is the correct thing to do as sinxt is a stationary AR(1). A reasonable strategy for unit root testing is to compute both the con- ventional and the rank versions of the DF and ADF tests, since it is rarely known with certainty that the underlying data-generating process (DGP) is linear. If it is, both kinds of tests have the correct size and similar power. Otherwise the rank versions of the tests are more applicable. If Nonlinear Transformations of Integrated Time Series 281

Table 14.6 Rank Dickey–Fuller (With Constant) Empirical Distribution.

Transformation 1% 5% 10% 25% 50% 75% 90% 95% 99% x 0.49 0.91 1.16 1.48 1.90 2.34 2.81 3.06 3.70 x2 0.97 1.36 1.57 1.95 2.43 3.11 3.77 4.22 5.09 x3 0.49 0.91 1.16 1.48 1.90 2.34 2.81 3.06 3.70 ΩxΩ 0.97 1.36 1.57 1.95 2.43 3.11 3.77 4.22 5.09 sgn(x) -0.99 -0.09 0.21 1.57 2.84 4.11 5.15 5.53 6.53 sinx 6.14 6.45 6.58 6.92 7.25 7.63 7.96 8.14 8.51 exp(x) 0.49 0.91 1.16 1.48 1.90 2.34 2.81 3.06 3.70 ln(x + 75) 0.49 0.91 1.16 1.48 1.90 2.34 2.81 3.06 3.70 1/(x + 75) 0.49 0.91 1.16 1.48 1.90 2.34 2.81 3.06 3.70

Table 14.7 Rank Augmented Dickey–Fuller (With Constant) Empirical Distribution.

Transformation 1% 5% 10% 25% 50% 75% 90% 95% 99% x 0.22 0.71 0.91 1.28 1.74 2.14 2.59 2.89 3.30 x2 0.67 0.98 1.21 1.59 2.09 2.61 3.17 3.49 4.49 x3 0.22 0.71 0.91 1.28 1.74 2.14 2.59 2.89 3.30 ΩxΩ 0.67 0.98 1.21 1.59 2.09 2.61 3.17 3.49 4.49 sgn(x) -1.24 -0.42 0.01 0.53 1.59 2.33 2.92 3.53 4.16 sinx 3.71 4.09 4.27 4.67 5.06 5.54 5.93 6.22 6.63 exp(x) 0.22 0.71 0.91 1.28 1.74 2.14 2.59 2.89 3.30 ln(x + 75) 0.22 0.71 0.91 1.28 1.74 2.14 2.59 2.89 3.30 1/(x + 75) 0.22 0.71 0.91 1.28 1.74 2.14 2.59 2.89 3.30

the ADF test rejects its null while the RADF does not, for example, we might look at a plot of the rank transformation to see if it is suggestive of a parametric transformation yielding a series that could reasonably be modeled as a linear I(1) process. The fact that all the transformed series in Tables 14.1 and 14.2 have DF and ADF distributions shifted to the right indicates that the case where RADF rejects and ADF does not is unlikely unless the process really is linear. In this case we might find the ADF test more believable on the grounds that its asymptotic distribu- tion has been worked out.

3. COINTEGRATED VARIABLES Two questions will be considered.

(i) If xt is I(1), can xt and g(xt) be cointegrated for some func- tion g(·)? 282 C. W. J. Granger and J. Hallman

(ii) If xt, yt are I(1) and cointegrated, will g(xt), g(yt) also be cointegrated?

It will be assumed that xt is a pure Gaussian random walk, possibly with drift, generated by

xt = m + xt-1 + et

2 2 et ~ i.i.d. N(0, s ), so that xt ~ N(mt, s t). For the second question, yt will be assumed to be given by

yt = axt + et where e is i.i.d. Gaussian, mean zero and independent of et. Denote E{g(xt)} ∫ mt. In general, this will be a function of time. For 2 2 2 2 example, if g(x) = x , then mt = m t + s t which is a function of time even if xt has no drift. If g(xt), xt are cointegrated with a constant cointegrating parameter a, then

gx()tt-=ma x tt + a where at is I(0). aˆ t will be uncorrelated with xt if a is estimated by ordi- nary least squares (OLS). Is there a constant a such that g(xt) - mt - axt is I(0)? A simple form of Stein’s lemma says that if x is Gaussian then cov{}gxx() =¢ Eg{}() x var() x and so the OLS estimate of a tends asymptotically to E{g¢(xt)}.There are essentially three cases.

(i) limtÆ• E{g¢(xt)} = c, a constant, in which case cointegration will occur.

(ii) limtÆ• E{g¢(xt)} = 0 and there is no cointegration. (iii) limtÆ• E{g¢(xt)} = Gt, a function of time. In this case there is no constant-parameter cointegration. There may or may not be time-varying parameter cointegration, but this will not be con- sidered in this paper.

k It is easily seen that if g(x) = ax for some integer k, then xt and g(xt) can only be (constant-parameter) cointegrated if k = 1. Similarly, if g(x) = exp(lx), there cannot be cointegration. An example where apparent cointegration might seem possible is when g(x) = ln(a + x), where a is large and positive throughout the sample period and it is assumed that xt has no drift, so that m = 0. In this case, 1 gx¢()= ax+ Ê x x2 ˆ ª-+a--1 Á1 ˜ + O()a 4 Ë a a2 ¯ Nonlinear Transformations of Integrated Time Series 283

Table 14.8 Percentiles of Dickey–Fuller Cointegration Test.

Transformation 55% 60% 65% 70% 75% 80% 85% 90% 95% x2 2.81 2.95 3.13 3.33 3.52 3.77 4.04 4.34 4.75 x3 3.16 3.30 3.45 3.62 3.82 4.07 4.27 4.67 5.24 sinx 1.87 1.96 2.14 2.23 2.38 2.56 2.67 2.88 3.12 exp(x) 2.96 3.10 3.21 3.30 3.44 3.57 3.76 4.06 4.49 ln(x + 75) 3.13 3.26 3.42 3.58 3.74 4.01 4.27 4.53 5.06 1/(x + 75) 3.15 3.26 3.43 3.59 3.73 3.99 4.29 4.54 5.06

Table 14.9 Percentiles of Augmented Dickey–Fuller Cointegration Test.

Transformation 55% 60% 65% 70% 75% 80% 85% 90% 95% x2 2.69 2.84 2.97 3.10 3.36 3.58 3.75 4.19 4.87 x3 2.83 2.96 3.08 3.23 3.42 3.62 3.82 4.07 4.33 sinx 1.69 1.80 1.94 2.05 2.16 2.28 2.47 2.65 3.06 exp(x) 2.00 2.10 2.18 2.30 2.39 2.52 2.74 2.88 3.30 ln(x + 75) 2.98 3.09 3.26 3.39 3.54 3.74 3.95 4.25 4.59 1/(x + 75) 2.96 3.11 3.23 3.40 3.54 3.75 3.92 4.25 4.58

so that 1 s 2t Eg{}¢() x =+ +O()a-4 a a3 Provided that s2 times the number of observations included in a sample is small compared with a3, E{g¢(x)} will approximate the (small) constant 1/a and apparent constant-parameter cointegration may occur. In Tables 14.8 and 14.9 the results of tests for cointegration (DF and ADF) between x and g(x) are given for several functions g(·). Selected percentiles of the empirical distribution of the DF and ADF tests per- formed on the residuals of a regression of xt on the indicated function g(xt), where xt is a pure random walk of 200 observations, are shown. The tables are based on a simulation experiment with 500 trials for each function. The values in the table can be compared with the 5% and 10% criti- cal values of the DF(3.37, 3.02) and ADF(3.25, 2.98) tests for cointegra- tion from Engle and Yoo (1987). It is seen that, except for the sine function, the cointegration tests can be somewhat misleading. For all the other transformations, the tests find cointegration a third or more of the time when it should not theoretically be there. It should be noted that the critical values for these tests were found using independent series xt, yt. Certainly xt and g(xt) are not independent of each other. 284 C. W. J. Granger and J. Hallman

Turning to the second question, a mean value expansion shows that

gy()ttt=+ g()ae x

ª gx()aeatt+¢ g() x tt + r where rt is some remainder term. As seen in Section 4, the second term will generally appear to be I(0) in mean with some heteroskedasticity, particularly if xt, et are independent. If it is assumed that this is correct, g(yt) - g(axt) is I(0). It follows that g(xt), g(yt) are cointegrated if either (i) a = 1 or (ii) g(x) is homogeneous, so that g(ax) = alg(x), in which case the cointegrating parameter is al. It should be pointed out that these results are only approximate. The answer to the two questions posed at the beginning of the section are generally no and yes respectively.The second case requires g(·) to be homogeneous or the series to be scaled so that the cointegrating coefficient is 1. Granger and Hallman (1988) give an example where xt, 2 2 yt are not cointegrated but xt , yt are.

4. CONCLUSIONS Nonlinear transformations of integrated series generally retain the long memory properties of traditional I(1) series, such as slowly declining autocorrelations. However, the DF and ADF unit root tests performed on such transformed series will often reject the null hypothesis that the series was generated by a linear process with a unit root. Since an inves- tigator is rarely certain that the generating process for his data is in fact linear, a unit root test that is invariant to monotone data transformations is desirable. The test proposed here is to perform the DF or ADF test on the ranks of the series, rather than on the series itself. The power functions of the rank tests are very close to the power functions of the conventional tests, but the rank versions have the desired invariance property by construction. In theory, a nonlinearly transformed series generally cannot be coin- tegrated with the original series. This emphasizes the importance of having the correct functional form when investigating a hypothesized long-run relationship yt = f(xt). If the actual cointegrating relationship is yt = g(xt), then yt and f(xt) will be cointegrated only if g is an affine trans- formation of f. Hallman (1989) addresses this issue. Testing for cointe- gration by performing unit root tests on the residuals from a regression of xt on f(xt) can be misleading, often finding cointegration when it theoretically cannot be there.

Finally, if xt, yt are cointegrated series, then g(xt), g(yt) can also be coin- tegrated if either (i) g(·) is homogeneous or (ii) the data are scaled so that the cointegrating coefficient for xt, yt is 1. Nonlinear Transformations of Integrated Time Series 285

ACKNOWLEDGEMENTS This paper was prepared under National Science Foundation grant SES 8902950.

REFERENCES Engle, R. F.and Yoo, B. S. (1987) Forecasting and testing in cointegrated systems. J. Economet. 35, 143–59. Feeler, W. (1968) An Introduction to Probability Theory and Its Application, Vol. 1. New York: Wiley. Granger, C. W. J. (1988) Introduction to processes having equilibria as simple attractors: the Markov case. Discussion Paper, University of California, San Diego. —and Hallman, J. J. (1988) The algebra of I(1). Finance and Economics Dis- cussion Series 45, Board of Governors of the Federal Reserve System. Hallman, J. J. (1989) Cointegration with transformed variables. Finance and Economics Discussion Series, Board of Governors of the Federal Reserve System. In preparation. Phillips, P.C. B. (1987) Time series regression with a unit root. Econometrica 55, 277–301. CHAPTER 15

Long Memory Series with Attractors Clive W. J. Granger and Jeff Hallman

1. INTRODUCTION The results presented in this paper can be motivated by considering the prices of some agricultural product, say tomatoes, in two parts of a country, denoted PNt, PSt for the prices in the north and south. At a time t, values of these prices will be a point in the plane with axes PN, PS.In this plane the line PN = PS may be considered to be an attractor because, if two prices are quite different, and thus off this line, there will be market pressure to bring the prices together. If PN is much larger than PS it will be a profitable enterprise to buy tomatoes in the south, transport them to the north and sell them there. This activity will raise demand and thus prices in the south, and raise supply, and thus lower prices, in the north. As the prices becomes near each other, the profitability of this activity will decline and so the strength of the attraction becomes small. This example illustrates a type of behavior that might be expected to occur frequently in economics. One may have a pair of economic series xt, yt each of which varies over a wide range but plots of xt against yt suggest that the economy has a preference for these points to lie in or near some region which could be called the attractor. This preference may occur through a market mechanism or by the action of government policy, say, when the market is fairly efficient, so that there are no trade barriers for instance, and when the government policy is effective. It might also be assumed that because of sticky prices, long-run contracts or delays in policy implementation a point off the attractor is not brought directly back on to it. The economy is taken to be stochastic, being influenced by frequent unforecastable shocks, and the attractor is not capturing so that if (xt, yt) is on the attractor the economy is liable to be taken off it by a shock or innovation. The object of the paper is to char- acterize attractors, to study the properties of series having attractors and then to consider the empirical aspects of these concepts.

* Oxford Bulletin of Economics and Statistics, 53, 1991, 11–26. Long Memory Series with Attractors 287

The proposal can be considered to be a nonlinear generalization of the concept known as cointegration which is discussed in Granger (1986) and in the book of readings, Engle and Granger (1990), and which has been widely used in macroeconomics and in finance. Although a variety of generalizations are available, the concept of cointegration is easily explained using characterizations of time series as being either I(0) or I(1). An I(0) series can be taken as being just a stationary, trend free series whereas an I(1) series is such that its difference is I(0). These two types of series have quite different appearances and properties, some of which are discussed in the next section. In particular, under reasonable assumptions, the variance of an I(0) series is bounded whereas the unconditional variance of an I(1) series increases without bound as t increases. A pair of series xt, yt are said to cointegrate if they are each I(1) but there exists a linear combination zt = xt - Ayt which is I(0). In this case it is shown later that the line x = Ay may be thought of as an attractor. A generating mechanism that produces cointegrated series is

xt = AW t + x˜ t

yt = Wt + y˜ t (1.1) where x˜ t, y˜ t is a bivariate system of I(0) series and Wt is I(1). As this system has three components it will be called a “three-factor” system. A simpler, two factor generating mechanism is

xt = AW t + azt

yt = Wt + bzt where a - Ab = 1. If xt, yt are generated by a two factor mechanism, then one can solve exactly for the factors whereas this is not true in the three factor case. Any pair of cointegrated series must have a representation such as (1.1), so that the cointegration property is produced by the single

I(1) factor Wt. Another generating mechanism that must occur is known as the error-correcting (EC) model, of the form

Dxt = r1zt-1 + lags of Dxt, Dyt + residual

Dyt = r2zt-1 + lags of Dxt, Dyt + residual (1.2) where at least one of r1, r2 is non-zero, zt = xt - Ayt and the residuals are white noises and hence I(0). If x = Ay is considered to be an equilibrium, the equation (1.2) may be thought of as the disequilibrium mechanism that produces this equilibrium. If xt, yt are generated by (1.2) they will be cointegrated and if they are cointegrated then they must have an EC representation. If economic theory suggests linear equilibrium relationships between series, the cointegration idea is sufficient for exploration of this theory. 288 C. W. J. Granger and J. Hallman

However, if the theory suggests a nonlinear equilibrium the cointegra- tion ideas have to be generalized. In particular the characterization of series being I(0) or I(1) is too linear and has to be replaced by a more general method of characterization, and this is attempted in the next section. Section 3 introduces the nonlinear generalization of coin- tegration and discusses some properties of processes having attractors. The exposition is mostly descriptive, is not necessarily completely rigorous and considers only the bivariate case. Generalizations to more variables is straightforward in concept but clearly more complex mathematically.

2. SHORT AND LONG MEMORY

Consider the conditional probability density function of xt+h given the information set It:xt-j, Qt-j, j ≥ 0 where Qt is a vector of other explanatory variables. The series xt will be said to be short memory in distribution (SMD) with respect to It if

Prob()xAIBth++ in t in - Prob() xA th in Æ0 (2.1) as h ≠•for all appropriate sets A, B such that Prob(I, in B) > 0. The definition is clearly closely related to uniform mixing. If (2.1) does not hold xt can be called long memory in distribution (LMD). More specific are definitions of memory in mean. Defining the conditional mean

Ex()th+ I t= f th, so that ft,h is the optimum least squares forecast of xt+h using It, then xt is said to be short memory in mean (SMM) if

lim fFth, = hƕ where F is a random variable with distribution D and if D does not depend on It. The case of particular interest here is where D is singular, so that F just takes a single value, m, which is the unconditional mean of xt, assumed to be finite. Other cases include limit cycles and process with strange (possibly fractionally dimensional) attractors. Although interest- ing these cases are less easily associated with the simple concepts of equi- librium considered in this paper. If ft,h depends on It for all h, xt is said to be long memory in mean (LMM).

It is clear that if xt is SMD then it is SMM and also any function of xt is also SMM, provided the unconditional mean of the function exists. If xt is LMM then it must be LMD but not necessarily vice versa. However, in general if xt is LMD then many functions of g(xt) will be LMM, pro- vided the mean exists, as shown in Granger and Thompson (1987). An example of a series that is SMM but LMD is if xt = etyt, where yt is LMM and independent of et which is I(0), such as a white noise, as shown in Long Memory Series with Attractors 289

Granger and Hallman (1989, 1990). In the same papers it is proposed that if xt is LMM then any monotonic nondecreasing function of xt is also LMM and this hypothesis is found to be correct when xt is a Gaussian random walk and for a variety of actual functions. However it is also found there that if xt is a Gaussian random walk then sin xt is SMM, in particular it has the linear properties of a stationary AR(1) process. It is thus suggested that if xt is LMM then sin xt and cos xt will often be SMM. It will be assumed below that this proposition is correct.

A single series xt will be said to have the point attractor m if xt is short memory in mean, so that

x lim fmth, = h as h ≠ and for all t and also provided that

x var()xfth+ - th, £ finite constant as h ≠, so that the asymptotic forecast error is bounded. This definition may be considered to be a special case of processes with strange attractors, where xt is generated by a deterministic mecha- nism but with a very small stochastic added noise, perhaps computer and round-off error, in which case ft,h Æ F, where F lies on an attractor of reduced, and sometimes fractional dimension, which does not depend on the initial values xt-j, j ≥ 0.

3. BIVARIATE ATTRACTOR

The definition that is proposed for an attractor for a pair of series xt, yt is based on Figure 15.1. In the (x, y) plane suppose there is region A, illustrated as a curve in the figure, (xt, yt) is the point taken by the bivari- A A ate process at time t and (xt , yt ) is the point on A nearest to (xt, yt), using a Euclidean measure of distance. Denote y = at + btx the tangent to A at A A the point (xt , yt ), where this tangent is assumed to be defined and unique, for convenience. Clearly at, bt will be functions of (xt, yt) and of A, by construction, except when A is a straight line. Define

zt = yt - at - btxt

A A so that zt is the signed distance from (xt, yt) to (xt , yt ). The bivariate process (xt, yt) may be said to have A as an attractor if zt is short memory in mean with m = 0 and has bounded variance. A stronger condition is that zt is SMD with mean zero and finite variance but this is a difficult hypothesis to test. It may be noted from the definition of zt that it may be difficult to distinguish between a nonlinear attraction and a time- varying (linear) cointegration.

If xt, yt are each individually SMD then any function of these series including zt, will also be SMD.The only interesting case is where xt, yt are 290 C. W. J. Granger and J. Hallman

Figure 15.1.

long memory in mean but a particular function of them, zt = f(xt, yt), is short memory in mean and the attractor is then A: z = 0, i.e. (x, y) such that f(x, y) = 0. Clearly, not all pairs of LMD series will possess such an attractor, as defined here.

A sufficient condition for zt to be SMM is that some other distance from (xt, yt) to A is SMM. The form of function studied below is

qgxhytt= ()- () t.

From this definition qt must be at least as big in magnitude as zt, so that if qt is SMM, so will be zt. A method of generating LMM processes having an attractor is as follows. Suppose that the curve f(x, y) = 0 can be written

gx()= hy() (3.1) and define G(x) = g-1(x), H(y) = h-1(y) assuming these inverse functions exist. Let wt be a Gaussian random walk so that

wt = wt-1 + et (3.2)

A where et is zero mean, Gaussian, constant variance white noise. Let xt = A G(wt), yt = H(wt). If G, H are monotonic nondecreasing, then from the Long Memory Series with Attractors 291

A A results stated earlier, xt , yt will be LMM and will lie on the attractor. A A A A The tangent to the attractor at xt , yt has slope H¢[g(xt )]g¢(xt ) using the notation introduced above and where H¢(x) ∫ dH/dx corresponding to -1 A qt = tan [slope]. As xt is a function of wt, one can just write qt ∫ q(wt). xt, yt now can be generated by

A xt = xt - zt sin qt (3.3) A yt = yt + zt cos qt where zt is a zero mean SMM, finite variance series generated indepen- dently from wt. This is a generalization of the “two factor” mechanism that generates I(1) cointegrated series, wt corresponding to the common factor that is LMD. Clearly other generalizations are possible, with wt being LMD other than a simple random walk or using “three factor” form, but these will not be considered here. Note that if zt is SMM then so will be zt sin qt from the result stated in Section II about products of processes. With the construction (3.3) it is clear that as the long run forecast of zt is zero, because it is SMM, then the optimum long run forecasts of the pair of series xt, yt will lie on the attractor. A single series that is SMM and has an attractor must have a point attractor. It follows that a pair of LMM series cannot have an attractor that is bounded in all directions. Consider a possible attractor that is a circle of radius r and with center

(0, 0). The distance from xt to the origin is then zt + r which is necessar- ily SMM. A similar argument can be applied to other bounded shapes. It follows that if a pair of LMM series have an attractor, that attractor must be unbounded in some direction. A form of the error correction model can be found from (3.3). Write the first equation as

xGwzstttt= ()- (3.4) where st = sin qt = sin q(wt). Note that

Gw()ttt++11=+ Gw()e 1 = G() w+¢ee G() w+¢¢2 G() w etc tt++11 t2 t t using a Taylor series expansion.It follows that the best forecast of G(wt+1) made at time t is 1 fG = G() w+¢¢s 2 G() w etc t ,1 tt2 e ∫ Gw()tt+ f() w (3.5) assuming et is zero mean white noise. From (3.4) the optimum forecast of xt+1 is 292 C. W. J. Granger and J. Hallman

x G z s f t,1 = f t,1 - f t,1 f t,1 (3.6) given that z, s are independent, given the assumption that zt, wt are independent. Writing

x x f t ,1 = xt+1 - et ,1 (3.7)

x where et ,1 is the one-step forecast error, it follows by substitution of (3.6) into (3.7) and subtracting from (3.4) that

2 s x xxfwzsffett+11-=() ttttt+-,,11 +t ,

z Suppose that fzt ,1 =+rbtjtj D z- is the best linear predictor of zt from j its own past and using a Taylor series expansion on s(wt) gives

2 Dxtttt+1 - f() w=- z[]()1 rrs s-¢¢e s() w t+ etc x +¢¢terms in Dztt , s() w etc+ e t,1 which is a form of error correction model. The leading term on the right-hand side is the error correction term with a time varying parame- ter. The other right-hand side terms are SMM. The left-hand side is not just Dxt+1 but has to be modified by subtracting f(wt). It should be noted that Dxt+1 is not necessarily SMM given the method of generating xt.

4. ESTIMATION OF THE ATTRACTOR If one has no prior information about the shape of a possible attractor, a nonparametric estimator is worth consideration. A technique that is clearly appropriate is the Alternating Conditional Expectations (ACE) algorithm proposed by Breiman and Friedman (1985). Although origi- nally suggested for use with cross sectional data it can easily be used with time series. Starting with a sample from a pair of random variables x, y the objective is to find a pair of instantaneous transformations q(x), f(y) such that the correlation between these transformed variables is maxi- mized. This criterion is equivalent to maximizing R2 for the regression of f(y) on q(x). Essentially the steps in the algorithm are

(i) fix q0(x) = x/x (ii) consider a smooth spline functions f1(y) that maximizes corr(q0(x), f1(y)) (iii) fix f1(y) and consider smooth spline functions q1(x) so that corr(q1(x), f1(y)) is maximized (iv) fix q1(x) and find f2 so that corr(q1(x), f2(y)) is maximized, and so forth until an appropriate stopping rule becomes operative. Details can be found in the paper by Breiman and Friedman (1985). Clearly different nonparametric estimators could be used instead of the smooth splines. Long Memory Series with Attractors 293

In the ACE implementation used for this paper a fixed-window regression smooth is employed, computing E(Y|X) as follows: (a) sort the observations by x value.

(b) Define the window Wn as the set of all observations · xj, yj} such that |j - n| £ k, where k is that predetermined minimum window size (minus one).

(c) E(yn|X) is the fitted value of yn from a linear regression of y on a constant and X, using only the observations in the window Wn. (d) For technical reasons detailed in Breiman and Friedman, it is necessary for the data smooths to always have a zero mean, so that sample mean of the computed E(Y|X) is subtracted before the observations are sorted back into their original order.

If k = T, the sample size, the smooth is just the linear regression yt = a + bxt and the returned values are {bxt}. At the other extreme, k = 0 will return y minus its mean. In between, larger values of k trade more smoothness for less ability to track discontinuities and sharp changes in the slope of Y|X.The effect of reducing the window size is similar to what happens in a linear regression as more variables are allowed to enter. Just how many “equivalent parameters” are used by ACE is a question explored in the next section. The smoother used in Breiman and Friedman’s ACE implementation is the “supersmoother” of Friedman and Stuetzle (1982). It differs from the fixed window smoother by making several passes with different window sizes and then choosing a window size for each observation based on a local cross validation measure. Unfortunately, it tends to choose window sizes that are too small in moderate sample sizes or when there is sorted data. Both are to be expected in our applications, so a fixed window is used instead. One other point should be mentioned. Breiman and Friedman prove that for a stationary, ergodic process, ACE converges to the optimal transformations if the smooths used are (i) uniformly bounded as T Æ •, (ii) linear, and (iii) mean squared consistent. Marhoul and Owen (1984) have shown regression smooths to be mean squared consistent under conditions not satisfied in our setup. More work will be needed to find conditions under which ACE will always find an existing attractor. This does not prevent us from using ACE to find candidate attractors which we can then test using the procedure of the next section. As an example of the use of the technique, data was generated with

x_t = w_t , \qquad y_t = w_t^3 ,

with w_t a pure Gaussian random walk, using sample size 200. Figure 15.2 shows the generated data, the actual underlying cubic relationship, and the estimated curve from the ACE algorithm. As the two curves are so close to each other, there is no point in labeling them.

Figure 15.2. [Generated data with the true cubic y = x^3 and the ACE estimate y = fhat(x); ADF(4) = 5.71.]
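The cubic example is easy to reproduce with a bare-bones ACE loop built on the fixed_window_smooth sketch above. Again this is our own reconstruction, not the authors' code; the window size k = 20, the tolerance, and the iteration cap are arbitrary choices, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    w = np.cumsum(rng.standard_normal(200))   # pure Gaussian random walk
    x, y = w.copy(), w ** 3

    # Alternating Conditional Expectations: alternately smooth q(x) on y
    # (to update f) and f(y) on x (to update q) until the correlation of
    # the transformed variables stops improving.
    q = x / np.linalg.norm(x)
    last_corr = 0.0
    for _ in range(50):
        f = fixed_window_smooth(y, q, k=20)   # f(y) = E[q(x) | y]
        f /= np.linalg.norm(f)
        q = fixed_window_smooth(x, f, k=20)   # q(x) = E[f(y) | x]
        q /= np.linalg.norm(q)
        corr = np.corrcoef(q, f)[0, 1]
        if abs(corr - last_corr) < 1e-6:      # simple stopping rule
            break
        last_corr = corr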

5. TESTING FOR AN ATTRACTOR

Assume that xt, yt are long memory. If the set A defined by

A = {(x, y) : g(x) = h(y)}

is an attractor set, then a sufficient condition that zt, as defined in Section 3, is SMM is that wt = g(xt) - h(yt) is SMM, and thus it will appear to be I(0) in tests. On the other hand, if A is not an attractor set, wt will have a unit root. Rejecting the hypothesis of a unit root in the wt estimated by ACE is evidence that A is an attractor. The augmented Dickey-Fuller (ADF) statistic for testing the unit root hypothesis is the negative of the t statistic for δ in the regression

\Delta w_t = -\delta w_{t-1} + \sum_{j=1}^{k} \beta_j \Delta w_{t-j} + u_t .

The hypothesis of a unit root in {wt} is rejected if the ADF is large enough. If wt has a nonzero mean, it is subtracted off before performing the test. When wt is a residual from ACE or from a regression including a constant term, it has mean zero by construction. The use of the ADF as a test for linear cointegration was first suggested by Engle and Granger (1987), and its distribution has been studied by Engle and Yoo (1987), and others. Engle and Yoo provide tables of critical values for the test. These depend on both the number of observations in the sample and on the number of parameters estimated in the cointegrating regression. This presents a problem, in that ACE does not estimate parameters. However, shrinking window sizes in ACE is much like allowing for more parameters in a regression. What is needed is an indication of how many equivalent parameters are being used by ACE for different window sizes. A simple Monte Carlo experiment was conducted using just 100 repetitions of the following:

(i) generate x, e as vectors of 100 i.i.d. N(0,1) random variables;
(ii) form the summations

S_{x,t} = \sum_{j=1}^{t} x_j , \qquad S_{e,t} = \sum_{j=1}^{t} e_j ;

(iii) form Syt by

(a) S_{y,t} = 0.33 S_{x,t} + S_{e,t} ,
(b) S_{y,t} = 3 S_{x,t} + S_{e,t} .

If the series were stationary, these would correspond to R2 values of 0.1 and 0.9 respectively. In fact, Syt, Sxt are I(1) series that are not cointegrated. The ACE algorithm was applied to the series for various window sizes. The series wt were formed and the ADF statistic computed, using 4 lags. Table 15.1 shows the estimated percentiles in the two cases (a) and (b). For purposes of comparison, Table 15.2 shows the percentiles for ADF statistics on residuals from linear regressions of a random walk on k - 1 other independent random walks. The last rows in Tables 15.1(a), (b) are for linear regressions. For "middle" window sizes, the distributions in Table 15.1 are fairly stable and roughly correspond to the k = 3 case in Table 15.2. It is seen that the "degree of explanation", very roughly corresponding to R2, does matter, but generally window size does not make a big difference. Even though the results in Table 15.1 are for two series, the use of the ACE algorithm approximately adds a further independent series to the process. Clearly a great deal of further research is required on this topic, but "spurious regression" problems do not seem to be excessive, and it is recommended, as a practically useful approximation, that the ADF statistical tables be used as though an extra series was involved.
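For completeness, a self-contained sketch of the ADF statistic used throughout is given below. This is our own code, not the authors'; the sign convention follows the text, so large positive values reject the unit root, and the last three lines show one replication of case (a).

    import numpy as np

    def adf_stat(w, k=4):
        """ADF statistic for Delta w_t = -delta*w_{t-1}
        + sum_{j=1..k} beta_j * Delta w_{t-j} + u_t.
        Returned with sign flipped so large positive values reject."""
        w = w - w.mean()                 # subtract any nonzero mean first
        dw = np.diff(w)
        y = dw[k:]                                    # Delta w_t
        X = [w[k:-1]]                                 # w_{t-1}
        X += [dw[k - j:-j] for j in range(1, k + 1)]  # lagged Delta w
        X = np.column_stack(X)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        u = y - X @ beta
        s2 = u @ u / (len(y) - X.shape[1])
        cov = s2 * np.linalg.inv(X.T @ X)
        return -beta[0] / np.sqrt(cov[0, 0])

    rng = np.random.default_rng(1)
    Sx, Se = np.cumsum(rng.standard_normal(100)), np.cumsum(rng.standard_normal(100))
    Sy = 0.33 * Sx + Se   # case (a); the residual w from ACE (or OLS, as in
                          # the last row of Table 15.1) is then passed to adf_stat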

6. AN APPLICATION

Two monthly series for the period January 1947 to December 1985 were taken from the Citibase data tape:

Table 15.1(a) ADF percentiles for case (a).

W 5% 10% 20% 50% 80% 90% 95% Mean

  9   1.57  1.98  2.32  2.90  3.72  4.25  4.75  2.98
 14   0.96  1.82  2.07  2.63  3.31  4.01  4.35  2.74
 19   0.84  1.67  1.94  2.53  3.26  3.59  4.08  2.57
 24   0.57  1.49  1.88  2.33  3.23  3.42  3.81  2.44
 29   0.67  1.40  1.83  2.30  3.05  3.51  3.64  2.35
 34   0.64  1.37  1.75  2.28  3.03  3.43  3.55  2.30
 39   0.39  1.27  1.66  2.19  3.03  3.33  3.59  2.26
 44   0.71  1.18  1.62  2.18  3.00  3.28  3.59  2.23
 49   0.70  1.02  1.55  2.16  3.07  3.24  3.48  2.20
100   0.43  0.92  1.36  1.92  2.66  2.90  3.18  1.97

Table 15.1(b) ADF percentiles for case (b).

W 5% 10% 20% 50% 80% 90% 95% Mean

  9   1.26  1.56  1.87  2.65  3.35  3.78  4.10  2.66
 14   1.05  1.56  1.85  2.49  3.18  3.61  3.93  2.55
 19   1.06  1.33  1.74  2.36  3.16  3.45  3.89  2.45
 24   0.88  1.37  1.71  2.34  3.08  3.50  3.79  2.40
 29   0.97  1.31  1.67  2.33  3.01  3.52  3.65  2.37
 34   0.84  1.29  1.65  2.26  3.04  3.43  3.78  2.34
 39   0.74  1.28  1.66  2.29  2.98  3.48  3.74  2.31
 44   0.84  1.28  1.63  2.28  3.01  3.47  3.81  2.31
 49   0.81  1.26  1.60  2.30  2.99  3.55  3.86  2.32
100   0.80  1.20  1.54  2.17  2.94  3.29  3.66  2.20

Table 15.2 ADF percentiles for OLS (based on 1000 trials).

K 5% 10% 20% 50% 80% 90% 95% Mean

2   0.37  0.82  1.31  1.99  2.71  3.07  3.31  1.96
3   0.96  1.39  1.74  2.41  3.06  3.36  3.61  2.37
4   1.46  1.80  2.10  2.76  3.36  3.68  4.01  2.74
5   1.79  2.06  2.34  2.98  3.61  3.97  4.31  2.99

RMB: US base money (M0) divided by the consumer price index, and
FY3: interest rate on three-month US Treasury bills.

The sample size is 468. The standard linear analysis for both series did not reject the null hypothesis that they are I(1), but the two-stage procedure

Figure 15.3. [The ACE transformation q(RMB).]

suggested in Engle and Granger (1987) did not find the series to be (linearly) cointegrated. Some details of these results are

RMB: augmented Dickey-Fuller (ADF) with 4 lags = -0.05
FY3: ADF with 4 lags = 1.94

zt: ADF with 4 lags = 2.92

A value of about 3.50 would need to be found for these ADF statistics to reject the null hypothesis that the series is I(1). Figures 15.3 and

15.4 show the transformations of the series, q(RMBt), f(FY3t), achieved by use of the ACE algorithm with a window of size 93. The transformation of the money base is seen to be almost linear, but that for the three-month interest rate is very nonlinear. The 4-lag ADF statistics for these transformed series are -0.05 and 1.47 respectively, so for both an I(1) characterization is not rejected. Figure 15.5 shows the transformation of the line q(RMB) = f(FY3) back to the original variable plot. This curve is the potential nonlinear attractor. The ADF (4 lags) statistics for the original variables and their transformations are

        Original   Transformed
RMB      -0.05       -0.05
FY3       1.94        1.47

Figure 15.4. [The ACE transformation f(FY3).]

Figure 15.5. [The estimated nonlinear attractor q(RMB) = f(FY3), plotted in the original variables.]

There is thus no evidence that a null hypothesis of long memory (i.e. I(1)) should be rejected. Table 15.3 shows the ADF statistics for

q_t = \hat{q}(RMB_t) - \hat{f}(FY3_t)

for various window sizes. R2 and the ADF statistics increase as window size becomes smaller, and using the rule suggested in the previous section the

Table 15.3

Window R2 ADF

 46   0.90   6.30
 70   0.88   5.60
 93   0.86   4.99
116   0.82   4.36
140   0.79   4.14
163   0.75   3.85
186   0.72   3.71
210   0.70   3.64
233   0.67   3.63
468   0.49   2.92

Figure 15.6.

ADF statistics appear to be significant for window sizes of 140 and less. The analysis reported here used window size 93, as being representative but not so narrow that spurious nonlinearity is likely to occur. Figure

15.6 shows the estimated qt using this window, and this series seems to be short rather than long memory. The evidence certainly suggests that these two series are not cointegrated linearly, but they do have a nonlinear attractor, which can be viewed as a nonlinear cointegration. An alternative way to consider this evidence is by constructing error-correction models using the original and the transformed data. For each series Δxt was regressed on lagged

Δxt, Δyt and a single lagged wt (zt in the original data). Insignificant lagged Δxt, Δyt were dropped and the remaining model re-estimated. Using ordinary t-statistics, zt-1 was not significant for either original series, as expected because zt was I(1). For the transformed series, wt-1 was not significant in the equation for ΔTRMB, but for TFY3 the error-correction model estimated was

\Delta TFY3_t = -0.0001 - 0.076\, w_{t-1} + 0.32\, \Delta TFY3_{t-1} + 0.12\, \Delta TFY3_{t-3} + \text{residual} ,

with t statistics (0.50), (-5.3), (7.3) and (2.7) respectively; R2 = 0.133, Durbin-Watson = 1.99. A clearly significant coefficient for wt-1 is observed. This example appears to find a nonlinear attractor, but its interpretation needs some care. In Figure 15.5, the values on the low part of the attractor often also correspond to observations from the early part of the period used. Thus what is here being interpreted as a nonlinear attractor could possibly be viewed as time-varying cointegration between the variables. Methods of distinguishing between various types of nonlinear models need to be investigated. In Hallman (1990) two other examples are presented with somewhat similar results:

(a) the monthly Standard and Poor's common stock composite index and earnings per share, January 1954 to January 1986, and
(b) the ratio of M1 to M2 and the 6-month Treasury bill rate, quarterly averages from 1959 to 1985.

In both examples the original series were not cointegrated, but evidence of a nonlinear attractor was found after using the ACE algorithm, particularly in example (a). In various other examples examined, no linear or nonlinear attractor was found. Our experience has been that it is not easy to find examples where there seems to be nonlinear cointegration but not linear cointegration. At least the suggested methods seem not to produce spurious results.

7. CONCLUSION

A definition of a nonlinear attractor has been proposed that we believe accords with some views of macroeconomic equilibrium and that is capable of being estimated and tested with actual data. We have shown that the ACE algorithm provides a practical estimation technique and that tests can be derived using it, although a great deal of further work is required on the test procedures. It is hoped that this work will provide a useful starting point for generalizations of linear cointegration.

REFERENCES

Breiman, L. and Friedman, J. H. (1985). "Estimating Optimal Transformations for Multiple Regression and Correlation", Journal of the American Statistical Association, Vol. 80, pp. 580–97.
Engle, R. F. and Granger, C. W. J. (1987). "Cointegration and Error Correction: Representation, Estimation and Testing", Econometrica, Vol. 55, pp. 251–76.
Engle, R. F. and Granger, C. W. J. (1991). Long Run Economic Relationships: Readings in Cointegration, Oxford University Press.
Engle, R. F. and Yoo, S. (1987). "Forecasting and Testing in Cointegrated Systems", Journal of Econometrics, Vol. 35, pp. 143–59.
Friedman, J. H. and Stuetzle, W. (1982). "Smoothing of Scatter Plots". Technical Report ORION 006, Department of Statistics, Stanford University.
Granger, C. W. J. (1986). "Developments in the Study of Cointegrated Economic Variables", Oxford Bulletin of Economics and Statistics, Vol. 48, pp. 213–28.
Granger, C. W. J. and Hallman, J. (1989). "The Algebra of I(1)", Finance and Economics Discussion Series, Paper 45, Division of Research and Statistics, Federal Reserve Board, Washington DC.
Granger, C. W. J. and Hallman, J. (1990). "Nonlinear Transformations of Integrated Time Series", forthcoming in Journal of Time Series Analysis.
Granger, C. W. J. and Thompson, P. J. (1987). "Predictive Consequences of Using Conditioning or Causal Variables", Econometric Theory, Vol. 3, pp. 150–52.
Hallman, J. J. (1990). Ph.D. Thesis, Economics Department, University of California, San Diego.
Marhoul, J. C. and Owen, A. B. (1980). "Consistency of Smoothing with Running Linear Fits". L.C.S. Technical Report #8, November 1980, Department of Statistics, Stanford University.

CHAPTER 16

Further Developments in the Study of Cointegrated Variables*

C. W. J. Granger and Norman Swanson**

1. INTRODUCTION

Since the publication of a paper with a similar title, Granger (1986), there has been considerable interest and activity concerning cointegration. This is illustrated by the books by Engle and Granger (1991), Banerjee, Dolado, Galbraith and Hendry (1993), Johansen (1995) and Hatanaka (1995), plus many papers, both theoretical and applied. Much of the work has been highly technical, impressive, and very useful, but it has not necessarily helped economists interpret their data. This work has often accepted the constraints imposed by the early papers and has not questioned those constraints. It is the objective of this paper to suggest and examine generalizations whilst maintaining the main idea of cointegration, and consequently, it is hoped, to provide ways of making interpretations of the results of cointegration analysis both more realistic and more useful. The paper raises more questions than it solves and so can be thought of as a research agenda rather than a completed project. The same was clearly true of the 1986 paper.

The standard theory begins with a vector xt of n components, all of which are I(1), so that, in the simplest form, each component of Δxt is stationary. Assume that there exists an n × r (r < n) matrix α such that

z_t = \alpha' x_t \qquad (1)

has components that are all I(0), or zero-mean stationary processes, where zt has r < n components. A vector xt having these properties is said to be cointegrated. It was clear from the beginning that cointegration could only arise if one had a "common factor", later called a "common trend", representation

x_t = D w_t + x_t^*

* Oxford Bulletin of Economics and Statistics, 58, 1996, 374–386.
** This study was supported by NSF award SBR-93-08295 and by a Penn State University Research and Graduate Studies Office Faculty Award.

where D is an n × m matrix, with m = n - r, wt is an m × 1 I(1) vector, and x*t is an n × 1 vector of I(0) components. The z's have the I(0) property because there are fewer common factors, the w's, than x's, so that there must exist linear combinations of the x's that eliminate the w's. The other crucial property of the common trend representation is that the I(1) property dominates the I(0) property, so that an I(1) variable plus an I(0) one is always I(1). An important consequence of cointegration is that the x's must at least appear to have been generated by an error-correction system of equations

A(B)\, \Delta x_t = \gamma z_{t-1} + \varepsilon_t \qquad (2)

where γ is an n × r matrix, εt is a stationary multivariate disturbance, and A(B) is the usual lag polynomial with A(0) = I and A(1) having all finite elements.

If one assumes that wt can be written as a linear combination of x’s, a somewhat more constrained but more interesting representation can be obtained. A popular way of doing this is to take

w_t = \gamma_\perp' x_t \qquad (3)

where γ⊥′γ = 0, γ⊥′ is an m × n matrix, and 0 here is m × r, as discussed by Warne (1991) and Gonzalo and Granger (1995). This definition has the advantage that there is no causality from zt to wt at zero frequency, as discussed in Granger and Lin (1995), so that it is natural to view the w's as contributors to the permanent components, and the z's as contributors to the transitory components of the system. As there are now n equations relating w's and z's to x's, these can be inverted to give (from Warne (1991))

x_t = \alpha_\perp (\gamma_\perp' \alpha_\perp)^{-1} w_t + \gamma (\alpha' \gamma)^{-1} z_t \qquad (4)
    = \text{permanent component} + \text{transitory component.}

It is well known that the actual z terms are not identified, since any linear combination of z's will still be I(0); but replacing zt by ρzt, where ρ is a square r × r matrix such that ρ′ρ = I, does not affect the decomposition (4). Multiplying the error-correction equation (2) first by α′ and then, separately, by γ⊥′, and using (4) to replace lagged Δxt by lagged Δzt and Δwt, one gets the transformed VAR model

z_t = (I + \alpha'\gamma) z_{t-1} + \text{lags of } \Delta w_t + \text{lags of } \Delta z_t + \text{innovations}
\Delta w_t = \text{lags of } \Delta w_t + \text{lags of } \Delta z_t + \text{innovations} \qquad (5)

It should be noted that constants may have to be added to these equations to ensure that each component of zt has zero mean. Some papers assume that the common trends should be random walks, possibly following from the Beveridge and Nelson (1981) decomposition of an I(1) variable into permanent and transitory components. The above decomposition, which is used throughout this paper, may not have this property, which is viewed as coming from an alternative arbitrary assumption to the one we use. However, the alternative approach does have difficulties with some situations. Suppose that two series, Xt, Yt, are analyzed individually and each is found to be a random walk, so that the first difference of each produces a series ex,t, ey,t that is uncorrelated with its own past, i.e. corr(ex,t, ex,t-k) = 0, k > 0, and similarly for ey,t. Can Xt, Yt be cointegrated? It might seem unlikely if the common trend also has to be a random walk. But if the common trend is allowed to be IMA(1,1), for instance, some algebra shows that cointegration can occur, with

X_t = \frac{1 + bB}{1 - B}\, \varepsilon_t + \eta_t , \qquad \sigma_\eta^2 = b\, \sigma_\varepsilon^2 ,

and similarly for Yt. There are several obvious generalizations and several problems with the decomposition approach. The distinction between I(1) and I(0) can be broadened to I(d) and I(b), d > b, using d = 2 or fractionally integrated processes, which is simple in theory but not in practice. The effects of increasing the contents of xt, or changing the series in xt, can lead to difficulties with interpretation in practice. The greatest difficulty, of how to define I(0) precisely, and hence I(1) as the accumulation of I(0), was immediately clear and still remains. A (second-order) stationary process is clearly an example of an I(0) process but does not necessarily make up the complete set of all possibilities, particularly once non-Gaussian, non-linear or time-varying coefficient cases are considered. (An I(0) process can have time-varying coefficients, but it has become common in the literature to call I(1) "non-stationary", showing how confusion can easily enter a field.) Pragmatically, an I(0) process is one that does not fail a powerful test having some general form of I(0) as a null.
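As a check on the IMA(1,1) example above (this short verification is ours; the text only says "some algebra shows"), write the first difference of X_t and compute its autocovariances:

\Delta X_t = (1 + bB)\,\varepsilon_t + \Delta\eta_t = \varepsilon_t + b\,\varepsilon_{t-1} + \eta_t - \eta_{t-1} ,

so that

\operatorname{cov}(\Delta X_t, \Delta X_{t-1}) = b\,\sigma_\varepsilon^2 - \sigma_\eta^2 , \qquad \operatorname{cov}(\Delta X_t, \Delta X_{t-k}) = 0 \ \text{for } k \ge 2 .

All autocorrelations of ΔX_t therefore vanish, so that X_t is diagnosed univariately as a random walk, exactly when σ²η = bσ²ε, which is the restriction stated above.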

2. SIMPLE GENERALIZATIONS

The traditional approach is to start with cointegration as in (1), the equation for zt. Then (2), the error-correction model, is specified. This leads to (3), the common trend representation. Finally, (5), the generating mechanism for the zt and wt components, is derived. To consider generalizations it is easier to reverse this sequence: first generate processes zt and wt, which may not be observed and which have distinctly different properties, then generate xt from zt and wt, and finally derive the corresponding generalized error-correction model. As an example of this approach, consider the bivariate case, where there is a pair of variables Xt and Yt. First generate a pair of univariate series zt and wt by

z_t = \lambda_t z_{t-1} + e_{z,t} , \quad |\lambda_t| < 1
w_t = \phi_t w_{t-1} + e_{w,t} , \quad \phi_t \ge 1 \qquad (6)

where ez,t, ew,t will be taken to be martingale difference series (or perhaps white noise). Suppose, further, that

z_t = x_t - a_t y_t

and

w_t = c_1 x_t + c_{2t} y_t .

Then, assuming that c_{2t} = 1 - c_1 a_t yields

x_t = c_{2t} z_t + a_t w_t
y_t = -c_1 z_t + w_t ,

which links the observed series xt, yt with the generated components zt, wt. The error-correction model is

(1 - \phi_t B)\, x_t = \gamma_{1t} z_{t-1} + \text{innovation}
(1 - \phi_t B)\, y_t = \gamma_{2t} z_{t-1} + \text{innovation}

where

\gamma_{1t} = c_{2t}(\lambda_t - \phi_t)
\gamma_{2t} = -c_1(\lambda_t - \phi_t) .

In the above example, several parameters are allowed to vary through time, particularly λ, φ and a. These could be deterministic functions of t, stochastic functions based on observed variables such as measures of the state of the business cycle, or unobserved series such as are used in the stochastic unit root literature, e.g. Granger and Swanson (1995), where φt = exp(at) and at is an I(0) process with non-zero mean such that E[φt] = 1. The cointegrating parameter, a, could change seasonally, as explored by Franses (1992), or it could switch between a pair of regimes, as discussed in Granger and Teräsvirta (1993), for example. It is clear that if one just looks at the error-correction equation, and if φt does not vary far from unity, then if the γ's are found to appear to vary over time it will be difficult to determine exactly where in the generating process this time variation originates. It should be emphasized that the bivariate set-up here is not the most general possible, as (6) can contain further lags, and as xt, yt are linear functions of both zt and wt, for example. However, the basic ideas of cointegration are preserved even though wt is not necessarily a standard I(1) variable and zt is not necessarily a standard I(0) variable.
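A small simulation makes the construction concrete. The sketch below is our own illustration, not from the paper: the choices λt = 0.8, a stochastic φt = exp(at) with E[φt] ≈ 1, and constant a and c1 are assumptions made for simplicity.

    import numpy as np

    rng = np.random.default_rng(0)
    T = 400
    a, c1 = 0.5, 1.0
    c2 = 1.0 - c1 * a            # so the system inverts with unit determinant

    z = np.zeros(T)
    w = np.zeros(T)
    for t in range(1, T):
        lam = 0.8                                   # |lambda_t| < 1: z mean reverts
        phi = np.exp(0.02 * rng.standard_normal())  # stochastic unit root, E[phi_t] ~ 1
        z[t] = lam * z[t - 1] + rng.standard_normal()
        w[t] = phi * w[t - 1] + rng.standard_normal()

    x = c2 * z + a * w           # observed series
    y = -c1 * z + w
    # z = x - a*y recovers the (generalized) cointegrating combination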

3. NONLINEAR GENERALIZATIONS

Suppose that xt is a vector of n components, each of which is I(1), and that

z_t = \alpha' x_t

is a vector of r I(0) components with zero means. An interpretation that has become almost standard has α′xt = 0 determining an attractor or "equilibrium" for the system, so that ||zt|| is a measure of the extent to which the system is out of equilibrium, for some norm. If, in some sense, the economy "prefers" zt to be small, there must be costs associated with non-zero values of zt. Consider the myopic cost function

C_{t+1} = \sum_{j=1}^{r} G_j(\theta_j' z_{t+1}) + \sum_{k=1}^{n} \lambda_k (\Delta x_{k,t+1})^2 \qquad (7)

where xt+1 is chosen to minimize

J = E[C_{t+1} \mid I_t]

where It is some information set available at time t, including xt-j, j ≥ 0. The first terms in (7) are disequilibrium costs and the second group of terms are the costs associated with changing the x values. Substituting for zt, θ′j z_{t+1} becomes φ′j x_{t+1}, say, so that ∂J/∂x_{k,t+1} = 0 gives

\Delta x_{k,t+1} = \sum_j \gamma_{kj}\, g_j(I_t) + e_{k,t+1} ,

where

\gamma_{kj} = -\phi_{jk} / (2\lambda_k) \quad \text{and} \quad E[G_j'(\phi_j' x_{t+1}) \mid I_t] = g_j(I_t) ,

with G′(w) = dG(w)/dw. Assuming that

g_j(I_t) = g_j(\delta_j' z_t) ,

one gets the nonlinear error-correction model

\Delta x_{k,t+1} = \sum_{j=1}^{r} \gamma_{kj}\, g_j(\delta_j' z_t) + e_{k,t+1} \qquad (8)

which can be written

\Delta x_{t+1} = \gamma\, g(\delta' z_t) + e_{t+1} \qquad (9)

with obvious notation. Note that there are n equations and, by construction, there are r factors on the right-hand side. Thus there will be n - r independent linear combinations, wt = γ⊥′xt, such that

\Delta w_t = \gamma_\perp' e_t \qquad (10)

so that each element of wt is a random walk. Multiplying (9) by α′ gives

\Delta z_{t+1} = \alpha'\gamma\, g(\delta' z_t) + \alpha' e_{t+1} . \qquad (11)

It is seen that this particular nonlinear cost function leads to a nonlinear error-correction model (8); wt, zt are linear functions of xt, and wt is a standard I(1) process, but zt is generated by a non-linear vector AR(1) model. Clearly, constraints will be required on g(z) and on α′γ to ensure that zt is I(0), or at least not dominated by I(1) components. Simple sufficient conditions for a form of stability are given by Mokkadem (1987) and Lasota and Mackey (1987). To get more dynamics into the error-correction equations, further costs associated with changes of the form (x_{k,t+1} - x_{k,t-j})^2 need to be added to the cost function. If the cost-of-change term in (7) is some positive function other than quadratic, then a much more complicated form of error-correction model results. This type of generalization is not considered further here. One obvious generalization is where x_{k,t} = f_k(y_{k,t}), k = 1,...,n, where the y_{k,t} are the observed series, and the x_{k,t} are suitably transformed series that have the regular I(1) and cointegration properties, as discussed by Granger and Hallman (1991). Other generalizations are discussed in Swanson (1995). A related, but different, situation occurs when the persistent component is a growth process. A positive series Wt will be called a growth process if

\operatorname{Prob}\!\left[\frac{W_{t+k}}{W_t} > 1\right] \ge \frac{1}{2}

with k > 0, and Prob(Wt → ∞) = 1 as t → ∞. Examples are:

(i) ΔWt = m(t) + et, with m(t) a deterministic increasing trend, so that m(t + k)/m(t) ≥ 1, k > 0;

(ii) Δlog Wt = a + et, a > 0, and
(iii) ΔWt = g(Wt-1) + et with g(W) > 0,

where in each case et = htεt, with εt i.i.d. with mean zero, and ht can be a stochastic or deterministic heteroskedasticity term. In these examples

Wt will be a growth process provided ht does not grow too rapidly. For the third example, Granger, Inoue and Morin (1995) provide a full discussion.
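The three examples are easy to visualize by simulation. The sketch below is our own illustration, with arbitrary parameter and starting values, generating one path of each type of growth process:

    import numpy as np

    rng = np.random.default_rng(0)
    T = 500

    # (i) Delta W_t = m(t) + e_t, with m(t) an increasing deterministic trend
    W1 = 10 + np.cumsum(0.01 * np.arange(1, T + 1) + rng.standard_normal(T))

    # (ii) Delta log W_t = a + e_t with a > 0, so W_t grows geometrically
    W2 = np.exp(np.cumsum(0.02 + 0.01 * rng.standard_normal(T)))

    # (iii) Delta W_t = g(W_{t-1}) + e_t with g(W) > 0
    W3 = np.empty(T)
    W3[0] = 1.0
    for t in range(1, T):
        W3[t] = W3[t - 1] + 0.05 * np.log1p(abs(W3[t - 1])) + 0.1 * rng.standard_normal()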

A pair of growth processes W1t, W2t will have W1t “dominating” W2t if

W_{2t}/W_{1t} → 0 in probability as t → ∞. In example (ii), if W1t has parameter a1 in its generation and W2t has a2, with a1 > a2 > 0, then W1t will dominate W2t. Clearly a growth process will always dominate a non-growth one. To illustrate some characteristics of growth processes, possible cointegrating relations between pairs of growth series Xt, Yt and between log Xt and log Yt, plus some other situations, are discussed in the remainder of this section. Upper-case letters are used to denote growth processes, which are not usually I(1).

The case considered will be where Xt, Yt are given by

X_t = A_1 W_{1t} + A_2 W_{2t} + \tilde{x}_t
Y_t = W_{1t} + \tilde{y}_t \qquad (12)

where x̃t, ỹt are both I(0), that is, stationary or short-memory series, and W1t, W2t are each positive growth processes, being persistent series possibly containing deterministic elements. It is assumed that W1t dominates W2t and that A1 and A2 are both positive. Taking logs of (12) and assuming that t is large enough so that W2t/W1t is negligible gives

\log X_t = \log A_1 + \log W_{1t} + \log\!\left(1 + \frac{\tilde{x}_t}{A_1 W_{1t}}\right)
\log Y_t = \log W_{1t} + \log\!\left(1 + \frac{\tilde{y}_t}{W_{1t}}\right) \qquad (13)

A pair of growth series Xt, Yt will be said to be cointegrated if there is a linear combination, Xt - AYt, which is not a growth process. There are two cases:

(i) If A2 = 0, so W2t does not enter Xt, then both the levels Xt, Yt and log Xt, log Yt are cointegrated, with cointegrating vectors (1, -A1), (1, -1) respectively;

(ii) If A2 ≠ 0, then log Xt, log Yt are cointegrated but not Xt, Yt.

If a rather more general form of (13) had been used, with W2t also entering the Yt equation, more possibilities occur but no extra insights result. The residual terms in (13) need not disappear asymptotically if one can write x̃t = W1t x*t, where x*t is I(0), and similarly for ỹt. Some consequences of this example are

(i) If log Xt, log Yt are cointegrated then Xt, Yt may or may not be. It is not possible for Xt, Yt to be cointegrated but log Xt, log Yt not to be. More generally, it may be true that X_t^q, Y_t^q are cointegrated for q < q0 but not for q > q0.
(ii) If one hopes to "find" cointegration, it is more likely to be found using log variables. The reverse is that if cointegration is found between levels, then a more stringent condition has been passed.

4. CURRENT INTERPRETATIONS

Initially, applied papers just asked if a pair of series were cointegrated. Then papers considered small systems and asked how many cointegrations were found, and whether the cointegrations could be interpreted in terms of known economic laws. Only later did it become standard practice to specify and examine the error-correction model, even though this is more fundamental. Cointegration is just a property, whereas the error-correction (EC) system is a possible data-generating mechanism. Once this mechanism is known, the common trends, wt, can be determined, with some possible implied long-run non-causalities, and also the EC generating mechanism for Δxt can be transformed into a VAR in zt and Δwt. As these latter variables are not identified, it may be possible to apply linear transformations to them in order to generate simplified forms of the VAR which have interesting and useful interpretations.

Once the zt and wt variables are known, the vector xt, and thus each of its components, can be decomposed into its permanent and transitory components, as in (4). Although exchanging n variables for 2n not linearly independent ones may not seem much of an accomplishment, these components can be directly used to test equilibrium theories, to test the effects of structural breaks, and possibly to suggest conditional long-run forecasts of the "if investment grows at 1% then..." variety (using permanent components) and to examine seasonal adjustments, short-term forecasts, and leading indicators (using transitory components). One use of this approach is for the amalgamation of cointegration-type studies across different sectors of an economy. Suppose that x1,t and x2,t are two vectors of economic variables from different sectors that have been analyzed and modeled separately. Assume that w1,t and w2,t are the common trends found in the two sectors, and suppose that tests indicate that x1,t and x2,t are not cointegrated with each other. In this case the two sectors may be said to "separate in the long run", and the cointegrations z1,t and z2,t found in the analyses of x1,t and x2,t individually are, theoretically, all that would be found if these variables were analyzed together as a complete system. However, even if there is long-run separation there may be short-run relationships. For example, the error-correction model may take the form

\Delta x_{1,t} = \gamma_1 z_{1,t-1} + \gamma_2 z_{2,t-1} + \text{lags of } \Delta x_{1,t} + \text{innovations} , \qquad (14)

so that disequilibrium errors from one sector may enter the error-correction equations of another sector. These questions have been studied in Konishi and Granger (1993) and Konishi, Ramey and Granger (1995). It has been found that the z's from one sector offer an efficient mechanism for transferring short-run information into another sector's error-correction model. (See, for example, Kozicki (1994), who used z's from real macroeconomic variables to help explain interest rate spreads.) One problem that appears to occur in practice, and that has important implications whenever cointegrations are interpreted, is that although a zt may seem to be I(0), in that it has short memory, it often does not have a simple, tight attractor of the kind assumed to occur in Granger

(1986). If a zt starts from a high positive value, say, and starts to fall, it often does not appear to slow down around the attractor at zero but rapidly continues through it. An example, kindly supplied by Dr. Gawon Yoon, is shown in Figure 16.1. Using Johansen's technique (as described in his book) on three major quarterly U.S. series, Y = income, C = consumption, and INV = investment for the period 1959:1–1994:1 (see

Figure 16.1. Plot of Error-Correction Term: Z = Y - INV (Z is the standardized error-correction term from a VEC(2) model estimated using Johansen's method.)

below), two cointegrations were found. The one shown is the more volatile, which is essentially Z = Y - INV. During the two "oil-shock" recessions this variable (which has been normalized) is seen to start from a high value and to proceed to a large negative value with no hesitation around zero. (The transitory components of the three variables behave similarly, as this z is an important part of the term.) Such behavior would be consistent with a broad attractor, with the economy preferring to be in the attractor. However, the attractor is now a band rather than just a line, as mentioned in Granger (1993). It can be represented by a non-linear error-correction term which has γzt-1 replaced by

g(z) = 0 , \quad -z_0 < z < z_0 ,
     = -\gamma(z - z_0) , \quad z > z_0 , \qquad (15)

with g(-z) = -g(z), for example, although there is no particular reason why g(z) should be symmetric. Provided that there is mean-reverting behavior, the basic interpretation is unaltered. However, the need for non-linear error-correction equations, and hence cost functions, becomes clearer.

5. EXAMPLE OF NONLINEAR ERROR-CORRECTION

In this section a summary is given of an extension of the study by King, Plosser, Stock and Watson (1991) (henceforth KPSW), using an updated data set and considering some simple nonlinear possibilities. For the period 1959:1 to 1994:1, quarterly data for six U.S. macro variables were considered:

C: real per capita consumption expenditures (log).
Y: real per capita "private" gross national product (log).
INV: real per capita gross private domestic fixed investment (log).
M: real balances, the log of M2 per capita minus the log of the implicit price deflator.
R: nominal interest rate, 3-month U.S. Treasury bill rate.
INF: price inflation (measured as an annual percentage).

A detailed description of the data is given in KPSW. (Following KPSW, we assumed that C, Y, and INV can be characterized as I(1) processes with drift, while M, R, and INF are I(1) processes without drift.) Using 1959:1–1985:4 as the in-sample period, and retaining the data from 1986:1–1994:1 for ex-post forecast evaluations, three cointegrations were found, which were essentially:

Z1 = C - Y + 0.01 INF
Z2 = INV - Y + 0.02 INF
Z3 = M - Y + 0.01 R

which are similar to those found by KPSW. Error-correction equations were estimated for each of the six variables using a constant, two lags of every differenced variable, and the three Z's lagged once, giving 16 coefficients per equation. In all, nearly one hundred coefficients were estimated, which are too many to reproduce here. In order to examine the system for possible evidence of nonlinearity, each Z term was replaced by Z+ and Z-, where

Z+ = Z if Z ≥ 0,
   = 0 otherwise,
Z- = Z - Z+ .

Figure 16.2 shows the significant (i.e. p-value ≤ 0.05) cases, where a Z enters the error-correction equation. Thus Z1,t-1 (i.e. C - Y) only affects

ΔINF and ΔY. ΔR is only affected by Z2,t-1 (i.e. INV - Y) and ΔM is only

Figure 16.2. Linear and Nonlinear Causation from Z's to Dependent Variables (Causation from Z's to changes of dependent variables is depicted with arrows. Solid lines are associated with p-values of 0.05 or less, while dotted lines denote significance at the 10% level. A * indicates possible evidence of a nonlinear relationship at the 5% level.)

affected by Z3,t-1. The dotted line is for a p-value less than 0.10. A star on a line indicates that a non-linear term is involved when Z+ and Z- are used in the error-correction equations. All of this particular type of nonlinearity occurs in connection with ΔINV, ΔY and ΔM. In particular, Z1 becomes marginally significant when Z1+ is introduced. Z3 is clearly significant by itself (t = 2.55), but when Z3+ is introduced it becomes significant (t = 2.20), while Z3 is no longer significant (t = -0.51). The nonlinear error-correction equations for ΔC, ΔY, ΔINV, and ΔM all fit better in terms of adjusted R2 and log likelihood. Also, fewer lagged endogenous variables enter significantly into the nonlinear equations than into the linear equations. In this sense, the nonlinear equations are more parsimonious than their linear counterparts. (In the linear equations, the Z's are only allowed to enter linearly.) Interestingly, while the nonlinear equations for ΔC, ΔY, ΔINV, and ΔM produce superior fitting equations in-sample, the ΔY, ΔR and ΔINF nonlinear alternatives are superior based on ex-post evidence, using the 1986:1–1994:1 period, as shown in Table 16.1, Panel (A) (for the linear models) and Panel (B) (for the nonlinear models, denoted Nonlinear 1).
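As a hedged illustration of the Z+/Z- device (our own sketch, not the authors' code; the inputs are assumed to be pre-aligned arrays standing in for the KPSW data), one such error-correction equation can be estimated by ordinary least squares as follows:

    import numpy as np

    def ec_ols(dy, z_lag, dx_lags):
        """OLS of Delta y_t on a constant, Z+_{t-1}, Z-_{t-1}, and lagged
        differences; the Z+/Z- split lets the error-correction response
        differ above and below the attractor."""
        z_plus = np.maximum(z_lag, 0.0)   # Z+ = Z if Z >= 0, else 0
        z_minus = z_lag - z_plus          # Z- = Z - Z+
        X = np.column_stack([np.ones(len(dy)), z_plus, z_minus, dx_lags])
        beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
        return beta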

A different type of non-linearity was considered by defining

D_t = 1 if the business cycle is in a down swing (peak to trough),
    = 0 otherwise,

and then generating the variables DZ and (1 - D)Z. The peaks and troughs correspond to the NBER turning-point dates. An excellent overview of NBER business-cycle dating procedures is given by Zarnowitz and Moore (1986). For the period 1959:1 to 1985:4 it was found that

\Delta \hat{C}_t = 0.004 - 0.002 R_{t-1} - 0.003\, D_t Z_{1,t-1} - 0.006\, D_t Z_{2,t-1} + 0.006\, D_t Z_{3,t-1} \qquad (16)

with t statistics (3.1), (-2.0), (-1.13), (-2.5) and (2.7) respectively, and with other terms being insignificant. One DZ term was also significant in each of the ΔINV and ΔR equations. For ΔC, this particular nonlinearity appears to produce a superior fitting equation in-sample, and using an out-of-sample forecast period of 1986:1–1994:1, the nonlinear models forecast better for 4 of 6 series when compared with linear models. Results for all six series are given in Table 16.1, Panel (A) (for linear models) and Panel (B) (for nonlinear models, denoted Nonlinear 2). So far, the nonlinearities examined have been of a type other than in the proposed model, (8). We now consider nonlinear equations as in (8) with

g_j(\delta_j' z_t) = \left(1 + \exp\{-\delta_j' z_t\}\right)^{-1} - \tfrac{1}{2} , \quad j = 1, 2, 3. \qquad (17)

Thus gj(·) is the logistic cumulative distribution function, shifted so that gj(0) = 0, where z′t = (Z1,t, Z2,t, Z3,t). The equations were estimated using nonlinear least squares. Interestingly, all 6 equations fit better in-sample than their linear counterparts when one nonlinear error-correction term was used in place of all three linear error-correction terms, as evidenced in Panel (A) of Table 16.1. The greatest in-sample improvement was seen in the ΔC equation, where it was found that

\Delta \hat{C}_t = 0.003 - 0.002 R_{t-1} + 0.005\, g(\hat{\delta}' z_t) ,

with t statistics (3.3), (-2.4) and (2.8), where δ̂′ = (9.8, -3.2, 24.3). The money equation also showed marked improvement, and was found to be

\Delta \hat{M}_t = 0.443\, \Delta M_{t-1} - 0.004\, \Delta R_t + 0.002\, \Delta INF_{t-1} + 0.009\, g(\hat{\delta}' z_t) ,

with t statistics (3.3), (-3.9), (2.6) and (2.7), where δ̂′ = (-1.4, -3.5, -2.0). Table 16.1, Panel (A) lists some summary measures of the out-of-sample forecasting ability of these nonlinear equations and of the usual linear error-correction equations.

Table 16.1. Comparison of linear versus nonlinear models.1

Panel (A)

            R²               Log likelihood       AIC                  SIC                  RMSE
Variable    Linear  Nonlin.  Linear    Nonlin.    Linear    Nonlin.    Linear    Nonlin.    Linear    Nonlin.

ΔC          0.203   0.249    387.2     389.2      -9.909    -9.984     -9.504    -9.630     0.00577   0.00554
ΔINV        0.389   0.403    263.9     264.0      -7.559    -7.559     -7.155    -7.245     0.0220    0.0215
ΔM          0.561   0.572    383.1     383.3      -9.831    -9.871     -9.426    -9.517     0.0178    0.0140
ΔY          0.340   0.365    345.0     345.0      -9.104    -9.143     -8.699    -8.789     0.00814   0.00714
ΔR          0.270   0.290    -120.7    -120.5     -0.234    -0.277     0.170     0.077      0.529     0.534
ΔINF        0.378   0.386    -195.7    -196.1     1.194     1.164      1.599     1.518      1.414     1.542

Panel (B)

            R²                    Log likelihood        AIC                    SIC                    RMSE
Variable    Nonlin.1  Nonlin.2    Nonlin.1  Nonlin.2    Nonlin.1  Nonlin.2     Nonlin.1  Nonlin.2     Nonlin.1  Nonlin.2

ΔC          0.215     0.276       389.8     394.1       -9.901    -9.982       -9.421    -9.502       0.00595   0.00563
ΔINV        0.426     0.399       269.0     266.6       -7.599    -7.553       -7.119    -7.073       0.0265    0.0218
ΔM          0.578     0.586       387.0     388.0       -9.847    -9.866       -9.367    -9.386       0.0201    0.0159
ΔY          0.358     0.377       347.4     349.0       -9.093    -9.124       -8.613    -8.644       0.00770   0.00943
ΔR          0.255     0.283       -120.0    -117.9      -0.191    -0.229       0.290     0.251        0.481     0.534
ΔINF        0.362     0.370       -195.2    -194.5      1.243     1.229        1.723     1.710        1.396     1.406

1 R², log likelihood, Akaike Information Criterion (AIC) and Schwarz Information Criterion (SIC) are calculated in-sample using the period 1959:1–1985:4. Out-of-sample root mean squared forecast errors (RMSE) are calculated using one-step-ahead forecasts for the period 1986:1–1994:1. Variable corresponds to the dependent variable in each of six equations estimated using two lags of each variable and various linear and nonlinear error-correction terms. In Panel (A), the linear models use the error-correction terms in the usual linear way, while the nonlinear models replace the three linear error-correction terms with one nonlinear error-correction term as in (17) above, estimated using nonlinear least squares. In Panel (B), results for the other two nonlinear models are given. The model Nonlinear 1 corresponds to the case where Z+ and Z- are used in place of Z from the linear model, while Nonlinear 2 corresponds to the nonlinear case discussed above where regression slope dummy variables are used in place of the usual linear error-correction terms.

The nonlinear equations perform better than the linear equations for 4 of 6 variables based on root mean squared error (RMSE). The largest gains made by the nonlinear alternative seem to be for the ΔC and ΔY equations, which are superior to all other linear and nonlinear models discussed above, based on the RMSE. In all, our limited analysis suggests that there is some evidence of nonlinear error-correction. However, detailed empirical analyses need to be carried out on systems of equations, as well as on sectors of the economy, before the role of nonlinearities of the type examined here can be unambiguously determined.

6. EARLY WARNINGS, FRAGILITY AND THE FUTURE

Modeling a sector of an economy may be broken into two parts. The first part considers the inter-relationships between the variables, or the "structure" of the economy. The second part examines the effect on this structure of "exogenous" shocks, where exogenous is taken to have its old-fashioned meaning of "coming from outside" the sector. Structural change will include both changes in policy variables and also parameter movements which are attributable to changes in taste or technology, say. The possibility that structural changes affect the frequency and distribution of exogenous shocks is ruled out in our example, although it clearly could occur. In a linear world, an exogenous shock has the same effect at any state of the economy, but this may not be the case with a non-linear error-correction equation. Consider an equation of the form

\Delta X_t = g(Z_{t-1}) + \varepsilon_t \left[1 + a\, h(Z_{t-1})\right] \qquad (18)

where Zt is an error-correction term and εt is an exogenous shock, for simplicity. First consider the case with a = 0, so that no heteroskedasticity is present in (18). Assume that g(z) is given by (15) with γ large, so that g(z) is small over some central band but takes large values for |z| outside this band. If the exogenous shock is small but |z| is large, Xt will change substantially. If the shock is large, but opposite in sign to g(z), then Xt may change very little. Finally, if the shock and g(z) are of a similar sign and both are large, then Xt will change greatly and be fragile. This fragility occurs particularly when |z| is large in this example. Now consider the case where g(z) is always small, a is non-zero, and ah(z) > 0 is small for |z| in (0, Z0), but is very large for |z| outside the band (0, Z0). Now a small shock can be amplified if Zt-1 happens to be large in magnitude. An analogy can be given by considering an avalanche. A snow pack on a hillside can accumulate, and remain stable, until a certain depth/temperature combination occurs. At this point, a shock in the form of a gunshot will produce instability and thus an avalanche. In all other circumstances, the gunshot will have absolutely no impact on the hillside. Thus the effect of the shock interacts with the measure of the disequilibrium, in our example. Also, different kinds of shocks could have different effects. A sudden reduction in temperature could increase snow stability while an increase in temperature could make the snow pack less stable; actually we are unsure of the true mechanics of avalanches, as this account probably shows.

The above example suggests that consideration of a collection of disequilibrium errors, or Z's, could indicate whether an economy, or a major sector of an economy, is fragile or not. If the economy is far from equilibrium in several important directions, it could be considered fragile, as an exogenous shock (or even a large endogenous one) could produce very large changes in the Z's or in transitory components, and hence in the short-run volatility of major variables. It may be worthwhile to examine fragility indices for major macro variables, policy variables, the labor sector, the financial sector, the international sector, and various other sectors of the economy which are of interest. However, discussion and empirical investigation are required before deciding whether such indices are likely to be useful, and for determining how they should be constructed. Error-correction terms and transitory components will be amongst those variables measuring the economy which will be most likely to react to exogenous shocks. Presumably, there will be a subset of variables that are publicly available and that react quickest to these shocks. However, as there are several different types of exogenous shocks, the same set of variables will not always react first to a new shock. For example, assume that there is a "core" economy that will eventually be affected by any shock. The idea is to specify a group of "early warning variables" that between them provide warnings of new shocks, plus estimates of the delays between the effects of the shocks going from this outer set of variables to the core.
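Returning to equation (18), a small simulation illustrates the state-dependent shock amplification. This is our own sketch; the band width z0, the slope, the heteroskedasticity function h(z), and the driving series are all arbitrary choices made for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    T, z0, gamma, a = 500, 1.0, 0.5, 2.0

    def g(z):
        # band attractor as in (15): no correction inside (-z0, z0)
        return -gamma * np.sign(z) * np.maximum(abs(z) - z0, 0.0)

    def h(z):
        # heteroskedasticity term: shocks are amplified outside the band
        return np.maximum(abs(z) - z0, 0.0)

    X = np.zeros(T)
    Y = np.cumsum(rng.standard_normal(T))      # a driving I(1) series
    for t in range(1, T):
        Z = X[t - 1] - Y[t - 1]                # error-correction term
        eps = rng.standard_normal()            # exogenous shock
        X[t] = X[t - 1] + g(Z) + eps * (1.0 + a * h(Z))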
Again, an obvious analogy exists in weather forecasting, before satellite images were available. Around a country, monitoring stations recorded weather changes, and depending on which direction the new weather came from, some of these stations provided leading indicators for the bulk of the country. For the economy, some of the desirable features for early warning variables are that they are recorded frequently, weekly or at most monthly, that they have major transitory components and so can change in value rapidly, and that economic reasoning suggests that they should lead the macro economy. Some retail sales and financial variables, such as interest rate spreads, are obvious candidates. Once candidates for inclusion have been selected, it should be possible to form nonlinear weighted averages of normalized variables, with weights that are zero unless large enough changes occur in the indicator. Clearly, empirical investigation is required so that the forecasting ability of the early warning variables can be determined. Standard impulse response analysis is unlikely to be helpful, as it attempts to identify shocks with individual variables, and inevitably has difficulties doing so. Also, exogenous shocks are likely to affect many variables, but with differing lags. One model which may have some potential is a dynamic factor analysis model. However, such models are difficult to implement in practice. As the definition of cointegration becomes relaxed, allowing generalizations of I(0), I(1), and I(d) variables to be utilized, by using time-varying parameter and nonlinear-in-mean generating mechanisms for example, more helpful and presumably more innovative interpretations of cointegration-based analysis should arise. Economists may get used to thinking of a sector of an economy not in terms of the basic variables

xt, but rather in terms of the derived disequilibrium measures zt and the common trends wt. These variables can perhaps be made more useful by some suitable normalizations or rotations suggested either by empirical properties of the data or by statistical or economic theory. The move from linear to nonlinear specifications has to be justified not only from beliefs about the actuality of nonlinearity in the economy, or from empirical evidence of its existence, but also from important uses, other than forecasting, which rely on non-linearity in the system.

REFERENCES

Banerjee, A., J. Dolado, J.W. Galbraith and D.F. Hendry (1993), Co-integration, Error-Correction and the Econometric Analysis of Non-Stationary Data, Oxford University Press.
Beveridge, Stephen and Charles R. Nelson (1981), "A New Approach to Decomposition of Economic Time Series into Permanent and Transitory Components with Particular Attention to Measurement of the 'Business Cycle'," Journal of Monetary Economics, 7, 151–174.
Engle, R.F. and C.W.J. Granger (1991), Long-run Economic Relationships: Readings in Cointegration, Oxford University Press.
Franses, Philip Hans (1992), "A Multivariate Approach to Modeling Univariate Seasonal Time Series," Discussion Paper, Econometric Institute, Erasmus University, Rotterdam.
Gonzalo, J. and C.W.J. Granger (1995), "Estimation of Common Long Memory Components in Cointegrated Systems", to appear.
Granger, C.W.J. (1986), "Developments in the Study of Cointegrated Economic Variables", Oxford Bulletin of Economics and Statistics, 48, 213–228.
Granger, C.W.J. (1993), "What Are We Learning About the Long Run?", Economic Journal, 103, 307–317.
Granger, C.W.J., T. Inoue and N. Morin (1995), "Non-linear Stochastic Trends," to appear.
Granger, C.W.J. and J. Hallman (1991), "Long Memory Processes with Attractors", Oxford Bulletin of Economics and Statistics, 53, 11–26.
Granger, C.W.J. and Jin-Lung Lin (1995), "Causality in the Long Run", to appear in Econometric Theory.

Granger, C.W.J. and N.R. Swanson (1995), "An Introduction to Stochastic Unit Root Processes", UCSD Working Paper.
Granger, C.W.J. and T. Teräsvirta (1993), Modelling Nonlinear Economic Relationships, Oxford University Press.
Hatanaka, M. (1995), Time Series Based Econometrics: Unit Roots and Cointegrations, Oxford University Press.
Johansen, S. (1995), Likelihood-Based Inference in Cointegrated Vector Autoregressive Models, Oxford University Press.
King, Robert G., Charles I. Plosser, James H. Stock and Mark W. Watson (1991), "Stochastic Trends and Economic Fluctuations," American Economic Review, 81, 819–840.
Konishi, T. and C.W.J. Granger (1993), "Separation in Cointegrated Systems", UCSD Working Paper.
Konishi, T., V. Ramey and C.W.J. Granger (1995), "Stochastic Trends and Short-Run Relationships Between Financial Variables and Real Activity", to appear.
Lasota, A. and M.C. Mackey (1987), "Noise and Statistical Periodicity," Physica, 28D, 143–154.
Mokkadem, A. (1987), "Sur un modèle autorégressif non linéaire, ergodicité et ergodicité géométrique," Journal of Time Series Analysis, 8, 195–204.
Swanson, N.R. (1995), "LM Tests and Nonlinear Error-Correction in Economic Time Series," Working Paper, Economics Department, Pennsylvania State University.
Warne, A. (1991), "A Common Trends Model: Identification, Estimation and Asymptotics", Working Paper, Economics Department, University of Stockholm.
Zarnowitz, V. and G.H. Moore (1977), "The Recession and Recovery of 1973–1976," Explorations in Economic Research, 4, 471–577.

PART THREE

LONG MEMORY

CHAPTER 17

An Introduction to Long-Memory Time Series Models and Fractional Differencing*

C. W. J. Granger and Roselyne Joyeux

Abstract

The idea of fractional differencing is introduced in terms of the infinite filter that corresponds to the expansion of (1 - B)^d. When the filter is applied to white noise, a class of time series is generated with distinctive properties, particularly in the very low frequencies, and with potentially useful long-memory forecasting properties. Such models are shown possibly to arise from the aggregation of independent components. Generation and estimation of these models are considered and applications to generated and real data presented.

Keywords: Fractional differencing, long-memory, integrated models.

1. ON DIFFERENCING TIME SERIES

It has become standard practice for time series analysts to consider differencing their series "to achieve stationarity." By this they mean that one differences to achieve a form of the series that can be identified as an ARMA model. If a series does need differencing to achieve this, it means that, strictly, the original undifferenced series has infinite variance. There clearly can be problems when a variable with infinite variance is regressed on another such variable using least squares techniques, as illustrated by Granger and Newbold (1974). A good recent survey of this topic is by Plosser and Schwert (1978). This has led time series analysts to suggest that econometricians should at least consider differencing their variables when building models. However, econometricians have been somewhat reluctant to accept this advice, believing that they may lose something of importance. Phrases such as differencing "zapping out the low-frequency components" are used. At first sight the two viewpoints appear irreconcilable, but it will be seen that, by considering a

* Journal of Time Series Analysis 1, 1980, 15–29.

general enough class of models, both sides of the controversy can be correct.

Suppose that xt is a series that, when differenced d times, gives the series yt, which has an ARMA representation. xt will then be called an integrated series with parameter d, and denoted xt ~ I(d). If yt has spectrum f(ω), then xt does not strictly possess a spectrum, but from filtering considerations the spectrum of xt can be thought of as

f_x(\omega) = |1 - z|^{-2d} f(\omega) , \quad \omega \ne 0 , \qquad (1)

where z = e^{-iω}. This follows by noting that differencing a series once multiplies its spectrum by |1 - z|^2 = 2(1 - cos ω). If yt is strictly ARMA, then lim_{ω→0} f(ω) = c, where c is a constant. c is taken to be positive, as, if c = 0, this may be thought to be an indication that the series has been overdifferenced. It follows that

f_x(\omega) = c\, \omega^{-2d} \quad \text{for } \omega \text{ small.}

Now consider the case where f_x(ω) is given by (1), but d is a fraction, 0 < d < 1. This corresponds to a filter a(B) = (1 - B)^d which, when applied to xt, results in an ARMA series. It will be shown that if 1/2 ≤ d < 1, then xt has infinite variance, and so the ordinary Box-Jenkins identification procedure will suggest that differencing is in order; but if xt is differenced, the spectrum becomes

f_{\Delta x}(\omega) = \left[2(1 - \cos\omega)\right]^{1-d} f(\omega) ,

so that f_{Δx}(0) = 0, and an ARMA model with an invertible moving average component is no longer completely appropriate. Thus, in this case, the time series analysts will suggest differencing to get finite variance, but if the series is differenced, its zero-frequency component will be removed and the econometrician's fears are realized. It seems that neither differencing nor not differencing is appropriate with data having the spectrum (1) with fractional d. In later sections, properties of these series are discussed and some open questions mentioned. It should be pointed out that if a series has a spectrum of the form (1), with fractional d, then it is possible to select a model of the usual ARMA(p, d, q) type, with integer d, which will closely approximate this spectrum at all frequencies except those near zero. Thus, models using fractional d will not necessarily provide clearly superior short-run forecasts, but they may give better longer-run forecasts, where modeling the low frequencies properly is vital. It will be seen that fractional d models have special long-memory properties which can give them extra potential in long-run forecasting situations. It is this possibility that makes consideration of single series of this class of interest. The generalized differencing and the solution of the difference-or-not controversy, together with the chance of obtaining superior relationships between series, is a further reason for believing that these models may be of importance. A discussion of how fractionally integrated models may arise is given in Granger (1980), and is summarized below. Long-memory models have been much considered by workers in the field of water resources. Good recent surveys are those by Lawrance and Kottegoda (1977) and Hipel and McLeod (1978). Many aspects of the models were first investigated by Mandelbrot (e.g., (1968), (1971)) in a series of papers. However, the fundamental reasoning underlying the long-memory models is quite different in these previous papers from that utilized here. The models that arise are not identical in details, and the statistical techniques used both differ and sometimes have different aims. However, it should be emphasized that many of the results to be reported have close parallels in this previous literature. The results presented below are closer in form to the classical time series approach and are, hopefully, easier to interpret. It should also be realized that this paper represents just those results achieved at the start of a much more detailed, and wider-ranging, investigation.

2. TIME SERIES PROPERTIES

Consider a series xt with spectrum

f(\omega) = a\,(1 - \cos\omega)^{-d} , \qquad (2)

where a is a positive constant. This is a series which, if differenced d times, will produce white noise. However, we now consider the case -1 < d < 1, but d ≠ 0, so that "fractional differencing" may be required. It will be assumed that xt is derived from a linear filter applied to zero-mean white noise and that xt has zero mean. The autocovariances, if they exist, will be given by

\mu_\tau = \int_0^{2\pi} \cos(\tau\omega)\, f(\omega)\, d\omega = a\, 2^{1-d} \int_0^{\pi} \cos(2\tau\omega)\, (\sin\omega)^{-2d}\, d\omega ,

by noting that 1 - \cos\omega = 2\left(\sin(\omega/2)\right)^2. Using the standard formula (Gradshteyn and Ryzhik (1965), page 372, equation 3.631.8)

\int_0^{\pi} \sin^{\nu-1} x\, \cos(ax)\, dx = \frac{\pi \cos(a\pi/2)}{2^{\nu-1}\, \nu\, B\!\left(\frac{\nu + a + 1}{2}, \frac{\nu - a + 1}{2}\right)} ,

and some algebra gives

\mu_\tau = a\, 2^{1+d} \sin(\pi d)\, \Gamma(1 - 2d)\, \frac{\Gamma(\tau + d)}{\Gamma(\tau + 1 - d)} , \quad \text{provided } -\tfrac{1}{2} < d < \tfrac{1}{2},\ d \ne 0 .

1 It will be seen later that, if d –2 , then m0, the variance, is infinite. It follows that the autocorrelations are given by G()1 - d G()t + d rt = ◊ . (3) G()d G()t +-1 d Using the standard approximation derived from Sheppard’s formula, that for j large, G(j + a)/G(j + b) is well approximated by ja-b, it follows that

2d-1 rt A(d)t (4) 1 ≠ for t large, and d < –2 , d 0. Note that for a stationary ARMA model

t rt Aq , ΩqΩ < 1 for t large, and that these values tend to zero exponentially and thus quicker than rt given by (4). This illustrates the “long-memory” aspect of series with spectrum (1) or (2).

The infinite moving average representation of xt will be denoted by • xbtjtj= Â e - j=0

= bB()e t , using the backward operator, B. It follows that the spectrum of xt is

fabzbz()w = ()(), -iw 2 where z = e and a = s e/2. Thus, as --dd fzz()wa=-()11,()- we can take a = a and -d bz()=-()1. z Such a filter b(B) will be called an integrating filter of order d. Using the standard binomial expansion

• k -d G()kdz+ ()11- z =+Â , k=1 GG()dk()+ 1 it follows that G()jd+ b = j ≥ 1 (5) j GG()dj()+ 1 d-1 Aj (6) for j large and an appropriate constant A.

Consider now an MA(•) model with bj, j 1, given exactly by (6), i.e., Long-Memory Time Series Models and Fractional Differencing 325

• d-1 yAjt =+Â eetj- t j=1 so that b0 = 1. This series has variance Ê • ˆ Vy()=+ A22s Á1 j 2()d- 1˜. e Ë Â ¯ j=1 From the theory of infinite series, it is known that

• Â j -s converges for s> 1 j=1 but otherwise diverges. Since it is easily shown that the variance of xt and that of yt differ only by a finite quantity, it follows that the variance of xt 1 1 is finite provided d < –2 , but is infinite if d –2 . The AR(•) representation of xt is • Â axjtj- ==e t, a0 1 , j=0 i.e.,

aBx()tt= e , which gives spectrum a f ()w = azaz()() so that, comparing with (2), d az()=-()1. z Hence G()jd- a = j ≥ 1 (7) j GG()11- dj()+ and, for j large -+()1 d aAjj . (8)

From (6) and (8), it is seen that bj and ΩajΩ tend to zero slower than expo- nential. It follows that no ARMA(p, q) model, with finite p and q would provide an adequate approximation for large j. From (5) and (7) it can be noted that aj is positive and bj negative if d is negative, and aj is neg- ative and bj positive if d is positive. The case d = 0 has been excluded throughout this section but this is just the white noise case, so that rj, bj and aj all are zero for j > 0. If a series is generated by the more general model, compared to (2), 326 C. W. J. Granger and R. Joyeux

-d xBaBt =-()1 ¢()e t where 0 < a¢(0) < • and et is white noise, results (4), (6) and (8) continue to hold for j large. A filter of the form

d aB()=-()1 B which, using the previously introduced phrase, is an integrating filter of order -d, can also be called a fractional differencing operator. This is 1 easily seen by taking d = –2 , as then applying a(B) twice corresponds to an ordinary, full difference and thus applying it once gives a half, or frac- tional, difference. The idea of half differencing should not be confused with that of differencing over a half sampling interval, as the two con- cepts are quite unrelated. It is not clear at this time if integrated models with non-integer d occur in practice and only extensive empirical research can resolve this issue. However, some aggregation results presented in Granger (1980) do suggest that these models may be expected to be relevant for actual eco- nomic variables. It is proved there, for example, that if xjt, j = 1,...,n are set of independent series, each generated by an AR(1) model, so that

xjt = aj xj,t-1 + e jt, j = 1,...,N where the ejt are independent, zero-mean white noise and if the aj’s are values independently drawn from a beta distribution on (0, 1), where

1 q-1 dF()aaaaa= 21p- ()10100- 2 dpq, ££ and > , > Bpq(), then if N xx=-Â jt, for N large, xIq~.()12 j-1 The shape of the distribution from which the a’s are drawn is only crit- ical near 1 for this result to hold.

A more general result arises from considering xjt generated by

xjt = aj xj,t-1 + yj,t + bjWt + ejt where the series yj,t, W1 and ejt are all independent of each other for all 2 j, ejt are white noise with variances s j , yj,t has spectrum fy(w, qj) and is at least potentially observable for each micro-component. It is assumed that there is no feedback in the system and the various parameters a, qj, b and s 2 are all assumed to be drawn from populations and the distrib- ution function for the a’s independent are generated by an AR(1) model, plus an independent causal series yj,t and a common factor causal series W1. With these assumptions, it is shown that Long-Memory Time Series Models and Fractional Differencing 327

(i) xId˜ tx~ () where dx is the largest of the three terms 1 - q/2 + dy,1 - q + dw and 1 - q/2, where yt ~ I(dy), Wt ~ I(dw), and (ii) if a transfer function model of the form

xaByaBWe˜ tt= 12() + () tt+ is fitted, then both a1(B) and a2(B) are integrating filters of order 1 - q. In Granger (1980) it was shown that integrated models may arise from micro-feedback models and also from large-scale dynamic econometric models that are not too sparse. Thus, at the very least, it seems that inte- grated series can occur from realistic aggregation situations, and so do deserve further consideration. There are a number of ways in which data can be generated to have the long-memory properties, at least to a good order of approximation. Mandelbrot (1971) has come down heavily in favour of utilizing aggre- gates of the form just discussed and has conducted simulation studies to show that series with the appropriate properties are achieved. An alter- native technique, which appears to be less efficient but to have easier interpretation has been proposed by Hipel and McLeod (1978). Suppose that you are interested in generating a series with autocorrelations rt, t = 0,1,...,with r0 = 1. Define the N ¥ N correlation matrix C = r Nij[]- , and let this have Cholesky decomposition

T CN = M◊M , where T denotes transpose and

M = [mij] is an N ¥ N lower triangular matrix.

Then it is easily shown that if et, t = 1,...,N are terms in a Gaussian white noise series with zero mean and unit variance, then the series t ymettii= Â (11) i=1 will have the autocorrelations rt. The generating process is seen to be non-stationary and is expansive for large N values. However, by using rt given by (3), a series with long-memory properties is generated. An obvious alternative is to use a long autoregressive representation, of the form m xaxtjmtj+=Â , - e t (12) j=1 where et is white noise and the aj,m are generated by solving the first m Walker–Yule equations with the theoretical values of rt given by (3), i.e., 328 C. W. J. Granger and R. Joyeux

È a1,m ˘ È r1 ˘ Ía ˙ Ír ˙ Í 2 ,m ˙ -1 Í 2 ˙ =-Cm (13) Í MM˙ Í ˙ Í ˙ Í ˙ amm, rm Î ˚ Î ˚ and

aj,m = 0, j > m. Clearly, m will have to be fairly large for a reasonable approximation to be achieved. The obvious problem with this technique is what starting- up values to use for the y’s. If the starting-up values do not belong to a long-memory process, such as using a set of zeros, the long-memory prop- erty of the model means that it will take a long time to forget this incor- rect starting-up procedure. To generate data, we decided to combine the Hipel–McLeod and the autoregressive methods. The Hipel–McLeod method was used to generate N observations which are then used as start-up values for the autoregressive equation, taking m = N, and then n values for yjt are generated. The methods just described are appropri- 1 ≠ 1 ate only for -1 < d < –2 , d 0. To generate data yt with –2 d < 1, xt is 1 first formed for -–2 < d < 0 and then yt generated by

yt = xt + yt-1. We have had a little experience with this generating procedure, with d = .25 and d = .45. Using N = 50 and n = 400 or 1000, the estimated auto- correlations did not compare well with the theoretical ones, but using N = 100 and n = 100 or 400 the estimated and theoretical autocorrelations matched closely for d = .25 but were less good for d = .45. Clearly, more study is required to determine the comparative advantages of alterna- tive generation methods and the properties of the series produced.

4. FORECASTING AND ESTIMATION OF d The obvious approach to forecasting models with spectra given by (2) is to find an ARMA or ARIMA model which has a spectrum approximat- ing this form. Unfortunately, this is a rather difficult problem as func- tions of the form (2), when expressed in z = e-iw, are not analytic in z and so standard approximation theory using rational functions does not apply. Whilst realizing that much deeper study is required, in the initial stages of our investigations we have taken a very simple viewpoint, and used AR(m) models of the type discussed above, in equations (12) and (13). In practice, one would expect series to possibly have spectral shapes of form (2) at low frequencies but to have different shapes at other fre- quencies. These other shapes, which can perhaps be thought of as being generated by short-memory ARMA models, will be important for short- term forecasts but will be of less relevance for long-term forecasts. Thus, the AR(m) model may be useful for forecasting 10 or 20 steps ahead, Long-Memory Time Series Models and Fractional Differencing 329 say, in a series with no seasonal. If m is taken to be 50, for example, a rather high order model is being used, but it should be noted that the parameters of the model depend only on d, and so the model is seen to be highly parsimonious. The method will, nevertheless, require quite large amounts of data. To use this autoregressive forecasting model, the value of d is required. There are a variety of ways of estimating the essential parameter model d. The water resource engineers use a particular re-scaled range variable which has little intuitive appeal (see, for instance, Lawrance and Koltegoda (1977)). Other techniques could be based on estimates of the logarithm of the spectrum at low frequencies. At this time we are taking a very pragmatic approach, by using ARd(m) models, with a grid of d values from -.9 to +.4, excluding d = 0, forming ten-step forecasts and then estimating the mean squared errors for each of the models with dif- ferent d-values, together with white noise random walk models. Some initial results that have been obtained are discussed in the following section. The method we have used is clearly arbitrary and sub-optimal, as the following theory shows. Suppose that the observed series xt has two uncorrelated components

xt = yt + zt, where yt is a “pure” long-memory series, having spectrum given exactly by (2), and zt is a stationary standard short-memory model that can be represented by an ARMA(p, q) model with small p and q and with all 2 roots not near unity. For large t, the autocovariances m t of zt will be neg- ligible, and so

x y r t Ar t ,

y where rt is given by (3) and var y A = . (14) var x To derive an autoregressive model of order m appropriate for long-run forecasting, the coefficients ajm in (12) should be solved from the new Walker–Yule equations L -1 È 1 Ar1 Ar2 Arm-1 ˘ Í Ar 1 Ar L Ar ˙ a = Í 1 1 m-2 ˙ r ◊A m Í M ˙ m Ar1 1 . Í . . ˙ ÎArm-1 1 ˚ 1 where am = (a1m, a2m,...,amm) and rm = (r1, r2,...,rm) which may be written

-1 aICmmmm=-[]()1 AA+ r ◊ A (15) 330 C. W. J. Granger and R. Joyeux

where Im is the m ¥ m unit matrix and Cm is the autocovariance matrix as introduced in the previous section. Obviously, if A = 1, (15) becomes identical to (13). (15) may be rewritten --11-1 (16) ammmmm=+CI[] DC r where 1 - A D = A and this can be expanded as

2 ---11È D 2 ˘ ammm=-++CÍ I DC m C m...˙r m . Î 2 ˚ Thus, the zero-order approximation, assuming D is small, is

()01- ammm= C r , which is identical to (13). The first-order approximation is

()11-- 2 amm=-[]CDC mmr , etc. There now effectively become two parameters to estimate, d and D. As stated before, in our preliminary investigation just zero-order approximations were used, and the relevance of this with real data needs investigation. The techniques discussed in this section only apply for the range -1 < d < .5, d ≠ 0. If d lies in the region .5 d < 1 a number of approaches could be taken, for instance, one could first difference and then get a series with -1 < d < 0, or one could apply the fractional differencing operator (1 - B).5, and then get a series with 0 < d < .5. The first of these two suggestions is much the easier, but the second may provide a better estimate of the original d. Clearly, much more investigation is required. As an indication of the forecasting potential of the long-memory models, Table 17.1 shows the following quantities: -d Vd()==-variance of yt () 1 B e t , where et is white noise with unit variance. yt is thus a pure long-memory 1 series, with spectrum given by (2) for all frequencies and with a = –2 p, N -1 2 SdNj()= Â bd(), j=0 which is the variance of the N-step forecast error using the optimal fore- cast for yn+N, using yn-j, j 0, and where bj are the theoretical moving average coefficients given by (5), and Vd()- S() d Rd2 ()= N N Vd() Long-Memory Time Series Models and Fractional Differencing 331

Table 17.1 Forecasting properties of long-memory models.

1. Ten-step Forecasts (N = 10) d V(d) S10(d) R10(d)

-.9 1.81 1.81 .369 E – 05 -.8 1.648 1.648 .227 E – 04 -.7 1.504 1.504 .745 E – 04 -.6 1.380 1.380 .182 E – 03 -.5 1.273 1.273 .365 E – 03 -.4 1.183 1.182 .611 E – 03 -.3 1.109 1.108 .840 E – 03 -.2 1.052 1.051 .868 E – 03 -.1 1.014 1.014 .485 E – 03 .1 1.019 1.017 .223 E – 02 .2 1.099 1.078 .185 E – 01 .3 1.316 1.204 .857 E – 01 .4 2.070 1.425 .312 E – 00 .5 1.791 .6 2.376 .7 3.289 .8 4.691 .9 6.816

2. Twenty-step Forecasts (N = 20) d V(d) S20(d) R20(d)

-.9 1.81 1.81 .460 E – 06 -.8 1.648 1.648 .331 E – 05 -.7 1.504 1.504 .127 E – 04 -.6 1.380 1.380 .362 E – 04 -.5 1.273 1.273 .823 E – 04 -.4 1.183 1.183 .164 E – 03 -.3 1.109 1.109 .263 E – 03 -.2 1.052 1.052 .315 E – 03 -.1 1.014 1.014 .204 E – 03 .1 1.019 1.018 .126 E – 02 .2 1.099 1.085 .121 E – 01 .3 1.316 1.232 .645 E – 01 .4 2.070 1.510 .270 E – 00 .5 2.016 .6 2.913 .7 4.487 .8 7.222 .9 11.938

which is a measure of the N-step forecastability of yt. Clearly, this measure only applies for series with finite variance, and so V and R2 are only defined for d < 0.5. The table shows these quantities for both ten- and twenty-step forecasts. It is seen that variance decreases as d goes 332 C. W. J. Granger and R. Joyeux from -.9 to -.1 and then increases again as d goes from .1 to .4. It might be noted that d =-1, which corresponds to a differenced white noise, has

V(-1) = 2 and, of course, RN(-1) = 0, N > 1. For d negative the amount of forecastability is low, but as d approaches .5 the results are much more impressive. For example, with d = .4, which corresponds to a finite vari- ance model and would presumably be identified as low-order ARMA by the standard Box–Jenkins techniques, the table shows that ten-step fore- cast error variance is 30% less than forecasts using just the mean and twenty-step forecast error variance is 27% less than simple short- memory forecasting models would produce. It is clear that the long- memory models would be of greatest practical importance if the real world corresponded to d values around 0.5. A slightly curious feature of the table is that RN(d) does not quite increase monotonically as d goes from -.9 to +.4, as there is a slight dip at d =-.1.

5. PRACTICAL EXPERIENCE To this point in time we have had only limited experience with the tech- niques discussed above, but it has been encouraging. Using the method described in section three, series of length 400 were generated with d = .25 and d = .45, using an AR(100) approximation with an initial 100 terms being generated by the Hipel–McLeod moving average model for use as start-up values. The following two tables show the theoretical and estimated auto- correlations for levels and differences together with some estimated partial autocorrelations. These allow ARIMA models to be identified and estimated by standard techniques.

d = .25 Level Differences

Est. Theor. Est. Est. Lag Autocorr. Autocorr. Partial Autocorr.

1 .41 .33 .41 -.42 2 .31 .24 .17 -.02 3 .24 .19 .08 -.09 4 .27 .17 .15 .03 5 .27 .15 .11 .01 6 .26 .14 .07 -.04 7 .29 .13 .13 .11 8 .19 .12 -.03 -.08 9 .18 .11 .01 .02 10 .14 .11 -.01 -.08 11 .20 .10 .07 .10 12 .15 .10 -.03 -.08 13 .18 .09 .06 .05 14 .15 .09 .00 -.02 15 .15 .09 .02 -.01 Long-Memory Time Series Models and Fractional Differencing 333

The approximate standard error for small lags is 0.05. For levels, the first 96 estimated autocorrelations are all non-negative and the first 22 are more than twice the standard error. If an AR(2) model is identified, it is estimated to be

xxtt=+-+...34--12 17 x t 03 e t ()()6 . 85 3 . 45 ()- . 05 (brackets show t-values) and the estimated residuals pass the usual simple white noise tests. An alternative model might be identified as ARIMA(0, 1, 1), but this will have similar long-run forecasting proper- ties as a random walk.

d = .45 Levels Differences

Est. Theor. Est. Partial Est. Est. Lag Autocorr. Autocorr. Corr. Autocorr. Partial

1 .67 .82 .67 -.38 -.38 2 .59 .76 .25 -.03 -.20 3 .53 .74 .12 -.07 -.19 4 .51 .71 .13 .06 -.06 5 .45 .70 .01 -.07 -.11 6 .44 .69 .07 .02 -.07 7 .41 .68 .04 -.05 -.11 8 .41 .67 .08 .02 -.09 9 .40 .66 .06 .02 -.04 10 .39 .65 .03 .01 -.02 11 .37 .65 .01 .05 -.06 12 .31 .64 -.07 -.08 -.04 13 .31 .63 .02 .02 -.01 14 .28 .63 -.01 -.02 -.04 15 .27 .63 .01 .02 -.03

For levels, the first 34 autocorrelations are non-negative and the next 65 are negative but very small. The first 16 are greater than twice the standard error. The results suggest that the generating mechanism has not done a good job of reproducing the larger lag autocorrelations. A relevant ARIMA model for this data might be IMA(1, 1) and the fol- lowing model was estimated:

()1- Bxtt=-ee 0.. 61 t-1 + 008 ()15 . 5 () 0 . 4

The grid-search method was applied to each series, that is ARd(50) models, with parameters depending just on different d-values, were used to forecast ten-steps ahead. The various mean-squared (ten-step) fore- cast errors (MSE) were found to be: 334 C. W. J. Granger and R. Joyeux

(i) d = 0.25 Selected d Resulting MSE d = .1 1.29 d = .2 1.36 d = .25 1.30 d = .3 1.263 d = .4 1.264 MSE (random walk) = 2.45 MSE (mean) = 1.45 MSE (estimated AR(2)) = 1.36 MSE (random walk) is the ten-step forecast error mean-squared error if one had assumed the series were a random walk. MSE (mean) is that resulting from forecasts made by a model of mean plus white noise, where estimate of the mean is continually updated. The grid method “estimates” d to be about 0.3, which is near the true value of 0.25, and produces forecasts that have a ten-step error variance which is somewhat better than the fitted AR(2) model and is considerably better than using a random walk or IMA(1, 1) model. The theoretical MSE with d = 0.25 is 1.13, which suggests that the forecasting method used is not optimal. The results are biased in favour of the AR(2) model, as its parameters are estimated over the same data set from which the mean-squared fore- cast errors were estimated. (ii) d = 0.45 Selected d Resulting MSE d = .1 2.27 d = .2 2.15 d = .3 2.06 d = .4 2.02 d = .45 2.01 MSE (random walk) = 2.84 MSE (mean) = 2.42 The theoretical achievable MSE using d = 0.45 is 1.58. Once more, the grid search has apparently selected the correct d (although values of d greater than 0.45 were not considered), the forecast method used was not optimal but the ten-step MSE achieved was about thirty percent better than if a random walk or IMA(1, 1) model had been used. The grid search procedure has been used by us on just one economic series so far, the U.S. monthly index of consumer food prices, not sea- sonally adjusted, for the period January 1947 to June 1978. The ordinary identification of this series is fairly interesting. The correlogram of the raw series clearly indicates that first differencing is required. For the first Long-Memory Time Series Models and Fractional Differencing 335 differenced series, the autocorrelations up to lag 72 are all positive and contain fifteen values greater than twice the standard error. The first twenty-four autocorrelations are: lag123456789101112 rk .36 .22 .17 .11 .14 .22 .17 .08 .17 .21 .19 .24 lag 13 14 15 16 17 18 19 20 21 22 23 24 rk .18 .07 .05 .10 .07 .11 .05 .07 .05 .08 .15 .11

The standard errors are .05 and .07 for the two rows. The partial auto- correlations are generally small except for the first, but the sixth, ninth and tenth are greater than twice the standard error. The parsimoni- ous model probably identified by standard procedures is thus ARIMA (1, 1, 0). Using first differenced series, plus extra d differencing, the grid gives the following ten-step forecasting mean square forecast errors for the series in level form: d -.9 -.8 -.7 -.6 -.5 -.4 -.3 -.2 -.1 MSE 139 128 115 101 86.1 72.1 58.4 46.1 35.7 d .1 .2 .3 .35 .4 .45 MSE 21.6 17.8 16.1 16.03 16.4 17.32

The variance of the whole series is 1204.6 and the ten-step forecast MSE using just a random walk model is 27.6. The evidence thus suggests that the original series should be differenced approximately 1.35 times and that substantially superior ten-step forecasts then result. The ten-step forecast MSE using an ARIMA(1, 1, 0) model is 19.85, which should be compared to the minimum grid value of 16.03. The actual model fitted was

()1- Bxttt=-... 37() 1 Bx-1 ++ 246 t e ()763 . () 477 . There is clearly plenty of further work required on such questions as how best to estimate d, how best to form forecasts for integrated models and the properties of these estimates and forecasts. It is clear that the techniques we have used in this paper are by no means optimal but hope- fully they do illustrate the potential of using long-memory models and will provoke further interest in these models. It is planned to investigate the above questions and also to find if these models appear to occur, and can be used to improve long-term forecasts, in actual economic data.

APPENDIX: THE d = 0 CASE The d = 0 case can be considered from a number of different viewpoints, which lead to different models. Some of these viewpoints are: 336 C. W. J. Granger and R. Joyeux

(i) If f(w) = a(1 - cosw)-d then simply taking d = 0 gives the usual sta- tionary case with f(0) = c, where c is a positive but finite constant. This corresponds to taking d = 0 in an ARIMA(p, d, q) model. (ii) If one considers aggregates of the form

xjt =aj xj,t-1 + bjet N zxtjt= Â j=1 then approximately È b ˘ zdFett= ÍÚ ()ab, ˙ Î 1 - ab ˚ i.e.,

zBett=-b log() 1 if a, b are independent and a is rectangular on (0, 1). This corresponds to a series with spectrum proportional to log(1 - z) log(1 - z–), which takes the form (logw)2 for small w and so is infinite at w = 0. The moving average form corresponding to this model has bi A/j for j large, which is the same as equation (6) with d = 0. The autocovariances take the form log t u s 2 for large t . t e t This type of model can be thought of as arising from applying filters of the form [(1 - B)d - 1]/d to white noise, and then letting d Æ 0. (iii) By looking at equations (4) and (8), one could ask what models correspond to autoregressive equations with A a for large j j j or have autocovariances of form A m t t for large t, which arose in section 3 from a particular aggregation. The relationships between these various viewpoints and the relevance for forecastings need further investigation.

REFERENCES Gradshteyn, I. S. and I. M. Ryzhik (1965) Tables of Integrals, Series and Prod- ucts (4th Edition), Academic Press. Long-Memory Time Series Models and Fractional Differencing 337

Granger, C. W. J. (1980) Long Memory Relationships and the Aggregation of Dynamic Models. To appear Journal of Econometrics. Granger, C. W. J. and P. Newbold (1974) Spurious Regressions in Economics, Journal of Econometrics, 2, 111–120. Hipel and McLeod (1978) Preservation of the Rescaled Adjusted Range Parts 1, 2 and 3, Water Resources Research 14, 491–518. Lawrance, A. J. and N. T. Kottegoda (1977) Stochastic Modeling of Riverflow Time Series, Journal of the Royal Statistical Society, A 140, 1–47. Mandelbrot, B. B. and J. W. Van Ness (1968) Fractional Brownian Motions, Fractional Noises and Applications, SIAM Review 10, 422–437. Mandelbrot, B. B. (1971) A Fast Fractional Gaussian Noise Generator, Water Resources Research 7, 543–553. Plosser, C. I. and G. W. Schwert (1978) Money, Income and Sunspots: Measur- ing Economic Relationships and the Effects of Differencing, Journal of Mon- etary Economics 4, 637–660. Rosenblatt, M. (1976) Fractional Integrals of Stochastic Processes and the Central Limit Theorem, Journal of Applied Probability 13, 723–732. CHAPTER 18

Long Memory Relationships and the Aggregation of Dynamic Models* C. W. J. Granger

By aggregating simple, possibly dependent, dynamic micro-relationships, it is shown that the aggregate series may have univariate long-memory models and obey integrated, or infinite length transfer function rela- tionships. A long-memory time series model is one having spectrum or order w-2d for small frequencies w, d > 0. These models have infinite vari- 1 1 ance for d –2 but finite variance for d > –2 . For d = 1 the series that need to be differenced to achieve stationarity occur, but this case is not found to occur from aggregation. It is suggested that if series obeying such models occur in practice, from aggregation, then present techniques being used for analysis are not appropriate.

1. INTRODUCTION In this paper it is shown that aggregation of dynamic equations, that is equations involving lagged dependent variables, can lead to a class of model that has fundamentally different properties to those in current use in econometrics. If these models are found to arise in practice, then they should prove useful in improving long-run forecasts in economics and also in finding stronger distributed lag relationships between economic variables. The following definitions are required for later sections:

Suppose that xt is a zero-mean time series generated from a zero- 2 mean, variance s white noise series et by use of the linear filter a(B), where B is the backward operator, so that

k xaBBtt= ()eee,,ttk= - (1) and that a(B) may be written

-d aB()=-()1, B a¢() B (2)

* Journal of Econometrics, 14, 1980, 227–238. Long Memory Relationships and Aggregation 339

where a¢(z) has no poles or roots at z = 0. Then xt will be said to be “integrated of order d” and denoted

xIdt ~.() Note that d need not be an integer. Further, defining d xBxaBt¢=()1 - tt=¢()e , then xt¢ ~ I(0), because of the stated properties of a¢(b). a(B) will be called an “integrating filter of order d”. If a¢(B) is the ratio of two

finite polynomials in B of orders l and m, and if d is an integer, then xt will be ARIMA (l,d,m) in the usual Box and Jenkins (1970) notation. In the more general models considered here, d need no longer be an integer. To help with interpretation, one can consider the idea of “fractional differencing.” The usual differencing procedure consists of using the operator (1 - B). Suppose there is a filter a(B) such that when used twice, one gets the usual difference, i.e., a(B)2 = (1 - B). Clearly, such a filter can exist and also that if this filter is used just once, it can be thought of as “half differencing”, which is an example of fractional differencing with 1 d = –2 . An integrated series is one that requires fractional differencing to achieve a stationary ARMA series. An introduction to this class of models may be found in Granger and Joyeux (1980); other accounts and references may be found in Hipel and McLeod (1978), Lawrance and Kottegoda (1977), and Mandelbrot and Van Ness (1968). Some of the main properties of these models may be summarized as follows:

Using the well-known results of filtering theory, the spectrum of xt, given by (1) and (2), is seen to be

2 1 2 s f ()w = az¢() ,. z= eiw x 2d 2p 1 - z It follows that for small w, -2d fcx ()ww, (3) where

2 s 2 ca=¢()()1. 2p

Note that for a stationary ARMA series, fx(w) c for w small, but for an ARIMA (p,1,q) series, fx(w) is as (3) but with d = 1. Thus, neither model provides an adequate approximation to the non-integer inte- grated model. It should be also noted that for longer-run forecasting purposes, it is this low-frequency part of the spectrum that is the most important. 340 C. W. J. Granger

Consider now the case where a(B) is a pure integrating filter of order d, so that -d xBt =-()1 et , (4) then it is shown in Granger and Joyeux (1979) that

2 s e G()kd+ cov()xxttk ,- = sin()p d G()12- d , 21p G()kd+- 1 provided d < –2 . The variance of xt increases as d increases and is infinite 1 for d –2 . It follows that G()1 - d G()kd+ rkttk= corr()xx ,- = , (5) G()d G()kd+-1 1 for d < –2 and d π 0. Of course,

rk = 0, k > 0ifd = 0, which is the white noise case. Writing

• • xbtjtj==ÂÂee- and axjtj- t , j=00j= as the MA(•) and AR(•) representations of xt, one finds that with xt generated by (4),

G()jd+ bj = ,,,jj10π (6) GG()dj()+ 1 and

G()jd- aj = ,,j 1 (7) GG()-dj()+ 1 Using the fact, easily derived from Sterling’s theorem, that G(j + a)/G(j + b) is well approximated by ja-b for large j, it follows that (5), (6) and (7) may be approximated for large j by

2d-1 rj A1j ,(5¢)

d-1 bj A2j ,(6¢)

-(1+d) |aj| A3j ,(7¢) where A1, A2 and A3 are appropriate constants. If xt is generated by the more general model (1), (2), then eqs. (5¢), (6¢) and (7¢) will still hold for the autocorrelations, moving average and autoregressive parameters for large j but the constants A1, A2 and A3 will alter. The fact that the rj and Long Memory Relationships and Aggregation 341

bj decline at a slower rate than for any ARMA (l,m) model with finite l, m suggests that the series will possess interesting and potentially useful long-memory properties, particularly if d > 0. This is also seen from (3), which gives a low-frequency form for the spectrum that is different from any ARIMA (l,d,m) model with finite l, m and integer d. As the low- frequencies are of central importance in long-run forecasting, getting the correct model corresponding to (3) is seen to be important. It should be noted that if d Æ-•, then bj Æ 0 for j > 0, so that white noise is obtained.

The algebra of integrated series is quite simple, as if xt ~ I(d) and an integrating filter is applied to it, to form -d¢ yBxt =-()1,t then

yIddt ~.()+¢

If x1t ~ I(d1) and x2t ~ I(d2), with x1t and x2t independent, then

xx=+12tt x~ I() max() dd 12 , . It follows that if a relationship of the form

xaByettt= () + is constructed, with xy ~ I(dx), yt ~ I(dy) and a(B) is an integrating filter of order d, then et ~ I(de) with dx = max(de,d + dy) and if dy > dx, then d < 0.

2. AGGREGATION OF INDEPENDENT SERIES

Suppose that x1t, x2t are a pair of series generated by

xjt = ajxj,t-1 + ejt, j = 1,2, (8) where e1t, e2t are a pair of independent, zero-mean white noise series, then their sum,

x¯t = x1t + x2t, is easily shown to obey an ARMA (2,1) model [see, for example, Granger and Morris (1976) and Granger and Newbold (1977)]. The autore- gressive part of this model is (1 - a1B)(1 - a2B). If now N independent series are added, each obeying an AR(1) model with different a values, of the form (8), then their sum will be ARMA(N,N - 1) unless cancel- lation of roots occurs between the autoregressive and moving average sides of the model. Many of the important microeconomic variables are aggregates of a very large number of micro-variables: Total personal income, unemployment, consumption of non-durable goods, inventories, and profits are just a few examples. Although the components of these 342 C. W. J. Granger macroseries are not independent, the above results can be gener- alised to suggest that the expected models for aggregates will have huge numbers of parameters, which is not what is found in practice. An alter- native point of view is obtained by considering f¯(w), the power spectrum of the aggregate series

N xxtjt= Â , j=1 where each xjt is generated by an AR(1) model, such as (8). The power spectrum of xjt is

1 var()e jt -iw fj ()w = 2 ◊ ,,z = e (9) 1 -a z 2p j and the spectrum of x¯ is then

N ff()ww= Â j (), j=1 as the components are independent. If the aj are assumed to be random variables drawn from a population with distribution function F(a), and similarly var(ejt) are drawn from some population and are independent of the a’s, one gets the approximation N 1 f ()w E[]var()e ◊ dF()a . (10) 2p jt Ú 2 1 -az This is, of course, a standard technique for considering the effects of aggregation; see Theil (1954). If F(a) is the distribution function of a dis- crete random variable on the region -1 to 1, so that a can take just m specific values in this range, then f¯(w) will be the spectrum of an ARMA(m,m - 1) process. However if a can take any value in some region, so that it is a continuous variable, then f¯(w) will correspond to no ARMA process having of finite number of parameters. To proceed further it is necessary to assume a particular distribution for a, and for mathematical convenience it will be assumed that a has a form of beta distribution on the range (0,1). It will be argued later that the exact form selected for this distribution function is not of critical importance except near a = 1. The range 0 to 1 can be easily changed and alternatives are considered below. The particular form of the beta distribution used here is

2 q-1 ddF()aaaaa= 21p- ()101- 2 ,, Bpq(), = 0, elsewhere, (11) Long Memory Relationships and Aggregation 343 where p > 0, q > 0. A wide variety of shapes can be taken by this func- tion with different choices of p and q. Noting that one can write

1 1 È1+az 1+az ˘ = + , 22ÎÍ1 -az 1 -az ˚˙ 1 -aaz ()1 - and that

• 1+az j =+12 ()az , 1 -az  j=1 it follows from (10) and (11) that the coefficient of zk in f¯(w) is

1 2 q-2 aaa21pk+-()1 - 2 d , Bpq(), Ú 0 – – and this coefficient is m k, the kth autocovariance of xt, from the standard Fourier expansion of a spectrum. Thus, provided q < 1,

Bp()+- k21, q m = k Bpq(), GG()q - 12()pk+ = ◊ , Bpq(), G()pk++-21 q which, for large k, gives the approximation

– 1-q m k = A4k . – Comparing this with (5¢), it follows that xt ~ I(1 - q/2). It should be noted 1 – that if q > 1, then 1 - q/2 < – and xt will have finite variance, but if – 2 0 < q £ 1, xt will not have finite variance. It is seen that the order of the integration, 1 - q/2, does not depend on p and so the shape of dF(a) appears to be of little relevance in this respect except near a = 1, where q determines the slope of dF(a) in the form chosen. Because of this, it is easily seen that the range of a can be changed to a to 1, a > 0, without any effect on the main result. If the upper end of the range is changed from 1 to b, important changes do occur. If b > 1, x– can become explo- sive, which is generally considered to be inappropriate for economic vari- ables. If b < 1, the beta distribution on the range (0,b) is

q-1 2 aa21p- ()b22- ()pq+-1 , Bpq(), b and from this it is simple to show that

– k 1-q m k A5b k , for large k. 344 C. W. J. Granger

Although this does not strictly correspond to the autocovariance of any ARMA model with a finite number of parameters, such a model is likely to provide a good approximation in most cases, provided b is not very – near to 1. Thus, to get xt ~ I(0), one way is to require the aj’s in the indi- vidual AR(1) component models to be constrained to be less than some quantity which is strictly less than one. A second way is to take b = 1 but let q Æ•.

The assumption that the ejt are all white noises can be removed. Suppose that the xjt are generated by

xjt = ajxj,t-1 + yjt, (12) where yjt has spectrum fy(w,qj) depending on the vector of parameters qj. Further suppose that the yjt are all independent and that the a’s and q’s are drawn from independent populations. Then eq. (10) becomes

1 fNEf()wwq (,. ) dF()a q []y Ú 2 1 -az It follows immediately with the previous assumptions that if

N yyIdtjty= Â ~,() j=1 then xId()+- q ty~ 1 from the previous results. (j) The power cross-spectrum between xjt and yjt, cr (w), is seen from (12) to be

f ()wq, cr ()j ()w = yj,,z = e-iw a z 1 - j and, because all components are assumed independent, the cross- – – spectrum between xt and yt, denoted cr(w), is given by

N cr()ww= Â cr ()j (), j=1 and so approximately

1 cr()wwqNEq [] fy (,. ) dF()a Ú 1 -az Using the same distribution as before for a gives Long Memory Relationships and Aggregation 345

• Bp() k q k + 2, cr()wwqNEq [] fy (, )Â z , k=0 Bpq(), and so the coefficient of zk in the sum is of the order of k-q. If a one-way causal transfer function or distributed lag equation is fitted relating x¯t and y¯t, of the form

xaByettt= () + , then as az()= cr()wwf () y , it follows that a(B) will be an integrating filter of order d = 1 - q from (6¢), providing q π 1.

3. AGGREGATION OF DEPENDENT SERIES Initially, consideration is given just to the perfectly dependent set of series xjt generated by

xjt = ajxj,t-1 + bjWt. (13) Later, these models will be embedded in a more general, and acceptable, class. It is seen immediately that

N Ê b j ˆ x = Á ˜W , (14) t ËÂ 1 -a B¯ t j=1 j which can be approximated by

Ê 1 ˆ xNEtt[]b Ú d,FW()a Ë 1 -aB ¯ assuming that the a’s and b’s are drawn from independent populations. With the usual assumption about F(a), from the result at the end of the previous section, it is seen that

Ê • Bp()+ k2, qˆ xNEB []b Á k ˜W , t ËÂ ¯ t k=0 Bpq(), and so the coefficient of Bk for large k is of order k-q. It follows from (6¢) that if Wt I(dW), then

xItW~.()1 -+ qd If a one-way causal transfer function equation of the form

xaBWtt= () , 346 C. W. J. Granger is fitted, then – as before – a(B) will be an integrating filter of order 1 - q. Consider now the more general model

xjt = ajxj,t-1 + yj,t + bjWt + ejt, (15) where the series yjt, Wt and ejt are all independent of each other for all j, 2 ejt are white noises with variances s j, yj,t has spectrum fy(w,qj) and is at least potentially observable for each micro-component. It is assumed that there is no feedback in the system, so that xjt does not cause yjt or 2 Wt. The various parameters a, q, b and s are all assumed to be drawn from independent populations and the distribution function for the a’s is still given by (11). With these assumptions the results obtained above can be combined to give:

(i) xIdtx~,() where dx is the largest of the three terms: 1 - q/2 + dy (coming from the yjt components), 1 - q + dw (coming from the wt components), and 1 - – q/2 (coming from the ejt components), where yt ~ I(dy) and Wt ~ I(dW).] It is thus seen that, with the type of aggregation considered, integrated models are inclined to occur, unless the a’s are constrained to be strictly less than one. Note that with the beta distribution used, given in (11), prob(a = 1) = 0 if q > 0. Further, if dy = dw = 0, then dx < 1 with q > 0. Thus, in this case, ordinary single differencing will not be required because of aggregation. With dy = dw = 0, to get dx = 1 one needs a distribution for the a’s with prob(a = 1) π 0. However, if dy > 0, for instance, dx = 1 can occur in the case generally considered in this paper. (ii) If a transfer-function model of the form

xaByaBWett= 12() + () tt+ (16) is fitted, then both a1(B) and a2(B) will be integrating filters of order 1 - – q. The term in (16) involving yt would contribute 1 - q + dy to dx, which can be compared to the 1 - q/2 + dy contributed by all the yjt’s. As q > 0, – it is seen that the y term makes a smaller contribution to dx, which may be thought of as a measure of the information loss in using the aggre- – – gate y to “explain” xt rather than all the individual yj,t’s. If et ~ I(dx), then if dx > 1 - q + dw, it follows that de = dx, but if dx = 1 - q + dw, then de can – be less than dx, assuming yt and et independent.

4. SOME OTHER MODELS The possibility of feedback between the micro-variables was excluded in the previous sections. To show that similar results are likely to be found from simple feedback situations, consider the micro-model Long Memory Relationships and Aggregation 347

1 1 ¸ xjt = e jt + h jt Ô 1 -a j BB1 - b j Ô ˝, (17) 1 1 y = e + h Ô jt 1 -g BBjt 1 -d jt Ô j j ˛ where ejt, hjt are a pair of independent zero-mean white noise series. This is a two-way causal, or feedback system for each micro-component. – – Clearly, from the results of section 2, the aggregates xt and yt will both be integrated series if the a’s, b’s, g ’s and d’s are all drawn from inde- – – pendent beta populations. The relationships between xt and yt are of a feedback nature and the transfer functions involved will correspond to integrating filters. The exact analytical details are not important, as eqs. (17) are not of the usual form for feedback models and assumptions of independence between the parameters a, b, g and d may not correspond to actual microeconomic theory. It is possible to present a heuristic argument that long-memory or integrating processes can arise from very large-scale dynamic, econo- metric models. Suppose that the model takes the simple form (18) ABx()tt= e , where the (j,k)th element of the N ¥ N matrix A(B) is ajk + bjkB, xt is a N ¥ 1 vector of economic variables, and et is an N ¥ 1 white noise vector. Thus, the equations of the model allow any variable to be lagged once. Writing

-1 xABtt= ()e , and nothing that each element of A-1(B) will be the ratio of a polyno- mial in B of order N - 1 divided by a polynomial in B of N, and further that all such ratios can be written as

N Â()cBjj()1 -a , (19) j=1 assuming all the roots of |(A(z)| = 0 are real, for convenience, it follows that each element of xt will be the sum of N components, one for each ejt, and each of these components will be the sum of N AR(1) type filters applied to a white noise series. Comparing this construction with those met in sections 2 and 3 above strongly suggests that each component of xt will be integrated and relationships between components will involve integrating transfer functions, provided the roots of |A(z)| = 0 are drawn from a beta distribution on the range (a,1). This argument is not rigor- ous, because the cj in (19) will not be independent of each other or of the a’s. The details seem to be very complex but the likely conclusions from a more careful analysis are probably those already indicated by the heuristic argument. 348 C. W. J. Granger

Although the results presented in this paper suggest that integrated processes, and long-memory relationships are likely to occur from aggre- gation of dynamic models, it should be pointed out that they by no means necessarily arise. For example, if

xjt = ejt + bjej,t-1, – so that each microvariable is MA(1), then xt will also be MA(1). – Similarly, if each xjt is IMA(d,q), then so will be the aggregate xt.This dif- ference between aggregating AR(1) and MA(1) models is quite dra- matic.

5. CONCLUSION It has been shown that aggregation of dynamic equations can lead to models, both for single variables and relating pairs of aggregates, that are quite different from those currently in use. This means that present models may well be mis-specified and are being inefficiently estimated. The practical problems of estimating these new, alternative models requires further research.

REFERENCES Box, G.E.P. and G.M. Jenkins, 1970, Time series analysis, forecasting and control (Holden Day, San Francisco, CA). Granger, C.W.J. and R. Joyeux, 1980. A introduction to long-memory time series and fractional differencing, Journal of Time Series Analysis 1, forthcoming. Granger, C.W.J. and M. Morris, 1976, Time series modeling and interpretation, Journal of the Royal Statistical Society A 38, 246–257. Granger, C.W.J. and P. Newbold, 1977, Forecasting economic time series (Acade- mic Press, New York). Hipel, W.H. and A.I. McLeod, 1978, Preservation of the rescaled adjusted range, Parts 1–3, Water Resources Research 14, 491–518. Lawrance, A.J. and N.T. Kottegoda, 1977, Stochastic modeling of river-flow time series, Journal of the Royal Statistical Society A 140, 1–47. Mandelbrot, B.B. and J.W. Van Ness, 1968, Fractional Brownian motions, frac- tional noises and applications, SIAM Review 10, 422–437. Theil, H., 1954, Linear aggregation of economic relations (North-Holland, Amsterdam). CHAPTER 19

A Long Memory Property of Stock Market Returns and a New Model* Zhuanxin Ding, Clive W. J. Granger, and Robert F. Engle**

Abstract

A “long memory” property of stock market returns is investigated in this paper. It is found that not only there is substantially more correlation between absolute returns than returns themselves, but the power trans- d formation of the absolute turn |rt| also has quite high autocorrelation for d long lags. It is possible to characterize |rt| to be “long memory” and this property is strongest when d is around 1. This result appears to argue against ARCH type specifications based upon squared returns. But our Monte-Carlo study shows that both ARCH type models based on squared returns and those based on absolute return can produce this property. A new general class of models is proposed which allows the power d of the heteroskedasticity equation to be estimated from the data.

1. INTRODUCTION

If rt is the return from a speculative asset such as a bond or stock, this d paper considers the temporal properties of the functions |rt| for positive values of d. It is well known that the returns themselves contain little serial correlation, in agreement with the efficient market theory.

However, Taylor (1986) found that |rt| has significant positive serial cor- relation over long lags. This property is examined on long daily stock d market price series. It is possible to characterize |rt| to be “long- memory”, with quite high autocorrelations for long lags. It is also found, as an empirical fact, that this property is strongest for d = 1 or near 1 compared to both smaller and larger positive values of d. This result

* Journal of Empirical Finance, 1, 1993, 83–116. ** We thank Jurg Barlocher, Xiaohong Chen,Takeo Hoshi, Bruce Lehman,Victor Ng, and Ross Starr for helpful comments and discussions. We are also grateful to the editor (Richard T. Baillie) and two anonymous referees for their constructive comments. The second and third authors would like to thank NSF for financial support. 350 Z. Ding, C. W. J. Granger and R. F. Engle

Table 19.1 Summary statistics of rt.

sample studentized normality data size mean std skewness kurtosis min max range test

rt 17054 0.00018 0.0115 -0.487 25.42 -0.228 0.154 33 357788 appears to argue against ARCH type specifications based upon squared returns. The paper examines whether various classes of models are consistent with this observation. A new general class of models is then proposed which allows the power d of the heteroskedasticity equation to be estimated from the data. The remainder of this paper is organized as follows: In section 2, we give a brief description of the data we use. In section 3 we carry out the autocorrelation and cross-correlation analysis. The special pattern of the autocorrelogram and crosscorrelogram of the stock returns is exploited and presented. Section 4 investigates the effect of temporal aggregation on the autocorrelation structure and examines the short sample auto- correlation property of stock returns. Section 5 presents a Monte Carlo study of various financial models. Based on this, we propose a new general class of models in section 6. Section 7 concludes the analysis.

2. THE DATA The data set we will analyze in this paper is the Standard & Poor 500 (hereafter S&P 500) stock market daily closing price index.1 There are altogether 17055 observations from Jan 3, 1928 to Aug 30, 1991. Denote pt as the price index for S&P 500 at time t (t = 0,...,17055). Define

rt = lnpt - lnpt-1 (1) as the compounded return for S&P 500 price index at time t (t = 1,..., 17054).

Table 19.1 gives the summary statistics for rt. We can see from Table 19.1 that the kurtosis for rt of 25.42 is higher than that of a normal dis- tribution which is 3. The kurtosis and studentized range statistics (which is the range divided by standard deviation) show the characteristic “fat- tailed” behavior compared with a normal distribution. The Jarque–Bera normality test statistic is far beyond the critical value which suggests that rt is far from a normal distribution. Figs. 19.1, 19.2 and 19.3 give the plots of pt, rt and |rt|. We can see from the figures the long run movement of daily pt, rt,|rt| over the past 62 years. There is an upward trend for pt but rt is rather stable around mean

1 We are indebted to William Schwert for providing us the data. A Long Memory Property of Stock Market Returns 351

Figure 19.1. Standard & Poor 500 daily price index 01/03/28–08/30/91.

Figure 19.2. Standard & Poor 500 daily returns 01/04/28–08/30/91.

Figure 19.3. Standard & Poor 500 daily absolute returns 01/04/28– 08/30/91.

m = 0.00018. From the series |rt|, we can clearly see the observation of Mandelbrot (1963) and Fama (1965) that large absolute returns are more likely than small absolute returns to be followed by a large absolute return. The market volatility is changing over time which suggests a suit- able model for the data should have a time varying volatility structure as suggested by the ARCH model. During the Great Depression of 1929 and early 1930s, volatilities are much higher than any other period.There is a sudden drop in prices on Black Monday’s stock market crash of 1987, but unlike the Great Depression, the high market volatility did not last very long. Otherwise, the market is relatively stable.

3. AUTOCORRELATION ANALYSIS OF THE RETURN SERIES It is now well established that the stock market returns themselves contain little serial correlation [Fama (1970), Taylor (1986)] which is in 352 Z. Ding, C. W. J. Granger and R. F. Engle

Table 19.2 Autocorrelations of rt. data lag 1 2 3 4 5 10 20 40 70 100

rt 0.063 -0.039 -0.004 0.031 0.022 0.018 0.017 0.000 0.000 0.004

|rt| 0.318 0.323 0.322 0.296 0.303 0.247 0.237 0.200 0.174 0.162 a r t 0.218 0.234 0.173 0.140 0.193 0.107 0.083 0.059 0.058 0.045

0.3 0.2 0.1

0.0 0 20 40 60 80 100 Figure 19.4. Autocorrelation of |r|. r**2, r from high to low. agreement with the efficient market theory. But this empirical fact does not necessarily imply that returns are independently identically distrib- uted as many theoretical financial models assume. It is possible that the series is serially uncorrelated but is dependent. The stock market data is especially so since if the market is efficient, a stock’s price should change with the arrival of information. If information comes in bunches, the dis- tribution of the next return will depend on previous returns although they may not be correlated. Taylor (1986) studied the correlations of the transformed returns for 40 series and concluded that the returns process is characterized by sub- stantially more correlation between absolute or squared returns than there is between the returns themselves. Kariya et al. (1990) obtained a similar result when studying Japanese stock prices. Extending this line d we will examine the autocorrelation of rt and |rt| for positive d in this section, where rt, is the S&P 500 stock return. 2 Table 19.2 gives the sample autocorrelations of rt,|rt| and rt for lags 1 2 to 5 and 10, 20, 40, 70, 100. We plot the autocorrelogram of rt,|rt| and rt from lag 1 to lag 100 in fig. 19.4. The dotted lines show ±1.96/T which is the 95% confidence interval for the estimated sample autocorrelations if the process rt is independently and identically distributed (hereafter i.i.d.). In our case T = 17054 so ±1.96/ T = 0.015. It is proved [Bartlett

(1946)] that if rt is a i.i.d process then the sample autocorrelation rt is approximately N(0, 1/T). In fig. 19.4, about one quarter of the sample autocorrelations within lag 100 are outside the 95% confidence interval for a i.i.d process. The first lag autocorrelation is 0.063 which is significantly positive. Many other researchers [see Fama (1976), Taylor A Long Memory Property of Stock Market Returns 353

d Table 19.3 Autocorrelations of |rt| . d lag 1 2 3 4 5 10 20 40 70 100

0.125 0.110 0.108 0.102 0.098 0.121 0.100 0.100 0.095 0.065 0.089 0.25 0.186 0.181 0.182 0.176 0.193 0.164 0.164 0.148 0.120 0.131 0.5 0.257 0.255 0.263 0.251 0.259 0.222 0.221 0.192 0.166 0.165 0.75 0.297 0.299 0.305 0.286 0.291 0.246 0.241 0.207 0.180 0.173 1 0.318 0.323 0.322 0.296 0.303 0.247 0.237 0.200 0.174 0.162 1.25 0.319 0.326 0.312 0.280 0.295 0.227 0.211 0.174 0.153 0.138 1.5 0.300 0.309 0.278 0.242 0.270 0.192 0.170 0.136 0.122 0.106 1.75 0.264 0.276 0.228 0.192 0.234 0.149 0.125 0.095 0.088 0.073 2 0.218 0.234 0.173 0.140 0.193 0.107 0.083 0.059 0.058 0.045 3 0.066 0.088 0.036 0.025 0.072 0.019 0.009 0.004 0.006 0.003

(1986), Hamao et al. (1990)] also found that most stock market return series have a very small positive first order autocorrelation. The small positive first order autocorrelation suggests that the rt do have some memory although it is very short and there is a portion of stock market returns that is predictable although it might be a very small one. So the efficient market or random walk hypothesis does not hold strictly.Alter- natively, this could be from non-synchronous measurement of prices.The second lag autocorrelation (=-0.039) is significantly negative which sup- ports the so called “mean-reversion” behaviour of stock market returns. This suggests that the S&P 500 stock market return series is not a realization of an i.i.d process.

Furthermore, if rt is an i.i.d process, then any transformation of rt is 2 also an i.i.d process, so will be |rt| and rt .The standard error of the sample autocorrelation of |rt| will be 1/ T = 0.015 if rt has finite variance, the 2 same standard error is applicable for the sample autocorrelation of rt providing the rt also have finite kurtosis. But from Fig. 19.4, it is seen that 2 not only the sample autocorrelations of |rt| and rt are all outside the 95% confidence interval but also they are all positive over long lags. Further, the sample autocorrelations for absolute returns are greater than the sample autocorrelations for squared returns at every lag up to at least 100 lags. It is clear that the S&P 500 stock market return process is not an i.i.d process. Based on the finding above, we further examined the sample auto- d correlations of the transformed absolute S&P 500 returns |rt| for various d d positive d, Table 19.3 gives corr(|rt| ,|rt+t| ) for d = 0.125, 0.25, 0.50, 0.75, 1, 1.25, 1.5, 1.75, 2, 3 at lags 1 to 5 and 10, 20, 40, 70, 100. Figs. 19.5, 19.6 d show the autocorrelogram of |rt| from lag 1 to 100 for d = 1, 0.50, 0.25, 0.125 in Fig. 19.5 and d = 1, 1.25, 1.5, 1.75, 2 in Fig. 19.6. From Table 19.3 and Figs. 19.5, 19.6 it is seen that the conclusion obtained above remains 354 Z. Ding, C. W. J. Granger and R. F. Engle

0.4 0.3 0.2 0.1 0.0

0 20406080100 Figure 19.5. d - 1, 0.5, 0.25, 0.125 from high to low.

0.5 0.4 0.3 0.2 0.1 0.0 0 20406080100 Figure 19.6. d - 1, 1.25, 1.50, 1.75, 2 from high to low.

0.30 0.30

0.20 0.20 rho rho 0.10 0.10

0.0 0.0 012345 012345 d d Figure 19.7. Autocorrelation Figure 19.8. Autocorrelation of |r|**d at lag 1. of |r|**d at lag 2.

valid. All the power transformations of the absolute return have significant positive autocorrelations at least up to lag 100 which supports the claim that stock market returns have long-term memory. The auto- correlations decrease fast in the first month and then decrease very d slowly.The most interesting finding from the autocorrelogram is that |rt| has the largest autocorrelation at least up to lag 100 when d = 1 or is near 1. The autocorrelation gets smaller almost monotonically when d goes away from 1. To illustrate this more clearly, we calculate the sample autocorrela- tions rt(d) as a function of d, d > 0, for t = 1, 2, 5, 10 and taking d = 0.125, 0.130,...,1.745, 1.750, 2, 2.25,...,4.75, 5. Figs. 19.7, 19.8, 19.9 and 19.10 give the plots of calculated rt(d) at t = 1, 2, 5, 10. It is seen clearly from these figures that the autocorrelation rt(d) is a smooth function of d. A Long Memory Property of Stock Market Returns 355

0.30 0.25

0.20 0.15 rho 0.10 rho 0.05 0.0 0.0 012345 012345 d d Figure 19.9. Autocorrelation Figure 19.10. Autocorrela- of |r|**d at lag 5. tion of |r|**d at lag 10.

0.3

0.2

0.1

0.0 0 500 1000 1500 2000 2500 Figure 19.11. Autocorrelation of |r| up to lag 2500.

Table 19.4. Lags at which the first negative autocorrelation of |r_t|^d occurs.

d    0.125   0.25    0.5     0.75    1       1.25    1.5     1.75    2       3
τ*   2028    2534    2704    2705    2705    2705    2705    2685    2598    520

There is a saddle point d̃ between 2 and 3 such that when d < d̃, ρ_τ(d) is a concave function of d, and when d > d̃, ρ_τ(d) is a convex function of d. There is a unique point d* around 1 such that ρ_τ(d) reaches its maximum at this point: ρ_τ(d*) > ρ_τ(d) for d ≠ d*.

In fact, |r_t|^d has positive autocorrelations over much longer lags than 100. Table 19.4 shows the lags (τ*) at which the first negative autocorrelation of |r_t|^d occurs for various d. It can be seen from the table that in most cases |r_t|^d has positive autocorrelations over more than 2500 lags. Since there are about 250 working days every year, this empirical finding suggests that |r_t|^d has positive autocorrelations for over 10 years! We pick |r_t| as a typical transform of the return series here and plot its sample autocorrelations up to lag 2500 in Fig. 19.11. The dotted lines are the 95% confidence interval for the estimated sample autocorrelation of an i.i.d. process, as before. It is striking that all the sample autocorrelations are not only positive but also stay outside the confidence interval. Different models have been tried to approximate this sample autocorrelation curve, including: (1) ρ_τ an exponentially decreasing function of τ (ρ_τ = a b^τ), which is similar to the autocorrelation function of an ARMA model; (2) ρ_τ the same as the autocorrelation function of a fractionally integrated process [see Granger and Joyeux (1980)],

ρ_τ = Γ(1−b) Γ(τ+b) / [Γ(b) Γ(τ+1−b)]
    = Γ(1−b) (τ+b−1)⋯b Γ(b) / [Γ(b) (τ−b)⋯(1−b) Γ(1−b)]
    = (τ+b−1)⋯b / [(τ−b)⋯(1−b)]
    = ρ_{τ−1} (τ+b−1)/(τ−b);                                          (2)

and (3) ρ_τ a polynomially decreasing function of τ (ρ_τ = a/τ^b), which is approximately the same as (2) when τ is large. It is found, compared to the real data, that the fitted autocorrelation using method (1) decreases too slowly at the beginning and then too fast at the end, while using methods (2) and (3) the opposite result is found. The final preferred model is a combination of these methods.
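For concreteness, the three candidate decay shapes can be generated as below; the loop implements the fractionally integrated autocorrelation via the recursion in equation (2), and the parameter values are illustrative only, not the paper's estimates:

```python
import numpy as np

def acf_exponential(a, b, n):
    """Model (1): rho_tau = a * b**tau."""
    tau = np.arange(1, n + 1)
    return a * b ** tau

def acf_fractional(b, n):
    """Model (2): rho_tau = rho_{tau-1} * (tau + b - 1) / (tau - b), rho_0 = 1."""
    rho = np.empty(n + 1)
    rho[0] = 1.0
    for tau in range(1, n + 1):
        rho[tau] = rho[tau - 1] * (tau + b - 1.0) / (tau - b)
    return rho[1:]

def acf_power(a, b, n):
    """Model (3): rho_tau = a / tau**b."""
    tau = np.arange(1, n + 1)
    return a / tau ** b

# illustrative parameters: compare the decay shapes over 2500 lags
for name, curve in [("exp", acf_exponential(0.3, 0.998, 2500)),
                    ("frac", acf_fractional(0.2, 2500)),
                    ("power", acf_power(0.3, 0.4, 2500))]:
    print(name, curve[0].round(3), curve[99].round(3), curve[-1].round(3))
```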

A theoretical autocorrelation function is specified as follows:

ρ_τ = a ρ_{τ−1}^{β1} β2^τ / τ^{β3},                                    (3)

which can easily be transformed to a linear model

log ρ_τ = log a + β1 log ρ_{τ−1} + τ log β2 − β3 log τ.               (4)

Let a* = log a, β1* = β1, β2* = log β2, and β3* = −β3; then

log ρ_τ = a* + β1* log ρ_{τ−1} + β2* τ + β3* log τ.                   (5)

Ordinary Least Squares gives the estimates:

log ρ̂_τ = −0.049 + 0.784 log ρ_{τ−1} − 0.195×10⁻⁴ τ − 0.057 log τ,
          (−3.9)   (62.9)             (−5.9)          (−9.1)

R² = 0.92,  D–W = 2.65.                                               (6)

The t-statistics inside parentheses show that all the parameters are significant. After transforming the above equation back to autocorrelations one gets:

Figure 19.12. Autocorrelation of |r| (solid line) and its fitted value (dotted line).

ρ̂_τ = 0.893 ρ_{τ−1}^{0.784} (0.999955)^τ τ^{−0.057}.                 (7)

Fig. 19.12 plots the fitted autocorrelations (dotted line) and the sample autocorrelations themselves. It is seen that the theoretical model fits the actual sample autocorrelations quite well. Similar studies were also carried out for the New York Stock Exchange daily price index and the German daily stock market price index (DAX) over shorter sample periods (1962–1989 for the NYSE, 1980–1991 for the DAX); we get similar autocorrelation structures for the transformed returns. Furthermore, we carried out a cross-correlation analysis of the transformed S&P 500 and New York Stock Exchange daily return series and found that the cross-correlation is also largest when d = 1 and that it too has long memory. This suggests there may be volatility co-persistence for these two stock market index prices (see Bollerslev and Engle 1989). Our conjecture is that this property will exist in most financial series.
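The OLS step behind equations (5)–(7) is a plain linear regression on the log scale. A minimal sketch follows; the input `rho` would be the sample autocorrelations of |r_t| (assumed positive throughout), and the demo series below is synthetic:

```python
import numpy as np

def fit_acf_decay(rho):
    """OLS fit of eq. (5): log rho_tau on [1, log rho_{tau-1}, tau, log tau],
    then map the coefficients back to eq. (3): a, beta1, beta2, beta3."""
    rho = np.asarray(rho, dtype=float)
    tau = np.arange(2.0, len(rho) + 1.0)        # tau runs over 2..N
    y = np.log(rho[1:])
    X = np.column_stack([np.ones_like(tau), np.log(rho[:-1]), tau, np.log(tau)])
    a_star, b1, b2_star, b3_star = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.exp(a_star), b1, np.exp(b2_star), -b3_star

# synthetic autocorrelations with a power-law decay, for illustration only
rho_demo = 0.3 / np.arange(1.0, 2501.0) ** 0.1
print(fit_acf_decay(rho_demo))
```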

4. SENSITIVITY OF AUTOCORRELATION STRUCTURE

We now further investigate the effect of temporal aggregation on the autocorrelation structure. Table 19.5 gives the autocorrelations of |r_{t,5}|^d, where r_{t,5} is the 5-day temporal average of r_t, i.e.

r_{t,5} = (1/5)(r_{t′+1} + r_{t′+2} + ⋯ + r_{t′+5}),                  (8)

where t = 1, 2, …, 3410 and t′ = 5(t − 1). It can be seen that the temporal aggregation does not change the long memory property of the absolute return series. ρ_τ(|r_{t,5}|^d) still reaches a unique maximum when d is around 1 or 1.25, for the different lags τ. Compared with the original daily series, the first order autocorrelation for |r_{t,5}|^d is much bigger than the second one. Although the temporally aggregated return series here is not exactly the same as a weekly return series, we expect a similar result to hold for weekly data.

Table 19.5. Autocorrelations of |r_{t,5}|^d.

d \ lag   1      2      3      4      5      10     20     40     70     100
0.125     0.145  0.109  0.148  0.149  0.136  0.105  0.129  0.077  0.072  0.041
0.25      0.187  0.155  0.184  0.184  0.169  0.137  0.158  0.102  0.095  0.052
0.5       0.247  0.213  0.229  0.227  0.204  0.180  0.188  0.136  0.126  0.065
0.75      0.296  0.255  0.261  0.255  0.223  0.212  0.203  0.161  0.149  0.074
1         0.332  0.279  0.279  0.267  0.227  0.233  0.205  0.175  0.163  0.079
1.25      0.352  0.286  0.282  0.263  0.217  0.243  0.197  0.178  0.168  0.080
1.5       0.356  0.277  0.271  0.245  0.196  0.242  0.180  0.173  0.164  0.076
1.75      0.349  0.255  0.250  0.217  0.169  0.231  0.160  0.160  0.153  0.069
2         0.332  0.227  0.223  0.186  0.140  0.214  0.138  0.144  0.138  0.061
3         0.237  0.109  0.115  0.075  0.048  0.124  0.068  0.079  0.073  0.026
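Equation (8) is just a non-overlapping block average. A sketch of the construction (the daily series is a synthetic stand-in sized to give the paper's 3410 five-day blocks):

```python
import numpy as np

def five_day_average(r):
    """Eq. (8): r_{t,5} = (r_{t'+1} + ... + r_{t'+5}) / 5, with t' = 5(t - 1)."""
    r = np.asarray(r, dtype=float)
    n = (len(r) // 5) * 5                  # drop any incomplete final block
    return r[:n].reshape(-1, 5).mean(axis=1)

rng = np.random.default_rng(2)
daily = 0.01 * rng.standard_normal(17054)  # stand-in daily return series
r5 = five_day_average(daily)
print(len(r5))                             # 3410 aggregated observations
```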

It should also be noted from Fig. 19.2 that the volatility structure differs considerably between the pre-war and the post-war periods. The pre-war period (1928–1945) is much more volatile than the post-war period (1946–1986). It is therefore interesting to look at the memory structure for these two periods. Table 19.6 shows the autocorrelations of |r_t|^d for the pre-war period (1928–1945). It is seen that the magnitudes of the autocorrelations of |r_t|^d are about the same as those in Table 19.2. |r_t| has the largest autocorrelation for the first two lags, and then this property becomes strongest for |r_t|^{0.75} or |r_t|^{0.5}.

Table 19.7 gives the autocorrelations of |r_t|^d for the post-war period (1946–1986). It is clear from the table that during this less volatile period the market has both a smaller and a shorter memory, in the sense that the autocorrelations are smaller and decrease faster. The autocorrelations are only about two thirds as big as those of the pre-war period. Comparing Tables 19.2, 19.6 and 19.7, we can probably say that the long memory property that was found in the whole sample period can be mainly attributed to the pre-war period. The market has a strong and long memory of big events like the great depression in 1929 and the early 1930s, when volatility was very high.

Table 19.6. Autocorrelations of |r_t|^d, 1928–1945.

d \ lag   1      2      3      4      5      10     20     40     70     100
0.125     0.114  0.135  0.126  0.117  0.138  0.131  0.122  0.118  0.067  0.115
0.25      0.201  0.227  0.231  0.204  0.215  0.200  0.197  0.183  0.128  0.158
0.5       0.273  0.298  0.311  0.275  0.276  0.245  0.245  0.216  0.169  0.172
0.75      0.300  0.323  0.332  0.296  0.294  0.251  0.248  0.212  0.172  0.162
1         0.310  0.329  0.329  0.296  0.293  0.241  0.232  0.192  0.159  0.141
1.25      0.310  0.323  0.310  0.280  0.281  0.223  0.205  0.163  0.138  0.116
1.5       0.302  0.310  0.283  0.256  0.260  0.199  0.173  0.130  0.114  0.090
1.75      0.289  0.292  0.251  0.226  0.236  0.175  0.141  0.099  0.091  0.067
2         0.273  0.272  0.218  0.196  0.211  0.151  0.111  0.072  0.070  0.047
3         0.201  0.194  0.114  0.098  0.128  0.076  0.034  0.012  0.020  0.007

Table 19.7. Autocorrelations of |r_t|^d, 1946–1986.

d \ lag   1      2      3      4      5      10     20     40     70     100
0.125     0.089  0.062  0.054  0.057  0.086  0.047  0.054  0.051  0.041  0.038
0.25      0.129  0.095  0.086  0.102  0.126  0.082  0.082  0.068  0.058  0.053
0.5       0.162  0.128  0.121  0.141  0.158  0.111  0.106  0.082  0.068  0.066
0.75      0.181  0.151  0.143  0.164  0.175  0.126  0.119  0.088  0.067  0.068
1         0.191  0.167  0.157  0.180  0.182  0.133  0.123  0.089  0.062  0.064
1.25      0.194  0.178  0.163  0.191  0.180  0.134  0.120  0.084  0.053  0.056
1.5       0.189  0.182  0.160  0.198  0.170  0.129  0.110  0.074  0.042  0.046
1.75      0.178  0.179  0.150  0.200  0.154  0.119  0.095  0.061  0.031  0.036
2         0.163  0.170  0.135  0.199  0.133  0.105  0.078  0.047  0.021  0.027
3         0.099  0.104  0.066  0.173  0.056  0.047  0.023  0.010  0.002  0.005

5. MONTE-CARLO STUDY OF VARIOUS FINANCIAL TIME SERIES MODELS

The empirical findings of sections 3 and 4 have strong implications for the modeling of financial time series. Taylor (1986) showed that neither day-of-the-week effects nor a linear, correlated process can provide a satisfactory explanation of the significant correlations among absolute return series, where a linear correlated process can be represented as

r_t = r̄ + Σ_{i=0}^∞ a_i ε_{t−i},                                      (9)

where r̄ and the a_i are constants with a_0 = 1 and ε_t is a zero-mean i.i.d. process. Taylor concludes that any reasonable model must be a non-linear one. Furthermore, the special autocorrelation pattern of |r_t|^d found in section 3 implies that any theoretical model should also be able to capture this pattern before the model can be considered “adequate”. It should be noted that a process can have zero autocorrelations but have autocorrelations of squares greater than for moduli. For example, consider the following nonlinear model:

r_t = |s_t| e_t,                                                       (10)
s_t = a s_{t−τ} + h_t,

where e_t ~ N(0, 1), E(s_t) = E(h_t) = E(r_t) = 0, |a| < 1, e_t and h_t are stochastically independent, s_t is independent of h_{t+τ} for τ > 0, and s_t, s_{t−τ} are jointly normally distributed with variance 1; hence var(h_t) = 1 − a² and h_t ~ N(0, 1 − a²). The conditional variance of r_t when s_t is known is s_t², i.e. var(r_t | s_t) = s_t². For this model corr(r_t, r_{t−τ}) = 0, but by using numerical integration it is found that with |a| < 1

corr(|r_t|, |r_{t−τ}|) = (2/π)(E|s_t s_{t−τ}| − 2/π) < corr(r_t², r_{t−τ}²) = a²/4.    (11)

It is thus seen that the results of Table 19.5 do not necessarily occur.
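A quick Monte Carlo check of this construction is easy to set up. The sketch below simulates model (10) and verifies two clean implications: returns themselves are serially uncorrelated, and corr(r_t², r²_{t−τ}) comes out close to a²/4 (the parameter and lag values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
a, lag, n = 0.8, 1, 200_000                # illustrative parameter and lag

# s_t = a*s_{t-lag} + h_t with h_t ~ N(0, 1 - a^2), so var(s_t) = 1
s = np.empty(n)
s[:lag] = rng.standard_normal(lag)
sd_h = np.sqrt(1.0 - a ** 2)
for t in range(lag, n):
    s[t] = a * s[t - lag] + sd_h * rng.standard_normal()

r = np.abs(s) * rng.standard_normal(n)     # model (10)

def corr_at(x, k):
    x = x - x.mean()
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)

print("corr(r_t, r_{t-1})    :", round(float(corr_at(r, lag)), 4))       # near 0
print("corr(r_t^2, r_{t-1}^2):", round(float(corr_at(r ** 2, lag)), 4))  # near a^2/4
print("a^2/4                 :", a ** 2 / 4)
```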

One possible explanation for the large positive autocorrelations between |r_t| and |r_{t+τ}| or |r_t|^d and |r_{t+τ}|^d is heteroskedasticity of the data, i.e. the variance or conditional variance is changing over time. One family of nonlinear time series models that is able to capture some aspects of the time-varying volatility structure is Engle’s ARCH (AutoRegressive Conditional Heteroskedasticity) model [Engle (1982)]. In its original setting, the ARCH model is defined as a data generating process for a random variable which has a conditional normal distribution with conditional variance a linear function of lagged squared residuals. More formally, the ARCH(p) model is defined as follows:

r_t = μ + ε_t,
ε_t = σ_t e_t,  e_t ~ N(0, 1),
σ_t² = α₀ + Σ_{i=1}^p α_i ε²_{t−i}.                                    (12)

It is easily shown that the r_t are not autocorrelated, but |r_t|^d is.

Hence the distribution of r_t depends on the r_{t−i}, i > 0. Since its introduction by Engle (1982), the ARCH model has been widely used to model time-varying volatility and the persistence of shocks to volatility. Much work has been done both theoretically and empirically, and many modifications and extensions of the original ARCH model have appeared in the literature. For example, in order to capture the long memory property of the conditional variance process, Bollerslev (1986) introduced the GARCH(p, q) model, which defines the conditional variance equation as follows:

σ_t² = α₀ + Σ_{i=1}^p α_i ε²_{t−i} + Σ_{j=1}^q β_j σ²_{t−j}.          (13)

Taylor (1986) modeled the conditional standard deviation function instead of the conditional variance. Schwert (1989), following the argument of Davidian and Carroll (1987), modeled the conditional standard deviation as a linear function of lagged absolute residuals. The Taylor/Schwert GARCH(p, q) model defines the conditional standard deviation equation as follows:

σ_t = α₀ + Σ_{i=1}^p α_i |ε_{t−i}| + Σ_{j=1}^q β_j σ_{t−j}.           (14)

One may, at first glance, think that it would be better to use the Taylor/Schwert model than Bollerslev’s GARCH, since the model is expressed in terms of absolute returns rather than squared returns. But this conclusion is not necessarily true when the model is a nonlinear one. In fact, our Monte Carlo study shows that both Bollerslev’s GARCH and the Taylor/Schwert model, with appropriate parameters, can produce the special correlation patterns found in section 3. Both models were estimated for S&P 500 returns and the following results were obtained:

(1) GARCH

r_t = 0.000438 + 0.144 ε_{t−1} + ε_t,
      (7.2)      (18.4)

σ_t² = 0.0000008 + 0.091 ε²_{t−1} + 0.906 σ²_{t−1},                   (15)
       (12.5)      (50.7)           (43.4)

log likelihood: 56822.

(2) Taylor/Schwert

r_t = 0.0004 + 0.139 ε_{t−1} + ε_t,
      (7.0)    (19.6)

σ_t = 0.000096 + 0.104 |ε_{t−1}| + 0.913 σ_{t−1},                     (16)
      (12.6)     (6.7)             (51.7)

log likelihood: 56776.

The first order moving average term is included in the mean equations of both models to account for the positive first order autocorrelation of the return series. We can see that all the parameters are very significant in the above models. The normality test statistics of the standardized residuals for both models are far beyond the critical value of the normal distribution assumed by both models. This is not surprising, since there are certainly other factors affecting the volatility. Nevertheless, the log-likelihood value for Bollerslev’s GARCH is significantly larger than that of the Taylor/Schwert model. Based on the estimation results, some simulations have been performed using the parameters estimated above, assuming e_t ~ IID N(0, 1).

Figure 19.13. Bollerslev’s GARCH model. Autocorrelation of |r|, r², r from high to low.

Figure 19.14. Bollerslev’s GARCH model. d = 1, 0.5, 0.25, 0.125 from high to low.

Figure 19.15. Bollerslev’s GARCH model. d = 1, 1.25, 1.50, 1.75, 2 from high to low.

Our purpose is to check whether theoretical ARCH models can generate the same type of autocorrelations as stock market return data. Obviously, if the theoretical model does not exhibit the same pattern of autocorrelations as stock market return data, then it follows that the theoretical model is misspecified for these data. A total of 18054 observations was generated, and the first 1000 were discarded in order to be less affected by the initial value of σ₀, which was set to the unconditional standard deviation of the S&P 500 returns. Figs. 19.13, 19.14, 19.15 and 19.16, 19.17, 19.18 plot the simulated autocorrelograms of the data generated by the two models. It can be seen that the special autocorrelation pattern does exist here. For both models, |r|^d has the largest autocorrelations when d = 1, and the autocorrelation gets smaller as d moves away from 1. It is interesting that Bollerslev’s GARCH model can produce this result even though the conditional variance is a linear function of squared returns.

Figure 19.16. Taylor/Schwert model. Autocorrelation of |r|, r², r from high to low.

Figure 19.17. Taylor/Schwert model. d = 1, 0.5, 0.25, 0.125 from high to low.

Figure 19.18. Taylor/Schwert model. d = 1, 1.25, 1.50, 1.75, 2 from high to low.

returns. For Bollerslev’s GARCH model, the autocorrelation between |rt| 1.25 1.25 and |rt+t| is very close to that between |rt| and |rt+t| . But for the Taylor/Schwert model, the autocorrelation between |rt| and |rt+t| after lag 0.5 0.5 40 is close to that between |rt| and |rt+t| . One major difference between autocorrelograms of the two simulated data series and the real data is that the autocorrelations of the real data decreases rapidly in the first month and then decrease very slowly over a long period, but the autocorrelations of the two simulated data decrease almost constantly over time.

6. A NEW MODEL – ASYMMETRIC POWER ARCH

The Monte Carlo study shows that the ARCH model generally captures the special pattern of autocorrelation existing in many stock market returns data. Both Bollerslev’s GARCH and Taylor/Schwert’s GARCH-in-absolute-value model can produce this property. It seems there is no obvious reason why one should assume the conditional variance is a linear function of lagged squared returns (residuals), as in Bollerslev’s GARCH, or the conditional standard deviation a linear function of lagged absolute returns (residuals), as in the Taylor/Schwert model. Fortunately, a more general class of models is available which includes Bollerslev’s GARCH, Taylor/Schwert and five other models in the literature as special cases. The general structure is as follows:

ε_t = σ_t e_t,  e_t ~ N(0, 1),

σ_t^δ = α₀ + Σ_{i=1}^p α_i (|ε_{t−i}| − γ_i ε_{t−i})^δ + Σ_{j=1}^q β_j σ_{t−j}^δ,    (17)

where

α₀ > 0,  δ ≥ 0,
α_i ≥ 0,  i = 1, …, p,
−1 < γ_i < 1,  i = 1, …, p,
β_j ≥ 0,  j = 1, …, q.

The model imposes a Box–Cox power transformation on the conditional standard deviation process and the asymmetric absolute residuals. By using this transformation we can linearize otherwise nonlinear models. The functional form for the conditional standard deviation is familiar to economists as the constant elasticity of substitution (CES) production function of Arrow et al. (1961). The asymmetric response of volatility to positive and negative “shocks” is well known in the finance literature as the leverage effect of stock market returns [Black (1976)], which says that stock returns are negatively correlated with changes in return volatility – i.e. volatility tends to rise in response to “bad news” (excess returns lower than expected) and to fall in response to “good news” (excess returns higher than expected) [Nelson (1991)]. Empirical studies by Nelson (1991), Glosten, Jaganathan and Runkle (1989) and Engle and Ng (1992) show it is crucial to include the asymmetric term in financial time series models [for a detailed discussion, see Engle and Ng (1992)]. This generalized version of the ARCH model includes seven other models (see appendix A) as special cases. We will call this model the Asymmetric Power ARCH model and denote it A-PARCH.
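As a concrete reading of equation (17), here is a minimal A-PARCH(1,1) simulator; the parameter values in the demo call are those of the estimated model (19) below, and the starting value σ₀ is an arbitrary assumption:

```python
import numpy as np

def simulate_aparch11(n, omega, alpha, gamma, beta, delta, sigma0=0.01, seed=0):
    """eps_t = sigma_t * e_t with the A-PARCH(1,1) recursion of eq. (17):
    sigma_t^delta = omega + alpha*(|eps_{t-1}| - gamma*eps_{t-1})**delta
                    + beta*sigma_{t-1}^delta."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(n)
    sig_d = np.empty(n)                     # holds sigma_t**delta
    eps = np.empty(n)
    sig_d[0] = sigma0 ** delta
    eps[0] = sigma0 * e[0]
    for t in range(1, n):
        sig_d[t] = (omega
                    + alpha * (abs(eps[t - 1]) - gamma * eps[t - 1]) ** delta
                    + beta * sig_d[t - 1])
        eps[t] = sig_d[t] ** (1.0 / delta) * e[t]
    return eps

eps = simulate_aparch11(20000, 0.000014, 0.083, 0.373, 0.920, 1.43)
print(round(float(eps.std()), 4))
```

Since |γ| < 1 makes (|x| − γx) non-negative and ω > 0, the recursion keeps σ_t^δ strictly positive, which is what the parameter constraints above guarantee.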

If we assume that the distribution of r_t is conditionally normal, then the condition for the existence of Eσ_t^δ and E|ε_t|^δ is (see appendix B):

Σ_{i=1}^p α_i (1/√(2π)) [(1 + γ_i)^δ + (1 − γ_i)^δ] 2^{(δ−1)/2} Γ((δ+1)/2) + Σ_{j=1}^q β_j < 1.    (18)

If this condition is satisfied, then when δ ≥ 2 we have ε_t covariance stationary; but δ ≥ 2 is only a sufficient condition for ε_t to be covariance stationary. The new model is estimated for the S&P 500 return series by the maximum likelihood method using the Berndt–Hall–Hall–Hausman algorithm. The estimated model is as follows:

r_t = 0.00021 + 0.145 ε_{t−1} + ε_t,
      (3.2)     (19.0)

σ_t^{1.43} = 0.000014 + 0.083 (|ε_{t−1}| − 0.373 ε_{t−1})^{1.43} + 0.920 σ_{t−1}^{1.43}    (19)

(t-statistics: 4.5, 32.4, −20.7, 47.4, 33.7), log likelihood: 56974.

The estimated δ is 1.43, which is significantly different from 1 (the Taylor/Schwert model) or 2 (Bollerslev’s GARCH). The t-statistic for the asymmetric term is 32.4, which is very significant, implying that the leverage effect does exist in S&P 500 returns. Using the estimated log-likelihood values, a nested test can easily be constructed against either Bollerslev’s GARCH or the Taylor/Schwert model. Let l₀ be the log-likelihood value under the null hypothesis that the true model is Bollerslev’s GARCH and l be the log-likelihood value under the alternative that the true model is A-PARCH; then 2(l − l₀) should have a χ² distribution with 2 degrees of freedom when the null hypothesis is true. But in our example 2(l − l₀) = 2(56974 − 56822) = 304, which is far beyond the critical value at any reasonable level. Hence we can reject the hypothesis that the data are generated by Bollerslev’s GARCH model. The same procedure is applicable to the Taylor/Schwert model, and we can also reject it.
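The moment-existence condition (18) can be checked directly for these estimates, using the appendix B closed form; a small sketch (the ≈0.993 figure is simply what the arithmetic gives for the reported parameter values):

```python
import math

def aparch_moment_term(delta, gamma):
    """E(|e| - gamma*e)**delta for e ~ N(0, 1), appendix B closed form:
    (1/sqrt(2*pi)) * ((1+gamma)**delta + (1-gamma)**delta)
                   * 2**((delta-1)/2) * Gamma((delta+1)/2)."""
    return ((1.0 / math.sqrt(2.0 * math.pi))
            * ((1.0 + gamma) ** delta + (1.0 - gamma) ** delta)
            * 2.0 ** ((delta - 1.0) / 2.0)
            * math.gamma((delta + 1.0) / 2.0))

# estimated A-PARCH parameters from eq. (19)
alpha, gamma, beta, delta = 0.083, 0.373, 0.920, 1.43
lhs = alpha * aparch_moment_term(delta, gamma) + beta
print(round(lhs, 4), "< 1 :", lhs < 1)   # about 0.993, so condition (18) holds
```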

7. CONCLUSION

In this paper, a “long-memory” property of stock market return series is investigated. We found not only that there is substantially more correlation between absolute returns than between returns themselves, but also that the power transformation of the absolute return, |r_t|^d, has quite high autocorrelation for long lags. Furthermore, for fixed lag τ, the function ρ_τ(d) = corr(|r_t|^d, |r_{t+τ}|^d) has a unique maximum when d is around 1. This result appears to argue against ARCH-type specifications based upon squared returns. But our Monte Carlo study shows that both ARCH-type models based upon squared returns and those based upon absolute returns can produce this property. The ARCH specification based upon the linear relationship among absolute returns is neither necessary nor sufficient to have such a property. Finally, we propose a new general class of ARCH models, which we call the Asymmetric Power ARCH model and denote A-PARCH. The new model encompasses seven other models in the literature. We estimate S&P 500 returns by the new model, and the estimated power δ for the conditional heteroskedasticity function is 1.43, which is significantly different from 1 (the Taylor/Schwert model) or 2 (Bollerslev’s GARCH).

APPENDIX A

We now show that the new model includes the following seven ARCH models as special cases.

(1) Engle’s ARCH(p) model [see Engle (1982)]: let δ = 2, γ_i = 0, i = 1, …, p, and β_j = 0, j = 1, …, q, in the new model.

(2) Bollerslev’s GARCH(p, q) model [see Bollerslev (1986)]: let δ = 2 and γ_i = 0, i = 1, …, p.

(3) Taylor/Schwert’s GARCH in standard deviation model: let δ = 1 and γ_i = 0, i = 1, …, p.

(4) GJR model [see Glosten et al. (1989)]: let δ = 2.

When δ = 2 and 0 ≤ γ_i < 1 we have

σ_t² = α₀ + Σ_{i=1}^p α_i (|ε_{t−i}| − γ_i ε_{t−i})² + Σ_{j=1}^q β_j σ²_{t−j}
     = α₀ + Σ_{i=1}^p α_i (1 − γ_i)² ε²_{t−i} + Σ_{j=1}^q β_j σ²_{t−j}
          + Σ_{i=1}^p α_i {(1 + γ_i)² − (1 − γ_i)²} S⁻_{t−i} ε²_{t−i}
     = α₀ + Σ_{i=1}^p α_i (1 − γ_i)² ε²_{t−i} + Σ_{j=1}^q β_j σ²_{t−j}
          + Σ_{i=1}^p 4 α_i γ_i S⁻_{t−i} ε²_{t−i},

where

S⁻_{t−i} = 1 if ε_{t−i} < 0, and 0 otherwise.

If we further define

α_i* = α_i (1 − γ_i)²,
γ_i* = 4 α_i γ_i,

then we have

σ_t² = α₀ + Σ_{i=1}^p α_i* ε²_{t−i} + Σ_{j=1}^q β_j σ²_{t−j} + Σ_{i=1}^p γ_i* S⁻_{t−i} ε²_{t−i},

which is exactly the GJR model.

When −1 < γ_i < 0 we have

σ_t² = α₀ + Σ_{i=1}^p α_i (1 + γ_i)² ε²_{t−i} + Σ_{j=1}^q β_j σ²_{t−j} − Σ_{i=1}^p 4 α_i γ_i S⁺_{t−i} ε²_{t−i},

where

S⁺_{t−i} = 1 if ε_{t−i} > 0, and 0 otherwise.

Define

α_i* = α_i (1 + γ_i)²,
γ_i* = −4 α_i γ_i;

we have

σ_t² = α₀ + Σ_{i=1}^p α_i* ε²_{t−i} + Σ_{j=1}^q β_j σ²_{t−j} + Σ_{i=1}^p γ_i* S⁺_{t−i} ε²_{t−i},

which allows positive shocks to have a stronger effect on volatility.
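The algebra behind this reparametrization is easy to verify numerically; a one-off check of the identity (|ε| − γε)² = (1 − γ)²ε² + 4γS⁻ε² for 0 ≤ γ < 1 (the value γ = 0.4 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
eps = rng.standard_normal(10000)
gamma = 0.4                                  # any value in [0, 1)

lhs = (np.abs(eps) - gamma * eps) ** 2
s_minus = (eps < 0).astype(float)            # the indicator S^-
rhs = (1 - gamma) ** 2 * eps ** 2 + 4 * gamma * s_minus * eps ** 2
print(np.allclose(lhs, rhs))                 # True: the GJR decomposition holds
```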

(5) Zakoian’s TARCH model (see Zakoian 1991), let d = 1 and bj = 0, j = 1,...,q. We have

p stitiiti=+aaege0 Â ()-- - i=1 p p + - =+aageage0 Â i()11 - i ti- -+Â i() i ti- , where i=1 i=1

+ Ïeeti-- if ti> 0 eti- = Ì and Ó0 otherwise, - + eeeti- =- ti-- ti. So by defining

+ aaii=-()1, g i - gaii=+()1, g i we have

p p + +-- stitiiti=+aaeae0 Â - -Â - i=1 i=1 which is the exact TARCH form. If we further let bj π 0, j = 1,...,q then we get a more general class of TARCH models. (6) Higgins and Bera’s NARCH model [see Higgins and Bera

(1990)]: let γ_i = 0, i = 1, …, p, and β_j = 0, j = 1, …, q. Our model becomes

σ_t^δ = α₀ + Σ_{i=1}^p α_i |ε_{t−i}|^δ,  i.e.
(σ_t²)^{δ/2} = α₀ + Σ_{i=1}^p α_i (ε²_{t−i})^{δ/2}.

Define

δ* = δ/2,
α₀ = α₀* ω^{δ*} = (1 − Σ_{i=1}^p α_i) ω^{δ*}.

We have exactly Higgins and Bera’s NARCH.

(7) The log-ARCH model of Geweke (1986) and Pantula (1986). The log-ARCH model is the limiting case of our model when δ → 0. Since

σ_t^δ = α₀ + Σ_{i=1}^p α_i (|ε_{t−i}| − γ_i ε_{t−i})^δ + Σ_{j=1}^q β_j σ_{t−j}^δ,

decompose α₀ as

α₀ = {1 − Σ_{i=1}^p α_i E(|e_{t−i}| − γ_i e_{t−i})^δ − Σ_{j=1}^q β_j} ω^δ
   = α₀* ω^δ,

hence Eσ_t^δ = ω^δ. Then we have

(σ_t^δ − 1)/δ = {1 − Σ_{i=1}^p α_i E(|e_{t−i}| − γ_i e_{t−i})^δ − Σ_{j=1}^q β_j} (ω^δ − 1)/δ
              + Σ_{i=1}^p α_i [(|ε_{t−i}| − γ_i ε_{t−i})^δ − 1]/δ + Σ_{j=1}^q β_j (σ_{t−j}^δ − 1)/δ
              − Σ_{i=1}^p α_i [E(|e_{t−i}| − γ_i e_{t−i})^δ − 1]/δ;

when δ → 0 the above equation becomes

log σ_t = {1 − Σ_{i=1}^p α_i lim_{δ→0} E(|e_{t−i}| − γ_i e_{t−i})^δ − Σ_{j=1}^q β_j} log ω
        + Σ_{i=1}^p α_i log(|ε_{t−i}| − γ_i ε_{t−i}) + Σ_{j=1}^q β_j log σ_{t−j}
        − Σ_{i=1}^p α_i log E(|e_{t−i}| − γ_i e_{t−i})
        = α₀* log ω − Σ_{i=1}^p α_i log √(2/π) + Σ_{i=1}^p α_i log(|ε_{t−i}| − γ_i ε_{t−i})
        + Σ_{j=1}^q β_j log σ_{t−j},

where

α₀* = {1 − Σ_{i=1}^p α_i lim_{δ→0} E(|e_{t−i}| − γ_i e_{t−i})^δ − Σ_{j=1}^q β_j}
    = {1 − Σ_{i=1}^p α_i − Σ_{j=1}^q β_j},

since lim_{δ→0} E(|e_{t−i}| − γ_i e_{t−i})^δ = 1. This is a generalized version of the Geweke/Pantula model. If we further let γ_i = 0, i = 1, …, p, and β_j = 0, j = 1, …, q, then we get the exact Geweke/Pantula model.

APPENDIX B. CONDITIONS FOR THE EXISTENCE OF Eσ_t^δ AND E|ε_t|^δ

If we assume that the distribution is conditionally normal, then the condition for the existence of Eσ_t^δ for the new model is

Σ_{i=1}^p α_i E(|e_{t−i}| − γ_i e_{t−i})^δ + Σ_{j=1}^q β_j < 1,

where

E(|e_{t−i}| − γ_i e_{t−i})^δ = (1/√(2π)) ∫_{−∞}^{+∞} (|x| − γ_i x)^δ e^{−x²/2} dx
 = (1/√(2π)) [(1 + γ_i)^δ + (1 − γ_i)^δ] 2^{(δ−1)/2} Γ((δ+1)/2).

So the condition becomes

Σ_{i=1}^p α_i (1/√(2π)) [(1 + γ_i)^δ + (1 − γ_i)^δ] 2^{(δ−1)/2} Γ((δ+1)/2) + Σ_{j=1}^q β_j < 1.    (B1)

The relation between E|ε_t|^δ and Eσ_t^δ is

E|ε_t|^δ = E|e_t|^δ Eσ_t^δ = (1/√π) 2^{δ/2} Γ((δ+1)/2) Eσ_t^δ.

So the condition for the existence of E|ε_t|^δ is the same as that for Eσ_t^δ. The proof of the above results is almost identical to the proof of Theorem 1 in Bollerslev (1986). When condition (B1) is satisfied, we have the unconditional expectation of σ_t^δ as follows:

Eσ_t^δ = α₀ / (1 − Σ_{i=1}^p α_i E(|e_{t−i}| − γ_i e_{t−i})^δ − Σ_{j=1}^q β_j) = ω^δ

and

E|ε_t|^δ = (1/√π) 2^{δ/2} Γ((δ+1)/2) Eσ_t^δ = (1/√π) 2^{δ/2} Γ((δ+1)/2) ω^δ.

In the special case when δ = 2 and γ_i = 0, we have the covariance stationarity condition for ε_t as

Σ_{i=1}^p α_i (1/√(2π)) · 2 · 2^{(2−1)/2} Γ((2+1)/2) + Σ_{j=1}^q β_j
 = Σ_{i=1}^p α_i (1/√π) · 2 · (1/2) Γ(1/2) + Σ_{j=1}^q β_j
 = Σ_{i=1}^p α_i + Σ_{j=1}^q β_j < 1,

which is the same as that derived by Bollerslev (1986).
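The closed form for E(|e_{t−i}| − γ_i e_{t−i})^δ can be confirmed by quadrature; a sketch (requires scipy; the test values are arbitrary, and δ = 2 with γ ≠ 0 should return 1 + γ², matching the GJR condition below):

```python
import math
from scipy.integrate import quad

def closed_form(delta, gamma):
    """Appendix B: E(|e| - gamma*e)**delta for e ~ N(0, 1)."""
    return ((1.0 / math.sqrt(2.0 * math.pi))
            * ((1.0 + gamma) ** delta + (1.0 - gamma) ** delta)
            * 2.0 ** ((delta - 1.0) / 2.0)
            * math.gamma((delta + 1.0) / 2.0))

def by_quadrature(delta, gamma):
    f = lambda x: (abs(x) - gamma * x) ** delta * math.exp(-x * x / 2.0)
    left, _ = quad(f, -math.inf, 0.0)      # split at the kink of |x|
    right, _ = quad(f, 0.0, math.inf)
    return (left + right) / math.sqrt(2.0 * math.pi)

for delta, gamma in [(1.0, 0.0), (2.0, 0.5), (1.43, 0.373)]:
    print(delta, gamma,
          round(closed_form(delta, gamma), 6),
          round(by_quadrature(delta, gamma), 6))
```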

When δ = 2 and γ_i ≠ 0, we have the covariance stationarity condition for the GJR model as

Σ_{i=1}^p α_i (1/√(2π)) [(1 + γ_i)² + (1 − γ_i)²] 2^{(2−1)/2} Γ((2+1)/2) + Σ_{j=1}^q β_j
 = Σ_{i=1}^p α_i [1 + γ_i²] + Σ_{j=1}^q β_j < 1.

When δ = 1 and γ_i = 0, we have the condition for the existence of Eσ_t and E|ε_t| for the Taylor/Schwert model:

Σ_{i=1}^p α_i (1/√(2π)) · 2 · Γ(1) + Σ_{j=1}^q β_j
 = √(2/π) Σ_{i=1}^p α_i + Σ_{j=1}^q β_j < 1.

Since √(2/π) < 1, even if Σ_{i=1}^p α_i + Σ_{j=1}^q β_j > 1 it can still be true that Eσ_t or E|ε_t| exists and is finite; this condition is weaker than the covariance stationarity condition of the model. It is possible that E|ε_t|² does not exist and ε_t is not covariance stationary even if this condition is satisfied. When δ = 1 and γ_i ≠ 0, we have the existence condition of Eσ_t^δ and E|ε_t|^δ for the Asymmetric Taylor/Schwert model or the generalized Zakoian

model, which is the same as that for the Taylor/Schwert model: √(2/π) Σ_{i=1}^p α_i + Σ_{j=1}^q β_j < 1. More generally, under the assumption that

Σ_{i=1}^p α_i (1/√(2π)) [(1 + γ_i)^δ + (1 − γ_i)^δ] 2^{(δ−1)/2} Γ((δ+1)/2) + Σ_{j=1}^q β_j < 1,

Eσ_t^δ and E|ε_t|^δ exist and are given by the expressions above.

REFERENCES

Black, Fisher, 1976, Studies in stock price volatility changes, Proceedings of the 1976 Business Meeting of the Business and Economics Statistics Section, American Statistical Association, 177–181.
Bollerslev, T., 1986, Generalized autoregressive conditional heteroskedasticity, Journal of Econometrics 31, 307–327.
Bollerslev, T. and R. F. Engle, 1992, Common persistence in conditional variance, forthcoming in Econometrica.
Davidian, M. and R. J. Carroll, 1987, Variance function estimation, Journal of the American Statistical Association 82, No. 400, 1079–1091.
Eatwell, J., M. Milgate and P. Newman (eds.), The New Palgrave: Finance (Norton, New York).
Engle, R. F., 1982, Autoregressive conditional heteroskedasticity with estimates of the variance of U.K. inflation, Econometrica 50, 987–1008.
Engle, R. F., 1990, Discussion: stock volatility and the crash of ’87, Review of Financial Studies 3, No. 1, 103–106.
Engle, R. F. and T. Bollerslev, 1986, Modeling the persistence of conditional variances, Econometric Reviews 5, 1–50, 81–87.
Engle, R. F., David Lilien and Russ Robins, 1987, Estimating time varying risk premia in the term structure: the ARCH-M model, Econometrica 55, 391–407.
Engle, R. F. and G. Gonzalez-Rivera, 1991, Semiparametric ARCH models, Journal of Business and Economic Statistics 9, 345–360.
Engle, R. F. and V. Ng, 1992, Measuring and testing the impact of news on volatility, forthcoming in Journal of Finance.
Fama, E. F., 1970, Efficient capital markets: a review of theory and empirical work, Journal of Finance 25, 383–417.
Fama, E. F., 1976, Foundations of finance: portfolio decisions and security prices (Basic Books, New York).
French, Ken, William Schwert and Robert Stambaugh, 1986, Expected stock returns and volatility, Journal of Financial Economics 19, 3–29.

Glosten, L., R. Jaganathan and D. Runkle, 1989, Relationship between the expected value and the volatility of the nominal excess return on stocks, unpublished manuscript, J. L. Kellogg Graduate School, Northwestern University.
Granger, C. W. J., 1991, Forecasting stock market prices: lessons for forecasters, UCSD Working Paper.
Granger, C. W. J., 1980, Long memory relationships and the aggregation of dynamic models, Journal of Econometrics 14, 227–238.
Granger, C. W. J. and A. P. Anderson, 1978, An introduction to bilinear time series models (Vandenhoeck and Ruprecht, Göttingen).
Granger, C. W. J. and R. Joyeux, 1981, An introduction to long-memory time series models and fractional differencing, Journal of Time Series Analysis 1, 15–29.
Granger, C. W. J. and O. Morgenstern, 1970, Predictability of stock market prices (Heath-Lexington Press).
Granger, C. W. J. and Paul Newbold, 1986, Forecasting economic time series (Academic Press, New York).
Hamao, Y., R. W. Masulis and V. Ng, 1990, Correlations in price changes and volatility across international stock markets, Review of Financial Studies 3, No. 2, 281–307.
Higgins, M. and A. Bera, 1990, A class of nonlinear ARCH models, Working Paper, Department of Economics, University of Wisconsin at Milwaukee.
Kariya, T., Y. Tsukuda and J. Maru, 1990, Testing the random walk hypothesis for Japanese stock prices in S. Taylor’s model, Working Paper, University of Chicago.
Nelson, D. B., 1990, Stationarity and persistence in the GARCH(1, 1) model, Econometric Theory 6, 318–334.
Nelson, D. B., 1991, Conditional heteroskedasticity in asset returns: a new approach, Econometrica 59, No. 2, 347–370.
Schwert, W., 1990, Stock volatility and the crash of ’87, Review of Financial Studies 3, No. 1, 77–102.
Taylor, S., 1986, Modeling financial time series (John Wiley & Sons, New York).
Zakoian, J., 1991, Threshold heteroskedasticity model, unpublished manuscript, INSEE.
