ECONOMIC STUDIES 88
PETER WELZ

QUANTITATIVE NEW KEYNESIAN MACROECONOMICS AND MONETARY POLICY

Department of Economics, Uppsala University
Visiting address: Kyrkogårdsgatan 10, Uppsala, Sweden
Postal address: Box 513, SE-751 20 Uppsala, Sweden
Telephone: +46 18 471 11 06
Telefax: +46 18 471 14 78
Internet: http://www.nek.uu.se/
______
ECONOMICS AT UPPSALA UNIVERSITY
The Department of Economics at Uppsala University has a long history. The first chair in Economics in the Nordic countries was instituted at Uppsala University in 1741.
The main focus of research at the department has varied over the years but has typically been oriented towards policy-relevant applied economics, including both theoretical and empirical studies. The currently most active areas of research can be grouped into six categories:
- Labour economics
- Public economics
- Macroeconomics
- Microeconometrics
- Environmental economics
- Housing and urban economics
______
Additional information about research in progress and published reports is given in our project catalogue. The catalogue can be ordered directly from the Department of Economics.
© Department of Economics, Uppsala University
ISBN 91-87268-95-7
ISSN 0283-7668

To my Mother
Abstract

Doctoral dissertation to be publicly examined in Hörsal 1, Ekonomikum, Uppsala University, 28 October 2005, at 10.15 a.m. for the degree of Doctor of Philosophy. The examination will be held in English.
WELZ, Peter, 2005, Quantitative New Keynesian Macroeconomics and Monetary Policy; Department of Economics, Uppsala University, Economic Studies 88, xviii, 128 pp., ISBN 91-87268-95-7.
This thesis consists of four self-contained essays.

Essay 1 compares the dynamic behaviour of an estimated New Keynesian sticky-price model with one-period delayed effects of monetary policy shocks to the dynamics of a structural vector autoregression model. The model is estimated with Bayesian techniques on German pre-EMU data. The dynamics of the sticky-price model following either a demand shock or monetary policy shock are qualitatively and quantitatively comparable to those of the estimated structural VAR. When compared to the delayed-effects model, an alternative model with contemporaneous effects of monetary policy is rejected according to the posterior-odds ratio criterion.

Essay 2 addresses the transmission of exchange-rate variations in an estimated, small open-economy model. In contrast to the standard New Open Economy Macroeconomics framework, imported goods are treated here as material inputs to production. The resulting model structure is transparent and tractable while also able to account for imperfect pass-through of exchange-rate shocks. The model is estimated with Bayesian methods on German data and the key finding is that a substantial depreciation of the nominal exchange rate leads to only modest effects on CPI inflation. An extended version of the model reveals that relatively small weight is placed on foreign consumption.

Essay 3 (with Annika Alexius) analyses the strong responses of long-term interest rates to shocks that are difficult to explain with standard macroeconomic models. Augmenting the standard model to include a time-varying equilibrium real interest rate generates forward rates that exhibit considerable movement at long horizons in response to movements of the policy-controlled short rate. In terms of coefficients from regressions of long-rate changes on short-rate movements, incorporating a time-varying natural rate explains a significant fraction of the excess sensitivity puzzle.
Essay 4 (with Pär Österholm) argues that the common finding of a large and significant coefficient on the lagged interest rate in Taylor rules may be the consequence of misspecification, specifically an omitted-variables problem. Our Monte Carlo study shows that omitting relevant variables from the estimated Taylor rule can generate significant partial-adjustment coefficients, despite the data generating process containing no interest-rate smoothing. We further show that misspecification leads to considerable size distortions in two recently proposed tests to distinguish between interest-rate smoothing and serially correlated disturbances.

Peter Welz, Department of Economics, Uppsala University, P.O. Box 513, SE-751 20 Uppsala, Sweden

ISSN 0283-7668, ISBN 91-87268-95-7
Acknowledgements
Writing one’s thesis is as much a product of intellectual discourse and collaboration as of lonely hours at the writing desk. As a result, I am grateful to many people. I would like to thank my supervisor, Nils Gottfries, for many insightful discussions, encouragement and support throughout my graduate studies. His valuable suggestions, comments and careful reading of initial drafts have helped transform them into the essays that form this thesis. I am indebted to Annika Alexius for her ideas, collaboration and comments and to Anders Klevmarken for guiding me through the first year of the graduate programme. Over the course of my graduate studies I have also benefited from stimulating research environments at the Monetary Policy and Research Departments at Sveriges Riksbank, the Department of Economics at Universitat Pompeu Fabra, the Econometric Modelling Unit of the Research Department at the European Central Bank, the Ifo Institute and the Department of Economics at the University of Munich. I would like to thank the staff at these institutions for their kind hospitality. I am particularly grateful to Ulrich Woitek for his invitations to Munich. I have appreciated and benefited from insightful discussions with Peter McAdam, Malin Adolfson, Ramón Adalid-Lozano, Mikael Carlsson, Kirsten Hubrich, Keith Küster, Frank Smets, Ulf Söderström, Roland Straub, Mathias Trabandt, Mattias Villani, Ulrich Woitek and my co-author, Pär Österholm. Sune Karlsson and Jesper Lindé were kind enough to offer thorough comments on my work, leading to considerable improvements. In Fabio Canova, Sune Karlsson and Mark Steel I have found three excellent teachers who introduced me to the exciting field of Bayesian statistics and econometrics. Volker Clausen provided encouragement throughout my studies.
Working for him as a research assistant while an undergraduate student gave me the opportunity to share his enthusiasm for economic research and laid the basis for my decision to embark on a doctoral programme. The administrative staff at the department, especially Eva Holst, have shown the utmost efficiency and professionalism. Many thanks also to Åke Qvarfort for providing excellent computing services and, beyond that, entertaining conversations. A thank you goes to my fellow students for lightening life as a graduate student: especially to Qian Liu for inspiring and entertaining lunches, Mikael Nordberg for being a discussion partner during the night shifts, and Hanna Ågren, Ranjula Bali Swain and Jovan Zamac for our talks beyond economics. In Alexandre Dmitriev I found an enthusiastic office mate, friend and team worker who keenly took on any intellectual discussion and contributed to finalising most mathematical proofs ‘q.e.d.’. Meredith Beechey carefully proofread the manuscript and provided valuable comments on Chapter 3. Financial support from the Jan Wallander and Tom Hedelius Foundation, C. Borgström and C. Berch’s Foundation, the Ifo Institute Munich and the Centro de Estudios Monetarios y Financieros (CEMFI) Madrid is gratefully acknowledged. Finally I am most grateful to Shanti, Uwe and my mother for their encouragement and support and their patient attempts to stabilise mood cycles. Without their help this project would not have reached this final stage.
Peter Welz
Frankfurt am Main, September 2005

Table of Contents
Acknowledgements ix
Introduction 1
1 Assessing Predetermined Expectations in the Standard Sticky Price Model: A Bayesian Approach 9
   1.1 Introduction ...... 9
   1.2 The Sticky Price Model ...... 11
       1.2.1 Households ...... 11
       1.2.2 Firms ...... 13
       1.2.3 Central Bank ...... 16
       1.2.4 Solution of the Model ...... 17
   1.3 Estimation ...... 18
       1.3.1 Data ...... 18
       1.3.2 Estimation Methodology ...... 19
       1.3.3 Specification of Priors ...... 21
   1.4 Results ...... 22
       1.4.1 Parameter Estimates ...... 22
       1.4.2 Empirical Performance of the Model ...... 25
       1.4.3 Impulse-Response Analysis ...... 28
       1.4.4 Comparison to VAR ...... 30
       1.4.5 Comparison to a Model with Contemporaneous Effects ...... 31
       1.4.6 Estimation Diagnostics ...... 33
   1.5 Summary and Conclusions ...... 34
   Appendices ...... 36
   A Model with Delays ...... 36
       A.1 Matrix Representation ...... 36
   B Bayesian Concepts ...... 40
       B.1 Metropolis-Hastings Algorithm ...... 40
       B.2 Marginal Likelihood Computation ...... 40
   C Model with Contemporaneous Effects ...... 42
       C.1 Estimation Results ...... 42
       C.2 Prior and Posterior Kernels ...... 43
       C.3 Impulse Responses ...... 44

2 Transmission of Exchange-Rate Variations in an Estimated, Small-Open Economy Model 51
   2.1 An Alternative Open-Economy Model ...... 53
       2.1.1 Aggregate Supply ...... 54
       2.1.2 Aggregate Demand and Wage Setting ...... 56
       2.1.3 Monetary Policy ...... 59
       2.1.4 Foreign Economy ...... 59
   2.2 Including Foreign Consumption Goods ...... 60
   2.3 Solution and Estimation ...... 62
       2.3.1 Model Solution ...... 62
       2.3.2 Methodology ...... 62
   2.4 Data and Prior Specification ...... 63
       2.4.1 Data ...... 63
       2.4.2 Prior Specification ...... 64
   2.5 Results ...... 65
       2.5.1 The Benchmark Model ...... 65
       2.5.2 Transmission of Shocks in the Benchmark Model ...... 67
       2.5.3 The Extended Model ...... 68
       2.5.4 Evaluation ...... 69
   2.6 Conclusions ...... 70
   Appendices ...... 72
   A Data and Sources ...... 72
   B Derivation of Model Dynamics in the Benchmark Model ...... 74
       B.1 Firms ...... 74
       B.2 Aggregate Demand ...... 75
       B.3 Net Foreign Assets ...... 76
   C The Extended Model ...... 77
   D Figures and Results ...... 79
       D.1 Benchmark Model ...... 79
       D.2 Exchange-Rate Pass Through and Price Stickiness ...... 83
       D.3 Prior- and Posterior Density - Extended Model ...... 84

3 Can a Time-Varying Equilibrium Real Interest Rate Explain the Excess Sensitivity Puzzle? 91
   3.1 Introduction ...... 91
   3.2 A Stylised Model ...... 93
   3.3 Estimating a Time-Varying Equilibrium Real Rate ...... 95
       3.3.1 Data ...... 96
       3.3.2 Empirical Specification ...... 96
       3.3.3 Results ...... 98
   3.4 The Puzzle Explained? ...... 100
       3.4.1 Impulse Responses of Nominal Interest Rates ...... 101
       3.4.2 Regression Evidence from Interest Rate Changes ...... 103
       3.4.3 Sensitivity Analysis ...... 104
   3.5 Conclusions ...... 105
   Appendices ...... 107
   A Derivation of the Equilibrium Real Rate ...... 107
   B State Space Representation ...... 108
   C Conditional Likelihoods ...... 109
   D Output Gap and Real Rate Gap ...... 110

4 Interest-Rate Smoothing versus Serially Correlated Errors in Taylor Rules: Testing the Tests 113
   4.1 Introduction ...... 113
   4.2 The Taylor Rule ...... 115
       4.2.1 Basic Specification and Empirical Evidence ...... 115
       4.2.2 Related Literature ...... 117
   4.3 Two Tests for Interest-Rate Smoothing ...... 118
   4.4 Model and Data Generating Process ...... 119
   4.5 Simulations and Results ...... 120
       4.5.1 Results ...... 120
   4.6 Discussion and Conclusion ...... 124
List of Figures
1.1 Delayed Effects Model: Prior- and Posterior Density ...... 24
1.2 Autocorrelation Functions ...... 26
1.3 Demand and Monetary Policy Shock ...... 28
1.4 Cost- and Technology Shock ...... 29
1.5 Dynamics of DSGE- and VAR Model ...... 31
1.6 CUSUM-Test ...... 34
1.7 Contemporaneous Effects Model: Prior- and Posterior Density ...... 43
1.8 Demand- and Monetary Policy Shock ...... 44
1.9 Cost- and Technology Shock ...... 45

2.1 Uncovered Interest-Parity Shock - Benchmark Model ...... 67
2.2 Monetary Policy Shock - Benchmark Model ...... 68
2.3 Uncovered Interest-Parity Shock - Extended Model ...... 69
2.4 Prior- and Posterior Density - Benchmark Model ...... 79
2.5 Prior- and Posterior Density (continued) - Benchmark Model ...... 80
2.6 CUSUM Diagnostic - Benchmark Model ...... 81
2.7 CUSUM Diagnostic (continued) - Benchmark Model ...... 82
2.8 Exchange-Rate Pass Through and Price Stickiness ...... 83
2.9 Prior- and Posterior Density - Extended Model ...... 85
2.10 Prior- and Posterior Density (continued) - Extended Model ...... 86

3.1 Estimated Equilibrium Real Rate ...... 98
3.2 Model with Constant Equilibrium Real Rate ...... 102
3.3 Model with Time-Varying Equilibrium Real Rate ...... 103
3.4 Optimisation Diagnostic ...... 109
3.5 Output Gap and Real Rate Gap ...... 110

4.1 First Test under the Assumption of Interest-Rate Smoothing ...... 121
4.2 First Test under the Assumption of Serially Correlated Errors ...... 122
4.3 First Test for the Reduced Form ...... 123
4.4 Second Test: Allowing for Smoothing and Serial Correlation ...... 124
List of Tables
1.1 Prior Specification and Posterior Estimates ...... 23
1.2 Standard Deviations of Simulated and Actual Data ...... 25
1.3 Acceleration Phenomenon and Output-Interest-Rate Dynamics ...... 27
1.4 Model Comparison by Bayes Factors ...... 33
1.5 Prior Specification and Posterior Estimates ...... 42

2.1 Prior Specification and Estimation Results - Benchmark Model ...... 66
2.2 Correlations between Inflation and Nominal Depreciation ...... 70
2.3 Prior Specification and Estimation Results - Extended Model ...... 84

3.1 Estimation Results ...... 99
3.2 Model Calibration ...... 101
3.3 Regression Results ...... 104

4.1 First Test under the Assumption of Interest-Rate Smoothing ...... 121
4.2 First Test under the Assumption of Serially Correlated Errors ...... 122
4.3 First Test for the Reduced Form ...... 123
4.4 Second Test: Allowing for Smoothing and Serial Correlation ...... 124
4.5 Documenting the Distortion of the Test Size ...... 125
Introduction
This thesis consists of four self-contained essays. The first two essays investigate the empirical properties of dynamic stochastic general equilibrium (DSGE) models by means of Bayesian estimation. The third essay offers an explanation for the sensitivity of long-term interest rates and the last essay examines the econometric properties of the well-known Taylor rule. A common theme linking the four essays is the presence of endogenous monetary policy in which the central bank pursues the twin aims of restoring inflation to target and stabilising economic activity.

The term ‘New Keynesian Macroeconomics’ arises throughout the thesis and warrants explanation. The first and second essays analyse a class of models that has become prominent over the last decade, labelled collectively as the ‘New Neoclassical Synthesis’ by Goodfriend and King (1997) or as ‘New Keynesian Models’ by Galí (2003). Well over a decade ago, Mankiw and Romer (1991) edited two volumes under the title ‘New Keynesian Economics’. What do these labels represent? As Goodfriend and King’s label suggests, such models are the result of a synthesis of real business cycle (RBC) theory and New Keynesian theory. Following in the same vein as RBC models, a central assumption in modern DSGE models is that rationally optimising agents make forward-looking decisions based on the solution to an intertemporal optimisation problem. Keynesian elements, such as nominal rigidities and imperfect competition, are introduced to make such models more suitable for the analysis of monetary policy. The neoclassical (RBC theory) aspect yields a dynamic equilibrium, a path that is reached when all prices and wages are perfectly flexible. Deviations around this equilibrium are crucial in influencing wage- and price-setting decisions.
Prices are assumed to be sticky, yielding the Keynesian characteristic that monetary policy has real effects in the short run but is neutral in the long run, making it the ideal tool for stabilisation policy. This class of models is attractive from a theoretical perspective because the structural relations are based on microfoundations, rendering the parameters invariant to policy and thus robust to the Lucas critique. The utility-based framework allows derivation of welfare measures for normative policy analysis and has triggered a rich literature on the optimality of monetary policy actions (Woodford, 2003).

Another important ingredient of the first two essays is the econometric approach taken
to estimate DSGE models. The standard approach in RBC theory, pioneered by Kydland and Prescott (1982) and Long and Plosser (1983), has been to calibrate parameters and compare moments generated from the model with those of actual data. This method, however, lacks formal statistical foundations (Kim and Pagan, 1994) and hinders testing of the results. Sargent (1989) suggested maximum likelihood estimation of DSGE models as an alternative, but potential misspecification due to omitted non-linearities, incorrect assumptions about preferences and technology or incorrectly specified exogenous shocks can easily lead to computational difficulties (Lubik and Schorfheide, 2005).
The Bayesian estimation methodology chosen here follows developments by DeJong et al. (2000a,b), Otrok (2001) and Smets and Wouters (2003). Bayesian analysis formally incorporates uncertainty and prior information regarding the parametrisation of the model. It combines the likelihood with a priori information on the parameters of interest that may come from earlier microeconometric or macroeconometric studies. Prior information about the structural parameters is introduced in the form of probability densities, which re-weight the likelihood function. The degree of uncertainty about the prior information can thereby be expressed in terms of the standard deviation of the prior density. Hence, the common practice of fixing some parameters in maximum likelihood estimation has the Bayesian interpretation that no uncertainty exists about the chosen values. The Bayesian approach also allows a formal comparison of different, and not necessarily nested, models through the marginal likelihood of the model. This approach is employed in both the first and second essays to compare alternative model specifications.
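As a toy illustration of this re-weighting, the unnormalised posterior is simply the likelihood times the prior density. The sketch below is purely hypothetical: the Beta prior and the Gaussian stand-in for a model likelihood are invented for exposition and correspond to no model in this thesis.

```python
import numpy as np
from scipy import stats

# Hypothetical one-parameter example: think of a persistence parameter in
# (0, 1) with a Beta prior centred on 0.75 and a stand-in Gaussian
# "likelihood" peaking at 0.9 (a real DSGE likelihood would come from the
# Kalman filter, not from a closed form like this).
prior = stats.beta(15, 5)                       # prior mean 15/20 = 0.75
log_like = stats.norm(0.9, 0.05).logpdf         # pseudo-likelihood

grid = np.linspace(0.01, 0.99, 981)             # parameter grid
log_post = log_like(grid) + prior.logpdf(grid)  # likelihood re-weighted by prior
weights = np.exp(log_post - log_post.max())
weights /= weights.sum()                        # normalise over the grid

posterior_mean = float((grid * weights).sum())
# The posterior mean lies between the prior mean (0.75) and the likelihood
# peak (0.9); a tighter prior would pull it closer to 0.75, and a dogmatic
# (zero-variance) prior reproduces the practice of fixing the parameter.
```

In the same spirit, the marginal likelihood needed for posterior-odds comparisons is the integral of exactly this unnormalised posterior over the parameter space.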
In the following, I will summarise each chapter and discuss the main findings in turn.

Chapter 1, Assessing Predetermined Expectations in the Standard Sticky-Price Model: A Bayesian Approach, estimates a small-scale sticky-price model with delayed effects of monetary policy. Motivated by stylised facts rather than strict microfoundations, monetary impulses are assumed to have a one-period delayed effect on output and inflation. This is built into the model by having agents form their expectations conditional on information from the previous period. A fraction of consumers and firms use a simple rule of thumb when deciding on consumption expenditure and prices, respectively. Rule-of-thumb consumers simply choose the average aggregate consumption level from the previous period and rule-of-thumb price setters update last period’s average price using last period’s inflation. The approach permits comparison of the dynamics of the DSGE model to those of a recursively estimated VAR on the same data, even when the two identification schemes are not identical. The setup is particularly interesting because a rich literature on monetary VAR models has documented that a recursive identification scheme fits the data quite well.
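Schematically, and only as a stylised illustration (the coefficient definitions and timing conventions here are my shorthand, not necessarily the chapter's exact log-linearised system), the delayed-information and rule-of-thumb assumptions deliver hybrid demand and inflation relations of the form

```latex
% x_t: output gap, \pi_t: inflation, i_t: nominal interest rate.
% Expectations are conditional on period t-1 information (the delay).
\begin{aligned}
x_t   &= \phi_f \, \mathrm{E}_{t-1} x_{t+1} + \phi_b \, x_{t-1}
         - \sigma \left( \mathrm{E}_{t-1} i_t - \mathrm{E}_{t-1} \pi_{t+1} \right)
         + \varepsilon^{x}_{t}, \\
\pi_t &= \omega_f \, \mathrm{E}_{t-1} \pi_{t+1} + \omega_b \, \pi_{t-1}
         + \kappa \, \mathrm{E}_{t-1} x_t + \varepsilon^{\pi}_{t},
\end{aligned}
```

where the backward-looking weights \phi_b and \omega_b increase with the fractions of rule-of-thumb consumers and price setters, and the t-1 dating of expectations is what prevents a period-t monetary surprise from moving x_t and \pi_t on impact.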
Rather than use synthetic data for the Euro area, as is often done, the model is estimated with German data prior to the advent of the European Monetary Union (EMU). The German economy represents the largest share of the EMU and its stable monetary policy regime over almost 25 years makes this an attractive choice of data set. The main findings of the chapter are the following. The inflation adjustment equation exhibits considerable weight on expected inflation while the demand equation is almost purely backward-looking, with prices estimated to be fixed for about 6.5 quarters. The dynamics of the DSGE model following a demand shock and monetary policy shock, respectively, are qualitatively and quantitatively comparable to those of the estimated structural VAR. When compared to the delayed-effects model, an alternative DSGE model that allows for contemporaneous effects of monetary policy is rejected according to the posterior-odds ratio criterion.

Chapter 2, Transmission of Exchange-Rate Variations in an Estimated, Small-Open Economy Model, extends the closed-economy model with contemporaneous effects of monetary policy to the open economy. The focus is upon how exchange-rate shocks, modelled as exogenous impulses to the uncovered interest parity (UIP) condition, influence domestic consumer prices. In contrast to the standard assumption in the New Open Economy Macroeconomics (NOEM) literature but in line with McCallum and Nelson (1999, 2000), foreign goods are assumed to be intermediate inputs to production. Hence, no distinction is made between foreign and domestic consumption goods and their associated price indices. Domestic producers use labour and intermediate inputs to produce final consumption goods. As in Chapter 1, domestic firms are assumed to set their prices in a staggered fashion.
In this international setting, staggered price setting implies that exchange-rate shocks which influence the import price of intermediate goods (that is, the real exchange rate) do not completely pass through to consumer price inflation in the initial period. In the long run, however, pass-through is complete, in line with the empirical exchange-rate literature (Campa and Goldberg, 2002). The motivation for this approach is based on an observation of Burstein et al. (2005), amongst others, that final consumption goods account for a small fraction of all imported goods, paired with the fact that most products contain services with non-tradeable goods characteristics. The model is again estimated on data from Germany, where, according to the input-output tables, imported consumption goods account for only 9 percent of total final private consumption expenditure.

The benchmark model is compared to one that explicitly allows for imported consumption goods. In order to make this channel comparable to the transmission channel on the production side, retail firms are assumed to import consumption goods and set prices in an analogous manner to domestic producers. The results are as follows: a shock to the UIP condition that causes a 1.5 percent nominal depreciation raises marginal cost by 0.36 percent and prices by less than 0.1 percent (or 5.5 percent of the nominal depreciation). In the extended model, the estimated fraction of imported consumption is small (5 percent) but a low degree of price stickiness in the import sector results in a much larger pass-through of exchange-rate shocks to domestic consumer price inflation. On purely statistical grounds, that is in terms of the posterior odds ratio, the two models cannot be distinguished. However, judged by correlations between contemporaneous inflation and contemporaneous and past nominal depreciations, the benchmark model without foreign consumption goods comes closer to the correlations in the actual data.
Chapter 3, Can a Time-Varying Equilibrium Real Interest Rate Explain the Excess Sensitivity Puzzle?, is co-authored with Annika Alexius. We address the empirical regularity that long-term interest rates display considerable movements in response to short-term interest rate shocks, in contrast to the predictions of standard macroeconomic models. While the literature has largely focused on learning mechanisms (Beechey, 2004; Orphanides and Williams, 2003) and varying central bank preferences (Ellingsen and Söderström, 2005), our hypothesis is that a time-varying equilibrium real interest rate can contribute to solving the puzzle. A common assumption is a constant equilibrium real interest rate, for example the mean of the real rate over the sample period. Our suggested mechanism instead assumes that the equilibrium real rate is time-varying and persistent because it is a linear combination of highly persistent shocks. We assume that monetary policy can be described by a Taylor rule in which the equilibrium real rate is embedded in the intercept. Because of the central bank’s response to movements in this real interest rate, forward rates fluctuate more than in the model with a constant equilibrium real rate.

We estimate the time-varying equilibrium real rate, which turns out to be close to a random walk, in an unobserved components model, and this measure is then used to augment a semi-structural New Keynesian model. We find that forward rates move between 29 and 117 basis points at the ten-year horizon in response to a 100 basis point increase in the short rate triggered by structural shocks. Simulated data from the model are used to show that, in terms of the coefficients in regressions of long-rate movements on short-rate movements, incorporating a time-varying equilibrium real rate contributes significantly to explaining the sensitivity puzzle.
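The mechanism admits a back-of-the-envelope sketch (the persistence values below are invented for illustration and are not the chapter's estimates): with the inflation and output gaps expected to close, the k-quarter-ahead forward rate responds to a unit short-rate shock roughly in proportion to the shock's remaining persistence at horizon k.

```python
import numpy as np

horizons = np.arange(1, 41)          # 1 to 40 quarters ahead (10 years)

# Response of the expected future short rate (a stand-in for the forward
# rate) to a unit short-rate shock, under two illustrative assumptions:
rho_transitory = 0.8                               # ordinary AR(1) policy shock
forward_transitory = rho_transitory ** horizons    # decays geometrically
forward_permanent = 1.0 ** horizons                # random-walk r* shock

# At the 10-year horizon the transitory shock has essentially vanished,
# while the permanent equilibrium-real-rate shock still moves the forward
# rate one-for-one, which is the source of the extra long-rate sensitivity.
ten_year_transitory = float(forward_transitory[-1])   # 0.8**40, tiny
ten_year_permanent = float(forward_permanent[-1])     # exactly 1.0
```

A constant-r* model only has shocks of the first kind, so its distant forward rates barely move; mixing in near-random-walk r* shocks produces the large long-horizon responses the chapter documents.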
Chapter 4, Interest-Rate Smoothing versus Serially Correlated Errors in Taylor Rules: Testing the Tests, is written with Pär Österholm. In the paper we examine the size properties of two tests recently designed by English et al. (2003) to distinguish between interest-rate smoothing and autocorrelated errors in estimated Taylor rules. A common empirical finding when estimating Taylor rules is a high degree of interest-rate smoothing, implying an implausibly slow partial-adjustment mechanism. Even with a large coefficient on the lagged interest rate, estimated equations typically continue to exhibit serially correlated errors. Our working hypothesis is that a high degree of interest-rate smoothing may indicate omitted-variable bias. If in fact the central bank responds to more variables than just deviations of inflation from target and the output gap, the omission of relevant explanatory variables that are not orthogonal to inflation and the output gap renders the estimated coefficients inconsistent. Consequently, this leads to the failure of the English et al. (2003) tests, since they are based on an autocorrelation correction to account for omitted variables.

In this chapter we set up a Monte Carlo analysis that employs an estimated structural VAR for the U.S. economy as the basis for the data generating process. The results demonstrate that omitted-variable bias has the potential to generate a falsely significant lagged interest-rate term in a Taylor rule. In addition, we are able to show that the tests have a much larger empirical size than the nominal size of 5 percent. Whilst we do not identify which variables are most likely to cause the omitted-variable bias, our results do point to problems with empirical Taylor rules that feature a high degree of interest-rate smoothing.
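A minimal Monte Carlo in the spirit of this argument (the data generating process below is a made-up static rule, not the chapter's VAR-based one) shows how a persistent omitted regressor manufactures a partial-adjustment coefficient:

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_lagged_rate_coef(omit_z, T=200, reps=300):
    """Average OLS coefficient on the lagged interest rate when the TRUE rule
    i_t = 1.5*pi_t + 0.5*y_t + z_t + e_t has no smoothing at all, and z_t is
    a persistent variable (AR(1), rho = 0.9) the central bank also reacts to."""
    coefs = []
    for _ in range(reps):
        shocks = rng.normal(size=(T, 4))
        z = np.zeros(T)
        for t in range(1, T):
            z[t] = 0.9 * z[t - 1] + shocks[t, 0]    # persistent omitted variable
        pi, y = shocks[:, 1], shocks[:, 2]          # placeholder gaps, i.i.d.
        i = 1.5 * pi + 0.5 * y + z + 0.1 * shocks[:, 3]
        cols = [np.ones(T - 1), pi[1:], y[1:], i[:-1]]
        if not omit_z:
            cols.append(z[1:])                      # correctly specified rule
        X = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(X, i[1:], rcond=None)
        coefs.append(beta[3])                       # coefficient on i_{t-1}
    return float(np.mean(coefs))

rho_misspecified = mean_lagged_rate_coef(omit_z=True)   # large and spurious
rho_correct = mean_lagged_rate_coef(omit_z=False)       # close to the true zero
```

The lagged rate picks up a sizeable coefficient only because it proxies for the omitted persistent variable; including that variable restores the true value of zero.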
References
Beechey, M. (2004). Excess sensitivity and volatility of long interest rates: The role of limited information in bond markets. Sveriges Riksbank Working Paper No 173, Sveriges Riksbank.
Burstein, A., Eichenbaum, M., and Rebelo, S. (2005). Large devaluations and the real exchange rate. Journal of Political Economy (forthcoming).
Campa, J. M. and Goldberg, L. S. (2002). Exchange rate pass-through into import prices. Review of Economics and Statistics (forthcoming).
DeJong, D., Ingram, B., and Whiteman, C. (2000a). A Bayesian approach to dynamic macroeconomics. Journal of Econometrics, 98(2):203–223.
DeJong, D., Ingram, B., and Whiteman, C. (2000b). Keynesian impulses versus Solow residuals: Identifying sources of business cycle fluctuations. Journal of Applied Econometrics, 15(3):311–329.
Ellingsen, T. and Söderström, U. (2005). Why are long rates sensitive to monetary policy? Working Paper, Bocconi University.
English, W. B., Nelson, W. R., and Sack, B. P. (2003). Interpreting the significance of the lagged interest rate in estimated monetary policy rules. Contributions to Macroeconomics, 3(1):Article 5.
Galí, J. (2003). New perspectives on monetary policy, inflation and the business cycle. In Dewatripont, M., Hansen, L., and Turnovsky, S., editors, Advances in Economics and Econometrics: Theory and Applications. Eighth World Congress, volume III, pages 151–197. Cambridge University Press.
Goodfriend, M. and King, R. G. (1997). The New Neoclassical Synthesis and the role of monetary policy. NBER Macroeconomics Annual, 12:231–283.
Kim, K. and Pagan, A. (1994). The econometric analysis of calibrated macroeconomic models. In Pesaran, H. and Wickens, M., editors, Handbook of Applied Econometrics, volume 1. Blackwell Press, London.
Kydland, F. E. and Prescott, E. C. (1982). Time to build and aggregate fluctuations. Econometrica, 50(6):1345–1370.
Long, J. B. and Plosser, C. (1983). Real business cycles. Journal of Political Economy, 91(1):39–69.
Lubik, T. and Schorfheide, F. (2005). A Bayesian look at New Open Economy Macroeconomics. NBER Macroeconomics Annual (forthcoming).
Mankiw, N. G. and Romer, D., editors (1991). New Keynesian Economics, volumes 1 and 2. MIT Press, Cambridge.
McCallum, B. T. and Nelson, E. (1999). Nominal income targeting in an open-economy optimizing model. Journal of Monetary Economics, 43:553–578.
McCallum, B. T. and Nelson, E. (2000). Monetary policy for an open economy: An alter- native framework with optimizing agents and sticky prices. Oxford Review of Economic Policy, 16(4):74–91.
Orphanides, A. and Williams, J. C. (2003). Imperfect knowledge, inflation expectations, and monetary policy. NBER Working Paper No. 9884.
Otrok, C. (2001). On measuring the welfare cost of business cycles. Journal of Monetary Economics, 47(1):61–92.
Sargent, T. (1989). Two models of measurements and the investment accelerator. Journal of Political Economy, 97(2):251–287.
Smets, F. and Wouters, R. (2003). An estimated dynamic stochastic general equilibrium model of the Euro area. Journal of the European Economic Association, 1(5):1123–1175.
Woodford, M. (2003). Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton University Press, Princeton, New Jersey.
Chapter 1
Assessing Predetermined Expectations in the Standard Sticky Price Model: A Bayesian Approach
1.1 Introduction
In recent years a new paradigm has arisen in macroeconomics that combines elements of real business cycle theory (RBC) and New Keynesian Macroeconomics (NKM). The standard model involves a dynamic stochastic general equilibrium (DSGE) structure with intertemporally optimising agents who are assumed to make decisions based on rational expectations, an assumption that reflects the RBC origins of the paradigm. As a result, equilibrium conditions for aggregate variables can be computed from the optimal individual behaviour of consumers and firms. NKM features are introduced by explicitly allowing for monopolistic competition as well as costly, and therefore gradual, price and/or wage adjustment. In this environment, monetary policy takes on a stabilisation role because actions taken by the monetary authority have significant effects on real economic activity in the short to medium run. Furthermore, due to the rigorous microfoundations on which such models are based, it is possible to evaluate the welfare implications of alternative policy regimes. Ideally, such evaluations should serve as the basis for economic policy advice.
The purpose of the present study is to estimate a small-scale DSGE model with sticky prices for the German economy prior to European Monetary Union (EMU). A number of authors, including Smets and Wouters (2003, 2004), Adolfson et al. (2005) and Levin et al. (2005), have recently estimated medium- to large-scale models and found that these models fit the data fairly well. This paper deviates from the existing literature in two respects. The model is kept much simpler and thus closer to the types of models commonly used for
normative monetary policy analysis.1 In addition, I introduce the assumption that the consumption and price-setting decisions of optimising agents are determined one period in advance. In this way decisions are based on information up to and including the previous period, which introduces a delayed effect of monetary impulses on output and inflation in the model.2 While not dictated by microfoundations, this assumption makes it possible to compare the impulse responses of the estimated DSGE model to those of a (recursively) identified VAR model, even when the two identification schemes differ. Many VAR studies of monetary policy have found that an identification scheme that leads to one-period delayed effects of monetary impulses on output and inflation fits the data quite well, at least in closed economies.3 It is therefore interesting to understand the effect of a similar identification strategy within a DSGE model.

The model in this paper also deviates from its simplest counterpart through the assumption of endogenous persistence on both the demand and supply side. I introduce endogenous persistence by assuming that the population can be divided into two types. One group solves an optimisation problem according to the rational expectations hypothesis, whereas the other group deviates from fully rational behaviour and follows a rule-of-thumb. Specifically, rule-of-thumb individuals make decisions based on information from the previous period rather than optimising over an infinite horizon. This assumption may be justified because such forward-looking optimisation is complicated and costly and requires acquiring large amounts of information.

At a more general level, another motivation for introducing endogenous persistence is that the purely forward-looking sticky-price model cannot account for observed persistence in inflation and consumption (Fuhrer and Moore, 1995).
Because the sticky-price model is designed to analyse the short-run effects of monetary policy and to study optimal monetary policy, it is important that the model can account for these empirical regularities. Other studies have employed habit formation to introduce endogenous persistence on the demand side (McCallum and Nelson, 1999) or indexation of a fraction of prices to past inflation to generate persistence on the supply side (Christiano et al., 2005). In contrast, this paper provides a consistent modelling perspective by assuming rule-of-thumb behaviour in both consumption and price-setting, thus treating both sides of the economy symmetrically.

Several of the above-mentioned studies use a synthetic Euro area data set rather than data for individual countries within the EMU. However, analysis at the country level is important and the focus of this paper will be on the German economy. Germany deserves special attention not only because of its relative importance in the aggregate EMU
1. But see Levin et al. (2005) for an exception.
2. Woodford (2003, Chapter 4) also discusses delayed effects of monetary policy.
3. Favero (2001) provides an overview of this literature.

economy but because of its unique monetary regime for the two decades prior to EMU.

The model is estimated with techniques developed by DeJong et al. (2000a,b) and Otrok (2001). The approach takes a Bayesian view that formally incorporates prior information about the parameters of the model. Smets and Wouters (2003, 2004) apply this technique to estimate the 34 parameters of a New Keynesian model with capital investment, sticky prices and wages using Euro area data. Adolfson et al. (2005) apply the same method to estimate an open-economy version of the model with a larger number of parameters. These larger models clearly have a greater chance of empirical success but deviate from the parsimonious sticky-price model commonly used for optimal monetary policy analysis. They also require a number of additional assumptions about the exact investment technology and the use of capital. The approach here is more modest in attempting to fit a small-scale New Keynesian model with 17 parameters. Thus, the model resembles more closely the standard class of models used in theory.

The remainder of the paper is structured as follows. Section 1.2 presents the theoretical model and discusses the solution method, while Section 1.3 covers the estimation methodology and specification of priors. In Section 1.4, the results are presented and the DSGE impulse response functions are compared to those from an identified VAR. Section 1.5 summarises and draws some conclusions.
1.2 The Sticky Price Model
The theoretical model used for estimation purposes here is an extension of the standard sticky-price model with fixed capital commonly used for the analysis of optimal monetary policy (Galí, 2003; Woodford, 2003). Only a brief description is given in this section. No explicit reference is made to money balances because the central bank is assumed to follow an interest-rate rule. Introducing money balances, for instance into an additively separable utility function, would only add a money demand equation that endogenously determines the magnitude of money balances without affecting the general results.
1.2.1 Households
The economy consists of a continuum of infinitely-lived consumers of measure one, where each individual is indexed by $j \in [0,1]$. It is assumed that expenditure decisions are made one period in advance and subsequently altered only due to disturbances to preferences. Following Amato and Laubach (2003), I assume that re-optimisation is costly due to information-gathering or information-processing constraints. Hence, every period a randomly chosen fraction of households $1-\alpha_y$ decides to base its consumption decision on optimising behaviour, whereas the remaining fraction $\alpha_y$ follows a rule-of-thumb that simply implies choosing the optimal consumption level from the previous period, i.e.
\[
C_t^r = C_{t-1}. \qquad (1.1)
\]
Assuming that the individual household is too small to affect the level of consumption $C_t^r$, the re-optimisation problem is to find a sequence $\{C_{jt}^{o}\}_{t=1}^{\infty}$ that maximises the present discounted value of expected life-time utility
\[
E_{t-1} \sum_{s=t}^{\infty} \beta^{s-t} \left[ e^{g_s} \frac{(C_{js}^{o})^{1-\sigma}}{1-\sigma} - e^{u_s} \frac{N_{js}^{1+\varphi}}{1+\varphi} \right], \qquad (1.2)
\]
where $\beta$ is a discount factor, $g_t$ is a preference shock affecting the individual's time discount factor, and an individual's disutility derives from supplying work hours, $N_{jt}$, perturbed by $u_t$ (to be explained below). The intertemporal elasticity of substitution is defined by $\sigma^{-1}$ and the labour supply elasticity is denoted by $\varphi^{-1}$. Note that the expectation in (1.2) is conditional upon information up to and including time period $t-1$, reflecting the predetermined nature of the expenditure decision. Aggregate consumption in the economy is given by the standard Dixit-Stiglitz aggregate
\[
C_t = \left( \int_0^1 C_{it}^{\frac{\varepsilon-1}{\varepsilon}} \, di \right)^{\frac{\varepsilon}{\varepsilon-1}}, \qquad (1.3)
\]
where $\varepsilon > 1$ denotes the elasticity of substitution among the varieties of goods. The associated aggregate price index that gives the minimum expenditure $P_t C_t$ for which the amount $C_t$ of the composite consumption basket can be purchased is given by
\[
P_t = \left( \int_0^1 P_{it}^{1-\varepsilon} \, di \right)^{\frac{1}{1-\varepsilon}}. \qquad (1.4)
\]
This specification leads to the familiar isoelastic demand function for each variety of the consumption good,
\[
C_{it} = \left( \frac{P_{it}}{P_t} \right)^{-\varepsilon} C_t. \qquad (1.5)
\]
Financial markets are assumed to be complete in this economy; that is, each household can insure against any type of idiosyncratic risk through purchase of the appropriate portfolio of securities. Since by assumption households are identical ex ante, they are willing to enter such insurance contracts. The advantage of this assumption is that the representative-agent framework can be preserved, avoiding the need to keep track of the households' wealth distribution as an additional state variable. As a result of the homogeneity assumption, all optimising households choose the same level of consumption $C_t^o$, and per capita consumption in period $t$ is given by $C_t \equiv (1-\alpha_y) C_t^o + \alpha_y C_t^r$. Each household then faces the same flow budget constraint
\[
P_t C_t + (1 + R_t)^{-1} B_t \leq B_{t-1} + W_t N_t + T_t, \qquad (1.6)
\]
i.e. households' income consists of security holdings from the previous period, $B_{t-1}$, labour income, $W_t N_t$, and a transfer, $T_t$, that they receive in order to balance the wealth effects of choosing consumption according to the optimality condition instead of the rule-of-thumb (Amato and Laubach, 2003). Since the model also abstracts from government expenditure, goods-market clearing requires that $C_t = Y_t$ in each time period. Thus the rule-of-thumb for consumption in (1.1) becomes $C_t^r = Y_{t-1}$ and output in period $t$ is given by
\[
Y_t = (1-\alpha_y) C_t^o + \alpha_y Y_{t-1}. \qquad (1.7)
\]
Maximising (1.2) subject to the budget constraint (1.6) and substituting the market-clearing condition and the output relation (1.7) yields an Euler equation whose log-linearised form leads to the following intertemporal IS equation:
\[
y_t = \delta E_{t-1}\{y_{t+1}\} + (1-\delta) y_{t-1} - \frac{(1-\alpha_y)\delta}{\sigma} E_{t-1}\{i_t - \pi_{t+1}\} + \frac{(1-\alpha_y)\delta}{\sigma} \left( g_t - E_{t-1}\{g_{t+1}\} \right), \qquad (1.8)
\]
where $\delta \equiv (1+\alpha_y)^{-1}$. The equation is log-linearised around a zero-inflation steady state, so $\pi_t \equiv \log(P_t/P_{t-1})$ is the inflation rate and $i_t$ is the percent deviation of the nominal interest rate from the steady-state level associated with zero inflation. Furthermore, $y_t$ denotes the percent deviation of output from its steady-state level. For the case in which all households base their consumption decisions on optimisation, i.e. $\alpha_y = 0$, and there are no implementation delays, the standard intertemporal IS equation is obtained:
\[
y_t = E_t\{y_{t+1}\} - \sigma^{-1}\left( i_t - E_t\{\pi_{t+1}\} \right) - \sigma^{-1} E_t\{\Delta g_{t+1}\}. \qquad (1.9)
\]
Comparing (1.8) with (1.9) we notice that introducing rule-of-thumb behaviour in consumption generates a backward-looking term in the IS equation. This is appealing from an empirical point of view as will become clear below.
1.2.2 Firms
Firms indexed by $i \in [0,1]$ produce a continuum of goods in a monopolistically competitive market with a decreasing returns-to-scale technology perturbed by an exogenous labour productivity shock $a_t$ that is common to all firms:
\[
Y_{it} = (e^{a_t} N_{it})^{\alpha}. \qquad (1.10)
\]
Since $\alpha < 1$, firms with different production levels face different real marginal cost, given by
\[
MC_{it} = \frac{W_t}{\alpha \, e^{\alpha a_t} N_{it}^{\alpha-1} P_t} = \frac{W_t N_{it}}{\alpha Y_{it} P_t},
\]
which can be related to average marginal cost by
\[
MC_t = \frac{W_t N_t}{\alpha Y_t P_t}.
\]
Using the production function, the demand function (1.5) and $Y_{it} = C_{it}$, the following relationship can be derived in log-linearised form:4
\[
mc_{it} = mc_t - \frac{\varepsilon(1-\alpha)}{\alpha} (p_{it} - p_t).
\]
Then, real marginal cost can be shown to be given by
\[
mc_t = \frac{1-\alpha+\alpha\sigma+\varphi}{\alpha} y_t - (1+\varphi) a_t + u_t \equiv \widehat{mc}_t + u_t, \qquad (1.11)
\]
where the first-order condition with respect to the labour decision has been substituted in.5

Turning to price setting, I make the same assumption as in Galí and Gertler (1999) and Amato and Laubach (2003) that a fraction of firms re-optimise their prices and another fraction sets prices following a rule-of-thumb. Those firms who are assumed to optimise follow the setup suggested by Calvo (1983): every period a random fraction $1-\theta$ of firms resets prices to the new optimal price, whereas the remaining fraction of firms leaves prices unchanged from the period before. In addition, I assume that a fraction $\alpha_\pi$ does not act according to Calvo's price-setting mechanism but uses a backward-looking rule-of-thumb for setting their prices.6 Analogous to the motivation for the rule-of-thumb behaviour on the demand side, this could be justified by the fact that it is time-consuming and costly to gather information about the stance of the economy and that firms possess limited information-processing capacity. In addition, in order to match the commonly-made assumption in identified VAR models that monetary disturbances do not have contemporaneous effects on inflation, I assume that newly chosen prices take effect one period later (Woodford, 2003, Chapter 4). With these assumptions, the log-linearised aggregate price level evolves according to
\[
p_t = \theta p_{t-1} + (1-\theta) p_t^{*}, \qquad (1.12)
\]
where $p_t^{*}$ is the (log-linearised) index of prices set in period $t$,
\[
p_t^{*} = (1-\alpha_\pi) p_t^{f} + \alpha_\pi p_t^{b}. \qquad (1.13)
\]
4. See for instance Sbordone (2002) or Walsh (2003), Chapter 5.
5. This condition (in log-linearised form) is given by $w_t - p_t = u_t + \varphi n_t + \sigma c_t$.
6. This is the argument of Galí and Gertler (1999). Amato and Laubach (2003) use a slightly different motivation that leads to the same specification of the Phillips curve below.

The latter is a convex combination of the price $p_t^{f}$ set by the forward-looking firms following the Calvo (1983) rule and the price $p_t^{b}$ set by the remaining backward-looking firms that follow the rule-of-thumb. The forward-looking price can be derived from firms' profit maximisation and is given by7
\[
p_t^{f} = (1-\beta\theta) E_{t-1}(mc_t + p_t) + \beta\theta E_{t-1} p_{t+1}^{f}. \qquad (1.14)
\]
The backward-looking price setters are assumed to set their price equal to the average price in the previous period corrected for past inflation, i.e.
\[
p_t^{b} = p_{t-1}^{*} + \pi_{t-1}, \qquad (1.15)
\]
where, importantly, past inflation serves as the forecast for actual inflation. Equations (1.12)-(1.15) can be combined to yield the following 'hybrid' Phillips curve
\[
\pi_t = \gamma^{b} \pi_{t-1} + \gamma^{f} E_{t-1}\{\pi_{t+1}\} + \lambda \left( E_{t-1}\{\widehat{mc}_t\} + u_t \right), \qquad (1.16)
\]
where the parameters are defined as follows
\[
\lambda \equiv \Phi^{-1} (1-\alpha_\pi)(1-\theta)(1-\beta\theta)\mu, \qquad
\gamma^{f} \equiv \Phi^{-1} \beta\theta, \qquad
\gamma^{b} \equiv \Phi^{-1} \alpha_\pi,
\]
\[
\mu \equiv \frac{\alpha}{\alpha + (1-\alpha)\varepsilon}, \qquad
\Phi \equiv \theta + \alpha_\pi \left[ 1 - \theta(1-\beta) \right].
\]
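The mapping from the structural parameters to the reduced-form Phillips-curve coefficients is easy to verify numerically. The sketch below implements the definitions above; the parameter values are purely illustrative and not the estimates reported later in the paper.

```python
# Reduced-form coefficients of the hybrid Phillips curve (1.16) implied by
# the structural parameters.  Parameter values are illustrative only.
def phillips_coefficients(beta, theta, alpha_pi, mu):
    """Map (beta, theta, alpha_pi, mu) into (lambda, gamma_f, gamma_b)."""
    phi = theta + alpha_pi * (1.0 - theta * (1.0 - beta))
    lam = (1.0 - alpha_pi) * (1.0 - theta) * (1.0 - beta * theta) * mu / phi
    gamma_f = beta * theta / phi
    gamma_b = alpha_pi / phi
    return lam, gamma_f, gamma_b

lam, gf, gb = phillips_coefficients(beta=0.99, theta=0.75, alpha_pi=0.3, mu=0.5)
print(lam, gf, gb)
```

Setting `alpha_pi = 0` recovers the purely forward-looking limit (1.17): the backward-looking weight vanishes and the weight on expected inflation collapses to the discount factor, gamma_f = beta.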
Thus, as first suggested by Fuhrer and Moore (1995), inflation is both forward- and backward-looking and depends on the forecastable component of real marginal cost. As in Clarida et al. (2001), the 'cost-push' shock $u_t$ derives from the random disturbance perturbing the labour-supply decision in the utility function in (1.2). In effect, it introduces a wedge between the marginal rate of substitution between leisure and consumption and the real wage, and can be interpreted as a stochastic wage markup. Analogously to the discussion of the IS equation, the purely forward-looking New Keynesian Phillips curve results when all firms follow the Calvo pricing rule, i.e. $\alpha_\pi = 0$, and prices are not preset one period in advance,
\[
\pi_t = \beta E_t \pi_{t+1} + \lambda \widehat{mc}_t + \lambda u_t, \qquad (1.17)
\]
where $\lambda = \frac{(1-\theta)(1-\beta\theta)}{\theta}\mu$. The lagged inflation term in (1.16) is again important to account for the empirically observed inflation persistence. In the purely forward-looking specification, inflation would become a jump variable and the price level a state variable.

Estrella and Fuhrer (2002) have shown that purely forward-looking specifications like (1.17) and the IS relation (1.9) imply counterfactual
7. The assumption is, as in Galí and Gertler (1999), that all consumers choose consumption optimally so that the marginal utility of consumption is identical across consumers.
relationships.8 The former implies that inflation and the output gap $(y_t - \bar{y}_t)$ are positively correlated, while the correlation between the change in inflation and the output gap is negative. This is at odds with the 'acceleration phenomenon' according to which high economic activity should move hand-in-hand with positive movements in inflation. The argument is similar for the IS equation: equation (1.9) stipulates a negative correlation between the consumption level and the expected real interest rate and a positive correlation between consumption growth and the expected real interest rate. This implies that when the expected real interest rate rises above its steady-state value, the level of consumption must decline but its growth rate remain positive. This is only possible when consumption 'jumps' down initially and approaches its lower level from below. To assess these predictions, in Section 1.4 I compare the characteristics of the actual data with those of simulated data from the estimated model.
1.2.3 Central Bank
The model is closed by assuming that the central bank follows a Taylor-type interest-rate rule. That is, it adjusts its instrument in response to deviations of inflation and output from their respective target levels of price stability and potential output. In addition, I include a lagged interest-rate term to account for the fact that central banks generally do not move their instrument in large steps (Goodhart, 1997):
\[
i_t = \phi_i i_{t-1} + (1-\phi_i)\left[ \phi_y (y_t - \bar{y}_t) + \phi_\pi \pi_t \right] + \varepsilon_t^{i}. \qquad (1.18)
\]
Here $\varepsilon_t^{i}$ is a white-noise, exogenous shock to the interest rate that can be interpreted as the unsystematic component of monetary policy. All coefficients are assumed to be positive and the smoothing or partial-adjustment coefficient is assumed to obey the restriction $\phi_i \in [0,1)$. Existence of a stable solution of the model requires certain restrictions on the policy coefficients (Clarida et al., 1999). Namely, in response to an increase in expected inflation, the central bank must increase the nominal interest rate sufficiently to achieve a rise in the real interest rate that dampens economic activity. I confine the analysis to stable unique solutions of the model in the estimation procedure. Specifically, stability and uniqueness of the model solution will be checked by the numerical solution algorithm.

Against the background of the Bundesbank's official money-growth targeting strategy, it may be surprising that in this model central bank behaviour is modelled in terms of the interest rate. However, the instrument of the Bundesbank when conducting monetary policy has always been a short-term interest rate. Clarida and Gertler (1996) argue that the behaviour of the Bundesbank in the post-Bretton-Woods era can be described
8. Under the assumptions made in this model, there is a proportional relationship between marginal cost and the output gap.

well by a Taylor-type rule that also incorporates the output gap. Furthermore, between 1975 and 1985 the Bundesbank announced a rate of 'unavoidable inflation' that ranged between 4.5% and 3%. From 1986 onwards the Bundesbank went a step further, announcing that an inflation rate of 2% was consistent with price stability (Deutsche Bundesbank, 1995). Also supporting the interest-rate rule formulation, it has been observed that the Bundesbank allowed deviations of money growth from target more often than deviations of inflation from its prescribed values; by analysing the effects of changes in forecasted money growth and forecasted inflation on the interest-rate instrument, Bernanke and Mihov (1997) find that money growth plays a quantitatively unimportant role in explaining variations in the interest rate. This leads them to conclude that the implementation of the Bundesbank's monetary policy is described well by an interest-rate rule. However, a recent study by Gerberding et al. (2004) using real-time data shows that a broad monetary aggregate enters significantly into a Taylor-type rule.
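The stability restriction on the policy coefficients mentioned above can be illustrated with a simple numerical check. For the purely forward-looking variant of the model, a well-known sufficient condition for determinacy is the generalised Taylor principle of Woodford (2003), under which the long-run response of the interest rate to inflation must exceed one. The delayed-expectations model itself is checked numerically by the solution algorithm, so the sketch below is only a heuristic illustration with made-up numbers.

```python
# Heuristic determinacy check for a Taylor-type rule in the purely
# forward-looking variant of the model: phi_pi + (1 - beta) / lam * phi_y > 1
# (the generalised Taylor principle).  All numbers here are illustrative,
# not the paper's estimates.
def satisfies_taylor_principle(phi_pi, phi_y, beta, lam):
    return phi_pi + (1.0 - beta) / lam * phi_y > 1.0

print(satisfies_taylor_principle(phi_pi=1.5, phi_y=0.5, beta=0.99, lam=0.1))  # True
print(satisfies_taylor_principle(phi_pi=0.8, phi_y=0.0, beta=0.99, lam=0.1))  # False
```

A rule that reacts to inflation less than one-for-one in the long run (second call) violates the condition, which is exactly the configuration the estimation procedure rules out.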
1.2.4 Solution of the Model
The three endogenous variables, $y_t$, $\pi_t$ and $i_t$, are determined by three equations: the IS equation (1.8), the Phillips curve (1.16) and the monetary-policy rule (1.18). The stochastics of this system of rational-expectations equations are assumed to be driven by four independent exogenous shocks: the preference shock $g_t$, the productivity shock $a_t$, the cost-push shock $u_t$, and the monetary policy shock $\varepsilon_t^{i}$. The first three are assumed to follow stationary AR(1) processes, while the monetary policy shock is assumed to be white noise. Because data for three series are employed, at least three shocks need to be specified in order to avoid a singular covariance matrix in the likelihood computation. However, Smets and Wouters (2003) note that allowing for richer stochastic specifications than dictated by the number of time series may be helpful in the estimation procedure. The system has the following matrix representation9
\[
\Gamma_0(\xi) s_t = \Gamma_1(\xi) s_{t-1} + \Psi z_t + \Pi \vartheta_t, \qquad (1.19)
\]
and is solved using the method developed by Sims (2002). $\xi$ is a $(17 \times 1)$-vector containing the parameters of the model, including the autoregressive coefficients $\rho_g$, $\rho_u$, $\rho_a$ and the standard deviations of the shock processes $\sigma_d$, $d \in \{g, u, a, i\}$:
\[
\xi = \left( \beta, \alpha, \varepsilon, \sigma, \varphi, \theta, \phi_i, \phi_\pi, \phi_y, \alpha_\pi, \alpha_y, \rho_g, \rho_a, \sigma_g, \sigma_u, \sigma_a, \sigma_i \right)'.
\]
The matrices $\Gamma_0(\xi)$, $\Gamma_1(\xi)$, $\Psi$ and $\Pi$ are the $(12 \times 12)$, $(12 \times 12)$, $(12 \times 4)$ and $(12 \times 6)$ coefficient matrices respectively, $z_t$ is the $(4 \times 1)$-vector of exogenous disturbances, and $\vartheta_t = X_t - E_{t-1} X_t$ is a $(6 \times 1)$-vector of expectational errors, i.e. $E_t(\vartheta_{t+1}) = 0_{(6 \times 1)}$, with
\[
s_t = \left( y_t, \pi_t, i_t, mc_t, g_t, a_t, y_t^{1}, y_t^{2}, \pi_t^{1}, \pi_t^{2}, i_t^{1}, mc_t^{1} \right)',
\]
where I have defined $x_t^{1} \equiv E_t x_{t+1}$ for $x_t \in \{y_t, y_t^{1}, \pi_t, \pi_t^{1}, i_t, mc_t\}$ (so that $y_t^{2} \equiv E_t y_{t+1}^{1}$ and $\pi_t^{2} \equiv E_t \pi_{t+1}^{1}$) and added the six equations $x_t = x_{t-1}^{1} + \vartheta_t^{x}$ to the system.10 The general solution to (1.19) has a VAR(1) representation

9. See Appendix A for full details.
\[
s_t = T(\xi) s_{t-1} + R(\xi) \eta_t. \qquad (1.20)
\]
Note that the system is stochastically singular since $s_t$ has dimension 12 but there are only four stochastic shocks, rendering the covariance matrix of the disturbances singular. Hence the series for output, inflation and the interest rate are selected via the measurement equation
\[
Y_t = Z s_t, \qquad (1.21)
\]
where $Y_t$ is a $(3 \times 1)$-vector and $Z$ a $(3 \times 12)$-matrix. In the model the natural level of output, the level of output obtained when prices are flexible and no cost shocks are present, is driven by the unobservable stochastic technology process. Hence, it is treated as unobservable in the estimation procedure as well.
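Once the solution (1.20) and the measurement equation (1.21) are in hand, impulse responses of the observables follow from iterating the state transition forward after a one-time shock. The sketch below illustrates the mechanics with small, made-up matrices `T`, `R` and `Z`, not the solved 12-dimensional system.

```python
import numpy as np

# Impulse responses from the solved state-space form (1.20)-(1.21):
#   s_t = T s_{t-1} + R eta_t,   Y_t = Z s_t.
# T, R and Z below are small illustrative matrices, not the estimated system.
def impulse_responses(T, R, Z, shock, horizon):
    """Response of Y_{t+h}, h = 0, ..., horizon-1, to a one-time shock eta_0."""
    s = R @ shock                      # impact effect on the state
    irf = [Z @ s]
    for _ in range(horizon - 1):
        s = T @ s                      # state propagates according to T
        irf.append(Z @ s)
    return np.array(irf)

T = np.array([[0.9, 0.0], [0.3, 0.5]])  # stable transition matrix
R = np.array([[1.0], [0.0]])            # shock loads on the first state
Z = np.array([[1.0, 0.0], [0.0, 1.0]])  # both states observed
irf = impulse_responses(T, R, Z, shock=np.array([1.0]), horizon=20)
print(irf[:3])
```

Because the eigenvalues of `T` lie inside the unit circle, the responses die out, which is exactly the stability requirement imposed on the estimated model.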
1.3 Estimation
1.3.1 Data
The data range from the first quarter of 1975 to the fourth quarter of 1998, covering the post-Bretton-Woods era up until the launch of European Monetary Union. Real Gross Domestic Product (GDP) and the Consumer Price Index (CPI) are taken from the OECD Main Economic Indicators database, and the interest-rate series is constructed as the quarterly average of the monthly average of the bank call rate published in Deutsche Bundesbank's time-series database.11 The raw data are transformed so as to be conformable with the theoretical model. GDP data for Western Germany are employed until 1991Q3, after which the GDP series is for unified Germany. I account for the level shift and the possible trend break by regressing each series on an individual constant and individual linear trend. An alternative method to treat the statistical effect of reunification on the output series would be to link the series for Western Germany, for which observations are available until 1994Q4, with the series for unified Germany, for which data are available from 1991Q1 onwards. This strategy may understate the initial economic boom related to
10. See Sims (2002) for a thorough discussion of this method and again Appendix A for a brief description.
11. http://www.bundesbank.de/stat/zeitreihen/index.htm, series code SU0101.

reunification that began shortly after the inner German border was opened at the end of 1989, and would also assume that the Eastern and Western German economies had equal growth rates prior to reunification, which seems implausible.

The inflation series is calculated as the difference between CPI inflation and a quasi inflation-target series. This series, published in Gerberding et al. (2004), is comprised of announcements made by the Bundesbank about what they first called 'unavoidable inflation' and later termed inflation consistent with price stability. A series for the nominal interest rate is obtained by regressing the interest rate on this inflation-target series and removing the mean of the resulting series.
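The detrending step described above can be sketched as an OLS regression on a separate constant and linear trend for each sub-sample, which is equivalent to including a post-break dummy and its interaction with the trend. The series and break position below are simulated stand-ins, not the actual GDP data.

```python
import numpy as np

# Removing a level shift and trend break (reunification, 1991Q3) by
# regressing a series on a separate constant and linear trend for each
# sub-sample.  The series is simulated; the break index mimics 1991Q3
# in a 1975Q1-1998Q4 quarterly sample.
def detrend_with_break(y, break_idx):
    n = len(y)
    t = np.arange(n, dtype=float)
    post = (t >= break_idx).astype(float)          # dummy: 1 after the break
    X = np.column_stack([np.ones(n), t, post, post * t])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS fit
    return y - X @ coef                            # residual = detrended series

n, brk = 96, 66                                    # 96 quarters, break at 1991Q3
t = np.arange(n, dtype=float)
y = 100.0 + 0.5 * t + np.where(t >= brk, 8.0 + 0.2 * (t - brk), 0.0)
resid = detrend_with_break(y, brk)
print(np.max(np.abs(resid)))   # near zero: deterministic components removed
```

On an actual series the residual would of course retain the cyclical component, which is what enters the estimation as the output series.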
1.3.2 Estimation Methodology
Traditionally, DSGE models are calibrated such that certain theoretical moments implied by the model match as closely as possible their empirical counterparts.12 However, this method lacks formal statistical foundations (Kim and Pagan, 1994) and makes testing the results difficult.13 One approach used recently in the monetary-economics literature that improves on this shortcoming is to minimise the distance between the theoretical impulse response functions of the model and the empirical impulse responses estimated from a VAR (Christiano et al., 2005; Rotemberg and Woodford, 1997, for example). Since DSGE models provide by construction only an abstraction of reality, one advantage of this method is that it allows the researcher to focus on the dimension of the model for which it was designed, for example, the effects of a monetary policy shock.

Following Sargent (1989), it has become more common to estimate monetary DSGE models with maximum likelihood (ML) (Bergin, 2003; Kim, 2000). Well-known problems that arise with this method are that parameters take on corner solutions or implausible values, and that the likelihood function may be flat in some dimensions. GMM estimation is a popular alternative for estimating intertemporal models (Galí and Gertler, 1999, and others). However, Christiano and den Haan (1996) show, by estimating a business cycle model on U.S. data, that GMM estimators often do not have the distributions implied by asymptotic theory. In addition, Lindé (2005) finds that parameters in a simple New Keynesian model are likely to be estimated imprecisely and with bias. Parameters sometimes need to be fixed beforehand, implying that results are valid conditional only on these a priori 'calibrated' parameters. This aspect often remains undiscussed in the final assessment of the model, despite the fact that calibration calls for a careful sensitivity analysis.
12. For an overview see Favero (2001).
13. See, however, Canova and Ortega (2000) for a discussion of how testing in calibrated DSGE models could be conducted.
The Bayesian approach taken in this paper follows work by DeJong et al. (2000a,b), Otrok (2001) and Smets and Wouters (2003, 2004)14 and can be seen as a combination of likelihood methods and the calibration methodology. Bayesian analysis allows uncertainty and prior information regarding the parametrisation of the model to be formally incorporated by combining the likelihood with prior information on the parameters of interest from earlier microeconometric or macroeconometric studies. In the Bayesian approach such values can be employed as the means or modes of the prior densities to be specified, while a priori uncertainty can be expressed by choosing the appropriate prior variance. For example, the restriction that AR(1) coefficients lie within the unit interval can be implemented by choosing a prior density that covers only that interval, such as a truncated normal or a beta density. This strategy may help to mitigate problems such as a potentially flat likelihood, as estimates are pulled towards values that the researcher would consider sensible a priori. This effect will be stronger when the data carry little information about a certain parameter, that is, when the likelihood is relatively flat, whereas the effect will be only moderate when the likelihood is very peaked. Uncertainty about the specification of the structural model can also be accommodated by the Bayesian approach; I do so in the robustness analysis in Section 1.4 when the model is compared to a model without delayed effects. By Bayes' theorem, the posterior density $\varphi(\xi \mid Y)$ is related to prior and likelihood as follows:
\[
\varphi(\xi \mid Y) = \frac{f(Y \mid \xi)\pi(\xi)}{f(Y)} \propto f(Y \mid \xi)\pi(\xi) = L(\xi \mid Y)\pi(\xi), \qquad (1.22)
\]
where $\pi(\xi)$ denotes the prior density of the parameter vector $\xi$, $L(\xi \mid Y) \equiv f(Y \mid \xi)$ is the likelihood of the sample $Y$, and $f(Y) = \int f(Y \mid \xi)\pi(\xi)\, d\xi$ is the unconditional sample density.
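The posterior kernel in (1.22) is simply the sum of the log-likelihood and the log prior. The sketch below illustrates this with a toy Gaussian AR(1) likelihood standing in for the model's Kalman-filter likelihood, and a Beta(2, 2) prior that confines an autoregressive coefficient to the unit interval, as discussed above; all numbers are illustrative.

```python
import math

# Unnormalised log posterior = log likelihood + log prior, as in (1.22).
# Illustration only: a Gaussian AR(1) likelihood for a toy series stands in
# for the model's Kalman-filter likelihood; the Beta(2, 2) prior keeps the
# AR coefficient rho inside (0, 1), as a beta prior on an AR(1) coefficient would.
def log_beta_pdf(x, a, b):
    if not 0.0 < x < 1.0:
        return -math.inf                           # prior support enforces rho in (0, 1)
    log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return (a - 1.0) * math.log(x) + (b - 1.0) * math.log(1.0 - x) - log_B

def log_likelihood_ar1(y, rho, sigma=1.0):
    ll = 0.0
    for t in range(1, len(y)):
        e = y[t] - rho * y[t - 1]                  # one-step prediction error
        ll += -0.5 * math.log(2.0 * math.pi * sigma**2) - 0.5 * e**2 / sigma**2
    return ll

def log_posterior(y, rho):
    return log_likelihood_ar1(y, rho) + log_beta_pdf(rho, 2.0, 2.0)

y = [0.0, 0.5, 0.9, 0.7, 0.8, 0.4]
print(log_posterior(y, 0.5))
print(log_posterior(y, 1.2))   # -inf: outside the prior's support
```

The second evaluation shows how the prior truncates the parameter space: a value of rho outside the unit interval receives zero posterior density regardless of the likelihood.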
The unconditional sample density does not depend on the unknown parameters and consequently serves only as a proportionality factor that can be neglected for estimation purposes. In this context it becomes clear that the main difference between 'classical' and Bayesian statistics is a matter of conditioning. Likelihood-based non-Bayesian methods condition on the unknown parameters $\xi$ and compare $f(Y \mid \xi)$ with the observed data. Bayesian methods condition on the observed data, use the full distribution $f(\xi, Y) = f(Y \mid \xi)\pi(\xi)$ and require specification of a prior density $\pi(\xi)$.

The likelihood function can be computed with the Kalman filter using the state-space representation of the above model, where (1.20) is the transition equation and (1.21) is the measurement equation. Denoting by $\hat{s}_t$ the optimal estimator of $s_t$ and by $P_t = E\left[(s_t - \hat{s}_t)(s_t - \hat{s}_t)'\right]$ the covariance matrix of the estimation error (with $\hat{s}_{t|t-1}$ and $P_{t|t-1}$ the corresponding quantities based on observations up to $Y_{t-1}$),
14. There are by now numerous applications of the approach, for example Adolfson et al. (2005), Justiniano and Preston (2004), Lubik and Schorfheide (2003), Rabanal and Rubio-Ramírez (2005).
the prediction equations are given by
\[
\hat{s}_{t|t-1} = T \hat{s}_{t-1}, \qquad (1.23)
\]
\[
P_{t|t-1} = T P_{t-1} T' + R Q R', \qquad (1.24)
\]
and the updating equations are
\[
\hat{s}_t = \hat{s}_{t|t-1} + P_{t|t-1} Z' F_t^{-1} \left( Y_t - Z \hat{s}_{t|t-1} \right), \qquad (1.25)
\]
\[
P_t = P_{t|t-1} - P_{t|t-1} Z' F_t^{-1} Z P_{t|t-1}, \qquad (1.26)
\]
where $F_t = Z P_{t|t-1} Z'$ (Harvey, 1989, p. 106). The prediction equations are one-step-ahead forecasts based on information up to and including time $t-1$, while the updating equations describe the solution to the signal-extraction problem once the new observation $Y_t$ becomes available; $Q = E[\eta_t \eta_t']$. The recursions are then initialised with the values of the unconditional distribution, $\hat{s}_{1|0} = 0$ and $\mathrm{vec}(P_{1|0}) = (I - T \otimes T)^{-1}\mathrm{vec}(RQR')$.15 Finally, the likelihood can be computed conditional upon the initial observation
$Y_0$ using a prediction-error decomposition (Harvey, 1989, p. 125). The prediction error is defined as $\nu_t = Y_t - Z \hat{s}_{t|t-1}$; assuming that $s_t$ is Gaussian, $\hat{s}_{t|t-1}$ is also Gaussian with covariance matrix $P_{t|t-1}$, and $\nu_t$ is Gaussian with covariance matrix $F_t$. It follows that the log-likelihood can be written as