“The Q-Group” ®

THE INSTITUTE FOR QUANTITATIVE RESEARCH ® IN FINANCE

Founded 1966 -- Over 50 years of

Research and Seminars Devoted to the

State-of-the-Art in Investment Technology

Summary of Proceedings

Volume 8: 2011-2015

ALL RIGHTS RESERVED. NO PART OF THIS PUBLICATION MAY BE REPRODUCED, STORED IN A RETRIEVAL SYSTEM, OR TRANSMITTED, IN ANY FORM OR BY ANY MEANS ELECTRONIC, MECHANICAL, PHOTOCOPYING, RECORDING OR OTHERWISE, WITHOUT THE PRIOR WRITTEN PERMISSION OF THE PUBLISHER AND COPYRIGHT OWNER.

Published by The Institute for Quantitative Research in Finance, P.O. Box 6194, Church Street Station, New York, NY 10249-6194

Copyright 2017 by The Institute for Quantitative Research in Finance, P.O. Box 6194, Church Street Station, New York, NY 10249-6194

Printed in the United States of America

PREFACE TO VOLUME 8


TABLE OF CONTENTS

PREFACE TO VOLUME 8 ...... iv

Active Asset Management – Alpha ...... 1
1. Cross-Firm Information Flows (Spring 2015) 1
   Anna Scherbina, Bernd Schlusche
2. Dissecting Factors (Spring 2014) 2
   Juhani Linnainmaa
3. The Surprising “Alpha” From Malkiel’s Monkey And Upside-Down Strategies (Fall 2013) 4
   Jason Hsu
4. Will My Risk Parity Strategy Outperform? (Spring 2012) 5
   Lisa R. Goldberg

Active Asset Management – Mutual Fund Performance ...... 7
5. Target Date Funds (Fall 2015) 7
   Ned Elton, Marty Gruber
6. Patient Capital Outperformance (Spring 2015) 8
   Martijn Cremers
7. Do Funds Make More When They Trade More? (Spring 2015) 10
   Robert Stambaugh
8. A Sharper Ratio: A General Measure For Ranking Investment Risks (Spring 2015) 11
   Kent Smetters
9. Scale And Skill In Active Management (Spring 2014) 13
   Lubos Pastor
10. Time-Varying Fund Manager Skill (Fall 2013) 14
   Marcin Kacperczyk
11. Does Mutual Fund Size Matter: The Relationship Between Size And Performance (Fall 2012) 16
   Edwin J. Elton, Martin J. Gruber
12. Conflicting Family Values In Fund Families (Spring 2011) 17
   Utpal Bhattacharya

Alternative Investments ...... 18
13. The Investment Performance Of Art And Other Collectibles (Spring 2014) 18
   Elroy Dimson
14. Estimating Returns From Limited Partner Cash Flows (Spring 2014) 19
   Andrew Ang
15. Impact Of Hedge Funds On Asset Markets (Spring 2014) 21
   Andrew Patton
16. Global Pricing Of Gold (Spring 2013) 22
   Campbell R. Harvey
17. The Cost Of Capital For Alternative Investments (Spring 2013) 24
   Erik Stafford


18. Can Financial Engineering Cure Cancer? A New Approach For Funding Large-Scale Biomedical Innovation (Fall 2012) 26
   Andrew Lo
19. Institutional-Quality Hedge Funds (Fall 2011) 27
   David Hsieh
20. The Effect Of Housing On Portfolio Choice (Spring 2011) 28
   Raj Chetty
21. Portfolio Choice With Illiquid Assets (Spring 2011) 29
   Andrew Ang

Banking and Financial Industry ...... 31
22. Toward Determining Systemic Importance (Spring 2012) 31
   Mark Kritzman
23. Why Bank Equity Is Not Expensive (Fall 2011) 33
   Anat Admati
24. Neglected Risks, Financial Innovation, And Financial Fragility (Spring 2011) 36
   Andrei Shleifer
25. Who Is Doing What To Whom On And Why? (Spring 2011) 38
   Charlie Gasparino

Behavioral Finance ...... 39
26. “The Worst, The Best, Ignoring All The Rest: The Rank Effect And Trading Behavior” (Fall 2014) 39
   Samuel Hartzmark
27. The Success Equation: Untangling Skill And Luck In Business, Sports And Investing (Fall 2013) 40
   Michael Mauboussin
28. Natural Expectations, Economic Fluctuations, And Asset Pricing (Spring 2012) 41
   David Laibson
29. New Research In Behavioral Finance (Spring 2012) 43
   Nicholas Barberis

Economics ...... 45
30. The College Earnings Premium, The Higher Ed Scorecard, And Student “Sorting”: Myths, Realities And Explanations (Fall 2015) 45
   Eleanor Dillon
31. Risk (Fall 2015) 47
   Francis Longstaff
32. Systematic Risk And The Macroeconomy: An Empirical Evaluation (Fall 2015) 49
   Bryan Kelly
33. Why Global Demographics Matters For Macro-Finance, Asset Pricing, Portfolios And Pensions (Fall 2014) 50
   Amlan Roy
34. “The Financial Crisis Six Years On: What Have We Learned” (Fall 2014) 52
   Jeremy Siegel
35. The Economics Of Big Time College Sports (Spring 2014) 54
   Roger Noll
36. Housing And The Macro Economy (Spring 2013) 56

   Karl Case
37. Are We Headed For A Demographic Winter? (Fall 2012) 57
   Joel Kotkin
38. Hurricane Outlook For The 21st Century (Spring 2012) 57
   Hugh E. Willoughby
39. Investment For Effect: Design For Undersea Warfare In The 21st Century (Fall 2011) 58
   Rear Admiral Michael Yuring
40. Demography Changes, Financial Markets And The Economy (Fall 2011) 59
   Robert D. Arnott

Factor Models ...... 61
41. Cross-Sectional Asset Pricing With Individual Stocks: Betas vs. Characteristics (Fall 2015) 61
   Tarun Chordia
42. Betting Against Beta Or Demand For Lottery? (Spring 2015) 64
   Scott Murray
43. The Recognition Of One Of Our Most Valued Members (Spring 2015) 66
   Jack Treynor
44. “Risky Value” (Fall 2014) 67
   Scott Richardson
45. The Not So Well-Known Three-and-One-Half Factor Model (Spring 2014) 69
   Steven Thorley
46. A Model Of Momentum (Spring 2014) 71
   Lu Zhang
47. Horizon Pricing (Spring 2013) 72
   Robert Korajczyk
48. Dynamic Conditional Beta (Spring 2013) 73
   Robert Engle
49. Momentum Crashes (Fall 2012) 75
   Kent Daniel
50. Speculative Betas (Fall 2012) 77
   Harrison Hong
51. Revisiting Exotic Beta (Fall 2011) 78
   Mark Carhart
52. Betting Against Beta (Fall 2011) 80
   Andrea Frazzini

Fixed Income, Bonds, and Interest Rates ...... 81
53. The Equilibrium Real Funds Rate: Past, Present And Future (Fall 2015) 81
   Ken West
54. The Return On Your Money Or The Return Of Your Money: Investing When Bond Yields Are Negative (Fall 2015) 83
   Vineer Bhansali
55. On The Fundamental Relation Between Equity Returns And Interest Rates (Spring 2015) 85
   Robert Whitelaw
56. Central Bank Policy Impacts On The Distribution Of Future Interest Rates (Fall 2013) 86


   Douglas Breeden
57. Duration Targeting: A New Look At Bond Portfolios (Spring 2013) 87
   Martin Leibowitz, Stanley Kogelman

International Finance ...... 88
58. The International CAPM Redux (Spring 2015) 88
   Adrien Verdelhan
59. PPP And Exchange Rates: Evidence From Online Data In Seven Countries (Spring 2015) 90
   Alberto Cavallo
60. Is The World Stock Market Co-Movement Changing? (Fall 2015) 91
   N. K. Chidambaran
61. Global Crises And Equity Market Contagion (Spring 2012) 93
   Geert Bekaert
62. The International Monetary System: Living With Asymmetry (Fall 2011) 95
   Maurice Obstfeld

Investor Sentiment ...... 96
63. The Behavior Of Sentiment-Induced Share Returns: Measurement When Fundamentals Are Observable (Spring 2015) 96
   Ian Cooper
64. Investor Sentiment Aligned: A Powerful Predictor Of Stock Returns (Spring 2014) 98
   Guofu Zhou
65. Bubbling With Excitement: An Experiment (Spring 2012) 99
   Terrance Odean

Passive Asset Management ...... 101
66. The Mutual Fund Industry Worldwide: Explicit And Closet Indexing, Fees, And Performance (Spring 2013) 101
   Martijn Cremers

Predictability ...... 102
67. King Of The Mountain: Finding -Term Efficacy In The Shiller P/E Ratio (Fall 2015) 102
   Rob Arnott
68. Does Academic Research Destroy Stock Return Predictability? (Fall 2014) 104
   David McLean
69. The Shiller CAPE Ratio: A New Look (Fall 2013) 106
   Jeremy J. Siegel
70. Changing Times, Changing Values: A Historical Analysis Of Sectors Within The U.S. Stock Market 1872-2013 (Fall 2013) 108
   Robert J. Shiller
71. Is Economic Growth Good For Investors? (Fall 2013) 111
   Jay Ritter
72. Market Expectations In The Cross Section Of Present Values (Spring 2013) 112
   Bryan Kelly
73. Innovative Efficiency And Stock Returns (Fall 2011) 113
   David Hirshleifer
74. The Other Side Of Value: Good Growth And The Gross Profitability Premium (Fall 2011) 115
   Robert Novy-Marx


75. Share Issuance And Factor Timing (Spring 2011) 116
   Robin Greenwood
76. Hard Times (Spring 2011) 118
   Christopher Polk
77. Socially Responsible Investing And Expected Stock Returns (Spring 2011) 119
   Sudheer Chava

Recovery Theory ...... 121
78. The Recovery Theorem (Spring 2013) 121
   Stephen A. Ross

Risk and Return: Variance/Volatility, Correlation & Equity Premium ...... 122
79. How Do Investors Measure Risk? (Fall 2015) 122
   Jonathan Berk
80. Monetary Policy Drivers Of Bond And Equity Risks (Fall 2015) 124
   Carolin Pflueger
81. Origins Of Stock Market Fluctuations (Fall 2014) 125
   Martin Lettau
82. “The Divergence Of High- And Low-Frequency Estimation: Causes And Consequences” (Fall 2014) 127
   Mark Kritzman
83. “Visualizing The Time Series Behavior Of Volatility, Serial Correlation And Investment Horizon” (Fall 2014) 128
   Ralph Goldsticker
84. Panel Presentations And Discussion Of Risk Parity And “Where Will It Take Us Next Year” (Fall 2013) 130
   Clifford Asness, Paul Kaplan
85. The Equity Premium And Beyond (Fall 2012) 132
   Antti Ilmanen
86. Panel Discussion: Rethinking The Equity Risk Premium (Fall 2012) 133
   Brett Hammond, Marty Leibowitz, Bill Sharpe, Laurence B. Siegel
87. What Is Risk Neutral Volatility? (Spring 2012) 136
   Stephen Figlewski

Securities Lending & Short Selling ...... 138
88. A Solution To The Palm- Spin-Off Puzzles (Spring 2015) 138
   Chester Spatt
89. “In Short Supply: Short-sellers And Stock Returns” (Fall 2014) 140
   Charles Lee
90. Short Sellers And Financial Misconduct (Spring 2011) 142
   Jonathan M. Karpoff

Social Security, Retirement & Health Care ...... 144
91. “Made Poorer By Choice: Worker Outcomes In Social Security vs. Private Retirement Accounts” (Fall 2014) 144
   Terrance Odean


92. “How Much Would It Take? Achieving Retirement Income Equivalency Between Final-Average Pay Defined Benefit Plan Accruals And Voluntary Enrollment 401(k) Plans In The Private Sector” (Spring 2014) 146
   Jack VanDerhei
93. Optimal Annuitization With Stochastic Mortality Probabilities (Fall 2013) 147
   Kent Smetters
94. Health And Mortality Delta: Assessing The Welfare Cost Of Household Choice (Fall 2012) 149
   Motohiro Yogo
95. Health Care Reform In The United States: Past, Present And Future (Fall 2012) 150
   Jonathan Gruber
96. The Market Value Of Social Security (Spring 2012) 151
   Stephen Zeldes, John Geanakoplos
97. Pension Funding And Investment Strategy Issues From Market Value Account (Spring 2012) 153
   Barton Waring
98. Diversification Over Time (Fall 2011) 155
   Barry Nalebuff

Trading, Information, Transaction Cost ...... 157
99. Forced Liquidations, Fire Sales, And The Cost Of Illiquidity (Fall 2015) 157
   Andrew Weisman
100. Trading Costs Of Asset Pricing Anomalies (Fall 2014) 159
   Toby Moskowitz
101. A Primer On Fast Electronic Markets (Spring 2014) 161
   Larry Harris
102. The High-Frequency Trading Arms Race: Frequent Batch Auctions As A Market Design Response (Spring 2014) 162
   Peter Cramton
103. Those Who Know Most: Insider Trading In 18th C. Amsterdam (Fall 2013) 164
   Peter Koudijs
104. No News Is News: Do Markets Underreact To Nothing? (Spring 2013) 165
   Kelly Shue
105. Which News Moves Stock Prices? A Textual Analysis (Spring 2013) 167
   Shimon Kogan
106. High Frequency Trading And Price Discovery (Fall 2012) 168
   Jonathan Brogaard
107. The Impacts Of Automation And High Frequency Trading On Market Quality (Fall 2012) 169
   Robert Litzenberger
108. Noise As Information For Illiquidity (Spring 2012) 170
   Jiang Wang
109. What Does Equity Orderflow Tell Us About The Economy? (Fall 2011) 172
   Michael Brandt
110. The Flash Crash: The Impact Of High Frequency Trading On An Electronic Market (Spring 2011) 174
   Albert Kyle


111. Paired Bond Trades (Spring 2011) 177
   Eric Zitzewitz

SEMINAR PROGRAMS ...... 180

AUTHORS INDEX ...... 189


SUMMARY OF PROCEEDINGS VOLUME 8

Active Asset Management – Alpha

1. Cross-Firm Information Flows (Spring 2015)
Anna Scherbina, Bernd Schlusche

Anna Scherbina, Associate Professor of Finance at the University of California, Davis, presented a lecture that contributed to our understanding of how information gets impounded into stock prices. She showed that it is possible, using Granger causality methodology, to identify, statistically, a collection of bellwether stocks that lead other stocks, their followers. Most of the previous literature has focused on using ex-ante identifiable company characteristics to identify the leaders. These have included firm size, analyst coverage, supplier/customer relationships, strategic alliances, and merger prospects. In contrast, Scherbina focuses on situations where the information may be related to important news developments, may be transitory, and is difficult to identify ex-ante.

Scherbina motivated her empirical analysis by discussing several examples of firm-level news that could be relevant for other firms: Texaco and an employee discrimination lawsuit in 1994-1996, Novartis and an Indian patent law case in 2012, John Wiley & Sons and the resale of U.S. items priced cheaper abroad in 2008-2013, and Worldcom’s earnings manipulation in 1999-2002. On average, she noted, there are about 218 important firm-level news items issued daily, and only about 20% of them are related to financial information. Her empirical results looked at whether these information releases cause some firms to be market leaders, with the information only gradually reflected over time in followers’ stock prices.

She identified a set of market leaders using the following Granger causality specification to run a series of rolling one-year regressions for all possible pairs of stocks:

   Ret(i,t) = a(i) + b(i,j) Ret(j,t-1) + c(i) Ret(i,t-1) + d(i) Ret(mkt,t-1) + e(i,t)

Stock j is considered a leader for stock i if the t-statistic of the coefficient b(i,j) is statistically significant in the above regression; hence, the lagged return on stock j predicts the current-period return of stock i. Stock j is defined as a positive (negative) leader if the coefficient b(i,j) is positive (negative). This regression is run for all possible pairs of stocks using monthly data, and, on average, she finds 286.89 statistically significant leaders. This compares with the roughly 150 false positives one would have expected out of 3,305 pairs of stocks. Other summary information was presented in a table (not reproduced).

Having identified a set of N(i,t) leaders for stock i at time t, Scherbina aggregates all of the leaders’ signals:

   Signal(i,t) = sum over the N(i,t) leaders j of w(j,t-1) b(i,j) Ret(j,t-1),

where the weights (w) are either equal or value weights. She then examines whether the signals have out-of-sample forecasting ability. Ten decile portfolios are formed based on the composite leader signal; Portfolio 1 (10) contains the stocks that are predicted to perform the most poorly (the best). The results for the equally-weighted signal (table not reproduced) indicate an out-of-sample alpha of 0.64% per month, based on the Fama-French 4-factor model, from going long Portfolio 10 and short Portfolio 1. Scherbina found that the leaders can be smaller stocks than their followers and can belong to different industries.

Although the results are strongly significant over the entire sample period, Scherbina presented a graph (not reproduced) of cumulative monthly returns showing that the predictability at monthly frequencies has declined over time, and that the strategy may no longer be profitable. A similar graph showed that some profitability remains when the strategy is executed on a weekly basis. The strategy requires high turnover; the break-even trading costs are about 45 bps for the equally-weighted portfolio strategy and about 27 bps for the value-weighted strategy—the same order of magnitude as the effective bid-ask spread for a typical stock.

In conclusion, Scherbina noted that it is possible to identify leaders based on Granger causality regressions. The existence of leaders is consistent with the limited attention of investors and the costs of acquiring information. Given the size of trading costs faced by most investors, prices appear to lie within no-arbitrage trading bounds.

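The pairwise leader test described above can be sketched in a few lines. This is an illustrative sketch only: the function, the significance cutoff, and the synthetic data are this writer’s inventions, not the authors’ code.

```python
# Illustrative sketch (not the authors' code): flag stock j as a "leader" of
# stock i when the t-statistic on lagged Ret(j) is significant in the
# regression Ret(i,t) = a + b*Ret(j,t-1) + c*Ret(i,t-1) + d*Ret(mkt,t-1) + e.
import numpy as np

def is_leader(ret_i, ret_j, ret_mkt, t_crit=1.96):
    """Return (significant?, sign) for the coefficient on lagged Ret(j)."""
    y = ret_i[1:]
    X = np.column_stack([np.ones(len(y)), ret_j[:-1], ret_i[:-1], ret_mkt[:-1]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])           # residual variance
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X).diagonal())  # coefficient std errors
    t_b = beta[1] / se[1]
    return abs(t_b) > t_crit, np.sign(beta[1])

# Toy data: by construction, j's lagged return drives i's return.
rng = np.random.default_rng(0)
ret_j = rng.normal(0.0, 0.05, 121)
ret_mkt = rng.normal(0.0, 0.04, 121)
ret_i = np.concatenate([[0.0], 0.8 * ret_j[:-1]]) + rng.normal(0.0, 0.01, 121)
lead, sign = is_leader(ret_i, ret_j, ret_mkt)
print(lead, sign)  # j is flagged as a positive leader of i
```

In the paper the regression is run on rolling one-year windows for every pair of stocks; the sketch shows a single pair.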
Bibliography

Scherbina, Anna, and Bernd Schlusche, “Cross-Firm Information Flows and the Predictability of Stock Returns,” Working Paper, available at http://www.q-group.org/wp-content/uploads/2014/11/QGroup_paper_Scherbina.pdf

2. Dissecting Factors (Spring 2014)
Juhani Linnainmaa

It is the widely used Fama-French (FF) equity pricing model and its problems that intrigue Juhani Linnainmaa, Assistant Professor of Finance at the University of Chicago’s Booth School. Because the model cannot price corner portfolios when stocks are sorted by size and book-to-market value, Fama and French (1996) themselves rejected the three-factor model. In spite of this rejection by its creators, the FF model remains widely used by academics and practitioners nearly twenty years later. Curious about the problems with FF, instead of further mapping the model’s boundaries, Linnainmaa turns to the model itself for an explanation of its problems.

The idea behind the FF model is that stocks co-move with other stocks of their type: value stocks co-move with value, growth with growth, and small stocks with others in their category. But, Linnainmaa says, stocks can end up in a style or size group for many reasons, not all of which earn a premium or discount. This leads him to ask two questions: are all value stocks (and small stocks) created equal, and, if not, what are the implications for performance evaluation and “new” anomalies?

Small stocks, Linnainmaa points out, co-move with other small stocks, and value stocks co-move with other value stocks, but not all versions of size and value are compensated with higher average returns. HML and SMB are really bundles of multiple factors that Linnainmaa calls “baby factors”; each can be split into two systematic parts with different risk premia, one with a positive price of risk and the other with a zero price of risk. Because of the mismatch between covariances and average returns, the three-factor model incorrectly penalizes some small and value stocks.

How can this happen? A stock can be deemed value if it has always been a value firm, or when the market value of its equity decreases, or when the book value of its equity increases. Similarly, a company is small if it was small at the time of its IPO or when the market value of its equity decreases. These different reasons for categorization result in a mismatch between covariances and average returns. Thus, the three-factor model does not condition on enough information to pick up these distinctions: a small value stock gets penalized in alphas while a large growth stock gets a credit.

To look at this, Linnainmaa takes the part of firm size orthogonal to value and projects it on long-term changes in the market value of equity. The priced components are the projections and the unpriced components are the residuals. (The results of this analysis were shown in a table that is not reproduced here.)

Linnainmaa then asks how this insight impacts mutual fund performance evaluation. To answer this, he creates an augmented model that includes the insights into priced and unpriced factors. The original model is

   r(j,t) - rf(t) = alpha(j) + b(j) MKTRF(t) + s(j) SMB(t) + h(j) HML(t) + e(j,t)

and the augmented model adds long-short portfolios built from the unpriced parts of size and book-to-market:

   r(j,t) - rf(t) = alpha(j) + b(j) MKTRF(t) + s(j) SMB(t) + h(j) HML(t) + s_u(j) me_u(t) + h_u(j) bm_u(t) + e(j,t)

where r(j,t) is fund j’s month-t return net of fees, rf(t) is the one-month Treasury bill rate, MKTRF(t) is the return on the value-weighted market portfolio in excess of the Treasury bill rate, SMB(t) and HML(t) are the size and value control factors of the three-factor model, bm_u(t) is the month-t return on a long-short portfolio based on the unpriced part of book-to-market, and me_u(t) is the month-t return on a long-short portfolio based on the unpriced part of size.

Using this augmented model, Linnainmaa finds that estimated alphas shift significantly to the right. He says that most managers trade the uncompensated flavors of size and value and thus do not offset, with their skill, the costs they impose on investors. He reports that in 2010 only 2% of fund managers had enough skill to cover the costs they imposed on investors; under the augmented model, up to 18% of fund managers had cost-offsetting skill.

Linnainmaa concludes his presentation by talking about the power of gross profitability as an addition to the model; a strategy that combines value and profitability is very attractive. On the basis of this work, managers should be aware that HML and SMB are bundles of multiple factors, some of which are not priced. To deal with this, the manager must either clean the value signal by taking into account recent changes in the market value of equity, or combine book-to-market with gross profitability so as to separate “good” value from “bad” value.

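Linnainmaa’s point that estimated alphas shift once unpriced factors are controlled for can be illustrated with synthetic data. All names, numbers, and data-generating assumptions below are invented for illustration and are not taken from the paper; the size premium is deliberately exaggerated to make the effect visible.

```python
# Illustrative sketch (invented data, not Linnainmaa's code): a no-skill fund
# that loads on an "unpriced" flavor of size shows a negative alpha under the
# three-factor model, but roughly zero alpha once the unpriced factor is added.
import numpy as np

def ols_alpha(excess_ret, factor_cols):
    """Intercept from an OLS regression of fund excess returns on factors."""
    X = np.column_stack([np.ones(len(excess_ret))] + factor_cols)
    coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
    return coef[0]

rng = np.random.default_rng(7)
T = 600  # months of synthetic data
mktrf = rng.normal(0.005, 0.040, T)
smb = rng.normal(0.008, 0.030, T)   # priced size factor (exaggerated premium)
hml = rng.normal(0.004, 0.030, T)
# Unpriced "baby factor": co-moves with SMB but carries no premium.
me_unpriced = (smb - smb.mean()) + rng.normal(0.0, 0.005, T)
# A no-skill fund that trades the uncompensated flavor of size.
fund = 1.0 * mktrf + 0.5 * me_unpriced + rng.normal(0.0, 0.005, T)

alpha_3f = ols_alpha(fund, [mktrf, smb, hml])
alpha_aug = ols_alpha(fund, [mktrf, smb, hml, me_unpriced])
print(round(alpha_3f, 4), round(alpha_aug, 4))  # the alpha shifts right (up)
```

Here the fund’s negative three-factor alpha comes entirely from its loading on a zero-premium factor that co-moves with SMB; the augmented regression prices that exposure correctly.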
3. The Surprising “Alpha” From Malkiel’s Monkey And Upside-Down Strategies (Fall 2013)
Jason Hsu

Jason Hsu of Research Affiliates (RAFI) said that investment practitioners and many academics are understandably preoccupied with identifying strategies, based on stock characteristics, that can deliver high risk-adjusted returns. Many have translated these strategies into portfolio weighting schemes to create indices that outperform the Cap-Weighted Benchmark. Hsu focuses on indices whose weights are rebalanced annually and are based on volatility, market beta and downside semi-deviation. He finds that all of these alternative indices produce higher returns and higher Sharpe Ratios than the U.S. Capitalization-Weighted Benchmark (table not reproduced).

Is this the pot of gold? Before we leap, let’s have some confirmation, and Hsu provides it. To confirm these results, Hsu inverts each of the strategies. He anticipates that the inverted algorithms will underperform the benchmark by roughly the same amount the originals outperformed it. They did not: not only do the inverted algorithm strategies not perform as expected, they outperform the benchmark. This outcome was unexpected.

Looking further, Hsu finds that the same results hold for global stocks using the same weighting schemes, and for U.S. and global stock indices that utilize fundamental weights such as book value, 5-year average earnings, a blend of fundamentals, and earnings growth. Not only do fundamentally weighted indices outperform; their inverses outperform too. Hsu goes on to say that the following smart beta strategies also outperform:

• Diversity Weighting - a blend between cap weighting and equal weighting.
• Fundamental Weighting - strong fundamentals deliver high returns.
• Maximum Diversification - return is proportional to volatility.
• Minimum Variance - low risk generates high return.
• Risk Cluster Equal Weight - equally weighted country/industry clusters.
• Risk-Efficient - return proportional to downside semi-deviation.

As did the inverses of the smart beta strategies!

How can this be? What is going on? This is what Hsu sets out to discover. He and his co-authors first ask if Burton Malkiel might have been right when, in A Random Walk Down Wall Street, he wrote, “A blindfolded monkey throwing darts at a newspaper’s financial pages could select a portfolio that would do just as well as one carefully selected by experts.”

Challenged by Malkiel’s contention, Hsu tests with a blindfolded monkey. To be clear, he does not actually use a monkey: it would be time-consuming and costly to arrange for a monkey to throw darts at the Wall Street Journal’s stock pages, not to mention the problem of tracking down 50 years of archived copies of WSJ stock lists. Instead, he simulates a dart-throwing monkey by picking a random 30-stock portfolio out of the 1,000 largest stocks by market capitalization once a year. These random stock selections are equal-weighted to form a “Malkiel monkey” portfolio. Only twice did the simulated monkey underperform the Capitalization-Weighted Benchmark. Even simulated monkeys do better.

Ditch the algorithms and the computer farms and grab some Wall Street Journals? No. Hsu says the source of these odd results lies in the data. When he looks at a four-factor model decomposition for the U.S., he finds that the data are tilted toward value and small-size stocks. He finds the same tilts to small and value in the global and fundamental data. In fact, he finds the same bias in all non-cap-weighted strategies, including the inverse strategies (table not reproduced).

Finally, Hsu asks, what are we to make of the result that popular strategies, when inverted, outperform the strategies themselves? He says that intuition can be gleaned from the following relationship, which holds true for any portfolio:

   E[Rp] = E[Rew] + n x cov(wi, ri)

The expected return on any portfolio (p) is equal to the expected return on an equally-weighted portfolio of the same n securities plus n times the cross-sectional covariance between the portfolio weight of each security (i) and its return. Hsu argues that, due to mean reversion, the covariance between market-capitalization weights and returns is negative. Hence, the right-hand term in the above equation is negative for a market-capitalization-weighted index, while for any other weighting scheme Hsu expects this term to be close to zero. This suggests that all non-capitalization-weighted indices should outperform those that are cap-weighted. Hsu concludes that it is the benchmark that is skewed, not the inverse results that are odd.

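Hsu’s relationship holds exactly, period by period, when the covariance is the cross-sectional population covariance (computed with 1/n weights). A quick numerical check, using arbitrary made-up data, confirms it:

```python
# Numerical check of Rp = Rew + n * cov(w, r) for a single period: with the
# population (1/n) covariance this is an algebraic identity, so the two sides
# agree to floating-point precision. The data below are arbitrary.
import numpy as np

rng = np.random.default_rng(42)
n = 100
r = rng.normal(0.01, 0.05, n)            # one period of security returns
w = rng.random(n)
w /= w.sum()                             # arbitrary weights summing to one

rp = w @ r                               # portfolio return
rew = r.mean()                           # equal-weight portfolio return
cov_wr = ((w - w.mean()) * (r - r.mean())).mean()  # population covariance

print(abs(rp - (rew + n * cov_wr)))      # ~0: the identity holds exactly
```

The intuition in the text follows directly: cap weighting makes cov(w, r) negative under mean reversion, dragging E[Rp] below E[Rew].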
4. Will My Risk Parity Strategy Outperform? (Spring 2012)
Lisa Goldberg

Lisa R. Goldberg, Executive Director of Analytic Initiatives at MSCI Barra and Adjunct Professor of Statistics, University of California, Berkeley, presented the last paper of the 2012 Spring Q-Group Meetings, “Will My Risk Parity Strategy Outperform?”, which she coauthored with Robert M. Anderson, Director of the Coleman Fung Risk Management Research Center and Professor of Economics and Mathematics, University of California, Berkeley, and Stephen W. Bianchi, doctoral candidate, University of California, Berkeley.

The question Goldberg tackles, as have many of her predecessors, is “Can We Beat the Market?” In this paper she looks specifically at whether the lower-beta-higher-return/higher-beta-lower-return anomaly, thought to be the result of investor constraints on leverage, can be exploited to achieve a realized Sharpe ratio higher than that of the value-weighted market portfolio with the same volatility. She says that one of the major reasons for this work was the content of a paper by Frazzini and Pedersen given at the Fall Q-Group meetings.

Goldberg starts with three general strategies typically used to beat the market:

1. Levered Risk Parity: equal risk contributions from stocks and bonds, levered to match the market volatility.
2. Unlevered Risk Parity: equal risk contributions from stocks and bonds, fully invested.
3. Fixed Asset Mix: a constant asset allocation, typically 60% stocks/40% bonds.

To assess whether levered risk parity outperforms the other strategies, Goldberg uses stock and bond data from 1926-2010 to examine what would have happened. The performance of each of her four strategies, including one that is value-weighted, was shown in a chart (not reproduced). Overall, levered risk parity outperforms.

She then breaks the data into the following subsamples:

• Pre-1946 (1926-1945)
• Post-War (1946-1982)
• Bull Market (1983-2000)
• Dot-Com & Beyond (2001-2010)

Goldberg finds a time-dependent difference: for the Post-War period, the 60/40 portfolio is superior to all other strategies. Critical to the success of the levered risk parity strategy is the cost of borrowing. She evaluates the strategy first assuming a risk-free borrowing rate and then the Eurodollar rate, making sure to include trading costs. Taking costs into account, the 60/40 constant asset allocation strategy outperforms levered risk parity for the entire period and all subperiods. The levered risk parity premium is significant only when the strategy is financed at the risk-free rate and trading costs are neglected. Her results were shown in a chart (not reproduced).

Goldberg concludes, on the basis of this research, that:

• While CAPM investors hold the market portfolio and lever or delever to adjust risk, practical considerations imply that levering a portfolio diminishes its Sharpe Ratio.
• A weaker strategy may outperform a stronger one over periods of several decades, certainly beyond the investment horizon of most individuals and of institutions like pension funds or endowments.
• Results of back tests depend on the test period and on assumptions about transaction costs.

Finally she says, “When the experiments are done, we still have to decide what to believe.” (Jonah Lehrer, The Truth Wears Off)

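For two asset classes, the equal-risk-contribution idea behind risk parity reduces to inverse-volatility weights, and “levered” risk parity scales those weights up to a target volatility. The sketch below uses made-up volatility, correlation, and target numbers, not Goldberg’s inputs:

```python
# Illustrative sketch of unlevered vs. levered risk parity for two assets.
# All inputs are invented for illustration.
import math

vol_stock, vol_bond, rho = 0.16, 0.05, 0.10   # assumed annual vols/correlation
target_vol = 0.12                             # assumed market-like volatility

def port_vol(ws, wb):
    """Two-asset portfolio volatility."""
    return math.sqrt((ws * vol_stock) ** 2 + (wb * vol_bond) ** 2
                     + 2 * rho * ws * wb * vol_stock * vol_bond)

# Unlevered risk parity: inverse-volatility weights, fully invested.
ws = (1 / vol_stock) / (1 / vol_stock + 1 / vol_bond)
wb = 1 - ws
# Each asset now contributes equal volatility: ws*vol_stock == wb*vol_bond.

# Levered risk parity: scale the whole portfolio up to the target volatility.
leverage = target_vol / port_vol(ws, wb)

print(round(ws, 3), round(wb, 3), round(leverage, 2))  # 0.238 0.762 2.12
```

The borrowing-cost sensitivity Goldberg emphasizes enters through the cost of financing the borrowed fraction, leverage minus one, of the levered portfolio.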
decades, certainly beyond the investment horizon of most individuals and of institutions like pension funds or endowments.

• Results of back tests depend on the test period and on assumptions about transaction costs.

Finally, she says, "When the experiments are done, we still have to decide what to believe." (Jonah Lehrer, "The Truth Wears Off")

Active Asset Management – Mutual Fund Performance

5. Target Date Funds (Fall 2015)
Ned Elton, Marty Gruber

Q members experienced a neat "tag-team" presentation, in which both Ned Elton and Marty Gruber presented their paper on Target Date Funds. Ned and Marty are, of course, well known for a long history of research in investments, and their textbook is a classic that has been in print for over 30 years (this writer, Wermers, learned investment theory from an early edition, perhaps the 1st Edition, in 1986).1

Ned Elton noted that a vast literature indicates that participants in 401(k) plans make suboptimal decisions, such as the tendency to invest equal amounts in all funds in their plan (the Benartzi and Thaler, 2001, 1/N rule). Investors also tend to chase past returns and do not invest enough. Elton then told Q members that, in 2013, 72% of 401(k) plans offered Target Date Funds (TDFs), and 41% of 401(k) investors held positions in these funds, amounting to 20% of the total dollar value of 401(k) assets. Further, the growth of TDFs has been astounding over the past several years.

Chiefly, Elton and Gruber set out, in this presentation, to show the characteristics and performance of TDFs. The table below shows a typical "glide path," that of a Vanguard TDF. The numbers are percentages allocated to each asset class starting today, inferred from the holdings of Vanguard TDFs in 2011 with different maturities.

How typical are the Vanguard allocations? The table below summarizes the asset allocations of TDFs with a target date of 2035 across 50 different management companies. There is a wide variety of asset allocations, indicating many different perspectives on how glide paths should be structured.

Next, Elton showed that the average TDF holds 17 underlying funds, of which 63% are drawn from the same fund management company! Underlying funds chosen from outside the management company are almost always passively managed, and are in sectors/styles not offered by the management company.

An important finding expressed by Elton is that investing in a TDF involves an extra layer of fees, but this surprisingly results in about the same level of "all-in" fees as a "do-it-yourself" portfolio of the underlying funds. This is because the TDF gains access to lower-fee share classes of the underlying funds than can be obtained by direct investment. The table below shows that the "Total fees" of TDFs roughly match those of a do-it-yourself ("Investor matching portfolio fees") strategy that replicates the TDF's holdings of underlying funds.

Gruber then took over the podium to discuss the performance and fund-selection ability of TDFs. He noted the potential problem of the changing betas (changing risk exposures) of TDFs over time, which is what they are designed to do, but which creates problems for a factor model that assumes constant risk exposures. Elton and Gruber therefore use a "bottom-up" approach, which computes alphas and betas each month for each underlying fund of a TDF, and then computes the value-weighted average beta for the TDF.

The results from these factor models indicate that TDFs do not exhibit selection ability in choosing their underlying funds. Overall, TDFs perform, net of expenses, about the same as (or slightly worse than) the average mutual fund. Of course, this is an analysis of "alpha," and does not capture the ability of TDFs to counteract common investor asset-allocation mistakes.

Elton and Gruber conjecture that there is a trade-off in TDFs. They should benefit from having non-public information on the abilities of the underlying fund managers, but they suffer from a conflict of interest. For instance, the management company could potentially pressure a TDF to purchase a same-family fund that is underperforming in order to assist that fund. Investigating this issue further, E&G found that TDFs tend to invest in new underlying funds, which tend to underperform, indicating that TDFs "seed" the new funds.

Overall, E&G conclude that TDFs appear to roughly match what investors could do themselves (if they were disciplined enough to follow a glide path), but that TDFs would do a lot better if they held higher allocations to index funds.

Bibliography

Benartzi, Shlomo, and Richard H. Thaler, 2001, "Naïve Diversification Strategies in Defined Contribution Saving Plans," American Economic Review 91, 79-98.

Elton, Edwin, Martin Gruber, Andre de Souza, and Chris Blake, "Target Date Funds," available at http://www.q-group.org/wp-content/uploads/2015/05/TargetDateFunds-UPDATED-8.24.15.pdf.

1 "Modern Portfolio Theory and Investment Analysis," by Edwin J. Elton and Martin J. Gruber, Wiley, now in its 9th Edition.

6. Patient Capital Outperformance (Spring 2015)
Martijn Cremers

Martijn Cremers, Professor of Finance at the University of Notre Dame, built on his prior well-known paper with Antti Petajisto that introduced "Active Share." Active Share (AS) uses the portfolio holdings of a fund to compute the average absolute distance of its weights from those of the "best fit benchmark" for that fund. In that paper, AS was shown to predict fund performance over the next several months.

In his new paper with Pareek, Cremers more deeply investigates the role of AS in predicting fund performance by separating funds along a second dimension: the holding duration of stocks. For instance, one can imagine two extreme types of high AS managers.
The first trades a lot, which results in holding portfolios that vary a lot over time, but that usually deviate from the benchmark. The second type are "patient managers," who trade very little, holding stocks potentially for years at a time, but also in weights that deviate significantly from the benchmark.

The research question that Cremers and Pareek (CP) address is: which of these two types of high AS managers performs better? This is an important follow-on question to the AS paper, as it is natural to think of a high AS manager as having high portfolio turnover, but this need not be the case! In addition, Cremers stated that over the past 10 years (since 2001), the highest AS funds have not outperformed their benchmarks; therefore, the AS measure is insufficient, by itself, to find outperforming fund managers.

Thus, CP suggest adding their measure of the holding period of a stock, the "duration measure," as a second "conditioning variable" in addition to a fund manager's AS. What is the duration measure? It is simple: it is the average length of time a dollar in a manager's portfolio has been invested in the same stock (over the past 5 years). For example, in 2010, Berkshire Hathaway had a duration measure of roughly 4 years, while Fidelity had a measure of about 2 years.

Cremers conducts his tests of AS combined with duration on 3 samples: all-equity retail mutual funds (net returns), all institutional portfolios (long-only 13(f) data), and hedge funds (long-only 13(f) data). As a side note, Cremers recommends a new, easier-to-compute version of AS:

Active Share = 100% − Σ_i min(w_fund,i , w_benchmark,i),

where the summation is across all overlapping positions.

Cremers showed that U.S. equity mutual funds ranked in the top quintile by their AS measures have very different following-year alphas (using a four-factor Fama-French-plus-momentum model, plus a liquidity factor), depending on each fund's duration measure. High AS funds with a long duration measure (those that deviate strongly from the benchmark, but have low turnover) produced an alpha that exceeded 2%/year, while short duration funds produced an alpha below −2%/year (see the red bars below):

What explains the outperformance of these high AS, long duration funds? Cremers finds that about half of the alpha can be explained by the tendency of such funds to invest in high quality stocks (those that are profitable and growing).

Finally, Cremers showed that high AS, long duration funds also outperform when we examine all 13(f) institutional investors, as well as hedge funds (those that file 13(f) statements).

The takeaway for Q members is that Active Share, by itself, is not sufficient to locate skilled managers. One must also find patient managers!

Bibliography

Cohen, Tim, Brian Leite, Darby Nielson, and Andy Browder, 2015, "Active Share: A Misunderstood Measure in Manager Selection," Investment Insights (Fidelity Investments), available at https://www.fidelity.com/bin-public/060_www_fidelity_com/documents/leadership-series_active-share.pdf.

Cremers, Martijn, and Ankur Pareek, 2014, "Patient Capital Outperformance," Working Paper, available at http://www.q-group.org/wp-content/uploads/2014/11/cremers-paper-SSRN-id2498743.pdf.

Cremers, Martijn, and Antti Petajisto, 2009, "How Active Is Your Fund Manager? A New Measure That Predicts Performance," Review of Financial Studies 22, 3329-3365.

Frazzini, Andrea, Jacques Friedman, and Lukasz Pomorski, 2015, "Deactivating Active Share," AQR White Paper, available at https://www.aqr.com/~/media/files/papers/aqr-deactivating-active-share.pdf.

7. Do Funds Make More When They Trade More? (Spring 2015)
Robert Stambaugh

Robert Stambaugh, the Miller Anderson & Sherrerd Professor of Finance at the Wharton School, presented new research examining the empirical link between mutual fund trading activity and future performance. Many previous papers have examined aspects of the cross-sectional relationship between trading activity and performance, including the Cremers paper (#6 above) at this conference: Cremers, for instance, finds that funds that are more patient (i.e., that trade less than other funds in the cross-section) tend to have superior returns relative to funds that trade more intensely. In contrast, Stambaugh focused on the time-series relationship between trading activity and future performance for each fund. In particular, he examined whether periods of above-average trading activity (relative to the time-series mean of trading activity for that fund) are followed by periods of above-average returns. The test, conducted on a sample of 3,126 mutual funds during the 1979 to 2011 period, is based on the simple idea that skillful managers should trade more when the opportunity set is more promising.

The primary test of this hypothesis is whether the coefficient b > 0 in the following regression:

R(i,t) = a + b × Turnover(i,t−1) + e(i,t),

where R(i,t) is the benchmark-adjusted return for fund i at time t, Turnover(i,t−1) is the turnover over the most recent 12-month period for fund i, and e(i,t) is the error term. Stambaugh presented results from four basic variants of this regression with different assumptions about the intercept term: (i) the intercept is fixed over time and the same for all funds (no fund or month fixed effects); (ii) the intercept can vary by fund but is fixed over time (fund-only fixed effects); (iii) the intercept can vary by month but is the same for each fund every month (time-only fixed effects); and (iv) there are both month and fund fixed effects. The main results of the paper can be summarized in the following table. The bottom row of the table shows that, once one allows fund fixed effects (i.e., the intercept can differ across funds), the coefficient b is estimated to be positive and statistically significant:

The 0.00123 estimate for b (with a t-statistic of ~6.6) suggests that a one-standard-deviation increase in turnover is associated with a 0.65% per year increase in performance for the typical fund.
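The role of the fund fixed effects can be seen in a small simulation. This is a sketch of the econometric point, not the paper's data or estimator, and every number below is invented. If persistent skill is correlated with a fund's average turnover across funds (here, negatively, in the spirit of the cross-sectional patience result), a pooled regression of returns on lagged turnover muddles the within-fund effect, while demeaning each fund's variables by its own time-series means (equivalent to fund fixed effects) recovers it:

```python
import numpy as np

rng = np.random.default_rng(0)
n_funds, n_months, b_true = 200, 240, 0.0012

# Unobserved skill: assume more skilled funds trade LESS on average
# (patience), which pushes the pooled estimate of b toward zero even
# though the true within-fund effect is positive.
skill = rng.normal(0.0, 0.001, n_funds)                  # monthly alpha
avg_turn = 0.8 - 400.0 * skill + rng.normal(0, 0.1, n_funds)

turn = avg_turn[:, None] + rng.normal(0, 0.3, (n_funds, n_months))
ret = (skill[:, None] + b_true * turn[:, :-1]
       + rng.normal(0, 0.01, (n_funds, n_months - 1)))   # R(i,t) on Turnover(i,t-1)
lag_turn = turn[:, :-1]

def slope(x, y):
    x, y = x.ravel(), y.ravel()
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

b_pooled = slope(lag_turn, ret)                          # variant (i): no fixed effects
x_dm = lag_turn - lag_turn.mean(axis=1, keepdims=True)   # variant (ii): fund fixed effects
y_dm = ret - ret.mean(axis=1, keepdims=True)             # via within-fund demeaning
b_fe = slope(x_dm, y_dm)

print(round(b_pooled, 5), round(b_fe, 5))
```

In this setup the pooled slope is dragged toward zero by the skill/turnover correlation, while the fixed-effects slope lands near the true within-fund coefficient.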


Stambaugh then showed that the relation between turnover and future performance is stronger for smaller funds and for funds with high expense ratios. He argued that these results are concordant with a priori theory: the result should be stronger for smaller funds, where the alpha potential is less hurt by decreasing returns to scale in asset management, and the link should be stronger for more talented managers, who are likely to command a higher fee.

Stambaugh also found that, in aggregate, funds trade more when investor sentiment is high, when cross-sectional volatility is high, and when stock market liquidity is low; and that the predictive link between trading activity and future performance weakens in periods when funds act more in concert. Despite the fact that high turnover funds (in a time-series sense) outperform low turnover funds, he found that both sets of funds have negative benchmark-adjusted returns, on the order of 5 bps per month.

Bibliography

Pastor, Lubos, Robert Stambaugh, and Lucian Taylor, "Do Funds Make More When They Trade More?" Working Paper, available at http://www.q-group.org/wp-content/uploads/2014/11/stambaugh-paper-turnover5.pdf.

8. A Sharper Ratio: A General Measure For Ranking Investment Risks (Spring 2015)
Kent Smetters

Kent Smetters, Boettner Professor of Business Economics and Public Policy at the Wharton School, presented a new performance measure: a "Sharper Ratio"! It is known that, when trading occurs at a higher frequency (e.g., daily) than the periods used to compute a Sharpe Ratio (e.g., monthly), a fund manager can use a dynamic strategy to artificially boost the Sharpe Ratio without any particular investment skill (e.g., Goetzmann, Ingersoll, Spiegel, and Welch, 2007). It is also known that certain strategies, such as a "selling put options" strategy, can generate elevated Sharpe Ratios in the absence of any manager skill. In both cases, the resulting distributions of manager returns are non-normal, violating a key assumption of the Sharpe Ratio. Another perspective on the problem is that these trading strategies alter higher moments of the return distribution in ways that can destroy the Sharpe Ratio's usefulness as a performance evaluation tool.

Smetters motivated his talk with the following example. Consider the following baseline investment strategy:

1. 50% of wealth is invested in a hypothetical risky asset whose returns are generated from an i.i.d. normal distribution, calibrated to have the same mean and standard deviation as the S&P 1500.
2. 50% is invested in the risk-free asset.
3. The portfolio is rebalanced each week to 50/50.
4. The Sharpe Ratio and all return moments are measured on the final annual return distribution.

The resulting Sharpe Ratio is 0.62, with higher moments shown below:
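The baseline strategy is easy to replicate in a small Monte Carlo sketch. The calibration below (10% mean excess return and 16% volatility for the hypothetical risky asset, and a 2% risk-free rate) is this writer's assumption, not the talk's actual S&P 1500 calibration, so the resulting numbers are only indicative:

```python
import numpy as np

rng = np.random.default_rng(42)
n_paths, weeks = 20000, 52
mu, sigma, rf = 0.10, 0.16, 0.02    # assumed annual excess mean, vol, risk-free rate

# Weekly i.i.d. normal returns for the hypothetical risky asset.
risky = rng.normal((mu + rf) / weeks, sigma / np.sqrt(weeks), (n_paths, weeks))
rf_w = rf / weeks

# Rebalancing to 50/50 each week means the weekly portfolio return is
# simply the 50/50 blend of that week's risky and risk-free returns.
wealth = np.prod(1 + 0.5 * risky + 0.5 * rf_w, axis=1)
annual = wealth - 1.0                        # final annual return distribution

excess = annual - rf
sharpe = excess.mean() / annual.std(ddof=1)  # annual Sharpe Ratio
m3 = ((annual - annual.mean()) ** 3).mean() / annual.std() ** 3  # skewness
print(round(sharpe, 2), round(m3, 2))
```

With this calibration the annual Sharpe Ratio comes out near 0.6 with mildly positive skewness, qualitatively matching the pattern reported in the talk (0.62, with M3 = 0.232).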


So far, this shows that the rebalancing strategy (increasing risk when portfolio value drops, and decreasing it when value rises) creates a slightly non-normal portfolio return distribution over a year, as shown, for example, by the fact that skewness (M3) equals 0.232 rather than zero, its value under normality.

Now, let us add to this baseline strategy a strategy of selling 10% out-of-the-money three-month put options on the index, with the number of options sold designed to generate a 3%/year return. (Note that put options are a synthetic dynamic trading strategy that gets rebalanced every instant, again creating a problem for the Sharpe Ratio, whose usefulness is predicated on a stationary underlying distribution.) The result, compared with the baseline strategy, is shown below:

Notice how the "Baseline + Puts" strategy differs from the "Baseline" strategy. The B+P strategy generates lower negative moments (M3, M5, M7, and M9) and higher positive moments, when rational investors would prefer exactly the opposite! Yet its substantially increased Sharpe Ratio falsely signals that it is a superior investment.

As another example, suppose that we invest $1 in a 50/50 mix of the randomly generated "S&P 1500" plus the risk-free asset. Each week, we rebalance according to a "Sharpe-Ratio-optimal rule" derived by Smetters and reflected in the graph below. Simply put, the graph says to increase risk after losing money and decrease risk after making money. While this could be considered a "cheater's graph," a contrarian investing strategy follows this approach (buy when the market is down, sell when it is up). The result, compared with our two strategies above, is shown below:

Again, we have created an increased Sharpe Ratio relative to the Baseline strategy, but at the expense of giving the investor worse higher moments.

One common way of dealing with this problem in practice is to use the Sharpe Ratio to rank investment alternatives in combination with other measures of risk, such as maximum drawdown, the Sortino ratio, etc. Smetters argues that while using multidimensional measures of risk sounds sophisticated, it lacks a theoretical foundation and often has to resolve conflicting conclusions; ultimately, expected utility is the most compelling way to trade off risk and return.

Smetters motivated his "Sharper Ratio" by briefly reviewing past attempts to extend the Sharpe Ratio. The prior literature typically started with a simple problem: maximizing the expected utility of an investor with unknown preferences. This literature showed that the first-order condition for utility maximization (the first derivative with respect to the allocation to stocks) can be written as an infinite Taylor series expansion (which has a unique single root). Then, to produce a measure that ranks by the first N moments, one can truncate the Taylor series expansion at N terms. Unfortunately, this literature stopped here: when the series is truncated at N terms there are many roots, some real and some complex, so it is not clear what to do next!
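The root-multiplicity problem is easy to see numerically. The quintic below is an arbitrary toy stand-in for a truncated first-order condition (not Smetters' actual expansion): of its five roots, only one is real.

```python
import numpy as np

# Toy "first-order condition": truncate a power series in the equity
# allocation x at six terms, leaving a degree-5 polynomial.
# Coefficients are 1, -1/2, 1/4, ... purely for illustration.
coeffs = [1.0, -0.5, 0.25, -0.125, 0.0625, -0.03125]
poly = np.polynomial.Polynomial(coeffs)

roots = poly.roots()                                  # 5 roots in general
real_roots = roots[np.abs(roots.imag) < 1e-9].real    # keep the real ones
print(len(roots), len(real_roots), real_roots)        # 5 roots, 1 real (x = 2)
```

For this toy polynomial (a finite geometric series in −x/2) the only real root is x = 2; the other four are two complex-conjugate pairs, so a ranking rule that simply "solves the truncated FOC" is not yet well defined.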


Smetters' ability at math becomes apparent next. He proposed an approach for selecting the correct root from the many roots noted above. The idea is that, if a power series (such as a Taylor series) is truncated at N terms, then it has N roots. Under certain regularity conditions on the utility function and the portfolio risk distribution (which are not too restrictive), the smallest-in-absolute-value real root among the N roots converges to a unique root as we increase N to infinity. Moreover, there is a finite value of N such that we can stop at this N and still have our unique real root. This solves the problem noted above, and we can now use the N-term Taylor series!

After demonstrating how the Taylor series can be used to form a new "Sharper Ratio," Smetters argued that the new ratio retains several attractive properties of the Sharpe Ratio, including independence of leverage, but now properly accounts for higher moments of the portfolio return distribution. The only cost is that the Sharper Ratio depends on the risk tolerance of the investor (while the Sharpe Ratio does not). Smetters argued that this should not be too burdensome, as financial advisors are charged with understanding the risk aversion of their clients anyway!

Bibliography

Goetzmann, William, Jonathan Ingersoll, Matthew Spiegel, and Ivo Welch, 2007, "Portfolio Performance Manipulation and Manipulation-Proof Performance Measures," Review of Financial Studies 20, 1503–1546.

Smetters, Kent, 2015, "A Sharper Ratio," available at https://www.amcs.upenn.edu/sites/neutron_amcs/files/SharperRatio07-14-2013B.pdf.

9. Scale And Skill In Active Management (Spring 2014)
Lubos Pastor

Active fund management's ability to outperform passive benchmarks depends on management's raw skill in identifying investment opportunities, the supply of those opportunities, and the competition to harvest them. Pastor points out that one potential constraint facing active managers is decreasing returns to scale. He says that if skill impacts performance, then skill and scale interact. The result could be that a more skilled large fund underperforms a less skilled small fund. To understand skill, Pastor says, we must understand the effects of scale. This is exactly what Pastor and his colleagues set out to do.

Pastor asks: what do we know about the skill-scale-performance interaction? He says there are two hypotheses regarding the interaction between performance and scale. The first is that decreasing returns to scale are related to fund size. The second is that decreasing returns are related to the aggregate size of all actively managed funds. Whether scale has an impact on returns at either level is not clear. The fund-level hypothesis has been tested in a number of recent studies, with mixed results.

Pastor says there are two challenges in estimating the effect of fund size on performance. The first is the endogeneity of fund size: skill might be correlated with both size and performance. A simple OLS estimate of the size-performance relationship is likely to suffer from an omitted-variable bias, in that skill is an unobservable variable. The second challenge is the bias that results from the positive, contemporaneous correlation between changes in fund size and unexpected fund returns. To deal with this, Pastor develops a recursive demeaning procedure that eliminates the bias.


To test the model, he uses CRSP and Morningstar data from 1979 to 2011 for domestic active equity mutual funds with assets of $15 million or more. The two datasets are used to cross-check all of the data, and the analysis relies only on the intersection of data that can be cross-validated. This results in a sample of 350,000 monthly observations on 3,126 funds. The researchers implicitly associate skill with the fund, not the manager running the fund. For example, there is a certain skill associated with Fidelity's Magellan fund, independent of whether it was being managed by Peter Lynch, Morris Smith, or Jeff Vinik.

Using cleaner data and more robust techniques than previous researchers, Pastor finds strong evidence of decreasing returns to scale at the industry level. This evidence is stronger for high-turnover, high-volatility, and small-cap funds. At the fund level, there is mixed evidence of decreasing returns to scale, which becomes insignificant after removing econometric biases. As for skill, it appears that active funds have become more skilled over time, although their performance has not improved, owing to the increase in the size of all actively managed money. In addition, there is a negative age-performance relationship: a fund's performance decreases over its lifetime, and younger, newly entering funds seem to be more skilled and outperform older funds.

Pastor says that their conclusions are robust when:

1. Controlling for business cycle variables and family size.
2. Trimming extreme outliers in FundSize.
3. Using different functional forms for FundSize.
4. Using alternate benchmark adjustments:
   a. Fama-French.
   b. Morningstar benchmark with estimated betas.

Pastor has advice for practitioners and investors:

1. Practitioners: Be aware that you are more skilled than your predecessors, but so are your competitors. While it is harder for an active manager to outperform in a larger industry, especially when managing high-turnover, high-volatility, and small-cap funds, staying away from crowded trades/strategies/industries provides an opportunity for better performance.
2. Investors: Invest in young funds, since fund performance deteriorates over time due to competition, and young funds are relatively more skilled.

10. Time-Varying Fund Manager Skill (Fall 2013)
Marcin Kacperczyk

Kacperczyk says that much has been written about whether investment managers add value for their clients. Kacperczyk posits that managers have limited attention and resources and must make an optimal decision about how to allocate them. He believes that business cycle variation causes the optimal allocation to change (theory), and he looks for evidence of that in his empirical work. To understand this better, and to consider its source, he decomposes investment performance into stock picking and market timing and finds evidence, conditioning on the state of the economy, that skilled managers can successfully pick stocks when the economy is expanding and are able to time the market during recessions. To measure this, he proposes a new measure of fund-manager skill that gives more weight to a fund's market timing in recessions and to its stock picking during expansions.

For his study, Kacperczyk uses data on 3,477 actively managed open-ended U.S. equity mutual funds from the CRSP survivorship-bias-free mutual fund database (1/1980-12/2005), and merges it with holdings data from Thomson Reuters and fundamental data from the CRSP/Compustat stock-level database. Kacperczyk follows the approach used by previous researchers to decompose investment performance:

1. Stock picking skill is measured as the product of a fund's portfolio weights, in deviation from market weights, multiplied by the firm-specific component of the returns of each stock in the portfolio.
2. Market timing skill is measured as the product of a fund's portfolio weights, in deviation from market weights, multiplied by the systematic component of stock returns.

The major novelty in his work is the use of dummy variables to look at both the stock-picking and market-timing measures of skill in each of two different periods: expansion and recession. He holds constant a number of control variables related to such things as fund size, manager age, expenses, turnover, loads, and style. The results suggest that managers do a better job of stock picking in periods of expansion and a better job of market timing during recessions. Specifically, managers' market timing ability is 14 bp per month higher in recessionary periods, and their stock picking ability is 14.4 bp lower in periods of recession. He uses NBER's method of dating recessions, though similar results hold when recessions are determined ex ante. The results also hold at both the fund and the manager levels.

Kacperczyk wonders whether the same managers have both timing and stock picking ability. In the table that follows, he shows the ability of the 25% of funds with the highest stock picking ability. The same managers that exhibit stock-picking ability also exhibit market-timing skill, and at the right times. When looking at the performance of the median, 75th, and 95th percentile funds during recessions, he finds the positive timing effect for the most successful managers is almost four times that of the median manager. Interestingly, the negative effect of a recession on the stock picking of the most successful managers is almost twice the size of that for the median manager. In fact, these managers also show market timing ability during recessions.

These managers can time the market, but how do they do it successfully? They have, on average, more cash in recessions (40 bp more actual cash and 300 bp more implied cash), lower betas during recessions (1.00 versus 1.11 in expansion periods), and tend to engage in successful sector rotation by allocating more to defensive sectors during recessions and more to cyclical industries in expansions. As to the source of this success, the top funds tend to be smaller and trade more actively than their less successful counterparts. In addition, the fees charged are higher (26 bp per year), the inflows of new assets are higher, and the number of stocks held is smaller. These funds also have greater industry concentration, and their betas deviate more from those of their peers.
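The decomposition in items 1 and 2 above can be written in a few lines. In the sketch below, the systematic component of each stock's return is taken to be its beta times the market return, and all inputs are made-up numbers for a single fund-month:

```python
import numpy as np

# Illustrative holdings data for one fund-month (made-up numbers).
w_fund = np.array([0.30, 0.25, 0.25, 0.20])   # fund portfolio weights
w_mkt  = np.array([0.40, 0.30, 0.20, 0.10])   # market-cap weights
beta   = np.array([1.2, 0.9, 1.1, 0.8])       # stock betas
r      = np.array([0.04, -0.01, 0.02, 0.05])  # realized stock returns
r_mkt  = 0.02                                 # market return

sys_part  = beta * r_mkt          # systematic component of each return
idio_part = r - sys_part          # firm-specific component

active_w = w_fund - w_mkt         # deviations from market weights
picking = float(active_w @ idio_part)   # stock-picking measure
timing  = float(active_w @ sys_part)    # market-timing measure

# Note: picking + timing = total active return, active_w @ r.
print(round(picking, 6), round(timing, 6))  # 0.0031 and -0.0006
```

Here the hypothetical fund's overweights line up with positive firm-specific returns (positive picking) but with below-average systematic returns (slightly negative timing).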

11. Does Mutual Fund Size Matter? The Relationship Between Size And Performance (Fall 2012)
Edwin J. Elton, Martin J. Gruber

In earlier work, Elton and Gruber found that mutual fund managers' past performance was predictive of future performance. Others, such as Berk and Green (2004), argued that past performance should not predict future performance. Elton and Gruber's research directly addresses the Berk and Green results and comes to different conclusions.

To determine whether there is predictability of performance, they use variations of the well-known Fama–French (FF) model and weekly information on all domestic common stock funds in the CRSP database from 1999–2009. They exclude funds with less than $15 million in assets, less than three years of history, or an R² less than 0.6. In addition, they exclude index, sector, life cycle, and flexible funds, and they eliminate funds backing variable annuity products.

For each fund they compute an alpha (α) in the ranking year and use it to form deciles. They then examine the subsequent year's α, finding that the top decile has a positive and significant alpha: the top decile outperforms the index by 1.5% per year, as shown below. Gruber noted that the probability of getting an alpha as high as this by chance is considerably less than 1%.

Berk and Green suggested that a successful manager would capture excess return by increasing expense ratios. To test this, Elton and Gruber estimate expense ratios for each decile and find that the 10th decile has the lowest expense and turnover ratios. Further, they find that expense ratios tend to decline over time, as shown below.

Elton and Gruber employ regression analysis to explain alpha using a variety of variables, including the fund's cash flows, its size, the prior-year expense and turnover ratios, and the size of the mutual fund family. They conclude that there is a strong relationship between past and future alphas. In addition, they find the following:

1. The best performing funds produce positive future alphas.
2. Growth in fund size erodes predictability, but slowly.
3. Size is not as important as past speculation would suggest.

Gruber concluded that performance and predictability do not disappear for large funds, for three reasons:

1. Expense ratios go down with size.
2. Size, and particularly fund flows, leads to either larger trades or more investments.
3. Large funds may gain better quality analysts (they can pay more) or command the best resources of the fund family to which they belong.
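The ranking exercise can be sketched on simulated data: estimate each fund's alpha from weekly returns in a ranking year, sort funds into deciles, and compare the following year's alphas of the top and bottom deciles. The sketch uses a one-factor model rather than the FF variants, and the degree of alpha persistence built into the simulation is an assumption for illustration only:

```python
import numpy as np

rng = np.random.default_rng(7)
n_funds, n_weeks = 500, 52

def weekly_alphas(true_alpha):
    """One-factor regression alpha for each fund over a 52-week year."""
    mkt = rng.normal(0.0015, 0.02, n_weeks)          # weekly factor returns
    est = np.empty(n_funds)
    for i in range(n_funds):
        r = true_alpha[i] + 1.0 * mkt + rng.normal(0, 0.01, n_weeks)
        beta = np.cov(r, mkt)[0, 1] / np.var(mkt, ddof=1)
        est[i] = (r - beta * mkt).mean()             # estimated weekly alpha
    return est

true_alpha = rng.normal(0.0, 0.001, n_funds)  # persistent skill (an assumption)
rank_alpha = weekly_alphas(true_alpha)        # ranking-year alphas
next_alpha = weekly_alphas(true_alpha)        # following-year alphas

edges = np.quantile(rank_alpha, np.linspace(0.1, 0.9, 9))
decile = np.digitize(rank_alpha, edges)       # 0 = worst decile, 9 = best
top = next_alpha[decile == 9].mean() * 52     # annualized next-year alpha
bottom = next_alpha[decile == 0].mean() * 52
print(round(top, 3), round(bottom, 3))
```

When skill persists, the top ranking-year decile keeps a positive next-year alpha and the bottom decile a negative one; with no persistence, both would be near zero.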


12. Conflicting Family Values In Fund Families (Spring 2011)
Utpal Bhattacharya

Utpal Bhattacharya, Associate Professor, Kelley School of Business, Indiana University, presented "Conflicting Family Values in Fund Families," co-authored with Jung Hoon Lee, PhD candidate, Kelley School of Business, Indiana University, and Veronika Krepely Pool, Assistant Professor of Finance, Kelley School of Business, Indiana University.

Mutual fund families have been studied in a variety of ways. Bhattacharya and his colleagues address an interesting question: the role of a mutual fund that invests solely in other funds in the same fund family. Bhattacharya calls these funds affiliated funds of mutual funds (AFoMFs).

While there are a number of interesting issues with these funds, Section 17 of the Investment Company Act of 1940, which severely restricts trades between individual funds, is not one of them. Bhattacharya says that AFoMFs can both invest in (i.e., lend to) and disinvest from (i.e., borrow from) funds in their own family without running afoul of the law.

Bhattacharya next describes the importance of these funds in the industry: while they were virtually non-existent in the 1990s, by 2007, the last year of their sample, 27 of the 30 largest fund families had AFoMFs, constituting 75% of the mutual fund industry. Given that this kind of fund maximizes the interest of the whole family rather than the interest of its own shareholders, he and his colleagues seek to answer two questions:

1. How do the internal capital markets of a fund family operate?
2. Do these internal capital markets conflict with some shareholder objectives?

Bhattacharya hypothesizes that AFoMFs provide liquidity to distressed family member funds, and that AFoMF inflows to funds occur when outsider outflows from the other funds in the family are high. Thus, he says, the AFoMFs act like the Fed's discount window, investing in funds in the family to offset their temporary liquidity shortfalls. This investing offsets potentially costly fire sales when a fund in the family is experiencing very large redemptions.

Bhattacharya and his colleagues' goal is to investigate the relationship between AFoMF flows and outside investor flows, especially when the outside investor flows are large and negative (an outflow). To study this, they use AFoMF and other family fund holdings data from the Morningstar Principia and the CRSP Survivor-Bias-Free Mutual Fund databases from October 2002 to January 2008. Their data covers more than 90% of the AFoMF universe.

To test whether AFoMFs provide an insurance pool to offset temporary liquidity shocks to member funds, they document that affiliated funds-of-funds invest a disproportionately large amount of money in the distressed funds in the family, and they provide several subsample results to show that this behavior is consistent with liquidity provision. From their analysis, they find that the cost for industry-wide AFoMFs to provide liquidity is 7.11 basis points per month, or about $88 million. The benefit to the distressed funds is 2.94 basis points per month, for a family-wide saving of $107 million.

The coauthors conclude that:

• AFoMFs offset severe liquidity shortfalls in other funds in the family, thus providing the fund family with an insurance pool against temporary liquidity shocks to other same-family funds.
• The AFoMFs' sacrifice does benefit the family, by preventing fire sales in other funds and thus improving those funds' performance.


• The benefit exceeds the cost for the family, which suggests that the cross-subsidy is rational for the family.
• Although the family benefits because target funds can avoid fire sales, the cost of this insurance is borne by the investors in the AFoMFs.

While the benefit to the family outweighs the costs, what Bhattacharya and his coauthors do not answer is why the manager, and more importantly the AFoMF fund board, sacrifice the fund's investment performance to benefit the family, and why the SEC does not close this loophole on behalf of the AFoMFs' shareholders.

Alternative Investments

13. The Investment Performance Of Art And Other Collectibles (Spring 2014)
Elroy Dimson

Dimson starts by saying that collectibles have long attracted high prices and have been perceived as potentially interesting investments. Using the map below, he shows investments by high-net-worth investors in collectables (stamps, musical instruments, and art) around the world. For each of the major collectables, long-term returns outperformed those from bills, bonds, and gold. However, equity, in part because of its dividends, outperformed them all.

Dimson's interest in another collectible, wine, is more than that of a researcher. He is part of a family with a long history as wine merchants in London.

Wine as a collectable asset, he says, presents a challenge for the following reasons. First, it ages into being a collectible, making its price path challenging to understand. Second, there are no studies of wine's long-term return-generating process. Third, there is an interaction between drinkability/collectability and financial returns. Fourth, with no puns intended, wine as an asset is illiquid, heterogeneous, lacks transparency, has high transaction costs, and has no future cash flows beyond a potential resale. It is, Dimson says, an asset of passion, and emotional assets are under-researched.
Dimson distinguishes two types of wine: low-quality wines that decrease in quality over time, and high-quality wines that improve with age. Both the value of consuming a bottle of wine and the pleasure of owning it depend on its quality, type, and age, and on the buyer's wealth. The fundamental value of a bottle of wine at each date is the maximum of the benefit of immediate consumption and the present value of consumption at maturity plus the psychic dividend received until then.
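That valuation rule can be expressed as a short function. The sketch below is this writer's illustration, not Dimson's model: the discount rate, maturity, and cash-equivalent values are invented, and the psychic dividend is treated as a level annuity until maturity.

```python
def wine_value(consume_now, consume_at_maturity, annual_psychic_dividend,
               years_to_maturity, discount_rate):
    """Fundamental value = max(benefit of drinking it today,
    PV of drinking it at maturity + PV of the psychic dividend until then)."""
    df = 1.0 / (1.0 + discount_rate)
    pv_maturity = consume_at_maturity * df ** years_to_maturity
    # PV of a psychic dividend received each year until maturity (annuity).
    pv_psychic = annual_psychic_dividend * (1 - df ** years_to_maturity) / discount_rate
    return max(consume_now, pv_maturity + pv_psychic)

# A hypothetical young first growth: low drinking value now,
# high consumption value at maturity in 10 years.
value = wine_value(50.0, 400.0, 5.0, 10, 0.05)
print(round(value, 2))  # 284.17: holding dominates drinking now
```

For a mature bottle whose consumption value has peaked, the max() would instead be attained by the immediate-consumption term.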


To examine the returns from wine, Dimson creates a database of investment-quality wine prices since 1899. He limits the data to five Bordeaux first growths (premiers crus), collecting prices from wine auctions and from a dealer. In total he has prices for 9,492 year/château/vintage/transaction combinations. The chart above shows the price per bottle in British pounds from 1899 to 2012 in real and nominal terms.

Dimson creates a model that separates the impact on price of château, vintage, transaction type, and age, as shown below. However, there is multicollinearity between time, vintage, and age. Thus, he replaces vintage effects with variables measuring production and weather. In addition, since the effect of age may depend on vintage quality, he interacts the age polynomial with weather quality. He then compares the returns from wine to other assets. To do this, he first creates a deflated price index, shown below.

In comparison to other collectables and assets, wine outperforms them all except for equities, with which it has a positive correlation. Dimson concludes by saying:
1. Since 1900, emotional assets beat bonds, Treasury bills, and gold.
2. Long-term returns were not as high as some enthusiasts assert.
3. Given waves in market and transaction costs, a long view is needed.
4. Higher returns in some submarkets reflect changing wealth or tastes.
5. Benefits from diversification and from inflation-hedging are still unclear.
Still, he says, there is a psychic dividend that financial assets cannot provide.

14. Estimating Private Equity Returns From Limited Partner Cash Flows (Spring 2014)
Andrew Ang

Ang’s research focus is private equity, a major institutional asset class and a significant portion of the investments in the portfolios of college endowments, foundations, pension funds, and sovereign wealth funds. He notes one big problem that private equity poses for all investors -- the lack of a transactions-based rate-of-return series that can be used for performance evaluation, risk management, and asset allocation. Some private equity return information is available; however, most of it is constructed from non-market estimates of book value, multi-year internal rates of return, or multiples of investment. Ang says these are not sufficiently reflective of the actual returns, since they are estimates.

Ang uses the experience of the Harvard endowment during the 2008 Financial Crisis to set the scene. Harvard was an early adopter of the endowment model, a model that lauds the diversification benefits of alternative assets. Despite the supposed benefits of a broadly diversified portfolio,

Harvard’s endowment lost 27% of its value from June 2008 to June 2009. At the same time, it found that almost two-thirds of its portfolio was illiquid or semi-liquid, making portfolio reallocations difficult and hampering the university’s ability to use endowment distributions for essential university support. In addition to the losses, the potential demands of future private equity capital calls caused further significant distress. Until 2008, the available data on private equity returns had been misleading and had biased risk estimates downward.

Ang reports on the attempt to create a more accurate, market-based private equity return index. He and his collaborators develop a method to estimate a time series of private equity returns based on actual cash flows accruing to limited partners. The data is the Preqin cash flow data set, purchased as of June 2011, which contains quarterly aggregated investments, distributions, and net asset values (NAVs) for private equity and its various subclasses from 1993 to 2011.

To estimate private equity rates of return, Ang uses a net present value (NPV) investment framework. Ang and his fellow researchers allow the discount rate (the rate of return) for private equity to vary each period. They then obtain the returns (i.e., discount rates) that most closely result in an NPV of zero for all private equity cash flows in their sample. So long as at least one fund in each quarter has a positive or negative cash flow, the procedure produces a quarterly time series of estimated private equity rates of return.

One of the key questions Ang asks is: how do the returns to private equity compare to the returns to liquid market investments with similar systematic risk? To answer this question, he posits a functional form for private equity rates of return in which the private equity risk premium is the sum of:
1. A time-invariant constant.
2. Premia attributable to exposure to the systematic risk of liquid market factors.
3. A private-equity-specific latent factor that varies over time.

The graph below compares Ang’s estimated private equity index with the returns on small stocks (bottom line) and the S&P 500 (second line from the bottom). Over this time period, private equity outperformed both equity indices, ignoring any differences in risk. As to the sources of return, the following chart shows the total private equity return (top line), the risk premium (the lowest line), and the active private equity premium (the lowest and flattest line).
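The NPV-zero step can be illustrated in a few lines. Ang and his coauthors solve for a full quarterly time series of discount rates across many funds; this toy version, on made-up LP cash flows (contributions negative, distributions positive), solves for a single constant quarterly rate by bisection.

```python
# Sketch of the NPV-zero idea behind the estimator: find the discount rate
# that sets the present value of aggregated limited-partner cash flows to
# zero.  The cash flows below are hypothetical.

def npv(rate, cash_flows):
    """Present value of cash_flows, where cash_flows[t] occurs in quarter t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def solve_rate(cash_flows, lo=-0.5, hi=1.0, tol=1e-8):
    """Bisection for npv(rate) == 0; assumes a sign change on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0.0:
            hi = mid        # root lies in [lo, mid]
        else:
            lo = mid        # root lies in [mid, hi]
    return 0.5 * (lo + hi)

# Hypothetical fund: capital calls up front, distributions later.
flows = [-100.0, -50.0, 0.0, 40.0, 60.0, 90.0]
quarterly_rate = solve_rate(flows)
```

In the paper's setting the analogous condition is imposed quarter by quarter across the cross-section of funds, which is what yields a time series of rates rather than a single number.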


The charts below show disaggregated risk premiums for the four classes of private equity: buyouts, venture capital, real estate, and high yield. The dramatic drop in premiums post-2001 was largely due to the private equity focus on real estate and/or venture capital. This is information that, Ang says, would have been useful to Harvard University when building its endowment portfolio before the financial crisis.

In conclusion, Ang says this approach to estimating the historical time series of returns to private equity is quite general, requires only information about cash contributions and distributions accruing to limited partners, and is robust to sparse data. By decomposing the private equity returns into components due to traded factors and a time-varying private equity premium, he finds strong cyclicality in the premium component that differs according to fund type. In addition, he says, this study provides support for an earlier hypothesis (Kaplan and Strömberg 2009) that capital market segmentation helps to determine the private equity premium.

15. Impact Of Hedge Funds On Asset Markets (Spring 2014)
Andrew Patton

Hedge funds have sparked enormous interest from a number of different constituencies: wealthy individuals and institutional investors interested in high returns with promised low risk; academics interested in the returns and the risks underlying those returns; and regulators and policymakers who are wary of the industry since the collapse of Long-Term Capital Management nearly sparked a financial crisis. Collapses of other major hedge funds have perpetuated the concern that such collapses could again impact the underlying asset markets. Patton says that these concerns are understandable. While the global industry has about U.S. $1.5 trillion of assets, its substantial leverage and high levels of trading mean its impact may be disproportionately large. In spite of this importance, little empirical evidence connects the activity of hedge funds to returns in underlying asset markets. Patton seeks to correct this.

Patton looks at the ability of a measure of hedge fund illiquidity to predict the returns of 72 portfolios of international equities, corporate bonds, and currencies from 1994 to 2011. The illiquidity measure is based on the autocorrelation in hedge fund returns. In particular, he uses an equally-weighted average of the autocorrelations from a sample of 30,000 hedge funds created by merging data from HFR, TASS, CISDM, Morningstar, and BarclayHedge. The individual fund autocorrelation coefficients used in the underlying analysis are based on simple rolling-window estimates using 12 months of data. He excludes negative individual fund autocorrelation estimates to create “trimmed” estimates. The equally weighted

average correlation Patton calls an illiquidity measure. In the chart below you see the illiquidity measure over time, annotated for hedge-fund industry events. A second chart below shows Patton’s average hedge fund illiquidity measure by hedge-fund strategy.

Patton presents evidence of the hedge fund illiquidity measure’s ability to predict aggregate returns for a variety of markets. It has, he says, statistically significant predictive ability for all of the equity markets (shown below), 75% of the corporate bond indices, and two-thirds of the currency markets. There is significant forecasting ability out to about 6 months in the future, Patton finds, and his findings are robust.
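The construction of the illiquidity measure can be sketched as follows. The details here (lag-1 sample autocorrelation over a 12-month rolling window, negative fund-level estimates excluded, equal-weighted average) follow the text; the return data are simulated, not drawn from the merged hedge fund databases.

```python
# Sketch of Patton's autocorrelation-based illiquidity measure on
# simulated fund returns.

import numpy as np

def lag1_autocorr(x):
    """Simple lag-1 sample autocorrelation of a return series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.sum(x * x)
    return float(np.sum(x[1:] * x[:-1]) / denom) if denom > 0 else 0.0

def illiquidity_measure(fund_returns, window=12):
    """Equal-weighted average of trimmed fund autocorrelations over the
    most recent `window` months.  fund_returns: (n_funds, n_months)."""
    rhos = np.array([lag1_autocorr(r) for r in fund_returns[:, -window:]])
    rhos = rhos[rhos >= 0]           # "trimmed": drop negative estimates
    return float(rhos.mean()) if rhos.size else 0.0

# Illiquid funds tend to report smoothed returns, which induces positive
# autocorrelation; the smoothing below is a crude stand-in for that.
rng = np.random.default_rng(0)
raw = rng.normal(0.01, 0.02, size=(50, 24))              # liquid funds
smoothed = 0.5 * raw + 0.5 * np.roll(raw, 1, axis=1)     # "illiquid" funds
measure_liquid = illiquidity_measure(raw)
measure_illiquid = illiquidity_measure(smoothed)
```

The smoothed panel produces a visibly higher measure, which is the mechanism that lets the aggregate series track industry illiquidity over time.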

The apparent ability of hedge fund autocorrelations to predict these asset class returns raises two questions: are these autocorrelations primarily a reflection of hedge fund illiquidity, and is hedge fund illiquidity causing higher future expected returns, or could there be a different transmission mechanism?

Patton posits a model in which hedge funds are the marginal market makers and low hedge fund liquidity leads to higher asset returns. In that model, hedge funds require a certain amount of liquidity because they face potential outflows from redemptions. The basic argument is that when hedge fund liquidity is low, hedge-fund market makers will skew the bid-ask spread so that the mid-price is below fair market value. Thus, one would expect a small bid-ask bounce from noise-trader purchases and a large bid-ask bounce from noise-trader sales. In equilibrium, this causes expected returns to be high when autocorrelations are high. To test this, Patton examines whether the predictive power of the illiquidity measure is greater for more illiquid assets and stronger following market declines, when liquidity is likely to be lower. The following table shows that the predictive power is somewhat stronger following negative returns; he takes this as evidence supporting the hypothesis.

Patton concludes by saying that he believes these results are a useful addition to the literature on hedge funds: the model provides additional testable predictions, which are (mostly) borne out in the data.

16. Global Pricing Of Gold (Spring 2013)
Campbell R. Harvey

The goal of Harvey’s presentation (and the associated paper “The Golden Dilemma,” co-authored with Claude B. Erb) is to enhance our understanding of the role of gold in asset allocation. Discussions about gold as an investment generate an almost religious fervor. To show the nature of the debate, Harvey first quotes Warren Buffett (2012):

What motivates most gold purchasers is their belief that the ranks of the fearful will grow. During the past decade that belief has proved correct. Beyond that, the rising price has on its own generated additional buying enthusiasm, attracting purchasers who see the rise as validating an investment thesis. As “bandwagon” investors join any party, they create their own truth – for a while.

And then, Ray Dalio:

Gold is a very under-owned asset, even though gold has become much more popular. If you ask any central bank, any sovereign wealth fund, any individual what percentage of their portfolio is in gold in relationship to financial assets, you'll find it to be a very small percentage. It's an imprudently small percentage, particularly at a time when we're losing a currency regime.

In his presentation, Harvey debunks a number of myths regarding gold as an investment. He examines six different arguments for holding gold:
1. Gold provides an inflation hedge.
2. Gold is a currency hedge.
3. Gold historically generates good returns in a low real-return world.
4. Gold is a safe haven in times of stress (such as hyperinflations).
5. Gold should be owned because the world may be returning to a de facto world gold standard.
6. Gold is under-owned, and hence demand and supply dynamics are compelling.

Harvey addresses each of these reasons, starting with inflation, and reports the strength of each using charts and graphs to illustrate his conclusions. One such chart is shown below: a scatter plot of year-to-year changes in inflation plotted against gold price changes over the 1975-2011 period. He says that the chart shows no correlation between unexpected inflation, measured by the actual year-to-year changes in the inflation rate, and the price of gold. Harvey says that any perceived relationship is driven by a single year, 1980.

To provide a very long historical context, Harvey compares present-day military pay to that in Rome at the time of Emperor Augustus. He uses gold as a numeraire to compare the Roman salaries to those currently paid by the US Army. He finds there is little or no income growth in military pay over 2,000 years, and he concludes that while gold may be a good hedge over a very long time, over shorter time periods it is not.

As to whether gold is an attractive asset when returns are low, he says the likely explanation of its attraction is that when real rates are very low there is the possibility of a tail event and a surge in the price of gold. Perhaps gold is a “safe haven” investment used for protection in “a time of stress.” If so, Harvey asks, how do we define stress?

After examining each of the reasons for holding gold, Harvey concludes that:
1. Gold does not provide a hedge against traditional unexpected inflation.
2. The argument for gold as a currency hedge is really just the inflation hedge argument.
3. Gold is useful in hyperinflation, and its price can be very sensitive to small probabilities (e.g., tail risk).
4. The low real-return/gold correlation is spurious.
5. The de facto gold standard argument does little to help us understand gold.
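Harvey's single-year point can be illustrated numerically. With simulated data (not Harvey's series), two unrelated series show little correlation until one 1980-style observation, an inflation spike coinciding with a gold price surge, is appended.

```python
# Illustration of how one outlier year can manufacture an apparent
# inflation-gold relationship.  All numbers are simulated.

import numpy as np

rng = np.random.default_rng(1)
d_inflation = rng.normal(0.0, 1.0, 36)    # year-to-year changes in inflation
gold_change = rng.normal(0.0, 15.0, 36)   # unrelated gold price changes (%)

# Correlation of the two unrelated series
corr_without = np.corrcoef(d_inflation, gold_change)[0, 1]

# Append one 1980-like year: inflation jumps 5 points, gold rises 100%.
corr_with = np.corrcoef(np.append(d_inflation, 5.0),
                        np.append(gold_change, 100.0))[0, 1]
```

Removing the single extreme observation collapses the measured correlation, which is the sense in which Harvey calls the perceived relationship an artifact of 1980.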


Is gold a good investment today? The exhibit below plots the real price of gold since the advent of U.S. futures trading in 1975. The exhibit shows that the real price of gold is quite volatile over time but tends to mean-revert. This is further illustrated by a plot of ten-year real gold returns against the real gold price, which shows a significant negative correlation between the real price of gold and future 10-year real gold returns. The current real price of gold is exceptionally high and would seem to portend lower future returns.

Could this time be different? The positive arguments for gold include:
1. Given limited production, a move by developing markets to hold more gold could exert substantial upward pressure on price.
2. If all investors held gold at its weight in the “market portfolio,” there would be substantial upward pressure on price.

What could make a real difference is a change in demand. The global equity and fixed income markets have a combined market value of about $90 trillion. Institutional and individual investors own most of the outstanding supply of stocks and bonds. However, at current gold prices investors own only about 20 percent, or less than $2 trillion, of the outstanding supply of gold. Since the new supply of gold that comes to the market each year has not substantially increased over the past decade, a move by individual investors to “market weight” gold in their portfolios would force nominal and real prices higher. Will prices rise based on demand, and should investors have a market-weight target for gold in their portfolios? Harvey asks whether investors could achieve “market weight” even if they wanted to.

17. The Cost Of Capital For Alternative Investments (Spring 2013)
Erik Stafford

Hedge fund performance before fees was very strong from 1996-2010 using standard measures of performance. Over the same time period the Hedge Fund Research, Inc.
(HFRI) asset-weighted index annual return was 13.6%, versus 8.5% for the S&P 500, with lower total and systematic risk. While traditional linear factor models appear to do a good job explaining the variation in hedge fund returns, Stafford argues they are poor at replicating non-local and/or extreme moves in hedge fund returns during volatile times (August 1998 and September-October 2008). To gain some insight into hedge fund returns, Stafford creates a low-cost method to replicate them: he uses four different naked index put-writing strategies. In contrast to most attempts to create a benchmark that replicates the risk, Stafford designs the option strategies to be mean-matching. He compares the put-replication returns (to the right in the charts below) to the actual HFRI and linear-model returns (shown to the left).

From the charts it is apparent that the put-writing-strategy portfolios:
1. Reliably match the mean of the hedge fund index, while linear factor models consistently produce significant mean return shortfalls.
2. Produce feasible residuals that cannot be reliably distinguished from normal, while a majority of the feasible residuals from linear factor models exhibit non-normality.

Stafford discusses seven reasons, in two categories, that hedge funds might have downside exposure similar to the naked put-writing strategies:
1. Economic:
   a. Long-only index investing: the index is exposed to downside shocks.
   b. Merger arbitrage: the probability of deal failure increases following large market declines.
   c. Convertible arbitrage: the options embedded in convertible securities are exposed to downside shocks.
   d. Credit: the implicit options of credit-sensitive securities are exposed to downside shocks.
   e. Distressed investing: the margin of safety of distressed investments can be eroded by severe market downturns.
2. Institutional:
   a. Funds with outside capital: investors are likely to be liquidity constrained after downside shocks and withdraw funds.
   b. Funds using a broker’s balance sheet and short-term financing: capital funding, terms, and requirements are highly sensitive to extreme market moves, as the broker’s financial condition is related to short-term downside shocks.

In the case of investments with downside exposure, the magnitude of the required returns is large relative to those implied by linear models. An accurate assessment of the cost of capital, Stafford says, is fundamental to efficient capital allocation throughout the economy, and investors in alternatives are not exempt.

Using the put-writing strategy, net of fees, investors in equity-related hedge fund strategies have essentially covered their cost of capital, as shown in the following table. He notes one caveat: in practice, many investors have realized lower returns than those in the survivorship-biased indices he examined.

Stafford concludes:
1. The results show that linear factor models tend to understate the cost of capital when assets have non-linear payoffs and allocations are large.

2. The HFRI broad hedge fund index has not generated returns that have reliably exceeded the cost of capital.
3. The put-writing strategies show returns above the cost of capital of 4.9% per annum for risk-tolerant investors and 2.7% per annum for endowment investors.

18. Can Financial Engineering Cure Cancer? A New Approach For Funding Large-Scale Biomedical Innovation (Fall 2012)
Andrew Lo

Can financial engineering cure cancer? Lo believes it can, by bringing a new approach for funding large-scale biomedical innovation to the field of cancer research. Lo believes that biomedical research is handicapped by the existing structure, which he characterized as “an expensive, lengthy, and risky process that challenges traditional funding vehicles, which are limited in size, scope, and risk appetite.” Specifically, he believes the current business model for life sciences research and development is flawed, as evidenced by the following facts:
1. The productivity of big pharmaceutical companies has declined in recent years, as has their stock-price performance.
2. The aggregate research and development (R&D) budget of the pharmaceutical industry almost doubled from 2002 to 2010, with little appreciable impact on the number of new drugs approved.
3. Life sciences venture-capital investments have not fared much better.

Given declining real prescription-drug spending, rising costs, shrinking R&D budgets, expiring blockbuster patents, post-Vioxx fallout, lower levels of funding and risk tolerance among venture capitalists, and unprecedented stock-market volatility and uncertainty, Lo said that it is not surprising that the future of this industry appears so bleak. It is bleak in spite of many promising breakthroughs: stem-cell therapies such as bone marrow transplants; powerful new computational tools for medical imaging and radiosurgery; diagnostic applications of nanotechnology; the identification of biomarkers for certain diseases; the sequencing of the human genome; and patient-specific gene-based compounds. With past breakthroughs and new needs, why does the industry appear to be so challenged for funding, he asked? Lo concluded that there is a mismatch.

Biomedical research is complex, expensive, uncertain, lengthy, and fraught with conflicting non-pecuniary motivations and public-policy implications. Potential shareholders and limited partners are not the most effective funding sources: ownership of public equities implies constant scrutiny of corporate performance from many different types of shareholders, and limited partner investments are sparse. This pushes senior management of companies in this industry toward projects with clearer and more immediate pay-offs, and away from more speculative but potentially transformative research. The combination of an uncertain economy and real concern about the availability of future rounds of financing deters venture capitalists; there is a preference for proven and economically viable technologies. In addition, recent evidence suggests the existence of a “valley of death”: a funding gap between basic biomedical research and clinical development.

Lo proposed an alternative for funding biomedical innovation through the use of “financial engineering.” The approach involves the creation of large diversified portfolios of projects and structuring the financing for these portfolios through combinations of equity and securitized debt. The specific form of the special purpose vehicle (SPV) is shown below.
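The diversification logic behind the proposal can be sketched with a simple simulation. The parameters below (per-project cost, success probability, payoff) are hypothetical illustrations, not the calibration used in Lo's analysis.

```python
# Simulation of the megafund diversification argument: many independent,
# individually risky projects pooled into one portfolio.  All parameters
# are hypothetical.

import numpy as np

COST, PAYOFF, P_SUCCESS = 0.2, 10.0, 0.05   # $ billions, per project

def portfolio_multiple(n_projects, n_sims=100_000, seed=42):
    """Simulated distribution of portfolio payoff / portfolio cost."""
    rng = np.random.default_rng(seed)
    successes = rng.binomial(n_projects, P_SUCCESS, size=n_sims)
    return successes * PAYOFF / (n_projects * COST)

single = portfolio_multiple(1)     # one project: usually zero, rarely 50x
fund = portfolio_multiple(150)     # 150 projects: same mean, far less risk
```

The pooled fund has the same expected multiple as a single project, but a total loss becomes vanishingly unlikely, which is what makes the portfolio's cash flows steady enough to support securitized debt.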

Lo proposed creating an initial fund of $30 billion that invests in 150 early-stage projects. He demonstrated that such an investment could provide a reasonable return while gaining the risk-reducing benefits of diversification. Lo was compelling in his information and analysis, and persuasive in his infectious passion, as he urged the Q-Group participants to become early investors in the concept.

Lo admitted that this proposed application of securitization may be untested, but argued that the techniques are not. Rather than shying away from SPVs because of their role in the financial crisis, he advocated a more measured response that builds on their strengths to solve the most pressing social priorities, such as cancer research and global warming. In today’s low-interest-rate environment, investors are seeking new investment opportunities that are less correlated with traditional asset classes. Instead of asking whether we can afford to invest billions more at this time, Lo said we should be asking whether we can afford to wait.

19. Institutional-Quality Hedge Funds (Fall 2011)
David Hsieh

David Hsieh, Professor, Fuqua School of Business, Duke University, concluded the Fall 2011 Q-Group meetings by presenting “Institutional-Quality Hedge Funds.”

Hsieh is interested in hedge funds and their returns and poses the question, “What is the experience of the average dollar invested in hedge funds?” To answer that question, he says, he needs a market portfolio of hedge funds: asset-weighted hedge fund returns like the CRSP value-weighted index. Since hedge funds are private, that data is neither available nor complete; even such data as assets under management are incomplete. It is this data problem that is the subject of the presentation.

Hsieh’s first task is to create a usable hedge fund database from both public and private sources. There are 2,608 firms from public sources, and these provide 94% of his data. Using this database, he describes the final data set he creates. He says that there has been significant growth in the institutional use of hedge funds. Furthermore, the very largest hedge funds dominate the industry. When examining firms by assets under management (AUM), he finds that large firms manage more than 80% of the assets, survive longer, tend to stay in the sample longer, and have lower entry and exit rates.

He sorts his data by deciles and finds that:


1. Excess return, alpha, and the Sharpe ratio have a U-shaped pattern across the deciles.
2. Large firms have less equity and more credit-spread exposure.
3. Equity risk is lower in a bear market.
4. All funds have emerging-markets exposure.

Hsieh then turns to what he calls an S&P-like hedge fund analog, Institutional Quality (IQ) firms: the 8-10% of firms in the sample that manage 60-75% of the AUM. Over the past decade:
1. These IQ firms grew continuously until 2008.
2. Their AUM outgrew those of smaller firms.

As for the database Hsieh created, he says that there is good news and bad news:
1. The good news is that:
   a. Adding the approximately 100 mega firms to the data greatly increases the industry AUM.
   b. Reporting and non-reporting firms of similar size have similar survival/performance characteristics.
   c. An AUM-weighted portfolio of IQ firms provides a good proxy for the market portfolio of hedge funds and only requires collecting data from a manageable number of non-reporting firms.
2. The bad news is that:
   a. Commercial databases contain a declining share of total industry AUM.
   b. Survival and performance data vary across firm size, thus making any weighting scheme relevant.

Finally, the best news: an asset-weighted portfolio of IQ firms provides a good proxy for the market portfolio of hedge funds and only requires collecting data from a manageable number of non-reporting firms.

20. The Effect Of Housing On Portfolio Choice (Spring 2011)
Raj Chetty

Raj Chetty, Professor of Economics, Harvard University and National Bureau of Economic Research (NBER), presented “The Effect of Housing on Portfolio Choice,” coauthored with Adam Szeidl, Associate Professor, Department of Economics, University of California at Berkeley, and NBER.

Chetty starts with the question that motivates this research: how does homeownership affect household financial investment decisions? He says that the interaction between housing and financial markets has attracted attention because of its importance for understanding the links between macroeconomic fluctuations, asset pricing and, in this case, portfolio management.

Theoretical studies have shown that housing, due to its twin roles as a consumption good and as an illiquid asset, affects optimal portfolio allocations through two channels: owning a home increases a household’s exposure to risk, while adjustment costs in housing, such as moving, effectively amplify risk aversion, because both force households to concentrate fluctuations in wealth away from housing. While these effects can have a large quantitative impact on portfolios, theory and evidence reach conflicting conclusions about the nature of the impact: theory predicts that housing lowers the demand for risky assets; empirical studies find no systematic relationship between housing and portfolios.

In their work, Chetty and his coauthor identify two things that reconcile the theoretical predictions and the evidence:
1. Separating the effects of mortgage debt and home equity on a portfolio: mortgage debt reduces the demand for stocks while home equity raises it.

2. Understanding that the endogeneity of housing choice biases previous empirical estimates: those who buy bigger houses may face lower labor income risk. As an example of how this may happen, Chetty discusses the very different house purchase decisions of tenured and untenured faculty members.

In previewing the conclusions, Chetty says that there are large impacts of housing on portfolios, of the same order of magnitude as the impacts that come from variations in income and wealth.

For their research they use a two-period Merton-style portfolio model with housing, featuring both risk, via the covariance between home prices and stock returns, and illiquidity, via the probability that housing cannot be adjusted in the second period. For data they use the 1990-2004 asset modules from the Survey of Income and Program Participation. They observe asset data both before and after the purchase of a new house for 2,784 households.

To generate the information on the variation in mortgages and home equity they use several sets of information:
1. To identify housing price information they use state-level repeat-sale home-price indices for property value and home equity wealth. This measure compares the average state house price in the year in which the portfolio is observed (the “current year”) with the house price in the state in the year of home purchase.
2. One concern is the impact of labor market conditions on home prices. To incorporate fluctuations in house prices tied to labor market conditions, they interact national house prices with the variation in land availability across states. This, Chetty says, incorporates information about land scarcity in places such as New York, Boston, and LA, versus places where land is abundant.
3. To incorporate the risk preferences of people who buy houses when local house prices are high, they use panel data that tracks changes in the portfolio of the same household over time.

Using their model they find that housing purchases have both immediate and long-lasting effects on household portfolio choices:
• Increasing mortgage debt, holding wealth constant, reduces a household’s propensity to participate in the stock market and reduces the share of stocks in the portfolio conditional on participation: they estimate that the elasticity of the share of liquid wealth allocated to stocks with respect to mortgage debt is -0.3.
• Increases in home equity wealth, holding property value fixed, increase stockholding. The estimated elasticity of the stock share of liquid wealth with respect to home equity is 0.44.
• These elasticities are larger for households with larger adjustment costs, but similar across high- and low-risk housing markets.

In practical terms, Chetty concludes:
• Mortgage debt/committed consumption may be a useful predictor of fluctuations in the demand for risky assets and in asset prices.
• Households should hold more conservative portfolios when they hold a lot of housing commitments.

Clearly, home ownership plays a very important role in assessing risk and planning asset allocation for individual financial portfolios, and it did so in the most recent financial crisis.

21. Portfolio Choice With Illiquid Assets (Spring 2011)
Andrew Ang

Andrew Ang, Ann F. Kaplan Professor of Business, Columbia Business School and

NBER, presented “Portfolio Choice with Illiquid Assets,” co-authored with Dimitris Papanikolaou, Assistant Professor of Finance, Kellogg School of Management, Northwestern University, and Mark M. Westerfield, Assistant Professor of Finance and Business Economics, Marshall School of Business, University of Southern California. Ang had previously presented papers at the Q-Group seminars in the Spring of 2004 and the Fall of 2007.

Ang looks at one of the “victims” of the Financial Crisis of 2008, the Harvard Endowment, and asks why it lost so much value. While the endowment had a slightly positive performance relative to the S&P 500 from June 2008 to June 2009, it still lost 27.3% of its value. This was a significant turnaround from past performance, as its assets shrank from $36.9 billion to $26.0 billion.

Harvard University relies heavily on endowment distributions for operations. In 2008, endowment contributions represented 34% of the University’s $3.5 billion revenue, and some schools within Harvard were more heavily reliant on the endowment for their operating budgets. The spending rate for the endowment is variable but smoothed over time, and by June 2008 it was 4.8%. Any significant decline in assets impacts that payout. Clearly, this dichotomy between spending and earning sets up a conundrum during an endowment fund’s downturns. Ang says that the endowment losses from the financial crisis meant that Harvard’s budget had to shrink by about 20%, not including the massive cash outflows due to its swap position.

Ang says that the primary cause of the distress was the adoption of an “Endowment Model” that includes a significant investment in illiquid assets, exacerbated by an overweighting of illiquid assets. Harvard’s endowment policy model and its actual portfolio in 2008 are shown below. As a result of the decline in its endowment, Harvard found itself with four choices: liquidate Harvard; get increased donations; cut expenses; issue debt. Harvard cut expenses and was forced to raise over $2 billion in debt, half of it taxable. Ang says that this prompted him and his coauthors to think about the normal asset allocation model, a standard Merton model, in the light of potential asset illiquidity.

The Merton model has tradable assets and an agent that is concerned only with wealth. In this model, risk comes from the possible loss when total wealth goes to zero. However, since the agent can only consume out of liquid wealth, Ang says the concern should not be total wealth but liquid wealth going to zero. The potential loss of liquid wealth is an important source of risk.

This risk associated with illiquidity leads Ang to examine how illiquid assets affect asset allocations. His model includes riskless bonds, risky assets, and equity that is freely tradable, as well as illiquid risky assets that are tradable only at random times arriving via a Poisson process. As outputs he determines the appropriate asset allocation and spending rate given the newly incorporated risk of illiquidity.

In the model, the presence of illiquidity induces time-varying, endogenous risk aversion. The ratio of liquid to total wealth

becomes a state variable, and effective risk aversion depends on liquidity solvency ratios. Ang provides a graphic illustration of effective relative risk aversion (RRA), shown below.

Ang spells out the implications of his model:

1. Illiquidity markedly reduces optimal holdings relative to the Merton benchmark model. Furthermore, illiquid asset holdings are highly skewed.
2. In the presence of illiquidity, near-arbitrage opportunities arising from high correlations are not exploited. There is no arbitrage because illiquid and liquid assets are not close substitutes.
3. To be able to trade the illiquid asset continuously, an investor requires liquidity premiums that depend upon skewness. Ang quantifies the premiums as shown below.

Ang concludes with three observations:

1. Illiquidity risk induces time-varying risk aversion that is greater than the constant risk aversion coefficient of utility. This is because illiquid assets cannot be used to fund immediate consumption.
2. The periodic inability to trade has a large impact and creates allocation and consumption outcomes that differ significantly from those of the standard Merton model.
3. For endowments, the endowment model and its spending regime must include the impact of illiquidity as the proportion of less liquid assets increases.

Banking and Financial Industry

22. Toward Determining Systemic Importance (Spring 2012)
Mark Kritzman

Mark Kritzman, President and CEO of Windham Capital Management, LLC, and Senior Lecturer, MIT Sloan School of Management, made the final presentation on Tuesday of the Spring 2012 Q-Group Meetings, entitled “Toward Determining Systemic Importance,” coauthored with William B. Kinlaw, Managing Director and Head of the Portfolio and Risk Management Group, State Street Associates, and David Turkington, Vice President, State Street Global Markets.

Until the shock of the Global Financial Crisis, systematic risk (the extent to which movement of a broad market or economic factor imparts risk to a narrow entity such as an individual company or stock) had long been the focus of investors as they sought to design efficient portfolios and to avoid uncompensated risks. In the wake of the Global Financial Crisis, focus shifted to systemic risk (risk that results from a relatively narrow shock that propagates quickly and broadly throughout the financial system and the real economy), for good reason. The Global Financial Crisis made it clear that narrow events such as Lehman Brothers’ default could cause the global stock market to crash, paralyze the financial system, and cast the world economy into a deep and long recession.

Measuring systemic risk is important to investors, who need to assess the vulnerability of a portfolio to systemic shocks and to develop a defensive strategy. It is equally important to policy makers, who must ensure that policies target the right entities before a shock and effectively engage in preventive or corrective measures when circumstances warrant intervention. While important, Kritzman says, measuring systemic risk is difficult.

Attempts have been made to gauge the degree of systemic risk within the financial system and to identify the linkages across financial institutions and other key entities. This effort has been difficult for several reasons: securitization obscures many connections among stakeholders; private transactions create opacity; deceptive accounting methods conceal financial dependencies. Moreover, the complexity and volume of derivatives contracts render it nearly impossible to disentangle the web of connections throughout the financial system. Lehman Brothers, for example, had more than 1.5 million derivatives contracts on its books when it defaulted.

Kritzman introduces his measure of systemic importance, the “absorption ratio.” The absorption ratio (AR) is measured as the fraction of the total variance of a set of asset returns explained, or “absorbed,” by a fixed number of characteristic vectors (eigenvectors). He says this ratio captures both an asset’s riskiness and its connectivity to other risky assets during periods of high systemic risk. When the absorption ratio is low, markets are more resilient to shocks and less likely to exhibit a system-wide response to bad news. Compact markets are more fragile.

For a market to sell off significantly, two conditions typically must prevail: the market must be fragile and it must receive a negative shock. Despite some false positives, Kritzman reports that the absorption ratio distinguishes relatively benign market conditions from dangerous ones, and does so in advance. He provides the following chart (the dark area is the AR and the line represents the S&P 500 from 1998 to 2012).

He extends the idea to a market timing strategy. Based on the systemic risk index, he creates a dynamic switching model and compares its performance to that of a static 50/50 allocation. His encouraging results are shown in the chart below.

Kritzman next turns to extending the absorption ratio to determine systemic importance, by constructing a measure of centrality that includes:

• The asset’s vulnerability to failure.
• A measure of how broadly and deeply an asset is connected to other assets in the system.
• The riskiness of the other assets to which it is connected.

It is important to note that none of these measures is sufficient on its own, but collectively they may offer the best observable indication of systemic importance.
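The absorption ratio lends itself to a compact numerical sketch. The following illustration is not code from the presentation; the synthetic data, asset counts, noise levels, and the choice of retaining two eigenvectors are all assumptions made for the example.

```python
import numpy as np

def absorption_ratio(returns, n_vectors):
    """Fraction of total return variance 'absorbed' by the largest
    n_vectors eigenvectors of the asset covariance matrix."""
    cov = np.cov(returns, rowvar=False)   # N x N covariance of asset returns
    eigvals = np.linalg.eigvalsh(cov)     # eigenvalues in ascending order
    top = eigvals[::-1][:n_vectors]       # the n_vectors largest eigenvalues
    return top.sum() / eigvals.sum()      # absorbed variance / total variance

# Synthetic illustration: a tightly coupled market (one dominant common
# factor) versus a diffuse one with independent assets.
rng = np.random.default_rng(0)
factor = rng.normal(size=(500, 1))
tight = factor @ np.ones((1, 10)) + 0.3 * rng.normal(size=(500, 10))
loose = rng.normal(size=(500, 10))

print(round(absorption_ratio(tight, 2), 2))  # high AR: unified, fragile market
print(round(absorption_ratio(loose, 2), 2))  # low AR: diffuse, resilient market
```

On this toy data the tightly coupled market absorbs most of its variance in two eigenvectors while the diffuse market absorbs far less, which is the distinction Kritzman tracks through time on real returns.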

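The eigenvector-based centrality score can be sketched in the same spirit. The code below is an illustrative reading of the construction (each asset's absolute eigenvector weights, scaled by each retained eigenvector's share of explained variance), not Kritzman's exact specification; the factor structure and loadings are invented for the example.

```python
import numpy as np

def centrality_scores(returns, n_vectors):
    """Illustrative centrality: each asset's absolute weight in the top
    n_vectors eigenvectors, weighted by the relative importance of each
    eigenvector (its share of the variance explained by the retained set)."""
    cov = np.cov(returns, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)         # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_vectors]  # indices of the largest ones
    importance = eigvals[order] / eigvals[order].sum()
    return np.abs(eigvecs[:, order]) @ importance  # one score per asset

# One common factor; asset 0 is the most heavily exposed by construction.
rng = np.random.default_rng(1)
factor = rng.normal(size=(500, 1))
loadings = np.array([[2.0, 1.0, 1.0, 0.2, 0.2]])
returns = factor @ loadings + 0.5 * rng.normal(size=(500, 5))

scores = centrality_scores(returns, 2)
print(scores.argmax())  # asset 0 dominates the leading eigenvector
```

Using several retained eigenvectors, rather than the principal eigenvector alone, lets the score reflect more than one source of market variance.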

To create a measure of centrality, Kritzman multiplies the asset’s weight (in absolute value) in each eigenvector by the relative importance of the eigenvector (measured as the percentage of variation explained by that eigenvector divided by the sum of the percentages of variation explained by all of the eigenvectors that comprise the subset of those that are most important).

d_t = (r_t − µ)' Σ^-1 (r_t − µ)

Where:
d_t = vector distance from the multivariate average
r_t = return series
µ = mean vector of the return series
Σ = covariance matrix of the return series

Kritzman contrasts this specification with previous measures that use the principal eigenvector alone. In this centrality measure, the use of several eigenvectors in the numerator of the absorption ratio suggests that, most of the time, several factors contribute importantly to market variance. He says the centrality score measures the degree to which a particular asset or industry drives market variance.

To demonstrate how the measure works, he ranks the systemic importance of industries (December 1997 - June 2011) and stocks (December 1992 - June 2011) using U.S. stock market data. Most of the list is what would be expected; however, Kritzman notes the absence of Lehman Brothers among the top 10. He explains that Lehman was not always systemically important, but became systemically important in the period leading up to the Financial Crisis and maintained relatively high centrality throughout the crisis, right up to its collapse. He then ranks the systemic importance of global financial institutions as of October 2011, and urges that the results be heeded but interpreted with circumspection.

He notes that the centrality measure is not an indication of an entity’s financial strength or weakness, nor is it a gauge of creditworthiness or a predictor of investment performance. Rather, it is a statistical representation of an entity’s vulnerability to failure and connectivity to other risky entities, and it is derived solely from historical returns, ignoring current fundamental information.

Kritzman concludes that his approach to measuring systemic importance relies solely on the behavior of asset prices, giving it two virtues: it is simple and easily updated, and it captures risks and linkages that may not be otherwise observable. As a measure, however, it is limited because it fails to consider fundamental factors that may not be embedded in security prices.

23. Why Bank Equity Is Not Expensive (Fall 2011)
Anat Admati

Anat R. Admati, George G.C. Parker Professor of Finance and Economics, Graduate School of Business, Stanford University, presented “Fallacies, Irrelevant Facts, and Myths in the Discussion of Capital Regulation: Why Bank Equity is Not Expensive,” which she coauthored with Peter M. DeMarzo, Mizuho Financial Group Professor of Finance, Graduate School of Business,

Stanford University; Martin F. Hellwig, Max Planck Institute for Research on Collective Goods; and Paul Pfleiderer, C.O.G. Miller Distinguished Professor of Finance, Graduate School of Business, Stanford University.

Admati begins with the motivation for this presentation: the 2007-2008 Financial Crisis and its cause. She asks, “Was it mainly (or just) a liquidity problem, a run that affected a wonderful but inherently fragile modern banking system, or the result of excessive leverage and risk, and distorted incentives?” This basic question leads to others:

1. Can a large financial institution “fail”? Can bankruptcy or regulation be made to work? Entwined in this question is whether banks are too big/interconnected/important to fail or be allowed to fail.
2. What is systemic risk, and what are the systemically important financial institutions?
3. What are the costs and benefits of regulation? She points out that health and safety issues arise in other regulated industries: airlines, medicine, the environment, nuclear plants. Should financial institutions be included in this group?

Admati uses a quote from a November 20, 2009, interview with Josef Ackermann, CEO of Deutsche Bank, to illustrate the crux of the bank equity tradeoff:

“More equity might increase the stability of banks. At the same time, however, it would restrict their ability to provide loans to the rest of the economy. This reduces growth and has negative effects for all.”

She then edits it to show the “real deal.” (Her additions, underlined in the original, are shown here between underscores; her deletions are shown between ~~strikethrough~~ markers.)

“_Well-designed capital regulation that requires much_ more equity ~~might~~ _will_ increase the stability of banks. At the same time, however, it would ~~restrict~~ _enhance_ their ability to provide _good_ loans to the rest of the economy _and remove significant distortions_. This ~~reduces~~ _may reduce the_ growth _of banks. However, it_ ~~and has negative~~ _will have positive_ effects for all _(except possibly bankers)_.”

Admati says that there is a pervasive sense in discussions of bank capital regulation that equity is expensive and that higher equity requirements, while beneficial, also entail a significant cost. The arguments she and her coauthors examine are those most often made in this context and are, in her words, fallacious, irrelevant, or very weak. Their analysis leads them to conclude that requiring banking institutions to be funded with significantly more equity entails large social benefits and minimal, if any, social costs. She provides the following arguments made against high equity requirements and explains why they are either incorrect or unsupported:

1. Increased equity requirements would force banks to “set aside” or “hold in reserve” funds that could otherwise be used for lending. To this she says there is no immediate relationship between liquidity requirements and capital requirements: capital requirements refer to the mix of debt and equity used to fund banks, while liquidity or reserve requirements relate to the type and mix of assets banks must hold. Since they concern two different sides of the balance sheet, there is no immediate relationship between them.
2. Increased equity requirements would increase banks’ funding costs because equity requires a higher return than debt. This, Admati says, is a fallacious argument. Instead, she says, the required return on equity, which includes a risk premium, must decline when more equity

is used. Any argument or analysis that holds fixed the required return on equity when evaluating changes in equity capital requirements is fundamentally flawed.
3. Increased equity requirements would lower banks’ return on equity (ROE) and result in a loss in value. Fallacious, she says. The expected ROE of a bank increases with leverage and would thus indeed decline if leverage is reduced. One exception is when increased leverage brings more government subsidies.
4. Increased equity requirements would increase banks’ funding costs because banks would not be able to borrow at the favorable rates created by tax shields and other subsidies. This is true, Admati says. Through taxes and underpriced explicit or implicit guarantees, debt financing is subsidized and equity financing is effectively penalized. But policies that encourage high leverage are distorting and paradoxical, because high leverage is a source of systemic risk and the subsidies come from public funds. If some activities performed by banks are worthy of public support, subsidies should be given in ways that do not lead to excessive leverage.
5. Increased equity requirements would be costly since debt is necessary for providing market discipline to bank managers. In theory, Admati says, debt can sometimes play a disciplining role, but the arguments for lenders as disciplinarians are weak:
a. High leverage actually creates many frictions, including incentives for banks to take excessive risk.
b. There is little or no evidence that banks’ debt holders provided any significant discipline during the Financial Crisis.
c. The argument ignores the potential disciplining role that can be played by equity shareholders or through alternative governance mechanisms.
d. The discipline provided by debt generally relies upon a fragile capital structure funded by short-term debt that must be frequently renewed.
e. There must be less costly ways to solve governance problems.
6. Increased equity requirements would force or cause banks to cut back on lending and/or other socially valuable activities. Admati counters that higher equity capital requirements do not mechanically limit banks’ activities. Any “debt overhang” problem can be alleviated if regulators require undercapitalized banks to recapitalize quickly by restricting equity payouts and mandating new equity issuance. Once properly capitalized, Admati points out, banks would make better lending and investment decisions, and issuance costs would be reduced.
7. The fact that banks tend to fund themselves primarily with debt and have high levels of leverage implies that this is the optimal way to fund bank activities. There is no logic to this argument, Admati says. Just because financial institutions choose high leverage does not mean this form of financing is optimal. Rather, it means that the observed behavior is the result of factors unrelated to social concerns (e.g., tax incentives and other subsidies), of frictions associated with conflicts of interest, and of an inability to commit in advance to certain investment and financing decisions.
8. High equity requirements would drive banking activities from regulated to unregulated sectors and would thus be ineffective or even harmful. First, in the run-up to the crisis, many activities and entities in the so-called “shadow banking system” relied on credit backstops and other commitments made by regulated

entities. Thus, these activities and entities were, and continue to be, within regulators’ reach. Second, defining the entities and activities that should be regulated will always be a challenge, but it is far from clear that, given the tools already and potentially available to lawmakers and regulators, the challenge of effective capital regulation cannot be met.

Admati provides the following recommendations:

1. Since bank equity is not expensive, regulators should use equity requirements as a powerful, effective, and flexible tool with which to maintain the health and stability of the financial system.
2. Regulators should use restrictions on equity payouts and mandated equity issuance to help assure that banks maintain adequate and high equity capitalization.
3. If certain activities of the banking sector are deemed to require subsidies, then subsidies should be given in ways that alleviate market frictions, not through a system that encourages high leverage.
4. Better resolution procedures for distressed financial institutions, while necessary, should not be viewed as alternatives to having significantly better capitalized banks.
5. Higher equity requirements are superior to attempts to fund bailouts through a “bailout fund” supported by bank taxes.
6. Approaches based on equity are superior to those that rely on non-equity securities such as long-term debt.

To conclude, Admati says that after the Financial Crisis, the burden of providing a compelling argument for why high bank leverage is justified must be on those making the claim. Policy should not be made on the basis of fallacious claims; it must be based on social costs, social benefits, and solid reasoning. Finally, she asks, “Why do investors/shareholders allow banks to lobby against sensible regulations using flawed claims?”

24. Neglected Risks, Financial Innovation, And Financial Fragility (Spring 2011)
Andrei Shleifer

Andrei Shleifer, Professor of Economics, Harvard University, presented “Neglected Risks, Financial Innovation, and Financial Fragility,” coauthored with Nicola Gennaioli, Assistant Professor, Universitat Pompeu Fabra and CREI, and Robert Vishny, the Myron S. Scholes Distinguished Service Professor of Finance at the University of Chicago Booth School of Business. Shleifer had previously presented a paper at the Q-Group® seminar in the Fall of 1995.

In this first presentation of the Spring 2011 Q-Group Seminar series, Shleifer and his coauthors seek to understand the financial fragility that created the chaos of the recent financial crisis. At the heart of his concern is the fact that in this crisis, as opposed to the earlier internet crisis, intermediaries concentrated the risk.

Shleifer’s and his coauthors’ perspective on the cause of this crisis is new: they do not focus on leverage and institutional structure, what he calls the plumbing. Rather, they focus on the creation of “false substitutes” that meet the demand for safe investments in a high-demand, limited-supply environment. This increase in substitute products creates private money and, importantly, he says, increases risk, as both investors and, to a lesser degree, intermediaries neglect risks and leverage. Shleifer says that the focus on leverage and plumbing by policy makers needs to be expanded to include the speed of innovation, a critical variable in creating a financial crisis.

In this work Shleifer adapts a standard model of financial innovation where:


• Investors demand a particular (often safe) stream of cash flows.
• Traditional securities become limited, and intermediaries create new substitute securities from risky assets. These new securities are designed to offer the desired cash flow stream to risk-shy investors.
• Large numbers of these new securities, called by Shleifer “false substitutes,” are issued to meet the ensuing demand.
• At some point, previously unattended-to risks are revealed, surprising investors and intermediaries.
• There is a flight from the “false substitutes.”

Shleifer notes that recent events fit this innovation model: securitization; collateralized mortgage obligations; money market funds. He suggests that junk bonds, and the importance of Drexel as an intermediary, may be included on the list.

To this familiar innovation model, Shleifer and his colleagues add two real-world adaptations:

• Surprise, when investors, and often intermediaries as well, recognize the unusual risks that accompany the new “safe cash flow” securities.
• Investors’ subsequent flight from the “false substitutes” to safe traditional securities.

Important to their model is the assumption that investors have a preferred habitat for safe assets, modeled as infinite risk aversion. Shleifer says that their model suggests the following scenario as the innovation proceeds:

• Markets for new securities are fragile. When news about unattended risks catches investors by surprise, they dump the “false substitutes” and flee to the safety of traditional securities. Over-issuance then is the source of risk.
• In equilibrium, intermediaries buy back many of the new securities. However, in a crisis fueled by innovation, the supply is huge. Over-issuance causes prices to fall sharply even without fire sales.
• Prices of new claims collapse below initial values as investors flee to safety.
• Intermediaries’ wealth provides insurance against price collapses, but the glut of new securities is a crucial driver of the crisis: new claims are risky due to over-issuance, and intermediaries’ wealth is insufficient for the volume.

Shleifer says that their two-agent financial market model includes a risk-averse investor and a risk-neutral intermediary, and it emphasizes the central role of neglected low-probability risks. It includes both an assessment of the nature of financial innovation and of financial fragility.

After discussing the two-asset, three-period model, he turns to describing how it provides a novel perspective on the cause of the frozen asset-backed commercial paper market in the summer of 2007. Shleifer does acknowledge there are two alternative explanations for the events: institutional speculation and what is called a “perfect storm.”

1. Institutional speculation. This is the leading alternative explanation of the events of 2007. Institutions speculated in AAA-rated securities using short-term finance while counting on a government bailout. The positions sustained huge losses and led to the eventual bailouts.
2. “Perfect storm.” In this view investors correctly considered the extremely low probability of a crisis and priced securities accordingly; however, in this instance the very-low-likelihood event occurred.

Shleifer questions both explanations. First, as to the role of institutional speculation, he reminds us that prior to the events of the

summer of 2007, financial markets universally perceived AAA-rated MBS and CDOs to be safe, the banks’ credit default swaps traded as if banks were totally safe, and banks provided repo financing to hedge funds using MBS as collateral with very low haircuts. He argues that both the banks and the investors neglected the risks and were shocked by what subsequently happened. Second, as for the “perfect storm,” Shleifer says that this view is inconsistent with the evidence: investors actually used the wrong models, rather than correct models that adequately evaluated low-probability extreme events.

Their model, Shleifer points out, is in agreement with the widely accepted prescription that greater capital and liquidity of financial intermediaries would lead to more stable markets. However, he says, it goes further by questioning the idea that all creation of private money by the banking system is necessarily desirable: at least in some cases such securities owe their very existence to neglected risks and have proved to be “false substitutes” for the traditional ones. “False substitutes” by themselves lead to financial instability and may reduce welfare, even without the effects of excessive leverage. He does conclude that financial fragility in their model could interact, perhaps dangerously so, with leverage when mispriced securities are used as collateral, and thus can result in fire sales. Sales from unwinding levered positions and sales from disappointed expectations thus go in the same direction.

Shleifer concludes with public policy concerns. He says that recently proposed policy, while desirable in its intent to control leverage and fire sales, does not go far enough. It is not just the leverage, he points out, but the scale of financial innovation and of the creation of new claims itself that requires regulatory attention. Such attention might be especially warranted when investors buy securities through an intermediary that explicitly or implicitly guarantees them. Regulators may wish to require that intermediaries hold enough capital to make good on those guarantees or else refrain from making them. This might be a particularly significant issue when the safety of either the securities or the intermediaries is illusory.

25. Who Is Doing What To Whom On Wall Street And Why? (Spring 2011)
Charlie Gasparino

Leo Guzman introduced the Monday dinner speaker, Charlie Gasparino, Senior Correspondent for the Fox Business Network, a Wall Street insider, book author, and commentator. His topic was “Who Is Doing What to Whom on Wall Street and Why?” However, he suggested his talk really was simpler and proposed the title “What’s Going on Inside Wall Street?”

He began with his sense of what those on Wall Street are talking about at present: they seemed obsessed with insider trading, although he was unsure how to define insider trading. With regard to the SEC, he said he was unsure of what action they might be planning, and which financial executives might be implicated. Further, in talking about some widely known company heads, he said that their tenure may be coming to an end and there are those who are gleeful at the prospect.

Gasparino was especially concerned about the bailouts of bankers, and wondered whether some of those bankers who say their firm was not bailed out were actually bailed out, albeit indirectly. He said that he is opposed to banker bailouts and would like to have them stopped. This would significantly reduce risk taking, he said, describing hedge funds as an example of risk taking that does not depend upon the comfort of bailouts.

Finally, turning to politics, he said that while Wall Street does not entirely trust

Obama, they will support him in the next election. In answer to a question about who would be the Republican candidate, he said that he had no idea who it might be.

Behavioral Finance

26. The Worst, The Best, Ignoring All The Rest: The Rank Effect And Trading Behavior (Fall 2014)
Samuel Hartzmark

Modern portfolio theory emphasizes the importance of looking at individual security trading decisions in a portfolio context. In contradiction, the behavioral finance literature has largely focused on psychological aspects of investing in a narrow framing context, where the buy and sell decision for any individual security is looked at in isolation. Samuel M. Hartzmark, Assistant Professor of Finance at The University of Chicago Booth School of Business, gave a behavioral-finance-focused talk that looked at whether there is joint or separate evaluation in the context of portfolio formation; that is, whether investors consider the performance of other securities in their portfolio when they make decisions about a given security. He finds strong evidence of a rank effect, which he interprets as evidence of joint evaluation.

The main empirical result is summarized in a graph that depicts the probability of a position being sold based on the ranking of its cumulative return since purchase. The graph on the left shows that individual investors are 20% more likely to sell the position that has had the highest cumulative return since purchase, relative to a security that has had average returns, and 15% more likely to sell the position that has had the lowest return, all based on individual investor account data from 1991-1996. The corresponding numbers for mutual funds, depicted in the graph on the right, are 12% and 17%, based on quarterly reporting data from 1990-2010. These numbers are significantly larger than

the disposition effect, which is 2.9% in the sample.

Hartzmark then discussed a number of tests he conducted to show that the rank effect is not the result of simple rebalancing, not related to firm-specific information, not correlated with the magnitude of returns, not the result of tax-loss selling, and does not depend on whether every stock in the portfolio is either at a gain or a loss. The following table, for instance, depicts the difference in average sale probabilities between two investors: the first investor holds a stock that is the best performer in his portfolio, while the second investor holds the same stock, but it is not his best performer. The second row shows analogous results for a stock that is the worst performer for one investor but not for the other. This would seem to rule out firm-specific information as a primary driver, and it shows the importance of the rank effect.

Hartzmark finds that the rank effect is present during every month of the year and is marginally stronger in December and January.

Hartzmark views the rank effect as one aspect of a salience effect, the propensity for investors to trade attention-grabbing stocks more actively. He presented results showing that the rank effect is even stronger when combined with other salient attributes such as alphabetical ordering, extreme returns, and high volume.

He concluded by presenting evidence from Fama-MacBeth regressions suggesting that investors can earn excess risk-adjusted returns by taking advantage of the return reversals caused by price pressure from the selling attributable to the rank effect. These results suggest an alpha of 25.7 basis points per month for the best-ranked stocks and 94.1 basis points per month for the worst-ranked stocks.

In summation, Hartzmark urged that we move beyond narrow framing in trying to understand portfolio decisions. The rank effect suggests that both individuals and mutual funds are more likely to sell their extreme-ranked stocks, and that the evaluation of any one stock depends on its portfolio context.

27. The Success Equation: Untangling Skill And Luck In Business, Sports And Investing (Fall 2013)
Michael Mauboussin

Michael Mauboussin was the dinner speaker on Monday evening of the Fall 2013 Q-Group Meetings. He spoke about the ideas in his most recent book, “The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing.” Central to the book, and to his conversation with those at dinner, was the role that skill and luck play in success and failure. Some games, like roulette and the lottery, are pure luck, while others, like chess, exist at the other end of the luck-skill spectrum. Mauboussin says that many things in life, including sports, are a combination of skill, which comes from training and practice, and luck. Extreme outliers (e.g., Joe DiMaggio’s 56-game hitting streak) are always a combination of extreme skill and luck.

He gave two examples of luck (or is it skill?):

1. A humorous example was one winner of El Gordo, the Spanish Christmas lottery, the oldest continuously run lottery in the world. The man who won had a number that ended in 48. Where did 48 come from, he was asked. He replied that for seven straight nights he had kept dreaming about the number seven, and since seven times seven equals 48 (so he thought), that was his lottery number.
2. Another is the strange path of the Mona Lisa by Leonardo da Vinci. Originally not on the list of the top da Vinci paintings, a well-publicized theft in 1911 created a


media sensation. Rumored to have been stolen by modernist enemies of traditional art, even by Picasso, the painting was returned in 1913 after considerable public outcry. The news sensation turned the Mona Lisa into a huge attraction at the Louvre that continues today.

Mauboussin proposes that we think of outcomes as resulting from the mixture of two distributions, one associated with skill and the other with luck. He believes the skill distribution in sports has moved to the right and become more compact. As an example he uses the time between the #1 and #20 finishers in the Olympics: in 1932 the difference was 39 minutes, or, as he says, the runner coming in first had time to shower and change before #20 crossed the line; at the 2012 Olympics the difference had dropped to 7 minutes. Another example, from baseball, shows how important this change can be. In 1941 Ted Williams was hitting 0.406, a 4-sigma event. Today, a 4-sigma event would result in a batting average of 0.380 because, Mauboussin says, the batting distribution is more compact for major league players today.

When we consider skill, Mauboussin says that skill increases when you are young and eventually begins to decline. He calls this pattern the arc of skill. Golfers, he says, peak around 35 and tennis Grand Slam winners around 24 (using data from 1968-2013). As to financial intelligence, Mauboussin cites work by David Laibson and colleagues at Harvard University that looks at financial intelligence as a combination of fluid intelligence, which peaks in the early 20s and is used to deal with new and complicated problems, and crystallized intelligence, which grows over time and is based on experience. Laibson and his co-authors found that for those making financial decisions the peak comes in their mid-50s. Some of the decline comes from habit, and the rest from the comfort that comes with age: you are less likely to question your own work.

Luck, Mauboussin warns, is hard to understand. Given an event, whether it is a success or a failure, we rapidly create a narrative, true or not, about what has happened. One part of the brain, which he calls “The Interpreter,” takes the facts and creates a story even when the outcome is an act of randomness. “The storytelling mind is allergic to uncertainty, randomness, and coincidence and is addicted to meaning. If the storytelling mind cannot find meaningful patterns in the world it will try to impose them…it is a factory that churns out true stories when it can, but will manufacture lies when it can’t.”

Mauboussin concludes by saying that most outcomes involve a combination of luck and skill that is hard to untangle. While deliberate practice is very important to improving performance, highly skilled people, like those at the Q-Group dinner, should remember that as skill increases, luck becomes more important.

28. Natural Expectations, Economic Fluctuations, And Asset Pricing (Spring 2012)
David Laibson

David Laibson, Robert I. Goldman Professor of Economics, Harvard University and NBER, made the first presentation of the Spring 2012 Q-Group Meetings, “Natural Expectations and Macroeconomic Fluctuations,” coauthored with Andreas Fuster, Federal Reserve Bank of New York, and Brock Mendel, Ph.D. candidate in Economics, Harvard University.

Laibson starts by reminding us that in recent decades, research in economics and finance has largely focused on the rational actor model: economic agents process all available information perfectly. Because this model rules out unjustified optimism or pessimism as an amplifying force for aggregate fluctuations, it struggles to explain some of the most prominent events, including large swings in asset prices and the credit and investment cycles that contribute to the length and severity of economic contractions.

Laibson suggests that relaxing the assumption of perfect rationality is a potential way of explaining aggregate volatility. Unfortunately, economists lack a consensus on how to do this. He uses a quasi-rational model that falls between the rational expectations and intuitive (naïve) expectations models. He calls this a natural expectations model.

He starts with two assumptions. First, he assumes that fundamentals such as earnings exhibit momentum in the short run and mean reversion in the long run, following a hump-shaped dynamic. The process starts with a pulse of news that inflates the market (he calls this a unit shock), followed by a reversal in 2-3 weeks before reaching a steady state. Laibson shows the trajectory of earnings from three different unit shocks in the chart below.

His second assumption is that agents estimate simple models that are tractable and parsimonious, the typical models used in the economics literature. He goes on to discuss the reasons for parsimony:

1. Economic: a tradeoff between model flexibility and overfitting when there are numerous potential parameters.
2. Psychological reasons that limit the number of parameters:
a. Myopia
b. Recency bias
c. Complexity aversion
d. Preference for tractability

Anchoring and representativeness also lead agents to underestimate the mean reversion. Laibson finds that this model is sophisticated enough to capture the short-run momentum, but fails to fully reflect the more subtle long-run mean reversion. Hence, when the true dynamics are hump-shaped, natural expectations overstate the long-run persistence of economic shocks. Agents with natural expectations turn out to form beliefs that assume that good times (or bad times) last forever. One thing is clear to Laibson: investors primarily rely on recent data in setting expectations.

The parsimony, however, comes with consequences:

1. Agents recognize the short-term
Thus, they over emphasize the momentum but miss some of the long-run period of the recent unit shock and neglect mean reversion. longer term historic data, data from well 2. Asset returns are excessively volatile and before the time of the shock. In doing so, exhibit overreaction. investors fail to incorporate the past lower 3. Real economic activity has amplified mean into their forecasts. The chart below cycles. shows the impulse response functions (IRFs) 4. The equity premium is large even though for cumulative excess returns caused by a long-run equity returns co-vary weakly unit shock to dividends: the shock quickly with long-run consumption growth. drives up the cumulative returns and is This premium, he notes, nearly vanishes followed by post-shock decline. Most if agents have rational expectations. important to this analysis is the historic period incorporated into the forecast. Laibson 42 incorporates historic data of from 1 to 40 that exhibit excess volatility and long-run months and finds that the shorter the period mean reversion. of incorporated history, the more the shock 2. There are cycles in consumption and impacts dividends and the more precipitous investment. the reversion post shock levels. The model 3. The covariance of returns and that results in the lowest shock and consumption growth first rises and then subsequent reversal is based on 40 months of falls. data. Clearly an emphasis on recent history There are important consequences that can exacerbate shocks. Laibson notes: For Laibson, history makes a difference 1. Since agents do not recognize the mean and the longer the history, at least up to 10 reversion, equity is perceived as many years, the better. The precise amount of times riskier than it actually is. In fact, history may vary from market-to-market equities are about 9 times less risky than around the world, he says, but the effect they are perceived to be. exists in all markets. 2. 
Rational investors should hold far more equity than typical investors. 3. Rational investors should follow a countercyclical investment strategy. Since high earnings should be a bearish indicator, in the future Laibson wants to use what he has learned to build a new predictor for bear markets. 29. New Research In Behavioral Finance (Spring 2012) Nicholas Barberis Nicholas Barberis, Stephen & Camille Schramm Professor of Finance, Yale School of Management, presented “Testing Theories Using this insight Laibson creates a of Investor Behavior Using Neural Data,” he model that combines natural expectations coauthored with Cary Frydman, PhD with a simple dynamic macroeconomic Candidate, Colin Camerer, Robert Kirby model. He finds this the model’s predictions Professor of Behavioral Economics, Peter match many observed patterns: high volatility Bossaerts, William D. Hacker Professor of of asset prices; predictable up- and-down Economics and Management and Professor of cycles in equity returns; and a negative Finance, and Antonio Rangel, Associate relationship between current consumption Professor of Economics, California Institute growth and future equity returns. He tests the of Technology. model using annual real per-capita consumption data, excess returns and P/Es Barberis begins with the two major from 1929-2010 and finds a number of paradigms in finance: interesting things: 1. The “rational agent’’ framework that 1. Low order forecasting equations miss posits sensible beliefs are updated some of the mean reversion in properly when new information arrives fundamentals, thus resulting asset prices and investors make sensible decisions in the face of risk. 43

2. Behavioral finance, which asserts that some market participants are less than fully rational, holding less than fully rational beliefs and making less than fully rational decisions under risk.

For guidance on how people deviate from rationality, he draws on research in psychology about beliefs (e.g., overconfidence and representativeness) and about decision-making under risk (e.g., prospect theory and ambiguity aversion). As for testing, he says one can start with a psychological principle and derive and test its predictions, or one can start with a puzzling fact and propose and test a psychology-based hypothesis. He starts with a puzzling fact: the "disposition effect" (individual investors have a greater propensity to sell stocks trading at a gain than to sell stocks trading at a loss). This effect has attracted considerable attention because it is challenging to explain using simple rational models of trading behavior. Barberis looks to four models for an explanation: one traditional (representativeness) and three that are behavioral (probability weighting, mutual fund flows, and realization utility).

Representativeness posits that people draw overly strong inferences from small samples of data, and it would predict long-run mean reversion in stock returns. He cites as a practical example of this theory the "hot hand" effect in basketball (a player has a better chance of hitting a shot after a hit than after a miss). Though widely held, this theory has not been supported by the evidence from professional basketball.

An alternative to representativeness is probability weighting. Standard theory postulates that the rational decision-maker should evaluate risk by:
1. Considering the different possible future outcomes.
2. Deciding how good or how bad each outcome will make him feel.
3. Weighting each outcome by its probability, creating an "expected utility" framework.

Unfortunately, this process may not be a good description of how people actually think about risk. Rather, he says, the brain may weigh probabilities in a nonlinear way, overweighting low probabilities and underweighting high probabilities. This assessment process may create an outcome more like that shown in the chart below, where the dotted line represents rational choice and the solid line represents how individuals might act if using probability weighting. Probability weighting is an element of prospect theory, a model of how people incorporate loss aversion. This sort of idea, he says, captures the odd fact that people simultaneously demand both lotteries and insurance.

Barberis notes that probability weighting predicts that a security's own skewness will be priced: positively skewed assets will be overpriced and will earn low average returns; negatively skewed assets will be underpriced and will earn high average returns. Thus, by taking a significant position in a positively skewed asset, an investor has a small chance of making a lot of money and, based on probability weighting, a small chance is highly valued. He provides a number of examples where this behavior has been identified. Stock options predicted to have more positively skewed returns have lower average returns. Stocks with high idiosyncratic volatility have low average returns, while those that are positively skewed have the reverse. As for explaining the disposition effect, a model of the stock market in which investors process risk according to prospect theory seems to do better than representativeness does. Probability weighting plays a key role.

Barberis next turns to the mutual fund flows idea. This notion suggests that we can make sense of stock price movements by thinking about mutual fund flows. He points to three facts about these flows:
1. Mutual fund returns in the current period positively predict mutual fund flows in the next period.
2. Mutual fund flows are positively serially correlated.
3. When mutual funds receive new flows, fund managers allocate a substantial fraction of them to existing positions.
Understanding these facts leads to a new interpretation of a number of phenomena: momentum, the smart money effect, and fund performance persistence.

Finally, Barberis turns to the third theory: realization utility. This theory suggests that investors feel a burst of pleasure (pain) when they sell an asset at a gain (loss). It is this theory he tests. He sets up an experimental market in which 28 Caltech students and employees trade stocks while their brain activity is monitored using an fMRI (functional Magnetic Resonance Imaging) scanner. For this experiment, there are three stocks. At each moment each stock is in either a "good" state or a "bad" state: in a good state, the stock goes up with a probability of 55%, and in a bad state it goes down with a probability of 55%. The subjects are not told the states of the three stocks but can try to infer them from the price updates they observe. Each subject starts with $350 and can, at each decision point in the experiment, buy a share (if not currently owned) or sell shares he does own. A stock's price changes only when a price update is shown on the screen the subject sees. It is important to note that from a subject's perspective, each stock price is positively autocorrelated.

Realization utility says that people experience a burst of pleasure, one that can be seen on a brain scan, when they sell an asset at a gain. A specific area of the brain, the ventral striatum (vStr), is widely believed to encode "hedonic value," subjective feelings of pleasure. Thus, neural activity in the vStr should spike up around the moment a subject issues a command to sell a stock at a gain, as compared to when a subject continues to hold a stock with a similar embedded gain. Barberis plots a time series of neural activity in the vStr when a subject issues a command to sell a stock at a gain and compares this activity to when the subject holds a stock with a gain. The results confirm his prediction: realization utility is the culprit in the "disposition effect."

Economics

30. The College Earnings Premium, The Higher Ed Scorecard, And Student "Sorting": Myths, Realities And Explanations (Fall 2015) Eleanor Dillon

Eleanor Dillon, Assistant Professor of Economics at Arizona State University, led a fascinating post-appetizer dinner talk about the market for college education in the United States. Education is an investment good: you invest in it during one period of your life, and it then pays dividends for the remainder of your working career. Education is a huge market, with approximately 17.7 million students enrolling as undergraduates in the 2012-13 academic year --- more than three times the number of homebuyers in 2013 (5.5 million).

The market is not only large in terms of the number of participants, but also increasing in the proportion participating: sixty-five percent of the high school class of 2012 enrolled in college the following Fall, compared to 49% in 1980. It is also a large market in economic terms. Undergraduate tuition and fees in the 2012-13 year were around $156 billion --- a bill paid jointly by students and the government. In addition, American households owed a total of $966 billion in student debt as of 2012. (Since 2010, student debt has surpassed credit card debt to become the second largest source of household debt, after mortgages; by comparison, in 2000, it was the smallest source of household debt after credit cards, auto loans, and "other.") The market is, thus, very important from both an annual flow and a stock perspective.

Dillon kept the Q crowd engaged addressing three primary questions:
1. How do returns to college vary across institutions?
2. What are the true individual costs of investing in college?
3. How do students decide how to invest in college?

Her analysis was based on the National Longitudinal Survey of Youth 1997 Cohort, which provides detailed information on college enrollment, earnings, high school and college grades, SAT/ACT test scores, and another standardized test (ASVAB) taken by all respondents. Some of her key findings, contained in the following table, include: 1) enrolling is not the same as graduating: less than 50% graduate from four-year colleges; 2) two-year colleges are a particularly risky business, with less than 10% graduating with a Bachelor's degree within five years; 3) there is an earnings premium from just starting college; and 4) there is a strong correlation between the quality of the undergraduate institution and both the graduation rate (only 25% graduate from the lowest quality 4-year colleges compared to 70% from the highest quality institutions) and earnings ten years after college ($23,500 for the lowest quartile versus $39,800 for the highest quartile).

The table above shows that the "outcomes" differ depending on the quality of the educational institutions. The table below shows that the "inputs" also differ substantially.

Dillon then discussed her estimates, from a multivariate analysis, of the gains from attending a higher quality institution, holding student ability and other characteristics constant. The following chart, for instance, shows that the quality of the institution has a strong positive effect on graduation rates --- regardless of student ability. In fact, the quality of the institution has a much larger impact on graduation rate than the ability of the student. Institution quality similarly has a strong positive impact on subsequent earnings --- across all levels of student ability. Professor Dillon thus concluded that the college rat race is real: getting into a better school does pay off.

Dillon next looked at the cost side of college. We would expect that colleges with more resource-intensive instruction (and higher quality) can charge more. While this is true for the sticker price of college, it is not always true for the net price paid by students after financial aid. As the following table shows, the net price of college increases much more slowly with quality than the posted sticker price.

Finally, Professor Dillon looked at the problem of "under-matching," where students attend a college substantially below the quality level they could potentially attend given their innate ability. Dillon's results show that only 6% of the "under-matched" students applied to a well-matched college and didn't get in; seventy-two percent of them never applied to a well-matched college. Students from the top 25% of households by wealth are 6 percentage points less likely to under-match than students from the bottom 25% of households. Overall, 25% of students under-match. The offspring of college graduates are 5 percentage points less likely to under-match than the offspring of high school graduates; they are also 5 percentage points more likely to over-match. Dillon says that a large part of the under-matching is due to inadequate information: the actual cost of attending college, after financial aid, is not known until after you apply and have been accepted. Hoxby and Turner (2013) randomly targeted some high-achieving, low-income students (top 10% of SAT scores) and sent them detailed packages of information about net costs. They found that students who received the packages were substantially more likely to apply to and enroll at highly selective colleges. The policy implication is that the Federal Government may wish to play a greater role as an information provider for high-school students!

By the end of dinner, it was quite apparent that Dillon's talk elicited lots of questions and plenty of Q buzz!

Bibliography:

Dillon, Eleanor W., "The College Earnings Premium, The Higher Ed Scorecard, and Student 'Sorting': Myths, Realities And Explanations," presentation slides available at http://www.q-group.org/wp-content/uploads/2015/05/Dillon_CollegeMarket101915.pdf.

Hoxby, Caroline, and Sarah Turner, 2013, "Expanding College Opportunities for High-Achieving, Low Income Students," Stanford Institute for Economic Policy Research, Discussion Paper No. 12-014.

31. Deflation Risk (Fall 2015) Francis Longstaff

Francis Longstaff presented a paper that estimated the market's assessment of deflation using inflation swaps and inflation options. The objective of the paper is to estimate the market's view of the probability of various levels of deflation or inflation, and to estimate the market's price of insuring against such levels. The overall results were that:
1. The market expects inflation of 2.5%/year over 10-30 year horizons.
2. Over the October 2009 to October 2012 sample period, the average market probability of deflation occurring over a two-year horizon was 11.4%, while it was 2.8% over a 30-year horizon.
3. Deflation risk is priced much more highly than inflation risk (as suggested by the difference between the risk-neutral probabilities and the objective probabilities in the two states of the world).

As an anecdote of a rare disaster (related to Vineer Bhansali's talk above), consider this: in 1933, FDR devalued the U.S. dollar by 40%; this resulted in bonds issued by the French and Swiss governments that yielded negative rates.

Deflation has played a central role during the worst economic meltdowns in U.S. history, including these especially salient periods:
1. The banking panic of 1837, which resulted in six years of deflation, in which prices fell by 30%.
2. The Long Depression of 1874-1896, in which 40% of corporate bonds were in default and the price level declined by 1.7%/year.
3. The Great Depression, in which prices fell by more than 40% from 1929-1933.

Recent years have brought growing fears of deflation in the financial press, with quotes such as:
1. "Nightmare scenario"
2. "Looming disaster"
3. "Growing threat"

The graph below shows that deflation is a recurring theme in the U.S., with some long periods (including the 50-year period prior to 2008) with no deflation (the graph plots year-over-year deflation; shaded areas are NBER recessions).

But relatively little is known about the probability of deflation, perhaps because expected inflation is difficult to measure. Ang, Bekaert, and Wei (2007), for instance, show that econometric models perform poorly in estimating expected inflation. Survey data does better, but only looks at first moments (expected inflation), not tail probabilities.

Longstaff's paper uses a new market-based approach to measure deflation risk; specifically, it uses market prices of inflation calls and puts to solve for the risk-neutral density of inflation, in combination with the term structure of inflation swaps to back out the risk premium for inflation. (A footnote to the methodology: the use of inflation call and put prices to solve for risk-neutral densities is common in the option pricing literature. The intuition behind using inflation swaps to estimate the inflation risk premium is that the inflation swap rate is higher than expected inflation when the inflation risk premium is positive; the difference between the swap rate and expected inflation is a measure of the risk premium. Longstaff's procedure relied heavily on the prices of deep out-of-the-money options, and during the Q&A session there was a lively debate about whether those price quotes reflected trade prices or matrix pricing.) Together, these two are used to come up with the market's probability density for inflation --- i.e., the probability that the market assigns to a particular level of deflation or inflation occurring in the future.

The results from Longstaff's estimation procedure using options and swaps are shown in his Figure 1. Note that the peak probability lies at positive inflation levels, but that there is significant probability mass in the left tail, which indicates that the market assesses negative inflation (deflation) to carry a small but non-trivial probability. This is the central result of Longstaff's paper.

Longstaff proceeds to show that the time-varying deflation risk, as estimated through his procedure, is correlated with real-world macroeconomic conditions that capture stress in the financial system. He concludes that deflation is a real risk, as assessed by the market; that it carries a risk premium larger than that of inflation risk; and that deflation risk --- using his procedure to extract this risk from market prices of derivatives --- is related to the risk of major systemic shocks based on macroeconomic indicators.

Bibliography:

Ang, Andrew, Geert Bekaert, and Min Wei, 2007, "Do Macro Variables, Asset Markets, or Surveys Forecast Inflation Better?" Journal of Monetary Economics 54, 1163-1212.

Fleckenstein, Matthias, Francis Longstaff, and Hanno Lustig, "Deflation Risk," available at http://www.haas.berkeley.edu/groups/finance/current.pdf.

32. Systematic Risk And The Macroeconomy: An Empirical Evaluation (Fall 2015) Bryan Kelly

Bryan Kelly, Associate Professor of Finance and Richard N. Rosett Faculty Fellow at the University of Chicago, returned to Q to talk about systematic risk (SR) measures that can be used to predict economic downturns. There were two particularly novel aspects of the research he presented:
1. He used predictive quantile regression, which focuses on the left tail of the distribution of a set of macroeconomic variables (rather than on central tendencies), to examine the relationship between systematic risk measures and negative macroeconomic shocks. This is important because extreme left-tail events are much more important to predict than less extreme macroeconomic events.
2. Building on research he presented at Q in Spring 2013, he showed how to use dimension reduction techniques (principal components quantile regression and partial quantile regression) to construct an aggregated SR measure with a higher signal content relative to the noise than a disparate set of individual systematic risk measures, each of which on its own has a high noise level.

Kelly's analysis focused on the downside quartiles of macroeconomic shocks to industrial production (IP) and the Chicago Fed National Activity Index. He and his co-authors evaluated 19 measures of systematic risk, for a sample period extending over several decades, in the United States, Europe, and the UK.

The systematic risk measures broadly fell into four categories:
a. Aggregated versions of institution-specific risk measures for the 20 largest financial institutions in each region (such as CoVaR or marginal expected shortfall),
b. Co-movement and contagion measures (such as the Kritzman absorption ratio),
c. Credit and liquidity measures (such as the TED or term structure spreads), and
d. Volatility and instability measures (such as the average equity volatility of large financial institutions or book leverage).

Kelly presented evidence showing that all SR measures spiked during the 2008/09 crisis --- not too surprising given their a posteriori origins --- but that they, on average, have low pairwise correlations (averaging on the order of 0.2). One interesting result was that, in long samples, many of the measures reached "crisis levels" multiple times, even when there was no crisis (false positives). On an individual basis, the only SR measure to have significant out-of-sample R-squared statistics across all starting periods and in all three geographic regions was the measure of financial sector volatility, as shown in the table below (fourth row from the bottom).

Kelly then proceeded to examine whether dimension reduction techniques could be used to improve the predictability of negative macroeconomic shocks by aggregating individual SR measures and, hence, reducing the noise relative to the signal. He focused on principal component quantile regressions, where one (PCQR1) and two (PCQR2) principal components were extracted, and also on partial quantile regression (PQR). The results (for IP shocks) are summarized below, and show the power of aggregating the signals. Kelly shows that increases in the aggregate index are associated with a large widening in the left tail of economic activity: a one standard deviation increase in the aggregated SR index shifts the 20th percentile of the IP growth distribution down by more than 50%, from around -1.4% per quarter to -2.2% per quarter.

Interestingly, the SR index predicts the lower tail of the future macroeconomic distribution, but not the center of the distribution. The individual measure that comes the closest to capturing the aggregated measure is bank volatility. There is evidence that the government responds to distress, but not enough to mute all of the impact on macroeconomic activity. Finally, Kelly's results suggest that financial market volatility is a useful predictor of the left tail of macroeconomic activity, whereas non-financial equity market volatility is not.

Bibliography:

Giglio, Stefano, Bryan Kelly, and Seth Pruitt, "Systematic Risk and the Macroeconomy: An Empirical Evaluation," available at http://www.q-group.org/wp-content/uploads/2015/05/Kelly-Systematic-Risk-and-the-Macroeconomy.pdf.

33. Why Global Demographics Matters For Macro-Finance, Asset Pricing, Portfolios and Pensions (Fall 2014) Amlan Roy

Dr. Amlan Roy of Credit Suisse Securities (Europe) presented a lively and iconoclastic talk covering some of the highlights of his fourteen years conducting research on demographics and its importance for understanding financial markets and the global economy. At times, he chastised members of the financial and academic communities, leading financial publications, and global policymakers for ignoring demographically driven micro factors that impinge on asset prices, employment, inflation, GDP growth, and policy responses. He also encouraged members of the Q Group to show greater commitment to using quantitative techniques to unearth and take account of these micro foundations.

During his wide-ranging talk, he expressed deep concern that:
1. Across the globe, countries have no ability to pay for their future pension, health care, and other age-related commitments,
2. Global thought leaders fail to sufficiently appreciate the importance that demographic differences should have on the appropriate policy responses, and
3. Global societies have failed to create sufficient job opportunities for young workers (15-24 years old) and young middle-age workers (25-34 years old).

Roy, for instance, cited an estimate by Kotlikoff (2012) of a $222 trillion present value gap between anticipated future U.S. spending and U.S. taxes. He also noted that Europe is basically bankrupt, in that age-related expenses (public pensions, health care, and long-term care) are about 20% of GDP, compared to about 10% in the US. He said that Europe would need a minimum 70% tax rate to close the funding gap, and that even Europe's richest country, Luxembourg, cannot pay for it. To help close the funding gap, Roy recommends abolishing mandatory retirement ages, increasing female labor force participation, instituting a carefully considered immigration policy to ensure a suitable workforce, and effectively utilizing off-shore outsourcing to help offset inadequate immigration.

Roy argued that the global increase in the length of retirement (largely the result of increased life expectancy) makes the burden much larger. The length also makes it that much more difficult to estimate the burden, because of uncertainty over health care costs, the level of taxes that are politically feasible, inflation, the returns that can be earned on investable assets, and the geographic differentials in living costs. He warned that there is a huge danger in lumping all people over 65 together to estimate the cost of retirement programs. In the U.S., for instance, the cost of supporting people over the age of 80 is over 100% higher per year than the cost of supporting people between 65 and 80. This is a particularly important problem because, between 1970 and 2013, the number of people in the world over 80 grew by 359%, compared to a growth rate of 124%, for instance, in the 15-64 age cohort.

In assessing the ability of younger workers to support older retirees, Roy argued that it is important to consider that some countries have expanding populations while others have decreasing populations. For example, between 2010 and 2100, Russia is expected to shrink from almost 150MM people to 100MM people. The U.S., in contrast, is expected to grow from 310MM to 460MM. China is expected to lose 300MM people over the same period of time. Due to all the demographic changes and differences in the world, Roy argued that the current offering of retirement programs --- which grew out of the work of Samuelson, Modigliani, Merton, and Bodie --- is totally inadequate, but Roy did not offer much in terms of new suggestions.

Roy criticized Bernanke, Krugman, and other policy leaders who argue that the U.S. and other countries are still suffering from the Great Financial Recession. He believes that the problems facing the world are more structural than cyclical. He presented a relation decomposing GDP growth into the sum of three components: (1) working-age population growth, (2) labor productivity growth, and (3) labor utilization growth. Roy compared the average growth of GDP in Germany and Japan during 2000-2013 with growth in the 90s, and with growth in the U.S. He presented numbers showing that declining working-age populations in those countries have been a significant drag on growth during the more recent period.

Roy looked at the breakdown of GDP and finds strong differences between, for instance, the U.S., Germany, and Japan: Germany is much more reliant on trade than the other two countries. He said, therefore, that the WSJ and FT are wrong when they argue that the growth prescription for Germany and Japan is to become much more like the US.

Roy said that he was a big bull on the US, and noted that it has the best universities in the world. He is a particular fan of the U.S.'s liberal arts system, in that he believes it encourages creativity and innovation. Roy noted how people all over the world are desirous of attending any of the top 60 universities in the U.S. He presented evidence showing the strong value-added of U.S. workers across agriculture, industry, and services.

One of the biggest problems facing the world, he said, is youth unemployment, declining labor force participation of young workers, and the concomitant social instability this may bring. He noted, for instance, that the labor force participation rates for ages 16-24 and 25-34 in the U.S. are going down while the participation rate for 65+ is going up. Roy forcefully argued that the older generations across the world have let the younger generation down by not creating sufficient employment opportunities.

References

1. Kotlikoff, Laurence, "A National Debt Of $14 Trillion? Try $211 Trillion," NPR, August 6, 2011, http://www.pbs.org/newshour/making-sense/kotlikoff-on-the-real-problem/.

34. "The Financial Crisis Six Years On: What Have We Learned" (Fall 2014) Jeremy Siegel

Jeremy Siegel, Russell E. Palmer Professor of Finance at the Wharton School, gave an animated and well-received (5.0 rated!) retrospective keynote speech at the Banquet about Ben Bernanke, the Federal Reserve, and the greatest financial crisis since the Great Depression.

Siegel noted that both he and Bernanke were ardent students of the history of monetary policy, and that both had become hooked on the subject while Ph.D. students at MIT --- in large part from reading Milton Friedman and Anna Schwartz's A Monetary History of the United States and the chapter therein on The Great Contraction: 1929-1933.

Professor Siegel said that you would have to go back to the Great Depression to find an event of a magnitude similar to the collapse of Lehman Brothers on September 15, 2008. There has been much debate on whether Lehman Brothers should have been bailed out. Christine Lagarde (head of the IMF) and Professor Allan Meltzer, among others, have argued "Yes," while others, such as Charles Plosser (President of the Philadelphia Federal Reserve Bank), argued "No." The opponents assert that such bailouts only encourage inappropriate risk taking. Bernanke has publicly stated that the Fed did not have the authority to bail out Lehman. Siegel termed this "hogwash": his reading of Section 13(3), added by the 1932 amendments to the Federal Reserve Act of 1913, is that the Fed clearly had the ability to lend to prevent the Lehman bankruptcy.


Siegel stated his view that the reason the Fed did not bail out Lehman was entirely political. There had been a huge backlash --- especially from the Republicans --- against "bailing out" Bear Stearns (or at least taking over some of its most toxic assets). Siegel mentioned President Bush's directive that there would be no new bailouts. Lehman was instructed to put its house in order and was told there would be no additional public funds committed to ring-fencing toxic assets. Unfortunately, Lehman --- due to the high leverage and size of its real estate holdings --- was already dead by March, and there was nothing that Dick Fuld, Lehman's CEO, could do at that point to alter the course of events. Bernanke and Paulson had hoped that the markets would be prepared for Lehman's collapse, but this was not the case --- in part due to the huge interconnectedness of the financial system.

Lehman declared bankruptcy very early Monday morning --- many hours before markets opened. The stock market was relatively firm for the first hour of trading, and then "all hell broke loose." Markets quickly became focused on whether Merrill Lynch or Citibank were in any better condition, whether money market funds would be able to "hold the buck," etc. Markets froze and Treasury bill rates dropped to zero. Professor Siegel recounted going to the Bloomberg terminal in his Wharton office, seeing the T-Bill rates at zero, and thinking this was exactly the same as in 1929, 1930, and 1932.

An hour later, Professor Siegel got a call from the Dean's office and was asked to sit on a panel to discuss the events with students. One of his colleagues on the panel, Dick Herring, asked him: "Won't AIG be the next domino to fall?" Jeremy responded: "The Fed will bail out AIG." Herring: "Why?" Siegel: "Today the world changed. Bernanke knows what just happened. He will pull out all stops to prevent the collapse of the financial system."

This led to a series of actions, Siegel recounted, to stabilize and save the financial system. Ultimately, this included: $182BB of loans for AIG, the ring-fencing of over $300BB of troubled loans at Citi and $90BB of toxic loans at Bank of America, backing up "the buck" at all money market funds to prevent trillions of dollars from redeeming from those funds, and backing all large corporate bank deposits over the FDIC limit.

What caused Bernanke's turnaround? The most important reason was the impact of Friedman and Schwartz's book. As noted by Bernanke at a 90th birthday celebration for Milton Friedman in 2002: "Regarding the Great Depression. You're right. We did it. We're very sorry. But thanks to you, we won't do it again."

Siegel compared the behavior of six key macroeconomic variables during the Great Depression and the recent Great Recession in a compelling slide in his talk. The most striking numbers involved the behavior of the monetary base and the money supply, M2. During the Great Depression, the Fed did an inadequate job increasing the monetary base, such that there was a 31% decline in M2. This led to a roughly 25% fall in prices and a 27% fall in real GDP. In contrast, during the Great Recession, the monetary base increased by 380%, there was no decline in M2 or prices, and real GDP declined by only 6%. Bernanke had clearly studied history, and was determined not to make the same mistakes.

Siegel offered some other quick thoughts about the Great Recession. Was TARP needed? Absolutely not --- the Fed could have acted on its own! It was pursued for political cover, so that Congress would also have to take some of the heat. Was the government unfair to AIG? No.

AIG shareholders should be very thankful. A share that was worth $1,230 at the peak is now worth $50; it could easily have been worthless. Should AIG's creditors have been forced to take haircuts? Absolutely not! Nothing less than 100% backup of creditors would have calmed the markets at the time; we only barely prevented a complete melt-down. Siegel noted that Section 13(3) of the Federal Reserve Act has been repealed and superseded by the Dodd-Frank law, so that in the event of future crises the Fed would be limited in its ability to respond. Obama bought GOP votes by restricting the Fed. Hopefully, this will not be an issue in the future.

Who's to blame for the crisis? Three primary culprits: 1) the CEOs of the large investment banks and other financial institutions who were over-levered to real estate, 2) the Fed, particularly Alan Greenspan, who could have prevented 50:1 leveraging of portfolios of AAA securities, and 3) the ratings agencies.

Siegel believes that if the Fed had not helped to save Bear Stearns, the dominoes would have fallen sooner and the Fed would have ended up in the same position. Conversely, if the Fed had saved Lehman, within days (if not hours) it would have had to save AIG, and then Citi, then Merrill Lynch, then Bank of America, etc. Politically this would have been impossible to do. It was politically necessary to show the public and the politicians the dark side of the crisis; otherwise, they would have voted the Fed out. Bernanke threaded the needle --- just so it would work.

Siegel concluded by noting that the final chapter on whether QE has been effective has not been written. As Paul Samuelson frequently said: "We have but one sample of history."

35. The Economics of Big Time College Sports (Spring 2014)
Roger Noll

Roger Noll, Professor of Economics Emeritus, Stanford University, gave an enthralling Monday evening Q Group dinner talk about The Economics of Big Time College Sports.

Noll began his talk by noting that college sports have seen tremendous growth in revenue over the last 30 years. The key driver behind this growth was a landmark anti-trust decision that the NCAA lost in 1984 regarding its control of television rights. Prior to this decision, live telecasts of NCAA sporting events could occur only with NCAA approval, and the NCAA limited broadcasts to two each Saturday. The reason, it argued, was that if more events were televised, no one would attend live events. The NCAA reasoned, falsely, that unrestricted televising would severely hurt member schools' revenue, since revenue from television rights was small relative to school revenues from tickets, trinkets and team apparel. Nothing could have ended up being farther from the truth.

Since the anti-trust decision in 1984, nominal college sports revenue has grown 8-9% per year (6-7% in real terms). The revenue boom has created great tension in colleges and universities across the country as an enormously profitable business was thrust upon them by the NCAA. One obvious question is: has this enormous growth in revenue made big time sports profitable for the schools in general, or has it had unintended consequences?

There are currently 120 universities that play Division 1 football and 300 that play Division 1 basketball, and there are no barriers to entry. Big-time college sport is a highly competitive industry, and it has continued to attract new entrants. As economists know, Noll says, a big increase in revenue does not necessarily lead to bigger

profits if competition is sufficiently high, and competition is very high.

Noll points out that this tends to be a winner-take-all industry in which huge rewards flow to the winners. As a result, competition for the inputs to winning --- players --- has grown fiercer and fiercer. The NCAA controls only one input, the players, by restricting the size and number of scholarships. Input costs for players are therefore clear and fixed, while revenues are not. In the race to win players, increasingly more has been spent on what attracts them: coaches, athletic facilities, and administrators.

Noll asks whether this has led big time college sports to be more profitable. Except for the top twenty schools, his answer is a clear no! Compounding the problem, he notes that universities are non-profit institutions that know they must stay non-profit, and decisions are decentralized. This has led most athletic departments to spend all of their revenue increases on coaches, stadiums and training facilities. Noll uses Duke University's basketball coach's salary as an example: Mike Krzyzewski earned roughly $300,000 in 2000; by 2012 it was over $5 million.

A regression of the athletic revenue of a school on a set of explanatory variables, including won-loss record and quality of the conference, explains almost all of the variability, and almost none of it depends on the school itself. Player recruitment is the key to success. The success of a team is almost entirely determined by how many 4- and 5-star rated high school athletes are recruited, and most of them go to the top 20 universities. This is a winner-take-all industry, Noll points out. New players often choose a coach more than the school. To produce a good team, a school must hire and keep an iconic coach. Since player costs are fixed, the marginal revenue product of the players goes to the coaches, and coaches who can attract the best players are able to earn higher salaries and more benefits. Noll reported on an interview with Nick Saban, the head football coach at Alabama, who discussed the two main values of new facilities: they help athletes, and they protect the student athlete from the distraction of campus life.

Big time college sports have seen a dramatic rise in effective demand. In response, colleges have spent more and more on their athletic programs. Since success drives revenue, the shadow cost of going to class has gotten higher and higher. There is thus an incentive to require players to put more and more time into practice --- a problem noted in the player testimony during Northwestern University's football players' petition to unionize. Northwestern is clearly a school with high admission standards and where students have what Noll calls real majors, but practice demands are high. One former quarterback, who is behind the push to unionize, was not allowed to select his preferred major, pre-med, if he wanted to play football. Clearly the balance between academics and athletics is much worse at places that take academics far less seriously.

Noll says that the current system is unsustainable and in the process of imploding. The NCAA must either stop being so commercial or give some of the revenue to the players. One simple solution would be to raise scholarships to cover the full cost of attending school. He concludes that conferences should be allowed to compete without collusion and to set their own rules. It is clear to him that if the NCAA exists at all in 3 years, it will not be the same. Fittingly, many participants left the talk in haste to watch the teams compete in the NCAA basketball final.
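The kind of revenue regression Noll describes can be illustrated with a small sketch. Everything below is synthetic and hypothetical --- the variable names, coefficients, and data are illustrative stand-ins, not Noll's actual data or specification --- but it shows how an R-squared near one arises when a school's athletic revenue is driven almost entirely by observables such as won-loss record and conference quality.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # roughly the number of Division 1 football schools

# Hypothetical explanatory variables: winning percentage and a 0-1
# conference-quality score (synthetic, for illustration only)
win_pct = rng.uniform(0.2, 0.9, n)
conf_quality = rng.uniform(0.0, 1.0, n)

# Revenue ($MM, made up) driven almost entirely by the two observables
revenue = 10 + 60 * win_pct + 40 * conf_quality + rng.normal(0, 2.0, n)

# OLS regression of revenue on won-loss record and conference quality
X = np.column_stack([np.ones(n), win_pct, conf_quality])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
fitted = X @ coef
r2 = 1 - ((revenue - fitted) ** 2).sum() / ((revenue - revenue.mean()) ** 2).sum()
print(f"R^2 = {r2:.3f}")  # near 1: the observables explain almost all variability
```

Because the school itself enters only through its win record and conference, the school identity adds essentially nothing once those observables are included --- which is the pattern Noll reports.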


36. Housing And The Macro Economy (Spring 2013)
Karl Case

Karl Case was the evening dinner speaker at the 2013 Spring Q-Group Meeting. His long history in real estate research and his collaboration with Robert Shiller in creating the well-known Case-Shiller Home Price Index give him a wealth of history to share. The evidence demonstrates both reasons for optimism and continuing concern.

The precipitous decline in housing prices that began in 2007 slowed and then ended towards the end of 2009. Since then, housing prices seem to have stabilized at levels similar to those of 2003 and 2004. The following chart tracks housing prices from 1987 through 2010. The turning point in housing price declines coincides with the improvement in Gross Domestic Product. The following chart shows the date of the home price peak in a number of markets and the loss from that peak.

The housing bubble and bust of the past decade was not unique in history. It has happened before, most recently in the 1980s, when the Greenspan Fed increased the inflow of money into the economy. That action fueled an era when house prices rose, and rose, and home owners expected them to keep rising. Home ownership became an investment with little perceived downside.

Case annually surveyed homeowners from 2003 to 2012. Through 2007 most homeowners expected high returns from home ownership. The following chart captures those expectations. It is interesting to note that expectations fell dramatically in 2008 but also turned positive again in 2009. Still, the survey shows that since 2008 expectations have moderated, and homeowners do not expect the same increases in value that they anticipated from 2003 through 2006.

Housing starts are on the rebound. With interest rates at record lows and prices still at 2003 levels, affordability has increased. Combined with a decline in foreclosures and distressed sales and very low vacancy levels, the housing market is well positioned for a recovery. However, these positive indications must be balanced against a number of factors, including tight credit markets, changes in the national and regional economies, possible movements in interest rates, changing demographics and potential tax law changes.


The data provided by Case shed light on whether we are past the debilitating decline in home prices and their impact on the overall economy, while at the same time highlighting the remaining concerns that might adversely affect the future of the housing market.

37. Are We Headed For A Demographic Winter? (Fall 2012)
Joel Kotkin

Kotkin is a demographer who is worried about declining population growth and what that means for such things as aging, cities, and workforces. Using data from his worldwide research, he provided plenty of food for thought on changing demographics.

Worldwide, birth rates are declining. Kotkin linked lower birth rates to changing views among women with respect to marriage and children. Citing survey results from 1976–2010, he said that an increasing number of women reported that they (1) never married/plan to marry and/or (2) never expected to have children. Japan is a stark example, where the share of women who never married has increased from less than 20% to over 60% since 1920. Japan is not unique: by 2010 marriages in Spain had dropped to a 20-year low and there were fewer children in the country than in the 18th century.

Declines in marriage rates are accompanied by new attitudes toward having children. In another survey, Kotkin reported that when asked about the factors that are important in a good marriage, two factors --- good housing and sharing chores --- increased most in importance, while the importance of children declined 37%. Kotkin said that lower birth rates are associated with expensive housing, ultra-competitive economies, a decline of traditional beliefs and, in advanced countries, growing pessimism about the future.

The impact on a society of declining birth rates is a declining workforce and an aging population, which results in a high ratio of elderly to working-age population. In China, the combination of a penchant for male heirs and the "One Child" policy has produced an entire generation of men with few prospects of finding a wife.

As for what might mitigate this outcome, Kotkin suggested policies that foster:

• Immigration. An increase in immigration helps cushion the problem of a declining work force and an aging population. Unfortunately, the flip side is population decline in other countries; he pointed to several countries where population decline due to emigration is already very serious.
• Lower housing prices relative to incomes.
• Shared responsibility for child-rearing by both parents. He likened this to policies already in place in Scandinavia.

He concluded that we need to ask ourselves four questions:
1. What is a city for?
2. What does sustainability (resilience) mean in the post-familial future?
3. How can high income societies reboot to accommodate families?
4. Is it worth it to let the current financial situation overwhelm the next generation?

Quoting Margaret Mead, who said, "No matter how many communes anybody invents, the family always creeps back," Kotkin expressed hope for the family.

38. Hurricane Outlook for the 21st Century (Spring 2012)
Hugh E. Willoughby

Hugh E. Willoughby, Research Professor, Department of Earth Sciences, Florida International University, was the Monday evening dinner speaker. His topic was "Hurricane Outlook for the 21st Century."

Willoughby defines hurricanes as a substantive concern due to their financial and human impact, and says all we have to do to understand this is remember the effects of Hurricane Katrina in August 2005.

Hurricanes are heat engines that Willoughby compares to steam engines on a ship. The difference in energy (heat content) between the warm temperatures at sea level and the much colder temperatures at the top of the hurricane creates a crucial temperature differential within the storm. In the Atlantic, he points out, there is enough energy to make a Category 5 hurricane at any time during August and September, but shear is a moderating influence. Shear describes a change in wind speed or direction --- in hurricanes primarily vertical, from the bottom to the top of the storm. It is the dominant limiting factor in the intensity of a hurricane.

Damage from hurricanes can be personally devastating (total economic damage is two times the insured damage), and this impacts many with homes and possessions on the East Coast of the U.S. He does note that there is no visible upward trend in U.S. damage when the data are corrected for such things as increased populations and possessions in U.S. coastal areas. However, tail forecasts from the distribution allow for $16 billion in damages per year, with the possibility of up to $1 trillion per year. Furthermore, he points out, on a warmer globe it will be much easier for devastating hurricanes to develop.

This leads Willoughby to ask: how much of the apparent increase in hurricanes is due to incomplete observations from the past, and how much is due to climate change? He says that over the last 1500 years the world has gone through an anthropogenic, human-induced change rather than a neutral cycle. Numerical models of this period have common problems, but they are the best guide we have. In looking at the data using many of the statistical tools with which those at Q are familiar, he is led to conclude that human-induced Anthropogenic Global Warming (AGW) is not "a matter of belief but rather a persuasive argument." Willoughby provides facts about the changes that have happened and are happening due to climate change (e.g., the world is now 40% warmer than during Thomas Jefferson's time).

Willoughby then turns to the numerical models used in hurricane analysis and prediction. He concludes that even as the number of hurricanes is likely to decline, the strongest hurricanes of the future will be stronger and will bring more and heavier rain than those of the past, due to their increased water vapor content. The decline in number comes from increasing shear and thermodynamics that will limit their size and intensity. According to Willoughby, there is yet another disturbing possibility: models predict that the most destructive Atlantic hurricanes will form over the Cape Verde Islands. This means that the center of hurricane action will move over the Central Atlantic, resulting in apparently bad news for New England.

He ends his discussion with a warning: these changes will challenge humanity, especially farmers, as the threat to agriculture increases. AGW "is not a conjecture but is a matter of evidence," and ignoring it may result in a "civil judgment but not jail time."

39. Investment for Effect: Design for Undersea Warfare in the 21st Century (Fall 2011)
Rear Admiral Michael Yuring

Rear Admiral Michael Yuring, Director of Strategic Planning and Communications for Submarine Warfare, was the Monday evening dinner speaker. His topic was "Investment for Effect: Design for Undersea Warfare in the 21st Century."

Admiral Yuring began with an overview of naval resources. Submarines, he said, are uniquely equipped to operate globally and unobserved. He provided a description of the resources currently in place and the Navy's planning for an uncertain future. Two things he covered were of particular interest: personnel and equipment.

He noted that the level of skill and physical condition of those willing to serve is not uniformly satisfactory and, to make the situation more challenging, increasing technology requires greater education. In addition to a higher level of education, greater technical skills are needed as underwater drones are developed and deployed. Given the current state of the economy, recruiting recently has been easy, with yearly quotas being met in the first 6 months of the year. As the economy recovers, he does not believe this will continue. The Admiral reported that the Navy is now taking a proactive role in public schools to encourage the development of talent.

The current economic conditions are favorable for recruiting new members of the Navy, but are not conducive to new investment and expenditures. This is an evolving situation, and all the services are working to contain costs and do careful planning, Yuring said. This cost cutting has resulted in increased cooperation between the services. The support the Navy receives ultimately depends on political decisions.

Admiral Yuring provided a review of the submarine service's resources and its plans for the future. He specifically noted that building submarines has a long lead time and that the new class of submarines, when built, would be in service until 2080. This led to a discussion of the long-term planning process the Navy uses to anticipate needs in a changing world.

Yuring reminded us that the US military is not just about waging war, but about avoiding war, and he discussed the Navy's cooperation with the State Department.

40. Demographic Changes, Financial Markets And The Economy (Fall 2011)
Robert D. Arnott

Robert D. Arnott, Chairman and Founder, Research Affiliates, LLC, presented "Demographic Changes, Financial Markets, and the Economy," a joint work with his colleague Denis Chaves.

Arnott starts by saying that we are about to be hit with a "Triple D Hurricane: Debt, Deficits and Demographics." All are important, but the first two are more immediate and more the subject of headlines. However, he says, it seems natural that the shifting composition of a nation's population ought to influence GDP growth and perhaps also capital markets returns. In this presentation he explores the role of changing demographics along the six linkages shown in the chart that follows.

As the baby boomers have aged, many people have studied past demographic data in an effort to extract indications of its future influence on many aspects of the economy. Arnott says that before his work with Chaves there were some interesting results, but none with sufficient statistical significance.

To lead into the presentation, he shows cohorts grouped by age over time in various countries. These charts show past and impending demographic changes and clearly demonstrate the shifting ages of a country's population. Japan, well known to have an aging population, is only one of many countries where the population's age distribution is changing. The current situation and the coming changes are clear, but what is important for Arnott is what these shifts do to

the country's economy, its economic growth and its financial markets.

He analyzes the effect of demographic changes on three measures of great importance for countries all over the world: real per capita PPP-adjusted GDP growth, stock market excess returns, and bond market excess returns. Two notions guide the research design:

1. The PPP adjustment to real GDP growth creates a fairer global comparison by focusing on the domestic purchasing power of the average citizen for the consumption basket that is most appropriate for each country.
2. Stock and bond returns are measured as excess returns relative to domestic cash returns rather than as simple annualized returns. This strips out inflation differences and allows for fair comparisons around the world.

On the basis of this work he can confirm what others have already demonstrated: a growing roster of young adults (age 15–49) is very good for GDP growth, growth in older workers is a little bad for GDP growth, and a growing roster of young children or senior citizens is very bad for GDP growth. The use of a polynomial curve-fitting technique allows them to establish this with statistical significance.

He spells out the demographic, GDP and GDP growth relationships:

1. GDP per capita growth.
   a. The highest GDP growth is associated with young adults 20-39.
   b. Late teens and the younger middle-age population help GDP growth a bit.
   c. The transition period for those in the 50-55 cohort goes from helping to hurting GDP growth.
   d. Young children hurt GDP growth a little.
   e. Senior citizens hurt GDP growth a lot.
2. Stocks perform better when:
   a. There are many in the 35-59 age group, but worse when there are many senior citizens or children.
   b. The 45-64 group is growing faster, but worse when the young adult or the 70+ age groups are growing the fastest.
3. Demography affects bonds with about a 5-year age difference, but with greater statistical significance.

Arnott reports that these steps lead to some simple and compelling demographic curves that conform nicely to intuition about how people behave at various stages of their lives. He presents pairs of graphs showing the relationship of demographic shares and demographic rates of change with GDP growth, returns from stocks, and returns from bonds. All are intriguing. The one that relates GDP growth to demographic share is shown below. The role of the various age cohorts is clear. What he calls the "sweet spot," near the top of the curve, shows the group that has the highest impact on GDP growth.

He then shows the relationship between stock returns, bond returns, and demographic composition. Arnott takes his findings and develops three global forecasts based on demographic composition: a forecast for 2011-2020 GDP growth, and forecasts for bond and stock returns over the same period. The forecasts for GDP growth and for stock returns are shown below.
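The polynomial curve-fitting Arnott describes can be sketched in a few lines. The cohort "effects" below are made-up illustrative numbers, not the Arnott-Chaves estimates; the point is only the mechanics: fit a low-order polynomial across age cohorts and read off the "sweet spot" at the top of the curve.

```python
import numpy as np

# Hypothetical cohort effects on GDP growth (illustrative numbers only,
# not the Arnott-Chaves estimates): children and seniors drag on growth,
# young adults help the most
ages = np.arange(5, 75, 5)  # midpoints of 5-year age cohorts
effect = np.array([-0.5, -0.2, 0.3, 0.8, 1.0, 0.9, 0.6,
                   0.3, 0.0, -0.3, -0.6, -0.9, -1.1, -1.2])

# A low-order polynomial smooths the noisy cohort coefficients into a curve
coefs = np.polyfit(ages, effect, deg=3)
curve = np.poly1d(coefs)

# The "sweet spot" is the age with the highest fitted impact on GDP growth
grid = np.linspace(ages.min(), ages.max(), 500)
sweet_spot = grid[np.argmax(curve(grid))]
print(f"peak contribution to GDP growth near age {sweet_spot:.0f}")
```

Fitting a smooth polynomial rather than fourteen separate cohort dummies is what lets a short sample deliver statistically significant shapes: the whole age profile is summarized by a handful of coefficients.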

These forecasts for the next 10 years are what Arnott calls sobering.

Arnott concludes that what he learned is unsurprising, except in its statistical significance and in the tacit implications for the years ahead. He says their research shows:

1. A strong link between demographic shares (or demographic changes) and:
   a. Per capita real GDP growth.
   b. Stock excess returns.
   c. Bond excess returns.
2. Polynomials are a powerful and intuitive way to understand these relationships.

Factor Models

41. Cross-Sectional Asset Pricing With Individual Stocks: Betas vs. Characteristics (Fall 2015)
Tarun Chordia

Tarun Chordia, Professor of Finance at Emory University, returned to a well-worn question in empirical asset pricing: are cross-sectional differences in stock returns better explained by loadings on risk factors (betas) or by observable characteristics of stocks (e.g., the book-to-market ratio from the accounting statements)? This need not be an "either-or" question, but one that attempts to estimate which "story" seems to explain a majority of stock returns (while the other may play a subordinate role).

Of course, this question has been asked numerous times. Fama and French (1992, 1993) and Davis, Fama, and French (2000) argue for covariance-based risk-factor models with multiple factors (as they see it, an improvement over the CAPM's single risk factor). Daniel and Titman (1997) argue for a model of returns based more strongly on observed characteristics of stocks, regardless of whether these characteristics are completely related to the risks laid out by factor models (such as the Fama-French three-factor model). One interpretation of the controversy is that Daniel and Titman argue that investors base their buy/sell decisions on (sometimes) irrational behavioral reactions to the observed characteristics of stocks, while Fama and French argue that the "representative investor" is rational and that prices are efficient, since (in their view) expected returns are completely explained by the factor models.

Back to Chordia's paper. Fama and French and Daniel and Titman use portfolios of stocks to make their point. Chordia argued for using individual stock returns as the unit of analysis, as portfolios throw away a lot of information on the individual stocks that may be useful in determining what drives their returns. However, Chordia acknowledged the drawback of using individual stock returns: they introduce a big errors-in-variables (EIV) problem. That is, the noisiness of individual stock returns means that the estimation of each stock's covariation with a factor (for example, the market beta of the stock) is imprecise ("noisy"). When we then run a linear regression of the future stock return on its estimated beta (based on noisy past return data), the properties of linear regression techniques result in a downward-biased estimate of the impact of beta on future returns.

Chordia uses a correction developed by his coauthor, Shanken (1992), for this EIV problem. He argued that, with the correction, we can use individual stock returns to gain more precise information about the "betas vs. characteristics" debate. The basic idea is that betas measured with error will contain a greater amount of sampling variability than the unobserved "true" betas. Hence, we should adjust the estimated betas using the estimated sampling variability of the betas in order to correct for the bias created by the measurement error; more estimated beta variability results in a bigger adjustment for beta.

Chordia then examines individual stock returns from 1963-2013, using this beta bias correction to run a "horse race" between beta (and other risk-factor loadings) and stock characteristics (such as the book-to-market ratio from accounting statements) in explaining the cross-section of stock returns. He uses the following second-stage OLS regression model:

Rit − Rft = γ0t + γ1t β̂i,t−1 + γ2t' zcs,i,t−1 + eit

where β̂ is the estimated beta from the first-stage regression (two-year rolling regressions, with the above-mentioned beta correction), and zcs is a vector of stock characteristics, including size, book-to-market, and six-month past-return momentum. In extended models, Chordia uses a vector of risk loadings, including loadings on the market factor (beta) and the Fama-French small-minus-big (SMB), high-minus-low book-to-market (HML), momentum (MOM), robust-minus-weak profitability (RMW), and conservative-minus-aggressive investment (CMA) factors.
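The two-stage procedure described above can be sketched in a few lines. This is a minimal illustration on simulated data: the shrinkage step is a simple precision-weighted stand-in for the Shanken (1992) EIV correction, not Chordia's exact estimator, and every name below is hypothetical.

```python
import numpy as np

# Illustrative "betas vs. characteristics" second-stage regression on simulated
# data. The shrinkage step is a precision-weighted stand-in for the Shanken
# (1992) EIV correction, not the paper's estimator; all names are hypothetical.
rng = np.random.default_rng(0)
T, N = 240, 100                                    # months, stocks
true_beta = rng.normal(1.0, 0.3, size=N)
mkt = rng.normal(0.005, 0.045, size=T)             # market excess return
ret = true_beta * mkt[:, None] + rng.normal(0.0, 0.10, size=(T, N))

# First stage: time-series OLS beta per stock, plus its sampling variance.
x = mkt - mkt.mean()
sxx = x @ x
beta_hat = (x @ ret) / sxx
alpha_hat = ret.mean(axis=0) - beta_hat * mkt.mean()
resid = ret - alpha_hat - np.outer(mkt, beta_hat)
var_beta = (resid ** 2).sum(axis=0) / (T - 2) / sxx  # sampling var of beta_hat

# Adjust the noisy betas: shrink toward the cross-sectional mean in proportion
# to how much of their dispersion is measurement error.
signal_var = beta_hat.var() - var_beta.mean()
w = signal_var / (signal_var + var_beta)
beta_adj = w * beta_hat + (1.0 - w) * beta_hat.mean()

# Second stage: monthly cross-sectional OLS of returns on the adjusted beta,
# averaged over months (Fama-MacBeth style).
X = np.column_stack([np.ones(N), beta_adj])
gammas = np.array([np.linalg.lstsq(X, ret[t], rcond=None)[0] for t in range(T)])
risk_premium = gammas.mean(axis=0)[1]              # average slope on beta
```

Because the shrinkage undoes the attenuation from measurement error, the average second-stage slope on the adjusted betas recovers (approximately) the market premium in this simulated setting; the uncorrected betas would bias it toward zero.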


The table below shows Chordia's results. The key takeaways are that (1) the EIV-corrected models produce estimates of risk loadings that are further away from zero than do the OLS (non-corrected) models. For example, using the 5-factor Fama-French risk model in conjunction with the characteristic-based versions of these risk factors, the market beta increases from 0.191 to 0.469, while the loading on the SMB factor increases (in absolute value) from -0.058 to -0.270. And (2) the percentage of cross-sectional return variability explained by risk factors is much lower (see "% Betas") than that explained by observable stock characteristics (see "% Chars"), as shown by "% Diff".

Chordia concludes that, after correcting the risk-factor loadings (e.g., beta) for measurement error, the risk factors have better explanatory power than before correcting, but not nearly the explanatory power of stock characteristics.

Bibliography

Chordia, Tarun, Amit Goyal, and Jay Shanken, 2015, "Cross-Sectional Asset Pricing with Individual Stocks: Betas versus Characteristics," available at http://www.q-group.org/wp-content/uploads/2015/05/Chordia-SSRN-id2549578-1.pdf.

Daniel, Kent, and Sheridan Titman, 1997, "Evidence on the Characteristics of Cross-Sectional Variation in Common Stock Returns," Journal of Finance 52, 1-33.

Davis, James, Eugene F. Fama, and Kenneth R. French, 2000, "Characteristics, Covariances, and Average Returns: 1929 to 1997," Journal of Finance 55, 389-406.

Fama, Eugene F., and Kenneth R. French, 1992, "The Cross-Section of Expected Stock Returns," Journal of Finance 47, 427-465.

Fama, Eugene F., and Kenneth R. French, 1993, "Common Risk Factors in the Returns on Stocks and Bonds," Journal of Financial Economics 33, 3-56.

Shanken, Jay, 1992, "On the Estimation of Beta-Pricing Models," Review of Financial Studies 5, 1-33.

42. Betting Against Beta Or Demand For Lottery? (Spring 2015)
Scott Murray

Scott Murray, Assistant Professor of Finance at the University of , talked about one of the most enduring and well-documented anomalies in financial economics: the empirical security market line (SML) is too flat. That is, high-beta stocks generate returns that are lower than predicted by the SML, while low-beta stocks generate returns that are higher. This anomaly was documented as far back as Black, Jensen, and Scholes (1972), and has puzzled researchers for more than four decades. Recently, Frazzini and Pedersen (2014) have presented a model and results arguing that leverage constraints cause lower-risk assets to outperform higher-risk assets (on a risk-adjusted basis) across many asset classes, including equities, sovereign and corporate debt, and futures. The idea is that leverage-constrained investors who wish to hold a portfolio that tracks the market (i.e., beta = 1) must hold stocks centered around that level, reducing the demand for low-beta stocks.

In his talk, Murray presented evidence of an alternative driving force for the betting against beta effect (defined as investors outperforming by holding low-beta stocks). Murray argued that, in U.S. equity markets, the demand for high beta can be explained by the demand for stocks with lottery-like payoffs (lottery demand), proxied by the average of the five highest daily returns of a stock in a given month (henceforth, MAX). In short, high-beta stocks also tend to have a high value of this lottery demand proxy.

Murray also showed that the betting against beta phenomenon only exists among stocks with a low proportion of institutional ownership, consistent with lottery demand being concentrated among individual investors. Further, he showed that the lottery demand phenomenon cannot be explained by a variety of firm characteristics (market capitalization, book-to-market, momentum, illiquidity, and idiosyncratic volatility), risk measures (co-skewness, total skewness, downside beta, and tail beta), and funding liquidity measures (TED spread sensitivity, sensitivity to TED spread volatility, T-bill rate sensitivity, and financial sector leverage sensitivity).

The key table (Table 5 in the paper) is shown below. It presents evidence that the betting against beta phenomenon is subsumed by the lottery demand effect (MAX10 consists of stocks with the highest average of their five highest daily returns during a month). That is, holding beta fixed, the lottery demand effect persists; whereas the betting against beta phenomenon disappears after holding the lottery demand effect fixed. The table presents the results of bivariate independent decile sorts based on both beta and the lottery demand proxy. The intersections of the decile groups are then used to form 100 portfolios. (As the sorts are independent, the number of stocks in each of the 100 portfolios can differ.) The last two rows of the table show that the beta effect disappears after controlling for the lottery demand proxy, both in terms of the raw return difference between the high and

low beta portfolios (High-Low) and the Fama-French-Carhart four-factor alpha for the difference between the high and low beta portfolios (FFC4); no t-statistic is larger than 1.61 in absolute value. Conversely, the right-most two columns of the table show that the negative relation between the lottery demand proxy and future stock returns persists after controlling for the effect of beta, with t-statistics for the FFC4 alphas ranging from -2.70 to -7.43.

Murray provided insight into the channel by which lottery demand generates the betting against beta phenomenon. He showed that in a typical month, market beta and the lottery demand proxy have a high cross-sectional correlation. In months when this cross-sectional correlation is high, the betting against beta phenomenon is strong, as can be seen in the first three sets of rows in the table below (High). The first row shows the average beta (the sorting variable), while the second and third show the following-month return and four-factor alpha, respectively. However, in months when the correlation is low, the betting against beta phenomenon disappears. This can be seen in the bottom three sets of rows in the table (Low).

Murray closed with one important point: a key difference between the betting against beta and lottery demand explanations of this empirical anomaly has to do with the role of institutional investors in generating the price pressure. Frazzini and Pedersen (2014) have argued that it is the leverage constraint of institutional holders that generates the phenomenon; hence, the effect should be strongest for those stocks with a high degree of institutional ownership. Murray argued that if the lottery demand explanation is correct, the effect should instead be strongest for those stocks with low institutional ownership. The following table shows the returns for bivariate dependent-sort portfolios, where stocks are first sorted into ten deciles based on degree of institutional ownership and then sorted within each decile by beta. As is apparent, the betting against beta phenomenon is actually strongest for those stocks with the lowest institutional ownership (column 1: INST 1) and weakest for those stocks with the highest institutional ownership (column 10: INST 10), additional evidence in favor of the lottery demand explanation.

Murray's conclusion is that the betting against beta strategy is a proxy for a much better strategy, which might be coined betting against lottery demand! The graph below summarizes the overall idea of Murray's paper: the empirical SML is flatter only during months when the lottery demand is strong.

Bibliography

Bali, Turan G., Stephen Brown, Scott Murray, and Yi Tang, 2014, "Betting against Beta or Demand for Lottery," available at http://www.q-group.org/wp-content/uploads/2014/11/BaliBrownMurrayTangPaper.pdf.

Black, Fischer, Michael Jensen, and Myron Scholes, 1972, "The Capital Asset Pricing Model: Some Empirical Tests," available at http://ssrn.com/abstract=908569.

43. The Recognition Of One Of Our Most Valued Members (Spring 2015)
Jack Treynor
Monday Evening Dinner

Rather than the usual evening keynote, Q members were treated to a special event: the recognition of one of our most valued members, Jack Treynor!

Jack has been a valued wise elder of Q for quite some time, and Q members can easily spot him by the cap he wears. Jack has been a member of the Q Research Committee since 1975, an astounding 40 years! Jack has also served on the Q-Group Board for decades.

Joanne Hill and Marc Reinganum opened the dinner presentation with a tribute to Jack. In their remarks, they discussed just a few of Jack's numerous contributions to the field of finance. For Q members, Jack's research has clearly and profoundly influenced our chosen profession, quantitative asset management! Indeed, Joanne said that Jack is one of the founders of quantitative investment management, and that he helped to align investment management practice with the best of academic research.

And, of course, Jack worked on a capital asset pricing model in the early 1960s, creating the earliest drafts of ideas of what would eventually become the CAPM. He also worked on several papers with Fischer Black, including the famous Treynor and Black (1973) paper on the information ratio that showed how to optimally construct a portfolio of risky assets, and that still wields incredible influence on the investment management world today.

Q members should note that, in his many years of participation, Jack presented papers at the Q-Group on 26 occasions, a "World Record"!

Jack Treynor then rose to talk to Q members. In a poignant and brief set of remarks, Jack recognized the influence of Operations Research, his field, on Modern Portfolio Theory. He noted that Bill Sharpe and Harry Markowitz started their careers in Operations Research. He also pointed out that Operations Research has brought the values and attitudes of the physical sciences to the social sciences. Jack closed with a clear mission for social scientists: in the physical sciences, exciting new discoveries are made each month. Jack challenged us to make exciting new discoveries in finance each month, as well!

Here are Jack's remarks:

Are Markowitz and Sharpe so productive because of their years in operations research?

Every month there are new discoveries in the physical sciences. Every month there will be a life-saving discovery in molecular biology that will change the practice of medicine. Next month, there will be another exciting discovery about earth-like planets. But, next month, there will be no exciting discoveries in the social sciences.

When I was young, I thought operations research was just about applying the techniques of the physical sciences to social problems. Now that I am deep into retirement, I think operations research is about applying the research philosophy of the physical sciences to social problems.

But why would research philosophy matter? The philosophy of the physical sciences changed at the beginning of the 17th

century. The discoveries that followed made the Industrial Revolution possible.

But what was the difference between the old research philosophy and the new? The old philosophy valued erudition; the new philosophy valued discovery. But why were their objectives mutually exclusive?

Because one is the Type 1 error and the other is the Type 2 error, and they are mutually exclusive: if you want to avoid Type 1 errors, commit Type 2 errors; if you want to avoid Type 2 errors, commit Type 1 errors. Type 1 errors reject as false ideas that are true; Type 2 errors accept as true ideas that are false. Because it valued erudition, the old philosophy minimized Type 1 errors; because it valued discovery, the new philosophy minimized Type 2 errors.

The physical sciences have never forgotten the lesson of the Industrial Revolution; the social sciences are too young to remember it. And that's why they are mutually exclusive research philosophies.

Aren't these two different cultures? Don't the cultures require their members to have the right research philosophy? If you have grown up in one culture, isn't it consequently hard to change?

If academics value erudition, are they less likely to challenge the old ideas? Are people whose work brings them face-to-face with real-world problems going to be aware of the need for change? Are they likely to be less impressed with erudition?

Are central bankers the kind of practitioners who would be stimulating for academic economists? Do they prefer problem solving to erudition? Wouldn't a problem-solving culture be useful for professional investors?

Bibliography

Treynor, Jack L., and Fischer Black, 1973, "How to Use Security Analysis to Improve Portfolio Selection," Journal of Business 46, 66-86.

44. "Risky Value" (Fall 2014)
Scott Richardson

Scott Richardson, Professor of Accounting at the London Business School, delivered a presentation that dug deeply into accounting notions of "value" for stocks. He presented strong evidence that earnings-to-price, E/P, and book-to-price, B/P, capture two different economic characteristics of value, and jointly help to explain cross-sectional variation in the returns for 30 countries. In contrast to some previous work, dividend-to-price is not associated with country-level returns after controlling for E/P and B/P.

Richardson's empirical work builds on the recent accounting work of Penman and Reggiani (2013) and Penman, Reggiani, Richardson, and Tuna (2013), who argue that B/P differences explain cross-sectional variation in expectations of earnings growth and also reflect differences in the riskiness of future earnings growth. Richardson explains that the accounting system is "conservative," in that it recognizes the cost of growing a firm immediately, but not the revenues until they are realized. The expensing of R&D costs, asymmetric asset impairment tests, and the expensing of advertising costs are three examples of this conservatism. A consequence of this conservative accounting is that earnings are depressed during periods when risky projects are undertaken, and E/P is not a sufficient

statistic for value during these times, since earnings don't reflect future earnings growth.

Richardson shows how this intuition is reflected through the clean surplus equation, as expected returns can be decomposed into:

Et[rt+1] = Et[Earningst+1]/Pt + Et[(Pt+1 − Bt+1) − (Pt − Bt)]/Pt

While the future premium of price minus book is unknown, this formulation suggests that B/P may be a valid candidate to predict future changes in price minus book, and suggests the following regression specification:

[(Pt+1 − Bt+1) − (Pt − Bt)]/Pt = a + b (B/P)t + c'Xt + et+1

where Xt reflects other variables that are expected to be related to changes in price to book.

Richardson studies 30 country-level stock indexes to test the hypothesis that B/P is important when risky investment is heavy, and less important otherwise. A corollary is that B/P is positively correlated with both expected earnings growth (the first moment) and the risk in earnings growth (the second moment). If so, then high B/P countries are, on average, facing temporarily depressed "E," and their recovery in near-term earnings growth is both expected and uncertain; they should also exhibit greater downside sensitivity to contemporaneous global earnings growth, consistent with B/P reflecting risky future earnings growth. This is exactly what Richardson finds for the period 1993-2011, as depicted in the following chart, where the blue lines represent the high B/P countries and the black lines represent the low B/P countries. The high B/P countries have higher future growth of earnings after Year 0, and the dispersion of that growth is higher. Richardson also finds strong evidence that countries with high levels of B/P have greater sensitivity of subsequent earnings growth to downside states of the world, as shown below.

In addition to the graphical information, Richardson runs cross-sectional Fama-MacBeth as well as panel regression analyses of country-level stock market returns on lagged B/P and E/P ratios. (He focuses on explaining 12-month-ahead excess local-currency returns.) Each country's E/P and B/P are demeaned (using that country's mean over the entire time period) to take care of different industry structures and/or accounting standards in different countries. First, he finds that D/P loses significance in the presence of B/P, and explains that this is because D/P measures paid-out value, not retained value. Next, he finds that both E/P and B/P load significantly (as x-variables) when explaining following-year return (as the y-variable), and that these two variables remain important even when he adds several control variables, such as size, momentum, etc.

An especially salient point made by Richardson to take home is this: why did the famous Fama and French (1992) "beta is dead" paper find a role only for B/P, and not for E/P? Because Fama and French included all stocks, and the thousands of small stocks, which have risky growth, dominated their regression results. In these types of stocks, expected growth is so high that B/P plays the only role in forecasting future returns. Richardson finds that if you look more carefully at large caps, E/P will matter as well as B/P. And B/P will matter more for any stocks with higher expected (but risky) growth. Maybe we can call the Richardson paper the "E/P is alive" paper!

References

Penman, Stephen H., and Francesco Reggiani, 2013, "Returns to Buying Earnings and Book Value: Accounting for Growth and Risk," Review of Accounting Studies 18, 1021-1049.

Penman, Stephen H., Francesco Reggiani, Scott A. Richardson, and A. Irem Tuna, 2013, "An Accounting-Based Characteristic Model for Asset Pricing," working paper, available at http://faculty.chicagobooth.edu/workshops/accounting/past/pdf/PRRT-Sept27%202012.pdf.

45. The Not So Well-Known Three-and-One-Half Factor Model (Spring 2014)
Steven Thorley

Asset pricing theory in financial economics has developed well beyond Sharpe's (1964) original CAPM. There were unexpected outcomes from using this model to study equity returns: realized alphas of low (high) beta portfolios are reduced (increased) when a beta factor is included. Today's equity researchers and others using equity pricing models conceptualize the 3-factor Fama-French (FF) model, shown below, as a tool for studying market return and the size and value characteristics of equity.

Thorley asks: what are the three factors in the FF model? He says that some analysts will answer the market, size, and value, while others answer beta, size, and value. So is it the return on the capitalization-weighted market portfolio, or the return from the CAPM beta? Theoretically, the return to the beta factor is the market return. Empirically, it is not, and the two are not even close. Commercial providers of equity risk models typically include a market and a beta factor, along with variations of the size and/or value factors, in their models.

To understand the problems addressed by commercial providers, Thorley creates a variation of the basic market model by splitting the middle term (betai RM) into a market constant and a relative beta factor, hence the name three-and-one-half factor model. For his analysis of the importance of this innovation, he uses the following model:

Ri = A + B1 betai + B2 sizei + B3 valuei + B4 momi + ei

where A, B1, B2, B3, and B4 are parameters (factor returns) estimated from cross-sectional regressions, N is the number of securities in each cross-section, and beta, size, value, and momentum (mom) are stock-specific factor sensitivities. These types of factor models are at the core of practitioner equity risk management models.

Under the assumption that the CAPM is true, B2 = B3 = B4 = 0 in the above equation, and the coefficient B1 in each cross-sectional regression should equal the return on the market portfolio. He standardizes the cross-sectional variation in betas to have unit variance and allows the pure beta factor return to be different from RM, to create the following model:

Ri = A + RMF + B1 (betai − 1) + B2 sizei + B3 valuei + B4 momi + ei

Thorley empirically examines whether this is true. His primary results are summarized in the following chart, where the returns are, from the top right down, the market, momentum, value, small, and beta at the bottom. The prefix z identifies that they are standardized.

It is readily apparent that the CAPM null hypothesis B2 = B3 = B4 = 0 seems violated. Further, the risk premium associated with the (betai − 1) factor, which should equal RMF, is virtually non-existent. Over the 1963-2012 period, the average annual estimated factor returns are RMF, 5.64%; beta, -0.79%; size, 0.86%; value, 1.88%; and momentum, 4.99%. It is interesting that the estimated (betai − 1) factor has significant factor volatility (7% annual standard deviation) and hence is important for risk management. It just does not appear to be a significantly priced factor.

Thorley reviews how the cross-sectional regression parameters can be thought of as factor-mimicking portfolios, where the intercept corresponds to a portfolio whose weights sum to one and all other parameters correspond to zero-net-investment portfolios. In practice, these factor-mimicking portfolios have a high correlation to the FF SMB, HML, and UMD (momentum) portfolios.

Finally, Thorley uses two different specifications of the second equation for the purposes of portfolio performance measurement, with data from a set of ETFs. One specification forces the coefficient B1 = 0 and allows RMF to have a slope coefficient different from 1.0. In the second specification, B1 is freely estimated and the slope coefficient on RMF is set equal to 1.0, as specified in the second formula above. He finds estimated annual alphas that can differ by as much as 3% depending on the specification.

Thorley concludes that:

1. The market's influence on individual security returns can be split into a capitalization-weighted market portfolio and a pure beta factor.
2. The beta anomaly in the U.S. equity market over the last half century is larger than either the size or value anomalies, second only to momentum, and is more evident in larger capitalization stocks.
3. The beta anomaly presents challenges for portfolio performance measurement and opportunities in portfolio construction. In fact, returns-based performance

measurement for the MSCI Minimum Volatility Index indicates that selecting low-beta stocks provides most of the value added in low-volatility strategies.
4. In most cases a separate beta factor should be incorporated in portfolio performance calculations to avoid misperceptions about the source and magnitude of the alpha when the market beta is materially different from one.

It is important that much of the variation in the performance of economic sector portfolios is explained by their differential exposures to a separate beta factor. Understanding this distinction between the market and beta factors in the not-so-well-known three-and-one-half factor model should help avoid misperceptions about the sources of portfolio performance.

46. A Model of Momentum (Spring 2014)
Lu Zhang

The momentum premium, which is observable within and across most asset classes, is perhaps the most difficult of the asset pricing anomalies to explain as a rational outcome of financial market equilibrium. Hence, the bulk of the momentum literature has adopted behavioral models to explain the momentum premium, using conservatism, self-attributive overconfidence, and slow information diffusion as justifications.

Zhang uses a neoclassical investment framework, pioneered by James Tobin and utilized by John Cochrane in a similar finance context, to examine whether the cross-section of expected returns of momentum-sorted portfolios is consistent with a production-based asset pricing model. Previous research has clearly shown that this cross-section of expected returns is not consistent with the Fama-French three- or four-factor models.

The intuition underlying Zhang's model is as follows. Suppose rational investors revise upward their expectations about earnings and output in the future. Ceteris paribus, this will lead to an increase in stock prices. However, if stock prices rise, firms should immediately raise their investment in capital stock, since the price has risen relative to the cost of capital, resulting in a contemporaneous increase in investment growth and investment return at the same time as increased stock returns. As Cochrane notes, if there are lags in the investment process, then investment will not actually rise for a period of time, but orders or investment plans should rise immediately. Further, if the production function recognizes investment lags (which Zhang's does not), the investment returns should still move at the same time as the stock return.

The relationship between stock returns and investment returns depends on a number of restrictive assumptions, as well as careful timing alignment of the data for empirical purposes. Zhang, in his paper, discusses in detail the efforts undertaken to construct and align the data. Among the key assumptions:

1. All firms have identical constant-returns-to-scale production functions, whereby each firm's operating profits are a constant proportion κ of the firm's operating revenue; the proportionality constant κ is the same across all firms. Hence, all firms are assumed to have identical operating margins regardless of their industry or operating efficiency, and there are no winners and losers across firms in terms of operating margins.
2. Firms pay an adjustment cost to alter their capital stock through investment or disinvestment. The adjustment costs are assumed to be of a quadratic functional form and do not depend on the firm's industry or the nature of the firm's capital. The after-tax marginal cost of an investment is given by 1 + (1 − τ) a (Iit/Kit), where a is a parameter to be

estimated in the empirical work, Iit is the investment by firm i at date t, Kit is the capital stock of firm i at date t, and τ is the corporate tax rate.
3. Markets for capital are sufficiently competitive and friction-free that firms can optimally adjust their capital stock each period, so that firms' expected investment returns (Et[r^I t+1]) are equilibrated to the point that Et[Mt+1 r^I t+1] = 1, where Mt+1 is the stochastic discount factor. In principle, this allows firms to be short capital as well as long capital.
4. Similarly, capital markets are assumed sufficiently competitive and friction-free that expected stock returns (Et[r^S t+1]) are equilibrated to the point that Et[Mt+1 r^S t+1] = 1.

The empirical work in the paper relies on the equilibrium relations given in the last two points above, and the resultant moment restriction:

Et[r^S t+1 − r^Iw t+1] = 0    (1)

where r^Iw t+1 is the firm's levered investment return.

Zhang uses the moment restriction in (1) above to estimate the two unknown parameters a and κ on 10 price momentum and 10 earnings momentum portfolios. The key results of the paper are in the panels below.

The results in Panel A show the well-known cross-sectional variability in average returns for the 10 momentum portfolios, ranging from 10.74% per year for the low momentum portfolio to 19.22% for the high momentum portfolio. The corresponding Fama-French alphas are -0.21% and 7.29%. The spread between the Fama-French alphas of the top and bottom price-decile momentum portfolios is large and significant. The next-to-bottom row of Panel A shows similar alphas for Zhang's production-based asset pricing model. Here the alphas are dramatically reduced, with the spread between the high-momentum and low-momentum portfolios only -0.90% and not statistically significant.

The production investment model, which is fit based on the returns of the 10 momentum-decile portfolios, appears to succeed in capturing average momentum profits. In addition, the model captures the reversal of momentum in long horizons, long-run risks in momentum, and the interaction of momentum with several firm characteristics. However, the model fails to reproduce the pro-cyclicality of momentum as well as its negative interaction with book-to-market.

47. Horizon Pricing (Spring 2013)
Robert Korajczyk

Korajczyk starts his presentation with two questions: should the assessment of risk be related to horizon, and why? To provide background to the research, he reviews what others have found and focuses on three potential effects that horizon can have on pricing. First, prices of certain stocks have a delayed reaction to news about systematic factors. Second, risk and risk premia estimates exhibit autocorrelation and hence seem to depend on horizon. Third, there is evidence that the investment horizon varies across investors. Given what we know, Korajczyk says that if the investment horizon affects the measurement of systematic risk, it is sensible that it might impact risk premia. In this presentation, he examines whether short-horizon risk factors exist that explain some of the short-term cross-sectional differences in expected returns but have no impact when measured over long horizons; and whether there are long-horizon factors whose risk measured over short horizons does not explain short-term cross-sectional differences in expected returns whereas it does over long horizons.

Korajczyk uses NYSE/AMEX/NASDAQ data from 1963-2010 to unravel this horizon question. Many of the key results of the paper are summarized in the table presented below. In this table, he shows the average monthly return spread over different horizons between decile portfolios created on the basis of five well-known systematically-priced risk factors. His factors include market (MKT); small minus big (SMB); high minus low (HML); up minus down (UMD); and liquidity (LIQ). He compares the mean return differences for horizons ranging from one month to sixty months. From this analysis he concludes that LIQ is a short-run factor, market is a medium-run factor, and HML is a long-run factor. Further, he finds that neither SMB nor UMD are priced risk factors, and he notes evidence of time-varying risk premia based on variance ratio tests for the traded factors. From this and other analysis, Korajczyk finds the following:

1. Factor risk measured at short horizons might be more relevant for more transitory factors, since that may match the relevant horizon for short-horizon investors. Conversely, risk measured at longer horizons might be more relevant for other factors, since that may match the relevant horizon for long-horizon investors.
2. He finds short-horizon measures of risk to be important for the pricing of liquidity. This, he says, is consistent with its more transitory nature. The premium for liquidity is significantly larger later in the sample period, consistent with a shift toward higher-frequency trading later in the sample.
3. Long-horizon measures of risk seem to be important for the pricing of the market and value/growth risk factors.
4. The value-versus-growth factor behaves like a characteristic when risk is measured at a monthly horizon, and has both risk-factor and characteristic-like behavior at longer (2- to 3-year) horizons when historical OLS betas are used.
5. HML seems to be priced as a risk factor when estimating a model that conditions betas on the various characteristics.
6. The size and momentum variables exhibit characteristic-like behavior at all horizons.

Korajczyk concludes by saying that these results highlight the importance of considering investment horizon in determining whether a cross-sectional return spread is alpha, due to a firm characteristic, or a premium for systematic risk. Some factors, he says, that are risky from the perspective of short-term investors may not be from the perspective of long-term investors, and vice versa. In particular, liquidity risk may be of particular concern for short-horizon investors, while HML risk is important for long-horizon investors.

48. Dynamic Conditional Beta (Spring 2013)
Robert Engle

Engle starts by reminding those who teach that while the least squares model is the workhorse in empirical economics and finance, students often question the assumptions of exogeneity, homoscedasticity,

73 and serial correlation. In practice, the 1. Estimates and tests of dynamic unstable regression coefficients are the most conditional betas, his new approach. troubling for them. No surprise, he says, since 2. Tests of the importance of dynamic rarely is there a credible economic rationale conditional betas in the cross-section of for the assumption that the slope coefficients security returns. are time invariant. 3. The use of dynamic conditional betas in computing the amount of capital that So why do we continue to use this model financial firms need to hold to survive a when econometricians have developed a crisis. variety of ways to deal with time series regression models with time varying Engle then examines how dynamic parameters? Many adaptations are available, conditional betas can be used to assess global and in his presentation, Engle discusses the systemic risk. He asks, how much will the three most common procedures used to deal total equity value of a financial firm fall when with time varying betas: rolling window the global equity market falls? The answer, estimates; interactions with trends; splines or he says, is in the beta and shows plots of his other economic variables; state-space models DCB for a number of banks from 2008 to where the parameters are treated as an 2012. Those at the most risk are those with estimated state variable. He says that each of particularly high or unstable betas. these approaches makes very specific His DCB is critical in his computation of assumptions about the path of the unknown one input, LRMES, in his SRISK model. coefficients: the first specifies how fast the LRMES is the expected equity loss if there is parameters can evolve; the second specifies a another financial crisis. 
SRISK, is a measure family of deterministic paths for the of how much additional capital a bank must coefficients; the third requires a stochastic raise to survive a financial crisis and is process that is generally unmotivated and defined by: rarely based on any economic analysis. None of these approaches has become widely SRISK = k ∗ LIABILITIES − 1 − k accepted and Engle proposes a new approach ∗ EQUITY ∗ (1 − LRMES) to estimating time series regressions that allows for richer time variation in the The k represents the capital cushion the regression coefficients. He calls this model financial institution needs to function Dynamic Conditional Beta (DCB). normally. Using the Dynamic Conditional Beta, he Engle demonstrates the use of SRISK begins with the assumption that a set of model using Bank of America (BOA) as his observable variables can be described by a example. multivariate distribution. The objective is to 1. BOA has a market cap of $102 billion. Its characterize the conditional distribution given accounting liabilities are $1.9 trillion for a the observable variables. Engle notes that leverage ratio of 20. econometricians have developed a wide range 2. If there were another financial crisis with of approaches to estimate these matrices a fall of 40% in broad US equities over including dynamic conditional correlations. six months, he estimates shares in BOA Using the model, Engle reports on three fall by 66%. streams of research: 3. The implied Dynamic Conditional Beta today for BOA is 2.5, the beta can change as a result of mean reversion in

74

volatilities and correlations or a rise with 49. Momentum Crashes (Fall 2012) downside returns. Kent Daniel 4. The SRISK for BOA is $122 billion and Momentum strategies, a bet that past suggests that BOA is undercapitalized returns have predictive value, have produced today. This would be high returns and strongly positive alphas. Past more severe under the stress of an equity research has shown that momentum is decline. pronounced and pervasive among many asset classes around the world, though it is Engle provides an analysis for other considered by many to be an anomaly. The financial institutions, some of which, he says problem with this strategy is that returns to could face bankruptcy if there were another momentum strategies are highly skewed and finance crisis. The chart which follows shows sometimes result in what Daniel called aggregate SRISK for a large collection of “momentum crashes”—strong and persistent global financial institutions for the 5 years strings of negative returns. In this research, following the 2008 financial crisis. While not he asks why “momentum crashes” exist and now at an all-time high, the total SRISK has whether they are forecastable. not yet retreated to pre-2008 levels. He next shows that global systemic risk by country: Japan has the highest level of systemic risk followed by the US.

Daniel said that momentum crashes tend to occur in times of market stress, specifically when the market has fallen and ex-ante measures of volatility are high. These patterns are suggestive of the possibility that the changing beta of the momentum portfolio may be partly responsible for momentum crashes. They also occur when Surprising results, but he says systemic contemporaneous market returns are high. risk is more insightful and logical when Daniel asked, are they predictable? measured relative to GDP: Cyprus is the most vulnerable, followed by Switzerland. Over To analyze this question, Daniel creates the past 5 years, Engle shows, that the 10 momentum portfolios using monthly and systemic risk for the US has declined and daily CRSP data from 1947 through 2007. He systemic risk in Asia has risen dramatically. finds a strong momentum premium over this 50-year period: the winning decile had an Engle concludes by saying that the big excess annual return averaging 15.4% and the European banks have had rising betas since losing decile averaged an annual loss of 1.3%. 2011 as the sovereign debt crisis has grown in The average excess market return was 7.5%. strength. This, he says, is one piece of Daniel said that from 2009–2010 momentum evidence that shows the serious nature of the outperformed as they did during the period of current level of systemic risk. Engle argues the Great Depression. that the SRISK is superior to stress testing and better shows the level of systemic risk in the financial system. 75
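Daniel’s winner/loser decile construction can be sketched in a few lines. This is a schematic equal-weighted sort on synthetic data, not his CRSP methodology (the formation window, breakpoints, and weighting are all simplified):

```python
import numpy as np

def wml_spread(formation_returns, holding_returns, n_deciles=10):
    """Winner-minus-loser spread: rank assets on formation-period returns,
    go long the top decile and short the bottom decile, equal-weighted."""
    order = np.argsort(formation_returns)
    k = len(order) // n_deciles
    losers, winners = order[:k], order[-k:]
    return holding_returns[winners].mean() - holding_returns[losers].mean()

rng = np.random.default_rng(0)
formation = rng.normal(0.0, 0.30, 500)                    # past-year returns
holding = 0.05 * formation + rng.normal(0.0, 0.10, 500)   # mild continuation
spread = wml_spread(formation, holding)
```

On data with genuine return continuation, the spread is positive on average, echoing the momentum premium in the decile figures above.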

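Engle’s SRISK formula from the preceding summary can also be checked numerically against his Bank of America figures. A minimal sketch, assuming the 8% prudential capital ratio k used in Engle’s published SRISK work (k is not stated in the summary):

```python
def srisk(k, liabilities, equity, lrmes):
    """Crisis capital shortfall: required capital k against liabilities,
    less the equity expected to survive the crisis."""
    return k * liabilities - (1 - k) * equity * (1 - lrmes)

# Bank of America figures from the talk, in $billions: $1.9T liabilities,
# $102B market cap, 66% expected crisis equity loss (LRMES).
boa = srisk(k=0.08, liabilities=1900, equity=102, lrmes=0.66)
```

With these inputs the sketch gives roughly $120 billion, in line with the $122 billion Engle reports; the small gap presumably reflects rounding of the inputs.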
Separating winners and losers as of March 2009, he finds that many of the firms in the loser portfolio had fallen by 90% or more. In that group were, among others, Bank of America, Ford, and GM. The loser companies were often extremely levered, at risk of bankruptcy, and, Daniel said, their common stock was effectively an out-of-the-money option on the firm value. In contrast, the winner portfolio was composed of defensive or counter-cyclical firms. This suggests to him that there are potentially large differences in the market betas of the winner and loser portfolios. The following chart shows the cumulative gains from the market, past winners and losers, and the risk-free asset. The lines appear in the order they are listed in the legend.

In addition, he said that there is a strong up- and down-market beta (β) differential between the two momentum portfolios. The following chart shows the beta of the Loser Portfolio. It is the more volatile line.

Looking at various kinds of markets, Daniel reported that in bear market states, and in particular when market volatility is high, the down-market betas of the past losers are low, but the up-market betas are very large. This optionality does not appear to be reflected in the prices of the past losers. Consequently, the expected returns of the past losers are very high, and the momentum effect is reversed.

Daniel said that there are clear payoffs associated with the winner-minus-loser (WML) portfolio. The portfolio has short-option-like characteristics that appear to be more costly when market variance is higher. This is consistent with behavioral motivations for the premium and suggests that crashes are forecastable. Daniel investigates whether other variables associated with perceived risk, such as realized volatility, affect the payoff to momentum strategies.

Looking at a variety of time periods, bear and bull markets, and different asset classes in the United States and other countries, Daniel concluded that:
1. While past winners have generally outperformed, there are relatively long periods over which momentum strategies experience severe losses.
2. “Momentum crashes” do not occur over a single day but are spread out over the span of several days or months.
3. Because of the magnitude of the losses from these crashes, momentum strategies can experience long periods of underperformance.
4. The most severe momentum underperformance appears to occur following market downturns and when the market itself is performing well.
5. The crashes occur after severe market downturns, during a month when the market rose, often dramatically, and were clustered in July and August.
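The up- and down-market beta differential Daniel documents can be estimated with a piecewise (“dual-beta”) regression. This is a generic sketch on simulated returns, not Daniel’s specification:

```python
import numpy as np

def up_down_betas(port, mkt):
    """Regress portfolio returns on market returns split at zero,
    giving separate up-market and down-market betas."""
    X = np.column_stack([
        np.ones_like(mkt),
        np.where(mkt > 0, mkt, 0.0),   # up-market leg
        np.where(mkt <= 0, mkt, 0.0),  # down-market leg
    ])
    coef, *_ = np.linalg.lstsq(X, port, rcond=None)
    return coef[1], coef[2]

rng = np.random.default_rng(1)
mkt = rng.normal(0.005, 0.05, 600)
# A loser-portfolio caricature: option-like, far more sensitive in rallies.
port = np.where(mkt > 0, 1.8 * mkt, 0.6 * mkt) + rng.normal(0.0, 0.01, 600)
beta_up, beta_down = up_down_betas(port, mkt)
```

On the simulated loser-style portfolio, beta_up comes out near 1.8 and beta_down near 0.6, mirroring the asymmetry Daniel reports for past losers.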

Daniel concluded by saying that the evidence is loosely consistent with several behavioral findings. In extreme situations when individuals are fearful, they appear to focus on losses, and probabilities are largely ignored. Whether this behavioral phenomenon is fully consistent with the empirical results documented here, he says, is a subject for further research.

50. Speculative Betas (Fall 2012)
Harrison Hong

When and why does the Capital Asset Pricing Model (CAPM) “fail?” Frequently, it turns out. We even have a name for these “failures”—anomalies. Hong pointed to one failure—the beta-neutral strategy of long low-beta/short high-beta stocks. This strategy should not work, he said, and yet it does. In practice it provides a positive Sharpe ratio. Regarding these anomalies, Hong said he has developed both a theory and evidence for why and when these “failures” occur.

Over the past 20 years, financial economists have developed a large and impressive body of findings that reject the CAPM beta as sufficiently explaining the variation in asset returns. These results have led to a search for multiple-factor models. Hong said that the debate over how to interpret these asset pricing patterns ignores the idiosyncratic behavior of the theoretical upward-sloping Security Market Line (SML). There is suggestive evidence, he said, that the risk and return relationship is not strong, and is the reverse of what you would expect.

Hong’s thesis is that investor disagreement about the common factor of cash flows leads to the observed behavior. He believes that high-beta assets are more sensitive to disagreement than are low-beta assets, and thus experience a greater divergence of opinion about their cash flows. Costly short-selling then results in high-beta assets coming up against binding short-sale constraints and being overpriced.

Hong created a model that incorporates the speculative motive for trading into traditional asset pricing models. It yields strikingly different results from the risk-sharing or liquidity motive model: high-beta assets are more speculative since they are more sensitive to disagreement about common cash flows. When disagreement is low, the SML is upward sloping. As opinions diverge, the slope can be initially positive and then negative for high-beta assets. Thus, the empirical results should show:
1. An upward-sloping SML when disagreement/uncertainty is low.
2. An inverted U-shaped SML when disagreement is high.
3. More stock-level disagreement on high-β stocks, especially when there is high aggregate disagreement.
4. More shorting of high-β stocks, especially when there is high aggregate disagreement.
5. More turnover in high-β stocks, particularly when there is high aggregate disagreement.

Using security analyst disagreements, Hong’s empirical tests confirm the predictions. He verifies the notion of a speculative mechanism by finding that high-beta stocks have much higher individual disagreement about cash flows, higher shorting, and higher share turnover than do low-beta stocks. He also finds that these gaps grow with aggregate disagreement. The following chart shows the outcomes for high and low levels of disagreement.

These empirical tests show that high-beta assets can be more speculative and thus yield lower expected returns than low-beta assets.

This finding, he said, cuts directly at the heart of what we teach finance students in terms of how to price risk and may have important implications for capital budgeting decisions.

51. Revisiting Exotic Beta (Fall 2011)
Mark Carhart

The first presentation of the Q-Group Fall 2011 meetings was made by Mark Carhart, Kepos Capital LP, who presented “Exotic Beta Revisited,” coauthored with his Kepos colleagues: Ui-Wing Cheah, Giorgio De Santis, Joe DeLuca, Bob Litterman and Attilio Meucci.

This paper addresses the very focus of the Seminar, “New Challenges Ahead: Beta, Diversification, and Innovation,” by providing an analysis of and tutorial on creating portfolios using “exotic betas” rather than relying on the traditional asset-class allocation approach.

Carhart starts with the traditional CAPM: a one-period, one-factor model with expected returns that are linearly proportional to the market beta and with unpredictable residual returns. He provides the following to show his view of the returns’ continuum for the various models.

The so-called “exotic betas”:
1. Are not the market beta. They are risk exposures that have positive risk premia uncorrelated with global market risk.
2. Reflect observable variations in risk premia over time that can be exploited to improve both the risk and return of a traditional portfolio.
3. Result from risk premia related to long-term risk factors.
4. Are:
   a. A source of uncorrelated returns not created by short-term inefficiencies.
   b. Transparent, relatively well known and intuitive, and include such things as time horizon, capacity, and liquidity. Perhaps most importantly, they reflect a judgment about whether there is a real or perceived risk that justifies a premium.
   c. More transparent and liquid, with more capacity, than other sources of active alpha.

What differentiates exotic betas from the market beta, Carhart says, is simply the source of the premium. A good example of an exotic beta is value investing, although most investors think of value investing as a style or a result of manager skill, not a risk factor. He argues that we can and should separate the exposure to a risk factor (created by a passive tilt toward a value equity benchmark) from the skill needed to create the alpha that would outperform such a benchmark. He goes on to describe how this can be done.

He also lists other examples of risk factors, including certain asset classes, well-known factors such as HML, hedge fund strategies, and “insurance-like” tools including credit default swaps, catastrophe bonds and out-of-the-money put options.

Carhart offered the following explanations for the value of “exotic betas”:
1. Behavioral: most investors do not behave in the perfectly rational manner. While theory suggests that investors are concerned with the risk of their portfolio, they do, in fact, worry about the volatility of individual assets. In addition, they worry about assets that have declined in value.
2. The importance of liquidity.
3. The impact of leverage.

As important as it is to know the sources of value that come from “exotic betas,” it is important to recognize when the embedded premium ceases to exist.

He then discusses “exotic betas,” noting that the following exist:
1. Equity Value: long exposure to countries with the highest fundamental value relative to price hedged by short exposure to the opposite.
2. Bond Yields: GDP-weighted combination of sovereign bonds.
3. Bond Yields Value: long the sovereign bonds in countries with the highest yields and short those in countries with the lowest yields.
4. Term Structure: long the sovereign term structure in countries with the steepest curves hedged by short the opposite.
5. Commodities: volume-weighted combination of commodity futures subject to constraints of 50% in the energy sector, tilted towards commodities with the greatest backwardation in prices.
6. Real Assets: market-cap-weighted combination of REIT securities.
7. Currency Value: long exposure to high interest rate and weak purchasing power parity currencies hedged by short exposure to low interest rate and strong PPP currencies.
8. Volatility: short equity index volatility in most market conditions except long equity index volatility when there are signs of significant financial stresses ahead.
9. Credit: equal-weight combination of the 6 major credit sectors: US investment grade, US high yield, European investment grade, European high yield, mortgages and asset-backed securities.


10. Catastrophe Bonds: market-cap-weighted combination of bonds linked to catastrophic reinsurance risk.

Carhart compares the “exotic beta” approach to a portfolio concept known as “risk parity,” an approach that normalizes the various asset classes to constant risk and then equal-weights them in a portfolio, and to hedge fund replication. He concludes that “exotic beta” is more complex and requires more judgment to refine, but seems to generate better return and risk characteristics. He provides as evidence the chart showing the risk/return tradeoff for the various portfolios. Regardless of the approach, these liquid and transparent strategies can considerably enhance the performance and risk of a traditional portfolio.

In conclusion, Carhart says that one of the many investment themes that survived the financial crisis is that exposure to “exotic beta” is very attractive in a portfolio context. However, as he explores in this paper, an important caveat is that such exposure should not be expected to be completely static over time. In addition to being dynamic, Carhart believes it is optimal for investors to access these risk premia in relatively lower-cost, more transparent, and customized implementations. Based on his impression of currently available products, he concludes the industry is already headed this way.

52. Betting Against Beta (Fall 2011)
Andrea Frazzini

Andrea Frazzini, Assistant Professor of Finance, The University of Chicago Graduate School of Business, AQR Capital Management, LLC and NBER, presented “Betting Against Beta,” which he coauthored with Lasse H. Pedersen, John A. Paulson Professor of Finance and Alternative Investments, New York University Stern School of Business and CEPR and NBER.

Frazzini starts with the basic Capital Asset Pricing Model premise: all agents invest in the portfolio with the highest expected excess return per unit of risk and lever or de-lever it to suit their risk preferences. While true in theory, in 1972 Black, Jensen and Scholes showed that the empirical risk-return slope is lower than the predicted 45-degree angle. This is an anomaly, Frazzini says, that has never been resolved; though, he points out, borrowing constraints may be the culprit.

Frazzini means to deal with the notion that investors prohibited from leverage bid up high-beta assets while those limited only by margin requirements can trade to profit from this, at least to the limit of their margin constraints. Thus leverage-constrained investors will resort to over-weighting risky, high-beta securities to increase their return. This behavior of tilting towards high-beta assets suggests that risky high-beta assets require lower risk-adjusted returns than low-beta assets – those that require leverage.

Frazzini creates a model to test whether high-beta assets require lower risk-adjusted returns than low-beta assets. They test the model’s predictions within U.S. equities, across 20 global equity markets, for Treasury bonds, corporate bonds, and futures. The data for the test are collected from several sources and result in the inclusion of 50,826 U.S. and global stocks covering 20 countries. The authors consider alphas with respect to the market and US factor returns based on size (SMB), book-to-market (HML), momentum (UMD), and liquidity risk. In addition to data on bonds, data for forwards and futures are used and detailed. They use the TED spread as a proxy for time periods where credit constraints are more likely to be binding.

Frazzini reports that they find that for each asset class a betting-against-beta (BAB) factor, long a leveraged portfolio of low-beta assets and short a portfolio of high-beta assets, produces significant risk-adjusted returns. In addition, he says, when funding constraints tighten, betas are compressed towards 1.0, and the return to the BAB factor is low.

Frazzini provides considerable detail about their analysis. The chart below shows a sample of their findings in graphic form. Here we see the plot of the annualized Sharpe Ratio of BAB factors for all the asset classes. All but one are positive.

To show the impact of credit constraints on BAB, he plots the 3-year rolling average of the US equity BAB and the inverse of the TED spread, shown in the chart on the next page. Frazzini uses this as further evidence of the impact of credit tightening on BAB.

On the basis of their work, Frazzini concludes that the model predicts that:
1. Agents, such as hedge and private equity funds, with access to leverage buy low-beta securities and lever them up.
2. Agents restricted by leverage constraints, such as mutual funds, prefer high-beta assets.

Fixed Income, Bonds, and Interest Rates

53. The Equilibrium Real Funds Rate: Past, Present And Future (Fall 2015)
Ken West

Professor Ken West, John D. MacArthur and Ragnar Frisch Professor of Economics at the University of Wisconsin (and of Newey-West fame), led Q through a theoretical and empirical review of the relationship between real interest rates and economic growth. The relationship has been front and center in policy discussions, where Summers (2014) has expressed concern about “secular stagnation” manifested by low equilibrium rates due to persistent low demand and a rising propensity to save; and McCulley (2003) and Clarida (2014) have stated the case for a “new neutral” where the equilibrium real funds rate is close to zero.

West finds that the secular stagnationists are overly pessimistic and that the long-run equilibrium real funds rate remains
significantly positive --- although the authors themselves disagree on its exact level (with 2 of the authors believing the long-run rate is about 0.5 percent and the other 2 authors confident it will be closer to 2 percent). West said that their strongest finding is: Anyone

81 who speaks with confidence on the rate averages that do not take this into equilibrium long run rate is likely wrong! account are poor proxies for the equilibrium rate. West’s analysis was based on a long and 6. Persistent headwinds, due to the length of comprehensive data set. They analyzed some economic recoveries, can create a annual data doing back to 1800 for 17 persistently low real rate. When countries, and quarterly post-WWII data for 4 headwinds abate, however, rates have 21 countries. Their analysis of the data and tended to rise back to their historic theoretical contributions in the literature leads average or higher. to these key lessons: 7. The global saving glut probably distorted 1. There is no strong relationship between overall U.S. financial conditions (such as the real rate and economic growth in the the slope of the term structure), but did U.S., as depicted in the following chart: not have a clear impact on the equilibrium real funds rate. 8. During the period of alleged secular stagnation (1982-present), the unemployment rate was below its postwar average, and inflation pressures emerged only at the end of each cycle. The argument that zero rates and/or asset bubbles are essential to achieving full employment seems weak. a. During the first part of the period of alleged secular stagnation (1982- 2007), the real rate averaged 3%, a 2. The equilibrium rate is sensitive to the percentage point higher than its post- rate at which the future is discounted and war average. perceptions about the riskiness of the real b. The economy reached full cash flow from government debt. employment in the 1980’s, despite 3. Judging the equilibrium rate using long- high real interest rates and run historical averages can be misleading, retrenchment in the real estate and as the average pre-WWI is substantially S&L industry in the second half of the different than post-WWII. recovery. 4. The equilibrium real rate is sensitive to c. 
The NASDAQ bubble came after the the degree of financial constraint imposed economy reached full employment by regulations (e.g. deposit rate ceilings) and therefore was not a precondition and by the degree to which policy relies for achieving full employment. on quantity (e.g. reserve requirements) 9. Taking into account the offsetting rather than price (interest rates) to manage headwind from the rising trade deficit and aggregate demand. higher oil prices as well as the tailwinds 5. Trends up or down in inflation can from the housing bubble, it is not clear influence the real interest rate for whether the economy suffered secular prolonged periods (i.e., decades). Real stagnation in the 2000s. West then addressed the question: What is 4 For annual data, they assume that expected the equilibrium long-term real Fed Funds rate inflation can be modeled by an AR(1) process, while, for quarterly data, by an AR(4) process. 82 today? He addressed this question using a 54. The Return On Your Money Or The reduced form vector error correction model: Return Of Your Money: Investing When Bond Yields Are Negative (Fall where the first equation describes the change 2015) in the U.S. real interest rate, and the second Vineer Bhansali equation describes the change in the median Vineer Bhansali, Managing Director and world real rate. Three primary conclusions Portfolio Manager at PIMCO, gave a very arise from this system: i) the feedback from provocative talk when he argued that negative the U.S. to the rest of the world is small, ii) nominal interest rates can be explained with the feedback from the rest of the world to the fairly simple concepts. U.S. is large (e.g., if U.S. rates are 1% point below world rates, we expect U.S. 
rates to Of course, one of the first “truisms” that move 40 basis points toward the world rate in we learn in finance is that nominal interest the next year), and iii) given the high rates can never go negative, otherwise, we standard deviation of residuals (260 basis would simply “stuff money under our points), substantial divergence between U.S. mattresses,” or, more practically, place cash and world rates is possible. in a safety deposit box at a bank. While this may work for small investors who have time The key conclusion, which West claimed to implement such a strategy, it is impractical is not new to them but seems to be new to the for large-scale investors who demand literature, is that there is a high degree of immediacy. For instance, an investment fund uncertainty about the equilibrium long-run may need a large exposure to the Swiss Franc, real Fed Funds rate. In light of this, the as a tilt toward safety, to be implemented optimal policy response is to be much slower quickly through a non-derivatives position. to adjust target rates. Such a fund may find it worthwhile to pay a slightly negative interest rate for the Bibliography convenience of being able to buy a large Clarida, Richard, 2014, “Navigating the New position in Swiss Sovereign Bonds. Neutral,” Economic Outlook, PIMCO, November. As an even simpler example in everyday life, Bhansali points out that we often pay a Hamilton, James D., Ethan S. Harris, Jan fee to maintain a checking account, and such Hatzius, and Kenneth D. West, “The accounts usually pay zero interest (unless Equilibrium Real Funds Rate: Past, Present large balances are maintained)—clearly, the and Future,” available at alternative would be to pay for everything in http://econweb.ucsd.edu/~jhamilto/USMPF_2 cash, but this would prove both burdensome 015.pdf. and unsafe, and most of us would never McCulley, Paul, 2003, “Needed: Central consider doing this. 
Such a fee- based Bankers with Far Away Eyes,” Global checking account is equivalent to accepting a Central Bank Focus, PIMCO. negative interest rate for convenience and safety.5 Summers, Lawrence, 2014, “U.S. Economic Prospects: Secular Stagnation, Hysteresis, and the Zero Lower Bound,” Business 5 Economics, Vol. 49, No. 2. Yet another example cited by Bhansali is attributed to Greg Mankiw of Harvard. Suppose that the U.S. Treasury announced today that, next year, a number between 0 and 9 will be chosen at random, and all currency with serial numbers ending in that chosen number will become 83

Bhansali explained that this negative interest rate can also be thought of as an "insurance payment," since negative interest rates exist in the market only for securities perceived to be the safest. Investors accept a negative interest rate on very safe securities in return for the "global catastrophe insurance" that such securities provide. The fact that Swiss bonds exhibit their most negative rates during periods of market stress reinforces this interpretation.

Bhansali outlined four potential reasons that might explain negative nominal interest rates in bonds considered to be globally safe:

1. Heavy open-market purchases of "safe" sovereigns by central banks to "manage" interest rates (i.e., demand far outstripping natural supply)
2. Indexation, where some investors (or funds) insist on holding a benchmark set of bonds, no matter what the yield
3. Aging populations who are (perhaps) overinvesting in safe fixed-income assets
4. Insurance against rare disasters

Bhansali points out that, even though Swiss sovereign zero-coupon bonds have negative yields out to 12 years, this is driven by negative yields out to 3-4 years, with positive implied forward interest rates beyond that maturity. This appears to be true in all markets—it is shorter-term safe bonds that are commanding a price premium, such that yields are negative. This appears to reinforce the notion that negative yields represent either a fee for the convenience and safety of holding bonds, rather than cash, or a short-term global catastrophe premium.

As a side note, Federal Reserve Chair Janet Yellen recently made a statement about the possibility of negative discount rates in the near future.6

Bhansali proceeds to illustrate how negative yields (or discount rates) can play out in many different sectors of the market. For example, if investors hold equities for their insurance against rare disasters, then, from the Gordon Growth model, the discount rate can be negative only if the growth rate of dividends is expected to be negative (since dividend yields can only be non-negative). Further, carry trades can be characterized as writing a straddle on the relation between the carry security and the funding security—basically, writing insurance that guarantees that the relation between the two securities does not diverge too much.

Bhansali concludes by saying that investors choose two different types of securities for their portfolios. First, they buy securities that provide a return ON investment (normal securities). Second, they buy securities that provide a return OF investment (insurance securities). The yields on these two different types of securities can be positive and negative, respectively, in equilibrium.

Bibliography

Bhansali, Vineer, "The return on your money or the return of your money: investing in a world of negative rates," available at http://www.q-group.org/wp-content/uploads/2015/05/Viewpoint-Bhansali-Negative-Rates-May-2015-6-1.pdf.

Footnote (continued from the previous page): ...worthless. Faced with this expected return of -10% on holding dollars, the market would be happy to invest in U.S. Treasuries yielding -5%. (Relatedly, during the Vietnam War, "scrip" was issued to GIs to be used for purchases, and the scrip had to be exchanged for a new issue of scrip by a deadline or it became worthless.)

Footnote 6: http://www.marketwatch.com/story/heres-the-big-problem-with-low-interest-rates-2015-11-13.
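The Gordon Growth point can be sketched numerically. This is an illustrative snippet of ours, not from the paper; the function name and the input numbers are made up:

```python
def implied_discount_rate(dividend_yield: float, dividend_growth: float) -> float:
    """Gordon Growth model: P = D / (r - g), so r = D/P + g.

    Because the dividend yield D/P cannot be negative, the implied
    discount rate r can be negative only when expected dividend
    growth g is negative (and exceeds D/P in magnitude).
    """
    return dividend_yield + dividend_growth

# With positive expected growth, the implied discount rate is positive.
assert implied_discount_rate(0.02, 0.03) > 0

# Only when growth falls below -D/P does the discount rate turn negative.
assert implied_discount_rate(0.02, -0.05) < 0
```

In other words, a negative equity discount rate is only consistent with investors expecting dividends to shrink, which is the rare-disaster-insurance reading Bhansali gives it.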

55. On The Fundamental Relation Between Equity Returns And Interest Rates (Spring 2015)
Robert Whitelaw

Robert Whitelaw, Edward C. Johnson III Professor of Entrepreneurial Finance at NYU, and CIO of Index IQ, presented a paper on the relation of stock returns to movements in interest rates. This topic is of great interest, as the risk of interest rate increases brings uncertainty to investors about how their stock portfolios will be affected.

The surprising take-away from Whitelaw's lecture was that some stocks can actually serve as an effective hedge against interest rate increases, and these stocks can be identified by their high level of debt in the capital structure (high leverage)!

To understand this, there are two effects of a change in interest rates on a company's balance sheet to consider:

1. Change in interest rates → change in asset value → change in the value of debt and change in the value of equity (the total pie gets bigger or smaller)
2. Change in interest rates → change in the value of debt relative to the change in the value of equity (the changes are different; thus, the share of the pie taken by debt vs. equity changes)

Since the impact on the value of debt is negative from an increase in interest rates, the change in the value of equity can be positive or negative, depending on the change in asset value as well as the leverage of the firm. All else equal, higher leverage leads to a more negative duration of equity—meaning that equity can benefit from interest rate increases! This can also explain (partially) the time-varying correlation between stocks and bonds (sometimes they move together, and other times they don't).

Whitelaw next takes this theory to the data. He maps out each firm's capital structure, and then constructs returns for each security (no small feat for the bonds!). The return on the asset side of the balance sheet is computed as the value-weighted average of the returns on the liability side (equity plus debt). Whitelaw computes duration estimates (presented below) on each tranche of the capital structure for firms with different levels of leverage.

Note two effects here. First, within each leverage "bucket" (each row), duration increases with the priority of the claim in the capital structure. Thus, senior debt is most adversely impacted by a rise in interest rates, followed by junior debt, and then equity. Second, firms appear to choose higher leverage when the duration of their assets is higher (i.e., when the value of the assets is more affected by changes in rates), as can be seen in the far right column above. This means that firms attempt to hedge some of the interest rate risk that could potentially be borne by their equity holders—adding more debt, which takes the brunt of the increased interest rate risk of the higher duration assets!

Whitelaw further shows that, once you control for the change in asset duration with increases in leverage, the duration of equity becomes more negative as leverage increases.

Whitelaw concludes with another insight. Bond portfolio returns are often modeled as a function of "term" and "default" factors, where the default factor is proxied by the difference in returns between a long-term corporate bond and a long-term government bond, such that the maturities are effectively matched. Whitelaw argues that this construct of the default factor is misspecified. That is, it does not properly isolate default risk, because corporate bond duration can be very short for low-priority bonds (or bonds of highly leveraged firms); hence, the bonds underlying the default factor, while maturity matched, are not duration matched.
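The leverage effect described above can be sketched with the standard balance-sheet duration identity, V_A·D_A = V_D·D_D + V_E·D_E. This is a stylized illustration of ours with invented numbers, not Whitelaw's estimates:

```python
def equity_duration(asset_dur: float, debt_dur: float, leverage: float) -> float:
    """Back out equity duration from the balance-sheet identity
    V_A * D_A = V_D * D_D + V_E * D_E, where leverage = V_D / V_A."""
    equity_share = 1.0 - leverage
    return (asset_dur - leverage * debt_dur) / equity_share

# Low leverage: equity duration stays positive, so equity loses when rates rise.
assert equity_duration(asset_dur=2.0, debt_dur=4.0, leverage=0.2) > 0

# High leverage: debt absorbs more than the total rate sensitivity, so equity
# duration turns negative and equity benefits from an interest rate increase.
assert equity_duration(asset_dur=2.0, debt_dur=4.0, leverage=0.6) < 0
```

The sign flip is exactly the "high-leverage stocks hedge rate increases" point: once leverage·D_debt exceeds D_assets, the residual equity claim has negative duration.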

The takeaway for Q members: use a default factor that is corporate bond returns minus short-term government bonds to better isolate default risk!

Bibliography

Choi, Jaewon, Matthew Richardson, and Robert Whitelaw, 2014, On the Fundamental Relation between Equity Returns and Interest Rates, available at http://www.q-group.org/wp-content/uploads/2016/01/Whitelaw-paper.pdf.

56. Central Bank Policy Impacts On The Distribution Of Future Interest Rates (Fall 2013)
Douglas Breeden

Breeden explores information about future short-term interest rates implicit in the state prices and risk-neutral densities obtained from the interest rate cap and floor market. He uses research from his paper with Litzenberger (1978), which showed that state prices can be estimated from portfolios of option butterfly spreads. This research is particularly timely given the non-normal nature of the low short-term interest rates that have existed since the Great Recession.

Breeden reviews what others have found and cites several current practical uses of option-implied risk-neutral probabilities. In particular, he notes the risk-neutral density and tail risk estimates for many assets published by the Kocherlakota group at the Federal Reserve Bank of Minneapolis.

Breeden then says that the approach he takes is different from others in two ways:

1. It is wholly arbitrage based, non-parametric, and uses market prices from well-traded markets.
2. It focuses on long-dated, 3-5-year, market forecasts.

Breeden then shows how a portfolio of butterfly spreads plus left and right tail spreads produces a pay-off that is identical to a zero-coupon bond. He uses this to estimate risk-neutral probabilities. The difference between risk-neutral and true probabilities, Breeden says, is that the risk-neutral probability is higher than the true probability for states with high marginal utility.

Given the strong positive correlation between consumption and stock market wealth, Breeden says that risk-neutral probabilities of down-side S&P 500 moves will exceed their true probabilities. Understanding the relationship between risk-neutral and true probabilities for nominal interest rates is more complicated, as correlations between interest rates and the stock market are time-varying. He provides the following chart to demonstrate this.

Breeden says that the ratio of risk-neutral to true probabilities should also be time-varying. For example, during the recessions of 1974-1975 and 1981-1982, higher nominal rates were brought about by supply shocks and associated with lower growth. Since the global financial crisis, higher nominal rates have been associated with higher stock prices and higher growth. Low interest rates (e.g., 1% or 2%) are associated with predictions of low real growth and high conditional marginal utilities of consumption. Risk-neutral probabilities for low-rate states should therefore exceed true probabilities, and the reverse should be true for states with high interest rates.

The market's risk-neutral expectations about future levels of nominal rates have changed over the last decade, Breeden says. In the following figure, he shows the relatively symmetric risk-neutral expectations of 3-month Libor rates (5 years forward) that existed prior to the global financial crisis. Post the crisis, the risk-neutral distribution of future rates has become much more positively skewed, as shown in the following chart.

Breeden then goes on to examine how the risk-neutral densities of future 3-month Libor rates have been affected by a series of policy actions by the U.S. Federal Reserve (e.g., the May 22, 2013 tapering announcement that came as a result of the economy's strength).

To conclude, Breeden says that policy actions taken by the Federal Reserve and the European Central Bank can and do affect the entire probability distribution for future interest rates, not just the means and variances. The tools he uses are wholly arbitrage based and nonparametric, and do not rely upon assumptions about statistical estimates for the volatility process, nor on the pricing function for actual market option prices. For policy makers, Breeden says that these tools should be helpful in measuring the impact of potential policy.

57. Duration Targeting: A New Look At Bond Portfolios (Spring 2013)
Martin Leibowitz, Stanley Kogelman

Almost all bond portfolio managers control interest rate risk through a Duration Targeting (DT) process. As bonds age and interest rates shift, managers typically maintain, either explicitly or implicitly, a specific duration target through periodic portfolio rebalancing. Not only are most bond portfolios characterized by DT, but most fixed income indices also have very narrow duration ranges and hence are effectively targeted. Despite the wide use of either explicit or implicit DT in building and managing portfolios, the full implications of this strategy generally have been underappreciated.

In previous work, Leibowitz [with Bova] found that as the portfolio's holding period approaches its "effective maturity," the volatility of cumulative returns dramatically decreases, such that the starting yield exerts a "gravitational pull" on returns. This robust convergence pattern is the result of the balance between price changes and accruals, and it exists whether yields are rising or falling, although it is important to note that the accrual component of returns is highly path dependent. For a bond portfolio with a five-year duration, the effective maturity is about nine years. For investors who hold a five-year duration-targeted portfolio for nine years (the effective maturity), the annualized return of that portfolio converges to the starting yield.

To illustrate how yield changes impact returns, Leibowitz provides a simple example. You first buy a five-year pure discount bond at the beginning of the first year. At the end of the year you sell it at the fair market price

using the proceeds to buy a new five-year bond. Over the 9-year period, yields on 5-year zero-rate bonds rise 50 bp per year. The final yield is 7%. The graph below plots the scenario over its life.

As annual rates increase, the bond price declines. At the same time, the annual rate of return increases, and the capital losses on the bonds are offset by the higher rates on the new bonds and reinvestment. The excess accruals build over time and offset the series of price losses, such that the bond ends up earning its initially promised 3% yield, but over the longer effective maturity.

Using an analytic expression for the excess return over any path, Leibowitz refers to another paper where he showed that the effective maturity for a trend line-based DT path is equal to twice the target duration minus one: the effective maturity depends only on the duration and is independent of the size and direction of the yield changes.

Leibowitz said that the theoretical model developed to represent this convergence effect has generally stood up well in a series of tests involving simulation-based yield paths derived from Monte Carlo simulations, and with historical constant maturity Treasury and investment grade corporate debt data.

In conclusion, he says that these findings have a number of investment implications:

1. The DT process is far more widespread than generally recognized.
2. A DT bond portfolio has a much lower multi-year volatility than generally believed.
3. Over an appropriate horizon, depending only on the duration target, the annualized mean return will converge to the starting yield, subject to a modest standard deviation.
4. In scenario analysis, the trend-line (TL) formula can provide a reasonable estimate for the return to a given terminal yield.
5. In an asset allocation framework, the overall fund-level volatility contributed by the bond component will generally be far less than that derived from a standard mean/variance model.
6. For DT investors who are content with current yield levels, the convergence properties suggest that they need not be overly worried about the impact of higher rates on their multi-year returns.
7. DT investors committed to a fixed duration target should not count on the prospect of higher or lower yields to augment their multi-year return above the current level of yields.

International Finance

58. The International CAPM Redux (Spring 2015)
Adrien Verdelhan

Adrien Verdelhan, Associate Professor of Finance at MIT, revisited the question of whether foreign currency risk is priced in global equity markets. He began his talk by reviewing the two primary international models of equity pricing: the World CAPM and the International CAPM. Under the assumptions of the World CAPM, purchasing power parity (PPP) holds instantaneously, and global equity market risk is the single source of systematic risk. Sensitivity (beta) to that risk determines all equity expected returns, and currency risk is irrelevant from a pricing perspective. In contrast, the International CAPM presumes that PPP does not hold instantaneously. As a consequence, all bilateral exchange rates are additional systematic risk factors that theoretically impact all expected asset returns.

The ICAPM holds under very general assumptions. However, it is difficult to implement practically, as it requires the use of all bilateral exchange rates. Verdelhan presented a parsimonious reduced-form model (the CAPM Redux) that assumes: (i) the existence of a pricing kernel that insures that the law of one price holds across assets, and (ii) that investors can form portfolios freely—going both long and short.

Given these assumptions, the expected return on any asset can be written as a time-varying function of three factors: the average world equity market return denominated in local currency terms, a carry factor that is the excess return of a portfolio that goes long high-interest currencies and short low-interest currencies, and a dollar factor that presumes, each period, that an investor borrows in the U.S. and invests equally in all other currencies.

Verdelhan runs a variety of horse races to compare the performance of the CAPM Redux to its three main competitors: the World CAPM, the International CAPM, and an international version of the Fama-French four-factor model. The most persuasive evidence is provided in four graphs (one for each model) of realized average excess returns, plotted on the y-axis, against the corresponding model-predicted returns, plotted on the x-axis. The points in each graph correspond to over 200 different equity indices in 46 countries, where, for each country, returns are computed for five different MSCI equity indices: the aggregate stock market, growth, value, large market-capitalization, and small market-capitalization. As is readily apparent, both the World CAPM and the International CAPM generate virtually no cross-sectional variability in expected returns. While a good model would be characterized by points randomly distributed around the 45-degree line (shown in each graph), most of the data points for these two models are best described by a vertical line.

In contrast, both the Fama-French four-factor model and the CAPM Redux do a much better job generating cross-sectional variability in expected returns. The data points for the CAPM Redux are the most tightly clustered around the 45-degree line, indicating that it is the best model.

Verdelhan then provided evidence on a wide variety of model robustness, performance, and relevance tests for the CAPM Redux model. He showed the importance of time-varying betas for all three factors in that model, the importance of the carry and dollar factors in explaining mutual fund and hedge fund returns, and the superior performance of the CAPM Redux model with respect to rolling mean absolute alphas, especially in the last decade (depicted in the graph below). Specifically, the graph shows alpha estimates (in absolute value) averaged across passive country equity indexes at each point in time. Verdelhan argued that his Redux model is superior because it delivers lower mean absolute alphas (in other words, it explains passive country index returns best, and does not assign high alphas to them).
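The alpha comparison behind these graphs can be illustrated with a toy time-series regression on simulated stand-ins for the world, carry, and dollar factors. The data, loadings, and setup below are our own NumPy sketch, not Verdelhan's estimation:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500  # number of return observations

# Simulated factor returns standing in for the world, carry, and dollar factors.
world, carry, dollar = rng.normal(0, 0.04, (3, T))

# A hypothetical country index driven by the three factors plus noise
# (the loadings 0.9, 0.3, 0.5 are made up for the illustration).
r = 0.9 * world + 0.3 * carry + 0.5 * dollar + rng.normal(0, 0.01, T)

# Regress the index return on the factors; the intercept is the alpha.
X = np.column_stack([np.ones(T), world, carry, dollar])
alpha, b_world, b_carry, b_dollar = np.linalg.lstsq(X, r, rcond=None)[0]

# A well-specified factor model leaves only a tiny absolute alpha,
# and recovers the loadings.
assert abs(alpha) < 0.005
assert abs(b_world - 0.9) < 0.1
```

A model that omits a priced factor would, by contrast, push the omitted risk premium into the intercept, which is exactly what the "mean absolute alpha" horse race penalizes.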


In short, he was able to show that a parsimonious three-factor model can explain a significant amount of variation in the returns of a wide variety of passive developed and emerging market equity indices, and in the returns of a sample of actively managed mutual and hedge funds.

Bibliography

Brusa, Francesca, Tarun Ramadorai, and Adrien Verdelhan, 2014, The International CAPM Redux, available at http://www.q-group.org/wp-content/uploads/2016/01/verdelhan-paper.pdf.

59. PPP and Exchange Rates: Evidence from Online Data in Seven Countries (Spring 2015)
Alberto Cavallo

Alberto Cavallo, Associate Professor of Applied Economics at MIT Sloan, presented research based on a unique set of data collected daily from hundreds of online global retailers. The initial data were obtained as part of MIT's Billion Prices Project, which was started in 2008 with Roberto Rigobon, an MIT colleague.

The data collection makes use of the HTML code that underlies public webpages, and it "scrapes" this information to extract price and product information for all the products the retailers sell. The dataset has many attractive features relative to other sources of purchasing power parity (PPP) data: it primarily focuses on tradable goods, is available for many countries, is comparable across time, is available for hundreds (and eventually thousands) of products, can track dozens of varieties of the same product, provides exact product matches, and is available on a daily basis. The data also have the advantage that PPP has precise and refutable predictions with regard to price levels, whereas price index data (such as the CPI) can only be informative about price changes.

Cavallo's presentation focused on several different aspects of this research project. First, he reported on the research conducted to verify the reliability of the online price data. This involved the simultaneous random sampling of online and in-store prices using a bar-scanning app and crowd-sourced workers. Table 2 below shows high degrees of similarity between the prices scraped from online websites and the actual prices charged in stores for most countries, with the notable exception of Argentina.

The main focus of the talk was on the behavior of the real exchange rate (RER), defined as:

RER = (P_LC × E_USD/LC) / P_US

where P_LC is the price of a good in the local currency, P_US is the price in US dollars, and E_USD/LC is the exchange rate in US dollars per unit of local currency. If absolute PPP always holds, then RER = 1 at all times. If, instead, relative PPP always holds, then RER equals a fixed constant.

The empirical literature has found that absolute PPP does not hold, and that relative PPP only holds in the long run, with RER shocks having half-lives in the 3-5 year range. Cavallo reported on research that revisits the empirical validity of absolute and relative PPP using more accurate micro-level price data. Cavallo noted two primary research objectives: 1) measuring the level of the RER over time, and 2) estimating how fast the RER returns to steady state after a shock.

Cavallo first presented results using Brazilian data. As can be seen in the figures below, the RER in Brazil, relative to the U.S. (light blue line), is much more stable than relative prices (red line, left axis) or the nominal exchange rate (dark blue line, right axis). This negative correlation between relative prices and nominal exchange rates was apparent in 6 of the 7 countries for which data were presented; the sole exception was China, where the currency does not freely float.

He then focused on Argentina. Over the last four years, the RER in Argentina has typically been in the 1.00 to 1.10 range. However, over the last year, the RER has risen from 1.00 to roughly 1.20—suggesting that, over time, the currency will have a significant depreciation.

Cavallo presented summary evidence for seven countries based on a vector error correction model (reproduced below), where a coefficient of 0 indicates that absolute PPP holds, a coefficient of −1 indicates that relative PPP holds, and the ratio of the adjustment coefficients indicates how much of the adjustment comes through relative price changes compared to changes in nominal exchange rates. A ratio less (greater) than one suggests most of the adjustment comes through changes in nominal exchange rates (relative prices). The table also contains estimates of the half-lives (measured in days) of the relative price and exchange rate changes.

In contrast to previous research, Cavallo finds that real exchange rate adjustments are much faster than previously estimated: largely occurring within months rather than years, as previously thought using CPI data. The adjustment mechanism and speed vary by country. Thus, the key to Cavallo's new finding that prices and exchange rates adjust faster than previously thought is the precise cross-country pricing data for the same goods that he collects from online postings.

Bibliography

Cavallo, Alberto, PPP and Exchange Rates: Evidence from Online Data in Seven Countries, MIT Working Paper.

60. Is The World Stock Market Co-Movement Changing? (Fall 2015)
N. K. Chidambaran

One characteristic of the 2008 Financial Crisis was the dramatic shift to high correlations between asset prices worldwide. Diversification strategies designed to shield the investor from risk did not work, as all asset prices fell. This was shocking, but was it transitory? Chidambaran says that the increase in worldwide market co-movement may be more than transitory: capital market liberalization and the expansion of free trade agreements have ushered in a tremendous increase in world trade and capital flows since the mid-1990s. This leads to increased cross-country investments and a demand for portfolio diversification beyond what is available within the home market. These forces could drive individual national stock markets, traditionally segmented by national boundaries and regional agglomeration, to increasingly co-move.

Chidambaran says there are four questions to answer about co-movement:

1. Has co-movement been changing over time?
2. If there has been an increase in co-movement, is this temporary or the new reality in global markets?
3. Do stocks in emerging countries co-move with those in developed countries?
4. Is the co-movement investor driven or firm cash-flow driven?

The answers to these questions are important and can impact international investment diversification strategies. Low co-movement supports diversification and creates opportunities in international sourcing of capital. However, if co-movement is high, diversification is difficult.

Chidambaran notes that there are two essential aspects to any study of co-movement. First, measures of co-movement depend upon the model used to measure them. Second, if there is a change in levels of co-movement, there must be a way to understand its cause. In particular, if co-movement is assumed to be driven by world events, testing procedures need to allow for structural breaks in long-term relationships.

At the heart of international stock co-movement studies is the identification of the correct factor model for each country and whether there is a common world factor. Two approaches have commonly been used to examine stock returns across countries in a segmented world: 1) an extended version of the Fama-French model; and 2) an APT-style factor model with principal components as factors.

Both approaches, Chidambaran says, are biased and misstate factors that are common across all countries. This is largely due to the fact that world capitalization is skewed toward the US and Japan, leading to bias when using value-weighting and equal-weighting in estimating the common world market proxy. After showing the bias using simulation, he develops a Weighted-Generalized Canonical (W-GCC) method that identifies the factors and is free from value-weighting and equal-weighting biases. Chidambaran identifies a set of principal components that explain the variation of returns in each country, using the W-GCC to identify the "Canonical Variates" that are highly correlated across all countries. His model is shown below.

Factors that are not common, he says, are either regional or country-specific factors. He examines all countries simultaneously to identify the common drivers of returns. He uses a weighting process that improves on the efficiency of the methodology in the presence of confounding effects of regional and country factors.
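The principal-components step behind such factor extraction can be sketched on simulated data. The following is a generic PCA illustration of ours with invented loadings; it does not implement the W-GCC weighting itself:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1000, 6  # weekly observations, country markets

# Simulate country index returns driven by one common "world" factor
# plus idiosyncratic country noise (loadings are made up).
world = rng.normal(0, 0.02, T)
loadings = np.linspace(0.5, 1.5, n)
returns = np.outer(world, loadings) + rng.normal(0, 0.01, (T, n))

# First principal component of the demeaned panel of returns.
X = returns - returns.mean(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
pc1 = X @ vt[0]

# The first PC should track the common world factor very closely.
corr = abs(np.corrcoef(pc1, world)[0, 1])
assert corr > 0.95
```

The bias Chidambaran highlights arises when, instead of such a latent-factor extraction, the common factor is proxied by a value- or equal-weighted average that over-represents the largest markets.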


Using weekly stock returns data from 23 countries in the Index from 1981 to 2010, Chidambaran finds that stock price co-movements are stable through the mid-1990s and then increased to a high of 70% by 2010. He calculates the country-specific canonical correlations, as shown below.

He says that co-movement is largely driven by a common world market factor, and co-movement from regional factors is small and decreased to negligible levels by 2010. There are no differences between the co-movement of small and large stocks, and the bias in the model is less important in recent times, when world co-movement of developing and emerging countries has soared to over 70%.

To understand the source of co-movement and co-movement change, he studies the co-movement of the Chinese A (Chinese investors only) and Chinese B (open to non-Chinese investors) stock markets, as they have different investor bases. The Chinese B stock market shows a dramatic increase in co-movement over the time period, while the Chinese A stock market does not. This suggests to him that co-movement is more likely driven by investment cash flows.

In conclusion, he says that integration relative to the world factor is increasing in developed and emerging countries. Structural breaks are related to economic innovations, and, by 2010, world markets are well integrated, less segmented across country lines, more segmented by sub-groups of stocks, and driven by investor flows across boundaries.

61. Global Crises And Equity Market Contagion (Spring 2012)
Geert Bekaert

Geert Bekaert, Leon G. Cooperman Professor of Finance and Economics, Columbia Business School, and NBER, presented "Global Crises and Equity Market Contagion," which he coauthored with the following coauthors from the European Central Bank: Michael Ehrmann, Senior Adviser, Directorate General; Marcel Fratzscher, Head, International Policy Analysis Division; Arnaud Mehl, Principal Economist, Directorate General International.

Bekaert begins with the following all-too-familiar story:

1. Return correlations increase in bad times.
2. Bad shocks easily transmit across countries.
3. Such shock spillovers result in excessive comovements between asset classes and countries, as in the "Tequila Crisis" (1994-95), "Asian Flu" (1998), and "Russian Virus" (1998). This often is called contagion.

While we may all agree that such a thing as contagion exists, Bekaert points out that there is no consensus on exactly what constitutes contagion or how it should be measured. In this presentation he revisits the debate on the presence, types, and sources of contagion in what he calls an ideal lab: the Global Financial Crisis of 2007-09. Using the chart below, Bekaert reminds us of the depth of the Financial Crisis worldwide from 2007-09.

He characterizes 3 popular hypotheses regarding contagion channels as:

1. Globalization/interdependence: economies integrated with U.S./global economies were hit hardest.
2. A wake-up call: the crisis initially provided new (fundamental) information to investors, which led to a generalized crisis.
3. Herd behavior: a contagion without discrimination that was unrelated to fundamentals.

To examine the three hypotheses, Bekaert first defines contagion as excess correlation: correlation over and above what one would expect from economic fundamentals. Using data from 2007-2009, he analyzes the transmission of crises to country-industry portfolios using 55 country-specific equity portfolios. His model is an asset pricing framework with global and local factors designed to predict crisis returns, using explained increases in factor loadings as indicative of contagion.

Since this single-factor world (w) model does not explain comovements, he creates three factor adaptations using the following orthogonalized factors:

1. U.S. factor (R_t^U)
2. Global financial factor (R_t^G)
3. Domestic factor (R_t^D)

The domestic factor excludes the portfolio return under investigation.

His data is comprised of stock prices from 1995 to 2009 for 2000 firms, 55 countries, and 10 sectors, from which he creates 415 value-weighted U.S. dollar portfolios. The crisis is identified as starting on either August 7, 2007 or September 15, 2008. Using this data, he finds systematic and substantial contagion from domestic equity markets to individual domestic equity portfolios, the severity of which is inversely related to the quality of countries' economic fundamentals and policies. In addition, he reports that U.S. and global contagion were limited and domestic contagion was stronger. This is shown in the chart on the next page.

Finally, he finds that there was no contagion evident during other recent crises. The Financial Crisis during 2007-09, he says, stands alone.

The results were not related to globalization or herd behavior. Instead, Bekaert concluded that this Financial Crisis was a kind of "wake-up call," with markets and investors focusing substantially more on idiosyncratic, country-specific characteristics during the crisis. This segmented-world finding, he says, was unexpected, since the sample was made up of the largest capitalization stocks in each market: stocks of companies that he would have expected to be global players. Looking at a number of different sectors, he finds that the culprit in the crisis was the financial sector. He provides the chart below that shows three of his sectors.

Bekaert points out that this was not an indiscriminate spread of the crisis but a "wake-up call" during which investors re-focused on country characteristics and punished markets with poor fundamentals. The chart below shows the predicted and actual returns for Europe.

Ironically, Bekaert concludes that domestic factors regained importance in determining equity market performance in the most global crisis of recent times. Furthermore, he concludes that it was banking policy (e.g., deposit and debt guarantees and capital injections) that helped decouple returns during the contagion. The world, he says, is still relatively segmented, and domestic factors are critical.

62. The International Monetary System: Living With Asymmetry (Fall 2011)
Maurice Obstfeld

Maurice Obstfeld, Class of 1958 Professor of Economics and Director of the Center for International and Development Economic Research (CIDER) at the University of California at Berkeley, NBER, and CEPR, presented "The International Monetary System: Living with Asymmetry."

Obstfeld says that the Financial Crisis laid bare global stresses related to the two classic coordination problems the IMF was originally designed to address: global liquidity needs and exchange rate imbalances. The problems are similar to those of the Bretton Woods era, pre-1973, and yet quite different. He analyzes the current stresses in the two key areas that concerned the architects of the original Bretton Woods system: international liquidity and exchange rate management. He finds that many of the coordination problems that arise stem from differences in the structures, growth rates, and cyclical positions of the mature economies and the developing world: segments of the global economy that are fast approaching equality.

The key message of Obstfeld's work is that a diverse set of potential asymmetries among sovereign member states provides fertile ground for a variety of coordination failures. While obvious, he says, this point is a key starting place for predicting a range of tensions in any system of international monetary arrangements. The floating exchange rate, one of the Bretton Woods innovations, did not fix everything. The demand for international reserves is still high, and the modified Triffin problem - the supply of international liquidity in settings where the demand for safe foreign exchange reserves grows faster than the supply over the long term - is still present. There is no rapid return to nearly balanced current accounts. Distributional and allocative effects still exist, particularly for emerging market economies. Even though there have been some successes, every bilateral exchange rate continues to be shared by two countries, and external adjustment imperatives remain unequal.

Compounding these problems is globalization. One of the systemic, globally interdependent risks comes from inflated gross asset positions. Obstfeld uses the chart on the next page to show the positions for six major countries. The legend, left to right, mimics the graph's lines, top to bottom. Clearly Switzerland and the UK are in the strongest positions. He then looks at net external assets to demonstrate the problems in the financially troubled countries. The graph for one country, Ireland, shows its gross external assets and liabilities at a startling 14 times GDP.

Obstfeld says that there is one overwhelming need: liquidity in the international system. However, there are two issues: heightened sovereign debt risks for richer countries and the need for lender-of-last-resort (LLR) support in multiple currencies to safeguard financial stability for all countries. Until recently, Obstfeld says, domestic LLR seemed sufficient. Now it does not. He provides an outline of the system at work and its flaws before turning to better ways to meet market liquidity needs.

Obstfeld concludes that in this world there is a connection between the extent of policy cooperation over macroeconomic policies and the need for international liquidity. The more cooperation (the more carefully coordinated national policies are in timing and nature), the lower the need for international liquidity to finance imbalances. A collateral benefit of such cooperation is that it can mitigate other coordination failures in national economic policies. Unfortunately, these continue to cause tensions between governments, and there are no easy solutions in a world of sovereign nations. Reforms that enhance the IMF's freedom from political pressures would allow it to identify and critique global coordination failures with greater impartiality and credibility.

To conclude his extensive tutorial, he summarizes the main challenges at the regional and global levels:

1. National sovereignty and self-interest, as expressed through democratic political processes, frequently are not inherently friendly to globalization.
2. Globalization of governance institutions must expand to support the expanded globalization of markets.

As for the current threat, Greece, he was very pessimistic about the timing and the impact of the potential bail out.

Investor Sentiment

63. The Behavior Of Sentiment-Induced Share Returns: Measurement When Fundamentals Are Observable (Spring 2015)
Ian Cooper

Whose "sentiment" affects stock prices—the sentiment of professional investors, or that of retail investors? Ian Cooper, Professor of Finance, London Business School, tackles

this question through an analysis of the stocks of upstream oil companies.

In order to study sentiment, one must first clearly understand the fundamental value of a stock. Cooper argued that the relation of upstream oil company stocks to fundamentals is clearly defined, in theory, because of their simple tie to oil prices. That is, if the Hotelling (1931) Rule is correct, the value of such companies equals the current value of their reserves minus extraction costs (another way to express this is that the price of an exhaustible resource, minus extraction costs, should rise over time at the risk-free rate). One can determine the deviation of oil and gas stocks from fundamental value by measuring the deviation of equity prices from reserves minus extraction costs (minus the value of debt). Specifically, Cooper assumes that fundamental value is a function of the month-end spot price of West Texas Intermediate oil and the spot wellhead price of West Texas natural gas (as well as the contango in oil prices).

Cooper uses 121 U.S. oil and gas E&P stocks during 1983-2011 to test the effect of investor sentiment on stock returns. That is, he wished to explore whether investor sentiment pushes these upstream oil stock prices away from their fundamental values in the short term (Barberis, et al., 2012), or whether sentiment predicts future returns because of its impact on fundamentals themselves.

Cooper measures sentiment in two different ways: the Baker and Wurgler (2006) "Sentiment Index," and the proportion of individual investors who report that they are bullish in the regular survey conducted by the American Association of Individual Investors (the "Bullishness" survey). The monthly correlation between these two sentiment metrics is only 9%, indicating that they capture different aspects of sentiment.

Accordingly, Cooper's paper assumes that stock prices are affected by the actions of two types of traders: a professional arbitrageur, who sells when the Baker-Wurgler sentiment measure is high, and a naïve trend-follower, who buys when the AAII retail sentiment measure is bullish. Then, in separate regression equations, he allows fundamentals and non-fundamentals to depend on the actions of both types of traders (through the level of the two sentiment variables). That is, Cooper regresses, separately, fundamental returns and non-fundamental returns on the two lagged sentiment measures. These regression equations are combined in a vector autoregressive model, so that fundamental returns can also depend on lagged non-fundamental returns (and vice versa).

Next, to obtain the y-variables in the above system, Cooper splits upstream oil stock returns into fundamental and non-fundamental returns by first regressing monthly returns over a 60-month period on the change in WTI prices, natural gas prices, and the contango (logged 6th-month minus 1st-month futures price) of WTI:

R_it = a_i + b_i ΔWTI_t + c_i ΔGas_t + d_i ΔContango_t + e_it

The fitted regression is then used to generate the fundamental returns, while the residuals from this regression are the non-fundamental portion of returns.

Cooper's results show that fundamental returns, but not non-fundamental returns, are predicted by both of the lagged sentiment indicators. Cooper finds that this predictive role of sentiment on fundamentals is only present starting in 2000, which is roughly when investors became interested in the oil sector (and commodities in general).
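Cooper's two-step decomposition (fitted values as fundamental returns, residuals as non-fundamental returns) can be sketched as below. This is a simplified toy version with a single driver, the change in WTI, rather than his three regressors, and all numbers are made up:

```python
def ols_1var(y, x):
    """Slope and intercept of a one-regressor OLS fit."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           sum((xi - mx) ** 2 for xi in x)
    alpha = my - beta * mx
    return alpha, beta

def split_returns(returns, d_wti):
    """Fitted values = fundamental returns; residuals = non-fundamental."""
    a, b = ols_1var(returns, d_wti)
    fundamental = [a + b * x for x in d_wti]
    non_fundamental = [r - f for r, f in zip(returns, fundamental)]
    return fundamental, non_fundamental

# Hypothetical monthly stock returns and WTI changes (%)
rets = [2.0, -1.0, 3.0, 0.5, -2.0]
d_wti = [1.5, -0.8, 2.5, 0.2, -1.9]
fund, nonfund = split_returns(rets, d_wti)
# OLS residuals with an intercept sum to zero by construction
print(round(abs(sum(nonfund)), 10))  # 0.0
```

In the paper, the fitted and residual series then become the dependent variables in the vector autoregression with the two lagged sentiment measures.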

Further, Cooper finds that the Baker and Wurgler index predicts mean-reversion in upstream oil stock prices, while the AAII index predicts momentum in stock prices.

The key takeaway from Cooper's study of upstream oil company stock prices: sentiment seems to work mainly on fundamentals themselves, and does not appear to drive prices away from fundamental values!

Bibliography
Baker, Malcolm, and Jeffrey Wurgler, 2006, Investor Sentiment and the Cross-section of Stock Returns, Journal of Finance 61, 1645-1680.
Brealey, Richard, Ian Cooper, and Evi Kaplanis, 2015, The Behaviour of Sentiment-Induced Share Returns: Measurement when Fundamentals are Observable, available at http://www.q-group.org/wp-content/uploads/2016/01/Cooper_paper.pdf.
Hotelling, Harold, 1931, The Economics of Exhaustible Resources, Journal of Political Economy 39, 137–175.

64. Investor Sentiment Aligned: A Powerful Predictor Of Stock Returns (Spring 2014)
Guofu Zhou

Zhou starts by saying that many, including John Maynard Keynes, have said that investor sentiment is a key factor in determining stock prices. Since sentiment is unobservable, most empirical research has focused on whether stock prices move too much to have been generated by changes in subsequent fundamentals. Zhou looks to work by Baker and Wurgler (2006, 2007), who suggested there is a role for sentiment in explaining expected returns. BW created a sentiment index that included the following as proxies for sentiment:

1. Closed-end fund discount rate
2. Share turnover
3. Number of IPOs
4. First-day returns of IPOs
5. Dividend premium
6. Equity share in new issues

Their index tried to mitigate the measurement error in any one of the above proxies and to create a more reliable sentiment signal. However, they did not find statistical significance when using their investor sentiment index to predict the stock market.

Zhou attributes BW's weak findings to the principal components approach they used to combine the six proxies. He tries a different approach, partial least squares (PLS). He uses the BW proxies and a PLS methodology to extract the combination of the six proxies that best explains stock market returns. Zhou says that PLS better aligns the sentiment proxies with returns, and he calls his approach aligned investor sentiment.

Forecasts using the BW index and Zhou's aligned investor sentiment approach are shown in the chart below. The solid line is the aligned investor sentiment (IS_PLS) and the dashed line is the BW investor sentiment index (IS_BW). The predictions are different, as are the predictive results for each of the two models, shown in the chart below.
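The first-stage idea behind Zhou's alignment, weighting each proxy by how strongly it lines up with subsequent market returns and then combining the proxies into one index, can be sketched as a bare-bones, one-component step. This is only an illustration with made-up data, not the actual Baker-Wurgler series or the paper's exact estimator:

```python
def covariance(x, y):
    """Population covariance of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / n

def aligned_sentiment(proxies, future_returns):
    """One-component PLS-style sketch: weight each proxy series by its
    covariance with next-month returns, then combine into a single index."""
    weights = [covariance(p, future_returns) for p in proxies]
    n = len(future_returns)
    return [sum(w * p[t] for w, p in zip(weights, proxies)) for t in range(n)]

# Toy data: two proxy series and subsequent market returns (hypothetical)
turnover = [0.2, 0.5, 0.9, 0.4, 0.7]
ipo_count = [0.1, 0.3, 0.8, 0.2, 0.6]
next_ret = [1.0, -0.5, -2.0, 0.5, -1.0]
index = aligned_sentiment([turnover, ipo_count], next_ret)
print(len(index))  # 5
```

In this toy data, months with high proxy readings are followed by low returns, so both weights come out negative and the combined index is lowest exactly when raw "sentiment" is highest, mirroring the sign of the relationship Zhou reports.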


Although the R²s are quite modest, Zhou's aligned investor sentiment and BW's investor sentiment index models show that high positive sentiment is a negative indicator of future returns. Zhou points out that the slope coefficient for his sentiment indicator suggests that a one-standard-deviation increase is associated with an economically large, 0.55%, decrease in expected excess market returns over the subsequent month.

When Zhou compares the predictive strength of his model to other approaches, he concludes that it outperforms multiple macroeconomic indicators, as shown in the table below.

Zhou concludes that the statistical significance of the aligned sentiment indicator is robust to the inclusion of the macroeconomic predictors and is economically valuable. It can lead to a doubling of monthly Sharpe Ratios. The forecasting power of sentiment, Zhou says, comes from investors' underreaction to cash flow information rather than from discount rate predictability.

65. Bubbling With Excitement: An Experiment (Spring 2012)
Terrance Odean

Terrance Odean, Rudd Family Foundation Professor of Finance, Haas School of Business, University of California, Berkeley, presented "Bubbling with Excitement: An Experiment," coauthored with his Haas School of Business colleagues Eduardo B. Andrade, Associate Professor of Marketing, and Shengle Lin, Searle Foundation Fellow.

Odean starts with a little history lesson, pointing out that behavioral research in economics began about 20 years ago, after years of researching how people should act as rational agents, as opposed to how people actually make decisions. He says that we have known for a long time that there are periods when the markets do not reflect rationality, from the "Tulip mania" of 1637 to the internet bubble of the late 1990s. These popular accounts of investment bubbles emphasize the role of emotions and the role of excitement as an accelerant. During these non-rational periods, aroused emotional states distort better judgment and inflate an asset bubble until it ends in panic selling.

Experimental studies of asset-pricing bubbles have focused on non-emotional factors such as liquidity, experience, transparency, novelty of the environment, and speculation. Investors, however, can and do act on emotion, and there are several theories about how that occurs (preferences; beliefs; overconfidence; limited attention; emotions). Bubbles themselves could result from people's undifferentiated sensation seeking, and financial trading could represent an important source of physiological arousal. As such, any highly arousing incidental experience could produce a larger bubble relative to a non-arousing experience. Odean notes that he believes that pleasantness

associated with the arousing experience (excitement) is critical for investors to inflate the bubble. The positive affect is transmitted through changes in information processing that then impact preferences and influence beliefs and/or increase risk taking. A negative affect can decrease risk taking. Odean and his colleagues create an experimental financial market designed to study the role of emotions in asset-pricing bubbles. This method has been used in previous research by others.

Odean creates an experimental market designed to manipulate participants' emotional states by showing one of a number of short videos at the beginning of each trading session. The videos are chosen to arouse excitement or fear and sadness. As a control, one of the videos provides a neutral, unemotional treatment. Creating a predisposition before the experiment is a commonly used procedure that is known to impact financial and economic decision-making.

Each of Odean's experimental markets is conducted over 15 periods. The subjects begin with several pieces of information: the starting position (a combination of cash and/or stocks) and potential dividend payments (the magnitude of which is uncertain and later revealed). The price and dividend pattern is designed to create a fundamental-value price pattern like that shown below. The subject sees this chart at each of the 15 rounds of buying and selling. Thus the fundamental value is not in question at any time during the trading. The subject then chooses to buy or sell at market rates that are reflections of the behavior of the other subjects in the simulation. The result from one of the "exciting-video" simulations is shown below. Clearly, during this simulation a bubble develops before prices finally crash at the end of the simulation. The simulation prices (the dots in the chart below) exceed the fair value until near the end, when prices drop rapidly toward the fair value (the dashed lines in the chart in the next column).

This experience is not only true for the "excited video" treatment. The following chart shows the average price of the stock being traded for each of the induced emotional states. While the results are different, only the results of the excitement state are significant.

On the basis of his research, Odean concludes that larger asset-pricing bubbles develop in experimental markets run subsequent to showing the exciting videos, and that differences in the magnitude and amplitude of the bubbles are both economically and statistically significant. Emotion, the factor studied by behavioral economists, appears to play a very large role in bubbles.


During periods of heightened excitement, rational agents, if they exist, seem to deviate from rationality.

Passive Asset Management

66. The Mutual Fund Industry Worldwide: Explicit And Closet Indexing, Fees, And Performance (Spring 2013)
Martijn Cremers

Mutual funds have become one of the primary investment vehicles for households worldwide over the last half century. As of December 2010, there were nearly 28,000 equity funds with $10.5 trillion in assets under management (AUM). Index funds have been an increasingly important low-cost alternative to actively managed stock funds. By 2010 they accounted for 22% of AUM worldwide. Cremers reports on his study of whether increased competition from passively managed funds globally increases competition and efficiency in the mutual fund industry. In particular, he wonders whether this change pushes active funds to increase their product differentiation and lower their fees, and whether it has led to improved performance.

Cremers says that the rise of explicit and closet indexing globally raises some important questions:

1. What explains the existence and growth in market share of passive funds?
2. What could have spurred that growth: regulation, competition, or market development?
3. What impacts have passively managed alternatives had on active funds?
4. Have low-cost alternatives put pressure on active funds, and how has the performance of active funds compared with that of low-cost alternatives?
5. How passive are the passive funds?

Before discussing the methods he used to answer these questions, Cremers previews some findings:

1. In markets with more low-cost explicit index funds, active funds deviate more from their benchmarks, charge lower fees, and deliver higher returns.
2. Outside the most developed markets there is little explicit indexing but considerable closet indexing.

In the United States, the market share of
index funds and ETFs has grown from 16% of AUM in 2002 to 27% in 2010. In other countries indexing is much less prevalent, but it has grown from 6% in 2002 to 13% in 2010. Although the amount of explicit indexing is small outside of the U.S., Cremers finds that closet indexing there is close to 30%, twice the level of closet indexing in the U.S. The 2010 breakdown is depicted below.

In his research Cremers uses information from the Lipper Hindsight and Factset/Lionshares databases to create a sample of 11,776 funds with $7.9 trillion total net assets (TNA) from 32 countries. The sample contained global, regional, country, and sector funds from 2002-2010. Cremers reports that only 22% of TNA were explicitly indexed worldwide in 2010, and only 13% were explicitly indexed outside the US. Switzerland had the highest proportion of explicitly indexed funds at 80%. While a low proportion of funds are explicitly indexed, Cremers notes that in most countries investors cannot invest in index funds with their attendant lower fees; many active fund managers, however, are effectively closet indexers charging active fees. To determine which funds were closet indexers, he used a measure he developed previously, Active Share: the share of portfolio holdings that differ from the benchmark index holdings.

Cremers determined that there are three things that explain indexing in a particular market:

1. More stringent standards.
2. A larger fund industry.
3. More liquid and developed stock markets.

As an illustrative example, Cremers compares the domestic funds benchmarked to the Madrid SE stock index to the domestic funds benchmarked to the S&P 500. He notes that there are about 75 domestic Spanish funds in the sample.
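Active Share, as described above, is conventionally computed as half the sum of absolute weight differences between the fund and its benchmark. A minimal sketch, with hypothetical holdings and weights:

```python
def active_share(fund_weights, bench_weights):
    """Active Share = half the sum of absolute weight differences
    between fund holdings and benchmark holdings."""
    tickers = set(fund_weights) | set(bench_weights)
    return 0.5 * sum(abs(fund_weights.get(t, 0.0) - bench_weights.get(t, 0.0))
                     for t in tickers)

# Hypothetical example: a concentrated 3-stock fund vs. a 4-stock benchmark
fund = {"A": 0.50, "B": 0.30, "C": 0.20}
bench = {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25}
print(round(active_share(fund, bench), 3))  # 0.3
```

A pure index fund that replicates its benchmark scores 0; a fund holding nothing in common with the benchmark scores 1, which is why low values flag closet indexers.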
The average Active Share of those funds is 37%. The average market cap of the stocks in those funds is about $630MM, and there are roughly 150 stocks in the Spanish universe. This contrasts with over 200 U.S. active funds in the sample with an average Active Share of 70%. There are over 9,000 stocks in the U.S. universe, and the average market cap is $14.6BB.

In his sample of funds, Cremers finds that the competitive pressure from indexing makes active funds:

1. Deviate more from their benchmarks and have a higher Active Share.
2. Charge lower fees.
3. Deliver higher returns.

Outside the most developed markets there is little explicit indexing but considerable closet indexing, as in those markets active funds tend to charge higher fees.

The chart below depicts the difference in performance between closet indexers and truly active funds. The chart shows that in four of the nine years, slightly more than 50% of the truly active funds outperformed their benchmarks, whereas in all years well below 50% of the closet indexers outperformed their benchmarks, in large part due to their high fees.

Overall, he says, it is clear that explicit indexing improves the levels of competition and efficiency of the mutual fund industry in a country.

Predictability

67. King Of The Mountain: Finding Short-Term Efficacy In The Shiller P/E Ratio (Fall 2015)
Rob Arnott

Rob Arnott, Chairman & CEO of Research Affiliates, LLC, kicked off the Q Seminar with a new look at an old and familiar Q topic: the ability of Shiller's Cyclically Adjusted Price-to-Earnings (CAPE) ratio to predict future stock market returns. The CAPE ratio is predicated on the idea that the ten-year average of real earnings, averaged over a period that is longer than the typical business cycle, provides a better proxy or instrument for the future earning potential of the stock market than, for instance, the trailing 12-month earnings.

Arnott's talk began by revisiting six key empirical CAPE findings: (i) over the 1881-2015 period, the CAPE ratio in the U.S. displays mean reversion at long horizons, with a half-life of 8.5 years; (ii) the mean of the CAPE ratio in the U.S. has risen over that time frame from roughly 12 to 20; (iii) high CAPE ratios are associated with lower subsequent 10-year real returns; (iv) the relationship between CAPE and future returns is also present in emerging markets; (v) relative CAPE ratios can be used to predict relative returns, such as between the United States and emerging markets; and (vi) CAPE is efficacious at predicting long-run returns but does less well at predicting shorter-run returns.

Arnott examined whether the poor performance of CAPE at predicting returns over shorter horizons can be improved by conditioning on current macroeconomic conditions. This builds on earlier research by Leibowitz and Bova (2007), who found that moderate real interest rates are associated with higher stock market valuations. Arnott's talk focused on the joint relationship between CAPE and two key macroeconomic variables: the real interest rate (measured as the 10-year nominal government rate minus three-year inflation) and inflation (measured on a one-year basis).
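The CAPE construction Arnott revisits (current price over the trailing ten-year average of real earnings) can be sketched as follows; the index level and earnings series here are hypothetical:

```python
def cape(price, real_earnings_10y):
    """CAPE = current real price / average of the prior 10 years of real earnings."""
    if len(real_earnings_10y) != 10:
        raise ValueError("need exactly 10 years of real earnings")
    return price / (sum(real_earnings_10y) / 10.0)

# Hypothetical index level and ten years of inflation-adjusted earnings
earnings = [58, 61, 55, 40, 52, 66, 70, 74, 72, 68]  # average = 61.6
print(round(cape(1540.0, earnings), 2))  # 25.0
```

Because the denominator averages across a full business cycle, a single depressed year (like the 40 above) moves the ratio far less than it would move a trailing 12-month P/E.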
The key finding is that, historically, CAPE has been at its highest at moderate levels of real rates and inflation. CAPE tends to be depressed when rates and inflation are either too low or too high. This is depicted in the three-dimensional chart below. The chart shows that CAPE is maximized when inflation is in the 1-3% region and real rates are 3-5%.

This three-dimensional "P/E Mountain" graphically captures the main point of the paper: the non-linear relation between CAPE, real rates, and inflation. Arnott then proceeded to describe a more robust approach to capturing the relationship between these three variables, using a maximum likelihood procedure (assuming a joint Gaussian distribution) to estimate the three-dimensional surface represented above by the tops of the bars. A regression framework then provides a simple way to compare the forecasting ability of traditional linear models that relate CAPE to macro variables with that of his non-linear procedure. The main results are summarized in the two tables below. The first table shows the statistical superiority, based on R-squared, of the non-linear model in terms of its ability to explain the time-series behavior of CAPE ratios (with the caveat that the usefulness of the R-squared statistic is affected by the overlapping nature of the earnings data on the left-hand side of the equation).

The second table compares the performance of two models that predict future

real returns across a variety of horizons. The left panel of the table contains results where the log of CAPE is used as an explanatory variable. The panel on the right contains results where the explanatory variable is the CAPE ratio minus its conditional expected value (where this conditional expectation is based on Arnott's non-linear model above). At horizons of a year or less, the conditional model shows superior relative performance, although the absolute performance is still weak. At longer horizons, the unconditional model works better. Arnott argues that this is to be expected, as the current value of these macro variables should not be that relevant to predicting returns over long horizons.

The last part of Arnott's talk focused on results outside of the United States, which reinforced the conclusions arrived at above using U.S. data. As of September 30, 2015, the international analysis suggested the following over (-) and under (+) valuations: U.S. (-31%), Australia (+11%), Canada (-15%), France (+7%), Germany (-6%), Hong Kong (-15%), Italy (+55%), Japan (-39%), Spain (+52%), Sweden (-17%), Switzerland (-28%), and United Kingdom (+51%).

Bibliography:
Leibowitz, Martin, and Anthony Bova, 2007, "P/Es and Pension Funding Ratios," Financial Analysts Journal 63 (January/February), 84-96.

68. Does Academic Research Destroy Stock Return Predictability? (Fall 2014)
David McLean

Professor David McLean, Dianne and Irving Kipnes Chair in Finance and Development at the University of Alberta, delivered a talk on the existence and durability of stock return anomalies. The key goal of his talk was to determine whether anomalies disappear or weaken after an academic article that documents them is widely circulated.

Sorting on various stock characteristics, such as market capitalization or the E/P ratio, has been a staple of academic research (and investment practice) since at least the 1970s. Hundreds of papers have been written (and many faculty members have become tenured) based on the claim that various "anomalies" exist. McLean's presentation explores the question: what happens to the anomaly after the paper gets published?

This question is useful for at least a few reasons:

1. Did the anomaly ever really exist, or was it a product of pure luck ("discovered" through data mining)? If the anomaly was purely fictitious and due to data mining, this implies that there is no alpha to the anomaly after the sample period studied by the academics.
2. If it did exist, can it be explained as a reward for systematic risk, as persistently argued by Eugene Fama (e.g., perhaps momentum is a systematic risk factor)? If true, this implies that the alpha to the anomaly will persist, and not decay, after the sample period or publication of the academic paper.
3. If the anomaly cannot be explained as an artifact of data mining or a reward for bearing systematic risk, and it appears to be driven by behavioral biases (as often argued by Robert Shiller), how quickly does it dissipate after it becomes widely known and traded on? Shiller would interpret the decay as being the elimination of inefficiencies in the market by

sophisticated investors (those who read the literature), who face some risks and costs in "arbitraging" these anomalies.

Academic studies disagree on the "durability" of predictors; Haugen and Baker (1996) find strong durability in 11 predictors, while Chordia, Subrahmanyam, and Tong (2014) find that zero of 7 predictors survive in a more recent period.

McLean studies a wide variety of predictors from the literature: altogether, 95 predictors are harvested from papers published in 66 finance journals, 27 accounting journals, and 2 economics journals that documented, with a p-value of 5% or lower, a predictor that worked in forecasting future abnormal returns. For 15 of the predictors, McLean was unable to replicate the in-sample results. McLean then focused on the 80 predictors that he was able to replicate.

Returns on an equal-weighted long position in the "best" quintile, and a short position in the "worst" quintile, according to each predictor, are compared over 3 periods:

1. The original sample period covered by the study of the anomaly (although McLean could not always obtain data for the entire sample period).
2. The post-sample, pre-publication period; here, the paper was either not written or presumed not widely distributed, so we can call this an "out-of-sample" period.
3. The post-publication period, after peer-reviewed journal publication; here, the paper was more widely distributed, so we can assume that it is now public knowledge (in a robustness check, they tried the first date the article was "published" on ssrn.com, with similar results).

In sample, the average predictor produces 67 bps/month of long-short return. Post-sample, pre-publication, this average drops to 50 bps/month; post-publication, this average drops to 30 bps/month. Thus, more than half of the in-sample predictive power is lost by the time of publication.

Further, those predictors that work better in-sample have a larger decay during the post-publication period. This could indicate both some level of data mining and a larger amount of capital invested by the market in the best anomalies.

Interestingly, the average pairwise correlation between returns that accrue to anomalies is only 5%, indicating that these anomalies may be based on somewhat different underlying economic effects.

McLean next explores in-sample returns and decay rates by predictor type:

1. Corporate events: e.g., sell-side analyst ratings downgrades.
2. Fundamentals: e.g., accruals (variables constructed only with accounting data).
3. Market-based: variables that use only market data, and no accounting data, such as return momentum.
4. Valuation: ratios of market values to fundamentals.

The graph below shows the in-sample vs. post-publication return, on average, across these four categories. Purely market-based signals have the largest in-sample returns, but also have the largest decline, indicating that these are most readily detected and arbitraged by investors (and maybe the easiest to be data-mined by academics as well!).

Finally, McLean studies other characteristics of stocks involved in each of the signals, and finds that, after publication, there is an increase in shorting activity and trading volume. This finding indicates that, indeed, investors note the publication and increasingly trade on the anomaly.
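The three-period comparison McLean runs can be illustrated with a toy calculation. The return history here is hypothetical and simply averages a given bps series by period; McLean's actual numbers come from equal-weighted quintile long-short portfolios:

```python
def mean_bps(returns):
    """Average monthly long-short return, in basis points."""
    return sum(returns) / len(returns)

def decay_summary(monthly_bps, sample_end, pub_month):
    """Split a predictor's long-short return history into three periods:
    in-sample, post-sample/pre-publication, and post-publication.
    `monthly_bps` maps month index -> long-short return in bps."""
    in_s = [r for m, r in monthly_bps.items() if m <= sample_end]
    post_s = [r for m, r in monthly_bps.items() if sample_end < m <= pub_month]
    post_p = [r for m, r in monthly_bps.items() if m > pub_month]
    return {name: round(mean_bps(r), 1) for name, r in
            [("in-sample", in_s), ("post-sample", post_s),
             ("post-publication", post_p)]}

# Toy history mirroring the averages McLean reports:
# 67 bps/month in-sample, 50 before publication, 30 after
hist = {m: 67 for m in range(1, 61)}
hist.update({m: 50 for m in range(61, 85)})
hist.update({m: 30 for m in range(85, 121)})
print(decay_summary(hist, sample_end=60, pub_month=84))
# {'in-sample': 67.0, 'post-sample': 50.0, 'post-publication': 30.0}
```

Splitting at the sample-end and publication dates is what lets the design separate "the anomaly was data-mined" (decay starts immediately post-sample) from "investors traded it away" (decay concentrated post-publication).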
Market Outlook.” In that article they showed Further, the correlation of a signal’s that long-run stock market returns were not a returns with those signals yet unpublished random walk but could be forecast by their decreases, while the correlation with those Cyclically Adjusted Price-Earnings ratio signals already published increases, (CAPE ratio). This ratio is created by indicating a potential common pool of dividing an index of stock market prices by investors who arbitrage these. the average of the prior ten years of aggregate real earnings. Siegel says that Shiller’s CAPE McLean concludes that, while some ratio ``…is the single best forecaster of long- data-mining likely occurred, the average term future stock returns.’’ Still, Siegel is signal did generate some returns post- troubled by the fact that the ratio tends to be sample. And, when the market was widely over pessimistic. He explores why it is overly informed of the signal, post-publication, pessimistic and focuses on the data behind the anomaly returns almost disappear. One the CAPE ratio’s forecasts. chief finding of his paper is that academic research appears to be profitable for The CAPE ratio gained attention in investors while also making markets more December 1996 when Shiller presented it to efficient. Of particular interest to Q-Group the Board of Governors of the Federal members: he hopes to create a “user’s Reserve. At that time the ratio warned that 1990s stock prices were running well ahead of earnings. One week later Chairman Greenspan delivered his “Irrational Exuberance” speech in an attempt to temper the market. At the top of the bull market in 2000 the CAPE ratio hit an all-time high of 43, more 106 than twice its historical average, correctly b. Real growth rates. forecasting the poor equity returns over the c. Aging of the population. next decade (as is shown in in the following 2. Growth of earnings per share for a given chart). 
forecasting the poor equity returns over the next decade (as shown in the following chart).

In January 2013, the CAPE ratio reached 20.68, about 30% above its long-term average, and predicted a 10-year annual real stock return of 4.16% (2.5 percentage points below its long-run average). This created concern among many stock market forecasters and led many to consider the current stock market rally unsustainable. This concern continues into October 2013. Wisely, Siegel asks whether the CAPE ratio is now overly bearish. Is it time for patience, or are we on the brink of another serious decline, as the ratio forecasts? Siegel seeks to answer at least part of this question.

As background, Siegel notes there have been only nine months since 1991 when the CAPE ratio has been below its long-term mean. Further, in 380 of the 384 months from 1981 through 2012, the actual 10-year real returns in the market have exceeded the CAPE ratio's forecasts. Siegel speculates about whether there is a problem with the CAPE ratio and whether its effectiveness as a predictor of long-term returns has changed over time.

Siegel argues that the CAPE methodology is not robust to changes in:

1. Required ROE, caused by changes in:
   a. Transaction costs.
   b. Real growth rates.
   c. Aging of the population.
2. Growth of earnings per share for a given return on equity, such as changes caused by shifts in dividend policy.
3. The methodology used to compute reported earnings.

Siegel concludes that the problem lies in the accounting treatment of capital losses, which has substantially distorted firms' reported earnings. To demonstrate the potential problem in reported earnings, he compares three different measures of after-tax earnings per share:

1. Reported earnings as published by S&P.
2. Operating earnings, published by S&P since 1989, which allow firms to exclude unusual items.
3. National Income and Product Account (NIPA) earnings, adjusted for changes in the number of shares outstanding.

The following table shows the different earnings measures during the post-1929 recessions. Note that the cyclical nature of S&P reported earnings has changed dramatically in the last three business cycles (from June 1990 to June 2009, highlighted in dark grey in the following chart). Perhaps most striking, Siegel points out, is that NIPA earnings fell 53.0% during the recent global financial crisis, whereas those reported by the S&P 500 dropped 92.1%. Particularly puzzling is "…that the decline in S&P 500 reported earnings in the 2008-9 recession, where the maximum decline in GDP was just over 5%, was much greater than the 63.4% decline in S&P's recorded earnings in the Great Depression, which was over 5 times as deep [in terms of GDP decline]." Siegel attributes this difference to FASB Rules 142 and 144, issued in 2001, which forced non-financial firms to mark to market any impairments to PPE and intangibles, and to FASB Rule 115, issued in 1993, which required financial firms to mark to market
This, excluded from reported earnings data, he says, pointed to a grossly overvalued stock although when Time Warner subsequently market and to meager returns over the next was forced to write-down its investment by ten years (2.94% per annum). This forecast, $99 billion following, the internet bubble, this Siegel points out, is dependent on the use of write-down could not impact earnings. S&Ps reported earnings. Changing from S&P to NIPA profits reduces the overvaluation as Siegel points out that there are serious well from 46.3% to 8.9%. When further issues with how the S&P computes reported adjusted for changes in dividend policy the earnings for its index, what he calls the overvaluation declines further. “aggregation bias.” Given its S&Ps construction methodology, it effectively Siegel concludes by saying his assumes that the holder of the S&P500 index observations do not reduce the CAPE ratio as has one option on the aggregated earnings all a powerful predictor of long-term real stocks 500 of the firms in the index rather than returns. However, substitution of NIPA having a portfolio of 500 options on the profits into the CAPE model and correcting earnings of each of the firms. This for the shift in the trend rate of growth of real “aggregation bias” was particularly acute per share earnings improves its predictability during the global financial crisis, Siegel says. and eliminates most, if not all, of the In the fourth quarter of 2008, for example, the overvaluation of the stock market in recent S&P 500 index recorded an unprecedented years. $23.25 loss in reported earnings. This was 70. Changing Times, Changing Values: A primarily caused by the huge write-downs of Historical Analysis Of Sectors Within three financial firms: AIG, Bank of America, The U.S. Stock Market 1872-2013 (Fall and Citigroup. AIG itself recorded a $61 2013) billion loss that quarter that more than wiped


Robert J. Shiller

Robert Shiller kept his commitment to present his paper to the Q-Group Fall 2013 meeting in Scottsdale, though he did so via video link.

Shiller presents work that extends the Cyclically Adjusted Price-Earnings (CAPE) ratio research of Campbell and Shiller (1988, 1998, and 2001) and Shiller (2005) into forecasting long-term market returns. He constructs price, dividend, and earnings series for the Industrials, Utilities, and Railroads sectors from the 1870s until 2013 from primary sources and investigates the forecasting power of the CAPE ratios for each sector individually.

Originally, Shiller used a CAPE ratio based on the current real price divided by a ten-year average of real after-tax earnings. This measure is potentially affected by differences in corporate payout policy. He discusses a procedure for scaling real earnings based on the ratio of an asset’s total return (including dividends) to its price return (excluding dividends) and uses this to construct a more robust CAPE ratio.

As another modification to his earlier work, Shiller introduces the Relative CAPE: the current CAPE divided by its own twenty-year average. Shiller argues that this adjustment removes long-term trends and intermediate cycles, and also controls for different industries having different growth trajectories.
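The two ratios are mechanical enough to state in code. Below is a minimal sketch (not from the paper), assuming monthly series of real prices and real earnings; the function names and window choices simply transcribe the definitions above:

```python
import pandas as pd

def cape(real_price: pd.Series, real_earnings: pd.Series) -> pd.Series:
    """CAPE: real price divided by the trailing ten-year
    (120-month) average of real earnings."""
    return real_price / real_earnings.rolling(120).mean()

def relative_cape(cape_series: pd.Series) -> pd.Series:
    """Relative CAPE: the current CAPE divided by its own
    trailing twenty-year (240-month) average."""
    return cape_series / cape_series.rolling(240).mean()
```

As a sanity check, on a flat history (constant real price of 15, constant real earnings of 1) the CAPE settles at 15 and the Relative CAPE at 1.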


Before presenting his findings, he discusses how the data was created, beginning in the early 1870s and ending in 2012. He melded data from the 1939 Cowles Commission for Research in Economics at Yale and various editions of Standard and Poor's Security Price Index Record and Analysts' Handbook to construct historical prices, dividends, and earnings for the three sectors. Shiller presents a series of charts and graphs showing nominal and real price indices, real total returns, 12-month trailing earnings, and the CAPE ratios and Relative CAPE indicators, each plotted against long-term returns, along with ten-year average returns for the CAPE and Relative CAPE indicators. The following table summarizes the correlations between subsequent 10-year real returns and the CAPE and Relative CAPE ratios for the three sectors and for three different time periods.

Shiller says that the correlations reported in the table are negative, similar in size across sample periods and across sectors, and largely statistically significant. This reinforces Shiller’s earlier work linking the CAPE ratio to forecasts of long-term aggregate market returns by showing that the ratios have forecasting ability for individual sectors as well.

Shiller next explores whether relative differences in CAPE ratios between pairs of sectors can help explain inter-sector differences in long-dated subsequent returns. The table presented below summarizes much of the information contained in these charts and graphs. The correlations all suggest that sectors with the lowest Relative CAPE ratios tend, over a long period, to be the sectors with the highest relative returns.

As a further robustness check, Shiller looks at whether the CAPE or Relative CAPE indicator actually predicts changes in future prices or changes in future 10-year earnings averages. He concludes that they are most helpful in predicting future returns and changes in prices.

Shiller then explores a hypothetical trading strategy based on the Relative CAPE indicators, quarterly allocating long-only positions across the three sectors based on the indicator and momentum. He finds that $1 invested in his simple rotation strategy in Q4/1902 would have grown to approximately $1,756 by January 2013. An investment in the S&P 500 would have grown to $569; an equally weighted portfolio of the three sectors would have grown to $856 over the same period. The rotation strategy holds promise, and he is implementing it.

Shiller says that the contribution of this work is threefold:
1. A long time series of prices, dividends, and earnings from the 1870s through the present was developed from primary sources for three sectors.
2. Using this data enables the investigation of the valuation of the US stock market for almost 130 years at the more granular level of sectors, and allows for the consideration of the relative valuation of these sectors.
3. On methodological grounds, the definition of the Cyclically Adjusted Price-Earnings ratio (CAPE) is extended to account for inflationary effects that occur because of the consideration of a long-term earnings history, and to correct for effects from changes in corporate payout policy.
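As a rough cross-check on those cumulative wealth figures, the implied compound annual growth rates can be backed out directly. The snippet below assumes a span of roughly 110.25 years (Q4/1902 through January 2013); the dollar values are the ones quoted above:

```python
def cagr(terminal_value: float, years: float) -> float:
    """Compound annual growth rate implied by $1 growing
    to terminal_value over the given number of years."""
    return terminal_value ** (1.0 / years) - 1.0

YEARS = 110.25  # Q4/1902 through January 2013, approximately

# Terminal values of $1, as quoted in the summary above.
for name, value in [("rotation strategy", 1756.0),
                    ("S&P 500", 569.0),
                    ("equal-weighted sectors", 856.0)]:
    print(f"{name}: {cagr(value, YEARS):.2%}")
```

The roughly one-percentage-point annual edge of the rotation strategy (about 7.0% versus 5.9% for the S&P 500) is what compounds into the threefold difference in terminal wealth.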


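The correlation exercise behind Shiller's table — a valuation indicator against subsequent ten-year returns — reduces to a short computation worth making explicit. A hypothetical pandas sketch (the alignment and annualization choices are assumptions for illustration, not Shiller's code):

```python
import pandas as pd

def forward_return_correlation(prices: pd.Series,
                               indicator: pd.Series,
                               horizon_months: int = 120) -> float:
    """Correlation of a valuation indicator with the *subsequent*
    annualized return over horizon_months (120 months = ten years).
    Both inputs are assumed to be monthly and index-aligned."""
    forward_growth = prices.shift(-horizon_months) / prices
    forward_annualized = forward_growth ** (12.0 / horizon_months) - 1.0
    return indicator.corr(forward_annualized)
```

Applied to a Relative CAPE series and a real total-return price index, a negative value would reproduce the sign Shiller reports for each sector.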
And from the Q-Group members present, congratulations on becoming a Nobel Laureate, Robert Shiller!

71. Is Economic Growth Good For Investors? (Fall 2013)
Jay Ritter

Economic growth is widely believed to be good for stock returns, making forecasts of growth a staple of international asset allocation decisions. But, Ritter asks, does economic growth always, or even generally, benefit stockholders? His answer, and the subject of this paper, is no, on both theoretical and empirical grounds.

Ritter’s argument is not that economic growth is bad: people who live in countries with higher growth tend to have higher incomes, higher standards of living, and longer life spans. The problem is that owners of capital do not necessarily benefit from that growth. The reason is that countries can grow rapidly, and for long periods of time, by applying more labor and capital and by introducing technological improvements, even though the return on capital is not high.

Using data from 19 countries with continuously operating stock markets from 1900-2011, Ritter finds that the cross-sectional correlation between real per capita GDP growth and real stock returns is negative in both local currency (-39%) and USD (-32%). These findings are similar to those in Jeremy Siegel’s Stocks for the Long Run, 2nd ed., and to those from a slightly larger subset of developed countries using more recent data (1970-2001). The negative correlation between real stock returns and real per capita GDP is also present in a sample of 15 emerging-markets countries using data from 1988-2011: -0.41 in local currencies and -0.47 in USD. The lack of a positive relationship between real per capita GDP growth (light grey) and real stock market returns (dark grey) can be seen in the chart that follows.

Ritter posits three possible explanations of the negative correlation between real stock returns and real per capita GDP growth:
1. High future growth is reflected in high price-earnings ratios at the start of the periods.
2. Many of the largest publicly traded companies in each market are multinationals, and their earnings are only weakly related to domestic GDP growth.
3. Stock returns are determined not by growth in economy-wide earnings, but by firm-specific performance.

Ritter dismisses the first two explanations. For the first, he says that starting valuation differentials do not make a tremendous return difference over a 112-year period. For the second, he says that the multinational explanation, originally proffered by Jeremy Siegel, could explain a lack of correlation, but not a negative correlation.

The third explanation is Ritter’s favorite. He says the results reflect the amount of equity capital raised over time to generate the growth, as well as the efficiency with which the capital was used. He cites as support the case of the United States, where corporate payout policy is important. He argues that the large distributions of capital to shareholders, whether through dividends or stock buy-backs, have been an important contributing factor to the high returns earned by U.S. stockholders. This puts the reinvestment decision back in the hands of investors and avoids the waste that would have occurred from reinvesting in declining businesses or entering unfamiliar ones. Higher payout policies also discipline management’s tendency to be overly optimistic and to overinvest. This argument, Ritter believes, is supported by the higher correlation between real dividend growth per share and high stock returns. As an example of the relationship, he notes that Japan had a relatively poor overall stock market performance (+3.60%) over the 112-year period and the lowest real dividend growth per share (-2.36%), despite the highest real GDP growth (+2.69%).

Ritter next discusses sources of economic growth. He notes that economic growth largely comes from increases in three main inputs: labor, capital, and technology. Increases in labor are attributable to increases in the size of the population, the percentage of the population that is part of the labor force, and the human capital of the workforce. In support, Ritter cites the work of Paul Krugman and Alwyn Young, who examined the high growth rates in the Soviet Union from 1930-1970 and in many East Asian countries from 1960-1993. They explained the tremendous growth in those countries as the result of combining huge supplies of underutilized labor (e.g., moving people out of subsistence agriculture) and a demographic dividend that came from lower birth rates with capital from high savings rates and imported technology. As Ritter explains, this combination need not translate into higher per-share profits for shareholders: consumers and workers may end up being the primary beneficiaries.

Ritter did not have time to address his remaining topic, predicting future returns. For this, he deferred to the research of an earlier presenter, Jeremy Siegel.

72. Market Expectations in the Cross Section of Present Values (Spring 2013)
Bryan Kelly

Kelly starts by reminding us that until 30 years ago we thought of the equity premium as a constant. This has changed: theory and data suggest returns move in subtly predictable ways. Although the debate continues about whether this is true, the dominant view is that equity returns are predictable. Kelly asks, “Why are EMH disciples OK with this?” To which he answers: predictable returns need not be a violation of the EMH.

Kelly posits that investors require compensation for assets that are risky. Investors, he says, demand high returns for assets that do poorly in bad times, when the marginal utility of consumption is high; they demand low expected returns for assets that do well in bad times (such as Treasuries or OTM puts). When risks change in a predictable way, the compensation investors require should change as well. This sets up Kelly’s key thought: if risk is predictable, so are returns.

Kelly notes that there have been three major approaches to predicting aggregate market returns: historical analyses, which assume the unconditional returns distribution in the future will look like the returns distribution in the past; theoretical calibrations based on preference-based models; and what he calls leading-indicator models, which combine theory with historical data. Primary examples of leading-indicator models are those based on aggregate valuation ratios such as book-to-market, price-to-dividend, and price-to-earnings. While these models have shown modest predictive ability in-sample, Kelly finds they have negative R-squareds out-of-sample: the predictive information actually hurts the forecasts.

Kelly argues that much more powerful predictive models can be constructed by exploiting the information in the cross-section of asset-specific valuation ratios. The intuition is that the same state variables that drive aggregate return expectations also govern the dynamics of thousands of asset-specific valuation ratios. He posits that the richness of this cross-section can be exploited in a smart way to generate a super predictor of future stock market returns.

Using this, Kelly proposes using market valuation ratios to predict market returns. Kelly uses Partial Least Squares (PLS), an easily implemented OLS-based approach, to estimate the latent factor that is optimal for forecasting. Rather than relying on a single predictor, he forms what he calls a “super predictor” from the weighted average of all the predictors. He then creates predictions by applying PLS to the cross section of:
1. Portfolio-level book-to-market ratios.
2. Stock-level book-to-market ratios.
3. Stock-level moving-average price ratios.
4. Country-level book-to-market ratios.

He then predicts:
1. Aggregate US market returns for 1 month and 1 year.
2. Aggregate US cash-flow growth for 1 year.
3. Characteristic-sorted US portfolio returns for 1 month and 1 year.
4. World index returns (ex-US).

He relates the estimated returns to macroeconomic variables. The results of this analysis are shown in the following chart.

Kelly concludes that the expected returns from his model:
1. Significantly outperform previous attempts to predict market returns.
2. Possess sensible correlations with suspected macroeconomic drivers of discount rates.

73. Innovative Efficiency And Stock Returns (Fall 2011)
David Hirshleifer

David Hirshleifer, Professor of Finance, Merage Chair in Business Growth, Paul Merage School of Business, University of California, Irvine, presented “Innovative Efficiency and Stock Returns,” coauthored with Po-Hsuan Hsu, Assistant Professor, School of Business, University of Connecticut, and Dongmei Li, Assistant Professor, Rady School of Management, University of California, San Diego.

Hirshleifer points out that recent studies have provided evidence suggesting that, due to limited investor attention, prices do not fully and immediately impound the arrival of relevant public information. This is especially so when such information is less salient or arrives during a period of low investor attention. This insight into prices may be particularly relevant when looking at the link between innovative efficiency and subsequent stock returns, an important contributor to a firm’s market value. Hirshleifer and his colleagues test whether a measure of innovative efficiency, the ratio of a firm’s patents to its R&D expenditure, predicts subsequent abnormal returns. There are several reasons why innovative efficiency should predict higher future returns:

1. Limited Attention. Innovations are usually officially introduced to the public in the format of approved patents that provide necessary detailed information. Firms
that are more efficient in innovation may be undervalued, and firms that are less efficient in innovation may be overvalued, due to investor under-reaction to information. The problem may come from the difficulty of getting, understanding, and processing patent information, and of determining the economic potential of innovative efficiency.
2. Q-Theories. Companies with higher innovative efficiency tend to be more profitable and have higher return on assets. All else equal, this implies that higher profitability predicts higher returns, since a high return on assets suggests that these assets were purchased at a discount.

To test their key prediction (innovative efficiency predicts stock returns), Hirshleifer defines innovative efficiency (IE) as the ratio of the number of patents granted to five-year cumulative R&D expenditures. He says that the evidence supports the basic prediction of both the limited-attention and Q-theories, and he finds a significantly positive relationship between innovative efficiency and future stock returns that is not explained by standard risk factor models. In fact, the value-weighted return of the high innovation (IE) portfolio is 38 basis points per month higher than that of the low-IE portfolio, and the alphas of the high-minus-low IE portfolio range from 45 to 46 basis points per month. Moreover, the high-minus-low IE portfolio loads significantly and negatively on the market and size factors. This implies that high-IE firms are less risky than low-IE firms from the perspective of conventional risk factor models.

For their test they use a sample of firms in the intersection of Compustat, CRSP, and the NBER patent database from 1981-2006. The IE measure is based on R&D expenditures over the preceding five years that contribute to successful patent applications. Hirshleifer notes that the lag length between R&D expenditures and patent applications is hard to identify precisely. As a result, they consider alternative measures of innovation.

To examine the return predictability of IE, they sort firms into three groups (low, middle, and high) at the end of February based on the 33rd and 66th percentiles of IE measured in the previous year. The two-month lag between the grant-year end and the time of portfolio formation is imposed to ensure that patent information was known to the public. The table on the next page provides the summary statistics for their sample.

Hirshleifer then points out that the high-IE group has the lowest R&D intensity, suggesting that IE is distinct from R&D intensity. The high-IE group also has a lower book-to-market ratio, higher ROA, lower CapEx/Assets, and higher institutional ownership, and issues less equity than the low-IE group. He then proceeds to describe the process they used in considerable detail.

In conclusion, Hirshleifer summarizes their findings:
1. Firms that are more efficient in innovation earn higher subsequent returns, and the relationship is robust when controlling for other well-known firm characteristics. In addition, traditional empirical factor pricing models do not explain this relationship.
2. The innovative efficiency proxy (patents granted per dollar of R&D capital) is a strong positive predictor of future returns.
3. A model that combines the investment-based three-factor model with the financing-based mispricing factor (Undervalued Minus Overvalued) captures most of the innovative efficiency effect in the full and large-firm subsamples, but not in the small-firm subsample.
4. Within the large-firm subsample, explanatory power comes primarily from the ROA factor.
5. In the small-firm subsample, explanatory power comes primarily from the misvaluation factor.
6. Proxies for investor inattention and valuation uncertainty are associated with a stronger ability of IE to predict returns. This provides support for psychological bias or constraints contributing to the IE-return relationship.

Hirshleifer concludes that these findings suggest that both risk and mispricing play a role in the IE-return relationship. More importantly, regardless of the source of the effect, the heavy weight of the EMI factor in the tangency portfolio suggests that innovative efficiency captures pricing effects above and beyond those captured by the other well-known factors.

74. The Other Side Of Value: Good Growth And The Gross Profitability Premium (Fall 2011)
Robert Novy-Marx

Robert Novy-Marx, Assistant Professor, Simon Graduate School of Business, University of Rochester, presented “The Other Side of Value: Good Growth and the Gross Profitability Premium.”

Novy-Marx starts with the proposition that gross profitability is a powerful predictor of the cross-section of average returns and a much stronger predictor than earnings or free cash flows. He says gross profitability:
1. Has about as much power as the book-to-market (B/M) ratio and is complementary to it.
2. Is negatively correlated with B/M and helps distinguish high-return “good growth” stocks from those with ordinary growth.

He then turns to a discussion of why gross profitability matters:
1. Gross profit is the “cleanest” measure of true economic profitability: earnings are punished for growth-related activities such as R&D and the development of organizational capital.
2. Free cash flows are further “punished” for capital investment, even when that investment is optimal.
3. Gross profit predicts economic growth, even after controlling for valuations. Novy-Marx calls it another dimension of growth.
4. Gross-profits-to-assets predicts growth in gross profits, earnings, cash flows, dividends, and repurchases.

To provide a strong test he uses stale data: end-of-year or fiscal-year data six months after its publication. He also tests high-frequency data: data published most recently. Novy-Marx says that incorporating gross profits, whether lagged or high-frequency, is a powerful predictor of the cross-section of average returns. It appears, though, that the signal is obscured by its negative correlation with book-to-market; in fact, after sorting on book-to-market alone it appears that profitable stocks underperform.

These results and others, Novy-Marx says, are difficult to reconcile with popular explanations of the value premium. Profitable firms are less prone to distress, have longer cash-flow durations, and have lower levels of operating leverage. Controlling for gross profitability explains most earnings-related anomalies, as well as a wide range of seemingly unrelated profitable trading strategies.

75. Share Issuance And Factor Timing (Spring 2011)
Robin Greenwood

Robin Greenwood, Associate Professor of Business Administration, Harvard Business School and NBER, presented “Share Issuance and Factor Timing,” coauthored with Samuel Hanson, Ph.D. Candidate, Harvard University. Greenwood had previously presented a paper at the Q-Group® seminar in the Fall of 2009.

Greenwood started by noting that research has found that, relative to other firms, firms that issue stock subsequently underperform, and repurchasers subsequently have high returns. He notes that there is a debate about whether these patterns should be interpreted as evidence of a corporate response to mispricing, or as fully consistent with market efficiency.

In fact, he and his coauthor suspect that what is really going on is that firms are arbitraging time-varying demand for characteristics. He says that investor sentiment revolves around “themes” or “narratives” for which characteristics serve as a proxy, and firms arbitrage this demand by issuing stock. This suggests to Greenwood that issuance may be useful for forecasting returns to characteristic-based factors. This idea is the core of the research and this presentation.

The typical approach to resolving a problem like this is to collect information about the characteristics of a company and associate these characteristics with average returns in a cross-sectional analysis. To clarify how their analysis is different, Greenwood lays out the two approaches using Google as an example:
• Usual approach:
  o Collect data on Google’s characteristics such as beta, size, book-to-market ratio, profitability, and dividend yield.
  o Associate each characteristic with some average return in a cross-sectional analysis.
• Greenwood’s approach:
  o Collect data on other firms that have the same characteristics as Google.
  o Use net issuance by similar firms to back out when these characteristics are mispriced.
  o Use this to improve the forecast of Google’s returns.

Implicit in this approach is the assumption that firms can act as arbitrageurs or liquidity providers relative to the time-varying characteristic return, albeit in a noisy fashion. He notes two caveats: market timing may not be the primary determinant of net issuance, and there may be limitations on the ability to act opportunistically. He believes that conditioning on the behavior of other firms may provide more information than simply looking at the company’s own issuance and repurchase decisions.

In this research Greenwood and his coauthor limit themselves to characteristics that are measurable and have been recognized as important in previous research:
• Value & Size
  o Book-to-Market
  o Size
• Size Related
  o Nominal Share Price
  o Age: Years on CRSP
  o β from a 24-month trailing CAPM regression
  o σ, residual volatility from the CAPM regression
  o Distress, bankruptcy hazard rate
  o Dividend policy, as a binary indicator for firms that pay dividends
• Other
  o Sales growth (ΔS/S): year-over-year sales growth
  o Accruals (Acc/A)
  o Profitability (E/B)
They allow the data to tell them which characteristics are important.

One way to examine what impacts net share issuance (N/S) is to relate it to a single characteristic. Using deciles, Greenwood creates Issuance Characteristic Tilt Graphs for different characteristics and for several different years. Shown below is the 2005 graph relating size and share issuance. The size of the circles in the first column varies with the number of firms in the N/S deciles. As can be seen, the relationship between issuance and company size is negative in 2005. This suggests the importance of the characteristic to share issuance in that year. The slope and level of the line vary from year to year.

Shown in a somewhat different way, Greenwood relates the issuer-repurchaser spread to the characteristic. The chart for the size characteristic is shown below. Not surprisingly, the spreads correlate across characteristics. Greenwood says that:
• Share issuance forecasts characteristic-based factor returns.
• Firms issue equity prior to periods when other stocks with similar characteristics perform poorly, and repurchase prior to periods when other firms with similar characteristics perform well.
• The strongest results are for portfolios based on book-to-market, size (i.e., HML and SMB), and industry.

He takes it a step further and concludes that issuer-repurchaser characteristic spreads:
• Forecast characteristic returns.
• Contain information beyond B/M and have forecasting power for non-issuing firms.
• Are consistent with the role of firms as macro liquidity providers.

Whether it is due to required returns or mispricing, he says the evidence points to mispricing.
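The issuer-repurchaser spread idea can be sketched mechanically: a characteristic's average value among net issuers minus its average among net repurchasers, computed cross-section by cross-section. The sketch below is hypothetical (the column names and the zero cutoff for classifying issuers are assumptions, not the paper's exact construction):

```python
import pandas as pd

def issuer_repurchaser_spread(firms: pd.DataFrame, characteristic: str) -> float:
    """Average characteristic value among net issuers minus the
    average among net repurchasers, for one cross-section of firms.
    Expects a 'net_issuance' column (issues net of repurchases,
    scaled by shares outstanding) and a column named by
    `characteristic` (e.g. log market capitalization)."""
    issuers = firms.loc[firms["net_issuance"] > 0, characteristic]
    repurchasers = firms.loc[firms["net_issuance"] < 0, characteristic]
    return issuers.mean() - repurchasers.mean()
```

A negative spread on size — issuers smaller, on average, than repurchasers — expresses the same fact as the negative issuance-size relationship in the 2005 tilt graph.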
Greenwood believes their work has implications for researchers who study the stock market performance of secondary and initial public offerings and recent acquisitions. Among other things, he says that those doing event studies that compare the performance of sample firms to firms matched on characteristics will omit any returns coming from event firms’ timing of those characteristics when they should not do so.

76. Hard Times (Spring 2011)
Christopher Polk

Christopher Polk, Director of the Financial Markets Group, Professor of Economics, London School of Economics, presented “Hard Times,” coauthored with Stefano Giglio, Ph.D. Candidate, Department of Economics, Harvard University.

Polk begins with the question that motivates this study: where will the market go? He says that many think it is easy to forecast the market’s direction, particularly when there is a bubble. However, it is not, and, at times, forecasting has been banned. To demonstrate this Polk cites NY State Statute 899, which explicitly forbade forecasting; guilty transgressors faced jail time and/or a fine.

In spite of prophecy’s reputation, given that the US stock markets have had two major boom/bust cycles in the past 15 years, the ability to understand them is important, and the ability to predict them even more so. Thus, Polk asks the following questions:
• How should we interpret these dramatic fluctuations?
• Are stock prices driven by changes in discount rates or by expectations about profits?

Polk says that the answers to these questions are important because they tell us about the proximate causes of stock market fluctuations, and the answers may reveal future prospects for the stock market.

To begin, Polk focuses on the two recent downturns: the “Internet Bubble” of 2000-02 and the “Financial Crisis” of 2007-08. He says that during the first bubble stock prices fell primarily because discount rates increased. However, low discount rates were the main driver in the booms of the 1990s. The Financial Crisis was, Polk says, largely expected-cash-flow based, and cash-flow-based declines are much more long lasting.

To examine the distinction between the impact of discount-rate and cash-flow expectations changes, Polk uses a structured econometric approach that builds on research by others but adds the estimate of the aggregate VAR. This imposes the cross-sectional restrictions of the intertemporal capital asset pricing model (which is assumed to be correct), thus reducing uncertainty about the components of stock market fluctuations. This methodology, Polk says, relies on specific assumptions about the data generating process, which he believes are reasonable, although their results are also consistent with a simpler and much less elegant model.

In the model, the authors include the following variables, with data on French’s six portfolios:
• Excess log return on the CRSP value-weighted index
• Log ratio of the S&P index to 10-year smoothed earnings
• Term spread in Treasury yields (10 years to 3 months)
• Small-stock value spread (difference in log B/M for small growth and small value portfolios)
• Default spread (BAA to AAA bonds)

Their analysis shows that:
• Theoretical restrictions improve the out-of-sample forecasting power of the VAR.
• Return forecasts were much lower in the 1990’s than in the mid 2000’s and increased much more rapidly in 2000-02 than in 2007-09. The contrast is

• The increase in expected return mitigated the impact of the tech bust for long-term investors. However, the changes in expectations were particularly abrupt in March 2009, and a strong recovery should not have been anticipated at the time, a forecast in line with his presentation title "Hard Times."

From their analysis, Polk comes to three concluding observations:
1. Value stocks have higher returns on average because they have lower realized returns during periods of negative cash-flow shocks.
2. Imposing this theory when estimating a time-series model for the equity premium improves out-of-sample performance.
3. In contrast to the tech boom/bust, a good portion of the recent downturn reflects expectations of significantly lower future profitability.

77. Socially Responsible Investing And Expected Stock Returns (Spring 2011)
Sudheer Chava

Sudheer Chava, Associate Professor, College of Management, Georgia Institute of Technology, presented "Socially Responsible Investing and Expected Stock Returns."

The recent off-shore oil spill by British Petroleum in the Gulf of Mexico and the resulting environmental and economic damage emphasize the need to understand how environmental externalities can be internalized by a firm. Chava points to four possible mechanisms that can impact a firm's environmental practices:
1. Taxes, such as a carbon tax.
2. Regulation, such as institution of a cap and trade program and/or imposition of tough new regulations on the environmental performance of firms.
3. Environmentally responsible lending, where lenders may impose a higher cost of borrowed capital and/or more restrictive terms on non-responsible firms.
4. Socially responsible investing (SRI), where current and potential shareholders may demand higher expected returns from firms that are not socially responsible.

Exclusionary ethical investing can lead to polluting firms being held by fewer investors, thus resulting in a lower stock price and an increased cost of capital. Socially responsible lending can lead to an increase in the cost of capital for the affected firms if a significant number of lenders adopt environmentally sensitive lending policies and firms cannot easily substitute between various sources of capital.

The import of socially responsible investing, Chava points out, is clear: in late 2010, $3.07 trillion in assets were tied to SRI in the US, and SRI constituted 12.2% of total US assets under management. In addition to an impact on stock price, environmental concerns can impact lending costs. As an example he points to the Equator Principles, initiated by a group of banks together with the International Finance Corporation (IFC). Signatories agree to integrate social and environmental risk into their lending decisions. Current signatories represent approximately 80% of global lending volume and include Bank of America, Citibank, and J.P. Morgan Chase.

Chava asks: does SRI or environmentally restricted lending actually impact the environmental profile of a firm? To answer this question he looks at the impact on the firm's expected stock returns and on the price and terms of its bank loans. For firm-level environmental data Chava uses ratings information from KLD Research & Analytics, Inc. on environmental concerns (e.g. hazardous waste, substance emission and climate change concerns) and environmental strengths (e.g. environmentally beneficial products, pollution prevention and clean energy strength). KLD, he says, has data for a larger cross-section of firms over a longer time period than any of the alternate data sources. The data covers the S&P 500 from 1991-2000 and the Russell 2000 starting in 2001.

This data, Chava says, does have some problems:
• Disclosure of greenhouse gas emissions is not mandatory.
• It is difficult to evaluate and quantify the risk implied by the disclosed numbers.
• KLD collects information from a number of data sources.
• KLD's team of qualified analysts evaluates the data and makes decisions about whether the firm has a specific environmental exposure or not.

From this data Chava constructs the following variables:
• Numconcerns measures the total number of environmental concerns for the firm recorded in the KLD database.
• Numstrength is the total number of environmental strengths for the firm recorded in the KLD database.
• Netconcerns is a net measure of environmental concerns and is constructed as numconcerns - numstrength.
• Climscore is constructed as the difference of climate change concerns (climchange) and clean energy strength (cleanenergy).

To determine the impact of the variables on stock price he uses a proxy for ex-ante stock returns derived from the Implied Cost of Capital (ICC). His ICC is based on a discounted cash flow model of equity valuation, and his analysis controls for such things as leverage and includes a test for industry impact. ICC, Chava says, has the advantage of being a forward-looking measure that does not explicitly rely on any asset pricing model and does not need long sample periods. He does point out, however, that it requires assumptions about the forecasting horizon and dividend payouts.

Using implied cost of capital derived from analysts' earnings estimates, he finds that investors demand significantly higher expected returns from non-SRI stocks, those that are excluded by environmental screens widely used by SRI. In addition, these stocks have lower institutional ownership and are held by fewer institutional investors.

Next Chava turns to the question of environmentally responsible lending. He asks why lenders consider the environmental profile of the firm in pricing loans. He suggests that it could be any or all of the following: potential credit risk related to regulatory uncertainty and unexpected change; uncertainty regarding borrower litigation and compliance costs; potential impact of expanded lender liability laws; and possible impact of current and future environmental practices on the lender's reputational risk. As a result of his analysis, laid out in detail in his paper, he finds:
1. The environmental profile of a firm affects the price and non-price terms of its bank loans because:
   a. Environmental concerns increase loan spreads
   b. Environmental strengths decrease loan spreads
2. Lenders consider the environmental profile of the firm in pricing loans because:
   a. The environmental profile is not simply a proxy for an omitted component of the firm's default risk.
   b. Environmental strengths and concerns are priced in both short-term and long-term loans.
   c. There is lower syndicate size for firms with environmental concerns and larger syndicate size for firms with environmental strength.
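The ICC described above is only sketched at a high level in the proceedings. A minimal illustration of the mechanics, under assumptions of my own (a simple DCF with a Gordon-growth terminal value, solved by bisection; Chava's actual implementation is richer and controls for leverage and industry):

```python
def implied_cost_of_capital(price, cash_flows, terminal_growth=0.03,
                            lo=0.0301, hi=1.0, tol=1e-8):
    """Solve for the discount rate r that equates the present value of
    forecast cash flows (plus a Gordon-growth terminal value) to price.
    lo must exceed terminal_growth for the terminal value to be finite."""
    def pv(r):
        n = len(cash_flows)
        v = sum(cf / (1 + r) ** (t + 1) for t, cf in enumerate(cash_flows))
        # Terminal value grown from the last forecast cash flow
        v += (cash_flows[-1] * (1 + terminal_growth)
              / ((r - terminal_growth) * (1 + r) ** n))
        return v

    # PV is decreasing in r: if PV is still above price, the rate must rise
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if pv(mid) > price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical inputs: a $50 stock with three years of forecast cash flows
icc = implied_cost_of_capital(price=50.0, cash_flows=[2.0, 2.2, 2.4])
print(round(icc, 4))
```

This makes concrete why the measure is forward-looking: it is pinned down by today's price and analysts' forecasts rather than by a long history of realized returns.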

While it is a challenging task to conclusively rule out the risk story, the results are consistent with a reputation-risk channel of information transmission. They suggest that socially irresponsible investing and the consequent increase in the cost of capital is one channel through which environmental externalities can be internalized by the firm. A second channel is the cost and availability of debt. Regardless of channel, on the basis of this research, one can conclude that it pays for a company to be environmentally responsible. For the lenders and shareholders, it changes the environmental risk profile of the firm at the cost of potentially lower expected returns.

Recovery Theory

78. The Recovery Theorem (Spring 2013)
Stephen A. Ross

A central tenet of modern option pricing theory is that the value of an option can be computed without reference to the expected return of the underlying asset. This is a well-known feature of the Black-Scholes option pricing model and its extensions. Option prices can and have been used to infer estimates of the market's view of risk (volatility), but a whole generation of academics and practitioners has grown up believing that option prices have very little to say about expected returns or the actual return distribution. Steve Ross kicks off the Spring 2013 Q-Group meetings by presenting his work on this problem.

Ross begins his talk by discussing how financial data has traditionally been used. In (sovereign) fixed income markets, the full term structure of interest rates is often used - many times in conjunction with a parametric interest rate model - to back out forward rates. The forward rates are used to help predict anticipated future spot rates, to estimate term premia, and to help calibrate models of the term structure that capture its evolution over time. In equity markets, Ross notes, there has been virtually no attempt to use forward-looking data aside from implied volatility estimates. We have, he says, largely relied on long periods of historical data to estimate quantities such as expected returns, long-term risk premia, and future expected return differentials. Ross says a good example of this is the use of the dividend yield and Shiller's CAPE ratio to predict ten-year returns.

Ross motivates the potential use of option prices by showing the surface of implied volatilities on S&P puts and calls for a particular date, with axes of time to maturity (tenor), moneyness (percentage of spot), and implied volatility, as shown in the chart below. The surface displays the following familiar features: a smile, with out-of-the-money and deep-in-the-money options having the highest implied volatilities; and curvature that is greatest for short-dated options. The Recovery Theorem is intended to provide a way to go from the contingent prices to the natural probabilities.

Ross develops the intuition for the Recovery Theorem using the binomial option structure. In the binomial framework, prices do not depend on the natural probabilities of the up and down states, and thus there is no way to infer the market's estimate of expected returns. Ross notes that the contingent price is the pricing kernel - the marginal rate of substitution that captures risk aversion. Ross' key insight is that the pricing kernel, which determines how risky cash flows are risk adjusted, has the form of a representative agent with state-independent and additively separable preferences. This allows for the recovery of the natural probabilities from a known matrix of Arrow-Debreu state prices. Because the Black-Scholes model's state space is not finite, Ross needs another approach. He uses the Recovery Theorem.

Ross illustrates the use of the Recovery Theorem in a three-step process, using:
1. Option prices to get pure security prices.
2. Pure prices to find the contingent prices.
3. The Recovery Theorem to determine the market's risk aversion, the market's natural probabilities for equity returns, and the pricing kernel.

Ross uses data from May 1, 2009, a date not far removed from the stock market bottom in March 2009. To test the model, he uses the recovered natural probabilities. The results suggest a mean excess return over the next year of 7.13%; the historical mean from a bootstrapped sample suggests an excess return of only 3.06%.

In comparing the recovered natural probabilities with the risk-neutral probabilities, Ross finds that the risk-neutral distribution has a much fatter left tail and that the natural probability density function is much more symmetric. In addition, the recovered probability density function has significantly more mass at higher return levels than the density derived from historical returns. Thus, the recovered natural probabilities seem to show much greater upside potential in returns than one would have estimated from historical data.

Ross acknowledges that there are many implications and features of this model that he is still trying to understand. He concludes that his next steps are to:
1. Test the predictive power of the model.
2. Explore the potential benefits of the Recovery Theorem.
3. Extend the analysis to the fixed income markets.

Risk and Return: Variance/Volatility, Correlation & Equity Premium

79. How Do Investors Measure Risk? (Fall 2015)
Jonathan Berk

Tuesday culminated with what felt like a CAPM revival meeting led by "Evangelist" Jonathan B. Berk, A.P. Giannini Professor of Finance at Stanford Graduate School of Business. Berk presented an engaging defense of the CAPM, and compared CAPM disbelievers to earlier followers of Ptolemy, who believed the data better supported the theory that the sun and the planets rotated about the earth than the newer theory of Copernicus that the earth and other planets orbited around the sun. (Copernicus incorrectly assumed the planets made circular orbits, rather than elliptical ones.)

Berk's research focuses on the comparative ability of alternative asset pricing models to predict flows into and out of mutual funds. Building on the model in Berk and Green (2004), he argued that the competition for alpha results in flows into (out of) mutual funds with positive (negative) risk-adjusted performance until the point where the net alpha is zero. The key results discussed by Berk rely on the following simple regression equation:

S(F_i,t+1) = a + β_im * S(α_i,t+1) + ε_i,t+1

where S(F_i,t+1) is the sign of mutual fund i's investor flow at date t+1 (the increase in assets under management, adjusted for investment returns, between dates t and t+1), and S(α_i,t+1) is the sign of fund i's outperformance—according to a particular model—between the same dates. Berk presented a theoretical argument that one can rank the predictive efficacy of alternative asset pricing models by the size of β_im in the above equation. The model that investors most rely on will have the highest coefficient. In contrast, if flows and contemporaneous measures of outperformance were unrelated, as with performance measured by a "bad model" that investors don't believe in, we would expect the slope coefficient to be zero.

For ease of interpretation, Berk presented evidence on the quantity (β_im + 1)/2, which will equal 50% if flows and performance are unrelated (β_im = 0). Berk's results are based on a sample of 4,275 U.S.-domiciled domestic equity mutual funds over the January 1977 – March 2011 period --- a period characterized by substantial growth in mutual fund assets under management (AUM) and a concomitant decline in direct investing in equities. As flow data is unavailable, the analysis is based on the change of AUM after adjusting for the performance of a relevant benchmark. (The performance adjustment is based on a mimicking set of Vanguard funds.) The key results, for different forecast horizons, are summarized in the table below.

At the three-month forecast horizon (1st column), "alpha" measured with the CAPM regression model using the CRSP value-weighted index is the top-performing model and predicts the sign of the flows correctly 63.63% of the time. This version of alpha outperforms no risk adjustment (58.55%), the return in excess of the market (62.08%), the Fama-French model (63.14%), and the Fama-French-Carhart model (63.25%). Interestingly, the CAPM model using the CRSP value-weighted index --- an index that is unavailable to most investors --- outperformed the CAPM model using the S&P 500 index across all investment horizons (comparing the 1st and 2nd rows of the table).

Berk next turned to discussing the statistical significance of his findings. The following table shows the t-statistics associated with the quantity (β_im + 1)/2 for each risk model at the three-month horizon (1st column of numbers) and the t-statistics for the pair-wise differences in this performance measure for each risk model (remaining columns). As mentioned above, the CAPM, using the value-weighted CRSP index, explains 63.63% of the signs of fund flows. It beats a no-risk model (63.63% vs. 58.55%) with a pair-wise t-statistic of the difference in performance measures of 4.98 (second column from right).

Among the results that Berk highlighted were: people appear to be using the CAPM with the CRSP value-weighted index (to his surprise); mutual fund investors are not risk neutral (far right column: the CRSP CAPM works better than returns relative to the risk-free rate); the CRSP implementation of the CAPM beats the two multi-factor models (although not statistically significantly at the three-month horizon); and the CRSP CAPM does better than simply looking at returns relative to the market index. Berk noted that it remains an anomaly why the CAPM does not work in explaining the cross-section of expected returns, and that the CRSP CAPM only explains about ~64% of the fund flows. Hence, there is more to explain.

Bibliography:
Berk, Jonathan and Jules van Binsbergen, "How Do Investors Measure Risk?" available at http://www.q-group.org/wp-content/uploads/2015/05/qgroup.pdf.
Berk, Jonathan B. and Richard C. Green, 2004, "Mutual Fund Flows and Performance in Rational Markets," Journal of Political Economy, 112 (6), 1269-1295.

80. Monetary Policy Drivers Of Bond And Equity Risks (Fall 2015)
Carolin Pflueger

Carolin Pflueger, Assistant Professor of Finance at the University of British Columbia, contributed to this "Macro Q conference" by helping us to better understand the linkages between the macro economy, sovereign rates (both US and international), and corporate spreads. Carolin began her talk by reminding Q that, historically, the sign of the correlation between US five-year rates and stock returns has been both positive and negative, as depicted in the chart below. The chart shows three important periods: pre-Volcker (rising inflation), Volcker and early Greenspan (inflation fighting), and late Greenspan and Bernanke (renewed attention to output stabilization).

In the recent period, including during the Great Financial Crisis, Treasury bonds and notes have been an effective hedge to equity market investments (as can be seen by the negative beta in the chart above). This is because a weakening economy has been associated with a fall in inflation, real interest rate declines, and an investor flight to quality. The last two factors also drove up the price of TIPS. In earlier decades, however, Treasuries had a positive correlation with stocks (as can be seen in the chart above) because when the economy weakened, inflation rose, real interest rates rose, and the flight to quality was into cash or commodities and out of both stocks and Treasuries. All three factors drove down the prices of Treasuries and would have driven down TIPS as well.

The possibility that bond and stock prices could be, and have been, positively correlated at times is particularly important for pension plans, as they could be exposed to a "perfect storm" in such an environment: dramatic declines in equities (a substantial portion of their assets) and dramatic increases in the present value of their liabilities.

The correlation between bond and stock prices is also an important determinant of equilibrium credit spreads. The stronger the positive correlation between stock and bond prices, the higher the credit risk, since economic slowdowns are then associated with higher borrowing costs. The empirical link between these bond betas and credit spreads is very apparent in the international data that Pflueger presented, which is reproduced below.
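The changing "nominal bond beta" at the center of this discussion can be illustrated with a rolling regression of bond returns on stock returns. The series below are simulated with an arbitrary sign-switching regime, purely to mimic the qualitative pattern described above; they are not the authors' data.

```python
import numpy as np

# Simulated daily returns with a bond beta that flips sign halfway through,
# loosely mimicking the positive-beta (pre-2000) and negative-beta (recent)
# regimes discussed in the talk.
rng = np.random.default_rng(1)
T = 1000
stocks = rng.normal(scale=0.01, size=T)
true_beta = np.where(np.arange(T) < T // 2, 0.3, -0.2)
bonds = true_beta * stocks + rng.normal(scale=0.004, size=T)

# Rolling-window OLS slope: cov(bonds, stocks) / var(stocks)
window = 250
betas = np.array([
    np.cov(bonds[t - window:t], stocks[t - window:t])[0, 1]
    / np.var(stocks[t - window:t])
    for t in range(window, T + 1)
])
print(betas[0].round(2), betas[-1].round(2))  # positive early, negative late
```

Estimated this way on actual Treasury and equity returns, the rolling beta traces out the regime changes that Pflueger links to monetary policy.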

What causes the change in bond risks over time? Changes in macroeconomic shocks? Changes in monetary policy? Pflueger and her co-authors employ a New Keynesian macro model to look at this question and find that both can contribute. The model has three key building blocks:
1. The Investment and Saving (IS) curve that links output and real interest rates in equilibrium through an Euler equation that reflects habit formation on the part of consumers/investors,
2. A description of firms' price-setting behavior that links inflation and output in equilibrium, reflects nominal rigidities, and is captured by the Phillips curve (PC), and
3. The Fed's monetary policy rule that describes the Fed's procedure for setting short-term nominal interest rates, which is presumed to depend on the deviation of inflation from the Central Bank's target, the output gap, and lagged interest rates.

Pflueger presented the results for the Monetary Policy rule (top panel below). As expected, the importance of the output gap is much higher in the third sample period, and decreased slightly from the first to the second period, as the Fed focused on fighting inflation.

We also see in this table that:
• Fed monetary policy persistence has increased substantially since 2000,
• The estimated volatility of PC shocks is the largest in the earliest sub-period, which comprised major global oil price shocks,
• The estimated volatility of the MP shocks was the highest during the inflation-fighting years of the Volcker and early Greenspan regime, and
• The estimated volatility of the Inflation Target shocks has been the highest in the last period, which has seen a dramatic decrease in long-run expected inflation.

Pflueger showed Q that the model can help explain and understand these other known facts:
• The sign change of nominal bond betas across the three sub-periods,
• The large positive beta and high volatility of nominal bonds after 1977, which is consistent with a change in monetary policy towards a more anti-inflationary stance,
• Changes in the volatility of supply shocks through the Phillips Curve can affect nominal bond risks,
• The size and composition of shocks to Monetary Policy affect bond betas, and
• Changing fundamental risks can be amplified by time variation in risk premia, leading to "flights" to quality.

Bibliography:
Campbell, John, Carolin Pflueger, and Luis Viceira, "Monetary Policy Drivers of Bond and Equity Risks," available at http://www.q-group.org/wp-content/uploads/2015/12/Pflueger_paper.pdf.

81. Origins Of Stock Market Fluctuations (Fall 2014)
Martin Lettau

Martin Lettau, Professor of Finance, University of California, Berkeley, asks the question: "What economic forces drive the level of the stock market at different horizons?" Lettau's goal is to explain the variability in real stock market levels (which represent real aggregate financial wealth) using data since 1952.

His focus is different from many past studies, which have attempted to explain stock market returns over shorter periods—such as monthly or yearly. This paper attempts to explain the stock market level, which is a cumulation of these shorter-horizon average market returns. Simply put, Lettau wishes to pin down the types of economic shocks that contribute the most to the variability of long-run stock market expected returns.

Lettau develops a model that consists of two types of agents: shareholders and workers. The representative shareholder in the model can be thought of as a large institution or wealthy individual who derives all income from investments; on the other hand, the representative worker consumes his labor income every period and does not invest. This two-representative-agent model allows more realistic aspects of the economy, such as the non-participation in the stock market of a large fraction of the workforce. A result of the modeling is new insights into how redistribution of wealth between workers and investors moves the stock market.

Lettau's vector autoregressive model (VAR) links together (1) consumption, (2) labor income, and (3) financial wealth, with four quarterly lags of each as explanatory variables (to handle seasonality). As an interesting aside, Lettau notes that stock holdings are, by far, the most volatile asset on the representative household balance sheet—far more than real estate, human capital, etc. This statistic supports the use of financial wealth as a proxy for all wealth in his model. Also, Lettau confirms that the growth rate of measured consumption in the U.S. has been incredibly smooth, even though representative individual net worth fluctuates considerably. This suggests that increases in labor income must come from decreases in financial wealth in the economy.

Lettau interprets his model as containing three economic forces that create shocks to the level of the stock market:
1. Technological progress that raises the productivity of capital and labor, such as the invention of the transistor: Lettau finds that this economic force determines long-run growth trends in the stock market, but plays a small role in market fluctuations.
2. Redistribution of rents between capital and labor, such as the decrease in the power of labor unions since about 1980, which shifted rents to capital: Lettau finds that this is the main economic force that has driven the long-run value of the stock market since the 1970s.
3. Changes in risk aversion/sentiment that are independent of the above macroeconomic fundamentals: Lettau finds that this economic force explains high-frequency stock market movements, but not longer-term market levels.

Specifically, Lettau uses his VAR model to decompose the short-run (quarterly) variance of the stock market, and measures the contribution of each of his three economic forces. He next turns to explaining longer-run changes in stock market wealth. As he lengthens the measurement horizon, he finds a decreasing role for sentiment, and an increasing role for redistribution. One particularly interesting period that he studies is the economy since 1982. Lettau's model decomposes the 59% increase in the real level of the stock market since 1982 (net of the long-run trend of the stock market).
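The reduced-form core of such a system, three series with four quarterly lags of each, can be sketched as follows. The data are simulated, and Lettau's structural identification of the three economic shocks is not shown; this is only the estimation step.

```python
import numpy as np

# Minimal VAR(4) sketch: consumption, labor income, and financial wealth,
# each regressed on four quarterly lags of all three series (simulated data).
rng = np.random.default_rng(2)
T, k, p = 300, 3, 4  # quarters, variables, lags
data = rng.normal(size=(T, k)).cumsum(axis=0)  # random-walk-like levels

# Regressor for date t: [1, y_{t-1}, y_{t-2}, y_{t-3}, y_{t-4}] flattened
rows = [np.concatenate([[1.0], data[t - p:t][::-1].ravel()])
        for t in range(p, T)]
X = np.array(rows)
Y = data[p:]
B, *_ = np.linalg.lstsq(X, Y, rcond=None)  # (1 + k*p) x k coefficients
print(B.shape)
```

Variance decompositions of the kind Lettau reports are then computed from the fitted system's impulse responses, once the shocks have been given a structural interpretation.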

For this period, he finds the following breakdown of influences of economic forces on aggregate stock market wealth. Lettau notes that the large negative productivity shock is mainly due to the large decrease in consumption during the Great Recession of 2007-2009.

Perhaps the biggest takeaway from Lettau's talk is the following graph, which is an estimate from his VAR model of the influence of shocks in redistribution on labor income and the stock market. The lesson from this graph is that the level of labor income—which he interprets as a redistributive force—has had a profound (and inverse) impact on total stock market wealth. Interestingly, Lettau also noted that, without the redistribution from labor to capital that has occurred since 1980 (as depicted by the solid black line), the estimated level (detrended) of the stock market (dashed red line) would be 10% lower today.

82. "The Divergence of High- and Low-Frequency Estimation: Causes and Consequences" (Fall 2014)
Mark Kritzman

Almost all financial models require the estimation of sample moments before they can be implemented. The models are typically derived in simplified settings (e.g. a world with only two periods) where the parameters are presumed known, and the models, hence, provide no guidance about parameter estimation. Practitioners, in particular, are thus left to their own devices to choose the appropriate sample period and data frequency. Statistical theory tells us that if returns are independently and identically distributed (i.i.d.), the precision of the estimate of mean returns depends on the length of the sample period, and the precision of returns' second-moment estimates depends on the sampling frequency. It is therefore common, in practice, to use high-frequency data (e.g. monthly) to estimate risk parameters --- even when longer-term investment horizons are of primary interest.

One of the bedrock assumptions underlying this practice is the belief in the appropriateness of scaling high-frequency (e.g. monthly) estimates of volatility to obtain longer-term volatility estimates (e.g. annual). In the case of using monthly data to obtain annual volatility estimates, the typical practice is to presume returns are (or are close enough to) independently and identically distributed, and multiply the monthly standard deviation estimate by the square root of twelve to arrive at an annual volatility estimate. In his talk, Mark Kritzman of Windham Capital Management challenged this conventional wisdom and presented strong evidence that:

• The working heuristic assumption that returns are independently and identically distributed is not a good approximation when it comes to estimating second moments,

• Volatility estimates do not scale with time,
• Volatility estimates using longer-run data can be as much as 100% higher than volatility estimates using shorter-run data,
• Correlation estimates also can differ dramatically depending on frequency choices,
• The sampling frequency decision can have important ramifications for the optimal asset allocation, and
• The choice of sampling frequency and period can have a dramatic impact on performance evaluation --- to the extent, at times, of flipping ordinal performance rankings.

Kritzman introduced the concept of excess dispersion as a measure of the distance between the actual distribution of returns and the i.i.d. returns' distribution assumption used in most practical implementations. He defined excess dispersion as the fraction of the distribution of lower-frequency returns that is outside the one-standard-deviation tails implied by monthly returns. This is depicted in the figure below, taken from his presentation, which is based on the relative returns of U.S. and emerging market equities.

Kritzman showed that excess dispersion could mathematically be decomposed into three components: (i) non-normality, (ii) significance of lagged autocorrelations, and (iii) the presence of significant lagged cross-correlations. In general, Kritzman presented evidence that: a) non-normality explains very little of the excess dispersion, b) lagged auto-correlations explain a significant amount of the excess dispersion for domestic assets, and c) lagged cross-correlations have a big impact internationally.

Kritzman discussed, in practice, how excess dispersion impacts portfolio construction (one can get very different optimal portfolios), the performance evaluation of hedge and mutual funds (relative rankings often significantly change), empirical estimates of the steepness of the security market line (the SML appears much steeper when using longer-frequency data), and the apparent attractiveness of risk parity. Risk parity, for instance, has a better Sharpe ratio than a traditional 60/40 equity/bond allocation --- using monthly data --- over the 1929-2010 period (0.42 versus 0.36). Using ten-year data, however, risk parity has a dramatically lower Sharpe ratio (0.16 versus 0.23).

83. "Visualizing The Time Series Behavior Of Volatility, Serial Correlation And Investment Horizon" (Fall 2014)
Ralph Goldsticker

Ralph Goldsticker, CFA, of the Q-Group picked up on the theme of Mark Kritzman's talk from Tuesday afternoon regarding the empirical challenges in estimating return volatilities and correlations. The key message of his talk was that the choice of sampling period and return frequency matters, and that there is no objective quantitative metric for determining the "best choice". Ultimately, financial researchers must make a qualitative assessment and make the choices that seem to best fit the situation.
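The scaling point that both talks turn on can be reproduced with simulated autocorrelated returns: annualizing a monthly standard deviation by the square root of twelve understates the volatility measured directly from annual returns. The AR(1) process below is my own illustrative choice, not data from either presentation.

```python
import numpy as np

# Simulate AR(1) monthly returns with positive serial correlation, then
# compare sqrt(12)-scaled monthly volatility to volatility measured
# directly from non-overlapping annual (summed) returns.
rng = np.random.default_rng(3)
n_years = 2000
phi = 0.3  # positive monthly autocorrelation
eps = rng.normal(scale=0.03, size=n_years * 12)
monthly = np.empty_like(eps)
monthly[0] = eps[0]
for t in range(1, len(eps)):
    monthly[t] = phi * monthly[t - 1] + eps[t]

scaled_annual_vol = monthly.std() * np.sqrt(12)          # i.i.d. heuristic
annual = monthly.reshape(n_years, 12).sum(axis=1)        # log-return sums
direct_annual_vol = annual.std()                         # measured directly
print(scaled_annual_vol.round(3), direct_annual_vol.round(3))
# With positive serial correlation, direct_annual_vol exceeds the scaled one.
```

Negative serial correlation (mean reversion) flips the inequality, which is the mechanism behind the "time diversification" result Goldsticker shows at 36-month horizons.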

128

The following chart depicts the importance of both sampling period and frequency in estimating S&P volatility. Goldsticker introduced the concept of cumulative contribution to variance as a tool for helping to analyze the behavior of volatility over time. Assuming there are N total observations in a sample, he defined the cumulative contribution to variance up until time t as the sum of squared deviations from the sample mean through observation t, expressed as a fraction of the full-sample sum of squared deviations.

The next chart depicts the cumulative contribution to variance for the S&P over a forty-year period. It is readily apparent from the graph that terminal variances (the ending points of each line) increase with holding period --- reflecting positive serial correlation (momentum) in 12-month returns. The graph also shows that most of the positive serial correlation occurred from 1995 through 2008 --- as the 12-month line deviates faster during that period. The graph below, comparing 12-, 24-, and 36-month returns, shows that there is time diversification (mean reversion) at longer horizons, as the variance based on 36-month returns is less than the variance of 12-month returns.

Goldsticker showed that the choice of sampling period and frequency is just as important for measuring correlations, and introduced the notion of cumulative contribution to correlation as a visual tool for examining correlations. The following graph displays these contributions for the correlation between stocks and bonds over a forty-year period. The numbers at the far right of the graph show the full-sample correlations. Over the entire sample period, the short-term correlation between stocks and bonds using daily data was 0 (compared to a 0.35 estimate using 3-year data).
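A minimal sketch of the cumulative-contribution-to-variance curve follows. The proceedings do not reproduce Goldsticker's exact formula, so the running-sum-of-squared-deviations form used here is an assumption consistent with the description above; the returns are simulated with a deliberate volatility spike.

```python
import numpy as np

# Cumulative contribution to variance: running sum of squared deviations
# from the mean, as a fraction of the full-sample sum (assumed form).
# The curve rises from 0 to 1; steep segments mark high-volatility episodes.
rng = np.random.default_rng(4)
calm = rng.normal(scale=0.01, size=500)
turbulent = rng.normal(scale=0.03, size=100)  # a late volatility spike
returns = np.concatenate([calm, turbulent])

dev_sq = (returns - returns.mean()) ** 2
cum_contrib = dev_sq.cumsum() / dev_sq.sum()
print(cum_contrib[-1].round(6))  # ends at 1 by construction
```

Plotted against time, the flat early segment and steep final segment make the clustering of variance visible in exactly the way the talk describes.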

129 estimate using 3-year data). The short-term sufficientl measure peaked around 0.15 at the turn of y high ex- the millennium, and has subsequently been ante declining. The correlations using longer- Sharpe term data show a similar hump shape --- Ratio to with a positive correlation between stocks justify and bonds through the tech stock bubble, their and a negative correlation since then. prominent position relative in a portfolio of bonds and commodities. However, that is not Goldsticker finished his talk by reality. From 1971-2013, equities have suggesting similar cumulative contribution outperformed bonds and commodities in tools could be used to visualize other types terms of returns, they were riskier and when of risk such as autocorrelations, tracking adjusted for risk kept pace as shown in the error, and information ratios. chart that follows. 84. Panel Presentations And Discussion Of Asness says that the goal of asset Risk Parity And “Where Will It Take allocation and optimal portfolio construction Us Next Year” (Fall 2013) Clifford Asness, Paul Kaplan is to maximize expected return for a given level of risk. Theoretically, this is done by combining the maximum Sharpe Ratio Risk-Parity: An Application of Finance portfolio with risk-less borrowing or lending. 101 (plus low beta investing) If the goal is achieving high returns, levering In contrast to the CAPM setting, where a risk-diversified portfolio is superior to a investors hold the market portfolio and lever portfolio containing only equities. it according to their preference for risk, the Asness goes on to talk about research by risk parity model targets a specific level of Black, Jensen and Scholes (1972) that risk and divides the risk more equally across showed that the security market line is too flat, relative to theory. Asness argues that low-beta assets are generally shunned, and conversely high- beta assets are in high demand. 
This results in a flat empirical market line like that shown below. Not only does this result hold for U.S. stocks, but low beta countries outperform high beta countries on a risk-adjusted basis. all the portfolio assets. Its stated objective is Asness argues that this phenomenon is to achieve richer portfolio diversification for universal and occurs for U.S. and global an investor. This is the most basic stuff in stocks, across the term structure of credit and finance, Asness says. It all starts with a government bonds, across the credit spread version of the following chart. spectrum from distressed bonds relative to those that are AAA. Asness says that risk The embedded belief in traditional parity is just taking advantage of the low beta portfolios, where equity risk dominates the anomaly applied to asset classes. If leverage portfolio, is the idea that equities have a aversion, or other theories like lottery preferences or investor focus on relative returns, predicts that higher-risk portfolios 130 have lower risk-adjusted returns, then a portfolio that overweights low-risk assets and underweights high-risk assets can capture some of this premium. Risk parity then applies concepts from Finance 101 -- find the portfolio with the highest Sharpe Ratio to leverage to maximize the Sharpe Ratio? (roughly an unlevered risk-parity portfolio) and use leverage to customize expected return In the Markowitz model, the expected and risk. return and risk are known and the weights for each asset can be backed out. In the Black- But this is not theory, it is all in the Litterman model, implied expected returns evidence, says Asness. can be backed out for any given covariance Risk Parity, Expected Returns, & matrix and set of portfolio weights assuming Leverage Aversion those portfolio weights maximize the portfolio’s Sharpe Ratio. Kaplan then shows Both risk and return are important in the differences in implied expected returns portfolio creation, at least theoretically. 
that result from using the risk-parity model Before Markowitz, Kaplan says, no one had (shown to the right, top) and an equal- taken risk explicitly into account. Risk parity weighted portfolio (shown to the right, commits the opposite error: it is explicit below). They are quite different. about risk but totally ignores expected return. Kaplan lays out what risk parity (RP) is and Kaplan goes on to discuss the results with what it is not. In general, he says, there is no equal correlation. In this case the risk parity closed-form solution for the weights in the condition is simpler: the constituent weights risk-parity formula; it is marketed as an are inversely proportional to their standard intuitive approach to asset allocation. deviations (equal to a low volatility weighting) and the implied expected returns are linear in Using the RP idea, the unlevered portfolio standard deviation. Equally-weighted is heavy with low-risk asset classes: portfolios, in an equal-correlation leveraged risk parity models seek to achieve environment, have Sharpe Ratios that are higher returns by leveraging the portfolio. linear in standard deviation. The implied Kaplan says “beware: this model, like all expected returns are quadratic in standard risk-based models, is like one hand clapping” deviation. – risk but no explicit returns in contrast total return and no risk of other models. Without Kaplan says there are a number of expected returns, Kaplan asks, how do we interesting questions that remain: know if the risk-parity model is the right one 1. Are the implied expected returns of risk- parity models consistent with the expected returns predicted by leverage aversion? If not, he says, risk parity is a heuristic and there might be better ways to exploit leverage aversion. 2. Is the prediction of lower risk-adjusted returns for stocks relative to bonds strategic or tactical? 3. Is the dominance of stocks in traditional strategic asset allocation due to leverage 131

aversion only, or to the idea that in the 4.4% over cash and 3.5% over bonds. long run the stock market is the source of exposure to the real economy as well? Ilmanen decomposed returns into real 4. With bond yields subject to central growth in corporate earnings and dividends, bankmanipulation, are stocks dividend yield, inflation, and gains from re preferable to bonds for long term pricing. Using the chart below he showed that investors averse to leveraging a bond the real growth of earnings per share (lowest dominated portfolio? line on right axis) and dividends (second lowest line on left axis) has lagged that of To conclude Kaplan says that back tests GDP (dotted line) and GDP per capita of risk parity during periods when bond (second line from the top of right axis). yields were falling are not good predictors when rates are rising. Thus leverage aversion Illmanen pointed out that over time the may not be a risk parity portfolio but a consensus assumptions regarding the risk tactical strategy to offer the best returns in the premium in academic finance have changed. current environment. At this point in time the Thirty years ago we assumed the following: best returns would come from low volatility 1. A single risk factor (beta) affects the stocks, not bonds. equity market. A discussion between Asness and Kaplan 2. Expected returns are constant over time. ensued. In the main, they agreed with each 3. Investors are concerned only with the other. No fireworks, just interesting insight. mean and variance of returns. 4. Efficient markets are inhabited by rational 85. The Equity Premium And Beyond (Fall investors. 2012) Antti Ilmanen Our present understanding of what creates The equity premium has always premiums recognizes that: engendered discussion, but never more than 1. Multiple risk factors affect the equity now when markets and investors are so market. unsettled. In a presentation based largely on 2. 
Required returns should ultimately his 2012 book, Expected Returns on Major depend on co-variation with “bad times.” Asset Classes, Antti Ilmanen reviewed equity 3. Risk premiums are time varying. premium history and projections for its future. 4. Skewness, leverage, and liquidity Ilmanen expressed concern about how preferences exist. managers capture the equity premium using 5. Investors’ irrationalities and market non-traditional assumptions about the source frictions can create market inefficiencies. of risk premiums, particularly in the midst of Times have certainly changed, and unsettled markets. To provide context he said Ilmanen reminds us there are better ways to that historically equity returns have beaten earn an equity premium: global the rate of inflation and the returns on bonds diversification; constant volatility targeting; and bills in every country and over the long defensive equity, value and momentum tilting; term.: the long-run world equity premium is and equity market timing based on value and momentum signals.
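The constant-volatility-targeting idea in that list can be sketched in a few lines (an illustrative implementation, not from the talk; the 10% target, 60-day window, and 2x leverage cap are arbitrary assumptions):

```python
import numpy as np

def vol_target_weights(returns, target_vol=0.10, window=60, max_leverage=2.0):
    """Scale exposure so trailing realized volatility matches a target.

    returns: periodic (daily) returns of the risky portfolio
    target_vol: annualized volatility target (10% is an arbitrary choice)
    window: trailing window used to estimate current volatility
    max_leverage: cap on scaling, since targeting can call for leverage
    """
    ann = np.sqrt(252)  # annualization factor for daily data
    weights = np.ones(len(returns))
    for t in range(window, len(returns)):
        realized = returns[t - window:t].std(ddof=1) * ann
        if realized > 0.0:
            weights[t] = min(target_vol / realized, max_leverage)
    return weights

# Demo on synthetic data: a calm regime followed by a turbulent one.
rng = np.random.default_rng(0)
calm = rng.normal(0.0, 0.005, 300)
stormy = rng.normal(0.0, 0.02, 300)
w = vol_target_weights(np.concatenate([calm, stormy]))
```

Because volatility is persistent, scaling exposure down in turbulent stretches and up in calm ones keeps realized portfolio risk more stable --- which is exactly the "slimmer tails" argument Ilmanen makes.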


He said that there are two primary reasons why most institutions still choose to depend on an outmoded notion of the equity premium. First, equity risk dominates traditionally diversified portfolios, and, second, the market is highly directional, as he showed in the chart below. All this happens in a world where correlations have increased -- for many global equities portfolios the correlation can exceed 0.9. Note in the chart, the lines on the right axis are in order of the legend.

Ilmanen said that the exposure of portfolios to the equity risk premium is even more apparent when portfolio allocations are measured by risk rather than by dollars. Equity domination would be justified, Ilmanen said, if the equity premium offered a uniquely high Sharpe ratio. When it does not, a better risk-balanced set of return sources would likely improve a portfolio's risk-adjusted return. As a solution he recommended more aggressive diversification through risk parity investing and the use of alternative beta premiums. His prescription was to diversify more aggressively, thus cost-effectively harvesting multiple return sources. This approach, he concluded, is the most realistic way to achieve 5% real returns in a low-return world, but it is available only to those who are willing to utilize what he calls the "three dirty words in finance"—leverage, shorting, and derivatives.

To achieve this level of returns, he said, others may have more faith in illiquid assets, discretionary manager magic, or market timing. These are, in his estimation, approaches that should have a modest role in a portfolio. The core of the portfolio should be multi-beta oriented. He called this approach "wide harvesting" and said that it has the following possible results:

1. Global diversification may reduce portfolio volatility—especially at longer horizons—without sacrificing average returns, thereby improving risk-adjusted performance.
2. Long-run market volatility conceals shifting periods of low and high volatility. Thanks to volatility persistence, constant-volatility targeting may help keep portfolio risk more stable over time and result in slimmer tails for the distribution.
3. While most speculative assets within each asset class give disappointing long-run returns, levering up low-risk peers offers much better returns.
4. Earning the equity premium together with helpful stock selection tilts can improve a market-cap portfolio's performance.
5. Earning the equity premium together with helpful market timing tilts can improve the performance of the buy-and-hold portfolio even though risk is concentrated.

Ilmanen provided graphic support for each of his conclusions.

86. Panel Discussion: Rethinking The Equity Risk Premium (Fall 2012)
Brett Hammond, Marty Leibowitz, Bill Sharpe, Laurence B. Siegel

Hammond introduced the panel on the equity risk premium (ERP), noting that the session updates the ERP views and research discussed at a 2001 Research Foundation of CFA Institute forum of experts. At that time estimates of the ERP ranged from zero to 7%.
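The width of that zero-to-7% range matters enormously once it is compounded. A hypothetical back-of-the-envelope illustration (the 4% riskless rate and 30-year horizon are assumptions for arithmetic, not figures from the panel):

```python
# Terminal wealth of $1 of equities over 30 years under ERP assumptions
# spanning the 2001 range; the 4% riskless rate is purely illustrative.
riskless, horizon = 0.04, 30
wealth = {erp: (1.0 + riskless + erp) ** horizon for erp in (0.00, 0.035, 0.07)}

for erp, w in sorted(wealth.items()):
    print(f"ERP {erp:5.1%}: $1 grows to ${w:6.2f}")
```

The endpoints of the range imply terminal wealth levels several times apart, which is why the ERP drives saving, spending, and allocation decisions.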


The 2001 forum produced a schematic (shown below) of what constitutes the ERP, including investment horizon, supply-demand imbalances, and the level of demand for a return to compensate the risk of equities. In addition, it dealt with whether the markets exhibit rational expectations or suffer from behavioral distortions. One thing was widely accepted by all in attendance at the 2001 event: few institutions or individuals explicitly address these issues, and even they fail to consider the size of the equity premium itself in forming policy portfolios and determining asset allocation.

Hammond said that the equity risk premium is the most important measure in all of finance. It impacts saving and spending behavior, as well as the critical allocation between riskless and risky assets in investors' portfolios. The problem posed by recent history is that we may not be confident in our understanding of equity risk and, therefore, in our forward-looking decisions.

After several decades during which realized equity returns followed a welcome positive pattern, the past decade has seen a marked downturn in equities. The decade was characterized by much lower-than-average returns and higher volatility; rising cross-asset, cross-country, cross-sector, and intra-sector correlations; and two of the biggest bubbles in stock market history. This downturn prompted some investors to suggest that future expectations for equity returns versus other broad asset classes must be permanently adjusted downward. Others argue that the same evidence suggests equities are poised for outstanding future excess returns.

Hammond suggested we sort through the best thinking on the ERP and look particularly at the most important drivers of the premium. In 2011, these issues were revisited in a forum where a number of academics reported their research. From that meeting they developed a series of charts showing the different facets of the ERP. These included such things as interest rates, objective and circumstantial drivers, and different levels of inflation and earnings expectations. He provided the following chart to show the relationship between the factors and three interest rate levels.

He identified a "sweet spot" associated with moderate real long-term interest rates (2–3%), and demonstrated why it is "sweet" in the chart below. This brought Hammond to what he calls the "risk premium smile"—the relationship between real rates and the ERP (shown below).

Hammond concluded that the past 10 years have shown that the ERP is far from being a settled matter. He proposes that (1) estimates of ERP be explicit and (2) the model used to make the forecast be clearly described. Finally, Hammond said that it is

clear that circumstances and differences among investors lead to true and irreducible differences in the ERP.

Leibowitz looked at a series of hypothetical return models under the assumption of rising rates. He said that an asset's return premiums can be viewed as incremental returns for accepting the prospective risks. Thus, higher premiums may not be a bargain but may be an indication of greater prospective risks and/or lower growth prospects. He described models he created based on the following assumptions:

• Bonds: zero coupon with a duration of 5, an initial yield of 2%, and rate volatility of 1%.
• Equities: a risk premium over bonds of 3.5% and a volatility of 16%.

In these models rates were allowed to drift at various rates. The multi-year returns for bonds are shown below. The following table shows the multi-year returns for equities.

From this Leibowitz constructed the following chart combining a stable risk premium and a 0.3% drift. The results, he said, are inherently overstated in accepting higher expected returns without fully addressing the greater prospective risks.

Finally, he reminds us that these models should be viewed as strictly hypothetical illustrations and are not to be taken as market projections.

Sharpe opened his presentation with a Casey Stengel quote, "Never make a prediction, especially about the future," a perfect prelude to a discussion of the equity risk premium. Sharpe started with a little CAPM history: the CAPM assumes that market risk alone is rewarded with higher expected returns. This, he said, is an incomplete model, and he then discussed a state-dependent model. This model includes alternative future states of the world (though one and only one will occur), each accompanied by an estimated probability. In this model it is possible to buy and sell state claims by comparing the price of the claim with the likelihood of cashing in on it. As a cost measure he uses the ratio of the state price to the probability of that state --- the price per chance (PPC). The higher the PPC, the less attractive is an investment. Rational investors, he said, take more of a thing (or investment) when it costs less, with PPC being the measure of cost.
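Sharpe's price-per-chance measure is simple to compute. The sketch below uses made-up state prices and probabilities purely for illustration:

```python
# Hypothetical state prices and probabilities (made-up numbers): the market
# price of a claim paying $1 in each state, and the estimated probability
# of that state occurring.
states = {
    "boom":      {"price": 0.20, "prob": 0.30},
    "normal":    {"price": 0.45, "prob": 0.50},
    "recession": {"price": 0.30, "prob": 0.20},
}

# Price per chance (PPC): state price divided by state probability.
# The higher the PPC, the less attractive the claim.
ppc = {s: v["price"] / v["prob"] for s, v in states.items()}
most_attractive = min(ppc, key=ppc.get)  # rational investors buy more of this
```

Note the design of the toy numbers: the recession (low-wealth) claim carries the highest PPC, matching Sharpe's point that states with less aggregate wealth command higher prices per unit of probability.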


The market portfolio using this approach is shown in the following chart. States with the same wealth will have the same PPCs, and states with more wealth will have lower PPCs. The individual makes the choice based on a personal PPC–wealth trade-off.

Sharpe then turned to communicating with investors about risk and return. He said that return/standard deviation is not a particularly useful paradigm when distributions are not normal. It is preferable to show the entire cumulative probability distribution using a graph where the investor's goal is plotted on the X axis and the chance of exceeding the goal on the Y axis. In this way the client has a visual demonstration of the outcomes associated with the probability of meeting the goal.

Siegel updated a Grinold and Kroner ERP model that segregates expected equity returns into several understandable pieces (income, earnings growth, and re-pricing). The model shows that the expected equity return over any period is definitionally equal to the sum of these pieces. Siegel used this identity as a template for a discussion on how to best estimate each variable, how to determine the accuracy of each estimate, and other issues relevant to long-term forecasting.

Siegel then defined the ERP as the expected equity total return less the yield on the 10-year Treasury. He noted that, in retrospect, the Grinold and Kroner (2002) forecast was too high. The main problem in their forecast was the volatile repricing term: they made no serious attempt to estimate the speed with which the unusually high P/Es prevailing at the beginning of the decade would revert toward their historical mean. Siegel said that they understandably got it wrong, since market repricing tends to happen all at once.

Siegel then turned to data that he believes are appropriate for making estimates at this time, because he now views the market as roughly fairly priced after two bear markets and two recoveries. He uses a repricing rate of zero, cautioning that the repricing term is noisy. As a result of the forecast with revised inputs, he said he expects moderate stock market growth in the future. His paper projects an expected equity risk premium over the 10-year nominal Treasury bond of 3.6% (geometric). By the time he presented the paper at the Q Group, he said, falling interest rates had caused the estimate to increase to 4% (geometric).

Siegel summarized his paper by saying that although Black Swans, fat tails, and tsunamis are the talk of the day, such large unexpected events tend to fade in importance over extended periods. It is only then that the underlying long-term trends reveal themselves once more.

87. What Is Risk Neutral Volatility? (Spring 2012)
Stephen Figlewski

Stephen Figlewski, Professor of Finance, New York University Stern School of Business, presented "What is Risk Neutral Volatility?" This research was supported by a grant from the Q-Group.

Figlewski starts by saying that the market prices for risky assets depend on the market's assessment of the probability distribution (P) for its future payoff together with the market's risk aversion (Q). Neither Q nor P can be observed directly:

1. The form of the distribution itself is unknown, although extensive research suggests it has fatter tails and its shape changes over time.
2. The factors that go into the risk-adjustment process are unknown.
3. The risk premia for unspanned, unhedgeable factors are often treated simply as unknown constants to be estimated.
4. P and Q, when combined into the Risk Neutral Probability Density (RND), can be extracted from market prices for options.

Figlewski asks, should we care about the RND? The answer is that indeed we should, because the RND measures how options are being priced. The Black-Scholes (BS) model uses the principle of delta-hedging to price options, and it is delta-hedging that market makers use to manage their risk. He reminds us that the BS model assumes a constant-volatility lognormal diffusion, an approach that he says is too simple. It does not consider such things as tail behavior that may, and should, be critical for investors and market makers. Moreover, it is important to understand the multiple risk premia embedded in option prices, since they are significant components of return.

Figlewski says there are two major problems in constructing a complete RND from a set of market option prices:

1. How to smooth and interpolate option prices to limit pricing noise and produce a smooth density.
2. How to extend the distribution to the tails beyond the range of traded strike prices.

He notes that unlike implied volatility, the RND is model-free, and that is a big advantage. He develops the Breeden-Litzenberger technology to extract RNDs from daily S&P 500 index option prices and computes the standard deviations. He then explores whether, and by how much, the risk neutral volatility is influenced by a broad set of exogenous factors which are expected to be related to its empirical density and to the risk neutralization process. He notes that his main objective is not to build a formal model of option risk premia, but to establish a set of stylized facts with which any model must be consistent. There are, he notes, several broad questions to be addressed:

1. What return and volatility-related factors are most important to investors in forming the forecasts of the empirical probability density that are embedded in the RND?
2. Is the market primarily forward-looking or backward-looking in gauging volatility?
3. What are the relevant time horizons on which it focuses?
4. What factors influence the process of risk neutralization?
5. Do the answers to these questions differ for long-maturity vs. short-maturity options, and do they do so over different time periods?

Figlewski points out that extracting the RND from options prices in practice is nontrivial, and he describes his process in detail. His explanatory variables are divided into three classes:

1. Volatility-related variables that do not require looking back over past data.
2. Volatility-related variables computed from past data over a horizon that must be specified.
3. Variables related to risk attitudes that may influence risk neutralization.
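The core of the Breeden-Litzenberger extraction Figlewski builds on can be sketched in a few lines, assuming an evenly spaced strike grid; the smoothing, interpolation, and tail-extension steps he emphasizes are omitted from this sketch.

```python
import numpy as np

def breeden_litzenberger(strikes, calls, r, T):
    """Approximate the risk neutral density (RND) from call prices.

    Breeden-Litzenberger: q(K) = exp(r*T) * d2C/dK2. The second derivative
    is taken with finite differences on an evenly spaced strike grid, so
    the density is returned on the interior strikes only.
    """
    dk = strikes[1] - strikes[0]
    d2c = (calls[2:] - 2.0 * calls[1:-1] + calls[:-2]) / dk**2
    return strikes[1:-1], np.exp(r * T) * d2c

# Demo on synthetic data: price calls under a known lognormal terminal
# distribution (r = 0 for simplicity) and recover its density.
S = np.random.default_rng(1).lognormal(np.log(100.0), 0.2, 200_000)
strikes = np.arange(50.0, 160.0, 1.0)
calls = np.array([np.maximum(S - k, 0.0).mean() for k in strikes])
k_mid, q = breeden_litzenberger(strikes, calls, r=0.0, T=1.0)
# q now approximates the lognormal density over the strike range.
```

On real option quotes the raw second difference is far too noisy, which is precisely why Figlewski's smoothing and tail-fitting machinery is needed.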

After describing the outcomes of his analysis in detail, he provides the following observations regarding key questions.

1. What return and volatility-related factors are most important to investors forming forecasts of the empirical probability density that are embedded in the RND? He says that the simple correlations indicating the strength of the direct univariate relationships between explanatory variables and both RND and realized volatility were largely of the expected sign.

2. Is the market primarily forward-looking or backward-looking? What are the relevant time horizons on which it focuses?
   a. Backward looking: The past trading range and some tail-related variables were statistically significant but not easy to interpret.
   b. Forward looking:
      i. One model, GARCH, had the best predictive power over all horizons. RND volatility alone or combined with historical volatility was highly significant, but contributed nothing when combined with GARCH.
      ii. All three volatility measures appear to have the greatest marginal explanatory power for future volatility over the next 5 to 10 trading days, though only GARCH is significant.
      iii. The coefficient on historical volatility was uniformly negative and insignificant.

3. What factors go into the neutralization? Here Figlewski reports mixed results; however, yesterday's RND volatility minus that of the GARCH model is extremely powerful in the RND volatility equations.

4. What risk factors go into the neutralization? He includes such things as the:
   a. Previous day volatility risk premium, as measured by yesterday's RND volatility minus that of the GARCH model.
   b. Strength in the Consumer Sentiment variable.
   c. High P/E ratio for the stock market. The past error from a GARCH model, both in absolute value and, to a lesser extent, as a signed variable.
   d. Large past errors of either sign.
   e. Yield spread in the bond market.
   f. Day t trading range.
   g. S&P P/E ratio.
   h. Day t-1 RND volatility risk premium.

At the end of his presentation, Figlewski asked the participants for ideas on how to continue to develop the ideas from his ongoing research. There was a lively conversation. Those who did not hear the presentation will have to wait until the paper is completed to read it in its entirety.

Securities Lending & Short Selling

88. A Solution To The Palm-3Com Spin-Off Puzzles (Spring 2015)
Chester Spatt

Chester Spatt, Pamela R. and Kenneth B. Dunn Professor of Finance, Carnegie Mellon University, and former SEC Chief Economist, discussed the 15-year-old 3Com-Palm mispricing puzzle. The central question addressed was why Palm, a subsidiary of 3Com, traded at a higher price than implied by the stock price of 3Com. The following time-line provides a summary of the events surrounding the

3Com-Palm puzzle:

The long-standing puzzle, of course, is why the 3Com stock price from March 2 to July 27 did not properly reflect the implied value of its Palm ownership. For example, Lamont and Thaler (2003) wondered whether the market "can add and subtract," claimed that the above facts meant that the Law of One Price was violated, and showed additional evidence that options were mispriced such that Put-Call Parity did not hold. These phenomena have more widespread implications than simply Palm, as Mitchell, Pulvino, and Stafford (2002) identified 84 cases of negative stub values during 1985-2000.

In his talk to Q members, Spatt addressed two related questions: (1) Did Palm's share price reflect the fundamental value that could be attributed to all future cash flows associated with owning the stock (including lending fees)? (2) Did the relationship between Palm's and 3Com's stock prices violate the Law of One Price? He argued that, if one takes into account the "convenience yield" that ownership of Palm shares provided to its owners through the ability to lend these shares, as well as the uncertainty of when the spin-off would actually happen, then the pricing of Palm and 3Com shares was consistent with the Law of One Price and rational markets.

First, some observations. The graph below shows the annualized percentage borrowing rate for Palm shares over time. Note two important things: first, the borrowing rate was generally extremely high—from 20 to 60% per year. Second, the borrowing rate climbed substantially in early May, right at the time that 3Com resolved exactly when Palm would be spun off.

Second, the explanation of the negative stub value. Palm shares were in short supply after the 5% spin-off—with very limited stock loan availability and a small float for potential buyers. The small float created an opportunity for those who controlled Palm shares to charge very high lending rates starting March 2, 2000. To exploit the negative stub value of 3Com, one must borrow shares of Palm to short them, in combination with a long position in 3Com.[10] After factoring in the steep borrowing costs, Spatt showed that there was no way to arbitrage the apparent negative stub value.

A Deeper Dive

Spatt showed how to quantify the computation of a "rational" stub value for 3Com, and that, properly computed, it was (almost) always positive—presuming there is no wedge between what borrowers would have to pay to borrow the Palm shares and what holders of Palm shares could earn by lending the stock (e.g., there is no large capture of the borrowing cost by brokers).

A key to his approach is noting that Palm shareholders can earn the lending fees on Palm shares, while the 3Com shareholders effectively own a forward contract on the Palm shares and cannot earn those fees. Thus, the intrinsic value of Palm stock should exceed the present value of a forward contract on the stock by the present value of expected lending fees:

    S_Palm,t = PV(expected lending fees) + PV[F(t*, Palm)]

where S_Palm,t is the spot price of a Palm share today, and F(t*, Palm) is the forward price to be paid when receiving a Palm share at time t*, the spin-off date.

We can compute the value of a share of 3Com prior to the 95% spin-off by adding its stub value to (1.5 times) the present value of the embedded forward:

    S_3Com,t = STUB_t + 1.5 × PV[F(t*, Palm)]

We can use this equation to determine the market's valuation of the STUB. The only problem is that there is no forward market for Palm shares, but Spatt uses put-call parity to estimate the value of Palm forwards, since options were traded on Palm shortly after the 5% spin-off:

    PV[F(t*, Palm)] = K·e^(−r(t*−t)) + C(K, t*) − P(K, t*)

Spatt solves for the STUB and plots it below. Note that it is (almost) always positive, unlike the incorrectly computed stub that ignores the value of lending fees embedded in Palm shares.

Finally, Spatt uses the relation between forward and spot prices of Palm to reverse-engineer the implied lending cost (δ) of Palm shares, PV[F(t*, Palm)] = S_Palm,t · e^(−δ(t*−t)), resulting in the below graph of implied vs. actual lending costs (actuals were obtained from a major broker). Note that the implied matches the actuals reasonably well!

The Remaining Puzzle

Spatt admitted that it remains unknown why some investors held Palm shares without lending them, since the market must, in aggregate, hold the shares long. Perhaps these investors were naïve, could not lend shares, and/or were overvaluing Palm shares. Whatever the case, the 3Com-Palm puzzle, if not fully resolved, is now focused on this simpler, new puzzle.

[10] Another salient point is that 3Com may have cancelled the spin-off of the remaining 95% of Palm, and might have spent the cash flows from Palm on negative NPV projects due to agency problems in corporate management.

Bibliography

Cherkes, Martin, Charles M. Jones, and Chester S. Spatt, 2013, A Solution to the Palm-3Com Spin-Off Puzzles, available at http://www.q-group.org/wp-content/uploads/2016/01/Spatt-paper.pdf.

Lamont, Owen and Richard H. Thaler, 2003, Can the Stock Market Add and Subtract? Mispricing in Tech Stock Carve-Outs, Journal of Political Economy 111, 227-268.

Mitchell, Mark, Todd Pulvino, and Erik Stafford, 2002, Limited Arbitrage in Equity Markets, Journal of Finance 57, 551-584.

89. "In Short Supply: Short-sellers and Stock Returns" (Fall 2014)
Charles Lee

Professor Charles Lee, Joseph McDonald Professor of Accounting at Stanford Business School, uses a new database, Markit Data Explorer, to provide a comprehensive analysis of a relatively opaque market: the stock lending market. The data is collected from a consortium of over 100 institutional lenders, and covers
This that the richness of the Data Explorer graph shows that the demand for stock loan dataset helps to separately identify the as a percent of shares outstanding (BOLQ) importance of demand and supply factors is roughly stable across DCBS scores, in stock lending markets, and their impact whereas the lendable supply of shares on future asset prices. (BOIQ) dramatically falls as the DCBS score rises. Supply thus appears to be the The Data Explorer lending data breaks binding constraint. the universe of stocks into ten different categories: ranging from easiest-to-borrow Lee used a recent IPO, GPRO (GoPro, (DCBS score = 1) to most-difficult-to- Inc.), as an example of some of the borrow (DCBS score = 10). Lee problems that are inherent in using characterizes roughly 86% of the firm- traditional SIR numbers as a measure of month observations as easy-to-borrow shorting demand. GPRO went public in general collateral stocks --- with lending June 2014. It is up over 100% since then. fees of less than 100 basis points on an The borrow rate on the stock is over 100% annual basis --- and 14% of the firm month per annum. The SIR for the stock, however, observations as special stocks (DCBS is only about 2.5%. There are hardly any scores = 2-10). One of the more interesting shares available for borrow --- as most of results that Lee presented was that the the shares are still held by insiders. typical stock has less than 20% of its outstanding shares in readily lendable form. Lee focused on the results in Table 4 to Table 2, which he showed, indicated that illustrate the importance of conditioning on this number ranged from 19.66% for whether stocks are special. The table DCBS=1 stocks to 6.28% for hard-to- presents the results from monthly cross- borrow DCBS=10 stocks. sectional Fama-MacBeth regressions of one-month ahead size-adjusted returns on a The utilization of the lendable supply is variety of variables. 
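The Fama-MacBeth procedure Lee relies on (a cross-sectional regression each month, with the monthly slope estimates then averaged over time) can be sketched in a few lines. The data below are simulated placeholders, not Lee's sample, and the parameter values are illustrative only:

```python
import random

random.seed(2)

# Sketch of a Fama-MacBeth estimation (simulated data, not Lee's sample):
# each month, regress next month's return cross-sectionally on SIR, then
# average the monthly slope estimates and compute a t-statistic.

def cross_sectional_slope(x, y):
    """OLS slope of y on x for one month's cross-section."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

MONTHS, STOCKS = 240, 500
TRUE_SLOPE = -0.009            # illustrative: -0.9% per unit of SIR

slopes = []
for _ in range(MONTHS):
    sir = [random.random() * 0.10 for _ in range(STOCKS)]        # SIR in [0, 10%]
    ret = [TRUE_SLOPE * s + random.gauss(0, 0.02) for s in sir]  # noisy returns
    slopes.append(cross_sectional_slope(sir, ret))

# Time-series average of the monthly slopes, with a standard error based
# on the month-to-month variation of those slopes (the Fama-MacBeth step).
mean_slope = sum(slopes) / MONTHS
se = (sum((s - mean_slope) ** 2 for s in slopes) / (MONTHS - 1)) ** 0.5 / MONTHS ** 0.5
print("Fama-MacBeth slope:", round(mean_slope, 4), " t-stat:", round(mean_slope / se, 2))
```

The t-statistic here treats each month's slope as one independent observation, which is what makes the averaged coefficient robust to cross-sectional correlation within a month.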
The first regression in Panel A shows a coefficient of -0.9% on SIR, with a t-statistic of -3.58. Panel B shows a similar regression with different coefficients on SIR when the stock is GC and when it is Special. The coefficient on SIR when the stock is Special rises in magnitude to -2.3%, with a t-value of 7.03. Conversely, the coefficient on SIR when the stock is GC is only -0.4%, with a very marginally significant t-value.

The regression result that excited Lee the most is the following one, taken from Table 4, Panel C. It shows the importance of conditioning on whether a stock is special (indicated by the interaction variables SP*BOLQ and SP*BOIQ), and suggests that, conditional on a stock being Special, higher demand to borrow (BOLQ) leads to lower future returns, and higher supply of stock to borrow (BOIQ) --- holding demand for shorts fixed --- leads to higher future returns.

Lee next turned to examining nine well-known pricing anomalies and showed, in Table 6, that virtually all of the short-side return comes from stocks where the lendable supply constraint is binding. Finally, in analyzing what determines borrow costs, he found that borrow costs are higher for smaller, lower-priced, and more volatile firms with lower institutional ownership, higher share turnover, and more negative recent stock returns. He also found that accounting characteristics associated with pricing anomalies impact the supply of lendable shares, indicating that the institutions that normally lend shares are aware of these anomalies.

90. Short Sellers And Financial Misconduct (Spring 2011)
Jonathan M. Karpoff

Jonathan M. Karpoff, Washington Mutual Endowed Chair in Innovation, Finance and Business and Professor of Finance at the Michael G. Foster School of Business, University of Washington, presented "Short Sellers and Financial Misconduct," coauthored with Xiaoxia Lou, Assistant Professor of Finance, Lerner College of Business and Economics, University of Delaware. This research was supported by the Q-Group. Karpoff had previously presented a paper at the Q-Group® seminar in the Fall of 2002.

Short selling is controversial: detractors claim that short sellers undermine investors' confidence in financial markets and decrease market liquidity; advocates argue that short selling facilitates market efficiency and the price discovery process. Either way, investors who identify overpriced stocks can sell short, thus incorporating unfavorable information into market prices. The issue for Karpoff is whether short selling conveys information about external costs or benefits to other investors. In this research Karpoff looks at overvalued stocks and short selling. Put plainly, he and his coauthor seek to answer two questions:

1. Do short sellers anticipate financial misrepresentation?
2. How do short sales affect markets and social welfare?


Karpoff turns to the first question. To answer it he needs a sample of overpriced stocks that subsequently reverse their overpricing. For overpriced stocks, he and his coauthor use companies that misrepresented their financial statements and were overpriced, at least until the misrepresentation was made public. An example of the stock prices before and after a misrepresentation is revealed is shown below.

To collect their sample of misrepresenting companies, the authors use firms that had SEC/DOJ enforcement actions for financial misrepresentation initiated against them during 1988-2005. To this data they add the monthly short interest and stock price data during the violation period for the 454 firms in their sample. The focus is on the period before the misrepresentation is revealed and whether the short sellers get it right, and, when the price drops, whether the results are sensitive to the severity of the misconduct.

To assess the anticipation of misconduct by short sellers, they measure abnormal short interest (ABSI): raw short interest minus expected short interest. Raw short interest is measured as the number of shares sold short divided by the shares outstanding in a month. Expected short interest is short interest relative to one of three benchmarks based upon the firm's characteristics. The chart below shows the three measures of abnormal short interest and the importance of each. ABSI Model 1 includes the smallest number of factors.

Karpoff plots the raw and abnormal short interest during the 20 months before and after the initial revelation of financial misconduct for each stock. All measures of ABSI rise continuously and peak about 5 days after the misrepresentation is made public. Using this information Karpoff concludes that short sellers anticipate misrepresentation.

Next, Karpoff turns to the question of the importance of the misconduct's severity. He considers three measures of severity based on fraud charges, insider trading charges, and total accruals. He concludes, as can be seen in the chart below, that each of the severity measures is economically meaningful.

Karpoff concludes that, on average, short sellers convey substantial benefits to uninformed investors, although for the median firm these benefits are negligible and slightly negative.

Finally, Karpoff turns to the question of whether short sellers affect markets and social welfare. For this he asks: do short sellers help uncover financial misconduct? To this he answers yes; short interest:

1. Dampens price inflation during the violation period.
2. Decreases the time to public discovery of the misconduct.
3. Does not exacerbate price decline when the misconduct is revealed.

Social Security, Retirement & Health Care

91. "Made Poorer By Choice: Worker Outcomes In Social Security vs. Private Retirement Accounts" (Fall 2014)
Terrance Odean

Terrance Odean, Rudd Family Foundation Professor of Finance at the University of California, Berkeley, compared the welfare implications of Social Security (SS) and Private Retirement Accounts (PRAs). The idea of PRAs—where investment of social security taxes is self-directed by the worker—is not new, and was floated in 2001; further, President Bush made the case for PRAs to the American people during his 2004 State of the Union Address.
Odean stressed that it is important to look at the cross-sectional variation (across all people) of potential outcomes of PRAs, rather than simply the average or "representative individual." An important takeaway from his talk is that the average person is projected to have a good retirement annuity with a PRA, relative to SS, while a significant share of individuals fare quite poorly.

Odean employs simulations to illustrate the potential outcomes for PRAs. The baseline simulation has the following assumptions:

1. All workers invest in a 60/40 stock/bond index
2. Bond returns equal their post-war average
3. Stock returns are shaved from their 9%/year post-war average to 7%/year for conservatism
4. A 60/40 stock/bond index earns mean annual returns of 7.7%/year (SD 14.2%/year)

In another test case, Odean allows workers to choose their asset allocation and/or their individual stock investments, since all major reform proposals that have been put forward allow for investor choice. Investment theory says that unconstrained optimization beats constrained optimization, but only if the investor properly optimizes. Ideally, a tenured professor—whose future income is more bond-like—would invest his PRA in stocks, while a stockbroker would invest more in bonds. However, if workers make poor choices in asset allocation, or in diversifying among their equity investments, then choice will worsen their retirement payouts.

To conduct this simulation, Odean creates 3,655 simulated individuals born in 1979, and bootstraps their allocations to investments by resampling from the 2010 Survey of Consumer Finances. Interestingly, 13% of individuals in this dataset place 100% of investable money in bonds, while 15% place 100% in stocks in their IRA, Keogh, or 401(k) plans, which is the first indication of trouble if SS is replaced by PRAs.

The simulation has the following assumptions:

1. Individuals retire at 67
2. Price inflation is 3%
3. Wage inflation is 4%
4. A worker who uses a PRA chooses an asset allocation strategy at the beginning of his working life, then does not change it until retirement
5. Wages, demographic characteristics, and mortality of the cohort are generated by CORSIM, a dynamic micro-simulation model of the United States population used by the Social Security Administration
6. Asset allocation choices are determined by randomly sampling from distributions based on the 2010 Survey of Consumer Finances
7. Stock selection choices are determined by the distribution of equity returns in individual tax-deferred accounts at a large discount brokerage

Odean finds the following results. First, when workers have no choice of their allocation to stocks vs. bonds and are forced to use a 60/40 rule, their PRA underperforms their SS benefit much less often than when workers are allowed to choose their stock/bond allocation. At age 88 and with no allocation choice, 31% of workers would be better off with SS than with a PRA. With choice, 45% would be better off with SS. In these 45% of cases, the PRA account, on average, generates only half of the income of SS due to poor investment choices. And, with equity and allocation choice, more than 9 of 10 workers have greater than a 25% probability that their age-88 PRA income falls short of promised Social Security benefits. This result is chiefly due to underinvestment in the equity class and a failure to diversify one's equity investments. However, the progressive (redistributive) nature of SS payouts also plays a big role in these unequal outcomes.

Odean then looks at workers within each earnings quintile. Workers in the lowest earnings quintile are better off with SS with 67% probability, due to the redistributive aspect of SS. Unsurprisingly, the highest earnings quintile is better off with a PRA. However, with more choice over asset allocation and security selection, a PRA becomes less appealing to even the high earnings quintile because of their suboptimal investment choices!
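The contrast between the two simulation designs (a forced 60/40 allocation versus self-chosen allocations resampled from survey data) can be sketched as below. Every parameter here is an illustrative stand-in rather than Odean's actual input, and the lumpy allocation distribution only loosely mimics the SCF finding that many savers hold 100% bonds or 100% stocks:

```python
import random

random.seed(0)

# Stylized sketch of a PRA simulation (hypothetical parameters, not
# Odean's data): each simulated worker draws a fixed stock/bond
# allocation at the start of a 45-year career, contributes a constant
# amount each year, and compounds random returns.

N_WORKERS = 1000
YEARS = 45                  # roughly ages 22 through 67
CONTRIBUTION = 1.0          # constant real contribution per year (normalized)
STOCK_MEAN, STOCK_SD = 0.07, 0.20   # shaved stock return, as in the talk
BOND_MEAN, BOND_SD = 0.025, 0.07    # illustrative bond figures

def simulate_worker(stock_weight):
    """Accumulate a PRA balance under a fixed stock/bond allocation."""
    balance = 0.0
    for _ in range(YEARS):
        r = (stock_weight * random.gauss(STOCK_MEAN, STOCK_SD)
             + (1 - stock_weight) * random.gauss(BOND_MEAN, BOND_SD))
        balance = (balance + CONTRIBUTION) * (1 + r)
    return balance

# No-choice case: everyone holds 60/40.
no_choice = sorted(simulate_worker(0.6) for _ in range(N_WORKERS))

# Choice case: allocations drawn from a lumpy distribution (13% all
# bonds, 15% all stocks, the rest 60/40), echoing the SCF pattern.
choices = [0.0] * 13 + [1.0] * 15 + [0.6] * 72
with_choice = sorted(simulate_worker(random.choice(choices))
                     for _ in range(N_WORKERS))

# Choice widens the dispersion of outcomes, especially the left tail.
print("5th pct / median (no choice):", round(no_choice[50], 1), round(no_choice[500], 1))
print("5th pct / median (choice):   ", round(with_choice[50], 1), round(with_choice[500], 1))
```

The point of the sketch is the cross-sectional comparison Odean emphasizes: the median outcome can look fine while the lower percentiles of the choice case fall well behind.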

Odean notes that his estimate of the potential shortfall of PRAs, relative to SS, may even be conservative, since workers may spend their PRAs too quickly, may invest in high-fee products, and the poor and less educated may make worse investment decisions, including a lower likelihood of participating in the stock market.

The Key Takeaway of this Talk: Giving U.S. workers more choice to invest their social security withholdings is projected to lead to a greater percentage of retirees with much lower income during retirement, compared to today's social security system, due to the redistributive nature of social security and to non-optimal investment choices by individuals. Most individuals are "made poorer by choice."

92. "How Much Would It Take? Achieving Retirement Income Equivalency Between Final-Average Pay Defined Benefit Plan Accruals And Voluntary Enrollment 401(K) Plans In The Private Sector" (Spring 2014)
Jack Van DerHei

Van DerHei asks an important question, one that is becoming more and more critical as the pension crisis widens and expands in the U.S.: what is enough, and are 401(k) plans markedly worse at meeting the goal of retirement than defined benefit plans? To gain some insight he proposes a comparative analysis of future benefits from private sector, voluntary-enrollment 401(k) plans and stylized, final-average-pay, defined-benefit and cash-balance plans. He develops a procedure to determine the generosity parameter, the percentage of average salary per year of service used to calculate the pension annuity, needed to produce 401(k) retirement income equivalent to that of median defined benefit plans.

Van DerHei creates the base model he will use in determining the required generosity parameter. For the base model he assumes:

1. Historical rates of return -- stochastic variables with a log-normal distribution and mean arithmetic real returns of 8.6 percent for equities and 2.6 percent for bonds.
2. Fees of 0.78 percent.
3. Wage growth of 3.9 percent until 55 and 2.8 percent thereafter.
4. Participation probability equal to ((1 + unconditional probability before participation)/2) once the individuals have participated.
5. Cash outs for 401(k) plans that follow the Vanguard 2012 experience.
6. Cash outs for terminated defined benefit participants based on the assumption that they react similarly to 401(k) participants taking a lump sum distribution (LSD).
7. An annuity purchase price of 12.34 for males at 65 years of age.

With his base case he performs sensitivity analysis using the following tests:

1. Reducing rates of return by 200 basis points.
2. Increasing annuity purchase prices to reflect low bond yields at the time.
3. #1 and #2 simultaneously.
4. Assuming real-wage growth does not drop to zero after age 55.
5. Assuming conditional-participation rates do not increase once an employee has participated in a 401(k) plan.
6. Assuming no cash outs at job change.
7. Changing the defined benefit LDP generosity parameter from the median to the 75th percentile.
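The core of the calculation (accumulating stochastic 401(k) returns, annuitizing the balance, and solving for the defined benefit generosity parameter that produces the same income) can be sketched as follows. The wage path, savings rate, and return parameters below are hypothetical stand-ins; only the 12.34 annuity purchase price comes from the talk, and the real model is far richer:

```python
import math
import random

random.seed(1)

# Stylized sketch of the Van DerHei-style comparison (hypothetical
# parameters, simplified rules). A 401(k) balance compounds lognormal
# real returns and is annuitized at 65; a final-average-pay DB plan
# pays: generosity * years of service * final average pay.

MEAN_REAL_RETURN = 0.055   # blended equity/bond real return (assumption)
RETURN_SD = 0.12           # return volatility (assumption)
ANNUITY_PRICE = 12.34      # per $1 of annual annuity income, males at 65
SAVINGS_RATE = 0.09        # combined employee + employer rate (assumption)

def career_wages(start=40_000, years=40, growth=0.039):
    """Hypothetical real wage path with constant growth."""
    return [start * (1 + growth) ** t for t in range(years)]

def dc_annuity(wages):
    """Annuitized income from a 401(k) with lognormal real returns."""
    balance = 0.0
    for w in wages:
        r = math.exp(random.gauss(math.log(1 + MEAN_REAL_RETURN), RETURN_SD)) - 1
        balance = (balance + SAVINGS_RATE * w) * (1 + r)
    return balance / ANNUITY_PRICE

wages = career_wages()
dc = sorted(dc_annuity(wages) for _ in range(2000))
median_dc = dc[1000]

# Solve for the DB generosity parameter matching the median 401(k):
# generosity * years * final 5-year average pay = median DC annuity.
final_avg = sum(wages[-5:]) / 5
g = median_dc / (len(wages) * final_avg)
print(f"median 401(k) annuity: {median_dc:,.0f}; equivalent generosity: {g:.2%}")
```

Sensitivity tests like Van DerHei's #1 and #2 amount to re-running this with `MEAN_REAL_RETURN` cut by 200 basis points or `ANNUITY_PRICE` raised, and observing how the required generosity parameter moves.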


He provides graphs of his simulations. Shown below is the simulation result for younger employees in the lowest income quartile. The subsequent chart shows the same for younger workers in the highest income quartile.

For each of the parameter changes in each simulation he provides a graph showing the impact of the change. The chart below shows the magnitude of the defined benefit generosity parameter changes needed to equal 401(k) plans for ages 25-29 when purchase prices and return assumptions are decreased by 200 bp.

Van DerHei's study looks only at voluntary enrollment plans. With the upward trend in automatic enrollment (AE), opt-out, adoption plans, Van DerHei plans to repeat this study when sufficient data is available. The charts are interesting for various groups. These charts were prepared for the U.S. Congress as it considers the issues surrounding retirement and retirement plans. Much more could be done with the data. Van DerHei concludes with an offer. He says that the Employee Benefit Research Institute has considerable data that is available to researchers. He encourages those at Q to communicate with him about the data and/or if they have ideas for further research.

93. Optimal Annuitization With Stochastic Mortality Probabilities (Fall 2013)
Kent Smetters

Smetters starts by reporting that sales of fixed annuities in the United States totaled $54.1 billion during the first three quarters of 2012, but only a fraction will be held for life. This apparent under-annuitization by households is commonly referred to as "the annuity puzzle," and it is not, Smetters says, just a theoretical curiosity. Annuities are investment wrappers that should dominate all non-annuitized investments because of the mortality credit. Annuities are attractive because of the regular fixed payment coupled with the bet that you will live longer than others in the pool. This bet enhances the yield for those who are long-lived. The mortality credit is the payout that can be very large later in a contract holder's life. This credit comes from pool participants who die and forfeit their assets to the remaining contract holders. This return-enhancing feature of fixed annuities was studied by Yaari in 1965 and led him to conclude that households without a bequest motive should fully annuitize their investments. In spite of all the attention his work has received, lifetime annuities paying a fixed amount each year until death are still fairly uncommon.

Smetters examines Yaari's model in an attempt to reconcile its full-annuitization conclusion with the fact that annuities are not widely held. He attempts to find flaws with the model by adding numerous market frictions. None break Yaari's sharp result. When Smetters allows a household's mortality risk itself to be stochastic, however, full annuitization no longer holds. Annuities still help to hedge longevity risk, but they are now subject to valuation risk, the risk that you will exit the pool early. Adding this stochastic mortality risk allows numerous frictions to reduce annuity demand in Yaari's model. In fact, Smetters finds that most households should not annuitize any wealth, and concludes that the optimal level of aggregate net annuity holdings is likely to be negative. Given these findings, the underutilization of lifetime annuities may well not be a puzzle but a rational choice.

For those who are not familiar with lifetime annuities, Smetters says they have three characteristics:

1. It is longevity insurance that protects against the risk of outliving assets.
2. In exchange for a one-time premium, the holder receives a stream of payments with certainty until death.
3. Since the lifetime annuity cannot be bequeathed, the principal at death is at risk in exchange for a mortality credit for living.

Social Security and defined-benefit pension funds are life annuities, at least as they currently exist.

Smetters turns to his adaptation of Yaari's model assumptions. Yaari assumed that individuals have uncertain life spans, do not have bequest motives, and face approximately fair premiums. Smetters adds some modifications in an attempt to break the full annuitization prescription:

1. Survival probabilities are stochastic, thus introducing a rate of time preference.
2. The investor's health status evolves over time and is not a deterministic function of initial health status and subsequent age.
3. Health care shocks produce correlated reductions in income and increases in costs.
4. Health shocks impact the present value of annuity payments, making the present value for short-lived annuitants lower than for those who survive longer. This introduces valuation risk when the annuity is acquired.

Using his modified multi-period model, Smetters finds that:

1. If households are sufficiently patient and mortality probabilities are the only source of uncertainty, full annuitization is still optimal.
2. If a bad mortality update is correlated with an additional loss (income or expenses), incomplete annuitization can be optimal, even without ad-hoc liquidity constraints.
3. If households are sufficiently impatient, then incomplete annuitization may be optimal even without any additional loss. This can be achieved using a customized annuity that is a function of the annuitant's rate of time preference.

When Smetters' stochastic lifecycle model is used to simulate optimal annuitization, even with no bequest motives or asset-related transaction costs, most households (63%) should not annuitize any wealth. Indeed, annuitization is mostly found with wealthier households, where the costs correlated with health shocks are small relative to assets, and with older and sicker households, where the expected mortality credit is large relative to the valuation risk.

To summarize, Smetters lays out the details of his three-period model and how it compares to that created by Yaari. He shows that the original Yaari prediction of 100% annuitization of wealth is very hard to break when survival probabilities are assumed to be deterministic, as is standard in the model. However, when the survival probabilities themselves are stochastic, full annuitization does not hold. These stochastic survival probabilities become a gateway mechanism through which various market frictions come to matter.

Smetters concludes by saying that the simulation evidence in this work suggests that, even under conservative assumptions, it is indeed not optimal for most households to annuitize any wealth; many younger households should actually short annuities by buying life insurance. In the future, Smetters plans to examine how the results impact the optimal construction of tax and social insurance policies.

94. Health And Mortality Delta: Assessing The Welfare Cost Of Household Insurance Choice (Fall 2012)
Motohiro Yogo

Retail financial advisors and insurance companies offer a wide variety of what Yogo called health and longevity products, including such things as life insurance, annuities, supplementary health insurance, and long-term care insurance. His research attempted to address two questions related to the household insurance choice: (1) What is the optimal demand for health and longevity products, and (2) how close is the observed demand to being optimal?

To choose among products and create an investor-appropriate optimal portfolio requires a way to measure risk. For equity and fixed-income investment products there are two widely used measures of risk -- beta and duration. No similar risk measures exist for assessing the risk of health and longevity products. To rectify this, Yogo developed two risk measures: the health delta and the mortality delta. The health delta measures the differential pay-off that a policy delivers when the insured is in poor health; the mortality delta measures what a policy delivers at death.

Three forces explain the optimal health and mortality deltas. There is relatively more wealth in the health states in which:

1. The marginal utility of consumption is high, as determined by the relative weights ascribed to poor and good health;
2. The average propensity to consume is low; and
3. Lifetime disposable income is low.

Yogo said that an individual should combine health and longevity products to reflect the optimal health and mortality delta in their portfolio. The optimal exposure to the health and mortality deltas depends on the individual's preferences (e.g., risk aversion and bequest motive) and personal characteristics (e.g., birth cohort, age, health, and wealth). The objective function is shown below.

Yogo then reported on the health and mortality risk of a representative sample of U.S. men who were 51 and older. Those in the sample were interviewed every two years beginning in 1992. For each cohort the key inputs for the model were:

• Health transition probabilities
• Out-of-pocket health expenses (after employer-provided insurance and Medicare)
• Income including Social Security (excluding annuities and private pensions)
• Actuarially fair prices for health and longevity products
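The two deltas defined above can be illustrated with a stylized next-period payoff table. The payoff numbers below are hypothetical, chosen only to show the sign pattern, and are not from Yogo's paper:

```python
# Stylized illustration of Yogo's risk measures (hypothetical payoffs,
# not from the paper). A policy's payoff is compared across three
# next-period states: good health, poor health, and death. The health
# delta is the extra payoff delivered in poor health, and the mortality
# delta the extra payoff delivered at death, each relative to good health.

def health_delta(payoffs):
    return payoffs["poor"] - payoffs["good"]

def mortality_delta(payoffs):
    return payoffs["death"] - payoffs["good"]

# Hypothetical one-period payoffs per $1 of premium-financed coverage:
term_life = {"good": 0.0, "poor": 0.0, "death": 10.0}   # pays only at death
annuity = {"good": 1.0, "poor": 1.0, "death": 0.0}      # stops at death
ltc_insurance = {"good": 0.0, "poor": 8.0, "death": 0.0}  # pays in poor health

for name, p in [("term life", term_life), ("annuity", annuity),
                ("long-term care", ltc_insurance)]:
    print(f"{name:15s} health delta {health_delta(p):+.1f}  "
          f"mortality delta {mortality_delta(p):+.1f}")
```

The sign pattern is the point: life insurance adds positive mortality delta while an annuity subtracts it, which is why a household can combine products to hit a target (health, mortality) delta, consistent with the life-cycle switch Yogo describes later.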


In addition he observed participants' ownership of:

• Term and whole life insurance
• Annuities including private pensions
• Supplementary health (Medigap) insurance
• Long-term care insurance

The following exhibit shows the out-of-pocket health expenses, income, and present value of future disposable income, in constant dollars, for men born in 1936–1940. Income is then compared to out-of-pocket expenses and disposable income, dependent on state of health. Some of the numbers appear anomalous, for example, the lower negative present value of future disposable income for men over 91 in poor health. Yogo explained that this is due to the fact that the future income of individuals in poor health is truncated because they die sooner.

He gathers data about ownership of health and longevity products (the scatter in the following chart) and compares that to what he estimates as optimal, shown by the chart's dark line.

The following chart shows the optimal health and mortality delta over a life cycle.

Retail financial advisors and insurance companies should report the health and mortality deltas for their health and longevity products. This, Yogo hopes, will:

1. Facilitate standardization of products;
2. Identify overlap between existing products;
3. Identify risks that are not insured by existing products; and
4. Lead to the development of new products that are analogous to life-cycle (equity/fixed-income) funds, consisting of:
   a. Packaged life insurance and deferred annuities.
   b. Products that automatically switch from positive to negative mortality delta around retirement age.

95. Health Care Reform In The United States: Past, Present And Future (Fall 2012)
Jonathan Gruber

On the afternoon of the second U.S. Presidential Campaign debate of 2012, Gruber presented the history of the U.S. Patient Protection and Affordable Care Act (PPACA), legislation widely called Obamacare. He is uniquely qualified for this role. Why? Jonathan Gruber had a major role in designing the Massachusetts Health Care Reform law, signed by Governor Romney and widely called Romneycare, as well as the PPACA, signed by President Obama. Both Romneycare and Obamacare establish health insurance exchanges, prohibit insurers from denying coverage for pre-existing conditions, allow adult children to stay on their parents' insurance until age 26, and include the controversial individual mandate. This mandate, proposed earlier by the Heritage Foundation, requires people to buy health insurance. It was contested and upheld as a tax by the U.S. Supreme Court in 2012.

Gruber described the evolution and design of the Massachusetts law. Competing interests, ideas, and objectives meant that compromises had to be made. Of the two major problems in health care—the rising cost and the lack of coverage for large portions of the population—the focus of this bill was on the latter. Given the objective of having most Massachusetts residents covered by health insurance, it was essential to create a large risk-sharing pool of those insured. To create this pool, the Massachusetts law included the individual mandate requiring individuals to obtain health care coverage or face a penalty. This mandate was designed to improve insurance rates for all and to eliminate the "free rider" problem of uninsured patients seeking care at emergency rooms.

The Affordable Care Act is a major law and a substantial change in our system, but it does not really kick in until 2014. Most important at this point in time is to understand the law and not assume the politically-based assessments are true. Gruber said that the biggest misconception about the national legislation is that it is some sort of socialism or federal takeover of the health-care system. Unfortunately, this and other misconceptions (e.g., calling physician counseling of elderly patients "death panels") are still given credence. The whole idea of the Affordable Care Act is to build on the existing private health-insurance system. He notes that the Massachusetts law, and subsequent experience with it, is proving to be a live experiment for national health care reform.

Before concluding, Gruber discussed the challenges involved in managing health care costs and what has been done in other countries. Regarding the high cost of drugs in the United States, he said that high prices subsidize low prices in the rest of the world. He said that this is an issue that needs to be addressed, among others, including Medicare reform.

For more information on health care legislation, see Gruber's book Health Care Reform (Farrar, Straus and Giroux, 2011), in which he explains this complicated piece of legislation in a graphic-novel style.

96. The Market Value Of Social Security (Spring 2012)
Stephen Zeldes, John Geanakoplos

Stephen P. Zeldes, Benjamin M. Rosen Professor of Economics and Finance, Graduate School of Business, Columbia University and NBER, presented "The Market Value of Social Security," coauthored with John Geanakoplos, James Tobin Professor of Economics, Yale University.

Social Security is on the national agenda again and yet, Zeldes says, we continue to face the same questions:

1. How healthy is the U.S. Social Security system?
2. What is the present value shortfall?
3. How large an increase in taxes would it take to restore Social Security balance?
4. Would it save money to index Social Security to prices instead of wages?

The answers to all of those questions, he says, depend on the choice of discount rate for Social Security's revenues and benefits.

Typically, to calculate the actuarial value, a Treasury bond yield is used as the discount rate, in spite of the riskiness of the Social Security cash flows. Zeldes advocates a change from actuarial value to a market value calculated using a risk-adjusted discount rate. He says that the market value provides a more accurate assessment of the projected revenue-benefit shortfall than does the actuarial value.

While advocating market value, Zeldes cautions that estimating the risk-adjusted discount rate in this or any other pension-related application has its challenges:


1. Risk-adjusting discount rates for wage-linked cash flows is more complicated than it is for stocks and bonds.
2. Imputing the right discount rates requires estimating:
   a. How much uncertainty there is about the path of future wages, and how that uncertainty varies with horizon.
   b. How correlated wages are with the factors that determine asset prices (e.g., the aggregate stock market and consumption), and how that correlation varies with horizon.

He says there are two possible models he can use to understand the process: the standard general equilibrium macroeconomic model or a reduced form that links wages and stock prices. The standard model implies a strong contemporaneous correlation between wage growth and stock returns. This, Zeldes says, is counter to empirical evidence. To break this link, he adds a "news shock" in each period during which agents get "news" about future productivity growth. Thus Zeldes' model incorporates asset prices that react today to news about future wages, while wages themselves change only when the future is realized. As a result his model has:

1. Low contemporaneous correlation between stock returns and wage growth.
2. Positive correlation between stock returns and future growth in wages.
3. Returns on a "wage bond" (a security with payoffs tied to the future economy-wide average wage at various horizons) that are not equal to wage growth.
4. A correlation between wage-bond returns and consumption growth that increases with the horizon.

In this model the risk premium is always positive and the term structure of discount rates is upward sloping; the risk premium built into discount rates increases with the horizon, and is more like a bond rate over short horizons and a stock return over longer horizons. Using three approaches (market, actuarial, and one created using a factor-analysis model), Zeldes estimates the discount rate over the long term. This results in three quite different discount rate estimates over time, shown below as the discount rate minus the risk-free rate for the three approaches.

Using the new market discount rate paradigm, Zeldes solves for the prices and expected returns of various financial assets, including the wage bond. In addition, Zeldes says, he examines a reduced-form model with a link between wages and stock prices that generates the same term structure of risk premia as that for wage bonds. After calibrating these models, he computes the set of implied risk-adjusted discount rates.

Once done, he uses these discount rates to re-compute the common measures of Social Security's funding status [the present value of future (outflows minus inflows) minus the value of the trust fund], depending upon whether future participants and future contributions and benefits are included. The actuarial shortfall ranges from $5.3 to $18.3 trillion. Using the market value approach, the risk-adjusted shortfall in Social Security is 18-68% less than indicated by the official statistics. The following chart shows the outcome for the market and actuarial present values for what Zeldes calls the closed group

152

(the group that includes future contributions and benefits, but only for current participants).

Zeldes assesses the three policy changes that have been proposed to bring the Social Security system back into balance: increasing payroll taxes, decreasing benefits, and indexing benefits to prices rather than wages. His analysis includes many provocative, colorful and useful charts that show the current situation and the possible changes outlined below:

1. Increasing payroll taxes. His assessment is that even though adjusting for risk reduces the present value shortfall, it actually would further increase the tax rate necessary to bring the system back into balance.
2. Decreasing benefits. He concludes that the risk adjustment would have an ambiguous effect on the magnitude of benefit cuts.
3. Indexing benefits to prices rather than wages. Linking benefits to prices instead of wages would increase the cost of Social Security benefits and thus increase the shortfall. The chart that follows shows the differences.

Zeldes summarizes the results of his simple model-based recipe for performing the risk adjustment:

1. Large risk adjustments arise even with very small short-run correlations between wage growth and stock returns.
2. Most imbalance measures are significantly smaller under the market value approach.
3. The cost gap between wage indexing and price indexing shrinks when using the risk adjustment: under some parameterizations wage indexing becomes less expensive than price indexing.

As next steps Zeldes plans to:

1. Do additional research to enhance and calibrate macroeconomic models of wage risk.
2. Allow for an inflation risk premium.
3. Make empirical estimates of short-run and long-run wage risk and its correlation with stock prices.
4. Estimate magnitudes of policy changes necessary to restore balance to the system (tax increases, benefit cuts).
5. Encourage governments to adopt market-value approaches to measurement and policy evaluation.

97. Pension Funding And Investment Strategy Issues From Market Value Accounting (Spring 2012)
M. Barton Waring

M. Barton Waring, Global Chief Investment Officer for Investment Strategy and Policy at Barclays Global Investors (retired), presented "Insights About DB Pension Plans From Market Value Accounting." The presentation captures the spirit of key portions of Waring's recently published book, Pension Finance: Putting the Risks and Costs of Defined Benefit Plans Back Under Your Control. Waring says the book provides a "clean-sheet-of-paper" approach to pension accounting and actuarial science using market value accounting principles and discusses its implications (for

contribution and expense risk, for affordable benefit levels, and for investment strategy and policy).

Waring says that in a good-hearted, but perhaps misguided, effort to make good retirement benefits widely available, actuarial science failed to control the risks and costs of defined benefit (DB) plans. This effort supported higher benefits than can be paid for with planned funding levels and biased contributions downward, thus back-loading obligations. To make the DB issues more difficult, a plethora of sufficiently complex actuarial methods forestalls debate and conveys no real information. Despite the complexity, Waring says that "if you manage the economics, the actuarial and accounting versions will follow."

The central question is "What is the right discount rate?" While actuaries prefer the expected return on the pension asset portfolio, economists argue that the expected return on the liability-matching asset portfolio is the correct rate. To date, Waring notes, this argument has only been about measuring the liabilities rather than contributions, the subject upon which he focuses. Of economic interest are three liabilities:

1. Full Economic Liability: Present value of benefits for all past, current, and future employees (less the termination option on future accruals).
2. Present Value of Future Benefit Payments: For all past and current employees (less the termination option); the payment schedule drives expenses and contributions.
3. Benefit Security Liability: The accrued liability representing the promised payments due and owed based on some choice of normal cost method. About this, he says he will have more to say later.

A critical issue in the pension debate is the rate used to discount liabilities. Typically the expected return on assets is used. Waring then provides the following graph to show the results of using each method. No matter what method is used, the benefit payments are the same, and since the investment policy is constant, the returns will be constant. As you can see from the chart below, the lower the rate, the smaller the contribution and the larger the savings for the DB sponsor. The fact, he asserts, is that the higher discount rate is just a cover for reducing the normal cost and contribution payments. This leads him to ask the question "Why discount at the expected return on assets?" He says that the answer is that there are huge savings to the DB sponsor, or at least they appear to be huge.

Waring says the problem is that investors do not get the expected return just because they are long-term investors, while the assumed actuarial discount rate depends on it. He reminds us of the facts: the S&P 500 delivered only a 0.55% per year compound average return for the years from 2000 to 2011. At that rate a dollar grew to only $1.07, far less than the result under the actuarially assumed 8% return, at which a dollar would have grown to $2.52. Thus the resulting assets in the pension plan are only worth 42% of what was expected. Waring says, "No wonder there is a pension crisis." It will take a compound average return of 16% per year for the next 12 years to make up this shortfall and, even with many bad returns being offset by good returns, there is a 50% probability that the ending portfolio could vary from what is projected by a substantial amount.

Showing the outcome for the single-employee example, holding benefit payments and investment returns constant, discounting by the expected return on assets results in larger-than-planned contributions more than half the time (the leftmost chart). In contrast, the chart to the right shows that contributions seldom badly disappoint when the risk-free rate is used. Given the possible shortfalls in contributions that could occur, it is important to ask: When contributions are overdue, who will pay?

1. Contribution rules, with their long-term amortizations and wide corridors, are not remotely likely to make up any resulting accrued liability deficits.
2. In a bankrupt plan, participants will ultimately bear the loss, recovering only a portion of each benefit dollar.

After being told for years that their benefits are secure, participants, Waring warns, should understand that good intentions and blind faith in the expected return assumption will not adequately fund a pension plan. Waring points out that discounting with the expected return on assets is not cheaper but a way to set up a slow-motion bankruptcy. No one else in finance and banking, he says, uses the expected return on a debtor's assets to discount that debtor's future cash flow obligations except the government. "No One Else" and "No Where Else," he says. "Why then," asks Waring, "are pension actuaries, the GASB, and others supporting this harmful practice?"
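Waring's dollar arithmetic can be verified in a few lines. The inputs (a 0.55% realized return versus an assumed 8%, over the 12 years from 2000 to 2011) come from the talk; the helper function itself is only an illustrative sketch:

```python
# Hedged check of Waring's compounding arithmetic. Rates and horizon are
# taken from the talk; the function is a generic illustration.
def growth(rate: float, years: int) -> float:
    """Future value of $1 compounded annually at `rate` for `years`."""
    return (1 + rate) ** years

realized = growth(0.0055, 12)         # ~$1.07: what a dollar actually became
assumed = growth(0.08, 12)            # ~$2.52: what the 8% assumption implied
shortfall_ratio = realized / assumed  # ~0.42: assets ~42% of what was expected

# Catch-up rate: the return needed over the NEXT 12 years so the realized
# dollar reaches what 24 straight years at 8% would have produced.
catch_up = (growth(0.08, 24) / realized) ** (1 / 12) - 1  # ~16% per year
```

The computed values (about $1.07, $2.52, 42%, and 16% per year) match the figures Waring quotes.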
98. Diversification Over Time (Fall 2011)
Barry Nalebuff

Barry Nalebuff, Milton Steinbach Professor, Yale University School of Management, presented "Diversification Across Time," coauthored with Ian Ayres, William K. Townsend Professor, Yale University Law School. It was the last presentation of the Fall 2011 Q-Group meeting.

Nalebuff starts with the motivation for this research, a quote from Paul Samuelson (1969):

[The businessman] can look forward to a high salary in the future; and with so high a present discounted value of wealth, it is only prudent for him to put more into common stocks compared to his present tangible wealth, borrowing if necessary for the purpose.

Based on their work, Nalebuff comes to the following conclusion: buy stocks using leverage when you are young. He provides the following example to make the reason clear. A person has two choices, a fund that allocates:

1. Seventy-four percent of the initial portfolio to equities and the remainder to government bonds, or
2. A leveraged lifecycle fund that begins by allocating 200% of the initial portfolio value to stocks, ramping down to 50% as the person nears retirement.

Instinctively, he says, most people feel that the leveraged strategy is riskier since it starts with the higher proportion of the portfolio in equities and is levered. What they do not factor in is that the leveraged strategy ends with a lower stock allocation at a point when the absolute size of the portfolio is larger.

Using historical stock returns (from 1871 through 2009), Nalebuff demonstrates that with identical mean returns for the two strategies, the leveraged lifecycle strategy has the lower volatility: its lower standard deviation makes it a strategy that better diversifies the stock risk exposure across time with no loss of return.

To assess the two strategies, Nalebuff and his coauthor use simulation to examine a worker's 44-year experience in the markets spanning the period from 1871 through 2009. Each cohort of workers begins working at age 23 and retires at 67. The first cohort was born in 1848, started work in 1871, and retired in 1915. The table on page 24 provides strong evidence that a leveraged lifecycle strategy can reduce retirement risk: the 200/50 strategy has a 21% lower standard deviation in the retirement accumulation.

Nalebuff says there are several reasons the leveraged approach is superior. It:

1. Reduces volatility by having higher lows and lower highs.
2. Is better diversified because it spreads the exposure to stocks across time.
3. Channels all the incremental benefits of better temporal diversification to risk reduction.
4. Brings the investor closer to their utility-maximizing allocation: the improvement in the certainty equivalent of retirement wealth is between 35% and 39%.

As to the impact of major market declines on their results, there are two concerns: losses and the impact of potential margin calls. As for losses, he says that in their monthly data the stock market never declined sufficiently to wipe out the preexisting investments of any retirement group they studied when that cohort adopted a leveraged strategy. To test for the impact of margin calls, they perform a reality check by examining the results for cohorts who lived through the depression years. Nalebuff reports that workers who retired just after the crash were not severely hurt because the leveraged strategy had already eliminated their leverage: workers retiring in 1932 would have had 50% of their portfolio invested in the market when the market lost more than a third of its value. However, because of the success of their investments in previous years, they still would have accumulated a retirement wealth greater than if they had used the constant percentage strategy.

Nalebuff reports on their robustness checks before turning to his conclusions:

1. The data support the Samuelson-Merton notion that people with constant relative risk aversion should invest a constant percentage of their lifetime wealth each period in stock.
2. Young workers require leverage to implement the strategy.
3. Their recommended investment strategy is simple to follow.
4. The potential gains of risk reduction from such leveraged investments are striking.

Nalebuff points out four important caveats. First, their estimation does not take into account the impact of non-portfolio wealth such as housing and human capital. The relevance of this issue will vary across professions, and he suggests that all investment strategies, including target-date funds, should differ by profession, thus reflecting the different indirect exposure to equities via human capital.

Second, their estimation also does not take into account the potential general equilibrium effects that might occur if a substantial segment of investors began adopting leveraged lifecycle strategies. Nalebuff moderates that concern by pointing out that, as a historical matter, economies have successfully adapted to leveraged investments in both housing and education.

Third, their results have implications for legal reform. The natural places to engage in disciplined leveraged purchases are IRA and 401(k) accounts. Yet, with the exception of index options, other leveraged and derivative investments inside these accounts are prohibited. What mitigates this possibility is the fact that an employer who offered workers the option of investing in a fund that implements a leveraged strategy might risk losing its statutory safe harbor.

Fourth, Nalebuff recognizes that legal constraints are not the primary reason that people fail to buy enough stock when they are young. Despite compelling theory and empiricism, many people have a strong psychological aversion to mortgaging their retirement savings. While families are encouraged to buy a house on margin, they are discouraged and/or prohibited from buying equities on margin. We are, he says, taught to think of leveraged investments as having the goal of short-term speculation rather than long-term diversification. As a result, most people have too little diversification across time and too little exposure to the market when young. Based on theory and historical data, the cost of these mistakes, Nalebuff says, is substantial.

Trading, Information, Transaction Cost

99. Forced Liquidations, Fire Sales, And The Cost Of Illiquidity (Fall 2015)
Andrew Weisman

Andrew Weisman presented a lively talk that exposed problems with the common treatment of liquidity risk in investment portfolios. Simply put, Weisman highlighted that most measures of fund illiquidity tend to focus on liquidity during normal times; he shows that investors need to consider potential forced liquidations and fire sales during times when markets are stressed and the holdings of a fund are forced to be re-priced by an interested creditor to the fund—whether it is a counterparty or lender. A key intuition of Weisman's paper is that there is a "threshold effect," where the fund is forced to re-price its holdings when its "true value"
the liquidated value will depend on the Weisman reviewed the factors that affect overall level of stress in the system. This the illiquidity mismatch in a portfolio, which poses a significant problem in thinking about is defined as the difference between the how to mark portfolios and determining the funding (liability side of the balance sheet) prices at which investors can enter and exit and the investments (asset side). These funds. include: Weisman models the credibility threshold 1. Use of leverage, including swaps, futures, as a barrier option, where the difference and margining of long positions between the reported value and the “true” 2. Fund gates, lock-up periods, and notice value of the portfolio (the amount of periods (these tend to mitigate the “overstatement of value”) is the underlying mismatch) asset. The barrier option pays off only when 3. Network factors, including the use of the “true” value is lower than the quoted common service providers (e.g., securities value of the portfolio by a certain percentage. lending counterparties), strategy When this happens—reported value exceeds correlation among many funds, and “true” value by a certain percentage—the commonality in the investors across exercise of the “repricing option” by an different funds (so that they behave “interested party” results in an adjustment of similarly in their investments and the reported value to the ``true’’ value minus redemptions in several funds) a penalty for overreporting. (This penalty represents the amount of “fire sale” discount Weisman introduces the concept of the that results from forced selling triggered by “credibility threshold”. The idea is that the revaluation by the “interested party”). 
illiquid assets can be valued at a “perfect markets” liquid price (e.g., the price at which Using this paradigm, a fund’s reported these assets might ideally trade, based on the value can be modeled as a process that equals prices at which other assets trade) until the the ``true’’ value with some time lag, minus “first interested party” (such as a prime the value of this barrier option. broker or significant investor) takes action to force a sale (and true price discovery) of such Weisman conjectures, based on the serial illiquid securities. Weisman defines the correlation in HFRI hedge fund indices, that “credibility threshold” as the level at which hedge fund managers typically report returns price marks trigger forced selling. that reflect less than 50% of the true change Determining this level is very tricky, as the in the value of their portfolios. He then fund manager may normally value securities proceeds to use the Morningstar-CISDM 158

Hedge Fund Database (which contains both live and dead funds) to estimate the value of the barrier option that results from this level of misreporting. He finds that the average return of these hedge funds, 11.8%/year, is reduced to 6.3%/year when we adjust for the value of the (short) barrier option—a reduction of 5.5%/year.

Weisman further checks the credibility of his barrier option approach by graphing the estimated option value of hedge funds vs. their maximum drawdowns, shown below. The resulting correlation indicates that his barrier option approach reasonably represents the actions of investors.

The key intuition to remember from this talk: if you invest in an illiquid and levered portfolio that is dependent on funding from an external party, then you have written an option to the market—where an interested third party may exercise and force liquidation at "distressed" repricing—which could result in a permanent and significant loss.

Bibliography

Lindsey, Richard R., and Andrew B. Weisman, "Forced Liquidations, Fire Sales, and the Cost of Illiquidity," available at http://www.q-group.org/wp-content/uploads/2015/05/Liquidity.pdf.

100. Trading Costs Of Asset Pricing Anomalies (Fall 2014)
Toby Moskowitz

Toby Moskowitz, Fama Family Professor of Finance at the University of Chicago, employs a unique database provided by AQR of actual execution costs of stocks to estimate the net-of-trading-cost alphas of some standard quantitative strategies. The database consists of nearly $1 trillion of actual trades by AQR from 1998-2013 across 19 developed equity markets. AQR manages $118 billion AUM (as of Oct 2014), and accounts for 1.1% of daily volume in these stock markets. The great majority of the trades in the database occurred during 2007-2013.

There are hundreds of papers in academia that study various alleged anomalies in the cross-section of stock returns; these academic papers typically analyze alphas gross of transaction costs. However, we also need to know whether stock anomalies are profitable net of realistic trading costs. Academics wish to know whether markets are efficient, with frictions, and asset managers need to know if and when they can expect to profit from well-known investment signals.

While the field of market microstructure and several prior papers that used Plexus data, such as Keim and Madhavan (1997), attempt to estimate institutional trading costs, Moskowitz notes that arbitrageurs often obtain much better trading execution than the average institution.

So, the research questions that Moskowitz addresses can be summarized as follows:

1. How large are trading costs faced by large arbitrageurs?
2. How robust are anomalies in the literature to realistic trading costs?

3. At what size of position implementation do trading costs start to constrain arbitrage capital?
4. What happens if we take realistic transaction costs into account ex ante—can we improve the payoff of anomalies?

Moskowitz focuses on four well-known anomalies: size, book-to-market, momentum, and short-term (monthly) reversals in stock returns. While Fama and French (1993), Jegadeesh (1990), and Jegadeesh and Titman (1993) document these patterns gross of trading costs, it is not clear which survive after costs. Moskowitz notes that he does not take a position on whether these stock return predictors are truly anomalous or whether they are compensation for risk. He merely wishes to see whether they survive ex trading costs, whether we call them alpha or "smart beta."

Moskowitz notes that AQR implements these strategies, but not exactly as they are done in the papers. For example, the book-to-market factor (HML) is formed with equal-weighting of securities, which makes it expensive to trade in reality. AQR implements modified versions of these strategies which avoid excessive trading costs. So, Moskowitz measures net-of-trading-cost returns of long-short portfolios based on the implementation of, e.g., HML actually done by AQR at each particular point in time (a "sampled" HML strategy). For instance, AQR trades, on average, 62% of the stocks in Fama-French's HML portfolio in the U.S., and 35% of HML in non-U.S. developed markets.

He attempts to separate the anomaly profits from profits achieved by a savvy trader who generates some additional alpha while trading—perhaps through a smart provision of liquidity to the market or through careful splitting of orders across exchanges. To do so, to compute the price impact, he compares the VWAP of the executed trade (by AQR) to the open price of the day for each desired trade. He then separates this total impact into (1) permanent and (2) temporary impacts. Permanent impact is the amount of price impact that does not reverse within one day after the trade package is complete.

He finds that AQR's permanent, temporary, and total trading costs are 8.5, 2.5, and 11 bps, on average. He interprets this to mean that we need to deduct 11 bps, on average, for a careful arbitrage strategy that implements these four stock return predictors—much lower than predicted by the prior literature. Interestingly, Moskowitz finds that inflow-driven and discretionary trades have similar trading costs. That inflow-related trades, which must be made without undue delay, have similar costs reassures us that the trade costs in the database do not reflect endogeneity, that is, AQR changing its trades based on point-in-time liquidity for a particular stock. If that were true, the trading costs might not be realistic and generalizable to another investor.

Moskowitz notes that 95% of AQR trades, all achieved through limit orders, are completed within one day; these trades are conducted through direct market access on electronic exchanges. AQR randomizes the trade size, time, and actual stocks traded at a particular point in time in order to limit the ability of other traders to reverse-engineer and then "front-run" their strategies.

Moskowitz uses the actual AQR trades to estimate a multivariate trading cost model that controls for a time trend in trade cost, market cap, etc. Key variables in the model are the "fraction of daily volume" and "sqrt(fraction of daily volume)" represented by the AQR trades. (Of interest to Q-Group members, Moskowitz plans to publish the trading cost model coefficients on the web, including price-impact breakpoints, so that researchers and investors can evaluate the trading costs of other strategies.)
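The functional form of such a cost model, cost regressed on participation and its square root, can be sketched with an ordinary least-squares fit. Everything below is synthetic: the data, the assumed "true" coefficients, and the helper name predicted_cost_bps are placeholders for illustration, not AQR's proprietary model:

```python
# Illustrative sketch of a trading-cost regression of the kind described:
# cost (bps) on f = fraction of daily volume and sqrt(f). Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
f = rng.uniform(0.001, 0.05, 500)                # participation rates
true_cost = 2.0 + 40.0 * f + 60.0 * np.sqrt(f)   # assumed "true" model, bps
cost = true_cost + rng.normal(0.0, 0.5, f.size)  # add execution noise

# Fit cost = b0 + b1*f + b2*sqrt(f) by ordinary least squares.
X = np.column_stack([np.ones_like(f), f, np.sqrt(f)])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)

def predicted_cost_bps(frac: float) -> float:
    """Predicted cost in bps for a trade that is `frac` of daily volume."""
    return beta[0] + beta[1] * frac + beta[2] * frac ** 0.5
```

The concave sqrt term is what makes costs rise quickly at small participation and flatten at larger sizes, which is why break-even capacity calculations like those below are possible at all.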
Next, he uses the fitted regression to simulate the net-of-cost returns to HML, SMB, UMD, and return reversal portfolios. One version uses the actual trade sizes of AQR; the second extrapolates to the prescribed trade sizes of the strategies from the source academic papers using the fitted regression model.

From this analysis, Moskowitz estimates trade costs at different levels of total market trading in the strategies, as well as break-even total strategy sizes. Break-even is defined as the total amount of asset management money, among all investors, that could be employed such that net-of-trading-cost abnormal returns are zero. These break-even levels, for long-short portfolios that capture size, value, and momentum, are $103, $83, and $52 billion in the U.S., respectively, and $156, $190, and $89 billion globally. A combination of value and momentum, used by many quantitative shops, has even higher capacity ($100 billion U.S., $177 billion globally) due to netting of trades across negatively correlated positions (i.e., value and momentum sometimes involve taking opposite positions). Short-term reversal strategies, on the other hand, are not scalable beyond $9 billion in the U.S., or $13 billion globally.

References

Fama, Eugene F., and Kenneth R. French, 1993, Common risk factors in the returns on stocks and bonds, Journal of Financial Economics 33, 3–56.

Jegadeesh, Narasimhan, 1990, Evidence of predictable behavior of security returns, Journal of Finance 45, 881–898.

Jegadeesh, Narasimhan, and Sheridan Titman, 1993, Returns to buying winners and selling losers: Implications for stock market efficiency, Journal of Finance 48, 65–91.

Keim, Donald B., and Ananth Madhavan, 1997, Transactions costs and investment style: An inter-exchange analysis of institutional equity trades, Journal of Financial Economics 46, 265–292.

101. A Primer On Fast Electronic Markets (Spring 2014)
Larry Harris

At the Spring 2010 Q-Group meetings Harris had provided a primer on electronic markets. Events, including the recent publication of Michael Lewis's Flash Boys and increased concern by regulators and Congress, have elevated the importance of understanding these markets. As a result, Harris agreed to provide an optional tutorial for Q participants on these markets and to update the data from his 2010 "Equity Trading in the 21st Century."

Harris describes electronic order matching systems and says they are very cheap to operate once built. The systems do exactly what they are programmed to do and maintain perfect audit trails.
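As a rough illustration of what such an order-matching system does, here is a minimal, hypothetical price-time-priority limit order book. It is a toy sketch of the general technique, not any exchange's actual engine:

```python
# Toy sketch of a price-time priority matching engine, for illustration only.
import heapq
from dataclasses import dataclass, field
from itertools import count

_seq = count()  # arrival sequence provides time priority

@dataclass(order=True)
class Order:
    sort_key: tuple = field(init=False)
    side: str = field(compare=False)    # "buy" or "sell"
    price: float = field(compare=False)
    qty: int = field(compare=False)

    def __post_init__(self):
        # Buys: highest price first; sells: lowest first; ties by arrival.
        key_price = -self.price if self.side == "buy" else self.price
        self.sort_key = (key_price, next(_seq))

class Book:
    def __init__(self):
        self.bids, self.asks = [], []

    def submit(self, order: Order):
        """Match against the opposite side, then rest any remainder."""
        fills = []
        book = self.asks if order.side == "buy" else self.bids
        while order.qty and book:
            best = book[0]
            crosses = (order.price >= best.price if order.side == "buy"
                       else order.price <= best.price)
            if not crosses:
                break
            traded = min(order.qty, best.qty)
            fills.append((best.price, traded))  # trade at the resting price
            order.qty -= traded
            best.qty -= traded
            if best.qty == 0:
                heapq.heappop(book)
        if order.qty:
            heapq.heappush(self.bids if order.side == "buy" else self.asks,
                           order)
        return fills
```

Marketable orders execute at the resting order's price and any remainder rests in the book, which is the essence of the price-time priority rule these systems enforce deterministically.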
These systems have largely displaced floor-based trading systems in all instruments for which order-driven markets are viable. He goes on to describe:

1. Types of Electronic Traders
2. Main Proprietary Strategies, including Liquidity Trading and Parasitic Low Latency Strategies
3. Electronic Buy-Side Traders
4. Needs for Speed
5. How To Trade Fast
6. Why Electronic Traders Are Efficient

He says that the efficiency of electronic trading strategies led to their widespread adoption by proprietary traders, buy-side traders, and their brokers. At the same time, electronic traders have displaced traditional dealers, arbitrageurs, and brokers in electronic order-driven markets.

The update of the empirical results from his 2010 "Equity Trading in the 21st Century" shows, Harris says, that market quality in the U.S. has remained high by all measures, despite concerns expressed by some practitioners and regulators. Improvements in market quality have benefited small traders, and new evidence indicates that they have also benefited the institutional trader executing very large orders over many days.

While he finds this encouraging, he notes that certain changes in market structure could produce even better markets. He has particular concerns about maker/taker pricing and its cousin, taker/maker pricing. The latter represents a subversion of the sub-penny rule of Regulation NMS, with the attendant implications for front-running trading. Also of concern is front-running of large institutional traders by traders employing computerized pattern recognition systems. Large institutional traders could be protected from these strategies by a simple change in trade reporting procedures. Finally, he notes that high-frequency traders are engaged in a dissipative arms race that is not benefiting public investors. A slight random delay in order processing could reduce the importance, and thus the costs, of winning this race.

Various proposals to change market structure are troubling, and he believes they would damage market quality substantially. Foremost among these are proposals to impose transaction costs on trading and minimum resting times on quotes. Both proposals would hurt liquidity without producing the desired improvements in market quality.

With so much current controversy about equity market structure, taking a fresh look at the structure of our markets from a broad perspective is important. Harris concludes that the empirical results show that the markets are functioning reasonably well despite the many calls for change. At the same time, various economic distortions and principal-agency problems suggest that market structures could be enhanced to better serve investors. Such changes should be guided by sound economic and legal principles when their application is well understood. When it is not, pilot studies and policy phase-ins can produce additional economic evidence that could improve policy decisions. However, regulators should not create pilot studies simply to defer good policies that are politically difficult to implement.

102. The High-Frequency Trading Arms Race: Frequent Batch Auctions As A Market Design Response (Spring 2014)
Peter Cramton

Cramton starts with some anecdotes about the current state of electronic, high-frequency trading, especially in light of last week's publication of Michael Lewis's Flash Boys:

• In 2010, Spread Networks invested $300mm to dig a high-speed fiber optic cable from NYC to Chicago. It shaves round-trip data transmission time from 16ms to 13ms.
• Within three years, Spread's fiber optic cable was obsolete.
• Industry observers joke that 3ms is an eternity and that the next innovation will be to dig a tunnel, thus avoiding the planet's pesky curvature.
• Microwave technology has advanced sufficiently, bringing the round-trip NY-Chicago transmission time from 10ms to 9ms and now to 8.5ms.
• Analogous races are occurring throughout the financial system, sometimes measured as finely as nanoseconds.
• Last month Business Wire reported on the use of lasers to transmit data.


Cramton says that the HFT arms race is an optimal response from market participants given the current market design. He believes, however, that continuous-time trading and the continuous limit order book are basic flaws in modern financial market design. He proposes replacing continuous-time trading with discrete-time frequent batch auctions: uniform-price sealed-bid double auctions conducted at frequent, but discrete, time intervals (e.g., every 1 second or 100ms). He argues this would be a minimal departure from the current system, in which the arms race is never-ending and the market design harms liquidity and is socially wasteful. The cost to investors of this change would be a very short wait time-to-trade.

One of the arguments against discrete-time batch auctions is that they may facilitate breakdowns in correlations across assets which normally would be highly correlated. Cramton argues that correlations break down for technical reasons at some trade frequency even under continuous trading. The following table, for instance, shows pairwise correlations of related companies' stock prices at various horizons. While all pairs are highly correlated at a one-minute interval, the correlations start to break down at intervals of ten seconds or less. The following chart depicts the correlation between the returns of the E-mini S&P 500 future (ES) and the SPDR S&P 500 ETF (SPY) bid-ask midpoints as a function of the return time interval for every year from 2005 to 2011.

As is readily apparent, in 2005 the pairwise correlation had completely broken down (i.e., was below 10%) at a 100ms trade interval. By 2011, that level of breakdown did not occur until trade intervals of less than 10ms were reached. This chart, Cramton says, is a clear depiction of the high-frequency arms race.

At short enough trade horizons, there is room for arbitrage between the ES future and the SPY ETF. The following table presents data, over the January 2005 to December 2011 period, on arbitrage opportunities between the E-mini S&P 500 future (ES) and the SPDR S&P 500 ETF (SPY). Cramton goes on to estimate the annual potential value of ES-SPY arbitrage at $75mm, suspecting it is an underestimate. Furthermore, he says, ES-SPY is just the tip of the iceberg when it comes to the value of execution speed - one of many potential arbitrage opportunities.

He concludes his talk with some of the open questions: Isn't the arms race just healthy competition? What's the market failure? Why hasn't some exchange tried it? What is the equilibrium if there are both batch and continuous exchanges operating in parallel? What would be the mechanics? What are some of the likely unintended consequences? What is the optimal time interval? What would be the role of trading halts? What trade information should exchanges release and when? What are the adverse information impacts from a structure that will still allow faster traders to act between the batch auctions? Interesting proposal, intriguing questions.

103. Those Who Know Most: Insider Trading In 18th C. Amsterdam (Fall 2013)
Peter Koudijs

How is private information incorporated into security prices? This is the question that Peter Koudijs researches using information from the market for English securities from 1771-1777 and 1783-1787, periods characterized by peace on the European continent. His interest is in how information traveled between the stock market in London and the secondary market in Amsterdam. Koudijs suggests that the arrival and departure of mail packet boats, sailing ships between the ports of London and Amsterdam, signaled the possible arrival of new information about such things as prices in the London market. This era, Koudijs says, provides a unique opportunity to infer how insiders or informed agents trade on their information and the process by which private signals are revealed to the market as a whole.

The packet boats carried both public news, including newspapers which contained stock prices, and private letters. They were practically the only source of news. Although the exact sailing time between London and Amsterdam depended on the marine weather conditions, the median travelling time was 4 days. Ships left on preset days (Wednesdays and Saturdays), thus making information arrival relatively discrete. This discrete arrival of information, depicted below, allows the identification of how information is revealed over time.

Koudijs gathered what was needed to study the information flows between these two ports from Dutch and English newspapers:
1. Daily stock prices in London and three times a week in Amsterdam.
2. Arrival and departure dates from both ports.

In addition, to estimate sailing time, he gathered marine weather data from a Dutch observatory.

Using this data he reconstructs the arrival dates of sailing ships and examines the movement of English security prices between ship arrivals. Two models of information dispersion were considered by Koudijs:
1. Agents with private information act in a competitive fashion without taking into consideration the impact on prices. They trade right after they receive a private signal and do so aggressively. Thus, the privately informed immediately reveal their private information.
2. Insiders are strategic and take into account the price impact of their trades. They understand that profits fall as prices become more informative and thus constrain their behavior. Price discovery, the process by which private information is incorporated into price, is prolonged.

Koudijs says that strategic behavior is not a sufficient condition for slow price discovery.

To examine the data, Koudijs uses the classical two-period model of strategic insider trading developed by Kyle (1985). This model posits that an informed trader trades slowly over time. The rate at which private information is incorporated into prices depends on when the agent expects the private signal to be publicly revealed. Using his Amsterdam and London packet trade data, Koudijs finds:

1. Private information is slowly revealed to the market; the speed depends on how long insiders expected it to take for the private signal to be publicly revealed.
2. Price movements in Amsterdam between the arrival of boats were correlated with the contemporaneous (but as of yet unreported) returns in London. This is consistent with the presence of a private signal that is slowly incorporated into prices in both markets.
3. The time to arrival of the next packet impacts the speed of revelation. The initial co-movement of Amsterdam and London prices was stronger when the next boat was expected to arrive early. This is consistent with strategic behavior on the part of the insider.
4. The importance of private information is underlined by the response of London prices to price discovery in Amsterdam: conditional on its own price discovery, the London market updated its beliefs based on price changes in Amsterdam.
5. The co-movement between Amsterdam and London reflected permanent price changes: it is unlikely that the co-movement was the result of correlated liquidity trades or other transitory shocks.

The arrival of news through channels other than the official packet boats did not account for the empirical results.

One important thing about his study, Koudijs noted, was that information flows during these time periods were less complex than flows today. Moreover, they can be perfectly reconstructed. Since information arrived in a non-continuous way, clean identification of private information was possible. The lengths of time over which insider information remained private in Amsterdam varied exogenously, and this allowed for a direct test of the strategic behavior of insiders.

Koudijs is aware that historical results may not generalize or may not be relevant today. About this he makes the following points:
1. While insider trading has become illegal since the 18th century, and one might think that private information has therefore become less relevant, empirical evidence does not square with this conclusion.
2. The speed of information transmission was a lot slower, and transmission more infrequent, in the 18th century. However, private information itself, the fundamental thing, was not the result of primitive communication technology but the result of London insiders having access to superior sources of information. This is similar to today, where corporate insiders are likely to be better informed than the market as a whole.

Koudijs concludes that while he studied a different venue in a different time, only the speed of transmission has changed. While things are different today - more advanced technology makes private information much shorter-lived - markets have not become more informationally efficient. Long-lived private information, held by monopolistic agents, is as relevant today as it ever was.

104. No News Is News: Do Markets Underreact To Nothing? (Spring 2013)
Kelly Shue

Shue asks whether, absent news, the passage of time contains information and whether the market incorporates this information. To illustrate the point she references the Sir Arthur Conan Doyle quote: "The dog did nothing in the night time ... that was the curious incident." She mentions other examples where the absence of news implies information:

1. A citizen who lives through a sustained period without terrorist attacks should update positively on the effectiveness of the government's anti-terrorism programs.

2. A manager who observes that an employee has executed a difficult task without incident should update the employee's quality positively.

"No news" also can be news in many financial contexts. For example, if:
1. A firm does not lay off workers or declare bankruptcy after a macroeconomic shock, investors should update positively on the firm's underlying strength.
2. A firm repeatedly fails to announce new investment projects, investors may be justified in updating negatively on the firm's growth prospects.
3. Investment returns that seldom display newsworthy variation can reveal information about the underlying investment or, if overly consistent, may be suggestive of fraud, as in the case of Bernie Madoff's investment fund.

In this talk, Shue uses the context of mergers to examine whether markets fully incorporate the information content of "no news." Mergers provide a convenient empirical setting for the analysis because each merger has a defined starting date, the announcement date, and the returns to merger arbitrage depend heavily on a definitive, but stochastic, ending point.

If markets fully incorporate this information, expected returns should be sufficient to compensate for the probability of a failed merger and reward investors for the use of their capital. In particular, Shue says, there is then no reason for hazard rates and realized rates of return to be strongly related. However, Shue describes finding a strong relation between the hump shape of the hazard rates and the hump shape of mean merger arbitrage returns - with cash-financed mergers earning a mean return of 20bp per week in the first weeks after a merger announcement. Returns peak at 40bp per week around event week 25, and then decline sharply over time.

Shue posits two possible explanations for the findings. First, she suggests that the positive correlation between returns and hazard rates can be explained by a behavioral model in which agents underreact to the passage of time. If agents do not fully appreciate the variation in hazard rates associated with the passage of time, they will behave as if hazard rates are less time-varying (flatter) than they are. The second plausible explanation is that it is a rational reaction to changes in risk or trading frictions over time.

After controlling for other kinds of risk, Shue concludes that event-time variation in selling pressures and asymmetric information is unlikely to explain the observed return patterns. She concludes that aggregate hazard rates of merger completion predict merger returns because markets underreact to the information content of the passage of time.

Importantly, she finds underreaction is concentrated in the subset of deals with lower liquidity and higher transaction costs, suggesting that trading frictions prevent sophisticated investors from arbitraging away the mispricing.

Shue goes further when she concludes that evidence of underreaction in merger markets also is suggestive of a more general phenomenon in which agents underreact to the passage of time because it is often less salient than explicit news stories. Underreaction to no news can be persistent, and can potentially exacerbate asymmetric information problems in many other contexts. As to future research, she wants to explore the extent to which underreaction to no news pervades other contexts such as the interactions between voters and politicians, managers and employees, or investors and insiders.
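The hazard rates at the center of Shue's analysis are simple to compute: the hazard for event week t is the fraction of still-pending deals that resolve in that week. A minimal sketch, using a hypothetical toy sample of deal resolution times (the data below are invented for illustration; this is not Shue's dataset or code):

```python
# Minimal sketch of an empirical hazard-rate calculation: for each event week t,
# hazard(t) = deals resolved in week t / deals still pending entering week t.
# The resolution weeks below are hypothetical, made up purely for illustration.

from collections import Counter

# resolution week (since announcement) for each deal in a toy sample
resolution_weeks = [3, 5, 5, 8, 10, 10, 10, 14, 20, 25]

def hazard_rates(weeks: list[int]) -> dict[int, float]:
    """Empirical hazard: resolutions in week t divided by deals at risk entering week t."""
    resolved_at = Counter(weeks)
    at_risk = len(weeks)
    rates = {}
    for t in sorted(resolved_at):
        rates[t] = resolved_at[t] / at_risk
        at_risk -= resolved_at[t]
    return rates

for week, h in hazard_rates(resolution_weeks).items():
    print(f"week {week:2d}: hazard = {h:.2f}")
```

If markets fully impounded the information in the passage of time, expected merger-arbitrage returns would not need to track this curve; Shue's finding is that they do.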

105. Which News Moves Stock Prices? A Textual Analysis (Spring 2013)
Shimon Kogan

Shimon Kogan asks, what moves stock prices? In his Presidential Address to the American Finance Association, Richard Roll (1988) found little relation between stock price moves and news - a finding that has been confirmed in many other studies. To challenge this, Kogan's research extends previous textual-analysis methodology and utilizes an information extraction platform, "Stock Sonar". This system uses a dictionary-based sentiment measure to count positive and negative words in a news article, and analyzes phrase-level patterns to decide whether the article contains relevant news. As an example of what "Stock Sonar" can do, Kogan says that from 2000-09 the Dow Jones Newswire produced over 1.9 million stories about S&P 500 companies, roughly 50% of which were estimated to contain relevant news. Kogan classifies articles from 2000-09 into possible company events, and classifies each as either unrelated (Unid) or related (Iden) to an identifiable event. Of the 1,229,359 total news days, 57% had no news, 30% were unidentified and 13% were identified news days (as shown in the table that follows).

Kogan then turns to the question of whether this refined measure of news can be used to revisit Roll's empirical no-news-moves-stock-prices finding. Based on the variance ratios (reported in the table below) Kogan concludes that the answer is "yes": the median stock exhibits return variance on identified news days that is 2.2 times the variance of no-news days. These results appear robust: over 90 percent of stock variance ratios exceed 1.0 on identified news days.

Kogan goes on to report direct evidence that overturns Roll's anomalous result. He uses one- and four-factor pricing models, finding that the R²s are similar on no-news and unidentified-news days, consistent with Roll's results. However, on identified news days R² is much lower, as shown in the following table.

Kogan concludes that this research stands in sharp contrast to the last 25 years of literature on the relationship between stock prices and news. Whereas past research failed to find a strong link, he finds that when information is more properly identified, a much stronger link can be found.

Most, maybe all, of us believe that news impacts investor sentiment and can change investor action. Kogan gives us a new way to look at news, using computer-based information-extraction technology to test our beliefs. He identifies a link between news and stock price changes, although he cannot yet answer the question of whether investors can profitably trade on news releases. The remaining, and perhaps more difficult, questions are whether news releases can be forecast and/or whether increases in the speed of information incorporation into investor action make a difference. Kogan intends more research to answer questions like these.
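Stock Sonar itself is a proprietary phrase-level system, but two simpler ingredients of the approach - dictionary-based tone counting and the news-day variance ratio - can be sketched directly. The word lists and return series below are hypothetical stand-ins, not Kogan's actual dictionary or data:

```python
# Two toy ingredients of the analysis: (1) a dictionary-based tone score that
# counts positive minus negative words in an article, and (2) the ratio of
# return variance on identified-news days to no-news days. The word lists and
# return series are hypothetical, not Kogan's dictionary or data.

from statistics import pvariance

POSITIVE = {"beat", "growth", "record", "upgrade"}
NEGATIVE = {"miss", "loss", "lawsuit", "downgrade"}

def tone(article: str) -> int:
    """Positive-word count minus negative-word count."""
    words = article.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

score = tone("Earnings beat estimates and analysts see record growth")
print(score)  # 3

# Variance ratio for one stock: > 1 means identified-news days move prices more.
news_day_returns = [0.021, -0.034, 0.015, -0.028, 0.040]
no_news_returns = [0.012, -0.018, 0.010, -0.008, 0.015]

vr = pvariance(news_day_returns) / pvariance(no_news_returns)
print(f"variance ratio = {vr:.1f}")  # 5.0 for this toy sample
```

Kogan's reported statistic - a median identified-news-day variance 2.2 times that of no-news days - is the same ratio computed stock by stock over the full 2000-09 sample.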

106. High Frequency Trading And Price Discovery (Fall 2012)
Jonathan Brogaard

Brogaard said that financial markets have two functions important for asset pricing: liquidity and price discovery. Historically, financial markets relied on intermediaries to facilitate these functions. When stock exchanges became fully automated, the markets' trading capacity increased and intermediaries were enabled to expand their use of technology. This reduced the need for human market makers and led to the rise of a new class of intermediaries, typically referred to as high frequency traders.

Many do not believe that high frequency trading (HFT) is benign. To demonstrate this, Brogaard presented a sample of stories, appearing during 2009, about HFT and its potential for harm.

Brogaard acknowledged that these concerns are legitimate. However, he said, HFT is "socially useless" only if there is no new price discovery. To examine price discovery, Brogaard used a state-space model which incorporates the interrelated concepts of price discovery and price efficiency. He used the model to decompose price movements into permanent and temporary components and to relate changes in both to HFT. The permanent price movement component is normally interpreted as information, and the transitory component is seen as pricing errors (also referred to as transitory volatility or noise). He used a NASDAQ dataset that includes trading data on a stratified sample of stocks in 2008 and 2009 and provides information on whether a trade is liquidity-demanding (HFTD) or liquidity-supplying (HFTS).

Brogaard stated that he found the following impact on price discovery:
1. Overall, HFT trades make prices more efficient.
2. Results are stronger on high-volatility days.
3. Market order trading shows positive correlation with the efficient price and negative correlation with pricing error/noise. This is consistent with forecasting both parts of returns.
4. Limit order trading coefficients have opposite signs: negative with the efficient price and positive with pricing error/noise.
5. The efficient price result is a consequence of standard adverse selection of liquidity providers.
6. Noise could be the result of adverse selection, manipulation, and/or order anticipation.

The most important issue at this point is whether these passive trades make money. The answer, Brogaard said, is yes. They earn the spread and liquidity rebates, although overall HFT profitability is quite low (around $0.03 per $10,000 traded).

Brogaard provided considerable analysis before presenting these conclusions:
1. Overall, HFT increases the efficiency of prices by trading in the direction of permanent price changes and in the opposite direction of transitory pricing errors. This is done through marketable orders. In contrast, non-marketable HFTS (liquidity-supplying) orders are adversely selected on both the permanent and transitory components of prices.
2. HFT marketable orders' informational advantage is sufficient to overcome the bid-ask spread and trading fees to generate positive trading revenues.
3. HFT predicts price changes occurring a few seconds in the future.
4. Brogaard said that there is no evidence that HFT contributes directly to market instability in prices. In fact, during a period of relative market turbulence from 2008 to 2009, overall HFTs traded in the direction of reducing transitory pricing errors, both on average days and on the most volatile days measured.
5. HFT imposes adverse selection costs on liquidity suppliers, both overall and at times of market stress. This could lead, he said, to non-HFT liquidity suppliers withdrawing from the market and indirectly result in HFT reducing market stability.

Brogaard concluded that HFTs are a type of intermediary. When thinking about the role HFT plays in markets, it is natural to compare the new market structure to the previous one. There are some primary differences: there is free entry into HFT; HFT does not have a designated role with special privileges; and HFT does not have special obligations.

When considering the optimal industrial organization of the intermediation sector, HFT more resembles a highly competitive environment than a traditional market structure. A central question, Brogaard said, is whether possible benefits from the old, more highly regulated intermediation sector outweigh the disadvantages of lower innovation and higher entry costs that are typically associated with regulation.

107. The Impacts Of Automation And High Frequency Trading On Market Quality (Fall 2012)
Robert Litzenberger

The U.S. equity markets have changed from predominantly manual markets with limited competition to highly automated and competitive markets. One method that has attracted the attention of the media and policy makers is high frequency trading (HFT). Studies using proprietary exchange-provided data that distinguishes activity by HFT firms show they contributed directly to narrowing bid-ask spreads, increasing liquidity, and reducing transitory pricing errors and intra-day volatility.

To begin his analysis, Litzenberger discussed the structure of pre-electronic markets and the evolution of electronic trading. The idea of an electronic marketplace was articulated far earlier than the technology allowed. To emphasize this he pointed to a statement by Don Weeden, a "prophet," who wrote the following to the SEC in 1969:

"With today's electronic miracles available to the industry, all market makers wherever located could be combined into a central, interrelated market for fast and efficient access by investors to all its segments. The true central marketplace demands access to all available pools of positioning capital for maximum liquidity."

It took four decades to create Weeden's "electronic miracle." NASDAQ was first to automate. Other exchanges followed in 2005 after rules that negated the advantages of automatic trading networks (ATS) were lifted. One such rule was the "trade-through rule," under which all orders in NYSE shares were required to be exposed to the floor for a 30-second period for price improvement before trading. By early 2005, ATS accounted for half of all trading.

Litzenberger noted that HFT has certain characteristics:
1. Market making that uses resting orders to quote two-sided prices and rapidly adjusts quotes in response to market conditions.
2. Profit that comes from the bid-ask spread, reduced by adverse selection.
3. Directional trading that uses marketable orders and includes mean-reverting and momentum strategies.

In practice, he said, distinctions between market making and directional trading are less clear. As a result of regulatory change and the decimal rule, spreads declined between 1993 and 2002. They have declined further since 2006 and the onset of automated trading. The declines, Litzenberger said, are directly related to HFT.

This is not only true in the United States. Citing the work of others, Litzenberger said that liquidity has increased, trade sizes have decreased, and block trades remain unused. In addition, there are positive trading revenues from HFT, although they have declined with greater competition.

Litzenberger summarized what we know as follows:
1. A large and growing body of evidence shows that the quality of the U.S. equity markets has improved significantly over recent decades. Many of the benefits can be attributed to improved competition, automation, and high frequency trading.
2. The benefits include a reduction in quoted and effective bid-ask spreads, an increase in posted liquidity at the NBBO and within six cents of the NBBO, and a decline in the transitory price pressure impacts of large trades.
3. Net HFT marketable order flow is positively related to permanent price movements and inversely related to transitory price movements.
4. Marketable orders are sufficiently profitable to offset execution costs and the bid-ask spread. Resting orders earn sufficient spreads and rebates to offset the impact of adverse selection.
5. Large buy or sell programs implemented through execution algorithms and facilitated by HFT are consistent with the observed decline in trade sizes.
6. Net HFT resting orders are negatively related to permanent price movements and positively related to transitory pricing errors. To Litzenberger, this suggests that resting orders provide liquidity to informed traders and absorb adverse selection.

Much of the empirical evidence on the direct impact of HFT on the U.S. equity markets relies heavily on limited samples of proprietary data provided by NASDAQ and BATS to several financial economists. Broadening these samples to include other trading venues, including more stocks and time periods, and making the information more widely available, Litzenberger said, would result in more empirical research on a larger data set and more reliable inferences drawn from this data.

Finally, Litzenberger encouraged thoughtful initiatives such as pilot programs and structured deployments of rule and technology changes to provide the opportunity for further empirical study and better understanding of the markets.

108. Noise As Information For Illiquidity (Spring 2012)
Jiang Wang

Jiang Wang, Mizuho Financial Group Professor, MIT Sloan School of Management, CAFR and NBER, presented "Noise as Information for Illiquidity," coauthored with Xing Hu, Assistant Professor, University of Hong Kong, and Jun Pan, Professor of Finance, MIT Sloan School of Management, CAFR and NBER.


If the Financial Crisis taught us anything, it is that liquidity is essential for markets. However, Wang says, it is only partially understood and there is no clarity about:
1. How to measure liquidity.
2. What determines liquidity.
3. Whether there is commonality in liquidity across markets.
4. How liquidity affects asset prices.

Wang says that noise in asset prices (pricing errors) reflects market illiquidity:
1. During normal times, an abundance of arbitrage capital forces prices close to fundamentals, smoothing out variations around the yield curve.
2. During times of crisis, a shortage of arbitrage capital allows prices to deviate from fundamentals. Thus, limited arbitrage capital allows more noise in yields.

For Wang, the amount of noise in yields provides a measure of illiquidity.

The basic premise in Wang's work is that the abundance of arbitrage capital during normal times helps smooth out the Treasury yield curve and keep the average dispersion low. During liquidity crises, however, the lack of arbitrage capital forces proprietary trading desks and hedge funds to limit or even abandon their relative value trades, leaving yields to move more freely and resulting in more yield curve noise. Wang argues that these abnormal noises in Treasury prices are a symptom of a market with a severe shortage of arbitrage capital. Furthermore, this is not unique to the Treasury market and can impact overall financial markets.

In this study he focuses on the U.S. Treasury market for two reasons. First, it is by far the most important asset market in the world, and a shortage of liquidity in this market provides a strong signal about liquidity in the overall market. Second, the fundamental values of Treasuries are determined by a small number of interest rate factors that are easily captured empirically, making it possible to create a more reliable measure of price deviations.

Using the CRSP daily Treasury database, he constructs a noise measure by backing out a smoothed zero-coupon yield curve (the yield curve used to price all available bonds on that day). He then aggregates the deviations across all bonds to create a measure of noise. Wang notes that in the fixed-income literature, deviations from a given pricing model are often referred to as noise.

Wang says the results are rather informative regarding the liquidity condition of the overall market. During normal times the noise is kept to an average of 3.94 basis points, comparable to the average bid/ask yield spread of 2 basis points. Because of this he concludes that arbitrage capital is effective in keeping the deviations within an unattractive range given transaction costs during normal times. However, during a crisis the noise measure spikes up much more prominently than the bid/ask spread, thus implying a misalignment in bond yields sufficiently large to attract relative value arbitrageurs. The following charts contrast the yield curve on normal days (the first chart) and during crisis periods (the second chart). The curves represent the key period and two time periods around the key date. Note the legend provides the dates and the yield curve deviations from those on the key date.

Clearly, the size of the noise measure spiked during the Financial Crisis. This is further shown on the event-annotated chart in the next column.

Wang says that the sharp rise of the noise measure during crises of different origins and causes suggests that the noise measure may capture market-wide liquidity risk. He wonders if the noise factor can explain asset returns. Specifically, he examines whether the returns on assets he deems sensitive to changes in market-wide liquidity (such as hedge funds and the currency carry trade) can be explained by his noise measure, and finds that it explains cross-sectional variation in hedge fund returns and currency carry trade strategies. Liquidity is important.

109. What Does Equity Orderflow Tell Us About The Economy? (Fall 2011)
Michael Brandt

Michael W. Brandt, Kalman J. Cohen Professor of Business Administration, Duke University and NBER, presented "What Does Equity Sector Orderflow Tell Us about the Economy?", which he coauthored with Alessandro Beber, Amsterdam Business School, University of Amsterdam and CEPR, and Kenneth A. Kavajecz, Associate Professor of Finance, School of Business, University of Wisconsin - Madison.

Investors rebalance their portfolios as their views about expected returns and risk change. In this paper Brandt uses empirical measures of portfolio rebalancing to back out investors' views, specifically their views about the state of the economy. His focus is on orderflow, the act of initiating the purchase or sale of securities, not on prices or returns. Orderflow, Brandt says, is the conduit through which information about economic fundamentals is aggregated into asset prices. Asset prices help forecast business cycles, and orderflow is the mechanism by which asset prices change.

This raises the question, Brandt says, of how orderflow is related to current and future economic conditions and whether it contains less, the same, or more information than prices or returns. Orderflow contains:
1. Less information if a substantial portion of the price formation process is due to unambiguous public information resulting in instantaneous price adjustments.
2. The same information if it simply passes through to asset prices.
3. More information if unique information relative to prices is not fully incorporated in them but is passed through investors' trading behavior.

The latter is possible if, Brandt says, orderflow represents the actions of investors while returns reflect consequences.

There are many different ways to investigate these issues, just as there are many strategies investors use to adjust their portfolios in response to changes in their views about economic fundamentals. In this research Brandt and his colleagues focus on sector rotation, a highly publicized investment strategy that exploits differences in the relative performance of sectors at different stages of the business cycle. Specifically, they analyze the dynamics of orderflow across ten U.S. equity sectors to determine whether sector-based portfolio adjustments are related to the current and future state of the macro economy and the aggregate stock and bond markets. As a result of the research, Brandt says they find that:
1. Sector orderflow movements predict changes in the expansion/contraction index and the future performance of the bond markets.
2. Orderflow contains more and different information when compared to such things as returns.
3. The nature of the information is common across markets.
4. There is a link between this information and the macro economy, as seen through its relationship to non-farm payroll announcements.
5. Their results are stronger when orderflow is less dispersed within sectors. This lends further support to the conjecture that the sector orderflow measures reflect the empirical footprints of broad sector rotation rather than stock-picking within particular sectors.
6. The correlation between active sector orderflow and mutual fund flows in core categories suggests to Brandt that the orderflow measures are indeed capturing institutional trader flows.
7. Orderflow contains asymmetric information: it is primarily defensive in nature and largely related to wealth preservation.

To reach these conclusions, Brandt creates equity orderflow data constructed using the Trades and Quotes (TAQ) dataset over the sample period 1993 through 2005 and combines it with stock market data. After constructing the basic sector and stock market level net orderflow measures, they define passive and active measures:
1. Passive orderflow is defined as the total net orderflow to the stock market multiplied by the weight of that sector in the market portfolio.
2. Active net orderflow for each sector is the difference between sector-level total net orderflow and passive net orderflow.

Examining the data, Brandt reports that they find the percentage of orderflow across years remains fairly stable, although variations occur, particularly leading up to and during the economic downturn in 2000. Further, the shifts in the shares of orderflow across sectors appear more pronounced for large orders and suggest that market participants placing large orders may be more aggressive and/or savvy in positioning their portfolios ahead of changes in the economy.

When their data is sorted into sectors chosen by their cyclicality with the U.S. business cycle, they find that the aggregate portfolio rebalancing across equity sectors is consistent with sector rotation, an investment strategy that exploits perceived differences in relative performance at different stages of the business cycle. The empirical footprint of sector rotation, he says, has predictive power for the evolution of the economy and future bond market returns, even after controlling for relative sector returns. Contrary to many theories of price formation, trading activity therefore contains information that is not entirely revealed by resulting relative price changes.

As to the link of orderflow to the economy, they examine whether sector orderflow has predictive power for the Chicago Federal Reserve Bank National Activity Index (CFNAI) expansion indicator. Intuition suggests to Brandt that pro-cyclical (counter-cyclical) sectors would have positive (negative) coefficients. What they find is that active orderflow of large orders into the materials sector predicts higher levels of the expansion index both one and three months ahead. Conversely, active orderflow of large orders into financials, telecommunications and consumer discretionary predicts lower levels of the expansion index at the one and three month horizons. It is interesting to note, he says, how pro-cyclical sector orderflow tends to lead the economy while the reverse is observed for counter-cyclical sectors.

After investigating the relationship between the expansion index and sector, they turn to an analysis of the cross-section of orderflow. Specifically, they are interested in determining the orderflow factor (i.e. ... of the equity and bond markets.

As for practicality, when Brandt and his coauthors translate sector orderflow movements into tilts to the market portfolio and produce an orderflow-mimicking portfolio, they find that it has superior risk and return properties relative to both the traditional market and to industry momentum portfolios. This is because, he says, orderflow contains asymmetric information that it is
the set primarily defensive in nature and largely of sector loadings) with the highest related to wealth preservation. correlation to the state of the macro economy. Based on this, Brandt says, it is clear that the In the discussion that ensued, Brandt was link between aggregate sector orderflow and asked about whether the results held up the macro economy is strong, with large- during the Financial Crisis. He said they did sized active orderflow in specific sectors not. forecasting expansions/contractions up to one 110. The Flash Crash: The Impact Of quarter ahead. In addition, High Frequency Trading On An Electronic Market (Spring 2011) 1. Large-sized sector orderflows, likely to Albert Kyle originate from institutional investors, Albert S. Kyle, Charles E. Smith Chair appear to contain the bulk of the Professor of Finance at the University of predictive power in aggregate orderflow. ’s Robert H. Smith School of 2. Target sectors for trading on the macro Business, and Commodities Future Trading economy are consistent with common Commission (CFTC) presented “The Flash financial wisdom concerning sector Crash: The Impact of High Frequency rotation and portfolio allocation tactics: Trading on an Electronic Market” coauthored positive coefficients for pro-cyclical with Andrei Kirilenko, Chief Economist, sectors; negative coefficients as sectors CFTC, Mehrdad Samadi, Economist, CFTC, become more counter-cyclical. and Tugkan Tuzun, Ph.D. candidate Brandt does address whether University of Maryland and the CFTC. prices/returns contain the same information Kyle has been a frequent contributor to Q- as orderflow. He discusses their approach to Group® seminars. 
examining this question and their tests before On May 6, 2010, major US stock market concluding that active net orderflow provides indices, stock-index futures, options, and more and materially different information exchange-traded funds experienced a sudden than that contained in returns and traditional price drop of more than 5% followed by a low frequency market variables and contains rapid rebound, all in about 30 minutes. The information about the near term performance price and volume data for June 2010 E-Mini S&P 500 Stock Index Futures Contract on May 6, 2010 are shown below. Kyle points out two dramatic examples of the incredible volatility during the period: Accenture’s share price fell to $0.01 and 174

Apple’s rose to $100,000. He also reminds us that flash crashes are not that rare. After the October 1987 stock market decline, there were several other flash crashes in the week that followed.

This brief period of extreme, intraday volatility on May 6, 2010, commonly referred to as the “Flash Crash,” raised a number of questions about the structure and stability of US financial markets. One survey reported that over 80% of US retail advisors believed that “overreliance on computer systems and high-frequency trading” were the primary contributors to the volatility. The results of the survey also included as culprits the use of market and stop-loss orders, a decrease in market-maker trading activity, and order routing issues among securities exchanges. In August, following the crash, representatives of individual investors, asset management companies, and market intermediaries, testifying at a hearing before the CFTC and the SEC, suggested that in the current electronic marketplace such an event could easily happen again.

Kyle and his coauthors take on the assertion that High Frequency Traders (HFT) played a role in creating the “Flash Crash.” To examine the role of HFTs in the crash he asks three questions:
1. What may have triggered the Flash Crash?
2. How did HFT and other traders act on May 6 in comparison to other days?
3. What role did HFT play in the Flash Crash?

To examine these questions, Kyle first provides a brief tutorial on the S&P 500 E-mini contract provisions and distinguishes between electronic trading (all E-mini trading), algorithmic trading (electronic trading based on computer algorithms) and high frequency trading (algorithmic, electronic trading that takes advantage of trading opportunities in the shortest time intervals, measured in milliseconds).

In answer to his first question, the origin of the crash, Kyle says that it was the result of one account that sold 75,000 contracts worth $4.1 billion, 1.5% of the day’s volume, beginning precisely at 13:32 CT. It was executed by an algorithm set to target 9% of trading volume, and it was the largest net position change in the E-mini of the year. While trades of this size are usually executed over a day, this one was executed in approximately 20 minutes. Kyle believes that the way the trade was targeted, as a percent of trading volume, exacerbated the market selloff: as volume increased, the sell target remained fixed at 9% of trading volume. In addition to the sell target of this very large trade, market conditions played a role: there had been large price declines earlier in the day and the buy side of the limit order book was greatly depleted at the moment of the sale.

To answer the questions about the role of HFT, Kyle and his coauthors use trading data on the E-mini S&P 500 equity index futures on May 6. They use audit-trail, transaction-level data for all regular transactions. This allows them to identify the price, quantity and time of execution, as well as the identity of buyer and seller. They sort the 15,000 trading accounts that participated in transactions on May 6 into one of six categories:
1. High Frequency Traders (16): high volume and low inventory relative to volume; they account for a significant portion of trading volume, although they do not accumulate large net positions.
2. Intermediaries (179): market makers with lower volume and low inventory relative to volume.
3. Fundamental Buyers (1,263): consistent buyers during the day.
4. Fundamental Sellers (1,276): consistent intraday sellers (including the 75,000 contract sell program).

5. Small (Noise) Traders (6,880): trade few contracts each day.
6. Opportunistic Traders (5,808): all other traders, including index arbitrage, day traders and miscellaneous speculators.

High frequency traders were the least numerous of the trading categories, with only 16.

Kyle then turns to his question about trading activity on May 6 versus that on previous days. In the table shown below, comparing the average activity of the 3 days preceding the Flash Crash with May 6, one can see that May 6 was quite unusual.

Kyle and his colleagues also plot the holdings of various types of traders, and their profits and losses relative to price, for each of the 4 days. Kyle makes the following observations about HFT activity during the period:
• The net holdings of HFTs fluctuated around zero so rapidly that they rarely held more than 3,000 contracts long or short on the day of the Flash Crash.
• HFTs did not change their trading behavior during the Flash Crash from that before: HFTs aggressively took liquidity from the market when prices were about to change and actively kept inventories near a target level.
• The trading of HFTs appears to have exacerbated the downward move in prices: HFTs initially bought contracts from Fundamental Sellers, reversing this after a few minutes, competing for liquidity with Fundamental Sellers.
• HFTs appeared to buy and sell contracts from one another rapidly and many times, generating what Kyle calls a “hot potato” effect before falling prices attracted Fundamental Buyers to take these contracts off the market.

In an assessment of the HFTs’ role, Kyle says that their analysis shows that HFTs:
• Exhibit trading patterns inconsistent with the traditional definition of market making.
• Aggressively trade in the direction of price changes. This comprises a large percentage of total trading volume but does not result in a significant accumulation of inventory.
• Whether under normal market conditions or during periods of high volatility, are not willing to accumulate large positions.
• Fundamental Traders may mistake higher trading volumes for liquidity.
• When rebalancing their positions, High Frequency Traders may compete for liquidity and amplify price volatility.

Consequently, Kyle concludes:
• Irrespective of technology, markets can become fragile when:
  o Imbalances arise as a result of large traders seeking to buy or sell quantities larger than intermediaries are willing to temporarily hold, and
  o Long-term suppliers of liquidity are not forthcoming even if significant price concessions are offered.
• Technological innovation is critical for market development.


• As markets change, appropriate safeguards must be implemented to keep pace with trading practices enabled by advances in technology.

111. Paired Bond Trades (Spring 2011)
Eric Zitzewitz

Eric Zitzewitz, Associate Professor of Economics, Dartmouth College, presented “Paired Bond Trades.” Zitzewitz had previously presented a paper at the Q-Group® seminar in Fall of 2007.

Zitzewitz begins by pointing out what we know about bond trading:
• Essentially all bond trading is in dealer markets despite the existence of exchanges (e.g. NYSE bonds).
• Bonds do not trade frequently.
• Trading costs are high, especially for small trades.
• Many investors are locked into using a single dealer as a counterparty.
• Recent regulation has sought to reduce trading costs, with some success.

In spite of what we know, he says that there are many open questions:
• Do trading costs reflect rents?
• Is price regulation (ex post enforcement actions when markups are deemed excessive) optimal?
• Should exchange-based trading be mandated?

In this research Zitzewitz examines why trading costs are so much higher for small versus large bond trades. There are two leading explanations for the difference: bonds trade almost exclusively in dealer markets rather than on an exchange, and there is a lack of price transparency. Trading in dealer markets increases costs for several reasons: it is more labor intensive than exchange trading, and trading and dealer costs have a fixed component per trade, thus requiring a higher percentage spread for smaller trades. The second explanation focuses on transparency: dealer markets are less transparent than exchanges. This can differentially disadvantage small-trade clients, particularly if they face higher costs of comparison shopping across dealers.

To examine this, Zitzewitz uses information about small corporate bond trades. He finds that these trades are routinely “paired”: same-size client and interdealer trades occur within 60 seconds of each other, and these pairs of trades are usually for exactly the same quantity and/or are executed during the exact same second. Controlling for trade size, pairing of clients is more common for “vanilla bonds” (senior investment grade bonds with no credit enhancements), callable/convertible issues, and non-round-number-priced client trades. He does find that the pairings are typically made for seasoned bonds, although not right before they mature. In addition, his pairings show that two dealers pair more frequently, as shown in the following chart, though he says it is not clear why.

To look at the paired trades, Zitzewitz uses the ex-post corporate bond transaction information from the Trade Reporting and Compliance Engine (TRACE). From that data he locates what appear to be paired trades: trades of equal quantities occurring in proximate time. He finds:
1. Thirty-seven percent of dealer-client corporate bond trades are paired with a dealer-dealer trade, usually for the same exact quantity and executed at the exact same second.
2. Trading costs are higher for paired trades of a given size, and the spread is split roughly 50-50 between the pairing dealer and the ultimate dealer.
3. Implied pairing dealer profit is essentially never negative (0.4% of pairs).

4. Pairing rates are:
   a. Much higher for trades under $100k (46%) than for trades over $500k (4.5%).
   b. Lower for institutional (NAIC matched) trades of a given size.

In addition, he finds that the pairing rate is lower for round-price trades and the interdealer spread is higher for small trades and bonds of low credit quality, although pairing shows the dealer spread is U-shaped in trade size and credit quality. Putting trade size and cost together with pairing dealer profit, Zitzewitz finds that profits for paired trades decline as the trade size increases but are non-negative for paired trades made within 15 minutes. Among all of his matched pairs, there were no trades made at a loss.

Zitzewitz spells out the implications of his research. First, pairing suggests many investors do not (or cannot) search over the entire market, and dealers with preferred access to orders (due to contracts or relationships) earn substantial and nearly risk-free trading profits. Second, by Wall Street standards, pairing dealer profits are tiny: of $3.8B/yr. in corporate bond dealer profits, only $360M are pairing dealer profits. Third, there is, he says, some evidence that excessive-markup regulation has an impact in holding down excessive profits. Finally, he concludes that understanding pairing is important to other empirical bond asset pricing research.
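The pairing rule Zitzewitz describes (a client trade and an interdealer trade in the same bond, for the same quantity, within 60 seconds) is mechanical enough to sketch in code. The snippet below is a minimal illustration of that rule only, not Zitzewitz's actual procedure; the trade-tuple layout and the function name are assumptions made for the example.

```python
# Toy illustration of the "paired trade" rule described above: a dealer-client
# trade counts as paired when an interdealer trade in the same bond, for the
# same quantity, executes within `window` seconds of it.

def find_paired_trades(client_trades, interdealer_trades, window=60):
    """Each trade is a (cusip, quantity, unix_time) tuple. Returns the indices
    of client trades that have a matching interdealer trade."""
    # Index interdealer trades by (bond, quantity) for quick lookup.
    by_key = {}
    for cusip, qty, t in interdealer_trades:
        by_key.setdefault((cusip, qty), []).append(t)

    paired = []
    for i, (cusip, qty, t) in enumerate(client_trades):
        # Paired if any same-bond, same-quantity interdealer trade is close in time.
        times = by_key.get((cusip, qty), [])
        if any(abs(t - s) <= window for s in times):
            paired.append(i)
    return paired
```

On TRACE-like data one would additionally compare the prices of the two legs to estimate the pairing dealer's markup; only the time/quantity match is shown here.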


The State-of-the-Art in Investment Management

• SEMINAR PROGRAMS 2011-2015
• AUTHORS INDEX
• SUBJECT INDEX

SEMINAR PROGRAMS

Listed in Chronological Order 2011-2015

Page numbers direct the reader to the appropriate summary. In some cases the title of the summary will differ from the program listing.

APRIL 3-6, 2011
TRADING AND EXTREME MARKET STRESS

April 4
Neglected Risks, Financial Innovation, And Financial Fragility 36
Speaker: Andrei Shleifer, Professor of Economics, Harvard University
The Effect Of Housing On Portfolio Choice 28
Speaker: Raj Chetty, Professor of Economics, Harvard University
The Flash Crash: The Impact Of High Frequency Trading On An Electronic Market 174
Speaker: Pete Kyle, Charles E. Smith Professor of Finance, Robert H. Smith School of Business, University of Maryland
Conflicting Family Values In Fund Families 17
Speaker: Utpal Bhattacharya, Associate Professor of Finance, Indiana University – Bloomington
Who Is Doing What To Whom On Wall Street And Why? 38
Speaker: Charles Gasparino, Senior Correspondent, FOX Business Network

April 5
Share Issuance And Factor Timing 116
Speaker: Robin Greenwood, Associate Professor of Business Administration, Harvard University
Portfolio Choice With Illiquid Assets 29
Speaker: Andrew Ang, Ann F. Kaplan Professor of Business, Columbia Business School, Columbia University
Short Sellers And Financial Misconduct 142
Speaker: Jonathan M. Karpoff, WAMU Endowed Chair in Innovation, Finance and Business, Michael G. Foster School of Business, University of Washington
Paired Bond Trades 177
Speaker: Eric Zitzewitz, Associate Professor of Economics, Dartmouth College

April 6
Hard Times 118
Speaker: Christopher Polk, Professor, London School of Economics
Socially Responsible Investing And Expected Stock Returns 119
Speaker: Sudheer Chava, Associate Professor, Georgia Tech College of Management


OCTOBER 16-19, 2011
BETAS, DIVERSIFICATION AND INNOVATION

October 17
Revisiting Exotic Beta 78
Speaker: Mark Carhart, Kepos Capital LP
The International Monetary System: Living With Asymmetry 95
Speaker: Maurice Obstfeld, Class of 1958 Professor of Economics, The University of California, Berkeley
Betting Against Beta 80
Speaker: Andrea Frazzini, Assistant Professor of Finance, The University of Chicago
Innovative Efficiency And Stock Returns 113
Speaker: David Hirshleifer, Professor of Finance, Merage Chair in Business Growth, The University of California, Irvine
Investment For Effect: Design For Undersea Warfare In The 21st Century 58
Speaker: Rear Admiral Michael J. Yuring, Director of Strategic Planning and Communications for Submarine Warfare, United States Navy

October 18
Demography Changes, Financial Markets And The Economy 59
Speaker: Robert Arnott, Research Affiliates LLC
Diversification Over Time 155
Speaker: Barry Nalebuff, Milton Steinbach Professor, Yale University
What Does Equity Orderflow Tell Us About The Economy? 172
Speaker: Michael W. Brandt, Cohen Professor of Business Administration, Duke University
The Other Side Of Value: Good Growth And The Gross Profitability Premium 115
Speaker: Robert Novy-Marx, Assistant Professor, University of Rochester

October 19
Why Bank Equity Is Not Expensive 33
Speaker: Anat R. Admati, George G.C. Parker Professor of Finance and Economics, Stanford University
Institutional-Quality Hedge Funds 27
Speaker: David A. Hsieh, Bank of America Professor, Duke University

APRIL 1-4, 2012
BEHAVIOR, NOISE AND EXCESS RETURNS

April 2
Natural Expectations, Economic Fluctuations, And Asset Pricing 41
Speaker: David Laibson, Harvard University
The Market Value Of Social Security 151
Speaker: Steven Zeldes, Columbia University, Graduate School of Business

John Geanakoplos, Yale University
Global Crises And Equity Market Contagion 93
Speaker: Geert Bekaert, Columbia University, Graduate School of Business
Pension Funding And Investment Strategy Issues From Market Value Account 153
Speaker: Barton Waring, CIO (Retired), Barclays Global Investors
Hurricane Outlook For The 21st Century 57
Speaker: Hugh E. Willoughby, Florida International University

April 3
What Is Risk Neutral Volatility? 136
Speaker: Stephen Figlewski, New York University, Stern School of Business
Bubbling With Excitement: An Experiment 99
Speaker: Terrance Odean, University of California at Berkeley
New Research In Behavioral Finance 43
Speaker: Nicholas Barberis, Yale School of Management
Toward Determining Systemic Importance 31
Speaker: Mark Kritzman, Windham Capital Management

April 4
Noise As Information For Illiquidity 170
Speaker: Jiang Wang, MIT Sloan School of Management
Will My Risk Parity Strategy Outperform? 5
Speaker: Lisa Goldberg, MSCI

OCTOBER 14-17, 2012
HEALTH, WEALTH AND MARKETS

October 15
Health And Mortality Delta: Assessing The Welfare Cost Of Household Insurance Choice 149
Speaker: Motohiro Yogo, Federal Reserve Bank, Minneapolis
The Equity Premium And Beyond 132
Speaker: Antti Ilmanen, AQR Capital Management, LLC
High Frequency Trading And Price Discovery 168
Speaker: Jonathan Brogaard, University of Washington
The Impacts Of Automation And High Frequency Trading On Market Quality 169
Speaker: Robert Litzenberger, RGM Capital Advisors
Are We Headed For A Demographic Winter? 57
Speaker: Joel Kotkin, Chapman University

October 16
Can Financial Engineering Cure Cancer? A New Approach For Funding Large-Scale Biomedical Innovation 26
Speaker: Andrew W. Lo, Massachusetts Institute of Technology
Panel Discussion: Rethinking The Equity Risk Premium 133

Speaker: Brett Hammond, MSCI
Marty Leibowitz, MSCI
Bill Sharpe, Stanford University
Laurence B. Siegel, Research Foundation of CFA Institute
Does Mutual Fund Size Matter? The Relationship Between Size And Performance 16
Speaker: Edwin J. Elton, New York University
Martin J. Gruber, New York University
Health Care Reform In The United States: Past, Present And Future 150
Speaker: Jonathan Gruber, Massachusetts Institute of Technology

October 17
Momentum Crashes 75
Speaker: Kent Daniel, Columbia University
Speculative Betas 77
Speaker: Harrison Hong

MAY 5-8, 2013
PRICING AND VALUATION: NEW PERSPECTIVES

May 6
The Recovery Theorem 121
Speaker: Stephen A. Ross, Massachusetts Institute of Technology
Dynamic Conditional Beta 73
Speaker: Robert Engle, New York University
Global Pricing Of Gold 22
Speaker: Campbell R. Harvey, Duke University
The Cost Of Capital For Alternative Investments 24
Speaker: Erik Stafford, Harvard University
Housing And The Macro Economy 56
Speaker: Karl Case, Wellesley College

May 7
Horizon Pricing 72
Speaker: Robert Korajczyk, Northwestern University
Market Expectations In The Cross Section Of Present Values 112
Speaker: Bryan Kelly, University of Chicago
No News Is News: Do Markets Underreact To Nothing? 165
Speaker: Kelly Shue, University of Chicago
The Mutual Fund Industry Worldwide: Explicit And Closet Indexing, Fees, And Performance 101
Speaker: Martijn Cremers, University of Notre Dame

May 8
Duration Targeting: A New Look At Bond Portfolios 87
Speaker: Martin Leibowitz, Morgan Stanley
Stanley Kogelman, Advanced Portfolio Management

Is The World Stock Market Co-Movement Changing? 91
Speaker: N. K. Chidambaran, Fordham University

OCTOBER 13-16, 2013
ECONOMIC GROWTH, ASSET VALUATION AND SKILL

October 14
Those Who Know Most: Insider Trading In 18th C. Amsterdam 164
Speaker: Peter Koudijs, Assistant Professor of Finance, Graduate School of Business, Stanford University
The Surprising “Alpha” From Malkiel’s Monkey And Upside-Down Strategies 4
Speaker: Jason Hsu, Chief Investment Officer, Research Affiliates
The Optimal Holding Of Annuities 147
Speaker: Kent Smetters, Boettner Professor, Professor of Business Economics and Public Policy, Wharton, University of Pennsylvania
Aspects Of Risk Parity 130
Speaker: Cliff Asness, Managing & Founding Principal, AQR Capital Management
Paul Kaplan, Director of Research, Morningstar, Inc.
The Success Equation: Untangling Skill And Luck In Business, Sports And Investing 40
Speaker: Michael Mauboussin, Managing Director, Global Investment Strategies, Credit Suisse

October 15
The Shiller CAPE Ratio: A New Look 106
Speaker: Jeremy J. Siegel, Russell E. Palmer Professor of Finance, Wharton, University of Pennsylvania
Changing Times, Changing Values: A Historical Analysis Of Sectors Within The U.S. Stock Market 1872-2013 108
Speaker: Robert J. Shiller, Sterling Professor of Economics, Yale University
Is Economic Growth Good For Investors? 111
Speaker: Jay R. Ritter, Joe B. Cordell Eminent Scholar Chair, Warrington College of Business Administration, University of Florida
Time-Varying Fund Manager Skill 14
Speaker: Marcin Kacperczyk, Assistant Professor of Finance, NBER Faculty Research Fellow, Boxer Faculty Fellow (2010-2013), Stern School, New York University

October 16
Central Bank Policy Impacts On The Distribution Of Future Interest Rates 86
Speaker: Douglas T. Breeden, William W. Priest, Jr. Professor of Finance, Fuqua School of Business, Duke University
Which News Moves Stock Price? A Textual Analysis 167
Speaker: Shimon Kogan, Assistant Professor, McCombs School of Business, University of Texas at Austin

APRIL 7-9, 2014
FACTORS, ACTIVE MANAGEMENT, AND EXOTIC ASSETS


April 7
Scale And Skill In Active Management 13
Speaker: Lubos Pastor, Booth School of Business, University of Chicago
The Investment Performance Of Art And Other Collectibles 18
Speaker: Elroy Dimson, London Business School; Cambridge Judge Business School
Estimating Private Equity Returns From Limited Partner Cash Flows 19
Speaker: Andrew Ang, Columbia University
Investor Sentiment Aligned: A Powerful Predictor Of Stock Returns 98
Speaker: Guofu Zhou, Washington University
The Economics Of Big Time College Sports 54
Speaker: Roger Noll, Stanford University

April 8
Dissecting Factors 2
Speaker: Juhani Linnainmaa, University of Chicago
The Not So Well-Known Three-and-One-Half Factor Model 69
Speaker: Steven Thorley, Brigham Young University Marriott School of Management
A Primer On Fast Electronic Markets 161
Speaker: Larry Harris, Marshall School of Business, University of Southern California
Impact Of Hedge Funds On Asset Markets 21
Speaker: Andrew Patton, Duke University
The High-Frequency Trading Arms Race: Frequent Batch Auctions As A Market Design Response 162
Speaker: Peter Cramton, University of Maryland

April 9
A Model Of Momentum 71
Speaker: Lu Zhang, The Ohio State University
How Much Would It Take? Achieving Retirement Income Equivalency Between Final-Average Pay Defined Benefit Plan Accruals And Voluntary Enrollment 401(K) Plans In The Private Sector 146
Speaker: Jack VanDerhei, Employee Benefit Research Institute

OCTOBER 19-22, 2014
BEHAVIOR AND THE ORIGINS OF VALUE

October 20
Why Global Demographics Matters For Macro-Finance, Asset Pricing, Portfolios And Pensions 50
Speaker: Amlan Roy, Credit Suisse Securities (Europe)
Does Academic Research Destroy Stock Return Predictability? 104
Speaker: David McLean, Dianne and Irving Kipnes Chair in Finance and Development, University of Alberta
The Worst, The Best, Ignoring All The Rest: The Rank Effect And Trading Behavior 39
Speaker: Samuel Hartzmark, Assistant Professor of Finance, University of Chicago Booth School of Business
Made Poorer By Choice: Worker Outcomes In Social Security vs. Private Retirement Accounts 144
Speaker: Terrance Odean, Rudd Family Foundation Professor of Finance, University of California, Berkeley
The Financial Crisis Six Years On: What Have We Learned 52
Speaker: Jeremy Siegel, Russell E. Palmer Professor of Finance, the Wharton School

October 21
Trading Costs Of Asset Pricing Anomalies 159
Speaker: Toby Moskowitz, Fama Family Professor of Finance, University of Chicago
In Short Supply: Short-Sellers And Stock Returns 140
Speaker: Charles Lee, Joseph McDonald Professor of Accounting, Stanford Business School
Origins Of Stock Market Fluctuations 125
Speaker: Martin Lettau, Professor of Finance, University of California, Berkeley
The Divergence Of High- And Low-Frequency Estimation: Causes And Consequences 127
Speaker: Mark Kritzman, Windham Capital Management

October 22
Risky Value 67
Speaker: Scott Richardson, Professor of Accounting, London Business School
Visualizing The Time Series Behavior Of Volatility, Serial Correlation And Investment Horizon 128
Speaker: Ralph Goldsticker, CFA, of the Q-Group

MARCH 29-APRIL 1, 2015
RISKS AND REWARDS: SEARCHING FOR KNOWLEDGE

March 30
Patient Capital Outperformance 8
Speaker: Martijn Cremers, Professor of Finance, University of Notre Dame
Betting Against Beta Or Demand For Lottery? 64
Speaker: Scott Murray, Assistant Professor of Finance, University of Nebraska
On The Fundamental Relation Between Equity Returns And Interest Rates 85
Speaker: Robert Whitelaw, Edward C. Johnson III Professor of Entrepreneurial Finance, NYU, and CIO of Index IQ
The International CAPM Redux 88
Speaker: Adrien Verdelhan, Associate Professor of Finance, MIT
The Recognition Of One Of Our Most Valued Members 66
Speaker: Jack Treynor, the Q-Group

March 31
PPP And Exchange Rates: Evidence From Online Data In Seven Countries 90

Speaker: Alberto Cavallo, Associate Professor of Applied Economics, MIT Sloan
The Behavior Of Sentiment-Induced Share Returns: Measurement When Fundamentals Are Observable 96
Speaker: Ian Cooper, Professor of Finance, London Business School
Do Funds Make More When They Trade More? 10
Speaker: Robert Stambaugh, the Miller Anderson & Sherrerd Professor of Finance, the Wharton School
A Solution To The Palm-3Com Spin-Off Puzzles 138
Speaker: Chester Spatt, Pamela R. and Kenneth B. Dunn Professor of Finance, Carnegie Mellon University, and former SEC Chief Economist

April 1
Cross-Firm Information Flows 1
Speaker: Anna Scherbina, Associate Professor of Finance, University of California, Davis
A Sharper Ratio: A General Measure For Ranking Investment Risks 11
Speaker: Kent Smetters, Boettner Professor, Business Economics and Public Policy Department, Wharton, University of Pennsylvania

OCTOBER 18-21, 2015
MACROECONOMICS, MONETARY POLICY, AND INTEREST RATES

October 19
Inflation, Real Interest Rates And The Shiller P/E 102
Speaker: Rob Arnott, Chairman & CEO of Research Affiliates, LLC
Cross-Sectional Asset Pricing With Individual Stocks: Betas Versus Characteristics 61
Speaker: Tarun Chordia, R. Howard Dobbs Professor of Finance, Goizueta Business School, Emory University
The Equilibrium Real Funds Rate: Past, Present And Future 81
Speaker: Ken West, John D. MacArthur and Ragnar Frisch Professor of Economics, University of Wisconsin
The Return On Your Money Or The Return Of Your Money: Investing When Bond Yields Are Negative 83
Speaker: Vineer Bhansali, Managing Director and Portfolio Manager at PIMCO
The College Earnings Premium, The Higher Ed Scorecard, And Student “Sorting”: Myths, Realities And Explanations 45
Speaker: Eleanor Dillon, Assistant Professor of Economics, Arizona State University

October 20
Deflation Risk 47
Speaker: Francis Longstaff, Allstate Professor of Insurance and Finance, UCLA Anderson School of Management
Forced Liquidations, Fire Sales, And The Cost Of Illiquidity 157
Speaker: Andrew Weisman
Target Date Funds: Characteristics And Performance 7
Speaker: Edwin J. Elton, Stern School of Business, New York University

Martin J. Gruber, Stern School of Business, New York University
Assessing Asset Pricing Models Using Revealed Preference 122
Speaker: Jonathan Berk, A.P. Giannini Professor of Finance, Stanford Graduate School of Business

October 21
Systematic Risk And The Macroeconomy: An Empirical Evaluation 49
Speaker: Bryan Kelly, Associate Professor of Finance and Richard N. Rosett Faculty Fellow, University of Chicago
Monetary Policy Drivers Of Bond And Equity Risks 124
Speaker: Carolin Pflueger, Assistant Professor of Finance, University of British Columbia


AUTHORS INDEX TO VOLUMES I, II, III, IV, V, VI, VII, and VIII (References are to volume and page number) (Does not include papers prior to 1975, which are listed in Volume V)

Adler, F. Michael II:13 Bienvenue, Dan VII:223 Admati, Anat VIII:33 Bierwag, Gerry O. I:1, 8 Adrian, Tobias VII:170 Biggs, John VI:91 Ahlers, David M. I:112 Black, Fischer III:32, IV:45 Albrecht, Ralph P. VII:49 Bloch, Ernest I:101 Alexander, Carol V:135 Block, Frank E. I:76, 124, II:26 Allen, Edward L. I:30 Blood, Charles I:145 Allen, H. Franklin VI:35, VII:78 Bodie, Zvi II:9, VI:99 Altman, Edward I. I:12, 27, 92, II:4, 37, III:4, 7, 8, IV:24 Boehmer, Ekkehart VII:221 Bogle, John C. II:41, V:85 Ambachtsheer, Keith I:58, 63, 150, 155, II:30, III:68, 71, IV:95, VI:53 Bookstaber, Richard I:68, III:19, 21, V:138, 149 Booth, David G. II:73 Ang, Andrew VI:15, VII:93, VIII:19, VIII:29 Boquist, John I:127 Angelica, Robert E. IV:9 Botkin, Donal II:12 Anson, Mark VI:108 Brav, Alon VII:27 Apgar, Sandy II:38 Brandt, Michael VIII:172 Armentrout, Steven IV:68 Breeden, Douglas VIII:86 Ameriks, John VI:95 Brennan, Michael J. I:47, VII:203 Arnott, Robert D. I:108, 123, II:16, III:41, 63, V:57, VI:53, VII:131, VIII:59, VIII:102 Brightman, Chris VII:11 Brinson, Gary P. II:11, 46, III:63 Ashenfelter, Orley VII:104 Britten-Jones, Mark V:122 Asness, Clifford VI:64, VII:131, 185, VIII:130 Brock, William A. III:13 Ayres, Herbert F. I:11, 89, II:66 Brogaard, Jonathan VIII:168 Babcock, Guilford C. I:140 Brown, Stephen J. IV:119, V:108, VI:120 Brunie, Charles H. I:100 Bader, Lawrence N. III:72 Brush, John S. III:107 Bhagat, Sanjai V:71 Burroughs, Eugene I:81 Bagot, Gordon M. III:30 Burtless, Gary VI:93 Bagwell, Laurie S. IV:17 Baker, Charles (Tony) VI:159 Calderwood, Stanford I:137, II:43 Baker, David A. I:34, 99, 152 Campbell, John Y. V:48 Barber, Brad M. V:21 Carhart, Mark V:109, VIII:78 Barberis, Nicholas VI:44, VIII:43 Carleton, Willard T. I:51, II:62, IV:82 Barclay, Michael J. III:110 Carty, Lee V. V:66 Barnett, Gregory A. I:57 Case, Karl VIII:56 Barr, Dean IV:68, V:94 Cavaglia, Stefano V:89 Barry, John V.
I:11, 89, 152 Cavallo, Alberto VIII:90 Bauer, Robb VII:127 Chacko, George VII:120 Bauman, W. Scott I:110 Chalmers, John M. R. V:57 Beaver, William H. I:42, 152 Chan, Louis K. C. V:26 Beebower, Gilbert I:157, II:73, III:96, IV:81 Chava, Sudheer VIII:119 Bekaert, Geert VIII:93 Chen, Dan V:67 Benartzi, Shlomo V:130, 156 Chen, Joseph VI:46 Bergstrom, Gary I:59 Chen, M. Keith VII:57 Berk, Jonathan V:37, 162, VI:163, VII:53, VIII:122 Chetty, Raj VIII:28 Bernhard, Arnold I:143 Chidambaran, N. K. VIII:91 Bernstein, Peter L. II:60, III:23, VII:105 Chiene, John I:55 Bernstein, Robert J. II:4, III:7, IV:60 Chordia, Tarun VI:19, VIII:61 Bhagat, Sandip IV:68, VII:76 Cialdini, Robert B. V:151, VII:59 Bhansali, Vineer VII:125, VIII:83 Clarke, Roger I:68 Bhattacharya, Utpal VIII:17 Clossey, David II:39 Cochran, John V:1 Ezra, D. Don I:6, 9, II:30, III:76, 91, 101 Cohen, Susan I. III:45 Cohen, Randolph B. VII:44 Fama, Eugene I:43 Collin-Dufresne, Pierre VI:87 Farrell, James L., Jr. I:59, 62, 151, 154, III:129 Condon, Kathleen I:157 Faurot, Allen R. II:42 Conrad, Jennifer VI:14 Ferguson, Niall VII:104 Cooper, Ian VIII:96 Ferguson, Robert I:71, 87, II:20, IV:5, 10, 107 Cooper, Michael J. VI:10, VII:139 Fernholz, E. Robert I:108, VI:1 Cooperman, Leon G. III:119 Ferreira, Miguel VII:181 Cottle, Sidney I:154 Ferson, Wayne E. V:23, VI:12 Cramton, Peter VIII:162 Fewings, David R. I:144 Cremers, K. J. Martijn VII:129, VIII:8, VIII:101 Figlewski, Stephen I:65, 67, VIII:136 Cress, Steven III:34 Fisher, Jeffrey D. IV:86 Crowell, Richard A. III:60, 65 Fisher, Lawrence I:120 Curran, John VII:118 Fleischman, Edward III:93 Curry, Bernard I:81 Fogler, Russell I:14, III:90, 101, IV:68, 116 Fong, H. Gifford I:2, 4, 18, III:17, 37, 135, IV:21, 76, V:157, VI:81 D'Alelio, Edward III:7 Dadachanji, Naozer IV:64 Fouse, William I:139, II:40, III:67, 108, 116, IV:113, 116, V:56 Dalton, Timothy G. I:100, 153 Damsma, Marvin III:103 Frankel, Richard M. V:167 Daniel, Kent VIII:75 Frazzini, Andrea VIII:80 Das, Sanjiv V:9, VI:76, 81 Freeman, John D. IV:1 Davanzo, Lawrence E. III:50 French, Craig W. VII:11 VanDerhei, Jack VIII:146 de Silva, Harindra VII:11 French, James C. III:98 Dhrymes, Phoebus J. II:69 French, Kenneth R. III:27, 61, IV:108 Diebold, Francis X. VII:98 Friedman, George VII:100 Diermeier, Jeffrey J. II:46, VI:57 Friend, Irwin II:69 Dietz, Peter I:14, 74, 85, 96, 98, 115, 157, II:23, 35 Froewiss, Kenneth I:71 Diller, Stanley I:49 Froot, Kenneth A. IV:47, 53, V:100, VII:73, 76 Dillon, Eleanor VIII:45 Fuller, Russel J. IV:112, V:16 Dimson, Elroy V:159, VIII:18 Fulmer, Jeffrey I:124 Dornbusch, Rudi IV:49, V:97 Gadkari, Vilas III:31 Douglas, George M. I:134, II:16, III:122, IV:115 Gale, William G. VII:134 Dreher, William A. I:5, 9, II:23 Gallimore, Michael J. III:80 Du Bois, Charles H. I:150, III:63 Garman, Mark B. II:18 Duffie, Darrell V:60, VI:75, VII:67 Gasparino, Charlie VIII:38 Duggirala, Kamal III:98 Gastineau, Gary VI:161 Dungey, Mardi VI:144 Gau, George W. I:97 Durham, John Benson VII:172 Geanakoplos, John VIII:151 Dybvig, Philip H. VI:65, VII:146 Geczy, Christopher C. VI:102 Geller, Jeffrey IV:29 Eadie, Dugald M. I:55 Geltner, David IV:86 Eagle, Blake I:94, 96, II:39 Geyer, Carter T. I:102 Edelen, Roger M. VII:218 Gibbons, Michael R. II:71 Edesess, Michael I:135 Giliberto, Michael IV:88 Einhorn, Steven I:133 Gillard, William I:143 Eisenstadt, Samuel I:143 Glick, Madeline Einhorn I:19 Elton, Edwin J. I:114, 118, 126, 147, 149, II:7, 47, 67, III:33, 88, V:105, VII:168, VIII:7, VIII:16 Glosten, Lawrence R. III:133, IV:80 Goetzmann, William N. V:160, VI:140, VII:38, 85 Engle, Robert F. III:135, VII:176, VIII:73 Goldberg, Lisa VIII:5 English, John W. II:40, III:69, 119 Goldstein, Michael L. VII:51 Estep, Tony I:34, 121, II:62 Goldsticker, Ralph VIII:128 Evans, Richard B. VII:42 Golub, Bennett VII:76 Evans, Robert R. I:74 González-Hermosillo, Brenda VII:62

Good, Walter I:37 Goodman, David A. I:132 Ibbotson, Roger G. VI:62 Gordon, Myron J. III:126 Ilmanen, Antti VIII:132 Granito, Michael R. II:8, III:103, 138, VI:71 Imrohoroglu, Selahattin V:75 Greeley, John I:157 Infanger, Gerd V:116 Green, David L. VI:89 Ippolito, Richard A. II:25, III:78 Greenwood, Robin VII:65, VIII:116 Grinblatt, Mark S. II:54 Jacklin, Charles J. V:56 Grinold, Richard III:109, VII:132 Jacobs, Bruce I. III:109, IV:62 Grossman, Sanford J. III:84, IV:97 Jacques, William E. III:36, IV:1, 106 Gruber, Jonathan VIII:150 Jahnke, William I:125, 143, III:121, IV:116, VI:53 Gruber, Martin J. I:114, 118, 126, 147, 149, II:7, 47, 67, III:33, 88, V:105, VII:168, VIII:7, VIII:16 Jarrow, Robert IV:25 Jeffrey, Robert H. I:85 Gultekin, N. Bulent II:69 Jobson, J. D. (Dave) IV:71 John, Kose IV:91 Hagin, Robert L. II:55, IV:123, V:24 Haldeman, Robert G. I:27 Johnson, R. Sheldon III:97 Hall III, J. Parker IV:113 Jones, Charles VII:210 Hall, Nick VII:61 Jones, Frank I:71 Hamada, Robert I:145 Jordan, James V. I:50 Hamao, Yasushi III:136 Jorion, Philippe IV:73, V:95, VII:76, 80 Hammond, Brett VII:76, VIII:133 Hanna, Jeff II:10 Kabus, Irwin I:128, 129 Harasty, Hélène V:50 Kacperczyk, Marcin VIII:14 Harford, Jarrad VI:146 Kahn, Ronald N. IV:2, V:45, VII:31 Harlow, W. Van V:113 Kalotay, Andrew J. I:61, III:12, IV:23 Harrington, Diana I:61, 122 Kane, Carl II:39 Harris, Lawrence III:15, 140, IV:35, 37, 55, V:81, VI:135, 141, VII:119, 207, VIII:161 Kantor, Michael IV:81 Kao, Duen L. IV:24 Hartzell, David J. III:86 Kao, Tony V:25, 65, VI:121 Hartzmark, Samuel VIII:39 Kaplan, James I:19 Harvey, Campbell R. IV:57, V:144, VI:24, 85, VIII:22 Kaplan, Paul VIII:130 Kaplan, Steven N. III:54, VII:180 Hasbrouck, Joel IV:36 Karpoff, Jonathan M. VI:168, VIII:142 Haskel, James L. VII:171 Kassouf, Sheen T. I:72 Haugen, Robert A. III:120 Kat, Harry M. VI:113, VII:29 Haughton, Kelly I:124, II:35 Katz, Richard C. I:153 Henriksson, Roy D. III:41 Kaul, Aditya VI:146 Heston, Steven L. V:80 Kavee, Robert C. I:19, 85 Hill, Joanne M. VI:6, VII:223 Keim, Donald B. I:130, II:4, III:114, V:36, VII:116 Hirshfeld, David I:30, VIII:113 Kelly, Bryan T. VII:188, VIII:49, VIII:112 Ho, Thomas S. Y. VI:172 Kent, Glenn I:36 Hoag, James W. I:96, 98 Kingsland, Louis I:78, II:24 Hobbs, John H. III:120 Kirby, Robert G. I:111 Hodrick, Robert IV:54 Kjaer, Knut VII:223 Holaday, Bart IV:9 Klesch, Gary I:80 Holden, Craig W. IV:46 Klotz, Richard V:158 Holderness, Clifford G. III:110 Koeneman, John I:57 Hong, Harrison VII:83, VIII:77 Kogan, Leonid VII:197 Hou, Kewei VI:4 Kogan, Shimon VIII:167 Houglet, Michel I:26 Kogelman, Stanley III:3, IV:19, 93, 109, VIII:87 Hsieh, David A. IV:66, VI:112, VIII:27 Koijen, Ralph S. J. VII:46 Hsu, Jason VIII:4 Kopprasch, Robert V:148 Huang, Chi-Fu IV:30 Korajczyk, Robert VIII:72 Huberman, Gur III:124 Korkie, Bob IV:73 Hui, Ken V:93 Korteweg, Arthur G. VII:18

Kothari, S. P. VI:30 Lucas, Deborah V:2 Kotkin, Joel VIII:57 Luce, R. Duncan V:13 Koudijs, Peter VIII:164 Luckyn-Malone, Laura III:34 Kreps, David M. II:40, 51 Kretzmer, Peter III:137 Mackay, Robert J. IV:27 Kritzman, Mark P. II:36, III:48, IV:51, V:16, VIII:31, VIII:127 Madansky, Albert I:65, 66 Madden, William B. I:22 Kroll, Bernard III:19 Kwan, Simon IV:82 Madhavan, Ananth IV:44, VI:151, VII:15, 219 Kyle, Albert S. III:44, IV:4, VI:42, VII:214, VIII:174 Madrian, Brigitte VII:156 Mahabir, Kris IV:29 Laibson, David VIII:41 Maksimovic, Max II:46 Lakonishok, Josef IV:104 Malmendier, Ulrike M. VII:68 Lamont, Owen VI:167 Manolis, J. Steven II:38 Langetieg, Terence C. III:62, 65, 86 Mark, Robert IV:29 Langsam, Joseph A. III:21 Markowitz, Harry M. I:109, IV:75, 121, V:118, 125, VI:21, VII:122 Lanstein, Ron I:132, 145 Laver, Steven I:90 Marsh, Terry A. II:65 Lavin, Thomas II:38 Marshall, William VI:51 Leamer, Edward E. IV:118 Mauboussin, Michael J. VII:56, VIII:40 Le Baron, Dean I:36, 130 Mayhew, Stewart VI:164 LeBaron, Blake IV:67 McAfee, R. Preston IV:15 Ledoit, Olivier V:126 McCloskey, Donald IV:117 Lee, Charles M. C. V:19, VI:129, VIII:140 McConnell, Pat III:80 Leeper, Eric M. VI:124 McCulloch, J. Huston I:52 Lehmann, Bruce N. III:123 McDonald, Robert L. II:9 Leibowitz, Martin L. I:1, 9, 16, 21, II:1, 23, III:3, 62, 63, 71, 104, 119, 128, IV:7, 19, 93, 109, V:161, VI:53, VII:6, VII:109, 201, VIII:87, VIII:133 McEnally, Richard W. I:24 McGahan, Richard P. II:23 McLaren, Constance H. I:110 McLean, David VIII:104 Leisenring, James J. III:66 McMillan, John IV:13 Leland, Hayne E. V:63, VI:153 McTaggart, James I:142 Lemons, Chuck F. III:79 Meagher, David I:75 Lettau, Martin VIII:125 Melnikoff, Meyer I:96 Leung, Joseph V:124 Mendelson, Haim IV:40 Levy, Kenneth N. III:109 Mennis, Edmund A. II:49 Lewis, Alan I:90 Merton, Robert C. III:115, VI:101, VII:174 Liang, Bing VI:111 Meyer, Jack III:49, IV:9, 12 Light, Jay O. IV:12 Michaely, Roni VII:191 Lindenberg, Eric III:3 Michaud, Richard I:82, 107, 138, II:64, III:99, IV:61, 70, V:17, 30, 36, 120, VI:8, 68, 158 Lindsey, Richard R. VI:107, VII:70 Linnainmaa, Juhani VIII:2 Miles, Michael III:87 Litt, Michael C. VI:97 Milken, Michael R. II:5 Litterman, Robert III:9, IV:11, V:132, 150, VI:53 Miller, David W. I:102 Litzenberger, Robert IV:33, VIII:169 Mitchell, Mark L. III:56, VI:155 Lo, Andrew W. IV:65, 103, V:8, 141, VI:59, 115, VII:54, 88, VIII:26 Mitchell, Olivia S. VII:166 Meigs, A. James I:46 Loeb, Thomas F. I:132 Modest, David M. III:123, V:102 Logue, Dennis I:158 Monahan, James P. I:107 Long, William F. III:52 Morck, Randall II:24 Longstaff, Francis VI:77, VIII:47 Moskowitz, Toby VIII:159 López-de-Silanes, Florencio IV:85 Motley, Brian IV:59 Lorie, James H. II:60 Muller, Frederick L. III:67, 71 Lotsoff, Seymour N. I:66 Mulvey, John M. V:115 Love, Douglas A. II:27, III:82 Murphy, Joseph E. I:92 Lucas, Charles M. IV:34 Murray, Roger F. I:79, III:118

Murray, Scott VIII:64 Ramezani, Cyrus A. VI:165 Musto, David K. VI:102, 157 Ramond, Charles I:40 Myers, Stewart C. VI:105, VII:96 Rappaport, Alfred VI:2 Ready, Mark J. V:82, VII:205 Nagorniak, John I:133, 145, IV:116 Record, Eugene E. I:132, II:22, 73, III:15, 47, 49 Naik, Narayan VI:122 Reed, Adam VI:104 Nakamura, Takeo II:57 Reid, Kenneth I:132 Nalebuff, Barry V:43, VIII:155 Reinganum, Mark R. III:128 Nicholson, David J.S. I:153 Rennie, Edward P. II:22 Noll, Roger VIII:54 Rentzler, Joel I:71 Novy-Marx, Robert VIII:115 Rich, Don V:146 Nowak, Martin A. V:38, VI:38 Richards, Thomas M. III:102 Nowakowski, Christopher A. II:11 Richardson, Matthew VII:25 Richardson, Scott VIII:67 O'Brien, John W. I:77, II:20 Rie, Daniel II:63, III:37, 118 Obizhaeva, Anna VII:214 Rigobon, Roberto VI:133 Obstfeld, Maurice VIII:95 Ritter, Jay R. III:107, VIII:111 Odean, Terrance V:14, 104, VI:37, VII:216, VIII:99, VIII:144 Rock, Kevin IV:38 Rogoff, Kenneth IV:56 Oldfield, George I:27 Roll, Richard I:86, II:68, VI:148 Osborne, Maury I:91 Roman, Theodore S. III:28 Owens, Paul III:7 Ronn, Ehud III:25 Rosenberg, Barr M. I:26, 88, 119, 127, 131, V:124 Pakianathan, Vinod IV:63 Ross, Stephen A. III:131, VIII:121 Page, Sébastien VII:144 Rossi, Peter E. III:106 Pang, Eric III:1 Rothstein, Morton IV:43 Pastor, Lubos VIII:13 Rouwenhorst, K. Geert VI:83, VII:22 Patel, Jayendu V:28 Roy, Amlan VIII:50 Patel, Sandeep VII:12 Ruback, Richard S. III:55 Patton, Andrew VIII:21 Rubinstein, Mark E. I:69, II:20, III:17, IV:102, V:78 Pearson, James I:19 Rudd, Andrew I:67, 70, 120, 127, II:24, 44, 70, III:29, 50, 95, V:93, 143, VII:32 Peavy, John W. I:132 Penman, Stephen H. VI:31, VII:184 Russell, George F. I:74 Perold, André F. I:109, III:81, IV:123, VI:49, 143, VII:14 Sack, Paul I:95 Peskin, Michael V:52, 140 Sadka, Ronnie VII:113 Peters, Edgar III:60 Sakaguchi, Yusaku III:39 Peters, Helen F. II:2 Salisbury, Dallas L. II:32, III:73 Pezier, Jacques V:133 Sandor, Richard I:71 Pfleiderer, Paul III:93, V:98 Sapra, Steven VII:1 Pflueger, Carolin VIII:124 Sauter, George U. VI:160 Phillips, Thomas K. IV:77, V:53 Schaefer, Stephen M. I:48, 52, VI:78 Plott, Charles R. IV:16 Scherbina, Anna VIII:1 Pohl, Charles I:139, IV:113 Scherer, Bernd VII:36 Polk, Christopher VIII:118 Schielke, Hugo J.H. I:19 Pontiff, Jeffery VII:135 Schlusche, Bernd VIII:1 Porter, Michael E. II:50 Scholes, Myron S. II:28, IV:32, VI:117, VII:175 Post, Larry II:5 Schreyer, Gary W. II:3 Poterba, James M. V:73 Schulman, Evan I:157, 159, II:59 Poundstone, William V:40 Schwartz, Eduardo S. I:47 Powell, Shari L. IV:97 Schwartz, Robert A. I:100, 101 Pozen, Robert Charles I:82, VI:98 Schwert, G. William III:139 Rainville, Henry B. II:72 Scott, James H. I:27, 91, II:40, 44, IV:9, VII:198 Ramaswami, Murali III:5, 7, IV:26, 29 Seagars, Alan D. I:14 Ramaswamy, Krishna III:9 Searcy, William N. III:79

Seidel, David III:101, 102 Testa, David I:57 Senft, Dexter I:11 Thaler, Richard H. III:113, IV:98, V:6, VII:59 Shanken, Jay V:10, 34 Thomas, Lee VI:7 Shapira, Zur IV:100 Thorley, Steven VIII:69 Sharpe, Bill VIII:133 Tierney, David E. I:75, IV:6 Sharpe, William F. I:63, 84, 113, 134, 137, III:64, 75, 100, V:16, 111, 127, 148, VI:52, 72, VII:123 Tilley, James A. II:18 Timmerman, Allan V:47 Sheikh, Aamir V:143 Tint, Larry I:114, 118 Sherrerd, Katrina F. VII:11 Titman, Sheridan V:33 Shiller, Robert J. I:140, VIII:109 Toevs, Alden L. I:1 Shleifer, Andrei IV:84, VIII:36 Treanor, John H. IV:28 Shouse, Polly I:113 Treynor, Jack L. I:25, 33, 49, 80, 83, 105, 124, 136, II:40, 54, 56, III:58, 120, 129, IV:3, 10, 49, V:42, 154, 165, VI:39, 67, 126, 170, VII:107, VIII:66 Shue, Kelly VIII:165 Shultz, Robert E. II:22, III:52, 70, 71, 79, 101, 102, IV:1, 61 Shumaker, Robert V:26 Trueman, Brett VI:17, 26 Siegel, Jeremy J. VIII:52, VIII:106 Tucker, Cynthia VII:143 Siegel, Laurence B. VIII:133 Turner, Andrew III:97, IV:75, 111 Sims, Ian IV:60 Twardowski, Jan I:55, II:12 Singleton, Kenneth V:59, 68 Tynan, Mary Ann III:47 Sirri, Eric IV:123 Sitver, Steven IV:33 Umstead, David A. II:14, III:28, IV:115 Skinner, Frank S. V:62 Smetters, Kent VI:92, VII:76, 153, VIII:11, VIII:147 Valliere, Gregory R. VII:137 Smith, Roger I:115 Vandell, Robert I:61, 111 Smith, Vernon L. IV:42 Vanderhoof, Irwin T. I:9 Sloan, Richard G. VI:32, VII:195 Vanroyen, Anne-Sophie V:136 Vasicek, Oldrich I:4, III:1, 37, IV:19, 21, 76 Solnik, Bruno I:54, V:87 Vassalou, Maria V:4 Sophianos, George VII:219 Verdelhan, Adrien VIII:88 Sorenson, Eric H. IV:26, VII:9 Viceira, Luis M. VI:131, VII:158 Spatt, Chester V:70, VI:22, VII:148, VIII:138 Vincent, Linda VI:28 Speidell, Lawrence S. III:35 Spence, A. Michael I:106, II:40, 52 Wagner, Gerald C. V:131 Spiegel, Matthew VII:95 Wagner, Wayne I:22, 35, 38, II:43, 74, III:68, IV:37, 63, VI:138 Spivack, Joseph II:37 Stafford, Erik VIII:24 Waldman, Michael II:1 Stambaugh, Robert F. VII:41, 101, VIII:10 Wallich, Henry C. I:44 Stanyer, Peter W. IV:79 Wang, Jiang VII:109, VIII:170 Stapleton, Richard I:114 Wang, Ming Yee V:117 Starks, Laura T. III:45 Waring, M. Barton VI:47, VIII:153 Statman, Meir IV:106, V:6 Warshawsky, Mark J. VII:164 Staub, Renato VI:57 Watts, John H. I:19 Stein, David M. IV:77 Webb, Brian IV:86, 89 Stephens, Phillip E. II:38 Weeden, Donald I:103, III:97 Stewart, Scott D. IV:113 Weedon, D. Reid, Jr. II:42 Stoll, Hans R. II:57, IV:41 Weisman, Andrew VI:118, VII:12, VIII:157 Stout, Lynn A. V:153 Weiss, Laurence III:9 Strongin, Stephen H. VI:6 Wermers, Russ VII:4, 34 Stulz, René M. V:91 West, Ken VIII:81 Summers, Lawrence III:111 Westbrook, Harvey, Jr. VI:107 Westerfield, Randolph I:32 Taheri, Hadi V:124 Whalen, Robert L. I:4, 71, III:88 Tartaglia, Nunzio I:134 Whaley, Robert W. II:57, III:16 Tepper, Irwin I:75, II:29, 31, III:74 Wheeler, Langdon B. III:126 Terada, Noboru II:57 Whitcomb, David K. I:100

White, Halbert IV:68, VI:8 Whitelaw, Robert F. IV:58, VIII:85 Wilcox, Michael F. III:38 Willen, Paul S. VII:162 Williams III, Arthur I:19, II:23, 40, III:79 Williamson, J. Peter VI:60 Willmore, Henry VI:128 Willoughby, Hugh E. VIII:57 Winklevoss, Howard E. I:77, II:33 Winston, Kenneth J. VII:178 Wisner, Len I:9 Wolfarth, Alwyn E. I:94 Wood, Arnold S. IV:106, V:18, 28, 148 Wunsch, R. Steven III:98 Wurgler, Jeffrey VI:33

Xu, Gan Lin IV:121

Yamini, Matt VII:223 Yogo, Motohiro VII:150, VIII:149 Young, Donald V:56 Yuring, Rear Admiral Michael VIII:58

Zeikel, Arthur III:43 Zeldes, Stephen VIII:151 Zevin, Robert B. II:44 Zhang, Lu VIII:71 Zhou, Guofu VIII:98 Zipkin, Carol I:36 Zisler, Randall IV:90 Zitzewitz, Eric VII:141, VIII:177
