Business Models and Incentives in Rating Markets: Three Essays

by

Paul Robert Seaborn

A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy, Graduate Department of Management, University of Toronto

© Copyright by Paul Robert Seaborn 2011


Business Models and Incentives in Rating Markets: Three Essays

Paul Robert Seaborn

Doctor of Philosophy

Graduate Department of Management

University of Toronto

2011

Abstract

This dissertation consists of three essays linking the business models of rating agencies to the rating decisions these agencies make as market intermediaries between buyers and sellers.

The first study examines the link between a rating agency's primary revenue source and its rating decisions. Theoretically, rating payments could influence rating agency decisions or be counterbalanced by reputational rewards for rating accuracy. I explore this relationship in U.S. corporate credit ratings, where some agencies are primarily paid by bond issuers (sellers) and others by investors (buyers). Analysis of a balanced panel of 338 companies rated between 2005 and 2009 reveals that agencies produce differing ratings consistent with the preferences of their paying customers. Changes in buyer-paid ratings are more frequent and generally precede corresponding seller-paid rating changes. Seller-paid ratings are slower to incorporate negative information, particularly for rated firms in the financial services sector and firms with ratings above a critical grading cutoff.

The second study complements the first by estimating the gap between the rating information disclosed by sellers and the information sought by buyers, again using evidence from U.S. corporate credit ratings. While seller willingness to pay for an additional rating is highly concentrated among a subset of relatively high-quality firms, buyers demonstrate more uniform interest in additional ratings for firms at all quality levels. This finding highlights an information gap among high-risk firms that is not a major focus of existing regulation.

The third study focuses on rating decisions by government rating agencies, an alternative rating model to those examined in the first two studies. The empirical setting is Canadian film classification where the existence of multiple regional regulators has been justified by claims of variation in community standards. I find significant and increasing consistency in the regulatory decisions of these agencies, suggesting institutional isomorphism that brings into question the persistence of the parallel regional structure.

Overall, these studies provide new empirical insight into the relevance of rating agency heterogeneity to firm strategy and policy. The findings may also be relevant to a variety of other settings involving information disclosure such as environmental impact and corporate social responsibility.


Dedication

I dedicate this thesis to my wife Heidi, who has supported and encouraged me from the first moment I considered a career in academia and whose love continues to inspire me daily. I also dedicate this thesis to my parents, Tex and Glynda Seaborn, who fostered in me a love of learning and new experiences and the confidence to pursue my dreams. I am grateful to all of my family, friends and colleagues for their support and encouragement.


Acknowledgments

I am truly fortunate to have chosen the University of Toronto and the Rotman School of Management for my PhD studies five years ago. Little did I know that I was joining such a rich learning environment and amazing community of scholars. My thesis committee members – Brian Silverman, Tim Simcoe, Mara Lederman and Anne Fleischer – have provided me with wonderful support and guidance, particularly at the most difficult junctures of the process. As researchers, teachers and mentors they have given me four powerful examples of how to make the most of my academic career. One of the most rewarding aspects of my time at Rotman has been getting to know my fellow PhD students whom I admire and respect tremendously. Christian Catalini, Alastair Lawrence, Jay Horwitz, Elena Kulchina, Alex Oettl, Alison Kemper and Nan Jia have been particularly good friends and colleagues. Many others at Rotman, including George Fleischmann, Joanne Oxley, Anita McGahan, Bill McEvily, Ken Corts, Olav Sorenson, Sarah Kaplan, Matt Grennan and Rick Powers have also been very helpful to my progress. Financial support from the SSHRC Canada Graduate Scholarship and the AIC Institute for Corporate Citizenship is gratefully acknowledged. Data access from Egan-Jones, Gus DeFranco, Florin Vasvari, Olav Sorenson, Sean Forbes, Kevin Mak and the Rotman BIC is also acknowledged.


Table of Contents

Dedication ...... iv

Acknowledgments ...... v

Table of Contents ...... vi

List of Tables ...... ix

List of Figures ...... xi

Chapter 1 ...... 1

1 Introduction ...... 1

Chapter 2 ...... 3

2 Business Models and Incentives in Rating Markets: How ‘Who Pays’ Matters ...... 3

2.1 Introduction ...... 3

2.2 Literature and Theory ...... 6

2.3 Empirical Setting: U.S. Corporate Credit Ratings ...... 12

2.4 Data and Sample ...... 14

2.5 Empirical Approach ...... 16

2.5.1 Fixed Effects Panel Data Regression ...... 16

2.5.2 Granger Causality Tests ...... 18

2.6 Results ...... 19

2.6.1 Descriptive Statistics ...... 19

2.6.2 Fixed Effects Panel Data Regression ...... 21

2.6.3 Granger Causality Tests ...... 24

2.7 Discussion and Conclusion ...... 26

Chapter 3 ...... 38

3 Do Sellers Disclose What Buyers Want to Know? Evidence from U.S. Credit Rating ...... 38


3.1 Introduction ...... 38

3.2 Empirical Setting: U.S. Corporate Credit Ratings ...... 41

3.3 Literature and Hypotheses ...... 43

3.3.1 Why Do Sellers Pay for a (Third) Rating? ...... 43

3.3.2 Why Do Buyers Pay for Ratings? ...... 44

3.4 Data ...... 48

3.4.1 Measures ...... 48

3.4.2 Sample & Unit of Analysis ...... 49

3.4.3 Descriptive Statistics ...... 50

3.5 Empirical Approach ...... 52

3.6 Results ...... 54

3.6.1 Linear Probability Model Results ...... 54

3.6.2 Cox Model Results ...... 55

3.6.3 Robustness Checks ...... 57

3.7 Discussion and Conclusion ...... 59

3.7.1 Implications for Firm Strategy ...... 59

3.7.2 Implications for Policy ...... 60

3.7.3 Conclusion ...... 61

Chapter 4 ...... 75

4 Regulatory Convergence: An Exploratory Examination of Government Film Classification ...... 75

4.1 Introduction ...... 75

4.2 Literature and Theory ...... 76

4.2.1 Factors Increasing Consistency ...... 78

4.2.2 Factors Decreasing Consistency ...... 79

4.3 Data and Sample ...... 81

4.3.1 Sample ...... 81

4.3.2 Measures ...... 81

4.3.3 Descriptive Statistics ...... 84

4.4 Empirical Approach & Results ...... 84

4.4.1 Consistency Score Analysis ...... 84

4.4.2 RatingAge Analysis ...... 85

4.5 Discussion and Conclusion ...... 87

Bibliography ...... 98


List of Tables

Table 2.1: SEC Nationally Recognized Statistical Rating Organizations (NRSRO) ...... 30

Table 2.2 Summary Statistics ...... 31

Table 2.3 Decomposition of Rating Differences During the Economic Downturn ...... 32

Table 2.4 Granger Causality Tests for S&P and Egan-Jones ...... 33

Table 2.5 Granger Causality Tests for Moody's and Egan-Jones ...... 34

Table 3.1 Nationally Recognized Statistical Rating Organizations (NRSRO) ...... 62

Table 3.2 Summary Statistics – Full Monthly Rating Sample ...... 63

Table 3.3 Summary Statistics – Fitch or Egan-Jones Entry after S&P/Moody's ...... 64

Table 3.4 Average Rating Differences by Rating Category ...... 65

Table 3.5 Linear Probability Model for Adding a Fitch or Egan-Jones Rating ...... 66

Table 3.6 Cox Proportional Hazard Model for Time to Adding Fitch Rating ...... 67

Table 3.7 Cox Proportional Hazard Model for Time to Adding Egan-Jones Rating ...... 68

Table 3.8 Logistic Regressions for Adding a Fitch or Egan-Jones Rating ...... 69

Table 3.9 Logistic Regression for Fitch or Egan-Jones Rating Coverage ...... 70

Table 4.1 Regulatory Scope and Classification Levels of Canadian Film Classification Bodies ...... 90

Table 4.2 Overview of the Ontario Film Review Board (OFRB) ...... 91

Table 4.3 Canadian Classification Boards As Represented in IMDB, 1993-2007 ...... 91

Table 4.4 Example Movie Pair for Calculation of Consistency Score ...... 92

Table 4.5 Variable Definitions and Summary Statistics ...... 92

Table 4.6 Film Classification Consistency Scores by Province ...... 93

Table 4.7 Explanatory Factors in Canadian Film Classification Decisions ...... 94

Table 4.8 Explanatory Factors in Canadian Film Classification Decisions ...... 95


List of Figures

Figure 2.1 Data Structure ...... 35

Figure 2.2 Average Active Ratings for Balanced Panel (2001 – 2009) ...... 36

Figure 2.3 Average Active Ratings for Balanced Panel (2005 – 2009) ...... 36

Figure 2.4 Rating Changes During the Economic Downturn ...... 37

Figure 3.1 Egan-Jones Web Site Snapshot ...... 71

Figure 3.2 Rating Coverage by Agency ...... 72

Figure 3.3 Cumulative Hazard – Fitch Rating by Rating Category ...... 72

Figure 3.4 Cumulative Hazard – Fitch Rating by Tiebreaker Status ...... 73

Figure 3.5 Cumulative Hazard – Egan-Jones Rating by Rating Category ...... 73

Figure 3.6 Cumulative Hazard – Egan-Jones Rating by Tiebreaker Status ...... 74

Figure 4.1 Distribution of RatingAge by Province ...... 96

Figure 4.2 Consistency Levels ...... 97


Chapter 1

1 Introduction

Ratings and certifications are prevalent in a variety of settings, enabling quality disclosure that may serve as a partial remedy to information asymmetries between buyers and sellers. This dissertation poses three main questions related to business models and incentives among rating agencies. First, what is the relationship between rating agency business models and rating outcomes? Second, in settings with voluntary disclosure do sellers disclose what buyers want to know? Lastly, when multiple rating agencies rate a common set of products over a common time period, how closely aligned are the decisions made by these parallel regulators and is there a trend toward increased consistency over time?

There is now considerable work in the strategy, economics and sociology literature examining the impact that ratings have on the decisions of potential buyers (Dafny & Dranove, 2008; Jin & Sorensen, 2006; Pope, 2009; Shrum, 1991; Xiao, 2010), on organizations that are rated by outside parties (Chatterji & Toffel, 2008; Espeland & Sauder, 2007; Jin & Leslie, 2003; Sauder & Espeland, 2009) and on organizations required to disclose additional information to buyers (Bollinger, Leslie, & Sorensen, 2011; Mathios, 2000).1

A complementary stream of research analyzes the rating process itself – which firms and products are rated, what ratings are assigned and how the business models and incentives of the rating agencies affect these decisions. Understanding this earlier step in the rating process is important for a variety of stakeholders – buyers and sellers as well as policy-makers and the rating agencies themselves. Empirical study of rating markets requires a clear understanding of institutional details, access to rating decision data, and a variety of complementary information sources.

1 Previous rating settings examined in the strategy, economics and sociology literature include restaurants (Jin and Leslie 2003), sports cards (Jin, Kato et al. 2009), thoroughbred horses (Chezum and Wimmer 1997), corporate social responsibility (Jin 2005; Chatterji, Levine et al. 2007), hospitals (Dranove, Kessler et al. 2003; Jin 2005) and business schools (Espeland and Sauder 2007).


The research questions of this dissertation are examined in two distinct settings, U.S. corporate credit rating (Chapters 2 and 3) and Canadian film classification (Chapter 4). I focus on these settings for three key reasons. First, they represent two of the most well-established rating processes in North America, each dating back to the early 20th century and offering a rich qualitative and quantitative track record. As a result, they have reached a level of maturity and complexity yet to be experienced in nascent rating settings such as environmental and corporate governance ratings. This makes these settings well-suited to empirically test key concepts that have been identified in the theoretical literature on ratings and information disclosure. For example, these settings encompass a variety of rating organizations – for-profit seller-paid rating agencies, for-profit buyer-paid rating agencies, government-appointed rating agencies and industry-led rating agencies – which allows for unique insight into the relevance of these organizational forms. Second, rating decisions in both settings have significant economic impact on rated firms and the overall economy, both within North America and worldwide. Finally, in both settings active debates exist in academia, government, and industry regarding all aspects of these rating processes, including the appropriate role of competition, government oversight, for-profit or non-profit rating activity, mandatory disclosure, and buyer and seller responsibility.

Together, these three studies provide new perspectives on rating markets from both a strategy and a policy perspective. As technology and globalization continue to create new requirements and opportunities for ratings and information disclosure, I hope these studies will have relevance in their current settings and well beyond.

Chapter 2

2 Business Models and Incentives in Rating Markets: How ‘Who Pays’ Matters

2.1 Introduction

Ratings affect substantive matters such as purchase decisions, access to capital markets, and firm reputation, so it is not surprising that many firms pay close attention to the ratings they receive (Chatterji & Toffel, 2008; Durand, Rao, & Monin, 2007). An obvious concern is whether rating agencies are “fair” and issue accurate ratings. While inaccurate ratings could be due to the challenge of evaluating a difficult subject (Chatterji, Levine, & Toffel, 2009) or lack of skill on the part of the rating agency, ratings could also be biased due to conflict of interest, particularly if the users of ratings are not the ones paying for the ratings. Concern regarding this source of rating bias has increased following the massive decline in structured-finance security ratings during 2007-2008 (Benmelech & Dlugosz, 2010) and the broader financial crisis that followed.

Of the past research that has examined potential rating bias due to conflict of interest, most has focused on time-invariant rating differences that cause one set of rated firms to be treated more favorably than another (Becker & Milbourn, 2011; Dellarocas, 2006; Hayward & Boeker, 1998; Waguespack & Sorenson, 2010). These studies primarily focus on a single rating agency or a set of agencies with a common business model. Empirical evidence linking rating bias to differences in the incentives resulting specifically from differing revenue sources is limited, in part because many rating settings lack the available data or necessary identifying variation.2

This paper contributes to the growing literature on ratings as well as the broader literature on business models and conflict of interest by specifically investigating the relationship between rating agency business models and rating outcomes. In the recent strategy literature, ‘business model’ has been defined to encompass the internal logic of the firm – the way it operates and how it creates value (Casadesus-Masanell & Enric Ricart, 2007). Here I classify rating agencies based on their primary source of revenue. Under a seller-paid model, the agency receives revenue from sellers who pay to have their product offerings or overall firm rated. The resulting ratings are made available to buyers and other interested parties. Under a buyer-paid model, potential buyers of a product or service pay a rating agency for access to private ratings.3

2 An exception from the accounting literature is Beaver, Shakespeare et al. (2006), who compare ratings from two rating agencies – one that is SEC-certified and one that is not.

My empirical setting, U.S. corporate credit ratings, offers several features that make it particularly well suited to this study. First, and most importantly, this type of credit rating has attracted multiple competing raters, some primarily paid by sellers (bond issuers) and others primarily paid by buyers (institutional investors), with both types of agencies often rating the same firm at the same time.4 Second, detailed high-frequency data on issued ratings is available. Third, while corporate credit ratings did not decline as dramatically as structured finance credit ratings during the worldwide financial crisis of 2007-2010, they did experience a period of instability that is helpful in identifying potential rating bias separately from other causes of rating differences.

Theoretically, I propose expanding the dimensions on which potential bias is examined beyond those prevalent in the prior literature to include the speed and relative timing with which rating agencies respond to change. I propose that ratings from buyer-paid agencies will be more responsive than those from seller-paid agencies to the negative information of the financial crisis and that these differences will be most significant at margins where conflict of interest is most likely to arise. I also propose that buyer-paid agencies will change their ratings more frequently and that these changes will increase the likelihood of corresponding seller-paid changes for the same rated firm.

I construct a novel dataset of ratings between mid-2005 and mid-2009 that combines publicly available credit ratings with additional ratings normally made available only to paying subscribers. Once the selection of which firms are rated and the timing of rating announcements have been accounted for, it is clear that seller-paid ratings are not consistently higher than buyer-paid ratings on average during the sample period. However, during the financial crisis, average buyer-paid ratings declined much earlier than the average ratings of seller-paid agencies. Rating differences between the two business models are most noticeable for rated firms within the financial services sector, where personal and organizational ties to the seller-paid rating agencies are particularly close. Rating differences are also significant for firms whose seller-paid ratings start out above the investment grade cutoff and thus are most adversely affected by rating downgrades that bring them closer to or into the range of “junk” bonds. Overall, seller-paid ratings are more stable and change less frequently, while buyer-paid ratings fluctuate more dramatically, with changes that generally precede corresponding seller-paid rating changes for the same firm, particularly for bad news (rating downgrades). This evidence is inconsistent with purely informational differences between the business models. While not definitive, these results suggest that one or both types of rating agencies may bias their ratings in accordance with the preferences of their paying customers.

3 Alternative business models such as government-paid ratings are not the focus of this paper.
4 In structured finance credit ratings, buyer-paid ratings only appeared well after the widespread downgrades of 2007-2008 and remain scarce.

This paper makes several contributions. I highlight the importance of moving beyond mean differences to consider variability and timeliness of ratings as additional dimensions which can be affected by rating bias or conflict of interest in settings where ratings are dynamic. I provide new empirical evidence consistent with a link between rating agency business model and rating outcomes.5 The paper also provides a clearer theoretical understanding of the balance between potential conflict of interest and reputational constraints and the conditions under which one may outweigh the other. The findings offer useful insight for multiple audiences, including managers in firms that are rated or use ratings, rating agencies themselves, as well as policymakers.

The rest of the paper proceeds as follows. Section 2.2 provides a review of prior literature and lays out the hypotheses to be tested. Section 2.3 provides institutional background on my empirical setting. Section 2.4 describes the data and sample. Section 2.5 introduces my empirical approach. Section 2.6 summarizes my results while Section 2.7 contains a discussion and conclusion.

5 This contribution is consistent with the call by Dranove & Jin (2010) for research focusing on the economics of certifiers asking “does it matter if they collect revenue from sellers or buyers?”


2.2 Literature and Theory

The prior literature suggests a number of reasons why seller-paid and buyer-paid rating agency business models could generate different rating outcomes. These include the tailoring of ratings to the preferences of paying customers, informational differences and operational differences.

The first and most prominent argument for expecting differing rating outcomes is that the differing incentives inherent in the seller-paid and/or buyer-paid business models lead agencies to tailor their rating decisions to the preferences of their paying customers. Seller-paid and buyer-paid agencies serve different primary customers whose willingness to pay for ratings is based on differing rating attributes. Because sellers value ratings that increase the perceived attractiveness of their offerings, higher ratings are generally better and being rated high enough to enter a buyer's consideration set is particularly valuable. In contrast, buyers typically prefer accurate ratings that allow them to make better purchase decisions than they could make with alternative information.

Reputation, along with disclosure or external monitoring, may discipline rating agencies and provide an incentive to generate accurate ratings (Horner, 2002; Klein & Leffler, 1981), given that a severe loss of trust in an agency's opinions could eliminate its audience of rating users and thus render its ratings worthless. However, previous theoretical research on information disclosure and credit rating highlights a number of reasons why reputational constraints may not be sufficient to prevent intentional rating bias. First, even if one were to assume that the two types of rating agencies share common reputational goals, reputational incentives may fail if the product being rated is too complex for accurate ex-post evaluation (Mathis, McAndrews, & Rochet, 2009) or if the demand for ratings or the percentage of naive buyers in the market is high (Bolton, Freixas, & Shapiro, 2011). Disclosure and external monitoring can also be insufficient if quality assessment is too noisy (Benabou & Laroque, 1992; Wolinsky, 1983), if buyers are too forgiving, or if disclosure causes agencies to feel morally licensed to bias their assessments (Cain, Loewenstein, & Moore, 2005). Recent work examining the private regulatory enforcement of vehicle emissions under differing organizational structures (Pierce & Toffel, 2010) and the relative cleanliness of franchised and corporate-owned chain restaurants (Jin & Leslie, 2009) provides two examples where reputational incentives are insufficient to counterbalance organizational conflicts of interest.


In addition to the potential for agencies to intentionally tailor their ratings to the preferences of their paying customers, there are also informational and operational arguments for differing rating outcomes. First, sellers may share additional private information with seller-paid agencies that they do not share with other rating agencies. This is the case in both U.S. movie age-appropriateness ratings, where the industry-run MPAA receives advance access to films before box office release while other niche rating agencies wait for theatrical release, and credit ratings, where firms preparing to issue bonds can legally share non-public information with chosen rating agencies under Regulation FD (Jorion, Liu, & Shi, 2005). Second, the differing revenue sources of the two business models may also correspond with different operational approaches in terms of staffing, assessment processes, rating grade levels and thresholds. My empirical tests are designed to distinguish between these factors and conflict of interest by looking for results inconsistent with informational or operational arguments and by using firm-by-month fixed effects, which absorb time-invariant differences across raters.

A commonly expressed concern in the popular press in settings such as credit rating is that seller-paid agencies will bias some or all of their ratings higher (closer to AAA) than is justified by underlying quality (Lewis, 2010). Tests for this form of time-invariant rating difference have also been the general focus of prior research on rating bias in the strategy literature and are well suited to settings where all ratings come from a single rating agency business model or ratings are not revised over time. For example, Hayward and Boeker (1998) provide results showing that around the time of corporate finance deals, investment banking equity analysts rate securities issued by the bank's clients more favorably than other analysts rating the same securities.6 Similarly, Waguespack and Sorenson (2010) demonstrate a persistent mean rating bias favorable to major studios and prominent directors in film classification decisions of the U.S. MPAA, a non-governmental industry self-regulatory body. In a supplementary comparison they do not find the same result for government-operated regulatory bodies in other countries.

I argue that looking for time-invariant rating differences is insufficient in a setting such as credit rating where heterogeneous rating agencies are being compared and where it is possible for existing ratings to be revised, either due to new learning about the unchanged underlying characteristics of a product or due to new changes in the underlying characteristics themselves.

6 Unfortunately the authors lack an alternative business model that would allow for a direct comparison of relative bias under multiple organizational forms.

In these settings it can be difficult to separate intentional rating bias from other business model characteristics affecting relative ratings, such as the private information disclosed by sellers to seller-paid raters being consistently positive or the use of a differing rating scale that makes seller-paid ratings appear more favorable (Fleischer, 2009).7 Persistent differences could also arise from rating agency characteristics not directly related to business model, including agency age and market power, or from the chosen setting and timeframe of analysis. The relative magnitude of these other factors could mask the impact of intentional bias. Thus, a finding that seller-paid ratings are persistently higher than buyer-paid ratings does not provide conclusive evidence of conflict of interest, nor does the lack of a persistent difference rule out rating bias related to business model. Comparing ratings at a single point in time or averaging agency differences over a particular period could lead to a misrepresentative conclusion, particularly if rating changes by various agencies are not synchronized.

The introduction of relevant new information into a rating market provides a useful test to distinguish between conflict of interest and the other aforementioned factors affecting rating decisions if the paying customers have differing preferences for how agencies react to the news.8 In analyzing differences in agency ratings before and after an information shock, time-invariant characteristics of the setting such as random measurement error and time-invariant characteristics of individual agencies should not affect the differences. On the other hand, business model differences including conflict of interest and unequal access to private information should be directly relevant.

Based on the typical preferences of buyers and sellers discussed earlier, it is assumed that buyers will have a stronger preference for seeing new negative information incorporated into ratings than the sellers whose ratings could be the ones downgraded as a result. If agencies cater to these preferences in the period after new negative information is released, buyer-paid ratings should demonstrate a more significant reaction to the bad news.

7 If sellers consistently share inaccurate positive private information with seller-paid agencies in an attempt to boost their rating, these agencies should learn this over time and adjust their interpretation accordingly.
8 I label information that causes raters to upwardly revise their assessment of appropriate rating levels as positive information and information that causes raters to lower their assessment as negative information.

Hypothesis 1a: Buyer-paid ratings will decline further than seller-paid ratings during a period of predominantly negative new information.

If observed differences in relative ratings during a period of predominantly negative information are related to conflict of interest, these differences should vary based on the types of firms being rated and how critical the impact of rating changes is to firms at various ranges within the rating scale.

Sellers who represent a large proportion of an agency's revenue or profits, or who interact with the agency frequently, should be more likely to be rated favorably than less significant customers, due to factors such as power, reciprocity within social networks and career concerns among individual employees (Chevalier & Ellison, 1999; Emerson, 1962; Uzzi, 1997). In credit rating, firms in some industries, such as financial services, interact with the seller-paid rating agencies in multiple capacities and across many different rating types, allowing the formation of closer social and organizational ties. As a result, seller-paid agencies may be more hesitant than buyer-paid agencies to downgrade such firms. It is consistent with this logic that buyer-paid rating agencies often market their lack of ties to any particular firm or industry as a selling feature.9

Hypothesis 1b: Rating differences between seller and buyer-paid agencies in the response to new negative information will be greater for rated firms with close ties to seller-paid rating agencies.

Finally, the presence of a particularly critical juncture within a given rating scale, such as the cutoff between A and B grades in California restaurant hygiene report cards (Jin & Leslie, 2003) or the 15-minute cutoff used to classify US flights as on-time or late (Forbes, Lederman, & Tombe, 2011), has been linked to the gaming of rating outcomes by those in control of rating information. If this is also the case for the investment grade cutoff in credit ratings, which separates investment grade ratings from high-risk or “junk” ratings, seller-paid agencies responding to the preferences of their customers may hesitate more to downgrade firms who are currently above a key cutoff, to avoid or delay the significant negative impact such a downgrade would trigger.10 On the other hand, once firms have fallen below such a cutoff the pressure to avoid further downgrades may be weaker, since the marginal impact of subsequent downgrades is lower. If buyer-paid agencies' rating decisions are less sensitive or even completely insensitive to the key cutoff, this leads to the following hypothesis.

9 For example, the Egan-Jones web site advertises “No-conflicts-of-interest.....We receive no compensation from any issuers to rate their securities”. Accessed January 10, 2010.

Hypothesis 1c: Rating differences between seller and buyer-paid agencies in the response to new negative information will be greater for those firms whose seller-paid ratings are at risk of crossing a critical rating juncture.

When ratings are expected to change over time, all rating agencies are forced to make a trade-off between accuracy and stability. The chosen balance provides another way to cater rating outputs to the preferences of paying customers. Even where the reputational costs of issuing openly biased rating announcements are prohibitive, agencies have latitude in how quickly they revise their existing ratings, and differences in this timing may be opaque to all but the most attentive observers. Thus my examination of the link between rating agency business model and rating outcomes includes the timing and frequency of rating changes in addition to relative differences in the ratings assigned.

Loffler (2005) argues that [seller-paid] credit rating agencies are slow to react to new information due to a conscious decision to minimize ‘rating bounces’ caused when a change proves short-lived and to neglect cyclical variations in seller credit quality. The relative behavior of buyer-paid agencies is not considered in his theoretical and simulation analysis, but of the two paying groups, investors regularly buying and selling bonds benefit more from timely changes to ratings than issuers who only periodically issue new bonds and do not face the same downside risk. In a paper from the accounting literature, Beaver, Shakespeare et al. (2006) compare Moody's and Egan-Jones ratings from 1996 to 2002 and find Moody's ratings slower to change.11

10 If any uncertainty exists about whether a downgrade below a critical cut-off is warranted, even a short delay allows for the possibility that new developments will make the downgrade unnecessary.
11 I extend these findings by analyzing ratings from a later period, one that encompasses a major financial crisis and in which Egan-Jones was more established and had achieved SEC certification. I also add ratings from S&P, which allow me to contrast the consistency of ratings among the two leading seller-paid agencies with the differences in ratings between these two agencies and buyer-paid Egan-Jones.


Many buyer-paid ratings are sold as sets by subscription.12 Like other subscription-based businesses, buyer-paid rating agencies face an ongoing challenge to maintain the attention of their customers and convince them to renew their subscriptions. Making more frequent rating changes than the publicly available seller-paid ratings is consistent with this goal. If the choice between accuracy and stability is taken to one extreme, ratings may experience dramatic variability or “churn” that is driven by minimal or nonexistent underlying quality changes. At the other extreme, ratings may remain static even in the face of overwhelming evidence for revision.

In terms of reputational incentives for accuracy, not only is there variation in the preferred frequency of rating updates among those paying for ratings, but evaluating this agency characteristic also requires more effort on the part of the information user than simply observing relative rating differences at a given point in time. This leads to the following hypothesis.

Hypothesis 2a: Buyer-paid ratings will change more frequently than seller-paid ratings.

In any setting with two or more rating agencies making non-simultaneous rating announcements, the possibility exists that one agency's rating changes will tend to precede the corresponding changes of other agencies, creating a leader-follower pattern in the relative timing of rating decisions. Differences in the relative frequency of rating changes as discussed in Hypothesis 2a are neither necessary nor sufficient to create such a pattern. If buyer-paid ratings change more frequently on average, as per Hypothesis 2a, but only due to uninformative “churn”, there should be no evidence of buyer-paid rating changes preceding corresponding seller-paid rating changes. At the same time, if seller-paid agency rating changes occur less frequently on average but are often followed by corresponding buyer-paid changes for the same rated firms, a leader-follower pattern could still be found. Since seller-paid ratings are typically made publicly available but buyer-paid ratings are not, the next hypothesis tests whether buyer-paid rating changes have informational value for buyers at the individual firm level.

12 For example, Zagat restaurant ratings or Consumer Reports product ratings.


Hypothesis 2b: Buyer-paid rating changes will increase the likelihood of corresponding rating changes by seller-paid agencies, but not vice versa.

The timing differences of Hypothesis 2b could result from conflict of interest on the part of seller-paid rating agencies but also from other factors. If the findings are driven by conflict of interest, then the differences observed should be more pronounced when conflicts of interest are likely to be stronger. One place to look for evidence of conflict of interest is in differences between rating upgrades and downgrades. If timing differences between agencies are generated by random measurement error or operational differences, these differences should be symmetrical for both upgrades and downgrades. On the other hand, if the cause is agencies catering to differing buyer and seller preferences over the speed of rating changes, the timing differences should not be symmetrical. Assuming that buyers value timely downgrades whereas sellers prefer slower downward rating changes but that their preferences regarding upgrade timing are more aligned, the predicted result for Hypothesis 2b should be more significant for downgrades than upgrades. Large seller-paid credit rating agencies have also been criticized for issuing rating downgrades to struggling firms that became self-fulfilling prophecies by pushing the firms into default or bankruptcy and even influencing overall confidence in the market. These factors may make seller-paid agencies even slower to downgrade than upgrade.

Hypothesis 2c: The leader-follower relationship between buyer-paid rating changes and the corresponding seller-paid rating changes (H2b) will be stronger for downgrades than upgrades.

2.3 Empirical Setting: U.S. Corporate Credit Ratings

In credit ratings, the sellers are known as issuers because they have issued, or plan to issue, bonds or other forms of debt financing. Credit ratings come in the form of letter grades. Bonds rated above a specific point on the grading scale are classified as ‘investment grade’ and treated more favorably under a variety of government regulations and private contracts. Falling below the investment grade cutoff into “junk” status may narrow the pool of potential investors, triggering an immediate sell-off. Credit ratings provide the rater's opinion of the creditworthiness of an issuing entity or of the specific financial obligations it issues. They directly impact the seller's access to capital markets and cost of borrowing. For investors, credit ratings provide an expected probability of credit default and fund recovery and directly affect bond market pricing.


Regulation has played a key role in the competitive structure of the industry. The SEC endorses certain U.S. rating agencies as Nationally Recognized Statistical Rating Organizations (NRSROs), significantly expanding the purposes for which an agency's ratings can be used.13 Investment grade NRSRO ratings have specific uses in a myriad of government regulations, with the result that many institutional investors are prohibited from investing in securities lacking a sufficient number of investment grade NRSRO ratings.14

From 1975 to 2006 only a small number of seller-paid agencies received NRSRO certification. Since then, regulatory changes have allowed additional agencies to become certified, in a move designed “To improve ratings quality for the protection of investors and in the public interest by fostering accountability, transparency, and competition”.15 While seller-paid agencies continue to have higher visibility and far more employees than buyer-paid rating agencies, three of the ten current NRSROs are primarily buyer-paid, as shown in Table 2.1, which describes the current NRSROs. Of the three, Egan-Jones has the broadest rating coverage and longest track record, making it the most useful for testing the thesis of this paper.

Seller-paid and buyer-paid rating agencies have business models that differ across a number of dimensions. The primary source of revenue for seller-paid rating agencies is the issuers of bonds and other securities, but the primary users are bond investors, shareholders, and regulators, who can generally access the ratings for little to no charge.16 Issuers generally prefer to receive higher initial and ongoing ratings, which reduce their cost of capital, improve their likelihood of successfully issuing new debt, and maximize the set of investors interested in holding their active securities. These rating preferences may conflict with those of the primary users.

The primary users of buyer-paid ratings, institutional investors, are also the primary source of revenue, since access is generally restricted to paying customers only. These investors generally prefer accurate ratings and see particular value in rating information that protects them from the downside risk of credit default. Unless they are about to make a new bond purchase they are generally less concerned with upgrades because equity holders, not bond holders, capture most of the upside benefits if a firm's financial outlook improves.

13 A similar approach is used by the USDA, which certifies a group of third-party organizations to inspect and certify organic producers.
14 When the capital reserves of a bank or insurance company are evaluated, securities with high ratings from an NRSRO receive a smaller discount or “haircut” than securities without one or, in many cases, two investment grade NRSRO ratings. Private contracts may also include NRSRO requirements and some go further in requiring ratings from specific named rating agencies.
15 Credit Rating Agency Reform Act, http://ftp.resource.org/gpo.gov/laws/109/publ291.109.pdf, accessed 1-25-2010.
16 Seller-paid ratings are accessible through Bloomberg terminals, news reports and rating agency websites.

Seller-paid agencies are typically engaged by issuing firms and their investment bankers well before any new debt is issued so that ratings can be incorporated into the issue pricing and promotion. The rating process includes an exchange of private information and labor-intensive analysis by assigned agency staff members and a senior review committee that sets the final rating. Buyer-paid firms, on the other hand, typically base their ratings only upon publicly available financial information and have little or no direct interaction with issuers. Both types of agencies generally update or reaffirm existing ratings whenever deemed necessary.

The typical fee structure for a seller-paid agency consists of both an upfront fee charged when new coverage is initiated and a recurring fee charged annually for as long as coverage is maintained. The recurring fee is not tied to the frequency of rating reviews or changes.17 In contrast, buyer-paid agencies typically charge subscribers a flat fee for access to their full portfolio of ratings and receive no funds from issuers. The internal costs of rating a given firm are thought to be much higher for seller-paid agencies than buyer-paid agencies because of the additional staff that interact directly with the issuer.

2.4 Data and Sample

Buyer-paid agency Egan-Jones agreed to share its complete rating history for this study under a series of confidentiality restrictions. The remaining rating announcements are publicly available and were obtained via Bloomberg terminal. The rating announcements are supplemented with Compustat firm-specific information including industry sector and financial characteristics.18

17 Based on interviews with employees of both seller-paid rating agencies and issuing firms, the upfront fee generally consists of both a variable amount based on the size of a firm's bond issue and a fixed base amount.
18 The sources lack a common index so I use a variety of matching techniques to link them all to GVKEY, a Compustat company identifier.


The rating announcements I analyze are those containing long-term U.S. ratings for individual corporations (e.g. IBM). By restricting my sample to corporate ratings, I exclude ratings of individual corporate securities (e.g. IBM five-year notes), structured finance instruments, or municipal, state or federal governments. When compared to structured finance ratings, corporate ratings have a longer history and attract a larger number of both issuers and rating agencies. Corporate ratings also tend to be more variable than government ratings, some of which may remain unchanged for decades.

Analyzing rating differences between agencies requires carefully accounting for the fact that they rate different sets of firms and may start and stop rating these firms during the sample period. Sellers may selectively choose which ratings they pay for and buyer-paid agencies may strategically rate only a subset of firms, such as those that they deem overrated or even underrated by seller-paid agencies. As a result, naive comparisons of ratings across agencies could be skewed by the composition of the set of firms each agency rates and the effects of specific firm ratings entering or exiting the panel, even if rated-firm fixed effects are used.

To address this selection problem I construct a balanced panel that includes only firms rated throughout the sample period by S&P and Moody's, the two most prominent seller-paid agencies, as well as Egan-Jones, the buyer-paid agency with the broadest rating coverage. This reduces the sample size by excluding firms without ratings from all three agencies or with interruptions in their rating coverage during the sample period.

For each rated firm and month I convert individual agency rating announcements, which may arrive at any time, into a monthly panel that includes all agencies with an active rating at month end. Active ratings represent a combination of new ratings issued in that month and previous ratings that have not been withdrawn or revised.19 I focus primarily on the four-year period from July 1, 2005 to June 30, 2009, which encompasses the rise of buyer-paid agencies as well as a worldwide financial crisis.20 The data structure for one firm, Ford Motor Credit, is shown in Figure 2.1, illustrating a gradual rating decline that occurs at varying speeds for different rating agencies.

19 Rating announcements that “reaffirm” an existing rating do not represent a rating change in this data structure.
20 As a robustness check I also make use of an eight-year sample extending back to July 1, 2001 and obtain similar results. However, far fewer firms qualify for the balanced panel by being rated throughout that period.
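To make the event-to-panel conversion concrete, the following is a minimal pandas sketch of the construction described above. The column names (gvkey, agency, date, rating) and the toy announcement records are hypothetical; a real construction would also extend each series through the end of the sample window and drop withdrawn ratings.

```python
import pandas as pd

# Hypothetical announcement-level data: one row per rating announcement.
announcements = pd.DataFrame({
    "gvkey":  [1001, 1001, 1001, 1001],
    "agency": ["SP", "EJ", "EJ", "SP"],
    "date":   pd.to_datetime(["2005-07-12", "2005-07-20",
                              "2006-02-03", "2006-09-15"]),
    "rating": [8, 7, 9, 9],   # numeric grade, 1 = AAA ... 22 = D
})

# Keep the last announcement in each firm-agency month, then carry each
# agency's rating forward so every month-end shows the active rating.
monthly = (
    announcements
    .set_index("date")
    .groupby(["gvkey", "agency"])["rating"]
    .resample("M").last()                         # month-end announcement, if any
    .groupby(level=["gvkey", "agency"]).ffill()   # active rating persists until revised
    .rename("RATINGNUM")
    .reset_index()
)
print(monthly)
```

A reaffirmation simply repeats the prior value under this scheme, so it does not register as a rating change, matching footnote 19.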

2.5 Empirical Approach

2.5.1 Fixed Effects Panel Data Regression

In order to estimate differences in ratings between seller-paid and buyer-paid agencies I begin with fixed effects panel data regressions that test Hypotheses 1a-c.

The primary dependent variable is RATINGNUM_ijt, the month-end credit rating from rating agency i for rated firm j in month t. I convert all letter grades to a number between 1 (AAA) and 22 (D), with investment grade ratings taking values 1-10 and non-investment grade ratings taking values 11-22, following the procedures used in previous papers including Beaver, Shakespeare et al. (2006) and Becker & Milbourn (2009). While it is possible that the conversion fails to incorporate subtle differences in the grading thresholds of different agencies, the impact of this omission on my results is lessened by a focus on time series variation for identification.
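As an illustration, here is a sketch of this letter-grade conversion. The text specifies only the endpoints (AAA = 1, D = 22) and the investment grade range (1-10); the intermediate grade labels below follow the standard S&P-style scale and are my assumption rather than the exact per-agency scales used.

```python
# Assumed S&P-style scale; per-agency grade labels may differ slightly.
RATING_SCALE = [
    "AAA", "AA+", "AA", "AA-", "A+", "A", "A-",
    "BBB+", "BBB", "BBB-",                        # 1-10: investment grade
    "BB+", "BB", "BB-", "B+", "B", "B-",
    "CCC+", "CCC", "CCC-", "CC", "C", "D",        # 11-22: non-investment grade
]
RATING_TO_NUM = {grade: i + 1 for i, grade in enumerate(RATING_SCALE)}

def is_investment_grade(grade: str) -> bool:
    """Grades mapping to 1-10 (AAA through BBB-) are investment grade."""
    return RATING_TO_NUM[grade] <= 10

assert RATING_TO_NUM["AAA"] == 1 and RATING_TO_NUM["D"] == 22
assert is_investment_grade("BBB-") and not is_investment_grade("BB+")
```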

The focal independent variables are indicator variables identifying which rating agency generated a given credit rating. The basic regression equation is:

RATINGNUM_ijt = α + β_1·MOODYS_jt + β_2·EJ_jt + γ_jt + ε_ijt   (1)

where α is a constant, the β's are estimated coefficients on indicator variables for each agency (excluding S&P, which is used as the default agency) and γ_jt represents a firm-month fixed effect for every rated firm and month pair. This fixed effect controls for the time-invariant characteristics of each rated firm, time-varying changes common to all firms, such as broad government actions, as well as time-varying differences for individual firms in a particular month, such as 10-K announcements or stock market fluctuations unique to a single firm. I allow for robust standard errors and cluster on individual rated firms in order to reduce the potential of overstating significance due to the fact that the same firm is observed in multiple months (Bertrand, Duflo, & Mullainathan, 2004).
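A minimal sketch of how equation (1) could be estimated with statsmodels on a toy long-format panel; the variable and identifier names are hypothetical. The firm-month fixed effect enters as a categorical dummy and standard errors are clustered on the rated firm.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy long-format panel: one row per agency-firm-month (names hypothetical).
panel = pd.DataFrame({
    "gvkey":     [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2],
    "month":     (["2005-07"] * 3 + ["2005-08"] * 3) * 2,
    "agency":    ["SP", "MOODYS", "EJ"] * 4,
    "RATINGNUM": [8, 8, 9, 8, 8, 10, 4, 5, 4, 4, 5, 5],
})
panel["MOODYS"] = (panel["agency"] == "MOODYS").astype(int)
panel["EJ"] = (panel["agency"] == "EJ").astype(int)
panel["firm_month"] = panel["gvkey"].astype(str) + "_" + panel["month"]

# Equation (1): the firm-month fixed effect absorbs everything common to a
# rated firm in a given month, so beta_1 and beta_2 are identified only from
# within-firm-month differences across agencies.
fit = smf.ols("RATINGNUM ~ MOODYS + EJ + C(firm_month)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["gvkey"]})
print(fit.params[["MOODYS", "EJ"]])
```

With hundreds of firms over 48 months, explicit dummies become unwieldy; within-group demeaning or an absorbing-effects estimator would be the practical route, but the agency coefficients are numerically identical.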

One identification challenge to making what are essentially cross-sectional comparisons between rating agencies is that firms may not be rated by all agencies in all sample periods, reducing the ability to make accurate within-firm comparisons. I address this by using the balanced panel of firms rated by a common set of agencies throughout the sample period to ensure that the interpretation of rating differences is not affected by firms entering and exiting the sample.

The second identification challenge is that business model may be correlated with other agency characteristics and/or impact rating differences through mechanisms other than incentives. In the ideal experiment, business models would be randomly assigned to rating agencies or agencies would switch business models over time. No such switches occur during periods with available data. In order to better identify rating bias related to business model incentives separately from other factors that may affect rating differences, I separate the four-year sample into a pre- and post-period using October 2007 as the first month of the post-period.21 I then modify the basic regression equation to compare the reaction of agencies under both business models to the negative information generated by the financial crisis of 2007-2010. This approach is useful both for general insight into potential factors motivating differences between agency business models and specific insight into that event. The pre and post periods are distinguished using post-period indicator variables. The estimating equation becomes:

RATINGNUM_ijt = α + β_1·MOODYS_jt + β_2·EJ_jt + β_3·MOODYS_jt·POST_t + β_4·EJ_jt·POST_t + γ_jt + ε_ijt   (2)

where MOODYS·POST and EJ·POST are indicator interaction variables indicating that month t falls in the post-period of the sample time frame.
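Continuing the toy sketch from above, equation (2) amounts to adding a post-period indicator interacted with the agency dummies; the October 2007 cutoff follows the text, while all names remain hypothetical. Note that the POST main effect is collinear with the firm-month fixed effects and therefore drops out; only the interactions are identified.

```python
# First post-period month: October 2007 (per the text).
# String comparison is safe here because the labels are zero-padded YYYY-MM.
panel["POST"] = (panel["month"] >= "2007-10").astype(int)

# Equation (2) as a model formula; fit exactly as in the previous sketch,
# on a panel that spans both the pre- and post-periods:
eq2 = "RATINGNUM ~ MOODYS + EJ + MOODYS:POST + EJ:POST + C(firm_month)"
# smf.ols(eq2, data=panel).fit(cov_type="cluster",
#                              cov_kwds={"groups": panel["gvkey"]})
```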

To understand what subsections of the data generate the differences in agency ratings in the POST period (Hypotheses 1b and 1c), I then interact the two POST interaction variables with two other sets of variables. The first set consists of categorical variables representing rated firm characteristics, specifically the industry sector of each rated firm as defined by the Global Industry Classification Standard at the 10-category sector level. While I lack a direct measure of the relative influence of different firms or industries over seller-paid agencies, these industry sectors serve as a proxy to identify the subsections within the sample where seller-paid rating agencies may have experienced stronger conflicts of interest (Hypothesis 1b). Second, to test Hypothesis 1c I

21 The results are robust to using a variety of different cut-off dates within 2007. The procedure used to divide the sample period is described in the results section below.

18 interact the POST variable with a dichotomous variable indicating whether each firm‟s starting S&P rating just prior to the financial crisis fell above or below the investment grade cutoff. As explained previously, I expect seller-paid agencies to be most reluctant to downgrade firms that start out above the investment grade cutoff prior to the crisis but subsequently approach this key juncture.

2.5.2 Granger Causality Tests

To statistically test inferences of relative timeliness between the rating agencies (H2b and H2c) I follow Beaver et al. (2006) and use Granger causality tests (Granger, 1969). The Granger methodology relies on lagged observations within time series data and is widely used as a step toward separating causality from association, comparing two independent series of events to see how closely past events in one series are correlated with subsequent events in the other. Here it is used to determine whether rating changes by one agency for a given firm help predict subsequent rating changes by another agency for the same firm.

These tests employ logistic regression models in which the dependent variable is an indicator taking a value of 1 for a rating change (e.g. S&PUPGRADEjt or EJUPGRADEjt) by a particular rating agency of rated firm j in month t and 0 if no such change occurred.22

Two sets of independent variables are included. First, lagged values of the dependent variable (rating changes by the focal agency) in the months prior to month t act as controls that account for any prior information known to the focal agency. Second, lagged indicator variables for rating changes by a second rating agency represent the explanatory variables of interest.23 The version of the model where the dependent variable is an S&P downgrade and the independent variables are 10 months of lagged downgrades for both S&P and Egan-Jones is as follows (with rated firm subscripts suppressed for clarity):

S&PDowngrade_t = α_0 + Σ_{x=1}^{10} α_x·S&PDowngrade_{t−x} + Σ_{x=1}^{10} η_x·EJDowngrade_{t−x} + ε_t   (3)

22 The size of the upgrade or downgrade is not incorporated.
23 As a robustness check the number of lags was increased further, with similar results.

where the x subscript tracks the lagged variables for each agency.

These Granger tests rely on variation across agencies in the timing of rating changes for a given rated firm. If all agencies change their ratings in the same direction simultaneously, or if the timing of these changes is completely random, none of the coefficients on the lagged variables will be statistically significant. If buyer-paid rating changes Granger-cause seller-paid changes as per H2b, I would expect a number of the η_x coefficients in the above equation to be positive and significant.

To test H2c, which predicts that differences in timeliness between seller-paid and buyer-paid agencies will be greater when delivering bad news (downgrades) than good news (upgrades), I run these tests separately for upgrades and downgrades. Support for H2c requires the individual coefficients for downgrades to be more positive and statistically significant at longer lag intervals than the coefficients for upgrades, and for the models that focus on lagged downgrades to offer a better fit with the sample data. A sketch of the procedure follows.
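The sketch below shows how these lag-augmented logits could be run, assuming a monthly panel with one row per firm-month and 0/1 change indicators per agency; column names such as SPDowngrade and EJDowngrade are hypothetical (an "&" cannot appear in a formula term).

```python
import pandas as pd
import statsmodels.formula.api as smf

def granger_logit(df: pd.DataFrame, dep: str, other: str, lags: int = 10):
    """Do lagged `other`-agency changes help predict `dep`-agency changes?

    `df` holds one row per firm-month, sorted by month within firm, with
    0/1 indicator columns for rating changes (names hypothetical).
    """
    data = df.copy()
    terms = []
    for x in range(1, lags + 1):
        for col in (dep, other):     # own lags serve as controls
            name = f"{col}_lag{x}"
            data[name] = data.groupby("gvkey")[col].shift(x)
            terms.append(name)
    data = data.dropna(subset=terms)  # drop months without a full lag history
    return smf.logit(f"{dep} ~ {' + '.join(terms)}", data=data).fit(disp=0)

# H2b, for example: do Egan-Jones downgrades lead S&P downgrades?
# fit = granger_logit(ratings, dep="SPDowngrade", other="EJDowngrade")
# print(fit.params.filter(like="EJDowngrade"))   # the eta coefficients
```

Running the function once for downgrades and once for upgrades, and comparing the lagged cross-agency coefficients and model fit, mirrors the H2c comparison described above.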

2.6 Results

2.6.1 Descriptive Statistics

Table 2.2 provides summary statistics for the July 1, 2005 to June 30, 2009 sample. Panels A and B consist of month-end ratings by S&P, Moody's and Egan-Jones, the three NRSROs most active in issuing corporate ratings in the U.S. market. In Panel A, which is based on the full unbalanced sample of all ratings from these three agencies during the four-year period, the proportion of downgrades issued by buyer-paid agency Egan-Jones (0.025) is statistically significantly higher than the proportion issued by S&P (0.016) or Moody's (0.014), the two seller-paid agencies. A t-test of the difference in downgrade proportion between Egan-Jones and pooled S&P and Moody's ratings is statistically significant at p<0.0001, consistent with H2a. The two seller-paid agencies issue a near-identical proportion of investment grade ratings (0.570), while Egan-Jones issues a lower proportion (0.548), a difference a t-test again finds statistically significant at p<0.0001. The three agencies have different raw mean ratings, within one rating level of each other.24 The standard deviations on each agency's mean rating of approximately five rating levels (not shown) reflect the significant rating dispersion that exists across rated firms. S&P issues more than twice as many long-term corporate credit ratings as the other two agencies during the four-year period, although rating activity by the other agencies is increasing over time.
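As a back-of-envelope check, the reported downgrade proportions and sample sizes imply a test along the following lines (a two-sample proportions z-test; my reconstruction, with event counts approximated from the rounded proportions in Panel A):

```python
# Approximate re-computation of the downgrade-proportion comparison in Panel A.
# Counts are backed out from the rounded proportions and Ns, so the statistic
# is illustrative rather than exact.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

n_ej = 84_998                               # Egan-Jones month-end ratings
n_sp, n_moodys = 191_594, 95_815            # S&P and Moody's month-end ratings
counts = np.array([round(0.025 * n_ej),     # Egan-Jones downgrades
                   round(0.016 * n_sp + 0.014 * n_moodys)])  # pooled seller-paid
nobs = np.array([n_ej, n_sp + n_moodys])
z, p = proportions_ztest(counts, nobs)
print(z, p)                                 # p falls far below 0.0001 at these sample sizes
```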

Panel B of Table 2.2 provides the same descriptive statistics on the active monthly ratings of only the balanced panel of 338 firms rated continuously by S&P, Moody's and Egan-Jones during the four-year sample period. When compared to the full sample, this set of ratings has a higher ratio of investment grade ratings to non-investment grade ratings. For this sample Egan-Jones again makes more upgrades and downgrades than the two seller-paid agencies.25

Panels C and D compare the same two samples in terms of the unique firms rated within them, rather than individual month-end ratings. In both panels, buyer-paid ratings are much more likely to cross the investment grade cutoff than seller-paid ratings. There are also clear differences in the relative importance of different industry sectors to each agency's overall rating activity, with financial services making up twice as large a proportion of the rating volume for seller-paid agencies as for Egan-Jones.

Figure 2.2 displays average monthly ratings for a balanced panel of 174 firms rated by the three agencies over eight years (from July 2001 to June 2009) and offers key intuition into the link between agency revenue source and rating decisions. Contrary to what some observers may expect, seller-paid ratings are not the most lenient in each month. From early in 2004 until early in 2007, average buyer-paid ratings from Egan-Jones are higher (more lenient, closer to AAA) than the seller-paid ratings for the exact same balanced panel of firms. Soon after, Egan-Jones ratings drop dramatically until they are the harshest among the three agencies for the same set of companies. Average monthly ratings from the two largest seller-paid agencies are much more consistent and do not decline until later in the sample period.26

24 The full distribution of grades (not shown) is neither uniform nor normal, with most agencies showing a large drop in frequency between the lowest investment grade rating and the top non-investment grade rating. The distributions for Fitch and DBRS, traditionally distant third and fourth choices among seller-paid agencies, are more skewed towards investment grade ratings than those for S&P and Moody's.
25 2% of Egan-Jones month-end ratings represent upgrades vs. 1% for both S&P and Moody's. 4% of Egan-Jones month-end ratings represent downgrades vs. 2% for both seller-paid agencies. In real numbers this translates into 627 downgrades and 170 upgrades for Egan-Jones compared to 282 and 113 for S&P and 268 and 117 for Moody's.

2.6.2 Fixed Effects Panel Data Regression

Table 2.3 displays the results of a series of linear fixed effect panel regression model specifications testing Hypotheses 1a-c. In each model S&P, the largest seller-paid agency, is the default agency, so the β coefficients indicate the average relative difference between ratings from other agencies and S&P for the same rated firms in the same month. A negative coefficient represents a more lenient rating agency, since a reduction in the numerical rating moves a rated firm closer to the AAA = 1 end of the rating spectrum.
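A minimal sketch of this kind of specification on synthetic data follows; the firm-month fixed effects are implemented as dummies for each firm-month pair, so the agency coefficients capture within-pair gaps relative to S&P. All variable names and the data-generating process are my own illustration (the dissertation's models also cluster standard errors at the firm level, per the notes to Table 2.3; that step is omitted here to keep the sketch short):

```python
# Sketch of a linear panel regression with firm*month fixed effects in which
# agency dummies (S&P as the reference) capture within-pair rating gaps.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for firm in range(40):                      # small synthetic panel
    base = rng.normal(9.0, 2.0)             # firm's underlying rating level
    for month in range(12):
        post = int(month >= 6)              # stand-in for the post-crisis period
        shock = rng.normal(0.0, 0.3)        # movement common to the firm-month
        for agency in ["SP", "Moodys", "EJ"]:
            rating = (base + shock
                      + 0.30 * (agency == "Moodys")       # level gap vs. S&P
                      + 0.60 * (agency == "EJ") * post    # EJ post-period drop
                      + rng.normal(0.0, 0.1))
            rows.append((firm, month, agency, post, rating))
df = pd.DataFrame(rows, columns=["firm", "month", "agency", "post", "rating"])
df["fm"] = df["firm"].astype(str) + "_" + df["month"].astype(str)

# C(fm) absorbs everything shared by the three agencies' ratings of a firm in
# a month; the agency dummies and post interactions are identified within pair.
fit = smf.ols("rating ~ C(agency, Treatment('SP')) "
              "+ C(agency, Treatment('SP')):post + C(fm)", data=df).fit()
print(fit.params.filter(like="agency"))
```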

Model 1 applies the fixed effect panel specification described in the previous section. Averaging across all firm-month observations in the data, Egan-Jones ratings are not statistically significantly different than S&P while Moody's is the harshest rater of the three.27 As we know from the latter portion of Figure 2.2, however, this is not because Egan-Jones ratings are similar to S&P in all monthly periods, but because the monthly fluctuations between Egan-Jones and S&P happen to cancel each other out over four years despite a major decline in average Egan-Jones ratings during the financial crisis.

For reasons outlined earlier, the findings in Model 1 offer little evidence on whether agency business models affect rating outcomes through incentives. Thus I move on to a series of models that use the financial crisis of 2007-2010 as a negative information shock in the credit rating market. The analysis allows me to further examine the factors that contributed to the dramatic Egan-Jones rating decline illustrated in Figure 2.2 during the latter part of this period.

26 For the smaller set of firms that also have a rating from Fitch, the third-largest seller-paid agency throughout the sample period, Fitch average ratings follow a trajectory very similar to those of Moody's and S&P.
27 As robustness checks I use two even more restrictive balanced panels: the 205 firms also continuously rated by Fitch as well as the previous three agencies between 2005 and 2009, and the 174 firms rated by S&P, Moody's and Egan-Jones between 2001 and 2009 as shown previously in Figure 2.2. Moody's continues to be the harshest rater and the difference between Egan-Jones and S&P remains insignificant in both models.


A prerequisite for my analytical approach is determining an appropriate date that divides the four-year sample into two periods: one prior to the impact of the financial crisis on corporate credit ratings and one during which these negative impacts were incorporated into rating decisions. In retrospect, an early signal of negative information that generated limited press and rating agency reaction occurred when New Century Financial Corporation, a leading subprime mortgage lender, filed for Chapter 11 bankruptcy protection in April 2007 (Federal Reserve Bank of St. Louis, 2010). In October 2007, however, Merrill Lynch announced the biggest quarterly loss in its 93-year history after taking $8.4 billion of writedowns, an event described by an S&P spokesman as "startling" (Keoun, 2007). Other negative information continued to arise, culminating in September 2008 when Lehman Brothers declared bankruptcy and American International Group (AIG) accepted an $80B bailout (Federal Reserve Bank of St. Louis, 2010). These events are shown in Figure 2.3 along with monthly mean ratings for S&P, Moody's and Egan-Jones using the four-year balanced panel sample.

In an exploratory regression not shown here, I use information within the rating data itself to identify the point in time when the average ratings of any of the agencies first reflected a significant downturn relative to the start of the sample period.28 The results of this regression show that Egan-Jones average ratings become statistically significantly different (p<0.05) from Egan-Jones' initial July 2005 ratings starting in October 2007. Seller-paid ratings decline much later and do not fully close the average rating gap with Egan-Jones before the end of the sample period.
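A sketch of the logic of this exploratory regression, assuming a long-format DataFrame with hypothetical columns `firm`, `month` (strings such as '2005-07'), `agency` and `rating`; the function and column names are mine, not the dissertation's code:

```python
# Sketch of the break-point search described in footnote 28: month dummies
# relative to the July 2005 baseline, scanned for the first month whose mean
# rating is significantly higher (harsher) for a given agency.
import statsmodels.formula.api as smf

def first_significant_decline(df, agency, alpha=0.05):
    sub = df[df["agency"] == agency]
    fit = smf.ols("rating ~ C(month, Treatment('2005-07'))", data=sub).fit(
        cov_type="cluster", cov_kwds={"groups": sub["firm"]})
    # month labels sort chronologically, so this scans forward in time
    for name, beta, p in zip(fit.params.index, fit.params, fit.pvalues):
        if name.startswith("C(month") and beta > 0 and p < alpha:
            return name      # first month significantly harsher than 2005m7
    return None
```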

Based on both the timeline and quantitative analysis, I establish October 2007 as the cutoff date and introduce a post-October 2007 indicator variable in Model 2 of Table 2.3.29 Consistent with Hypothesis 1a, the Egan-Jones Post variable is strongly significant both statistically (0.60, p<0.001) and economically, equivalent to moving the average rating a half grade lower relative to the default agency, S&P. In contrast, the Moody's Post variable is not statistically significant, indicating again that the difference between S&P and Moody's ratings in the pre-period did not change in the post-period.

28 I create a model with a dummy variable for each agency-month pair during the four-year sample period to determine when each agency's average ratings decline significantly from their July 2005 starting point.
29 The results that follow are robust to the use of other cut-offs throughout 2007.


In Model 3, I interact the Egan-Jones Post variable with categorical variables indicating the industry sector of the rated firm in order to test Hypothesis 1b. Somewhat surprisingly, the drop in Egan-Jones ratings in the post-period is statistically significantly different than S&P and Moody's for only three of the ten industry sectors: Financial Services, Consumer Discretionary and Materials.30 The economic significance of the coefficients for these three sectors is also high, representing more than a full notch on the rating scale. Most of the other industries, while lacking statistical significance, nevertheless have positive point estimates close to the average effect, making it unlikely that there is no treatment effect at all in these sectors.

It is suggestive, though not conclusive, that each of the three sectors with statistically significant differences has characteristics consistent with a conflict of interest explanation for its rating differences. Financial Services, the sector in which buyer-paid ratings differ most from seller-paid ratings, contains firms particularly familiar with and closely tied to the seller-paid rating agencies, consistent with Hypothesis 1b. The other two sectors, Consumer Discretionary and Materials,31 had the lowest average sector ratings in October 2007 for both Egan-Jones and S&P. As a result, their rating distributions were closest to the critical investment grade cutoff, consistent with Hypothesis 1c. To illustrate this, Figure 2.4 plots the monthly average ratings by industry sector for all three rating agencies. As shown in the top row, average Egan-Jones ratings in Consumer Discretionary and Materials dropped well below the investment grade cutoff during the post period whereas the average seller-paid ratings for the same two sectors did not.32

One explanation for the difference between the two business models is that seller-paid agencies delayed downgrading firms during the financial crisis not because of self-interested bias or a differing opinion of credit quality but because they were concerned about the impact their downgrades would have on the overall financial system. While this story is plausible for some key financial services firms, it seems less so for firms in other industry sectors, such as Liz Claiborne and JC Penney, that are not central to financial market stability.

30 If the model is expanded to include equivalent dummy variables for Moody's (not shown), the results show that Moody's ratings are not statistically significantly different than S&P in any of the ten industry sectors.
31 The Financials sector includes companies such as Goldman Sachs, GMAC, Marsh & McLennan, and Morgan Stanley. The Consumer Discretionary sector includes companies such as New York Times, Liz Claiborne, and JC Penney. The Materials sector includes Alcoa, Dow Chemical, International Paper, and Weyerhaeuser.
32 Egan-Jones downgraded 10 of the 31 firms in the Materials sector below the investment grade cut-off to "junk" status after October 2007 whereas S&P and Moody's downgraded only 2 firms and 1 firm across the threshold, respectively. The number of Consumer Discretionary firms downgraded to "junk" status was also higher for Egan-Jones, although not by as much: 13 for Egan-Jones compared to 8 for S&P and 9 for Moody's.

In Model 4 I interact the Egan-Jones Post variable with the dichotomous variable indicating whether a firm was rated above or below the investment grade cutoff by S&P in October 2007. The post-period difference between S&P and Egan-Jones ratings is statistically significant for firms above the investment grade cutoff but not for those below the cutoff. This is consistent with H1c and the argument that seller-paid agencies are more reluctant than buyer-paid agencies to downgrade firms while they remain above the investment grade cutoff (because the impact of crossing the cutoff is large) but show no difference in willingness to downgrade firms once they have already crossed this threshold. The observed rating differences between the business models could be even stronger in a broader sample that, unlike the balanced panel, included more firms with ratings just above the investment grade cutoff prior to the downturn.

2.6.3 Granger Causality Tests

The higher frequency of both upgrades and downgrades for Egan-Jones compared to the seller-paid agencies within the four-year balanced panel is consistent with H2a but offers no insight into whether individual buyer-paid rating changes are correlated with an increased likelihood of subsequent corresponding changes by seller-paid agencies for the same rated firms (H2b). It could be that the eventual decline in average seller-paid ratings in Figure 2.2 results from rating changes to a completely different subset of firms. I thus use Granger causality tests (Granger, 1969) to examine whether individual buyer-paid rating changes lead corresponding seller-paid rating changes for the same rated firms.

Table 2.4 displays the results of a series of four logistic regressions examining Granger causality between S&P and Egan-Jones rating changes. In the first, I test whether Egan-Jones downgrades precede subsequent S&P downgrades. The dependent variable is an S&P downgrade in month t and the independent variables are lagged downgrades by both S&P and Egan-Jones in the previous 10 months. The second regression is identical except that I reverse the roles of the two agencies and test whether S&P downgrades precede subsequent Egan-Jones downgrades. In the third and fourth regressions I repeat the same exercise for upgrades instead of downgrades.


Support for H2b requires positive and statistically significant coefficients on lagged Egan-Jones rating changes when the dependent variable is a rating change by S&P (the boxed "explanatory" coefficients in the first and third columns), together with insignificant results for lagged S&P changes when the dependent variable is an Egan-Jones rating change (the boxed "explanatory" coefficients in the second and fourth columns).
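For reference, the chi-square likelihood ratio statistic reported at the bottom of Tables 2.4 and 2.5 takes the standard form (in my notation)

LR = 2(ℓ_full − ℓ_restricted) ~ χ²(10),

where ℓ_full is the log-likelihood of the model including the second agency's ten lagged changes and ℓ_restricted that of the control-only model; the degrees of freedom equal the ten excluded lag terms (nine when the t-1 lag is omitted).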

For downgrades, the results of the first two regressions strongly support Hypothesis 2b. The first regression shows Egan-Jones downgrades to have a strong positive correlation with subsequent downgrades by S&P after 1-5 months (p<0.001), and weaker positive correlations after 8 and 10 months (p<0.01 and p<0.05, respectively). I use a chi-square likelihood ratio test to compare this model to a "restricted" model containing only the control variables, S&P's own lagged downgrades. The test statistic shown at the bottom of Table 2.4 (141.53, p<0.0001) suggests that the inclusion of the lagged Egan-Jones downgrades provides a statistically significant improvement in predicting future S&P downgrades. The interpretation is that a firm that is downgraded by Egan-Jones has a statistically significantly higher probability of being downgraded by S&P during the subsequent 10 months relative to a firm that has not been downgraded by Egan-Jones. In contrast, the second regression in Table 2.4 shows a positive statistically significant correlation between Egan-Jones downgrades and prior S&P downgrades only in the immediate prior month (t-1). Because the monthly panel dataset is based only on ratings on the last day of each calendar month, some correlated ratings that fall in different calendar months may be within days or weeks of each other, so the true predictive value of S&P downgrades is likely some period considerably less than a full month. The chi-square test statistic indicates that the inclusion of lagged S&P downgrades does not improve the model as much as in the first regression (p=0.0061). If month t-1 is omitted from the chi-square test, S&P downgrades do not improve the model at all (p=0.1731). Thus, overall, Egan-Jones downgrades lead changes by S&P, but S&P changes do not lead changes by Egan-Jones.

When upgrades are examined in the same way, Egan-Jones upgrades still appear to lead those of S&P, although the correlation between agency decisions is weaker. The third regression in Table 2.4 suggests that Egan-Jones upgrades are weakly correlated with subsequent S&P upgrades 1 and 8 months later, and the chi-square likelihood ratio test is statistically significant whether or not the t-1 period is included (p=0.0013 and p=0.0046, respectively), suggesting that the addition of the Egan-Jones lagged upgrades improves model fit. In the fourth regression, on the other hand, the only lagged S&P upgrades that are significantly correlated with later Egan-Jones upgrades are those in the immediately preceding month (p<0.001) or seven months prior (p<0.05), and the chi-square likelihood ratio test statistic finds much less improvement from adding the lagged S&P upgrades (p=0.0449) and none if the t-1 period is omitted (p=0.4345). Table 2.5 demonstrates similar results when the two agencies analyzed are Egan-Jones and Moody's instead of Egan-Jones and S&P.

Overall these results provide evidence that Egan-Jones leads both Moody's and S&P by up to 10 months for downgrades and, to a lesser extent, for upgrades. The difference between downgrades and upgrades supports Hypothesis 2c, although upgrades were less frequent during the sample period, reducing the statistical power of the upgrade analysis. These results are particularly noteworthy given that buyer-paid Egan-Jones was likely at a significant informational and resource disadvantage relative to the two larger seller-paid agencies.

2.7 Discussion and Conclusion

The results of this paper provide evidence of the impact that rating agency business model has on rating outcomes. The empirical support for Hypothesis 1a indicates that during a period when all rating agencies were incorporating predominantly negative new information into their rating decisions, buyer-paid ratings reflected this information sooner and with greater magnitude than seller-paid ratings. These differences were concentrated among large and influential financial services firms as well as firms in two other industry sectors where buyer-paid ratings frequently crossed into "junk" rating status during the crisis but seller-paid ratings did not.

A key theoretical contribution of this study is to highlight the frequency and timing of rating changes as additional outcomes that may be affected by rating agency incentives. The higher frequency of rating changes among buyer-paid ratings (H2a) was evident from the summary statistics of both the full four-year sample and the balanced panel subsample. Similar frequency differences between the two business models are also found in the years prior to the sample, although the overall ratio of upgrades to downgrades varies with economic conditions.

The hypotheses regarding relative timing were tested using Granger causality tests. Support was found for both H2b, which proposed that buyer-paid rating changes would lead seller-paid changes, and H2c, which proposed that this effect would be stronger for downgrades than upgrades. These results may reflect a number of underlying mechanisms. While it could be that analysts from S&P and Moody's closely follow Egan-Jones rating changes and make their own corresponding rating changes in response, discussions with industry participants suggest that analysts at the largest rating agencies do not pay significant attention to Egan-Jones. A more likely scenario is that analysts from both buyer-paid and seller-paid agencies respond to a common set of outside information but choose different reaction speeds best suited to their business model and paying customers. The exchange of private information between sellers and seller-paid agencies that typically takes place prior to a rating change could also slow seller-paid rating updates.

Finding evidence of rating differences consistent with the preferences of each business model's paying customers is not sufficient to determine which of the two business models generates "better" information. Such a determination requires taking a position as to the optimal balance between rating stability and accuracy and then comparing the performance of each agency to that benchmark. For example, when a firm's rated qualities decline, some stakeholders may prefer an agency that issues multiple downgrade announcements in small increments while others may prefer an agency that issues fewer, larger downgrades, even if the two agencies agree on the firm's rating before and after the decline. If the rating preferences of key stakeholders in a particular rating market are sufficiently varied (or the rating decisions of the various business models sufficiently similar) there may be no consensus as to the superior model.

For entrepreneurs launching new rating services or managers of existing rating agencies, these findings highlight the range of dimensions on which raters can differentiate themselves as they seek to match their rating outputs to the preferences of their paying customers and other stakeholders. For example, firms expected to experience high future rating volatility may provide buyer-paid agencies more opportunity to differ from seller-paid agencies. Various decisions by the focal buyer-paid agency, Egan-Jones, such as explicitly adopting S&P's rating scheme rather than employing a more granular or ambiguous rating scheme (Fleischer, 2009; Jin, Kato, & List, 2010), may have also increased the agency's incentive to differentiate on timeliness and other margins.


It is important to recognize limitations of these results, which provide opportunities for future research. First, the analysis seeks to identify the effect of a "treatment", rating agency revenue source (seller-paid or buyer-paid), that is not randomly assigned and is observed over only a small number of agencies. Thus the interpretation of results relies on the assumption that rating differences between the two business models are not caused by other unobserved characteristics unique to the individual agencies involved. Second, for lack of a true measure of credit risk I compare only relative ratings and the relative timing of rating changes between agencies rather than comparing each agency's ratings to some underlying true rating.

An obvious next step is attempting to separate the specific mechanisms generating the observed rating differences between business models, which may include individual employee career concerns or personal social networks (Chevalier & Ellison, 1999; Uzzi, 1996), organizational-level customer retention incentives, or factors such as organizational status and legitimacy, as has been previously demonstrated in equity analyst ratings and other settings (Phillips & Zuckerman, 2001; Zuckerman, 1999).

Another area for further analysis is determining the appropriate role of government in markets in which sellers have valuable private information and information asymmetries exist. The majority of previous theoretical and empirical work in such settings has focused on whether governments should require mandatory disclosure of quality information (Board, 2009; Fishman & Hagerty, 2003; Jin & Leslie, 2003; Mathios, 2000). Other regulatory actions, such as influencing the number of rating agencies, their relative importance and the level of competition among them, are not as well studied.

The goal of this paper has been to provide new insight into the link between rating agency business model and rating outcomes, and in particular the potential for rating bias resulting from conflicts of interest. The paper also suggests a broader view of the dimensions along which such bias may occur and the approaches through which it may be identified. Where rating differences are found, they are generally consistent with seller-paid and buyer-paid rating agencies tailoring their rating decisions to their paying customers. These findings complement other recent studies that characterize rating agencies as strategic actors rather than neutral evaluators (Fleischer, 2009; Waguespack & Sorenson, 2010). The findings also offer firms, researchers and policymakers useful insight into the future direction of nascent rating areas, such as environmental and corporate social responsibility ratings, that currently lack the heterogeneity in agency business models and detailed tracking of rating changes already present in U.S. credit ratings.


Table 2.1: SEC Nationally Recognized Statistical Rating Organizations (NRSRO)

NRSRO                                      NRSRO approval  Agency classes  Year     Primary
                                           date            rated(a)        Founded  Revenue Model
S&P                                        1975            I-V             1916     Issuer pay
Moody's                                    1975            I-V             1909     Issuer pay
Fitch, Inc.                                1975            I-V             1924     Issuer pay
Dominion Bond Rating Service (DBRS)        2003            I-V             1976     Issuer pay
AM Best                                    2005            I-IV            1899     Subscriber pay
Egan-Jones                                 2007            I-V             1995     Subscriber pay
Japan Credit Rating Agency, Ltd.           2007            I-V             1985     Issuer pay
Rating and Investment Information, Inc.    2007            I-V             1975     Issuer pay
Kroll Bond Rating (formerly LACE           2008            I-V             1984     Subscriber pay
  Financial Corp.)
Morningstar Realpoint Division             2008            IV only         2001     Hybrid(b)
  (formerly Realpoint LLC)

Notes:
a. The five classes are I. Financial Institutions, II. Insurance Companies, III. Corporate Issuers, IV. Asset-Backed Securities, and V. Government Securities.
b. Realpoint charges issuers for an initial rating but subscribers must pay for ongoing surveillance.
Source: Securities and Exchange Commission (SEC), www.sec.gov


Table 2.2 Summary Statistics

                          Panel A: Full Sample,                  Panel B: Balanced Panel,
                          Monthly Rating-Level Data(a)           Monthly Rating-Level Data
                          S&P      Moody's  Egan-Jones Pooled    S&P     Moody's  Egan-Jones Pooled
Upgrade (0/1)             0.007    0.007    0.012      0.008     0.009   0.009    0.017      0.011
Downgrade (0/1)           0.016    0.014    0.025      0.018     0.019   0.018    0.043      0.027
No change (0/1)           0.971    0.974    0.961      0.970     0.972   0.972    0.941      0.962
Other (e.g. first/only)   0.005    0.005    0.003      0.004     0.000   0.000    0.000      0.000
Numerical Rating (1-22)   10.098   10.642   11.140     10.476    8.541   8.844    8.437      8.607
Investment grade (0/1)    0.570    0.570    0.548      0.565     0.843   0.807    0.819      0.823
N =                       191,594  95,815   84,998     372,407   16,224  16,224   16,224     48,672

                          Panel C: Full Sample,                  Panel D: Balanced Panel,
                          Firm-Level Data(b)                     Firm-Level Data
                          S&P      Moody's  Egan-Jones Pooled    S&P     Moody's  Egan-Jones Pooled
Always investment grade   0.496    0.522    0.454      0.450     0.760   0.716    0.666      0.589
Cross igrade cut-off      0.061    0.088    0.163      0.103     0.148   0.172    0.251      0.361
Never investment grade    0.442    0.390    0.384      0.447     0.092   0.112    0.083      0.050
Energy                    0.053    0.076    0.081      0.055     0.068
Materials                 0.046    0.053    0.080      0.050     0.092
Industrials               0.073    0.083    0.115      0.075     0.121
Consumer Discretionary    0.120    0.136    0.187      0.131     0.157
Consumer Staples          0.030    0.038    0.064      0.035     0.080
Health Care               0.048    0.033    0.059      0.046     0.056
Financials                0.371    0.286    0.146      0.327     0.198
Info Tech                 0.035    0.029    0.076      0.038     0.050
Telecom                   0.032    0.041    0.053      0.040     0.015
Utilities                 0.067    0.099    0.065      0.058     0.163
Unknown/Unclassified(c)   0.127    0.126    0.072      0.147
N =                       4,830    2,364    1,852      6,493     338

Notes:
a. Observations represent active month-end long-term credit ratings for firms between July 1, 2005 and June 30, 2009. Ratings include issuer ratings, corporate ratings, and ratings of unsecured debentures issued for the US credit market. Foreign and short-term ratings (securities with periods less than one year) are excluded, along with government ratings and structured finance instruments.
b. Observations represent unique firms with active ratings between July 1, 2005 and June 30, 2009. In Panel D the sector proportions form a single column because the balanced panel contains the same 338 firms for all three agencies.
c. Not all firms have been matched to a COMPUSTAT GICS industry sector.
Data Sources: Bloomberg, Egan-Jones


Table 2.3 Decomposition of Rating Differences During the Economic Downturn

Dependent Variable = Numerical Rating (AAA = 1, Default = 22)

                                     Model 1     Model 2     Model 3     Model 4
Seller-Paid (Moody's)                0.30***     0.30***     0.30***     0.30***
                                     (0.06)      (0.06)      (0.06)      (0.06)
Buyer-Paid (Egan-Jones)              -0.10       -0.37***    -0.37***    -0.37***
                                     (0.08)      (0.08)      (0.08)      (0.08)
Moody's Post                                     0.00        0.00        0.00
                                                 (0.04)      (0.04)      (0.04)
Egan-Jones Post                                  0.60***     -0.33
                                                 (0.08)      (0.44)
Materials x EJ Post                                          1.40**
                                                             (0.54)
Industrials x EJ Post                                        0.53
                                                             (0.50)
Consumer Discretionary x EJ Post                             1.62**
                                                             (0.51)
Consumer Staples x EJ Post                                   0.92
                                                             (0.53)
Health Care x EJ Post                                        0.72
                                                             (0.56)
Financials x EJ Post                                         1.77***
                                                             (0.53)
Info Tech x EJ Post                                          -0.35
                                                             (0.65)
Telecom x EJ Post                                            -1.55
                                                             (1.05)
Utilities x EJ Post                                          0.38
                                                             (0.48)
S&P Pre-Period Rating Above Igrade                                       0.69***
  Cut-off (1-10) x EJ Post                                               (0.09)
S&P Rating Below Igrade Cut-off                                          0.14
  (11-22) x EJ Post                                                      (0.27)
Constant                             8.54***     8.54***     8.54***     8.54***
                                     (0.04)      (0.04)      (0.04)      (0.04)

Fixed Effects                        Firm*Month  Firm*Month  Firm*Month  Firm*Month
R-squared                            0.03        0.05        0.10        0.05
F                                    14.9        30.7        12.3        25.5

Notes: Each column presents the coefficients of a fixed effect panel data regression on 48,672 observations from a balanced panel of 338 firms rated by all three agencies continuously July 1, 2005-June 30, 2009. The EJ Post variable takes a value of 1 for all Egan-Jones ratings in months after October 2007. Quartiles are based on the position of each firm's October 2007 rating within the rating scale; the first and second quartiles fall above the investment grade cut-off while the third and fourth quartiles fall below it. Models include Firm*Month fixed effects and robust standard errors clustered at the firm level. Excluded agency: S&P; excluded industry: Energy. * p<0.05, ** p<0.01, *** p<0.001.


Table 2.4 Granger Causality Tests for S&P and Egan-Jones

Dependent Variable(a)            S&P Downgrade  EJ Downgrade  S&P Upgrade  EJ Upgrade
                                 in period t    in period t   in period t  in period t

Egan-Jones Lagged
Rating Changes(b)                Explanatory    Controls      Explanatory  Controls
  t-1                            0.98***        0.25          1.17*        -1.22
  t-2                            0.77***        0.66***       0.93         0.77
  t-3                            0.92***        0.81***       0.94         1.32***
  t-4                            0.71***        0.69***       -0.54        -0.40
  t-5                            0.90***        0.01          0.78         -0.07
  t-6                            0.34           0.63***       0.42         0.93*
  t-7                            0.24           0.27          0.73         0.68
  t-8                            0.62**         0.50**        1.28**       0.62
  t-9                            0.35           0.44*         0.90         0.22
  t-10                           0.48*          0.34          0.86         -0.06

S&P Lagged
Rating Changes(b)                Controls       Explanatory   Controls     Explanatory
  t-1                            0.07           0.72***                    1.50***
  t-2                            -0.49          0.37                       0.92
  t-3                            0.28           0.16                       0.49
  t-4                            0.77**         0.36                       0.16
  t-5                            0.31           -0.44                      0.59
  t-6                            0.51           -0.19         0.34         0.62
  t-7                            0.02           -0.14         0.29         1.09*
  t-8                            0.63*          0.11          -0.50        0.14
  t-9                            -0.01          -0.01         -0.60        0.12
  t-10                           -0.33          -0.87*        -0.60        0.88

Constant                         -4.40***       -4.84***      -3.32***     -4.51***
Pseudo-R2                        0.09           0.04          0.02         0.03
Log-likelihood                   -1201.6        -2347.3       -640.7       -883.5
Chi-square LR test vs.           141.53         24.63         28.91        18.65
  control-only model             (p<0.0001)     (p=0.0061)    (p=0.0013)   (p=0.0449)
Chi-square test if t-1 omitted   (p<0.0001)     (p=0.1731)    (p=0.0046)   (p=0.4345)

Notes: Each column presents the coefficients of a logit panel data regression.
a. Dependent variable is a dichotomous variable coded as 1 if an upgrade (or downgrade) by the focal rating agency occurs in month t and 0 otherwise.
b. Independent variables are coded in the same way as the dependent variable, indicating rating changes of the same type in the ten months prior to month t.
Sample is a balanced panel of 12,844 monthly observations for 338 firms rated by S&P, Moody's and Egan-Jones continuously from July 1, 2005-June 30, 2009. * p<0.05, ** p<0.01, *** p<0.001.


Table 2.5 Granger Causality Tests for Moody's and Egan-Jones

Dependent Variable(a)            Moody's Downgrade  EJ Downgrade  Moody's Upgrade  EJ Upgrade
                                 in period t        in period t   in period t      in period t

Egan-Jones Lagged
Rating Changes(b)                Explanatory        Controls      Explanatory      Controls
  t-1                            1.09***            0.29          0.48             -1.08
  t-2                            0.95***            0.71***       1.16**           0.85*
  t-3                            1.03***            0.82***       0.05             1.34***
  t-4                            1.08***            0.71***       0.86             -0.32
  t-5                            0.89***            -0.02         1.10*            -0.07
  t-6                            0.55*              0.64***       0.42             1.01**
  t-7                            -0.19              0.27          0.36             0.73
  t-8                            0.59**             0.48**        0.83             0.62
  t-9                            0.07               0.44*         1.72***          0.27
  t-10                           0.89***            0.33          -0.11            -0.07

Moody's Lagged
Rating Changes(b)                Controls           Explanatory   Controls         Explanatory
  t-1                            0.03               0.66***       0.32             1.04*
  t-2                            -0.11              0.13          0.34             0.53
  t-3                            0.53               -0.03                          0.09
  t-4                            0.67*              0.44                           0.52
  t-5                            -0.43              -0.22                          0.16
  t-6                            -0.47              -0.84*        -0.41            0.50
  t-7                            0.61               -0.20         0.40             0.83
  t-8                            0.63               0.42          -0.41            -0.52
  t-9                            -0.06              -0.05         -0.38            0.91
  t-10                           -0.05              0.08          0.29             1.26**

Constant                         -4.56***           -4.86***      -3.33***         -4.50***
Pseudo-R2                        0.11               0.04          0.03             0.03
Log-likelihood                   -1117.2            -2347.5       -647.6           -884.8
Chi-square LR test vs.           195.85             24.21         37.72            15.94
  control-only model             (p<0.0001)         (p=0.0071)    (p<0.0001)       (p=0.1012)
Chi-square test if t-1 omitted   (p<0.0001)         (p=0.0046)    (p<0.0001)       (p=0.1073)

Notes: Each column presents the coefficients of a logit panel data regression.
a. Dependent variable is a dichotomous variable coded as 1 if an upgrade (or downgrade) by the focal rating agency occurs in month t and 0 otherwise.
b. Independent variables are coded in the same way as the dependent variable, indicating rating changes of the same type in the ten months prior to month t.
Sample is a balanced panel of 12,844 monthly observations for 338 firms rated by S&P, Moody's and Egan-Jones continuously from July 1, 2005-June 30, 2009. * p<0.05, ** p<0.01, *** p<0.001.


Figure 2.1 Data Structure

[Line chart: Ford Motor credit ratings from Egan-Jones, S&P and Moody's, July 2001 to June 2009. Vertical axis: numerical rating (1 = AAA, 22 = Default); horizontal axis: monthly period.]


Figure 2.2 Average Active Ratings for Balanced Panel (2001 – 2009)

[Line chart: average monthly ratings from Egan-Jones, Moody's and S&P for a balanced panel of 174 firms, July 2001 to June 2009; vertical axis spans average ratings of roughly 8 to 10. Blue vertical line: SEC NRSRO regulatory change, June 2007. Red vertical line: Egan-Jones NRSRO certification, January 2008.]

Figure 2.3 Average Active Ratings for Balanced Panel (2005 – 2009)

[Line chart: mean monthly ratings (1 = AAA, 22 = Default) from Egan-Jones, Moody's and S&P for the balanced panel of 338 firms rated by all three agencies, July 2005 to June 2009, with event markers: April 2007, New Century files for Chapter 11; October 2007, Merrill Lynch announces "startling" $8.4 billion loss; September 2008, Lehman Brothers files for Chapter 11 and AIG receives an $80B bailout.]


Figure 2.4 Rating Changes During the Economic Downturn

[Nine-panel line chart: average monthly ratings by industry sector for buyer-paid Egan-Jones and seller-paid S&P and Moody's, balanced panel of 338 firms, July 2005 to June 2009. Panels: Financials (N=67), Materials (N=31), Consumer Discretionary (N=53), Energy (N=23), Industrials (N=41), Consumer Staples (N=27), Health Care (N=19), Info Tech (N=17), Utilities (N=55); Telecom omitted since N=5. Red horizontal line marks the investment grade cut-off. The change in Egan-Jones ratings after October 2007 is statistically significantly different only for the three sectors in the top row.]

Chapter 3
Do Sellers Disclose What Buyers Want to Know? Evidence from U.S. Credit Ratings

3.1 Introduction

Information about seller offerings is a critical element in market exchange. Buyers seek knowledge of the features and quality of available products to inform their purchase decisions. In turn, this may provide sellers with an incentive to strategically influence and shape the product information available. In recent years, information disclosure programs involving ratings, rankings and certification have become increasingly common across a wide variety of industries and settings, supplementing other informational mechanisms such as advertising, packaging, pricing and warranties.

Under certain theoretical conditions,33 buyer assumptions about firms that opt for less than full disclosure of product information should incent above-average firms to disclose more information, in turn lowering buyer expectations of the remaining "non-disclosers" and incenting additional firms to follow suit. Repeated rounds of this information unraveling process should eventually cause all but the lowest-quality firms to fully reveal their true quality to avoid being judged equivalent to lesser-quality competitors (Grossman, 1981; Milgrom, 1981). However, in practice such unraveling appears to be rare, and selective disclosure that benefits sellers, via ratings and other mechanisms, often remains a viable strategy.
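A toy illustration of this unraveling logic, under the stylized assumptions of footnote 33 (firms know their own quality, disclosure is costless, and buyers value non-disclosers at the average remaining quality); the function and values are purely illustrative:

```python
# Toy model of Grossman/Milgrom-style unraveling: each round, buyers value the
# remaining non-disclosers at their average quality, so the best hidden firm
# gains by disclosing, until only the lowest-quality firm stays silent.
def unravel(qualities):
    hidden = sorted(qualities)              # firms not yet disclosing, worst first
    disclosed = []
    while len(hidden) > 1:
        pooled_value = sum(hidden) / len(hidden)
        if hidden[-1] <= pooled_value:      # best hidden firm gains nothing
            break
        disclosed.append(hidden.pop())      # best hidden firm reveals its quality
    return disclosed, hidden

print(unravel([1, 2, 3, 4, 5]))             # -> ([5, 4, 3, 2], [1])
```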

Opportunities for selective disclosure exist because not all rating programs are mandatory and many are paid for by sellers. When sellers are paying to be rated and their decision to do so is voluntary, an obvious concern is whether the amount and content of rating disclosure reflects the demand for information by buyers or is skewed towards the self-interest of sellers. In this paper, I investigate whether sellers voluntarily disclose the information that buyers want to know.

33 As summarized in Dranove and Jin (2010) these include: sellers with complete and accurate knowledge of their own quality, costless disclosure, no strategic interaction between competing sellers, consumers willing to pay a positive amount for any enhancement of quality, a publicly known distribution of quality, products vertically differentiated along a single, well-defined dimension, and homogeneous consumers.


Insight into this question is of direct relevance to policymakers concerned about the efficiency of markets and interventions that make use of voluntary or mandatory disclosure. It is also of interest to both buyers and sellers seeking competitive advantage in these settings.

A number of recent empirical studies on related topics provide evidence of strategic selective disclosure. Brown, Camerer et al. (2011) examine the decision of movie studios to hold back some films from critical review prior to box office release in order to delay expected negative reviews. They find that these "cold openings" are associated with a 10-30% increase in opening box office revenue after controlling for other factors, suggesting that, at least temporarily, buyers can be fooled by strategic non-disclosure rather than interpreting it as a negative signal. The authors attribute the result to cognitive constraints on the part of buyers. While movie patrons may not be the most sophisticated buyers in terms of pre-purchase assessment, this finding raises the possibility that a similar phenomenon of selective seller disclosure could be at play in other settings. Prado (2011) provides descriptive evidence of the factors South American flower producers consider when choosing whether to participate in one or more environmental certification programs. Some of the factors, such as choosing certifiers whose market coverage and geographical focus overlap with the producer's end customers, appear likely to be consistent with buyer preferences. However, sellers also demonstrate a strong preference for more lenient certifiers, which appears less likely to be in the best interest of buyers.

Finally, research on U.S. credit ratings examining seller willingness to pay for a third rating in addition to the standard "two-rating norm" suggests that demand for such seller-paid ratings is highest for a specific subset of sellers who stand to reap a potential regulatory benefit from a third rating (Bongaerts, Cremers, & Goetzmann, 2011; Cantor & Packer, 1997).34 That only a narrow set of sellers pays for a third rating is interpreted as a potential concern for buyers and policymakers. Without some estimate of the additional information that buyers value, however, an unanswered question in these studies is how closely the level of disclosure matches buyers' demand for ratings.

34 A variety of government regulations rely on credit ratings to classify a firm's debt as investment grade or speculative/high-risk, a distinction that affects investor demand. Where existing ratings do not agree on this classification, an additional rating can serve as a potential tiebreaker under many tiebreaking formulas.


A similar interest in estimating the underlying buyer demand for additional information exists in other settings where government rules encourage or require information disclosure to buyers.35 Even observing that government-mandated disclosure has altered consumer behavior (Bollinger et al., 2011; Dafny & Dranove, 2008; Pope, 2009) does not tell us whether the intervention was optimal or perhaps forced sellers to disclose too little or too much information.

In this study I follow Bongaerts, et al. (2011) and examine the provision of additional ratings in the U.S. corporate credit rating market. The key difference in my approach is that I compare seller information disclosure from firms who pay for a third rating to a unique alternative set of ratings from a buyer-paid rating agency. These buyer-paid ratings provide a clear indication of the information valued by buyers. My dataset allows me to estimate the difference between the two sets of ratings on multiple dimensions, including quantity (the number of additional firms rated), timing (what factors trigger an additional rating), and relative opinions (which ratings are more or less lenient). Small or nonexistent differences would suggest that seller disclosure is consistent with buyer information demand. Significant differences, however, would suggest that seller disclosure reflects self-interest or other factors inconsistent with buyer preferences, potentially strengthening the argument for additional policy intervention.

My results provide new insight into the gap between seller-provided information and the preferences of buyers. I find two specific differences. First, third ratings by the seller-paid agency are skewed towards higher-rated firms (based on their preexisting ratings) whereas third ratings by the buyer-paid agency are distributed more uniformly across the levels of the rating scale, including the lowest-rated sellers. Second, sellers whose existing ratings are split across a critical rating threshold are significantly more likely to pay for a third rating, consistent with previous literature, but the likelihood of a buyer-paid rating is not affected by this tiebreaker situation.

35 Examples include restaurant hygiene (Jin & Leslie, 2003, 2009), nutritional content (Mathios, 2000) and airline on-time performance (Forbes, Lederman, et al., 2011).


These specific findings provide new insight for firms and policymakers into the factors that motivate seller disclosure of information, as well as the impact of adding seller-paid or buyer-paid rating agencies to an existing, concentrated rating market.

The rest of the paper proceeds as follows. Section 3.2 describes the empirical setting. Section 3.3 provides a review of prior literature and lays out the hypotheses to be tested. Section 3.4 describes the data and sample. Section 3.5 introduces my empirical approach. Section 3.6 summarizes my results while Section 3.7 contains a discussion and conclusion.

3.2 Empirical Setting: U.S. Corporate Credit Ratings

Financial markets offer a setting where information intermediaries play a key role. Buyers have no physical product to inspect, direct disclosure by sellers of unverified information about future risk is of limited value given the complexity of financial forecasting, and even audited accounting statements are of minimal importance due to their backward-looking nature and delayed release. Credit rating agencies have emerged and evolved to play an important role in worldwide financial markets, providing outside experts who classify firms into a set of common, comparable categories based on their expected level of default risk.

Corporate credit ratings have only limited value as an assessment tool for internal management. While the issuers of structured finance securities can readjust the underlying contents of a security to optimize its pre-issuance rating, corporations have much more limited ability to change their underlying risk structure based on the ratings they receive. Thus the primary value of credit ratings is their effect on demand for a firm's current and future bonds.

Corporate credit ratings have been shown to impact prices in the bond market and firm access to capital, both in terms of the cost of borrowing and the amount of debt issued (Dichev & Piotroski, 2001; Kliger & Sarig, 2000; Tang, 2009). More so than in equity markets, the majority of buyers for these securities, and thus the primary users of ratings, are institutional investors from banks, insurance companies, pension funds and hedge funds.

U.S. corporate credit ratings represent one of the most established types of ratings, with a history dating back to the early 20th century. Moody's and S&P were the first of today's credit rating agencies to be established, in 1909 and 1916, respectively. Fitch was founded in 1924 and has trailed Moody's and S&P in prominence and market share since that time. Within U.S. corporate credit ratings there has traditionally been a strong "two-rating norm" whereby the majority of issuing firms pay for ratings from both Moody's and S&P (subsequently referred to as MSP). A series of acquisitions in the late 1990s contributed to Fitch expanding its market share and becoming a more popular option for firms seeking an additional rating. However, as of 2006 S&P and Moody's were still estimated to hold approximately 80% of industry market share as measured by revenues (Senate Report 109-326, 2006). Moody's, S&P and Fitch all employ a seller-paid business model whereby firms that issue debt pay an up-front fee to receive an initial rating as well as ongoing fees to maintain rating coverage. These ratings are made publicly available to investors and other stakeholders.

In December 1995, Egan-Jones, a new rating agency, began issuing corporate credit ratings using the same rating scale as S&P but with a buyer-paid revenue model unlike the incumbent agencies. Investors pay a subscription fee to Egan-Jones for access to their full suite of ratings. As shown in Figure 3.1, a screenshot of the Egan-Jones website, the agency publicly claims to provide "highly accurate ratings with predictive value" to "forward thinking institutional investors". It claims that it "selects an issuer for a credit analysis generally based on developments within issuers and industries, market developments and requests of subscribers."

The Securities and Exchange Commission is responsible for regulating credit rating agencies in the United States. As shown in Table 3.1, a total of 10 rating agencies, including S&P, Moody's, Fitch, and Egan-Jones, are currently certified by the SEC as Nationally Recognized Statistical Rating Organizations (NRSROs).36 NRSRO status matters because a large number of government regulations and private contracts explicitly rely on NRSRO ratings, particularly to classify bond holdings as either investment grade or speculative/high risk. Many institutional investors are prohibited from investing in securities that fall below investment grade status. This regulatory structure is unique to credit ratings in the United States.

36 The other six agencies all have much narrower rating coverage due to their focus on a particular industry, geographic or rating-type niche.


Given that S&P and Moody's rate almost all active debt issuers, the key disclosure decision facing most firms is not whether to be rated at all, but whether to pay for a third Fitch rating to supplement their MSP ratings.

3.3 Literature and Hypotheses

3.3.1 Why Do Sellers Pay for a (Third) Rating?

For sellers in a variety of markets, gaining the attention of influential critics, analysts and raters has been found to have positive benefits for firms, in some cases even when the ratings received are unfavorable (Roberts & Reagans, 2007; Zuckerman, 1999). These information intermediaries are particularly important in settings such as bond markets where buyers are unable to physically inspect the product, do not trust unverified information provided directly by sellers, or lack the expertise to make their own assessment of default risk. When rating agencies are independent from both buyers and sellers or are wholly focused on serving buyers, sellers may have limited ability to influence when and how they are rated or reviewed. However, in settings where rating agencies primarily operate under a seller-pay model, sellers usually have significant input into the amount and content of information provided to buyers.

Previous research in the finance literature has provided a progressively clearer understanding of what motivates sellers to pay for a third rating. Cantor & Packer (1995, 1997) investigated the influence of previously received ratings on the decision of firms to seek a third credit rating. Of the 76 firms in their sample with one investment grade rating and one non-investment grade rating from the two major agencies, 46% sought a third rating. Of these firms, approximately 85% (29 of 34) received an investment grade rating from the third agency. As firms' ratings from Moody's and S&P fell further from the investment grade cutoff, fewer third ratings were sought. Jewell & Livingston (1999) provide empirical evidence indicating that a third credit rating has informational value to the bond market. Third ratings have an incremental impact upon bond yields, particularly when Moody's and S&P disagree on the rating. When there is a third rating, upgrades by Moody's and Standard & Poor's are more likely and downgrades less likely. The authors attribute this to the view that firms hire Fitch in the belief that they have been undervalued by Moody's and S&P.


In a more recent paper, Bongaerts, Cremers et al. (2011) extend these findings by testing three overlapping theories for why sellers apply for a third credit rating: "information production", where additional ratings reduce uncertainty about the credit quality of the rated bonds; "rating shopping/adverse selection", where rating agencies make rating errors and sellers who have better information about their own credit quality seek a third rating to optimize their average rating; and "regulatory certification", where the decision to seek a third rating is related only to the regulatory rules used to determine whether a firm is investment grade when its credit ratings are not in agreement. In such situations, the most common "tiebreaker" rule is to use the lower rating when only two ratings exist and the middle rating when three ratings exist. The authors find that marginal, additional credit ratings are more likely to occur for regulatory purposes, but do not seem to provide significant additional information related to credit quality.
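A sketch of that tiebreaker rule on the numeric scale used in Chapter 2 (1 = AAA, 22 = Default, so a larger number is a worse rating); the function name and example values are mine:

```python
# Common regulatory "tiebreaker" rule described above: with two ratings use
# the lower (worse) one; with three, use the middle rating.
def effective_rating(ratings):
    ordered = sorted(ratings)               # ascending: best (smallest) first
    if len(ordered) == 2:
        return ordered[1]                   # the worse of the two
    return ordered[len(ordered) // 2]       # the middle of three

# A firm split across the cutoff (10 = BBB-, 11 = BB+) is speculative grade
# under two ratings, but a favorable third rating flips the middle value:
print(effective_rating([10, 11]))           # -> 11 (speculative grade)
print(effective_rating([10, 11, 10]))       # -> 10 (investment grade)
```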

3.3.2 Why Do Buyers Pay for Ratings?

With Moody's and S&P ratings publicly available for most firms, along with Fitch ratings in some cases, it is not immediately clear whether any buyers would see value in paying for additional ratings in this setting. However, two factors suggest buyers still want information beyond that provided by S&P, Moody's and Fitch. First, all three seller-paid agencies have received considerable criticism over the accuracy of their ratings, particularly in high-profile cases such as Enron, WorldCom and the structured finance rating collapse of 2007-2008. In response, a variety of stakeholders have advocated changes to improve rating timeliness and accuracy, including encouraging more agencies to challenge the dominant incumbents (Hunt, 2009) or creating a central agency to assign sellers to rating agencies (Mathis et al., 2009). Second, the 15-year existence of Egan-Jones, a rating agency that receives its revenue directly from institutional investors who subscribe to its rating service, provides indirect evidence that some investors value additional credit ratings enough to pay for them.

Buyer-paid rating agencies have received less attention from researchers and the media than seller-paid agencies, and most prior investigations, including the first chapter of this thesis, have focused on how the ratings they assign compare to those of seller-paid agencies. Beaver, Shakespeare, et al. (2006) compare Moody's and Egan-Jones ratings from 1996 to 2002 and find that Egan-Jones upgraded and downgraded ratings in a more timely manner than Moody's. Xia (2011) compares seller-paid and buyer-paid ratings and finds that seller-paid ratings show signs of inflation when expected compensation from sellers is high and that neither regulators nor investors seem to adjust for this rating bias. In contrast, my analysis in this study focuses primarily on the selection of firms rated by Egan-Jones as a way of gaining insight into bond investors' demand for information.

It should be noted that coverage of a firm by one or more seller-paid rating agencies in no way prevents a buyer-paid agency from rating it as well. In fact, coverage from prominent seller-paid agencies can provide an ideal point of comparison to convince subscribers of the agency's investment value, as demonstrated by Egan-Jones' inclusion of the corresponding S&P rating with each of its rating announcements.

In this context I offer two predictions on how buyer demand for additional ratings will differ from what sellers disclose via Fitch.

3.3.2.1 Additional Ratings Across the Rating Scale

For individual issuers, the position of their firm's existing S&P and Moody's ratings on the rating scale can significantly alter their willingness to pay for a third rating. First, sellers with two ratings below the investment grade cutoff get no regulatory benefit from a third rating even if it proves more favorable. Second, firms rated well below the investment grade cutoff typically issue less subsequent debt than those above the cutoff, and seller decisions to engage a third rating agency often coincide with the issuance of new debt. Finally, fees charged by seller-paid rating agencies typically vary based on the amount of debt a firm issues but do not vary based on the assigned rating, which would raise obvious conflict of interest concerns. Higher-risk firms with more volatile credit ratings require more ongoing surveillance work, so if anything they may receive less favorable pricing from rating agencies, making them even less likely to invest in a third rating.

In contrast, buyers as a group can benefit from new rating information when the existing average ratings from Moody's and S&P sit at various points on the rating scale, including at the lowest rating levels. Prior literature (Bongaerts et al., 2011) has described bond investors as having two distinct profiles. In one group are conservative investors holding primarily investment grade bonds due to requirements in public regulation, private contracts, and company policy. Ratings from a buyer-paid agency may be helpful to these investors in determining which bonds to buy or sell, particularly if buyers have some reason to doubt the existing MSP ratings. Such doubt could arise from either a lack of rating updates or a flurry of rating changes by Moody's and S&P.

The second group of investors invests in securities at all risk levels, seeking an informational edge when investing in speculative, high-yield debt that would typically be poorly rated, but also investing in investment grade securities for liquidity and diversification. These investors are particularly concerned with determining which firms are at highest risk of defaulting on their debt commitments. This concern is amplified by the nature of potential outcomes for bond investors as compared to equity investors. As residual claimants on the value of the firm, equity investors have unlimited upside on their investment, whereas the maximum return a buy-and-hold bond investor can receive is the interest and principal promised in the terms of the bond. However, bondholders can still lose their entire investment if a firm experiences financial difficulty and is unable to cover its debts. Thus it may be more viable for a buyer-paid agency than for a seller-paid agency to rate higher-yield firms where expert advice is particularly valuable to buyers. This is consistent with findings regarding bond analysts, a separate group of financial professionals similar to equity analysts who provide bond investors with "buy", "sell" and "hold" recommendations rather than assigning firms to a particular rating category.37 These analysts have been found to focus disproportionately on higher-yield securities, consistent with the needs of the second investor group described above (De Franco, Vasvari, & Wittenberg-Moerman, 2009). Since the Egan-Jones subscriber base appears to include both types of investors, we should expect to find differences when comparing the distribution of third ratings by buyer-paid and seller-paid agencies in terms of where the existing MSP ratings of rated firms fall on the credit rating scale.

Hypothesis 1: Firms receiving third ratings from a buyer-paid agency will be more uniformly distributed across the levels of the rating scale than firms receiving third ratings paid for by sellers.

37 Like equity analysts, bond analysts are partially compensated through the trade volume they generate for their banking employer.


3.3.2.2 Additional Ratings Near A Critical Threshold

Previous findings have emphasized that seller willingness to pay for a third rating from Fitch is highest for a small subset of sellers who stand to benefit from a "tiebreaker" rating that pushes them above the investment grade cutoff. While buyers also have some interest in ratings at this threshold, I suggest three reasons why we should expect buyer-paid ratings to be less concentrated around these tiebreaker situations. First, seller demand for "tiebreaker" ratings is wholly contingent on the expected outcome being favorable to the seller, averting the required sell-off of bonds or reallocation of capital that can be triggered when a firm's debt is no longer considered investment grade. If buyer-paid ratings are less lenient than seller-paid ratings and thus less likely to fall on the investment grade side of the cutoff, the tiebreaker benefit will be severely muted.38 Second, when sellers are paying for a new rating, Fitch, like Moody's and S&P, grants them control over when (or if) the rating is publicly announced.39 This should concentrate the release of third ratings in the exact month when a split opinion regarding investment grade status exists, particularly since firms will know from their periodic direct interaction with S&P and Moody's when an upgrade or downgrade is expected to move them into such a split position. Egan-Jones also controls the timing of its rating announcements but has no inside information on the timing of S&P or Moody's rating changes and would seem to have less incentive to delay a new rating until a potential tiebreaker situation becomes reality. Finally, in terms of time horizon, the benefits to sellers of being deemed investment grade extend beyond the impact on their existing debt to their ability to issue future debt at lower cost. In contrast, buyers are only affected to the extent of their current holdings and can invest in alternative firms in the future, reducing their concern over the threshold. This gives us a second prediction:

Hypothesis 2: Buyer-paid raters will be less likely than seller-paid raters to issue an additional rating for firms whose existing ratings are split across a critical rating threshold.

38 In addition, Egan-Jones ratings were not granted NRSRO status until near the end of the sample period, and even at that point some confusion existed over their use as a tiebreaker at the issuer level. 39 Arrangements between seller-paid rating agencies and firms to keep unfavorable ratings private have been the target of criticism and proposed regulatory changes. However, to date no mechanism has been implemented to eliminate this practice.


3.4 Data

3.4.1 Measures

Egan-Jones ratings were obtained directly from the agency under specific confidentiality restrictions. The publicly available seller-paid ratings for S&P, Moody's and Fitch were collected from Bloomberg. These three agencies issue ratings for specific individual corporate bonds as well as issuer-level or senior unsecured debt ratings that reflect their assessment of a firm's overall level of credit risk. Because Egan-Jones issues ratings only at the firm level, rather than also rating individual bonds, I restrict my sample to issuer-level ratings, as in some previous papers analyzing rating differences (Cantor & Packer, 1997).40 Firms in the Egan-Jones and Bloomberg datasets were matched using a combination of numeric identifiers (CUSIP, GVKEY, CIK), ticker symbols and standardized name-matching, and then linked to annual firm financial data and industry sector information, based on Global Industry Classification Standard (GICS) code, from S&P and MSCI Barra.41
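As an illustration, the name-based portion of this linking step can be approximated with standard string-similarity tools. The sketch below uses Python's difflib and entirely hypothetical firm names, identifiers, and similarity cutoff; it is not the matching code actually used.

import difflib
import pandas as pd

def best_name_match(name, candidates, cutoff=0.6):
    # Return the closest candidate name, or None if nothing clears the cutoff.
    hits = difflib.get_close_matches(name, candidates, n=1, cutoff=cutoff)
    return hits[0] if hits else None

# Hypothetical firm lists standing in for the Egan-Jones and Bloomberg data.
ej = pd.DataFrame({"ej_name": ["INTL BUSINESS MACHINES", "FORD MOTOR CO"]})
bb = pd.DataFrame({"bb_name": ["INTERNATIONAL BUSINESS MACHINES CORP",
                               "FORD MOTOR COMPANY"],
                   "gvkey": [1001, 1002]})   # placeholder identifiers

ej["bb_name"] = ej["ej_name"].apply(
    lambda n: best_name_match(n, bb["bb_name"].tolist()))
linked = ej.merge(bb, on="bb_name", how="left")
print(linked)

In practice such candidate matches would be verified against CUSIP, CIK and ticker identifiers before being accepted.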

Ratings are announced as letter grades and I follow previous researchers in converting these letter grades to a numerical value on a scale of 1-22, with AAA = 1 and D = 22 (Becker & Milbourn, 2011; Jewell & Livingston, 1999; Xia, 2011). In months where a firm is rated by both S&P and Moody's I also calculate the average rating for the two agencies and assign the observation to one of six MSP Rating Categories based on where on the rating scale the average MSP rating falls (AAA to AA-, A+ to A-, BBB+ to BBB-, BB+ to BB-, B+ to B-, and C+ to D). I code a variety of dichotomous variables consistent with Bongaerts et al. (2011) to allow for direct comparison of my Fitch results to their findings. These include Tiebreaker IG, which takes a value of 1 in a month when Moody's and S&P ratings fall on opposite sides of the investment grade boundary (between 10 and 11 on the numerical scale). A placebo measure, which I label Tiebreaker A-, is coded in the same manner for ratings that split the A-/BBB+ cutoff, which holds no regulatory importance. For months where Tiebreaker IG = 1 and Fitch initiates coverage of the firm in that month, Fitch above IG and Fitch below IG indicate whether the new Fitch rating

40 This also excludes structured finance ratings and ratings for municipal, state and national levels of government. One limitation of issuer ratings is that they do not allow a one-to-one match with bond trading and pricing data. 41 I used a variety of fuzzy name-matching techniques to link firms in both datasets to GVKEY, a Compustat company identifier, CUSIP #, or Ticker symbol. There are 10 industry sectors in the GICS classification scheme.

is above or below the investment grade cutoff. To capture uncertainty in a firm's ratings, S&P & Moody's Disagree takes a value of 1 in any month where both S&P and Moody's (abbreviated as MSP) rate the firm but do not assign the same rating, and Notches MSP Rating Dispersion measures the number of numerical rating notches separating the two ratings when they disagree. I then code equivalent variables for Egan-Jones.
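To make the coding concrete, the minimal sketch below (hypothetical column names; the exact labels used for the lowest notches are my assumption) converts letter grades to the 22-point scale and codes Tiebreaker IG and the two disagreement measures.

import pandas as pd

SCALE = ["AAA", "AA+", "AA", "AA-", "A+", "A", "A-", "BBB+", "BBB", "BBB-",
         "BB+", "BB", "BB-", "B+", "B", "B-", "CCC+", "CCC", "CCC-", "CC",
         "C", "D"]
NUM = {grade: i + 1 for i, grade in enumerate(SCALE)}   # AAA = 1, ..., D = 22

panel = pd.DataFrame({"sp": ["BBB-", "BB+"], "moodys": ["BB+", "BB+"]})
panel["sp_num"] = panel["sp"].map(NUM)
panel["moodys_num"] = panel["moodys"].map(NUM)
panel["msp_avg"] = (panel["sp_num"] + panel["moodys_num"]) / 2

# Investment grade ends at BBB- (10); ratings of 11 and above are speculative.
panel["tiebreaker_ig"] = ((panel[["sp_num", "moodys_num"]].min(axis=1) <= 10) &
                          (panel[["sp_num", "moodys_num"]].max(axis=1) >= 11)).astype(int)
panel["msp_disagree"] = (panel["sp_num"] != panel["moodys_num"]).astype(int)
panel["notches_dispersion"] = (panel["sp_num"] - panel["moodys_num"]).abs()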

3.4.2 Sample & Unit of Analysis

The sample for this study draws on the 2,444 firms that had active issuer credit ratings from both S&P and Moody's in at least one month between January 2000 and June 2009.42 However, some of these firms were already rated by Fitch and/or Egan-Jones at the start of the sample period. I exclude these firms, leaving 1,769 firms that are thus "at risk" of receiving an additional rating from Fitch and/or Egan-Jones after January 2000.43

The unit of analysis is the firm-month pair. I begin with a data structure consisting of 114 monthly periods between January 2000 and June 2009. For each firm and each month, I convert individual agency rating announcements, which may arrive at any time, into a monthly panel by taking the active ratings on the last day of each month. In each month I observe which of the four agencies has an active rating for each firm. By looking at previous months I can determine whether the rating represents a first-ever rating, an upgrade or downgrade from the previous month, or simply a repeat of the previous month's rating. Based on the number of months in which the 1,769 firms have at least one active rating between Jan 1, 2000 and June 30, 2009, my final sample consists of 141,613 firm-month observations. Figure 3.2 illustrates the monthly changes in rating coverage for all four agencies, showing that Egan-Jones and Fitch rate a comparable number of firms throughout the sample period.
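A minimal sketch of this panel construction, assuming a hypothetical long table of dated rating announcements; a full implementation would also reindex each firm to the complete set of 114 months and handle withdrawn ratings.

import pandas as pd

events = pd.DataFrame({
    "firm": ["A", "A", "A"],
    "date": pd.to_datetime(["2003-01-15", "2003-03-02", "2003-03-20"]),
    "rating": [9, 10, 11],          # numeric scale, 1 = AAA
})

# Keep the last announcement in each month, then forward-fill so every month
# reflects the rating active on its last day.
monthly = (events.set_index("date")
                 .groupby("firm")["rating"]
                 .resample("M").last()
                 .groupby(level="firm").ffill()
                 .reset_index())

prev = monthly.groupby("firm")["rating"].shift(1)
monthly["downgrade"] = (monthly["rating"] > prev).astype(int)   # higher number = worse
monthly["upgrade"] = (monthly["rating"] < prev).astype(int)
monthly["first_rating"] = prev.isna().astype(int)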

42 On July 7, 2005, Moody's withdrew a large number of speculative grade issuer ratings, while maintaining individual bond-level ratings for the same firms. I exclude these firms from the sample used. 43 94 firms were rated by both Egan-Jones and Fitch as of January 2000, 276 by Egan-Jones but not Fitch, and 305 by Fitch but not Egan-Jones. The results of the subsequent empirical analysis are highly consistent when these firms are left in the sample.


3.4.3 Descriptive Statistics

Summary statistics from my sample are generally consistent with previous research regarding Fitch ratings, while providing new insight into Egan-Jones ratings that generally supports the hypotheses of this study.

Table 3.2 provides summary statistics for the full monthly rating sample and yields a number of key insights. First, as expected given their market prominence, S&P and Moody's are more likely to have an active rating for any firm-month pair (means 0.80 and 0.71) than Fitch and Egan-Jones, who have lower but comparable coverage rates (means 0.24 and 0.22). Second, consistent with Bongaerts et al. (2011), over the full sample period Fitch has the most lenient average ratings of the three seller-paid agencies (mean 8.08), followed by S&P (10.5) and Moody's (11.6). On average, Egan-Jones ratings happen to be equivalent to S&P over the full sample period despite considerable year-to-year fluctuations demonstrated in the previous chapter of this thesis. These differences may provide some insight into the relative leniency of the various agencies. However, the first chapter of this thesis also makes clear that they are significantly affected by the selection of which firms are rated by each agency. Third, if Fitch or Egan-Jones enters when S&P and Moody's ratings are split across the investment grade threshold (Tiebreaker IG = 1), both are more likely to rate on the investment grade side of the cutoff. However, the ratio of "above IG" to "below IG" entries is higher for Fitch than for Egan-Jones, potentially consistent with Fitch issuing more lenient ratings than Egan-Jones.

Table 3.3 provides a similar set of summary statistics for two key subsets of the full sample. The first five columns describe the 263 firm-month observations in which Fitch begins rating a firm already rated by S&P and Moody's. The next five columns describe the 224 firm-month observations in which Egan-Jones begins rating a firm already rated by S&P and Moody's. Here we see that for 24% of Fitch entries, Egan-Jones has already rated the same firm, and 22% of Egan-Jones entries have been preceded by Fitch. This alleviates the concern that one of the two agencies consistently leads the other in terms of when they initiate new rating coverage. We can also see that in the months when Fitch enters, average ratings for all four agencies are within half a rating notch of each other. When Egan-Jones enters, the average rating dispersion among agencies is greater. Egan-Jones is more likely than Fitch to enter during a month when S&P and Moody's have downgraded the firm than when they have upgraded the firm, consistent with buyer concern regarding downside risk. Finally, Egan-Jones is less likely than Fitch to issue an initial rating that is equal to S&P, Moody's or even to the average of the two (MSP). This is consistent with buyers especially valuing a third rating that differs from, rather than reaffirms, existing opinions.

Table 3.4 shows average rating differences between pairs of agencies. The full set of 2,444 issuers rated simultaneously by S&P and Moody's in at least one month is shown in the top left corner. Average differences for the subset of these issuers rated by Fitch and/or Egan-Jones are shown in the subsequent columns to the right. A negative average difference means that the rating agency named first gives on average a more favorable rating than the agency named second. The top row of columns 2 and 3 shows that Fitch ratings are on average more favorable than both S&P and Moody's for the same issuer in the same month, consistent with Cantor & Packer (1997), Bongaerts et al. (2011) and the mean ratings in Table 3.2 and Table 3.3. Egan-Jones, on the other hand, is more optimistic than Moody's on average (-0.241, p < 0.001) but less optimistic than S&P (0.155, p < 0.001) or Fitch (0.304, p < 0.001). Subsequent rows divide the sample based on each firm's average MSP rating. Average rating differences vary significantly and even change sign depending on what portion of the rating scale is examined. Consistent with H1, Egan-Jones is twice as likely as Fitch to rate firms in the bottom two rating categories, although neither agency rates anywhere near the number rated by S&P and Moody's.

In analysis not reported here, average rating differences remain significant and retain the same sign in all subsamples when the sample is divided by time period (2000-2004 and 2005-2009) or by total firm assets (quartiles, or above and below the mean), with one exception. The average rating difference between Egan-Jones and S&P ratings changes sign between the two time periods, consistent with the greater time variability of Egan-Jones ratings compared to S&P and Moody's demonstrated in the first chapter of this thesis. Egan-Jones rates 306 firms with total assets below the 50th percentile compared to only 181 firms for Fitch. This suggests significant buyer demand for ratings of smaller firms that is not reflected in the level of seller-paid disclosure via Fitch, perhaps because the cost involved in obtaining an additional rating is more prohibitive for these smaller sellers.


3.5 Empirical Approach

I use two different empirical approaches to better understand the differences in rating coverage provided by Fitch and Egan-Jones within the January 2000 to June 2009 monthly panel. First, I use a linear probability model fitted by simple linear regression where the dependent variable is dichotomous and takes a value of 1 if an agency (either Fitch or Egan-Jones) initiates coverage of a firm for the first time in a given month and 0 otherwise. A second key dichotomous variable indicates whether the focal agency for the specific observation is Fitch (0) or Egan-Jones (1). I interact the agency indicator with the other key explanatory variables, all of which are categorical or dichotomous, which allows the effect of each of these variables to differ by agency. Both hypotheses can then be tested by comparing the estimated coefficients for the various Fitch and Egan-Jones interaction terms within a single estimated model.
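As a sketch of this specification, the pooled regression with agency interactions and firm-clustered standard errors might look as follows in Python's statsmodels (an assumed tool; the thesis does not name its software), estimated here on synthetic placeholder data with illustrative variable names.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the stacked Fitch/Egan-Jones firm-month risk set.
rng = np.random.default_rng(1)
n = 8000
panel = pd.DataFrame({
    "firm": rng.integers(0, 800, n),
    "egan_jones": rng.integers(0, 2, n),          # 0 = Fitch row, 1 = EJ row
    "msp_category": rng.integers(1, 7, n),
    "tiebreaker_ig": rng.integers(0, 2, n),
    "msp_disagree": rng.integers(0, 2, n),
    "new_rating": (rng.random(n) < 0.005).astype(int),
})

# Interacting every regressor with the agency indicator lets each effect
# differ between Fitch and Egan-Jones within a single model.
lpm = smf.ols("new_rating ~ (C(msp_category) + tiebreaker_ig + msp_disagree)"
              " * egan_jones", data=panel).fit(
                  cov_type="cluster", cov_kwds={"groups": panel["firm"]})
print(lpm.summary())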

A concern with linear probability models is that the dichotomous dependent variable induces heteroskedasticity: the variance of the error term is not constant across observations, and the errors are not normally distributed. These violations of the standard assumptions of ordinary least squares regression do not affect the sign of estimated effects but can reduce the accuracy of the estimated standard errors. Given this concern, I also use a second empirical approach, following Bongaerts et al. (2011) by running a series of Cox proportional hazard models to estimate the time to adding a first rating from Fitch or Egan-Jones. Because these two events are unordered and receiving a rating from one agency does not preclude a firm from receiving a rating from the other, separate models are run for the new rating events of each agency. The same set of 1,769 firms is at risk of being rated by Fitch or Egan-Jones in each model, but the number of observations listed for each model varies depending on how soon the first rating occurs for the focal agency (if ever). The results identify the effect of several variables, including those related to the two hypotheses, upon the hazard of a firm receiving its first rating from either Fitch or Egan-Jones. Evaluating the hypotheses requires comparing the coefficients from the Fitch results to the coefficients from the Egan-Jones results.
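The Cox estimation for one focal agency can be sketched with the lifelines package (again an assumption about tooling). For simplicity the covariates below are fixed at entry, whereas the regressors in the thesis vary month to month, which would call for a time-varying estimator; the data are synthetic placeholders.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic one-row-per-firm frame: months until a first Fitch rating (or until
# censoring at the end of the panel) plus covariates fixed at their entry values.
rng = np.random.default_rng(2)
n = 500
firm_data = pd.DataFrame({
    "months_at_risk":   rng.integers(1, 115, n),
    "got_fitch_rating": rng.integers(0, 2, n),    # 0 = censored, never rated
    "tiebreaker_ig":    rng.integers(0, 2, n),
    "msp_disagree":     rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(firm_data, duration_col="months_at_risk", event_col="got_fitch_rating")
cph.print_summary()   # positive coefficients imply a shorter time to a first rating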

The coefficients in a Cox regression relate to the hazard rate: positive coefficients indicate a shorter expected time to the specified event, whereas negative coefficients indicate a longer expected time to the specified event. A key benefit of this modeling approach is that identification comes from whether or not a firm is ever rated by the focal agency, as well as the time between when a firm was first "at risk" and when a new rating occurs. In addition, the models make no assumption about the underlying baseline probability of new rating coverage, other than that the effects of the predictor variables upon survival are proportional over time and additive on the log-hazard scale.
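Formally, the proportional hazards specification assumed here can be written (in my notation, not the thesis's) as

\[
h_i(t \mid x_i) = h_0(t)\,\exp(x_i'\beta),
\]

where \(h_0(t)\) is the unrestricted baseline hazard of receiving a first rating from the focal agency. Each coefficient \(\beta_k\) therefore multiplies the hazard by \(\exp(\beta_k)\), which is why effects are additive on the log-hazard scale.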

A key assumption in the interpretation of results from both the linear probability and Cox models is that the set of firms rated by buyer-paid agency Egan-Jones is representative of actual buyer demand for corporate credit rating information. This assumption could be incorrect if Egan-Jones caters its rating coverage to a small number of non-representative buyers. However, as of 2005, the midpoint of the sample period, Egan-Jones was reported to have approximately 400 paid subscribers (Hempel & Henry, 2005) drawn from pension funds, insurance companies, asset managers and hedge funds. As part of its SEC certification, Egan-Jones is also required to confirm that none of its individual clients represents more than 10% of the firm's net revenue (Egan-Jones, 2011), which further reduces the likelihood that its coverage decisions cater to only a handful of buyers.

The representativeness of Egan-Jones ratings would also be reduced if the agency specialized in a narrow set of industries to the exclusion of others, or was simply slow or inaccurate in updating its rating coverage to reflect current buyer interests. However, analysis of the industry mix of Egan-Jones ratings from 2000 to 2009 (not reported) shows that the firm has rated a broad set of industries throughout the period and that the relative share of each industry has remained quite stable.

If Egan-Jones' cost to issue a new rating varies significantly across firms, those that are most costly to rate could be excluded despite some level of buyer willingness to pay. In this scenario the set of firms observed may not be those of greatest interest to buyers. However, because Egan-Jones rates a variety of industries using a process that relies primarily on publicly available data rather than direct interaction with sellers, there is no indication of significant variation in cost due to geographic distance, seller demands or other factors.

Egan-Jones' 15-year existence and the fact that its rating changes have been shown to be more timely than those of seller-paid counterparts (Beaver et al., 2006 and Chapter 1 of this thesis; Johnson, 2003) are positive signs that its rating coverage reflects actual buyer demand.


3.6 Results

3.6.1 Linear Probability Model Results

Results of the linear probability model are provided in Table 3.5. Model 1 is the standard specification with all variables of interest. Model 2 adds both period and industry dummies to test whether temporal or industry-related factors affect the coefficients on the variables of interest. Model 3 adds control variables for concurrent rating changes by S&P or Moody‟s in the focal month. Upgrades and downgrades by these agencies were correlated with the initiation of rating coverage in the descriptive statistics of Table 3.3.

In all three models, Fitch is the default agency so the upper rows of coefficients describe Fitch effects. Below them are the interaction variables for Egan-Jones for all the same variables of interest. Thus the constant shown at the bottom of the table represents the probability of Fitch entry for a firm with MSP ratings in the highest rating category (above A+), where all other dummy variables have a value of 0.

The first six variables listed categorize firms based on their average MSP rating category, with MSP above A+ as the omitted reference category. The top three categories represent average ratings above the investment grade cutoff. The Fitch coefficients suggest that when other variables are held constant, Fitch is significantly less likely to rate firms below the investment grade cutoff than above the cutoff. However, Fitch is equally likely to rate firms within the three categories above the cutoff. In contrast, Egan-Jones is less likely to rate the firms in the omitted category (as per the "Egan-Jones" coefficient) but more likely to rate those firms that fall below the investment grade cutoff. Viewed together, these results are consistent with H1, which predicted that firms receiving an Egan-Jones rating would be more evenly distributed across the rating scale.

The two key variables for evaluating H2 are "Tiebreaker IG" and "EJ * (Tiebreaker IG)". In all three models, Fitch is statistically significantly more likely to rate a firm when it can be the tiebreaker at the investment grade cutoff. The corresponding effect is not statistically significant for Egan-Jones in any of the models, consistent with the predicted finding.

When Models 1, 2 and 3 are compared, the results remain consistent following the inclusion of controls for period, industry and concurrent MSP rating changes.


3.6.2 Cox Model Results

Results of the Cox proportional hazard analysis are provided in Table 3.6 (Fitch) and Table 3.7 (Egan-Jones). Positive and significant coefficients indicate factors that increase the likelihood of new rating coverage whereas negative and significant coefficients indicate a decrease in the same likelihood.

3.6.2.1 Time to Adding a Fitch Rating

Despite using monthly data rather than quarterly data and issuer ratings rather than bond-level ratings, the results for Fitch entry in Table 3.6 are fairly consistent with those of Bongaerts et al. (2011). All five models include rating category dummies based on average MSP ratings, with the omitted category being firms rated above A+. Under the first four specifications, rating coverage is equally likely for the upper three rating categories once we account for Tiebreaker IG, since coefficients for the first two categories shown are not significantly different from the omitted group. However, the likelihood of Fitch rating coverage is lower for firms just below the investment grade cutoff (BB+ to BB-) and dramatically lower for firms in the two lowest rating categories.44 The economic significance of these coefficients is large. In Model 4 the -2.257 coefficient on the B+ to B- rating category indicates that firms in this range have approximately one tenth the hazard rate of receiving a Fitch rating (exp(-2.257) = 0.105) as firms in the highest rating category. Figure 3.3 illustrates the wide variation in estimated cumulative hazard of a Fitch rating depending on the rating category into which a firm's average Moody's and S&P ratings fall.
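Because both rating categories share the same baseline hazard, the category coefficient converts directly into a hazard ratio:

\[
\frac{h_{\text{B+ to B-}}(t)}{h_{\text{above A+}}(t)} = \exp(-2.257) \approx 0.105.
\]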

In Models 1, 3, 4 and 5 of Table 3.6 the coefficient on the Tiebreaker IG variable is statistically significant and positive, indicating that sellers are significantly more likely to pay for a Fitch rating that has the potential to be the investment grade tiebreaker under common regulatory rules. This is consistent with the "Tiebreaker" hypothesis of Bongaerts et al. (2011). In contrast, the placebo measure Tiebreaker A- is not significant in any specification, as expected. The measures for uncertainty, including a dummy variable for MSP disagreement and Notches MSP Rating Dispersion, which measures the gap when the two agencies disagree, have negative

44 This pattern may have been muted in Bongaerts et al. (2011) because they exclude all bonds with ratings below B-. The number of issuers at risk in each rating category and the number actually rated by Fitch or Egan-Jones can be seen in Table 3.4.

signs and are not significant. Figure 3.4 again shows the estimated cumulative hazard of a Fitch rating, this time comparing the significant difference between firms with Tiebreaker IG = 1 and all other firms.

When variables for upgrades or downgrades by either S&P or Moody‟s are added in Model 4, the results show that both types of changes significantly increase the hazard of Fitch adding a rating but also that the effect is larger for upgrades. In Model 5, when control variables for firm industry sector and logged total assets are added where available, the baseline hazard level of the omitted rating category (MSP ratings above A+) shifts but the category coefficients still show that the likelihood of a Fitch rating is higher for firms at the higher end of the MSP rating scale than those at the lower end.

3.6.2.2 Time to Adding an Egan-Jones Rating

Comparing Table 3.6 to the results for Egan-Jones in Table 3.7 allows us to test the two hypotheses. In contrast to Fitch, firms at the lowest rating category levels are not less likely to receive an Egan-Jones rating. In fact, all rating category levels in Models 1-4 have a comparable likelihood of receiving coverage, with the exception of firms with average ratings in the BBB+ to BBB- rating category just above the investment grade cutoff, whose likelihood is slightly higher (p < 0.05). This difference in distribution is a notable finding consistent with H1, which predicted buyer-paid ratings to be more evenly distributed across the rating scale. After adding control variables for firm industry in Model 5, the likelihood of rating coverage is indistinguishable across rating categories, including BBB+ to BBB-.45 When the estimated cumulative hazard rates for firms at each rating category level are plotted in Figure 3.5, the range is much narrower than for Fitch as shown in Figure 3.3.

Consistent with H2, the Tiebreaker IG variable that was significant for Fitch rating coverage is not significant for Egan-Jones in any model specification. This suggests that the demand from sellers for a regulatory "tiebreaker" when they end up with S&P and Moody's ratings split across the investment grade cutoff does not coincide directly with buyer demand for another informative rating for the same select set of firms. The difference in cumulative hazard rates for

45 The GICS code indicating a firm's industry sector is unknown for approximately 40% of the at-risk sample but less than 3% of the firms receiving an Egan-Jones rating. With more complete data these results could change.

firms based on the Tiebreaker IG variable is shown in Figure 3.6. Again, compared to Fitch the range is much narrower. Viewed together, Models 1-4 suggest that Egan-Jones initiates coverage for a variety of firms near the investment grade cutoff rather than placing additional emphasis on the narrower group whose MSP ratings fall on opposite sides of the threshold.

When variables for rating changes by Moody's and S&P are added in Models 4 and 5, both upgrades and downgrades by S&P and Moody's significantly increase the likelihood of Egan-Jones coverage, consistent with the findings for Fitch. However, in contrast to Fitch, downgrades rather than upgrades have a larger positive coefficient and greater statistical significance. The magnitudes of both these rating change effects are the largest of any within the model, with an MSP downgrade increasing the hazard of Egan-Jones entry by over six times in Model 5 (exp(1.867) = 6.47). This pattern is consistent with Egan-Jones catering to buyers who are particularly interested in additional rating information for firms in declining financial situations.

3.6.3 Robustness Checks

As described above, the linear probability model and the Cox proportional hazard model both have characteristics that make them well suited for the phenomenon of interest in this study. The linear probability model allows for hypothesis testing within a single set of model results and the interaction coefficients are fairly easily interpreted. The Cox model requires no assumptions regarding the baseline hazard and allows duration dependence to be separated from the partial effects of individual regressors. It is also adept at handling censored data that result when firms enter or leave the sample in various periods.

However, as a robustness check it is also possible to run a logistic hazard model using the same sample structure and the same dependent variable as the Cox model, taking a value of 0 if the focal agency (Egan-Jones or Fitch) does not initiate coverage in a given period and a value of 1 if a first-time rating is issued in the period, at which time the issuer drops out of the sample. Because the logistic model does not incorporate information on time to first rating in the same way as the Cox model, this approach can be thought of as a repeated cross-section analysis. Like the Cox model, it is necessary to run the logistic hazard model with one focal agency, either Fitch or Egan-Jones, at a time.
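A sketch of this discrete-time logistic hazard, again using statsmodels on synthetic placeholder data with illustrative variable names; exponentiated coefficients give the odds ratios reported in Table 3.8.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic firm-months before the focal agency's first rating; 'initiates' = 1
# in the month coverage begins, after which the issuer would leave the sample.
rng = np.random.default_rng(3)
n = 5000
at_risk = pd.DataFrame({
    "firm": rng.integers(0, 500, n),
    "msp_category": rng.integers(1, 7, n),
    "tiebreaker_ig": rng.integers(0, 2, n),
    "msp_disagree": rng.integers(0, 2, n),
    "msp_upgrade": (rng.random(n) < 0.01).astype(int),
    "msp_downgrade": (rng.random(n) < 0.02).astype(int),
    "initiates": (rng.random(n) < 0.01).astype(int),
})

logit = smf.logit("initiates ~ C(msp_category) + tiebreaker_ig + msp_disagree"
                  " + msp_upgrade + msp_downgrade",
                  data=at_risk).fit(cov_type="cluster",
                                    cov_kwds={"groups": at_risk["firm"]})
print(np.exp(logit.params))   # odds ratios, as in Table 3.8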


I provide the results of such a regression in Table 3.8 using a similar set of regressors as in the Cox models. The first two columns provide coefficients from models where Fitch is the focal agency, whereas the third and fourth columns provide results from models focused on Egan-Jones. The results, which are displayed as odds ratios, are comparable to the Cox models in that Fitch shows a lower likelihood of rating firms in the bottom rating categories (p < 0.001) but an increased likelihood of rating firms where Tiebreaker IG = 1 (p < 0.05).

Both entry models above (Cox and logistic) use a sample that excludes firms that were rated by Fitch or Egan-Jones prior to the sample period, as well as observations from the months after Fitch or Egan-Jones began rating a firm. This is by design, since the primary research question of this paper is focused on the initiation of new rating coverage by the two agencies. Once sellers have paid the up-front fee required to receive a seller-paid rating and have publicly announced the resulting rating, they typically keep the rating active unless they withdraw completely from the debt markets or experience a major event such as a merger, acquisition or bankruptcy. Similarly, once a buyer-paid rating agency has conducted its analysis to rate a firm for the first time and added the firm to its portfolio of ratings available to buyers, it is likely to keep the rating active unless a similar major event affects the firm.

However throughout the time that coverage is maintained, ratings may change, shifting firms into different rating categories closer or farther away from the investment grade cutoff or default status. Without carefully examining the typical rating longevity and rating trajectory of rated firms for each agency, it is not clear how closely entry conditions will compare to the conditions of the overall set of ratings available in a given month. As a descriptive extension to the previous analysis, I make use of an alternative logistic regression model that allows me to include the previously excluded observations. I use a different dependent variable based on active rating coverage rather than initiation of a new rating. In this specification, we are predicting the likelihood that Fitch or Egan-Jones has an active rating for a specific firm-month pair, rather than the likelihood that the agency initiates new rating coverage in a given month. The results are shown in Table 3.9 and the coefficients are again shown as odds ratios.

The same rating category variables used in the Cox models appear again here. The results are similar but not identical, since firms do not always stay in the same MSP average rating category where they were when first rated by Fitch or Egan-Jones. As in previous models, when compared to the excluded "A+ and above" category, Fitch coverage is equally likely for the other two categories above the investment grade cutoff but significantly less likely for the three categories below the cutoff. The distribution of Egan-Jones coverage is different, though not as uniform as observed at the time of entry. As compared to the excluded category, Egan-Jones is more likely to rate firms in the two rating categories above the investment grade cutoff and equally likely to rate firms below the cutoff. Results for Tiebreaker IG and MSP upgrades and downgrades are consistent with entry, suggesting that ratings must be fairly stable over time for both agencies for these conditions to persist.

3.7 Discussion and Conclusion

The results of this paper provide evidence that, at least in corporate credit rating, significant differences exist between the disclosure that sellers choose to pay for and the information buyers want to know. These differences can be partially explained by the economic incentives affecting buyers and sellers.

3.7.1 Implications for Firm Strategy

From the perspective of sellers, these results confirm past findings regarding the selective circumstances under which firms are willing to pay for an additional rating. It appears likely that in credit rating, as in the movie industry and other settings, selective disclosure of additional product information is a viable strategy, even if a careful examination of the circumstances raises some concerns. More significantly, the novel approach of using buyer-paid rating coverage to provide an estimate of buyer demand for information provides a useful example of the potential gap between buyer and seller interests regarding information disclosure.

In settings where highly influential raters or critics are not seller-paid, a common question for sellers is how to attract the attention of these raters, especially since just joining the set of rated firms may increase buyer interest regardless of the relative rating received (Roberts & Reagans, 2007). The results of this study provide new insight for sellers as to what circumstances are likely to attract rating attention to their firm when the agency is buyer-paid. An extension to these findings would be to investigate factors affecting coverage under a variety of other rating business models, from crowd-sourced ratings such as Yelp, TripAdvisor or OpenTable to ratings that are bundled as part of magazines, newspapers and other forms of media.


3.7.2 Implications for Policy

These findings suggest that encouraging entry by additional seller-paid rating agencies may not be a sufficient intervention to improve rating quality or significantly increase rating coverage in markets. In addition to concerns that have already been raised about the impact of Fitch's growth on the overall accuracy and informativeness of ratings from all three seller-paid agencies (Becker & Milbourn, 2011), this paper suggests a further concern about the selection of which firms receive ratings from additional seller-paid agencies. If sellers only choose to pay for additional ratings in select circumstances that favor their own interest and are reluctant to pay for third ratings when their preexisting ratings suggest that they are relatively high credit risks, the expected benefits to buyers from competition may not be realized.

The finding that high-risk sellers in particular provide less disclosure than buyers want has parallels in other rating settings. Among restaurants and producers of consumer food products, it is often the case that firms already known to be of relatively high quality also invest in voluntary certification and labeling programs related to their nutritional content and production processes. In many cases these disclosure programs are supported or managed by government. However, while they help consumers better differentiate their choices within the relatively high-quality subset of the market, similar disclosure signals are often lacking for firms at the lower end of the quality spectrum, where health risks may be more severe and additional disclosure beneficial for consumers.

The broader buyer-paid rating coverage that was observed in this setting suggests that mechanisms encouraging or subsidizing buyer-paid rating agencies may offer positive informational benefit to buyers. The alternative option of mandating disclosure, such as recent efforts regarding restaurant hygiene, may also be warranted where “information unraveling” does not seem to effectively motivate higher-risk firms to voluntarily disclose (Jin & Leslie, 2003, 2009).

Consideration of heterogeneity in rating agency business models is sometimes overlooked by regulators and consumer advocates in the U.S. credit rating market. For example, even though three buyer-paid agencies have already become NRSRO certified, some of the recently proposed U.S. regulatory changes, such as assigning sellers to specific rating agencies rather than allowing them to choose freely (Mathis et al., 2009), could potentially undermine or lock out buyer-paid agencies unless they are taken into account during planning.

3.7.3 Conclusion

In this paper I test whether sellers in the U.S. corporate credit rating market provide incremental information via seller-paid Fitch ratings consistent with the interests of buyers. I make use of a unique dataset that combines ratings from three seller-paid agencies with ratings from one buyer-paid agency. I find differences between seller-paid and buyer-paid ratings consistent with their predicted willingness to pay for additional information. Buyer-paid ratings are less focused on playing a "tiebreaker" role and more evenly distributed across the rating scale.

By suggesting that buyer-paid agencies provide buyers with distinctly different information than seller-paid agencies, my findings add to the ongoing debate over the performance of credit rating agencies and potential regulatory mechanisms for improving their accuracy.


Table 3.1 Nationally Recognized Statistical Rating Organizations (NRSRO)

Agency | NRSRO Approval Date | Classes Rated (a) | Founded | Primary Revenue Source
S&P | 1975 | I-V | 1916 | Seller-paid
Moody's | 1975 | I-V | 1909 | Seller-paid
Fitch, Inc. | 1975 | I-V | 1924 | Seller-paid
Dominion Bond Rating Service (DBRS) | 2003 | I-V | 1976 | Seller-paid
AM Best | 2005 | I-IV | 1899 | Buyer-paid (Insurance)
Egan-Jones | 2007 | I-V | 1995 | Buyer-paid
Japan Credit Rating Agency, Ltd. | 2007 | I-V | 1985 | Seller-paid (Japan)
Rating and Investment Information, Inc. | 2007 | I-V | 1975 | Seller-paid (Japan)
Kroll Bond Rating (formerly LACE Financial Corp.) | 2008 | I-V | 1984 | Buyer-paid
Morningstar Realpoint Division (formerly Realpoint LLC) | 2008 | IV only | 2001 | Hybrid: initial rating issuer-pay, surveillance subscriber-pay
Notes: a. The five classes are I. Financial Institutions, II. Insurance Companies, III. Corporate Issuers, IV. Asset-Backed Securities, and V. Government Securities. Source: Securities and Exchange Commission (SEC), http://www.sec.gov


Table 3.2 Summary Statistics – Full Monthly Rating Sample Summary statistics for the sample of 1,769 firms rated by both S&P and Moody's in at least one month Jan 1, 2000 - June 30, 2009 but not rated by Egan-Jones or Fitch prior to Jan 1, 2000. The unit of analysis is the firm-month.

Variable | N | Mean | St. Dev | Min | Max | Description

Comparing Rating Coverage
S&P Covering | 141,613 | 0.80 | 0.40 | 0 | 1 | = 1 if rated by S&P
Moody's Covering | 141,613 | 0.71 | 0.45 | 0 | 1 | = 1 if rated by Moody's
Fitch Covering | 141,613 | 0.24 | 0.43 | 0 | 1 | = 1 if rated by Fitch
Egan-Jones Covering | 141,613 | 0.22 | 0.42 | 0 | 1 | = 1 if rated by Egan-Jones

Comparing Mean Ratings
Fitch Rating | 33,602 | 8.08 | 3.55 | 1 | 22 | Fitch rating (1 = best, 22 = worst)
EJ Rating | 31,830 | 10.5 | 4.93 | 1 | 22 | Egan-Jones rating
Moody's Rating | 100,469 | 11.6 | 4.93 | 1 | 21 | Moody's rating
S&P Rating | 113,698 | 10.5 | 4.85 | 1 | 22 | S&P rating

Entry When MSP Are on Opposite Sides of the Investment Grade Boundary
Tiebreaker IG | 141,613 | 0.029 | 0.17 | 0 | 1 | Moody's and S&P on opposite sides of investment grade boundary
Fitch above IG | 141,613 | 0.00012 | 0.011 | 0 | 1 | Fitch rates as IG when MSP split
Fitch below IG | 141,613 | 0.000042 | 0.0065 | 0 | 1 | Fitch rates below IG when MSP split
EJ above IG | 141,613 | 0.000042 | 0.0065 | 0 | 1 | EJ rates as IG when MSP split
EJ below IG | 141,613 | 0.000035 | 0.0059 | 0 | 1 | EJ rates below IG when MSP split

Notches MSP Rating Dispersion | 79,859 | 1.17 | 1.2 | 0 | 15 | Absolute value of MSP rating difference

Upgrades & Downgrades
Moody's Upgrade | 141,613 | 0.0047 | 0.069 | 0 | 1 | Moody's upgrade
Moody's Downgrade | 141,613 | 0.012 | 0.11 | 0 | 1 | Moody's downgrade
S&P Upgrade | 141,613 | 0.0058 | 0.076 | 0 | 1 | S&P upgrade
S&P Downgrade | 141,613 | 0.017 | 0.13 | 0 | 1 | S&P downgrade
Fitch Upgrade | 141,613 | 0.0017 | 0.042 | 0 | 1 | Fitch upgrade
Fitch Downgrade | 141,613 | 0.0038 | 0.061 | 0 | 1 | Fitch downgrade
Egan-Jones Upgrade | 141,613 | 0.0029 | 0.054 | 0 | 1 | Egan-Jones upgrade
EJ Downgrade | 141,613 | 0.0055 | 0.074 | 0 | 1 | Egan-Jones downgrade


Table 3.3 Summary Statistics – Fitch or Egan-Jones Entry after S&P/Moody’s The table presents summary statistics and a brief description for the sample of observations from a month where either Fitch (Left Panel) or Egan-Jones (Right Panel) began rating a firm that already had ratings from both S&P and Moody's.

Variable | Fitch Entry: N, Mean, St. Dev, Min, Max | Egan-Jones Entry: N, Mean, St. Dev, Min, Max | Description

Comparing Rating Coverage
Fitch Covering | 263, 1, 0, 1, 1 | 224, 0.22, 0.41, 0, 1 | Rated by Fitch
Egan-Jones Covering | 263, 0.24, 0.43, 0, 1 | 224, 1, 0, 1, 1 | Rated by Egan-Jones

Comparing Mean Ratings
Fitch Rating | 263, 8.65, 3.47, 1, 22 | 49, 8.57, 3.27, 2, 17 | Fitch rating
EJ Rating | 62, 8.35, 2.85, 3, 15 | 224, 10.9, 3.99, 1, 22 | Egan-Jones rating
Moody's Rating | 263, 8.92, 3.57, 1, 20 | 224, 11.2, 4.05, 1, 21 | Moody's rating
S&P Rating | 263, 8.59, 3.28, 1, 22 | 224, 10.7, 3.86, 1, 22 | S&P rating

Entry When MSP Are on Opposite Sides of the Investment Grade Boundary
Tiebreaker IG | 263, 0.11, 0.31, 0, 1 | 224, 0.06, 0.24, 0, 1 | MSP on opposite sides of investment grade boundary
Entry above IG | 263, 0.07, 0.25, 0, 1 | 224, 0.03, 0.16, 0, 1 | New rating above IG
Entry below IG | 263, 0.02, 0.15, 0, 1 | 224, 0.02, 0.15, 0, 1 | New rating below IG

Notches MSP Rating Dispersion | 263, 0.94, 0.93, 0, 5 | 224, 0.92, 0.97, 0, 5 | Absolute value of MSP rating difference

MSP Upgrades & Downgrades Coinciding with New Ratings
Moody's Upgrade | 263, 0.04, 0.20, 0, 1 | 224, 0.02, 0.15, 0, 1 | Moody's upgrade
Moody's Downgrade | 263, 0.04, 0.19, 0, 1 | 224, 0.08, 0.27, 0, 1 | Moody's downgrade
S&P Upgrade | 263, 0.04, 0.19, 0, 1 | 224, 0.01, 0.12, 0, 1 | S&P upgrade
S&P Downgrade | 263, 0.01, 0.11, 0, 1 | 224, 0.09, 0.29, 0, 1 | S&P downgrade

Comparing New Rating to Existing Ratings
Rating Better than MSP | 263, 0.42, 0.49, 0, 1 | 224, 0.46, 0.50, 0, 1 | New rating < MSP
Rating Equal to MSP | 263, 0.27, 0.44, 0, 1 | 224, 0.20, 0.40, 0, 1 | New rating = MSP
Rating Worse than MSP | 263, 0.31, 0.46, 0, 1 | 224, 0.34, 0.48, 0, 1 | New rating > MSP
Better than S&P | 263, 0.29, 0.45, 0, 1 | 224, 0.32, 0.47, 0, 1 | New rating < S&P
Equal to S&P | 263, 0.39, 0.49, 0, 1 | 224, 0.34, 0.48, 0, 1 | New rating = S&P
Worse than S&P | 263, 0.32, 0.47, 0, 1 | 224, 0.33, 0.47, 0, 1 | New rating > S&P
Better than Moody's | 263, 0.33, 0.47, 0, 1 | 224, 0.46, 0.50, 0, 1 | New rating < Moody's
Equal to Moody's | 263, 0.49, 0.50, 0, 1 | 224, 0.30, 0.46, 0, 1 | New rating = Moody's
Worse than Moody's | 263, 0.17, 0.38, 0, 1 | 224, 0.24, 0.43, 0, 1 | New rating > Moody's


Table 3.4 Average Rating Differences by Rating Category This table shows average rating differences for issuers rated simultaneously by S&P and Moody's in at least one month between Jan 2000 - June 2009. Some are rated by either Fitch or Egan-Jones as well. Differences are measured in rating notches and split up by rating categories based on the average Moody's and S&P ratings. A negative number means that the rating agency named first gives on average a more favorable rating than the second agency. T-statistics are in parentheses. *, **, and *** indicate statistical significance at the p < 0.05, p < 0.01, p < 0.001 levels respectively.

Columns: (1) Moody's vs. S&P (full sample); (2) Fitch vs. S&P; (3) Fitch vs. Moody's; (4) Egan-Jones vs. S&P; (5) Egan-Jones vs. Moody's; (6) Egan-Jones vs. Fitch.

All Issuers
Difference: 0.345*** (76.37) | -0.225*** (-49.90) | -0.280*** (-50.11) | 0.155*** (17.65) | -0.241*** (-24.83) | 0.304*** (27.73)
N: 123,840 | 58,234 | 58,234 | 45,935 | 45,935 | 26,053
N. Issuers: 2,444 | 988 | 988 | 831 | 831 | 447

AAA to AA- (1-4.5)
Difference: -0.482*** (-41.01) | -0.212*** (-15.54) | 0.481*** (36.57) | 0.895*** (25.22) | 1.043*** (32.14) | 0.959*** (21.90)
N: 10,422 | 6,787 | 6,787 | 2,485 | 2,485 | 1,712
N. Issuers: 277 | 179 | 179 | 81 | 81 | 47

A+ to A- (5-7.5)
Difference: -0.0326*** (-4.94) | -0.299*** (-43.56) | -0.212*** (-25.93) | 0.318*** (24.20) | 0.133*** (9.76) | 0.606*** (35.25)
N: 30,383 | 19,502 | 19,502 | 12,991 | 12,991 | 8,844
N. Issuers: 702 | 470 | 470 | 284 | 284 | 204

BBB+ to BBB- (8-10.5)
Difference: 0.325*** (51.93) | -0.207*** (-33.38) | -0.441*** (-60.18) | -0.0707*** (-6.54) | -0.439*** (-36.92) | 0.0877*** (6.47)
N: 39,653 | 23,532 | 23,532 | 19,004 | 19,004 | 11,520
N. Issuers: 883 | 524 | 524 | 435 | 435 | 268

BB+ to BB- (11-13.5)
Difference: 0.800*** (58.57) | -0.289*** (-13.36) | -0.666*** (-31.83) | -0.0958*** (-3.46) | -0.766*** (-24.35) | 0.0940* (2.21)
N: 13,734 | 4,917 | 4,917 | 5,626 | 5,626 | 2,351
N. Issuers: 561 | 198 | 198 | 240 | 240 | 98

B+ to B- (14-16.5)
Difference: 1.355*** (116.86) | 0.156*** (5.07) | -1.118*** (-21.22) | 0.455*** (10.49) | -0.983*** (-18.05) | -0.447*** (-5.37)
N: 21,348 | 2,475 | 2,475 | 4,107 | 4,107 | 1,269
N. Issuers: 912 | 107 | 107 | 200 | 200 | 55

C+ to D (17+)
Difference: -0.486*** (-16.68) | 0.106 (1.92) | 0.991*** (13.01) | 0.461*** (5.06) | 0.738*** (9.75) | 0.703*** (3.81)
N: 8,300 | 1,021 | 1,021 | 1,722 | 1,722 | 357
N. Issuers: 539 | 72 | 72 | 141 | 141 | 38


Table 3.5 Linear Probability Model for Adding a Fitch or Egan-Jones Rating
This table presents combined linear probability regressions of the likelihood of Fitch or Egan-Jones initiating rating coverage in a given month. The risk set is all firms having active ratings from both Moody's and S&P (MSP) in that month that have yet to receive a rating from Fitch or Egan-Jones. The dependent variable is dichotomous, taking a value of 1 when an agency (Fitch or Egan-Jones) initiates coverage in the month. A dichotomous variable indicates whether the focal agency is Fitch (0) or Egan-Jones (1). Key explanatory variables interacted with the Fitch/EJ variable include the rating category dummies based on average MSP ratings, measures for whether MSP ratings are split across the Investment Grade cutoff, whether MSP disagree (indicating uncertainty in the existing ratings), and variables indicating MSP upgrades and downgrades. Standard errors are clustered at the firm/issuer level. T-statistics are in parentheses. *, **, and *** indicate statistical significance at the p < 0.05, p < 0.01, and p < 0.001 levels respectively.

                           Model 1              Model 2              Model 3
MSP Above A+               ref.                 ref.                 ref.
MSP A+ to A-               0.000509 (0.36)      0.000373 (0.26)      0.000298 (0.21)
MSP BBB+ to BBB-           -0.000777 (-0.57)    -0.00134 (-0.97)     -0.00141 (-1.03)
MSP BB+ to BB-             -0.00403** (-2.78)   -0.00488*** (-3.34)  -0.00513*** (-3.51)
MSP B+ to B-               -0.00776*** (-5.89)  -0.00891*** (-6.68)  -0.00900*** (-6.75)
MSP Below B-               -0.00837*** (-5.70)  -0.00916*** (-6.15)  -0.00978*** (-6.55)
S&P ne Moody's             0.0000783 (0.12)     -0.00000895 (-0.01)  -0.0000593 (-0.09)
Tiebreaker IG              0.00336* (2.48)      0.00296* (2.18)      0.00288* (2.12)
Egan-Jones                 -0.00548*** (-3.39)  -0.00428** (-2.65)   -0.00440** (-2.72)
EJ * (MSP A+ to A-)        0.000896 (0.51)      0.000361 (0.20)      0.000445 (0.25)
EJ * (MSP BBB+ to BBB-)    0.00350* (2.04)      0.00256 (1.49)       0.00269 (1.56)
EJ * (MSP BB+ to BB-)      0.00834*** (4.46)    0.00712*** (3.81)    0.00726*** (3.88)
EJ * (MSP B+ to B-)        0.00918*** (5.52)    0.00793*** (4.77)    0.00804*** (4.84)
EJ * (MSP Below B-)        0.00969*** (5.09)    0.00848*** (4.46)    0.00859*** (4.51)
EJ * (S&P ne Moody's)      -0.00235* (-2.58)    -0.00225* (-2.46)    -0.00223* (-2.45)
EJ * (Tiebreaker IG)       -0.00290 (-1.55)     -0.00306 (-1.64)     -0.00311 (-1.67)
S&P Upgrade                --                   --                   0.00884*** (4.01)
S&P Downgrade              --                   --                   0.00316* (2.55)
Moody's Upgrade            --                   --                   0.0146*** (6.17)
Moody's Downgrade          --                   --                   0.00745*** (5.34)
Constant                   0.00878*** (6.85)    0.00466* (2.16)      0.00484* (2.25)
Period dummies             No                   Yes                  Yes
Industry dummies           No                   Yes                  Yes
Observations               116,164              116,164              116,164
R-squared                  0.002                0.007                0.008
Adjusted R-squared         0.002                0.006                0.007


Table 3.6 Cox Proportional Hazard Model for Time to Adding Fitch Rating This table presents Cox Proportional Hazard model regressions of the time to adding a first Fitch rating for firms with an active rating from both S&P and Moody's. The dependent variable is dichotomous, taking a value of 1 when Fitch issues a first-time rating in a given month. Key explanatory variables are the rating category dummies based on average Moody's and S&P (MSP) ratings, measures for whether the MSP ratings are split across the Investment Grade or A- cutoffs, a variable indicating whether the MSP ratings disagree and the absolute value of rating difference between them (indicating uncertainty in the existing ratings). T-statistics are in parentheses. *, **, and *** indicate statistical significance at the p < 0.05, p < 0.01, and p < 0.001 levels respectively. Standard errors are clustered by issuer.

                              Model 1    Model 2    Model 3    Model 4    Model 5
MSP A+ to A-                  0.175      0.104      0.148      0.156      0.970*
                              (0.76)     (0.47)     (0.64)     (0.68)     (2.32)
MSP BBB+ to BBB-              -0.152     -0.0588    -0.175     -0.143     0.822*
                              (-0.70)    (-0.28)    (-0.81)    (-0.66)    (1.99)
MSP BB+ to BB-                -0.683**   -0.636*    -0.658**   -0.688**   0.711
                              (-2.69)    (-2.50)    (-2.58)    (-2.73)    (1.45)
MSP B+ to B-                  -2.289***  -2.301***  -2.245***  -2.257***  0.590
                              (-7.64)    (-7.70)    (-7.41)    (-7.56)    (1.11)
MSP Below B-                  -2.736***  -2.751***  -2.640***  -2.808***  -0.763
                              (-4.49)    (-4.52)    (-4.30)    (-4.56)    (-0.67)
Tiebreaker IG                 0.605**    --         0.711**    0.578*     0.670*
                              (2.58)                (2.87)     (2.51)     (1.99)
Tiebreaker A-                 -0.323     --         -0.217     -0.321     -0.561
                              (-1.29)               (-0.81)    (-1.29)    (-1.30)
S&P and Moody's Disagree      -0.0804    --         -0.0417    -0.103     -0.154
                              (-0.54)               (-0.30)    (-0.71)    (-0.70)
Notches MSP Rating Dispersion --         -0.0973    --         --         --
                                         (-1.41)
MSP Upgrade                   --         --         --         1.541***   1.623***
                                                               (6.04)     (4.34)
MSP Downgrade                 --         --         --         0.713*     1.195***
                                                               (2.36)     (3.58)
Industry Control Variables    No         No         No         No         Yes
Observations                  58,708     58,708     58,708     58,708     26,680
Pseudo R-squared              0.058      0.056      0.059      0.066      0.115


Table 3.7 Cox Proportional Hazard Model for Time to Adding Egan-Jones Rating This table presents Cox Proportional Hazard model regressions of the time to adding a first Egan-Jones rating for firms with an active rating from both S&P and Moody's. The dependent variable is dichotomous, taking a value of 1 when Egan-Jones issues a first-time rating in a given month. Key explanatory variables are the rating category dummies based on average Moody's and S&P (MSP) ratings, measures for whether the MSP ratings are split across the Investment Grade or A- cutoffs, a variable indicating whether the MSP ratings disagree and the absolute value of rating difference between them (indicating uncertainty in the existing ratings). T-statistics are in parentheses. *, **, and *** indicate statistical significance at the p < 0.05, p < 0.01, and p < 0.001 levels respectively. Standard errors are clustered by issuer.

                              Model 1    Model 2    Model 3    Model 4    Model 5
MSP A+ to A-                  0.597      0.541      0.579      0.595      0.136
                              (1.63)     (1.49)     (1.56)     (1.61)     (0.32)
MSP BBB+ to BBB-              0.746*     0.747*     0.762*     0.757*     0.371
                              (2.12)     (2.14)     (2.17)     (2.14)     (0.89)
MSP BB+ to BB-                0.679      0.691      0.715      0.684      -0.0618
                              (1.86)     (1.90)     (1.95)     (1.87)     (-0.13)
MSP B+ to B-                  -0.245     -0.245     -0.187     -0.236     -0.556
                              (-0.68)    (-0.67)    (-0.52)    (-0.65)    (-1.10)
MSP Below B-                  0.136      0.137      0.331      -0.212     -0.365
                              (0.33)     (0.33)     (0.79)     (-0.52)    (-0.59)
Tiebreaker IG                 0.150      --         0.352      0.132      0.350
                              (0.48)                (1.07)     (0.44)     (1.15)
Tiebreaker A-                 -0.292     --         -0.165     -0.312     0.397
                              (-0.81)               (-0.44)    (-0.87)    (0.93)
S&P and Moody's Disagree      -0.581***  --         -0.595***  -0.591***  -0.259
                              (-3.72)               (-4.01)    (-3.85)    (-1.36)
Notches MSP Rating Dispersion --         -0.312***  --         --         --
                                         (-3.62)
MSP Upgrade                   --         --         --         1.105**    1.536***
                                                               (3.00)     (4.33)
MSP Downgrade                 --         --         --         1.710***   1.867***
                                                               (8.98)     (8.37)
Industry Control Variables    No         No         No         No         Yes
Observations                  63,460     63,460     63,460     63,460     23,312
Pseudo R-squared              0.022      0.021      0.022      0.040      0.079


Table 3.8 Logistic Regressions for Adding a Fitch or Egan-Jones Rating
This table presents logistic regressions of the likelihood of Fitch or Egan-Jones initiating rating coverage in a given month. The risk set is all firms having active ratings from both S&P and Moody's in that month but yet to receive a rating from Fitch or Egan-Jones. The dependent variable is dichotomous, taking a value of 1 when the focal agency (Fitch or Egan-Jones) issues a first-time rating in the month. Key explanatory variables are the rating category dummies based on average Moody's and S&P (MSP) ratings, measures for whether MSP ratings are split across the Investment Grade or A- cutoffs, whether MSP disagree (indicating uncertainty in the existing ratings), and variables indicating upgrade and downgrade activity by Moody's and S&P. Coefficients are odds ratios with standard errors clustered at the firm/issuer level.

                              Fitch      Fitch      Egan-Jones Egan-Jones
MSP Rating A+ to A-           1.01       0.99       1.84       1.76
                              (0.23)     (0.23)     (0.81)     (0.78)
MSP Rating BBB+ to BBB-       0.84       0.82       1.64       1.60
                              (0.19)     (0.18)     (0.66)     (0.65)
MSP Rating BB+ to BB-         0.67       0.63       1.36       1.28
                              (0.17)     (0.16)     (0.59)     (0.56)
MSP Rating B+ to B-           0.15***    0.15***    0.57       0.54
                              (0.04)     (0.04)     (0.25)     (0.23)
MSP Rating C+ to D            0.05***    0.05***    0.57       0.40
                              (0.03)     (0.03)     (0.28)     (0.20)
Tiebreaker Investment Grade   1.77*      1.73*      1.12       1.06
                              (0.45)     (0.43)     (0.35)     (0.33)
Tiebreaker A-                 0.71       0.72       0.74       0.72
                              (0.18)     (0.18)     (0.29)     (0.28)
S&P, Moody's Disagree         0.94       0.92       0.68*      0.68*
                              (0.15)     (0.14)     (0.11)     (0.11)
S&P Upgrade                   --         3.81***    --         1.61
                                         (1.30)                (1.03)
S&P Downgrade                 --         0.50       --         3.62***
                                         (0.31)                (1.05)
Moody's Upgrade               --         4.32***    --         3.52*
                                         (1.40)                (1.75)
Moody's Downgrade             --         2.99**     --         3.03***
                                         (1.02)                (0.97)
Industry Control Variables    Yes        Yes        Yes        Yes
Observations                  53,605     53,605     62,559     62,559
Pseudo R-squared              0.08       0.09       0.11       0.13
Issuers                       1,535      1,535      1,630      1,630


Table 3.9 Logistic Regression for Fitch or Egan-Jones Rating Coverage
This table presents logistic regressions of the likelihood of having a Fitch or Egan-Jones rating in a given month. The risk set is all firms having active ratings from both S&P and Moody's in that month. The dependent variable is dichotomous, taking a value of 1 when the focal agency (Fitch or Egan-Jones) has an active rating in the month. Key explanatory variables are the rating category dummies based on average Moody's and S&P (MSP) ratings, measures for whether S&P and Moody's ratings are split across the Investment Grade or A- cutoffs, whether Moody's and S&P disagree (indicating uncertainty in the existing ratings), and variables indicating upgrade and downgrade activity by S&P and Moody's. Coefficients are odds ratios with standard errors clustered at the firm/issuer level.

                              Fitch      Fitch      Egan-Jones Egan-Jones
MSP Rating A+ to A-           1.01       1.01       2.44**     2.44**
                              (0.21)     (0.21)     (0.75)     (0.75)
MSP Rating BBB+ to BBB-       0.92       0.92       2.80***    2.79***
                              (0.19)     (0.19)     (0.85)     (0.85)
MSP Rating BB+ to BB-         0.34***    0.34***    1.79       1.78
                              (0.08)     (0.08)     (0.57)     (0.57)
MSP Rating B+ to B-           0.07***    0.07***    0.62       0.62
                              (0.02)     (0.02)     (0.20)     (0.20)
MSP Rating C+ to D            0.04***    0.04***    0.64       0.60
                              (0.01)     (0.01)     (0.24)     (0.23)
Tiebreaker Investment Grade   0.62*      0.61*      1.12       1.11
                              (0.14)     (0.14)     (0.27)     (0.26)
Tiebreaker A-                 0.82       0.82       0.87       0.87
                              (0.14)     (0.14)     (0.18)     (0.18)
S&P, Moody's Disagree         1.20       1.20       0.63***    0.63***
                              (0.13)     (0.13)     (0.07)     (0.07)
S&P Upgrade                   --         1.21*      --         1.16
                                         (0.12)                (0.13)
S&P Downgrade                 --         1.11       --         1.34***
                                         (0.08)                (0.10)
Moody's Upgrade               --         1.31**     --         0.94
                                         (0.13)                (0.10)
Moody's Downgrade             --         1.26**     --         1.47***
                                         (0.09)                (0.10)
Observations                  79,859     79,859     79,859     79,859
Pseudo R-squared              0.15       0.15       0.07       0.07
Issuers                       1,769      1,769      1,769      1,769


Figure 3.1 Egan-Jones Web Site Snapshot


Figure 3.2 Rating Coverage by Agency

[Line chart: number of firms rated per month, 2000-2009, with one series per agency (S&P, Moody's, Fitch, Egan-Jones); y-axis 0 to 4,000 firms.]

Note: In March 2003, S&P withdrew its unsolicited ratings of insurance firms.

Figure 3.3 Cumulative Hazard – Fitch Rating by Rating Category

[Nelson-Aalen cumulative hazard curves (0 to 0.80) for receiving a Fitch rating over analysis time, by MSP rating category: Above A+, A+ to A-, BBB+ to BBB-, BB+ to BB-, B+ to B-, Below B-.]

Note: Cumulative hazards based on Nelson-Aalen estimates
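For reference, the Nelson-Aalen estimates underlying these figures can be reproduced in outline with the lifelines library; the spell file and column names below (fitch_spells.csv, duration, event, msp_category) are hypothetical.

```python
import pandas as pd
from lifelines import NelsonAalenFitter

# One row per firm: months at risk before receiving a Fitch rating
# (duration), whether the rating arrived during the window (event),
# and the firm's MSP rating category at entry.
spells = pd.read_csv("fitch_spells.csv")

naf = NelsonAalenFitter()
for category, grp in spells.groupby("msp_category"):
    # Nelson-Aalen estimate of the cumulative hazard for this category.
    naf.fit(grp["duration"], event_observed=grp["event"], label=category)
    print(naf.cumulative_hazard_.tail(1))  # value at the end of analysis time
```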


Figure 3.4 Cumulative Hazard – Fitch Rating by Tiebreaker Status

[Nelson-Aalen cumulative hazard curves (0 to 1.00) for receiving a Fitch rating over analysis time, comparing firms where Moody's and S&P disagree on investment grade (Fitch tiebreaker) with firms where they agree.]

Note: Cumulative hazards based on Nelson-Aalen estimates

Figure 3.5 Cumulative Hazard – Egan-Jones Rating by Rating Category

[Nelson-Aalen cumulative hazard curves (0 to 0.50) for receiving an Egan-Jones rating over analysis time, by MSP rating category: Above A+, A+ to A-, BBB+ to BBB-, BB+ to BB-, B+ to B-, Below B-.]

Note: Cumulative hazards based on Nelson-Aalen estimates


Figure 3.6 Cumulative Hazard – Egan-Jones Rating by Tiebreaker Status

[Nelson-Aalen cumulative hazard curves (0 to 0.40) for receiving an Egan-Jones rating over analysis time, comparing firms where Moody's and S&P disagree on investment grade (tiebreaker) with firms where they agree.]

Note: Cumulative hazards based on Nelson-Aalen estimates

Chapter 4
Regulatory Convergence: An Exploratory Examination of Government Film Classification

4.1 Introduction

A variety of solutions exist for addressing market imperfections such as information asymmetry between sellers and buyers. While the previous two chapters have focused on the role of for-profit rating agencies in facilitating information disclosure, other options include communitarian self-regulation led by industry associations or direct government responsibility for evaluating seller offerings and disclosing the results to buyers.

This study analyzes the role of government agencies in assessing seller offerings and, in particular, the key challenge regulators face in determining the appropriate geographic scope of regulation: municipal, state/provincial, national or international. In an economic environment where products and information increasingly cross regional and national boundaries, there is growing interest in regulatory harmonization and convergence (Vogel & Kagan, 2004). However, initial choices regarding geographic scope can have long-lasting ramifications, given the well-documented tendency for regulatory policy to persist, even when inefficient (Coate & Morris, 1999; Warren & Wilkening, 2010).

I examine the motion picture industry, a globalized market where products are increasingly distributed worldwide but the assignment of age-appropriateness ratings is carried out in parallel at the national or even regional level. I address two research questions. First, how closely aligned are the decisions made by these parallel regulators? Second, what is the trend in consistency over time?

I focus specifically on Canada, one of only two major countries in the world where authority for film classification rests at the provincial/regional level rather than at the national level.1 The

1 Switzerland also has a regional structure with rating authorities at the canton level. Canada is also the only developed country in the world without a national body responsible for regulating financial securities (Expert Panel on Securities Regulation, 2009).

classification boards themselves emphasize variation in Canadian community standards as a primary justification for why Canada still has six film classification bodies when almost all other countries have one.2

The analysis that follows tests the extent of this variation based on past classification decisions. I find a surprisingly high level of consistency in the decisions of these six provincial bodies, with only one province, Quebec, proving to be somewhat of an outlier. Regional consistency increases further between 1993 and 2007, to the point where the differences in classification decisions between any two regulatory bodies outside Quebec are no longer statistically significant.

The observed convergence in classification decisions in this setting raises broad questions about the net impact of fragmented or concentrated regulatory structures on industry participants. Under a fragmented structure where firms of all sizes are required to pay standard rates to each regulatory body, the cost of compliance is more significant for smaller firms than for larger firms, potentially reducing the level of variety and diversity in the market. In addition, larger firms may also be able to exert more influence over a series of smaller government regulatory bodies than over a single large government regulator.

This paper seeks to contribute to the broader literature on quality disclosure and certification by offering new empirical evidence relevant to the theoretical debate on the appropriate role of government in the disclosure of product information.

The rest of the paper is organized as follows. Section 4.2 describes the relevant literature and theory. Section 4.3 describes my data source, sample and measures. Section 4.4 describes my empirical approach and results. Discussion and conclusion are found in Section 4.5.

4.2 Literature and Theory

Information asymmetries that exist between producers and consumers may provide an incentive for producers to misrepresent product attributes. While such asymmetries may be minimal for

2 From the Ontario Film Classification Board's FAQ page: Q: "Why does Canada have provincial classification boards? Why not have one board for all of Canada?" A: "The provinces of Canada have been responsible for regulating film exhibitions for almost 100 years. Canada's diverse population is spread across a huge geographic expanse so community standards may differ across the land. Film boards reflect the community standards which are mirrored in their composition." http://www.ofrb.gov.on.ca/english/faq_page3.htm accessed on Jan 28, 2010.

goods that can be directly observed prior to purchase (“search” goods), the asymmetries are often much higher for goods that consumers must use in order to verify quality (“experience” goods) (Nelson, 1970). Movies, and many other cultural products, fall into the category of experience goods.

Classification of a product or process, assigning it a specific location within a classification scheme, is one form of information disclosure that can reduce asymmetry and prevent misrepresentation.3 This classification can be performed by some combination of efforts from sellers, third-party organizations (as examined in Chapters 2 and 3), and governmental organizations. Determining the appropriate role of government in mandating or facilitating such disclosure is a key concern to policymakers as well as an important theoretical issue in the strategy and economics literature on quality disclosure and certification (Dranove & Jin, 2010).

In the movie industry, where goods can be evaluated on many dimensions but cannot be directly observed prior to purchase, concerns over exposure to certain types of visual imagery, particularly among children, have made age-appropriateness classification a common practice in most developed countries.4 Past research has shown that a movie's age classification has a significant effect on box-office success in various countries (Leenders & Eliashberg, 2004), making the chosen regulatory structure strategically important to industry participants in addition to being socially important.

In this context I examine two related empirical questions. First, when multiple regulators regulate a common set of products during the same time period, how closely aligned are their rating decisions? Second, is there a trend towards convergence of classification opinions over time? Prior research highlights multiple factors that can affect the level of consistency among multiple decision makers closely associated with one another. I will start by reviewing factors expected to increase consistency and then review those expected to decrease consistency.

3 Other mechanisms include reputation, experience, warranties, licensing, and minimum quality standards. 4 For firms, certification decisions can function as entry barriers, limiting or eliminating distribution options for films that receive unfavorable certifications for their content or are produced by niche firms lacking the resources or legitimacy to undergo certification (Sandler, 2001).

78

4.2.1 Factors Increasing Consistency

The first factor recognized to contribute to consistency across organizations is institutional isomorphism. DiMaggio and Powell (1983) describe this concept as forces of rationalization and bureaucratization experienced by a set of organizations once they emerge as a field, and identify the types of organizations they expect to be more vulnerable to these forces than others. They suggest that organizations that are dependent on other organizations will become increasingly alike and that uncertainty and goal ambiguity will cause an organization to model itself after a successful organization.

These characteristics appear to fit the circumstances of Canadian film review boards. These boards are completely dependent on their sponsoring governments, which grant the authority to classify films and typically appoint all of the review board's members. Significant uncertainty and ambiguity exist both in the classification guidelines used to rate individual films and in the overall priorities of the review boards, which encompass protecting youth from harmful film content, encouraging provincial film production, and interfacing with film distributors. Canadian regulators belong to a common association that organizes regular national conferences, providing a useful mechanism for ideas and norms to spread. These Canadian organizations also operate in the shadow of the U.S. industry-led MPAA, which typically makes its own influential rating decision widely known before its Canadian counterparts announce theirs.5 Unlike competing for-profit firms that could differentiate based on their rating decisions in an attempt to increase demand for their ratings, Canadian film boards have a geographic monopoly and may see consistency as a more advantageous means to longevity.

Separate research in economics tells us that public regulation may be “captured” by industry, causing government decisions to reflect the outcomes preferred by industry constituents, a result of major industries' differential advantage in coordination and incentives over individual consumers or other stakeholders (Peltzman, 1976; Stigler, 1971). Analysis of asymmetric classification within the decisions of the U.S. MPAA has identified this type of regulatory capture as a potential contributing factor in more lenient ratings being granted to films

5 The MPAA provides a voluntary classification service that does not carry the force of legal statute but is critical in determining theatrical distribution options.

of major studios and influential directors (Waguespack & Sorenson, 2010). Canadian motion picture distribution is dominated by films from the same major studios as in the United States, and these firms have worked cooperatively on Canadian issues through a trade association since 1920.6 If the major studios are influential in the decisions of individual provincial review boards, this would likely increase consistency among them.

Finally, an economic literature on herding in settings with multiple rating agencies and non-simultaneous announcements of ratings is also relevant here. It is argued that agencies face a choice between herding, strategically agreeing with earlier ratings for reputational reasons despite differing private assessments (Graham, 1999), and anti-herding, strategically choosing to disagree with previously issued ratings (Effinger & Polborn, 2001). Herding is expected to be more likely when an agency has high reputation or low ability, or if there is strong public information that contradicts a rating agency's private information. It is also expected to increase if private signals across raters are positively correlated.

This herding theory also appears to fit the circumstances of government film classification boards in Canada. Information on a film's U.S. classification, along with its underlying content, is increasingly available through film trailers and entertainment shows on television, web sites and even mobile phone applications. This is particularly true for controversial films or those expected to be box office hits. In addition, each province has an appeal process in which the rating decisions of peer boards are often cited as evidence. In this environment, Canadian raters may be hesitant to vary significantly from their North American counterparts.

4.2.2 Factors Decreasing Consistency

Previous research also suggests factors that may decrease consistency among decision makers. First, motivation to “anti-herd” is expected to be high if the value of being considered the only smart agent is sufficiently large relative to the value of being one of two smart agents (Effinger & Polborn, 2001). However, in contrast to providing investment advice, for example, where the

6 The Canadian Motion Picture Distributors Association was established in 1920 and formally incorporated in 1976. “CMPDA serves as the voice and advocate of the major U.S. studios whose distribution divisions market feature films, prime time entertainment programming for television and pay TV, and pre-recorded videos and DVD's in Canada.” Accessed on June 12, 2009 at http://www.cmpda.ca/jsp/aboutus.jsp.

rewards of providing clients with a successful contrarian recommendation can be significant, the upside for individual Canadian film classification boards to classify a movie quite differently than others would seem to be much lower.

Another factor that could be expected to reduce alignment is the subjectivity of film classification. While many forms of classification involve subjectivity, consistently mapping the presence of sexual content, violence, profanity and other thematic elements onto a linear scale of age restrictions is particularly difficult even if all agencies attempt to use consistent standards. However, this factor is partially mitigated if a U.S. rating has been released prior to Canadian decisions being announced or if raters are aware of decisions being made in other Canadian provinces.

When asked about Canada's unique regional classification system, the organizations often point to two reasons not yet discussed. The first, history, is no doubt relevant given the bureaucratic nature of film review boards but in itself suggests little about the level of consistency that should be expected. In contrast, the second, variation in community standards, suggests that the agencies may have valid reasons to disagree on age classifications as a reflection of cultural differences between their constituents.

The balance between these factors and the resulting level of alignment among Canadian classification boards is an empirical question that does not lend itself to a clear prediction. If there is variation in community standards that outweighs other factors, board decisions should conflict frequently. If not, institutional pressures and incentives for herding should lead to higher consistency.

The stability of the factors identified above over time should determine whether the level of consistency changes significantly between 1993 and 2007. Canadian film classification boards have existed for almost 100 years, and trends of increased interprovincial mobility and communications links, as well as evolving technology, started well before 1993. However, there have been a number of notable structural changes in the industry during this period that suggest increased integration. These include the introduction of a national, rather than regional, home video rating system, adoption of the American industry-led Entertainment Software Rating Board (ESRB) video game standard for all of Canada, and shared film classification among some

of the smaller provinces, such as the Maritime Film Classification Board, created in 1994 to classify movies in Nova Scotia, New Brunswick and Prince Edward Island.7

4.3 Data and Sample

4.3.1 Sample

My sample of ratings is drawn from Canadian film classification, which dates back to 1911. There has been only limited interprovincial consolidation since then, leaving six remaining classification boards with legal authority to control public exhibition of films, each requiring considerable human resources.8 Five of the six boards also classify home videos.9 Detail on Canadian film classification boards is provided in Table 4.1 and a profile of one classification board, the Ontario Film Review Board, is provided in Table 4.2.

Classification schemes have varied over time but took a big step towards standardization in 2005 when all provinces other than Quebec adopted a standard 5-level classification scale.10 Provincial variation still exists in content advisories, labeling and communication accompanying these ratings and the provinces have continued to make minor changes in these areas.11

4.3.2 Measures

I obtain movie-specific information from the Internet Movie Database (IMDB). This database includes a list of classifications received by movies across countries (and province/canton in the case of Canada and Switzerland), the release dates of the movie by country, and other descriptive characteristics. Although IMDB was acquired by Amazon.com in 1998, the majority of its content is user-created, benefitting from the careful peer-review of movie aficionados from

7 Newfoundland, the other East Coast Canadian province, does not have a film classification office and does not follow a classification system. Many theatres in Newfoundland voluntarily use the classifications assigned by the Maritime Film Board. 8 In 2008, Quebec's Régie du Cinéma had 48 full-time employees, while Ontario had 20 part-time reviewers, up from three reviewers at the time of inception in 1911. 9 The Canadian Home Video Rating System (CHVRS), a voluntary industry association, also calculates the average of the theatrical ratings assigned by the government classification bodies and displays this average on all VHS/DVDs nationwide. 10 Quebec maintains a unique G, 13+, 16+, 18+ classification scheme that does not include a PG rating. 11 For example, Alberta has passed Bill 18 to expand its movie regulation and increase inspections and fines (CBC News 2009).

around the world as well as industry participants.12 To supplement IMDB, I obtained box office revenue and parent studio data from Variety magazine and manually collected information on each of the Canadian film boards. I limited the data to movies whose first worldwide theatrical release occurred during the period January 1, 1993 to December 31, 2007 with at least one Canadian classification listed in IMDB (Table 4.3).

4.3.2.1 Dependent Variables

I employ two distinct dependent variables in my analysis. First, I use a consistency score construct adapted from Jin, Kato et al. (2010).13 For any pair of raters and any pair of movies (A & B), I define three possible grading outcomes. Raters are strongly consistent if both said A>B, both said A=B, or both said A<B. Raters are weakly inconsistent if one said A=B and the other said A>B or A<B. Raters are strongly inconsistent if one said A>B and the other said A<B.

The basic unit of analysis is the “movie pair-province pair” and consistency can be assessed for every pair of provinces that rated the same two movies. As an example, consider the movies Mr. & Mrs. Smith and 40-Year Old Virgin, both first released in 2005. As shown in Table 4.4, these movies were both rated in Alberta, Ontario, Nova Scotia and Quebec, allowing for multiple province-pair consistency scores.14 No pair of provinces has strongly inconsistent ratings for these two movies, but only two (AB & ON, NS & PQ) have strong consistency. All other

12 Spot checks against the online records of Canadian film boards suggest a high level of accuracy. 13 The authors analyze sport card quality certification using six “raters” who independently rate the same 212 sports cards. Raters assess quality as a number between 1 and 10 but use different intermediate grade levels. 14 I only show ratings from four of the provinces in Table 4.4 to simplify the example. Strong inconsistency is by far the rarest of the three outcomes in the dataset.

provincial pairs are weakly inconsistent, meaning that one province saw the two movies as being of the same age-appropriateness while the other province assigned the movies to two different classifications.
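To make the construct concrete, a minimal sketch of the consistency-score computation follows; the input file and column names (ratings.csv with movie, province, ordinal) are hypothetical, ratings are assumed to be pre-converted to ordinal values, and same-year pair matching is assumed to be handled upstream.

```python
import pandas as pd
from itertools import combinations

def verdict(a, b):
    """One province's ordinal comparison of a movie pair: 1 (A>B), 0 (A=B), -1 (A<B)."""
    return (a > b) - (a < b)

def consistency(v1, v2):
    """Classify a movie pair-province pair following Jin, Kato et al. (2010)."""
    if v1 == v2:
        return "strongly consistent"       # both said A>B, A=B, or A<B
    if v1 == -v2 and v1 != 0:
        return "strongly inconsistent"     # one said A>B, the other A<B
    return "weakly inconsistent"           # one said A=B, the other did not

ratings = pd.read_csv("ratings.csv")       # long format: movie, province, ordinal
wide = ratings.pivot(index="movie", columns="province", values="ordinal")

scores = []
for m1, m2 in combinations(wide.index, 2):          # movie pairs
    for p1, p2 in combinations(wide.columns, 2):    # province pairs
        cell = wide.loc[[m1, m2], [p1, p2]]
        if cell.isna().any().any():                 # both provinces must rate both movies
            continue
        scores.append((m1, m2, p1, p2,
                       consistency(verdict(cell.at[m1, p1], cell.at[m2, p1]),
                                   verdict(cell.at[m1, p2], cell.at[m2, p2]))))
```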

This dependent variable comes with both advantages and disadvantages. It allows comparison of classification decisions across provinces despite the use of different rating levels and admission conditions, since all ratings are converted to ordinal values and banned movies can be placed at the most restrictive end of the ordinal scale. On the other hand, movie pair-level disagreement scores are unfamiliar measures, which complicates the interpretation of the results.

As an alternative dependent variable, I follow previous research on movie ratings (Leenders & Eliashberg, 2004) and calculate ratingage by converting all assigned movie ratings to a numerical equivalent.15 As shown in Figure 4.1, Canadian regulators employ fairly similar rating systems in terms of both the number of rating levels and their distribution in practice. Quebec's classification board is most likely to classify films as “all ages” and least likely to classify them as “18+”. Unfortunately, the ratingage dependent variable cannot account for related admission conditions such as required parental accompaniment. Also, in some cases the cardinal distance between two age-appropriateness ratings significantly misrepresents their practical difference in terms of public perception and box office impact (for example, 18A and R). In contrast, the wide numerical difference between G (all ages) and PG ratings (age 10+) is much less significant economically and culturally.
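As an illustration of the conversion, a small sketch follows; apart from the G (0), PG (10) and 13+ (13) anchors given in the text and footnote, the specific values are assumptions rather than the thesis's exact crosswalk.

```python
# Illustrative label-to-age crosswalk; only G=0, PG=10 and 13+=13 are
# stated in the text, the remaining values are assumptions.
RATING_AGE = {
    "G": 0, "PG": 10, "14A": 14, "18A": 18, "R": 19,   # R > 18A assumed
    "13+": 13, "16+": 16, "18+": 18,
}

def to_ratingage(label: str) -> int:
    """Convert an assigned rating label to its numerical age equivalent."""
    return RATING_AGE[label]

assert to_ratingage("13+") == 13   # a Quebec "13+" rating converts to 13
```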

4.3.2.2 Other Variables

The timing of movie release may vary by province. Therefore, I follow IMDB's convention and define the title year as the year of the movie's first worldwide theatrical release. Unfortunately, the exact date of rating assignment in each province is not publicly known. For many films, I can only obtain the release date when the movie first appeared in theatres somewhere in Canada. I use movie characteristic variables, including primary genre, as provided in IMDB, and a dummy variable, major, indicating movies from one of the traditional major studios or their

15 For example, a “13+” rating assigned in Quebec would be converted to 13.

subsidiaries.16 Data identifying the originating studio was obtained from Variety Magazine for approximately half of the sample. Dummy variables, MadeinUS and MadeinCan, are based on the primary production location of the film.

4.3.3 Descriptive Statistics

The initial sample of 1993-2007 movie ratings consists of 12,215 records at the movie-province pair level. The average movie in the data set has classifications from 5.1 provinces.17 Table 4.5 contains full descriptive statistics at the movie-rating pair level of analysis. Analysis of the release dates shows that the majority of movies are released first in the United States, then in Canada. Within the sample, the mean lag between US and Canadian release is 5.9 days and the standard deviation is high (78 days). Between 2003 and 2007, the gap drops significantly to 1.3 days on average.
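The release-lag statistics can be computed along these lines; releases.csv and its column names are hypothetical stand-ins for the IMDB-derived data.

```python
import pandas as pd

# One row per movie with parsed US and Canadian theatrical release dates.
releases = pd.read_csv("releases.csv",
                       parse_dates=["us_release", "cdn_release"])

# Lag in days between US and Canadian release (negative if Canada was first).
lag = (releases["cdn_release"] - releases["us_release"]).dt.days
print(lag.mean(), lag.std())                 # full-sample mean and std. dev.

# The same mean for titles first released 2003-2007.
recent = releases[releases["titleyear"].between(2003, 2007)]
print((recent["cdn_release"] - recent["us_release"]).dt.days.mean())
```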

4.4 Empirical Approach & Results

4.4.1 Consistency Score Analysis

For my first analytical approach, the unit of observation is the movie-pair/province-pair (two movies both rated in the same two provinces) and the dependent variable is the consistency score (Jin et al., 2010). This approach allows me to account for differences in the composition and grading scales used by individual provinces that make analysis of a single province's rating decisions less informative. Calculation of the consistency scores allows me to generate a variety of descriptive statistics offering comparative insight into decision-making consistency. I discuss three key aspects below: overall consistency, inter-province consistency and time trends. As an external point of reference, I compare the results to the relative consistency between film classification boards in other countries.18

16 Majors consist of: Sony (inc. Columbia/MGM/UA/Tristar), News Corp (20th Century Fox), Disney (inc. Touchstone, Miramax, Hollywood), Time Warner (inc. HBO, New Line/Fine Line, Castle Rock, Picturehouse), Viacom (Paramount, Dreamworks, Orion post 1997), GE/Vivendi (NBC Universal, Focus Features, Rogue). 17 If multiple certifications and release dates exist for a movie within a single province, due to film festivals or re-releases for television or DVD, I have sought to use the first theatrical release or most restrictive certification. 18 A potential obstacle to national alignment is the presence of Quebec as one of the six Canadian regulators. Not only has Quebec chosen not to standardize its rating levels with the other provinces, it classifies a somewhat different mix of films due to language differences. However, I do not expect Quebec's presence to eliminate convergence within the Canadian system.


Overall consistency. The mean level of strong consistency across all pairs in the dataset, regardless of the two provinces chosen, is 85.5%. The interpretation is that for any pair of randomly selected movies (A and B) released in the same year and rated by two of the provinces, there is an 85.5% chance that both provinces will agree that either A>B, A=B or A<B.19

Inter-province consistency. When I evaluate the consistency of ratings for all movies rated within each province pair, strong consistency is highest for Alberta and Manitoba (92.2%) and Alberta and British Columbia (91.5%). Strong consistency is lowest for pairs involving Quebec, specifically Quebec and Nova Scotia (64.5%). These results are shown in Panel A of Figure 4.2. Quebec aside, geographic proximity is a strong predictor of strong rating consistency. Consistency scores between all 15 provincial pairs are shown in Table 4.6, with all non-Quebec pairs showing high consistency.

Time trends. I also find clear evidence of increasing consistency among rating bodies over time. As shown in Panel B of Figure 4.2, Canada's two largest provinces, Ontario and Quebec, show a clear trend towards increased consistency over the past 15 years, particularly a reduction in strong inconsistency. Similarly, a 10-year analysis of all English-speaking province pairs20 (not shown) showed a distinct upward trend in strong consistency that was not as strong for pairs involving Quebec.

4.4.2 RatingAge Analysis

In my second analysis, the unit of analysis is the movie-provincial rating pair. Using ratingage as the dependent variable, I conduct regression analysis showing the relative effect of various factors on the rating assigned to a given movie. Under my first specification (Models 1-4), the assignment of a ratingage to movie i in province j can be written as:

ratingage_ij = δ1(province_j) + δ2(year_t) + δ3(X_i) + ε_ij    (1)

where X_i is a vector of movie-level control variables.

19 The comparable number among the 15 largest countries that regulate nationally during a comparable time period is only 62.9% (Seaborn, 2009), suggesting relatively high Canadian consistency. 20 Insufficient observations for some province pairs during 1993-1997 make a 10-year analysis more reliable.


δ1 is a vector containing our variables of interest that capture province characteristics unchanged during the sample period (e.g. if Alberta certifiers are generally more tolerant towards certain types of movie content than certifiers in other provinces), δ2 is a vector of year fixed effects to account for time trends, δ3 captures the effect of certain movie characteristics, and ε_ij is the remaining error term. Results are summarized in Table 4.7.

Across all four models, genre variation is as expected, with Animation and Horror films receiving the lowest and highest age ratings. Movies from major studios receive age-appropriateness ratings approximately 2 years lower. US productions are also rated slightly lower, whereas movies produced in Canada do not show a strong significant difference.21 The key coefficients of interest, the provincial differences, are all statistically significant when using the 15-year sample period with year dummy variables (Models 2 and 3). Quebec is a notable outlier, with age ratings over 3 years lower than Ontario's when controlling for film characteristics. In contrast, the other four provinces are within one year of Ontario. When I restrict the sample to just 2003-2007 (Model 4), only the Quebec coefficient remains statistically significant and consistent in magnitude, providing support for the hypothesis of convergence among the English-speaking provinces. While these variations are all practically relevant, they pale in comparison to the larger differences found when comparing national rating bodies such as the United States and France (Seaborn, 2009).22
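A minimal sketch of estimating specification (1) follows, assuming a hypothetical file province_ratings.csv laid out like Table 4.5; the statsmodels formula interface with HC1 robust standard errors stands in for whatever software was actually used.

```python
import pandas as pd
import statsmodels.formula.api as smf

movies = pd.read_csv("province_ratings.csv")  # hypothetical movie-rating pairs

# Specification (1): province and title-year effects plus movie-level
# controls, with heteroskedasticity-robust (HC1) standard errors and
# Ontario as the reference province.
model1 = smf.ols(
    "ratingage ~ C(province, Treatment('ON')) + C(titleyear)"
    " + C(genre) + major + MadeinUS + MadeinCan",
    data=movies,
).fit(cov_type="HC1")
print(model1.summary())
```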

Although I use robust standard errors to account for possible heteroskedasticity in the error term, this specification is not ideal due to potential omitted variable bias. However, because the same movie is rated in multiple provinces, a panel data structure can be employed in which a film-specific fixed effect η_i replaces the movie-level control variables and controls for all movie-specific characteristics (both observed and unobserved):23

ratingage_ij = δ1(province_j) + δ2(year_t) + η_i + ε_ij    (2)

21 These origin differences may result from differences both in the types of films submitted by these sources and in film board decision-making. I cannot separate these two effects in my data. 22 In a related paper using a similar data set and time frame (Seaborn, 2009), an average movie receives a rating 11 years higher in the U.S. than in France, where film classification is much more liberal. 23 Film-specific variables for genre and origin no longer appear since they are subsumed within the film fixed effect.
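A companion sketch of specification (2); here the film fixed effect η_i is implemented as a full set of movie dummies (dedicated panel estimators would demean instead), and title-year effects are dropped on the assumption that a film's title year is constant across provinces and therefore absorbed by the film fixed effect.

```python
import pandas as pd
import statsmodels.formula.api as smf

movies = pd.read_csv("province_ratings.csv")  # hypothetical file, as above

# Specification (2): movie dummies absorb every observed and unobserved
# movie-level characteristic, so the province coefficients are identified
# from within-movie variation across provinces (Ontario as reference).
model2 = smf.ols(
    "ratingage ~ C(province, Treatment('ON')) + C(movie_id)",
    data=movies,
).fit(cov_type="HC1")
print(model2.params.filter(like="province"))
```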


The results of the second, preferred, specification are shown in Table 4.8. Again in these models, Ontario is the most conservative province in terms of assigned age for the 15-year sample period, with Alberta becoming slightly more conservative during the most recent five years. Other provincial differences are consistent with the previous 15-year and 5-year analyses, as evidenced by their coefficients. Controlling for composition by using a balanced panel of movies (rated by all six regulators) in Model 6 reduces the size of the provincial differences, but they remain highly statistically significant. When I restrict the sample to 2003-2007 (Model 7), the provincial differences (other than Quebec) are again greatly diminished in both size and statistical significance. These results again provide strong support for the hypothesis of consistency among the English-speaking provinces and convergence over time.

My empirical analysis reveals two key facts. First, movie rating decisions across Canadian provinces are highly consistent (Quebec excepted), both in absolute terms and relative to international jurisdictions. Second, there is a general trend towards increased consistency and fewer strong disagreements, even between Quebec and the other provinces.

4.5 Discussion and Conclusion

Findings of high and increasing interprovincial consistency suggest that significant variation in community standards either no longer exists or is being overlooked in the decisions of Canadian film classification boards outside Quebec. What about alternative reasons for the continued existence of these organizations?

Geographical constraints and diseconomies of scale are no longer plausible reasons given advances in technology and many examples of national-level classification bodies outside of Canada. Provincial governments may still value the boards as a way to maintain influence and control over the film industry (even if not exercised regularly) and collect indirect taxes on movie distribution.24 Incumbent distributors may value the boards as barriers to entry and competition. Inertia may also play a role.

24 Operational costs are primarily borne by film distributors, who are required to pay fees to each board to have their film reviewed. Court records from an Ontario lawsuit show that over five years, the OFRB approved 18,248 of 18,452 films, rejecting 204 films. 97% were approved on first review, and 99% following editing and resubmission. 3% of films and videos (550 in total) were censored during the five-year period.


The findings also invite consideration of the impact of fragmented regulatory systems, strategic opportunities for firms in such settings, and the possibility of policy change.

In terms of impact, it would appear that fragmented regulatory structures generating consistent decisions can place a disproportionate burden on small distributors and niche groups while offering limited benefit to them. This is particularly true when regulatory fees do not scale by revenue or volume. Large firms, the international film distributors in this setting, have resources to overcome regulatory barriers and win appeals from regional boards, can learn from their frequent interaction with regulators, and likely view classification fees as a trivial fraction of revenues. In contrast, small niche distributors who lack these advantages, yet bear a comparable cost of classification, may think twice about extending distribution into smaller regions or distributing their products in the country at all.25 Waguespack & Sorenson (2010) demonstrate that films by major studios receive more lenient treatment by the industry-led MPAA in the U.S. and my findings raise the possibility that merely having government regulators may not be enough to prevent similar outcomes elsewhere.

“Corporate political strategy” or “non-market strategy” offer insight into how a firm can interact with political institutions to change the competitive landscape and obtain competitive advantage (De Figueiredo, 2009; Spulber, 1989). Some firms, such as the major international film distributors in this setting, may benefit from maintaining the status quo and thus are best served by focusing efforts on compliance and increasing their influence. In contrast, less powerful organizations, such as independent distributors or film festivals in this setting, may wish to focus on deregulation, which requires a more difficult balancing act of agitating for change while maintaining a relationship with the current regulators.26 In film classification, exploiting the continuing evolution of technology, which provides ubiquitous media access and additional tools that allow moviegoers to become informed, may expose limitations of the current structure and provide an opportunity to build consumer support. Governments may also be receptive to reform

25 A similar argument has been made about the detrimental effects of Canada's provincial securities regulations. 26 In 2004 the Ontario Supreme Court ruled that the Theatre Act, under which the OFRB operated, violated the Canadian Charter of Rights & Freedoms. The case was based on charges laid against a Toronto Gay/Lesbian bookstore for not submitting a film for classification. This court ruling led to Ontario Bill 158, which replaced the Theatre Act with the Film Classification Act (Film Classification Act, 2005), narrowing but not eliminating the censorship role despite strong protests from the opposition parties and a wide range of stakeholder groups (Hansard Session 38-1, 2005).

arguments linked to promoting greater emphasis on the local creative economy in an attempt to make their jurisdictions more attractive to the creative class (Florida, 2004). A third strategic option for firms with little power to change the current system is to bypass it completely using less regulated distribution channels, such as the Internet for film delivery. By disadvantaging legacy industries such as theatrical movie distribution, regulators may provide an unintended benefit to these new channels.

Film classification offers an interesting test case of the possibilities for regulatory change. A variety of policy changes can be envisioned. First, exemption: the least variable content types (e.g. children's movies and adult sex films) could be exempted from review to reduce the burden on both reviewers and distributors. Second, further incremental consolidation could take place between jurisdictions. Canadian provinces could also follow the video game precedent and completely defer to the leading industry-led organization, the U.S. MPAA. Alternatively, film classification could mimic recommendations for Canadian securities regulation by establishing a national classification board (with or without Quebec) or by introducing a reciprocal passport system as an interim measure (Expert Panel on Securities Regulation, 2009).27 Of course, consistency among provinces is a necessary but insufficient condition for change. Many obstacles remain: inertia, familiarity with the status quo, Quebec's unique position as an outlier, a revenue-recovery model that offers governments little direct financial incentive to change, and the lack of an organized interest group to spur change.

The results of this work suggest a number of opportunities for future work both within the entertainment industry and beyond into settings such as environmental, food and consumer product regulation. These include applying additional empirical methods for identifying convergence in regulatory decisions, more precisely distinguishing between the reasons why ratings among different regulatory bodies converge, and examining why industry and government may prefer the status quo in various regulatory settings.

27 Under such a system, classification assigned to a movie in one province would be recognized in all other provinces.

Table 4.1 Regulatory Scope and Classification Levels of Canadian Film Classification Bodies

[Original multi-column layout not fully recoverable from the source text. The table compares seven systems: British Columbia/Saskatchewan/Yukon; Alberta (also serving the Northwest Territories and Nunavut); Manitoba; Ontario; the Maritimes (Nova Scotia, New Brunswick and Prince Edward Island); Quebec; and the Canadian Home Video Rating System (CHVRS). Each provincial board regulates public exhibition (35mm and video), and five of the six also classify home-use video. Outside Quebec, the boards share the levels General (G), Parental Guidance (PG), 14A (persons under 14 must be accompanied by an adult), 18A (persons under 18 must be accompanied by an adult), Restricted (viewing restricted to persons 18 years of age or over) and Adult. Quebec uses G, 13+ (children 12 years of age and under may be accompanied by an adult), 16+ (may be viewed, rented or purchased by persons 16 years of age or over) and 18+. The CHVRS applies parallel levels to home video, with no rental or purchase by those under 14 (14A) or under 18 (18A and Restricted), plus an E (Exempt) category.]



Table 4.2 Overview of the Ontario Film Review Board (OFRB)

Reviewers: 20 part-time reviewers screening films 4-5 times per month, paid $380 per day plus expenses. “Members are appointed by the government from different parts of the province and from different age groups and backgrounds to ensure that there is collective input respecting community standards.”
Chair: Janet Robinson, former nurse, grandparent. Paid $119K in 2008 ($618 per day).
Classification Process: 35mm films are reviewed by a three-person panel, DVDs and videos by a two-person panel, and adult sex films by a single reviewer. The OFRB maintains a 32-page Member Reference Manual.
Fee Structure: $4.20/min of film if English/French; $78.75 total per film if foreign; free if wholly produced in Canada.
Operating Budget (2007-08): $2.8M surplus ($3.7M in fee revenue from distributors less $975K in direct expenses).
Films Screened (2007-08): 4,783 (2,548 adult sex films, 1,578 mainstream English and French, 238 mainstream other language, 419 mainstream trailers).
Source: Ontario Film Review Board, 2007-2008 Annual Report.

Table 4.3 Canadian Classification Boards As Represented in IMDB, 1993-2007

Classification Board     Films Classified    Classification Body Name
Ontario                  2,904               Ontario Film Classification Board
Quebec                   2,273               La Régie du Cinéma du Québec
Manitoba                 1,851               Manitoba Film Classification Board
Nova Scotia              1,775               Maritime Film Classification Board
British Columbia         1,737               British Columbia Film Classification Board
Alberta                  1,675               Alberta Film Classification Board

Note: Based on number of unique classifications of theatrically released films in IMDB.


Table 4.4 Example Movie Pair for Calculation of Consistency Score

Movie Title                  Mr. & Mrs. Smith                                      40-Year Old Virgin
Genre                        Action                                                Comedy
Year                         2005                                                  2005
Budget                       $110M                                                 $26M
Worldwide Box Office         $186M                                                 $109M
Sample of Ratings Received   Alberta/Nova Scotia: 14A, Ontario: PG, Quebec: 13+    Alberta: 18A, Nova Scotia/Ontario: 14A, Quebec: 13+

Table 4.5 Variable Definitions and Summary Statistics (where each observation is a movie-rating pair)

Variable     Description                                 Obs      Mean    Std. Dev.   Min     Max
titleyear    Year of movie's first release               12,215   2002    3.89        1993    2007
titlecerts   # of certifications for movie               12,215   5.11    1.63        1       6
ratingage    Age rating in province                      12,200   12.29   5.27        0       19
major        Movie comes from a major studio             8,010    0.79    0.41        0       1
AB           Observation is an Alberta rating            12,215   0.14    0.34        0       1
BC           Observation is a British Columbia rating    12,215   0.14    0.35        0       1
MB           Observation is a Manitoba rating            12,215   0.15    0.36        0       1
NS           Observation is a Nova Scotia rating         12,215   0.15    0.35        0       1
ON           Observation is an Ontario rating            12,215   0.24    0.43        0       1
QC           Observation is a Quebec rating              12,215   0.19    0.39        0       1
MadeinCan    Primary Production Country = Canada         12,215   0.15    0.35        0       1
MadeinUS     Primary Production Country = USA            12,215   0.58    0.49        0       1
vtotbo2      Max US domestic box office ($M)             8,010    $43.7   $63.0       $0.0    $601.0
vforbo       Max foreign box office ($M)                 8,010    $34.4   $86.5       $0.0    $1,210.0
vmaxscr      Max weekly screen count                     8,010    1,665   1,205       0       4,223
vweeks       Weeks in theaters                           8,010    15.05   8.70        1       68
lag          Days between US and Canada release          2,434    5.89    77.93       -735    560
Source: IMDB, Variety Magazine


Table 4.6 Film Classification Consistency Scores by Province

Sample: movie pairs (Movie A & B) with initial release dates in the same year.

Panel 1: % strongly consistent (both provinces said movie A > B, A = B or A < B)
                    AB      BC      MB      NS      ON      QC
AB                  0.000
BC                  0.915   0.000
MB                  0.922   0.904   0.000
NS                  0.873   0.843   0.854   0.000
ON                  0.910   0.878   0.890   0.718   0.000
QC                  0.830   0.723   0.703   0.645   0.701   0.000
Sum (except self)   4.450   4.263   4.274   3.932   4.097   3.602
Avg (except self)   0.890   0.853   0.855   0.786   0.819   0.720
Rank by average     1       3       2       5       4       6

Panel 2: % weakly inconsistent (one province said A = B, the other said A > B or A < B)
                    AB      BC      MB      NS      ON      QC
AB                  0.000
BC                  0.082   0.000
MB                  0.076   0.094   0.000
NS                  0.122   0.147   0.137   0.000
ON                  0.089   0.110   0.108   0.275   0.000
QC                  0.159   0.271   0.257   0.319   0.267   0.000
Sum (except self)   0.527   0.704   0.672   0.998   0.848   1.273
Avg (except self)   0.105   0.141   0.134   0.200   0.170   0.255
Rank by average     1       3       2       5       4       6

Panel 3: % strongly inconsistent (one province said movie A > B, the other said movie A < B)
                    AB      BC      MB      NS      ON      QC
AB                  0.000
BC                  0.003   0.000
MB                  0.002   0.002   0.000
NS                  0.006   0.011   0.009   0.000
ON                  0.002   0.011   0.003   0.008   0.000
QC                  0.011   0.007   0.039   0.036   0.032   0.000
Sum (except self)   0.023   0.033   0.055   0.069   0.055   0.125
Avg (except self)   0.005   0.007   0.011   0.014   0.011   0.025
Rank by average     1       2       4       5       3       6


Table 4.7 Explanatory Factors in Canadian Film Classification Decisions

                      Model 1              Model 2              Model 3              Model 4
Sample                1993-2007            1993-2007            1993-2007            2003-2007
Genre: Adventure      -3.630*** (0.315)    -3.681*** (0.305)    -3.720*** (0.305)    -2.693*** (0.481)
Genre: Animation      -9.329*** (0.303)    -9.362*** (0.300)    -9.380*** (0.297)    -10.18*** (0.418)
Genre: Biography      0.622* (0.298)       0.535 (0.289)        0.496 (0.289)        -0.655 (0.545)
Genre: Comedy         -1.917*** (0.152)    -1.963*** (0.146)    -1.964*** (0.147)    -2.760*** (0.252)
Genre: Crime          2.053*** (0.186)     2.015*** (0.177)     1.972*** (0.177)     1.888*** (0.318)
Genre: Documentary    -1.502** (0.513)     -1.594** (0.501)     -1.606** (0.507)     -1.490* (0.727)
Genre: Drama          -0.189 (0.154)       -0.297* (0.149)      -0.341* (0.149)      -0.427 (0.255)
Genre: Horror         2.753*** (0.196)     2.806*** (0.187)     2.823*** (0.186)     3.576*** (0.277)
Genre: Other          -2.150*** (0.321)    -2.228*** (0.313)    -2.266*** (0.312)    -1.656** (0.636)
Major Studio          -1.999*** (0.132)    -2.007*** (0.130)    -1.908*** (0.132)    -1.428*** (0.226)
Year 1994             -0.0495 (0.428)      -0.108 (0.406)       -0.0976 (0.406)
Year 1995             0.887* (0.406)       0.812* (0.392)       0.825* (0.393)
Year 1996             1.047** (0.364)      0.950** (0.353)      0.992** (0.355)
Year 1997             0.618 (0.372)        0.559 (0.358)        0.568 (0.358)
Year 1998             0.856* (0.354)       0.764* (0.339)       0.775* (0.340)
Year 1999             0.688* (0.348)       0.615 (0.332)        0.625 (0.334)
Year 2000             0.722* (0.336)       0.678* (0.323)       0.631 (0.326)
Year 2001             0.748* (0.329)       0.689* (0.315)       0.670* (0.317)
Year 2002             0.278 (0.323)        0.209 (0.310)        0.207 (0.312)
Year 2003             -0.190 (0.329)       -0.264 (0.316)       -0.260 (0.317)       1.359*** (0.293)
Year 2004             -0.944** (0.341)     -1.047** (0.326)     -1.011** (0.327)     0.601* (0.303)
Year 2005             -0.940** (0.358)     -1.015** (0.342)     -1.019** (0.343)     0.457 (0.321)
Year 2006             -1.372*** (0.393)    -1.450*** (0.376)    -1.467*** (0.377)
Year 2007             -1.005 (0.577)       -1.000 (0.565)       -0.936 (0.564)       0.361 (0.547)
Alberta                                    -0.441** (0.159)     -0.436** (0.159)     0.383 (0.267)
British Columbia                           -0.797*** (0.164)    -0.790*** (0.163)    -0.171 (0.274)
Manitoba                                   -0.441** (0.153)     -0.432** (0.153)     0.294 (0.274)
Nova Scotia                                -0.469** (0.154)     -0.457** (0.153)     -0.279 (0.289)
Quebec                                     -3.259*** (0.183)    -3.252*** (0.183)    -3.352*** (0.312)
MadeinCan                                                       0.264 (0.239)        -0.820* (0.400)
MadeinUS                                                        -0.420*** (0.123)    -0.775*** (0.224)
Constant              14.61*** (0.338)     15.65*** (0.333)     15.87*** (0.350)     14.14*** (0.403)
Observations          7,997                7,997                7,997                2,814
R-squared             0.232                0.278                0.279                0.288
F                     106.9                105.7                100.6                72.10
Notes: Dependent variable is numerical rating age (e.g. PG-13 = 13). Robust standard errors in parentheses. Default genre = Action; default province = Ontario. * p<0.05, ** p<0.01, *** p<0.001.


Table 4.8 Explanatory Factors in Canadian Film Classification Decisions (with Film Fixed Effects)

                      Model 5              Model 6              Model 7
Sample                1993-2007            1993-2007            2003-2007
Panel                 Unbalanced           Balanced             Unbalanced
Alberta               -0.411*** (0.0498)   -0.205*** (0.0473)   0.201** (0.0655)
British Columbia      -0.698*** (0.0545)   -0.410*** (0.0510)   -0.185** (0.0717)
Manitoba              -0.414*** (0.0523)   -0.229*** (0.0489)   0.132 (0.0692)
Nova Scotia           -0.447*** (0.0540)   -0.195*** (0.0479)   -0.136* (0.0689)
Quebec                -2.728*** (0.101)    -1.743*** (0.104)    -2.287*** (0.141)
Constant              13.08*** (0.0383)    12.51*** (0.0390)    12.08*** (0.0483)
Fixed Effects         Film                 Film                 Film
Observations          12,200               8,622                5,595
R-squared             0.190                0.114                0.189
F                     149.2                58.12                63.76
Notes: Dependent variable is numerical rating age (e.g. PG-13 = 13). Robust standard errors in parentheses. Default province = Ontario. * p<0.05, ** p<0.01, *** p<0.001.


Figure 4.1 Distribution of RatingAge by Province

[Histograms of the age equivalent of assigned ratings (0 to 20) by province, in six panels (AB, BC, MB; NS, ON, QC); vertical axis: density. Source: Internet Movie Database (IMDB).]


Figure 4.2 Consistency Levels

Panel A - Consistency Levels by Province Pair: [stacked proportions of strongly consistent, weakly inconsistent and strongly inconsistent movie pairs for each of the 15 provincial regulator pairs.]

Panel B - Degree of Consistency between Ontario & Quebec over Time: [proportions of strongly consistent, weakly inconsistent and strongly inconsistent movie pairs by year of release for both movies in the pair, 1993 to 2007.]

Bibliography

Beaver, W. H., Shakespeare, C., & Soliman, M. T. 2006. Differential Properties in the Ratings of Certified vs. Non-Certified Bond Rating Agencies. Journal of Accounting and Economics, 42(6): 303-334.
Becker, B., & Milbourn, T. T. 2011. How did increased competition affect credit ratings? Journal of Financial Economics, 101(1).
Benabou, R., & Laroque, G. 1992. Using Privileged Information to Manipulate Markets: Insiders, Gurus, and Credibility. The Quarterly Journal of Economics, 107(3): 921-958.
Benmelech, E., & Dlugosz, J. 2010. The Credit Rating Crisis. NBER Macroeconomics Annual 2009, Vol. 24: 161-207. Boston: National Bureau of Economic Research, Inc.
Bertrand, M., Duflo, E., & Mullainathan, S. 2004. How much should we trust differences-in-differences estimates? Quarterly Journal of Economics, 119(1): 249-275.
Board, O. 2009. Competition and Disclosure. The Journal of Industrial Economics, 57(1): 197-213.
Bollinger, B., Leslie, P., & Sorensen, A. 2011. Calorie Posting in Chain Restaurants. American Economic Journal: Economic Policy, 3(1): 91-128.
Bolton, P., Freixas, X., & Shapiro, J. 2011. The Credit Ratings Game. Journal of Finance (forthcoming).
Bongaerts, D., Cremers, K. J. M., & Goetzmann, W. N. 2011. Tiebreaker: Certification and Multiple Credit Ratings. Journal of Finance (forthcoming).
Brown, A. L., Camerer, C. F., & Lovallo, D. 2011. To Review or Not to Review? Limited Strategic Thinking at the Movie Box Office. Working Paper.
Cain, D. M., Loewenstein, G., & Moore, D. A. 2005. The Dirt on Coming Clean: Perverse Effects of Disclosing Conflicts of Interest. Journal of Legal Studies, 34 (January 2005).
Cantor, R., & Packer, F. 1995. The Credit Rating Industry. Journal of Fixed Income, 5(3): 10-34.
Cantor, R., & Packer, F. 1997. Differences of opinion and selection bias in the credit rating industry. Journal of Banking & Finance, 21(10): 1395-1417.
Casadesus-Masanell, R., & Enric Ricart, J. 2007. Competing Through Business Models. SSRN eLibrary.
Chatterji, A. K., Levine, D. I., & Toffel, M. W. 2009. How Well Do Social Ratings Actually Measure Corporate Social Responsibility? Journal of Economics & Management Strategy, 18(1): 125-169.
Chatterji, A. K., & Toffel, M. W. 2008. How firms respond to being rated. Strategic Management Journal, 31(9): 917-945.
Chevalier, J., & Ellison, G. 1999. Career Concerns of Mutual Fund Managers. Quarterly Journal of Economics, 114(2): 389-432.


Coate, S., & Morris, S. 1999. Policy persistence. American Economic Review, 89(5): 1327-1336.
Dafny, L., & Dranove, D. 2008. Do report cards tell consumers anything they don't already know? The case of Medicare HMOs. The RAND Journal of Economics, 39(3): 790-821.
De Figueiredo, J. M. 2009. Integrated Political Strategy. In J. A. Nickerson & B. S. Silverman (Eds.), Economic Institutions of Strategy, Vol. 26: 459-486. Emerald Group Publishing Limited.
De Franco, G., Vasvari, F. P., & Wittenberg-Moerman, R. 2009. The Informational Role of Bond Analysts. Journal of Accounting Research, 47(5): 1201-1248.
Dellarocas, C. 2006. Strategic Manipulation of Internet Opinion Forums: Implications for Consumers and Firms. Management Science, 52(10): 1577-1593.
Dichev, L. D., & Piotroski, J. D. 2001. The long-run stock returns following bond ratings changes. Journal of Finance, 56(1): 173-203.
DiMaggio, P. J., & Powell, W. W. 1983. The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields. American Sociological Review, 48(2): 147-160.
Dranove, D., & Jin, G. Z. 2010. Quality Disclosure and Certification: Theory and Practice. Journal of Economic Literature, 48(4): 28.
Durand, R., Rao, H., & Monin, P. 2007. Code and conduct in French cuisine: Impact of code changes on external evaluations. Strategic Management Journal, 28(5): 455-472.
Effinger, M. R., & Polborn, M. K. 2001. Herding and anti-herding: A model of reputational differentiation. European Economic Review, 45: 385-403.
Egan-Jones. 2011. Application For Registration as a Nationally Recognized Statistical Rating Organization (NRSRO), Vol. 2011.
Emerson, R. M. 1962. Power-Dependence Relations. American Sociological Review, 27(1): 31-41.
Espeland, W. N., & Sauder, M. 2007. Rankings and Reactivity: How Public Measures Recreate Social Worlds. American Journal of Sociology, 113(1): 1-40.
Expert Panel on Securities Regulation. 2009. Final Report and Recommendations. Ottawa: Department of Finance Canada.
Federal Reserve Bank of St. Louis. 2010. The Financial Crisis: A Timeline of Events and Policy Actions.
Film Classification Act. 2005. Ontario.
Fishman, M. J., & Hagerty, K. M. 2003. Mandatory Versus Voluntary Disclosure in Markets with Informed and Uninformed Customers. Journal of Law, Economics & Organization, 19(1): 45.
Fleischer, A. 2009. Ambiguity and the Equity of Rating Systems: United States Brokerage Firms, 1995-2000. Administrative Science Quarterly, 54(4): 555-574.
Florida, R. L. 2004. The Rise of the Creative Class: And How It's Transforming Work, Leisure, Community and Everyday Life. New York, NY: Basic Books.


Forbes, S. J., Lederman, M., & Tombe, T. 2011. Do Firms Game Quality Ratings? Evidence from Mandatory Disclosure of Airline On-Time Performance. Working Paper.
Graham, J. R. 1999. Herding Among Investment Newsletters: Theory and Evidence. The Journal of Finance, 54(1): 237-268.
Granger, C. W. J. 1969. Investigating causal relations by econometric models and cross-spectral methods. Econometrica, 37(3): 424-438.
Grossman, S. J. 1981. The Informational Role of Warranties and Private Disclosure About Product Quality. Journal of Law & Economics, 24: 461-483.
Hansard Session 38-1. 2005. Third Reading - Film Classification Act, 2005, 38 ed.: Ontario Legislative Assembly.
Hayward, M., & Boeker, W. 1998. Power and Conflicts of Interest in Professional Firms: Evidence from Investment Banking. Administrative Science Quarterly, 43(1): 1-22.
Hempel, J., & Henry, D. 2005. Ranting At The Ratings Agencies. Businessweek.
Horner, J. 2002. Reputation and Competition. American Economic Review, 92(3).
Hunt, J. P. 2009. Credit Rating Agencies and the "Worldwide Credit Crisis": The Limits of Reputation, the Insufficiency of Reform, and a Proposal for Improvement. Columbia Business Law Review, 2009(1).
Jewell, J., & Livingston, M. 1999. A Comparison of Bond Ratings from Moody's, S&P and Fitch IBCA. Financial Markets, Institutions & Instruments, 8(4).
Jin, G. Z., Kato, A., & List, J. A. 2010. That's News to Me! Information Revelation in Professional Certification Markets. Economic Inquiry, 48(1): 104-122.
Jin, G. Z., & Leslie, P. 2003. The Effect of Information on Product Quality: Evidence from Restaurant Hygiene Grade Cards. Quarterly Journal of Economics, 118(2): 409-451.
Jin, G. Z., & Leslie, P. 2009. Reputational Incentives for Restaurant Hygiene. American Economic Journal: Microeconomics, 1: 237-267.
Jin, G. Z., & Sorensen, A. T. 2006. Information and consumer choice: The value of publicized health plan ratings. Journal of Health Economics, 25(2): 248-275.
Johnson, R. 2003. An Examination of Rating Agencies' Actions Around the Investment Grade Boundary. Working Paper: Federal Reserve Bank of Kansas City.
Jorion, P., Liu, Z., & Shi, C. 2005. Informational effects of regulation FD: evidence from rating agencies. Journal of Financial Economics, 76(2): 309-330.
Keoun, B. 2007. "Startling" $8 billion loss for Merrill Lynch. Bloomberg News. New York City: Bloomberg.
Klein, B., & Leffler, K. B. 1981. The Role of Market Forces in Assuring Contractual Performance. The Journal of Political Economy, 89(4): 615-641.
Kliger, D., & Sarig, O. 2000. The information value of bond ratings. Journal of Finance, 55(6): 2879-2902.


Leenders, M. A. A. M., & Eliashberg, J. 2004. Antecedents and consequences of third-party products evaluation systems: Lessons from the international motion picture industry. Working paper, The Wharton School, University of Pennsylvania, Philadelphia, PA.

Lewis, M. 2010. The Big Short (1st ed.). W. W. Norton & Company.

Löffler, G. 2005. Avoiding the rating bounce: Why rating agencies are slow to react to new information. Journal of Economic Behavior & Organization, 56(3): 365-381.

Mathios, A. D. 2000. The impact of mandatory disclosure laws on product choices: An analysis of the salad dressing market. Journal of Law & Economics, 43(2): 651-677.

Mathis, J., McAndrews, J., & Rochet, J.-C. 2009. Rating the raters: Are reputation concerns powerful enough to discipline rating agencies? Journal of Monetary Economics, 56(5): 657-674.

Milgrom, P. R. 1981. Good News and Bad News: Representation Theorems and Applications. Bell Journal of Economics, 12(2): 380-391.

Nelson, P. 1970. Information and Consumer Behavior. Journal of Political Economy, 78(2): 311-329.

Peltzman, S. 1976. Toward a More General Theory of Regulation. Journal of Law & Economics, 19(2): 211-240.

Phillips, D. J., & Zuckerman, E. W. 2001. Middle-Status Conformity: Theoretical Restatement and Empirical Demonstration in Two Markets. The American Journal of Sociology, 107(2): 379-429.

Pierce, L., & Toffel, M. W. 2010. Leniency in Private Regulatory Enforcement: The Role of Organizational Scope and Governance. Harvard Business School Working Paper. Boston: Harvard University.

Pope, D. G. 2009. Reacting to rankings: Evidence from "America's Best Hospitals". Journal of Health Economics, 28(6): 1154-1165.

Prado, A. 2011. Choosing Among Competing Environmental and Labor Standards: An Exploratory Analysis of Producer Adoption. Working paper, Stern School of Business, New York.

Roberts, P. W., & Reagans, R. 2007. Critical Exposure and Price-Quality Relationships for New World Wines in the U.S. Market. Journal of Wine Economics, 2(1): 84-97.

Sauder, M., & Espeland, W. N. 2009. The Discipline of Rankings: Tight Coupling and Organizational Change. American Sociological Review, 74(1): 63-82.

Seaborn, P. 2009. Irregular Regulation: Understanding Cross-National Differences in Regulatory Decision-Making. Working paper, Toronto.

Senate Report 109-326. 2006. Credit Rating Agency Reform Act of 2006. Committee on Banking, Housing, and Urban Affairs.

Shrum, W. 1991. Critics and Publics: Cultural Mediation in Highbrow and Popular Performing Arts. The American Journal of Sociology, 97(2): 347-375.

Spulber, D. F. 1989. Regulation and markets. Cambridge, MA: MIT Press.


Stigler, G. J. 1971. The Theory of Economic Regulation. The Bell Journal of Economics and Management Science, 2(1): 3-21.

Tang, T. T. 2009. Information asymmetry and firms' credit market access: Evidence from Moody's credit rating format refinement. Journal of Financial Economics, 93(2): 325-351.

Uzzi, B. 1996. The sources and consequences of embeddedness for the economic performance of organizations: The network effect. American Sociological Review, 61(4): 674-698.

Uzzi, B. 1997. Social structure and competition in interfirm networks: The paradox of embeddedness. Administrative Science Quarterly, 42(1): 35-67.

Vogel, D., & Kagan, R. A. 2004. Dynamics of regulatory change: How globalization affects national regulatory policies. Berkeley: University of California Press.

Waguespack, D. M., & Sorenson, O. 2010. The Ratings Game: Asymmetry in Classification. Organization Science, 22(3): 541-553.

Warren, P., & Wilkening, T. 2010. Regulatory Fog: The Informational Origins of Regulatory Persistence. Working paper.

Wolinsky, A. 1983. Prices as Signals of Product Quality. The Review of Economic Studies, 50(4): 647-658.

Xia, H. 2011. The Issuer-Pay Rating Model and Rating Inflation: Evidence from Corporate Credit Ratings. Working paper.

Xiao, M. 2010. Is quality accreditation effective? Evidence from the childcare market. International Journal of Industrial Organization, 28(6): 708-721.

Zuckerman, E. W. 1999. The categorical imperative: Securities analysts and the illegitimacy discount. American Journal of Sociology, 104(5): 1398-1438.