Credit Rating Agency Fees: Pay to Play or Pay for Work?*

Jess Cornaggia Smeal College of Business Pennsylvania State University University Park, PA 16802 (814) 863-2390 [email protected]

Kimberly J. Cornaggia Smeal College of Business Pennsylvania State University University Park, PA 16802 (814) 865-2243 [email protected]

Ryan Israelsen Eli Broad College of Business Michigan State University East Lansing, MI 48824 (517) 353-6982 [email protected]

This draft: November 18, 2018 First draft: October 8, 2015

JEL classification: G24, G28 Keywords: Ratings, Ratings Fees, Municipal Bonds, Information Production

* We benefit from comments and suggestions from John Griffin and audience members at the U.S. Securities and Exchange Commission, Georgetown University, Hong Kong University of Science and Technology, Indiana University, Michigan State University, National University of Singapore, Penn State University, Singapore Management University, and the University of Hong Kong. Any errors belong to the authors.

Credit Rating Agency Fees: Pay to Play or Pay for Work?

Abstract

We document a significant relationship between ratings fees and credit rating levels in the municipal bond market. In contrast to prior evidence from the corporate and structured finance markets indicating a “pay to play” phenomenon, our results suggest that ratings fees in the municipal market are pay for work. We conclude that rating agency incentives differ across asset classes and that fee disclosure mitigates the conflict of interest inherent in an issuer-pays compensation structure. Our results also suggest a substitution effect between certification agents in the muni market. The strong positive relationship between fees and ratings is exclusive to the subset of now uninsured issuers who previously purchased AAA ratings from MBIA and AMBAC. Without the benefit of the AAA pricing provided by AAA insurers, these issuers have increased incentive to pay rating agencies for certification.

I. Introduction

The purpose of this paper is to test whether the conflicts of interest inherent in the compensation structure of traditional credit rating agencies (CRAs) affect the quality of municipal bond ratings. This research question is important because the municipal bond market is a $4 trillion opaque asset class dominated by retail investors who rely heavily, if not exclusively, on credit ratings for information. This research question is unanswered because, although a host of prior evidence indicates a “pay to play” phenomenon affecting ratings quality, that evidence is obtained from other asset classes (corporate bonds and structured finance products) and economic theory predicts variation in CRA incentives across asset classes.1

The municipal bond (muni) market is additionally interesting as a laboratory because of the potential substitution of certification agents in this asset class. Prior to the financial crisis, and the demise of the monoline insurance industry more specifically, most munis were wrapped with third party insurance.2 Municipal issuers of any credit quality could purchase AAA certification directly from insurers and muni investors priced these “purchased AAA” bonds equivalently to the “true AAA” uninsured bonds as long as the insurer maintained a AAA credit rating; see Cornaggia, Hund, and Nguyen (2018). However, when the largest public monoline insurers (MBIA and AMBAC) lost their AAA ratings, their certification lost its value and many previously insured municipal issuers began to issue uninsured bonds. We predict that these issuers who previously purchased AAA certification from MBIA and AMBAC now have greater incentive to purchase AAA certification (or the most favorable ratings they can purchase) from the CRAs for their newly issued uninsured bonds.

Although the Pay to play hypothesis predicts a positive relationship between rating fees and rating levels, such a result would also be consistent with an alternative Pay for work hypothesis. Signaling theory predicts that in a pool of opaque issuers, only those of relatively high quality should pay certification agents. Low-quality issuers have no incentive to pay agents to certify their type. In this case, we should expect a positive relationship between fees paid to CRAs and credit rating levels as CRA analysts work longer hours to certify the relatively high-quality issuers who pay more for their time.

1 See Griffin, Nickerson, and Tang (2013) and Baghai and Becker (2018) for examples of relevant empirical work. Relevant theory includes Mathis, McAndrews, and Rochet (2009). Section II provides a broader literature review.
2 Insurers of financial assets are referred to as “monolines” because they are prohibited from underwriting other types of losses, such as property & casualty, life, and health insurance.

Both Pay to play and Pay for work hypotheses further predict that this positive relationship between rating agency fees and credit rating levels should be pronounced among the set of issuers who previously purchased AAA ratings from MBIA and AMBAC but can no longer do so.

However, because the Pay for work hypothesis also predicts a selection effect that is not predicted by the Pay to play hypothesis, subsample analyses allow us to distinguish which hypothesis is more compelling in our sample. Specifically, these hypotheses have different predictions for the AAA rating category. The Pay to play hypothesis predicts that this coveted rating, which allows munis to qualify for municipal bond funds, should be the most expensive. In contrast, the Pay for work hypothesis predicts that it should be among the cheapest.

The Pay for work hypothesis is based on information asymmetry between municipal issuers and market participants and thus the need for intermediation. Among issuers of uncertain type, this hypothesis predicts a positive relationship between rating fees (for work) and rating levels (relative issuer quality). However, when comparing this opaque pool of issuers to the more transparent pool of obvious AAA-rated issuers, the Pay for work hypothesis predicts a negative relationship between rating fees (for work) and credit ratings (issuer quality). For example, compare Dallas County, TX to the city of Magnolia, TX. In our 17-year sample, Dallas County issued new bonds 15 times with zero standard deviation in ratings (all were AAA). Magnolia issued three bonds over this same time period with a seven-notch range of ratings between BBB- and AA. Because the amount of work required to analyze the current issue by the city of Magnolia exceeds the amount of work required to re-certify Dallas County, the Pay for work hypothesis predicts that AAA rated issuers (e.g., Dallas) should pay less in ratings fees than issuers of uncertain type (e.g., Magnolia).

Our empirical analyses employ a sample of 19,293 new bond issues by 2,104 distinct issuers in the state of Texas over the 1998-2014 time period. Within-issue analysis (comparing credit ratings and rating fees across different CRAs for the same issue at the same time) allows us to control for issuer fundamentals, issuer characteristics, and the macroeconomic environment in order to cleanly identify the relation between rating fees and rating levels. Additional analyses at the issue-level allow us to further draw inferences from both time-series and cross-sectional variation across all issuers in both ratings and fees.

Our baseline analysis documents a significant positive relationship between rating fees and credit rating levels, controlling for issuer fundamentals, bond characteristics, and macroeconomic conditions. For example, consider two adjacent cities (21 miles apart) each issuing $6.5 million in general obligation (GO) bonds two months apart. Both cities hired Moody’s and S&P to rate their new issues. The lead credit analyst at each CRA is constant across cities. One city paid Moody’s 237% more than it paid S&P and received a more favorable rating from Moody’s. The other city paid S&P 168% more than it paid Moody’s and received a more favorable rating from S&P. Because each issue-rating observation compares ratings of the same issue at the same time, the observed difference in ratings cannot be explained by issuer fundamentals, bond characteristics, or macroeconomic conditions. The only difference is the rating fee. Because the analyst pair is constant (same analyst at each CRA for both cities), it is difficult to conclude that the results reflect relative analyst pessimism.

In a regression analysis of the full sample, we find that this observed positive relationship between rating fees and rating levels is significant only among uninsured issues. More specifically, the significant positive relationship among uninsured issuers is driven by the subset of issuers who previously purchased insurance, but no longer do so. These results are consistent with the hypothesized substitution between insurers and CRAs as certification agents in the muni market.3

When we collapse the sample to the issue-level, we lose the ideal controls from the within-issue analysis, instead controlling for bond characteristics, year fixed effects, and issuer fixed effects. However, we gain the ability to estimate a higher-level relationship between fees and ratings across all issuers and across time. Here, both time-series selection effects (from boomtowns and ghost-towns) and cross-sectional selection effects (e.g., Dallas vs. Magnolia) are present, and we observe a negative relationship between fees and ratings across the entire sample of issuers. These results are inconsistent with the Pay to play hypothesis.

3 We hypothesize only that issuers who lose insurance have increased incentive to pay CRAs. We do not suggest that CRAs are perfect substitutes for insurance. Indeed, Bergstresser et al. (2015) argue that insurance companies provide more valuable certification given their financial obligation.

We confirm the hypothesized selection effect by analyzing issuer-level issuance frequency, rating levels, and ratings volatility. We find that frequent issuers have higher ratings (the correlation between number of issues and average rating level is positive and significant at 1%); frequent issuers have less volatile ratings (the correlation between number of issues and standard deviation in ratings is negative and significant at 1%); and highly-rated issuers have less volatile ratings (the correlation between rating level and standard deviation in ratings is negative and significant at 1%). Collectively, these results support the hypothesis that highly-rated issuers are associated with less uncertainty and are therefore easier to certify. Finally, we examine the distribution of rating fees paid by issuers in each rating category and find that (1) AAA is the most common rating category awarded by each CRA and (2) AAA is among the least expensive rating categories on average for each CRA.4 In our motivating example, the city of Magnolia pays an average 11.4% higher rating fees for its new issues compared to Dallas County, even though Dallas County is a much larger and more complex entity.
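The issuer-level correlations described above can be sketched as follows. This is a minimal illustration with hypothetical column names (`issuer`, `rating`) and toy data, not the paper's dataset or code:

```python
import pandas as pd

def issuer_level_stats(issues: pd.DataFrame) -> pd.DataFrame:
    """Collapse issue-level ratings to issuer-level frequency, level, and volatility."""
    return issues.groupby("issuer")["rating"].agg(
        n_issues="count", mean_rating="mean", sd_rating="std"
    )

def selection_correlations(stats: pd.DataFrame) -> pd.Series:
    """Pairwise correlations among issuance frequency, rating level, and volatility."""
    corr = stats.corr()
    return pd.Series({
        "freq_vs_rating": corr.loc["n_issues", "mean_rating"],
        "freq_vs_volatility": corr.loc["n_issues", "sd_rating"],
        "rating_vs_volatility": corr.loc["mean_rating", "sd_rating"],
    })
```

Under the selection effect hypothesized in the text, the first correlation should be positive and the other two negative.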

Although not public information, CRAs provide issuers with fee schedules indicating their fees as a function of observable variables. For the CRAs and years for which we are able to obtain these schedules, we perform the issue-level analysis for abnormal fees rather than total fees. The results are inconsistent with the Pay to play hypothesis, which predicts that more favorable ratings should be associated with higher abnormal fees. Instead, we find more evidence of the selection effect associated with the Pay for work hypothesis.

4 We examine ratings from Standard & Poor’s (S&P), Moody’s Investors Service (Moody’s), and Fitch Ratings (Fitch).

Additional tests of the Pay for work hypothesis include measures of analyst-level workload: number of bonds rated, number of issuers rated, and number of sectors covered. Controlling for the ratings fee and a host of analyst characteristics, we find that analysts award more conservative ratings (less favorable to the issuer) when they have less time available to certify the issuer. These results support the premise underlying the Pay for work hypothesis.

As a final exercise, we examine offer yields to test whether the market prices munis with expensive ratings differently than munis with inexpensive ratings, controlling for credit rating levels, bond characteristics, issuer fixed effects, and year fixed effects. The Pay to play hypothesis predicts that an efficient market should charge higher spreads to munis with inflated ratings (suggested by higher ratings fees given a particular credit rating level). He, Qian, and Strahan (2012) find some evidence in the market for structured finance securities: AAA rated securities issued by the CRAs’ most lucrative clients (measured by issuer size) faced higher credit spreads than AAA rated securities issued by smaller issuers (where issuer size indicates the relative importance of the issuer to CRA revenue). In contrast, we find no evidence that the market perceives any difference in the quality of expensive versus inexpensive ratings in the muni market.

We believe the difference in our results versus those from He et al. (2012) reflects less CRA conflict (less pay to play) in the asset class we study, but quite likely also reflects less investor attention in the muni market compared to the market for structured finance products.

Our contributions to the literature are (1) novel empirical evidence of a substitution in certification agents in the muni market and (2) empirical evidence that rating agency fees paid by municipal issuers reflect pay for work, more than they reflect a conflict of interest. To the best of our knowledge, this paper is the first to directly study the relation between credit ratings and credit rating fees. Other papers have used proxies for rating fees, but actual rating fee data are difficult to obtain because this information is privately maintained by the CRAs. A benefit of studying municipal bonds issued in Texas is that the Texas Bond Review Board requires each municipal issuer to disclose rating fees paid to each CRA.


For at least two reasons, we do not believe that these new results call into question the prior evidence of the pay to play phenomenon documented in structured finance and corporate bond ratings. First, prior literature indicates that CRA incentives are different in those asset classes.

Second, unlike our sample of municipal bonds issued in a state that mandates ratings fee disclosure, by CRA and for each issue, issuers of corporate bonds and structured finance products do not typically disclose the fees they pay for their ratings. We then pose one policy prescription: the U.S. Securities and Exchange Commission (SEC) has the regulatory authority to compel either the issuers of publicly traded securities or the regulated CRAs themselves to disclose their fees.

II. Institutional Background and Literature Review

A. Certification agents

Credit rating agencies are one of several types of certification agents with potential conflicts of interest in their compensation structure. The major CRAs receive their primary compensation directly from the issuers they rate, resulting in an economic bonding whereby CRA income is a function of the prosperity of their issuing clients. Other certification agents facing a similar conflict of interest include real estate appraisers, securities underwriters, and auditors.

Following the Arthur Andersen scandal in 2002, the Sarbanes-Oxley Act requires auditors to publicly disclose their fees, disaggregated into audit, tax, and consulting categories. The purpose of this mandatory disclosure is two-fold. First, observing abnormally high audit fees can trigger investor and regulatory review. Second, exposure should mitigate the ex ante incentive for auditors to falsely certify disclosure in exchange for higher payment. To the extent that this mandatory fee disclosure improves audit quality, similar improvement to ratings quality should obtain if ratings fees face similar mandatory disclosure.

B. Economic Theory

Our Pay for work hypothesis is based on the premise of asymmetric information between issuer and investor. Relevant theory begins with foundational work from Akerlof (1970), which demonstrates how asymmetric information produces adverse selection. Spence (1973) subsequently demonstrates the incentives of informed agents to expend resources to credibly signal their private information to uninformed agents. In his setting, high-quality job applicants obtain expensive education to signal their type to potential employers. In our setting, relatively high quality issuers pay independent credit analysts (CRAs) to signal their type to uninformed investors.

Leland and Pyle (1977) apply signaling theory to financial structure and financial intermediation, Thakor (1982) applies signaling theory to motivate third-party information production, and Ramakrishnan and Thakor (1984) model rating agencies as centralized information producers who mitigate collective information production costs.

Our Pay to play hypothesis is also grounded in economic theory. In the model of Mathis, McAndrews, and Rochet (2009), CRA concern for reputation is diminished by high short-run profits, especially among opaque issuers. Related work from Bolton, Freixas, and Shapiro (2012) also models the CRA conflict of understating credit risk in order to attract more business. A key takeaway is that CRAs are more likely to sell inflated ratings when the probability of detection (default risk) is low. Finally, related work from Fulghieri, Strobl, and Xia (2014) suggests that CRAs earn a reputation for harshness among low-revenue clients, which in turn facilitates selling inflated ratings among high-revenue clients. Municipal issuers collectively qualify as opaque (they are not subject to federal disclosure requirements and accounting quality is generally poor) and they exhibit lower default probability than other asset classes. Both of these factors suggest that this asset class should be subject to conflicted ratings. However, because this asset class is less lucrative than others (see Cornaggia, Cornaggia, and Hund, 2018), CRAs have less incentive to sell inflated ratings in the muni market.

C. Prior Empirical Evidence

Extant literature documents evidence consistent with a pay to play phenomenon in the credit ratings issued by Moody’s, S&P, and Fitch. Empirical papers documenting adverse effects of the issuer-pays compensation structure on corporate bond ratings quality include Becker and Milbourn (2011), Jiang, Stanford, and Xie (2012), Cornaggia and Cornaggia (2013), Xia (2014), Bruno et al. (2016), and Baghai and Becker (2016). Empirical papers examining CRA conflicts of interest in the structured finance market include Griffin and Tang (2012), He, Qian, and Strahan (2012), and Griffin, Nickerson, and Tang (2013). Also relevant is evidence of an inverse relationship between CRA revenues and ratings quality across asset classes from Cornaggia, Cornaggia, and Hund (2017).

Overall, prior empirical evidence collectively suggests that the issuer-pays compensation structure negatively affects ratings quality in the corporate bond market, that CRAs sold inflated ratings to issuers of structured finance products, and that these conflicted ratings contributed to the financial crisis of 2008. Alleging fraud, the U.S. Department of Justice (DOJ) filed a $5 billion claim against S&P in February 2013 for its role in awarding inflated ratings to structured finance products. Alleging “violations of federal law”, the DOJ filed a similar lawsuit against Moody’s in 2016. Ultimately, S&P paid $1.4 billion and Moody’s paid $864 million to settle the charges.

D. Municipal Bonds

Market regulators including the SEC have long been concerned that investors in municipal bonds rely heavily, if not exclusively, on credit ratings for information.5 A host of empirical evidence indicates that this is indeed the case. For example, Cornaggia, Cornaggia, and Israelsen (2018a) document the surprising market reaction to Moody’s 2010 scale recalibration. Though Moody’s educated market participants that recalibrated ratings conveyed no change in underlying credit quality, muni investors responded to the ‘upgrades’ associated with the scale recalibration with a massive increase in trading volume and bond repricing.6

Additional evidence from Cornaggia, Hund, and Nguyen (2018) indicates that muni investors ignored information from MBIA and AMBAC equity and CDS markets indicating the seriousness of their financial distress (and therefore the erosion of their credit enhancement), responding only after these distressed insurers lost their AAA certification from Moody’s and S&P. Because the $4 trillion muni market is dominated by retail investors who rely so heavily on ratings, we are interested in testing whether the CRA conflicts previously demonstrated to negatively affect the quality of corporate bond ratings and the credit ratings of structured finance products likewise negatively affect municipal bond ratings.

5 SEC Pub. No. 134 is available here: https://www.sec.gov/investor/alerts/municipalbondsbulletin.pdf.
6 At the 2013 BondBuyer Brandeis University Municipal Finance Conference in Boston, Moody’s employee Merxe Tudela explained the scale recalibration as comparable to a shift from Centigrade to Fahrenheit and questioned how such a shift would cause market participants to sweat.

III. Data and Sample

We obtain an unselected sample of municipal bond issues from the state of Texas over the 1998-2014 time period with issue information including issue amount, underlying credit ratings from each CRA hired and fees paid to each agency, insurance premia, underwriting fees, an indication for whether the issue is a general obligation (GO) or revenue bond, and whether the sale was competitive or negotiated.7,8 We translate 22-point alphanumeric ratings into numeric scales increasing in credit quality, such that AAA and Aaa ratings take the value of 22, AA+ and Aa1 take the value of 21, and so forth. For simplicity, we employ the rating nomenclature adopted by S&P and Fitch in our discussion (e.g., we refer to BBB and Baa rated bonds as the “BBB” rating category) unless we refer specifically to Moody’s ratings.
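The 22-point translation described above can be sketched as follows. The full symbol lists are our reconstruction from the stated endpoints (AAA/Aaa = 22, AA+/Aa1 = 21), not the paper's code:

```python
# Hypothetical sketch of the 22-point numeric rating translation described
# in the text; the complete ordered symbol lists are our own reconstruction.
SP_FITCH = ["AAA", "AA+", "AA", "AA-", "A+", "A", "A-", "BBB+", "BBB", "BBB-",
            "BB+", "BB", "BB-", "B+", "B", "B-", "CCC+", "CCC", "CCC-",
            "CC", "C", "D"]
MOODYS = ["Aaa", "Aa1", "Aa2", "Aa3", "A1", "A2", "A3", "Baa1", "Baa2", "Baa3",
          "Ba1", "Ba2", "Ba3", "B1", "B2", "B3", "Caa1", "Caa2", "Caa3",
          "Ca", "C"]

def build_scale():
    """Map each alphanumeric rating to a numeric value increasing in quality."""
    scale = {}
    for symbols in (SP_FITCH, MOODYS):
        for i, sym in enumerate(symbols):
            scale[sym] = 22 - i  # AAA/Aaa -> 22, AA+/Aa1 -> 21, and so forth
    return scale
```

Under this mapping, equivalent notches on the two nomenclatures (e.g., BBB- and Baa3) receive the same numeric value.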

We collect lead analyst names from the analyst reports. For each name, we collect information from LinkedIn.com and Intelius.com on analyst age, gender, job history and location, and educational history including undergraduate and graduate degrees, degree granting institutions, and the number of years enrolled in each institution. If we cannot collect analyst age, we estimate it by subtracting 18 years from the first year the analyst attended college, or, if unavailable, 22 years from college graduation date.
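The age-estimation rule described above can be written as a small helper. The function name and signature are hypothetical:

```python
# Hypothetical helper implementing the age-estimation rule in the text:
# approximate birth year as (first college year - 18), or, failing that,
# (college graduation year - 22); age is measured relative to a reference year.
def estimate_age(reference_year, first_college_year=None, graduation_year=None):
    if first_college_year is not None:
        return reference_year - (first_college_year - 18)
    if graduation_year is not None:
        return reference_year - (graduation_year - 22)
    return None  # age cannot be estimated from the available data
```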

To analyze the potential impact of rating fees on the cost of municipal financing, we obtain new issue characteristics from the Ipreo i-Deal database. Specifically, we collect the following data for new issues in Texas over the 1998-2014 sample period: issue size, offer yield, maturity date, coupon rate, general obligation (GO) or revenue bond type, call features, and whether the new issue is negotiated or competitive, insured, and/or classified as a Build America Bond (BAB).

7 The Texas Bond Review Board requires each municipal issuer to disclose (among other fees) rating fees paid to each CRA. See Chapter 1231.081(c)(3), Texas Government Code: http://www.statutes.legis.state.tx.us/Docs/GV/htm/GV.1231.htm

8 The “underlying” credit rating is based on the credit quality of the bond issuer rather than quality of the bond insurer (if insurance was secured). Unless otherwise specified, we use the terms “credit rating” and “underlying credit rating” interchangeably in this paper.

We also obtain data indicating the number of other bonds outstanding for each issuer. We estimate credit spreads with duration-matched Treasury yields.
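Duration matching of this kind can be sketched by linear interpolation on the Treasury curve. The curve representation and function names here are illustrative assumptions, not the paper's actual procedure or data:

```python
import bisect

def matched_treasury_yield(duration, curve):
    """Linearly interpolate a Treasury yield at the bond's duration.

    `curve` is a list of (duration, yield) pairs sorted by duration;
    the points used in practice would come from actual Treasury data.
    """
    durations = [d for d, _ in curve]
    yields = [y for _, y in curve]
    if duration <= durations[0]:
        return yields[0]
    if duration >= durations[-1]:
        return yields[-1]
    i = bisect.bisect_left(durations, duration)
    d0, d1 = durations[i - 1], durations[i]
    y0, y1 = yields[i - 1], yields[i]
    return y0 + (y1 - y0) * (duration - d0) / (d1 - d0)

def credit_spread(offer_yield, duration, curve):
    """Credit spread = offer yield minus the duration-matched Treasury yield."""
    return offer_yield - matched_treasury_yield(duration, curve)
```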

IV. Empirical Results

A. Descriptive statistics and certification agent independence

Table 1 reports summary statistics for 19,293 issue-rating observations employed in our within-issue analysis. From Panel A, which describes the complete sample, we observe that the average and median underlying credit ratings correspond to AA- and that over 25% of muni ratings are AAA. These ratings reflect the low default risk associated with this asset class. The average ratings fee (per CRA) is $11,082 and the rating fee distribution exhibits positive skewness. Even in the case when an issuer pays three CRAs, the ratings fees are much lower than the fees paid to the monolines ($69,663 on average) and the underwriters ($202,090 on average). The difference in fees paid to CRAs and insurers clearly indicates that these certification agents are not perfect substitutes. Whereas the CRA certifies credit quality without direct monetary penalty for inaccuracy, the insurer is responsible for making timely interest payments to investors in the event the covered issuer fails to do so. Academics and insurance companies both suggest that this “skin in the game” makes insurers more credible certification agents; see Bergstresser, Cohen, and Shenai (2015). Accordingly, our substitution hypothesis is unidirectional; we hypothesize only that issuers who lose valuable insurance coverage have greater incentive to pay credit rating analysts, not that CRAs are perfect substitutes for insurance.

Regarding potential interdependence of these certification agents, we believe that original underlying credit ratings likely precede the insurance decision since true AAA issuers have less incentive to purchase the AAA certification from an insurance company. Indeed, Cornaggia, Hund, and Nguyen (2018) report that less than 1% of the true AAA bonds were insured in their much larger and more diverse sample.9 We therefore believe that underlying ratings are likely assessed independently. As a rudimentary test of insurance company reliance on underlying ratings to price coverage, we examine the distribution of premiums paid by issuers with the same underlying credit rating. In untabulated results (available from the authors), we find substantial variation in the premiums paid for insurance inside each credit rating category. It stands to reason that insurers consider the underlying credit rating, but it seems clear that premiums are determined in large part by an independent credit analysis by the insurance companies. The only apparently mechanistic reliance of one certification agent on the other occurs when unrated issuers (those who do not purchase underlying credit ratings prior to purchasing insurance) purchase insurance. In these cases, it appears that S&P imputes the AAA certification of the insurer onto the wrapped bonds. These results support our hypothesis that issuers who previously relied on insurance for AAA certification, but then lose this certification when the monolines are downgraded, have an increased incentive to purchase certification from the rating agencies; CRAs appear to be (imperfect) substitutes for lost AAA insurance.

From Table 1, we further observe that 65% of new issues are by issuers who have previously employed the CRA(s) hired to rate the current bond. Because we employ issue-rating observations in our baseline analysis, each new issue has potentially one, two, or three issue-rating observations. The majority (51%) of these observations have ratings from S&P, 36% are from Moody’s, with Fitch accounting for only 13% of this market share. The overwhelming majority (72%) of issue-rating-level observations are insured. However, 71% of these issuers switch from issuing insured bonds to issuing uninsured bonds following the downgrades of MBIA and AMBAC.

[Insert Table 1 approximately here.]

We examine next the correlation of fees paid to different CRAs for rating the same bond issue at the same time in Figure 1. In Panel A, we observe significant positive correlation between the fees paid to Moody’s and S&P across issues where both CRAs are hired. However, the scatter plot indicates several observations where one of these CRAs is paid significantly more than the other. Similar results obtain for comparison of Moody’s and Fitch (Panel B) and for S&P and Fitch (Panel C). This correlation between CRA fees is consistent with the Pay for work hypothesis, assuming that the required amount of work varies more across issues than across CRAs within an issue. The Pay to play hypothesis has no clear prediction for correlation in fees across CRAs.

9 Not subject to our credit rating fee requirement, those authors assemble a database consisting of 3,555,964 bonds issued by 53,045 issuers across the country and across different levels of government.

[Insert Figure 1 approximately here.]

Panels B and C of Table 1 compare the issue-rating observations with and without insurance, respectively. As expected, the average and median credit quality is higher among the uninsured. Consistent with the hypothesized substitution effect of CRAs for insurers, we observe that insured observations are associated with 32% lower average CRA fees than uninsured observations. The greater representation of issuers who lost insurance coverage among the subsample of issue-rating level observations that are uninsured suggests more new issues from this set in the latter half of the time series. CRA market share is similar in Panels B and C.

Panels D and E divide the observations in Panel C based on whether or not the issuer lost insurance. Specifically, we characterize issuers who insured their bonds prior to 2008 and then did not insure their new issues after 2008 as having “lost insurance”. We document this lost insurance experience in Figure 2. In Panel A, we see that prior to 2008, over 70% of munis issued in the state of Texas were wrapped with third party insurance. By 2009, this percentage was closer to 30%.

By the end of our sample period, the percentage of new issues with insurance levels out around 40-45%, with coverage primarily from Assured Guaranty or Build America Mutual (BAM).

[Insert Figure 2 approximately here.]

Panels B and C of Figure 2 plot the relative market share of the two largest public monolines, MBIA and AMBAC. Both are irrelevant after 2008. Still, “lost insurance” is a choice variable; issuers who previously insured their bonds with MBIA or AMBAC prior to the crisis could continue to insure their bonds after the crisis with another insurer, such as Assured Guaranty or BAM. In this case, these issuers and their new issues would remain in the insured subsample (Panel B of Table 1). From Panel E, we observe that 86% of uninsured issues are associated with issuers who lost their coverage, by our definition. The remaining set in Panel D represents uninsured issues associated with issuers who had not previously insured their bonds.

Table 2 describes the sample at the issue-level for the 11,552 new issues. Panels A, B, and C describe the full sample, the insured subsample, and the uninsured subsample, respectively. Additional statistics in Table 2 describe issue characteristics including issue amount, an indicator for revenue bonds (as opposed to GO bonds), an indicator for whether the issue was negotiated (as opposed to competitive), the number of CRAs hired to rate the issue, and, in the case of insured bonds, the fees paid to the insurer. Collectively, these variables serve as control variables in our issue-level regression models.

[Insert Table 2 approximately here.]

From Panel A, we observe that 40% of new issues hire only one CRA. Another 40% hire two and 20% hire all three CRAs. Comparing Panels B and C, we observe a larger average issue size, greater proportion of revenue bonds, and a higher percentage of previous CRA payment among the uninsured sample. We also observe higher ratings (1.5 notch difference at the median) and higher rating fees (44% higher, on average) among the uninsured group, as in Table 1.

B. Within-issue relationship between ratings and fees

Our baseline approach relies on within-issue variation as specified in Equation (1). Rating is a numerical translation of Moody’s/S&P/Fitch underlying ratings, increasing in credit quality (AAA = 22). Each bond issue has one observation per rater, with up to three observations per issue. Bonds rated by only one CRA (40% of the sample) contribute no variation to this baseline test. The key variable of interest (Fee) is the natural log of the amount paid in USD to each respective CRA for rating the bond.

Rating = α + β * Fee + Controls + Issue FE + ε (1)


Controls include CRA fixed effects, to control for systematic differences in CRA rating scales, an indicator for whether the CRA rated a prior issue by the same issuer in the previous year, and issue fixed effects. As such, identification comes from differences in ratings on the same bonds at the same time, not from issuer fundamentals, issue characteristics, economic fluctuations, or combinations thereof. We also include interaction terms to further control for the recalibration of municipal bond rating scales performed by Fitch in April 2010 and by Moody’s in May 2010. We cluster standard errors at the issue level and report results in Panel A of Table 3.
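The mechanics of the within-issue estimation in Equation (1) can be illustrated with a minimal sketch: issue fixed effects are absorbed by demeaning ratings and log fees within each issue, so the slope is estimated only from rater-to-rater variation on the same bond. All numbers below are invented for illustration, not sample data.

```python
import numpy as np
import pandas as pd

# Toy issue-rating panel: up to one observation per rater per issue
# (all ratings and fees are hypothetical).
df = pd.DataFrame({
    "issue_id": [1, 1, 2, 2, 3, 3],
    "rating":   [20, 21, 18, 19, 22, 22],   # numeric scale, AAA = 22
    "fee_usd":  [10000, 15000, 8000, 12000, 20000, 20000],
})
df["log_fee"] = np.log(df["fee_usd"])

# Absorb issue fixed effects by demeaning within issue.
df["r_dm"] = df["rating"] - df.groupby("issue_id")["rating"].transform("mean")
df["f_dm"] = df["log_fee"] - df.groupby("issue_id")["log_fee"].transform("mean")

# Within-issue OLS slope: beta = sum(f_dm * r_dm) / sum(f_dm^2).
# Issue 3, rated at identical fees, contributes no variation --
# just as single-rated bonds contribute none in the paper's test.
beta = (df["f_dm"] * df["r_dm"]).sum() / (df["f_dm"] ** 2).sum()
print(round(beta, 3))
```

A full specification would add the CRA fixed effects, the prior-rating indicator, and the recalibration interactions described above; the demeaning step shown here is the part that delivers the within-issue identification.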

[Insert Table 3 approximately here.]

We observe no significant relationship between credit ratings and rating fees across all issue-rating observations in column (1) or the insured and uninsured subsamples in columns (2) and (3), respectively. However, dividing the uninsured observations into those that never had insurance (column 4) and those that previously had AAA certification from an insurance company but then lost it (column 5) reveals a significant (at 5%) positive relationship among the latter subgroup. Among these particular uninsured issues, a one standard deviation increase in fees is associated with a one-notch improvement in credit rating. Given that the distribution of municipal credit ratings in our sample spans only nine notches (from BBB- to AAA), a one standard deviation increase in fees is associated with an improved rating comparable to 11% of the entire relevant rating scale. Additional, untabulated results indicate that this effect is persistent. With complete ratings histories from Moody’s and S&P, we track the difference between ratings associated with abnormally high and low fees over time. We find that the key result (previously insured bonds with high fees retain more favorable ratings than bonds with low fees) persists at least 36 months. These baseline results support our hypothesis that issuers who previously purchased AAA certification from MBIA and AMBAC now have greater incentive to purchase AAA certification (or the most favorable ratings they can purchase) from the CRAs for their newly issued uninsured bonds.


Because the Pay for work hypothesis predicts selection effects, we exclude the AAA observations in Panel B of Table 3. Recall that the Pay for work hypothesis predicts a negative relationship between credit ratings and ratings fees when comparing the obvious AAA issuers (such as Dallas County) with the issuers of uncertain type (such as Magnolia), but predicts a positive relationship between credit ratings and ratings fees inside the issuer pool of uncertain type (comparing Magnolia to Gary City, for example), as issuers with higher-than-average credit quality for the pool have incentive to expend resources to signal their type to investors, as in Spence (1973).

Removing the AAA issuers in Panel B, we observe that the positive relationship between rating fees and credit rating levels becomes marginally significant for the full subsample in column (1) and highly significant (at 1%) among all uninsured issuers in column (3), especially those that previously had insurance (column 5). The magnitude of these results is stronger than in Panel A, having removed the competing selection effect. This result is inconsistent with the Pay to play hypothesis, which predicts the highest fees for AAA ratings.

Finally, we examine each CRA independently in Panel C of Table 3. Results from S&P (which awards 51% of all ratings in our sample) and from Fitch (awarding 13% of the ratings in our sample) are consistent with the Pay for work hypothesis. In both cases, the relationship between ratings fees and credit rating levels is significantly negative (at 1%) when comparing the relatively transparent AAA and relatively opaque non-AAA issuers (columns 3 and 5) but is significantly positive (at 1%) for S&P and insignificant for Fitch among the more-opaque non-AAA issuers (columns 4 and 6). Only the results from Moody’s fail to reject the Pay to play hypothesis. Among Moody’s ratings, the significant positive relationship obtains for the entire sample in column (1) as well as the more-opaque sample in column (2). The magnitude of the result is diminished greatly by the inclusion of the AAA issuers in column (1) compared to column (2), but in both cases the relationship is positive and statistically significant at 1%.

Overall, we conclude from Table 3 that (1) there is on net a significantly positive relationship between credit ratings and CRA fees, which appears driven by uninsured issuers; (2) there is a substitution effect between the certification agents in the municipal bond market – when opaque issuers lose the ability to purchase credible AAA certification from their insurers, they have new incentive to purchase certification from the CRAs; and (3) there is a selection effect whereby issuers with stable AAA ratings pay less for CRA certification than their more volatile peers. These results and conclusions are more consistent with the Pay for work hypothesis.

C. Within-issuer relationship between ratings and fees

In order to estimate a higher-level relationship between fees and ratings across all issuers and across time, we collapse the sample to the issue level. In doing so, we lose the ideal controls from the within-issue analysis. Here, we instead control for specific bond characteristics, including issue amount, an indicator for insurance or the log of the fee amount in USD paid to an insurance company, an indicator for revenue bonds (as opposed to GO bonds), an indicator for a negotiated sale (as opposed to a competitive offer), and controls for the number of CRAs hired to rate the bond issue, along with year fixed effects and issuer fixed effects. The key variable of interest is now the average fee paid to the CRAs rating the particular issue. We tabulate regression results for these issue-level observations in Table 4.

[Insert Table 4 approximately here.]

Because we include issuer fixed effects in Panel A of Table 4, any significant relationship between the key variable (Average fee) and credit rating level indicates correlation over time within a particular issuer. To better motivate this time-series test, we identify in our sample both “boomtowns,” which issue more bonds at higher ratings over time, and “ghost-towns,” which issue fewer bonds with lower credit ratings over time. We then test whether the boomtowns (ghost-towns) pay higher or lower fees (relative to their own history). For example, we identify Bay City, TX as a boomtown. In 2004, Bay City issued once and received Baa1, BBB+ and BBB+ ratings from the three CRAs, respectively. In 2009, Bay City issued bonds four times and received Aa1, A+ and A+ ratings. Over five years, this municipality quadrupled its issuance and increased its average rating by six notches. In contrast, Webb County experienced a 67% reduction in the number of issues and a five-notch average downgrade (from AAA to A) over the three-year period from 2002 to 2005. The question answered in Panel A of Table 4 is whether these ratings changes are associated with higher or lower rating fees.

Here, we observe a significant (at 1%) negative relationship between average fees and ratings across the entire sample of issuers and the subsample of insured issues. These results are inconsistent with the Pay to play hypothesis, which suggests that boomtowns should be paying more for their improved ratings, not less, and that ghost-towns should be paying less for their lower ratings, not more. The results are consistent with the Pay for work hypothesis if boomtown growth is accompanied by improved disclosure quality and / or if ghost-town decline is accompanied by increased uncertainty or lower disclosure quality. However, among the uninsured bonds, where the CRA certification should be paramount, we find no significant relationship between average fees and ratings. In column (2) we also find that higher insurance premia are associated with lower credit ratings, as expected. Regardless of the extent to which insurers rely on underlying ratings to set premia, we expect correlation in the credit risk assessments by these two certification agents.

Whereas Panel A compares issuers to themselves over time, Panel B of Table 4 omits issuer fixed effects and thus allows us to test for variation across issuers. We again observe negative coefficients on the average fee variable for all issues (column 1) and the subsample of insured issues (column 2), although the magnitudes are smaller here than in Panel A. Again, these results are inconsistent with the Pay to play hypothesis, which predicts that issuers paying higher fees should receive higher ratings than issuers paying lower fees. These negative coefficients are consistent with the Pay for work hypothesis if opaque issuers (e.g., Magnolia) receive lower ratings and require more effort, and thus greater fees, compared to more transparent issuers (e.g., Dallas County). Among the uninsured bonds, where the CRA certification should be paramount, we observe a significantly positive (at 1%) relationship between average fees and ratings (column 3) in Panel B, which is consistent with either hypothesis.

To further test the hypothesized cross-sectional selection effect (e.g., Magnolia versus Dallas County), we analyze issuer-level issuance frequency, rating levels, and ratings volatility in Figure 3. In Panel A, we find that the correlation between the number of issues per issuer and the standard deviation in ratings is -0.0769 (p-value = 0.0024). In Panel B, we find that the correlation between rating level and the standard deviation in ratings is -0.1458 (p-value = 0.0000). In Panel C, we find that the correlation between the number of issues per issuer and average rating level is 0.1247 (p-value = 0.0000). We conclude from Figure 3 that frequent issuers have less volatile ratings, highly rated issuers have less volatile ratings, and frequent issuers have higher ratings. Each of these large-sample results confirms the anecdotal comparison between Dallas County and Magnolia, TX. Large, frequent issuers generally exhibit high and stable ratings.

[Insert Figure 3 approximately here.]

Finally, we examine the distribution of rating fees paid by issuers in each rating category in Figure 4. Here, we find that (1) AAA is the most common rating category awarded by each CRA and (2) AAA is among the least expensive ratings on average for each CRA. This large-sample analysis likewise confirms the anecdotal comparison between Dallas County and the city of Magnolia. Even though they are larger issuers, the AAA issuers (with greater ratings stability as per Figure 3) pay significantly less on average than their smaller peers (with greater ratings volatility as per Figure 3).

[Insert Figure 4 approximately here.]

Overall, Table 4 and Figures 3 and 4 provide additional evidence of a selection effect whereby relatively transparent issuers (frequent issuers with stable AAA ratings) pay less for credit rating certification than their more-opaque peers (less frequent issuers with relatively volatile non-AAA ratings). These effects are observed both in the time-series analysis (boomtowns and ghost-towns) and in the cross-sectional analysis.

D. Abnormal fees

Although they are not public information, Moody’s and S&P maintain fee schedules, provided to issuers, which indicate that their fees are a function of observable variables such as year of issuance, issue type, issue size, and prior business (frequent-issuer discounts). From various sources, including the minutes of municipal government meetings, we obtain copies of the fee schedules from Moody's for the years 2007 and 2010-2016 and from S&P for 2013 and 2015. Because some are marked "Proprietary and Confidential," we infer that the CRAs may not adhere strictly to their schedules across issuers. We test this inference by comparing the actual fees paid by our sample issuers to the fees indicated in the contemporaneous fee schedules. Finding differences between actual and indicated fees, we then test whether the Abnormal fee, defined as the actual fee paid minus the fee indicated by the schedule, is correlated with the credit rating level.
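The Abnormal fee construction reduces to a lookup-and-subtract exercise. A minimal sketch, with an invented fee schedule and invented actual fees (the real schedules condition on more variables, such as issue size):

```python
import pandas as pd

# Hypothetical fee-schedule lookup keyed by debt type and year.
schedule = {("GO", 2013): 15000, ("Revenue", 2013): 17500}

issues = pd.DataFrame({
    "debt_type":  ["GO", "GO", "Revenue"],
    "year":       [2013, 2013, 2013],
    "actual_fee": [15000, 13500, 19000],
})

issues["scheduled_fee"] = [schedule[(t, y)] for t, y in
                           zip(issues["debt_type"], issues["year"])]
# Abnormal fee = actual fee paid minus the fee indicated by the schedule.
issues["abnormal_fee"] = issues["actual_fee"] - issues["scheduled_fee"]
# Adherence rate = share of issues paying exactly the scheduled fee.
adherence_rate = (issues["abnormal_fee"] == 0).mean()
print(issues["abnormal_fee"].tolist(), round(adherence_rate, 3))
```

In this toy example one of three issues adheres to the schedule and the other two depart in opposite directions, the pattern the summary statistics in Panel A of Table 5 describe at scale.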

Panel A of Table 5 presents summary statistics for Abnormal fee, separately for Moody’s and S&P. Because Moody’s Abnormal fee distribution looks different in 2007 compared to the 2010-2014 period, we tabulate 2007 alone in the final column.10 For both CRAs, we find a low adherence rate. In 2013 (the only year for which we have actual fees to compare to an S&P schedule), S&P charged a fee equal to that indicated by its schedule in only 45.2% of our sample. Moody’s was more likely to adhere to its schedule in 2007 (51% of our sample) than in subsequent years; the overall adherence rate for Moody’s in our sample (including the 2010-2014 period) is only 20%. These frequencies might suggest some evidence of Pay to play pricing, except that the average and median Abnormal fee is generally negative; CRAs generally charge less than their fee schedules indicate. Only in 2007 (of the schedules we obtain) is the median value of Moody’s Abnormal fee equal to zero. Because the percentage of fees that adhere to the schedule is low, the average Abnormal fee is negative, and the standard deviations in Abnormal fee are large, we infer that the fee schedules are comparable to hotel ‘rack rates’ from which customers negotiate prices.

The large standard deviation in Abnormal fee motivates the regressions with Abnormal fee (replacing total Fee) tabulated in Panel B of Table 5.

[Insert Table 5 approximately here.]

Columns (1) – (3) in Panel B of Table 5 are essentially replications of the within-issue analysis in columns (1) – (3) in Panel A of Table 3, except that (a) we have no fee schedules from Fitch, which is thus excluded from the Table 5 analysis, and (b) Abnormal fee is substituted in lieu of total fee. Similarly, columns (4) – (6) replicate Panel A of Table 4 (issue-level analysis with issuer FE for time-series tests) and columns (7) – (9) replicate Panel B of Table 4 (issue-level analysis without issuer FE for cross-sectional tests). The Pay to play hypothesis predicts only positive relationships between Abnormal fee and ratings. We find no such evidence. Overall, we find less evidence of any relationship between ratings and abnormal fees. Only the cross-sectional tests in columns (7) and (8) indicate a significant negative relationship, which suggests the selection effects associated with the Pay for work hypothesis.

10 Although we have fee schedules for Moody’s through 2016, we have actual fees from TX issuers through 2014.

One potential reason for the reduced significance relative to Tables 3 and 4 is the significantly reduced sample size due to the fee-schedule requirement in Table 5. Table 3 Panel A analyzes 19,293 issue-rating observations compared to 2,893 issue-rating observations in Table 5. Table 4 analyzes 11,537 issue-level observations compared to 2,672 observations in Table 5. In order to analyze abnormal fees in the full sample, we estimate predicted fees for all observations, based on the variables identified as relevant in the fee schedules. Specifically, we estimate the model specified in Equation (2) for our 19,593 issue-rating observations.

Fee = α + β1 Paid previous + CRA × Year × Debt type × Issue amount FE + ε (2)

Explanatory and control variables include Paid previous, an indicator variable taking a value of one if the issuer paid for a credit rating from the same CRA on a different issue in the previous year, and CRA × Year of issuance × Debt type × Issue amount fixed effects. We create issue amount fixed effects based on $10 million bins. The dependent variable Fee is the dollar fee paid by the issuer to the credit rating agency for a credit rating on the issue. We take the natural log of this variable and standardize it to follow a mean-zero, unit-variance distribution.
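The first stage and the Residual fee construction can be sketched on simulated data. This simplified illustration uses two CRAs, two years, and three size bins, and omits debt type for brevity; the high-dimensional fixed effects are absorbed by demeaning within CRA × year × amount-bin cells, and the residuals play the role of the Residual fee measure.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 500
# Simulated issue-rating data with a frequent-issuer discount built in.
df = pd.DataFrame({
    "cra":           rng.choice(["Moodys", "SP"], size=n),
    "year":          rng.choice([2011, 2012], size=n),
    "amount_bin":    rng.integers(0, 3, size=n),   # $10 million issue-size bins
    "paid_previous": rng.integers(0, 2, size=n),
})
df["log_fee"] = (9.5 - 0.1 * df["paid_previous"] + 0.2 * df["amount_bin"]
                 + rng.normal(0, 0.1, n))
# Standardize log fee to mean zero, unit variance, as in Equation (2).
df["fee_std"] = (df["log_fee"] - df["log_fee"].mean()) / df["log_fee"].std()

# Absorb CRA x Year x Amount-bin fixed effects by within-cell demeaning,
# then regress the demeaned fee on the demeaned Paid-previous indicator.
cell = df.groupby(["cra", "year", "amount_bin"])
y = df["fee_std"] - cell["fee_std"].transform("mean")
x = df["paid_previous"] - cell["paid_previous"].transform("mean")
beta1 = (x * y).sum() / (x ** 2).sum()

# Residual (unexpected) fee: the part of the fee the model cannot explain.
df["residual_fee"] = y - beta1 * x
print(round(beta1, 3))
```

Because the simulated data include a discount for prior business, the estimated coefficient on Paid previous comes out negative, matching the sign of the first-stage result described below.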

We report the results from this first stage in Panel A of Table 6 and find that our Fee model explains 63% of the variation in our sample. The significantly negative coefficient on the Paid previous indicator confirms the discounts for frequent issuers advertised in the fee schedules. We then collect the residuals from this first-stage regression specified in Equation (2) to employ as an alternative measure of abnormal or unexpected fees.

Panel B of Table 6 is essentially a replication of Panel B of Table 5 except for the substitution of Residual fee (an estimated abnormal fee) in lieu of Abnormal fee (computed as the departure from the limited fee schedules). Again, the Pay to play hypothesis predicts only positive relationships between abnormal fees and ratings. Again, we find no such evidence. Here we find significant evidence of the selection effects associated with the Pay for work hypothesis. Columns (4) and (5) indicate the time-series selection effects (from boomtowns and ghost-towns) observed in Table 4 Panel A, and columns (7) and (8) indicate the cross-sectional effects observed in Table 4 Panel B. Not only does the increased sample size in Table 6 increase significance relative to Table 5, we also observe increased magnitudes of these effects relative to Table 4.

[Insert Table 6 approximately here.]

In addition to the hypothesized selection effects, we offer the following interpretation for the significant negative coefficients on Abnormal fee and Residual fee observed in Tables 5 and 6, respectively. Both tables indicate that issuers who pay higher fees, for reasons unexplained by the schedule, tend to receive lower ratings. This might happen if an issuer with deteriorating credit quality needs capital. It might raise the capital by issuing multiple bonds of different types in a relatively short time period. Different types of bonds would impose more and varied types of analysis on the CRAs, thus incurring more work and higher fees, while associating with lower ratings.

The data yield anecdotal evidence to support this conjecture that distressed issuers resort to various, creative ways to raise capital through different types of offerings. For example, in 2009, the city of Corpus Christi issued bonds nine times: some tax-backed GO bonds, some revenue bonds, some combination issues; some rated only by Moody’s, others by S&P, and still others by Fitch; most were insured, but some were issued without insurance. These bonds are associated with some of the highest rating fees in our sample but have relatively low underlying credit ratings. Additional anecdotal support for our conjecture is found more recently and outside the state of Texas. We cannot observe the fees paid by the state of Illinois, but we can observe that state’s issuance of many and varied bonds rated at the lowest end of our sample rating scale (Baa3 and BBB- are one step above junk bond status) in 2017.

E. Busy analysts

As an additional test of the Pay for work hypothesis, we include measures of analyst-level workload in Table 7. The sample here includes all uninsured issue-rating observations for which we can obtain credit analyst characteristics for the lead analyst at Moody’s or S&P assigned to the sample issue. Fracassi, Petry, and Tate (2016) indicate that analyst-level subjectivity influences analysts’ information production, so we control for several analyst characteristics including gender, education, and the number of years employed by the CRA she works for. Because prior evidence (e.g., Coval and Moskowitz (2001), Malloy (2005), and Butler (2008)) indicates that geographic proximity conveys an informational advantage, we include an indicator for whether the analyst works for her CRA employer inside the state of Texas. Because prior evidence from Cornaggia, Cornaggia, and Israelsen (2018b) indicates a home bias among muni analysts, we also include an indicator for whether the analyst is from (received her social security number in) the state of Texas.

[Insert Table 7 approximately here.]

In column (1) of Table 7, we observe that the baseline result from Table 3 obtains in this restricted sample. In column (2) we observe that the baseline result obtains after adding analyst characteristics to the model. The remaining tests include the key variables of interest – indicators of analyst time constraints – in columns (3) – (5). In column (3), we control for the number of bonds rated by the lead analyst assigned to the sample bond. In column (4), we control for the number of issuers she rates. In column (5), we control for the number of sectors she covers. In each case, controlling for the ratings fee and a host of analyst characteristics, we find that analysts award more conservative ratings (less favorable to the issuer) when they have less time available to certify the issuer. In columns (3) and (4), the inclusion of proxies for analysts’ time constraints entirely mitigates the positive relationship between rating fee and credit rating level. In column (5), the baseline result survives with only 10% significance. Overall, the results from Table 7 support the premise underlying the Pay for work hypothesis.

F. Ratings transitions

Ideally, we would analyze differences in default frequency (between high ratings associated with high fees versus the same high ratings obtained with lower fees) to test whether the higher ratings associated with higher fees observed in Tables 3, 4, and 7 are indeed inflated relative to fair ratings. However, municipal bonds rarely default. We therefore analyze ratings transitions to provide some evidence of differences in ex post outcomes. Specifically, we gauge the extent to which higher ratings associated with higher fees result in more downgrades than higher ratings obtained with lower fees.

First, we sort bond issues by their ratings at issuance. Second, we sort them into quintiles based on the fees paid to CRAs. Then, we compute the average change in ratings over the subsequent three-year period for each rating-fee category. If the expensive ratings are illegitimate, as predicted by the Pay to play hypothesis, then they should be more likely to be downgraded than the same ratings obtained for less. We tabulate the results of this exercise, separately for Moody’s (Panel A) and S&P (Panel B), in Table 8. We restrict our sample to the ratings produced prior to Moody’s scale recalibration in 2010 to prevent this recalibration from confounding our analysis. Although some of the middle rating categories are close to marginally significant in Panel B, Table 8 overall provides no compelling evidence that the expensive ratings are illegitimate.
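The double sort behind Table 8 can be sketched as follows, with invented initial ratings, fees, and three-year rating changes (here each rating category happens to contain exactly one issue per fee quintile):

```python
import pandas as pd

# Hypothetical issues: numeric initial rating, rating fee, and the change
# in the rating over the subsequent three years (negative = downgrade).
df = pd.DataFrame({
    "initial_rating": [20, 20, 20, 20, 20, 18, 18, 18, 18, 18],
    "fee":            [5, 10, 15, 20, 25, 4, 8, 12, 16, 24],
    "delta_3y":       [0, -1, 0, 0, -1, 1, 0, 0, -1, -2],
})

# Within each initial-rating category, sort issues into fee quintiles...
df["fee_q"] = df.groupby("initial_rating")["fee"].transform(
    lambda s: pd.qcut(s, 5, labels=False))
# ...then average the subsequent three-year rating change per cell.
table = df.groupby(["initial_rating", "fee_q"])["delta_3y"].mean()
print(table)
```

Under the Pay to play hypothesis, the cells in the top fee quintile would show systematically more negative average changes than the bottom quintile at the same initial rating; the paper finds no compelling pattern of that kind.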

[Insert Table 8 approximately here.]

G. Rating fees and the cost of municipal financing

As a final exercise, we examine offer yields to test whether the municipal bond market prices bonds with expensive ratings differently relative to bonds with inexpensive ratings. The Pay to play hypothesis predicts that an efficient market should charge higher spreads to munis with inflated ratings (suggested by higher ratings fees given a particular credit rating level). He, Qian, and Strahan (2012) find such evidence in the market for structured finance securities: AAA rated securities issued by the CRAs’ most lucrative clients faced higher credit spreads than AAA rated securities issued by those who paid the CRAs less money (where deal volume serves as a proxy for “money” since fees are unknown).

The dependent variable in Table 9 is Spread to after-tax Treasury, the bond’s offer yield minus the after-tax yield of the duration-matched Treasury, where we calculate duration using the bond’s call date if the bond issue is callable and the bond’s time to maturity if the issue is not callable. We assume a tax rate of 35% when calculating after-tax yields for Treasuries. If a bond issue is a Build America Bond, which is taxable to investors, we substitute its raw yield with its after-tax yield, again assuming a 35% tax rate.
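The dependent variable can be computed as in the following sketch (all yields are hypothetical; the Treasury yield is assumed to be already duration-matched, using the call date for callable issues and time to maturity otherwise):

```python
# Spread to after-tax Treasury, as defined above.
TAX_RATE = 0.35  # assumed marginal tax rate

def spread_to_after_tax_treasury(offer_yield, treasury_yield,
                                 is_build_america_bond=False):
    """Offer yield minus the after-tax yield of the duration-matched Treasury.

    Treasury interest is taxable, so its after-tax yield is y * (1 - tax).
    Build America Bonds are taxable to investors, so their raw yield is
    replaced with its after-tax equivalent before the comparison.
    """
    after_tax_treasury = treasury_yield * (1 - TAX_RATE)
    if is_build_america_bond:
        offer_yield = offer_yield * (1 - TAX_RATE)
    return offer_yield - after_tax_treasury

# Tax-exempt muni at 3.0% against a 4.0% duration-matched Treasury:
print(round(spread_to_after_tax_treasury(3.0, 4.0), 3))        # 0.4
# Taxable Build America Bond at 5.0% against the same Treasury:
print(round(spread_to_after_tax_treasury(5.0, 4.0, True), 3))  # 0.65
```

The after-tax adjustment keeps tax-exempt munis and taxable securities on a comparable basis before spreads are regressed on the rating and High fee indicators.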

[Insert Table 9 approximately here.]

Independent variables in Panel A include indicator variables for Moody’s, S&P’s, and Fitch’s ratings assigned to new issues. Panel B displays results from F-tests of whether the sums of regression coefficients in Panel A are significantly different from zero. P-values appear below summed values in parentheses. High fee is an indicator variable taking a value of one if the rating from Moody’s (Standard & Poor’s, Fitch) has the highest fee among the set of ratings for the new issue. Controls for bond characteristics include par value, maturity measured in years, coupon, the number of bonds outstanding for the issuer, and indicator variables for whether issues are revenue bonds or general obligations, Build America Bonds, issued through a negotiated or competitive process, and callable or noncallable. We suppress these coefficients to conserve space. (Full regression output is available upon request.) We cluster standard errors at the issuer level.

In contrast to He et al. (2012), we find virtually no evidence that the market perceives any difference in the quality of expensive versus inexpensive ratings in the muni market. The results in Panel B show that, among bonds with AAA ratings from Fitch, yields are higher if Fitch charges the most for its rating among the set of raters. However, we also find that, among bonds with AAA ratings from S&P, yields are marginally lower (significant at 10%) if S&P charges the most for its rating among the set of raters on the bond. Among all other ratings assigned by each rater, we see no evidence that the market prices bonds differently if a rater charges the highest fee among the set of raters on the bond. We believe the difference between our results and those of He et al. (2012) reflects less CRA conflict of interest (less pay to play) in the asset class we study, but quite likely also reflects less investor attention in the muni market compared to the market for structured finance products.

V. Conclusion

The question of whether the conflicts of interest inherent in the CRA compensation structure affect the credit ratings of municipal bonds is important because the retail investors who dominate this $4 trillion opaque asset class rely heavily, if not exclusively, on credit ratings for information. This research question is unanswered because prior evidence of a “pay to play” phenomenon is obtained from other asset classes in which CRA incentives likely differ. The muni market is additionally interesting as a laboratory because of the potential substitution of certification agents in this asset class.

Overall, we conclude that (1) there is a selection effect whereby relatively transparent issuers (frequent issuers with stable AAA ratings) pay less for credit rating certification than their more opaque peers (less frequent issuers with relatively volatile non-AAA ratings); (2) there is a substitution effect between the certification agents in the municipal bond market – when opaque issuers lose the ability to purchase credible AAA certification from their insurers, they have new incentive to purchase certification from the CRAs; and (3) the positive relationship between rating fees and credit rating levels primarily reflects Pay for work. Only a subset of Moody’s ratings fails to reject a Pay to play hypothesis in our sample of municipal bond ratings.

We do not believe that our results call into question the prior evidence of the pay to play phenomenon documented in structured finance and corporate bond ratings, for at least two reasons. First, prior literature indicates that CRA incentives are different in those asset classes. Second, unlike our sample of munis issued in a state that mandates ratings fee disclosure, by CRA and for each issue, issuers of corporate bonds and structured finance products do not typically disclose the fees they pay to each CRA. We then pose one policy prescription: the SEC has the regulatory authority to compel either the issuers of publicly traded securities or the regulated CRAs themselves to disclose the fees paid for each new bond issue. Indeed, the Sarbanes-Oxley Act of 2002 requires precisely this disclosure for another group of certification agents (auditors) in response to Arthur Andersen’s fraudulent certification of Enron’s financial statements. Given the similarities in the incentive structures (and the inherent conflict of interest), a similar disclosure rule for the CRAs seems prudent.


References

Adelino, M., Cunha, I., and Ferreira, M., 2017. The economic effects of public financing: Evidence from municipal bond ratings recalibration, Review of Financial Studies 30, 3223-3268.

Akerlof, G. 1970. The market for “lemons”: Quality uncertainty and the market mechanism, Quarterly Journal of Economics 84(3), 488-500.

Almeida, H., Cunha, I., Ferreira, M., and Restrepo, F., 2017. The real effects of credit ratings: The sovereign ceiling channel, Journal of Finance 72, 249-290.

Alp, A. 2013. Structural shifts in credit ratings, Journal of Finance 68(6), 2435-2470.

Agarwal, S., Chen, V.Y.S., and Zhang, W. 2016. The information value of credit rating action reports: A textual analysis, Management Science 62(8), 2218-2240.

Ang, A., Green, R.C., and Xing, Y. 2017. Advance refunding of municipal bonds, Journal of Finance 72(4), 1645-1682.

Baghai, R., and Becker, B. 2016. Non-rating revenue and conflicts of interest, Journal of Financial Economics 127(1), 94-112.

Baghai, R., Servaes, H., and Tamayo, A. 2014. Have rating agencies become more conservative? Implications for capital structure and debt pricing, Journal of Finance 69(5), 1961-2005.

Bar-Isaac, H., and Shapiro, J. 2013. Ratings quality over the business cycle, Journal of Financial Economics 108(1), 62-78.

Becker, B., and Milbourn, T. 2011. How did increased competition affect credit ratings?, Journal of Financial Economics 101(3), 493-514.

Behr, P., Kisgen, D., and Taillard, J. 2017. Did government regulations lead to inflated ratings?, Management Science, forthcoming.

Bergstresser, D., Cohen, R., and Shenai, S. 2015. Skin in the game: The performance of insured and uninsured municipal debt, Working paper.

Bolton, P., Freixas, X., and Shapiro, J. 2012. The credit ratings game, Journal of Finance 67(1), 85-112.

Bongaerts, D. 2013. Can alternative business models discipline credit rating agencies? Working paper.

Butler, A.W. 2008. Distance still matters: Evidence from municipal bond underwriting, Review of Financial Studies 21(2), 763-784.

Butler, A.W., Fauver, L., and Mortal, S. 2009. Corruption, political connections, and municipal finance, Review of Financial Studies 22(7), 2673-2705.

Bruno, V., Cornaggia, J., and Cornaggia, K. 2016. Does regulatory certification affect the information content of credit ratings? Management Science 62(6), 1578-1597.


Cestau, D., Green, R., and Schürhoff, N. 2013. Tax-subsidized underpricing: Issuers and underwriters in the market for Build America Bonds, Journal of Monetary Economics 60(5), 593-608. Chalmers, J.M.R. 1998. Default risk cannot explain the muni puzzle: Evidence from U.S. government secured municipal bonds, Review of Financial Studies 11(2), 281-308. Chen, L., D.A. Lesmond, and J. Wei. 2007. Corporate yield spreads and bond liquidity, Journal of Finance 62(1), 119-49. Cornaggia, J., and Cornaggia, K.J. 2013. Estimating the costs of issuer-paid credit ratings, Review of Financial Studies 26(10) 2229-2269. Cornaggia, J., Cornaggia, K., and Hund, J. 2017. Credit ratings across asset classes: A long-term perspective, Review of Finance 21(2) 465-509. Cornaggia, J., Cornaggia, K., and Israelsen, R. 2018a. Credit ratings and the cost of municipal financing, Review of Financial Studies, forthcoming. Cornaggia, J., Cornaggia, K., and Israelsen, R. 2018b. Where the heart is: Information production and the home bias, Working paper. Cornaggia, J., K. Cornaggia, and Xia, H. 2016. Revolving doors on , Journal of Financial Economics 120(2), 400-419. Cornaggia, K., Hund, J., and Nguyen, G., 2018. Investor attention and municipal bond returns, Working paper. Coval, J.D., and T. J. Moskowitz. 2001. The geography of investment: Informed trading and asset prices, Journal of Political Economy 4, 811–841. Ellul, A., C. Jotikasthira, and C. Lundblad. 2011. Regulatory pressure and fire sales in the corporate bond markets, Journal of Financial Economics 101(3), 596-620. Flynn, S. and A. Ghent. 2017. Competition and credit ratings after the fall, Management Science, forthcoming. Fracassi, C., Petry, S, and Tate, G. 2016. Does rating analyst subjectivity affect corporate debt pricing, Journal of Financial Economics 120(3), 514-538. Fulghieri, P., Strobl, G., and Xia, H. 2013. The economics of solicited and unsolicited credit ratings. Review of Financial Studies 27(2), 484-518. 
Ghent, A., Torous, W., and Valkanov, R. 2017. Complexity in structured finance, Review of Economic Studies, forthcoming.
Green, R.C. 1993. A simple model of the taxable and tax-exempt yield curves, Review of Financial Studies 6, 233-264.
Green, R., Hollifield, B., and Schürhoff, N. 2007. Dealer intermediation and price behavior in the aftermarket for new bond issues, Journal of Financial Economics 86, 643-682.
Green, R., Li, D., and Schürhoff, N. 2010. Price discovery in illiquid markets: Do financial asset prices rise faster than they fall? Journal of Finance 65, 1669-1702.

Griffin, J.M., and Tang, D. 2012. Did subjectivity play a role in CDO credit ratings? Journal of Finance 67(4), 1293-1328.
Griffin, J.M., Nickerson, J., and Tang, D.Y. 2013. Rating shopping or catering? An examination of the response to competitive pressure for CDO ratings, Review of Financial Studies 26(9), 2270-2310.
Harris, L., and Piwowar, M. 2006. Secondary trading costs in the municipal bond market, Journal of Finance 61(3), 1361-1397.
He, J., Qian, J., and Strahan, P.E. 2012. Are all ratings created equal? The impact of issuer size on the pricing of mortgage-backed securities, Journal of Finance 67(6), 2097-2137.
Hilscher, J., and Wilson, M.I. 2017. Credit ratings and credit risk: Is one measure enough? Management Science, forthcoming.
Ingram, R.W., Brooks, L.E., and Copeland, R.M. 1983. The information content of municipal bond rating changes: A note, Journal of Finance 38(3), 997-1003.
International Organization of Securities Commissions. 2014. Good practices on reducing reliance on CRAs in asset management; www.iosco.org/library/pubdocs/pdf/IOSCOPD442.pdf

Jiang, J.X., Stanford, M., and Yuan, X. 2012. Does it matter who pays for bond ratings? Historical evidence, Journal of Financial Economics 105(3), 607-621.
Kadan, O., Madureira, L., Wang, R., and Zach, T. 2009. Conflicts of interest and stock recommendations: The effect of the global settlement and related regulations, Review of Financial Studies 22, 4189-4217.
Kisgen, D.J., and Strahan, P.E. 2010. Do regulations based on credit ratings affect a firm's cost of capital? Review of Financial Studies 23(12), 4324-4347.
Liu, G. 2012. Municipal bond insurance premium, credit rating, and underlying credit risk, Public Budgeting & Finance, Spring, 128-156.
Malloy, C.J. 2005. The geography of equity analysis, Journal of Finance 60(2), 719-755.
Manso, G. 2013. Feedback effects of credit ratings, Journal of Financial Economics 109(2), 535-548.
Mathis, J., McAndrews, J., and Rochet, J. 2009. Rating the raters: Are reputation concerns powerful enough to discipline rating agencies? Journal of Monetary Economics 56, 657-674.
Moody's Investors Service. 2003. Measuring the performance of corporate bond ratings, Special Comment, April.
Moody's Investors Service. 2010. Recalibration of Moody's U.S. municipal ratings to its global rating scale, March.
Nanda, V., and Singh, R. 2004. Bond insurance: What is special about munis? Journal of Finance 59(5), 2253-2279.


Opp, C., Opp, M., and Harris, M. 2013. Rating agencies in the face of regulation, Journal of Financial Economics 108, 46-61.
Pirinsky, C.A., and Wang, Q. 2011. Market segmentation and the cost of capital in a domestic market: Evidence from municipal bonds, Financial Management 40(2), 445-481.
Ramakrishnan, R.T., and Thakor, A.V. 1984. Information reliability and a theory of financial intermediation, Review of Economic Studies 51, 415-432.
Sangiorgi, F., and Spatt, C. 2017. Opacity, credit rating shopping, and bias, Management Science, forthcoming.
Schultz, P. 2012. The market for new issues of municipal bonds: The roles of transparency and limited access to retail investors, Journal of Financial Economics 106, 492-512.
Sirri, E.R. 2014. Report on trading in the municipal securities market; commissioned by the Municipal Securities Rulemaking Board.
Skreta, V., and Veldkamp, L. 2009. Ratings shopping and asset complexity: A theory of ratings inflation, Journal of Monetary Economics 56, 678-695.
Spence, A. 1973. Job market signaling, Quarterly Journal of Economics 87, 355-374.
Strobl, G., and Xia, H. 2012. The issuer-pays rating model and ratings inflation: Evidence from corporate credit ratings, Working paper.
Thakor, A. 1982. An exploration of competitive signaling equilibria with "third party" information production: The case of debt insurance, Journal of Finance 37(3), 717-739.
Trzcinka, C. 1982. The pricing of tax-exempt bonds and the Miller hypothesis, Journal of Finance 37(4), 907-923.
Xia, H. 2014. Can investor-paid credit rating agencies improve the information quality of issuer-paid rating agencies? Journal of Financial Economics 111(2), 450-468.


Table 1 – Issue-rating level summary statistics This table displays summary statistics for issue-rating observations. Rating is a numerical translation of ratings produced by Moody's, Standard & Poor's, and Fitch that increases in credit quality. Aaa/AAA/AAA = 22, Aa1/AA+/AA+ = 21, and so forth. Fee is the dollar fee paid by the issuer to the credit rating agency for a credit rating on the issue. Paid previous is an indicator variable taking a value of one if the issuer paid for a credit rating from the rating agency on a different issue in the previous year. Moody's is an indicator variable taking a value of one if the observation is from Moody's, and zero if the observation is from Standard & Poor's or Fitch. Fitch is an indicator variable taking a value of one if the rating is from Fitch, and zero if the rating is from Moody's or Standard & Poor's. Insured is an indicator variable taking a value of one if the issue is wrapped with insurance. Lost insurance is an issuer-level indicator variable. Lost insurance takes a value of one if more than half of the issuer's new issues prior to 2008 are wrapped with insurance and less than half of the issuer's new issues after 2007 are wrapped with insurance. The sample includes municipal bonds issued in Texas between 1998 and 2014. The data are publicly available from the state of Texas. Panel A includes all issue-rating observations. Panels B and C split Panel A by whether the issue is insured. Panels D and E split Panel C by whether the issuer lost insurance.
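The numerical translation described in the legend can be sketched in Python (an illustration, not the authors' code; the scale simply counts notches down from Aaa = 22):

```python
# 22-point numeric scale used throughout the tables: Aaa = 22, Aa1 = 21, ...
# (Moody's symbols shown; the S&P/Fitch scale maps one-for-one by notch.)
MOODYS_SCALE = ["Aaa", "Aa1", "Aa2", "Aa3", "A1", "A2", "A3",
                "Baa1", "Baa2", "Baa3", "Ba1", "Ba2", "Ba3",
                "B1", "B2", "B3", "Caa1", "Caa2", "Caa3", "Ca", "C"]

def rating_to_number(symbol):
    """Map a rating symbol to the numeric scale (higher = better credit quality)."""
    return 22 - MOODYS_SCALE.index(symbol)
```

For example, `rating_to_number("Aa3")` returns 19, matching the "(≈ Aa3/AA-)" gloss on a mean rating of 19.0 in Panel A.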


Panel A – All issue-ratings (N = 19,293)

                   Mean               SD      25th pct  Median  75th pct
Rating             19.0 (≈ Aa3/AA-)   2.5     17        19      22
Fee                11,082             12,272  5,000     8,000   12,500
Paid previous      0.65               0.48    0         1       1
Moody's            0.36               0.48    0         0       1
Fitch              0.13               0.34    0         0       0
Insured            0.72               0.45    0         1       1
Lost insurance     0.71               0.46    0         1       1

Panel B – Insured (N = 13,661)

                   Mean               SD      25th pct  Median  75th pct
Rating             18.8 (≈ Aa3/AA-)   2.6     17        19      22
Fee                9,793              9,407   5,000     7,500   11,500
Paid previous      0.60               0.49    0         1       1
Moody's            0.37               0.48    0         0       1
Fitch              0.12               0.33    0         0       0
Lost insurance     0.65               0.48    0         1       1

Panel C – Uninsured (N = 5,362)

                   Mean               SD      25th pct  Median  75th pct
Rating             19.6 (≈ Aa2/AA)    2.1     19        20      21
Fee                14,419             17,205  5,500     9,750   15,000
Paid previous      0.78               0.41    1         1       1
Moody's            0.33               0.47    0         0       1
Fitch              0.17               0.37    0         0       0
Lost insurance     0.86               0.35    1         1       1

Panel D – Uninsured and the issuer did not lose insurance (N = 853)

                   Mean               SD      25th pct  Median  75th pct
Rating             18.5 (≈ Aa3/AA-)   3.1     16        19      22
Fee                9,707              11,752  4,000     7,300   11,000
Paid previous      0.57               0.50    0         1       1
Moody's            0.37               0.48    0         0       1
Fitch              0.07               0.25    0         0       0

Panel E – Uninsured and issuer lost insurance (N = 4,804)

                   Mean               SD      25th pct  Median  75th pct
Rating             19.8 (≈ Aa2/AA)    1.8     19        20      21
Fee                15,189             17,822  5,900     10,000  16,000
Paid previous      0.81               0.39    1         1       1
Moody's            0.32               0.47    0         0       1
Fitch              0.18               0.39    0         0       0


Table 2 – Issue level summary statistics This table displays summary statistics for issue level observations. Average rating is the average credit rating assigned to the issue by Moody's, Standard & Poor's, and Fitch. We numerically translate the ratings assigned by Moody's, Standard & Poor's, and Fitch (Aaa/AAA/AAA = 22, Aa1/AA+/AA+ = 21, and so forth) and then take the average for each issue. Average fee is the average fee in dollars paid to each rating agency for rating the issue at issuance. Average paid previous is an average over one to three indicator variables, depending on how many rating agencies rate the issue. For each issue-rating observation, we construct an indicator variable taking a value of one if the credit rating agency rated an issue from the issuer in the previous year. We take the average of these indicator variables for each issue. Average Moody's (Average Fitch) is an average over one to three indicator variables, depending on how many rating agencies rate the issue. For each issue-rating observation, we construct an indicator variable that takes a value of one if the rating is from Moody's (Fitch). We take the average of these indicators for each issue. For example, if the issue has one rating and the rating is from Moody's (Fitch), Average Moody's (Average Fitch) would equal one. If the issue has two ratings, one from Moody's (Fitch) and the other from Standard & Poor's, Average Moody's (Average Fitch) would equal 0.5. Insured is an indicator variable taking a value of one if the issue is wrapped with insurance. Lost insurance is an issuer-level indicator variable. Lost insurance takes a value of one if more than half of the issuer's new issues prior to 2008 are wrapped with insurance and less than half of the issuer's new issues after 2007 are wrapped with insurance. Issue amount is the size of the issue measured in millions of dollars. 
Revenue is an indicator variable taking a value of one if the issue is a revenue bond, and zero if the issue is a general obligation bond or funded through other means. Negotiated is an indicator variable taking a value of one if the issue is issued through a negotiated process, and zero if the issue is issued through a competitive process. One rater is an indicator variable taking a value of one if the issue has one rating, and zero if it has two or three ratings. Two raters is an indicator variable taking a value of one if the issue has two ratings, and zero if it has one or three ratings. Bond insurance fee is the fee in dollars paid to the bond insurance company. This variable only appears in Panel B. The sample includes municipal bonds issued in Texas between 1998 and 2014. The data are publicly available from the state of Texas. Panel A includes all issues. Panels B and C split Panel A by whether the issue is insured.
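The Average Moody's / Average Fitch construction described above is just the share of an issue's ratings supplied by a given agency; a minimal sketch (not the authors' code):

```python
def average_indicator(raters, target):
    """Share of an issue's ratings that come from the `target` rating agency.

    `raters` is the list of agencies rating the issue (one to three entries).
    """
    return sum(r == target for r in raters) / len(raters)

# One rating, from Moody's: Average Moody's = 1.0
# Two ratings, Moody's and S&P: Average Moody's = 0.5
```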


Panel A – All issues (N = 11,537)

                        Mean               SD      25th pct  Median  75th pct
Average rating          18.7 (≈ Aa3/AA-)   2.6     17        19      21
Average fee             9,989              10,298  4,940     7,500   11,300
Average paid previous   0.58               0.48    0         1       1
Average Moody's         0.35               0.32    0         0.33    0.5
Average Fitch           0.12               0.19    0         0       0.33
Insured                 0.74               0.44    0         1       1
Issue amount            24.0               59.0    3.7       7.7     20.1
Revenue                 0.30               0.46    0         0       1
Negotiated              0.66               0.47    0         1       1
One rater               0.40               0.49    0         0       1
Two raters              0.40               0.49    0         0       1

Panel B – Insured (N = 8,435)

                        Mean               SD       25th pct  Median  75th pct
Bond insurance fee      69,983             771,229  2,300     14,300  50,000
Average rating          18.5 (≈ Aa3/AA-)   2.6      17        18.5    21
Average fee             8,977              8,040    4,750     7,079   10,450
Average paid previous   0.53               0.48     0         0.67    1
Average Moody's         0.37               0.32     0         0.33    0.5
Average Fitch           0.12               0.19     0         0       0.33
Issue amount            20.7               53.1     3.6       7.1     17.5
Revenue                 0.27               0.45     0         0       1
Negotiated              0.66               0.47     0         1       1
One rater               0.40               0.49     0         0       1
Two raters              0.37               0.48     0         0       1

Panel C – Uninsured (N = 3,102)

                        Mean               SD      25th pct  Median  75th pct
Average rating          19.3 (≈ Aa3/AA-)   2.2     18.7      20      21
Average fee             12,931             14,656  5,500     9,088   13,750
Average paid previous   0.74               0.43    0.5       1       1
Average Moody's         0.32               0.31    0         0.33    0.5
Average Fitch           0.13               0.20    0         0       0.33
Issue amount            33.4               72.7    4.0       9.4     27.3
Revenue                 0.38               0.49    0         0       1
Negotiated              0.65               0.48    0         1       1
One rater               0.38               0.48    0         0       1
Two raters              0.46               0.50    0         0       1


Table 3 – The within-issue relation between credit ratings and credit rating fees This table displays results from OLS regressions with issue-rating observations. The dependent variable is Rating, a numerical translation of ratings produced by Moody’s, Standard & Poor’s, and Fitch that increases in credit quality. Aaa/AAA/AAA = 22, Aa1/AA+/AA+ = 21, and so forth. Fee is the dollar fee paid by the issuer to the credit rating agency for a credit rating on the issue. We take the natural log of this variable and standardize it to follow a mean-zero, unit-variance distribution. Post May 7, 2010 is an indicator taking a value of one for issues after May 7, 2010, the date Moody’s completed the recalibration of its municipal bond rating scale. Post April 5, 2010 is an indicator taking a value of one for issues after April 5, 2010, the date Fitch completed the recalibration of its municipal bond ratings scale. We define other variables in the legend of Table 1. We cluster standard errors at the issue level. *, **, and *** indicate statistical significance at the 10%, 5%, and 1% levels, respectively.
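The fee transformation described in the legend (natural log, then standardized to mean zero and unit variance) can be sketched as follows; this is an illustration under the stated definition, not the authors' code:

```python
import math

def standardize_log(values):
    """Natural log of each value, then demean and scale to unit variance,
    as applied to Fee in the regressions (population standard deviation)."""
    logs = [math.log(v) for v in values]
    n = len(logs)
    mean = sum(logs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in logs) / n)
    return [(x - mean) / sd for x in logs]
```

After this transformation a coefficient on Fee is read as the rating change, in notches, associated with a one-standard-deviation move in log fees.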

Panel A – All issue-ratings

Columns: (1) all issue-ratings; (2) insured; (3) uninsured; (4) uninsured and the issuer did not lose insurance; (5) uninsured and the issuer lost insurance. Standard errors in parentheses.

                             (1)              (2)              (3)              (4)             (5)
Fee                          -0.04 (0.08)     -0.12 (0.13)     0.10 (0.07)      -0.42 (0.35)    0.15 (0.07)**
Paid previous                -0.03 (0.12)     -0.08 (0.17)     0.13 (0.11)      -0.26 (0.43)    0.16 (0.11)
Moody's                      -0.72 (0.05)***  -0.79 (0.06)***  -0.49 (0.05)***  -0.28 (0.13)**  -0.54 (0.06)***
Moody's × Post May 7, 2010   0.87 (0.07)***   1.17 (0.11)***   0.50 (0.07)***   0.30 (0.23)     0.55 (0.08)***
Fitch                        0.47 (0.06)***   0.67 (0.08)***   -0.17 (0.05)***  0.20 (0.21)     -0.21 (0.05)***
Fitch × Post April 5, 2010   -0.20 (0.08)**   -0.08 (0.13)     0.27 (0.07)***   -0.28 (0.53)    0.33 (0.07)***
Issue fixed effects?         Yes              Yes              Yes              Yes             Yes
Adjusted R2                  0.79             0.74             0.92             0.97            0.89
N                            19,293           13,661           5,632            828             4,804


Panel B – Exclude Aaa/AAA issue-rating observations

Columns: (1) all issue-ratings; (2) insured; (3) uninsured; (4) uninsured and the issuer did not lose insurance; (5) uninsured and the issuer lost insurance. Standard errors in parentheses.

                             (1)              (2)              (3)              (4)             (5)
Fee                          0.09 (0.05)*     0.03 (0.08)      0.19 (0.06)***   -0.35 (0.37)    0.23 (0.06)***
Paid previous                -0.00 (0.10)     -0.03 (0.14)     0.06 (0.11)      -0.29 (0.58)    0.10 (0.11)
Moody's                      -0.29 (0.03)***  -0.25 (0.04)***  -0.42 (0.05)***  -0.28 (0.22)    -0.43 (0.05)***
Moody's × Post May 7, 2010   0.51 (0.06)***   0.62 (0.10)***   0.54 (0.08)***   0.28 (0.66)     0.55 (0.07)***
Fitch                        0.16 (0.04)***   0.29 (0.05)***   -0.12 (0.06)**   0.27 (0.28)     -0.15 (0.06)***
Fitch × Post April 5, 2010   0.17 (0.07)**    0.32 (0.11)***   0.27 (0.08)***   -0.45 (0.64)    0.32 (0.08)***
Issue fixed effects?         Yes              Yes              Yes              Yes             Yes
Adjusted R2                  0.93             0.91             0.93             0.95            0.91
N                            14,330           9,893            4,437            544             3,893


Panel C – Split by credit rating agency

Columns: (1)-(2) Moody's, all and excluding Aaa; (3)-(4) Standard & Poor's, all and excluding AAA; (5)-(6) Fitch, all and excluding AAA. Standard errors in parentheses.

                             Moody's                            Standard & Poor's                  Fitch
                             (1) All          (2) Excl. Aaa     (3) All          (4) Excl. AAA     (5) All          (6) Excl. AAA
Fee                          0.08 (0.03)***   0.28 (0.03)***    -0.13 (0.03)***  0.31 (0.03)***    -0.12 (0.03)***  0.00 (0.04)
Paid previous                0.82 (0.07)***   1.46 (0.05)***    0.54 (0.06)***   1.06 (0.05)***    0.27 (0.10)***   0.80 (0.10)***
Moody's × Post May 7, 2010   0.78 (0.06)***   1.38 (0.05)***
Fitch × Post April 5, 2010                                                                         -0.03 (0.08)     0.72 (0.08)***
Constant                     17.94 (0.06)***  16.42 (0.05)***   18.66 (0.05)***  17.19 (0.04)***   19.87 (0.10)***  18.17 (0.10)***
Issue fixed effects?         No               No                No               No                No               No
Adjusted R2                  0.05             0.23              0.01             0.08              0.01             0.08
N                            6,870            5,432             9,850            7,181             2,573            1,717


Table 4 – The within- and across-issuer relation between credit ratings and credit rating fees This table displays results from OLS regressions with issue level observations. Panel A includes issuer fixed effects and Panel B omits them. The dependent variable is Average rating, a numerical translation of ratings produced by Moody’s, Standard & Poor’s, and Fitch that increases in credit quality. Aaa/AAA/AAA = 22, Aa1/AA+/AA+ = 21, and so forth. Average fee is the average fee in dollars paid to the rating agencies rating the issue at issuance. Bond insurance fee is the fee in dollars paid to the bond insurance company. Issue amount is the size of the issue measured in millions of dollars. For each of Average fee, Bond insurance fee, and Issue amount, we take the natural log and standardize it to follow a mean-zero, unit-variance distribution. Post May 7, 2010 is an indicator taking a value of one for issues after May 7, 2010, the date Moody’s completed the recalibration of its municipal bond rating scale. Post April 5, 2010 is an indicator taking a value of one for issues after April 5, 2010, the date Fitch completed the recalibration of its municipal bond ratings scale. We define other variables in the legend of Table 2. We cluster standard errors at the issuer level. *, **, and *** indicate statistical significance at the 10%, 5%, and 1% levels, respectively.


Panel A – Issuer fixed effects

                                       (1) All issues   (2) Insured      (3) Uninsured
Average fee                            -0.10 (0.03)***  -0.16 (0.03)***  0.01 (0.03)
Bond insurance fee                                      -0.18 (0.06)***
Average paid previous                  -0.09 (0.05)*    -0.09 (0.06)     0.05 (0.07)
Average Moody's                        -1.66 (0.17)***  -1.71 (0.22)***  -0.63 (0.29)**
Average Moody's × Post May 7, 2010     2.10 (0.17)***   2.18 (0.21)***   0.76 (0.30)**
Average Fitch                          0.91 (0.30)***   0.53 (0.36)      0.63 (0.45)
Average Fitch × Post April 5, 2010     0.33 (0.24)      0.37 (0.33)      0.29 (0.42)
Issue amount                           0.08 (0.02)***   0.08 (0.03)***   -0.03 (0.03)
Insured                                0.40 (0.09)***
Revenue                                -0.45 (0.08)***  -0.35 (0.08)***  -0.50 (0.12)***
Negotiated                             -0.01 (0.06)     0.16 (0.08)**    -0.12 (0.10)
One rater                              0.84 (0.09)***   0.72 (0.11)***   -0.02 (0.22)
Two raters                             0.44 (0.08)***   0.28 (0.10)***   0.06 (0.16)
Issuer fixed effects?                  Yes              Yes              Yes
Year effects?                          Yes              Yes              Yes
Adjusted R2                            0.54             0.53             0.81
N                                      11,537           8,435            3,102


Panel B – Omit issuer fixed effects

                                       (1) All issues   (2) Insured      (3) Uninsured
Average fee                            -0.06 (0.03)**   -0.10 (0.03)***  0.15 (0.05)***
Bond insurance fee                                      -0.55 (0.04)***
Average paid previous                  0.47 (0.07)***   0.34 (0.06)***   1.08 (0.13)***
Average Moody's                        -1.29 (0.14)***  -1.51 (0.14)***  -0.77 (0.33)**
Average Moody's × Post May 7, 2010     1.87 (0.17)***   2.04 (0.17)***   0.57 (0.35)
Average Fitch                          1.40 (0.27)***   1.11 (0.31)***   0.30 (0.54)
Average Fitch × Post April 5, 2010     0.12 (0.28)      0.01 (0.36)      -0.30 (0.49)
Issue amount                           0.18 (0.06)***   0.31 (0.05)***   -0.10 (0.08)
Insured                                -0.69 (0.12)***
Revenue                                -0.48 (0.09)***  0.04 (0.09)      -0.52 (0.14)***
Negotiated                             0.37 (0.08)***   0.48 (0.08)***   0.08 (0.14)
One rater                              -0.09 (0.12)     -0.15 (0.11)     -1.76 (0.25)***
Two raters                             0.71 (0.13)***   0.38 (0.11)***   0.01 (0.22)
Issuer fixed effects?                  No               No               No
Year effects?                          Yes              Yes              Yes
Adjusted R2                            0.22             0.29             0.27
N                                      11,537           8,435            3,102


Table 5 – The relation between credit ratings and abnormal credit rating fees We obtained Standard & Poor's municipal bond rating fee schedule for 2013. We obtained Moody's municipal bond rating fee schedules for 2007 and 2010 through 2014. The first step in the analysis uses issue-rating observations and computes the difference between the fees advertised by the CRAs in fee schedules and actual fees paid by issuers to the CRAs. Panel A displays summary statistics of this variable, Abnormal fee. We also report the percentage of fees that equal the advertised fees. Panel B displays OLS regressions with Abnormal fee as an independent variable. We standardize this variable to follow a mean-zero, unit-variance distribution. The dependent variable is Rating, a numerical translation of ratings produced by Moody's, Standard & Poor's, and Fitch that increases in credit quality. Aaa/AAA/AAA = 22, Aa1/AA+/AA+ = 21, and so forth. Columns (1) through (3) mimic the regressions in Table 3. Columns (4) through (6) and (7) through (9) mimic the regressions in Table 4, whereby we collapse the sample by taking averages of issue-rating observations at the issue level. Post May 7, 2010 is an indicator taking a value of one for issues after May 7, 2010, the date Moody's completed the recalibration of its municipal bond rating scale. We define the other control variables in columns (1) through (3) in the legend of Table 1, and those in columns (4) through (9) in the legend of Table 2. We cluster standard errors at the issue level in columns (1) through (3) and at the issuer level in columns (4) through (9). *, **, and *** indicate statistical significance at the 10%, 5%, and 1% levels, respectively.
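The first-step computation described above (actual fee minus schedule fee, plus the share paying exactly the advertised fee) can be sketched as follows; an illustration under the legend's definition, not the authors' code:

```python
def abnormal_fees(observations):
    """For (actual_fee, advertised_fee) pairs, return the abnormal fee for each
    observation and the share of observations paying exactly the schedule fee."""
    diffs = [actual - advertised for actual, advertised in observations]
    share_at_schedule = sum(d == 0 for d in diffs) / len(diffs)
    return diffs, share_at_schedule
```

A negative abnormal fee means the issuer paid less than the advertised schedule price, consistent with the negative means reported in Panel A.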

Panel A – Summary statistics of Abnormal fee

                                    Standard & Poor's (2013)   Moody's (2007, 2010-2014)   Moody's (2007)
Mean                                -$2,970                    -$16,406                    -$19,516
25th percentile                     -$3,500                    -$7,500                     -$1,763
Median                              -$1,000                    -$3,500                     $0
75th percentile                     $0                         $0                          $0
Standard deviation                  $5,779                     $81,669                     $83,446
N issue-rating observations         717                        2,211                       488
% of fees that equal advertised fee 45.2                       20.0                        51.0


Panel B – Regressions with Abnormal fee as an independent variable

Columns: (1) all issue-ratings, (2) insured, (3) uninsured, with issue fixed effects; (4) all issues, (5) insured, (6) uninsured, with issuer and year fixed effects; (7) all issues, (8) insured, (9) uninsured, with year fixed effects. Standard errors in parentheses.

                     (1)           (2)           (3)           (4)              (5)              (6)             (7)              (8)              (9)
Abnormal fee         0.23 (0.50)   0.31 (1.23)   0.07 (0.50)   -0.02 (0.02)     -0.03 (0.04)     -0.03 (0.03)    -0.08 (0.03)**   -0.10 (0.05)*    -0.06 (0.04)
Bond insurance fee                                                              -0.12 (0.16)                                      -1.02 (0.08)***
Paid previous        0.04 (0.80)   0.18 (1.22)   0.21 (1.32)   0.01 (0.07)      0.08 (0.12)      -0.00 (0.09)    0.61 (0.09)***   0.46 (0.10)***   0.60 (0.15)***
Moody's              0.13 (0.22)   0.42 (0.53)   -0.04 (0.20)  -1.44 (0.34)***  -0.99 (0.56)*    -0.91 (0.48)*   -2.23 (0.35)***  -2.68 (0.40)***  -1.92 (0.65)***
Moody's × Post                                                 1.78 (0.27)***   1.70 (0.50)**    0.70 (0.40)*    2.34 (0.33)***   2.76 (0.39)***   1.54 (0.63)**
Issue amount                                                   0.01 (0.03)      0.06 (0.06)      -0.02 (0.05)    0.10 (0.08)      0.38 (0.11)***   -0.18 (0.10)*
Insured                                                        -0.40 (0.14)***                                   -1.27 (0.14)***
Revenue                                                        -0.47 (0.16)***  -0.25 (0.11)**   -0.49 (0.19)**  -0.68 (0.14)***  0.36 (0.14)***   -0.63 (0.16)***
Negotiated                                                     -0.14 (0.11)     -0.01 (0.20)     -0.18 (0.18)    0.08 (0.13)      0.22 (0.17)      -0.24 (0.16)
One rater                                                      0.17 (0.15)      0.10 (0.27)      0.16 (0.42)     -0.51 (0.16)***  -0.23 (0.17)     -1.75 (0.28)***
Two raters                                                     0.02 (0.11)      0.04 (0.20)      -0.10 (0.35)    0.54 (0.18)***   0.28 (0.17)      -0.07 (0.27)
Fixed effects        Issue         Issue         Issue         Issuer + Year    Issuer + Year    Issuer + Year   Year             Year             Year
Adjusted R2          0.91          0.89          0.91          0.82             0.82             0.80            0.29             0.34             0.26
N                    2,893         1,589         1,304         2,672            1,518            1,154           2,672            1,518            1,154


Table 6 – The relation between credit ratings and residual credit rating fees The first step in the analysis uses issue-rating observations and regresses credit rating fees on Paid previous, an indicator variable taking a value of one if the issuer paid for a credit rating from the rating agency on a different issue in the previous year, and CRA × Year of issuance × Debt type × Issue amount fixed effects. We create issue amount fixed effects based on $10 million bins. Fee is the dollar fee paid by the issuer to the credit rating agency for a credit rating on the issue. We take the natural log of this variable and standardize it to follow a mean-zero, unit-variance distribution.

Fee = α + β1 Paid previous + CRA × Year of issuance × Debt type × Issue amount fixed effects + ε

We report the results in Panel A. We collect the residual values of credit rating fees from this regression. Panel B displays results from OLS regressions of credit ratings on residual credit rating fees. The dependent variable is Rating, a numerical translation of ratings produced by Moody's, Standard & Poor's, and Fitch that increases in credit quality. Aaa/AAA/AAA = 22, Aa1/AA+/AA+ = 21, and so forth. Columns (1) through (3) mimic the regressions in Table 3. Columns (4) through (6) and (7) through (9) mimic the regressions in Table 4, whereby we collapse the sample by taking averages of issue-rating observations at the issue level. Post May 7, 2010 is an indicator taking a value of one for issues after May 7, 2010, the date Moody's completed the recalibration of its municipal bond rating scale. Post April 5, 2010 is an indicator taking a value of one for issues after April 5, 2010, the date Fitch completed the recalibration of its municipal bond ratings scale. We define the other control variables in columns (1) through (3) in the legend of Table 1, and those in columns (4) through (9) in the legend of Table 2. We cluster standard errors at the issue level in columns (1) through (3) and at the issuer level in columns (4) through (9). *, **, and *** indicate statistical significance at the 10%, 5%, and 1% levels, respectively.
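The spirit of the first stage can be sketched with simple within-cell demeaning; this is a simplified illustration (it absorbs the CRA × year × debt-type × size-bin fixed effects by cell means and omits the Paid previous regressor), not the authors' estimation code:

```python
from collections import defaultdict

def residual_fees(observations):
    """For (cell, fee) pairs, where `cell` identifies a
    CRA x year x debt-type x $10M-size-bin combination, return each fee
    minus its cell mean, i.e., the fee variation not explained by the cell."""
    cells = defaultdict(list)
    for cell, fee in observations:
        cells[cell].append(fee)
    cell_mean = {c: sum(v) / len(v) for c, v in cells.items()}
    return [fee - cell_mean[cell] for cell, fee in observations]
```

The residual is what Panel B then relates to ratings: the part of the fee not explained by who rated the bond, when, what type of debt it was, and how large the issue was.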

Panel A – First stage results

                    (1)
Paid previous       -0.24 (0.04)***
Fixed effects       CRA × Year of issuance × Debt type × Issue amount
Adjusted R2         0.63
N                   19,593


Panel B – The relation between residual fees and ratings

Columns: (1) all issue-ratings, (2) insured, (3) uninsured, with issue fixed effects; (4) all issues, (5) insured, (6) uninsured, with issuer and year fixed effects; (7) all issues, (8) insured, (9) uninsured, with year fixed effects. Standard errors in parentheses.

                            (1)              (2)              (3)              (4)              (5)              (6)             (7)              (8)              (9)
Residual Fee                -0.02 (0.10)     -0.05 (0.14)     0.10 (0.09)      -0.15 (0.04)***  -0.23 (0.06)***  -0.00 (0.04)    -0.47 (0.06)***  -0.52 (0.06)***  -0.05 (0.09)
Bond insurance fee                                                                             -0.20 (0.06)***                                   -0.49 (0.04)***
Moody's                     -0.72 (0.05)***  -0.78 (0.06)***  -0.50 (0.05)***  -1.68 (0.17)***  -1.75 (0.22)***  -0.60 (0.29)**  -1.23 (0.14)***  -1.44 (0.14)***  -0.72 (0.34)**
Moody's × Post May 7, 2010  0.86 (0.07)***   1.15 (0.11)***   0.51 (0.07)***   2.11 (0.17)***   2.19 (0.21)***   0.75 (0.30)**   1.79 (0.17)***   1.99 (0.17)***   0.40 (0.36)
Fitch                       0.48 (0.06)***   0.69 (0.08)***   -0.18 (0.05)***  0.89 (0.30)***   0.48 (0.36)      0.61 (0.45)     1.47 (0.26)***   1.17 (0.30)***   0.42 (0.57)
Fitch × Post April 5, 2010  -0.21 (0.08)***  -0.11 (0.13)     0.28 (0.07)***   0.38 (0.24)      0.42 (0.33)      0.28 (0.41)     0.08 (0.28)      0.03 (0.36)      -0.48 (0.51)
Issue amount                                                                   0.00 (0.00)***   0.00 (0.00)      -0.00 (0.00)    0.00 (0.00)***   0.01 (0.00)***   -0.00 (0.00)
Insured                                                                        0.39 (0.09)***                                   -0.71 (0.12)***
Revenue                                                                        -0.47 (0.08)***  -0.37 (0.08)***  -0.49 (0.12)*** -0.45 (0.09)***  0.01 (0.09)      -0.50 (0.15)***
Negotiated                                                                     -0.02 (0.06)     0.13 (0.08)      -0.13 (0.10)    0.35 (0.08)***   0.45 (0.08)***   0.09 (0.15)
One rater                                                                      0.86 (0.09)***   0.76 (0.11)***   -0.11 (0.19)    -0.18 (0.12)     -0.21 (0.11)*    -2.05 (0.26)***
Two raters                                                                     0.45 (0.08)***   0.29 (0.10)***   -0.00 (0.16)    0.74 (0.13)***   0.44 (0.11)***   -0.00 (0.25)
Fixed effects               Issue            Issue            Issue            Issuer + Year    Issuer + Year    Issuer + Year   Year             Year             Year
Adjusted R2                 0.79             0.74             0.92             0.53             0.53             0.80            0.22             0.30             0.22
N                           19,293           13,661           5,632            11,537           8,435            3,102           11,537           8,435            3,102


Table 7 – Ratings, fees, and busy analysts This table displays results from OLS regressions with uninsured issue-rating observations. The dependent variable is Rating, a numerical translation of ratings produced by Moody's, Standard & Poor's, and Fitch that increases in credit quality. Aaa/AAA/AAA = 22, Aa1/AA+/AA+ = 21, and so forth. Fee is the dollar fee paid by the issuer to the credit rating agency for the credit rating on the issue. We take the natural log of this variable and standardize it to follow a mean-zero, unit-variance distribution. Number of CUSIPs rated (Number of issuers rated, Number of sectors rated) is the number of CUSIPs (issuers, sectors) for which the analyst has produced ratings up to the point in time the analyst produces the given rating. We standardize these three variables to follow mean-zero, unit-variance distributions. Paid previous is an indicator variable taking a value of one if the issuer paid for a credit rating from the rating agency on a different issue in the previous year. Moody's is an indicator variable taking a value of one if the observation is from Moody's, and zero if the observation is from Standard & Poor's. Post May 7, 2010 is an indicator taking a value of one for issues after May 7, 2010, the date Moody's completed the recalibration of its municipal bond rating scale. Home state is an indicator variable taking a value of one if the analyst received his or her social security number in the state of Texas. In-state is an indicator variable taking a value of one if the analyst works at a rating agency office in the state of Texas. Female is an indicator variable taking a value of one if the analyst has a traditionally female name. Rating agency tenure is the number of years over which the analyst has produced ratings up to the point in time the analyst produces the given rating. We scale this variable by ten to ease interpretation of its coefficient. We cluster standard errors at the issue level. 
*, **, and *** indicate statistical significance at the 10%, 5%, and 1% levels, respectively.
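The workload measures defined in the legend are running counts of distinct items each analyst has rated up to a point in time. A minimal sketch of this construction (an illustration, not the authors' code), shown here for CUSIPs; issuers and sectors work the same way:

```python
def running_cusip_counts(events):
    """For a chronological list of (analyst, cusip) rating events, return the
    number of distinct CUSIPs the analyst has rated up to and including each event."""
    seen = {}
    counts = []
    for analyst, cusip in events:
        seen.setdefault(analyst, set()).add(cusip)  # re-rating the same CUSIP adds nothing
        counts.append(len(seen[analyst]))
    return counts
```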


                            (1)              (2)              (3)              (4)              (5)
Fee                         0.42 (0.09)***   0.23 (0.11)**    0.16 (0.10)      0.15 (0.11)      0.18 (0.10)*
Number of CUSIPs rated                                        -0.15 (0.06)**
Number of issuers rated                                                        -0.18 (0.11)*
Number of sectors rated                                                                         -0.26 (0.11)**
Paid previous               0.34 (0.06)***   -0.01 (0.10)     -0.05 (0.10)     -0.03 (0.10)     -0.03 (0.09)
Moody's                     -0.53 (0.18)***  0.06 (0.20)      0.40 (0.29)      0.28 (0.28)      0.76 (0.43)*
Moody's × Post May 7, 2010  0.85 (0.19)***   0.90 (0.20)***   0.81 (0.23)***   0.81 (0.24)***   0.49 (0.33)
Home state                                   -0.23 (0.10)**   -0.09 (0.08)     -0.08 (0.11)     -0.06 (0.08)
In-state                                     -1.08 (0.11)***  -1.05 (0.11)***  -1.05 (0.11)***  -1.04 (0.10)***
Female                                       0.11 (0.17)      0.14 (0.16)      0.06 (0.16)      0.14 (0.16)
Advanced degree                              0.18 (0.10)*     0.22 (0.10)**    0.25 (0.11)**    0.19 (0.11)*
Rating agency tenure                         0.75 (0.07)***   0.91 (0.11)***   0.82 (0.09)***   0.98 (0.13)***
Issue fixed effects?        Yes              Yes              Yes              Yes              Yes
Adjusted R2                 0.97             0.98             0.98             0.98             0.98
N                           1,800            1,800            1,800            1,800            1,800


Table 8 – Future rating changes and fees This table displays means and standard errors of changes in issues’ credit ratings from the time of issuance to three years later. We compute changes in credit ratings by taking the difference in notches of ratings. We perform a numerical translation of ratings such that Aaa/AAA = 22, Aa1/AA+ = 21, and so forth. For example, if an issue has a rating of Aaa (or AAA) at issuance and a rating of Aa1 (or AA+) three years later, the rating change is -1 notch. The issues are first sorted by their credit ratings at issuance and then by fee quintile. The table also displays the average fee for each initial rating-fee quintile group in brackets. The table displays results for issues rated by Moody’s (Panel A) and Standard & Poor’s (Panel B). We test for differences in mean changes between most expensive and least expensive quintiles in the bottom row. Standard errors appear in parentheses. *, **, and *** indicate statistical significance at the 10%, 5%, and 1% levels, respectively.
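The notch-change computation described in the legend can be sketched directly from the numeric scale (an illustration, not the authors' code):

```python
# Investment-grade portion of the scale used in Table 8: Aaa = 22 down to Baa3 = 13.
SCALE = ["Aaa", "Aa1", "Aa2", "Aa3", "A1", "A2", "A3", "Baa1", "Baa2", "Baa3"]
TO_NUM = {r: 22 - i for i, r in enumerate(SCALE)}

def notch_change(rating_at_issuance, rating_later):
    """Change in notches from issuance to a later date; negative = downgrade."""
    return TO_NUM[rating_later] - TO_NUM[rating_at_issuance]

# An issue rated Aaa at issuance and Aa1 three years later: change = -1 notch.
```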


Panel A – Moody’s Credit rating at issuance Fee quintile Aaa Aa1 Aa2 Aa3 A1 A2 A3 Baa1 Baa2 Baa3 Least expensive Avg. change -0.06 -0.10 0.00 0.04 0.14 0.08 0.25 0.15 -0.14 0.03 SE change 0.04 0.07 0.02 0.02 0.03 0.03 0.06 0.06 0.17 0.03 N 13 25 48 93 85 76 53 23 11 6 Avg. fee $2,707 $5,026 $4,648 $2,617 $3,859 $3,775 $4,424 $3,623 $3,482 $3,750

2 Avg. change -0.11 -0.15 -0.06 0.03 0.12 0.11 0.12 0.18 0.14 0.49 SE change 0.11 0.10 0.04 0.02 0.04 0.04 0.04 0.09 0.06 0.34 N 12 25 55 92 73 67 46 14 15 5 Avg. fee $5,357 $9,297 $8,713 $5,454 $6,181 $6,181 $6,234 $5,501 $6,067 $4,900

3 Avg. change 0.00 -0.03 0.05 -0.03 0.09 0.11 0.06 0.14 0.10 0.25 SE change 0.00 0.02 0.07 0.02 0.03 0.05 0.03 0.05 0.10 0.20 N 12 25 40 98 79 70 52 18 6 6 Avg. fee $7,588 $12,422 $11,705 $8,893 $7,989 $8,079 $8,321 $6,539 $7,383 $5,892

4 Avg. change -0.03 -0.02 0.01 0.03 0.09 0.10 0.23 0.19 0.05 0.32 SE change 0.02 0.01 0.03 0.02 0.03 0.03 0.06 0.13 0.03 0.20 N 12 25 48 87 79 72 47 18 11 5 Avg. fee $11,100 $18,042 $17,088 $12,719 $11,489 $11,009 $10,812 $8,072 $8,534 $8,900

Most expensive Avg. change -0.04 -0.03 -0.01 0.00 0.10 0.11 0.16 0.17 0.12 0.33 SE change 0.04 0.03 0.02 0.02 0.05 0.05 0.05 0.06 0.18 0.21 N 12 25 47 92 78 70 49 18 10 5 Avg. fee $21,881 $41,515 $34,657 $33,404 $28,467 $26,541 $22,320 $12,572 $10,453 $14,900 Most – Least 0.01 0.07 -0.01 -0.04 -0.04 0.03 -0.09 0.02 0.26 0.31 (0.05) (0.07) (0.05) (0.03) (0.06) (0.06) (0.08) (0.09) (0.25) (0.19)


Panel B – Standard & Poor's

Credit rating at issuance:
                   AAA      AA+      AA       AA-      A+       A        A-       BBB+     BBB      BBB-
Least expensive
  Avg. change      -0.10    0.12     0.58     1.18     1.71     1.99     2.22     3.97     2.62     0.69
  SE change        0.04     0.06     0.08     0.12     0.16     0.20     0.33     0.63     0.77     0.69
  N                126      26       63       56       64       61       30       13       12       4
  Avg. fee         $5,465   $4,161   $3,318   $3,110   $3,668   $3,641   $4,155   $2,915   $4,225   $1,313
2
  Avg. change      -0.08    0.11     0.39     0.91     1.52     1.91     2.63     3.04     0.76     1.33
  SE change        0.03     0.05     0.07     0.13     0.15     0.22     0.44     0.50     0.26     0.34
  N                111      26       69       54       61       46       22       17       4        3
  Avg. fee         $8,805   $8,553   $8,009   $5,827   $6,205   $6,105   $6,872   $5,182   $6,000   $4,300
3
  Avg. change      -0.15    0.04     0.34     1.09     1.43     1.67     2.55     2.71     1.35     4.75
  SE change        0.05     0.05     0.07     0.12     0.16     0.20     0.37     0.90     0.63     1.15
  N                109      27       61       56       63       63       22       9        6        4
  Avg. fee         $10,987  $12,238  $10,917  $8,900   $8,402   $8,657   $8,837   $7,033   $7,119   $4,900
4
  Avg. change      -0.04    0.12     0.47     0.82     1.22     2.27     1.59     1.98     3.06     3.53
  SE change        0.02     0.06     0.09     0.12     0.17     0.26     0.41     0.67     0.92     1.84
  N                114      25       58       58       60       44       26       13       7        3
  Avg. fee         $14,522  $17,916  $14,388  $12,151  $11,601  $11,792  $12,532  $9,015   $8,297   $6,167
Most expensive
  Avg. change      -0.06    0.06     0.45     0.88     1.21     1.76     1.57     2.57     2.87     0.50
  SE change        0.02     0.06     0.08     0.14     0.16     0.24     0.35     0.58     1.08     0.25
  N                115      26       61       50       62       49       23       12       7        3
  Avg. fee         $25,952  $40,509  $27,211  $26,132  $25,238  $26,496  $23,594  $28,808  $13,186  $11,833
Most – Least       0.04     -0.06    -0.13    -0.30    -0.50    -0.23    -0.66    -1.40    0.25     -0.19
                   (0.04)   (0.09)   (0.12)   (0.18)   (0.23)   (0.31)   (0.49)   (0.86)   (1.30)   (0.85)


Table 9 – Does the market price expensive ratings differently? This table displays results from OLS regressions with spreads on new issues as dependent variables. The dependent variable is Spread to after-tax Treasury2, the bond's offer yield minus the after-tax yield of a duration-matched Treasury, where we calculate duration using the bond's call date if the issue is callable and the bond's time to maturity if it is not. We assume a tax rate of 35% when calculating after-tax yields for Treasuries. If the issue is a Build America Bond, we substitute its raw yield with its after-tax yield, again assuming a 35% tax rate. The independent variables in Panel A include indicator variables for the Moody's, S&P, and Fitch ratings assigned to new issues. Panel B displays results from F-tests of whether the sums of regression coefficients in Panel A are significantly different from zero. P-values appear in parentheses below the summed values. High fee is an indicator variable taking a value of one if the rating from Moody's (Standard & Poor's, Fitch) carries the highest fee among the set of ratings for the new bond issue. Controls for bond characteristics include par value, maturity measured in years, coupon, the number of bonds outstanding for the bond's issuer, and indicator variables for whether issues are revenue bonds or general obligations, Build America Bonds, issued through a negotiated or competitive process, and callable or non-callable. We cluster standard errors at the issuer level. Standard errors appear in parentheses below coefficient estimates. *, **, and *** indicate statistical significance at the 10%, 5%, and 1% levels, respectively.
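The spread construction described above can be sketched in a few lines. The function below is a hypothetical illustration, not the authors' code; it hard-codes the 35% tax rate the table assumes, and takes the duration-matched Treasury yield as an input rather than performing the matching itself.

```python
def after_tax_spread(offer_yield, treasury_yield, is_bab, tax_rate=0.35):
    """Spread to after-tax Treasury, following the Table 9 description.

    offer_yield:    the municipal bond's offer yield (%)
    treasury_yield: yield (%) of a duration-matched Treasury, where
                    duration uses the call date for callable issues and
                    time to maturity otherwise
    is_bab:         True for Build America Bonds, which are taxable and
                    therefore also converted to an after-tax basis
    """
    after_tax_treasury = treasury_yield * (1.0 - tax_rate)
    bond_yield = offer_yield * (1.0 - tax_rate) if is_bab else offer_yield
    return bond_yield - after_tax_treasury

# A tax-exempt muni at 3.0% vs. a 4.0% Treasury:
# spread = 3.0 - 4.0 * 0.65 = 0.40 percentage points
```

Treasuries are taxable while most munis are not, which is why only the Treasury leg (and any Build America Bond) is converted to an after-tax basis before differencing.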


Panel A – Step 1: Offer yield regressions

                                     Moody's (1)        S&P (2)            Fitch (3)
Aaa/AAA                              -0.48 (0.15)***    -0.77 (0.17)***    -1.08 (0.13)***
Aa1/AA+                              -0.39 (0.15)***    -0.71 (0.17)***    -0.92 (0.13)***
Aa2/AA                               -0.33 (0.14)**     -0.68 (0.17)***    -0.73 (0.14)***
Aa3/AA-                              -0.33 (0.14)**     -0.66 (0.18)***    -0.94 (0.14)***
A1/A+                                -0.24 (0.14)       -0.46 (0.18)**     -0.57 (0.14)***
A2/A                                 -0.05 (0.15)       -0.41 (0.20)**     -0.37 (0.19)*
A3/A-                                -0.00 (0.16)       -0.20 (0.18)       -0.77 (0.14)***
Aaa/AAA × High fee (M,S&P,F)         -0.38 (0.22)*       0.01 (0.25)       -0.14 (0.12)
Aa1/AA+ × High fee (M,S&P,F)         -0.40 (0.21)*       0.06 (0.24)       -0.31 (0.16)*
Aa2/AA × High fee (M,S&P,F)          -0.39 (0.21)*       0.07 (0.25)       -0.38 (0.18)**
Aa3/AA- × High fee (M,S&P,F)         -0.29 (0.23)        0.15 (0.26)       -0.35 (0.12)***
A1/A+ × High fee (M,S&P,F)           -0.34 (0.24)        0.01 (0.26)       -0.50 (0.14)***
A2/A × High fee (M,S&P,F)            -0.29 (0.23)        0.25 (0.31)
A3/A- × High fee (M,S&P,F)           -0.25 (0.24)        0.14 (0.26)
High fee (M,S&P,F)                    0.38 (0.20)*      -0.10 (0.24)        0.36 (0.11)***
Controls for bond characteristics?    Yes                Yes                Yes
Year FE?                              Yes                Yes                Yes
Adjusted R2                           0.63               0.66               0.74
N                                     20,149             18,213             7,242


Panel B – Step 2: F-tests of summed regression coefficients

                                                         Moody's (1)     S&P (2)          Fitch (3)
Aaa/AAA × High fee (M,S&P,F) + High fee (M,S&P,F)         0.00 (0.99)    -0.09 (0.07)*     0.22 (0.00)***
Aa1/AA+ × High fee (M,S&P,F) + High fee (M,S&P,F)        -0.02 (0.66)    -0.04 (0.44)      0.05 (0.69)
Aa2/AA × High fee (M,S&P,F) + High fee (M,S&P,F)         -0.01 (0.89)    -0.03 (0.68)     -0.02 (0.82)
Aa3/AA- × High fee (M,S&P,F) + High fee (M,S&P,F)         0.09 (0.32)     0.05 (0.57)      0.01 (0.93)
A1/A+ × High fee (M,S&P,F) + High fee (M,S&P,F)           0.04 (0.71)    -0.09 (0.34)     -0.14 (0.15)
A2/A × High fee (M,S&P,F) + High fee (M,S&P,F)            0.09 (0.39)     0.15 (0.41)
A3/A- × High fee (M,S&P,F) + High fee (M,S&P,F)           0.13 (0.34)     0.04 (0.77)
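The summed-coefficient tests above can be reproduced from a regression's estimates and covariance matrix using the variance-of-a-sum formula. The helper below is a hypothetical sketch of that arithmetic, not the authors' code:

```python
import numpy as np

def sum_coef_test(b1, b2, var1, var2, cov12):
    """t-test of H0: b1 + b2 = 0.

    Var(b1 + b2) = Var(b1) + Var(b2) + 2*Cov(b1, b2), so the test
    statistic is the summed coefficient divided by the square root
    of that variance.
    """
    total = b1 + b2
    se = np.sqrt(var1 + var2 + 2.0 * cov12)
    return total, total / se  # summed coefficient and its t-statistic
```

With standard errors clustered at the issuer level, as in Table 9, the variances and covariance should come from the cluster-robust covariance matrix rather than the plain OLS one.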


[Figure: scatter plot of Moody's fee ($0–150,000, y-axis) against Standard & Poor's fee ($0–200,000, x-axis)]

Panel A – Moody's and Standard & Poor's

[Figure: scatter plot of Moody's fee ($0–150,000, y-axis) against Fitch fee ($0–200,000, x-axis)]

Panel B – Moody's and Fitch


[Figure: scatter plot of Fitch fee ($0–200,000, y-axis) against Standard & Poor's fee ($0–200,000, x-axis)]

Panel C – Standard & Poor's and Fitch

Figure 1. Scatter plots of credit rating fees. This figure displays relationships between fees charged by pairs of credit rating agencies. Each point in Panel A (Panel B, Panel C) represents a bond issue rated by Moody’s and Standard & Poor’s (Moody’s and Fitch, Standard & Poor’s and Fitch). The sample includes municipal issues in Texas between 1998 and 2014. The data are publicly available from the state of Texas.


[Figure: annual series, 1998–2014, of the fraction of issues with insurance (y-axis, 0.2–0.8)]

Panel A – Percentage of all issues with insurance

[Figure: annual series, 1998–2014, of the fraction of issues insured by Ambac (y-axis, 0–0.18)]

Panel B – Percentage of issues insured by Ambac

[Figure: annual series, 1998–2014, of the fraction of issues insured by MBIA (y-axis, 0–0.1)]

Panel C – Percentage of issues insured by MBIA

Figure 2. Bond insurance usage through time. This figure displays the percentage of new municipal issues each year that are wrapped with insurance. Panel A shows all municipal issues in Texas. Panel B (Panel C) shows the percentage of new issues that carry insurance from the American Municipal Bond Assurance Corporation, Ambac (the Municipal Bond Insurance Association, MBIA). The sample includes municipal issues in Texas between 1998 and 2014. The data are publicly available from the state of Texas.


[Figure: scatter plot of SD of ratings in notches (y-axis, 0–6) against number of bonds issued (x-axis, 0–150)]

Panel A – Number of issues per issuer and standard deviation of ratings

[Figure: scatter plot of SD of ratings in notches (y-axis, 0–6) against the issuer's average credit rating (x-axis, 12–22; Aaa/AAA = 22, Aa1/AA+ = 21, etc.)]

Panel B – Average credit rating and standard deviation of ratings


[Figure: scatter plot of number of bonds issued (y-axis, 0–150) against the issuer's average credit rating (x-axis, 12–22; Aaa/AAA = 22, Aa1/AA+ = 21, etc.)]

Panel C – Average credit rating and number of issues

Figure 3. Issuer-level relations between issuance frequency, ratings volatility, and ratings levels. This figure displays scatter plots of issuer-average characteristics. For each municipality, we compute the average fee it pays per rating, the standard deviation of the ratings it receives, and the number of times it issues bonds. We numerically translate credit ratings produced by Moody's, Standard & Poor's, and Fitch such that they increase in credit quality: Aaa/AAA/AAA = 22, Aa1/AA+/AA+ = 21, and so forth. The correlation in Panel A is -0.0769 (p-value = 0.0024). The correlation in Panel B is -0.1458 (p-value = 0.0000). The correlation in Panel C is 0.1247 (p-value = 0.0000). The sample includes municipal issues in Texas between 1998 and 2014. The data are publicly available from the state of Texas.
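The 22-point numeric translation used in this figure can be sketched as a lookup table. The ordering below investment grade is our assumption, since the caption only pins down the scale from the top (Aaa/AAA = 22) downward; the issuer in the usage lines is hypothetical.

```python
import statistics

# Hypothetical lookup table for the 22-point scale described in Figure 3
# (Aaa/AAA = 22, Aa1/AA+ = 21, ...). Categories below Baa3/BBB- are our
# assumption; lists run from worst to best so indices increase in quality.
MOODYS = ["C", "Ca", "Caa3", "Caa2", "Caa1", "B3", "B2", "B1",
          "Ba3", "Ba2", "Ba1", "Baa3", "Baa2", "Baa1",
          "A3", "A2", "A1", "Aa3", "Aa2", "Aa1", "Aaa"]
SP_FITCH = ["C", "CC", "CCC-", "CCC", "CCC+", "B-", "B", "B+",
            "BB-", "BB", "BB+", "BBB-", "BBB", "BBB+",
            "A-", "A", "A+", "AA-", "AA", "AA+", "AAA"]

RATING_TO_NUM = {r: i + 2 for i, r in enumerate(MOODYS)}
RATING_TO_NUM.update({r: i + 2 for i, r in enumerate(SP_FITCH)})

# Issuer-level statistics of the kind plotted in Panels A and B,
# for a hypothetical issuer rated three times:
ratings = ["Aaa", "Aa1", "Aa2"]
nums = [RATING_TO_NUM[r] for r in ratings]
avg_rating = statistics.mean(nums)    # issuer's average credit rating
sd_notches = statistics.stdev(nums)   # ratings volatility in notches
```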


[Figure: dual-axis chart by Moody's rating category, Baa3 through Aaa, showing average fee ($, 0–25,000) and number of issues rated (0–3,000)]

Panel A – Moody's

[Figure: dual-axis chart by Standard & Poor's rating category, BBB- through AAA, showing average fee ($, 0–25,000) and number of issues rated (0–3,000)]

Panel B – Standard & Poor's


[Figure: dual-axis chart by Fitch rating category, BBB- through AAA, showing average fee ($, 0–25,000) and number of issues rated (0–3,000)]

Panel C – Fitch

Figure 4. Distribution of rating fees and number of issues rated by credit rating. This figure displays mean fees (and interquartile ranges) associated with underlying ratings produced by Moody’s (Panel A), Standard & Poor’s (Panel B), and Fitch (Panel C). The sample includes municipal issues in Texas between 1998 and 2014. The data are publicly available from the state of Texas.
