BEFORE THE COPYRIGHT ROYALTY JUDGES WASHINGTON, D.C.

In the Matter of:

Distribution of the 2010-2013 Cable Royalty Funds

Docket No. 14-CRB-0010-CD (2010-2013)

WRITTEN REBUTTAL CASE OF THE CANADIAN CLAIMANTS GROUP

Of Counsel:

Victor J. Cosentino
CA Bar No. 163672
Larson & Gaston, LLP
200 S. Los Robles Ave., Suite 530
Pasadena, CA 91101
Tel: 626-795-6001
[email protected]

Counsel for Canadian Claimants:

L. Kendall Satterfield
DC Bar No. 393953
Satterfield PLLC
1629 K Street NW, Suite 300
Washington, DC 20006
Tel: 202-355-6432
[email protected]

SUMMARY OF REBUTTAL CASE OF THE CANADIAN CLAIMANTS GROUP

BEFORE THE COPYRIGHT ROYALTY JUDGES LIBRARY OF CONGRESS WASHINGTON, DC

In re: Distribution of 2010-2013 Cable Royalty Funds

Docket No. 14-CRB-0010-CD (2010-2013)
Consolidated with Docket No. 14-CRB-0011-SD (2010-2013)

REBUTTAL CASE

OF THE

CANADIAN CLAIMANTS GROUP

The Canadian Claimants Group (“CCG”) hereby submits its Rebuttal Case in the Allocation Phase of the above-referenced proceeding, pursuant to 37 C.F.R. § 351.11 and the Copyright Royalty Judges’ Order dated July 21, 2015.

In rebuttal, the CCG presents the testimony of four witnesses:

1) Dr. Lisa M. George

Dr. George is an Associate Professor of Economics at Hunter College and the Graduate Center of the City University of New York. As an expert in media economics, she previously provided Written Direct Testimony in this proceeding as Exhibit CCG-5, and Corrected Amended Written Direct Testimony as Exhibit CCG-5A (Corrected), dated May 16, 2017.

In her Written Rebuttal Testimony (Exhibit CCG-R-1), she evaluates regression estimates of relative market value submitted in these proceedings by Joint Sports Claimants (JSC) and Commercial Television Claimants (CTV). She also responds to challenges by Settling Devotional Claimants and Public Television Claimants to the use of regression analysis in estimating the relative market value of distant signal programming, and to their proposed alternatives.

She concludes that the criticisms of Settling Devotional Claimants and Public Television Claimants regarding regressions are not well founded, and that properly constructed regression analysis is an appropriate tool for inferring the average incremental value of distant signal programming. Because regression analysis infers value from actual marketplace decisions, regression estimates represent the best approach to estimating relative market value in these proceedings. Her review of the CTV and JSC studies indicates that they systematically depressed their reported shares for CCG programming, and she provides adjustments to those studies to correct their shares for CCG programming.

2) Dr. Matthew Shum

Dr. Shum is the J. Stanley Johnson Professor of Economics in the Division of Humanities and Social Sciences at the California Institute of Technology (“Caltech”) in Pasadena, California.

In his Written Rebuttal Testimony (Exhibit CCG-R-2), Dr. Shum reviews the testimony of Dr. Jeffrey Gray, offered on behalf of Program Suppliers. Dr. Shum discusses two sets of issues. First, he presents several conceptual difficulties with using viewing as a measure of relative market value for distant signals. Second, he discusses measurement problems concerning CCG programming that arise in Dr. Gray’s viewing-based analysis, and offers adjusted share estimates that attempt to overcome these measurement problems.

Dr. Shum concludes that a viewing-based approach suffers from conceptual shortcomings and is not reliable as a primary or sole criterion for determining the relative market value of distant signal programming. He further concludes that deficiencies in Dr. Gray’s study systematically understate the distant viewing of CCG programming.

3) Dr. Frederick Conrad

Dr. Conrad is a Professor of Survey Methodology in the Institute for Social Research and a Professor of Psychology at the University of Michigan.

His Written Rebuttal Testimony (Exhibit CCG-R-3) presents a review of the surveys commissioned by JSC, known as the “Bortz” surveys, as well as the surveys commissioned by the Program Suppliers claimant group, known as the “Horowitz” surveys. In undertaking that review, he focuses on how the methods used in these surveys might affect operators’ valuation of Canadian programming.

Dr. Conrad concludes that the two surveys, especially the way their results are analyzed, are not useful for determining the relative marketplace value of CCG programming, for two reasons: the essential constant sum question requires respondents to compare several programming categories with two entire signals, and the method for combining the constant sum results creates an artificial cap on the reported value of CCG content. The result is that the surveys do not reflect the actual relative market value of the CCG content.

4) Danielle Boudreau

Ms. Boudreau previously provided testimony in this proceeding in the form of Written Direct Testimony (Exhibit CCG-1), which was corrected on May 16, 2017 (Exhibit CCG-1 (Corrected)).

In her Written Rebuttal Testimony (Exhibit CCG-R-4), she provides additional testimony that (a) describes supplemental signal content information provided to Dr. Lisa George; (b) supplements her corrected testimony regarding Devotional programming on distantly retransmitted Canadian signals, in response to claims made by the Settling Devotional Claimants in their direct case as amended and corrected; and (c) describes a minor error discovered in the CCG content data related to the Canadian Radio-television and Telecommunications Commission (“CRTC”) logs for December 2011.

Respectfully Submitted,

/s/ L. Kendall Satterfield
L. Kendall Satterfield
DC Bar No. 393953
Satterfield PLLC
1629 K Street NW, Suite 300
Washington, DC 20006
Tel: 202-355-6432
[email protected]

Dated: September 15, 2017

Counsel for Canadian Claimants Group

Of Counsel,

Victor J. Cosentino CA Bar No. 163672 LARSON & GASTON, LLP 200 S. Los Robles Ave, Suite 530 Pasadena, CA 91101 Tel: 626-795-6001 / Fax: 626-795-0016 [email protected]

CERTIFICATE OF SERVICE

I, Victor J. Cosentino, hereby certify that on this 15th day of September 2017, I caused copies of the foregoing REBUTTAL CASE OF THE CANADIAN CLAIMANTS GROUP in the matter Distribution of 2010-2013 Cable Royalty Funds, Docket No. 14-CRB-0010-CD (2010-2013), to be served by Overnight Delivery (to those marked with an *) and by Electronic Transmission on the following parties:

COMMERCIAL TELEVISION CLAIMANTS

NATIONAL ASSOCIATION OF BROADCASTERS

*John I. Stewart, Jr.
Ann Mace
David Ervin
CROWELL & MORING LLP
1001 Pennsylvania Ave. NW
Washington, DC 20004-2595

JOINT SPORTS CLAIMANTS

*Robert Alan Garrett
M. Sean Laane
Michael Kientzle
Bryan L. Adkins
ARNOLD & PORTER KAYE SCHOLER LLP
601 Massachusetts Ave. NW
Washington, DC 20001
[email protected]
[email protected]
[email protected]
[email protected]

Iain R. McPhie
Ritchie T. Thomas
SQUIRE PATTON BOGGS (US) LLP
2550 M St., N.W.
Washington, DC 20037
[email protected]
[email protected]

Michael J. Mellis
EVP and General Counsel
OFFICE OF THE COMMISSIONER OF BASEBALL
245 Park Avenue
New York, NY 10167
[email protected]

Phillip R. Hochberg
LAW OFFICES OF PHILLIP R. HOCHBERG
12505 Park Potomac Avenue, 6th Floor
Potomac, MD 20854
[email protected]

PROGRAM SUPPLIERS / MPAA

*Gregory O. Olaniran
Lucy Holmes Plovnick
Alesha M. Dominique
MITCHELL SILBERBERG & KNUPP LLP
1818 N Street NW, 8th Floor
Washington, DC 20036

PUBLIC TELEVISION CLAIMANTS

PUBLIC BROADCASTING SERVICE

*Ronald G. Dove, Jr.
Lindsey L. Tonsager
Dustin Cho
COVINGTON & BURLING LLP
One CityCenter
850 Tenth Street, NW
Washington, DC 20001-4956

R. Scott Griffin
PUBLIC BROADCASTING SERVICE
2100 Crystal Drive
Arlington, VA 22202-3785

SETTLING DEVOTIONAL CLAIMANTS

*Arnold P. Lutzker
Benjamin Sternberg
Jeannette M. Cannadella
LUTZKER & LUTZKER LLP
1233 20th Street, NW, Suite 703
Washington, DC 20036
Tel: (202) 408-7600
Fax: (202) 408-7677
[email protected]
[email protected]
[email protected]

Clifford M. Harrington
Matthew J. MacLean
Michael A. Warley
Jessica T. Nyman
PILLSBURY WINTHROP SHAW PITTMAN LLP
1200 17th Street NW
Washington, D.C. 20036
Tel: (202) 663-8000
Fax: (202) 663-8007
[email protected]
[email protected]
[email protected]
[email protected]

Victor J. Cosentino


EXHIBIT CCG-R-1

WRITTEN REBUTTAL TESTIMONY OF LISA M. GEORGE, PH.D.

Written Rebuttal Testimony of Lisa M. George, Ph.D.

2010-2013 Cable Royalty Distribution Proceeding

Docket No. 14-CRB-0010-CD (2010-2013)

September 11, 2017

My name is Lisa M. George. I am an Associate Professor of Economics at Hunter College and the Graduate Center of the City University of New York. My academic research focuses on the economics of media markets. I have previously provided Written Direct Testimony in this proceeding, as Exhibit CCG-5 and Corrected Amended Written Direct Testimony as Exhibit CCG-5A (Corrected). More details about my experience and training and a copy of my CV can be found in Exhibit CCG-5.

INTRODUCTION

I have been asked by the Canadian Claimants Group (CCG) to evaluate regression estimates of relative market value submitted in these proceedings by Joint Sports Claimants (JSC) and Commercial Television Claimants (CTV). I also have been asked to respond to challenges by Settling Devotional Claimants (SDC) and Public Television Claimants (PTV) to the use of regression analysis in estimating the relative market value of distant signal programming and to consider their proposed alternatives.

My testimony proceeds as follows:

(1) Fundamentals and General Claims. I respond to the general claims of Settling Devotional Claimants and Public Television Claimants against the use of regression analysis in these proceedings. To do this, I first review how regression analysis is used to estimate the relative market value of distant signal programming, then outline why the alternatives proposed are inferior to the regression approach.

(2) Modeling Carriage Decisions for Canadian Distant Signals. I explain that, although the conceptual framework for regression adopted by the JSC and CTV experts is valid, the particular regression models estimated by those experts do not account for the regulatory environment governing carriage of Canadian signals in the US. I argue that failure to account for the prohibition on carriage of Canadian distant signals outside the retransmission zone, and the opportunity for carriage of Canadian signals on a local basis in some markets, leads to biased estimates of the relative market value of Canadian Claimant programming.

(3) Classifying Programming on Canadian Distant Signals. I describe extensive, systematic errors in classifying programming on Canadian stations in the JSC regression data. I identify more than 11,000 CCG broadcasts, totaling more than 500,000 programming minutes, incorrectly assigned to Program Suppliers. Overall, JSC misclassified more than 20% of programming minutes on Canadian distant signals. I explain how these classification errors bias estimates of the relative market value of Canadian Claimant programming toward zero. I explain why commercial data sources lacking country-of-origin information, such as those used by JSC and (originally) by CTV, cannot be used to classify programming on Canadian stations.

(4) Adjustments to JSC Regression. I adjust the JSC regression model to reflect the legal environment for carriage of Canadian signals and adjust the JSC data to ameliorate misclassification. I then re-estimate regression coefficients and associated shares. I show that adjusting the JSC model and correcting JSC data produces a positive value for Canadian Claimant programming that is more consistent with CTV and CCG estimates.

(5) Adjustments to CTV Regression. I adjust the CTV regression model to reflect the legal environment for carriage of Canadian signals, then re-estimate regression coefficients and associated shares. I show that adjusting the CTV model increases estimates of the Canadian Claimant share.

(6) Discussion and Comparison of Adjusted Regressions. I explain why, even after adjustment, estimates of relative market value derived from the adjusted CTV model are more precise and less likely to include bias than those produced by the adjusted JSC model or cable operator surveys.

(7) Response to Settling Devotional Claimants & Public Television Claimants. I respond to specific claims against the use of regression analyses in these proceedings made by Public Television Claimants and Settling Devotional Claimants. I explain why the modified version of the JSC regression, presented in the testimony of Dr. Erkan Erdem, is fundamentally flawed and cannot be used to infer the relative market value of claimant programming.

(8) Conclusion. I summarize key elements of my analysis and conclude the testimony.

1. FUNDAMENTALS AND GENERAL CLAIMS

Dr. Mark Israel for JSC, Dr. Gregory Crawford for CTV, and I for CCG have all estimated the relative market value of distant signal programming using regression analysis of royalty payments made by cable systems. Dr. Joel Waldfogel and Dr. Greg Rosston similarly used regression analysis in prior proceedings. The objective of regression in these studies is to estimate the effect of different categories of programming minutes on royalty payments. Coefficients estimated by the regression models can be interpreted as implicit average prices per minute of programming.1 These implicit prices, when multiplied by the total number of minutes broadcast in each category, produce a value from which shares can be calculated.

1 In the case of the CTV analysis, which is estimated in logarithms, regression coefficients must be transformed to estimate implicit average prices.
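The mechanics of converting estimated coefficients into claimant shares can be illustrated with a short calculation. All per-minute prices and minute totals below are hypothetical, chosen only to show the arithmetic; they are not figures from any party's study.

```python
# Illustrative sketch: regression coefficients interpreted as implicit
# average prices per minute, multiplied by total minutes broadcast in
# each category, yield values from which shares are calculated.
# All numbers are hypothetical.

implicit_price = {"JSC": 0.08, "CTV": 0.05, "CCG": 0.02, "Other": 0.01}

total_minutes = {"JSC": 1_000_000, "CTV": 2_500_000,
                 "CCG": 900_000, "Other": 600_000}

# Value = implicit price per minute x total minutes broadcast.
value = {cat: implicit_price[cat] * total_minutes[cat]
         for cat in implicit_price}

# Each category's share is its value as a fraction of total value.
total_value = sum(value.values())
share = {cat: v / total_value for cat, v in value.items()}

print({cat: round(s, 3) for cat, s in share.items()})
```

By construction the shares sum to one, so a coefficient that is biased toward zero for one category mechanically inflates the shares of the others.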

Public Television Claimants and Settling Devotional Claimants argue that royalty payments cannot be used to infer the value of programming because the distant signal marketplace is not a “free” market but is instead governed by the terms of the compulsory license. They argue instead that carriage, alone or in conjunction with viewing, best measures relative market value.2

Dr. Matthew Shum, in his response to Dr. Jeffrey S. Gray, details conceptual shortcomings of using viewing as a primary criterion for determining the relative market value of distant signal programming. Dr. Shum emphasizes a point covered in past testimony: viewing of claimant programming is at best a measure of subscriber value, rather than the cable system value that is the subject of these proceedings. Carriage of claimant programming thus represents an improvement over viewing measures in that carriage reflects the choices of cable systems directly. However, carriage alone is simply a measure of quantity. Value is a function of both quantity and price. As an expenditure, royalty payments capture both quantity and price; as such, they represent the best metric for measuring value in these proceedings.3

2 Program Suppliers argue that viewing represents the best measure of value in these proceedings, and Dr. Jeffrey S. Gray uses regression analysis to estimate viewing on distant signals. The shortcomings of using viewing as a measure of value in these proceedings are comprehensively detailed in the testimony of Dr. Matthew Shum (Exhibit CCG-R-2).

3 The distinction is captured in a classic example from economics known as the “paradox of value,” which asks why diamonds are priced more highly than water, yet water is more valuable. Economic reasoning explains that diamonds are scarce, so they have a high value at the margin, but few are consumed. Water is abundant, so it has low value at the margin, but people use it prolifically, for example to wash cars and to maintain golf courses. But total value encompasses all units, not just the marginal ones; hence water is much more valuable. The example dates to Adam Smith, but entered the economic canon when included in Nobel laureate Paul Samuelson’s influential 1948 textbook, still widely used in its 19th edition (Paul A. Samuelson and William D. Nordhaus, Economics, New York: McGraw Hill Education, 2009). Samuelson’s explication of the paradox can be found in an edition of his textbook available online at https://books.google.com/books?id=gzqXdHXxxeAC&lpg=PA122&vq=paradox%20of%20value&pg=PA121#v=snippet&q=paradox%20of%20value&f=false, viewed September 7, 2017. I add that water is rarely sold at an unregulated market price, but its value can still be inferred from consumption decisions.

I emphasize in my direct testimony that market prices are not needed for royalty payments to convey information about the value of distant signal programming. What is needed are market decisions. In other words, as long as cable systems must balance the incremental benefits of retransmitting particular distant signals to subscribers against the incremental costs of offering this programming, observed royalty payments reveal information about marketplace value. Incremental revenues arise from subscriptions, either by attracting subscribers or increasing prices. Incremental costs arise from payments under the compulsory license. As profit-oriented firms, cable systems would not incur the costs of carrying distant signals unless the revenues from doing so exceeded those costs.

Even in the presence of minimum fees, royalty payments reveal information about the value of programming categories in relative terms. A cable system choosing to carry less than one distant signal equivalent must still select a set of distant signals to offer to different groups of subscribers, and would be expected to make this choice in a way that brings highest value to the system.

A substantial body of testimony in current and prior proceedings outlines the strategic tradeoffs cable systems make in balancing the incremental costs and benefits of carriage.4 Carriage trends since 2010, when royalty assessment switched from the system to the subscriber group level, provide further evidence of strategic choices. The sharp increase in the number of subscriber groups per system, and the increased diversity of distant signal offerings to these subscriber groups, especially highlight efforts by cable systems to maximize the value and minimize the costs of distant signal offerings.5

4 Dr. Crawford’s 2004-2005 rebuttal testimony describes the strategic environment for cable system carriage decisions. My direct statement emphasizes and expands on key points. Dr. Shum’s rebuttal testimony adds insights from the economics literature.

In addition to their claim that royalty payments do not reflect value, Settling Devotional Claimants argue that the regression technique produces results that are “unstable,” with “ludicrous” assumptions. Dr. Erdem offers modifications to Dr. Israel’s regressions to highlight this perceived instability. In my view, Dr. Erdem’s testimony reflects a fundamental misunderstanding of the regression process.

The objective of the CCG, JSC and CTV regression analyses in these proceedings is to estimate the effect of different categories of programming on royalty payments. In econometrics, this is called causal inference.6 For causal inference, the key concern is to guard against bias, which in this case would lead to incorrect estimates of the coefficients for programming minutes. Omitted variables are a potential source of bias; hence, the choice of control variables receives close attention in designing econometric models. To guard against bias, control variables are selected with the goal of capturing factors that the situation indicates are likely to be correlated with royalty payments, even if the relationship between the control variables and royalty payments is indirect. This is also why fixed effects are useful.7 Stated another way, the central role of control variables is to ensure that the programming coefficients are estimated on an “all else equal” basis.

5 The additional complexity of the distant signal marketplace since 2010 indeed makes the systematic inference available through regression analysis even more useful relative to other valuation approaches in the current proceeding. The trends described are documented in the written direct testimony of Jonda Martin submitted on behalf of the CCG, Exhibit CCG-4, at page 18, and in my written direct testimony, Exhibit CCG-5, table 5.

6 Causal inference is the central focus of modern econometrics. The concept receives prominent treatment in standard textbooks, for example Jeffrey M. Wooldridge, Introductory Econometrics: A Modern Approach (5th Edition), Mason, Ohio: South-Western Cengage Learning, 2012. The movement toward causal inference in policy analysis is summarized by leading researchers Joshua Angrist and Jörn-Steffen Pischke:

“Improvements in empirical work have come from many directions. Better data and more robust estimation methods are part of the story, as is a reduced emphasis on econometric considerations that are not central to a causal interpretation of the main findings. But the primary force driving the credibility revolution has been a vigorous push for better and more clearly articulated research designs.”

Angrist, Joshua D., and Jörn-Steffen Pischke. 2010. “The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics.” Journal of Economic Perspectives, 24(2): 3-30, available online at https://www.aeaweb.org/articles?id=10.1257/jep.24.2.3, viewed September 7, 2017.

In causal inference, issues of “model fit” are less important than guarding against omitted variable bias. That is, variables which might not discernably increase the R2 measure must still be included in the regression if they affect royalty payments and are correlated with programming minutes. Conversely, variables that do not affect royalty payments are not needed, since they typically will just worsen precision of the estimates. Changes to Dr. Israel’s regression advocated by Settling Devotional Claimants run counter to the goals of causal inference, tending to increase bias and reduce precision.
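The point about omitted variable bias can be illustrated with a small simulation using entirely made-up numbers: a control correlated with both royalties and programming minutes must be included, even if it barely changes measured fit, because omitting it shifts the minutes coefficient away from its true value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data-generating process: royalty payments depend on
# programming minutes (true coefficient 2.0) and on a system
# characteristic (e.g. system size) that is correlated with minutes.
size = rng.normal(0.0, 1.0, n)
minutes = 0.8 * size + rng.normal(0.0, 1.0, n)
royalty = 2.0 * minutes + 1.5 * size + rng.normal(0.0, 1.0, n)

def ols_slope(y, regressors):
    """OLS with an intercept; returns the coefficient on the first regressor."""
    X = np.column_stack([np.ones(len(y))] + regressors)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta[1]

b_with_control = ols_slope(royalty, [minutes, size])  # size included
b_omitted = ols_slope(royalty, [minutes])             # size omitted

# With the control included, the minutes coefficient is close to the true
# 2.0; omitting the correlated control biases it away from the true value.
print(round(b_with_control, 2), round(b_omitted, 2))
```

The simulated parameters are arbitrary; the mechanism, not the magnitudes, is the point.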

Despite a common focus on causal inference, the JSC, CTV and CCG regression analyses before the Judges differ in important ways. These differences arise from two distinct sources: (1) the modeling approaches used to represent carriage decisions; and (2) the data on the program composition of distant signals used to estimate the models. Some modeling choices are driven by the nature of the data available for estimation. For example, I chose to estimate the relative market value of Canadian Claimant programming relative to all other claimant categories combined, rather than separately estimating claimant shares, because I did not have data on the composition of US distant signals. Other modeling choices made by the economist experts reflect tradeoffs between precision and interpretability, such as the use of a non-linear (logarithmic) transformation of royalty payments. But the most important modeling choices are those that characterize the nature of the decisions made by cable system operators in the distant signal marketplace, as those are the essential choices on which inference is to be based.

7 Dr. Crawford estimates his model with subscriber group fixed effects to guard against omitted variable bias, though the technique has some limitations in the current setting. Subscriber group estimation was made possible by regulatory changes that assessed royalty fees at the subscriber group rather than the system level. However, minimum fees continue to be assessed at the system level. This means that subscriber group carriage does not perfectly map to royalty payments. In the context of Dr. Crawford’s regression, coefficient estimates and resulting value shares will tend to be less precise than the estimated standard errors would imply.

My substantive critique of Dr. Israel’s and Dr. Crawford’s modeling choices rests on their incomplete specification of the regulatory environment associated with carriage of Canadian stations in the US. The mis-specification relates to rules governing where Canadian stations are authorized for retransmission on a distant or local basis. Failure to account for rules prohibiting distant carriage or authorizing local carriage of Canadian signals leads to biased coefficients that underestimate the relative market value of Canadian Claimant programming. I outline my critique of the regression models in section 2. I adjust Dr. Israel’s model to reflect the regulatory environment in section 3 and adjust Dr. Crawford’s model in section 4.

The regression analyses before the Judges also differ in the approach to constructing the data used to estimate the models, especially the program classification by claimant category. Modern computing makes it possible to consider a much larger set of programming than has been done in the past. For Dr. Crawford’s analysis, Dr. Bennett classified close to 100% of the programming minutes broadcast on every distant signal over the years 2010-2013, using data licensed from the commercial supplier FYI. For Dr. Israel’s analysis, Mr. Trautman followed Dr. Waldfogel’s approach of classifying a sample of minutes broadcast, in this case 28 days per accounting period. Mr. Trautman and Dr. Israel used a different source of commercial data, TBS/Gracenote. (Dr. Israel also includes only the years 2010-2012, rather than 2010-2013.) For my regression, Ms. Danielle Boudreau used complete Canadian Radio-television and Telecommunications Commission (CRTC) program logs to classify 100% of programming on Canadian distant signals from 2010-2013.

In general, comprehensive data is preferred over a sample to improve precision and reduce the potential for bias in sample selection. However, the size of a dataset cannot compensate for inaccuracy. Large sets of data require algorithmic classification, which introduces issues of data validity that have not heretofore been considered by the Judges. While random classification mistakes reduce the precision of regression estimates, systematic misclassification introduces bias. I show, in section 3, that the regression data used by Dr. Israel contain substantial errors in the classification of programming on Canadian distant signals. Over 11,000 broadcasts constituting more than half a million minutes of programming (over 20% of the content on Canadian distant signals) were improperly assigned to Program Suppliers rather than Canadian Claimants, rendering the data useless for estimating the relative value of Canadian Claimant programming.8 In section 3, I argue that commercial databases such as TBS/Gracenote and FYI are unsuitable for classifying programming on Canadian distant signals. I also elaborate on the consequences of misclassification for regression results.

2. MODELING CARRIAGE DECISIONS FOR CANADIAN DISTANT SIGNALS

In this section, I explain why regression models estimating the value of programming must be designed to reflect two regulatory provisions affecting cable system demand for Canadian distant signals. I first discuss provisions governing carriage of Canadian stations on a distant basis, then discuss provisions affecting carriage of Canadian stations on a local basis. For each, I outline how my analyses account for these regulatory provisions. I explain why the regression models developed by JSC and CTV do not fully account for rules affecting demand for Canadian distant signals and, as a result, underestimate the relative market value of Canadian Claimant programming.

a. Modeling the Canadian Retransmission Zone

Cable systems located above the forty-second parallel or less than 150 miles from the US-Canadian border may elect to retransmit Canadian broadcast stations, subject to the terms of the compulsory license. Cable systems outside of this retransmission zone are prohibited from carrying Canadian broadcast stations. Since regression analysis infers value from the observed decisions of cable system operators, regression models must take into account constraints on carriage. Failure to do so can bias results, effectively treating the absence of carriage of Canadian distant signals as revealing low demand rather than reflecting a statutory constraint.

8 The initial classification Dr. Bennett produced for Dr. Crawford’s analysis using commercial data included similarly extensive misclassification. However, for Dr. Crawford’s April 11, 2017 corrected testimony, Dr. Bennett and Dr. Crawford used the same CRTC program logs used by CCG for categorizing programming on Canadian stations. The corrected CTV classification is very similar to the CCG classification.

The retransmission zone alters the set of distant signals available to cable systems in designing their product offerings. As a result, the regulatory framework impacts demand for all programming types, not solely demand for Canadian Claimant programming. Intuitively, if programming on Canadian signals substitutes to some extent for programming on other signal types, the ability to carry Canadian distant signals will reduce the demand for this programming on US distant signals. The value of US programming on distant signals would thus be expected to be lower inside the retransmission zone than outside the zone.

In my analysis for CCG, I addressed the carriage prohibition by estimating my regression model only inside the retransmission zone, so that all valuations were made in a common regulatory environment. Restricting my regression analysis to the retransmission zone was appropriate to my task of estimating the relative market value of Canadian Claimant programming relative to other distant signal programming combined, since, by definition, Canadian Claimant programming is carried only inside the retransmission zone. In a model valuing all programming categories, an alternative approach would be to allow the relationship between programming minutes in each category and royalty payments to depend on whether the minutes were broadcast inside or outside of the Canadian retransmission zone. In other words, the regression model can be modified to estimate coefficients on program minutes separately inside and outside of the retransmission area.9 In the general setting of estimating value for all claimant categories across the entire US, this approach captures the prohibition against carriage of Canadian signals outside the retransmission area. It also captures the different competitive environments for all programming categories inside and outside of the retransmission zone.

9 More technically, this flexibility is operationalized in a regression model by interacting an indicator variable for whether the system is in the retransmission zone with the minutes of programming in each category. Note that including a single control variable for the retransmission zone does not accomplish this task.
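The interaction described in footnote 9 can be sketched in a few lines: multiplying the minutes variable by an in-zone indicator lets the regression recover a separate per-minute coefficient for each regime, which a single zone dummy cannot do. The data below are simulated with arbitrary, made-up parameters purely to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical dataset: programming minutes and an indicator for whether
# the cable system sits inside the Canadian retransmission zone.
minutes = rng.uniform(0.0, 100.0, n)
in_zone = rng.integers(0, 2, n)

# Suppose the true per-minute value differs by zone (made-up numbers):
# 0.5 inside the zone, 0.9 outside it.
royalty = np.where(in_zone == 1, 0.5, 0.9) * minutes + rng.normal(0.0, 1.0, n)

# Interacting the zone indicator with minutes estimates a separate
# coefficient for each regime, rather than a single pooled price.
X = np.column_stack([
    np.ones(n),
    minutes * (in_zone == 1),   # minutes broadcast inside the zone
    minutes * (in_zone == 0),   # minutes broadcast outside the zone
])
b = np.linalg.lstsq(X, royalty, rcond=None)[0]
print(round(b[1], 2), round(b[2], 2))  # recovers the two per-minute values
```

A pooled regression of royalty on minutes alone would instead return a single coefficient between the two true values, masking the regime difference.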

Dr. Israel’s and Dr. Crawford’s regression models do not allow the value of programming to vary inside and outside of the Canadian zone. As a result, the coefficients would be expected to underestimate the value of Canadian Claimant programming. In sections 4 and 5, I adjust and re-estimate Dr. Israel’s and Dr. Crawford’s regression models. As expected, adjusting the models to take into account the retransmission zone increases estimates of the Canadian Claimant share.

b. Modeling Local Carriage of Canadian Stations

In contrast to the statutory prohibition on transmitting Canadian stations outside of the retransmission zone, some cable systems may carry Canadian broadcast signals on a local basis. (Cable systems may carry Canadian stations on a local basis when the stations are proximate to the system’s service area or when the FCC has determined that the stations are significantly viewed.) The ability to carry Canadian stations on a local basis affects the demand for distant signals, especially Canadian distant signals, and thus must be reflected in the model for accurate estimates. Failure to control for local carriage will bias the valuation of Canadian Claimant programming, effectively treating the decision not to carry Canadian signals on a distant basis as reflecting low demand, when in fact the signals are not carried on a distant basis because demand for them is high enough that they are carried locally.

In my research design for the CCG, I addressed local carriage of Canadian distant signals by including the number of local broadcast signals of five mutually exclusive signal types as control variables in the regression. (The signal types are network, US independent, Canadian independent, educational, and unclassified low power.) Neither Dr. Israel's nor Dr. Crawford's regression model fully reflects local carriage of Canadian distant signals.

In sections 4 and 5, I adjust and re-estimate Dr. Israel's and Dr. Crawford's regression models to include a breakdown of local station types.10 As expected, adjusting the models to account for local carriage of Canadian stations increases estimates of the Canadian Claimant share.

3. CLASSIFYING PROGRAMMING ON CANADIAN DISTANT SIGNALS

In this section, I describe extensive errors in classifying programming on Canadian stations in the JSC and original (uncorrected) CTV data. I explain how classification errors bias estimates of the relative market value of Canadian Claimant programming toward zero. I explain why commercial data sources used by JSC and CTV cannot be used to classify programming on Canadian stations for these proceedings.

Regression models require accurate program classification to produce unbiased estimates of reasonable precision. Random misclassification, such as might arise from typographical errors in databases, reduces the precision of estimates but does not introduce bias into regression coefficients or resulting value shares. Systematic misclassification, however, can both reduce the precision of estimates and introduce bias in coefficients and shares.

I have determined that the data Dr. Israel used to estimate his regression models misclassified more than a half million minutes of programming on Canadian distant signals. These classification errors bias regression estimates of the relative market value of Canadian Claimant programming toward zero. The original data prepared by Dr. Christopher Bennett, which Dr. Crawford used to estimate his regression models, was similarly flawed with extensive misclassification of programming on Canadian stations. Dr. Bennett and Dr. Crawford, however, substantially revised their original data, reclassifying programming on

10 I also interact these local station categories with an indicator variable for whether the system is located in the retransmission zone. The final specification thus allows the effect of both local and distant programming to differ inside and outside of the Canadian region.

Canadian stations using the same data source used by the CCG. Their corrected classification using CRTC logs is generally congruent with CCG classification.

The systematic bias in program classification in the JSC and original CTV analyses was largely due to failure to properly identify programs owned by US copyright owners on Canadian stations. I understand that the parties have agreed on the following classification for Canadian Claimant programming:

All programs broadcast on Canadian television stations, except: (1) live telecasts of Major League Baseball, National Hockey League, and U.S. college team sports, and (2) programs owned by U.S. copyright owners.

Per this definition, the crucial factor for allocating the vast majority of programming on Canadian distant signals is the country of the copyright owner. Canadian stations broadcast diverse programming produced in Canada, the United Kingdom, France and other countries. Some Canadian programming is broadcast on US networks (e.g., Rookie Blue), but Canadian copyright owners hold rights to these programs when broadcast on Canadian stations. Some international programs (e.g., BBC series The Tudors) are broadcast in the US, but Canadian copyright owners hold rights to these programs independently for broadcast on Canadian signals. In other cases, Canadian copyright owners hold rights to programs broadcast on Canadian stations that are similar to programs owned by US copyright owners and broadcast in the US (e.g., the religious program It is Written). CRTC logs correctly record all these programs on Canadian stations as Canadian. JSC assigned these programs to US claimants.

The misclassification arises in part because commercial programming databases used by CTV and JSC (FYI database in the case of CTV and TBS/Gracenote in the case of JSC) contain very limited information on whether or not programming on Canadian stations is owned by US copyright owners. For example, in the JSC sample, the country of origin was reported for less than 3% of the 133,314 broadcasts recorded on Canadian stations on sample dates. In cases where the country of origin was not missing, the country information

was often ambiguous. In contrast to commercial data, the program logs submitted by Canadian broadcasters to the CRTC include country of origin information for all but a very small fraction of broadcasts. The CCG uses CRTC logs to classify programming.

To compare CCG and JSC classification directly, I merged airtime data in the CRTC logs with comparable data in the TBS/Gracenote database. The merge identified thousands of broadcasts on Canadian stations assigned by JSC to US claimants (typically Program Suppliers) shown in CRTC logs to originate outside the US. Some errors involved misclassification of Canadian programs (e.g., Doodlebops, Busytown Mysteries, and Sue Thomas: F.B.Eye). Others involved international broadcasts for which Canadian broadcasters hold broadcast licenses (e.g., the BBC series The Tudors, Olympic coverage). Other errors involved misclassification of a Canadian program with the same name as an American program (It is Written). Table 1 lists the top 50 misclassified titles in the JSC regression data, totaling over 11,000 broadcasts and more than a half million programming minutes for sample dates.

Table 1: Top Canadian Claimant Group Broadcasts Misclassified by Joint Sports Claimants

Title | Total Broadcasts | Total Minutes
STEVEN AND CHRIS | 2,156 | 129,295
BUSYTOWN MYSTERIES | 1,416 | 36,090
BEST RECIPES EVER | 1,177 | 35,303
SUPER WHY | 1,164 | 34,904
SUE THOMAS FBEYE | 496 | 29,755
HEARTLAND | 458 | 27,480
BO ON THE GO | 852 | 25,140
XXX SUMMER OLYMPICS | 94 | 23,760
ARTZOOKA | 666 | 19,980
DOODLEBOPS | 654 | 16,350
LES DOCTEURS | 243 | 14,580
THE CAT IN THE HAT KNOWS A LOT ABOUT THAT | 366 | 10,980
XXI WINTER OLYMPICS | 43 | 9,570
FLASHPOINT | 135 | 8,090
DOODLEBOPS ROCKIN ROAD SHOW | 311 | 7,590
TRACK AND FIELD | 75 | 7,260
TENNIS | 40 | 6,600
MARKETPLACE | 209 | 6,510
FIGURE SKATING | 47 | 4,260
CURLING | 25 | 4,260
MAGINATION | 148 | 3,990
18 TO LIFE | 119 | 3,570
THE TUDORS | 56 | 3,360
SO YOU THINK YOU CAN DANCE CANADA | 54 | 3,246
PROVIDENCE | 54 | 3,240
DEX HAMILTON ALIEN ENTOMOLOGIST | 125 | 3,125
CROSS COUNTRY SKIING | 40 | 2,496
OPERATION SMILE | 42 | 2,310
THE BRIDGE | 36 | 2,161
LPGA TOUR GOLF | 8 | 1,920
SPEED SKATING | 32 | 1,920
RODEO | 16 | 1,860
TRAUMA | 31 | 1,805
YAMASKA | 26 | 1,560
EQUESTRIAN | 15 | 1,560
CANADA IN THE ROUGH | 49 | 1,470
SAVING HOPE | 24 | 1,440
ZIGBY | 93 | 1,425
POWERBOAT TELEVISION | 47 | 1,410
THE NEXT GENERATION | 46 | 1,380
BOBSLEDDING | 23 | 1,380
ARARAT | 10 | 1,239
JUSTE POUR RIRE EN DIRECT | 28 | 1,215
THE BORDER | 21 | 1,200
ADVENTURES NORTH | 40 | 1,200
THE BORGIAS | 18 | 1,080
SKIING | 8 | 960
THE NATIVITY | 8 | 960
THE QUEEN | 8 | 960
Total | 11,860 | 514,159

This table reports total number of broadcasts and total runtime for listed programs assigned to US claimants in the JSC regression sample and identified as Canadian in CRTC logs.

Complete and accurate country of origin information is necessary in the data source because claimant categories cannot be identified from titles alone. This is especially true in commercial databases where titles have been edited or "cleaned." While seemingly convenient, title editing induces errors. For example, JSC incorrectly assigned Vancouver Winter Olympic coverage on Canadian stations to Program Suppliers. This classification is incorrect: Canadian stations license broadcast rights directly from the International Olympic Committee. Canadian coverage differs substantially from US coverage, for example with more televised live broadcast hours. This diversity is more evident in CRTC logs, which report different titles for Olympic programming rather than the generic "XXI Winter Olympics" more typically reported in commercial data. Other examples include the Rogers Cup tennis tournament, generically titled as "Tennis" or "ATP Tennis" in the JSC commercial data but specifically labeled in the CRTC logs. There are other examples in the data of programming with similar names on both US and Canadian signals, but where Canadian rights are clear in CRTC logs but not in the commercial source.

The FYI database used by Dr. Bennett to classify programming for Dr. Crawford’s CTV analysis similarly lacks country information for most titles on Canadian distant signals. As a result, claimant shares produced in Dr. Crawford’s original direct statement were based on extensive misclassification. For Dr. Crawford’s corrected testimony, Dr. Crawford and Dr. Bennett used the CRTC logs to identify broadcast rights, reclassifying thousands of broadcasts on Canadian stations. The corrected data using CRTC logs is generally congruent with CCG classifications. Overall, CRTC logs should be viewed as the preferred source for content categorization on Canadian distant signals.

Program misclassification affects regression estimates in two ways, biasing both the marginal dollar value of programming per minute (derived from regression coefficients) and the quantity of minutes in each category. While it is theoretically possible for the bias in value per minute to work in either direction, misclassification of highly valuable programming such as the 2010 and 2012 Olympics suggests an underestimate of Canadian Claimant programming value per minute. The quantity distortion is unambiguous, with misclassification of CCG

programming biasing the Canadian Claimant share toward zero. Thus, from a theoretical standpoint, extensive misclassification of Canadian Claimant programming as Program Supplier programming would tend to produce estimates of value for Canadian Claimant programming that are too low. I show this to be the case in the adjusted JSC estimates presented in section 4.

To summarize, the modeling choices in Dr. Israel's testimony and Dr. Crawford's testimony systematically bias the estimated Canadian Claimant share of relative market value toward zero. Extensive misclassification of Canadian Claimant programming in JSC estimates compounds bias in the JSC modeling. In section 4, I present an adjusted version of the JSC model with revised data to demonstrate how more complete modeling and improved data produce results for Canadian claimants more in line with other regression estimates. In section 5, I present an adjusted version of the CTV model and show how more complete modeling produces higher estimates for the Canadian Claimant share.

4. ADJUSTMENTS TO JSC REGRESSION

In this section, I adjust the JSC regression and resulting share estimates to remedy both the data classification and modeling deficiencies in the original analysis described above.

Specific changes are as follows: (1) I adjust the model to allow for different program valuations inside and outside of the Canadian retransmission zone; (2) I adjust the model to control for transmission of Canadian stations on a local basis; and (3) I replace JSC program classification on Canadian distant signals, which relied on TBS/Gracenote data, with CCG categorization, derived from CRTC logs. The modified JSC analysis estimates the relative market value of Canadian Claimant programming at 6.97% over the three years 2010-2012.

a. Adjusting JSC Program Classification

To adjust the regression data, I multiply the category shares for each Canadian station in each accounting period calculated by CCG by the total broadcast hours recorded in the

TBS/Gracenote data used in the JSC regression.11 For example, in the second half of 2012, the CCG records Canadian station CBLT broadcasting 95.54% Canadian Claimant programming. The JSC data shows 40,230 total hours broadcast on the station during this accounting period. So, for the adjusted regression, Canadian minutes on the signal are recorded as 40,230 x 0.9554, or 38,435.19. This total replaces the incorrect JSC estimate of 26,330 in my adjusted analysis.
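The substitution arithmetic just described can be sketched as follows. Note that the text's 38,435.19 presumably reflects the unrounded CBLT share, so the two-decimal 95.54% figure used here yields a slightly different product:

```python
# Figures quoted in the text above; the share is rounded to two decimals.
ccg_share = 0.9554          # CCG-measured Canadian Claimant share on CBLT, second half of 2012
jsc_total_hours = 40_230    # total broadcast hours on CBLT in the JSC data

# The adjusted Canadian total replaces the original JSC figure of 26,330.
adjusted_canadian_hours = jsc_total_hours * ccg_share
print(round(adjusted_canadian_hours, 2))  # ≈ 38,435.74 with the rounded share
```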

Table 2: Comparison of JSC and CCG program categorization on Canadian stations

Call Sign | JSC Classification (%): Canadian / Prog. Suppliers / Sports / Devotional | CCG Classification (%): Canadian / Prog. Suppliers / Sports / Devotional | Errors (%)
CBAFT | 87 / 13 / 0 / 0 | 88 / 9 / 0 / 0 | -2
CBAT | 63 / 34 / 3 / 0 | 86 / 10 / 5 / 0 | -36
CBET | 63 / 33 / 4 / 0 | 89 / 6 / 5 / 0 | -41
CBFT | 87 / 12 / 0 / 0 | 88 / 9 / 0 / 0 | -1
CBLT | 60 / 36 / 4 / 0 | 85 / 10 / 5 / 0 | -42
CBMT | 60 / 36 / 4 / 0 | 85 / 10 / 5 / 0 | -42
CBOT | 60 / 36 / 4 / 0 | 85 / 10 / 5 / 0 | -42
CBUT | 60 / 36 / 4 / 0 | 85 / 10 / 5 / 0 | -41
CBWT | 60 / 36 / 4 / 0 | 85 / 10 / 5 / 0 | -43
CFCF | 41 / 58 / 1 / 0 | 53 / 45 / 1 / 0 | -30
CFTM | 86 / 14 / 0 / 0 | 64 / 31 / 0 / 0 | 25
CFTO | 41 / 58 / 1 / 1 | 54 / 45 / 1 / 0 | -32
CHCH | 52 / 48 / 0 / 0 | 64 / 35 / 0 / 0 | -23
CHLT | 82 / 18 / 0 / 0 | 67 / 30 / 0 / 0 | 19
CIII | 45 / 45 / 0 / 9 | 55 / 40 / 0 / 4 | -21
CIMT | 82 / 18 / 0 / 0 | 67 / 30 / 0 / 0 | 19
CISA | 46 / 46 / 0 / 9 | 54 / 40 / 0 / 4 | -19
CIVT | 41 / 58 / 1 / 0 | 54 / 45 / 1 / 0 | -30
CJOH | 41 / 57 / 1 / 1 | 54 / 45 / 1 / 0 | -31
CKLT | 43 / 56 / 1 / 0 | 53 / 42 / 1 / 0 | -24
CKSH | 87 / 12 / 0 / 0 | 88 / 9 / 0 / 0 | -1
CKWS | 54 / 39 / 3 / 4 | 65 / 19 / 5 / 3 | -21
CKY | 41 / 57 / 1 / 2 | 53 / 44 / 1 / 2 | -30

This table reports the share of total broadcast minutes in each claimant category on each signal, averaged over accounting periods 2010-2013 in the JSC sample.

11 Programming hours in the CRTC logs typically exclude commercial advertisements and other non-programming minutes, which are subsumed into programming hours in the JSC data. The different recording practices mean I cannot directly substitute hours from CRTC logs.


Table 2 reports corrected category shares for each Canadian signal. The left panel shows the original JSC classification and the right panel shows the reclassification based on CCG shares. For expositional clarity, data in this table are averaged over accounting periods during the years 2010-2012, though in the regression dataset I use shares for each signal in each accounting period. As previously noted, JSC did not include 2013 data in their analysis, so my adjustments similarly consider only the years 2010-2012.

The difference in the Canadian claimant share on a percentage basis is shown in the last "Errors" column. In all but two cases, CCG categorizations based on country codes are higher than JSC estimates, with classification errors ranging from 5% to over 40% of minutes. On average, JSC under-counts Canadian minutes by 21%, with most misclassified programming attributed to Program Suppliers. Note that this table presents raw signal shares and does not reflect the differences in signal carriage that are included in the regression. Signals with a higher Canadian Claimant share, such as CBC affiliates, are much more widely carried than signals with lower Canadian Claimant shares.

Table 3: Adjusted JSC Regression – Total program minutes by claimant category, accounting period average

                                               JSC          CCG          Difference (%)
Distant minutes of Joint Sports Claimants      1,203,093    1,209,305    -0.6%
Distant minutes of Program Supplier Claimants  30,914,360   30,736,544   0.6%
Distant minutes of Commercial TV Claimants     3,402,174    3,402,174    0.0%
Distant minutes of Public TV Claimants         3,159,237    3,159,237    0.0%
Distant minutes of Canadian Claimants          819,521      1,006,239    -23.6%
Distant minutes of Devotional Claimants        2,348,870    2,345,513    0.1%
Distant Network minutes                        1,836,709    1,813,322    -0.6%
Other Distant Minutes                          320,372      320,372      0.6%

This table reports total annual programming minutes in each claimant category in the regression data averaged over accounting periods. Column (1) is based on JSC program classification. Column (2) is based on CCG program classification.

Table 3 reports total distant programming minutes by claimant category averaged over the six accounting periods in the JSC sample. The first column reports the original JSC breakdown of broadcast minutes and the second column shows the adjustment based on CCG shares. In this table, minute totals reflect pro-rated signal carriage. The adjusted average based on CCG shares is 23.6% above the JSC assignment. The CTV corrected testimony increased average Canadian Claimant programming minutes by a similar amount.

Full summary statistics for the regression are reported in Appendix table A1, equivalent to table B-II-1 in Dr. Israel’s report. The table reports programming minutes by claimant category inside and outside of the Canadian retransmission zone.

b. Adjusting the JSC Model

To adjust the JSC model to reflect the regulatory prohibition on Canadian signal carriage outside of the Canadian retransmission zone, I interact the programming minutes in each claimant category with an indicator variable identifying whether the system lies in the Canadian zone. This produces two coefficient estimates for each programming category, one for inside and one for outside of the retransmission zone. For Canadian claimant programming, which is not broadcast outside of the retransmission zone, only one coefficient is estimated.

To control for systems authorized to carry Canadian signals on a local basis, I replace the local station control variable in the original regression, a system total, with the number of local channels broadcast in each of five signal categories, also interacted with the Canadian zone. The five signal categories are local Canadian stations, local US independent stations, US educational stations, US network stations, and unclassified US low power stations.

Taken together, these changes reflect that cable system decisions to carry distant signals of any type depend on the legal rules for carrying Canadian signals in the US.

c. JSC Adjusted Regression Results

Coefficient estimates on claimant programming are shown in table 4. Full regression results, equivalent to table V-I in Dr. Israel's report, are reported in Appendix table A2. The first column in table 4 shows results of the adjusted model using the original JSC program classification data. The second column shows results of the adjusted model with CCG categorization. In both specifications, the Canadian Claimant share in the Canadian retransmission zone is positive and precisely estimated. Outside of the Canadian zone the value is not estimated.

Table 4: Adjusted JSC Regression - Regression coefficients on minutes of claimant programming and Canada Zone

                     (1) JSC Classification   (2) CCG Classification
Canada Zone
Sports               4.69 (3.95)              2.56 (3.60)
Program Supplier     0.49 (0.19)              0.59 (0.18)
Commercial TV        -0.055 (0.55)            -0.076 (0.58)
Public TV            0.53 (0.34)              0.48 (0.33)
Canadian             1.18 (0.35)              0.96 (0.31)
Devotional           0.52 (0.45)              0.38 (0.45)
Outside Canada Zone
Sports               3.96 (2.93)              3.69 (2.94)
Program Supplier     0.75 (0.13)              0.76 (0.13)
Commercial TV        0.62 (0.50)              0.62 (0.50)
Public TV            1.12 (0.36)              1.11 (0.36)
Canadian             0 (.)                    0 (.)
Devotional           -2.05 (0.32)             -2.04 (0.32)

This table reports the coefficients and standard errors associated with claimant minute variables inside and outside of the Canadian retransmission zone. Column (1) is based on JSC program classification. Column (2) is based on CCG program classification. Standard errors in parentheses.

Some regression coefficients for other claimant categories have large standard errors, indicating that shares calculated for these categories are imprecise.12

Table 5 reports relative market value shares associated with the regression estimates. The top portion of the table reports overall shares for the entire US. The lower portion of the table breaks results into regions inside and outside of the Canadian retransmission zone. The adjusted model produces a CCG share of 6.01% using original JSC categorization and 6.97% using the adjusted CCG categorization. These results represent combined shares over the three years, as Dr. Israel did not provide annual estimates in his testimony.

Table 5: Adjusted JSC Regression - Implied Royalty Shares

                     (1) JSC Classification (%)   (2) CCG Classification (%)
Overall Share
Sports               30.47                         32.18
Program Suppliers    35.80                         39.18
Commercial TV        9.18                          9.37
Public TV            17.73                         18.06
Canadian             6.01                          6.97
Devotional           0.81                          0.67
Inside Canada Zone
Sports               10.47                         13.18
Program Suppliers    8.80                          11.51
Commercial TV        0.00                          0.00
Public TV            3.30                          3.48
Canadian             6.01                          6.97
Devotional           0.81                          0.67
Outside Canada Zone
Sports               20.00                         19.00
Program Suppliers    27.00                         27.66
Commercial TV        9.18                          9.37
Public TV            14.44                         14.58
Canadian             0.00                          0.00
Devotional           0.00                          0.00

The table reports implied shares by claimant group inside and outside of the Canadian retransmission zone, 2010-2012. The top panel reports shares for the entire US, the lower panels by zone. Column (1) is based on the adjusted model using JSC program classification. Column (2) is based on the adjusted model using CCG program classification.

12 The increase in the CCG share after data correction is comparable to the increase shown in Dr. Crawford’s April 2017 regression using the corrected program classification on Canadian distant signals.

As in the original analyses, shares are produced by first calculating a value contribution for each category. In other words, regression coefficients representing the dollar value of an additional minute of programming within the Canadian zone (implicit prices) are multiplied by the total pro-rated minutes (quantity) in each category within the zone. Regression coefficients representing the marginal value of programming outside the zone are similarly multiplied by total minutes in each category outside the retransmission zone. Claimant shares are calculated by dividing the value contributions by the sum of the value contributions inside and outside of the retransmission zone.

A few additional notes: As in the original analysis, all programming minutes are included in the regression estimation, since the entire composition of a signal is relevant to the cable system carriage decision. Only compensable minutes are used in calculating value contributions and resulting shares. This is the approach to estimating relative market value in this and prior proceedings. All Canadian Claimant programming is compensable.
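As a rough illustrative sketch of this share arithmetic, the per-minute values below are taken from Table 4, column (2), but the compensable minute totals are hypothetical, not the actual figures:

```python
# Value contribution = (value per minute in zone) x (compensable minutes in zone),
# summed across zones; shares divide each contribution by the grand total.
coef_in  = {"Sports": 2.56, "Canadian": 0.96}   # $/minute inside the retransmission zone
coef_out = {"Sports": 3.69, "Canadian": 0.00}   # no Canadian programming outside the zone

# Hypothetical compensable minute totals for two categories only.
minutes_in  = {"Sports": 100_000, "Canadian": 80_000}
minutes_out = {"Sports": 300_000, "Canadian": 0}

value = {k: coef_in[k] * minutes_in[k] + coef_out[k] * minutes_out[k] for k in coef_in}
total = sum(value.values())
shares = {k: round(v / total, 4) for k, v in value.items()}
```

With all six claimant categories included, the shares computed this way sum to one across categories, matching the structure of Table 5.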

The discussion above explains adjustments to the JSC model to capture the regulatory framework for carrying Canadian stations and to correct errors in JSC program classification. The Canadian claimant share in the adjusted JSC model is 6.97%.

5. ADJUSTMENTS TO CTV REGRESSION

In this section, I adjust the CTV regression model to reflect rules affecting carriage of Canadian stations and re-estimate claimant shares. Because the corrected CTV program classification on Canadian distant signals is generally congruent with the CCG classification, no adjustment to the regression data using CCG shares is required for accurate estimates of the relative market value of Canadian Claimant programming.13 With the adjusted model, I

13 I do, however, proxy CIMT for CHLT using CTV recategorized data due to incomplete information for this station.

estimate the relative market value of CCG programming to be 4.75% over the four-year period, with annual shares of 4.60%, 4.58%, 4.69%, and 5.10% for 2010-2013, respectively.

From a modeling standpoint, Dr. Crawford estimates his model at the subscriber-group level. This allows him to include system fixed effects, which are well-suited to preventing bias from unobserved factors. However, as described above, Dr. Crawford’s model does not fully account for the regulatory framework governing carriage of Canadian distant signals.

To adjust the CTV model to reflect the regulatory prohibition on Canadian signal carriage outside of the Canadian retransmission zone, I interact the programming minutes in each category with an indicator variable identifying whether the system lies in the Canadian zone. This is the same approach I used to adjust the JSC model, above, but for the fact that programming minutes are calculated at the subscriber-group level. This approach produces two coefficient estimates for each programming category, one for inside and one for outside of the retransmission zone. For Canadian claimant programming, which is not broadcast outside of the retransmission zone, only one coefficient is estimated.

Table 6: Adjusted CTV Regression - Regression coefficients on minutes of claimant programming and Canada Zone

Canadian Zone
Sports               36.347 (4.206)
Program Suppliers    2.222 (0.215)
Commercial TV        4.950 (0.701)
Public TV            1.968 (0.217)
Canadian             4.221 (0.299)
Devotional           1.229 (0.414)
Outside Canadian Zone
Sports               26.735 (7.769)
Program Suppliers    2.240 (0.253)
Commercial TV        4.419 (0.693)
Public TV            1.628 (0.241)
Canadian             0.000 (0.000)
Devotional           0.756 (0.362)

This table reports the coefficients and standard errors associated with claimant minute variables inside and outside of the Canadian retransmission zone. Coefficients and standard errors are multiplied by one million to ease interpretation. Standard errors in parentheses.

To control for systems authorized to carry Canadian stations on a local basis, I replace the local station control variable in the original regression (a subscriber group total) with the number of local channels broadcast in each subscriber group in each of five signal categories, again interacted with an indicator for systems in the Canadian zone. This is, again, the same approach I used to adjust the JSC model, above, but for the fact that totals are calculated at the subscriber-group level. The overall impact of the interaction terms is to allow the relationship between programming minutes in each category and royalty payments

to depend on the regulatory environment governing signal carriage, which is different inside and outside of the retransmission zone.

To construct the regression data for the adjusted CTV analysis, I used the corrected programming database provided by CTV during discovery to calculate total hours in each claimant category on each Canadian station in each accounting period. I merge the corrected totals by station and accounting period to the original CTV regression data (with stations), updating totals for matched Canadian stations. I sum programming hours over subscriber groups and estimate the adjusted model. I use Dr. Crawford’s methodology to calculate the marginal value of programming in each category, then to compute associated shares.
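The update step just described can be sketched with hypothetical keys and totals; plain dictionaries stand in here for the station-by-accounting-period data, and none of the values are the actual figures:

```python
# Canadian Claimant hours in the original CTV regression data, keyed by
# (call sign, accounting period); values are hypothetical.
regression_hours = {
    ("CBLT", "2012-2"): 26_330.0,
    ("WGN",  "2012-2"): 0.0,       # non-Canadian station: no corrected record exists
}

# Corrected totals computed from the corrected CTV programming database.
corrected_hours = {
    ("CBLT", "2012-2"): 38_435.0,
}

# Merge on station and accounting period: matched Canadian stations take the
# corrected total; all other rows keep their original value.
updated = {key: corrected_hours.get(key, hours)
           for key, hours in regression_hours.items()}
```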

Table 7: CTV Adjusted Regression - Average marginal value of one distant minute by claimant category and Canada Zone, 2010-2013

                     (1) Canada Zone   (2) Outside Canada Zone
Sports               1.001 (0.116)     0.736 (0.214)
Program Suppliers    0.061 (0.006)     0.062 (0.007)
Commercial TV        0.136 (0.019)     0.122 (0.019)
Public TV            0.054 (0.006)     0.045 (0.007)
Canadian             0.116 (0.008)     -
Devotional           0.034 (0.011)     0.021 (0.010)

The table reports the average marginal value of one distant minute by claimant group inside and outside of the Canadian retransmission zone, with standard errors in parentheses.

Regression coefficients for programming minutes are reported in table 6, equivalent to figure 15 in Dr. Crawford's report. The table shows a separate coefficient estimate for each programming category inside and outside the retransmission zone. Unlike the adjusted JSC regression, standard errors are small, so coefficient estimates are precise. Coefficients for most claimant categories are similar inside and outside of the retransmission zone. However, the coefficient for JSC programming is lower inside the retransmission zone, suggesting greater competition from sports programming on Canadian distant signals. Because carriage of Canadian distant signals is prohibited outside of the Canadian retransmission zone, no coefficient for Canadian claimant programming is estimated outside the zone. Summary statistics and coefficient estimates for all terms in the economic model are provided in Appendix tables A3 and A4.

Table 7 shows the marginal effects of an additional minute of programming, derived from the regression coefficients. (Because Dr. Crawford's model is estimated in logs, regression coefficients do not directly show the effect of additional programming minutes on royalty payments, but must first be retransformed.)14 The table reports effects for combined years, comparable to the last row of Dr. Crawford's figure 16. The average marginal value of Canadian claimant programming increases from 11.2 cents per minute in the corrected CTV testimony to 11.6 cents per minute with the adjusted regression.
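The retransformation can be illustrated as follows. The coefficient is the Canadian value from Table 6 (undoing that table's scaling by one million), while the royalty figures are hypothetical values chosen purely for illustration:

```python
import numpy as np

# In a log-linear model, log(royalty) = beta * minutes + ..., so the dollar effect
# of one more minute is d(royalty)/d(minutes) = beta * royalty, which varies with
# the royalty paid; the average marginal value averages this across observations.
beta = 4.221e-6  # Canadian coefficient from Table 6, with the 1e6 scaling removed

# Hypothetical royalties paid by three subscriber groups.
royalties = np.array([20_000.0, 35_000.0, 27_500.0])

avg_marginal_value = float(np.mean(beta * royalties))  # dollars per distant minute
```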

Table 8 reports combined four-year shares from the adjusted CTV model alongside those in the CTV corrected testimony. In this table, shares for the entire US are calculated as the sum, in each row, of the shares inside and outside of the Canadian zone. The shares in the Canadian zone are lower than shares outside the zone because the retransmission zone covers less than half of US territory, with about 28% of royalties

14 With a log-linear functional form, the effect of an additional minute of programming is not a constant equal to the regression coefficient, but depends also on the royalty paid in each subscriber group in each system in each accounting period. See Dr. Crawford’s testimony A.2.a for the procedure used to calculate marginal effects.

paid by systems in the Canadian retransmission zone.15 Adjustments to the CTV model increase the Canadian Claimant share from 4.23% to 4.75%.

Table 8: CTV Adjusted Regression – Royalty share by claimant group and Canada zone, 2010-2013

                     CTV Estimates   Adjusted CTV Estimates
                     All US          All US    Canada Zone   Outside Canada Zone
                     (1)             (2)       (3)           (4)
Sports               35.19           33.85     10.68         23.17
Program Suppliers    23.95           24.88     8.82          16.06
Commercial TV        17.18           17.15     4.90          12.25
Public TV            18.75           18.75     5.32          13.43
Canadian             4.23            4.75      4.75          0.00
Devotional           0.69            0.63      0.30          0.33

The table reports implied shares by claimant group inside and outside of the Canadian retransmission zone, and for the entire US, 2010-2013. Column (1) reports shares from figure 17 in the CTV corrected direct testimony. Columns (2)-(4) report results of the adjusted CTV model, overall and inside and outside the Canadian retransmission zone.

Table 9: CTV Adjusted Regression - Royalty share by claimant category and year, US total

                     2010    2011    2012    2013
                     (1)     (2)     (3)     (4)
Sports               32.93   30.89   34.71   36.63
Program Suppliers    28.73   26.39   23.74   21.08
Commercial TV        17.23   17.71   17.37   16.31
Public TV            15.57   19.79   18.99   20.42
Canadian             4.60    4.58    4.69    5.10
Devotional           0.95    0.64    0.50    0.46

The table reports implied shares by claimant group for the entire US each year 2010-2013.

15 Amended Table 3 of my Corrected Amended Direct Testimony (Exhibit CCG-5A (Corrected)).

Table 9 reports adjusted share calculations by year, with information comparable to the first four rows of figure 16 in the CTV corrected testimony. It reports shares for the US overall, equivalent to column (2) in table 8. The Canadian claimant share increased from 4.60% to 5.10% over the period 2010-2013, an increase of 11 percent.

6. DISCUSSION AND COMPARISON OF ADJUSTED REGRESSIONS

The motivation for my adjustments to the JSC and CTV regressions was reducing bias in the original JSC and CTV estimates. The biases were caused by failure to fully account for the regulatory environment associated with carriage of Canadian stations and, in the case of JSC, by extensive misclassification of Canadian Claimant programming. After adjustment, both the JSC and CTV models produce estimates of the Canadian Claimant share that are positive and precisely estimated. The adjusted results comport with theoretical expectations that cable systems would not expend resources for programming with negligible value to the system.

In selecting among the models before the Judges, the appropriate criteria are the precision and potential bias associated with coefficient estimates and resulting shares. Using these criteria, the JSC regression results, even with the adjustments above, are inferior to the adjusted CTV estimates. The regression coefficients for a number of claimants in the adjusted JSC model are not statistically significant, rendering the resulting shares imprecise. The imprecision arises in part because incorporating the legal framework for carriage of Canadian distant signals into the model requires that additional coefficients be estimated. There is not sufficient variation in the data to precisely estimate all terms. Inclusion of only three years, rather than four, and use of a small sample, rather than the whole universe of dates, further contribute to reduced precision.

It is also likely that even after adjustment, JSC coefficient estimates remain biased due to the omission of 2013 from the analysis. Figures 11 and 12 in Dr. Crawford’s corrected testimony indicate clear trends in the share of total and compensable minutes for different claimant categories. Exclusion of 2013 from the JSC analysis would tend to underestimate the value of Canadian Claimant programming and other programming with an increasing carriage trend, and overestimate the value of Program Supplier and other claimant programming with a decreasing trend.

The adjusted CTV estimates, in contrast, remain precise even after model adjustment, with coefficients for all claimant categories precisely estimated and positive both inside and outside of the retransmission zone. The fixed effects specification works to minimize the potential for bias. For this reason, the adjusted CTV estimates are superior to the adjusted JSC estimates for estimating the relative market value for all claimant categories. The original CTV model produced an estimate for the Canadian Claimant share of 4.23%. Adjusting the model to accommodate the retransmission zone increases the Canadian Claimant share to 4.75%.

It is also worth noting that I view regression estimates, suitably adjusted for the regulatory environment, as superior to estimates produced by the “Bortz” and “Horowitz” surveys of cable system operators. These surveys ask a sample of cable system operators how they would allocate a hypothetical fixed budget (constant sum) across a mix of programming categories and station types based on their relative value. Such surveys are inherently hypothetical in nature, and subject to bias in survey design and in the response of human subjects. Economists rarely use survey data in research when market data are available, as in this case.

In sum, using the criteria of maximum precision and minimum bias, in my view the adjusted CTV regression results are superior to the adjusted JSC regression and to the Bortz and Horowitz survey estimates.

7. RESPONSE TO SETTLING DEVOTIONAL CLAIMANTS & PUBLIC TELEVISION CLAIMANTS

In this section, I rebut claims by Public Television Claimants (joint testimony of Ms. Linda McLaughlin and Dr. David Blackburn) and Settling Devotional Claimants (Mr. Brown and Dr. Erdem) that challenge the use of royalty data in general or regression analysis in particular in these proceedings. In preceding sections I have responded to general arguments against the use of regression analysis. I focus in this section on specific claims.

a. Response to Public Television Claimants – Ms. Linda McLaughlin & Dr. David Blackburn

The joint testimony of Ms. Linda McLaughlin and Dr. David Blackburn for Public Television Claimants argues that royalty payments cannot be used to infer the value of programming because the distant signal marketplace is not a “free” market, but is instead governed by the terms of the compulsory license. They argue instead that past survey data, augmented with carriage trends, best measure relative market value. Specifically, Public Television Claimants argue that the value of claimant programming increases in proportion to increases in carriage.

Ms. McLaughlin and Dr. Blackburn are not correct in arguing that changes in value can be inferred from changes in carriage alone. As outlined above, distant signal carriage is a measure of quantity, while value takes into account both quantity and price. Value should be measured as expenditure. To offer a simple example, the value of ten loaves of white bread purchased at a price of $1.00 is $10.00. The value of ten loaves of wheat bread purchased at a price of $2.00 is $20.00. Although the quantities are equal, the values are not. Although the relative quantity of white bread is 50% of the total, its relative value is only 33%.

We can extend this analogy to consider a change in the quantity of white bread. If consumption rises from ten loaves to twelve (an increase of 20%) with prices unchanged, the expenditure on white bread increases from $10.00 to $12.00. Despite a 20% increase in quantity, the relative value of white bread only increases from 33.3% to 37.5%.

In the current context, distant signal instances are a useful measure of carriage, which is a quantity. Royalty payments are an expenditure, a function of both quantity and price. Educational signals are priced at 0.25 distant signal equivalents (DSE), while Canadian and US independent stations are priced at 1.00 DSE. The lower incremental cost of carrying educational distant signals implies that cable systems require commensurately lower incremental benefits to justify carriage, so educational distant signals are widely carried. But just as the relative value of white bread in the example above is 33.3%, not 50%, an increase in carriage of educational signals by 20% does not imply an increase in value of 20%. For educational programming, which can be carried by cable systems at a lower price, the increase in value is substantially less than the increase in quantity alone. The relative value of claimant programming must take into account both price and quantity, which is best measured by royalty expenditure.
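The distinction between quantity shares and value (expenditure) shares can be expressed in a short calculation, using the hypothetical bread numbers from above:

```python
# Hypothetical bread example from the text: value shares must weight
# quantities by price, so quantity shares and value shares differ.

def shares(quantities, prices):
    """Return (quantity shares, expenditure shares) for parallel lists."""
    spend = [q * p for q, p in zip(quantities, prices)]
    total_q, total_s = sum(quantities), sum(spend)
    return ([q / total_q for q in quantities],
            [s / total_s for s in spend])

# Ten loaves of white bread at $1.00 and ten loaves of wheat at $2.00.
q_shares, s_shares = shares([10, 10], [1.00, 2.00])
print(q_shares)               # [0.5, 0.5] -- equal quantities
print(round(s_shares[0], 3))  # 0.333 -- white bread is only a third of the value

# A 20% rise in white-bread quantity (10 -> 12 loaves), prices unchanged:
_, s_shares2 = shares([12, 10], [1.00, 2.00])
print(round(s_shares2[0], 3))  # 0.375 -- the value share rises far less than 20%
```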

Also, as discussed in my direct statement (Exhibit CCG-5, Section VIII), the CCG has in the past used calculations of expenditure on distant signals to estimate the relative market value of Canadian distant signals. The process, historically referred to as “fee generation,” in principle can deliver reasonable estimates of relative value because it is based on actual carriage decisions of the full population of cable systems. However, as outlined in my direct statement, fee generation estimates can be sensitive to assumptions made in allocating royalties to signals, as the regulatory pricing framework is not fully deterministic. Regression analysis systematizes inference made through carriage decisions, which is one reason it represents a superior approach for estimating the relative market value of claimant programming.

To summarize, Ms. McLaughlin and Dr. Blackburn correctly observe that carriage of educational distant signals has increased relative to other signal types, a fact that is documented in my written direct statement and in the testimony of Ms. Jonda Martin.16 However, the Public Television Claimants are not correct in arguing that wider carriage implies higher relative market value, as value cannot be inferred without information on changes in expenditure. Royalty expenditures are a more complete and accurate determinant of value.

b. Response to Settling Devotional Claimants – Mr. Brown

Mr. Brown claims, in paragraph 26 of his direct statement, that regression analyses submitted in these proceedings are conceptually and methodologically flawed and cannot be used to estimate the relative market value of distant signal programming. He lists the following reasons:

(1) Because the independent variables (i.e., control variables) used in the regression analyses are only tangentially related to the value of programming;
(2) Because the primary dependent variable in the regressions (i.e., royalty payments) arises not from a free market but is set according to a regulatory formula;
(3) Because the use of broadcast minutes fallaciously links viewing and value; and
(4) Because regression results are not robust to changes in control variables.

Each of these claims is wrong. I have addressed all of these points conceptually in section 1, and offer additional specific information below:

1. Claim: Control variables are tangentially related to value. Mr. Brown argues, in point 26.i, that the independent (i.e., control) variables included in the regression, such as system subscribers in the prior accounting period, the number of broadcast channels, and demographic controls, bear only a “tangential” relationship to “programming decisions.” Mr. Brown likens the inclusion of these variables to studying restaurant quality by measuring the quality of silverware.

16 Carriage trends derived from CDC data are reported in section VIII of my written direct statement, Exhibit CCG-5A (Corrected), and in tables 4a, 4b, 5a, and 5b of the written direct statement of Ms. Martin, Exhibit CCG-4, pages 19-20. It should be noted that Dr. Crawford finds substantial duplication of programming on educational distant signals, suggesting that net increases in Public Television Claimant programming are less than the overall increases documented by CDC.

This argument is not correct. Recall from section 1 that the focus of regression analysis in these proceedings is causal inference regarding how programming minutes of different types impact royalty payments. For causal inference, the purpose of the controls is to guard against bias from omitted variables. We do not attribute causality to the control variables and do not seek to interpret their coefficients. However, even indirect factors that affect royalty payments and are correlated with programming must be included in the econometric model; otherwise, estimates of programming value will not be correct.

Both intuition and evidence indicate that cable systems make carriage decisions in the context of system and subscriber characteristics; hence, these factors must be included in the model. To illustrate one example, Dr. Waldfogel’s 2004-2005 direct statement reports that distant signal carriage is higher in markets with fewer local signals, consistent with the academic literature that small markets with fewer locally-targeted offerings tend to “import” more programming.17 This evidence that cable system operators select distant signals in part to compensate for limited local offerings necessitates controls for market and system characteristics in the regression.
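The omitted variable logic can be sketched numerically (a synthetic illustration with made-up coefficients, not estimates from the record): when a market characteristic that affects royalties is correlated with carriage and is left out, the carriage coefficient absorbs its effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Synthetic data with made-up coefficients: royalties depend on distant
# programming minutes (true effect 2.0) AND on a market characteristic
# (e.g., few local signals) that is itself correlated with carriage.
market = rng.normal(0, 1, size=n)                   # market characteristic
minutes = 0.8 * market + rng.normal(0, 1, size=n)   # carriage correlated with it
royalty = 2.0 * minutes + 3.0 * market + rng.normal(0, 1, size=n)

def minutes_coefficient(*controls):
    """OLS coefficient on minutes, with any supplied controls."""
    X = np.column_stack([np.ones(n), minutes, *controls])
    return np.linalg.lstsq(X, royalty, rcond=None)[0][1]

print(round(minutes_coefficient(), 2))        # control omitted: biased well above 2.0
print(round(minutes_coefficient(market), 2))  # control included: close to 2.0
```

Under these assumed parameters the omitted-control estimate lands near 3.5 rather than 2.0, which is why controls that affect royalties and correlate with programming belong in the model even though their own coefficients are never interpreted.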

2. Claim: Royalty payments cannot be used to infer value. Mr. Brown claims that royalty payments are based on a regulatory formula and cannot be used for inferring value. I discuss this topic in section 1, above. I emphasize here that royalty payments are not determined by a regulatory formula; only royalty prices are determined by regulation. The distinction is crucial. Royalty expenditures depend on both prices and carriage decisions. Even in an environment with regulated prices, cable systems face incremental costs for carrying distant signals. Profit-oriented firms would not bear these costs unless the benefits in terms of attracting subscribers and maintaining prices exceed the costs. Even in the presence of minimum fees, carriage choices reflect the relative value of different sorts of programming to cable systems. Distant signal carriage decisions reveal information on the value of distant signals. Regression analysis is a tool for aggregating information revealed through carriage choice.

17 Statement of Joel Waldfogel, Distribution of 2004 and 2005 Cable Royalty Funds, Docket No. 2007-3 CRB CD 2004-2005, pp. 4-5.

3. Claim: The use of broadcast minutes in regression fallaciously links viewing and value. Mr. Brown claims, in point 26.iii, that Dr. Israel’s and Dr. Crawford’s reliance on viewing is “fallacious” because it neglects that some program types have high value despite low airtime and vice versa.

This claim confounds two issues. First, to be clear, neither Dr. Israel’s, Dr. Crawford’s, nor my regression model is estimated with viewing data. The models are estimated with broadcast minutes. Dr. Shum comprehensively details why distant signal viewing should not be used as a primary criterion for determining the relative market value of distant signal programming. (See Exhibit CCG-R-2.) Indeed, even as a measure of subscriber value, viewing falls short because it captures only the quantity of viewing, not the intensity of preference for programming.

Second, Mr. Brown misconstrues the nature of regression estimates of value. Some programs within each category are more valuable than others to cable systems, either through their ability to attract more (or more desirable) subscribers than other programs, or by allowing cable systems to charge higher prices for channel bundles. Other programs within the same category may be less attractive to the cable system. Regression coefficients effectively measure the average value of programming in each category as a whole, as revealed through carriage decisions and resulting royalty expenditure of cable systems. The approach takes into account that the value of individual programs within the category may differ, even though programs are not considered individually.
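The point that a regression coefficient captures the average per-unit value of a heterogeneous category can be illustrated with a stylized simulation (synthetic data with made-up values; not the record data or any party’s actual model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic systems: minutes in one programming category, where the
# per-minute value of individual programs varies around a mean of 2.0.
minutes = rng.uniform(100, 1000, size=n)
per_minute_value = rng.normal(2.0, 0.5, size=n)   # heterogeneous within category
royalty = per_minute_value * minutes + rng.normal(0, 50, size=n)

# OLS of royalty on minutes recovers approximately the AVERAGE
# per-minute value for the category, not any single program's value.
X = np.column_stack([np.ones(n), minutes])
beta = np.linalg.lstsq(X, royalty, rcond=None)[0]
print(round(beta[1], 2))  # close to 2.0, the category's mean per-minute value
```

Individual programs in this sketch are worth anywhere from well below to well above the mean, yet the estimated coefficient aggregates them into a category-level average, exactly the sense in which regression shares value a category as a whole.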

Further, it is important to keep in mind that the goal of these proceedings is to estimate the relative market value of programming on distant signals, not the value of programming on local signals. As outlined in Dr. Crawford’s 2004-2005 report, the value of programming in local markets is linked to mass viewership, which drives advertising revenue. In the distant signal market, the value lies in attracting niche subscribers with differentiated programming. Differentiation, rather than viewing, is thus most closely associated with distant signal value. Both my direct statement and Dr. Crawford’s testimony discuss the impact of duplicative programming on coefficient estimates at length, and Dr. Crawford estimates value shares without duplicative programming on US distant signals. Mr. Brown’s claims regarding viewing and value of short programming such as weather alerts are thus not directly relevant to the distant signal market.

4. Claim: Regression results are not robust to changes in control variables. In point 26.vii, Mr. Brown argues that regression results are highly contingent on the included controls, citing Dr. Erdem’s testimony. I respond in detail to Dr. Erdem’s claims below. The changes to control variables suggested by Dr. Erdem change the economic interpretation of the coefficients of interest, and so are econometrically invalid.

c. Response to Settling Devotional Claimants – Dr. Erkan Erdem

Dr. Erdem’s critique of regression analysis echoes that of Mr. Brown. Dr. Erdem argues that regression analyses cannot be used to calculate royalties because: (1) royalty prices are not set in a free market; (2) program value cannot be inferred from quantity alone; and (3) regression estimates are sensitive to model specification and other choices of the econometrician (controls included, variable transformations, and influential observations). Dr. Erdem supplements his arguments with a new set of regression results that modify Dr. Israel’s model to add a “control” for distant subscribers, to remove what he considers to be influential observations, and to transform control variables.

I have responded to Dr. Erdem’s general claims above. In this section, I focus on explaining why his adjustments to Dr. Israel’s regression model do not produce meaningful results and should not be considered by the Judges in determining royalty allocations.

First, and most generally, Dr. Erdem’s proposed allocation is based on an adjustment to Dr. Israel’s model. I show, above, that Dr. Israel’s estimates are invalid because his model fails to capture the regulatory environment for carriage of Canadian signals in the US and his data fail to correctly categorize programming on Canadian distant signals. For this reason alone, the SDC model should not be used as a criterion for royalty allocation.

Second, recall that for causal inference, the key concern is to guard against bias in estimating coefficients on programming minutes. Omitted variables are a potential source of bias. Factors that are correlated with royalty payments and with programming must be included in the econometric model even if their contribution to “model fit” is inconsequential. At the same time, factors that do not affect royalty payments should not be included. Dr. Erdem’s modifications run counter to the goals of causal inference, with his proposed changes tending to reduce precision and increase bias in coefficient estimates. Specifically:

1. Alternative control variables. To support a critique that model estimates are sensitive to inclusion of different control variables, Dr. Erdem re-estimates Dr. Israel’s model adding what might seem to be a plausible control variable, the number of distant subscribers. In fact, adding this control variable destroys the causal interpretation of the regression coefficients and renders the results useless.

To see this, note that all of the regression models control in some way for lagged total subscribers because the prices a cable system must pay for each DSE of distant signal carriage are a function of the number of subscribers. The number of distant subscribers, however, is an outcome of distant signal carriage decisions, not an independent variable. In other words, the number of distant subscribers is functionally related to the total number of subscribers, the mix of distant signals carried, and the cable system’s choice of where to distribute those signals. What this means in practice is that causal estimation breaks down and it is no longer possible to interpret the regression coefficients.
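The consequence of adding an outcome variable as a “control” can be demonstrated with a small simulation (synthetic data and assumed functional forms for illustration; this is not Dr. Erdem’s actual specification):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000

# Synthetic data: distant subscribers are an OUTCOME of the carriage
# decision, and royalties flow from the subscribers the signals reach.
minutes = rng.uniform(0, 100, size=n)
distant_subs = 5.0 * minutes + rng.normal(0, 10, size=n)   # outcome of carriage
royalty = 1.0 * distant_subs + rng.normal(0, 5, size=n)

def slopes(y, *cols):
    """OLS slope coefficients (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

# Without the bad control: the total effect of minutes (~5.0) appears.
print(slopes(royalty, minutes))
# With distant subscribers added as a "control": the minutes coefficient
# collapses toward zero, because the causal channel has been blocked.
print(slopes(royalty, minutes, distant_subs))
```

In this sketch carriage generates royalties entirely through the subscribers it reaches, so conditioning on distant subscribers makes the carriage variable look worthless even though it drives all of the royalty revenue.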

The correct approach to accounting for the share of a system covered by a distant signal is the one followed by Dr. Israel and myself: pro-rating broadcast minutes in each category by the share of the system with access to the distant signal. Dr. Crawford’s approach of estimating his model at the subscriber group level accomplishes the same task.

2. Variable transformations. Dr. Erdem modifies Dr. Israel’s model by taking logarithms of control variables. He argues that if the transformed variables are statistically significant they would improve “model fit” and should be used in the regression.

The decision to transform variables in an econometric model is not arbitrary, as transformations affect the economic interpretation of coefficients. Transformation decisions in economic settings thus depend crucially on the theoretical relationship between the dependent variable and the independent variables, something Dr. Erdem ignores in his analysis. There is no theoretical justification for log transformation of independent variables in Dr. Israel’s model, or in any of the regression estimates aimed at estimating the relative market value of claimant programming. Further, as argued above, the notion that control variables should be included based on their contribution to R2 is not consistent with the goals of causal inference.

3. Outliers and influential observations. Some observations in regression have a larger impact on coefficient estimates than others. Dr. Erdem proposes dropping the more influential data points, again in misguided pursuit of “fit.” Dropping observations is bad practice because doing so can introduce bias.

Proper treatment of outliers and influential observations depends on the purpose of a model. In the current proceedings, the regression analyses work from the full population of Form 3 cable systems, not a sample. Dr. Israel does work with a random sample of dates, which were selected to be representative of the broadcast calendar to avoid selection bias. Dropping dates destroys the randomness of the sample. Dropping cable systems on particular dates also destroys the integrity of the population. In this way, dropping observations from regression data can introduce bias. Given that the goal of the regression is to estimate programming coefficients with maximum precision and minimum bias and not to improve “fit,” dropping observations runs against this goal and erodes the validity of resulting estimates.
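The bias introduced by discarding extreme observations can be illustrated with a minimal simulation (synthetic data; a general illustration rather than a re-analysis of any submitted model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Synthetic data with a known slope of 2.0.
x = rng.uniform(0, 10, size=n)
y = 2.0 * x + rng.normal(0, 2, size=n)

def ols_slope(x, y):
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

full = ols_slope(x, y)

# "Cleaning" the data by discarding the largest outcomes (here, the top
# 5% of y) systematically removes high-value observations, so the
# estimated slope is pulled below the true value of 2.0.
keep = y < np.quantile(y, 0.95)
trimmed = ols_slope(x[keep], y[keep])

print(round(full, 2))     # close to 2.0
print(round(trimmed, 2))  # attenuated below 2.0
```

The full-population estimate recovers the true slope, while the “cleaned” estimate is systematically attenuated: dropping observations based on their values changes what the regression measures, not just how noisy it is.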

In sum, Dr. Erdem’s modifications to Dr. Israel’s regression undermine the economic interpretation of the coefficients on distant signal programming minutes, so that they no longer reflect the incremental value of programming for claimant categories. His changes to control variables and the sample both reduce precision and introduce bias. As a result, the relative market value of distant signal programming cannot be credibly estimated from Dr. Erdem’s regression.

8. CONCLUSION

In this rebuttal testimony, I respond to general claims against the use of regression analysis of royalty payments to estimate the relative market value of distant signal programming. I emphasize that as long as cable systems bear incremental costs for carrying additional distant signals, regression analysis of royalty payments is an appropriate tool for inferring the relative market value of distant signal programming. I further maintain that because regression analysis infers value from actual marketplace decisions, regression estimates represent the best approach to estimating relative market value in these proceedings. I explain why challenges and adjustments to regression analyses made by Settling Devotional Claimants are not consistent with the goals of causal inference.

I assess the particular regression estimates submitted by JSC and CTV. I explain how both studies fail to account for the legal framework governing carriage of Canadian stations. As a result, both studies underestimate the value of Canadian Claimant programming. I further show that data used by JSC to estimate their regression contains substantial classification errors. I argue that commercial databases that lack country information cannot be used to classify programming on Canadian distant signals.

In sections 5 and 6, I adjust the JSC and CTV models to account for rules governing carriage of Canadian stations on a distant and local basis. I also replace the JSC data to correct misclassification of programming on Canadian distant signals. I re-estimate the models and calculate adjusted shares.

On a four-year basis, the adjusted JSC model produces an estimate for the Canadian Claimant share of 6.97%. The adjusted CTV model produces a four-year estimate of 4.75%, with annual shares of 4.60%, 4.58%, 4.69%, and 5.10% in the years 2010-2013, respectively. My analysis shows that when models are estimated with accurate data and suitably account for the legal environment, all produce estimates of the Canadian Claimant share that are positive and statistically significant. This finding comports with theoretical expectations that cable systems would not expend resources for programming with negligible impact on system profits.

I argue that in selecting among the models for a final estimate of relative market value for all claimant categories, the appropriate criteria should be the precision and potential bias associated with coefficient estimates and resulting shares. Using these criteria, the JSC regression results, even with adjustments above, are inferior to CTV estimates. Nevertheless, both, in my view, are superior to survey estimates.


Appendix


Table A1: JSC Adjusted Regression - Summary Statistics

                                                                          (1) Mean   (2) Standard Deviation
Royalty paid                                                               100,750     228,016
JSC Distant minutes of Canadian Claimants -- Outside Canada Zone                 0           0
JSC Distant minutes of Program Supplier Claimants -- Outside Canada Zone    23,029      21,479
JSC Distant minutes of Joint Sports Claimants -- Outside Canada Zone           893         848
JSC Distant minutes of Commercial TV Claimants -- Outside Canada Zone        2,637       3,259
JSC Distant minutes of Public TV Claimants -- Outside Canada Zone            2,298       4,995
JSC Distant minutes of Devotional Claimants -- Outside Canada Zone           1,739       2,699
JSC Distant Network minutes -- Outside Canada Zone                           1,422       3,337
JSC Other Distant Minutes -- Outside Canada Zone                               242       2,759
JSC Distant minutes of Canadian Claimants -- Canada Zone                       886       5,347
JSC Distant minutes of Program Supplier Claimants -- Canada Zone             9,651      18,559
JSC Distant minutes of Joint Sports Claimants -- Canada Zone                   381         758
JSC Distant minutes of Commercial TV Claimants -- Canada Zone                  964       2,363
JSC Distant minutes of Public TV Claimants -- Canada Zone                    1,055       3,487
JSC Distant minutes of Devotional Claimants -- Canada Zone                     747       2,064
JSC Distant Network minutes -- Canada Zone                                     518       2,116
JSC Other Distant Minutes -- Canada Zone                                        90       1,320
CCG Distant minutes of Canadian Claimants -- Outside Canada Zone                 0           0
CCG Distant minutes of Program Supplier Claimants -- Outside Canada Zone    23,029      21,479
CCG Distant minutes of Joint Sports Claimants -- Outside Canada Zone           893         848
CCG Distant minutes of Commercial TV Claimants -- Outside Canada Zone        2,637       3,259
CCG Distant minutes of Public TV Claimants -- Outside Canada Zone            2,298       4,995
CCG Distant minutes of Devotional Claimants -- Outside Canada Zone           1,739       2,699
CCG Distant Network minutes -- Outside Canada Zone                           1,422       3,337
CCG Other Distant Minutes -- Outside Canada Zone                               242       2,759
CCG Distant minutes of Canadian Claimants -- Canada Zone                     1,085       6,340
CCG Distant minutes of Program Supplier Claimants -- Canada Zone             9,462      18,245
CCG Distant minutes of Joint Sports Claimants -- Canada Zone                   388         780
CCG Distant minutes of Commercial TV Claimants -- Canada Zone                  964       2,363
CCG Distant minutes of Public TV Claimants -- Canada Zone                    1,055       3,487
CCG Distant minutes of Devotional Claimants -- Canada Zone                     743       2,061
CCG Distant Network minutes -- Canada Zone                                     492       2,076
CCG Other Distant Minutes -- Canada Zone                                        90       1,320
Number of system subscribers in the previous accounting period              57,498     129,589
Number of active channels in the previous accounting period                    314         157
Median Income                                                               47,628       9,681
Broadcast Channels total                                                        22          14
Indicator for 3.75% fee                                                       0.28        0.45
Indicator for minimum fee                                                     0.44        0.50
Local Network Stations -- Outside Canada Zone                                 4.35        3.94
Local Educational Stations -- Outside Canada Zone                             2.75        3.75
Local Low Power Stations -- Outside Canada Zone                               0.14        0.46
Local Canadian Stations -- Outside Canada Zone                                0.00        0.00
Local Independent Stations -- Outside Canada Zone                             7.09        7.09
Local Network Stations -- Canada Zone                                         1.99        4.05
Local Educational Stations -- Canada Zone                                     1.36        3.16
Local Low Power Stations -- Canada Zone                                       0.03        0.21
Local Canadian Stations -- Canada Zone                                        0.09        0.44
Local Independent Stations -- Canada Zone                                     2.76        5.80
Observations                                                                  5465

CCG Claimant minutes calculated as CCG shares times total station hours in the JSC data from TMS.


Table A2: JSC Adjusted Regression - Regression Results

                                                                  (1) JSC Classification   (2) CCG Classification
Canada Zone=0 # JSC Distant minutes of Joint Sports Claimants          3.96 (2.93)              3.69 (2.94)
Canada Zone=1 # JSC Distant minutes of Joint Sports Claimants          4.69 (3.95)              2.56 (3.60)
Canada Zone=0 # JSC Distant minutes of Program Supplier Claimants      0.75*** (0.13)           0.76*** (0.13)
Canada Zone=1 # JSC Distant minutes of Program Supplier Claimants      0.49** (0.19)            0.59** (0.18)
Canada Zone=0 # JSC Distant minutes of Commercial TV Claimants         0.62 (0.50)              0.62 (0.50)
Canada Zone=1 # JSC Distant minutes of Commercial TV Claimants        -0.055 (0.55)            -0.076 (0.58)
Canada Zone=0 # JSC Distant minutes of Public TV Claimants             1.12** (0.36)            1.11** (0.36)
Canada Zone=1 # JSC Distant minutes of Public TV Claimants             0.53 (0.34)              0.48 (0.33)
Canada Zone=0 # JSC Distant minutes of Canadian Claimants              0 (.)                    0 (.)
Canada Zone=1 # JSC Distant minutes of Canadian Claimants              1.18*** (0.35)           0.96** (0.31)
Canada Zone=0 # JSC Distant minutes of Devotional Claimants           -2.05*** (0.32)          -2.04*** (0.32)
Canada Zone=1 # JSC Distant minutes of Devotional Claimants            0.52 (0.45)              0.38 (0.45)
Canada Zone=0 # JSC Distant Network minutes                           -0.024 (0.43)            -0.032 (0.43)
Canada Zone=1 # JSC Distant Network minutes                            0.21 (0.54)              0.13 (0.53)
Canada Zone=0 # JSC Other Distant Minutes                              1.00 (0.55)              1.00 (0.55)
Canada Zone=1 # JSC Other Distant Minutes                              2.34** (0.82)            2.27** (0.83)
Number of system subscribers in the previous accounting period         1.29*** (0.062)          1.29*** (0.062)
Number of active channels in the previous accounting period            109.1*** (15.7)          109.0*** (15.7)
Median Income                                                          0.25 (0.32)              0.25 (0.32)
Indicator for 3.75% fee                                                41151.5*** (4959.9)      41009.7*** (4957.0)
Indicator for minimum fee                                              10944.8** (3334.8)       10845.9** (3339.6)
Local Network Stations -- Outside Canada Zone                         -1724.5 (1576.3)         -1733.9 (1577.0)
Local Educational Stations -- Outside Canada Zone                      5864.5*** (1340.9)       5874.4*** (1341.3)
Local Low Power Stations -- Canada Zone                                9307.5 (5700.0)          9325.9 (5700.2)
Local Canadian Stations -- Outside Canada Zone                         0 (.)                    0 (.)
Local US Independent Stations -- Outside Canada Zone                   2346.6** (818.4)         2337.3** (818.5)
Local Network Stations -- Canada Zone                                  2098.5 (1450.2)          2096.6 (1441.9)
Local Educational Stations -- Canada Zone                             -1977.7 (1812.3)         -1918.9 (1810.3)
Local Low Power Stations -- Outside Canada Zone                       -51430.4*** (10713.5)    -51764.1*** (10782.3)
Local Canadian Stations -- Canada Zone                                -20155.2** (6421.4)      -18649.8** (6338.7)
Local US Independent Stations -- Canada Zone                           2023.2* (1028.4)         1966.7 (1031.3)
Account period 20102                                                  -3906.1 (4674.1)         -3265.8 (4635.6)
Account period 20111                                                  -4211.4 (4924.5)         -3708.3 (4927.9)
Account period 20112                                                  -1440.9 (5284.7)         -1304.1 (5289.2)
Account period 20121                                                   6100.7 (5957.1)          6285.3 (5960.5)
Account period 20122                                                   964.3 (6345.0)           1495.0 (6335.1)
Constant                                                              -91409.4*** (17828.9)    -91491.1*** (17832.7)
Observations                                                           5465                     5465
Adjusted R2                                                            0.705                    0.705

The table reports the coefficients and standard errors for all regressors in the econometric model. Column (1) is based on JSC program classification. Column (2) is based on CCG program classification. Standard errors in parentheses. One asterisk indicates p<0.05, two asterisks indicate p<0.01, and three asterisks indicate p<0.001.

Table A3: CTV Adjusted Regression - Summary Statistics

                                                                                (1) Mean   (2) Standard Deviation
Royalty                                                                           27,534    (97,657)
Distant Minutes in Canadian Retransmission Zone
• Joint Sports Claimants                                                           4,677     (6,774)
• Program Supplier Claimants                                                     157,027   (263,949)
• Commercial TV Claimants                                                         22,722    (47,898)
• Public TV Claimants                                                             77,599   (185,968)
• Canadian Claimants                                                              15,423    (66,986)
• Devotional Claimants                                                            12,563    (39,898)
• Distant unmerged minutes                                                           533    (11,825)
• Distant “to be announced” minutes                                                  388     (4,902)
Distant Minutes Outside Canadian Retransmission Zone
• Joint Sports Claimants                                                           5,346     (6,300)
• Program Supplier Claimants                                                     162,082   (226,655)
• Commercial TV Claimants                                                         26,605    (47,280)
• Public TV Claimants                                                             78,146   (206,081)
• Canadian Claimants                                                                   0         (0)
• Devotional Claimants                                                            13,014    (33,838)
• Distant unmerged minutes                                                         1,569    (20,247)
• Distant “to be announced” minutes                                                  374     (5,474)
Local Station Count Inside Canadian Retransmission Zone
• Local Educational Stations                                                         1.5       (2.4)
• Local Low Power Stations                                                           0.0       (0.2)
• Local US Independent Stations                                                      8.6      (10.7)
• Local Network Stations                                                             2.4       (3.0)
• Local Canadian Stations                                                            0.0       (0.2)
Local Station Count Outside Canadian Retransmission Zone
• Local Educational Stations                                                         1.6       (2.6)
• Local Low Power Stations                                                           0.1       (0.3)
• Local US Independent Stations                                                     10.3      (11.8)
• Local Network Stations                                                             2.8       (3.2)
• Local Canadian Stations                                                            0.0       (0.0)
Number of channels carried by the system in the previous accounting period         394.1     (187.9)
Number of permitted stations rebroadcast to the subscriber group                     2.1       (1.7)
Indicator for whether the subscriber group’s system is paying the minimum fee        0.2       (0.4)
Indicator for whether the subscriber group’s system is within the Canada Zone        0.5       (0.5)
Indicator for whether the subscriber pays any syndicated exclusivity surcharge       0.0       (0.0)
Indicator for whether the subscriber pays any 3.75% fee                              0.3       (0.4)
Number of subscribers to the subscriber group in the previous accounting period 15,134.7  (52,979.5)
Number of distant signals rebroadcast to the subscriber group                        2.5       (1.9)

The table reports the means and standard deviations (in parentheses) of key variables in the adjusted CTV regression.


Table A4: CTV Adjusted Regression - Regression Results

                                                                       (1) Adjusted CTV Model
Distant minutes of Joint Sports Claimants -- Outside Canada Zone        0.000027***   (0.0000078)
Distant minutes of Joint Sports Claimants -- Canada Zone                0.000036***   (0.0000042)
Distant minutes of Program Supplier Claimants -- Outside Canada Zone    0.0000022***  (0.00000025)
Distant minutes of Program Supplier Claimants -- Canada Zone            0.0000022***  (0.00000021)
Distant minutes of Commercial TV Claimants -- Outside Canada Zone       0.0000044***  (0.00000069)
Distant minutes of Commercial TV Claimants -- Canada Zone               0.0000050***  (0.00000070)
Distant minutes of Public TV Claimants -- Outside Canada Zone           0.0000016***  (0.00000024)
Distant minutes of Public TV Claimants -- Canada Zone                   0.0000020***  (0.00000022)
Distant minutes of Canadian Claimants -- Outside Canada Zone            -             -
Distant minutes of Canadian Claimants -- Canada Zone                    0.0000042***  (0.00000030)
Distant minutes of Devotional Claimants -- Outside Canada Zone          0.00000076*   (0.00000036)
Distant minutes of Devotional Claimants -- Canada Zone                  0.0000012**   (0.00000041)
Number of permitted stations rebroadcast to the subscriber group        -0.0020       (0.024)
Indicator for whether the subscriber pays any syndicated exclusivity
  surcharge                                                             0.65*         (0.25)
Indicator for whether the subscriber pays any 3.75% fee                 0.45***       (0.043)
Number of subscribers to the subscriber group in the previous
  accounting period                                                     0.000037***   (0.0000023)
Number of distant signals rebroadcast to the subscriber group           -0.56***      (0.054)
AT&T × Number of subscribers to the subscriber group in the previous
  accounting period                                                     0             (.)
Charter × Number of subscribers to the subscriber group in the
  previous accounting period                                            0.0000096     (0.0000068)
Comcast × Number of subscribers to the subscriber group in the
  previous accounting period                                            -0.000028***  (0.0000025)
Time Warner × Number of subscribers to the subscriber group in the
  previous accounting period                                            -0.0000095**  (0.0000029)
Verizon × Number of subscribers to the subscriber group in the
  previous accounting period                                            -0.000030***  (0.0000024)
Cox Communications × Number of subscribers to the subscriber group
  in the previous accounting period                                     -0.000019***  (0.0000025)
Others × Number of subscribers to the subscriber group in the
  previous accounting period                                            -0.000021***  (0.0000029)
Local Educational Stations -- Outside Canada Zone                       0.011         (0.022)
Local Low Power Stations -- Outside Canada Zone                         0.11          (0.057)
Local US Independent Stations -- Outside Canada Zone                    0.072***      (0.012)
Local Network Stations -- Outside Canada Zone                           -0.080***     (0.023)
Local Canadian Stations -- Outside Canada Zone                          -             -
Local Educational Stations -- Canada Zone                               0.034*        (0.014)
Local Low Power Stations -- Canada Zone                                 -0.20**       (0.071)
Local US Independent Stations -- Canada Zone                            0.015*        (0.0069)
Local Network Stations -- Canada Zone                                   0.047**       (0.017)
Local Canadian Stations -- Canada Zone                                  0.24***       (0.044)
Distant unmerged minutes -- Outside Canada Zone                         0.0000052***  (0.00000070)
Distant unmerged minutes -- Canada Zone                                 0.00000079    (0.0000012)
Distant “to be announced” minutes -- Outside Canada Zone                -0.0000013    (0.0000035)
Distant “to be announced” minutes -- Canada Zone                        0.00000021    (0.0000021)
Constant                                                                6.94***       (0.078)
Observations                                                            26,126
Adjusted R²                                                             0.249

This table shows the coefficients and standard errors for all regressors in the adjusted CTV model. One asterisk indicates p<0.05, two asterisks indicate p<0.01, and three asterisks indicate p<0.001.


EXHIBIT CCG-R-2

WRITTEN REBUTTAL TESTIMONY OF MATTHEW SHUM, PH.D.

Written Rebuttal Testimony of Matthew Shum, Ph.D.

2010-2013 Cable Royalty Distribution Proceeding

Docket No. 14-CRB-0010-CD (2010-2013)

September 15, 2017

1. I, Matthew Shum, am the J. Stanley Johnson Professor of Economics in the Division of Humanities and Social Sciences at the California Institute of Technology (“Caltech”) in Pasadena, California. At Caltech, I teach courses in econometrics and industrial organization. I also supervise graduate students working in these areas. My academic research also focuses on these two areas, and I have published approximately 50 articles which have appeared in top academic journals such as Econometrica, the American Economic Review, and the Journal of Political Economy.

2. I received my Ph.D. in Economics from Stanford University in 1998. Prior to arriving at Caltech, I taught at the University of Toronto, in Canada, from 1998 to 2000, and Johns Hopkins University, in Baltimore, Maryland, from 2000 to 2008. I joined Caltech in 2008. My curriculum vitae is attached as Appendix A of this document.

3. I have been asked by the Canadian Claimants Group (CCG) to comment on Dr. Jeffrey Gray’s testimony. In my rebuttal testimony, I will discuss two sets of issues. First, in Section A, I present several conceptual difficulties with using viewing as a measure of relative market value for distant signal programming. Second, in Section B, I discuss measurement problems vis-à-vis CCG programming which arise in Dr. Gray’s viewing-based analysis, and I offer adjusted share estimates which attempt to overcome these measurement problems.

Exhibit CCG-R-2 (Shum), Page 1

4. Based on my review, I conclude in Section C that a viewing-based approach suffers conceptual shortcomings and is not reliable as a primary or sole criterion for determining relative market value of distant signal programming. However, in case the Judges choose to consider viewing as a factor for determining royalty allocations in the current proceeding, I also enumerate several deficiencies in Dr. Gray’s analysis, which may systematically bias or underestimate the distant viewing of CCG programming. I further conclude that, when Dr. Gray’s viewing analysis is adjusted to accommodate these deficiencies, the results for CCG programming establish a reasonable floor for the share of royalties that should be awarded to the CCG.

A. CONCEPTUAL DIFFICULTIES WITH VIEWING AS MEASURE OF VALUE1

5. As I understand it, the goal of these proceedings is to allocate distant signal royalty funds among different claimant groups, which are generally organized around categories of programming. Following the precedent established in previous proceedings, this allocation should be made according to the criterion of the relative market value of the different programming categories in a hypothetical free market.2

1 Much of the material in Section A is derived from the discussions of the cable television industry in the testimonies of Dr. Gregory Crawford during the 2004-2005 Phase I cable distribution proceedings and Dr. Steven Wildman during the 1990-1992 Phase I cable distribution proceedings, and the discussion of the use of viewing measures in the 2000-2003 Phase II cable distribution determination (Copyright Royalty Board, Distribution of Cable Royalty Funds 2000-2003, Docket No. 2008-02 CRB CD 2000-2003 (Phase II), 78 Fed. Reg. 64984 (Oct. 30, 2013) (“2000-2003 Phase II”)).

2 See Copyright Royalty Board, Distribution of the 2004 and 2005 Cable Royalty Funds, Docket No. 2007-3 CRB CD 2004-2005, 75 Fed. Reg. 57063, 57065 (Sept. 17, 2010) (“2004-2005 Phase I”) (“for the purposes of this proceeding, the parties are all in agreement that the sole governing standard is the relative marketplace value of the distant broadcast signal programming retransmitted by cable systems during 2004 and 2005”).


6. Since the existing royalty rates for distant signals are not set by free market forces but are instead fixed by the Copyright Act,3 the relative market values for distant signal programming are not observed directly. While market values are typically shaped by both demand and supply forces, for the distant signal marketplace it suffices to focus on the demand side: that is, on the demand of cable system operators (CSOs) for such programming.4 Accordingly, recent proceedings have followed a principle of allocating royalty shares based upon measurements of CSOs’ valuations for the different categories of programming.5

7. CSOs’ valuations for distant signal programming depend, in turn, on the revenue they gain from offering the programming. Since distant signals do not enhance the CSOs’ advertising revenues, they only affect the CSOs’ revenues via the subscription channel, by attracting or retaining subscribers, and maintaining or increasing the prices that CSOs can charge for their products.6 That is, a CSO offers

3 The testimony of Jonda Martin on behalf of the CCG provides details on the royalty rates in effect during 2010-2013 (Exhibit CCG-4, pp. 2ff).

4 See Rebuttal Testimony of Dr. Andrew S. Joskow (1998-99 Proceedings, CTV 04-05 Ex. 13): “[I]t is not necessary to make adjustments to the royalty pool allocations suggested by studies analyzing cable operator valuations based on these supply side considerations.” (p.2.)

5 Both the Bortz Survey and Joel Waldfogel’s regression analysis, which were important in informing the Judges’ decisions in the 2004-05 proceedings (2004-2005 Phase I, at 57065), aim to measure CSOs’ relative valuations (albeit using very different methodologies) of the various categories of distant signal programming.

6 See Direct Testimony of Sue Ann R. Hamilton, current proceedings: “my programming decisions were designed to select the cable networks and broadcast stations that I thought would best contribute to subscriber attraction and retention for my cable systems”; Rebuttal Testimony of Gregory S. Crawford, In the Matter of the Distribution of the 2004 and 2005 Cable Royalty Funds, Docket No. 2007-3 CRB CD 2004-2005: “While broadcast stations rely exclusively on advertising revenue, cable systems rely either predominantly or exclusively on subscriber revenue. […] [C]ontent distributed as distant broadcast signals on cable systems is selected to maximize subscription revenue.” (pp. 5-6); 2000-2003 Phase II, at 64992 (“The revenue that the CSO earns from retransmitted broadcasts is a consequence of the impact of the retransmissions on the sale of subscriptions to its cable bundles (packages or tiers). This is in contrast to the terrestrial commercial television station whose signal is being retransmitted, and whose revenues are received from advertisers.”).


distant signal programming hoping to persuade non-subscribers to start subscribing, to convince existing subscribers to continue subscribing, or to enhance the appeal of its bundles. This would allow the CSO to maintain its current prices, or perhaps command higher prices. Naturally, the value of distant signals to CSOs derives in part from the value that existing and potential subscribers place on them. If subscribers place no value on them, then neither would CSOs, as in that case distant signals would not be useful to the CSOs in retaining or attracting subscribers. Nevertheless, as a principle, the relative market values for distant signal programming depend on the CSOs’ valuations of the programming, and not on subscribers’ valuations.7

8. From this perspective, a fundamental shortcoming of using viewing is that, at best, it is a measure of subscribers’ valuations of distant signal programming, rather than the CSOs’. Nevertheless, even as a measure of subscriber valuation of distant signals, the household viewing variable utilized by Dr. Gray is problematic.

9. Dr. Gray utilizes a measure of the number of households watching a program during a given day and quarter-hour. However, subscribers’ valuations, or willingness-to-pay,8 for a program depend not only on whether they watch it, but more importantly on the intensity of interest, or avidity, with which viewers engage with it.9 For instance, the viewership of

7 See Statement of Dr. Steven S. Wildman (1990-92 Royalty Distribution Proceedings): “subscriber demand is relevant to the determination of appropriate payments for programs on distant signals only as it is filtered through the profit-maximizing calculus of cable system operators.” (p. 3)

8 The term willingness-to-pay has been used in distant signal royalty proceedings ever since S. Wildman’s testimony in the 1990-92 proceedings. In the economics literature, a synonymous term for willingness-to-pay is “reservation price,” which denotes the hypothetical maximal price above which the agent would no longer wish to purchase an item. It is a crucial determinant of the aggregate, or market-level demand function for the item. (See H. Varian, Microeconomic Analysis, section 9.4.)

9 See Wildman, op. cit., “cable viewing share studies say nothing about preference intensity (or viewer willingness-to-pay for programs), which must be considered by CSOs in assessing the demand for cable services” (p. 8).


baseball games may consist of, first, diehard fans who subscribe to cable television services to obtain access to more baseball broadcasts and, second, casual viewers who may watch baseball on TV when it is on but do not actively seek it out. Obviously, the CSO’s decision to carry a distant signal with baseball programming only aids in retaining or attracting households in the first group. However, a simple count of the number of viewing households ignores this distinction, and may badly mis-estimate the CSO’s potential gain in revenue from adding the signal.

10. Furthermore, even if household viewing were a perfect measure of subscriber valuation for programming on distant signals (and we have already provided reasons why it is not), it provides only an indirect and incomplete measure of the CSOs’ valuations for the distant signal programming. Evaluating distant signal programming using only measures of subscriber valuation (such as viewing) fails to account for the CSO’s opportunities to increase its profits by offering bundles of channels (including distant signals) to households. The importance of such bundling considerations in creating profits for CSOs has been emphasized both in previous proceedings10 and in the economics literature on cable television markets.11

11. Specifically, offering special interest programming appealing to “niche” tastes can create value for a CSO, because households who value such niche programming likely have preferences which are negatively correlated with the preferences of households who do not value niche programming.12 A CSO which bundles both niche

10 See Wildman, op. cit., pp. 4-6; Crawford, op. cit., pp. 6-7.

11 See, inter alia, B. Owen and S. Wildman (1992), Video Economics, Harvard University Press, ch. 4; G. Crawford (2008), “The Discriminatory Incentives to Bundle in the Cable Television Industry,” Quantitative Marketing and Economics, Vol. 50, pp. 41-78.

12 See Crawford, op. cit.: “[…] programming that appeals to niche tastes (“Special Interest Networks”) is more likely to generate tastes that negatively co-vary with tastes for the bundle than programming that appeals to broad tastes […] [C]ontent that is markedly different from the other content already offered by the cable system is likely to have relatively greater economic value to the cable operator than content that is similar.” (p. 10); G. Crawford and J. Cullen (2007), “Bundling, Product Choice, and Efficiency: Should cable television networks be offered a la carte?”, Information Economics and Policy, Vol. 19, pp. 379-404.

and non-niche programming together can sell to both household types and achieve higher profits than it would if it were to offer only one type of programming. Such bundling externalities, or synergies, are ignored by focusing solely on viewing as a measure of value.

12. Beyond this, niche programming, which may have small viewership numbers, may actually have higher incremental value for CSOs relative to mass appeal programs with larger viewerships.13 In other words, for a CSO, adding a channel with primarily niche programming may increase the bundle’s value more than adding another channel with mainly mass appeal programming. While this may seem paradoxical, the reason is that many mass appeal programs (e.g., game shows or sitcom reruns) are close substitutes for each other, and hence if many viewers watch a mass appeal program on a distant signal, that merely subtracts from, or “displaces,”14 the viewership of similar programs on non-distant signals. Thus adding a distant signal station with mass appeal programming merely shuffles existing viewers between the added station and other stations already carried by the CSO and does not attract new viewers to the CSO’s offerings. The rational CSO would place no value on such a distant signal.15 In contrast, the viewership of niche programs, no matter how small, represents “new eyeballs” for the CSO, as those viewers would not find similar programs on other channels in the CSO’s bundles. These viewers would be among the “new subscribers” who may otherwise not

13 See Wildman, op. cit.: “we would expect that the types of programs accounting for the largest fraction of the viewing audience on distant signals to have the least value to cable systems at the margin” (p. 9).

14 See 2000-2003 Phase II, 78 Fed. Reg. at 64992, n. 30.

15 See 2000-2003 Phase II, 78 Fed. Reg. at 64992.


initiate service with the CSO if distant signal programming were not available. In this way, niche programs, which may be targeted towards a smaller and more specialized audience, can provide higher value to the CSO relative to mass appeal programs.
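The bundling logic described in paragraphs 11 and 12 can be illustrated with a small numerical sketch. The willingness-to-pay figures below are hypothetical, chosen only to show the mechanism: when two household types have negatively correlated tastes for niche and mass-appeal channels, a bundle earns more than selling the channels separately.

```python
# Hypothetical willingness-to-pay: household A prefers mass-appeal
# programming, household B prefers niche programming (negative correlation).
wtp = {"hh_A": {"mass": 10, "niche": 2},
       "hh_B": {"mass": 2, "niche": 10}}

def best_uniform_revenue(values):
    """Best revenue from a single posted price, trying each valuation as
    the candidate price (a buyer purchases when value >= price)."""
    return max(p * sum(v >= p for v in values) for p in values)

# Selling each channel on its own: price at 10 sells to one household (10),
# price at 2 sells to both (4), so each channel earns 10, for 20 total.
separate = sum(best_uniform_revenue([h[ch] for h in wtp.values()])
               for ch in ("mass", "niche"))

# Bundling: both households value the bundle at 12, so a price of 12
# sells to both and earns 24.
bundled = best_uniform_revenue([sum(h.values()) for h in wtp.values()])
print(separate, bundled)  # 20 vs 24: bundling dominates
```

Viewing counts alone would not reveal this: the niche channel's value here comes from the second household type it brings into the bundle, not from its audience size.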

13. By any measure, CCG programming qualifies as niche programming.16 It includes news, current events, and cultural programs from a distinctly Canadian perspective, as well as unique and original French-language programming. In the absence of CCG programming transmitted over distant signals, most American households would have no access to these types of television programs.

14. For these reasons, I conclude that, by itself, viewing is an incomplete and unreliable measure of relative market value for CCG programming. This is consistent with prior royalty distribution proceedings17 in which viewing-based approaches have been discredited18 for determining relative market values for distant signal programming. However, should the Judges consider viewing as a factor in their decisions in this proceeding, I proceed in the next section to enumerate several

16 Testimony of Danielle Boudreau, current proceedings (Exhibit CCG-1 (Corrected)) (discussing many distinctive features of CCG television programs). Two notable examples of such distinctive high-quality programs are The Nature of Things, a science documentary series which has been airing since 1960, making it the longest-running science program on television; and Enquête, a French-language public affairs program. Both of these programs have won international awards.

17 See Library of Congress, Distribution of 1998 and 1999 Cable Royalty Funds, 69 Fed. Reg. 3606, 3609 (Jan. 26, 2004) (“1998-1999 Phase I”) (“the Nielsen study does not afford an independent basis for determining relative value […] The Nielsen study reveals what viewers actually watched but nothing about whether those programs motivated them to subscribe or remain subscribed to cable.”).

18 In the 2000-2003 Phase II proceedings, viewing-based analyses were accepted, but this was for the special case of “determin[ing] relative market value of television programs within a single, homogeneous program category” (emphasis added), not for determining market values across multiple program categories. (Order Reopening Record and Scheduling Further Proceedings, in re Distribution of 2004, 2005, 2006, 2007, 2008, and 2009 Cable Royalty Funds, Docket No. 2012-6 CRB CD 2004-09 (Phase II), p. 2)


shortcomings in Dr. Gray’s analysis, which may lead to bias and underestimation of the distant viewing of CCG programming.

B. DISCUSSION OF DR. GRAY’S VIEWING STUDY

15. This section highlights several data-related deficiencies in Dr. Gray’s analysis, which have the effect of biasing or understating the distant viewing of CCG programming relative to programs in other claimant categories. After discussing the deficiencies, I perform adjustments to Dr. Gray’s procedures to remedy them.

B.1. A Summary of Dr. Gray’s Approach

16. I begin with a brief summary of Dr. Gray’s approach. The overall goal of Dr. Gray’s exercise is to apportion distant signal royalties among programming categories according to each category’s share of distant signal viewing. In order to do this, Dr. Gray selected a random sample of around 150 distant signal television stations for each year19 and obtained household-level television viewing data for these stations from the Nielsen Company, a well-known market research firm for the US television industry.

17. Nielsen compiles its television viewing numbers using a sample of American households. Because the Nielsen methodology relies on a sample, the dataset does not contain a full record of all distant viewing in the US. It contains only the distant viewing among a limited number of households, selected according to Nielsen’s

19 The number of stations requested by Dr. Gray was 153 in 2010-11, 152 in 2012, and 151 in 2013. (Author’s calculations from Testimony of Jeffrey S. Gray, Ph.D., corrected version April 3, 2017 (“Gray Testimony”), pp. 24-26.)


sampling methodology.20 Because viewing of distant signal programming constitutes only a small fraction of television viewing among American households, the sampling methodology picks up very few instances of non-zero distant viewing.21 This is unsurprising, much as one would not expect to find a four-leaf specimen in a random handful of clovers. By my calculations using the data provided by Dr. Gray, over 90% of the observations in the estimation data have zero distant viewing.22

18. In addition to these instances of zero viewing within the Nielsen data, about 20% of the sampled stations do not appear in the Nielsen data at all.23 Specifically, 81.7% of the sample stations appeared in the Nielsen viewing dataset in 2010, 81.0% in 2011, 79.6% in 2012, and 77.4% in 2013. For programs on stations not appearing in the Nielsen dataset, then, no information on viewing, either local or distant, is available. Clearly, such data seem ill-suited for measuring distant signal viewing.

19. To overcome this lack of distant viewing recorded in the Nielsen data for the sample stations, Dr. Gray uses a regression approach, which fits a mathematical relationship between local and distant viewing.24 Based on this relationship, one can estimate distant viewing of a program by the local viewing of the same program

20 Details of the sampling procedure are given in the Nielsen Local Reference Supplements, 2010- 2011, ch. 6, produced by Program Suppliers in discovery.

21 See Gray Testimony: “Due to the low frequency of distant viewing and the size of the sample Nielsen uses to measure total U.S. household viewing, there are many instances of no recorded distant viewing of compensable retransmitted programs in the Nielsen Household Meter Data.” (p. 17).

22 Precisely, from my calculations, the proportion of zero distant viewing observations is 92% in 2010, 93% in 2011, 92% in 2012, and 94% in 2013. This is appreciably higher than the 76-82% proportion of zero-viewing observed in the Nielsen sample used in the 2000-2003 Phase II proceedings. 2000-2003 Phase II Determination, 78 Fed. Reg. at 64995.

23 The number of sample stations appearing in the Nielsen data were 125 in 2010, 124 in 2011, 121 in 2012, and 117 in 2013.

24 See Gray, op. cit., p. 15.


during the same time interval. Essentially, this exercise replaces the actual distant viewing numbers (both zeros and nonzeros) in the Nielsen data with imputed distant viewing numbers based upon local viewing and other control variables. For programs on the sample stations not appearing in the Nielsen dataset, both local and distant viewing had to be imputed.

20. In principle, this procedure produces a predicted distant viewing number for every program shown on each of the sample stations for every time interval over all four years (2010-2013). Subsequently, each program is mapped to one of the claimant programming categories, and the distant viewing is summed across all programs in each category and projected to the national level. Finally, the projected distant viewing number for each category is divided by the total distant viewing across all categories to obtain the royalty shares.

21. One point which emerges from this description is that the primary purpose of the regression methodology in Dr. Gray’s analysis is to accommodate defects in the Nielsen dataset, of which there are primarily two: (i) the preponderance of zero-viewing instances recorded for the sample stations; and (ii) the large number of sample stations not appearing in the Nielsen dataset, for which no measures of viewing, zero or otherwise, are observed. A regression model is used to estimate the relationship between local and distant viewing for the stations in the Nielsen data, and is then extrapolated to all the sample stations missing from the Nielsen data. In principle, a more comprehensive viewing dataset containing more records of distant viewing and including all the sampled stations would obviate the need for regression. Then, the claimant royalty shares could be computed directly from the distant viewing measures without the need for any imputation.

22. As such, the use of regression in Dr. Gray’s analysis to accommodate data shortcomings contrasts with the purpose of regression analyses in the testimonies of Drs. Lisa George, Gregory Crawford, and Mark Israel in these proceedings, as well


as those of Drs. Gregory Rosston and Joel Waldfogel in earlier proceedings. In these regressions, the estimated coefficient values from the regression are of primary interest, and are used to construct measures of the CSOs’ willingness-to-pay for programming in the various claimant categories.25 For these purposes, the regression methodology is indispensable.

23. Despite these differences, one important commonality in the studies of Drs. Gray, Crawford, George, and Israel in this proceeding is the reliance on choice data to infer valuation – subscribers’ viewing choices in Dr. Gray’s case, and CSOs’ carriage choices for the others. This “revealed preference”26 approach, well-established in economic research, is based on the principle that agents’ choices in natural incentivized environments provide more reliable estimates of valuation relative to alternative approaches, such as surveys, which are often unincentivized.27

B.2 Deficiencies in Dr. Gray’s Approach

24. Having summarized Dr. Gray’s approach, I now discuss several problems which contribute to the systematic mismeasurement of the distant viewing of CCG

25 See, for instance, Testimony of G. Rosston, submitted on behalf of CTV in the 1998-99 proceedings: “The β’s [regression coefficient values] give an estimate of the implicit price paid by a cable system when it adds an additional minute of the different categories of programming and form an important basis for our estimate of the allocation of royalties among the various categories” (p. 8).

26 The term “revealed preference” harks back to the work of Nobel-prize laureate Paul Samuelson. H. Varian (Microeconomic Analysis, section 8.7) provides an introduction.

27 See G. Becker, M. DeGroot, and J. Marschak (1964), “Measuring Utility by a Single-Response Sequential Method,” Behavioral Science, Vol. 9, pp. 226-232, for a discussion of the difficulties in eliciting agents’ utilities (or valuations) via surveys, along with a proposal for an incentivized elicitation scheme (the well-known “Becker-DeGroot-Marschak (BDM)” procedure).


programming relative to the other claimant programming categories. I then propose some adjustments to Dr. Gray’s procedure to address these issues.

B.2.i. First adjustment: The distant/local viewing relationship for CCG programming is different

25. The first problem in Dr. Gray’s regression is that it inadequately controls for important differences in the local/distant viewing relationship between CCG and non-CCG programming. Specifically, for most CCG programming during the sample period, Nielsen reports that distant viewing is higher than local viewing. In contrast, for other programming categories, the opposite pattern is observed, in which local viewing is higher than distant viewing. Table 1, which shows the average number of local and distant viewing households in the Nielsen data, illustrates these differences. For three out of the four years (2010-2012), local viewing of CCG programming is dwarfed by distant viewing.28

Table 1: Average local and distant viewing in Nielsen data

         2010            2011            2012            2013
       local  distant  local  distant  local  distant  local  distant
All    2.886  0.101    2.609  0.085    2.517  0.090    2.174  0.070
CCG    0.028  0.477    0.026  0.391    0.174  0.182    0.317  0.162
CTV    6.553  0.098    6.791  0.079    7.778  0.065    6.252  0.053
SDC    0.355  0.007    0.372  0.009    0.374  0.006    0.482  0.011
PS     3.406  0.083    2.724  0.066    3.045  0.091    2.802  0.058
PTV    1.147  0.113    1.205  0.104    0.886  0.092    0.845  0.077
JSC   21.962  0.333   15.777  0.328   18.593  0.472   16.024  0.391

28 The higher local viewing numbers for CCG programming in 2012-13 appear to arise from the inclusion of CBET, located across from Detroit in Windsor, Ont., for which higher viewing is observed in the data.


26. Table 2, which presents the same data in ratio form (local viewing divided by distant viewing), shows that while, across all four years, local viewing is typically around thirty times higher than distant viewing for all programming on distant signals, local viewing never exceeds twice the amount of distant viewing for CCG programming. In summary, in no year does the local/distant viewing relationship for CCG resemble that for non-CCG programming, for which local viewing always (far) outstrips distant viewing.

Table 2: Ratio of Local to Distant Viewing in Nielsen Data

       2010     2011     2012     2013
All    28.574   30.694   27.967   31.057
CCG     0.059    0.066    0.956    1.957
CTV    66.867   85.962  119.662  117.962
SDC    50.714   41.333   62.333   43.818
PS     41.036   41.273   33.462   48.310
PTV    10.150   11.587    9.630   10.974
JSC    65.952   48.101   39.392   40.982
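The Table 2 entries are simply the Table 1 local means divided by the distant means. As a check, the 2010 column can be reproduced directly from the Table 1 figures:

```python
# Reproduce the 2010 column of Table 2 from the Table 1 means:
# ratio = (average local viewing) / (average distant viewing).
table1_2010 = {
    "All": (2.886, 0.101), "CCG": (0.028, 0.477), "CTV": (6.553, 0.098),
    "SDC": (0.355, 0.007), "PS": (3.406, 0.083), "PTV": (1.147, 0.113),
    "JSC": (21.962, 0.333),
}
ratios = {group: round(local / distant, 3)
          for group, (local, distant) in table1_2010.items()}
print(ratios)  # matches the 2010 column of Table 2
```

CCG is the only group whose 2010 ratio falls below 1, i.e., the only group for which distant viewing exceeds local viewing.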

27. After studying Mr. Paul Lindstrom’s description of the procedure for constructing the Nielsen viewing dataset,29 I believe the reason for these differences is that Nielsen defines “local viewing” differently for Canadian stations, compared to US stations. Local viewing for a US station is defined as viewing of the station in its home market; that is, in the US counties located closest to the station’s home. However, because Nielsen’s television sample surveys only US households,30 there

29 See P. Lindstrom, MPAA Project Steps – Distant Viewing 2010-13, PS-2010-13-C-002635-002637 (produced by Program Suppliers in discovery).

30 See Nielsen Local Reference Supplement, 2010-2011, ch. 1 (produced by Program Suppliers in discovery).


are no measures for viewership of Canadian stations in their home markets, which are outside US borders. Instead, Nielsen defines local viewing for Canadian stations by viewing in the US counties closest to the home location of the stations.31 Naturally, viewing of a local television station should be highest in its home market. The fact that local viewing of Canadian stations is not measured in their home markets plausibly explains why the ratio of local viewing to distant viewing is systematically lower for CCG programming compared to other programming categories, as shown in Tables 1 and 2.

28. The validity of Dr. Gray’s regression exercise hinges upon an accurate estimation of the local/distant-viewing relationship for all categories of distant signal programming. Therefore, systematic differences in this relationship between CCG and the other programming categories, as documented in the previous paragraph, must be accommodated and controlled for in the regression model. A standard way in econometric modeling to accommodate systematic differences in the relationship between the left-hand side variable and a right-hand side variable of interest for one subset of the observations (in this case the CCG programs) is to include a category indicator in the regression.32 This category indicator is a variable which is equal to 1 for all observations in this subset, and 0 otherwise. In the regression, the coefficient attached to this category indicator would measure the systematic difference in the local/distant viewing relationship between CCG and other types of programs. While Dr. Gray includes indicators for program types33 as

31 See spreadsheet “CDC_MPAA_LocalCountyDetails_2010_2013_cable.xlsx”, which lists the counties that are considered “local” for each station in Dr. Gray’s sample (produced by Program Suppliers in discovery).

32 In the econometrics literature these category indicators are also called “dummy variables.” See, among others, W. Greene (1993), Econometric Analysis, Macmillan Publishing, section 8.2.

33 Dr. Gray’s regression specification includes a set of 32 dummy variables for “program types,” which describe the general subject matter of each program (cartoon, daytime soap, game show, etc.). See file “PS-2010-13-C-002894.pdf” (Restricted) (produced by Program Suppliers in discovery). These program types should not be confused with programming categories, a term I use in this report to refer to the programming of the different claimant groups (CCG, PS, SDC, etc.) in this royalty distribution proceeding.

well as quarter-hours in his regression, he does not include any category indicators for programming categories.

29. Accordingly, I adjust Dr. Gray’s regressions by adding a category indicator for the CCG programming category to the right-hand side variables. The role of this additional variable is to control for the systematic differences in the local/distant viewing relationship between CCG and the other programming categories, as discussed above. The detailed regression results are presented in Appendix C.34 Not surprisingly, the coefficient on this variable is statistically significant in all four years, indicating that it belongs in the regression and that it plays a significant role in the relationship between local and distant viewing. This is important, of course, because the regression is used to impute distant viewing figures on the basis of this relationship.

30. Following Dr. Gray’s procedure, I use my regression results to impute distant viewing measures for all programs shown on the sample stations during 2010-2013 and project these measures to the US population. The resulting shares, given in Table 3 below, show a higher CCG share for 2010-2012, but a lower share for 2013, relative to Dr. Gray’s results.35

34 In the appendix, I also present results from an alternative regression specification in which I include category indicators not only for CCG programming, but also for all the other programming categories (CTV, SDC, PS, PTV, and JSC). The implied shares from those results are very close to those in Table 3.

35 Specifically, Dr. Gray’s results (Gray, op. cit., p. 19) imply a CCG share of 1.96% in 2010, 3.93% in 2011, 3.58% in 2012, and 5.16% in 2013.
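The category-indicator adjustment described in paragraphs 28-29 amounts to appending one more regressor to each observation before estimation. A minimal sketch follows; the field names and records are hypothetical, and the actual estimation is the Poisson regression reported in Appendix C.

```python
# Sketch of adding a category indicator ("dummy variable") to a design
# matrix: each observation gets a 1 if its program belongs to the CCG
# category and a 0 otherwise. The coefficient on this column then measures
# the systematic difference in the local/distant viewing relationship for
# CCG programs relative to the omitted categories.

def add_ccg_indicator(observations):
    """Return copies of the observations with a 'ccg_indicator' field."""
    return [{**obs, "ccg_indicator": 1 if obs["category"] == "CCG" else 0}
            for obs in observations]

# Hypothetical program-level viewing records:
sample = [
    {"program": "A", "category": "CCG", "local_viewing": 12},
    {"program": "B", "category": "PS",  "local_viewing": 340},
]
design = add_ccg_indicator(sample)
```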


Table 3: Implied viewing-based shares (%) from adjusted viewing regression with CCG category indicator

Projected Dist View:    2010     2011     2012     2013

CCG                    33923    48295    43897    34306
CTV                   178164   121242   159758    78847
SDC                    13775    24916    11109     8114
PS                    585702   504319   373389   334560
PTV                   316614   292162   428049   247601
JSC                    21622    24989    21063    35550

% Share:
CCG                     2.95     4.75     4.23     4.64
CTV                    15.50    11.93    15.40    10.66
SDC                     1.20     2.45     1.07     1.10
PS                     50.94    49.64    36.00    45.27
PTV                    27.54    28.76    41.27    33.51
JSC                     1.88     2.46     2.03     4.81
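The “% Share” rows in Table 3 follow directly from the projected distant-viewing totals in the top half of the table: each category’s share is its projected viewing divided by the sum across all six categories. Reproducing the 2010 column:

```python
# Shares implied by the 2010 projected distant-viewing totals in Table 3.

projected_2010 = {
    "CCG": 33923, "CTV": 178164, "SDC": 13775,
    "PS": 585702, "PTV": 316614, "JSC": 21622,
}

total_2010 = sum(projected_2010.values())
shares_2010 = {cat: 100 * views / total_2010
               for cat, views in projected_2010.items()}
# e.g., the CCG share is 33923 / 1149800, or about 2.95%
```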

B.2.ii. Second adjustment: CKSH and CKWS missing from viewing analysis in 2010

31. A second adjustment I make to Dr. Gray’s analysis addresses the fact that the analysis for 2010 does not include viewing for programming on four of the sampled Canadian stations, namely CKSH, CKWS, CHLT, and CBWT. Table 4 lists all the Canadian stations which Dr. Gray selected for his sample, with these missing stations marked. Because only eight Canadian stations were sampled in 2010, leaving out four of them can lead to substantial underestimation of distant viewing for CCG programs, which are broadcast only on Canadian stations.


Moreover, since the sample stations were chosen randomly to ensure a representative sample of distant signals, leaving out some of these stations can compromise the statistical validity of results obtained from the incomplete sample.

Table 4: Canadian stations in Gray sample (*: not included in viewing analysis)

2010             2011    2012             2013

CBLT             CBLT    CBET             CBET
CBMT-DT          CBMT    CBFT             CBFT
CBUT, CBUT-DT    CBUT    CBLT             CBLT
CBWT-DT*         CFTO    CBMT             CBMT
CFTO-DT          CIMT    CBUT, CBUT-DT    CBUT
CHLT*            CKSH    CFTO             CFTO
CKSH-DT*                 CKSH             CKSH
CKWS-DT*                 CKWS             CKWS

32. These missing stations are not small fringe stations. As Table 6 in Appendix B shows, the number of distant subscribers to these missing stations is not systematically smaller than that of the non-missing stations. For example, CKSH, the leading French-language station in Canada, had 236,355 distant subscribers in 2010, which exceeds the number of distant subscribers to CFTO, a Canadian station which was included in the analysis for 2010.36

33. My approach is to impute distant viewing numbers for these stations in 2010 based on the projected distant viewing numbers for these stations from 2013 from my adjusted viewing regression, as reported in the top half of Table 3. Since two of

36 Indeed, among all distant signals in the 2010 sample, CKSH was the nineteenth-largest in terms of distant subscribers (author’s calculations from spreadsheet “Station_Summary_F3Distant_2010_2012_2Feb2015.xlsx” (Restricted) (produced by Program Suppliers in discovery)).


the four missing stations, CHLT and CBWT, were selected by Dr. Gray for his sample only in 2010, I am unable to impute any distant viewing for them. For that reason, I interpret the CCG share for 2010 emerging from this procedure as a lower bound, since it does not include CHLT and CBWT.

34. I chose 2013 as my comparison year for two reasons. First, both CKSH and CKWS were also selected for the sample in 2013, so using that year sidesteps difficult considerations about how to combine information from years in which only one of the stations was selected. Second, all the non-missing Canadian stations in the 2010 sample were also in the 2013 sample, which maximizes the information on the relative volume of distant viewing among stations which I use in the imputation procedure.
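A stylized sketch of this kind of imputation follows. All figures and the specific scaling rule below are hypothetical; Dr. Shum’s exact computations are set out in Appendix E. The idea is that a missing station’s 2013 projected distant viewing can be rescaled by the ratio of 2010 to 2013 viewing over the stations observed in both years.

```python
# Stylized imputation of a missing station's 2010 distant viewing from its
# 2013 projection. All figures are hypothetical, and this scaling rule is
# an illustration only; the actual procedure is detailed in Appendix E.

def impute_2010_viewing(viewing_2013, common_total_2010, common_total_2013):
    """Scale a station's 2013 distant viewing to 2010 levels using the
    ratio of totals over stations sampled in both years."""
    return viewing_2013 * (common_total_2010 / common_total_2013)

# Hypothetical totals over the stations present in both the 2010 and 2013
# samples, and a hypothetical 2013 projection for a missing station:
imputed = impute_2010_viewing(viewing_2013=12000,
                              common_total_2010=500000,
                              common_total_2013=400000)
```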

35. The specific details underlying my computations are given in Appendix E. The final adjusted viewing-based shares from this procedure are presented in Table 5, which shows that the CCG share for 2010 increases from 2.95% to 4.49% as a result of this adjustment.

Table 5: CCG Viewing-based Shares After Adjustment for CKSH and CKWS in 2010

                              2010    2011    2012    2013    4-yr Avg

CCG Indicator Adjustment      2.95    4.75    4.23    4.64    4.14
  (from Table 3)
+CKSH, CKWS                   4.49    4.75    4.23    4.64    4.53
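The “4-yr Avg” column of Table 5 is the simple mean of the four annual shares; both rows can be reproduced directly:

```python
# Reproducing the "4-yr Avg" column of Table 5 as the mean of the annual
# CCG shares (in percent), before and after the CKSH/CKWS adjustment.

shares_indicator_only = [2.95, 4.75, 4.23, 4.64]  # from Table 3
shares_with_cksh_ckws = [4.49, 4.75, 4.23, 4.64]  # after 2010 imputation

avg_before = round(sum(shares_indicator_only) / 4, 2)
avg_after = round(sum(shares_with_cksh_ckws) / 4, 2)
```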

B.2.iii. Additional issue: Statutory restrictions on retransmissions of Canadian distant signals


36. Finally, I consider an institutional feature of the distant signal retransmission market which limits the distant viewing possibilities for CCG programs relative to programs in other claimant categories. A viewing analysis aims to measure the relative market value of distant signal programming using households’ viewing of the programming. However, the distant viewing measures obtained from the viewing analysis can only capture valuations in areas where subscribers have access to the programming on cable television. That is, valuations for CCG programs in areas served by CSOs which do not carry Canadian distant signals cannot be reflected in the distant viewing measures. The reasons underlying CSOs’ decisions to carry distant signals are complex, but one particular factor – namely, legal restrictions on distant carriage of Canadian distant signals – arguably handicaps the CCG in viewing-based studies, relative to other claimant groups.

37. Specifically, CSOs located outside of the Canadian retransmission zone37 (the excluded area makes up roughly two-thirds of the “lower 48” continental US territory) are prohibited from carrying Canadian distant signals. Thus, households residing outside this zone, who may place high values on CCG programming, are unable to view it. For instance, French-speaking Louisianans, who have deep cultural ties to French-speaking Canadians,38 have no opportunity to view French-language Canadian stations such as CKSH and CBFT, because Louisiana lies outside the retransmission zone. Clearly this restricted reach handicaps CCG programming relative to programs in other claimant categories, which are by and large available on distant signals with unrestricted carriage across the entire US.

37 See Exhibit CCG-1-C, pg. 1. Specifically, US copyright law prohibits the retransmission of Canadian distant signals “when the community of the cable system is located more than 150 miles from the United States-Canadian border and is also located south of the forty-second parallel of latitude.” 17 U.S.C. § 111(c)(4).

38 The French-speaking Cajun community in Louisiana is descended from French settlers in Canada (the Acadians), who fled to Louisiana after the French and Indian War in the eighteenth century. (See Exhibit CCG-1-G, pg. 1 and https://en.wikipedia.org/wiki/Acadia.)
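The statutory restriction quoted in footnote 37 can be expressed as a simple predicate on a cable community’s location. This is a sketch of the two statutory conditions only; actual carriage determinations involve further regulatory detail.

```python
# Sketch of the 17 U.S.C. § 111(c)(4) restriction: retransmission of
# Canadian distant signals is prohibited only when the cable community is
# BOTH more than 150 miles from the US-Canadian border AND south of the
# 42nd parallel of latitude.

def may_carry_canadian_distant_signal(miles_from_border, latitude_deg):
    """True if carriage is permitted under the two statutory conditions."""
    prohibited = miles_from_border > 150 and latitude_deg < 42.0
    return not prohibited

# A community far from the border and south of the 42nd parallel (e.g., in
# Louisiana) meets both prohibition conditions, so carriage is barred; a
# community north of the 42nd parallel may carry the signals regardless of
# its distance from the border.
```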


38. In Dr. Gray’s viewing analysis, zero distant viewing of CCG programs outside the Canadian zone is treated identically to zero distant viewing of non-CCG programs inside the zone. A viewing-based measure of valuation would attach low valuations to the programs in both cases. However, this ignores the important difference that the former may reflect the inability of cable subscribers (who may have high valuations) to view the programs, while the latter reflects viewing that was genuinely zero or too low to be recorded in the Nielsen dataset. An adjustment to correct for this issue would be challenging due to the lack of credible measures of valuations of CCG programming outside the Canadian zone. For this reason, I do not attempt an adjustment; rather, I conclude that the adjusted CCG shares reported in Table 5 above should be considered lower bounds on the royalty shares for CCG programming during the 2010-13 period, as they do not account for potential viewing by households outside the retransmission zone.

C. CONCLUSIONS

39. The goal of my testimony has been to analyze Dr. Gray’s testimony and discuss some of its deficiencies, especially inasmuch as these defects would lead to the underestimation of the royalty shares for CCG programming. My explorations of Dr. Gray’s analysis, and also precedents from past proceedings, indicate that a viewing-based approach is problematic from a conceptual point of view because it ignores important economic features of the distant signal marketplace which drive market value for distant signal programming. As such, I do not believe viewing should be a primary or sole criterion for determining relative market value of distant signal programming.

40. However, if the Judges use viewing-based arguments in their determination of royalty allocations in this proceeding, my report also highlights several


deficiencies in Dr. Gray’s analysis which bias downward, or understate, the distant viewing of CCG programming. While leaving Dr. Gray’s overall framework unchanged, I suggest several adjustments to his procedure to correct these deficiencies. My adjusted viewing-based shares for CCG programming are 4.49%, 4.75%, 4.23%, and 4.64% for the years 2010-2013, respectively. To the extent that viewing is an indicator of relative marketplace value, I consider these adjusted viewing-based shares a floor on the CCG royalty shares for those years, as my calculations do not take into account the statutory restrictions on retransmissions of Canadian distant signals over two-thirds of the “lower 48” United States.


APPENDIX A: Curriculum Vitae


APPENDIX B: Distant subscribers for Canadian stations requested in Dr. Gray’s sample


Table 6: Distant subscribers for Canadian stations

Station:      2010     2011     2012     2013

CBET                           212586   157644
CBFT                            84355    85637
CBLT        201175   191437   201644   188028
CBMT                 271354   274453   260888
CBMT-DT     184474
CBUT        495028   966581   868203   893666
CBUT-DT     519880             22672
CBWT-DT      26917
CFTO                 213637   225240   210241
CFTO-DT     126564
CHLT         83385
CIMT                   8253
CKSH                 355378   376637   367635
CKSH-DT     236355
CKWS                           99186
CKWS-DT      73288

For each year, the number of distant subscribers is given only for stations requested by Dr. Gray for that year. Stations observed in the Nielsen data for each year are marked in italics.39

39 The data are drawn from the spreadsheets “Station_Summary_F3Distant_2010_2012_2Feb2015.xlsx” (Restricted) and “Station_Summary_F3Distant_2013_6October2015.xlsx” (Restricted) produced by Program Suppliers in discovery.


APPENDIX C: Results for Poisson regression for distant viewing, including CCG category indicator


Poisson regression results for 2010 Variable Coefficient Std Error T-stat Variable Coefficient Std Error T-stat CCG 0.5048452 0.0020948 241 _Iqtr_64 0.1838137 0.0073448 25.03 localrt 1057.408 13.3317 79.32 _Iqtr_65 0.3986853 0.0069991 56.96 ldist 0.4730266 0.0003882 1218.44 _Iqtr_66 0.3603015 0.0070492 51.11 _Iqtr_2 -0.1877465 0.0080665 -23.27 _Iqtr_67 0.3850621 0.0069756 55.2 _Iqtr_3 -0.257497 0.0080683 -31.91 _Iqtr_68 0.4445173 0.0068817 64.59 _Iqtr_4 -0.3597478 0.0082866 -43.41 _Iqtr_69 0.5120685 0.0067273 76.12 _Iqtr_5 -0.3389902 0.0082661 -41.01 _Iqtr_70 0.5217858 0.0067223 77.62 _Iqtr_6 -0.4832805 0.0085794 -56.33 _Iqtr_71 0.478833 0.006855 69.85 _Iqtr_7 -0.5722545 0.0086771 -65.95 _Iqtr_72 0.5405164 0.0067975 79.52 _Iqtr_8 -0.6898688 0.0089332 -77.23 _Iqtr_73 0.6895717 0.0066877 103.11 _Iqtr_9 -0.6332381 0.008928 -70.93 _Iqtr_74 0.6757586 0.0067503 100.11 _Iqtr_10 -0.7759087 0.0093628 -82.87 _Iqtr_75 0.7155215 0.0066008 108.4 _Iqtr_11 -0.855227 0.0096448 -88.67 _Iqtr_76 0.8286276 0.0064876 127.72 _Iqtr_12 -0.9602754 0.0100627 -95.43 _Iqtr_77 0.91369 0.0064161 142.41 _Iqtr_13 -0.9693471 0.0101207 -95.78 _Iqtr_78 0.9419794 0.0064416 146.23 _Iqtr_14 -1.064492 0.0104945 -101.43 _Iqtr_79 0.9000201 0.0063398 141.96 _Iqtr_15 -1.165437 0.0107289 -108.63 _Iqtr_80 1.06196 0.0063188 168.06 _Iqtr_16 -1.171465 0.010854 -107.93 _Iqtr_81 1.21318 0.0063227 191.88 _Iqtr_17 -1.117852 0.0107099 -104.38 _Iqtr_82 1.107946 0.0063975 173.18 _Iqtr_18 -1.105703 0.0106175 -104.14 _Iqtr_83 1.126528 0.0064043 175.9 _Iqtr_19 -1.075418 0.0104479 -102.93 _Iqtr_84 1.131164 0.0064195 176.21 _Iqtr_20 -1.103231 0.0105259 -104.81 _Iqtr_85 1.057108 0.0066008 160.15 _Iqtr_21 -0.9637568 0.00975 -98.85 _Iqtr_86 0.9123462 0.0067039 136.09 _Iqtr_22 -0.9059063 0.0096967 -93.42 _Iqtr_87 0.893961 0.0067566 132.31 _Iqtr_23 -0.8599398 0.0094915 -90.6 _Iqtr_88 0.8379959 0.0067864 123.48 _Iqtr_24 -0.8544857 0.0093953 -90.95 _Iqtr_89 0.9746095 0.0064429 151.27 _Iqtr_25 -0.730967 0.0092221 -79.26 _Iqtr_90 
0.7811025 0.0066098 118.17 _Iqtr_26 -0.5159058 0.0087241 -59.14 _Iqtr_91 0.6459997 0.0067718 95.39 _Iqtr_27 -0.3636651 0.0081975 -44.36 _Iqtr_92 0.5813185 0.0068409 84.98 _Iqtr_28 -0.2056542 0.0079138 -25.99 _Iqtr_93 0.6132314 0.0065618 93.46 _Iqtr_29 -0.2850579 0.0091842 -31.04 _Iqtr_94 0.4144961 0.0067665 61.26 _Iqtr_30 -0.1269553 0.0087894 -14.44 _Iqtr_95 0.235375 0.0071247 33.04 _Iqtr_31 -0.0452933 0.0084564 -5.36 _Iqtr_96 0.0685491 0.0074788 9.17 _Iqtr_32 0.0567539 0.0081093 7 _Iprogram_t_2 -0.7803503 0.004226 -184.66 _Iqtr_33 0.1159401 0.0077944 14.87 _Iprogram_t_3 -0.9432867 0.0047459 -198.76 _Iqtr_34 0.1923411 0.0076544 25.13 _Iprogram_t_4 -0.4401798 0.0197143 -22.33 _Iqtr_35 0.1662992 0.0077234 21.53 _Iprogram_t_5 -1.981832 0.2369724 -8.36 _Iqtr_36 0.2505143 0.0075379 33.23 _Iprogram_t_6 0.301817 0.0048081 62.77 _Iqtr_37 0.1230607 0.0074043 16.62 _Iprogram_t_7 -1.546821 0.0081195 -190.51 _Iqtr_38 0.0741135 0.0075116 9.87 _Iprogram_t_8 -0.7616929 0.0139295 -54.68 _Iqtr_39 0.077324 0.007526 10.27 _Iprogram_t_9 -0.5247346 0.0050228 -104.47 _Iqtr_40 0.1026729 0.0074835 13.72 _Iprogram_t_10 -2.221803 0.0161039 -137.97 _Iqtr_41 0.0657053 0.0075555 8.7 _Iprogram_t_11 -0.9367411 0.0160589 -58.33 _Iqtr_42 -0.0207658 0.0076839 -2.7 _Iprogram_t_12 -0.5249065 0.0053984 -97.23 _Iqtr_43 0.0128576 0.0076639 1.68 _Iprogram_t_13 -0.4986564 0.0065456 -76.18 _Iqtr_44 0.0483944 0.0076099 6.36 _Iprogram_t_14 -0.5249308 0.004812 -109.09 _Iqtr_45 0.0340174 0.0078372 4.34 _Iprogram_t_15 -0.5406178 0.0053251 -101.52 _Iqtr_46 -0.0261759 0.008 -3.27 _Iprogram_t_16 -0.6191453 0.0050025 -123.77 _Iqtr_47 -0.0320429 0.0080348 -3.99 _Iprogram_t_17 -0.5000723 0.0040782 -122.62 _Iqtr_48 0.0912349 0.0077554 11.76 _Iprogram_t_18 -0.7092191 0.0040849 -173.62 _Iqtr_49 0.1877452 0.0073126 25.67 _Iprogram_t_19 -1.502686 0.0044657 -336.5 _Iqtr_50 0.1950559 0.0074415 26.21 _Iprogram_t_20 0.1457721 0.0050615 28.8 _Iqtr_51 -0.0267964 0.0077382 -3.46 _Iprogram_t_21 0.1510967 0.0069213 21.83 _Iqtr_52 
-0.0663058 0.0078029 -8.5 _Iprogram_t_22 0.1438574 0.0080243 17.93 _Iqtr_53 0.0823818 0.0076296 10.8 _Iprogram_t_23 -1.076576 0.006444 -167.07 _Iqtr_54 0.0732691 0.0076628 9.56 _Iprogram_t_24 -2.954079 0.0095622 -308.93 _Iqtr_55 0.05736 0.0076727 7.48 _Iprogram_t_25 -0.8270434 0.0045653 -181.16 _Iqtr_56 0.0968303 0.0075833 12.77 _Iprogram_t_26 -0.14995 0.0072412 -20.71 _Iqtr_57 0.1369322 0.0074994 18.26 _Iprogram_t_27 -1.51796 0.0360634 -42.09 _Iqtr_58 0.1136077 0.0075432 15.06 _Iprogram_t_28 -0.8002792 0.0080225 -99.75 _Iqtr_59 0.1255582 0.0075862 16.55 _Iprogram_t_29 -0.6148868 0.0038928 -157.96 _Iqtr_60 0.128389 0.007541 17.03 _Iprogram_t_30 -0.4374029 0.0092798 -47.14 _Iqtr_61 0.1643297 0.0073671 22.31 _Iprogram_t_31 -0.8100709 0.0041534 -195.04 _Iqtr_62 0.1437649 0.0073575 19.54 _Iprogram_t_32 -0.5072941 0.0066771 -75.97 _Iqtr_63 0.1590789 0.0073761 21.57 _cons -7.156083 0.0080522 -888.71


Poisson regression results for 2011 Variable Coefficient Std Error T-stat Variable Coefficient Std Error T-stat CCG 0.2231271 0.0031903 69.94 _Iqtr_64 0.404342 0.0076912 52.57 localrt 6677.646 77.68962 85.95 _Iqtr_65 0.4606549 0.0075702 60.85 ldist 0.4859511 0.000424 1146.19 _Iqtr_66 0.3935232 0.0076417 51.5 _Iqtr_2 -0.1692378 0.0086121 -19.65 _Iqtr_67 0.4273339 0.0076346 55.97 _Iqtr_3 -0.1948874 0.008648 -22.54 _Iqtr_68 0.435271 0.0075589 57.58 _Iqtr_4 -0.2808428 0.0088774 -31.64 _Iqtr_69 0.5673062 0.0074338 76.31 _Iqtr_5 -0.3237377 0.0089065 -36.35 _Iqtr_70 0.5478016 0.0074338 73.69 _Iqtr_6 -0.4467787 0.0090956 -49.12 _Iqtr_71 0.5665774 0.0075445 75.1 _Iqtr_7 -0.4853323 0.0091849 -52.84 _Iqtr_72 0.6482345 0.0074092 87.49 _Iqtr_8 -0.6096235 0.0095226 -64.02 _Iqtr_73 0.7638661 0.0072117 105.92 _Iqtr_9 -0.689777 0.0099149 -69.57 _Iqtr_74 0.7903638 0.0071619 110.36 _Iqtr_10 -0.7993434 0.0102504 -77.98 _Iqtr_75 0.6881012 0.007473 92.08 _Iqtr_11 -0.8336111 0.0104526 -79.75 _Iqtr_76 0.8087434 0.0083912 96.38 _Iqtr_12 -0.9117207 0.0107173 -85.07 _Iqtr_77 0.9599572 0.0076823 124.96 _Iqtr_13 -0.9758324 0.0109274 -89.3 _Iqtr_78 1.024025 0.0070186 145.9 _Iqtr_14 -1.062916 0.0112777 -94.25 _Iqtr_79 0.9935732 0.0069634 142.68 _Iqtr_15 -1.136656 0.0116198 -97.82 _Iqtr_80 1.122301 0.007046 159.28 _Iqtr_16 -1.213547 0.0118175 -102.69 _Iqtr_81 1.297691 0.006902 188.02 _Iqtr_17 -1.214227 0.0118864 -102.15 _Iqtr_82 1.146267 0.0070043 163.65 _Iqtr_18 -1.15271 0.0116868 -98.63 _Iqtr_83 1.165397 0.0070242 165.91 _Iqtr_19 -1.094985 0.0111884 -97.87 _Iqtr_84 1.164635 0.0070062 166.23 _Iqtr_20 -1.227833 0.0116923 -105.01 _Iqtr_85 1.073315 0.0071793 149.5 _Iqtr_21 -1.096202 0.0112793 -97.19 _Iqtr_86 0.9098455 0.007328 124.16 _Iqtr_22 -0.9930586 0.0109468 -90.72 _Iqtr_87 0.8589028 0.0073464 116.91 _Iqtr_23 -0.997947 0.0110573 -90.25 _Iqtr_88 0.8322177 0.0074023 112.43 _Iqtr_24 -0.9073619 0.0107768 -84.2 _Iqtr_89 1.051627 0.0072338 145.38 _Iqtr_25 -0.5920428 0.0098818 -59.91 _Iqtr_90 
0.9027194 0.0073309 123.14 _Iqtr_26 -0.400311 0.0092987 -43.05 _Iqtr_91 0.825481 0.0072782 113.42 _Iqtr_27 -0.2518799 0.0089068 -28.28 _Iqtr_92 0.6953474 0.0074208 93.7 _Iqtr_28 -0.0236115 0.0084435 -2.8 _Iqtr_93 0.6593077 0.0071748 91.89 _Iqtr_29 0.1300581 0.0090738 14.33 _Iqtr_94 0.5040468 0.0073104 68.95 _Iqtr_30 0.2171267 0.0092526 23.47 _Iqtr_95 0.1961452 0.0079101 24.8 _Iqtr_31 0.1544101 0.0089435 17.27 _Iqtr_96 0.0045416 0.0083428 0.54 _Iqtr_32 0.2393874 0.0085406 28.03 _Iprogram_t_2 -0.5942192 0.0041761 -142.29 _Iqtr_33 0.145276 0.0083812 17.33 _Iprogram_t_3 -0.7920322 0.0047019 -168.45 _Iqtr_34 0.2446539 0.0082478 29.66 _Iprogram_t_4 -0.522444 0.024773 -21.09 _Iqtr_35 0.2440902 0.0081777 29.85 _Iprogram_t_5 0 (omitted) _Iqtr_36 0.2758896 0.0081695 33.77 _Iprogram_t_6 -0.1262425 0.0058162 -21.71 _Iqtr_37 0.2543089 0.0078756 32.29 _Iprogram_t_7 -0.88848 0.0073827 -120.35 _Iqtr_38 0.2200837 0.0078858 27.91 _Iprogram_t_8 -0.6586007 0.0502559 -13.1 _Iqtr_39 0.200619 0.0079216 25.33 _Iprogram_t_9 -0.4233846 0.0053386 -79.31 _Iqtr_40 0.2068135 0.0079062 26.16 _Iprogram_t_10 -1.070997 0.0131822 -81.25 _Iqtr_41 0.2676698 0.0078778 33.98 _Iprogram_t_11 -0.2763569 0.0135666 -20.37 _Iqtr_42 0.2241056 0.0079772 28.09 _Iprogram_t_12 -0.4120069 0.0053378 -77.19 _Iqtr_43 0.1755819 0.0080151 21.91 _Iprogram_t_13 -0.1833912 0.0071987 -25.48 _Iqtr_44 0.1929292 0.0079474 24.28 _Iprogram_t_14 -0.2339549 0.0048192 -48.55 _Iqtr_45 0.1710089 0.0080565 21.23 _Iprogram_t_15 -0.4177501 0.0052195 -80.04 _Iqtr_46 0.1341148 0.008133 16.49 _Iprogram_t_16 -0.6176296 0.0050514 -122.27 _Iqtr_47 0.1569481 0.0081117 19.35 _Iprogram_t_17 -0.3977047 0.0040299 -98.69 _Iqtr_48 0.1699663 0.0080823 21.03 _Iprogram_t_18 -0.7963758 0.0042293 -188.3 _Iqtr_49 0.2978381 0.0075909 39.24 _Iprogram_t_19 -1.1055 0.0044276 -249.68 _Iqtr_50 0.2611925 0.0076562 34.12 _Iprogram_t_20 0.457026 0.005094 89.72 _Iqtr_51 0.167633 0.0079061 21.2 _Iprogram_t_21 0.8923988 0.0081042 110.12 _Iqtr_52 0.2133816 0.0079682 
26.78 _Iprogram_t_22 -17.92203 0.0212057 -845.15 _Iqtr_53 0.3622969 0.0077007 47.05 _Iprogram_t_23 -0.4161663 0.0059988 -69.37 _Iqtr_54 0.3399314 0.0077853 43.66 _Iprogram_t_24 -1.981384 0.0089821 -220.59 _Iqtr_55 0.3528508 0.0078444 44.98 _Iprogram_t_25 -0.4817113 0.0044142 -109.13 _Iqtr_56 0.3843097 0.0078248 49.11 _Iprogram_t_26 0.2248544 0.0082209 27.35 _Iqtr_57 0.345994 0.0077881 44.43 _Iprogram_t_27 -1.748409 0.0541882 -32.27 _Iqtr_58 0.3006278 0.0078586 38.25 _Iprogram_t_28 -0.4517075 0.0089116 -50.69 _Iqtr_59 0.32458 0.0078785 41.2 _Iprogram_t_29 -0.5194751 0.0038641 -134.44 _Iqtr_60 0.3468209 0.0077714 44.63 _Iprogram_t_30 -0.1136941 0.0139178 -8.17 _Iqtr_61 0.4290284 0.0076032 56.43 _Iprogram_t_31 -0.5521279 0.0041231 -133.91 _Iqtr_62 0.3439856 0.007783 44.2 _Iprogram_t_32 -0.4051832 0.0076444 -53 _Iqtr_63 0.3427924 0.0077884 44.01 _cons -7.769415 0.008639 -899.34


Poisson regression results for 2012 Variable Coefficient Std Error T-stat Variable Coefficient Std Error T-stat CCG 0.2003567 0.0025269 79.29 _Iqtr_64 0.3245828 0.0069977 46.38 localrt 25683.02 58.84709 436.44 _Iqtr_65 0.3441133 0.0069224 49.71 ldist 0.4264888 0.0003981 1071.27 _Iqtr_66 0.2768529 0.0070267 39.4 _Iqtr_2 -0.1444928 0.0078087 -18.5 _Iqtr_67 0.2655182 0.0070123 37.86 _Iqtr_3 -0.2608783 0.0079537 -32.8 _Iqtr_68 0.3261937 0.0068766 47.44 _Iqtr_4 -0.4467135 0.0082677 -54.03 _Iqtr_69 0.3492992 0.0069756 50.07 _Iqtr_5 -0.4089287 0.0083626 -48.9 _Iqtr_70 0.3159794 0.0069462 45.49 _Iqtr_6 -0.5561705 0.0086183 -64.53 _Iqtr_71 0.3230986 0.0070634 45.74 _Iqtr_7 -0.6180815 0.008694 -71.09 _Iqtr_72 0.3872409 0.0069471 55.74 _Iqtr_8 -0.6771027 0.0087015 -77.81 _Iqtr_73 0.4776592 0.0067394 70.88 _Iqtr_9 -0.8040891 0.008994 -89.4 _Iqtr_74 0.4927182 0.0066578 74.01 _Iqtr_10 -0.9329733 0.0095376 -97.82 _Iqtr_75 0.4519934 0.0067974 66.5 _Iqtr_11 -0.9597762 0.0096755 -99.2 _Iqtr_76 0.5606456 0.0066355 84.49 _Iqtr_12 -1.017339 0.0099374 -102.37 _Iqtr_77 0.7364234 0.0063406 116.15 _Iqtr_13 -0.999195 0.009861 -101.33 _Iqtr_78 0.6986864 0.0064003 109.16 _Iqtr_14 -1.031719 0.0101253 -101.89 _Iqtr_79 0.6933769 0.006322 109.68 _Iqtr_15 -1.156002 0.0103762 -111.41 _Iqtr_80 0.7475355 0.0063222 118.24 _Iqtr_16 -1.233603 0.0106982 -115.31 _Iqtr_81 1.073223 0.006313 170 _Iqtr_17 -1.126199 0.0103258 -109.07 _Iqtr_82 0.9665173 0.0063656 151.84 _Iqtr_18 -1.194072 0.0106329 -112.3 _Iqtr_83 0.9608466 0.0064126 149.84 _Iqtr_19 -1.189601 0.0102126 -116.48 _Iqtr_84 1.000631 0.0063616 157.29 _Iqtr_20 -1.204922 0.0104019 -115.84 _Iqtr_85 0.9874295 0.0063222 156.18 _Iqtr_21 -1.03248 0.0100061 -103.19 _Iqtr_86 0.8505154 0.0064367 132.13 _Iqtr_22 -1.063469 0.009998 -106.37 _Iqtr_87 0.828203 0.0064541 128.32 _Iqtr_23 -1.085665 0.010086 -107.64 _Iqtr_88 0.7944951 0.0064754 122.69 _Iqtr_24 -0.9439003 0.0095632 -98.7 _Iqtr_89 0.7994218 0.0065905 121.3 _Iqtr_25 -0.6974725 0.0091333 -76.37 _Iqtr_90 
0.6378448 0.0067221 94.89 _Iqtr_26 -0.5575622 0.0087275 -63.89 _Iqtr_91 0.6265415 0.0066465 94.27 _Iqtr_27 -0.5888894 0.0088566 -66.49 _Iqtr_92 0.5344048 0.006738 79.31 _Iqtr_28 -0.3969203 0.0084325 -47.07 _Iqtr_93 0.4787384 0.0065857 72.69 _Iqtr_29 -0.368706 0.0089837 -41.04 _Iqtr_94 0.4169949 0.0066601 62.61 _Iqtr_30 -0.229442 0.0086483 -26.53 _Iqtr_95 0.2451514 0.0070437 34.8 _Iqtr_31 -0.1453555 0.0085006 -17.1 _Iqtr_96 0.031746 0.0075202 4.22 _Iqtr_32 -0.0351837 0.0082493 -4.27 _Iprogram_t_2 -0.6360719 0.004008 -158.7 _Iqtr_33 -0.1022044 0.0083724 -12.21 _Iprogram_t_3 -0.687103 0.0045017 -152.63 _Iqtr_34 -0.02598 0.0081753 -3.18 _Iprogram_t_4 -0.5323384 0.0156889 -33.93 _Iqtr_35 0.0563427 0.0079306 7.1 _Iprogram_t_5 0 (omitted) _Iqtr_36 0.1118343 0.0078558 14.24 _Iprogram_t_6 -0.3787998 0.0066517 -56.95 _Iqtr_37 0.1698434 0.0072819 23.32 _Iprogram_t_7 -0.7951308 0.0065349 -121.68 _Iqtr_38 0.1530112 0.00727 21.05 _Iprogram_t_8 -15.92993 0.0595369 -267.56 _Iqtr_39 0.1559969 0.0072537 21.51 _Iprogram_t_9 -0.3576854 0.0053639 -66.68 _Iqtr_40 0.1319325 0.0073293 18 _Iprogram_t_10 -1.343416 0.0118966 -112.92 _Iqtr_41 0.032795 0.0076889 4.27 _Iprogram_t_11 -0.6944182 0.016074 -43.2 _Iqtr_42 -0.0548787 0.007846 -6.99 _Iprogram_t_12 -0.3467685 0.0046828 -74.05 _Iqtr_43 -0.1340837 0.0080813 -16.59 _Iprogram_t_13 -0.1212013 0.0073723 -16.44 _Iqtr_44 -0.1284246 0.0092807 -13.84 _Iprogram_t_14 0.0022947 0.0047749 0.48 _Iqtr_45 -0.1664771 0.0080498 -20.68 _Iprogram_t_15 -0.3605158 0.0051386 -70.16 _Iqtr_46 -0.2251011 0.0082445 -27.3 _Iprogram_t_16 -0.3208738 0.0045172 -71.03 _Iqtr_47 -0.2023061 0.0080506 -25.13 _Iprogram_t_17 -0.1319144 0.0037728 -34.96 _Iqtr_48 -0.2311304 0.0081103 -28.5 _Iprogram_t_18 -0.6132485 0.0038748 -158.27 _Iqtr_49 -0.0702106 0.0074771 -9.39 _Iprogram_t_19 -1.222509 0.0044176 -276.74 _Iqtr_50 -0.1128391 0.0075505 -14.94 _Iprogram_t_20 0.5776441 0.0068026 84.92 _Iqtr_51 -0.1656757 0.0077771 -21.3 _Iprogram_t_21 0.8193212 0.0081131 100.99 _Iqtr_52 
-0.1462152 0.0077486 -18.87 _Iprogram_t_22 -0.915825 0.1289556 -7.1 _Iqtr_53 0.0376808 0.0076457 4.93 _Iprogram_t_23 -0.4867841 0.0052475 -92.76 _Iqtr_54 0.0478873 0.0076993 6.22 _Iprogram_t_24 -2.207862 0.0107138 -206.08 _Iqtr_55 0.069169 0.0076584 9.03 _Iprogram_t_25 -0.4655076 0.0041472 -112.25 _Iqtr_56 0.1356752 0.0075895 17.88 _Iprogram_t_26 0.5350515 0.0082879 64.56 _Iqtr_57 0.1245456 0.0074855 16.64 _Iprogram_t_27 -0.9454971 0.0452334 -20.9 _Iqtr_58 0.0518542 0.0076401 6.79 _Iprogram_t_28 -0.2832162 0.0086736 -32.65 _Iqtr_59 0.0620628 0.0076467 8.12 _Iprogram_t_29 -0.0670502 0.0035879 -18.69 _Iqtr_60 0.1149958 0.0075305 15.27 _Iprogram_t_30 -0.0157549 0.0123441 -1.28 _Iqtr_61 0.2557212 0.0070543 36.25 _Iprogram_t_31 -0.4762384 0.0039426 -120.79 _Iqtr_62 0.2212523 0.0071232 31.06 _Iprogram_t_32 0.3895737 0.0068028 57.27 _Iqtr_63 0.2571285 0.0071153 36.14 _cons -7.067274 0.0078661 -898.44


Poisson regression results for 2013 Variable Coefficient Std Error T-stat Variable Coefficient Std Error T-stat CCG -0.127654 0.0031136 -41 _Iqtr_64 0.6479141 0.0089065 72.75 localrt 57425.55 281.0128 204.35 _Iqtr_65 0.71439 0.008771 81.45 ldist 0.5287868 0.0005542 954.18 _Iqtr_66 0.6991376 0.0088155 79.31 _Iqtr_2 -0.1135332 0.0099505 -11.41 _Iqtr_67 0.7279873 0.0087465 83.23 _Iqtr_3 -0.0761315 0.00997 -7.64 _Iqtr_68 0.7451234 0.0086145 86.5 _Iqtr_4 -0.1824935 0.0103178 -17.69 _Iqtr_69 0.7955 0.0086015 92.48 _Iqtr_5 -0.2356246 0.0103812 -22.7 _Iqtr_70 0.7126775 0.0087223 81.71 _Iqtr_6 -0.3659326 0.0106685 -34.3 _Iqtr_71 0.7455715 0.0087308 85.4 _Iqtr_7 -0.3078945 0.0106671 -28.86 _Iqtr_72 0.8700404 0.0085046 102.3 _Iqtr_8 -0.3840768 0.0108378 -35.44 _Iqtr_73 0.8839069 0.0091443 96.66 _Iqtr_9 -0.437265 0.011096 -39.41 _Iqtr_74 0.8455825 0.0084319 100.28 _Iqtr_10 -0.463209 0.0112966 -41 _Iqtr_75 0.9285216 0.0084879 109.39 _Iqtr_11 -0.5187199 0.0114316 -45.38 _Iqtr_76 1.086382 0.0082464 131.74 _Iqtr_12 -0.5964158 0.0116421 -51.23 _Iqtr_77 1.15124 0.0080515 142.99 _Iqtr_13 -0.5634269 0.0113498 -49.64 _Iqtr_78 1.163751 0.008089 143.87 _Iqtr_14 -0.6102594 0.0115961 -52.63 _Iqtr_79 1.250224 0.0079331 157.6 _Iqtr_15 -0.704996 0.0119138 -59.17 _Iqtr_80 1.334301 0.0078771 169.39 _Iqtr_16 -0.7116367 0.0119647 -59.48 _Iqtr_81 1.58867 0.0077996 203.69 _Iqtr_17 -0.7286209 0.0123084 -59.2 _Iqtr_82 1.466284 0.0079105 185.36 _Iqtr_18 -0.7698748 0.0123993 -62.09 _Iqtr_83 1.507096 0.0078788 191.29 _Iqtr_19 -0.8125432 0.0124262 -65.39 _Iqtr_84 1.513628 0.0078719 192.28 _Iqtr_20 -0.7664439 0.0121606 -63.03 _Iqtr_85 1.456053 0.0079926 182.18 _Iqtr_21 -0.6472053 0.0122651 -52.77 _Iqtr_86 1.350818 0.0080705 167.38 _Iqtr_22 -0.65975 0.0121798 -54.17 _Iqtr_87 1.304984 0.0081004 161.1 _Iqtr_23 -0.6897758 0.0121849 -56.61 _Iqtr_88 1.251346 0.0081307 153.9 _Iqtr_24 -0.5027392 0.0119301 -42.14 _Iqtr_89 1.408098 0.0080246 175.47 _Iqtr_25 -0.1234245 0.0104516 -11.81 _Iqtr_90 1.231128 0.0081701 
150.69 _Iqtr_26 -0.104248 0.0103745 -10.05 _Iqtr_91 1.118897 0.0082763 135.19 _Iqtr_27 -0.2941493 0.0109728 -26.81 _Iqtr_92 0.9721891 0.0084273 115.36 _Iqtr_28 -0.2169922 0.0107163 -20.25 _Iqtr_93 0.7822687 0.0083963 93.17 _Iqtr_29 0.0848945 0.0103508 8.2 _Iqtr_94 0.6102901 0.0085896 71.05 _Iqtr_30 0.2775738 0.0098825 28.09 _Iqtr_95 0.4827849 0.0088829 54.35 _Iqtr_31 0.2494399 0.0098793 25.25 _Iqtr_96 0.3215342 0.0094057 34.18 _Iqtr_32 0.3152671 0.0097253 32.42 _Iprogram_t_2 -0.5878536 0.0041856 -140.45 _Iqtr_33 0.1848448 0.0098958 18.68 _Iprogram_t_3 -0.5477118 0.0048334 -113.32 _Iqtr_34 0.2597834 0.0097556 26.63 _Iprogram_t_4 -0.4049727 0.0153485 -26.39 _Iqtr_35 0.2658358 0.0097757 27.19 _Iprogram_t_5 0 (omitted) _Iqtr_36 0.3210551 0.009651 33.27 _Iprogram_t_6 0.4558713 0.0055553 82.06 _Iqtr_37 0.4469042 0.0089034 50.2 _Iprogram_t_7 -0.7188199 0.0067106 -107.12 _Iqtr_38 0.4702322 0.0088618 53.06 _Iprogram_t_8 -20.68186 0.0383803 -538.87 _Iqtr_39 0.4392707 0.0089351 49.16 _Iprogram_t_9 -0.4048572 0.0068666 -58.96 _Iqtr_40 0.4490907 0.0088679 50.64 _Iprogram_t_10 -1.157123 0.0115534 -100.15 _Iqtr_41 0.3238893 0.0092251 35.11 _Iprogram_t_11 -1.026186 0.0223388 -45.94 _Iqtr_42 0.188578 0.0095153 19.82 _Iprogram_t_12 -0.2721763 0.0050333 -54.08 _Iqtr_43 0.074905 0.009763 7.67 _Iprogram_t_13 -0.3572817 0.0120108 -29.75 _Iqtr_44 0.0126405 0.0099428 1.27 _Iprogram_t_14 -0.3050596 0.0052718 -57.87 _Iqtr_45 -0.0109202 0.010065 -1.08 _Iprogram_t_15 -0.2681691 0.0051069 -52.51 _Iqtr_46 -0.0554645 0.0102297 -5.42 _Iprogram_t_16 -0.3502982 0.0045922 -76.28 _Iqtr_47 0.0371457 0.0099206 3.74 _Iprogram_t_17 -0.141048 0.0038479 -36.66 _Iqtr_48 0.0343595 0.0099238 3.46 _Iprogram_t_18 -0.7085918 0.0041889 -169.16 _Iqtr_49 0.151315 0.0094928 15.94 _Iprogram_t_19 -0.901872 0.0046151 -195.42 _Iqtr_50 0.1779028 0.0093814 18.96 _Iprogram_t_20 1.030742 0.0053728 191.85 _Iqtr_51 0.0644686 0.0097763 6.59 _Iprogram_t_21 0.7672868 0.0092989 82.51 _Iqtr_52 0.0708822 0.009697 7.31 
_Iprogram_t_22 -20.08553 0.0140623 -1428.33 _Iqtr_53 0.3603126 0.0092852 38.8 _Iprogram_t_23 -0.4671096 0.0053999 -86.5 _Iqtr_54 0.3287383 0.0094665 34.73 _Iprogram_t_24 -1.459863 0.0083813 -174.18 _Iqtr_55 0.3871893 0.0094263 41.08 _Iprogram_t_25 -0.4546857 0.0043882 -103.62 _Iqtr_56 0.4100084 0.0094662 43.31 _Iprogram_t_26 0.5634795 0.0097161 57.99 _Iqtr_57 0.3456705 0.0093957 36.79 _Iprogram_t_27 -0.5576524 0.0489289 -11.4 _Iqtr_58 0.335277 0.0095618 35.06 _Iprogram_t_28 -0.3966504 0.0099359 -39.92 _Iqtr_59 0.3182759 0.0096869 32.86 _Iprogram_t_29 -0.2604016 0.003709 -70.21 _Iqtr_60 0.3626806 0.0095408 38.01 _Iprogram_t_30 -0.610067 0.0238412 -25.59 _Iqtr_61 0.5415758 0.0090819 59.63 _Iprogram_t_31 -0.2407891 0.0040052 -60.12 _Iqtr_62 0.5320417 0.0091466 58.17 _Iprogram_t_32 0.3444431 0.0072854 47.28 _Iqtr_63 0.5728794 0.0090482 63.31 _cons -8.895431 0.0104251 -853.27

Exhibit CCG-R-2 (Shum), Page 40

APPENDIX D:

Implied royalty shares from Poisson regressions with category indicators for all programming categories.

Exhibit CCG-R-2 (Shum), Page 41

(Intentionally Left Blank)

Exhibit CCG-R-2 (Shum), Page 42

Table 7: Implied viewing-based shares from Regressions with Category Indicators for all programming categories

Imputed distant viewing:

          2010     2011     2012     2013
CCG      33662    47035    43991    33903
CTV     190651   150136   135543    79374
SDC       7853    11895     4081     4537
PS      550753   489759   392453   315433
PTV     330623   295420   427629   260652
JSC      15934    20704    20493    33227

% share:

CCG       2.98     4.63     4.30     4.66
CTV      16.88    14.79    13.23    10.92
SDC       0.70     1.17     0.40     0.62
PS       48.76    48.25    38.32    43.38
PTV      29.27    29.11    41.75    35.85
JSC       1.41     2.04     2.00     4.57
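The percentage shares in Table 7 are simply each category's imputed distant viewing divided by the yearly total. As a check, the 2010 column can be reproduced as follows (a Python sketch using the figures transcribed from the table; not part of the witness's computation):

```python
# Verify the 2010 "% share" row of Table 7 from the imputed
# distant-viewing totals (values transcribed from the table above).
dist_view_2010 = {
    "CCG": 33662, "CTV": 190651, "SDC": 7853,
    "PS": 550753, "PTV": 330623, "JSC": 15934,
}
total = sum(dist_view_2010.values())
shares = {k: round(100 * v / total, 2) for k, v in dist_view_2010.items()}
print(shares)  # CCG 2.98, CTV 16.88, SDC 0.70, PS 48.76, PTV 29.27, JSC 1.41
```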

The Poisson regression results from which these shares were obtained are provided on the tables on the following pages.

Exhibit CCG-R-2 (Shum), Page 43

Poisson Regression Results for 2010: Incl. Category Indicators Variable Coefficient Std Error T-stat Variable Coefficient Std Error T-stat localrt 1151.603 14.31801 80.43 _Iqtr_62 0.1921716 0.007347 26.16 ldist 0.4777775 0.000397 1203.53 _Iqtr_63 0.2047469 0.0073658 27.8 CTV -0.2607199 0.0029925 -87.12 _Iqtr_64 0.2288432 0.0073366 31.19 SDC -1.626301 0.0098176 -165.65 _Iqtr_65 0.4297799 0.0070011 61.39 PS -0.5524768 0.0024363 -226.77 _Iqtr_66 0.3896006 0.0070578 55.2 Public -0.3699679 0.0022277 -166.08 _Iqtr_67 0.4164472 0.0069818 59.65 JSC -1.191747 0.0087867 -135.63 _Iqtr_68 0.4754307 0.0068893 69.01 _Iqtr_2 -0.1876763 0.0080743 -23.24 _Iqtr_69 0.5288 0.006728 78.6 _Iqtr_3 -0.2511713 0.008075 -31.1 _Iqtr_70 0.5381709 0.0067254 80.02 _Iqtr_4 -0.3523902 0.0082947 -42.48 _Iqtr_71 0.5021211 0.0068561 73.24 _Iqtr_5 -0.3337468 0.0082705 -40.35 _Iqtr_72 0.5635549 0.0068003 82.87 _Iqtr_6 -0.4773302 0.0085847 -55.6 _Iqtr_73 0.7116623 0.0066824 106.5 _Iqtr_7 -0.5623204 0.008683 -64.76 _Iqtr_74 0.6975643 0.0067472 103.39 _Iqtr_8 -0.6788654 0.0089406 -75.93 _Iqtr_75 0.739026 0.0066055 111.88 _Iqtr_9 -0.6211912 0.0089375 -69.5 _Iqtr_76 0.8531249 0.0064921 131.41 _Iqtr_10 -0.764338 0.0093712 -81.56 _Iqtr_77 0.9432121 0.0064034 147.3 _Iqtr_11 -0.8401974 0.009653 -87.04 _Iqtr_78 0.9713046 0.006429 151.08 _Iqtr_12 -0.9460317 0.0100699 -93.95 _Iqtr_79 0.9222433 0.0063257 145.79 _Iqtr_13 -0.9570679 0.0101313 -94.47 _Iqtr_80 1.084025 0.006305 171.93 _Iqtr_14 -1.053109 0.0105062 -100.24 _Iqtr_81 1.227852 0.0062957 195.03 _Iqtr_15 -1.153365 0.0107387 -107.4 _Iqtr_82 1.122517 0.0063676 176.29 _Iqtr_16 -1.159345 0.0108622 -106.73 _Iqtr_83 1.144873 0.006372 179.67 _Iqtr_17 -1.110493 0.0107141 -103.65 _Iqtr_84 1.149083 0.0063921 179.77 _Iqtr_18 -1.100399 0.0106221 -103.59 _Iqtr_85 1.06007 0.0066131 160.3 _Iqtr_19 -1.067685 0.0104471 -102.2 _Iqtr_86 0.9147942 0.0067179 136.17 _Iqtr_20 -1.095085 0.0105268 -104.03 _Iqtr_87 0.895685 0.006778 132.15 _Iqtr_21 -0.9652174 0.009751 -98.99 _Iqtr_88 
0.8393779 0.0068153 123.16 _Iqtr_22 -0.9071481 0.0096948 -93.57 _Iqtr_89 0.9766578 0.0064396 151.67 _Iqtr_23 -0.859886 0.0094991 -90.52 _Iqtr_90 0.7828112 0.0066056 118.51 _Iqtr_24 -0.8535593 0.0094035 -90.77 _Iqtr_91 0.6535236 0.0067722 96.5 _Iqtr_25 -0.7197168 0.0092212 -78.05 _Iqtr_92 0.5931002 0.0068416 86.69 _Iqtr_26 -0.5046614 0.0087173 -57.89 _Iqtr_93 0.6188939 0.006563 94.3 _Iqtr_27 -0.3504991 0.0082031 -42.73 _Iqtr_94 0.4172019 0.0067685 61.64 _Iqtr_28 -0.1962727 0.0079223 -24.77 _Iqtr_95 0.240135 0.0071284 33.69 _Iqtr_29 -0.2653044 0.0091934 -28.86 _Iqtr_96 0.0772872 0.007485 10.33 _Iqtr_30 -0.1058797 0.0088018 -12.03 _Iprogram_t_2 -0.7832998 0.0042187 -185.67 _Iqtr_31 -0.0234751 0.0084559 -2.78 _Iprogram_t_3 -0.929687 0.0047365 -196.28 _Iqtr_32 0.0781126 0.0081082 9.63 _Iprogram_t_4 -0.4202548 0.0198142 -21.21 _Iqtr_33 0.1369077 0.0078006 17.55 _Iprogram_t_5 -1.974295 0.2369924 -8.33 _Iqtr_34 0.2133348 0.0076585 27.86 _Iprogram_t_6 0.4536784 0.0049405 91.83 _Iqtr_35 0.1999444 0.0077215 25.89 _Iprogram_t_7 -1.528451 0.008173 -187.01 _Iqtr_36 0.2837493 0.007532 37.67 _Iprogram_t_8 -0.5910393 0.0139964 -42.23 _Iqtr_37 0.1827825 0.0074046 24.69 _Iprogram_t_9 -0.351572 0.0052281 -67.25 _Iqtr_38 0.1339769 0.0075114 17.84 _Iprogram_t_10 -2.215847 0.0161177 -137.48 _Iqtr_39 0.1357225 0.0075267 18.03 _Iprogram_t_11 -0.9437397 0.0160522 -58.79 _Iqtr_40 0.1612185 0.0074852 21.54 _Iprogram_t_12 -0.5246373 0.0053936 -97.27 _Iqtr_41 0.1033358 0.0075649 13.66 _Iprogram_t_13 -0.4894141 0.0065477 -74.75 _Iqtr_42 0.0185327 0.0076943 2.41 _Iprogram_t_14 -0.3935858 0.0049028 -80.28 _Iqtr_43 0.050882 0.0076772 6.63 _Iprogram_t_15 -0.4993367 0.0053367 -93.57 _Iqtr_44 0.0862964 0.0076201 11.32 _Iprogram_t_16 -0.6147984 0.0050009 -122.94 _Iqtr_45 0.0633765 0.0078486 8.07 _Iprogram_t_17 -0.4456916 0.0040864 -109.07 _Iqtr_46 -0.0018043 0.0080038 -0.23 _Iprogram_t_18 -0.7764574 0.0044831 -173.2 _Iqtr_47 -0.0065811 0.0080355 -0.82 _Iprogram_t_19 -1.399752 0.0046105 -303.6 _Iqtr_48 0.1184252
0.0077473 15.29 _Iprogram_t_20 0.3123703 0.0052525 59.47 _Iqtr_49 0.19543 0.0073291 26.66 _Iprogram_t_21 0.4169936 0.007163 58.21 _Iqtr_50 0.2028332 0.0074535 27.21 _Iprogram_t_22 0.3258316 0.0081919 39.77 _Iqtr_51 -0.0179307 0.0077561 -2.31 _Iprogram_t_23 -1.099981 0.0065077 -169.03 _Iqtr_52 -0.0571981 0.0078221 -7.31 _Iprogram_t_24 -1.877014 0.0111956 -167.66 _Iqtr_53 0.0965582 0.007584 12.73 _Iprogram_t_25 -0.7961638 0.0045724 -174.12 _Iqtr_54 0.0875465 0.0076156 11.5 _Iprogram_t_26 0.0063637 0.007333 0.87 _Iqtr_55 0.0778656 0.007646 10.18 _Iprogram_t_27 -1.340319 0.0360947 -37.13 _Iqtr_56 0.1182232 0.0075578 15.64 _Iprogram_t_28 -0.7854492 0.0084686 -92.75 _Iqtr_57 0.155798 0.0074761 20.84 _Iprogram_t_29 -0.4807948 0.0040488 -118.75 _Iqtr_58 0.1326246 0.0075217 17.63 _Iprogram_t_30 -0.3088705 0.0093015 -33.21 _Iqtr_59 0.152443 0.0075678 20.14 _Iprogram_t_31 -0.6785247 0.0041942 -161.78 _Iqtr_60 0.1550529 0.0075275 20.6 _Iprogram_t_32 0.0695678 0.0067889 10.25 _Iqtr_61 0.2130214 0.007354 28.97 _cons -6.858138 0.0087476 -784

Exhibit CCG-R-2 (Shum), Page 44

Poisson Regression Results for 2011: Incl. Category Indicators Variable Coefficient Std Error T-stat Variable Coefficient Std Error T-stat localrt 6633.865 73.9467 89.71 _Iqtr_62 0.3595056 0.0077878 46.16 ldist 0.4816786 0.0004344 1108.84 _Iqtr_63 0.3583537 0.0077905 46 CTV 0.2163607 0.0042308 51.14 _Iqtr_64 0.4199672 0.0076929 54.59 SDC -1.611761 0.0107733 -149.61 _Iqtr_65 0.4682997 0.0075811 61.77 PS -0.2441995 0.0036106 -67.63 _Iqtr_66 0.3976142 0.0076667 51.86 Public -0.1650964 0.0033797 -48.85 _Iqtr_67 0.4347712 0.0076496 56.84 JSC -0.8162769 0.008694 -93.89 _Iqtr_68 0.4423377 0.0075723 58.42 _Iqtr_2 -0.169266 0.0086232 -19.63 _Iqtr_69 0.5631685 0.0074359 75.74 _Iqtr_3 -0.191562 0.0086589 -22.12 _Iqtr_70 0.5431736 0.0074304 73.1 _Iqtr_4 -0.2766637 0.0088887 -31.13 _Iqtr_71 0.5670238 0.0075394 75.21 _Iqtr_5 -0.320045 0.0089167 -35.89 _Iqtr_72 0.6487256 0.0074031 87.63 _Iqtr_6 -0.4425279 0.0091046 -48.61 _Iqtr_73 0.7753471 0.0072121 107.51 _Iqtr_7 -0.4819394 0.0091953 -52.41 _Iqtr_74 0.8017023 0.0071622 111.94 _Iqtr_8 -0.6056197 0.0095328 -63.53 _Iqtr_75 0.7194982 0.0074666 96.36 _Iqtr_9 -0.6823817 0.0099197 -68.79 _Iqtr_76 0.8426515 0.0082401 102.26 _Iqtr_10 -0.7918526 0.0102553 -77.21 _Iqtr_77 0.9832908 0.0075839 129.65 _Iqtr_11 -0.8275557 0.0104591 -79.12 _Iqtr_78 1.045702 0.0070151 149.06 _Iqtr_12 -0.9055553 0.0107238 -84.44 _Iqtr_79 1.01146 0.0069589 145.35 _Iqtr_13 -0.9702778 0.0109338 -88.74 _Iqtr_80 1.140754 0.0070476 161.86 _Iqtr_14 -1.057986 0.0112836 -93.76 _Iqtr_81 1.298004 0.0068899 188.39 _Iqtr_15 -1.129114 0.0116264 -97.12 _Iqtr_82 1.146822 0.0069899 164.07 _Iqtr_16 -1.20594 0.011824 -101.99 _Iqtr_83 1.168743 0.0070101 166.72 _Iqtr_17 -1.208259 0.0118887 -101.63 _Iqtr_84 1.1684 0.0069931 167.08 _Iqtr_18 -1.14727 0.0116891 -98.15 _Iqtr_85 1.065601 0.0072079 147.84 _Iqtr_19 -1.100963 0.011178 -98.49 _Iqtr_86 0.9015151 0.0073602 122.49 _Iqtr_20 -1.233634 0.0116909 -105.52 _Iqtr_87 0.8443177 0.0073867 114.3 _Iqtr_21 -1.105025 0.0112804 -97.96 _Iqtr_88 
0.8167208 0.007453 109.58 _Iqtr_22 -1.001906 0.0109473 -91.52 _Iqtr_89 1.031985 0.0072233 142.87 _Iqtr_23 -1.012791 0.0111014 -91.23 _Iqtr_90 0.8828274 0.0073188 120.62 _Iqtr_24 -0.9215123 0.0108216 -85.16 _Iqtr_91 0.8176351 0.0072746 112.4 _Iqtr_25 -0.598058 0.0098833 -60.51 _Iqtr_92 0.6897121 0.0074246 92.9 _Iqtr_26 -0.4064137 0.0093 -43.7 _Iqtr_93 0.6559313 0.0071763 91.4 _Iqtr_27 -0.2592893 0.0089118 -29.09 _Iqtr_94 0.4983579 0.0073146 68.13 _Iqtr_28 -0.0310846 0.0084548 -3.68 _Iqtr_95 0.1995451 0.0079129 25.22 _Iqtr_29 0.1276723 0.0090917 14.04 _Iqtr_96 0.0135759 0.0083484 1.63 _Iqtr_30 0.214709 0.009275 23.15 _Iprogram_t_2 -0.5916636 0.0041719 -141.82 _Iqtr_31 0.148665 0.0089642 16.58 _Iprogram_t_3 -0.7867149 0.0046964 -167.52 _Iqtr_32 0.2334295 0.0085601 27.27 _Iprogram_t_4 -0.5127191 0.0249343 -20.56 _Iqtr_33 0.147007 0.0083986 17.5 _Iprogram_t_5 0 (omitted) _Iqtr_34 0.2461514 0.0082661 29.78 _Iprogram_t_6 -0.06434 0.0059043 -10.9 _Iqtr_35 0.2499041 0.0081967 30.49 _Iprogram_t_7 -0.889612 0.007392 -120.35 _Iqtr_36 0.2811003 0.0081884 34.33 _Iprogram_t_8 -0.5830567 0.0502546 -11.6 _Iqtr_37 0.2816852 0.0078997 35.66 _Iprogram_t_9 -0.3638795 0.0054964 -66.2 _Iqtr_38 0.2474988 0.0079072 31.3 _Iprogram_t_10 -1.094331 0.0132193 -82.78 _Iqtr_39 0.2244529 0.0079398 28.27 _Iprogram_t_11 -0.2799695 0.0135733 -20.63 _Iqtr_40 0.2306658 0.0079251 29.11 _Iprogram_t_12 -0.4108582 0.0053349 -77.01 _Iqtr_41 0.2712701 0.0078988 34.34 _Iprogram_t_13 -0.1732849 0.0071896 -24.1 _Iqtr_42 0.2280676 0.0079986 28.51 _Iprogram_t_14 -0.1843285 0.004929 -37.4 _Iqtr_43 0.1774117 0.0080334 22.08 _Iprogram_t_15 -0.4175334 0.0052186 -80.01 _Iqtr_44 0.1947379 0.0079673 24.44 _Iprogram_t_16 -0.614228 0.0050494 -121.64 _Iqtr_45 0.179207 0.0080742 22.19 _Iprogram_t_17 -0.3893446 0.0040346 -96.5 _Iqtr_46 0.1423478 0.0081512 17.46 _Iprogram_t_18 -1.063367 0.0047319 -224.72 _Iqtr_47 0.1594795 0.0081285 19.62 _Iprogram_t_19 -1.151291 0.0046053 -249.99 _Iqtr_48 0.1728689 0.0080985 21.35 _Iprogram_t_20 0.535207
0.0053216 100.57 _Iqtr_49 0.2809768 0.0076262 36.84 _Iprogram_t_21 1.20029 0.0089705 133.8 _Iqtr_50 0.2437138 0.0076919 31.68 _Iprogram_t_22 -17.75267 0.0216234 -821 _Iqtr_51 0.1608832 0.0079506 20.24 _Iprogram_t_23 -0.5078389 0.0058636 -86.61 _Iqtr_52 0.2069639 0.0080122 25.83 _Iprogram_t_24 -0.7543849 0.0112105 -67.29 _Iqtr_53 0.3698647 0.0076995 48.04 _Iprogram_t_25 -0.4635674 0.0044269 -104.72 _Iqtr_54 0.3476388 0.0077833 44.66 _Iprogram_t_26 0.3148848 0.0083175 37.86 _Iqtr_55 0.3587444 0.0078421 45.75 _Iprogram_t_27 -1.674328 0.0542035 -30.89 _Iqtr_56 0.3907349 0.007823 49.95 _Iprogram_t_28 -0.6265061 0.0094812 -66.08 _Iqtr_57 0.3532845 0.007786 45.37 _Iprogram_t_29 -0.4655436 0.0040271 -115.6 _Iqtr_58 0.3079092 0.0078551 39.2 _Iprogram_t_30 -0.083351 0.0140196 -5.95 _Iqtr_59 0.334483 0.0078783 42.46 _Iprogram_t_31 -0.4987792 0.0041849 -119.19 _Iqtr_60 0.3563274 0.0077737 45.84 _Iprogram_t_32 0.1689875 0.008798 19.21 _Iqtr_61 0.4445153 0.0076055 58.45 _cons -7.558048 0.0097739 -773.29

Exhibit CCG-R-2 (Shum), Page 45

Poisson Regression Results for 2012: Incl. Category Indicators Variable Coefficient Std Error T-stat Variable Coefficient Std Error T-stat localrt 26644.64 65.19782 408.67 _Iqtr_62 0.2023905 0.0071205 28.42 ldist 0.4282045 0.0003974 1077.51 _Iqtr_63 0.2391375 0.0071128 33.62 CTV -0.4931748 0.003625 -136.05 _Iqtr_64 0.3063996 0.0069941 43.81 SDC -2.175163 0.0147958 -147.01 _Iqtr_65 0.3276592 0.0069191 47.36 PS -0.0643195 0.0028037 -22.94 _Iqtr_66 0.2607419 0.0070197 37.14 Public -0.1770702 0.0025963 -68.2 _Iqtr_67 0.2494591 0.0070108 35.58 JSC -0.3098689 0.0078631 -39.41 _Iqtr_68 0.3103011 0.0068747 45.14 _Iqtr_2 -0.1447074 0.0078035 -18.54 _Iqtr_69 0.3472509 0.00697 49.82 _Iqtr_3 -0.2684256 0.0079467 -33.78 _Iqtr_70 0.3140856 0.0069415 45.25 _Iqtr_4 -0.4549627 0.008262 -55.07 _Iqtr_71 0.3003087 0.0070628 42.52 _Iqtr_5 -0.4203251 0.0083565 -50.3 _Iqtr_72 0.3644825 0.0069459 52.47 _Iqtr_6 -0.5686527 0.0086139 -66.02 _Iqtr_73 0.4530329 0.0067373 67.24 _Iqtr_7 -0.6283922 0.0086865 -72.34 _Iqtr_74 0.4682677 0.0066528 70.39 _Iqtr_8 -0.6881608 0.0086955 -79.14 _Iqtr_75 0.4090707 0.0067928 60.22 _Iqtr_9 -0.8151072 0.008989 -90.68 _Iqtr_76 0.5179637 0.0066322 78.1 _Iqtr_10 -0.944529 0.009534 -99.07 _Iqtr_77 0.7202177 0.0063384 113.63 _Iqtr_11 -0.971089 0.009674 -100.38 _Iqtr_78 0.68234 0.0063954 106.69 _Iqtr_12 -1.028355 0.0099355 -103.5 _Iqtr_79 0.670269 0.0063198 106.06 _Iqtr_13 -1.009364 0.0098591 -102.38 _Iqtr_80 0.724614 0.0063226 114.61 _Iqtr_14 -1.040273 0.0101244 -102.75 _Iqtr_81 1.058079 0.0063376 166.95 _Iqtr_15 -1.16488 0.0103742 -112.29 _Iqtr_82 0.951901 0.0063935 148.89 _Iqtr_16 -1.242795 0.0106965 -116.19 _Iqtr_83 0.9433301 0.0064469 146.32 _Iqtr_17 -1.131067 0.0103237 -109.56 _Iqtr_84 0.9824789 0.0063956 153.62 _Iqtr_18 -1.198734 0.0106302 -112.77 _Iqtr_85 0.9842104 0.0063044 156.12 _Iqtr_19 -1.18227 0.0102145 -115.74 _Iqtr_86 0.8478706 0.006418 132.11 _Iqtr_20 -1.197639 0.010403 -115.12 _Iqtr_87 0.8286897 0.006432 128.84 _Iqtr_21 -1.011479 0.0100028 -101.12 
_Iqtr_88 0.7963571 0.0064485 123.49 _Iqtr_22 -1.042431 0.009995 -104.3 _Iqtr_89 0.8076065 0.0065689 122.94 _Iqtr_23 -1.062065 0.0100787 -105.38 _Iqtr_90 0.6464396 0.006703 96.44 _Iqtr_24 -0.9201112 0.0095562 -96.28 _Iqtr_91 0.6316088 0.0066202 95.41 _Iqtr_25 -0.6771514 0.0091352 -74.13 _Iqtr_92 0.5363942 0.0067217 79.8 _Iqtr_26 -0.5370963 0.0087293 -61.53 _Iqtr_93 0.4836841 0.00658 73.51 _Iqtr_27 -0.5641693 0.0088566 -63.7 _Iqtr_94 0.4229181 0.0066523 63.57 _Iqtr_28 -0.3721964 0.0084321 -44.14 _Iqtr_95 0.2376275 0.0070363 33.77 _Iqtr_29 -0.3643989 0.0089849 -40.56 _Iqtr_96 0.0179518 0.0075125 2.39 _Iqtr_30 -0.2249912 0.0086497 -26.01 _Iprogram_t_2 -0.6429924 0.0040106 -160.32 _Iqtr_31 -0.1394091 0.0085005 -16.4 _Iprogram_t_3 -0.6936247 0.0045026 -154.05 _Iqtr_32 -0.0293088 0.0082507 -3.55 _Iprogram_t_4 -0.5655679 0.0156928 -36.04 _Iqtr_33 -0.0994181 0.008376 -11.87 _Iprogram_t_5 0 (omitted) _Iqtr_34 -0.0232361 0.0081793 -2.84 _Iprogram_t_6 -0.5115301 0.006941 -73.7 _Iqtr_35 0.0599792 0.0079341 7.56 _Iprogram_t_7 -0.7875468 0.0065336 -120.54 _Iqtr_36 0.1155303 0.0078593 14.7 _Iprogram_t_8 -17.10238 0.0595444 -287.22 _Iqtr_37 0.1851401 0.0072839 25.42 _Iprogram_t_9 -0.4622736 0.0055614 -83.12 _Iqtr_38 0.1684657 0.0072706 23.17 _Iprogram_t_10 -1.346477 0.011896 -113.19 _Iqtr_39 0.1690756 0.0072528 23.31 _Iprogram_t_11 -0.6942675 0.0160714 -43.2 _Iqtr_40 0.1449875 0.0073279 19.79 _Iprogram_t_12 -0.3504896 0.0046831 -74.84 _Iqtr_41 0.0242536 0.0076929 3.15 _Iprogram_t_13 -0.1266478 0.0073717 -17.18 _Iqtr_42 -0.0633971 0.0078509 -8.08 _Iprogram_t_14 -0.0676141 0.0048543 -13.93 _Iqtr_43 -0.1444563 0.008082 -17.87 _Iprogram_t_15 -0.3681651 0.0051223 -71.87 _Iqtr_44 -0.1387881 0.0092745 -14.96 _Iprogram_t_16 -0.3229358 0.004517 -71.49 _Iqtr_45 -0.1757964 0.0080507 -21.84 _Iprogram_t_17 -0.1340839 0.0037737 -35.53 _Iqtr_46 -0.2343812 0.0082465 -28.42 _Iprogram_t_18 -0.4037564 0.0041994 -96.15 _Iqtr_47 -0.2118986 0.0080536 -26.31 _Iprogram_t_19 -1.274475 0.0046353 -274.95 _Iqtr_48 -0.2407711
0.0081111 -29.68 _Iprogram_t_20 0.4662223 0.0069625 66.96 _Iqtr_49 -0.0584637 0.0074622 -7.83 _Iprogram_t_21 0.8753363 0.0092338 94.8 _Iqtr_50 -0.1010677 0.0075353 -13.41 _Iprogram_t_22 -1.021993 0.1289529 -7.93 _Iqtr_51 -0.1688104 0.0077562 -21.76 _Iprogram_t_23 -0.4604836 0.0052418 -87.85 _Iqtr_52 -0.1494244 0.0077247 -19.34 _Iprogram_t_24 -0.6522407 0.0131898 -49.45 _Iqtr_53 0.0256593 0.0076373 3.36 _Iprogram_t_25 -0.4495103 0.0041457 -108.43 _Iqtr_54 0.0361306 0.0076934 4.7 _Iprogram_t_26 0.5042283 0.0083927 60.08 _Iqtr_55 0.0551252 0.0076503 7.21 _Iprogram_t_27 -1.061529 0.0452798 -23.44 _Iqtr_56 0.1216359 0.0075825 16.04 _Iprogram_t_28 -0.2134526 0.0085392 -25 _Iqtr_57 0.1099198 0.0074849 14.69 _Iprogram_t_29 -0.1429887 0.0037314 -38.32 _Iqtr_58 0.0372546 0.0076404 4.88 _Iprogram_t_30 -0.0089098 0.0123549 -0.72 _Iqtr_59 0.0469844 0.0076473 6.14 _Iprogram_t_31 -0.5171514 0.0040467 -127.8 _Iqtr_60 0.0999736 0.0075302 13.28 _Iprogram_t_32 0.4964226 0.0080159 61.93 _Iqtr_61 0.2367184 0.0070523 33.57 _cons -6.900734 0.0087181 -791.54

Exhibit CCG-R-2 (Shum), Page 46

Poisson Regression Results for 2013: Incl. Category Indicators Variable Coefficient Std Error T-stat Variable Coefficient Std Error T-stat localrt 59803.45 311.9119 191.73 _Iqtr_62 0.5672219 0.0091107 62.26 ldist 0.5308335 0.0005705 930.49 _Iqtr_63 0.6076527 0.0090107 67.44 _Icat_2 0.2099309 0.0049123 42.74 _Iqtr_64 0.6826526 0.0088694 76.97 _Icat_3 -0.6924967 0.0099429 -69.65 _Iqtr_65 0.7388159 0.0087362 84.57 _Icat_4 0.1080596 0.0035194 30.7 _Iqtr_66 0.7228429 0.0087856 82.28 _Icat_5 0.2992664 0.0032732 91.43 _Iqtr_67 0.7542185 0.0087125 86.57 _Icat_6 -0.4054421 0.0087607 -46.28 _Iqtr_68 0.771358 0.0085659 90.05 _Iqtr_2 -0.1134465 0.0098836 -11.48 _Iqtr_69 0.8035817 0.0085502 93.98 _Iqtr_3 -0.0842461 0.0099408 -8.47 _Iqtr_70 0.721388 0.0086594 83.31 _Iqtr_4 -0.1895279 0.0102899 -18.42 _Iqtr_71 0.7502696 0.0086651 86.58 _Iqtr_5 -0.2271215 0.0103355 -21.97 _Iqtr_72 0.8742556 0.008438 103.61 _Iqtr_6 -0.3562743 0.0106238 -33.54 _Iqtr_73 0.8838842 0.0094036 93.99 _Iqtr_7 -0.2980548 0.0106384 -28.02 _Iqtr_74 0.8484089 0.0083751 101.3 _Iqtr_8 -0.3733176 0.0108096 -34.54 _Iqtr_75 0.935788 0.0084301 111.01 _Iqtr_9 -0.4277876 0.01107 -38.64 _Iqtr_76 1.093504 0.0081897 133.52 _Iqtr_10 -0.4550777 0.0112691 -40.38 _Iqtr_77 1.155191 0.0079889 144.6 _Iqtr_11 -0.5109836 0.011405 -44.8 _Iqtr_78 1.167543 0.0080328 145.35 _Iqtr_12 -0.589098 0.0116165 -50.71 _Iqtr_79 1.258407 0.0078734 159.83 _Iqtr_13 -0.5521317 0.0113231 -48.76 _Iqtr_80 1.341746 0.0078178 171.63 _Iqtr_14 -0.5994673 0.0115685 -51.82 _Iqtr_81 1.595993 0.0077353 206.33 _Iqtr_15 -0.6984725 0.0118897 -58.75 _Iqtr_82 1.473701 0.0078524 187.68 _Iqtr_16 -0.7054575 0.0119415 -59.08 _Iqtr_83 1.517305 0.0078157 194.13 _Iqtr_17 -0.7220729 0.0122805 -58.8 _Iqtr_84 1.523158 0.0078121 194.97 _Iqtr_18 -0.7633166 0.0123713 -61.7 _Iqtr_85 1.465726 0.007941 184.58 _Iqtr_19 -0.8063954 0.0123976 -65.04 _Iqtr_86 1.36046 0.0080198 169.64 _Iqtr_20 -0.7604279 0.0121317 -62.68 _Iqtr_87 1.314669 0.0080633 163.04 _Iqtr_21 -0.649603 0.0122386 
-53.08 _Iqtr_88 1.262229 0.0080895 156.03 _Iqtr_22 -0.6622305 0.0121532 -54.49 _Iqtr_89 1.400213 0.0079727 175.63 _Iqtr_23 -0.681993 0.0121542 -56.11 _Iqtr_90 1.223187 0.0081186 150.66 _Iqtr_24 -0.4950179 0.0118971 -41.61 _Iqtr_91 1.109927 0.0082304 134.86 _Iqtr_25 -0.1029695 0.0104195 -9.88 _Iqtr_92 0.9627917 0.0083873 114.79 _Iqtr_26 -0.0840553 0.0103412 -8.13 _Iqtr_93 0.7784024 0.0083562 93.15 _Iqtr_27 -0.2715662 0.0109463 -24.81 _Iqtr_94 0.6078037 0.0085487 71.1 _Iqtr_28 -0.1948525 0.0106887 -18.23 _Iqtr_95 0.4767291 0.008844 53.9 _Iqtr_29 0.1178642 0.0103243 11.42 _Iqtr_96 0.3129847 0.009464 33.07 _Iqtr_30 0.3105398 0.0098557 31.51 _Iprogram_t_2 -0.5997075 0.0041803 -143.46 _Iqtr_31 0.2816599 0.0098541 28.58 _Iprogram_t_3 -0.5514181 0.0048291 -114.19 _Iqtr_32 0.3473454 0.0096997 35.81 _Iprogram_t_4 -0.3999882 0.0153574 -26.05 _Iqtr_33 0.2203376 0.009872 22.32 _Iprogram_t_5 0 (omitted) _Iqtr_34 0.2952576 0.0097319 30.34 _Iprogram_t_6 0.6201751 0.0056828 109.13 _Iqtr_35 0.2987823 0.00975 30.64 _Iprogram_t_7 -0.6798136 0.0067151 -101.24 _Iqtr_36 0.3550535 0.0096239 36.89 _Iprogram_t_8 -20.77028 0.0394129 -526.99 _Iqtr_37 0.5029779 0.008875 56.67 _Iprogram_t_9 -0.2405329 0.0070815 -33.97 _Iqtr_38 0.5262016 0.0088326 59.57 _Iprogram_t_10 -1.125609 0.011541 -97.53 _Iqtr_39 0.4942721 0.0089051 55.5 _Iprogram_t_11 -1.031474 0.0223436 -46.16 _Iqtr_40 0.5041021 0.0088372 57.04 _Iprogram_t_12 -0.276106 0.005031 -54.88 _Iqtr_41 0.36444 0.0091935 39.64 _Iprogram_t_13 -0.3414586 0.0119972 -28.46 _Iqtr_42 0.229586 0.0094868 24.2 _Iprogram_t_14 -0.198966 0.0053649 -37.09 _Iqtr_43 0.1126844 0.0097328 11.58 _Iprogram_t_15 -0.2342821 0.005109 -45.86 _Iqtr_44 0.0504395 0.0099132 5.09 _Iprogram_t_16 -0.3454253 0.0045898 -75.26 _Iqtr_45 0.0210997 0.0100326 2.1 _Iprogram_t_17 -0.1003197 0.0038459 -26.08 _Iqtr_46 -0.0233673 0.0101972 -2.29 _Iprogram_t_18 -0.6440535 0.0049507 -130.09 _Iqtr_47 0.0673151 0.0098848 6.81 _Iprogram_t_19 -0.7766709 0.0048047 -161.65 _Iqtr_48 0.0641254 0.0098899 6.48
_Iprogram_t_20 1.219644 0.0056778 214.81 _Iqtr_49 0.171506 0.0094615 18.13 _Iprogram_t_21 1.198089 0.0102702 116.66 _Iqtr_50 0.1982882 0.0093468 21.21 _Iprogram_t_22 -20.02881 0.0141958 -1410.9 _Iqtr_51 0.0852534 0.0097413 8.75 _Iprogram_t_23 -0.4578597 0.0054097 -84.64 _Iqtr_52 0.0918195 0.0096616 9.5 _Iprogram_t_24 -0.8890318 0.0082593 -107.64 _Iqtr_53 0.3790993 0.0092339 41.06 _Iprogram_t_25 -0.4414595 0.0044307 -99.64 _Iqtr_54 0.3474794 0.0094224 36.88 _Iprogram_t_26 0.7652971 0.0097848 78.21 _Iqtr_55 0.4044025 0.0093844 43.09 _Iprogram_t_27 -0.3751471 0.0489627 -7.66 _Iqtr_56 0.427334 0.0094243 45.34 _Iprogram_t_28 -0.2919689 0.0103302 -28.26 _Iqtr_57 0.3687554 0.0093583 39.4 _Iprogram_t_29 -0.14363 0.0038767 -37.05 _Iqtr_58 0.358324 0.0095272 37.61 _Iprogram_t_30 -0.4797948 0.0237718 -20.18 _Iqtr_59 0.3426403 0.009655 35.49 _Iprogram_t_31 -0.1202948 0.0040648 -29.59 _Iqtr_60 0.3866537 0.0095082 40.67 _Iprogram_t_32 0.9429432 0.0079686 118.33 _Iqtr_61 0.5767597 0.0090434 63.78 _cons -9.231757 0.0113392 -814.15

Exhibit CCG-R-2 (Shum), Page 47

(Intentionally Left Blank)

Exhibit CCG-R-2 (Shum), Page 48

Appendix E:

Details of computations to impute distant viewing

for CKSH and CKWS in 2010

Exhibit CCG-R-2 (Shum), Page 49

(Intentionally Left Blank)

Exhibit CCG-R-2 (Shum), Page 50

The starting point for my imputation procedure is the projected distant viewing numbers obtained from my adjusted regression, which are the basis upon which I computed the royalty shares in Table 3. I imputed distant viewing numbers for CKSH and CKWS in 2010 to match the shares of total distant viewing that these stations obtained in 2013 among the six stations common to both years. This imputation procedure is summarized in Table 8. The numbers in italics are my imputed distant viewing numbers for CKSH and CKWS in 2010; they were imputed to match these stations’ shares of distant viewing in 2013 (13.8% for CKSH and 19.8% for CKWS). Note that I do not utilize the distant viewing numbers for CBET and CBFT in 2013 because those stations were not in the 2010 sample.

Table 8: Imputing CKSH and CKWS Distant Viewing in 2010 (Imputed numbers in italics.)

Station    2010 dist view   2010 share   2013 dist view   2013 share
CBET              -             -           2526.24           -
CBFT              -             -           4468.33           -
CBLT          10080.87        0.162         4164.22         0.130
CBMT           6658.40        0.109         4950.54         0.155
CBUT          15497.37        0.254         8005.69         0.251
CFTO           8287.67        0.136         4110.24         0.129
CKSH           8394.44        0.138         4397.85         0.138
CKWS          12075.41        0.198         6326.31         0.198
CBWT              -             -              -              -
CHLT              -             -              -              -
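The imputation described above can be sketched as follows. Writing x and y for the imputed CKSH and CKWS values, the procedure solves x / (known + x + y) = s_CKSH and y / (known + x + y) = s_CKWS, where "known" is the summed 2010 distant viewing of the four stations observed in both years and the s terms are the 2013 shares. This Python sketch (not the witness's actual computation) reproduces the italicized values in Table 8 to within rounding of the reported figures:

```python
# Impute 2010 distant viewing for CKSH and CKWS so that each station's
# share among the six common stations matches its 2013 share (Table 8).
dv_2013 = {"CBLT": 4164.22, "CBMT": 4950.54, "CBUT": 8005.69,
           "CFTO": 4110.24, "CKSH": 4397.85, "CKWS": 6326.31}
dv_2010_known = {"CBLT": 10080.87, "CBMT": 6658.40,
                 "CBUT": 15497.37, "CFTO": 8287.67}

total_2013 = sum(dv_2013.values())
s_cksh = dv_2013["CKSH"] / total_2013   # 2013 share, ~0.138
s_ckws = dv_2013["CKWS"] / total_2013   # 2013 share, ~0.198

# Solving the two share equations for the imputed values:
known = sum(dv_2010_known.values())
denom = 1 - s_cksh - s_ckws
cksh_2010 = s_cksh / denom * known   # matches Table 8's 8394.44
ckws_2010 = s_ckws / denom * known   # matches Table 8's 12075.41
print(round(cksh_2010, 2), round(ckws_2010, 2))
```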

The next step is to allocate these distant viewing numbers to the various program categories. Table 9 summarizes the procedure. For CKSH

Exhibit CCG-R-2 (Shum), Page 51

and CKWS in 2010, I compute the distant viewing for each claimant category by multiplying my imputed distant viewing numbers for these stations (as given in Table 8) by the claimant category shares for CKSH and CKWS from 2013.40 The category-specific imputed distant viewing numbers are given in italic font in Table 9.

Table 9: Imputed Distant Viewing, by Category

          CKSH 2013   CKSH 2010   CKWS 2013   CKWS 2010
           shares     dist view    shares     dist view
CCG         0.977      8202.63      0.866     10458.86
CTV         0.003        27.32      0.00008       0.91
SDC           -            -        0.012       140.95
PS          0.020       164.49      0.122      1474.69
Total:                 8394.44                12075.41
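The italicized entries in Table 9 are the product of each station's imputed 2010 total and its 2013 claimant-category shares. A minimal check in Python (a sketch using the rounded shares as printed in Table 9, so the products agree with the table only to within a few units):

```python
# Allocate the imputed 2010 station totals across claimant categories
# using the 2013 category shares (rounded values printed in Table 9).
cksh_total, ckws_total = 8394.44, 12075.41
cksh_shares = {"CCG": 0.977, "CTV": 0.003, "PS": 0.020}
ckws_shares = {"CCG": 0.866, "CTV": 0.00008, "SDC": 0.012, "PS": 0.122}

cksh_alloc = {k: cksh_total * s for k, s in cksh_shares.items()}
ckws_alloc = {k: ckws_total * s for k, s in ckws_shares.items()}
# cksh_alloc["CCG"] is close to Table 9's 8202.63, and
# ckws_alloc["CCG"] is close to 10458.86; the small differences
# reflect rounding of the printed shares.
```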

Finally, I add the category-specific distant viewing numbers for CKSH and CKWS from Table 9 to the imputed distant viewing numbers from my adjusted regression (as given in the top half of Table 3) and recompute the CCG shares. The resulting adjusted shares are presented in Table 5 in the main text.

40 These shares are computed from the share of imputed distant viewing for CKSH and CKWS of programming in each claimant category, which emerges from my adjusted regression analysis.

Exhibit CCG-R-2 (Shum), Page 52

EXHIBIT CCG-R-3

WRITTEN REBUTTAL TESTIMONY OF FREDERICK CONRAD, PH.D.

Written Rebuttal Testimony of Frederick Conrad, Ph.D.

2010-2013 Cable Royalty Distribution Proceeding

Docket No. 14-CRB-0010-CD (2010-2013)

September 15, 2017

1. QUALIFICATIONS

I am a professor of survey methodology at the University of Michigan (Institute for Social Research), where I have been employed since 2002. In 2014, I was also appointed professor of Psychology at the University of Michigan. My career in survey methodology began in 1991, when I was hired by the US Bureau of Labor Statistics to help improve the surveys used to produce key government statistics. Since 2012, I have directed the Michigan Program in Survey Methodology (which grants master’s and doctoral degrees) and, from 2012 to 2016, I also directed the Joint Program in Survey Methodology at the University of Maryland. I teach courses in data collection methods and the cognitive origins of survey measurement error, among others. My doctorate is in cognitive psychology from the University of Chicago.

My relevant research is concerned with assessing and improving the quality of survey responses by understanding how respondents arrive at their answers. This has led to new data collection procedures for both interview-based surveys and online, self-administered questionnaires. I have a special interest in the use of alternative survey modes, some using new technologies. My CV, which appears in Appendix A, lists 93 peer reviewed journal articles and book chapters, with four articles currently under review and three in preparation; it also lists four books that I either co-authored or co-edited. My CV lists

Exhibit CCG-3 (Conrad), Page 1

fifteen research grants awarded by the National Science Foundation or the National Institutes of Health on which I have been principal or co-principal investigator. In 2013, I received the Warren Mitofsky Innovators Award from the American Association for Public Opinion Research. I am on the editorial board and advisory board of Public Opinion Quarterly, and served as an associate editor of the Journal of Official Statistics from 2002 to 2012. I have served on several technical panels convened by the Committee on National Statistics at the National Academy of Sciences, and have served as a reviewer of numerous proposals submitted to the National Science Foundation and the National Institutes of Health.

2. CHARGE

The Canadian Claimants Group (CCG) asked me to critically review certain surveys administered to cable system operators to assess their judgments of the relative market value of several programming categories carried on distant signals. More specifically, they asked that I review the surveys commissioned by the Joint Sports Claimants (JSC), known as the “Bortz” surveys, as well as the surveys commissioned by the Programming Suppliers (PS) claimant group, known as the “Horowitz” surveys. In undertaking that review, I focus on how the methods used in these surveys might affect operators’ valuation of Canadian programming.

I have not been asked to opine on whether the Bortz or Horowitz surveys actually provide any information on the relative value of programming on distant signals for other claimant categories.

Exhibit CCG-3 (Conrad), Page 2

3. SUMMARY

I have two primary criticisms of the Bortz and Horowitz surveys. First, most of the participating systems cannot assign value to Canadian programming, while all systems can assign value to the remaining programming categories (with the possible exception of Public Television Claimant (PTV) programming on non-commercial or educational signals). This caps the maximum overall value that Canadian programming can attain at a much lower level than the maximum attainable by the other programming categories. Second, the PTV and Canadian programming categories differ from the other categories in that they group content together in a way that most people tend not to classify objects: according to a property the content shares (educational/non-commercial status or country of origin) rather than the type of content (e.g., sports, movies, serials). The constant sum categories for PTV and Canadian programming have the character of what have been called “unnatural categories,” which have been shown to lead to poor recall of category instances. In my view, both the cap on maximum value and the unnatural character of the PTV and Canadian programming categories are likely to have led to undervaluation of these categories in the Bortz and Horowitz surveys.

In addition, in reviewing the relevant testimony I saw no evidence that the questionnaires had been pretested; had pretesting been done, it would likely have revealed problems with the programming categories. Moreover, the number of respondents whose systems carried a distant Canadian signal was quite small, which increases the uncertainty of the estimates derived from these responses and makes the estimates more vulnerable to extreme responses. Finally, although the response rates were respectable for telephone surveys conducted in this period, there was no analysis of nonresponse bias and no adjustment for nonresponse. Thus, we cannot rule out the possibility that the survey estimates are biased.

Exhibit CCG-3 (Conrad), Page 3

4. BORTZ AND HOROWITZ SURVEYS

The two surveys are similar. The Bortz survey has evolved over time in response to certain concerns raised by the various tribunals.1 Horowitz states that his survey attempts to improve upon the Bortz survey as presented in the 2004-2005 royalty distribution proceeding. I organize my discussion in sections on “Sample,” “Response Rates and Nonresponse,” and “Questionnaire.”

4.1 Sample

My primary concern with the sampling design is that very few cable systems both carry a distant Canadian signal and are likely to have completed the questionnaire. The resulting small sample yields value estimates for Canadian programming that are less stable (i.e., have higher standard errors) and more vulnerable to extreme values than the estimates of relative value for the other types of programming, such as sports, movies, or news, which all systems in the survey carry and which are therefore based on a much larger sample.

The Trautman testimony reports that the number of cable systems in 2010-2013 was 1236, 1148, 946, and 943, respectively.2 During 2010, 2011, 2012, and 2013 there were, respectively, 40, 42, 27, and 32 Form 3 cable systems that carried one or more Canadian signals on a distant basis during either or both semi-annual accounting periods.3 Thus, the cable systems that carried a distant Canadian signal, and which might potentially have been sampled, constituted 3.24%, 3.66%, 2.85%, and 3.39% of all eligible systems in the Bortz and Horowitz surveys in those years. Even if all systems carrying a Canadian signal had been invited to participate in the surveys, there would have been far

1 Written Direct Testimony of James M. Trautman (“Trautman Testimony”), Appendix A.

2 Trautman Testimony, p. 32, Table III-3.

3 Based on Form 3 carriage data from Cable Data Corporation.

Exhibit CCG-3 (Conrad), Page 4

fewer of these systems than systems that do not carry a Canadian signal. And, of course, only a fraction of the systems carrying a Canadian signal were actually sampled and invited to participate in the surveys. Of those invited, the numbers that responded to the Bortz surveys in 2010 through 2013 were 7, 8, 5, and 11;4 the numbers that responded to the Horowitz surveys were 1, 7, 7, and 8.5 These are very small numbers of systems on which to base the estimates of value for Canadian programming in each year: they inflate the standard errors relative to the estimates of value based on all systems and render the estimates particularly vulnerable to any extreme values. Compare these responding sample sizes with those for all systems in the two surveys over the four years: 163, 161, 170, and 160 in the Bortz survey and 136, 174, 228, and 200 in the Horowitz survey.
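The carriage percentages cited above follow directly from the yearly counts. A quick arithmetic check (a Python sketch, not part of the testimony):

```python
# Share of cable systems carrying at least one distant Canadian signal,
# per year (counts from the Trautman testimony and the CDC carriage data).
systems = {2010: 1236, 2011: 1148, 2012: 946, 2013: 943}
canadian = {2010: 40, 2011: 42, 2012: 27, 2013: 32}
pct = {yr: round(100 * canadian[yr] / systems[yr], 2) for yr in systems}
print(pct)  # {2010: 3.24, 2011: 3.66, 2012: 2.85, 2013: 3.39}
```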

It is also the case that cable systems that carried only a distant signal containing PTV or Canadian programming were excluded from the sampling frame in the Bortz survey “because it is not possible to obtain an estimate of relative value where the cable operator does not carry diverse types of distant signal programming.”6 I agree with the Cable Royalty Board’s criticism of this practice:

The exclusion of such cable systems clearly biases the Bortz estimates downward for PTV and Canadian programming. The Bortz study seeks to excuse this bias on grounds that it is not possible to obtain an estimate of relative value where the cable system carries only one type of distant signal programming. But this explanation fails to adequately consider the view that: (1) A cable system that

4 Based on files received from Joint Sports Claimants in discovery: JSC 00008184 CRB 2010 Redacted.xlsx, JSC 00008185 CRB_2011_Redacted.xlsx, JSC 00008186 CRB2012 Redacted.xlsx, JSC 00008183 2013 Redacted.xlsx.

5 Based on files received from Program Suppliers in discovery: 2010 Survey Full Data Set - Completes and non-completes with codes.xlsx, 2011 Survey Full Data Set - Completes and non-completes with codes.xlsx, 2012 Survey Full Data Set - Completes and non-completes with codes.xlsx, 2013 Survey Full Data Set - Completes and non-completes with codes.xlsx.

6 Trautman Testimony, p. 14.

chooses only PTV or Canadian programming may be implicitly making a choice in favor of a 100% relative value score for such programming; (2) an explicit 100% relative value score for the Movies category (and concomitant 0% score for the remaining programming categories) is regarded as acceptable by the Bortz methodology in the case of a U.S. commercial station; and, (3) the latter occurrence—a 100% relative value score for the Movies category—would be recorded by Bortz even in the absence of PTV or Canadian distant signals from the responding cable operator’s system.7

This led to the exclusion of between one and four systems carrying only a Canadian signal in 2010-2013, and between two and four systems carrying only educational and Canadian signals in 2011-2013.8 Given the small number of responding systems which carry a Canadian signal, excluding this small number of Canadian-only or educational- and Canadian-only systems reduced the small number of systems evaluating Canadian programming even further, exacerbating the vulnerability of the estimates to extreme values.

In contrast, the Horowitz questionnaire was tailored to different programming groups, including respondents whose system(s) carried only a Canadian signal, an educational signal, or WGN as a distant signal, so that these respondents were asked only about the one distant signal their system(s) carried. In principle, this addresses the concerns expressed by the Cable Royalty Board (CRB) about the Bortz surveys’ exclusion of these systems. However, after 2010 none of these systems assigned 100% value to the one distant signal they carried.9 In 2010, seven out of the eight responding systems carrying only a distant

7 Cable Royalty Board, Distribution of the 2004 and 2005 Cable Royalty Funds, Docket No. 2007–3 CRB CD 2004–2005, Federal Register (2010), Vol. 75, No. 180, Notices, p. 57067 (“2004-2005 Phase I Cable Distribution”).

8 Trautman Testimony, p. 13, Table II-1, second footnote.

9 Based on files received from Program Suppliers in discovery: 2010 Survey Full Data Set - Completes and non-completes with codes.xlsx, 2011 Survey Full Data Set - Completes and non-completes with

Canadian signal assigned 100% value to the signal. This suggests that (1) something changed after 2010, perhaps in the instructions to these respondents, and (2) in all years, but especially after 2010, there was a problem communicating the task to respondents, as, logically, they should all have assigned 100% of value to the Canadian signal.10 I return to this in section 4.3.

A final comment on the sample design in both surveys: the sampling unit was cable systems, but for a substantial number of sampled systems a single person was interviewed about multiple cable systems. As far as I can tell, this reflects a trend in the cable industry in which independent cable systems have been aggregated under a single corporate umbrella. It does suggest that persons who respond about multiple systems may be less conversant with the thinking behind each individual system’s decisions to carry particular signals and thus may provide less differentiated responses across the multiple systems. This cannot be good for data quality.

4.2 Response Rates and Nonresponse Bias

Response rates ranged from 51.8% to 56.6% across the four years in the Bortz survey. I agree with Dr. Nancy Mathiowetz that “These are considered high response rates; it is not uncommon for high quality telephone surveys conducted by organizations such as the Pew Research Center to achieve response rates in only the 10% to 20% range.” 11 In the Horowitz survey, the response rates across the four years, while not directly reported, range (based on information in Table 3.1 in the Horowitz Testimony) from 45% to 76%.

codes.xlsx, 2012 Survey Full Data Set - Completes and non-completes with codes.xlsx, 2013 Survey Full Data Set - Completes and non-completes with codes.xlsx

10 This pattern of results is also observed for the responding systems whose only distant signal was a PTV signal. After 2010, none of the respondents for such systems assigned 100% value to the PTV signal even though it was the only distant signal these systems carried. In 2010, respondents for 13 of 15 such systems assigned 100% value to the PTV signal.

11 Written Direct Testimony of Nancy A. Mathiowetz, Ph.D. (“Mathiowetz Testimony”).

These too are respectable at the low end and quite strong at the high end for telephone surveys at this point in the history of surveys.

Response rate by itself does not address issues of nonresponse bias. It is always possible that nonrespondents would have answered particular questions differently than respondents had they responded, thus potentially biasing the survey estimates (which are based only on the answers provided by the respondents). The amount of nonresponse error depends on both the number of nonrespondents and the size of the difference between respondents and nonrespondents. Despite the respectable response rates in both the Bortz and Horowitz studies, substantial numbers of invited sample members did not respond (the complementary percentage to the response rates), so that if they had participated and assigned even somewhat different values to the programming categories, the results could have looked quite different. This is why researchers try to maximize response rates, i.e., to minimize the impact of differences between respondents and nonrespondents. It is also possible to (1) compare the attributes of respondents to nonrespondents and (2) adjust for nonresponse by weighting more heavily the answers from respondents whose attributes match those of nonrespondents. For example, if the nonrespondents were predominantly from one of the royalty strata or operated cable systems of a particular size, then the answers from respondents in those strata or from systems of that size would be given more weight. As far as I can tell, this was not done in either the Bortz or Horowitz surveys. This by no means invalidates the estimates reported – it is entirely possible that the nonrespondents would have valued the programming categories exactly as the respondents did – but it does not address the inherent uncertainty about how the nonrespondents might have answered had they actually participated, and it thus reduces one’s confidence in the results.
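The weighting adjustment described above can be made concrete with a small sketch. It post-stratifies respondents so that each royalty stratum contributes in proportion to its share of the invited sample; all stratum names, counts, and values below are invented solely for illustration:

```python
# Invited systems and completes per (hypothetical) royalty stratum
invited = {"large": 100, "small": 100}
completed = {"large": 80, "small": 40}

# Hypothetical mean % of value assigned to some category, per stratum
stratum_mean = {"large": 5.0, "small": 2.0}

# Unweighted estimate: small systems are underrepresented among respondents
n = sum(completed.values())
unweighted = sum(completed[s] * stratum_mean[s] for s in completed) / n  # 4.0

# Post-stratified estimate: weight each stratum by its invited-sample share
N = sum(invited.values())
weighted = sum((invited[s] / N) * stratum_mean[s] for s in invited)  # 3.5
```

In this invented example, the underrepresented small systems assign lower values, so the unadjusted estimate (4.0%) overstates the post-stratified estimate (3.5%); the direction and size of any such gap in the actual surveys is, of course, unknown without the nonrespondent comparison.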

4.3 Questionnaire

The key data collected by both the Bortz and Horowitz surveys come from a question that asks respondents to assign relative market value to between five and seven (in the case of

Bortz) or up to eight (in the case of Horowitz) programming categories. More specifically, the respondents are instructed to assign to each programming category its percent of the total value of all programming, thus requiring that the component answers add to 100%. This approach to data collection, known as “constant sum,” is widely used in market and time use research. The approach is ideally suited to capturing relative values but is commonly used to produce point estimates for each component category. It is used in the latter way in the Bortz and Horowitz surveys.

While I do not take issue with the use of constant sum items in survey research, I cannot opine on whether a constant sum survey is suited to the royalty allocation task at hand. I do, however, have two concerns with its particular implementation in the Bortz and Horowitz surveys. First, all of the participating systems can potentially assign non-zero values to Movies, Sports, Syndicated Shows, News and Devotional programs, but most cannot assign any value to Canadian Programming because they do not carry a distant Canadian signal (systems able to assign value to PTV programming are likely to be similarly limited). One of the standard approaches to analyzing data like this would be, for the systems that do not carry a Canadian signal, to treat the values for Canadian programming as “missing.” It appears that in the Bortz and Horowitz analysis of data for systems that do not carry a Canadian signal, a value of zero rather than “missing” is assigned to Canadian programming. For example, in the 2010 Bortz survey, 156 of the 163 responding systems did not carry a Canadian signal, and 135 of the 136 responding systems in the 2010 Horowitz survey did not carry a Canadian signal; these systems did not evaluate Canadian programming and yet are treated as if they did and assigned it zero value. If this was in fact the practice, it would substantially disadvantage the Canadian claimants, as the overall (average) value for Canadian programming is based mostly on zeros, yet these zeros are primarily entered for systems that were not asked to evaluate Canadian programming.

To put this in slightly different terms, far more systems can assign large values to Movies, Sports, Syndicated Shows, News, and Devotional programs than can assign large

values to Canadian Programming (and presumably to PTV). In effect, this caps the possible maximum value for Canadian Programming (and presumably PTV) at a much lower level than the other categories. To take the most extreme case, only about 3% of all participating systems could possibly assign 100% value to Canadian Programming but all of the participating systems can potentially assign 100% value to Movies, Sports, Syndicated Shows, News or Devotional programs. This will result in much smaller overall values for Canadian programming (and presumably PTV) than the other categories as the maximum overall value that can be attributed to Canadian programming is substantially lower than the maximum value that can be attributed to the other programming categories. This is not an inherent risk in the use of constant sum questions but a result of how the constant sum question combines with the particular sampling approach. In practice, the results these surveys produce for Canadian programming are driven more by the number of responding systems with Canadian signals than by any assessment of the value of the programming on those Canadian signals.
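The cap on the attainable overall average follows directly from the 2010 Bortz respondent counts reported earlier; the arithmetic below is a simple upper bound, assuming (as described in the text) that zeros are entered for systems not carrying the signal:

```python
n_respondents = 163      # all responding systems, 2010 Bortz survey
n_with_canadian = 7      # responding systems carrying a distant Canadian signal

# Even if every carrying system assigned 100% to Canadian programming,
# with zeros entered for the rest, the overall average cannot exceed:
cap_canadian = 100.0 * n_with_canadian / n_respondents  # ≈ 4.3%

# Movies, Sports, etc. can in principle be valued by every system,
# so their overall average is capped only at:
cap_other = 100.0  # %
```

That is, under this design no conceivable pattern of answers could yield an overall Canadian value above roughly 4%, while every other category could in principle reach 100%.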

My second concern with the implementation of the constant sum approach in the Bortz and Horowitz surveys is that the particular categories of programming to which respondents are asked to assign value are not all comparable and are likely to differ in the effort required to assign them value. Because the constant sum task is designed to elicit relative judgments, i.e., valuation of each programming category with respect to all of the other programming categories, the categories are assumed to be comparable. Yet two of the categories, “PBS and all other programming broadcast by U.S. non-commercial stations” and “All programming broadcast by Canadian stations,” are different from the other five (Bortz) or six (Horowitz) categories and so may not be directly comparable. This was acknowledged by the CRB, which refers to “the basic difficulty that stems from asking cable operators to compare five different categories of programming with two types of distant signals.”12

12 2004-2005 Phase I Cable Distribution, 75 Fed. Reg. at 57067.

The PTV and Canadian categories – the “distant signals” referred to by the CRB – each organize programming from different genres under one heading with the only common thread being a property such as retransmission on an educational/non-commercial or Canadian signal. A key attribute of category instances in everyday life – “natural categories” – is that they are more similar to each other than to instances from outside the category.13 This is certainly the case for categories in the Bortz and Horowitz surveys defined in terms of content: Movies, Sports, News, Devotional, and Syndicated programming (what the CRB calls “categories of programming”). For example, the movie “Star Wars” is more similar to the movie “The Godfather” than it is to “Local 4 News at 5” on Detroit’s WDIV-TV. But this does not seem to be the case for PTV or Canadian programming. For example, “Busytown Mysteries,” an animated children’s show carried on distant Canadian signals, is not very similar to other instances of Canadian programming such as “Steven & Chris,” a Canadian “lifestyle show” whose two hosts were an openly gay married couple, or “Coronation Street,” a British soap opera, also carried on distant Canadian signals.14 It is almost certainly more similar to other animated children’s shows (e.g., “SpongeBob”). It seems unlikely to me that asking about “Canadian programming” will bring to mind the kind of programming that the survey designers assume it will.

Psychological research on categorization suggests that Canadian programming and PTV are “unnatural categories” in that they cut across the categories into which people spontaneously classify objects in the world, which in the case of television programs would likely be programming content, such as movies and news. This implies that respondents are unlikely to have classified the programs that could legitimately be instances of PTV or Canadian programming as members of those categories, instead

13 Rosch, E., Mervis, C. B., Gray, W. D., Johnson, D. M., & Boyes-Braem, P. (1976). Basic objects in natural categories. Cognitive psychology, 8(3), 382-439.

14 These instances of Canadian programming are among the examples of the category provided to respondents in the Horowitz survey.

classifying them in terms of their content (e.g., children’s shows, movies, sports). As a result, the kinds of programs that the Bortz and Horowitz surveys consider instances of PTV and Canadian programming will not readily come to mind for respondents, at least compared to instances of more natural categories.15 This can potentially lead to undervaluation of PTV and Canadian programming. Even those programs that do come to mind are likely to fit more naturally into content-based categories (e.g., movies, sports, news) and so respondents may well consider them primarily when assigning values to content-based categories.

One way to determine whether or not the programming categories match the way respondents think about programming content would have been to pretest the questionnaires. A widely used pretesting technique is “cognitive interviewing,” in which participants think out loud while answering questions in a draft of the questionnaire.16 This technique, which is used in virtually all US government statistical agencies in developing their survey questionnaires, is ideally suited to testing the programming categories in the Bortz and Horowitz surveys. As far as I can tell, no pretesting was done, at least none that was mentioned in recent testimony. I believe that had the categories been pretested, this would have shown that respondents had more difficulty recalling PTV and Canadian programming than Movies, News, Sports, Devotional, and Syndicated programming. This kind of pretesting would also have helped clarify how respondents arrive at their judgments of relative value. Understanding this process and how it might be affected by the particular categories could have led to revised question wording, which could in turn have led to more accurate estimates.

15 Conrad, F. G., Brown, N. R., & Dashen, M. (2003). Estimating the frequency of events from unnatural categories. Memory & cognition, 31(4), 552-562.

16 Beatty, P. C., & Willis, G. B. (2007). Research synthesis: The practice of cognitive interviewing. Public Opinion Quarterly, 71(2), 287-311.

One of the ways in which the Horowitz survey was purported to improve upon the Bortz survey was by including examples with each programming category. As Horowitz says:

[W]e clarified the definition of the content of each program category in the Horowitz Survey by providing examples of representative programs for the categories. This is particularly important to insure that respondents reasoned well about the distinction among various categories instead of reflexively assigning values to the wrong categories. For example, a respondent could confuse non- network sports content with network sports content (which is not compensable in these proceedings). Another example would be a respondent confusing Other Sports content, which falls in the Program Suppliers category, with Live Coverage of Professional and College Team Sports, which falls in the Joint Sports Claimants category. We think the Horowitz Survey’s use of proper categorization, relevant and probing introductory questioning, and incorporation of programming examples makes for a major improvement over the Bortz Survey.17

But examples used to clarify the categories in survey questions are not uniformly helpful. Tourangeau and his colleagues demonstrated that examples can improve respondents’ recall of category instances to the extent that they expand the set of instances respondents consider beyond those they would already have considered.18 Atypical examples are particularly effective because, unlike typical examples, which would likely be redundant with respondents’ current thinking, atypical examples differ from what respondents already had in mind and thus expand the set of instances that define the category. Examples that are typical of the category do not help because they do not expand the set respondents were already considering. Trautman raises a related concern about Horowitz’s use of examples: “We believe the use of examples is inappropriate in

17 Horowitz Testimony, p.6.

18 Tourangeau, R., Conrad, F. G., Couper, M. P., & Ye, C. (2014). The effects of providing examples in survey questions. Public opinion quarterly, 78(1), 100-125.

that it necessarily excludes programming types not included as examples.”19 While the Tourangeau study does not demonstrate this phenomenon exactly, I concur with Trautman’s intuition. Again, pretesting would have revealed the extent to which examples help, have no impact, or impede recall, but this was not done.

The examples presented by Horowitz also bolster my earlier point about the use of unnatural categories. The examples presented with the six content-based categories may well facilitate recall of particular programs for consideration in assigning value to the category. Consider Movies:

Movies such as feature films, Movies of the Week, and specials broadcast on (STATION(S)). Examples include movies such as Pirates of the Caribbean: The Curse of the Black Pearl, Crash, Ghostbusters, The Matrix, X-Men: The Last Stand, Signs, and Girl, Interrupted. [Gladiator, The Lord Of The Rings: The Return of the King, and Home Alone 2: Lost In New York for WGN Only systems.].

When presented with this wording, it seems likely that other movies, e.g., “Star Wars” or “Avatar” will come to mind. The category, Movies, is coherent and its members are more similar to each other than to non-movie programming, as discussed above. But consider the effect on respondents’ thinking when they are presented with the following examples for “All programming broadcast by Canadian stations.” “Examples include Steven & Chris, The Social, Coronation Street, Busytown Mysteries, and CBC News.” It seems unlikely these examples will suggest any other instances of Canadian programming – unless one has represented programs in memory that are tagged as “Canadian,” which I believe is unlikely, even for industry experts. This is not a matter of the examples’ typicality but rather that the category is not sufficiently coherent for these programs to stimulate recall of other Canadian programs. The first example program is a “lifestyle” show; the second is a daytime talk series, the third is a British soap opera, the fourth is an

19 Trautman Testimony, p. A8.

animated children’s show, and the fifth is the CBC’s English language news service that includes many programs. The only feature these programs share is that they are broadcast on Canadian television. They do not help to define the category and, I maintain, are unlikely to bring other programs from the category to mind.

I mentioned earlier that by including systems whose only distant signal is Canadian, the Horowitz survey addressed the CRB’s concerns about the exclusion of these systems from the Bortz survey. However, in 2011-2013 none of these systems assigned 100% value to the Canadian signal – the only signal about which they were asked – suggesting there was a problem communicating the task to the respondents.20 One reason this might have occurred is that the question asks for a judgment of “relative value” even though just one signal is presented, possibly suggesting to respondents that they should compare the Canadian signal to other, unspecified signals in assigning it a value. The question wording further implies there are multiple programming categories even though there is just one (e.g., preceding the presentation of the only signal with “here they are” or instructing respondents to “write them down in the order I read them”). We cannot know for sure, but the prevalence of values well below 100% suggests that the respondents were confused by the question. Pretesting could have identified this possible confusion and led to clearer question wording.

A final note about data collection: telephone interviews may have exacerbated the recall difficulty associated with the PTV and Canadian programming categories. Interviews impose pressure to respond quickly. In ordinary conversation people tend to begin speaking within about one second of the previous speaker completing his or her speaking turn.21 This is attributed to discomfort from longer periods of silence. If respondents are

21 Two such studies are: (1) Jefferson, G (1988). Notes on a possible metric which provides for a ‘standard maximum’ silence of approximately one second in conversation. In: Roger D, Bull PP, editors. Conversation: An interdisciplinary perspective. Clevedon, UK: Multilingual Matters. pp 166–196; and (2) Roberts, F, Francis A. L. (2013). Identifying a temporal threshold of tolerance for silent gaps after requests. J Acoust Soc Am 133(6), EL471–EL477. pmid:23742442.

investing effort to recall instances of a category that do not readily come to mind, the pressure to begin one’s answer may truncate this recall process, further reducing the number of programs that come to mind. With more time, as can be the case in self-administered survey modes such as web surveys, respondents can not only think more carefully about their answers, they can also consult records, for example, program listings.

5. CONCLUSIONS

Much about the Bortz and Horowitz surveys is reasonable. However, several features of their design and analysis may have disadvantaged the estimates of value for Canadian programming carried on distant Canadian signals.

First, only a small subset of eligible cable systems carries a Canadian signal, so the estimates for this programming category are based on a much smaller number of observations than the estimates for the other categories (with the possible exception of PTV programming), making them more vulnerable to extreme values.

Second, the constant sum task requires relative judgments, i.e., the percent of value attributed to each category compared to all the others. The Bortz and Horowitz surveys combine different versions of the constant sum question (those where there are no Canadian or educational signals, those with one signal but not the other, and those with both signals), which changes the number of categories in the constant sum question. Because of the small percentage of participating systems that carry a Canadian signal, the maximum overall value that can be attributed to this category of programming is capped at a much lower level than the maximum overall value that can be attributed to the other programming categories, which are carried by most if not all of the participating cable systems. This seems to be exacerbated in the analysis by treating what should be missing values for

Canadian programming for systems that do not carry a Canadian signal as having zero value.

Finally, the Canadian programming and PTV categories are at risk of being undervalued because they are unnatural categories, organized by a shared property rather than content. The result is that it is likely to be harder to recall instances of Canadian programming and PTV programming than Movies, Sports, News, Syndicated shows and Devotional programs, leading respondents to overlook, i.e., fail to retrieve, relevant programs in the PTV and Canadian categories. Considering that adjustments made to royalty payments in previous years based on statistical modeling of signal selection by cable operators – which is not prone to the same kinds of memory and categorization error as are the survey data – have led to increased payments to PTV and Canadian claimants, I believe the way this programming is classified may well have downwardly biased the estimates of their value.

In sum, I conclude the data collected in these surveys should not be relied upon in assessing the relative value of CCG programming.


Appendix A


September 2017

Frederick G. Conrad

Institute for Social Research
University of Michigan
Room 4009, P.O. Box 1248
Ann Arbor, MI 48106
Phone: 734-936-1019

[email protected]

Education

Ph.D., University of Chicago, 1986. Cognitive Psychology
B.A., Hampshire College, 1977. Cognitive Science

Current Research

New technologies in survey data collection including interactive web surveys, virtual interviewers, and mobile devices; correspondence between analyses of social media and survey data; interaction between interviewers and respondents; interviewing techniques

Professional Employment

2012 – present: Director, Michigan Program in Survey Methodology, University of Michigan

2014 – present: Professor, Psychology, University of Michigan

2012 – 2015: Director, Joint Program in Survey Methodology, University of Maryland

2011 – present: Research Professor, Institute for Social Research, University of Michigan; Research Professor, Joint Program in Survey Methodology, University of Maryland

2006 – 2011: Research Associate Professor, Institute for Social Research, University of Michigan; Adjunct Associate Professor of Psychology, University of Michigan; Research Associate Professor, Joint Program in Survey Methodology, University of Maryland


2002-2006: Associate Research Scientist, Institute for Social Research, University of Michigan and Research Associate Professor, Joint Program in Survey Methodology, University of Maryland

1991 - 2002: Research/Senior Research Psychologist, Bureau of Labor Statistics.

1989 – 1991: Principal Software Engineer, Artificial Intelligence Research Group, Digital Equipment Corporation.

1986 – 1989: Post-doctoral Research Associate, Intelligent Tutoring Laboratory, Department of Psychology, Carnegie-Mellon University.

Visiting and Adjunct Appointments, Consulting, Graduate Assistantships

2009 – 2011: Consultant, Center for AIDS Prevention Studies, University of California San Francisco.

2007 – 2011: Consultant, Bureau of Labor Statistics, Washington, DC.

June 2000: Visiting Scholar, Department of Research Methodology, Vrije Universiteit Amsterdam (laboratory of Dr. Wil Dijkstra).

1998 to 2002: Adjunct Assistant Professor, Joint Program for Survey Methodology, University of Maryland.

1998: Adjunct Associate Professor, Department of Psychology, George Mason University.

July, 1998: Instructor, Swiss Summer School, Swiss National Science Foundation, L'Università della Svizzera Italiana

1991 – 1995: Occasional Consultant, Survey Research Center, University of Maryland.

1991: Consultant, Center for Survey Research, University of Massachusetts at Boston.

1985 – 1986: Research Coordinator, Project on Estimation and Survey Research, University of Chicago.

1979 – 1983: Research Assistant to Lance Rips, University of Chicago.

1981 – 1986: Graduate Teaching Assistant, University of Chicago.


1977 – 1978: Research Assistant to Edward Smith, Psychology Department, Stanford University, and Summer, 1977, Psychology Department, Rockefeller University.

Grants and Awards

7/13-4/18 “Addressing Acquiescence: Reducing survey error to promote Latino Health.” National Cancer Institute Grant # 1 R01 CA172283-01A1. Principle investigator: Rachel Davis, University of South Carolina. 1/13-12/15 “Decomposing interviewer variance in standardized and conversational interviewing,” National Science Foundation Grant # SES1324689. Principle Investigator: Brady West, University of Michigan. 5/13 Warren J. Mitovsky Innovators Award, American Association for Public Opinion Research (with Michael Schober) 10/10 – 9/13 “Collaborative Research: Responding to Surveys on Mobile, Multimodal Devices,” National Science Foundation Grant # SES 1026225. Principal Investigator: Frederick Conrad; simultaneous award to New School for Social Research, Michael Schober, PI. 07/09 – 06/11 “Risk Communication for Environmental Exposure,” National Institute of Environmental Health Sciences Grant # R01ES016306. Principal investigator, Edith Parker. 09/08 – 08/09 “Collaborative Research: Acoustic Properties, Listener Perceptions, and Outcomes of Interactions between Survey Interviewers and Sample Persons,” National Science Foundation grant SES-0819734. Principle investigator, Frederick Conrad (original PI, Robert Groves, University of Michigan); parallel award to Jose Benki, Michigan State University. 05/07 – 04/12 “Improving the Design of Health Surveys on the Web,” National Institutes of Health grant # R01 HD041386-04A1, Principal investigator Roger Tourangeau, University of Michigan. 03/07 – 02/10 “Disability, Time Use, and Well-being Among Middle-Aged and Older Married Couples,” National Institutes of Health grant # P01 AG029409-01. Principal investigator Vicki Friedman, University of Medicine and Dentistry of New Jersey School of Public Health. 
05/06 – 08/06 Rackham Graduate School (University of Michigan) Spring/Summer Fellowship for Support of a Doctoral Student.

10/05 – 09/08 “Animated Agents in Self-Administered Surveys,” National Science Foundation grant SES 0551300. Principal Investigator; co-PI Michael Schober, New School for Social Research.

04/05 – 08/05 “Experiments to Understand How Americans React to New Election Procedures,” with Michael Hanmer, Georgetown University, and Michael Traugott, University of Michigan. A module in a survey administered by “Time-sharing Experiments in the Social Sciences,” NSF Grant 0094964, Diana C. Mutz and Arthur Lupia, Principal Investigators.

Exhibit CCG-3 (Conrad), Page 23

03/05 – 02/06 “Envisioning the Survey Interview of the Future,” a workshop to foster synergy between survey methodologists and communication technologists. National Science Foundation grant SES-0454832. Principal Investigator, with Michael Schober, New School for Social Research (Co-PI). Supplemental award made to organize follow-up workshop in the United Kingdom in 2007.

01/04 – 12/08 “Informed consent and perceptions of risk and harm in survey participation,” National Institute of Child Health and Human Development, National Institutes of Health. With Eleanor Singer (PI), Mick Couper and Robert Groves, all of the University of Michigan.

06/01/03 – 05/31/06 “Visual and Interactive Issues in the Design of Web Surveys,” National Institute of Child Health and Human Development, National Institutes of Health, grant # R01 HD041386-01A1. Roger Tourangeau (PI) and Mick Couper (both at University of Michigan).

06/01/03 – 05/31/06 “An Assessment of Voting Technology and Ballot Design,” National Science Foundation grant IIS0306698. Paul Herrnson (PI, University of Maryland), Ben Bederson (University of Maryland), Richard Niemi (University of Rochester), Mike Traugott (University of Michigan).

2001-2004 “Visual and interactive features of web surveys,” National Science Foundation grant SES0106222. Co-Principal Investigator with Roger Tourangeau (PI), University of Michigan, Mick Couper (Co-PI), University of Michigan, and Reginald Baker (Co-PI), MS-Interactive.

2001 United States Department of Labor Secretary’s Exceptional Achievement Award.

2000-2003 “Adaptive interfaces for collecting survey data from users,” National Science Foundation grant IIS-0081550. Co-Principal Investigator with Michael Schober (PI), New School University.
1999-2001 “The cognitive basis of seam effects in panel surveys,” National Science Foundation grant SES-99-07414. Government Partner with Lance Rips (PI), Northwestern University.

1998-2000 “Costs and benefits of conversational survey interviewing,” National Science Foundation grant SBR-97-0140. Government Partner with Michael Schober (PI), New School for Social Research.

1998 Bureau of Labor Statistics Award for Eminent Achievement.

1997 United States Department of Labor Secretary’s Exceptional Achievement Award.

1995 Annual Research Practicum, Joint Program on Survey Methodology, University of Maryland; proposed project (on behalf of Bureau of Labor Statistics) about improving occupational classification of survey respondents by asking about their skills.

1985, 1986 Co-authored two proposals with Lance Rips, University of Chicago, to study sentence comprehension; funded by the Benton Foundation, awarded to Lance Rips.

Publications

Books

Tourangeau, R., Conrad, F.G., Couper, M.P. (2013). The Science of Web Surveys. Oxford: Oxford University Press.

Conrad, F.G. & Schober, M.F. (Eds.) (2008). Envisioning the Survey Interview of the Future. New York: Wiley & Sons.

Herrnson, P.S., Niemi, R.G., Hanmer, M.J., Bederson, B., Conrad, F.G. & Traugott, M. (2008). Voting Technology: The Not-So-Simple Act of Casting a Ballot. Brookings Institution Press.

Payne, D.G. & Conrad, F.G. (Eds.) (1997). Intersections in Basic and Applied Memory Research. Mahwah, NJ: Lawrence Erlbaum Associates, Publishers.

Journal articles and book chapters

Antoun, C., Couper, M. P., & Conrad, F. G. (2017). Effects of Mobile versus PC Web on Survey Response Quality: A Crossover Experiment in a Probability Web Panel. Public Opinion Quarterly, 81, 280-306.

Conrad, F.G., Schober, M.F., Antoun, C., Yan, H.Y., Hupp, A.L., Johnston, M., Ehlen, P., Vickers, L., Zhang, C. (2017). Respondent mode choice in a smartphone survey. Public Opinion Quarterly, 81, 307-337.

Conrad, F.G., Tourangeau, R., Couper, M. P., & Zhang, C. (2017). Reducing speeding in web surveys by providing immediate feedback. Survey Research Methods, 11, 45-61.

Conrad, F.G., Schober, M.F., Hupp, A.L., Antoun, C., & Yan, H.Y. (2017). Text interviews on mobile devices. In P.P. Biemer, E. de Leeuw, S. Eckman, B. Edwards, F. Kreuter, L.E. Lyberg, C. Tucker, & B.T. West (Eds.), Total survey error in practice (pp. 299-318). Hoboken, NJ: John Wiley & Sons, Inc.

Mittereder, F., Durow, J., West, B.T., Kreuter, F. & Conrad, F.G. (2016, online). Interviewer-Respondent Interactions in Conversational and Standardized Interviewing. Field Methods.

West, B. T., Conrad, F. G., Kreuter, F., & Mittereder, F. (2016, online). Can conversational interviewing improve survey response quality without increasing interviewer effects? Journal of the Royal Statistical Society: Series A.

Zhang, C. & Conrad, F.G. (2017). Intervention as a strategy to reduce satisficing behaviors in web surveys: Evidence from two experiments on how it works. Social Science Computer Review. Available online: http://journals.sagepub.com/doi/full/10.1177/0894439316683923

Conrad, F.G., Couper, M.P., & Sakshaug, J. W. (2016). Classifying open-ended reports: Coding occupation in the current population survey. Journal of Official Statistics, 32, 75-92.

Horwitz, R., Kreuter, F., Conrad, F.G. (2016). Using mouse movements to predict web survey response difficulty. Social Science Computer Review. DOI: 10.1177/0894439315626360

Liu, M., & Conrad, F. G. (2016). An experiment testing six formats of 101-point rating scales. Computers in Human Behavior, 55, 364-371.

Liu, M., Conrad, F. G., & Lee, S. (2016). Comparing acquiescent and extreme response styles in face-to-face and web surveys. Quality & Quantity, 1-18.

Schober, M.F., Pasek, J., Guggenheim, L., Lampe, C., & Conrad, F.G. (2016). Research Synthesis: Social media analyses for social measurement. Public Opinion Quarterly, 80(1), 180-211. doi:10.1093/poq/nfv048

Antoun, C., Zhang, C., Conrad, F.G., & Schober, M.F. (2015). Comparisons of online recruitment strategies for convenience samples: Craigslist, Google AdWords, Facebook and Amazon Mechanical Turk. Field Methods. DOI: 10.1177/1525822X15603149

Conrad, F.G., Schober, M.F., Jans, M., Orlowski, R.A., Nielsen, D., & Levenstein, R. (2015). Comprehension and engagement in survey interviews with virtual agents. Frontiers in Psychology: Cognitive Science, 6:1578. doi: 10.3389/fpsyg.2015.01578

Liu, M., Lee, S. & Conrad, F.G. (2015). Comparing extreme response styles between agree-disagree and item specific scales. Public Opinion Quarterly, 79(4), 952-975.

Schober, M.F., Conrad, F.G., Antoun, C., Ehlen, P., Fail, S., Hupp, A.L., Johnston, M., Vickers, L., Yan, H., & Zhang, C. (2015). Precision and disclosure in text and voice interviews on smartphones. PLOS ONE 10(6): e0128337. doi:10.1371/journal.pone.0128337

Schober, M.F., & Conrad, F.G. (2015). Improving social measurement by understanding interaction in survey interviews. Policy Insights from Behavioral and Brain Sciences, 2, 211-219. doi: 10.1177/2372732215601112

Conrad, F.G., Schober, M.F., & Schwarz, N. (2014). Pragmatic processes in survey interviewing. In T. Holtgraves (Ed.), Oxford Handbook of Language and Social Psychology (420-437). Oxford: Oxford University Press.

Zhang, C., & Conrad, F.G. (2014). Speeding in Web Surveys: The tendency to answer very fast and its association with straightlining. Survey Research Methods, 8(2), 127-135.

Freedman, V.A., Conrad, F., Cornman, J., Schwarz, N., Stafford, F. (2014). Does time fly when you are having fun? A day reconstruction method analysis. Journal of Happiness Studies, 15, 639-655.

Tourangeau, R., Conrad, F.G., Couper, M.P., and Ye, C. (2014). The effects of providing examples in survey questions. Public Opinion Quarterly, 78, 100-125.

Conrad, F., Broome, J., Benkí, J., Kreuter, F., Groves, R., Vannette, D., & McClain, C. (2013). Interviewer speech and the success of survey invitations. Journal of the Royal Statistical Society: Series A, 176, part 1, 191-210.

Couper, M.P., Tourangeau, R., Conrad, F.G. & Zhang, C. (2013). The design of grids in web surveys. Social Science Computer Review, 31, 322-345.

Lind, L. H., Schober, M.F., Conrad, F.G. and Reichert, H. (2013). Why do survey respondents disclose more when computers ask the questions? Public Opinion Quarterly, 77, 888-935.

Tourangeau, R., Couper, M. P., & Conrad, F. G. (2013). “Up means good”: The effect of screen position on evaluative ratings in web surveys. Public Opinion Quarterly, 77, 69-88.

Schober, M.F., Conrad, F.G, Dijkstra, W., & Ongena, Y. (2012). Disfluencies and gaze aversion in unreliable responses to survey questions. Journal of Official Statistics, 28, 555-582.

Traugott, M. W. and Conrad, F.G. (2012). Confidence in the electoral system: Why we do auditing. In Alvarez, R. M., Atkeson, L.R. & Hall, T.E. (Eds.) Confirming elections: Creating confidence and integrity through election auditing (41-56). New York: Palgrave MacMillan.

Freedman, V. A., Stafford, F., Schwarz, N., and Conrad, F. (2012). Measuring time use of older couples: Lessons from the panel study of income dynamics. Field Methods, 25, 405-422.

Brown, N. R., Hansen, T. G. B., Lee, P. J., Vanderveen, S. A., & Conrad, F. G. (2012). Historically-defined autobiographical periods: Their origins and implications. In D. Berntsen & D. Rubin (Eds.), Understanding autobiographical memories: Theories and approaches (pp. 160-180). Cambridge: Cambridge University Press.

Freedman, V.A., Stafford, F., Conrad, F. & Schwarz, N. (2012). Time together: An assessment of diary quality for older couples. Annals of Statistics and Economics, 105-106, 271-289.

Freedman, V. A., Stafford, F., Schwarz, N., and Conrad, F. (2012). Disability, participation, and subjective wellbeing among older couples. Social Science & Medicine, 74, 588-596.

Blair, J. & Conrad, F.G. (2011). Sample size for cognitive interview pretesting. Public Opinion Quarterly, 75, 636-658.

Conrad, F.G. (2011). Response 2 to Miller’s Chapter: Cognitive Interviewing. In Madans, J., Miller K., Maitland, A., and Willis, G. (Eds.). Question Evaluation Methods (pp. 93-102). Hoboken, NJ: John Wiley and Sons.

Yan, T., Conrad, F.G., Couper, M.P. & Tourangeau, R. (2011). Should I stay or should I go: The effects of progress feedback, promised task duration, and length of questionnaire on completing web surveys. International Journal of Public Opinion Research, 23, 131-147.

Couper, M., Kennedy, C., Conrad, F. & Tourangeau, R. (2011). Designing input fields for non-narrative open-ended responses in web surveys. Journal of Official Statistics, 27, 65-85.

Houle, C., Joseph, L.M., Caldwell, C.H., Conrad, F.G., Parker, E.A. (2011). Congruence between urban adolescent and caregiver responses to questions about the adolescent’s asthma. Journal of Urban Health, 88, 30-40.

Conrad, F. G., Couper, M. P., Tourangeau, R. & Peytchev, A. (2010). Impact of progress indicators on task completion. Interacting with Computers, 22, 417–427.

Couper, M.P., Singer, E., Conrad, F.G., Groves, R. M. (2010). Experimental studies of disclosure risk, disclosure harm, topic sensitivity, and survey participation. Journal of Official Statistics, 26, 287-300.

Hanmer, M.J., Park, W-H., Traugott, M.W., Niemi, R. G., Herrnson, P. S., Bederson, B. B., Conrad, F. G. (2010). Losing Fewer Votes: The Impact of Changing Voting Systems on Residual Votes. Political Research Quarterly, 63, 129-143.

Peytchev, A., Conrad, F.G., Couper, M.P & Tourangeau, R. (2010). Increasing respondents’ use of definitions in web surveys. Journal of Official Statistics, 26, 630-350.

Brown, N., Lee, P., Krslak, M., Conrad, F., Hansen, T., Havelka, J., Reddon, J. (2009). Autobiographical memory, war, terrorism. Psychological Science, 20, 399-405.

Conrad, F. G. and Blair, J. (2009). Sources of error in cognitive interviews. Public Opinion Quarterly, 73, 32-55.

Conrad, F.G., Rips, L.J. & Fricker, S.S. (2009). Seam effects in quantitative responses. Journal of Official Statistics, 25, 339-361.

Conrad, F.G., Bederson, B. B., Lewis, B., Traugott, M. W., Hanmer, M. J., Herrnson, P. S., Niemi, R. G. & Peytcheva, E. (2009). Electronic voting eliminates hanging chads but introduces new usability challenges. International Journal of Human-Computer Studies, 67, 111-124.

Galesic, M., Tourangeau, R., Couper, M.P., & Conrad, F.G. (2009). Eye-tracking data: New insights on response order effects and other cognitive shortcuts in survey responding. Public Opinion Quarterly, 72, 892-913.

Herrnson, P.S., Niemi, R.G., Hanmer, M. J., Francia, P.J., Bederson, B.B., Conrad, F.G., and Traugott, M.W. (2008). Voter reactions to electronic voting systems: Results from a usability field test. American Politics Research, 36, 580-611.

Conrad, F.G. & Schober, M.F. (2008). New frontiers in standardized survey interviewing. In Hesse-Biber, S.N. & Leavy, P. (Eds.), Handbook of Emergent Methods in Social Research (pp. 173-188). New York, NY: Guilford Publications.

Couper, M. P., Singer, E., Conrad, F., & Groves, R.M. (2008). Risk of disclosure, perceptions of risk, and concerns about privacy and confidentiality as factors in survey participation. Journal of Official Statistics, 24, 255-275.

Conrad, F.G., Schober, M. F. & Dijkstra, W. (2008). Cues of communication difficulty in telephone interviews. In Lepkowski, J.M., Tucker, C., Brick, M., de Leeuw, E., Japec, L., Lavrakas, P., Link, M. & Sangster, R. (Eds). Advances in telephone survey methodology (pp. 212-230). New York: Wiley.

Schober, M.F &. Conrad, F.G (2008). Survey interviews and new communication technologies. In Conrad, F.G. & Schober, M.F. (Eds.) Envisioning the Survey Interview of the Future. New York: Wiley & Sons.

Conrad, F.G., Schober, M. F., & Coiner, T. (2007). Bringing features of human dialogue to web surveys. Applied Cognitive Psychology, 21, 165-188.

Couper, M.P., Conrad, F.G. & Tourangeau, R. (2007). Visual context effects in web surveys. Public Opinion Quarterly, 71, 91-112.

Ehlen, P., Schober, M.F. & Conrad, F.G. (2007). Modeling speech disfluency to predict conceptual misalignment in speech survey systems. Discourse Processes, 44(3), 245-266.

Tonn, B. & Conrad, F.G. (2007). Thinking about the future: A psychological analysis. Social Behavior and Personality, 35, 889-902.

Tourangeau, R., Couper, M.P., & Conrad, F.G. (2007). Color, labels and interpretive heuristics for response scales. Public Opinion Quarterly, 71, 91-112.

Conrad, F.G., Couper, M.P., Tourangeau, R. & Peytchev, A. (2006). Use and non-use of clarification features in web surveys. Journal of Official Statistics, 22, 245-269.

Tourangeau, R., Conrad, F.G., Arens, Z., Fricker, S., Lee, S. & Smith, E. (2006). Everyday concepts and classification errors: Judgments of disability and residence. Journal of Official Statistics, 22, 385-418.

Couper, M. P., Tourangeau, R., Conrad, F.G. & Singer, E. (2006). Evaluating the effectiveness of visual analog scales: A web experiment. Social Science Computer Review, 24, 227-245.

Tonn, B., Conrad, F. & Hemrick, A. (2006). Cognitive representations of the future: Survey results. Futurist, 38, 810-829.

Conrad, F. G. & Schober, M.F. (2005). Promoting uniform question understanding in today’s and tomorrow’s surveys. Journal of Official Statistics, 21, 215-231.

Conrad, F.G. (2005). Standardized interviewing and alternatives. In Best, S. & Radcliff, B. (Eds.), Polling America: An Encyclopedia of Public Opinion, Volume 2. Portsmouth, NH: Greenwood Press, pp. 774-777.

Herrnson, P. S., Abbe, O.G., Francia, P.L., Bederson, B. B., Lee, B., Sherman, R.M., Conrad, F., Niemi, R. G. & Traugott, M. (2005). Early appraisals of electronic voting. Social Science Computer Review, 23, 274-292.

Conrad, F.G. & Blair, J. (2004). Aspects of data quality in cognitive interviews: The case of verbal reports. In S. Presser, J. Rothgeb, M. Couper, J. Lessler, E. Martin, J. Martin & E. Singer (Eds.) Questionnaire Development, Evaluation and Testing Methods. New York: John Wiley and Sons, pp. 67-88.

Couper, M. P., Tourangeau, R., Conrad, F. & Crawford, S. (2004). What they see is what we get: Response options for web surveys. Social Science Computer Review, 22, 111-127.

Schober, M.F., Conrad, F.G. and Fricker, S.S. (2004). Misunderstanding standardized language in research interviews. Applied Cognitive Psychology, 18, 169-188.

Tourangeau, R., Couper, M. P., Conrad, F. G. (2004). Spacing, position, and order: interpretive heuristics for visual features of survey questions. Public Opinion Quarterly, 68, 368-393.

Conrad, F.G., Brown, N.R. and Dashen, M. (2003). Estimating the frequency of events from unnatural categories. Memory and Cognition, 31, 552-562.

Rips, L. J., Conrad, F.G. & Fricker, S. S. (2003). Straightening the seam effect in panel surveys. Public Opinion Quarterly, 67, 522-554.

Tonn, B. and Conrad, F. (2003). A technique for characterizing the attributes of hard-to-price products. Field Methods, 15, 202-217.

Schober, M.F. & Conrad, F.G. (2002). A collaborative view of standardized survey interviews. In D. Maynard, H. Houtkoop-Steenstra, N.C. Schaeffer & J. van der Zouwen (Eds.), Standardization and Tacit Knowledge: Interaction and Practice in the Survey Interview. New York: John Wiley and Sons, pp. 67-94.

Bosley, J. J., & Conrad, F.G. (2001). Usability testing of data access tools. In Smith, M.J., Salvendy, G., Harris, D. and Koubek, R.J. (Eds.), Usability Evaluation and Interface Design: Cognitive Engineering, Intelligent Agents and Virtual Reality. Mahwah, NJ: Lawrence Erlbaum Associates, Publishers, pp. 978-982.

Conrad, F.G. (2001). Review of “The Science of Self-Report: Implications for Research and Practice.” Journal of Official Statistics, 17, 436-439.

Conrad, F.G. & Schober, M.F. (2000). Clarifying question meaning in a household telephone survey. Public Opinion Quarterly, 64, 1-28.

Conrad, F.G. (1999). Customizing survey procedures to reduce measurement error. In M.G. Sirken, D.J. Herrmann & S. Schechter, N. Schwarz, J. Tanur & R. Tourangeau (Eds.), Cognition and Survey Research. New York: John Wiley and Sons, pp. 301-317.

Bosley, J., Conrad, F.G. & Uglow, D.A. (1998). Pen CASIC: Design and usability. In M. Couper, R. Baker, J. Bethlehem, C. Clark, J. Martin, W. Nicholls, & J. O’Reilly (Eds.), Computer Assisted Survey Information Collection. New York: John Wiley & Sons, pp. 521-541.

Conrad, F.G., Brown, N. R. & Cashman, E. R. (1998). Strategies for estimating behavioural frequency in survey interviews. Memory, 6, 339-366.

Conrad, F.G. (1997). Using expert systems to model and improve survey classification processes. In L. Lyberg, P. Biemer, M. Collins, E. DeLeeuw, C. Dippo, N. Schwarz & D. Trewin (Eds.), Survey Measurement and Process Quality. New York: John Wiley & Sons, pp. 393-414.

Conrad, F.G. (1997). Measuring consumption and consuming measurement: The challenge of studying consumers from a federal perspective. In Merrie Brucks and Deborah J. MacInnis, eds., Advances in Consumer Research, Vol. 24, Provo, UT: Association for Consumer Research, pp. 330-332.

Payne, D.G., Conrad, F.G. & Hager, D.R. (1997). Basic and applied memory research: Empirical, theoretical and metatheoretical issues. In D.G. Payne & F.G. Conrad (Eds.), Intersections in Basic and Applied Memory Research. Mahwah, NJ: Lawrence Erlbaum Associates, Publishers, pp. 45-68.

Schober, M.F. & Conrad, F.G. (1997). Does conversational interviewing reduce survey measurement error? Public Opinion Quarterly, 61, 576-602. Reprinted in N.G. Fielding (Ed.), (2003), Interviewing, Vol. 1 (SAGE Benchmarks in Social Science Research Series). London, UK/Thousand Oaks, CA: Sage Publications.

Conrad, F.G. & Brown, N.R. (1996). Estimating frequency: A multiple strategy perspective. In Herrmann, D.J., McEvoy, C., Hertzog, C., Hertel, P. and Johnson, M.K. (Eds.) Basic and Applied Memory Research: Practical Applications, Volume 2. Mahwah, NJ: Lawrence Erlbaum Associates, Publishers, pp. 166-179.

Conrad, F.G. and Tucker, N.C. (1996, March). How has cognitive psychology affected survey methodology? Amstat News, pp. 29-31.

Conrad, F.G. and Tucker, N.C. (1996, January). What does cognitive psychology offer survey methodology? Amstat News, pp. 27-29.

Levi, M.D. & Conrad, F.G. (1996, July and August). A heuristic evaluation of a World Wide Web prototype. interactions, 3, 50-61.

Anderson, J.R., Conrad, F.G. & Corbett, A.T. (1993). The LISP tutor and skill acquisition. In Anderson, J.R. Rules of the Mind. Hillsdale, NJ: Lawrence Erlbaum, Publishers, pp. 143-164.

Anderson, J.R., Conrad, F.G., Corbett, A.T., Fincham, J.M., Hoffman, D., & Wu, Q. (1993). Computer programming and transfer. In Anderson, J.R. Rules of the Mind. Hillsdale, NJ: Lawrence Erlbaum, Publishers, pp. 205-233.

Rips, L.J. & Conrad, F.G. (1990). Parts of activities: A response to Fellbaum and Miller. Psychological Review, 97, 572-575.

Rips, L.J. & Conrad, F.G. (1989). Folk psychology of mental activities. Psychological Review, 96, 187-207.

Anderson, J.R., Conrad, F.G. & Corbett, A.T. (1989). Skill acquisition and the LISP tutor. Cognitive Science, 13, 467-505.

Conrad, F.G. & Rips, L. J. (1986). Conceptual combination and the given/new distinction. Journal of Memory and Language, 25, 255-278.

Rips, L.J. & Conrad, F.G. (1983). Individual differences in deduction. Cognition and Brain Theory, 6, 259-285.

Manuscripts in Preparation or Under Review

Conrad, F.G., Antoun, C., Hupp, A. L., Yan, H.Y. & Schober, M.F. (in preparation). Efficiency of text message survey interviews.

Conrad, F.G., Corey, J., Goldstein, S., Ostrow, J, & Sadowsky, M. (conditionally accepted). Extreme re-listening: Songs people love... and continue to love. Psychology of Music.

Hubbard, F.A., Antoun, C. & Conrad, F.G. (under review). Two Long Standing Questions About Conversational Interviews: What Kinds of Questions Can Be Asked and Who Is Best Suited to Ask Them?

Sun, H., Conrad, F.G., & Kreuter, F. (under review). The impact of interviewer-respondent rapport on data quality: Disclosure of sensitive information and item nonresponse.

Sun, H., Conrad, F.G., & Kreuter, F. (in preparation). Influence of Preceding Interviewer-Respondent Interaction on Responses in Audio Computer-assisted Self-interviewing (ACASI).

Traugott, M. & Conrad, F.G. (being revised). Public attitudes about electronic voting: The impact of concerns about technology and personal attributes.

West, B.T., Conrad, F.G., Kreuter, F. & Mittereder, F. (conditionally accepted). Nonresponse and measurement error variance among interviewers in standardized and conversational interviewing. Journal of Survey Statistics and Methodology.

Conference Proceedings

Johnston, M., Ehlen, P., Conrad, F.G., Schober, M.F., Antoun, C., Fail, S., Hupp, A., Vickers, L., Yan, H., Zhang, C. (2013). Spoken Dialog Systems for Automated Survey Interviewing. Proceedings of the 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL) conference.

Benkí, J., Broome, J., Conrad, F., Groves, R., & Kreuter, F. (2011). Effects of Speech Rate, Pitch, and Pausing on Survey Participation Decisions. Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association.

Conrad, F., Broome, J., Benki, J., Groves, R., Kreuter, F., & Vannette, D. (2010). To Agree or Not to Agree: Impact of interviewer speech on survey participation decisions. Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association.

Levenstein, R.M., Conrad, F.G., Blair, J., Tourangeau, R. & Maitland, A. (2007). The Effect of Probe Type on Cognitive Interview Results: A Signal Detection Analysis. Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association.

Blair, J., Conrad, F., Ackerman, A.C. & Claxton, G. (2006). The Effect of Sample Size on Cognitive Interview Findings. Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association.

Conrad, F.G., Couper, M.P. Tourangeau, R. & Galesic, M. (2005). Interactive feedback can improve the quality of responses in web surveys. Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association.

Conrad, F.G., Peytcheva, E., Traugott, M., Hanmer, M.J., Bederson, B.B., Herrnson, P.S., & Niemi, R.G. (2005). An evaluation of six electronic voting machines. Proceedings of the Usability Professionals' Association Conference, Montreal, QC.

Conrad, F. G., Couper, M. P., Tourangeau, R. & Peytchev, A. (2005). Effectiveness of progress indicators in web surveys: First impressions matter. Proceedings of SIGCHI 2005: Human Factors in Computing Systems, Portland, OR.

Ehlen, P., Schober, M.F., & Conrad, F.G. (July, 2005). Modeling speech disfluency to predict conceptual misalignment in speech survey interfaces. Proceedings of the Symposium on Dialogue Modeling and Generation, 15th Annual Meeting of the Society for Text & Discourse, Vrije Universiteit, Amsterdam.

Herrnson, P. S., Conrad, F. G., Niemi, R.G., Traugott, M. & Bederson, B. (2005). A Project to Assess Voting Technology and Ballot Design. Proceedings of the National Conference on Digital Government Research. Atlanta, GA.

Suessbrick, A., Schober, M. F. & Conrad, F. G. (2005). When do respondent misconceptions lead to survey response error? Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association.

Conrad, F., Schober, M. & Dijkstra, W. (2004). Non-verbal cues of respondents’ need for clarification in survey interviews. Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association.

Conrad, F. G., Couper, M. P., Tourangeau, R. & Peytchev, A. (2003). Effectiveness of progress indicators in web surveys: It’s what’s up front that counts. Proceedings of the Fifth International ASC conference. Chesham, UK: Association for Survey Computing, pp. 333-342.

Couper, M.P., Tourangeau, R., Conrad, F.G. (2003). The effect of images on web survey responses. Proceedings of the Fifth International ASC conference. Chesham, UK: Association for Survey Computing, pp. 343-350.

Garas, N., Blair, J. & Conrad, F. (2003). Inside the black box: Analysis of interviewer-respondent interactions in cognitive interviews. Proceedings of the Federal Committee on Statistical Methodology Research Conference.

Schober, M., Conrad, F., Ehlen, P. & Fricker, S. (2003). How web surveys differ from other kinds of user interfaces. Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association.

Schober, M.F., Conrad, F.G., Ehlen, P., Lind, L H., Coiner, T. (2003). Initiative and clarification in web-based surveys. Proceedings of American Association for Artificial Intelligence Spring Symposium: Natural Language Generation in Spoken and Written Dialogue. Menlo Park, CA: American Association for Artificial Intelligence, pp. 125-132.

Coiner, T.F., Schober, M.F., Conrad, F.G. & Ehlen, P. (2002). Improving comprehension of web survey questions by modeling users’ age. Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association.

Bosley, J. J & Conrad, F. G. (2001). Usability testing of data access tools. Proceedings of Human-Computer Interaction International Conference 2001.

Conrad, F.G. & Blair, J. (2001). Interpreting verbal reports in cognitive interviews: Probes matter. Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association.

Conrad, F.G. & Couper, M. P. (2001). Classifying open ended reports: Coding occupation in the current population survey. Proceedings of the Federal Committee on Statistical Methodology Research Conference, Friday, All Sessions, pp. 21-30.

Conrad, F.G. & Schober, M.F. (2001). Clarifying survey questions when respondents don’t know they need clarification. Proceedings of the Federal Committee on Statistical Methodology Research Conference, Thursday B Sessions, pp. 100-106.

Lind, L., Schober, M. F. & Conrad, F.G. (2001). Clarifying question meaning in a web-based survey. Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association.

Rips, L.J., Conrad, F.G. & Fricker, S. (2001). Unraveling the seam effect. Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association.

Bosley J.J. & Conrad, F.G. (2000). Usability testing of data access tools. Proceedings of the Second International Conference on Establishment Surveys. Alexandria, VA: American Statistical Association, pp. 971-980.

Conrad, F.G. (2000). Discussion of papers. In Sirken, M.G. (Ed.), Survey Research at the Intersection of Statistics and Cognitive Psychology. National Center for Health Statistics, Working Paper Series, No. 28, pp. 41-45.

Dippo, C.S., Conrad, F.G. & Gillman, D.W. (2000). Metadata and data quality. United Nations Economic Commission for Europe Statistical Metadata Work Session, Working Paper #5, Washington, DC.

Suessbrick, A., Schober, M.F. & Conrad, F.G. (2000). Different respondents interpret ordinary questions quite differently. Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association, pp. 907-912.

Schober, M. F. & Conrad, F.G. (2001). Adaptive interfaces for collecting survey data from users. Proceedings of the National Conference on Digital Government Research, pp. 92-99. Los Angeles and New York: Digital Government Research Center.

Schober, M.F., Conrad, F.G. & Bloom, J.E. (2000). Clarifying word meaning in computer-administered survey interviews. Proceedings of the Twenty-second Annual Conference of the Cognitive Science Society. Mahwah, NJ: Lawrence Erlbaum Associates, Publishers, pp. 447-452.

Conrad, F.G., Brown, N.R. & Dashen, M. (1999). Estimating the frequency of events from unnatural categories. Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association.

Conrad, F., Blair, J. & Tracy, E. (1999). Verbal reports are data! A theoretical approach to cognitive interviews. Proceedings of the Federal Committee on Statistical Methodology Research Conference, Tuesday B Sessions. Arlington, VA, pp. 11-20.

Conrad, F.G. & Schober, M.F. (1999). Conversational interviewing and data quality. Proceedings of the Federal Committee on Statistical Methodology Research Conference, Tuesday B Sessions. Arlington, VA, pp. 21-30.

Conrad, F.G. & Schober, M.F. (1999). A conversational approach to text-based computer-administered questionnaires. Proceedings of the Third International ASC conference. Chesham, UK: Association for Survey Computing, pp. 91-101.

Levi, M.D. and Conrad, F.G. (1999). Interacting with statistics: Report from a workshop at CHI 99. SIGCHI Bulletin, 31, 31-35.

Schober, M.F., Conrad, F.G. & Bloom, J.E. (1999). Enhancing collaboration in computer-administered surveys. Proceedings of American Association for Artificial Intelligence Fall Symposium: Psychological Models of Communication in Collaborative Systems. Menlo Park, CA: American Association for Artificial Intelligence, pp. 108-115.

Schober, M.F., Conrad, F.G. & Fricker, S.S. (1999). When and how should survey interviewers clarify question meaning? Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association (in press).

Conrad, F.G. & Schober, M.F. (1998). A conversational approach to computer-administered questionnaires. Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association, pp. 962-967.

Schober, M.F. & Conrad, F.G. (1998). Response accuracy when interviewers stray from standardization. Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association, pp. 940-945.

Frederickson-Mele, K., Levi, M., & Conrad, F. (1997). Evaluating web site structure: A set of techniques. Proceedings of the Usability Professionals' Association Conference, Monterey, CA, pp. 415-435.

Levi, M.D. and Conrad, F.G. (1997). Usability testing of World Wide Web sites: A workshop at CHI 97. SIGCHI Bulletin, 29, 40-43.

Schober, M.F. & Conrad, F.G. (1997). Does conversational interviewing improve survey data quality beyond the laboratory? Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association, pp. 910-915.

Conrad, F. & Blair, J. (1996). From impressions to data: Increasing the objectivity of cognitive interviews. Proceedings of the Section on Survey Research Methods, Annual Meetings of the American Statistical Association. Alexandria, VA: American Statistical Association, pp. 1-10.

Conrad, F.G. & Schober, M.F. (1996). How interviewers’ conversational flexibility affects the accuracy of survey data. Proceedings of the Annual Meetings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association, pp. 883-888.

Levi, M. & Conrad, F.G. (1996). A heuristic evaluation of a World Wide Web prototype. Proceedings of Annual Research Conference, U.S. Census Bureau. Washington, DC: Department of Commerce, pp. 681-695.

Conrad, F.G. & Brown, N.R. (1994). Strategies for estimating category frequency: Effects of abstractness and distinctiveness. Proceedings of the Annual Meetings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association, pp. 1345-1350.

Conrad, F., Kamalich, R., Longacre, J. & Barry, D. (1993). COMPASS: An expert system for reviewing commodity substitutions in the Consumer Price Index. Proceedings of the Ninth IEEE Conference on Artificial Intelligence for Applications. Los Alamitos, CA: IEEE Computer Society Press, pp. 299-305.

Conrad, F.G., Brown, N.R. & Cashman, E.R. (1993). How the memorability of events affects frequency judgments. Proceedings of the Annual Meetings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association, pp. 1058-1063.

Conrad, F. & Tonn, B. (1993). Intuitive classification of occupation. Proceedings of the International Conference on Occupational Classification, Washington, DC: Bureau of Labor Statistics, pp. 169-178.

Sander, J.E., Conrad, F.G., Mullin, P.A., & Herrmann, D.J. (1992). Cognitive modeling of the survey interview. Proceedings of the Annual Meetings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association, pp. 818-823.

Conrad, F.G. & Anderson, J.R. (1988). The process of learning LISP. Proceedings of the Tenth Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers, pp. 454-460.

Conrad, F.G. & Rips, L.J. (1981). Perceptual focus, text focus and semantic composition. In M.A. Miller, C.S. Masek, & R.A. Henrik (Eds.), Papers from the Parasession on Language and Behavior, Annual Conference of the Chicago Linguistic Society. Chicago: Chicago Linguistic Society, pp. 36-49.

Invited Presentations

Conrad, F.G. (August, 2017). Text Interviewing. Workshop on Emerging Survey Methods. National Institutes of Health, Bethesda, MD.

Conrad, F.G. (June, 2017). What we know about conversational interviewing. International Total Survey Error Workshop. Nuremberg, Germany.

Conrad, F.G. (April, 2017). Affording Participants Discretion: Interview Mode Choice in a Smartphone Survey. Neil Fest: A Celebration and Symposium in Honor of Professor Neil Stillings. Hampshire College.

Conrad, F.G. (January, 2017). Taking Stock: Twenty Years of Research on Conversational Interviewing. Keynote presentation: Groningen Symposium on Language and Social Interaction. Groningen University, Netherlands.

Conrad, F.G. (October, 2016). Affording Participants Discretion: Interview Mode Choice in a Smartphone Survey. Psychology Department Brownbag, University of Texas at El Paso.

Conrad, F.G. and Schober, M.F. (October, 2016). Taking Surveys to People's Technology: Implications for Federal Statistics and Social Science Research. Committee on National Statistics Public Seminar, National Academy of Science.

Conrad, F.G. (May, 2016). View from Academia. Panelist in round table discussion: Defining Data Science and its critical place in our world. Annual Conference of the American Association for Public Opinion Research. Austin, TX.

Conrad, F.G. (March, 2015). Voice versus SMS interviews: Collecting Survey Data with Mobile, Multimodal Devices. Invited talk at 5th Annual Grushin Sociological Conference, Russian Public Opinion Research Center, Moscow, Russia.

Conrad, F.G. (March, 2015). Can analyses of social media ever replace survey estimates? Invited talk at 5th Annual Grushin Sociological Conference, Russian Public Opinion Research Center, Moscow, Russia.

Conrad, F.G. (December, 2014). Can analyses of social media ever replace survey estimates? Invited talk at Seventh Internet Survey Methodology Workshop. Free University of Bozen-Bolzano, South Tyrol, Italy.

Conrad, F.G. (September, 2014). Collecting Survey Data with Mobile, Multimodal Devices. The 6th International Workshop on Internet Surveys and Survey Methodology. Statistics Korea, Daejeon, Republic of Korea.

Conrad, F.G. (June, 2014). Survey Design. 8th Annual Symposium New Connections: Increasing Diversity of RWJF Programming. Princeton, NJ.

Conrad, F.G. (February, 2014). Interactivity and measurement in web surveys. Invited talk at “Measuring from a Distance: The Emerging Science of Internet-Based Survey Research,” a conference sponsored by the Program in Survey Research, Harvard University.

Conrad, F.G. (October, 2013). Collecting survey data with mobile, multimodal devices. Invited research seminar at Westat, Rockville, MD.

Conrad, F.G. (October, 2012). Social and cognitive factors in new approaches to survey measurement. Cognitive Science Distinguished Alumni Lecture, Hampshire College. Amherst, MA.

Conrad, F.G. (October, 2012). Thinking about survey interviews of the future. Keynote talk at the Conference of the Council of American Survey Research Organizations. Scottsdale, AZ.

Conrad, F.G. (January, 2012). Interactive intervention to reduce satisficing in web surveys. Invited talk at Westat, Rockville, MD.

Conrad, F.G. (November, 2011). Interactive intervention to reduce satisficing in web surveys. Invited talk at Abt Associates, Bethesda, MD.

Conrad, F.G. (September, 2011). Interactive intervention to reduce satisficing in web surveys. Third international workshop on internet survey methodology, Statistics Korea, Daejeon.

Conrad, F.G. (August, 2011). Race of virtual interviewer effects. Invited paper presented at the MESS Workshop. Oisterwijk, Netherlands.

Conrad, F.G. (January, 2011). A conversation about conversational interviewing. “Town Meeting” presentation, Center for AIDS Prevention Studies, University of California San Francisco, San Francisco, CA.

Conrad, F.G. (December, 2010). Response to “Interview Structure” Issue Paper. Consumer Expenditure Survey Methods Workshop. US Census Bureau, Suitland, MD.

Conrad, F. (September, 2010). Improving measurement in web surveys. Second International Workshop on Internet Surveys, Statistics Korea, Daejeon, Korea.

Conrad, F.G. (July 2010). What to consider when considering a new technology (for survey data collection). Workshop on methodological innovation. St. Catherine’s College, Oxford University.

Conrad, F.G. (July 2010). Virtual interviewers. 4th Research Methods Festival. St. Catherine’s College, Oxford University.

Conrad, F.G. (March, 2010). Interactivity in web surveys. Research Triangle Institute. Research Triangle Park, NC.

Conrad, F.G. (February, 2010). Some thoughts about the future of web surveys. Survey Research Institute, Cornell University. Ithaca, NY.

Conrad, F.G. (November, 2009). Thoughts about the future of survey measurement. Midwest Association for Public Opinion Research Pedagogy Hour talk. Chicago, IL.

Conrad, F.G. (November, 2009). Interactivity in web surveys. Institute for Social and Economic Research, University of Essex. Survey Research Institute, Colchester, UK.

Conrad, F.G. (October, 2009). Reaction to Kristin Miller’s Cognitive Interviewing. Questionnaire Evaluation Methods Workshop. Hyattsville, MD.

Conrad, F.G. (September, 2009). Interactivity and web surveys. Internet Survey Methodology Workshop. Bergamo, Italy.

Conrad, F.G. (June, 2009). Thoughts about the future of survey measurement. 60th Anniversary Celebration, Institute for Social Research. University of Michigan. Ann Arbor, MI.

Conrad, F.G. (March, 2009). Envisioning the Survey Interview of the Future. Keynote presentation at FedCASIC 2009. Washington, DC.

Conrad, F.G. and Couper, M. P. (December, 2008). Classifying open-ended reports: Coding occupation in the Current Population Survey. Conference on Optimal Coding of Open-Ended Survey Data, University of Michigan, Ann Arbor, MI.

Conrad, F.G. and Traugott, M. (October, 2008). Usability of Electronic Voting and Public Opinion Toward the New Technology. Washington Statistical Society.

Conrad, F.G. (May, 2008). Envisioning the Survey Interview of the Future. Technical Keynote presentation at International Field Directors and Technologies Conference, New Orleans, LA.

Conrad, F.G. (May, 2008). Thoughts about the Future of CASM. Talk presented at “25 Years of Cognitive Research and Counting,” Committee on National Statistics, National Academy of Science, Washington, DC.

Conrad, F.G. (March, 2008). Electronic Voting: No more Hanging Chads but New Usability Challenges. Election Verification Network conference, New Orleans, LA.

Conrad, F. G. (October, 2006). Interactive aspects of web surveys. Keynote address, Midwestern Educational Research Association, Columbus, OH.

Conrad, F. G. (September, 2006). Use and non-use of clarification features in web surveys. 2006 Survey Research Methodology Conference. Center for Survey Research, Academia Sinica, Taipei, Taiwan.

Conrad, F.G., Lewis, B., Peytcheva, E., Traugott, M., Hanmer, M., Herrnson, P., Niemi, R., Bederson, B. (June, 2006). Usability of electronic voting systems: Results from a laboratory study. Workshop on Usability and Security of Electronic Voting, Human-Computer Interaction Laboratory, University of Maryland. Also presented at companion workshops in Ann Arbor, MI (April, 2007) and Salt Lake City, UT (May, 2007).

Conrad, F.G. (February, 2006). Cues of comprehension difficulty in telephone and web surveys. Westat, Inc., Rockville, MD.

Conrad, F.G. (December, 2005). Voter intent, voting technology and measurement error. Department of Methodology and Techniques, Vrije Universiteit Amsterdam.

Conrad, F. G. (December, 2005). Beyond questionnaire design: Resolving misconceptions during survey data collection. Department of Methodology and Techniques, Vrije Universiteit Amsterdam.

Conrad, F. G. (April, 2005). Beyond questionnaire design: Resolving misconceptions during survey data collection. Primary Research Staff seminar, Survey Research Center, University of Michigan.

Conrad, F.G. (January, 2005). Methodological considerations in the measurement of time use. Workshop on the Collection of Time Use Data, Institute for Social Research, University of Michigan.

Conrad, F. G. (October, 2002). Interactive aspects of web surveys: Lack of use, ease of use and user modeling. Invited paper presented at Web Survey workshop, ZUMA, Mannheim, Germany.

Conrad, F.G. (December, 2001). Generic and individual misconceptions of survey questions. Institute for Social Research, University of Michigan.

Conrad, F.G. (November, 2001). Misunderstanding standardized language. Psychology Department, University of Alberta.

Conrad, F.G. (June, 2001). Conceptual fit and survey data quality. Institute for Social Research, University of Michigan.

Conrad, F. G. (May, 2001). Response effects in questions about fixed attributes and memorable events. Paper presented at Seymour Sudman Symposium, Monticello, IL.

Conrad, F.G. and Blair, J. (March, 2001). Problem detection in cognitive interviews. Westat, Inc., Rockville, MD.

Conrad, F.G. and Couper, M. (December, 2000). Classifying open-ended reports: Coding occupation in the Current Population Survey. Washington Statistical Society, Washington, DC.

Conrad, F.G. and Schober, M. F. (July, 2000). A collaborative view of standardized survey interviews. Department of Research Methodology, Free University of Amsterdam.

Conrad, F.G. (April, 1998). Costs and benefits of standardized versus conversational survey interviewing. Psychology Department, George Mason University.

Conrad, F.G. (March, 1998). Costs and benefits of standardized versus conversational survey interviewing. Joint Program for Survey Methodology, University of Maryland.

Conrad, F.G. and Schober, M.F. (February, 1997). Reducing survey measurement error through conversational interaction. Washington, DC/Baltimore Chapter of the American Association for Public Opinion Research at Westat, Inc., Rockville, MD.

Conrad, F.G. and Schober, M.F. (December, 1996). Reducing survey measurement error through conversational interaction. U.S. Census Bureau, Washington, DC.

Conrad, F.G. (February, 1996). Knowledge-based classification of survey data: Using expert systems in data collection and review. Washington Statistical Society, Washington, DC.

Levi, M. D. and Conrad, F.G. (June, 1995). A heuristic evaluation of a world-wide web prototype. U.S. Census Bureau, Washington, DC.

Levi, M. D. and Conrad, F.G. (April, 1995). A heuristic evaluation of a world-wide web prototype. Software Psychology Society, Washington, DC.

Conrad, F.G. (November, 1994). Procedural Aspects of CASIC. Washington Statistical Society, Washington, DC.

Conrad, F.G. (February, 1991). How the form of our knowledge affects the form of our reports. National Center for Health Statistics, Hyattsville, MD.

Conrad, F.G. (February, 1989). Learning to program in LISP with an intelligent tutoring system. Southwest Research Institute, San Antonio, TX.

Conrad, F.G. (February, 1989). Learning to program in LISP with an intelligent tutoring system. Boeing Research and Technology Center, Ridley Park, PA.

Conrad, F.G. (January, 1989). Learning to program in LISP with an intelligent tutoring system. Mitre Corporation, Bedford, MA.

Conrad, F.G. (February, 1986). Conceptual combination and the Given/New distinction. Carnegie Group Incorporated, Pittsburgh, PA.

Conrad, F.G. (January, 1986). Conceptual combination and the Given/New distinction. Department of Psychology, Carnegie Mellon University, Pittsburgh, PA.

Contributed Conference and Workshop Presentations Not in Proceedings

Corey, J., Conrad, F., Reichert, H., Goldstein, S., Ostrow, J., Sadowsky, M. (August, 2017). Moment-to-moment listening experience for popular songs. Paper presented at the biennial conference of the Society for Music Perception and Cognition, San Diego, CA.

Conrad, F.G. & Schober, M.F. (July, 2017). Taking stock: Twenty years of research on conversational interviewing. Paper presented at the Seventh Conference of the European Survey Research Association, Lisbon, Portugal.

Conrad, F.G. (July, 2017). Pedagogical challenges in training survey methodologists. Paper presented at the Seventh Conference of the European Survey Research Association, Lisbon, Portugal.

Davis, R., Johnson, T., Conrad, F., Lee, S., Thrasher, J., Resnicow, K., & Peterson, K. (July, 2017). Identifying sociocultural predictors of acquiescence among Mexican American, Puerto Rican, and Cuban American survey respondents. Paper presented at the Seventh Conference of the European Survey Research Association, Lisbon, Portugal.

Cibelli Hibben, K., Felderer, B., & Conrad, F. (July, 2017). The Effect of Respondent Commitment on Response Quality in Two Online Surveys. Paper presented at the Seventh Conference of the European Survey Research Association, Lisbon, Portugal.

Fail, S., Schober, M.F., & Conrad, F.G. (July, 2017). Hesitation in socially desirable responses in a mobile phone survey. Paper presented at the Seventh Conference of the European Survey Research Association, Lisbon, Portugal.

Fail, S., Schober, M.F., & Conrad, F.G. (May, 2016). Hesitation in socially desirable responses in a mobile phone survey. Paper presented at the Annual Conference of the American Association for Public Opinion Research. Austin, TX.

Hibben, K.C., Felderer, B., Conrad, F.G. (May, 2016). The effect of respondent commitment in an online survey. Paper presented at the Annual Conference of the American Association for Public Opinion Research. Austin, TX.

Pasek, J., Yan, H.Y., Conrad, F.G., Newport, F., Marken, S. (May, 2016). The stability of economic correlations over time: Comparing data from Gallup’s Daily Tracking Poll, Michigan’s Surveys of Consumers, the S&P 500 and . Paper presented at the Annual Conference of the American Association for Public Opinion Research. Austin, TX.

West, B., Conrad, F.G., Kreuter, F., Mittereder, F. (May, 2016). Decomposing the interviewer variance introduced by standardized and conversational interviewing. Paper presented at the Annual Conference of the American Association for Public Opinion Research. Austin, TX.

Conrad, F., Corey, J., Goldstein, S., Ostrow. J., & Sadowsky, M. (August, 2015). Attributes of songs people love and listen to most often. Paper presented at Biennial Meeting of the Society for Music Perception and Cognition, Nashville, TN.

Allum, N. & Conrad, F. (July, 2015). Consequences of mid-stream mode switching in a panel survey. Paper presented at the Sixth Conference of the European Survey Research Association, Reykjavik, Iceland.

Pasek, J., Conrad, F.G., Hou, E., Schober, M.F., Lampe, C., & Guggenheim, L. (July, 2015). Using Twitter Data to Calibrate Retrospective Assessments in Surveys. Paper presented at the Sixth Conference of the European Survey Research Association, Reykjavik, Iceland.

Schober, M.F., Conrad, F.G., Pasek, J., Guggenheim, L., Lampe, C., & Hou, E. (July, 2015). A “collective-vs-self” hypothesis for when Twitter and survey data tell the same story. Paper presented at the Sixth Conference of the European Survey Research Association, Reykjavik, Iceland.

Allum, N. & Conrad, F.G. (May, 2015). An evaluation of the effect of mode-switching in panel surveys using recall data. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Hollywood, CA.

Conrad, F.G., Schober, M.F., Pasek, J., Guggenheim, L., Lampe, C., & Hou, E. (May, 2015). A “collective-vs-self” hypothesis for when Twitter and survey data tell the same story. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Hollywood, CA.

Liu, M., Conrad, F.G., & Lee, S. (May, 2015) Examining Acquiescent and Extreme Response Styles between Face-to-Face and Web Surveys. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Hollywood, CA.

Pasek, J., Hou, E., Schober, M.F., Conrad, F.G., Lampe, C., & Guggenheim, L. (May, 2015). Using Twitter Data to Calibrate Retrospective Assessments in Surveys. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Hollywood, CA.

West, B.T., Conrad, F.G., Kreuter, F. & Mittereder, F. (May 2015). Comparing the Interviewer Variance Introduced by Standardized and Conversational Interviewing. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Hollywood, CA.

Conrad, F.G., McCullough, W., & Nishimura, R. (2014). Matrix versus paging designs for a brand attribution task. Paper presented at the Seventh Workshop on Internet Survey Methodology. Free University of Bozen-Bolzano, South Tyrol, Italy.

Conrad, F.G., Schober, M.F., Antoun, C., Hupp, A., & Yan, H.Y. (July, 2014). Interviewing by Texting: Costs, Efficiency and Data Quality. VI European Congress of Methodology. Utrecht, Netherlands.

Conrad, F.G., Schober, M.F., Antoun, C., Hupp, A., & Yan, H.Y. (May 2014). Interviewing by Texting: Costs, Efficiency and Data Quality. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Anaheim, CA.

Schober, M.F., Conrad, F.G., Yan, H., & Sauvage-Mar, M. (May, 2014). Effort and sensitivity effects in mobile text messaging interviews. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Anaheim, CA.

Sun, H., Conrad, F.G., & Kreuter, F. (May, 2014). Influence of Prior Respondent-Interviewer Interaction on Disclosure in Audio-CASI. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Anaheim, CA.

Sun, H., Conrad, F.G., & Kreuter, F. (May, 2014). CAPI vs. Video-mediated Interviews: Rapport Evaluation and Sensitive Disclosure. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Anaheim, CA.

Zhang, C., Antoun, C., Yan, H., Conrad, F.G., Tourangeau, R., & Couper, M.P. (May, 2013). Characteristics and Behaviors of Professional Respondents on Online Opt-in Panels. Paper presented at Annual Conference of the American Association for Public Opinion Research, Anaheim, CA.

Conrad, F. G. & Schober, M.F. (July, 2013). Comparing text and voice survey modes on smartphones. Paper presented at the Fifth Conference of the European Survey Research Association. Ljubljana, Slovenia, July 15-19.

Antoun, C., Zhang, C., Conrad, F.G. & Schober, M.F. (May, 2013). Comparisons of Online Recruitment Strategies: Craigslist, Google Ads and Amazon’s Mechanical Turk. Poster presented at the Annual Conference of the American Association for Public Opinion Research, Boston, MA.

Conrad, F.G., Schober, M.F., Zhang, C., Yan, H., Vickers, L., Johnston, M., Hupp, A.L., Hemingway, L., Fail, S., Ehlen, P., & Antoun, C. (August, 2013). Mode choice on an iPhone increases survey data quality. Paper presented at MESS workshop, the Hague, Netherlands.

Conrad, F.G., Schober, M.F., Zhang, C., Yan, H., Vickers, L., Johnston, M., Hupp, A.L., Hemingway, L., Fail, S., Ehlen, P., & Antoun, C. (May, 2013). Mode choice on an iPhone increases survey data quality. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Boston, MA.

Johnston, M., Ehlen, P., Conrad, F.G., Schober, M.F., Antoun, C., Fail, S., Hupp, A.L., Vickers, L., Yan, H., Zhang, C. (May, 2013). Reducing survey error in a mobile speech-IVR system. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Boston, MA.

Schober, M.F., Conrad, F.G., Antoun, C., Bowers, A.W., Hupp, A.L. & Yan, H. (May, 2013). Conversational interaction and survey data quality in SMS text interviews. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Boston, MA.

Schober, M.F. & Conrad, F.G. (May, 2013). Conversational interaction and data quality in mobile text and voice interviews. Paper presented at the Interviewer- Respondent Interaction Workshop (Honoring Charles Cannell). Boston, MA.

Yan, T., Conrad, F.G., & Liu, M. (2013). How do interviewers change their speech and interaction characteristics as they make more contacts? Paper presented at the Interviewer-Respondent Interaction Workshop (Honoring Charles Cannell). Boston, MA.

Conrad, F.G., Tourangeau, R., Couper, M. P. & Zhang, C. (August, 2012). Professional web respondents and data quality. Sixth Measurement and Experimentation in the Social Sciences Workshop. Amsterdam, Netherlands.

Hubbard, F., Antoun, C. & Conrad, F. (May, 2012). Conversational interviewing, the comprehension of opinion questions and nonverbal sensitivity. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Orlando, FL.

Schober, M.F., Conrad, F.G., Antoun, C., Carroll, D., Ehlen, P., Fail, S., Hupp, A.L., Johnston, M., Kellner, C., Nichols, K.F., Percifield, L., Vickers, L., Yan, H., & Zhang, C. (May, 2012). Disclosure and quality of answers in text and voice interviews on iPhones. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Orlando, FL.

Conrad, F.G. (August, 2011). Interactive carrots and sticks to increase response accuracy. Paper presented at Internet Survey Methodology Workshop, Central Bureau of Statistics, the Hague, Netherlands.

Coiner, T.F., Schober, M.F., and Conrad, F.G. (May, 2011). Which web survey respondents are most likely to click for clarification? Paper presented at the Annual Conference of the American Association for Public Opinion Research, Phoenix, AZ.

Conrad, F.G., Schober, M.F., & Nielsen, D. (May, 2011). Race of Virtual Interviewer Effects. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Phoenix, AZ.

Conrad, F., Tourangeau, R., Couper, M. & Zhang, C. (May, 2011). Interactive interventions in Web surveys can increase response accuracy. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Phoenix, AZ.

Tourangeau, R., Conrad, F., & Couper, M. (May, 2011). Up means good: The impact of screen position on evaluative ratings in web surveys. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Phoenix, AZ.

Conrad, F., Schober, M. & Nielsen, D. (August, 2010). Effects of race and gender of virtual interviewers on survey responses. Conference of the Society for Text and Discourse, Chicago, IL.

Conrad, F., Rips, L. & Fricker, S. (August, 2010). Seam effects for quantitative information in panel surveys. Paper presented at the Annual Conference of the American Statistical Association, Vancouver, British Columbia, Canada.

Conrad, F., Zhang, C., Tourangeau, R. & Couper, M. (May, 2010). Professional Web respondents and data quality. Presentation at American Association for Public Opinion Research, Chicago, IL.

Freedman, V.A., Stafford, F., Conrad, F. & Schwarz, N. (July, 2010). Assessing time diary quality: Evidence from Disability and Use of Time (DUST), a supplement to the Panel Study of Income Dynamics (PSID). Paper presented at the International Association of Time Use Researchers, Paris, France.

Conrad, F., Couper, M., Tourangeau, R., Galesic, M. & Yan, T. (July, 2009). Interactive Feedback Can Improve the Quality of Responses in Web Surveys. Conference of the European Survey Research Association. Warsaw, Poland.

Conrad, F.G., Tourangeau, R., Couper, M. & Kennedy, C. (May, 2009). Interactive interventions in web surveys increase respondent conscientiousness. Presentation at American Association for Public Opinion Research, Hollywood, FL.

Couper, M.P., Singer E., Conrad, F. G., Groves, R. M. (May, 2009). Disclosure risk, disclosure harm, topic and participation in a mail survey. Presentation at American Association for Public Opinion Research, Hollywood, FL.

Freedman, V.A., Stafford, F., Schwarz, N., and Conrad, F. (June, 2009). Measuring disability, time use and well-being of older couples: Lessons from the PSID. Presentation at the American Time Use Research Conference, College Park, MD.

Rosen, R., Schober, M., Conrad, F. (May, 2009). Mode effects in questions about stigmatized behaviors and personal distress. Presentation at American Association for Public Opinion Research, Hollywood, FL.

Tourangeau, R., Conrad, F., Couper, M., Redline, C., Ye. C. (May, 2009). The effects of providing examples: Questions about frequencies and ethnicity background. Presentation at American Association for Public Opinion Research, Hollywood, FL.

Conrad, F.G., Tourangeau, R., Couper, M. & Kennedy, C. (September, 2008). Interactive interventions in web surveys can improve data quality. Presentation at RC33 International Conference on Social Science Methodology. Naples, Italy.

Conrad, F.G., Schober, M.F., Jans, M., Orlowski, R., Nielsen, D. & Levenstein, R. (May, 2008). Virtual interviews on mundane, non-sensitive topics: Dialog capability affects response accuracy more than visual realism does. Paper presented at annual conference of the American Association for Public Opinion Research, New Orleans, LA.

Conrad, F.G., Schober, M.F., Jans, M., Orlowski, R., Nielsen, D. & Levenstein, R. (March, 2008). Coding interviews conducted by virtual agents. Presentation at Coding Behavioral Video Data and Reasoning Data in Human-Robot Interaction Workshop, Human-Robot Interaction 2008 Conference. Amsterdam, Netherlands.

Couper, M.P., Singer, E., Conrad, F.G., Groves, R.M. (May, 2008). An experimental study of disclosure risk, disclosure harm, incentives, and survey participation. Paper presented at annual conference of the American Association for Public Opinion Research, New Orleans, LA.

Kennedy, C., Tourangeau, R., Conrad, F., Couper, M., Redline, C. (May, 2008). The impact of the spacing of the scale options in a web survey. Paper presented at annual conference of the American Association for Public Opinion Research, New Orleans, LA.

Lind, L.H., Schober, M.F., Conrad, F.G. (May, 2008) Social cues can affect answers to threatening questions in virtual interviews. Paper presented at annual conference of the American Association for Public Opinion Research, New Orleans, LA.

Traugott, M., Conrad, F. & Rice, T. (May, 2008). Public opinion about electronic voting: Voters’ knowledge and their beliefs about the new voting technology. Paper presented at the annual conference of the American Association for Public Opinion Research, New Orleans, LA.

Conrad, F.G. & Schober, M.F. (September, 2007). Considerations in adopting new technologies for survey interviews. Presentation at the “Envisioning the survey interview of the future” workshop. Southampton, UK.

Schober, M.F. & Conrad, F.G. (September, 2007). Dialogue capability and perceptual realism in survey interviewing agents. Paper presented at the conference of the Association for Survey Computing, Southampton, UK.

Conrad, F.G. (August, 2007). Improving the ARMS: Dealing with Complexity in Surveys – Questionnaire Design and Data Collection. Paper presented at the Annual Conference of the American Statistical Association, Salt Lake City, UT.

Conrad, F.G., Schober, M. F., Dijkstra, W. & Ongena, Y. (July, 2007). Visual and verbal cues of survey respondents’ need for clarification. Paper presented at seventh conference of the Society for Applied Research in Memory and Cognition, Lewiston, ME.

Conrad, F.G. & Schober, M. F. (July, 2007). Dialogue capability and perceptual realism in survey interviewing agents. Paper presented at the annual conference of the Society for Text and Discourse, Glasgow, Scotland.

Peytchev, A., Conrad, F.G., Couper, M. P. & Tourangeau, R. (May, 2007). Minimizing respondent effort increases use of definitions in web surveys. Paper presented at the annual conference of the American Association for Public Opinion Research, Anaheim, CA.

Yan, T., Conrad, F.G., Couper, M.P. & Tourangeau, R. (May, 2007). Should I stay or should I go: The effects of progress indicators, promised duration, and questionnaire length on completing web surveys. Paper presented at the annual conference of the American Association for Public Opinion Research, Anaheim, CA.

Schober, M.F. & Conrad, F.G. (May, 2007). Dialogue capability and perceptual realism in survey interviewing agents. Paper presented at the annual conference of the American Association for Public Opinion Research, Anaheim, CA.

Conrad, F.G., Hanmer, M.J. & Traugott, M. W. (November, 2006). Voter confidence in the new generation of election technology. Paper presented at Midwest Association for Public Opinion Research, Chicago, IL.

Conrad, F., Park, H., Singer, E., Couper, M., Hubbard, F. & Groves, R. (May, 2006). Impact of disclosure risk on survey participation decisions. Paper presented at annual conference of the American Association for Public Opinion Research, Montreal, QC.


Ehlen, P., Schober, M.F. & Conrad, F.G. (May, 2006). Modeling response times for old and young respondents to improve their understanding of survey questions. Paper presented at annual conference of the American Association for Public Opinion Research, Montreal, QC.

Schober, M., Conrad, F. & Dijkstra, W. (May, 2006). Visual and verbal cues of survey respondents’ need for clarification. Paper presented at annual conference of the American Association for Public Opinion Research, Montreal, QC.

Suessbrick, A., Schober, M.F. & Conrad, F.G. (May, 2006). Think-aloud evidence of conceptual misalignment in telephone interviews. Paper presented at annual conference of the American Association for Public Opinion Research, Montreal, QC.

Conrad, F.G., Lewis, B., Peytcheva, E., Traugott, M., Hanmer, M., Herrnson, P., Niemi, R., & Bederson, B. (April, 2006). The usability of electronic voting systems: Results from a laboratory study. Paper presented at Midwest Political Science Association, Chicago, IL.

Conrad, F. G., Schober, M.F. & Dijkstra, W. (January, 2006). Cues of comprehension difficulty in telephone interviews. Second Conference on Telephone Survey Methodology. Miami, FL.

Niemi, R.G., Herrnson, P.S., Hanmer, M. J., Conrad, F., Traugott, M. & Bederson, B. B. (April, 2006). Voters’ abilities to cast a write-in vote using electronic voting systems. Paper presented at Midwest Political Science Association, Chicago, IL.

Conrad, F. G. & Schober, M.F. (November, 2005). Envisioning the survey interview of the future. Presentation at workshop, Envisioning the survey interview of the future. University of Michigan, Ann Arbor, MI.

Conrad, F.G., Peytcheva, E., Traugott, M.W., Hanmer, M.J., Herrnson, P. S., Bederson, B. B., Niemi, R. G. (May, 2005). Voter Intent, Voting Technology and Measurement Error. Paper presented at the annual conference of American Association for Public Opinion Research, Miami, FL.

Herrnson, P. S., Niemi, R.G., Hanmer, M. J., Francia, P. L., Bederson, B. B., Conrad, F. G. & Traugott, M.W. (April, 2005). Assessments of Electronic Voting Systems: Field Tests with a Usability Focus. Paper presented at the annual conference of the Midwest Political Science Association, Chicago, IL.

Tonn, B., Conrad, F. & Hemrick, A. (August, 2005). Cognitive representations of the future. Paper presented at World Futures Studies Federation XIXth World Conference, Budapest, Hungary.


Traugott, M. W., Hanmer, M. J., Park, W., Herrnson, P. S., Niemi, R. G., Bederson, B. B., Conrad, F. G. (April, 2005). The Impact of Voting Systems on Residual Votes, Incomplete Ballots, and Other Measures of Voting Behavior. Paper presented at the annual conference of the Midwest Political Science Association, Chicago, IL.

Conrad, F., Schober, M. & Dijkstra, W. (November, 2004). Implicit cues of misunderstanding in spoken conversation. Annual Meeting of the Psychonomic Society. Minneapolis, MN.

Conrad, F. (September, 2004). Testimony on panel for Usability Testing Voting Systems, at hearings held by the Technical Guidelines Development Committee of the Election Assistance Commission. Gaithersburg, MD.

Conrad, F., Couper, M., Tourangeau, R. & Peytchev, A. (August, 2004). Effectiveness of Progress Indicators in Web Surveys. Paper presented at RC33 Sixth International Conference on Social Science Methodology, Amsterdam, Netherlands.

Conrad, F. (September, 2003). Invited participant in National Science Foundation sponsored workshop on e-rulemaking, Arlington, VA.

Conrad, F., Couper, M. & Tourangeau, R. (October, 2003). Interactive and visual aspects of web surveys. Paper presented at the Interuniversity Consortium for Political and Social Research Meeting of Official Representatives. Ann Arbor, MI.

Conrad, F., Couper, M. & Tourangeau, R. (August, 2003). Interactive features in web surveys. Paper presented at the Annual Conference of the American Statistical Association. San Francisco.

Conrad, F., Couper, M., Tourangeau, R. & Baker, R. (2003). Use and non-use of clarification features in web surveys. Paper presented at 58th Annual Conference of the American Association for Public Opinion Research, Nashville, TN.

Conrad, F. & Blair, J. (2003). Aspects of data quality in cognitive interviews: The case of verbal reports. Round table presentation at 58th Annual Conference of the American Association for Public Opinion Research, Nashville, TN.

Couper, M.P., Tourangeau, R. & Conrad, F. (2003). Visual aspects of web survey design. Paper presented at the Annual Conference of the American Statistical Association. San Francisco.

Couper, M.P., Tourangeau, R., Conrad, F. & Crawford, S. (2003). What they see is what we get: Response options for web surveys. Paper presented at 58th Annual Conference of the American Association for Public Opinion Research, Nashville, TN.

Bosley, J. & Conrad, F. (2002). Usability issues with heterogeneous populations. Annual conference of American Society for Information Science and Technology (SIG-USE), Philadelphia, PA.

Conrad, F.G. & Blair, J. (2002). Aspects of data quality in cognitive interviews: The case of verbal reports. Invited paper presented at Questionnaire Design Evaluation and Testing conference, Charleston, S.C.

Coiner, T.F., Schober, M.F., Conrad, F.G. & Ehlen, P. (2002). Improving comprehension of web survey questions by modeling users’ age. Paper presented at the annual conference of the Society for Text and Discourse, Chicago, IL.

Lind, L.H., Conrad, F.G., & Schober, M.F. (2002). Sensitizing respondents to conceptual misalignment in a web-based survey. Paper presented at the 13th Annual Winter Conference on Discourse, Text & Cognition, Jackson Hole, WY.

Conrad, F.G. & Couper, M.P. (November, 2001). Classifying Open Ended Reports: Coding Occupation in the Current Population Survey. Paper presented at the Federal Conference on Statistical Methodology, Arlington, VA.

Conrad, F.G. & Schober, M.F. (November, 2001). Clarifying survey questions when respondents don’t know they need clarification. Paper presented at the Federal Conference on Statistical Methodology, Arlington, VA.

Conrad, F.G. & Schober, M. F. (2001). Adaptive interfaces for collecting survey data from users. Paper presented at National conference for Digital Government Research (NSF sponsored). Redondo Beach, CA.

Conrad, F.G. & Schober, M. F. (2001). Improving respondents’ understanding of survey questions in web-based questionnaires. Poster presented at the Workshop on Statistics-related Digital Government Research (NSF sponsored, by invitation). US Bureau of Labor Statistics, Washington, DC.

Schober, M.F., Suessbrick, A., & Conrad, F.G. (2001). How aware are conversational partners of their conceptual differences? Twelfth Annual Winter Conference on Discourse, Text & Cognition, Jackson Hole, WY.

Conrad, F.G. and Schober, M.F. (2000). Conversational interviewing and data quality. Paper presented at Fifth International Conference on Social Science Methodology, Cologne, Germany.


Conrad, F.G. and Schober, M.F. (2000). Standardized wording does not guarantee standardized interpretation. Poster presented at the Seventh International Pragmatics Conference, Budapest, Hungary.

Couper, M.P. and Conrad, F.G. (2000). Classifying open-ended reports: Coding occupation in the Current Population Survey. Paper presented at Fifth International Conference on Social Science Methodology, Cologne, Germany.

Marchionini, G., Brunk, B., Komlodi, A., Conrad, F. and Bosley, J. (2000). Look before you click: A relational browser for federal statistics web sites. Annual Meeting of the American Society for Information Science, Chicago, IL.

Schober, M.F. and Conrad, F.G. (2000). Metacognition about conceptual differences with conversational partners. Seventh International Pragmatics Conference, Budapest, Hungary.

Schober, M.F., & Conrad, F.G. (2000). User interfaces that promote accurate interpretation of survey questions. Paper presented at Fifth International Conference on Social Science Methodology, Cologne, Germany.

Schober, M.F., Conrad, F.G., & Fricker, S.S. (2000). Listeners often don't recognize when their conceptions differ from speakers'. Paper presented at the Forty-first Annual Meeting of the Psychonomic Society, New Orleans, LA.

Schober, M.F. and Conrad, F.G. (1999). When is conversational collaboration necessary for accurate comprehension? Fortieth Annual Meeting of the Psychonomic Society, Los Angeles, CA.

Conrad, F.G., Brown, N.R. and Dashen, M. (1999). Estimating the frequency of events from unnatural categories. Third Conference of the Society for Applied Research in Memory and Cognition, Boulder, CO.

Conrad, F.G. (1999). Invited participant in National Research Council sponsored workshop, Computer and Communications Research to Enable Better Use of Information Technology in Government, Washington, DC.

Schober, M.F. and Conrad, F.G. (1999). Standardized interviewing methods can actually harm survey response accuracy. Tenth Annual Winter Conference on Discourse, Text & Cognition, Jackson Hole, WY.

Schober, M.F., Conrad, F.G. and Bloom, J.E. (1999). A collaborative approach to computer-administered surveys. Ninth Annual Meeting of the Society for Text and Discourse, Vancouver, BC.


Schober, M.F. and Conrad, F.G. (1998). A collaborative view of standardized survey interviews. Sixth International Conference on Pragmatics, Reims, France.

Conrad, F.G., (1998). Invited participant in National Science Foundation Workshop on Information Retrieval Toolkits, Pittsburgh, PA.

Conrad, F.G. (1997). Modeling survey participants to reduce measurement error. Second Advanced Seminar on Cognitive Aspects of Survey Methodology (by invitation), Charlottesville, VA.

Conrad, F.G. and Schober, M.F. (1996). Scripted versus conversational interviewing: A cost-benefit analysis. Twenty-fourth Annual Conference of the Association for Consumer Research, Tucson, AZ.

Katz, I. and Conrad, F.G. (1997). Questionnaire designer: A software tool for specification of CASIC instruments. Fifty-Second Annual Conference of the American Association for Public Opinion Research, Norfolk, VA.

Katz, I., Stinson, L.L., and Conrad, F.G. (1997). Questionnaire designers versus instrument authors: Bottlenecks in the development of computer administered questionnaires. Fifty-Second Annual Conference of the American Association for Public Opinion Research, Norfolk, VA.

Couper, M.P. and Conrad, F.G. (1996). Collecting data to facilitate the classification of occupations using a skill-based approach. Fourth International Social Science Methodology Conference, Essex, UK.

Katz, I. and Conrad, F.G. (1996). Questionnaire designer: A software tool for specification of CASIC instruments. InterCASIC ’96: The International Conference on Computer-Assisted Survey Information Collection, San Antonio, TX.

Katz, I., Conrad, F.G. and Stinson, L.L. (1996). Questionnaire designers versus instrument authors: An investigation of the development of CASIC instruments at BLS and Census. InterCASIC ’96: The International Conference on Computer-Assisted Survey Information Collection, San Antonio, TX.

Schober, M.F. and Conrad, F.G. (1996). Scripted versus collaborative interaction: The case of response accuracy in survey interviews. Sixth Annual Conference of the Society for Text and Discourse, San Diego, CA.

Uglow, D.A., Conrad, F.G. and Bosley, J. (1996). Prospects and principles for pen CASIC. InterCASIC ’96, San Antonio, TX.


Conrad, F.G. and Schober, M.F. (1995). On the costs of conversational inflexibility in survey interviews. Vrije Universiteit of Amsterdam Workshop on Interviewer-Respondent Interaction in the Standardized Survey Interview, Amsterdam, Netherlands.

Conrad, F.G. (1995). Using expert systems to model and improve survey classification processes. International Conference on Survey Methods and Process Quality, Bristol, England.

Conrad, F.G. (1995). Using expert systems to model and improve survey classification processes. Field Directors and Field Technologies Conference, Fort Lauderdale, FL.

Mullin, P.A., Conrad, F.G., Sander, J.E. and Herrmann, D. (1994). Modeling the question answering processes of survey respondents. Annual Conference of the American Psychological Society, Washington, DC.

Conrad, F.G. and Brown, N.R. (1994). Estimating frequency: A multiple strategy perspective. Third Conference on Practical Aspects of Memory, College Park, MD.

Conrad, F.G. (1993). Procedural aspects of CASIC. Field Directors and Field Technologies Conference, Chicago, IL.

Conrad, F.G., Mullin, P., Sander, J. and Herrmann, D. (1992). A cognitive theory of the survey interview. 47th Annual Conference of the American Association for Public Opinion Research, St. Petersburg Beach, FL.

Conrad, F.G. and Cooper, T.A. (1990). Programming maintainable, complex, expert systems. DEC Sessions, American Association for Artificial Intelligence, Boston, MA.

Discussant

Commentary: Findings from the ESRC Survey Design and Measurement Initiative. Royal Statistical Society, London, England (September, 2010).

Distinguished Lecture by Nora Cate Schaeffer, “Conversational practices with a purpose: Interaction within the standardized survey interview.” Joint Program in Survey Methodology, College Park, MD, April, 2006.

Papers in session on “Questionnaire Development in Survey Instruments,” American Association for Public Opinion 2004, Phoenix, AZ (May).

Papers in session on “Questionnaire Design,” American Association for Public Opinion Research 2003, Nashville, TN (May).


Paper by Mick Couper, Roger Tourangeau and Darby Steiger, “Social Presence in Web Surveys.” FCSM Seminar On The Funding Opportunity In Survey Research. Bureau of Labor Statistics, Washington, DC. 2001.

Papers in session on “When participants have unequal knowledge,” American Association for Artificial Intelligence Fall Symposium: Psychological Models of Communication in Collaborative Systems, North Falmouth, MA, 1999 (November).

Papers in session on “At the Intersection of Cognition and Survey Methodology,” Joint Meetings of the American Statistical Association, Baltimore, MD, 1999 (August).

Paper by James Lepkowski, “Event History Analysis of Interviewer and Respondent Survey Behavior.” Washington Statistical Society Methodology Seminar, Washington, DC, 1998

Papers in session on “Frequency Estimation” Annual Conference of the American Association for Public Opinion Research, Norfolk, VA, 1997 (May).

Organizer/Chair of Conference Sessions

Coordinator, “When do social media data align with survey responses and administrative data?,” panel at 6th Conference of the European Survey Research Association, Reykjavik, Iceland.

Co-organizer and chair, “Survey Responses vs. Tweets: New Choices for Social Measurement,” panel at annual conference of American Association for Public Opinion Research. Orlando, FL, May 19, 2012.

Co-organizer and chair, “New Frontiers in Virtual Interviewing,” panel at annual conference of American Association for Public Opinion Research, New Orleans, LA, May 18, 2008.

Co-organizer, co-presenter with Mick Couper, “Designing and Implementing On-Line Surveys” workshop at E-Social Science 2007 conference, Ann Arbor, MI, October 7, 2007.

Organizer and chair “Envisioning the Survey Interview of the Future,” panel at the conference of the Association for Survey Computing, September 13, 2007, Southampton, UK.

Co-organizer with Michael Schober, Workshop “Envisioning the Survey Interview of the Future,” September 12, 2007, Southampton University, Southampton, UK.


Co-organizer with Michael Schober and Chair, “Communication Technologies and the Survey Interview Process,” panel at annual conference of American Association for Public Opinion Research, May 19, 2007, Anaheim, CA.

Co-organizer with Michael Schober, Roundtable at annual meeting American Association for Public Opinion Research, “Envisioning the Survey Interview of the Future,” May 20, 2006, Montreal, QC.

Co-organizer with Michael Schober, Workshop “Envisioning the Survey Interview of the Future,” November 4-6, 2005, University of Michigan, Ann Arbor, MI.

Chaired session “Sampling II,” at annual AAPOR conference, Miami, FL.

Chaired session “Internet Surveys,” at Questionnaire Design Testing and Evaluation conference, Charleston, SC, 2002.

Chaired Methodology Section Seminar, “Delivering Interactive Graphics on the Web.” Washington Statistical Society, Washington, DC, 2000.

Co-organized with Michael D. Levi and moderated panel “Is the Web really different than everything else?” Human Factors in Computer Systems CHI 98, Los Angeles, CA, 1998.

Co-organized and co-facilitated workshop with Michael D. Levi: “Web site usability testing.” Human Factors in Computer Systems CHI 97. Atlanta, GA, 1997.

Co-organized and co-facilitated workshop with Michael D. Levi: “Interacting with statistics: Designing interfaces to statistical databases.” Human Factors in Computer Systems CHI 99. Pittsburgh, PA, 1999.

Co-organized and co-chaired session with Mick Couper, “Usability testing of survey interviewing software.” Federal CASIC Workshop, Washington, DC, 1997.

Organized and chaired session, “Measuring consumption and consuming measurement: The challenges of studying consumers from a Federal perspective.” Twenty-fourth Annual Conference of the Association for Consumer Research, Tucson, AZ, 1996.

Organized and chaired session, Memory for Time and Frequency. Third Conference on Practical Aspects of Memory, College Park, MD, 1994.

Teaching Experience

Program in Survey Methodology, University of Michigan; Joint Program in Survey Methodology, University of Maryland; and Summer Institute in Survey Research Techniques, University of Michigan


• Social and Cognitive Foundations of Survey Measurement/Cognition, Communication and Survey Measurement, taught or co-taught 22 times between 1998 and 2016
• Advanced Seminar in Cognition and Survey Research, co-taught, 2007
• Envisioning the survey interview of the future, taught seven times between 2006 and 2016
• Questionnaire Design, co-taught, 2003
• Data Collection Methods, taught/co-taught 24 times between 2003 and 2016
• Introduction to Survey Research, team taught 2003 and 2004
• Fundamentals in Survey Methodology, coordinated team-taught graduate course in some years and taught multiple sessions in all years (19 times between 2004 and 2014)
• Doctoral Seminar in Survey Methodology, taught 3-week module, 2002, 2003; co-taught full course 5 semesters from 2011-2013
• Survey Design Seminar, Program in Survey Methodology, University of Michigan, taught/co-taught, 2003-4, 2004-5, 2006 (in some years a two-term sequence, in others one term)

Digital Education and Innovation Lab, Massive Open Online Courses (MOOCs), University of Michigan
• Questionnaire Design (co-taught), continuously offered from 2014
• Data Collection: Online, Face-to-face and Telephone, continuously offered from 2016

Psychology Department, University of Michigan
• Psychology of Survey Response, Winter 2015

London School of Economics Summer School, taught 3 day module on survey data collection, July 2011.

Center for Statistical Consulting, Advising and Research, University of Michigan. Introduction to Survey Design: Data Collection, Questionnaire Design and Response Processes. One day short course, taught twice a year from 2010 to 2017.

Inter-university Consortium for Political and Social Research, University of Michigan. Introduction to Survey Design: Data Collection, Questionnaire Design and Response Processes. One day short course, December 2013.

Joint Program in Survey Methodology, University of Maryland. Psychology of Survey Response. Two day short course, co-taught with Roger Tourangeau (2011).

Certificate Program in Survey Methodology, Odum Institute for Research in the Social Sciences, University of North Carolina. Survey Interviewing Techniques, one day short course (2007).


Psychology Department, George Mason University
Human-Technology Interaction: Cognition and Usability, semester-long graduate seminar (1998)

Swiss Summer School, Swiss National Science Foundation, at L'Università della Svizzera Italiana
Reducing Survey Measurement Error, one-week doctoral course (1998)

Department of Psychology, Carnegie Mellon University
Introduction to Symbolic Processing (LISP programming), undergraduate semester-long course (1987, 1988)

Guest lecturer
• School of Information, University of Michigan: Evaluation of Systems and Services (March, 2011)
• Department of Psychology, University of Michigan: Research Methods (March 2009, March 2011, November 2012)
• University of Illinois, Library and Information Science distance learning program (1998 - 2002): various topics in web site usability
• New School University, Department of Psychology (2002): Research Methods
• Free University (Amsterdam), Department of Research Methodology (2000): Research Methods
• George Mason University, Department of Public Administration (1993, 1994): Research Methods in Public Policy

Doctoral Dissertation Committees

Currently chairing two committees: one at the University of Michigan (Survey Methodology) and one at the University of Maryland (Survey Methodology); serving on one committee at the New School for Social Research (Psychology) and one at the University of Maryland (Government and Politics)

Previously chaired or co-chaired five committees at the University of Michigan (Survey Methodology) and served on seven additional committees at the University of Michigan (three in Survey Methodology, one in Public Health, one in Architecture); co-chaired two committees at the University of Maryland (Survey Methodology) and served on five additional committees there (Survey Methodology); served on five committees at the New School for Social Research (Psychology); served on one committee at Vrije Universiteit of Amsterdam (Research Methods); and served on one committee at George Mason University (Psychology).

Masters Committees

Served on committee in Department of Sociology, Darmstadt University, Germany

Served on three committees in Department of Psychology, New School for Social Research

Professional Activities

Editorial:
Associate Editor, Journal of Official Statistics (2002 - 2011)
Member of Editorial Board, Public Opinion Quarterly (2006 – 2009, 2013-2018)
Member of Advisory Board, Public Opinion Quarterly (2015-2018)
Co-Editor, Applied Cognitive Psychology, Special Issue on Cognitive Aspects of Survey Methodology (2007)

Panels/Committees:
US Food and Drug Administration Public Workshop, Data and Methods for Evaluating the Impact of Opioid Formulation with Properties Designed to Deter Abuse in the Postmarket Setting, Invited Panelist (July, 2017)
National Academy of Science/Committee on National Statistics, Panel on the Review and Evaluation of the 2014 Survey of Income and Program Participation Content and Design, Consultant (October 2015)
National Academy of Science/Committee on National Statistics, Standing Committee on Integrating New Behavioral Health Measures Into The Substance Abuse And Mental Health Services Administration’s Data Collection Programs (2015-16)
National Institute of Statistical Science, Expert Panel on Assessment and Reporting of Contributions of Women and New/Beginning Farmers to US Agriculture, National Agricultural Statistics Service (April – June, 2015)
National Academy of Science/Committee on National Statistics, Panel to Review the Commercial Buildings Energy Consumption Survey and the Residential Energy Consumption Survey (2009-11)
National Academy of Science/Committee on National Statistics, Panel to Review the Agricultural Resource Management Survey (2007-8)
American Association for Public Opinion Research Education Committee, Member, 2005-6

Review:
Acta Psychologica
American Educational Research Association
Applied Cognitive Psychology
Assessment
Cambridge University Press
Cognitive Science Society Annual Conference, 1998, 2001
Discourse Processes
International Journal of Public Opinion Research
Field Methods
Glaser Foundation
Human Computer Interaction


Human Communication Research
Human Factors
International Journal for Public Opinion Research
International Journal of Social Research Methods
Journal of the American Statistical Association
Journal of Marketing
Journal of Official Statistics
Journal of Survey Statistics and Methodology
Lawrence Erlbaum Associates, Publishers
Memory and Cognition
National Academy of Science, Panel report on “Nonresponse in Social Science Data Collection: A Research Agenda”
National Institutes of Health (Grant review panel)
National Science Foundation (Grant review panels, Social, Behavioral and Economic Sciences and Information Science and Engineering)
Oxford University Press
Psychological Bulletin
Psychological Science
Public Opinion Quarterly
SAGE
Social Science Computer Review
Sociological Research and Methods
Survey Methodology
John Wiley & Sons, Inc.

Program Editor, The Third Practical Aspects of Memory Conference

Professional Memberships
American Association for Public Opinion Research
American Statistical Association
Association for Psychological Science
European Survey Research Association
Midwest Association for Public Opinion Research


EXHIBIT CCG-R-4

WRITTEN REBUTTAL TESTIMONY OF DANIELLE BOUDREAU

Written Rebuttal Testimony of Danielle Boudreau CBC – BUSINESS AND RIGHTS 2010-2013 Cable Royalty Distribution Proceeding Docket No. 14-CRB-0010-CD (2010-2013)

September 15, 2017

1. Introduction

I am Senior Specialist of Business and Rights for the Canadian Broadcasting Corporation/Radio-Canada (CBC) at the Head Office in Ottawa. On behalf of the Canadian Claimants Group (“CCG”), I have previously submitted Written Direct Testimony (and corrections dated May 16, 2017) during this Allocation Phase proceeding as Exhibit CCG-1 (Corrected). In that testimony, I described my experience and background.

In this rebuttal phase I am providing additional testimony that (a) describes supplemental signal content information I provided to Dr. Lisa George; (b) supplements my corrected testimony regarding Devotional programming on distantly retransmitted Canadian signals in response to claims made by the Settling Devotional Claimants in their direct case as amended and corrected; and (c) describes a minor error I discovered in our content data related to television station CIII for December 2011.

2. Supplemental Signal Content Information

To assist Dr. Lisa George with her rebuttal testimony, I undertook two tasks: (a) I provided a breakdown of content on distantly retransmitted Canadian signals by account period; and (b) I reviewed certain program titles Dr. George extracted from the regression of Dr. Mark Israel, submitted on behalf of Joint Sports Claimants.

Exhibit CCG-R-4 (Boudreau), Page 1

a. Account Period Content Data

For her Corrected Amended Written Direct Testimony, I provided Dr. George with data reporting the number of hours and share of programming in four claimant categories on Canadian signals broadcast on a distant basis. These data were organized on a full year basis for each of the years covered by this proceeding, 2010 through 2013. For her rebuttal testimony, Dr. George asked me to supply her with the same data on an accounting period basis. I prepared the requested data using the same techniques described in my prior testimony, Exhibit CCG-1 (Corrected).

b. Review of Dr. George’s Table 1: Major Broadcasts Misclassified by Joint Sports Claimants

Table 1 in Dr. George's written rebuttal testimony lists 50 widely-broadcast programs on Canadian distant signals constituting over 8,000 hours of programming. These programs were classified in Dr. Israel's data as Program Supplier programming. At Dr. George's request, I reviewed the list of titles. Based on my professional knowledge of programming on Canadian signals, and supplemented by additional research for some titles, I have confirmed that each of these broadcasts should be classified as CCG programs. The programs appear in CRTC logs as Canadian or other non-US programming and were thus classified in my analysis as CCG programming.

3. Devotional Programming on Canadian Distant Signals

In this section of my rebuttal testimony, I provide additional information regarding programming content on distantly retransmitted Canadian signals, as well as information on the extent of such retransmissions in response to the direct case of Settling Devotional Claimants (“SDC”).


In the Testimony of John S. Sanders, submitted with the Amended Written Direct Statement of Settling Devotional Claimants (“Sanders Testimony”), Mr. Sanders criticizes the study done by Dr. Lisa George, claiming that it does not properly account for SDC content (which is sometimes referred to as “Devotional programming”). (Sanders Testimony, p. 18, n. 27 & n. 29.) He also claims, regarding the surveys, that “a portion of the broadcasts attributable to the Canadian Claimants clearly represents compensable programming in the Devotional category and should be allocated to the Devotional Claimants, accordingly.” (Sanders Testimony at p. 27, ¶ 35). He later quantifies this at approximately 0.5% of the entire royalty fund. (Sanders Testimony at p. 28, ¶ 37; p. 30, ¶ 41.)

In the Testimony of Dr. Erkan Erdem, also submitted with the Amended Written Direct Statement of Settling Devotional Claimants (“Erdem Testimony”), Dr. Erdem addresses religious and Devotional broadcasts shown on Canadian signals, asserting that the amount of Devotional programming on Canadian stations is relevant in these proceedings. (Erdem Testimony at p. 6 and Exhibits 2 and 3.) It is important to note that the programming listed in Dr. Erdem’s Exhibit 2 is a mixture of both Devotional content and other non-SDC religious programming. Though the Devotional programming is claimed by SDC, the other religious programming (about two thirds of the total hours and broadcasts shown in Exhibit 2) falls within the CCG category because it is not produced in the United States.

The Devotional programming as shown in Dr. Erdem’s corrected Exhibit 3 closely matches the content data shown for SDC in my Exhibit CCG-1-D (Corrected).1 Notably, however, the SDC programming is not equally distributed among Canadian signals and

1 Dr. Erdem’s corrected Exhibit 2 appears to omit 27 and 29 hours of US-originated religious programming, in 2012 and 2013 respectively, on signal CIVT for the “Sacred Name Telecast.” In all other respects, the total hours in Dr. Erdem’s corrected Exhibit 3 match, within an hour or two each year, the totals I present in my corrected Exhibit CCG-1-D.

Canadian signals are not retransmitted equally within the United States. Some signals, and therefore their content, reach far more US cable system subscribers than other signals.

My Exhibit CCG-1-D (Corrected) lists the percentage of claimant content that appears on distantly retransmitted Canadian signals. Using the information provided in that exhibit, Table 1 below lists just those distantly retransmitted signals with SDC programming and shows the percentage of SDC programming found each year on such signals. (Note, in this table I use the corrected 2011 CIII content percentage discussed in Section 4, below.)

Table 1 Settling Devotional Claimant Content on Distantly Retransmitted Canadian Signals

Signal   2010    2011    2012    2013
CFCF     0.29%   0.00%   0.00%   0.00%
CFTO     0.29%   0.00%   0.00%   0.00%
CIII     3.92%   4.19%   3.84%   3.74%
CISA     4.03%   4.37%   *       0.86%
CIVT     *       *       0.34%   0.37%
CJOH     0.36%   0.00%   0.00%   0.27%
CKLT     0.22%   0.00%   0.00%   0.00%
CKWS     3.31%   3.50%   2.30%   1.89%
CKY      1.71%   1.58%   1.78%   2.25%

*CIVT was not distantly retransmitted in 2010 or 2011; CISA was not distantly retransmitted in 2012.

I present data to consider the reach of those signals in two tables. Table 2, below, presents the number of distant subscriber instances for each retransmitted Canadian signal. This table is based on data from Cable Data Corporation. The Written Direct Testimony of Jonda Martin on behalf of CCG, previously submitted as Exhibit CCG-4, explains the source of CDC’s data and how distant subscriber instances are calculated.


Table 2 Distant Subscriber Instances for Canadian Signals 2010-2013, Average Per Accounting Period

Signal*   2010        2011       2012       2013
CBUT      1,014,907   966,581    890,330    919,794
CKSH      368,824     355,378    376,637    367,641
CBMT      270,095     272,072    274,453    259,615
CFTO      224,288     213,637    225,240    210,241
CBET      167,787     221,990    212,586    238,445
CBLT      201,175     191,437    201,644    188,028
CKWS      73,734      110,822    108,446    99,186
CHLT      90,687      82,885     82,900     80,123
CBFT      66,689      76,586     84,355     85,637
CIII      47,750      25,673     24,893     24,642
CBWT      26,094      29,499     30,993     32,420
CJOH      20,706      31,957     31,940     31,142
CIVT      -           -          22,645     47,769
CBAFT     13,661      12,608     12,121     11,500
CISA      19,425      19,146     -          1,293
CFCF      8,093       9,681      10,095     10,279
CIMT      8,898       8,253      7,968      7,581
CKLT      4,341       4,540      4,835      4,813
CHCH      4,206       3,863      3,797      3,617
CBAT      9,394       173        191        4,619
CKY       2,038       2,009      2,061      1,797
CFTM      1,169       -          -          -
CBOT      220         325        309        272

*Sorted by the sum of each signal's average distant subscriber instances over four years.

Table 3, below, lists the total distant royalties reported by Cable Data Corporation for each retransmitted Canadian signal. An explanation of total distant royalties can be found in the Written Direct Testimony of Jonda Martin, previously submitted as Exhibit CCG-4.

Table 3 Total Distant Royalties for Canadian Signals 2010-2013, By Year

Signal*   2010         2011         2012         2013
CBUT      $1,939,933   $1,999,545   $2,100,510   $2,249,095
CBMT      $1,185,517   $1,147,723   $1,072,223   $1,085,800
CKSH      $654,486     $651,463     $674,577     $675,492
CBET      $296,386     $383,894     $407,683     $536,308
CFTO      $318,891     $315,558     $312,694     $305,685
CBLT      $228,888     $230,183     $226,170     $220,660
CBFT      $144,325     $148,973     $156,137     $165,114
CKWS      $147,965     $150,035     $145,822     $137,852
CHLT      $160,685     $144,393     $132,868     $128,722
CBWT      $77,076      $84,499      $91,573      $97,246
CIII      $63,251      $42,381      $44,802      $49,431
CIVT      -            -            $58,929      $136,864
CJOH      $33,867      $36,477      $37,594      $36,995
CISA      $65,358      $60,532      -            $9,304
CBAFT     $21,691      $19,399      $17,410      $16,653
CFCF      $13,104      $13,343      $13,680      $14,484
CIMT      $13,689      $12,480      $11,185      $10,727
CKY       $7,598       $8,195       $8,686       $8,083
CKLT      $7,047       $7,317       $7,873       $7,119
CBAT      $16,424      $288         $292         $6,764
CHCH      $4,837       $4,903       $4,874       $4,476
CBOT      $1,237       $1,275       $1,251       $1,115
CFTM      $2,045       -            -            -
Total     $5,404,302   $5,462,856   $5,526,837   $5,903,990

*Sorted by each signal's total distant royalties over four years.

Finally, Tables 2 and 3 can be compared to Exhibit CCG-1-D (Corrected), which shows the Canadian signal content percentages. Doing so reveals that SDC content appears on the signals with fewer distant subscribers and lower royalties.


Figure 1 below provides one method of comparing these content and carriage data using weighted averages. The figure shows the effect of weighting each signal’s content percentages by its distant subscriber instances and compares the result to a simple average of the content on the signals. The simple average is simply the mean, across all signals, of the content shares listed for each signal in Exhibit CCG-1-D (Corrected) for each of the four claimant categories with content on Canadian signals. The weighted average multiplies each signal’s content percentage by that signal’s distant subscriber instances, sums those products, and divides by the total distant subscriber instances. This is done for all four content types. (Note, in these calculations I use the corrected 2011 CIII content percentage discussed in Section 4, below.)
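The weighted-average method described above can be sketched as follows. This is an illustrative calculation only, using the 2010 SDC content shares and distant subscriber instances for three signals drawn from Tables 1 and 2 (CIII, CBUT, CKWS); the actual exhibit calculation covers all distantly retransmitted Canadian signals and all four claimant categories.

```python
# Illustrative subset: (SDC content share, distant subscriber instances)
# for three signals in 2010, taken from Tables 1 and 2 above.
signals = {
    "CIII": (0.0392, 47_750),
    "CBUT": (0.0000, 1_014_907),
    "CKWS": (0.0331, 73_734),
}

# Simple average: the mean of the per-signal content shares.
simple_avg = sum(share for share, _ in signals.values()) / len(signals)

# Weighted average: each signal's share weighted by its distant
# subscriber instances, divided by total distant subscriber instances.
total_subs = sum(subs for _, subs in signals.values())
weighted_avg = sum(share * subs for share, subs in signals.values()) / total_subs
```

Because the SDC content in this subset sits on the signals with relatively few distant subscribers, the weighted average comes out well below the simple average, which is the pattern the testimony describes.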

Using this weighting method, SDC programming’s weighted-average content share is under one quarter of one percent of all programming reaching distant subscribers. Also notable is that the weighted average of CCG content increases substantially over its simple average. The weighted average of programming attributable to the Joint Sports Claimants increases, and the weighted average of programming attributable to Program Suppliers declines.

[Figure 1: Simple and weighted averages of claimant content shares on distantly retransmitted Canadian signals, Exhibit CCG-R-4, pages 7-8]

4. Minor Error in Content Data

In this section, I address a minor error in our content data.

Since filing my corrected testimony in May 2017, I have learned that our database at the CBC contained certain incorrect data for the station CIII from December 2011. As described in my Written Direct Testimony, Canadian stations submit logs of the programs they broadcast each month to the Canadian Radio-television and Telecommunications Commission (CRTC). After CBC first imported CIII’s December 2011 log data into its database, CIII submitted an updated log to the CRTC for December 2011. When CBC became aware of this update, we imported the new updated log into the database. Unfortunately, the old data was not removed. That old data was carried forward in all our reports and exhibits, as well as in the data we supplied to Dr. George and provided to the other parties in discovery. We have now removed the old data. Because the old data was distributed relatively evenly among all claimant categories on the signal, the effect on claimant category content percentages is very small. The changes can be seen in Table 4, below.

Table 4 CIII Content for 2011

Claimant   Prior Content   Revised Content   Net Change
Group      Shares          Shares
CCG        56.33%          56.21%            -0.12%
PS         39.50%          39.60%            +0.10%
JSC        0.00%           0.00%             0.00%
SDC        4.17%           4.19%             +0.02%
Total      100.00%         100.00%           0.00%

To be clear, this issue was not an error in underlying CRTC data. The issue only affected CIII for the month of December 2011. No other signals or months were affected.
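The import error described above can be sketched as follows. All station, program, and row values here are invented for illustration; the point is only the mechanism: re-importing an updated monthly log without first deleting the prior rows leaves both versions in the database, double-counting that station-month.

```python
# Hypothetical sketch of the log-import error. Each row is
# (station, month, program, claimant_category); all values invented.
db = []

original_log = [("CIII", "2011-12", "Show A", "CCG"),
                ("CIII", "2011-12", "Show B", "PS")]
updated_log = [("CIII", "2011-12", "Show A", "CCG"),
               ("CIII", "2011-12", "Show C", "CCG")]

# Faulty import: the updated log is appended while the old rows remain,
# so December 2011 for CIII is represented twice.
db.extend(original_log)
db.extend(updated_log)
assert len(db) == 4  # both versions of the month are present

# Corrected import: drop the station-month's old rows before re-importing.
db = [row for row in db if not (row[0] == "CIII" and row[1] == "2011-12")]
db.extend(updated_log)
```

Because only one station-month was affected, deleting and re-importing that single slice corrects the database without touching any other signal or month.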
