UNITED STATES DISTRICT COURT
DISTRICT OF ARIZONA

Tyler Bowyer; Michael John Burke; Nancy Cottle; Jake Hoffman; Anthony Kern; Christopher M. King; James R. Lamon; Sam Moorhead; Robert Montgomery; Loraine Pellegrino; Greg Safsten; Salvatore Luke Scarmardo; Kelli Ward; and Michael Ward,
        Plaintiffs,
v.
Doug Ducey, in his official capacity as Governor of the State of Arizona; and Katie Hobbs, in her official capacity as Arizona Secretary of State,
        Defendants.
MARICOPA COUNTY BOARD OF SUPERVISORS; and ADRIAN FONTES, in his official capacity as Maricopa County Recorder,
        Intervenors.

No. CV-20-02321-PHX-DJH

EXPERT REPORT OF PROFESSOR GARY KING

Pursuant to 28 U.S.C. § 1746, I hereby verify that the following statements are true and correct to the best of my knowledge:

Report of Gary King

In this report, I evaluate the evidence described and the conclusions drawn in several Exhibits offered by the Plaintiffs in this case. I conclude that the evidence is insufficient to support conclusions about election fraud. Throughout, the authors repeatedly break the chain of evidence – from the 2020 election, to the data analyzed, to the quantitative results presented, to the conclusions drawn – and as such the Exhibits cannot be relied on. In addition, the Exhibits make many crucial assumptions without justification, discussion, or even recognition, each of which can lead to substantial bias that goes unrecognized and uncorrected. The data analytic and statistical procedures used in the Exhibits for data provenance, data analysis, replication information, and statistical analysis all violate professional standards and should be disregarded.

Exhibit 2: "An Analysis of Surveys Regarding Absentee Ballots Across Several States" by William M. Briggs

1. Summary: The conclusions of this Exhibit are not supported by the evidence provided. The lack of crucial information about the survey violates professional standards in this field and is insufficient to support the stated conclusions.

2. Proper survey research requires precise details about all of the following (among others), none of which appear in the Exhibit:
   a. A probability sample, which normally involves (a) an enumerated list of all members of the target population of interest and (b) a known random mechanism for selecting members of the population to be interviewed. (For example, we could have a list of all voters and sample selection conducted by random lottery, where each voter has an equal probability of selection; see the illustrative sketch following this list.)
   b. Detailed information about the entire chain of evidence, from the election we are studying, to the quantitative information in the dataset to be analyzed, to the numerical results.
   c. The response rate, including precisely how this rate was computed, and precise information about how those who responded to the survey differed from those who refused (which indicates how representative the survey respondents are and whether adjustments need to be made during statistical analysis).
   d. Carefully worded and validated survey questions. Surveys are well known to be highly sensitive to the specific questions asked (for example, using the word "baby" in a question about "attitudes toward abortion rights" can completely change respondent answers), and so best practice in the field requires pretesting, cognitive debriefing, and clear, measurable validation. Without these steps, we cannot know whether the answer a respondent gives reflects the specific views we seek to measure.
   e. Detailed information about survey response biases. Retrospective surveys, such as this one, are well known to have substantial biases, which must be studied, known, and corrected. This retrospective survey, in particular, was conducted while the President was claiming election fraud, and so we need to know whether supporters and opponents of the President responded to this survey in ways that might bias the results toward their favored positions. Even in elections without this behavior, retrospective studies are well known to give incorrect answers about whom each respondent voted for and whether they turned out to vote in the first place. (For one of many examples, more survey respondents typically report having voted than there were voters.) These and other types of biases can be large, but correcting them is impossible if – as in the present Exhibit – they are not measured and reported, and proper statistical techniques are not used.
   f. Statistical analysis methods must be developed to adjust for all the information in items a–e. Applying simple means or counts to the data without adapting them to all of the above – as was done in the Exhibit – can yield highly misleading results.
   g. Complete information must be provided about how the data was analyzed. The standard of information reporting (now used widely in the academic literature) is that it must be sufficiently detailed that a third party would be able to replicate the results in the Exhibit without talking to the original author. See Gary King. 1995. "Replication, Replication." PS: Political Science and Politics, 28, Pp. 444-452. Copy at https://j.mp/2oSOXJL.
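To make item 2(a) concrete, the following is a minimal sketch (not drawn from the Exhibit or from any actual voter file; the population size, sample size, and random seed are hypothetical placeholders) of an equal-probability random sample drawn from an enumerated population list:

    import random

    # A minimal sketch of the probability sample described in item 2(a):
    # (1) an enumerated list of the target population, and
    # (2) a known random mechanism giving each member an equal chance of selection.
    # The population size, sample size, and seed below are hypothetical placeholders.
    voter_file = [f"voter_{i:07d}" for i in range(1_000_000)]  # enumerated population (N)
    n = 1_000                                                  # planned number of interviews

    rng = random.Random(12345)           # a documented seed lets a third party replicate the draw
    sample = rng.sample(voter_file, n)   # simple random sample without replacement

    # Every listed voter has the same known inclusion probability, n / N,
    # which is what makes design-based estimation (and its uncertainty) possible.
    print(n / len(voter_file))           # 0.001

Documenting each of these elements (the population list, the selection mechanism, and a record of the draw) is what allows a third party to verify and replicate the design; no such documentation appears in Exhibit 2.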
3. The following two sentences are a summary of the information Exhibit 2 provides about its sampling procedures: "Survey data was collected from individuals in several states, sampling those who the states listed as not returning absentee ballots. The data was provided by Matt Braynard."
   a. None of the terms in these sentences are defined. We do not know what "survey data was collected", who collected it, how it was collected, etc. The Exhibit does not say which "individuals" were surveyed or who was approached to answer a survey question. The term "several states" is not defined. Where we can find "those who the states listed" is not reported.
   b. Thus, this Exhibit excludes information necessary for making valid scientific inferences and drawing accurate conclusions. The Exhibit itself violates academic standards and cannot be relied on for the purposes claimed.

4. Exhibit 2A reports several undocumented and unexplained numerical tables, apparently from the survey. If this is correct, the first table indicates that the survey researcher attempted to interview 81,704 people, of whom 684 completed the survey ("1-Completed Survey"), for a response rate of 0.008 (eight tenths of one percent), far below any professional standard for a modern survey.

5. The Exhibit does not measure or discuss how representative these 684 people are of the target population or how the broader group was selected, and likely biases are not addressed, corrected, quantified, or even noted, as the illustration below makes concrete.
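The following sketch illustrates the consequence of the figures in paragraphs 4 and 5. Only the counts 81,704 and 684 come from Exhibit 2A; the respondent proportion p_hat is a hypothetical placeholder, and the bound shown is a standard worst-case calculation, not a method used in the Exhibit.

    # Response-rate arithmetic from paragraph 4 and the worst-case uncertainty
    # it implies.  Only the two counts are taken from Exhibit 2A; the respondent
    # proportion p_hat below is hypothetical.
    attempted = 81_704                    # interviews attempted
    completed = 684                       # "1-Completed Survey"

    response_rate = completed / attempted
    print(f"{response_rate:.4f}")         # 0.0084, i.e., roughly 0.8 percent

    # Suppose a proportion p_hat of the 684 respondents gave a particular answer.
    # With no information about the other 81,020 sampled people, the proportion
    # among everyone sampled is bounded only by the two worst cases below.
    p_hat = 0.5                           # hypothetical respondent proportion
    lower = (completed * p_hat) / attempted
    upper = (completed * p_hat + (attempted - completed)) / attempted
    print(f"[{lower:.3f}, {upper:.3f}]")  # [0.004, 0.996]: essentially uninformative

Nothing in the Exhibit supplies the nonresponse information described in items 2(c) and 2(e) above that would be needed to narrow this range, which is why estimates computed from the 684 completed interviews alone cannot support the stated conclusions.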
6. To provide more information about the inadequacy of this report, I also list a few more specific examples from the report indicating undefined procedures, unprofessional methods, and unjustified analyses:
   a. The Exhibit indicates that "The survey asked respondents whether they (a) had ever requested an absentee ballot". Unfortunately, the precise survey question was withheld, which is sufficient to reject any conclusions based on this question; but if we take the phrase literally, it means a respondent could answer yes if they requested an absentee ballot in any election, including those prior to the 2020 Presidential election, which is obviously irrelevant to the present case.
   b. The Exhibit reads: "If so, (b) whether they had in fact returned the ballot". The Exhibit needs to provide evidence that the respondent is interpreting the word "returned" in the same way as the Exhibit, which itself is not precisely defined. Does it mean mailed, dropped off, received, counted, or something else? Seemingly small issues like this can greatly bias statistical conclusions if they are not known and adjusted for.
   c. The Exhibit says "I produce predictions…" but does not indicate how these predictions were made. No information is provided, and so the claimed "errors" have no relevance for drawing conclusions, since they could easily be errors in the author's own predictions, computed with secretive procedures.
   d. Almost regardless of how the predictions were produced from the survey data, it is inconceivable that the author could reliably estimate what the Exhibit calls "Error #1, those who were recorded as receiving absentee ballots without requesting them" or "Error #2, those who returned absentee ballots but whose votes went missing". Methods do not exist that can do this without knowing considerably more than the Exhibit provided.
   e. "The size of the errors were large" – No metrics were provided for the errors, and so "large" is undefined.
   f. The report also apparently references data about official records, such as the number of absentee ballots and the number returned. The report does not give the origin of this information. The chain of evidence for this information (just as with the survey) must be made available. With any break in the chain – and the links in the Exhibit are mostly missing – no reliable conclusions can be drawn from the data.
   g. The report fails to explain how the quantitative tables that appear in the body of the report were constructed. As such, they cannot be interpreted and no reliable conclusions can be drawn from them.
   h. The report fails to provide information about how the figures in the Appendix were constructed. As such, no conclusions can be drawn from them.

Exhibit 4: Declaration of [redacted]

1. Summary: The statistical methods used in this Exhibit do not represent best practice in current scholarship and can easily induce bias.