UNITED STATES DISTRICT COURT
DISTRICT OF ARIZONA

Tyler Bowyer; Michael John Burke; Nancy Cottle; Jake Hoffman; Anthony Kern; Christopher M. King; James R. Lamon; Sam Moorhead; Robert Montgomery; Loraine Pellegrino; Greg Safsten; Salvatore Luke Scarmardo; Kelli Ward; and Michael Ward, Plaintiffs,

v.

Doug Ducey, in his official capacity as Governor of the State of Arizona; and Katie Hobbs, in her official capacity as Arizona Secretary of State, Defendants.

MARICOPA COUNTY BOARD OF SUPERVISORS; and ADRIAN FONTES, in his official capacity as Maricopa County Recorder, Intervenors.

No. CV-20-02321-PHX-DJH

EXPERT REPORT OF PROFESSOR GARY KING

Pursuant to 28 U.S.C. § 1746, I hereby verify that the following statements are true and correct to the best of my knowledge:

Report of Gary King

In this report, I evaluate evidence described and conclusions drawn in several Exhibits offered by the Plaintiffs in this case. I conclude that the evidence is insufficient to support conclusions about election fraud. Throughout, the authors repeatedly break the chain of evidence – from the 2020 election, to the data analyzed, to the quantitative results presented, to the conclusions drawn – and as such the Exhibits cannot be relied on. In addition, the Exhibits make many crucial assumptions without justification, discussion, or even recognition – each of which can lead to substantial bias that goes unrecognized and uncorrected. The data analytic and statistical procedures used in the Exhibits for data provenance, data analysis, replication information, and statistical analysis all violate professional standards and should be disregarded.

Exhibit 2: "An Analysis of Surveys Regarding Absentee Ballots Across Several States" by William M. Briggs

1. Summary: The conclusions of this Exhibit are not supported by the evidence provided. Crucial information about the survey is missing, in violation of professional standards in this field, and what is provided is insufficient to support the stated conclusions.

2. Proper survey research requires precise details about all of the following (among others), none of which appear in the Exhibit:

a. A probability sample, which normally involves (a) an enumerated list of all members of the target population of interest and (b) a known random mechanism for selecting members of the population to be interviewed. (For example, we could have a list of all voters, with sample selection conducted by random lottery in which each voter has an equal probability of selection; a sketch of such a mechanism follows this list.)

b. Detailed information about the entire chain of evidence, from the election we are studying, to the quantitative information in the dataset to be analyzed, to the numerical results.

c. The response rate, including precisely how this rate was computed, and precise information about how those who responded to the survey differed from those who refused (which indicates how representative the survey respondents are and whether adjustments need to be made during statistical analysis).

d. Carefully worded and validated survey questions. Surveys are well known to be highly sensitive to the specific questions asked (for example, using the word "baby" in a question about "attitudes toward abortion rights" can completely change respondent answers), and so best practices in the field require pretesting, cognitive debriefing, and clear, measurable validation. Without these steps, we cannot know whether the answer a respondent gives reflects the specific views we seek to measure.

e. Detailed information about survey response biases. Retrospective surveys, such as this one, are well known to have substantial biases, which must be studied, known, and corrected. This retrospective survey, in particular, was conducted while the President was claiming election fraud, and so we need to know whether supporters and opponents of the President responded to this survey in ways that might bias the results toward their favored positions. Even in elections without this behavior, retrospective studies are well known to give incorrect answers about whom each respondent voted for and whether they turned out to vote in the first place. (For one of many examples, more survey respondents typically report having voted than there were voters.) These and other types of biases can be large, but correcting them is impossible if – as in the present Exhibit – they are not measured and reported, and proper statistical techniques are not used.

f. Statistical analysis methods must be developed to adjust for all the information in items a–e. Applying simple means or counts to the data without adapting them to all of the above – as was done in the Exhibit – can yield highly misleading results.

g. Complete information must be provided about how the data were analyzed. The standard of information reporting (now used widely in the academic literature) is that it must be sufficiently detailed that a third party could replicate the results in the Exhibit without talking to the original author. See Gary King. 1995. "Replication, Replication." PS: Political Science and Politics, 28, Pp. 444–452. Copy at https://j.mp/2oSOXJL.
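To make the mechanism described in item 2(a) concrete, the following minimal sketch draws an equal-probability sample from an enumerated voter list. The frame size, sample size, and random seed are invented for illustration only; Exhibit 2 documents no analogous frame or selection mechanism.

```python
import random

# Minimal sketch of the probability sample described in item 2(a): an
# enumerated sampling frame plus a known, equal-probability random selection
# mechanism. The frame size, sample size, and seed are hypothetical.
voter_frame = [f"voter_{i:06d}" for i in range(500_000)]  # enumerated list of all voters

random.seed(2020)                             # a documented seed makes the draw reproducible
sample = random.sample(voter_frame, k=1_000)  # each voter selected with probability k/N

print(f"Selection probability per voter: {len(sample) / len(voter_frame):.4%}")
# Selection probability per voter: 0.2000%
```

Documenting the frame, the selection mechanism, and the seed is what allows a third party to verify that every member of the target population had a known chance of selection.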
3. The following two sentences are a complete summary of the information Exhibit 2 provides about its sampling procedures: "Survey data was collected from individuals in several states, sampling those who the states listed as not returning absentee ballots. The data was provided by Matt Braynard."

a. None of the terms in these sentences are defined. We do not know what "survey data was collected", who collected it, how it was collected, etc. The Exhibit does not say which "individuals" were surveyed or who was approached to answer a survey question. The term "several states" is not defined. Where we can find "those who the states listed" is not reported.

b. Thus, this Exhibit excludes information necessary for making valid scientific inferences and drawing accurate conclusions. The Exhibit itself violates academic standards and cannot be relied on for the purposes claimed.

4. Exhibit 2A reports several undocumented and unexplained numerical tables, apparently from the survey. If this is correct, the first table indicates that the survey researcher attempted to interview 81,704 people, of whom 684 completed the survey ("1- Completed Survey"), for a response rate of roughly 0.008 (eight-tenths of one percent), far below any professional standard for a modern survey; the sketch below works through this arithmetic and one of its consequences.

5. The Exhibit does not measure or discuss how representative these 684 people are of the target population or how the broader group was selected, and likely biases are not addressed, corrected, quantified, or even noted.
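To fix ideas for paragraphs 4 and 5, the following sketch reproduces the response-rate arithmetic and then simulates one simple way a sub-one-percent, self-selected response can bias a naive estimate. The attempted-contact and completion counts are those quoted from Exhibit 2A above; the population share and differential response rates are assumptions invented purely for illustration.

```python
# Response-rate arithmetic from paragraph 4 (counts as quoted from Exhibit 2A).
attempted = 81_704
completed = 684
print(f"Response rate: {completed / attempted:.4f} ({completed / attempted:.2%})")
# Response rate: 0.0084 (0.84%)

# Hypothetical nonresponse bias: suppose 10% of the target population truly
# experienced the event a question asks about, but affected people respond at
# five times the rate of unaffected people (the topic is salient to them).
# Both response rates below are invented assumptions, not Exhibit quantities.
true_share = 0.10
rate_affected, rate_unaffected = 0.025, 0.005
responders_affected = true_share * rate_affected
responders_unaffected = (1 - true_share) * rate_unaffected
naive_estimate = responders_affected / (responders_affected + responders_unaffected)
print(f"True population share: {true_share:.0%}; naive survey estimate: {naive_estimate:.0%}")
# True population share: 10%; naive survey estimate: 36%
```

The point is not that these particular numbers describe Exhibit 2's survey, but that without the reporting described in items 2(c) and 2(e), a reader cannot distinguish a faithful estimate from one distorted this badly.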
6. To provide more information about the inadequacy of this report, I also list a few more specific examples from the report indicating undefined procedures, unprofessional methods, and unjustified analyses:

a. The Exhibit indicates that "The survey asked respondents whether they (a) had ever requested an absentee ballot". Unfortunately, the precise survey question was withheld, which is sufficient to reject any conclusions based on this question; but if we take the phrase literally, it means a respondent could answer yes if they requested an absentee ballot in any election, including those prior to the 2020 Presidential election, which is obviously irrelevant to the present case.

b. The Exhibit reads: "If so, (b) whether they had in fact returned the ballot". The Exhibit needs to provide evidence that the respondent is interpreting the word "returned" in the same way as the Exhibit, which itself is not precisely defined. Does it mean mailed, dropped off, received, counted, or something else? Seemingly small issues like this can greatly bias statistical conclusions if not identified and adjusted for.

c. The Exhibit says "I produce predictions…" but does not indicate how these predictions were made. No information is provided, and so the claimed "errors" have no relevance for drawing conclusions, since they could easily be errors in the author's predictions, computed with secretive procedures (the sketch following this list illustrates the point).

d. Almost regardless of how the predictions were produced from survey data, it is inconceivable that the author could reliably estimate what the Exhibit calls "Error #1, those who were recorded as receiving absentee ballots without requesting them" or "Error #2, those who returned absentee ballots but whose votes went missing". Methods do not exist that can do this without knowing considerably more than the Exhibit provided.

e. "The size of the errors were large" – No metrics were provided for the errors, and so "large" is undefined.

f. The report also apparently references data about official records, such as the number of absentee ballots and the number returned. The report does not give the origin of this information. The chain of evidence for this information (just as with the survey) must be made available. With any break in the chain – and the links in the Exhibit are mostly missing – no reliable conclusions can be drawn from the data.

g. The report fails to explain how the quantitative tables that appear in the body of the report were constructed. As such, they cannot be interpreted and no reliable conclusions can be drawn from them.

h. The report fails to provide information about how the figures in the Appendix were constructed. As such, no conclusions can be drawn from them.
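To illustrate items 6(c) through 6(e), the following sketch simulates ballot-return data containing no fraud at all and scores it against a naive prediction; large "errors" appear anyway. Every quantity here is simulated and hypothetical, and the naive predictor is my own stand-in, not the Exhibit's undisclosed method.

```python
import numpy as np

# Simulated illustration of items 6(c)-(e): large "prediction errors" can arise
# purely from a crude prediction method, with no fraud in the data. All values
# are simulated; none come from the Exhibit.
rng = np.random.default_rng(0)
n_units = 200                                  # hypothetical reporting units
return_rate = rng.uniform(0.3, 0.9, n_units)   # true per-unit return propensity
returned = rng.binomial(1000, return_rate)     # ballots returned per 1,000 sent; zero fraud

# A naive stand-in "prediction": every unit returns ballots at the overall mean rate.
prediction = np.full(n_units, returned.mean())
errors = returned - prediction
print(f"Mean absolute 'error': {np.abs(errors).mean():.0f} ballots per 1,000 sent")
# Roughly 150 ballots per 1,000 sent: "large" errors from a fraud-free process.
```

Unless the prediction method, its calibration, and its error metric are disclosed, the size of the reported "errors" carries no evidential weight.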
Exhibit 4: Declaration of [redacted]

1. Summary: The statistical methods used in this Exhibit do not represent best practice in current scholarship and can easily induce bias.