Statistical Tests, P Values, Confidence Intervals, and Power


Case 3:16-md-02741-VC Document 656-2 Filed 10/28/17 Page 1 of 15 EXHIBIT 107

Eur J Epidemiol (2016) 31:337–350. DOI 10.1007/s10654-016-0149-3. ESSAY.

Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations

Sander Greenland (1), Stephen J. Senn (2), Kenneth J. Rothman (3), John B. Carlin (4), Charles Poole (5), Steven N. Goodman (6), Douglas G. Altman (7)

Received: 9 April 2016 / Accepted: 9 April 2016 / Published online: 21 May 2016. © The Author(s) 2016. This article is published with open access at Springerlink.com.

Editor's note: This article has been published online as supplementary material with an article of Wasserstein RL, Lazar NA, "The ASA's statement on p-values: context, process and purpose," The American Statistician, 2016. Albert Hofman, Editor-in-Chief, EJE.

Affiliations: (1) Department of Epidemiology and Department of Statistics, University of California, Los Angeles, CA, USA; (2) Competence Center for Methodology and Statistics, Luxembourg Institute of Health, Strassen, Luxembourg; (3) RTI Health Solutions, Research Triangle Institute, Research Triangle Park, NC, USA; (4) Clinical Epidemiology and Biostatistics Unit, Murdoch Children's Research Institute, School of Population Health, University of Melbourne, Melbourne, VIC, Australia; (5) Department of Epidemiology, Gillings School of Global Public Health, University of North Carolina, Chapel Hill, NC, USA; (6) Meta-Research Innovation Center, Departments of Medicine and of Health Research and Policy, Stanford University School of Medicine, Stanford, CA, USA; (7) Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK. Corresponding author: Sander Greenland.

Abstract

Misinterpretation and abuse of statistical tests, confidence intervals, and statistical power have been decried for decades, yet remain rampant. A key problem is that there are no interpretations of these concepts that are at once simple, intuitive, correct, and foolproof. Instead, correct use and interpretation of these statistics requires an attention to detail which seems to tax the patience of working scientists. This high cognitive demand has led to an epidemic of shortcut definitions and interpretations that are simply wrong, sometimes disastrously so—and yet these misinterpretations dominate much of the scientific literature. In light of this problem, we provide definitions and a discussion of basic statistics that are more general and critical than typically found in traditional introductory expositions. Our goal is to provide a resource for instructors, researchers, and consumers of statistics whose knowledge of statistical theory and technique may be limited but who wish to avoid and spot misinterpretations. We emphasize how violation of often unstated analysis protocols (such as selecting analyses for presentation based on the P values they produce) can lead to small P values even if the declared test hypothesis is correct, and can lead to large P values even if that hypothesis is incorrect. We then provide an explanatory list of 25 misinterpretations of P values, confidence intervals, and power. We conclude with guidelines for improving statistical interpretation and reporting.

Keywords: Confidence intervals · Hypothesis testing · Null testing · P value · Power · Significance tests · Statistical testing

Introduction

Misinterpretation and abuse of statistical tests has been decried for decades, yet remains so rampant that some scientific journals discourage use of "statistical significance" (classifying results as "significant" or not based on a P value) [1]. One journal now bans all statistical tests and mathematically related procedures such as confidence intervals [2], which has led to considerable discussion and debate about the merits of such bans [3, 4].

Despite such bans, we expect that the statistical methods at issue will be with us for many years to come. We thus think it imperative that basic teaching as well as general understanding of these methods be improved. Toward that end, we attempt to explain the meaning of significance tests, confidence intervals, and statistical power in a more general and critical way than is traditionally done, and then review 25 common misconceptions in light of our explanations. We also discuss a few more subtle but nonetheless pervasive problems, explaining why it is important to examine and synthesize all results relating to a scientific question, rather than focus on individual findings. We further explain why statistical tests should never constitute the sole input to inferences or decisions about associations or effects. Among the many reasons are that, in most scientific settings, the arbitrary classification of results into "significant" and "non-significant" is unnecessary for and often damaging to valid interpretation of data; and that estimation of the size of effects and the uncertainty surrounding our estimates will be far more important for scientific inference and sound judgment than any such classification.

More detailed discussion of the general issues can be found in many articles, chapters, and books on statistical methods and their interpretation [5–20]. Specific issues are covered at length in these sources and in the many peer-reviewed articles that critique common misinterpretations of null-hypothesis testing and "statistical significance" [1, 12, 21–74].

Statistical tests, P values, and confidence intervals: a caustic primer

Statistical models, hypotheses, and tests

Every method of statistical inference depends on a complex web of assumptions about how data were collected and analyzed, and how the analysis results were selected for presentation. The full set of assumptions is embodied in a statistical model that underpins the method. This model is a mathematical representation of data variability, and thus ideally would capture accurately all sources of such variability. Many problems arise however because this statistical model often incorporates unrealistic or at best unjustified assumptions. This is true even for so-called "non-parametric" methods, which (like other methods) depend on assumptions of random sampling or randomization. These assumptions are often deceptively simple to write down mathematically, yet in practice are difficult to satisfy and verify, as they may depend on successful completion of a long sequence of actions (such as identifying, contacting, obtaining consent from, obtaining cooperation of, and following up subjects, as well as adherence to study protocols for treatment allocation, masking, and data analysis).

There is also a serious problem of defining the scope of a model, in that it should allow not only for a good representation of the observed data but also of hypothetical alternative data that might have been observed. The reference frame for data that "might have been observed" is often unclear, for example if multiple outcome measures or multiple predictive factors have been measured, and many decisions surrounding analysis choices have been made after the data were collected—as is invariably the case [33].

The difficulty of understanding and assessing underlying assumptions is exacerbated by the fact that the statistical model is usually presented in a highly compressed and abstract form—if presented at all. As a result, many assumptions go unremarked and are often unrecognized by users as well as consumers of statistics. Nonetheless, all statistical methods and interpretations are premised on the model assumptions; that is, on an assumption that the model provides a valid representation of the variation we would expect to see across data sets, faithfully reflecting the circumstances surrounding the study and phenomena occurring within it.

In most applications of statistical testing, one assumption in the model is a hypothesis that a particular effect has a specific size, and has been targeted for statistical analysis. (For simplicity, we use the word "effect" when "association or effect" would arguably be better in allowing for noncausal studies such as most surveys.) This targeted assumption is called the study hypothesis or test hypothesis, and the statistical methods used to evaluate it are called statistical hypothesis tests.

Most often, the targeted effect size is a "null" value representing zero effect (e.g., that the study treatment makes no difference in average outcome), in which case the test hypothesis is called the null hypothesis. Nonetheless, it is also possible to test other effect sizes. We may also test hypotheses that the effect does or does not fall within a specific range; for example, we may test the hypothesis that the effect is no greater than a particular amount, in which case the hypothesis is said to be a one-sided or dividing hypothesis [7, 8].

Much statistical teaching and practice has developed a strong (and unhealthy) focus on the idea that the main aim of a study should be to test null hypotheses. In fact most descriptions of statistical testing focus only on testing null hypotheses, and the entire topic has been called "Null Hypothesis Significance Testing" (NHST).

Specifically, the distance between the data and the model prediction is measured using a test statistic (such as a t-statistic or a Chi squared statistic). The P value is then the probability that the chosen test statistic would have been at least as large as its observed value if every model assumption were correct, including the test hypothesis. This definition embodies a crucial point lost in traditional definitions: In logical terms, the P value tests all the assumptions about how the data were generated (the entire model), not just the targeted hypothesis it is supposed to test.
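Two of the excerpt's claims lend themselves to a quick simulation: when every model assumption (including the test hypothesis) is correct, P values behave as advertised and a pre-specified test rejects at roughly the nominal rate, whereas selecting the smallest of several P values for presentation inflates small P values even though the null is true. The sketch below is ours, not the authors'; the one-sample test, sample sizes, and function names are illustrative assumptions, and a normal approximation stands in for the exact t distribution.

```python
import random, math
from statistics import mean, stdev

def p_value_two_sided(sample, mu0=0.0):
    # One-sample test of the hypothesis that the mean equals mu0.
    # For n = 30 the standard normal is an adequate stand-in for the
    # t distribution with 29 degrees of freedom.
    n = len(sample)
    t = (mean(sample) - mu0) / (stdev(sample) / math.sqrt(n))
    # Two-sided P value via the normal survival function.
    return math.erfc(abs(t) / math.sqrt(2))

random.seed(1)
trials = 2000
alpha = 0.05
hits_single = hits_best_of_5 = 0
for _ in range(trials):
    # Data generated with the test hypothesis (mean = 0) true.
    samples = [[random.gauss(0, 1) for _ in range(30)] for _ in range(5)]
    pvals = [p_value_two_sided(s) for s in samples]
    hits_single += pvals[0] < alpha       # honest protocol: one pre-specified test
    hits_best_of_5 += min(pvals) < alpha  # selective reporting: best of five analyses

print(round(hits_single / trials, 2))     # near the nominal alpha = 0.05
print(round(hits_best_of_5 / trials, 2))  # several times larger (about 1 - 0.95**5)
```

Under the pre-specified protocol the rejection rate stays near 5%; picking the smallest of five P values raises it several-fold, which is exactly the unstated-protocol violation the abstract warns about.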
Recommended publications
  • Using Epidemiological Evidence in Tort Law: a Practical Guide Aleksandra Kobyasheva
    Using epidemiological evidence in tort law: a practical guide Aleksandra Kobyasheva Introduction Epidemiology is the study of disease patterns in populations which seeks to identify and understand causes of disease. By using this data to predict how and when diseases are likely to arise, it aims to prevent the disease and its consequences through public health regulation or the development of medication. But can this data also be used to tell us something about an event that has already occurred? To illustrate, consider a case where a claimant is exposed to a substance due to the defendant’s negligence and subsequently contracts a disease. There is not enough evidence available to satisfy the traditional ‘but-for’ test of causation.1 However, an epidemiological study shows that there is a potential causal link between the substance and the disease. What is, or what should be, the significance of this study? In Sienkiewicz v Greif members of the Supreme Court were sceptical that epidemiological evidence can have a useful role to play in determining questions of causation.2 The issue in that case was a narrow one that did not present the full range of concerns raised by epidemiological evidence, but the court’s general attitude was not unusual. Few English courts have thought seriously about the ways in which epidemiological evidence should be approached or interpreted; in particular, the factors which should be taken into account when applying the ‘balance of probability’ test, or the weight which should be attached to such factors. As a result, this article aims to explore this question from a practical perspective.
  • Analysis of Scientific Research on Eyewitness Identification
    ANALYSIS OF SCIENTIFIC RESEARCH ON EYEWITNESS IDENTIFICATION. January 2018. Patricia A. Riley, U.S. Attorney's Office, DC. Table of Contents: Research does not provide a firm foundation for jury instructions of the type adopted by some courts or, in some instances, for expert testimony; Research does not provide a firm foundation for jury instructions of the type adopted by some courts and, in some instances, for expert testimony; Introduction; Courts should not comment on the evidence or take judicial notice of contested facts; Jury instructions based on flawed or outdated research should not be given; An overview of the research on eyewitness identification of strangers, new and old; Important points; System variables; Simultaneous versus sequential presentation; Double blind, blind, blinded administration (unless impracticable ...
  • Probability, Explanation, and Inference: a Reply
    Alabama Law Scholarly Commons, Working Papers, Faculty Scholarship, 3-2-2017. Probability, Explanation, and Inference: A Reply. Michael S. Pardo, University of Alabama - School of Law, [email protected]; Ronald J. Allen, Northwestern University - Pritzker School of Law, [email protected]. Follow this and additional works at: https://scholarship.law.ua.edu/fac_working_papers Recommended Citation: Michael S. Pardo & Ronald J. Allen, Probability, Explanation, and Inference: A Reply (2017). Available at: https://scholarship.law.ua.edu/fac_working_papers/4. This Working Paper is brought to you for free and open access by the Faculty Scholarship at Alabama Law Scholarly Commons. It has been accepted for inclusion in Working Papers by an authorized administrator of Alabama Law Scholarly Commons. 11 INTERNATIONAL JOURNAL OF EVIDENCE AND PROOF 307 (2007). This paper can be downloaded without charge from the Social Science Research Network Electronic Paper Collection: https://ssrn.com/abstract=2925866. By Ronald J. Allen, Wigmore Professor of Law, Northwestern University; Fellow, Procedural Law Research Center, China Political Science and Law University; and Michael S. Pardo, Assistant Professor, University of Alabama School of Law. The inferences drawn from legal evidence may be understood in both probabilistic and explanatory terms. Consider evidence that a criminal defendant confessed while in police custody. To evaluate the strength of this evidence in supporting the conclusion that the defendant is guilty, one could try to assess the probability that guilty and innocent persons confess while in police custody.
  • Patents & Legal Expenditures
    Patents & Legal Expenditures. Christopher J. Ryan, Jr. & Brian L. Frye*. Contents: I. Introduction; A. A Brief History of University Patents; B. The Origin of University Patents; C. University Patenting as a Function of Patent Policy Incentives; D. The Business of University Patenting and Technology Transfer; E. Trends in Patent Litigation; II. Data and Analysis; III. Conclusion. I. INTRODUCTION Universities are engines of innovation. To encourage further innovation, the federal government and charitable foundations give universities grants in order to enable university researchers to produce the inventions and discoveries that will continue to fuel our knowledge economy. Among other things, the Bayh-Dole Act of 1980 was supposed to encourage additional innovation by enabling universities to patent inventions and discoveries produced using federal funds and to license those patents to private companies, rather than turning their patent rights over to the government. The Bayh-Dole Act unquestionably encouraged universities to patent inventions and license their patents.
  • Jurimetrics--The Next Step Forward Lee Loevinger
    University of Minnesota Law School Scholarship Repository, Minnesota Law Review, 1949. Jurimetrics--The Next Step Forward. Lee Loevinger. Follow this and additional works at: https://scholarship.law.umn.edu/mlr Part of the Law Commons. Recommended Citation: Loevinger, Lee, "Jurimetrics--The Next Step Forward" (1949). Minnesota Law Review. 1796. https://scholarship.law.umn.edu/mlr/1796. This Article is brought to you for free and open access by the University of Minnesota Law School. It has been accepted for inclusion in the Minnesota Law Review collection by an authorized administrator of the Scholarship Repository. For more information, please contact [email protected]. MINNESOTA LAW REVIEW, Journal of the State Bar Association, Volume 33, April 1949, No. 5. JURIMETRICS: The Next Step Forward. LEE LOEVINGER*. IT IS ONE of the greatest anomalies of modern times that the law, which exists as a public guide to conduct, has become such a recondite mystery that it is incomprehensible to the public and scarcely intelligible to its own votaries. The rules which are supposed to be the guides to action of men living in society have become the secret cult of a group of priestly professionals. The mystic ritual of this cult is announced to the public, if at all, only in a bewildering jargon. Daily the law becomes more complex, citizens become more confused, and society becomes less cohesive. Of course, people do not respect that which they can neither understand nor see in effective operation. So the lawmongers bemoan the lack of respect for law. Now the lawyers are even bewailing the lack of respect for lawyers.
  • Problems with the Use of Computers for Selecting Jury Panels George Marsaglia
    Problems with the Use of Computers for Selecting Jury Panels. Author(s): George Marsaglia. Source: Jurimetrics, Vol. 41, No. 4 (Summer 2001), pp. 425-427. Published by: American Bar Association. Stable URL: http://www.jstor.org/stable/29762720. The idea of random selection, choosing by lot, is rooted in history and law. In Ancient Greece, pieces of wood, "lots," each bearing the mark of a competitor, were placed in a helmet and drawn to determine choice battle assignments, division of plunder, and the like. A provision of Lex Pompeia Provinciea required that governors of Roman provinces be chosen by lot from eligible ex-consuls. Grafton reports that choice of exiles from Germany was made by "a maner & sort of a Lot sundrie times used in the sayde lande."1 According to Plato, "[t]he ancients knew that election by lot was the most democratic of all modes of appointment."2 The tradition of choosing by lot continues, but the difficulty of collecting thousands of lots in a large "helmet" makes the task more readily suited to computer automation.
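Marsaglia's "helmet" draw is, in computational terms, a uniform random sample without replacement, and the standard unbiased way to automate it is a Fisher-Yates shuffle. The sketch below is our own illustration, not code from the article; the pool, panel size, and function name are invented.

```python
import random

def draw_jury_panel(pool, panel_size, seed=None):
    """Choose panel_size names from pool uniformly at random,
    without replacement, via a Fisher-Yates shuffle."""
    rng = random.Random(seed)
    names = list(pool)
    # Fisher-Yates: after the loop, every ordering of names is equally likely.
    for i in range(len(names) - 1, 0, -1):
        j = rng.randint(0, i)
        names[i], names[j] = names[j], names[i]
    return names[:panel_size]

pool = [f"juror-{k}" for k in range(1000)]
panel = draw_jury_panel(pool, 12, seed=42)
print(len(panel))       # 12
print(len(set(panel)))  # 12 (no juror drawn twice)
```

The problems Marsaglia's note describes typically arise when an implementation departs from this scheme, for example by using a biased shuffle or a poorly seeded generator.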
  • Teaching Statistics to Law Students
    ICOTS 5, 1998: Michael Finkelstein TEACHING STATISTICS TO LAW STUDENTS Michael O. Finkelstein, Columbia Law School, USA There is a growing need in the legal community for statistical education. More and better materials would be welcome. Education should begin at least at the law school level, with examples drawn from the law. The methods of the author’s law school class, concluding with a mock trial in which statisticians serve as witnesses, are described. The teaching moral is: tie statistical learning to real law issues. And to persuade laymen: keep it simple. Statistics has become an increasingly important part of the legal universe. In litigation, statistical proof appears in a wide variety of cases. Statistics are also used routinely by legal scholars in appraisals of the legal system. There is even a journal called Jurimetrics published by the American Bar Association that is devoted (but not exclusively) to the subject. A few years ago the United States Supreme Court in the Daubert case held that federal district judges must evaluate proffered scientific testimony and rule out mere personal opinion parading as science. Since most judges don’t know any science, this provoked some judicial anxiety. The Federal Judicial Center responded by setting up courses for judges in scientific evidence--most of it involving statistics--and producing a Reference Manual on Scientific Evidence, which prominently features statistics. The Center gave away the Reference Manual to all federal judges and then commercial publishers sold an astonishing 70,000 copies to lawyers. A new edition will appear shortly. I have read the Reference Manual and moderated several of the teaching sessions for judges.
  • Assessing Randomness in Case Assignment: the Case Study of the Brazilian Supreme Court
    Assessing randomness in case assignment: the case study of the Brazilian Supreme Court. Diego Marcondes1, Cláudia Peixoto2 and Julio Michael Stern3. April 19, 2018. 1 PhD Student, Department of Applied Mathematics, Institute of Mathematics and Statistics, University of São Paulo, Brazil. E-mail: [email protected]. 2 Professor, Department of Statistics, Institute of Mathematics and Statistics, University of São Paulo, Brazil. E-mail: [email protected]. 3 Professor, Department of Applied Mathematics, Institute of Mathematics and Statistics, University of São Paulo, Brazil. E-mail: [email protected]. I am grateful for the support received from FAPESP - the State of São Paulo Research Foundation (grants CEPID-CeMEAI 2013/07375-0 and CEPID-Shell-RCGI 2014/50279-4); and CNPq - the Brazilian National Counsel of Technological and Scientific Development (grant PQ 301206/2011-2). We are grateful for the support received from IME-USP, the Institute of Mathematics and Statistics of the University of São Paulo; and ABJ, the Brazilian Jurimetrics Association. We are also grateful for advice and comments received from Adilson Simonis, Rafael Bassi Stern and Marcelo Guedes Nunes. We thank the Supremo Tribunal Federal (Brazilian Supreme Court) which made available the data analyzed in this article. "Let the lot be shaken for all of you, and see who is chosen" (Iliad, VII, 171). "Casting the dice puts judgement quarrels to rest and keeps powerful parts separated" (Proverbs 18:18). Abstract: Sortition, i.e., random appointment for public duty, has been employed by societies throughout the years, especially for duties related to the judicial system, as a firewall designated to prevent illegitimate interference between parties in a legal case and agents of the legal system.
  • Improving Ecological Inference by Predicting Individual Ethnicity from Voter Registration Records
    Advance Access publication March 17, 2016. Political Analysis (2016) 24:263–272. doi:10.1093/pan/mpw001. Improving Ecological Inference by Predicting Individual Ethnicity from Voter Registration Records. Kosuke Imai, Department of Politics and Center for Statistics and Machine Learning, Princeton University, Princeton, NJ 08544; e-mail: [email protected]; URL: http://imai.princeton.edu (corresponding author). Kabir Khanna, Department of Politics, Princeton University, Princeton, NJ 08544. Edited by Justin Grimmer. In both political behavior research and voting rights litigation, turnout and vote choice for different racial groups are often inferred using aggregate election results and racial composition. Over the past several decades, many statistical methods have been proposed to address this ecological inference problem. We propose an alternative method to reduce aggregation bias by predicting individual-level ethnicity from voter registration records. Building on the existing methodological literature, we use Bayes's rule to combine the Census Bureau's Surname List with various information from geocoded voter registration records. We evaluate the performance of the proposed methodology using approximately nine million voter registration records from Florida, where self-reported ethnicity is available. We find that it is possible to reduce the false positive rate among Black and Latino voters to 6% and 3%, respectively, while maintaining the true positive rate above 80%. Moreover, we use our predictions to estimate turnout by race and find that our estimates yield substantially smaller amounts of bias and root mean squared error than standard ecological inference estimates. We provide open-source software to implement the proposed methodology.
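The combination step the abstract describes can be sketched with Bayes's rule: a surname-based prior P(race | surname) is updated by a geographic likelihood P(location | race), giving a posterior P(race | surname, location). All numbers and names below are invented for illustration; they are not taken from the Census Bureau's Surname List or from the article.

```python
# Toy illustration of combining a surname prior with geographic
# information via Bayes's rule. All probabilities are invented.

def posterior_race(surname_prior, geo_given_race):
    # P(race | surname, geo) is proportional to P(race | surname) * P(geo | race)
    unnorm = {r: surname_prior[r] * geo_given_race[r] for r in surname_prior}
    z = sum(unnorm.values())
    return {r: p / z for r, p in unnorm.items()}

# Hypothetical surname: P(race | surname), as a surname list might supply.
surname_prior = {"white": 0.05, "black": 0.05, "hispanic": 0.85, "other": 0.05}
# Hypothetical precinct: P(precinct | race), as Census geography might supply.
geo_given_race = {"white": 0.002, "black": 0.001, "hispanic": 0.004, "other": 0.001}

post = posterior_race(surname_prior, geo_given_race)
print(max(post, key=post.get))        # hispanic
print(round(sum(post.values()), 6))   # 1.0
```

Here the geographic information sharpens an already informative surname prior; with a less informative surname, the same update can flip the most probable category, which is why combining both sources reduces misclassification.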
  • Evidence-Based Crime Investigation. a Bayesian Approach
    Evidence-based Crime Investigation: A Bayesian Approach. Ph.D. thesis, Elisabet Lund, Department of Statistical Science, University College London, May 2012. Preface: This dissertation is about evidence and inference in crime investigation. The question is: What can and should "evidence-based" crime investigation mean? My immediate aim is to contribute to the methodological toolbox of crime investigation, but I also aspire to contribute to the concept and standard of evidence in real and public decision-making. Epistemology, methodology, and the foundational issues of different knowledge disciplines have been my basic intellectual interest since my student days. During the obligatory curriculum for a degree in political science at the University of Oslo, I gradually discovered that my talent, if any, was in methodology and analysis and not in politics. I did not have the maturity or boldness of my fellow, much more politically minded students. While they rapidly formed an opinion on how things should be on almost any social or political matter, I became stuck in the premises: How did they know that the premises of their opinions were sufficiently certain? Is not the certainty of the premises important if your opinion is to become actual policy, having real consequences for real people? I completed my degree in political science with a thesis analysing central foundational problems of political science (rationality vs. social norms; individualism vs. collectivism). Having "discovered" that the premises for these foundational conflicts were ideological too, the question became: By which criteria may we evaluate the knowledge-claims of public policies? I wanted to pursue the concept and standard of evidence and justification in practical public decision-making.
  • Auditable Blockchain Randomization Tool †
    Proceedings: Auditable Blockchain Randomization Tool †. Olivia Saa ‡ and Julio Michael Stern *,‡. IME-USP, Institute of Mathematics and Statistics of the University of São Paulo, Rua do Matão 1010, 05508-090 São Paulo, Brazil; [email protected] or [email protected]. * Correspondence: [email protected]. † Presented at the 39th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, Garching, Germany, 30 June–5 July 2019. ‡ These authors contributed equally to this work. Published: 2 December 2019. Abstract: Randomization is an integral part of well-designed statistical trials, and is also a required procedure in legal systems. Implementation of honest, unbiased, understandable, secure, traceable, auditable and collusion resistant randomization procedures is a matter of great legal, social and political importance. Given the juridical and social importance of randomization, it is important to develop procedures in full compliance with the following desiderata: (a) Statistical soundness and computational efficiency; (b) Procedural, cryptographical and computational security; (c) Complete auditability and traceability; (d) Any attempt by participating parties or coalitions to spuriously influence the procedure should be either unsuccessful or be detected; (e) Open-source programming; (f) Multiple hardware platform and operating system implementation; (g) User friendliness and transparency; (h) Flexibility and adaptability for the needs and requirements of multiple application areas (like, for example, clinical trials, selection of jury or judges in legal proceedings, and draft lotteries). This paper presents a simple and easy to implement randomization protocol that assures, in a formal mathematical setting, full compliance to the aforementioned desiderata for randomization procedures. Keywords: blockchain entropy; statistical randomization; judicial sortition. "Meos tam suspicione quam crimine judico carere oportere." (I judge that my own must be free not only from crime, but even from suspicion.)
  • Statistics in Litigation: a Selective Bibliography
    STATISTICS IN LITIGATION: A SELECTIVE BIBLIOGRAPHY. MICHAEL CHIORAZZI*. This bibliography contains articles and books dealing with the use of statistics in litigation in the United States. Coverage for the last 15 years (1969 to present) is fairly extensive but not exhaustive. Since the vast majority of relevant literature was written after 1969, earlier materials were chosen on a highly selective basis: either they were judged to have withstood the test of time or they are quoted with regularity in the current literature. ARTICLES: Abrams, Primary and Secondary Characteristics in Discrimination Cases, 23 VILL. L. REV. 35 (1977-78). Aigner & Cain, Statistical Theories of Discrimination in Labor Markets, 30 INDUS. & LAB. REL. REV. 175 (1977). Albert, Some Epistemological Aspects of Cost-Benefit Analysis, 45 GEO. WASH. L. REV. 1025 (1977). Arnold & Gold, The Use of a Public Opinion Poll on a Change of Venue Application, 21 CRIM. L.Q. 445 (1978-79). Baldus & Cole, Quantitative Proof of Intentional Discrimination, 1 EVALUATION Q. 53 (1977). Barnett, An Underestimated Threat to Multiple Regression Analysis Used in Job Discrimination Cases, 5 INDUS. REL. L.J. 156 (1982). Bass, Gesser & Mount, Scientific Statistical Methodology and the Doctrine of "Reasonable Doubt" in Criminal Law (With Specific Reference to the Breath Analysis for Blood Alcohol): Empirical Fact or Legal Fiction, 5 DALHOUSIE L.J. 350 (1979). Berk & Oppenheim, Doing Good Well: The Use of Quantitative Social Science Data in Advocacy Proceedings, 1 LAW & SOC'Y Q. 123 (1979). Bode, Auditing Affirmative Action through Multiple Regression Analysis, 31 LAB. L.J. 115 (1980). Bogen & Falcon, The Use of Racial Statistics in Fair Housing Cases, 34 MD.