Appendix A Statistical Approaches, Probability Interpretations, and the Quantification of Standards of Proof


A.1 Relevance Versus Statistical Fluctuations

The term statistics is often used to mean data or information, that is, numerical quantities based on some type of observations. Statistics in this report is used in a more specific sense to refer to methods for planning scientific studies, collecting data, and then analyzing, presenting, and interpreting the data once they are collected. Much statistical methodology has as its purpose the understanding of patterns or forms of regularity in the presence of variability and errors in observed data:

If life were stable, simple, and routinely repetitious, there would be little need for statistical thinking. But there would probably be no human beings to do statistical thinking, because sufficient stability and simplicity would not allow the genetic randomness that is a central mechanism of evolution. Life is not, in fact, stable or simple, but there are stable and simple aspects to it. From one point of view, the goal of science is the discovery and elucidation of these aspects, and statistics deals with some general methods of finding patterns that are hidden in a cloud of irrelevancies, of natural variability, and of error-prone observations or measurements (Kruskal, 1978b:1078).

When the Lord created the world and people to live in it - an enterprise which, according to modern science, took a very long time - I could well imagine that He reasoned with Himself as follows: "If I make everything predictable, these human beings, whom I have endowed with pretty good brains, will undoubtedly learn to predict everything, and they will thereupon have no motive to do anything at all, because they will recognize that the future is totally determined and cannot be influenced by any human action.
On the other hand, if I make everything unpredictable, they will gradually discover that there is no rational basis for any decision whatsoever and, as in the first case, they will thereupon have no motive to do anything at all. Neither scheme would make sense. I must therefore create a mixture of the two. Let some things be predictable and let others be unpredictable. They will then, amongst many other things, have the very important task of finding out which is which" (Schumacher, Small Is Beautiful, 1975).

This view of statistics has inherent in it the need for some form of inference from the observed data to a problem of interest. There are several schools of statistical inference, each of which sets forth an approach to statistical analysis, reporting, and interpretation. The two most visible such schools are (1) the proponents of the Bayesian or "subjective" approach and (2) the proponents of the frequentist or "objective" approach. The key difference between them lies in how they deal with the interpretation of probability and with the inferential process.

A statistical analyst, whether a Bayesian or a frequentist, must ultimately confront the relevance of the patterns discerned from a body of data for the problem of interest. Whether in the law or in some other field of endeavor, statistical analysis and inference are of little use if they are not designed to inform judgments about the issue that generated the statistical inquiry. Thus perhaps a better title for this introductory section might have been "Discovering Relevance in the Presence of Statistical Fluctuations." This appendix describes the two basic approaches to drawing inferences from statistical data and attempts to link the probabilistic interpretation of such inferences to the formalities of legal decision making.
As one turns from the use of statistics in both the data and the methodological senses, i.e., in the plural and the singular, in various scientific fields to its use in the law, there will clearly be occasions when statistics has little to offer. In a somewhat different context Bliley (1983) notes:

Our tendency to rely on statistics is understandable. Statistics offer to our minds evidence which is relatively clear and easy to understand. By comparison, arguments about the rights of individuals, the natural authority of the family, and the legitimate powers of government are complex and do not provide us with answers quickly or easily. But we must remember that these are questions which are at the root of our deliberations. Until these questions are answered, statistics are of limited use. Statisticians could not have written the Declaration of Independence or the Constitution of the United States, and statisticians should not be depended upon to interpret them.

A.2 Views on Statistical Inference

Rule 401 of the Federal Rules of Evidence provides the following definition of relevance:

"Relevant evidence" means evidence having any tendency to make the existence of any fact that is of consequence to the determination of the action more probable or less probable than it would be without the evidence.

From this rule one might infer that the court wishes and expects to have its judgments about facts at issue expressed in terms of probabilities. Such a situation is tailor-made for the application of Bayes' Theorem and the Bayesian form of statistical inference in connection with evidence. Thus some have characterized the Bayesian approach as what the court needs and often thinks it gets.
Rule 401 has traditionally been thought of, in much less sweeping terms, as being applicable only to the admission of evidence and not to its use in reaching judgments, even though the evidence is admitted to aid the fact finder in reaching judgments. Our review of the use of statistical assessments as evidence in the courts suggests that, with some special exceptions, such as the use of blood group analyses in paternity cases, what the court actually gets is a frequentist or "classical" approach to statistical inference. Such an approach is difficult to reconcile with the language of Rule 401 (but see the discussion in Lempert, 1977, and Tiller, 1983). The key weapon in the frequentist's arsenal of statistical techniques is the test of significance, whose result is often expressed in terms of a p-value. There is a tendency for jurists, lawyers, and even expert statistical witnesses to misinterpret p-values as if they were probabilities associated with facts at issue (see the discussion of this issue in the case study in Section 2.6).

In this section, we describe the two approaches to statistical inference: the Bayesian approach and the frequentist approach. Then we discuss the interpretation of p-values and their relationship, or lack thereof, to the tail values that are interpretable as probabilities in a Bayesian context.

The Basic Bayesian Approach

Suppose that the fact of interest is denoted by G. In a criminal case, G would be the extent to which the accused is legally guilty, and for specificity we speak of the event G as guilt and of its complement Ḡ as innocence or nonguilt. Following Lindley (1977) and Fienberg and Kadane (1983), we let H denote all of the evidence presented up to a certain point in the trial, and E denote new evidence. We would like to update our assessment of the probability of G given the evidence H, i.e., P(G | H), now that the new evidence E is available.
This is done via Bayes' Theorem, which states that:

P(G | E&H) = P(G | H)P(E | G&H) / [P(G | H)P(E | G&H) + P(Ḡ | H)P(E | Ḡ&H)]

All of these probabilities represent subjective assessments of individual degrees of belief. As a distinguished Bayesian statistician once remarked: coins don't have probabilities, people do. It is also important to note that for the subjectivist there is a symmetry between unknown past facts and unknown future facts. An individual can have probabilities for either.

The well-known relationship of Bayes' Theorem is more easily expressed in terms of the odds on guilt, i.e., the ratio of the probability of guilt, G, to the probability of innocence (or nonguilt), Ḡ:

odds(G | E&H) = [P(E | G&H) / P(E | Ḡ&H)] × odds(G | H).

Lempert (1977, p. 1023) notes that

This formula describes the way knowledge of a new item of evidence (E) would influence a completely rational decision maker's evaluation of the odds that a defendant is guilty (G). Since the law assumes that a factfinder should be rational, this is a normative model; that is, the Bayesian equation describes the way the law's ideal juror evaluates new items of evidence.

In this sense Bayes' Theorem provides a normative approach to the issue of relevance. One might now conclude that when the likelihood ratio

P(E | G&H) / P(E | Ḡ&H)

for an item of evidence differs from one, the evidence is relevant in the sense of Rule 401. Lempert (1977) refers to this as logical relevance.

Now the way to incorporate assessments of the evidence in a trial is clear. The trier of fact (a judge or juror) begins the trial with an initial assessment of the probability of guilt, P(G), and then updates this assessment, using either form of Bayes' Theorem given above, with each piece of evidence. At the end of the trial, the posterior probability of G given all of the evidence is used to reach a decision.
In the case of a jury trial, the jurors may have different posterior assessments of G, and these will need to be combined or reconciled in some way.
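The sequential updating scheme described above can be sketched in a few lines of code. This is an illustrative sketch only: the prior odds and the likelihood ratios below are invented numbers, not values drawn from the report, and a real trier of fact would of course not compute them explicitly.

```python
# Odds form of Bayes' Theorem: posterior odds on G are the prior odds
# multiplied by the likelihood ratio P(E | G&H) / P(E | not-G&H) of each
# new item of evidence E.

def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Return the posterior odds on G after observing one item of evidence."""
    return likelihood_ratio * prior_odds

def odds_to_probability(odds: float) -> float:
    """Convert odds on G into the probability P(G)."""
    return odds / (1.0 + odds)

# Hypothetical trial: the juror starts with prior odds of 1:4 on guilt,
# i.e., P(G) = 0.2, and then hears three items of evidence with assumed
# likelihood ratios 5.0, 2.0, and 0.8 (the last item favors innocence).
odds = 0.25
for lr in (5.0, 2.0, 0.8):
    odds = update_odds(odds, lr)

print(odds)                      # 0.25 * 5 * 2 * 0.8 = 2.0
print(odds_to_probability(odds))
```

Note that an item of evidence whose likelihood ratio equals one leaves the odds unchanged, which is exactly the sense in which such evidence fails the logical-relevance test discussed above.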