From P-Value to FDR
Recommended publications
Underestimation of Type I Errors in Scientific Analysis
Open Access Journal of Mathematical and Theoretical Physics, Research Article, Volume 2, Issue 1 (2019). Roman Anton, Department of Theoretical Sciences, The University of Truth and Common Sense, Germany. Received: October 29, 2018; Published: January 04, 2019.

Abstract. Statistical significance is the hallmark and the main empirical argument used to quantitatively verify or falsify scientific hypotheses. The correct use of statistics safeguards and increases the reproducibility of results and findings, yet statistics is now widely misused and science faces a reproducibility crisis. Today, most scientists cannot replicate the results of their peers, and peer review hinders them from revealing it. Besides an estimated 80% of general misconduct in practice and above 90% of non-reproducible results, there is also a high proportion of flawed statistical assessments based on significance tests such as the otherwise valuable t-test. The 5% significance threshold of the t-test is regularly applied without accounting for the natural rate of false positives within the test system, referred to as the stochastic alpha or type I error. Given the need for a clear-cut clarification, this statistical research paper shows, for the scientific fields that use the t-test, how to prevent the intrinsic pitfalls of p-values, and it advises random testing of alpha, beta, SPM, and data characteristics to determine the right p-level, statistics, and testing.
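As a minimal illustration of the point (not from the paper), assuming only NumPy and SciPy, the following simulation runs many t-tests on data where the null hypothesis is true by construction; roughly 5% come out significant at the 0.05 threshold, which is exactly the intrinsic type I error rate the abstract warns about:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n_per_group = 10_000, 30

false_positives = 0
for _ in range(n_tests):
    # Both groups are drawn from the same distribution: the null is true.
    a = rng.normal(0, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)
    _, p = stats.ttest_ind(a, b)
    false_positives += p < 0.05

# Expected to be close to the nominal alpha of 5%.
print(f"False positive rate: {false_positives / n_tests:.3f}")
```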
Multiple Testing
Chapter 8, Hypothesis Testing: Multiple Testing (lecture slides, 20 pages). Outline: the p-value, the familywise error rate, Fisher's method, the Benjamini-Hochberg procedure, and the false discovery rate.

The p-value. In 2016, the American Statistical Association set itself the task of making a statement on p-values. They note that it is all too easy to set up a test, create a test statistic, and compute a p-value. Proper statistical practice is much more than this and includes:
- appropriately chosen techniques based on a thorough understanding of the phenomena under study,
- adequate visual and numerical summaries of the data,
- properly conducted analyses whose logic and quantitative approaches are clearly explained,
- correct interpretation of statistical results in context, and
- reproducibility of results via thorough reporting.

Expressing a p-value is one of many approaches to summarizing the results of a statistical investigation. The statement's six principles, many of which address misconceptions and misuse of the p-value, are the following:
1. P-values can indicate how incompatible the data are with a specified statistical model.
2. P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.
3. Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.
4. Proper inference requires full reporting and transparency.
5. A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.
6. By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.
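A small example (not from the slides), assuming SciPy is available, shows what a p-value does and does not say; the data are simulated and the test is an ordinary one-sample t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(loc=0.3, scale=1.0, size=50)

# One-sample t-test of H0: mean = 0.
t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# The p-value measures incompatibility of the data with H0 (principle 1),
# not the probability that H0 is true (principle 2).
```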
Rothamsted in the Making of Sir Ronald Fisher ScD FRS
John Aldrich, University of Southampton. RSS, September 2019.

Finish, 1962: "the most famous statistician and mathematical biologist in the world" dies. Start, 1919: a 29-year-old Cambridge BA in maths with no prospects takes a temporary job at Rothamsted Experimental Station. In between, 1919-33: at Rothamsted he makes a career for himself by creating a career.

Rothamsted helped make Fisher by establishing and elevating the office of agricultural statistician, a position in which Fisher was unsurpassed, and by letting him do other things, including mathematical statistics, genetics and eugenics.

Before Fisher, the problems were already there: the relationship between harvest and weather, and the design of experimental trials and the analysis of their results. Leading figures in agricultural science believed the problems could be treated statistically.

Rothamsted was established in 1843 by Lawes and Gilbert to investigate the effectiveness of fertilisers. Sir John Bennet Lawes (1814-1900): land-owner, fertiliser magnate and amateur scientist. Sir Joseph Henry Gilbert (1817-1901): professional scientist (chemist). In 1902 a tired Rothamsted gets a make-over when Daniel Hall becomes Director, and public money feeds growth.

Crops and the weather were examined by agriculturalists (e.g. Lawes & Gilbert, 1880) and subsequently treated more by meteorologists. Experimental trials were a hotter topic, treated in the Journal of Agricultural Science by leading figures such as Wood of Cambridge and Hall.
Discussion: On Methods Controlling the False Discovery Rate
Sankhyā: The Indian Journal of Statistics, 2008, Volume 70, Part 2, pp. 169-176. © 2008, Indian Statistical Institute.

Joseph P. Romano (Stanford University, USA), Azeem M. Shaikh (University of Chicago, USA), and Michael Wolf (University of Zurich, Switzerland).

1. Introduction. It is a pleasure to acknowledge another insightful article by Sarkar. By developing clever expressions for the FDP, FDR, and FNR, he manages to prove fundamental properties of stepdown and stepup methods. It is particularly important that the theory is sufficiently developed so as to apply to what Sarkar calls adaptive BH methods. Here, the goal is to improve upon the Benjamini-Hochberg procedure by incorporating a data-dependent estimate of the number of true null hypotheses. Theoretical justification of such methods is vital, and Sarkar's analysis is useful for this purpose. A perhaps more ambitious task is to develop methods which implicitly or explicitly estimate the joint dependence structure of the test statistics (or p-values). The focus of our discussion is to show how resampling methods can be used to construct stepdown procedures which control the FDR or other general measures of error control. The main benefit of using the bootstrap or subsampling is the ability to estimate the joint distribution of the test statistics, and thereby offer the potential of improving upon methods based on the marginal distributions of test statistics. The procedure below is a generalization of one we developed for FDR control, and the utility of the bootstrap is that it can apply to essentially arbitrary measures of error control, such as the pairwise FDR of Sarkar, the k-FWER, or the tail probabilities of the false discovery proportion.
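The discussion's stepdown FDR procedure is too involved to reproduce here; as a hedged sketch of the core idea, estimating the joint distribution of the test statistics by resampling, the following implements a single-step bootstrap maxT cutoff (a simpler, FWER-style device in the same spirit, not the authors' procedure). The data and all settings are illustrative assumptions:

```python
import numpy as np

def bootstrap_maxT_cutoff(data, n_boot=2000, alpha=0.05, seed=0):
    """Estimate the null distribution of the largest |t|-statistic by
    resampling centered data, which preserves dependence across tests."""
    rng = np.random.default_rng(seed)
    n, m = data.shape                      # n subjects, m hypotheses
    centered = data - data.mean(axis=0)    # impose the null on each column
    max_stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        boot = centered[idx]
        t = boot.mean(axis=0) / (boot.std(axis=0, ddof=1) / np.sqrt(n))
        max_stats[b] = np.abs(t).max()
    return np.quantile(max_stats, 1 - alpha)

# Usage: reject H0_j when |t_j| exceeds the bootstrap cutoff.
rng = np.random.default_rng(1)
data = rng.normal(size=(40, 50))
data[:, :5] += 1.0                         # five true effects
t_obs = data.mean(0) / (data.std(0, ddof=1) / np.sqrt(len(data)))
cutoff = bootstrap_maxT_cutoff(data)
print("rejected:", np.flatnonzero(np.abs(t_obs) > cutoff))
```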
Inference on the Limiting False Discovery Rate and the P-Value Threshold Parameter Assuming Weak Dependence Between Gene Expression Levels Within Subject
Glenn Heller (Department of Epidemiology and Biostatistics, Memorial Sloan-Kettering Cancer Center, New York, NY 10021, USA) and Jing Qin (Biostatistics Research Branch, National Institute of Allergy and Infectious Diseases, Bethesda, Maryland 20892, USA). Corresponding author: Glenn Heller, email: [email protected], phone: 646 735 8112, fax: 646 735 0010. Running head: FDR analysis with dependent data.

Summary. An objective of microarray data analysis is to identify gene expressions that are associated with a disease-related outcome. For each gene, a test statistic is computed to determine if an association exists, and this statistic generates a marginal p-value. In an effort to pool this information across genes, a p-value density function is derived. The p-value density is modeled as a mixture of a uniform (0,1) density and a scaled ratio of normal densities derived from the asymptotic normality of the test statistic. The p-values are assumed to be weakly dependent, and a quasi-likelihood is used to estimate the parameters in the mixture density. The quasi-likelihood and the weak dependence assumption enable estimation and asymptotic inference on the false discovery rate for a given rejection region, and on its inverse, the p-value threshold parameter for a fixed false discovery rate. A false discovery rate analysis on a localized prostate cancer data set is used to illustrate the methodology. Simulations are performed to assess the performance of this methodology.

Keywords: asymptotic normal test statistic; confidence interval; microarray; p-value mixture model; quasi-likelihood; weak dependence.
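The paper's mixture uses a scaled ratio of normal densities and a quasi-likelihood; as a simplified sketch of the same two-component idea, the following fits a uniform-plus-beta mixture to p-values by ordinary maximum likelihood and reads off the FDR for a rejection region {p ≤ t}. The beta alternative and all numbers are illustrative assumptions, not the authors' model:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
# Simulated p-values: 80% true nulls (uniform), 20% alternatives near zero.
p = np.concatenate([rng.uniform(size=8000), rng.beta(0.3, 4.0, size=2000)])

def neg_log_lik(theta):
    pi0, a, b = theta
    # Mixture density: pi0 * Uniform(0,1) + (1 - pi0) * Beta(a, b).
    dens = pi0 + (1 - pi0) * stats.beta.pdf(p, a, b)
    return -np.log(dens).sum()

res = optimize.minimize(neg_log_lik, x0=[0.7, 0.5, 3.0],
                        bounds=[(0.01, 0.99), (0.05, 1.0), (1.0, 20.0)])
pi0, a, b = res.x

t = 0.01                                  # rejection region {p <= t}
fdr = pi0 * t / (pi0 * t + (1 - pi0) * stats.beta.cdf(t, a, b))
print(f"estimated pi0 = {pi0:.2f}, FDR at t = {t}: {fdr:.3f}")
```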
Heteroscedasticity-Adjusted Ranking and Thresholding for Large-Scale Multiple Testing
Luella Fu (Department of Mathematics, San Francisco State University), Bowen Gang (Department of Statistics, Fudan University), Gareth M. James and Wenguang Sun (Department of Data Sciences and Operations, University of Southern California). The research of Wenguang Sun was supported in part by NSF grant DMS-1712983.

Abstract. Standardization has been a widely adopted practice in multiple testing, for it takes into account the variability in sampling and makes the test statistics comparable across different study units. However, despite conventional wisdom to the contrary, we show that there can be a significant loss of information from basing hypothesis tests on standardized statistics rather than the full data. We develop a new class of heteroscedasticity-adjusted ranking and thresholding (HART) rules that aim to improve existing methods by simultaneously exploiting commonalities and adjusting heterogeneities among the study units. The main idea of HART is to bypass standardization by directly incorporating both the summary statistic and its variance into the testing procedure. A key message is that the variance structure of the alternative distribution, which is subsumed under standardized statistics, is highly informative and can be exploited to achieve higher power. The proposed HART procedure is shown to be asymptotically valid and optimal for false discovery rate (FDR) control. Our simulation results demonstrate that HART achieves substantial power gain over existing methods at the same FDR level. We illustrate the implementation through a microarray analysis of myeloma.

Keywords: covariate-assisted inference; data processing and information loss; false discovery rate; heteroscedasticity; multiple testing with side information; structured multiple testing.
Basic Statistics
Statistics. Kristof Reid, Assistant Professor, Medical University of South Carolina.

Financial disclosures: none. Further disclosures: I am not a statistician; I do like making clinical decisions based on appropriately interpreted data.

Learning objectives:
- Understand why knowing statistics is important
- Understand the basic principles and statistical tests
- Understand common statistical errors in the medical literature

Why should I care? Indications for statistics: 47% of readers were unable to determine study design, 31% did not understand p-values, 38% did not understand sensitivity and specificity, and 83% could not use odds ratios (Araoye et al., JBJS 2020;102(5):e19). 50% (or more!) of clinical research publications have at least one statistical error (Thiese et al., Journal of Thoracic Disease, 2016;8(8):E726-E730). 17% of conclusions were not justified by the results, and 39% of studies used the wrong analysis (Parsons et al., J Bone Joint Surg Br. 2011;93-B(9):1154-1159).

Are these two columns different? You have been asked by your insurance carrier to prove that your total hip patient outcomes are not statistically different from your competitor next door. Column A holds the patient-reported scores for you and Column B those for your competitor. How would you prove you are different or better?

[Slide table: paired patient-reported scores for Columns A and B; the extracted numbers can no longer be reliably re-paired.]

Are they still the same? Column A: mean = 15. Column B: mean = 12. SD = 4. We want to know if we're making a difference.

What is statistics? a.
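A sketch of how the slide's question would be answered with a two-sample t-test follows; the two arrays are a best-effort reading of the garbled slide table and should be treated as illustrative data, not the slide's exact values:

```python
import numpy as np
from scipy import stats

# Hypothetical score columns standing in for the slide's table.
col_a = np.array([19, 12, 10, 20, 10, 14, 21, 24, 12, 17,
                  12, 17, 9, 24, 21, 18, 13, 18, 12, 15, 14])
col_b = np.array([10, 11, 9, 19, 17, 10, 19, 13, 18, 8,
                  17, 10, 9, 10, 9, 18, 4, 8, 14, 15, 17])

# Two-sample t-test: are the group means plausibly equal?
t_stat, p_value = stats.ttest_ind(col_a, col_b)
print(f"mean A = {col_a.mean():.1f}, mean B = {col_b.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```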
Comprehensive Comparative Analysis of Local False Discovery Rate Control Methods
Metabolites, Article. Shin June Kim, Youngjae Oh and Jaesik Jeong (Department of Mathematics and Statistics, Chonnam National University, Gwangju 61186, Korea); [email protected] (S.J.K.); [email protected] (Y.O.); correspondence: [email protected]; Tel.: +82-062-530-3442.

Abstract: Due to advances in technology, data are becoming more complex and large-scale, and more advanced techniques are required to analyze them. For omics data from two different groups, it is of interest to find significant biomarkers between the two groups while controlling an error rate such as the false discovery rate (FDR). Over the last few decades, many methods that control the local false discovery rate have been developed, ranging from one-dimensional to k-dimensional FDR procedures. For this comparison study, we select three of them, each with unique and significant properties: Efron's approach, Ploner's approach, and Kim's approach, in chronological order. The first is a one-dimensional approach while the other two are two-dimensional. Furthermore, we consider two more variants of Ploner's approach. We compare the performance of these methods on both simulated and real data.

Keywords: biomarker; familywise error rate; false discovery rate; large-scale inference.

1. Introduction. Due to the advent of advanced high-throughput technologies, a large amount of raw data have been produced, and various methods that can appropriately preprocess such data have been developed. After various preprocessing steps, statistical methods are applied to the preprocessed, yet large-scale, data.
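As a sketch of the one-dimensional, Efron-style local FDR that the comparison starts from, assuming a theoretical N(0,1) null and taking the conservative choice π0 = 1, one can estimate lfdr(z) = π0 f0(z)/f(z) with a kernel density estimate of the mixture density f; the simulated z-scores below are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# z-scores: 90% nulls ~ N(0,1), 10% signals ~ N(3,1).
z = np.concatenate([rng.normal(0, 1, 9000), rng.normal(3, 1, 1000)])

f_hat = stats.gaussian_kde(z)   # kernel estimate of the mixture density f(z)
f0 = stats.norm.pdf(z)          # theoretical null density
pi0 = 1.0                       # conservative upper bound on the null proportion

lfdr = np.clip(pi0 * f0 / f_hat(z), 0, 1)
print("discoveries at lfdr <= 0.2:", np.sum(lfdr <= 0.2))
```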
Asymptotic Results on Adaptive False Discovery Rate Controlling Procedures Based on Kernel Estimators
Journal of Machine Learning Research 14 (2013) 1423-1459. Submitted 4/08; Revised 8/12; Published 5/13. Editor: Olivier Teytaud.

Pierre Neuvial ([email protected]), Laboratoire Statistique et Génome, Université d'Évry Val d'Essonne, UMR CNRS 8071 – USC INRA, 23 boulevard de France, 91 037 Évry, France.

Abstract. The False Discovery Rate (FDR) is a commonly used type I error rate in multiple testing problems. It is defined as the expected False Discovery Proportion (FDP), that is, the expected fraction of false positives among rejected hypotheses. When the hypotheses are independent, the Benjamini-Hochberg procedure achieves FDR control at any pre-specified level. By construction, FDR control offers no guarantee in terms of power, or type II error. A number of alternative procedures have been developed, including plug-in procedures that aim at gaining power by incorporating an estimate of the proportion of true null hypotheses. In this paper, we study the asymptotic behavior of a class of plug-in procedures based on kernel estimators of the density of the p-values, as the number m of tested hypotheses grows to infinity. In a setting where the hypotheses tested are independent, we prove that these procedures are asymptotically more powerful in two respects: (i) a tighter asymptotic FDR control for any target FDR level and (ii) a broader range of target levels yielding positive asymptotic power. We also show that this increased asymptotic power comes at the price of slower, non-parametric convergence rates of order m^(-k/(2k+1)) for the FDP.
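A hedged sketch of a plug-in procedure of this kind follows; for simplicity it estimates the proportion of true nulls with a Storey-type tail estimator rather than the paper's kernel estimator of the p-value density, so it illustrates the plug-in idea, not the exact procedure studied:

```python
import numpy as np

def adaptive_bh(p, alpha=0.05, lam=0.5):
    """Plug-in BH: estimate pi0, then run BH at level alpha / pi0_hat.
    Uses Storey's estimator, not the paper's kernel estimator."""
    m = len(p)
    pi0_hat = min(1.0, np.mean(p > lam) / (1 - lam))
    order = np.argsort(p)
    thresholds = alpha / pi0_hat * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True      # step-up: reject the k smallest p-values
    return rejected

rng = np.random.default_rng(4)
p = np.concatenate([rng.uniform(size=900), rng.beta(0.2, 5, size=100)])
print("rejections:", adaptive_bh(p).sum())
```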
Principles of the Design and Analysis of Experiments
A FRESH LOOK AT THE BASIC PRINCIPLES OF THE DESIGN AND ANALYSIS OF EXPERIMENTS. F. Yates, Rothamsted Experimental Station, Harpenden.

1. Introduction. When Professor Neyman invited me to attend the Fifth Berkeley Symposium, and give a paper on the basic principles of the design and analysis of experiments, I was a little hesitant. I felt certain that all those here must be thoroughly conversant with these basic principles, and that to mull over them again would be of little interest. This, however, is the first symposium to be held since Sir Ronald Fisher's death, and it does therefore seem apposite that a paper discussing some aspect of his work should be given. If so, what could be better than the design and analysis of experiments, which in its modern form he created? I do not propose today to give a history of the development of the subject. This I did in a paper presented in 1963 to the Seventh International Biometrics Congress [14]. Instead I want to take a fresh look at the logical principles Fisher laid down, and the action that flows from them; also briefly to consider certain modern trends, and see how far they are really of value.

2. General principles. Fisher, in his first formal exposition of experimental design [4], laid down three basic principles: replication, randomization, and local control. Replication and local control (for example, arrangement in blocks or rows and columns of a square) were not new, but the idea of assigning the treatments at random (subject to the restrictions imposed by the local control) was novel, and proved to be a most fruitful contribution.
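As a minimal sketch of the randomization principle, assuming nothing beyond NumPy, the following lays out a randomized complete block design in which treatments are assigned at random within each block (randomization subject to local control); the treatments and block count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
treatments = ["A", "B", "C", "D"]
n_blocks = 4

# Randomized complete block design: each treatment appears once per block,
# in a random order within the block.
layout = [list(rng.permutation(treatments)) for _ in range(n_blocks)]
for i, block in enumerate(layout, 1):
    print(f"block {i}: {block}")
```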
1 Why Is Multiple Testing a Problem? 2 the Bonferroni Correction
Spring 2008, Stat C141/Bioeng C141: Statistics for Bioinformatics. Course website: http://www.stat.berkeley.edu/users/hhuang/141C-2008.html. Section website: http://www.stat.berkeley.edu/users/mgoldman. GSI: Megan Goldman, [email protected]. Office hours: 342 Evans, M 10-11, Th 3-4, and by appointment.

1. Why is multiple testing a problem? Say you have a set of hypotheses that you wish to test simultaneously. The first idea that might come to mind is to test each hypothesis separately, using some level of significance α. At first blush, this doesn't seem like a bad idea. However, consider a case where you have 20 hypotheses to test and a significance level of 0.05. What's the probability of observing at least one significant result just due to chance?

P(at least one significant result) = 1 − P(no significant results) = 1 − (1 − 0.05)^20 ≈ 0.64

So, with 20 tests being considered, we have a 64% chance of observing at least one significant result, even if all of the tests are actually not significant. In genomics and other biology-related fields, it's not unusual for the number of simultaneous tests to be quite a bit larger than 20... and the probability of getting a significant result simply due to chance keeps going up. Methods for dealing with multiple testing frequently call for adjusting α in some way, so that the probability of observing at least one significant result due to chance remains below your desired significance level.

2. The Bonferroni correction. The Bonferroni correction sets the significance cut-off at α/n.
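The arithmetic above is easy to check in code; this snippet also shows that the Bonferroni cutoff α/n pulls the familywise error rate back under α (a straightforward translation of the formulas, nothing beyond the notes themselves):

```python
# Probability of at least one false positive among n independent tests
# at level alpha, and the Bonferroni-corrected threshold.
alpha, n = 0.05, 20
fwer = 1 - (1 - alpha) ** n
print(f"P(at least one false positive) = {fwer:.2f}")    # ~0.64

bonferroni_cutoff = alpha / n
fwer_corrected = 1 - (1 - bonferroni_cutoff) ** n
print(f"Bonferroni cutoff = {bonferroni_cutoff:.4f}, "
      f"FWER after correction = {fwer_corrected:.3f}")   # <= 0.05
```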
The Pairwise Multiple Comparison Multiplicity Problem: An Alternative Approach to Familywise/Comparisonwise Type I Error Control
False Discovery Rate 1 The Pairwise Multiple Comparison Multiplicity Problem: An Alternative Approach to Familywise/Comparisonwise Type I Error Control by H. J. Keselman, Robert Cribbie University of Manitoba and Burt Holland Temple University False Discovery Rate 2 Abstract When simultaneously undertaking many tests of significance researchers are faced with the problem of how best to control the probability of committing a Type I error. The familywise approach deals directly with multiplicity problems by setting a level of significance for an entire set (family) of related hypotheses, while the comparison approach ignores the multiplicity issue by setting the rate of error on each individual contrast/test/hypothesis. A new formulation of control presented by Benjamini and Hochberg, their false discovery rate, does not provide as stringent control as the familywise rate, but concomitant with this relaxing in stringency is an increase in sensitivity to detect effects, compared to familywise control. Type I error and power rates for four relatively powerful and easy-to-compute pairwise multiple comparison procedures were compared to the Benjamni and Hochberg technique for various one-way layouts using test statistics that do not assume variance homogeneity. False Discovery Rate 3 The Pairwise Multiple Comparison Multiplicity Problem: An Alternative Approach to Familywise/Comparisonwise Type I Error Control The multiplicity problem in statistical inference refers to selecting the statistically significant findings from a large set of findings (tests) to support one's research hypotheses. Selecting the statistically significant findings from a larger pool of results that also contain nonsignificant findings is problematic since when multiple tests of significance are computed the probability that at least one will be significant by chance alone increases with the number of tests examined.