Dataset Decay: the Problem of Sequential Analyses on Open Datasets
Recommended publications
-
Sequential Analysis: Tests and Confidence Intervals
D. Siegmund, Sequential Analysis: Tests and Confidence Intervals. Springer Series in Statistics, 1985, XI, 274 p.

The modern theory of sequential analysis came into existence simultaneously in the United States and Great Britain in response to demands for more efficient sampling inspection procedures during World War II. The developments were admirably summarized by their principal architect, A. Wald, in his book Sequential Analysis (1947). In spite of the extraordinary accomplishments of this period, there remained some dissatisfaction with the sequential probability ratio test and Wald's analysis of it. (i) The open-ended continuation region, with the concomitant possibility of taking an arbitrarily large number of observations, seems intolerable in practice. (ii) Wald's elegant approximations based on "neglecting the excess" of the log likelihood ratio over the stopping boundaries are not especially accurate and do not allow one to study the effect of taking observations in groups rather than one at a time. (iii) The beautiful optimality property of the sequential probability ratio test applies only to the artificial problem of testing a simple hypothesis against a simple alternative. In response to these issues, and to new motivation from the direction of controlled clinical trials, numerous modifications of the sequential probability ratio test were proposed and their properties studied, often by simulation or lengthy numerical computation. (A notable exception is Anderson, 1960; see III.7.) In the past decade it has become possible to give a more complete theoretical analysis of many of the proposals and hence to understand them better.
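The sequential probability ratio test at the center of this account can be sketched in a few lines. The following is a minimal illustration with Bernoulli observations and hypothetical parameter values, using Wald's classic threshold approximations A ≈ (1 − β)/α and B ≈ β/(1 − α), which, as the blurb notes, neglect the excess of the log likelihood ratio over the boundaries:

```python
import math

def sprt(observations, p0, p1, alpha=0.05, beta=0.10):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 on Bernoulli data.

    Returns ("accept H1" | "accept H0" | "continue", n_used).
    Thresholds use Wald's approximations, which neglect the
    overshoot of the log likelihood ratio at the boundary.
    """
    upper = math.log((1 - beta) / alpha)   # crossing -> accept H1
    lower = math.log(beta / (1 - alpha))   # crossing -> accept H0
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        # log likelihood ratio increment for one Bernoulli draw
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue", len(observations)

# A run of successes crosses the upper boundary after few draws,
# illustrating the open-ended continuation region criticized in (i)
decision, n = sprt([1] * 20, p0=0.5, p1=0.9)
```

Note that nothing bounds the sample size here: a data stream that hovers between the boundaries keeps returning "continue", which is exactly the practical objection (i) above.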
-
Trial Sequential Analysis: Novel Approach for Meta-Analysis
Hyun Kang, Department of Anesthesiology and Pain Medicine, Chung-Ang University College of Medicine, Seoul, Korea. Anesth Pain Med 2021;16:138-150. https://doi.org/10.17085/apm.21038. Review. pISSN 1975-5171 • eISSN 2383-7977. Received April 19, 2021; accepted April 25, 2021. E-mail: [email protected]

Systematic reviews and meta-analyses rank the highest in the evidence hierarchy. However, they still carry a risk of spurious results because they may include too few studies and participants. The use of trial sequential analysis (TSA) has increased recently, providing more information on the precision and uncertainty of meta-analysis results. This makes it a powerful tool for clinicians to assess the conclusiveness of a meta-analysis. TSA provides monitoring boundaries or futility boundaries, helping clinicians prevent unnecessary trials. The use and interpretation of TSA should be based on an understanding of the principles and assumptions behind it, which may provide more accurate, precise, and unbiased information to clinicians, patients, and policymakers. In this article, the history, background, principles, and assumptions behind TSA are described, to support its better understanding, implementation, and interpretation.

Keywords: Interim analysis; Meta-analysis; Statistics; Trial sequential analysis.

INTRODUCTION

[…]termined number of patients was reached [2]. A systematic review is a research method that attempts to collect all empirical evidence according to predefined inclusion and exclusion criteria to answer specific and focused questions [3]. Sequential analysis is a statistical method in which the final number of patients analyzed is not predetermined, but sampling or enrollment of patients is decided by a prede[…] -
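The monitoring boundaries that TSA borrows from interim analysis can be illustrated with an O'Brien-Fleming-type rule. The sketch below is a deliberately simplified approximation (real TSA software uses the Lan-DeMets alpha-spending function and information-size adjustments); it only shows the qualitative behavior, namely that early looks demand much stronger evidence:

```python
import math

def of_boundaries(info_fractions, z_final=1.96):
    """O'Brien-Fleming-type monitoring boundaries (simplified sketch).

    At information fraction t (accrued patients / required patients),
    the critical z-value is inflated by 1/sqrt(t), so early interim
    looks need much stronger evidence; at t = 1 the boundary returns
    to the conventional two-sided 5% critical value.
    """
    return [z_final / math.sqrt(t) for t in info_fractions]

# Four equally spaced looks at a meta-analysis
bounds = of_boundaries([0.25, 0.5, 0.75, 1.0])
```

At 25% of the required information the sketch demands z ≥ 3.92, which is why a conventional z = 2.1 at an early look does not make a meta-analysis conclusive under TSA.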
Detecting Reinforcement Patterns in the Stream of Naturalistic Observations of Social Interactions
Portland State University, PDXScholar Dissertations and Theses, 7-14-2020. James Lamar DeLaney 3rd, Portland State University. https://pdxscholar.library.pdx.edu/open_access_etds (Developmental Psychology Commons).

Recommended citation: DeLaney 3rd, James Lamar, "Detecting Reinforcement Patterns in the Stream of Naturalistic Observations of Social Interactions" (2020). Dissertations and Theses. Paper 5553. https://doi.org/10.15760/etd.7427

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Psychology. Thesis committee: Thomas Kindermann (chair), Jason Newsom, Ellen A. Skinner.

Abstract: How do consequences affect future behaviors in real-world social interactions? The term positive reinforcer refers to those consequences that are associated with an increase in the probability of an antecedent behavior (Skinner, 1938). To explore whether reinforcement occurs under naturally occurring conditions, many studies use sequential analysis methods to detect contingency patterns (see Quera & Bakeman, 1998). This study argues that these methods do not look at behavior change following putative reinforcers and thus are not sufficient for declaring reinforcement effects arising in naturally occurring interactions, according to Skinner's (1938) operational definition of reinforcers. -
©2018 Oxford University Press

PART E: Challenges in Statistics

CHAPTER 22: Multiple Comparisons Concepts

"If you torture your data long enough, they will tell you whatever you want to hear." MILLS (1993)

Coping with multiple comparisons is one of the biggest challenges in data analysis. If you calculate many P values, some are likely to be small just by random chance. Therefore, it is impossible to interpret small P values without knowing how many comparisons were made. This chapter explains three approaches to coping with multiple comparisons. Chapter 23 will show that the problem of multiple comparisons is pervasive, and Chapter 40 will explain special strategies for dealing with multiple comparisons after ANOVA.

THE PROBLEM OF MULTIPLE COMPARISONS

The problem of multiple comparisons is easy to understand. If you make two independent comparisons, what is the chance that one or both comparisons will result in a statistically significant conclusion just by chance? It is easier to answer the opposite question. Assuming both null hypotheses are true, what is the chance that both comparisons will not be statistically significant? The answer is the chance that the first comparison will not be statistically significant (0.95) times the chance that the second one will not be statistically significant (also 0.95), which equals 0.9025, or about 90%. That leaves about a 10% chance of obtaining at least one statistically significant conclusion by chance. It is easy to generalize that logic to more comparisons. -
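The chapter's arithmetic generalizes directly: with k independent comparisons and all null hypotheses true, the chance of at least one "significant" result is 1 − 0.95^k. A minimal sketch (the function name is mine, not the chapter's):

```python
def familywise_error_rate(k, alpha=0.05):
    """Chance of at least one false positive among k independent
    comparisons when every null hypothesis is true."""
    return 1 - (1 - alpha) ** k

# Two comparisons reproduce the chapter's arithmetic:
# 1 - 0.95**2 = 1 - 0.9025 = 0.0975, i.e. about a 10% chance
fwer_2 = familywise_error_rate(2)
```

With 14 comparisons the chance of at least one false positive already exceeds 50%, which is why the count of comparisons matters so much.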
The Problem of Multiple Testing and Its Solutions for Genome-Wide Studies
How to cite: Gyorffy B, Gyorffy A, Tulassay Z. The problem of multiple testing and its solutions for genome-wide studies. Orv Hetil, 2005;146(12):559-563.

ABSTRACT: Even if there is no real change, the traditional p = 0.05 threshold causes 5% of the investigated tests to be reported as significant. Multiple testing corrections have been developed to solve this problem. Here the authors describe the one-step (Bonferroni), multi-step (step-down and step-up), and graphical methods. However, sometimes a correction for multiple testing creates more problems than it solves: the universal null hypothesis is of little interest, the exact number of investigations to be adjusted for cannot be determined, and the probability of a type II error increases. For these reasons the authors suggest not performing multiple testing corrections routinely. The calculation of the false discovery rate is a new method for genome-wide studies. Here the p value is substituted by the q value, which also shows the level of significance. The q value belonging to a measurement is the proportion of false positive measurements when we accept it as significant. The authors propose using the q value instead of the p value in genome-wide studies.

Free keywords: multiple testing, Bonferroni correction, one-step, multi-step, false discovery rate, q-value

List of abbreviations: FDR: false discovery rate; FWER: familywise error rate; SNP: single nucleotide polymorphism; PFP: proportion of false positives

INTRODUCTION

A common feature of all 'omics studies is the inspection of a large number of simultaneous measurements in a small number of samples. -
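A minimal sketch of the two families of corrections the abstract contrasts. The Benjamini-Hochberg step-up procedure shown here is the standard FDR-controlling method; full q-value estimation as the authors propose involves additional modeling beyond this sketch:

```python
def bonferroni(p_values, alpha=0.05):
    """One-step Bonferroni: reject p_i only if p_i <= alpha / m."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

def benjamini_hochberg(p_values, alpha=0.05):
    """Step-up FDR control: find the largest rank k such that
    p_(k) <= (k / m) * alpha and reject the k smallest p-values."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k_max = rank
    rejected = [False] * m
    for i in order[:k_max]:
        rejected[i] = True
    return rejected

# Illustrative p-values: FDR control rejects more than Bonferroni
ps = [0.001, 0.008, 0.029, 0.041, 0.20]
```

On these five p-values, Bonferroni (per-test threshold 0.01) rejects two hypotheses while Benjamini-Hochberg rejects three, illustrating the power gain that motivates FDR methods in genome-wide settings.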
Downloading of a Human Consciousness Into a Digital Computer Would Involve 'A Certain Loss of Our Finer Feelings and Qualities'
When We Can Trust Computers (and When We Can't)
Peter V. Coveney 1,2,* and Roger R. Highfield 3
1 Centre for Computational Science, University College London, Gordon Street, London WC1H 0AJ, UK, https://orcid.org/0000-0002-8787-7256
2 Institute for Informatics, Science Park 904, University of Amsterdam, 1098 XH Amsterdam, Netherlands
3 Science Museum, Exhibition Road, London SW7 2DD, UK, https://orcid.org/0000-0003-2507-5458

Keywords: validation; verification; uncertainty quantification; big data; machine learning; artificial intelligence

Abstract: With the relentless rise of computer power, there is a widespread expectation that computers can solve the most pressing problems of science, and even more besides. We explore the limits of computational modelling and conclude that, in the domains of science and engineering that are relatively simple and firmly grounded in theory, these methods are indeed powerful. Even so, the availability of code, data and documentation, along with a range of techniques for validation, verification and uncertainty quantification, are essential for building trust in computer-generated findings. When it comes to complex systems in domains of science that are less firmly grounded in theory, notably biology and medicine, to say nothing of the social sciences and humanities, computers can create the illusion of objectivity, not least because the rise of big data and machine learning pose new challenges to reproducibility, while lacking true explanatory power. We also discuss important aspects of the natural world which cannot be solved by digital means. In the long term, renewed emphasis on analogue methods will be necessary to temper the excessive faith currently placed in digital computation. -
Caveat Emptor: the Risks of Using Big Data for Human Development
Caveat emptor: the risks of using big data for human development
Siddique Latif 1,2, Adnan Qayyum 1, Muhammad Usama 1, Junaid Qadir 1, Andrej Zwitter 3, and Muhammad Shahzad 4
1 Information Technology University (ITU)-Punjab, Pakistan
2 University of Southern Queensland, Australia
3 University of Groningen, Netherlands
4 National University of Sciences and Technology (NUST), Pakistan

Abstract: The big data revolution promises to be instrumental in facilitating sustainable development in many sectors of life, such as education, health, and agriculture, and in combating humanitarian crises and violent conflicts. However, lurking beneath the immense promises of big data are some significant risks, such as (1) the potential use of big data for unethical ends; (2) its ability to mislead through reliance on unrepresentative and biased data; and (3) the various privacy and security challenges associated with data (including the danger of an adversary tampering with the data to harm people). These risks can have severe consequences, and a better understanding of them is the first step towards their mitigation. In this paper, we highlight the potential dangers associated with using big data, particularly for human development.

Index Terms: Human development, big data analytics, risks and challenges, artificial intelligence, and machine learning.

I. INTRODUCTION

Over the last decades, widespread adoption of digital applications has moved all aspects of human lives into the digital sphere. The commoditization of the data collection process due to increased digitization has resulted in a "data deluge" that continues to intensify, with a number of Internet companies dealing with petabytes of data on a daily basis. The term "big data" has been coined to refer to our emerging ability to collect, process, and analyze the massive amount of data being generated from multiple sources in order to obtain previously inaccessible insights. -
Reproducibility: Promoting Scientific Rigor and Transparency
Roma Konecky, PhD, Editorial Quality Advisor

What does reproducibility mean?
• Reproducibility is the ability to generate similar results each time an experiment is duplicated.
• Data reproducibility enables us to validate experimental results.
• Reproducibility is a key part of the scientific process; however, many scientific findings are not replicable.

The reproducibility crisis
• Around 2010, as part of a growing awareness that many scientific studies are not replicable, the phrase "reproducibility crisis" was coined.
• An initiative of the Center for Open Science conducted replications of 100 psychology experiments published in prominent journals (Science, 349 (6251), 28 Aug 2015). Out of 100 replication attempts, only 39 were successful.
• According to a poll of over 1,500 scientists, 70% had failed to reproduce at least one other scientist's experiment or their own (Nature 533 (437), 26 May 2016).
• Irreproducible research is a major concern because invalid claims slow scientific progress, waste time and resources, and contribute to the public's mistrust of science.

Factors contributing to irreproducibility (cited by over 80% of respondents; Nature | News Feature, 25 May 2016): underspecified methods, data dredging/p-hacking, low statistical power, technical errors, bias (omitting null results), and weak experimental design.

Underspecified methods
• When experimental details are omitted, the procedure needed to reproduce a study isn't clear.
• Underspecified methods are like providing only part of a recipe: like baking a loaf of bread, a "scientific recipe" should include all the details needed to reproduce the study. -
Rejecting Statistical Significance Tests: Defanging the Arguments
D. G. Mayo, Philosophy Department, Virginia Tech, 235 Major Williams Hall, Blacksburg VA 24060

Abstract: I critically analyze three groups of arguments for rejecting statistical significance tests (don't say 'significance', don't use P-value thresholds), as espoused in the 2019 editorial of The American Statistician (Wasserstein, Schirm and Lazar 2019). The strongest argument supposes that banning P-value thresholds would diminish P-hacking and data dredging. I argue that the opposite is true. In a world without thresholds, it would be harder to hold accountable those who fail to meet a predesignated threshold by dint of data dredging. Forgoing predesignated thresholds obstructs error control. If an account cannot say of any outcomes that they will not count as evidence for a claim—if all thresholds are abandoned—then there is no test of that claim. Giving up on tests means forgoing statistical falsification. The second group of arguments constitutes a series of strawperson fallacies in which statistical significance tests are too readily identified with classic abuses of tests; the logical principle of charity is violated. The third group rests on implicit arguments: the first presupposes, without argument, a different philosophy of statistics from the one underlying statistical significance tests; the second—appeals to popularity and fear—only exacerbates the 'perverse' incentives underlying today's replication crisis.

Key Words: Fisher, Neyman and Pearson, replication crisis, statistical significance tests, strawperson fallacy, psychological appeals, 2016 ASA Statement on P-values

1. Introduction and Background

Today's crisis of replication gives a new urgency to critically appraising proposed statistical reforms intended to ameliorate the situation. -
Sequential Analysis Exercises
Sequential analysis

Exercises:
1. What is the practical idea at the origin of sequential analysis?
2. In which cases is sequential analysis appropriate?
3. What are the supposed advantages of sequential analysis?
4. What are the values in the following table, from a three-step sequential analysis, used for?

   nb grains examined | accept | refuse
   20                 |   0    |   5
   40                 |   2    |   7
   60                 |   6    |   7

5. Do I have alpha and beta risks when I use sequential analysis?

5th ISTA seminar on statistics, S. Grégoire, August 1999

PRINCIPLE OF THE SEQUENTIAL ANALYSIS METHOD

The method originates from a practical observation. Some objects are so good or so bad that with a quick look we are able to accept or reject them. For other objects we need more information to decide whether they are good enough or not. In other words, we do not need to put the same effort into all the objects we control. As a result, we may save time or money if we decide about some objects at an early stage. The general basis of sequential analysis is to define, before the studies, sampling schemes which permit consistent decisions at different stages of examination. The aim is to compare the results of examinations made on an object under control to decision limits in order to take a decision. At each stage of examination, the decision is to accept the object if it is good enough (with a chosen probability), to reject the object if it is too bad (with a chosen probability), or to continue the study if we need more information before it falls into one of the two categories above. -
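The three-step plan in exercise 4 can be turned into decision logic. Reading the table columns as (grains examined, acceptance number, rejection number) is my interpretation of the reconstructed layout, and the sketch assumes cumulative defect counts at each stage:

```python
# Stages: after examining n grains in total, accept the lot if the
# cumulative number of defective grains is <= the accept number,
# reject if >= the refuse number, otherwise examine more grains.
PLAN = [  # (grains examined, accept if <=, refuse if >=)
    (20, 0, 5),
    (40, 2, 7),
    (60, 6, 7),  # final stage: accept/refuse numbers are adjacent,
                 # so a decision is always forced here
]

def sequential_decision(defects_by_stage):
    """defects_by_stage: cumulative defect counts observed after
    20, 40, and 60 grains. Returns (decision, grains examined)."""
    for (n, acc, rej), defects in zip(PLAN, defects_by_stage):
        if defects <= acc:
            return "accept", n
        if defects >= rej:
            return "reject", n
    return "continue", PLAN[-1][0]
```

A perfectly clean first sample of 20 grains ends the inspection immediately, which is exactly the time saving the text describes: effort is spent only on lots that are neither clearly good nor clearly bad.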
Simultaneous Confidence Intervals for Comparing Margins of Multivariate Binary Data
Computational Statistics and Data Analysis 64 (2013) 87-98. Journal homepage: www.elsevier.com/locate/csda

Simultaneous confidence intervals for comparing margins of multivariate binary data
Bernhard Klingenberg a,*, Ville Satopää b
a Department of Mathematics and Statistics, Williams College, Williamstown, MA 01267, United States
b Department of Statistics, The Wharton School of the University of Pennsylvania, Philadelphia, PA 19104-6340, United States

Article history: received 17 August 2012; received in revised form 12 February 2013; accepted 13 February 2013; available online 6 March 2013.

Abstract: In many applications two groups are compared simultaneously on several correlated binary variables for a more comprehensive assessment of group differences. Although the response is multivariate, the main interest is in comparing the marginal probabilities between the groups. Estimating the size of these differences under strong error control allows for a better evaluation of effects than can be provided by multiplicity-adjusted P-values. Simultaneous confidence intervals for the differences in marginal probabilities are developed through inverting the maximum of correlated Wald, score or quasi-score statistics. Taking advantage of the available correlation information leads to improvements in the joint coverage probability and power compared to straightforward Bonferroni adjustments. Estimating the correlation under the null is also explored. While computationally complex even in small dimensions, it does not result in marked improvements. Based on extensive simulation results, a simple approach that uses univariate score statistics together with their estimated correlation is proposed and recommended.

Keywords: Correlated binary responses; Familywise error rate; Marginal homogeneity; Multiple comparisons -
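The "straightforward Bonferroni adjustment" that the paper uses as its baseline can be sketched as follows. The function and variable names are illustrative, and the paper's own method (inverting maxima of correlated Wald, score or quasi-score statistics) is more involved than this Wald-interval baseline:

```python
import math
from statistics import NormalDist

def bonferroni_wald_cis(x1, n1, x2, n2, alpha=0.05):
    """Simultaneous Wald intervals for differences in marginal
    proportions across m correlated binary endpoints, using a
    Bonferroni adjustment (the baseline the paper improves on).
    x1[j], x2[j] are success counts for endpoint j in groups of
    size n1 and n2."""
    m = len(x1)
    # Each of the m intervals is built at level alpha / m
    z = NormalDist().inv_cdf(1 - alpha / (2 * m))
    cis = []
    for a, b in zip(x1, x2):
        p1, p2 = a / n1, b / n2
        se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
        d = p1 - p2
        cis.append((d - z * se, d + z * se))
    return cis

# Two endpoints, 50 subjects per group (hypothetical counts)
cis = bonferroni_wald_cis([30, 25], 50, [20, 18], 50)
```

Because the Bonferroni critical value ignores the correlation between endpoints, these intervals are wider than necessary, which is precisely the inefficiency the paper's correlation-aware intervals address.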
Sequential and Multiple Testing for Dose-Response Analysis
Drug Information Journal, Vol. 34, pp. 591-597, 2000. 0092-8615/2000. Printed in the USA. All rights reserved. Copyright 2000 Drug Information Association Inc.

SEQUENTIAL AND MULTIPLE TESTING FOR DOSE-RESPONSE ANALYSIS

Walter Lehmacher, PhD, Professor, Institute for Medical Statistics, Informatics and Epidemiology, University of Cologne, Germany
Meinhard Kieser, PhD, Head, Department of Biometry, Dr. Willmar Schwabe Pharmaceuticals, Karlsruhe, Germany
Ludwig Hothorn, PhD, Professor, Department of Bioinformatics, University of Hannover, Germany

For the analysis of dose-response relationships under the assumption of ordered alternatives, several global trend tests are available. Furthermore, there are multiple test procedures which can identify doses as effective or even as minimally effective. In this paper it is shown that the principles of multiple comparisons and interim analyses can be combined in flexible and adaptive strategies for dose-response analyses; these procedures control the experimentwise error rate.

Key Words: Dose-response; Interim analysis; Multiple testing

INTRODUCTION

The first question in dose-response analysis is whether a monotone trend across dose groups exists. This leads to global trend tests, where the global level (familywise error rate in a weak sense) is controlled. In clinical research, one is additionally interested in which doses are effective, which minimal dose is effective, or which dose steps are effective; in order to substantiate […] rate in a strong sense) are necessary. The global and multiplicity aspects are also reflected by the actual International Conference for Harmonization Guidelines (1) for dose-response analysis. Appropriate order restricted tests can be confirmatory for demonstrating a global trend and suitable multiple procedures can identify a recommended dose. For the global hypothesis, there are several suitable trend tests.
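One standard global trend test of the kind the introduction mentions is the Cochran-Armitage test for trend in proportions across ordered dose groups. This sketch assumes equally spaced dose scores, which is a common default rather than a requirement; the paper itself covers a broader family of trend tests:

```python
import math

def cochran_armitage_z(successes, totals, scores):
    """Cochran-Armitage test for linear trend in proportions across
    ordered dose groups. Returns a z-statistic; large positive
    values indicate an increasing dose-response trend."""
    n_total = sum(totals)
    p_bar = sum(successes) / n_total
    # Trend statistic: score-weighted deviations from the pooled rate
    t = sum(s * (x - n * p_bar)
            for s, x, n in zip(scores, successes, totals))
    var = p_bar * (1 - p_bar) * (
        sum(s * s * n for s, n in zip(scores, totals))
        - sum(s * n for s, n in zip(scores, totals)) ** 2 / n_total
    )
    return t / math.sqrt(var) if var > 0 else 0.0

# Hypothetical trial: responders out of 50 patients per dose group
z_up = cochran_armitage_z([10, 20, 30], [50, 50, 50], [0, 1, 2])
z_flat = cochran_armitage_z([20, 20, 20], [50, 50, 50], [0, 1, 2])
```

A monotone response pattern yields a clearly significant trend statistic, while a flat pattern yields a statistic near zero; rejecting the global hypothesis this way controls only the weak-sense error rate, which is why the paper then moves to multiple test procedures for identifying individual effective doses.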