Descriptive Analysis in Education: a Guide for Researchers
Recommended publications
Introduction to Biostatistics
Introduction to Biostatistics. Jie Yang, Ph.D., Associate Professor, Department of Family, Population and Preventive Medicine; Director, Biostatistical Consulting Core; Director, Biostatistics and Bioinformatics Shared Resource, Stony Brook Cancer Center. In collaboration with the Clinical Translational Science Center (CTSC) and the Biostatistics and Bioinformatics Shared Resource (BB-SR), Stony Brook Cancer Center (SBCC).

OUTLINE: What is biostatistics; what a biostatistician does (experiment design and clinical trial design, descriptive and inferential analysis, result interpretation); and what you should bring when consulting with a biostatistician.

WHAT IS BIOSTATISTICS: The science of (bio)statistics encompasses the design of biological/clinical experiments; the collection, summarization, and analysis of data from those experiments; and the interpretation of, and inference from, the results. See also How to Lie with Statistics (1954) by Darrell Huff: http://www.youtube.com/watch?v=PbODigCZqL8

GOAL OF STATISTICS (diagram): Sampling takes us from the population to a sample, and probability theory relates the sample back to the population. Descriptive statistics summarize each, while inferential statistics use sample statistics (x̄, s, p̂, ...) to draw conclusions about population parameters (μ, σ, π, ...).

PROPERTIES OF A "GOOD" SAMPLE: adequate sample size (statistical power) and random selection (representativeness). Sampling techniques: 1. simple random sampling; 2. stratified sampling; 3. systematic sampling; 4. cluster sampling; 5. convenience sampling.

STUDY DESIGN / EXPERIMENT DESIGN: Completely Randomized Design (CRD) - randomly assign the experimental units to the treatments.
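To make the sampling ideas above concrete, here is a minimal Python sketch (not part of the original slides; the population values, strata, and sample sizes are invented for illustration). It draws a simple random sample and a stratified sample, then computes the sample statistics x̄, s, and p̂ that serve as estimates of the population parameters μ, σ, and π.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: 10,000 subjects with a measurement and a binary trait.
population = [{"id": i,
               "stratum": "A" if i % 2 == 0 else "B",
               "value": random.gauss(100, 15),
               "trait": random.random() < 0.3}
              for i in range(10_000)]

# 1. Simple random sampling: every subject has the same chance of selection.
srs = random.sample(population, k=200)

# 2. Stratified sampling: sample within each stratum, then combine.
strata = {"A": [p for p in population if p["stratum"] == "A"],
          "B": [p for p in population if p["stratum"] == "B"]}
stratified = [s for group in strata.values() for s in random.sample(group, k=100)]

# Descriptive statistics on the sample serve as inferential estimates:
# x-bar estimates mu, s estimates sigma, p-hat estimates pi.
values = [p["value"] for p in srs]
x_bar = statistics.mean(values)
s = statistics.stdev(values)
p_hat = sum(p["trait"] for p in srs) / len(srs)

print(f"x̄ = {x_bar:.1f}, s = {s:.1f}, p̂ = {p_hat:.2f}")
```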
An Overview of Quantitative and Qualitative Data Collection Methods
AN OVERVIEW OF QUANTITATIVE AND QUALITATIVE DATA COLLECTION METHODS. Section III, 5. DATA COLLECTION METHODS: SOME TIPS AND COMPARISONS.

In the previous chapter, we identified two broad types of evaluation methodologies: quantitative and qualitative. In this section, we talk more about the debate over the relative virtues of these approaches and discuss some of the advantages and disadvantages of different types of instruments. In such a debate, two types of issues are considered: theoretical and practical.

Theoretical Issues. Most often these center on one of three topics:
· The value of the types of data
· The relative scientific rigor of the data
· Basic, underlying philosophies of evaluation

Value of the Data. Quantitative and qualitative techniques provide a tradeoff between breadth and depth, and between generalizability and targeting to specific (sometimes very limited) populations. For example, a quantitative data collection methodology such as a sample survey of high school students who participated in a special science enrichment program can yield representative and broadly generalizable information about the proportion of participants who plan to major in science when they get to college and how this proportion differs by gender. But at best, the survey can elicit only a few, often superficial reasons for this gender difference. On the other hand, separate focus groups (a qualitative technique related to a group interview) conducted with small groups of men and women students will provide many more clues about gender differences in the choice of science majors, and the extent to which the special science program changed or reinforced attitudes. The focus group technique is, however, limited in the extent to which findings apply beyond the specific individuals included in the groups.
Experimentation Science: a Process Approach for the Complete Design of an Experiment
Kansas State University Libraries, New Prairie Press, Conference on Applied Statistics in Agriculture, 1996 - 8th Annual Conference Proceedings.

EXPERIMENTATION SCIENCE: A PROCESS APPROACH FOR THE COMPLETE DESIGN OF AN EXPERIMENT. D. D. Kratzer, Ph.D., Pharmacia and Upjohn Inc., Kalamazoo, MI, and K. A. Ash, D.V.M., Ph.D., Town and Country Animal Hospital, Charlotte, MI.

Recommended Citation: Kratzer, D. D. and Ash, K. A. (1996). "Experimentation Science: A Process Approach for the Complete Design of an Experiment," Conference on Applied Statistics in Agriculture. https://doi.org/10.4148/2475-7772.1322. This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.

ABSTRACT: Experimentation Science is introduced as a process through which the necessary steps of experimental design are all sufficiently addressed. Experimentation Science is defined as a nearly linear process of objective formulation; selection of experimentation unit and decision variable(s); deciding treatment, design and error structure; defining the randomization, statistical analyses and decision procedures; outlining quality control procedures for data collection; and finally analysis, presentation and interpretation of results.
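As one illustration of the "defining the randomization" step in that process, here is a minimal Python sketch (my own example, not from the paper; the treatment names and unit counts are invented) that generates a completely randomized design assignment of experimental units to treatments.

```python
import random

def completely_randomized_design(units, treatments, seed=2024):
    """Randomly assign experimental units to treatments, as equally as possible."""
    rng = random.Random(seed)          # fixed seed so the randomization plan is reproducible
    shuffled = units[:]
    rng.shuffle(shuffled)
    # Cycle through treatments so group sizes differ by at most one unit.
    return {unit: treatments[i % len(treatments)]
            for i, unit in enumerate(shuffled)}

# Hypothetical example: 12 pens assigned to 3 diets.
pens = [f"pen_{i:02d}" for i in range(1, 13)]
plan = completely_randomized_design(pens, ["diet_A", "diet_B", "diet_C"])
for pen, diet in sorted(plan.items()):
    print(pen, "->", diet)
```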
Data Extraction for Complex Meta-Analysis (DECiMAL) Guide
Pedder, H., Sarri, G., Keeney, E., Nunes, V., & Dias, S. (2016). Data extraction for complex meta-analysis (DECiMAL) guide. Systematic Reviews, 5, 212. https://doi.org/10.1186/s13643-016-0368-4. Publisher's PDF (version of record), licensed CC BY; first published online via BioMed Central at http://systematicreviewsjournal.biomedcentral.com/articles/10.1186/s13643-016-0368-4.

Abstract: As more complex meta-analytical techniques such as network and multivariate meta-analyses become increasingly common, further pressures are placed on reviewers to extract data in a systematic and consistent manner. Failing to do this appropriately wastes time, resources and jeopardises accuracy. This guide (data extraction for complex meta-analysis, DECiMAL) suggests a number of points to consider when collecting data, primarily aimed at systematic reviewers preparing data for meta-analysis.
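As a loose illustration of extracting data in a systematic and consistent manner for meta-analysis, the Python sketch below records one row per study arm in a fixed set of fields. The layout and all values are my own assumptions for illustration, not the format prescribed by the DECiMAL guide.

```python
import csv

# One row per study arm: a consistent, analysis-ready layout (assumed, not DECiMAL's own).
FIELDS = ["study_id", "treatment", "n", "events", "mean", "sd"]

rows = [
    # Hypothetical extracted values for two randomised trials.
    {"study_id": "Smith2014", "treatment": "placebo", "n": 120, "events": 30, "mean": None, "sd": None},
    {"study_id": "Smith2014", "treatment": "drug_A",  "n": 118, "events": 18, "mean": None, "sd": None},
    {"study_id": "Jones2015", "treatment": "placebo", "n": 85,  "events": 22, "mean": None, "sd": None},
    {"study_id": "Jones2015", "treatment": "drug_B",  "n": 90,  "events": 15, "mean": None, "sd": None},
]

with open("extraction.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)  # every arm recorded the same way, ready for network meta-analysis
```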
Descriptive AI Ethics: Collecting and Understanding the Public Opinion
Descriptive AI Ethics: Collecting and Understanding the Public Opinion. GABRIEL LIMA, School of Computing, KAIST and Data Science Group, IBS, Republic of Korea; MEEYOUNG CHA, Data Science Group, IBS and School of Computing, KAIST, Republic of Korea.

There is a growing need for data-driven research efforts on how the public perceives the ethical, moral, and legal issues of autonomous AI systems. The current debate on the responsibility gap posed by these systems is one such example. This work proposes a mixed AI ethics model that allows normative and descriptive research to complement each other, by aiding scholarly discussion with data gathered from the public. We discuss its implications on bridging the gap between optimistic and pessimistic views towards AI systems' deployment.

1 INTRODUCTION. In light of the significant changes artificial intelligence (AI) systems bring to society, many scholars discuss whether and how their influence can be positive and negative [21]. As we start to encounter AI systems in various morally and legally salient environments, some have begun to explore how the current responsibility ascription practices might be adapted to meet such new technologies [19, 33]. A critical viewpoint today is that autonomous and self-learning AI systems pose a so-called responsibility gap [27]. These systems' autonomy challenges human control over them [13], while their adaptability leads to unpredictability. Hence, it might be infeasible to trace responsibility back to a specific entity if these systems cause any harm. Considering responsibility practices as the adoption of certain attitudes towards an agent [40], scholarly work has also posed the question of whether AI systems are appropriate subjects of such practices [15, 29, 37]; e.g., they might "have a body to kick," yet they "have no soul to damn" [4].
Causal Effects of Monetary Shocks: Semiparametric Conditional Independence Tests with a Multinomial Propensity Score
Causal effects of monetary shocks: Semiparametric conditional independence tests with a multinomial propensity score. Joshua D. Angrist (MIT) and Guido M. Kuersteiner (Georgetown University). This Revision: April 2010. This paper is a revised version of NBER working paper 10975, December 2004.

Acknowledgements: We thank the Romers for sharing their data; Alberto Abadie, Xiaohong Chen, Jordi Gali, Simon Gilchrist, Yanqin Fan, Stefan Hoderlein, Bernard Salanié and James Stock for helpful discussions; and seminar and conference participants at ASSA 2006, Auckland, Berkeley, BU, Columbia, Harvard-MIT, Georgetown, Indiana, Illinois, Konstanz, LAMES 2006, the 2004 NSF/NBER Time Series Conference, CIREQ Montreal, Rochester, Rutgers, Stanford, St. Gallen, the Triangle Econometrics Workshop, UCLA, Wisconsin, Zurich and the 2nd conference on evaluation research in Mannheim for helpful comments. Thanks also go to the editor and referees for helpful suggestions and comments. Kuersteiner gratefully acknowledges support from NSF grants SES-0095132 and SES-0523186.

Abstract: We develop semiparametric tests for conditional independence in time series models of causal effects. Our approach is motivated by empirical studies of monetary policy effects. Our approach is semiparametric in the sense that we model the process determining the distribution of treatment (the policy propensity score) but leave the model for outcomes unspecified. A conceptual innovation is that we adapt the cross-sectional potential outcomes framework to a time series setting. We also develop root-T consistent distribution-free inference methods for full conditional independence testing, appropriate for dependent data and allowing for first-step estimation of the (multinomial) propensity score.

Keywords: potential outcomes, monetary policy, causality, conditional independence, functional martingale difference sequences, Khmaladze transform, empirical Rosenblatt transform.

1 Introduction. The causal connection between monetary policy and real economic variables is one of the most important and widely studied questions in macroeconomics.
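The paper's test procedures go well beyond a short snippet, but as a rough illustration of the first step the abstract describes (modeling the distribution of a multi-valued policy "treatment" with a multinomial propensity score), here is a minimal Python sketch. The covariates and the three-category policy variable are simulated and purely illustrative; none of this reproduces the paper's own estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical quarterly data: two conditioning variables (say, lagged inflation and
# an output gap) and a three-category policy "treatment" (cut / hold / hike).
T = 200
X = rng.normal(size=(T, 2))
latent = 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=T)
policy = np.digitize(latent, bins=[-0.5, 0.5])    # 0 = cut, 1 = hold, 2 = hike

# First step: estimate the multinomial policy propensity score P(policy = j | X).
# With the default lbfgs solver, a multi-class target is fit as a multinomial model.
model = LogisticRegression(max_iter=1000)
model.fit(X, policy)
propensity = model.predict_proba(X)               # T x 3 matrix of fitted probabilities

# In the paper's framework, fitted scores like these feed into conditional
# independence tests of outcomes and policy shocks given the score.
print(propensity[:5].round(3))
```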
An Invitation to Qualitative Research
CHAPTER 1: An Invitation to Qualitative Research

In recent years, binge drinking has caused considerable concern among administrators at colleges and universities, compelled by statistics that show marked increases in such behavior. A qualitative researcher studying this topic would seek to go behind the statistics to understand the issue. Recently, we attended a faculty meeting that addressed the problem of binge drinking and heard concerned faculty and administrators suggest some of the following solutions:
• stricter campus policies with enforced consequences
• more faculty-student socials with alcohol for those over 21 years old
• more bus trips into the city to local sites such as major museums to get students interested in other pastimes
Although well-intentioned, these folks were grasping at straws. This is because although they were armed with statistics indicating binge drinking was prevalent and thus could identify a problem, they had no information about why this trend was occurring. Without understanding this issue on a meaningful level, it is difficult to remedy. At this point, we invite you to spend 5 to 10 minutes jotting down a list of questions you think are important to investigate as we try to better understand the phenomenon of binge drinking at college.

What Is Qualitative Research? The qualitative approach to research is a unique grounding (the position from which to conduct research) that fosters particular ways of asking questions and particular ways of thinking through problems. As noted in the opening discussion of binge drinking in college, the questions asked in this type of research usually begin with words like how, why, or what.
Reasoning with Qualitative Data: Using Retroduction with Transcript Data
Reasoning With Qualitative Data: Using Retroduction With Transcript Data. © 2019 SAGE Publications, Ltd. All Rights Reserved. This PDF has been generated from SAGE Research Methods Datasets.

Student Guide. Introduction: This example illustrates how different forms of reasoning can be used to analyse a given set of qualitative data. In this case, I look at transcripts from semi-structured interviews to illustrate how three common approaches to reasoning can be used. The first type (deductive approaches) applies pre-existing analytical concepts to data, the second type (inductive reasoning) draws analytical concepts from the data, and the third type (retroductive reasoning) uses the data to develop new concepts and understandings about the issues. The data source, a set of recorded interviews of teachers and students from the field of Higher Education, was collated by Dr. Christian Beighton as part of a research project designed to inform teacher educators about pedagogies of academic writing.

Interview Transcripts: Interviews, and their transcripts, are arguably the most common data collection tool used in qualitative research. This is because, while they have many drawbacks, they offer many advantages. Relatively easy to plan and prepare, interviews can be flexible: They are usually one-to-one but do not have to be so; they can follow pre-arranged questions, but again this is not essential; and unlike more impersonal ways of collecting data (e.g., surveys or observation), they …
Double Blind Trials Workshop
Double Blind Trials Workshop

Introduction: These activities demonstrate how double blind trials are run, explaining what a placebo is and how the placebo effect works, how bias is removed as far as possible, and how participants and trial medicines are randomised.

Curriculum Links: KS3: Science; KS4: Biology; SQA Access, Intermediate and Higher: Biology.

Keywords: double-blind trials, randomisation, observer bias, clinical trials, placebo effect, designing a fair trial, placebo.

Contents. Activities: Activity 1, Placebo Effect Activity; Activity 2, Observer Bias; Activity 3, Double Blind Trial. Materials: Role Cards for the Double Blind Trial Activity; Testing Layout.

Background Information: Medicines undergo a number of trials before they are declared fit for use (see the classroom activity on Clinical Research for details). In the trial in the second activity, pupils compare two potential new sunscreens. This type of trial is done with healthy volunteers to see if there are any side effects and to provide data to suggest the dosage needed. If there were no current best treatment, then this sort of trial would also be done with patients to test for the effectiveness of the new medicine.

How do scientists make sure that medicines are tested fairly? One thing they need to do is to find out if their tests are free of bias. Are the medicines really working, or do they just appear to be working? One difficulty in designing fair tests for medicines is the placebo effect. When patients are prescribed a treatment, especially by a doctor or expert they trust, the patient's own belief in the treatment can cause the patient to produce a response.
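To illustrate the randomisation and blinding ideas the workshop describes, here is a minimal Python sketch (my own illustration, not part of the workshop materials): participants are randomly assigned to the test sunscreen or a placebo, but everyone involved in testing sees only coded labels, and the key linking codes to treatments stays with a coordinator until the results are in.

```python
import random

random.seed(7)

participants = [f"P{i:02d}" for i in range(1, 13)]
treatments = ["sunscreen", "placebo"]

# The coordinator (not the testers or participants) holds the key: code -> real treatment.
codes = ["A", "B"]
random.shuffle(codes)
key = dict(zip(codes, treatments))

# Randomise participants to the two coded groups in equal numbers.
shuffled = participants[:]
random.shuffle(shuffled)
assignment = {p: codes[i % 2] for i, p in enumerate(shuffled)}

# Participants and observers see only the code, so neither knows who got the placebo.
for p in sorted(assignment):
    print(p, "receives product", assignment[p])

# Only after all responses are recorded is the key revealed:
# print(key)
```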
Observational Studies and Bias in Epidemiology
The Young Epidemiology Scholars Program (YES) is supported by The Robert Wood Johnson Foundation and administered by the College Board.

Observational Studies and Bias in Epidemiology. Manuel Bayona, Department of Epidemiology, School of Public Health, University of North Texas, Fort Worth, Texas; and Chris Olsen, Mathematics Department, George Washington High School, Cedar Rapids, Iowa.

Contents: Lesson Plan; The Logic of Inference in Science; The Logic of Observational Studies and the Problem of Bias; Characteristics of the Relative Risk When Random Sampling ... and Not; Types of Bias; Selection Bias; Information Bias; Conclusion; Take-Home, Open-Book Quiz (Student Version); Take-Home, Open-Book Quiz (Teacher's Answer Key); In-Class Exercise (Student Version); In-Class Exercise (Teacher's Answer Key); Bias in Epidemiologic Research (Examination) (Student Version); Bias in Epidemiologic Research (Examination with Answers) (Teacher's Answer Key).

Copyright © 2004 by College Entrance Examination Board. All rights reserved.

Lesson Plan. TITLE: Observational Studies and Bias in Epidemiology. SUBJECT AREA: Biology, mathematics, statistics, environmental and health sciences. GOAL: To identify and appreciate the effects of bias in epidemiologic research. OBJECTIVES: 1. Introduce students to the principles and methods for interpreting the results of epidemiologic research and bias. 2.
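Since the module's contents cover characteristics of the relative risk, here is a minimal Python sketch of how a relative risk is computed from a 2x2 table of exposure by disease status. The counts are invented for illustration and are not taken from the module.

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Relative risk = risk in the exposed group / risk in the unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical cohort: 200 exposed (40 cases), 300 unexposed (30 cases).
rr = relative_risk(40, 200, 30, 300)
print(f"Risk exposed = {40/200:.2f}, risk unexposed = {30/300:.2f}, RR = {rr:.1f}")
# RR = 2.0: in this made-up sample the exposed group has twice the risk of disease.
```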
Chapter 5 Statistical Inference
Chapter 5: Statistical Inference

CHAPTER OUTLINE: Section 1, Why Do We Need Statistics?; Section 2, Inference Using a Probability Model; Section 3, Statistical Models; Section 4, Data Collection; Section 5, Some Basic Inferences.

In this chapter, we begin our discussion of statistical inference. Probability theory is primarily concerned with calculating various quantities associated with a probability model. This requires that we know what the correct probability model is. In applications, this is often not the case, and the best we can say is that the correct probability measure to use is in a set of possible probability measures. We refer to this collection as the statistical model. So, in a sense, our uncertainty has increased; not only do we have the uncertainty associated with an outcome or response as described by a probability measure, but now we are also uncertain about what the probability measure is.

Statistical inference is concerned with making statements or inferences about characteristics of the true underlying probability measure. Of course, these inferences must be based on some kind of information; the statistical model makes up part of it. Another important part of the information will be given by an observed outcome or response, which we refer to as the data. Inferences then take the form of various statements about the true underlying probability measure from which the data were obtained. These take a variety of forms, which we refer to as types of inferences. The role of this chapter is to introduce the basic concepts and ideas of statistical inference. The most prominent approaches to inference are discussed in Chapters 6, 7, and 8.
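As a small concrete instance of the setup described above (a statistical model of possible probability measures plus observed data, yielding an inference), the Python sketch below assumes the data are draws from a normal distribution with unknown mean and reports a point estimate and an approximate 95% confidence interval for that mean. The data values are invented, and this is only one of the many types of inference the chapter goes on to discuss.

```python
import math
import statistics

# Assumed statistical model: independent draws from N(mu, sigma^2) with mu, sigma unknown.
# The observed data below are made up for illustration.
data = [4.8, 5.1, 5.6, 4.9, 5.3, 5.0, 5.4, 4.7, 5.2, 5.5]

n = len(data)
x_bar = statistics.mean(data)            # point estimate of mu
s = statistics.stdev(data)               # estimate of sigma

# Approximate 95% confidence interval for mu using the normal critical value 1.96;
# with n = 10 a t critical value (about 2.26) would be slightly wider and more exact.
half_width = 1.96 * s / math.sqrt(n)
print(f"x̄ = {x_bar:.2f}, 95% CI ≈ ({x_bar - half_width:.2f}, {x_bar + half_width:.2f})")
```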
Design of Experiments and Data Analysis
Copyright © 2012 IEEE. Reprinted, with permission, from Huairui Guo and Adamantios Mettas, "Design of Experiments and Data Analysis," 2012 Reliability and Maintainability Symposium, January 2012. This material is posted here with permission of the IEEE; such permission does not in any way imply IEEE endorsement of any of ReliaSoft Corporation's products or services. Internal or personal use of this material is permitted, but permission to reprint/republish it for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to [email protected]. By choosing to view this document, you agree to all provisions of the copyright laws protecting it.

2012 Annual RELIABILITY and MAINTAINABILITY Symposium. Design of Experiments and Data Analysis. Huairui Guo, Ph.D., CPR, and Adamantios Mettas, CPR, ReliaSoft Corporation, 1450 S. Eastside Loop, Tucson, AZ 85710 USA; e-mail: [email protected], [email protected]. Tutorial Notes © 2012 AR&MS.

SUMMARY & PURPOSE: Design of Experiments (DOE) is one of the most useful statistical tools in product design and testing. While many organizations benefit from designed experiments, others are getting data with little useful information and wasting resources because of experiments that have not been carefully designed. Design of Experiments can be applied in many areas including but not limited to: design comparisons, variable identification, design optimization, process control and product performance prediction. Different design types in DOE have been developed for different purposes.
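As a generic illustration of one common DOE design type (not an example taken from the tutorial itself), the Python sketch below builds the design matrix for a two-level full factorial experiment with three factors; the factor names and levels are invented.

```python
from itertools import product

# Three factors at two levels each: a 2^3 full factorial design (8 runs).
factors = {
    "temperature": [150, 200],   # degrees C (hypothetical)
    "pressure":    [10, 20],     # bar (hypothetical)
    "catalyst":    ["A", "B"],
}

names = list(factors)
design = [dict(zip(names, combo)) for combo in product(*factors.values())]

for run, settings in enumerate(design, start=1):
    print(f"run {run}: {settings}")
# Each of the 8 runs sets every factor to one of its two levels, so all
# main effects and interactions can be estimated from this design.
```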