Descriptive Statistics
Recommended publications
DATA COLLECTION METHODS (Section III.5)
AN OVERVIEW OF QUANTITATIVE AND QUALITATIVE DATA COLLECTION METHODS

Section III.5. Data Collection Methods: Some Tips and Comparisons

In the previous chapter, we identified two broad types of evaluation methodologies: quantitative and qualitative. In this section, we talk more about the debate over the relative virtues of these approaches and discuss some of the advantages and disadvantages of different types of instruments. In such a debate, two types of issues are considered: theoretical and practical.

Theoretical Issues
Most often these center on one of three topics:
· The value of the types of data
· The relative scientific rigor of the data
· Basic, underlying philosophies of evaluation

Value of the Data
Quantitative and qualitative techniques provide a tradeoff between breadth and depth, and between generalizability and targeting to specific (sometimes very limited) populations. For example, a quantitative data collection methodology such as a sample survey of high school students who participated in a special science enrichment program can yield representative and broadly generalizable information about the proportion of participants who plan to major in science when they get to college, and how this proportion differs by gender. But at best, the survey can elicit only a few, often superficial reasons for this gender difference. On the other hand, separate focus groups (a qualitative technique related to a group interview) conducted with small groups of men and women students will provide many more clues about gender differences in the choice of science majors, and about the extent to which the special science program changed or reinforced attitudes. The focus group technique is, however, limited in the extent to which findings apply beyond the specific individuals included in the groups.
Data Extraction for Complex Meta-Analysis (DECiMAL) Guide
Pedder, H., Sarri, G., Keeney, E., Nunes, V., & Dias, S. (2016). Data extraction for complex meta-analysis (DECiMAL) guide. Systematic Reviews, 5, 212. https://doi.org/10.1186/s13643-016-0368-4

This is the final published version of the article (version of record), first published online via BioMed Central (open access, CC BY licence).

Abstract: As more complex meta-analytical techniques such as network and multivariate meta-analyses become increasingly common, further pressures are placed on reviewers to extract data in a systematic and consistent manner. Failing to do this appropriately wastes time and resources and jeopardises accuracy. This guide (data extraction for complex meta-analysis (DECiMAL)) suggests a number of points to consider when collecting data, primarily aimed at systematic reviewers preparing data for meta-analysis.
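DECiMAL itself is a set of practical recommendations rather than software, but as a rough sketch of the kind of consistent, analysis-ready structure it encourages, the snippet below defines a minimal per-arm extraction record. The field names are illustrative assumptions, not DECiMAL's own template.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical extraction record; the fields are illustrative
# assumptions, not the template from the DECiMAL guide itself.
@dataclass
class TrialArm:
    study_id: str                  # unique study label, e.g. "Smith 2014"
    treatment: str                 # treatment code, kept consistent across studies
    n: int                         # number randomised to this arm
    events: Optional[int] = None   # for binary outcomes
    mean: Optional[float] = None   # for continuous outcomes
    sd: Optional[float] = None

# One row per arm keeps the data in the "long" format that most
# network meta-analysis packages expect as input.
arms = [
    TrialArm("Smith 2014", "placebo", n=120, events=30),
    TrialArm("Smith 2014", "drug_a", n=118, events=18),
]
print(arms[0])
```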
Lesson 7: Measuring Variability for Skewed Distributions (Interquartile Range)
Lesson 7: Measuring Variability for Skewed Distributions (Interquartile Range)

Student Outcomes
· Students explain why a median is a better description of a typical value for a skewed distribution.
· Students calculate the 5-number summary of a data set.
· Students construct a box plot based on the 5-number summary and calculate the interquartile range (IQR).
· Students interpret the IQR as a description of variability in the data.
· Students identify outliers in a data distribution.

Lesson Notes
Distributions that are not symmetrical pose some challenges in students' thinking about center and variability. The observation that a distribution is not symmetrical is straightforward. The difficult part is to select a measure of center and a measure of variability around that center. In Lesson 3, students learned that, because the mean can be affected by unusual values in the data set, the median is a better description of a typical data value for a skewed distribution. This lesson addresses what measure of variability is appropriate for a skewed data distribution. Students construct a box plot of the data using the 5-number summary and describe variability using the interquartile range.

Classwork
Exploratory Challenge 1/Exercises 1–3 (10 minutes): Skewed Data and Their Measure of Center
Verbally introduce the data set as described in the introductory paragraph and dot plot shown below.

Exploratory Challenge 1/Exercises 1–3: Skewed Data and Their Measure of Center
Consider the following scenario. A television game show, Fact or Fiction, was cancelled after nine shows. Many people watched the nine shows and were rather upset when it was taken off the air.
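Since the excerpt centers on the 5-number summary and the IQR, a short sketch may help make the computation concrete. The data values below are invented for illustration; they are not the lesson's actual dot plot values.

```python
from statistics import quantiles

# Invented example data (not the lesson's actual dot plot).
viewers = [5, 6, 6, 7, 8, 9, 10, 14, 20]

# Quartiles via the "inclusive" method; note that school textbooks
# sometimes use a slightly different hand rule for quartiles.
q1, q2, q3 = quantiles(viewers, n=4, method="inclusive")
five_number = (min(viewers), q1, q2, q3, max(viewers))
print("5-number summary:", five_number)   # (5, 6, 8, 10, 20)

# Interquartile range and the common 1.5*IQR outlier fences.
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [x for x in viewers if x < lower or x > upper]
print("IQR:", iqr, "outliers:", outliers)  # IQR: 4, outliers: [20]
```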
Uses for Census Data in Market Research
Uses for Census Data in Market Research
MRS Presentation, 4 July 2011. Andrew Zelin, Director, Sampling & RMC, Ipsos MORI

Why Census data is so critical to us
Use of Census data underlies almost all of our quantitative work activities and projects. It is needed to ensure that:
· samples are balanced and representative, i.e. sampling;
· results derived from samples reflect those of the target population, i.e. weighting;
· we understand how demographic and geographic factors affect the findings from surveys, i.e. analysis.
How would we do our jobs without the use of Census data?

What I will cover
Introduction, overview and why Census data is so critical; Census data for sampling; Census data for survey weighting; Census data for statistical analysis; use of SARs; the future (e.g. use of "hypercubes"); our work without Census data / use of alternatives; conclusions.

Census data for sampling
For random (pre-selected) samples:
· to define stratum sample sizes and sampling fractions;
· to define cluster size, especially if sampling with probability proportional to size (PPS);
· to define stratum / cluster membership.
For any type of sample:
· to balance samples across demographics that cannot be included as quota controls or stratification variables;
· to determine the number of sample points by geography;
· to create accurate booster samples (e.g. young people / ethnic groups).
For quota (non-probability) samples:
· to set appropriate quotas (based on demographics); data are used here at a very localised level, i.e. Census outputs.

Census data for survey weighting
Weighting ensures that the sample profile balances the population profile; see the sketch after this excerpt. Census data are needed to tell you what the population profile is, and hence to provide accurate weights, and hence an accurate measure of the design effect and the true level of statistical reliability / confidence intervals on your survey measures. Census data are also used pre-weighting, for the interviewer field department.
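As a rough illustration of the weighting step described above, this sketch computes simple post-stratification (cell) weights from a population profile. The age bands and shares are invented placeholders, not real Census figures.

```python
# Minimal post-stratification (cell-weighting) sketch.
# Population shares are invented placeholders, not real Census data.
population_share = {"16-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_counts = {"16-34": 150, "35-54": 250, "55+": 100}

n = sum(sample_counts.values())
weights = {
    band: population_share[band] / (count / n)
    for band, count in sample_counts.items()
}
# Every respondent in a band gets that band's weight, so the weighted
# sample profile matches the population profile.
print(weights)  # {'16-34': 1.0, '35-54': 0.7, '55+': 1.75}
```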
COMPUTING CORRELATION COEFFICIENTS: Ice Cream and Crime. Difficulty Scale ☺ ☺ (moderately hard)
5 COMPUTING CORRELATION COEFFICIENTS
Ice Cream and Crime
Difficulty Scale ☺ ☺ (moderately hard)

WHAT YOU WILL LEARN IN THIS CHAPTER
• Understanding what correlations are and how they work
• Computing a simple correlation coefficient
• Interpreting the value of the correlation coefficient
• Understanding what other types of correlations exist and when they should be used

WHAT ARE CORRELATIONS ALL ABOUT?
Measures of central tendency and measures of variability are not the only descriptive statistics that we are interested in using to get a picture of what a set of scores looks like. You have already learned that knowing the values of the one most representative score (central tendency) and a measure of spread or dispersion (variability) is critical for describing the characteristics of a distribution. However, sometimes we are just as interested in the relationship between variables, or, to be more precise, in how the value of one variable changes when the value of another variable changes. The way we express this interest is through the computation of a simple correlation coefficient. For example, what's the relationship between age and strength? Income and years of education? Memory skills and amount of drug use? Your political attitudes and the attitudes of your parents?

A correlation coefficient is a numerical index that reflects the relationship or association between two variables. The value of this descriptive statistic ranges between −1.00 and +1.00.
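To make the computation concrete, here is a minimal sketch of a simple (Pearson) correlation coefficient, the statistic the chapter goes on to compute. The temperature and ice-cream figures are invented for illustration.

```python
from statistics import mean, stdev

# Invented paired observations (not data from the chapter).
temperature = [63, 70, 75, 80, 85, 90]
ice_cream_sales = [110, 135, 150, 170, 190, 210]

def pearson_r(x, y):
    """Pearson correlation: average product of deviations,
    scaled by the two sample standard deviations."""
    mx, my, sx, sy = mean(x), mean(y), stdev(x), stdev(y)
    n = len(x)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / ((n - 1) * sx * sy)

r = pearson_r(temperature, ice_cream_sales)
print(round(r, 3))  # close to +1.0: a strong positive association
```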
Reasoning with Qualitative Data: Using Retroduction with Transcript Data
Reasoning With Qualitative Data: Using Retroduction With Transcript Data
Student Guide (SAGE Research Methods Datasets, © 2019 SAGE Publications, Ltd.)

Introduction
This example illustrates how different forms of reasoning can be used to analyse a given set of qualitative data. In this case, I look at transcripts from semi-structured interviews to illustrate how three common approaches to reasoning can be used. The first type (deductive approaches) applies pre-existing analytical concepts to data, the second type (inductive reasoning) draws analytical concepts from the data, and the third type (retroductive reasoning) uses the data to develop new concepts and understandings about the issues. The data source, a set of recorded interviews of teachers and students from the field of Higher Education, was collated by Dr. Christian Beighton as part of a research project designed to inform teacher educators about pedagogies of academic writing.

Interview Transcripts
Interviews, and their transcripts, are arguably the most common data collection tool used in qualitative research. This is because, while they have many drawbacks, they offer many advantages. Relatively easy to plan and prepare, interviews can be flexible: they are usually one-to-one but do not have to be so; they can follow pre-arranged questions, but again this is not essential; and unlike more impersonal ways of collecting data (e.g., surveys or observation), they …
Normal Probability Distribution
MATH2114, Week 8 (Thursday): The Normal Distribution
Slides adopted and modified from "Business Statistics: A First Course", 7th ed., by Levine, Szabat and Stephan, 2016, Pearson Education Inc.

Objective/Goals
· To compute probabilities from the normal distribution
· To use the normal distribution to solve practical problems
· To use the normal probability plot to determine whether a set of data is approximately normally distributed

Continuous Probability Distributions
A continuous variable is a variable that can assume any value on a continuum (it can assume an uncountable number of values): the thickness of an item, the time required to complete a task, the temperature of a solution, a height in inches. These can potentially take on any value, depending only on the ability to measure precisely and accurately.

The Normal Distribution
The normal distribution is bell shaped and symmetrical, with mean = median = mode. Its location is determined by the mean, μ, and its spread is determined by the standard deviation, σ. The random variable has an infinite theoretical range: −∞ to +∞.

The Normal Distribution Density Function
The formula for the normal probability density function is

f(X) = \frac{1}{\sqrt{2\pi}\,\sigma} \, e^{-\frac{1}{2}\left(\frac{X-\mu}{\sigma}\right)^{2}}

where e is the mathematical constant approximated by 2.71828, π is the mathematical constant approximated by 3.14159, μ is the population mean, σ is the population standard deviation, and X is any value of the continuous variable. By varying μ and σ, we obtain different normal distributions. (Figure: a family of normal density curves for different values of μ and σ; image from Wikipedia, public domain.)
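As a quick illustration of the deck's first objective (computing probabilities from the normal distribution), here is a minimal sketch using only the standard library; the mean and standard deviation are invented example values.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density f(x) = 1/(sqrt(2*pi)*sigma) * exp(-0.5*((x-mu)/sigma)**2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (math.sqrt(2 * math.pi) * sigma)

def normal_cdf(x, mu, sigma):
    """P(X <= x), computed via the error function; no SciPy needed."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Invented example: X ~ N(mu=100, sigma=15). P(85 <= X <= 115),
# i.e. within one standard deviation of the mean (about 0.683).
mu, sigma = 100, 15
p = normal_cdf(115, mu, sigma) - normal_cdf(85, mu, sigma)
print(round(p, 3))  # 0.683
```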
What Is Statistic?
What is Statistics? (OPRE 6301)

In today's world, we are constantly being bombarded with statistics and statistical information, for example: customer surveys, medical news, demographics, political polls, economic predictions, marketing information, sales forecasts, stock market projections, the Consumer Price Index, sports statistics. How can we make sense out of all this data? How do we differentiate valid from flawed claims?

What is Statistics?
"Statistics is a way to get information from data." Data are facts, especially numerical facts, collected together for reference or information; information is knowledge communicated concerning some particular fact. Statistics is a tool for creating an understanding from a set of numbers.

Humorous definitions: the science of drawing a precise line between an unwarranted assumption and a foregone conclusion; the science of stating precisely what you don't know.

An Example: Stats Anxiety
A business school student is anxious about their statistics course, since they've heard the course is difficult. The professor provides last term's final exam marks to the student. What can be discerned from this list of numbers? From the data (the list of last term's marks, e.g. 95, 89, 70, 65, 78, 57, ...), the student can extract new information about the statistics class: the class average, the proportion of the class receiving A's, the most frequent mark, the marks distribution, and so on. A sketch of these computations follows this excerpt.

Key Statistical Concepts
Population: a population is the group of all items of interest to a statistics practitioner; it is frequently very large and sometimes infinite. E.g. all 5 million Florida voters (per Example 12.5). Sample: a sample is a set of data drawn from the population.
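As a sketch of the computations the notes list, the snippet below works through a small marks data set. Only the six marks visible in the excerpt are from the source; the rest are invented padding, and the 90-and-above cutoff for an A is an assumption.

```python
from statistics import mean, mode

# First six marks appear in the excerpt; the rest are invented padding
# so the summary statistics have something to work with.
marks = [95, 89, 70, 65, 78, 57, 82, 78, 91, 60]

class_average = mean(marks)
proportion_a = sum(1 for m in marks if m >= 90) / len(marks)  # assuming A = 90+
most_frequent = mode(marks)

print(f"average={class_average:.1f}, P(A)={proportion_a:.0%}, mode={most_frequent}")
```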
Chapter 5 Statistical Inference
Chapter 5: Statistical Inference

CHAPTER OUTLINE
Section 1: Why Do We Need Statistics?
Section 2: Inference Using a Probability Model
Section 3: Statistical Models
Section 4: Data Collection
Section 5: Some Basic Inferences

In this chapter, we begin our discussion of statistical inference. Probability theory is primarily concerned with calculating various quantities associated with a probability model. This requires that we know what the correct probability model is. In applications, this is often not the case, and the best we can say is that the correct probability measure to use is in a set of possible probability measures. We refer to this collection as the statistical model. (For example, we might know only that a response is normally distributed with unknown mean and known variance, so the statistical model is the set of all N(μ, 1) distributions.) So, in a sense, our uncertainty has increased: not only do we have the uncertainty associated with an outcome or response as described by a probability measure, but now we are also uncertain about what the probability measure is.

Statistical inference is concerned with making statements or inferences about characteristics of the true underlying probability measure. Of course, these inferences must be based on some kind of information; the statistical model makes up part of it. Another important part of the information will be given by an observed outcome or response, which we refer to as the data. Inferences then take the form of various statements about the true underlying probability measure from which the data were obtained. These take a variety of forms, which we refer to as types of inferences. The role of this chapter is to introduce the basic concepts and ideas of statistical inference. The most prominent approaches to inference are discussed in Chapters 6, 7, and 8.
2018 Statistics Final Exam, Academic Success Programme, Columbia University. Instructor: Mark England
2018 Statistics Final Exam
Academic Success Programme, Columbia University. Instructor: Mark England
Name: UNI:

You have two hours to finish this exam.
§ Calculators are necessary for this exam.
§ Always show your working and thought process throughout, so that even if you don't get the answer totally correct I can give you as many marks as possible.
§ It has been a pleasure teaching you all. Try your best and good luck!

Question 1
True or False for the following statements? Warning: in this question (and only this question) you will get minus points for wrong answers, so it is not worth guessing randomly. For this question you do not need to explain your answer.
a) Correlation values can only be between 0 and 1. False: correlation values are between −1 and 1.
b) The Student's t distribution gives a lower percentage of extreme events occurring compared to a normal distribution. False: the Student's t distribution gives a higher percentage of extreme events.
c) The units of the standard deviation of X are the same as the units of X. True.
d) For a normal distribution, roughly 95% of the data is contained within one standard deviation of the mean. False: it is within two standard deviations of the mean.
e) For a left-skewed distribution, the mean is larger than the median. False: that describes a right-skewed distribution.
f) If an event A is independent of B, then P(A) = P(A|B) is always true. True.
g) For a given confidence level, increasing the sample size of a survey should increase the margin of error. False: increasing the sample size decreases the margin of error.
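As a numeric check on part (g), a minimal sketch assuming a 95% z-interval for a sample proportion shows the margin of error shrinking as the sample size grows.

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """95% margin of error for a sample proportion (z-interval)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Quadrupling the sample size halves the margin of error.
print(round(margin_of_error(0.5, 400), 4))   # 0.049
print(round(margin_of_error(0.5, 1600), 4))  # 0.0245
```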
Questionnaire Analysis Using SPSS
Questionnaire design and analysing the data using SPSS

Questionnaire design
For each decision you make when designing a questionnaire, there is likely to be a list of points for and against, just as there is for deciding on a questionnaire as the data-gathering vehicle in the first place. Before designing the questionnaire, the initial driver for its design has to be the research question: what are you trying to find out? After that is established, you can address the issues of how best to do it.

An early decision will be to choose the method by which your survey will be administered, i.e. how you will inflict it on your subjects. There are typically two underlying methods for conducting your survey: self-administered and interviewer-administered. A self-administered survey is more adaptable in some respects; it can be written (e.g. a paper questionnaire), sent by mail or email, or conducted electronically on the internet. Surveys administered by an interviewer can be done in person or over the phone, with the interviewer recording results on paper or directly onto a PC.

Deciding which is best for you will depend upon your question and the target population. For example, if questions are personal, then self-administered surveys can be a good choice. Self-administered surveys reduce the chance of bias sneaking in via the interviewer, but at the expense of having the interviewer available to explain the questions.

The hints and tips below about questionnaire design draw heavily on two excellent resources: SPSS Survey Tips, SPSS Inc. (2008), and Guide to the Design of Questionnaires, The University of Leeds (1996).
Data Collection, Data Quality and the History of Cause-of-Death Classification
Chapter 8: Data Collection, Data Quality and the History of Cause-of-Death Classification
Vladimir Shkolnikov, France Meslé, and Jacques Vallin

Until 1996, when INED published its work on trends in causes of death in Russia (Meslé et al. 1996), there had been no overall study of cause-specific mortality for the Soviet Union as a whole or for any of its constituent republics. Yet at least since the 1920s, all the republics had had a modern system for registering causes of death, and the information gathered had been subject to routine statistical use at least since the 1950s. The first reason for the gap in the literature was of course that, before perestroika, these data were not published systematically and, from 1974, had even been kept secret. A second reason was probably that researchers were often questioning the data quality; however, no serious study has ever proved this. On the contrary, it seems to us that all these data offer a very rich resource for anyone attempting to track and understand cause-specific mortality trends in the countries of the former USSR (in our case, Ukraine). Even so, a great deal of effort was required to trace, collect and computerize the various archived data deposits. This chapter will start with a brief description of the registration system and a quick summary of the difficulties we encountered and the data collection methods we used. We shall then review the results of some studies that enabled us to assess the quality of the data.