UNIT 2 DESCRIPTIVE STATISTICS


Structure
2.0 Introduction
2.1 Objectives
2.2 Meaning of Descriptive Statistics
2.3 Organising Data
    2.3.1 Classification
    2.3.2 Tabulation
    2.3.3 Graphical Presentation of Data
    2.3.4 Diagrammatical Presentation of Data
2.4 Summarising Data
    2.4.1 Measures of Central Tendency
    2.4.2 Measures of Dispersion
2.5 Use of Descriptive Statistics
2.6 Let Us Sum Up
2.7 Unit End Questions
2.8 Glossary
2.9 Suggested Readings

2.0 INTRODUCTION

We learned in the previous unit that, from the point of view of its functions, statistics may be descriptive, correlational or inferential. In this unit we shall discuss the various aspects of descriptive statistics, particularly how to organise and describe data. Most observations in this universe are subject to variability, especially observations related to human behaviour. It is a well known fact that attitude, intelligence, personality, etc. differ from individual to individual. In order to make a sensible description of a group, or to identify the group with reference to the observations/scores of its members, it is necessary to express those observations in a precise manner. For this purpose the observations need to be expressed as a single estimate which summarises them. Such a single estimate of a series of data, which summarises the distribution, is known as a parameter of the distribution; these parameters define the distribution completely. In this unit we will focus on descriptive statistics, its characteristic features and the various statistics used in this category.

2.1 OBJECTIVES

After going through this unit, you will be able to:
Define the nature and meaning of descriptive statistics;
Describe the methods of organising and condensing raw data;
Explain the concept and meaning of different measures of central tendency; and
Analyse the meaning of different measures of dispersion.

2.2 MEANING OF DESCRIPTIVE STATISTICS

Let us take up a hypothetical example of two groups of students taking a problem solving test. One group is taken as the experimental group, in that the subjects in this group are given training in problem solving, while the subjects in the other group do not get any training. Both groups were tested on problem solving, and the scores they obtained are given in the table below.

Table 2.1: Hypothetical performance scores of 10 students of experimental and control groups based on a problem solving experiment

    Experimental Condition     Control Condition
    (With Training)            (Without Training)
             4                          2
             8                          4
            12                         10
             8                          6
             7                          3
             9                          4
            15                          8
             6                          4
             5                          2
             8                          3

The scores obtained by the children in the two groups are the actual scores and are considered raw scores. A look at the table shows that the control group subjects scored rather lower than the subjects of the experimental group, which had undergone training. There are procedures to be followed and statistical tests to be used to describe the raw data and obtain some meaningful interpretation of them. This is what descriptive statistics is all about. Description of data performs two operations: (i) organising data and (ii) summarising data.
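As an illustration of what such summarising procedures produce, the minimal sketch below (not part of the original unit; it assumes Python and its standard statistics module, and the variable names are illustrative) condenses the raw scores of Table 2.1 into a few descriptive measures:

    from statistics import mean, median, stdev

    # Raw scores from Table 2.1 (hypothetical data from the unit)
    experimental = [4, 8, 12, 8, 7, 9, 15, 6, 5, 8]   # with training
    control = [2, 4, 10, 6, 3, 4, 8, 4, 2, 3]         # without training

    for label, scores in (("Experimental", experimental), ("Control", control)):
        print(label,
              "n =", len(scores),
              "mean =", round(mean(scores), 2),
              "median =", median(scores),
              "SD =", round(stdev(scores), 2),
              "range =", max(scores) - min(scores))

Running the sketch gives a mean of 8.2 for the experimental group against 4.6 for the control group, summarising in two numbers the pattern visible in the raw scores.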
2.3 ORGANISING DATA

Univariate analysis involves the examination, across cases, of one variable at a time. There are four major statistical techniques for organising the data. These are:
Classification
Tabulation
Graphical Presentation
Diagrammatical Presentation

2.3.1 Classification

A classification is a summary of the frequency of individual scores or ranges of scores for a variable. In the simplest form of a distribution, we list each value of the variable together with the number of persons who obtained that value. Once data are collected, researchers have to arrange them in a format from which they would be able to draw some conclusions. The arrangement of data in groups according to similarities is known as classification. Thus, by classifying data, the investigator moves a step beyond the raw scores and towards concrete conclusions. Classification is done with the following objectives:
Presenting data in a condensed form
Explaining the affinities and diversities of the data
Facilitating comparisons
Classification may be qualitative or quantitative.

Frequency distribution: A much clearer picture of the information in the scores emerges when the raw data are organised as a frequency distribution. A frequency distribution shows the number of cases falling within a given class interval or range of scores. It is a table that shows each score obtained by a group of individuals and how frequently each score occurred. A frequency distribution can be prepared with ungrouped data or with grouped data.

i) Ungrouped frequency distribution: An ungrouped frequency distribution may be constructed by listing all score values either from highest to lowest or from lowest to highest and placing a tally mark (/) beside each score every time it occurs. The frequency of occurrence of each score is denoted by 'f'.

ii) Grouped frequency distribution: If there is a wide range of score values in the data, it is difficult to get a clear picture of the series. In this case a grouped frequency distribution should be constructed to obtain a clear picture of the data. A grouped frequency distribution is a table that organises data into classes, that is, into groups of values describing one characteristic of the data. It shows the number of observations from the data set that fall into each class.

Construction of Frequency Distribution

Before proceeding we need to know a few of the terms used in the discussion that follows, for instance, a variable. A variable refers to the phenomenon under study. It may be the performance of students on a problem solving task, or it may be a method of teaching students that could affect their performance. Here the performance is the variable being studied, and the method of teaching is the variable being manipulated. Variables are of two kinds:
i) Continuous variables
ii) Discrete variables
Variables which can take all the possible values in a given specified range are termed continuous variables, for example, age (which can be measured in years, months, days, hours, minutes, seconds, etc.), weight (in lbs), height (in cms), and so on. On the other hand, variables which cannot take all the possible values within the given specified range are termed discrete variables, for example, number of children, marks obtained in an examination (out of 200), etc.

Preparation of Frequency Distribution

To prepare a frequency distribution, we first find the range of the given data, that is, the difference between the highest and lowest scores. This tells us how widely the scores are spread.
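As a minimal sketch of these two steps (not part of the original unit; it assumes Python and uses the experimental scores of Table 2.1), the range and an ungrouped frequency distribution can be obtained as follows:

    from collections import Counter

    # Experimental group scores from Table 2.1 (hypothetical data from the unit)
    scores = [4, 8, 12, 8, 7, 9, 15, 6, 5, 8]

    # Range: difference between the highest and the lowest score
    print("Range =", max(scores) - min(scores))   # 15 - 4 = 11

    # Ungrouped frequency distribution: each score value listed from highest
    # to lowest, with a tally mark for every occurrence and its frequency 'f'
    freq = Counter(scores)
    for value in sorted(freq, reverse=True):
        print("score", value, " tally", "/" * freq[value], " f =", freq[value])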
Prior to the construction of any grouped frequency distribution, it is important to decide the following:

1) The number of class intervals: There are no hard and fast rules regarding the number of classes into which the data should be grouped. If there are very few scores, it is useless to have a large number of class intervals. Ordinarily, the number of classes should be between 5 and 30.

2) Limits of each class interval: Another factor used in determining the number of classes is the size, width or range of the class, which is known as the 'class interval' and is denoted by 'i'. Class intervals should be of uniform width, resulting in same-size classes throughout the frequency distribution. The width of a class should be a whole number, conveniently divisible by 2, 3, 5, 10 or 20.

There are three methods for describing the class limits of a distribution:
i) Exclusive method
ii) Inclusive method
iii) True or actual class method

i) Exclusive method: In this method of class formation, the classes are so formed that the upper limit of one class also becomes the lower limit of the next class. The exclusive method of classification ensures continuity between two successive classes. In this classification it is presumed that a score equal to the upper limit of a class is excluded from it; that is, a score of 40 will be included in the class 40 to 50 and not in the class 30 to 40.

ii) Inclusive method: In this method the classification includes scores which are equal to the upper limit of the class. The inclusive method is preferred when measurements are given in whole numbers.

iii) True or actual class method: In the inclusive method the upper limit of a class is not equal to the lower limit of the next class, so there is no continuity between the classes. However, many statistical measures require continuous classes. To have continuous classes it is assumed that an observation or score does not just represent a point on a continuous scale but an interval of unit length of which the given score is the middle point. Thus, mathematically, a score extends from 0.5 units below to 0.5 units above its face value on the continuum. These class limits are known as true or actual class limits.

Types of frequency distributions: There are various ways to arrange the frequencies of a data array, depending on the requirements of the statistical analysis or the study. A couple of them are discussed below.

Relative frequency distribution: A relative frequency distribution is a distribution that indicates the proportion of the total number of cases observed at each score value or interval of score values.

Cumulative frequency distribution: Sometimes the investigator is interested in knowing the number of observations less than a particular value. This is possible by computing the cumulative frequency. The cumulative frequency corresponding to a class interval is the sum of the frequency for that class and the frequencies of all classes prior to it.
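To make these ideas concrete, the sketch below (a minimal illustration, not part of the original unit; the class width of 3 and the starting lower limit of 3 are illustrative choices) builds a grouped frequency distribution for the experimental scores of Table 2.1 using the exclusive method, together with the relative and cumulative frequencies:

    # Experimental group scores from Table 2.1 (hypothetical data from the unit)
    scores = [4, 8, 12, 8, 7, 9, 15, 6, 5, 8]

    i = 3        # class interval width 'i' (illustrative choice)
    lower = 3    # lower limit of the lowest class (illustrative choice)

    n = len(scores)
    cumulative = 0
    while lower <= max(scores):
        upper = lower + i
        # Exclusive method: a score equal to the upper limit falls in the next class
        f = sum(lower <= s < upper for s in scores)
        cumulative += f
        print("class", lower, "-", upper,
              " f =", f,
              " relative f =", round(f / n, 2),
              " cumulative f =", cumulative)
        lower = upper

The cumulative frequency of the last class equals the total number of cases (10 here), which is consistent with the definition given above.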