
Mathematics Letters 2019; 5(3): 33-40
http://www.sciencepublishinggroup.com/j/ml
doi: 10.11648/j.ml.20190503.11
ISSN: 2575-503X (Print); ISSN: 2575-5056 (Online)

The Derivation and Choice of Appropriate Test Statistic (Z, t, F and Chi-Square Test) in Research Methodology

Teshome Hailemeskel Abebe

Department of Economics, Ambo University, Ambo, Ethiopia

To cite this article: Teshome Hailemeskel Abebe. The Derivation and Choice of Appropriate Test Statistic (Z, t, F and Chi-Square Test) in Research Methodology. Mathematics Letters, Vol. 5, No. 3, 2019, pp. 33-40. doi: 10.11648/j.ml.20190503.11

Received: April 18, 2019; Accepted: July 16, 2019; Published: January 7, 2020

Abstract: The main objective of this paper is to choose an appropriate test statistic for research methodology. Specifically, this article explores the concept of the statistical hypothesis test, the derivation of the test statistic, and its role in research methodology. It also shows the basic formulation and testing of hypotheses using test statistics, since choosing an appropriate test statistic is one of the most important tools of research. To test a hypothesis, various statistical tests such as the Z-test, Student's t-test, the F-test (as in ANOVA) and the chi-square test are identified. In testing the mean of a population or comparing the means from two continuous populations, the z-test and t-test are used, while the F-test is used for comparing more than two means and for testing equality of variances. The chi-square test is used for testing independence, goodness of fit, and the population variance of a single sample of categorical data. Therefore, choosing an appropriate test statistic gives valid results about hypothesis testing.

Keywords: Test Statistic, Z-test, Student's t-test, F Test (Like ANOVA), Chi Square Test, Research Methodology

1. Introduction

The application of statistical tests in scientific research has increased dramatically in recent years in almost every science. Despite this wide applicability, there is misunderstanding in choosing the appropriate test statistic for a given problem. Thus, the main objective of this study is to give direction on which test statistic is appropriate when conducting statistical hypothesis testing in research methodology.

Choosing the appropriate statistical test may depend on the type of data (continuous or categorical); i.e., whether a t-test, z-test, F-test or chi-square test should be used depends on the nature of the data. For example, the two-sample independent t-test and z-test are used if the two samples are independent, a paired z-test or t-test is used if the two samples are dependent, while the F-test can be used if the samples are independent.

Therefore, testing hypotheses about the mean of a single population, comparing the means from two or more populations (such as one-way ANOVA), testing the variance of a single population using the chi-square test, and testing two or several means and variances using the F-test are the main topics of this paper.

2. Derivation of Z, t, F and Chi-Square Test Statistics

Theorem 1: If we draw independent random samples $X_1, X_2, \ldots, X_n$ from a population, compute the mean $\bar{X}$, and repeat this process many times, then $\bar{X}$ is approximately normal. Since assumptions are part of the "if" part, the conditions used to deduce the distribution of a statistic, the $\chi^2$, t, and F distributions all depend on a normal "parent" population.

2.1. Chi-Square Distribution

The chi-square distribution is a theoretical distribution which has wide application in statistical work. The term "chi-square" (denoted by the Greek letter $\chi^2$ and pronounced "chi") is used to define this distribution. The chi-square theoretical distribution is a continuous distribution, but the statistic is obtained in a discrete manner, based on discrete differences between the observed and expected values. Chi-square ($\chi^2$) is the sum of independent squared normal random variables with mean 0 and variance 1 (i.e., standard normal random variables). Suppose we pick $X_1, X_2, \ldots, X_n$ from a normal population with mean $\mu$ and variance $\sigma^2$, that is, $X_i \sim N(\mu, \sigma^2)$. Then the $Z_i = (X_i - \mu)/\sigma \sim N(0,1)$ are independent standard normal variables, and the sum of $K$ independent squared standard normal variables has a chi-square distribution with $K$ degrees of freedom:

$$\chi^2_K = \sum_{i=1}^{K} Z_i^2$$

Replacing $\mu$ by the sample mean $\bar{X}$ costs one degree of freedom, giving

$$\chi^2 = \sum_{i=1}^{n} \frac{(X_i - \bar{X})^2}{\sigma^2} = \frac{(n-1)s^2}{\sigma^2} \sim \chi^2_{n-1}$$

Based on the Central Limit Theorem, the limit of the chi-square distribution is normal (as $K \to \infty$). The chi-square test statistic is given by

$$\chi^2 = \frac{(n-1)s^2}{\sigma_0^2} \quad (1)$$

2.2. The F Distribution

$\mathcal{F}_{u,v}$ is the ratio of two independent chi-squared random variables, each divided by their respective degrees of freedom, as given by:

$$\mathcal{F}_{u,v} = \frac{\chi^2_u / u}{\chi^2_v / v} \quad (2)$$

Since the $\chi^2$'s depend on the normal distribution, the $\mathcal{F}$ distribution also depends on the normal distribution. The limiting distribution of $\mathcal{F}_{u,v}$ as $v \to \infty$ is $\chi^2_u / u$, because $\chi^2_v / v \to 1$ as $v \to \infty$.

2.3. Student's t-Distribution

The t distribution is a sampling distribution which is frequently used to evaluate hypotheses regarding the mean of continuous variables. Student's t-distribution is quite similar to the normal distribution, but the exact shape of the t-distribution depends on the sample size; as the sample size increases, the t-distribution approaches the normal distribution. Unlike the standard normal distribution, which has a standard deviation of 1, the standard deviation of the t distribution varies with an entity known as the degrees of freedom.

The t statistic for the mean is derived as the ratio of a standard normal variable to the square root of an independent chi-squared variable divided by its degrees of freedom:

$$t = \frac{\bar{X} - \mu}{s/\sqrt{n}} = \frac{(\bar{X} - \mu)/(\sigma/\sqrt{n})}{\sqrt{\left[\sum (X_i - \bar{X})^2/\sigma^2\right]/(n-1)}} = \frac{Z}{\sqrt{\chi^2_{n-1}/(n-1)}} \sim t_{n-1}$$

where $n-1$ is the degree of freedom. Moreover, a squared $t_v$ equals the ratio of a squared standard normal variable to a chi-squared variable divided by its degrees of freedom: $t_v^2 = Z^2/(\chi^2_v/v) = (\chi^2_1/1)/(\chi^2_v/v) = \mathcal{F}_{1,v}$, where $Z \sim N(0,1)$ and $Z^2 \sim \chi^2_1$. As $v \to \infty$, $\chi^2_v/v \to 1$, so $t_v \to N(0,1)$; taking the square root of both sides of the $\mathcal{F}_{1,v}$ relation confirms this. The Student's t test statistic based on the normal distribution therefore becomes:

$$t = \frac{Z}{\sqrt{\chi^2_v/v}} \sim t_v \quad (3)$$

Summary of relationships (let $Z, Z_i \sim N(0,1)$ be independent):

| Distribution | Definition | Parent | Limiting distribution |
| --- | --- | --- | --- |
| $\chi^2_v$ | $\sum_{i=1}^{v} Z_i^2$, independent $Z_i$'s | Normal | As $v \to \infty$, $\chi^2_v \to$ Normal |
| $\mathcal{F}_{u,v}$ | $(\chi^2_u/u)/(\chi^2_v/v)$, independent chi-squares | Normal | As $v \to \infty$, $\mathcal{F}_{u,v} \to \chi^2_u/u$ |
| $t_v$ | $Z/\sqrt{\chi^2_v/v}$ | Normal | As $v \to \infty$, $t_v \to N(0,1)$ |

Note: $t_v^2 = \mathcal{F}_{1,v}$, since $t_v^2 = Z^2/(\chi^2_v/v) = (\chi^2_1/1)/(\chi^2_v/v)$.
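To make these relationships concrete, the following illustrative Python sketch (not part of the original paper; NumPy and SciPy are assumed available, and the population parameters are hypothetical) simulates repeated sampling and checks each statistic against its theoretical distribution with a Kolmogorov-Smirnov distance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, reps = 10, 100_000
v = n - 1  # degrees of freedom

# Repeated samples from a hypothetical normal "parent" population (mu=5, sigma=2).
x = rng.normal(5.0, 2.0, size=(reps, n))
xbar = x.mean(axis=1)
s2 = x.var(axis=1, ddof=1)

# (n-1)s^2 / sigma^2 should follow a chi-square with n-1 d.f. (Section 2.1).
chi2_stat = (n - 1) * s2 / 2.0**2
print(stats.kstest(chi2_stat, "chi2", args=(v,)).statistic)  # KS distance near 0

# (xbar - mu) / (s / sqrt(n)) should follow Student's t with n-1 d.f. (eq. 3).
t_stat = (xbar - 5.0) / np.sqrt(s2 / n)
print(stats.kstest(t_stat, "t", args=(v,)).statistic)  # near 0

# A ratio of independent chi-squares over their d.f. should be F (eq. 2).
f_stat = (rng.chisquare(3, reps) / 3) / (rng.chisquare(v, reps) / v)
print(stats.kstest(f_stat, "f", args=(3, v)).statistic)  # near 0
```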

3. Test Statistic Approach of Hypothesis Testing

Hypotheses are predictions about the relationship among two or more variables or groups based on a theory or previous research [3]. Hence, hypothesis testing is the art of testing whether variation between two sample distributions can be explained by random chance or not. The three approaches to hypothesis testing are the test statistic, the p-value and the confidence interval. In this paper, we focus only on the test statistic approach. A test statistic is a function of a random sample, and is therefore a random variable. When we compute the statistic for a given sample, we obtain an outcome of the test statistic. In order to perform a statistical test we should know the distribution of the test statistic under the null hypothesis. This distribution depends largely on the assumptions made in the model. If the specification of the model includes the assumption of normality, then the appropriate statistical distribution is the normal distribution or any of the distributions associated with it, such as the chi-square, Student's t, or Snedecor's F.

3.1. Z-tests (Large Sample Case)

A Z-test is any statistical test for which the distribution of the test statistic under the null hypothesis can be approximated by a normal distribution [17]. In this paper, the Z-test is used for testing the significance of a single population mean and of the difference of two population means, if our sample is taken from a normal distribution with known variance or if our sample size is large enough to invoke the Central Limit Theorem (usually $n \geq 30$ is a good rule of thumb).

3.1.1. One-Sample Z-test for Mean

A one-sample Z-test helps to determine whether the population mean $\mu$ is equal to a hypothesized value $\mu_0$ when the sample is taken from a normal distribution with known variance $\sigma^2$. Recall Theorem 1. If $X_1, X_2, \ldots, X_n$ are random variables, then $\bar{X} = \frac{1}{n}\sum X_i$. In this case, the mean of the sample from the population is not constant; rather, it varies depending on which sample drawn from the population is chosen. Therefore, the sample mean is a random variable with its own distribution, called the sampling distribution.

Lemma 1. If $\bar{X}$ is the random variable of the sample mean of all the random samples of size $n$ from the population with expected value $E(X)$ and variance $Var(X)$, then the expected value and variance of $\bar{X}$ are $E(\bar{X}) = E(X)$ and $Var(\bar{X}) = Var(X)/n$, respectively.

Proof: consider all of the possible samples of size $n$ from a population with expected value $E(X)$ and variance $Var(X)$. If a sample $X_1, \ldots, X_n$ is chosen, each $X_i$ comes from the same population, so each has the same expected value $E(X)$ and variance $Var(X)$. Then

$$E(\bar{X}) = E\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n}\sum_{i=1}^{n} E(X_i) = \frac{1}{n}\,n\,E(X) = E(X)$$

$$Var(\bar{X}) = Var\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n^2}\sum_{i=1}^{n} Var(X_i) = \frac{1}{n^2}\,n\,Var(X) = \frac{Var(X)}{n}$$

The first step is to state the null hypothesis, which is a statement about a parameter of the population(s), labeled $H_0$, and the alternative hypothesis, which is a statement about a parameter of the population(s) that is opposite to the null hypothesis, labeled $H_1$. The null and alternative hypotheses (two-tail test) are given by $H_0: \mu = \mu_0$ against $H_1: \mu \neq \mu_0$.

Theorem 2: If $X_1, X_2, \ldots, X_n$ are normally distributed random variables with mean $\mu$ and variance $\sigma^2$, then the standardized random variable Z (one-sample Z-test) has a standard normal distribution given by:

$$Z = \frac{\bar{X} - \mu_0}{\sigma/\sqrt{n}} \sim N(0,1) \quad (4)$$

where $\bar{X}$ is the sample mean, $\mu_0$ is the hypothesized population mean under the null hypothesis, $\sigma$ is the population standard deviation and $n$ is the sample size; recall that $\sigma_{\bar{X}} = \sigma/\sqrt{n}$. However, according to Joginder K. [6], if the population standard deviation is unknown but $n \geq 30$, then, given the Central Limit Theorem, the test statistic is given by:

$$Z = \frac{\bar{X} - \mu_0}{s/\sqrt{n}} \quad (5)$$

where $s^2 = \frac{1}{n-1}\sum (X_i - \bar{X})^2$.

3.1.2. Two-Sample Z-test (When Variances Are Unequal)

A two-sample Z-test can be applied to investigate the significance of the difference between the means of two populations when the population variances are known and unequal, but both distributions are normal and independently drawn. Suppose we have independent random samples of size $n$ and $m$ from two normal populations having means $\mu_X$ and $\mu_Y$ and known variances $\sigma_X^2$ and $\sigma_Y^2$, respectively, and that we want to test the null hypothesis $H_0: \mu_X - \mu_Y = \delta$, where $\delta$ is a given constant, against one of the alternatives, e.g. $H_1: \mu_X - \mu_Y \neq \delta$. We can now derive a test statistic as follows.

Lemma 2. If $X_1, X_2, \ldots, X_n$ and $Y_1, Y_2, \ldots, Y_m$ are independent normally distributed random variables with $\bar{X} \sim N(\mu_X, \sigma_X^2/n)$ and $\bar{Y} \sim N(\mu_Y, \sigma_Y^2/m)$, then $\bar{X} - \bar{Y} \sim N\!\left(\mu_X - \mu_Y, \frac{\sigma_X^2}{n} + \frac{\sigma_Y^2}{m}\right)$.

Proof: by Lemma 1, $\bar{X} \sim N(\mu_X, \sigma_X^2/n)$ and $\bar{Y} \sim N(\mu_Y, \sigma_Y^2/m)$. Then $\bar{X} - \bar{Y} = \bar{X} + (-\bar{Y}) \sim N\!\left(\mu_X - \mu_Y, \frac{\sigma_X^2}{n} + (-1)^2\frac{\sigma_Y^2}{m}\right)$.

Then the appropriate test statistic for this test is given by:

$$Z = \frac{(\bar{X} - \bar{Y}) - \delta}{\sqrt{\frac{\sigma_X^2}{n} + \frac{\sigma_Y^2}{m}}} \sim N(0,1)$$

If we do not know the variances of both distributions, then we may use:

$$Z = \frac{(\bar{X} - \bar{Y}) - \delta}{\sqrt{\frac{s_X^2}{n} + \frac{s_Y^2}{m}}} \quad (6)$$

provided $n$ and $m$ are large enough to invoke the Central Limit Theorem, where $n$ is the sample size of one group (X) and $m$ is the sample size of the other group (Y); $\bar{X} = \frac{1}{n}\sum X_i$ is the sample mean of one group, while the sample mean of the other group is $\bar{Y} = \frac{1}{m}\sum Y_i$, with $s_X^2 = \frac{1}{n-1}\sum (X_i - \bar{X})^2$ and $s_Y^2 = \frac{1}{m-1}\sum (Y_i - \bar{Y})^2$.
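As an illustration, a minimal Python sketch of the one-sample and two-sample Z-tests of equations (4)-(6) follows (the data and hypothesized values are hypothetical; NumPy and SciPy are assumed):

```python
import numpy as np
from scipy import stats

def one_sample_z(x, mu0, sigma=None):
    """Z statistic and two-tail p-value; uses s when sigma is unknown (eq. 4/5)."""
    x = np.asarray(x, dtype=float)
    sd = sigma if sigma is not None else x.std(ddof=1)
    z = (x.mean() - mu0) / (sd / np.sqrt(len(x)))
    return z, 2 * stats.norm.sf(abs(z))

def two_sample_z(x, y, delta=0.0):
    """Two-sample Z with unknown, unequal variances (eq. 6)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    z = (x.mean() - y.mean() - delta) / se
    return z, 2 * stats.norm.sf(abs(z))

# Hypothetical large samples (n >= 30) so the CLT applies.
rng = np.random.default_rng(1)
x = rng.normal(52, 10, 40)
y = rng.normal(48, 12, 35)
print(one_sample_z(x, mu0=50))
print(two_sample_z(x, y))
```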

3.1.3. Two-Sample Z-test (When Variances Are Equal) for Mean

It can be applied to investigate the significance of the difference between the means of two populations when the population variances are known and equal, but both distributions are normal. The null and alternative hypotheses (two-tail test) are given by $H_0: \mu_X - \mu_Y = \mu_D$ and $H_1: \mu_X - \mu_Y \neq \mu_D$, respectively. Then the test statistic is:

$$Z = \frac{(\bar{X} - \bar{Y}) - \mu_D}{\sigma\sqrt{\frac{1}{n} + \frac{1}{m}}} \sim N(0,1)$$

where the standard error of the difference is $\sigma\sqrt{\frac{1}{n} + \frac{1}{m}}$. If we do not know the common variance of both distributions, then we may use:

$$Z = \frac{(\bar{X} - \bar{Y}) - \mu_D}{s_p\sqrt{\frac{1}{n} + \frac{1}{m}}} \quad (7)$$

provided $n$ and $m$ are large enough to invoke the Central Limit Theorem.

3.1.4. Paired Sample Z-test

According to Banda Gerald [2], words like dependent, repeated, before and after, matched pairs, paired and so on are hints for dependent samples. Paired samples (also called dependent samples) are used for testing the population mean of paired differences. A paired Z-test helps to determine whether the mean difference between paired observations of randomly drawn samples is significant. Statistically, it is equivalent to a one-sample Z-test on the differences between the paired observations. The null and alternative hypotheses (two-tail test) are given by $H_0: \mu_D = \mu_0$ and $H_1: \mu_D \neq \mu_0$, respectively.

Theorem 3. If $X_1, X_2, \ldots, X_n$ and $Y_1, Y_2, \ldots, Y_n$ are normally distributed random variables, but $X_i$ and $Y_i$ are dependent, then the standardized random variable is:

$$Z = \frac{\bar{D} - \mu_D}{\sigma_D/\sqrt{n}} \sim N(0,1) \quad (8)$$

where $D_i = X_i - Y_i$ is a pairwise difference, $\bar{D} = \frac{1}{n}\sum D_i$ is the sample mean of the differences between the first and second results of participants in the study, $\mu_D$ is the mean value of the pairwise differences under the null hypothesis, and $\sigma_D$ is the population standard deviation of the pairwise differences. The basic reason is that $X_i$ and $Y_i$ are normal random variables, so $D_i = X_i - Y_i$ is also a normal random variable; thus $D_1, D_2, \ldots, D_n$ can be thought of as a sequence of random variables.

3.2. Student's t-test

Unfortunately, Z-tests require either that the population is normally distributed with a known variance, or that the sample size is large. When the sample size is small ($n < 30$), the value for the area under the normal curve is not accurate, and a t-test is needed. The t-test requires the assumption that the sample values are drawn randomly and are normally distributed (since the data are continuous). Therefore, the t-test is a useful technique for comparing the mean value of a group against some hypothesized mean (one-sample) or two separate sets of numbers against each other (two-sample) in the case of small samples ($n < 30$) taken from a normal distribution of unknown variance. A t-test is defined as a statistical test for testing a hypothesis in which the test statistic follows a Student's t-distribution under the assumption that the null hypothesis is true [5].

3.2.1. One-Sample t-test

Banda Gerald [2] suggested that a one-sample t-test is used to compare the mean of a sample to a predefined value. Thus, we use the one-sample t-test to investigate the significance of the difference between an assumed population mean and a sample mean. The population characteristics are known from theory or are calculated from the population. Thus, a researcher can use a one-sample t-test to compare the mean of a sample with a hypothesized population mean to see if the sample is significantly different. The null and alternative hypotheses (two-tail test) are given by $H_0: \mu = \mu_0$ and $H_1: \mu \neq \mu_0$, respectively. Then the t-test statistic is:

$$t = \frac{\bar{X} - \mu_0}{s/\sqrt{n}} \sim t_{(n-1)} \quad (9)$$

where $\bar{X}$ is the sample mean, $\mu_0$ is the population mean or hypothesized value, $n$ is the sample size and $s$ is the sample standard deviation, calculated as $s = \sqrt{\frac{\sum (X_i - \bar{X})^2}{n-1}}$.
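A minimal sketch of the one-sample t-test of equation (9), using SciPy's implementation (the sample and the hypothesized mean are hypothetical):

```python
import numpy as np
from scipy import stats

# Hypothetical small sample (n < 30) from a normal population.
x = np.array([4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 4.9, 5.1, 5.0])

# H0: mu = 5.0 against H1: mu != 5.0 (two-tail).
t_stat, p_value = stats.ttest_1samp(x, popmean=5.0)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# The same statistic computed directly from equation (9).
t_manual = (x.mean() - 5.0) / (x.std(ddof=1) / np.sqrt(len(x)))
print(f"t (manual) = {t_manual:.3f}")
```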

3.2.2. Independent Two-Sample t-test (Variances Unknown but Equal)

A two-sample t-test is used to compare two sample means when the independent variable is nominal-level data while the dependent variable is interval/ratio-level data [13]. Thus, the independent-samples t-test is used to compare two groups whose means are not dependent on one another. Two samples are independent if the sample values selected from one population are not related to the sample values selected from the other population. Thus, an independent-sample t-test helps to investigate the significance of the difference between the means of two populations when the population variance is unknown but equal. Assume two groups of independent normal samples given by $X_1, X_2, \ldots, X_n \sim N(\mu_X, \sigma^2)$ and $Y_1, Y_2, \ldots, Y_m \sim N(\mu_Y, \sigma^2)$, respectively. Then the null and alternative hypotheses (two-tail test) are given by $H_0: \mu_X - \mu_Y = \mu_D$ and $H_1: \mu_X - \mu_Y \neq \mu_D$, respectively. If the sample sizes are not large enough to invoke the Central Limit Theorem, the test statistic is:

$$t = \frac{(\bar{X} - \bar{Y}) - \mu_D}{s_p\sqrt{\frac{1}{n} + \frac{1}{m}}} \sim t_{(n+m-2)} \quad (10)$$

where $n$ refers to the sample size of the first group (X), $m$ is the sample size of the second group (Y), and $s_p$ is the pooled standard deviation,

$$s_p^2 = \frac{(n-1)s_X^2 + (m-1)s_Y^2}{n+m-2} = \frac{\sum (X_i - \bar{X})^2 + \sum (Y_i - \bar{Y})^2}{n+m-2}$$

with $s_X^2 = \frac{1}{n-1}\sum (X_i - \bar{X})^2$ and $s_Y^2 = \frac{1}{m-1}\sum (Y_i - \bar{Y})^2$.

Let us derive the t-distribution in the two-sample t-test (variances unknown but equal).

Lemma 3. If $X_1, \ldots, X_n$ and $Y_1, \ldots, Y_m$ are normally and independently distributed random variables with $\bar{X} \sim N(\mu_X, \sigma^2/n)$ and $\bar{Y} \sim N(\mu_Y, \sigma^2/m)$, then $\bar{X} - \bar{Y} \sim N\!\left(\mu_X - \mu_Y, \sigma^2\left(\frac{1}{n} + \frac{1}{m}\right)\right)$.

Proof: by Lemma 1, $\bar{X} \sim N(\mu_X, \sigma^2/n)$ and $\bar{Y} \sim N(\mu_Y, \sigma^2/m)$. Then $\bar{X} - \bar{Y} = \bar{X} + (-\bar{Y}) \sim N\!\left(\mu_X - \mu_Y, \frac{\sigma^2}{n} + (-1)^2\frac{\sigma^2}{m}\right)$.

Lemma 4. If $X_1, \ldots, X_n$ and $Y_1, \ldots, Y_m$ are normally and independently distributed random variables with $\bar{X} - \bar{Y}$ as in Lemma 3, then

$$Z = \frac{(\bar{X} - \bar{Y}) - (\mu_X - \mu_Y)}{\sigma\sqrt{\frac{1}{n} + \frac{1}{m}}} \sim N(0,1)$$

Theorem 4. If $X_1, \ldots, X_n$ and $Y_1, \ldots, Y_m$ are normally and independently distributed random variables with $X_i \sim N(\mu_X, \sigma^2)$ and $Y_i \sim N(\mu_Y, \sigma^2)$, then the test statistic is given as:

$$t = \frac{(\bar{X} - \bar{Y}) - (\mu_X - \mu_Y)}{s_p\sqrt{\frac{1}{n} + \frac{1}{m}}} \quad (11)$$

where $s_p^2 = \frac{(n-1)s_X^2 + (m-1)s_Y^2}{n+m-2}$; this statistic has a t-distribution with $n+m-2$ degrees of freedom.

Proof. Let $Z$ be as in Lemma 4, so $Z \sim N(0,1)$. Since $\frac{(n-1)s_X^2}{\sigma^2} \sim \chi^2_{n-1}$ and $\frac{(m-1)s_Y^2}{\sigma^2} \sim \chi^2_{m-1}$ are independent, their sum $\frac{(n+m-2)s_p^2}{\sigma^2} \sim \chi^2_{n+m-2}$. Therefore

$$t = \frac{Z}{\sqrt{\chi^2_{n+m-2}/(n+m-2)}} \sim t_{(n+m-2)}$$

which has a t-distribution with $n+m-2$ degrees of freedom.
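A sketch of the pooled (equal-variance) two-sample t-test of equations (10)-(11) with SciPy (hypothetical data):

```python
import numpy as np
from scipy import stats

# Hypothetical independent small samples with similar spread.
x = np.array([23.1, 21.8, 24.5, 22.9, 23.7, 22.4, 24.0])
y = np.array([21.0, 20.4, 22.1, 21.5, 20.8, 21.9])

# equal_var=True gives the pooled-variance test of eq. (10).
t_stat, p_value = stats.ttest_ind(x, y, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}, df = {len(x) + len(y) - 2}")
```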

3.2.3. Independent Two-Sample t-test (Variances Unknown but Unequal)

However, if the variances are unknown and unequal, then we use a t-test for two population means given the null hypothesis of the independent-sample t-test (two-tail test) $H_0: \mu_X - \mu_Y = \mu_D$; the test statistic is

$$t = \frac{(\bar{X} - \bar{Y}) - \mu_D}{\sqrt{\frac{s_X^2}{n} + \frac{s_Y^2}{m}}} \sim t_{(n-1)+(m-1)} \quad (12)$$

with $(n-1)+(m-1) = n+m-2$ degrees of freedom, where $s_X^2 = \frac{1}{n-1}\sum (X_i - \bar{X})^2$ and $s_Y^2 = \frac{1}{m-1}\sum (Y_i - \bar{Y})^2$.

3.2.4. Paired (Dependent) Sample t-test

Two samples are dependent (or consist of matched pairs) if the members of one sample can be used to determine the members of the other sample. A paired t-test is used to compare two population means when we have two samples in which observations in one sample can be paired with observations in the other sample under a Gaussian distribution [7, 16]. When two variables are paired, the difference of these two variables, $D_i = X_i - Y_i$, is treated as if it were a single sample. This test is appropriate for pre-post treatment responses. The null hypothesis is that the true mean difference of the two variables is $\mu_0$, given by $H_0: \mu_D = \mu_0$. Then the test statistic is:

$$t = \frac{\bar{D} - \mu_0}{s_D/\sqrt{n}} \sim t_{(n-1)} \quad (13)$$

where $\bar{D} = \frac{1}{n}\sum D_i$ is the mean of the pairwise differences, $\mu_0$ is its value under the null hypothesis, and $s_D = \sqrt{\frac{\sum (D_i - \bar{D})^2}{n-1}}$ is the sample standard deviation of the pairwise differences.
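A companion sketch for the unequal-variance test of equation (12) and the paired test of equation (13); the data are hypothetical, and SciPy's Welch-type option is used for the unequal-variance case:

```python
import numpy as np
from scipy import stats

# Hypothetical independent samples with visibly different spread.
x = np.array([12.1, 14.3, 11.8, 15.2, 13.4, 12.9])
y = np.array([10.2, 10.9, 10.4, 11.1, 10.6, 10.8, 10.5])

# equal_var=False gives the unequal-variance (Welch-type) two-sample test.
print(stats.ttest_ind(x, y, equal_var=False))

# Paired test: hypothetical pre/post treatment responses (eq. 13).
pre  = np.array([80.0, 75.5, 90.2, 68.4, 77.9])
post = np.array([82.5, 74.0, 95.1, 71.3, 80.2])
print(stats.ttest_rel(pre, post))  # equivalent to ttest_1samp(post - pre, 0)
```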

3.2.5. Effect Size

Both independent and dependent sample t-tests will give the researcher an indication of whether the difference between the two groups is statistically significant, but they do not indicate the size (magnitude) of the effect. However, the effect size explains the degree to which the two variables are associated with one another. Thus, the effect size is the statistic that indicates the relative magnitude of the differences between means, or the amount of total variance in the dependent variable that is predictable from knowledge of the levels of the independent variable.

i. Effect Size of an Independent Sample t-test

An effect size is a standardized measure of the magnitude of an observed effect. Pearson's r is used as a common measure of effect size. The correlation coefficient is used to measure the strength of the relationship between two variables. From the researcher's point of view, a correlation coefficient of 0 means there is no effect, and a value of 1 means that there is a perfect effect. The following formula is used to calculate the effect size:

$$r = \sqrt{\frac{t^2}{t^2 + df}} \quad (14)$$

where $t$ is the calculated value of the independent-sample t-test and $df$ depends on the sample sizes; $r = 0.1$ refers to a small effect, $r = 0.3$ refers to a medium effect and $r = 0.5$ refers to a large effect.

ii. Effect Size of a Dependent Sample t-test

The most commonly used method for calculating the effect size of a dependent sample t-test is Eta squared [11]. The formula for simplified Eta squared is given by:

$$\eta^2 = \frac{t^2}{t^2 + (n-1)} \quad (15)$$

where $t$ is the calculated value of the dependent-sample t-test and $n$ is the sample size. The calculated value of Eta squared should be between 0 and 1; values of $0.01$, $0.06$ and $0.14$ indicate small, medium, and large effects, respectively.
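Both effect sizes are simple transformations of the t statistic; a short illustrative sketch (the t values and sample sizes are hypothetical):

```python
import numpy as np

def pearson_r_effect(t, df):
    """Effect size r for an independent-sample t-test (eq. 14)."""
    return np.sqrt(t**2 / (t**2 + df))

def eta_squared(t, n):
    """Simplified Eta squared for a dependent-sample t-test (eq. 15)."""
    return t**2 / (t**2 + (n - 1))

print(pearson_r_effect(t=2.5, df=18))  # ~0.51: a large effect
print(eta_squared(t=2.5, n=20))        # ~0.25: a large effect
```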
3.3. Chi-Square Test Statistic

Karl Pearson in 1900 proposed the famous chi-squared test for comparing an observed frequency distribution to a theoretically assumed distribution, as a large-sample approximation with the chi-squared distribution [12]. According to Pearson, the chi-square test is a nonparametric test used for testing the hypothesis of no association between two or more groups, populations or criteria (i.e., to check independence between two variables), and for testing how likely the observed distribution of the data fits the expected distribution (i.e., to test the goodness of fit) for categorical data. In this paper, we apply the chi-square test for testing a population variance, independence, and goodness of fit.

3.3.1. Chi-Square Test of Population Variance (One-Sample Test for Variance)

The chi-square test can be used to judge whether a random sample has been drawn from a normal population (interval and ratio data) with mean $\mu$ and specified variance $\sigma_0^2$ [15]. Thus, it is used to compare the sample variance to a hypothesized variance (i.e., it is a test for a single sample variance). Assume the data $X_1, X_2, \ldots, X_n$ are independent normal samples with $X_i \sim N(\mu, \sigma^2)$; we know that

$$\sum_{i=1}^{n} \frac{(X_i - \mu)^2}{\sigma^2} \sim \chi^2_n \quad \text{if} \quad \frac{X_i - \mu}{\sigma} \sim N(0,1)$$

However, we do not know $\mu$, so we use $\bar{X} = \frac{1}{n}\sum X_i$ as an estimate of $\mu$, with $s^2 = \frac{1}{n-1}\sum (X_i - \bar{X})^2$. The null and alternative hypotheses (two-tail) are given by $H_0: \sigma^2 = \sigma_0^2$ and $H_1: \sigma^2 \neq \sigma_0^2$, respectively. The sampling distribution of the test statistic if $H_0$ is true is:

$$\chi^2 = \frac{(n-1)s^2}{\sigma_0^2} \sim \chi^2_{(n-1)} \quad (16)$$

The decision rule is to compare the obtained test statistic to the chi-squared distribution with $df = n-1$ degrees of freedom. If we construct the critical region of size $\alpha$ in the two-tail case, our critical region will be all values of $\chi^2$ such that either $\chi^2 \leq \chi^2_{1-\alpha/2,\,n-1}$ or $\chi^2 \geq \chi^2_{\alpha/2,\,n-1}$.

3.3.2. Chi-Square Test of Independence

The chi-square test of independence is a nonparametric statistical test used for deciding whether two categorical (nominal) variables are associated or independent [1, 5]. Let the two variables in the cross-classification be X and Y; then the null hypothesis $H_0$: no association between X and Y is tested against the alternative $H_1$: some association between X and Y. The chi-square statistic used to conduct this test is the same as in the goodness-of-fit test:

$$\chi^2 = \sum_{i=1}^{r}\sum_{j=1}^{c} \frac{(O_{ij} - E_{ij})^2}{E_{ij}} \sim \chi^2_{(r-1)(c-1)} \quad (17)$$

where $\chi^2$ is the test statistic that asymptotically approaches a chi-square distribution, $O_{ij}$ is the observed frequency of the $i$th row and $j$th column, representing the numbers of respondents that take on each of the various combinations of values for the two variables, $E_{ij} = (\text{row}_i\ \text{total} \times \text{column}_j\ \text{total})/n$ is the expected (theoretical) frequency of the $i$th row and $j$th column, $r$ is the number of rows in the contingency table, and $c$ is the number of columns in the contingency table.

The statistical decision is: if $\chi^2 > \chi^2_{\alpha,\,(r-1)(c-1)}$, reject the null hypothesis and conclude that there is a relationship between the variables. If $\chi^2 < \chi^2_{\alpha,\,(r-1)(c-1)}$, the null hypothesis cannot be rejected, and we conclude that there is insufficient evidence of a relationship between the variables.

3.3.3. Chi-Square Test for Goodness of Fit

A chi-square test for goodness of fit is used to compare the frequencies (counts) among multiple categories of nominal or ordinal level data for one sample [13]. Moreover, a chi-square test for goodness of fit compares the expected and observed values to determine how well the experimenter's predictions fit the data [14]. The null hypothesis ($H_0: O_i = E_i$) states that the population distribution of the variable is the same as the proposed distribution, against the alternative ($H_1: O_i \neq E_i$) that the distributions are different. The chi-square statistic is defined as:

$$\chi^2 = \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i} \sim \chi^2_{(k-1)} \quad (18)$$

Large values of this statistic lead to rejection of the null hypothesis, meaning the observed values are not equal to the theoretical (expected) values.
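The three chi-square tests can be sketched with SciPy as follows (all data are hypothetical; `chi2_contingency` computes the expected frequencies of equation (17) internally):

```python
import numpy as np
from scipy import stats

# (16) One-sample variance test: H0 sigma^2 = 4.0 against a two-tail alternative.
x = np.array([5.2, 4.1, 6.3, 5.8, 4.9, 5.5, 6.1, 4.4, 5.0, 5.7])
n, sigma0_sq = len(x), 4.0
chi2_stat = (n - 1) * x.var(ddof=1) / sigma0_sq
p = 2 * min(stats.chi2.cdf(chi2_stat, n - 1), stats.chi2.sf(chi2_stat, n - 1))
print(f"variance test: chi2 = {chi2_stat:.3f}, two-tail p = {p:.3f}")

# (17) Test of independence on a hypothetical 2x3 contingency table.
table = np.array([[20, 15, 25],
                  [30, 20, 10]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"independence: chi2 = {chi2:.3f}, p = {p:.3f}, df = {dof}")

# (18) Goodness of fit against hypothesized equal category proportions.
observed = np.array([18, 22, 30, 30])
f_exp = np.full(4, observed.sum() / 4)
print(stats.chisquare(f_obs=observed, f_exp=f_exp))
```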
3.4. The F-test Statistic

F-tests are used when comparing statistical models that have been fit to a data set, to determine the model that best fits the population from which the data were sampled [4]. The F distribution is a joint distribution of two independent variables, each having a chi-square distribution [9]. The F distribution requires both parent populations to be normal, to have the same variance, and the samples to be independent. The F-test is applicable for testing the equality of variances and for testing the equality of several means.

3.4.1. Testing for Equality of Variances

The chi-square test is used for testing a single population variance, while the $\mathcal{F}$-test is used for testing the equality of the variances of two independent populations (comparing two variances). Thus, it helps to make inferences about a population variance or the sampling distribution of the sample variance from a normal population, given the null hypothesis $H_0: \sigma_X^2 = \sigma_Y^2$ against the alternative $H_1: \sigma_X^2 \neq \sigma_Y^2$. Suppose two independent random samples are drawn from normally and identically distributed populations, $X_i \sim N(\mu_X, \sigma_X^2)$ and $Y_i \sim N(\mu_Y, \sigma_Y^2)$, and let $s_X^2$ and $s_Y^2$ represent the sample variances of the two populations. If both populations are normal and the population variances $\sigma_X^2$ and $\sigma_Y^2$ are equal but unknown, then the sampling distribution of $s_X^2/s_Y^2$ is called an $\mathcal{F}$-distribution.

The first sample $X_1, X_2, \ldots, X_n$ consists of normally and independently distributed random variables with $\frac{(n-1)s_X^2}{\sigma_X^2} \sim \chi^2_{n-1}$, i.e., $n-1$ degrees of freedom. The second sample $Y_1, Y_2, \ldots, Y_m$ consists of normally and independently distributed random variables with $\frac{(m-1)s_Y^2}{\sigma_Y^2} \sim \chi^2_{m-1}$, i.e., $m-1$ degrees of freedom. Assume these are two independent chi-squared random variables; given that the null hypothesis ($\sigma_X^2 = \sigma_Y^2$) holds true, the $\mathcal{F}$-test for population variances is given by:

$$\mathcal{F} = \frac{\chi^2_u/u}{\chi^2_v/v} = \frac{s_X^2/\sigma_X^2}{s_Y^2/\sigma_Y^2} = \frac{s_X^2}{s_Y^2} \sim \mathcal{F}_{u,v} \quad (19)$$

where $s_X^2$ and $s_Y^2$ are the sample variances from populations X and Y, $u = n-1$ is the numerator degrees of freedom, and $v = m-1$ is the denominator degrees of freedom.

3.4.2. Testing Equality of Several Means (ANOVA)

Analysis of variance (ANOVA) is a statistical technique used for comparing the means of more than two groups (usually at least three), given the assumptions of normality, independence and equality of the error variances. In one-way ANOVA, group means are compared by comparing the variability between groups with the variability within the groups. This is done by computing an F-statistic. The F-value is computed by dividing the mean sum of squares between the groups by the mean sum of squares within the groups [5, 8]. A one-way analysis of variance (ANOVA) model is given by:

$$X_{ij} = \mu + \tau_i + \varepsilon_{ij}$$

where $X_{ij}$ is the $j$th observation for the $i$th level (group), $\mu$ is a constant term, $\tau_i$ is the effect of the $i$th level, and $\varepsilon_{ij} \sim N(0, \sigma^2)$, $i = 1, 2, \ldots, k$. The null hypothesis is given by $H_0: \tau_1 = \tau_2 = \cdots = \tau_k = 0$ against $H_1: \tau_i \neq 0$ for some $i = 1, 2, \ldots, k$. The total variation of the data is partitioned into variation within levels (groups) and variation between levels (groups). Mathematically, with grand mean $\bar{X} = \frac{1}{n}\sum_i \sum_j X_{ij}$, the total variation is partitioned as

$$\underbrace{\sum_i \sum_j (X_{ij} - \bar{X})^2}_{TSS} = \underbrace{\sum_i n_i(\bar{X}_i - \bar{X})^2}_{SSB} + \underbrace{\sum_i \sum_j (X_{ij} - \bar{X}_i)^2}_{SSW}$$

where TSS is the sum of squared variation about the grand mean across all $n$ observations, SSB is the sum of squared deviations of each group mean about the grand mean, and SSW is the sum of squared deviations of all observations within each group from that group's mean, summed across all groups. Then the between-group mean sum of squares, $MSSB = SSB/(k-1)$, denotes the variation among the different samples, while the within (error) mean sum of squares, $MSSW = SSW/(n-k)$, refers to the variation within samples that is due to chance. The test statistic is the ratio of the group and error mean squares:

$$\mathcal{F} = \frac{MSSB}{MSSW} \sim \mathcal{F}_{k-1,\,n-k} \quad (20)$$

One-Way ANOVA Table

| Source of variation | Sum of squares | DF | Mean squares |
| --- | --- | --- | --- |
| Between (between group) | SSB | k-1 | MSSB = SSB/(k-1) |
| Within (error) | SSW | n-k | MSSW = SSW/(n-k) |
| Total | TSS | n-1 | |

Reject $H_0$ if the calculated $\mathcal{F}$ is greater than the tabulated value ($\mathcal{F}_{\alpha,\,k-1,\,n-k}$), meaning the variability between groups is large compared to the variation within groups; fail to reject $H_0$ if the calculated $\mathcal{F}$ is less than the tabulated value, meaning the variability between groups is negligible compared to the variation within groups.
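A sketch of both F-tests with SciPy (hypothetical data; SciPy has no built-in two-variance F-test, so equation (19) is computed directly):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(10, 2.0, 12)
y = rng.normal(10, 3.0, 15)

# (19) F-test for equality of two variances, computed directly.
f_stat = x.var(ddof=1) / y.var(ddof=1)
u, v = len(x) - 1, len(y) - 1
p = 2 * min(stats.f.cdf(f_stat, u, v), stats.f.sf(f_stat, u, v))
print(f"F = {f_stat:.3f}, two-tail p = {p:.3f}")

# (20) One-way ANOVA comparing three hypothetical groups.
g1 = np.array([4.2, 4.8, 5.1, 4.5, 4.9])
g2 = np.array([5.6, 5.9, 6.1, 5.4, 5.8])
g3 = np.array([4.9, 5.2, 5.0, 5.3, 5.1])
print(stats.f_oneway(g1, g2, g3))  # F with k-1 = 2 and n-k = 12 d.f.
```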

4. Conclusion

In this paper, the derivation and choice of an appropriate test statistic were reviewed, since applying the right statistical hypothesis test to the right data is the main role of statistical tools in every science. The researcher applies the Z-test, which is appropriate for testing the existence of a population mean difference in the case of a large sample size, and the t-test for a small sample size. Moreover, the $\mathcal{F}$-test is used for tests of several means and of variances, while the chi-square test is for testing a single variance, goodness of fit and independence.

References

[1] Alena Košťálová. (2013). Proceedings of the 10th International Conference "Reliability and Statistics in Transportation and Communication" (RelStat'10), 20-23 October 2010, Riga, Latvia, pp. 163-171. ISBN 978-9984-818-34-4. Transport and Telecommunication Institute, Lomonosova 1, LV-1019, Riga, Latvia.

[2] Banda Gerald. (2018). A Brief Review of Independent, Dependent and One Sample t-test. International Journal of Applied Mathematics and Theoretical Physics, 4 (2), pp. 50-54. doi: 10.11648/j.ijamtp.20180402.13.

[3] David, J. Pittenger. (2001). Hypothesis Testing as a Moral Choice. Ethics & Behavior, 11 (2), 151-162. doi: 10.1207/S15327019EB1102_3.

[4] Downward, L. B., Lukens, W. W. & Bridges, F. (2006). A Variation of the F-test for Determining Statistical Relevance of Particular Parameters in EXAFS Fits. 13th International Conference on X-ray Absorption Fine Structure, 129-131.

[5] J. P. Verma. (2013). Data Analysis in Management with SPSS Software. Springer, India. doi: 10.1007/978-81-322-0786-3_7.

[6] Joginder Kaur. (2013). Techniques Used in Hypothesis Testing in Research Methodology: A Review. International Journal of Science and Research (IJSR). ISSN (Online): 2319-7064.

[7] Kousar, J. B. and Azeez Ahmed. (2015). The Importance of Statistical Tools in Research Work. International Journal of Scientific and Innovative Mathematical Research (IJSIMR), Volume 3, Issue 12, pp. 50-58. ISSN 2347-307X (Print), ISSN 2347-3142 (Online).

[8] Liang, J. (2011). Testing the Mean for Business Data: Should One Use the Z-Test, T-Test, F-Test, the Chi-Square Test, or the P-Value Method? Journal of College Teaching and Learning, 3 (7). doi: 10.19030/tlc.v3i7.1704.

[9] Ling, M. (2009b). Compendium of Distributions: Beta, Binomial, Chi-Square, F, Gamma, Geometric, Poisson, Student's t, and Uniform. The Python Papers Source Codes 1:4.

[10] McDonald, J. H. (2008). Handbook of Biological Statistics. Baltimore, Sparky House Publishing.

[11] Pallant, J. (2007). SPSS Survival Manual: A Step by Step Guide to Data Analysis Using SPSS for Windows (Version 15). Sydney: Allen and Unwin.

[12] Pearson, K. (1900). On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. Philos. Mag., 50, 157-175. Reprinted in K. Pearson (1956), pp. 339-357.

[13] Philip, E. Crewson. (2014). Applied Statistics, First Edition. United States Department of Justice. https://www.researchgate.net/publication/297394168.

[14] Sorana, D. B., Lorentz, J., Adriana, F. S., Radu, E. S. and Doru, C. P. (2011). Pearson-Fisher Chi-Square Statistic Revisited. Journal of Information, 2, 528-545. doi: 10.3390/info2030528. ISSN 2078-2489. www.mdpi.com/journal/information.

[15] Sureiman Onchiri. (2013). Conceptual model on application of chi-square test in education and social sciences. Academic Journals, Educational Research and Reviews, 8 (15), 1231-1241. doi: 10.5897/ERR11.0305. ISSN 1990-3839. http://www.academicjournals.org/ERR.

[16] Tae Kyun Kim. (2015). T test as a parametric statistic. Korean Journal of Anesthesiology. pISSN 2005-6419, eISSN 2005-7563.

[17] Vinay Pandit. (2015). A Study on Statistical Z Test to Analyses Behavioral Finance Using Psychological Theories. Journal of Research in Applied Mathematics, Volume 2, Issue 1, pp. 01-07. ISSN (Online): 2394-0743, ISSN (Print): 2394-0735. www.questjournals.org.