Robustness and Comparative Statistical Power of the Repeated Measures ANOVA and Friedman Test with Real Data


ROBUSTNESS AND COMPARATIVE STATISTICAL POWER OF THE REPEATED MEASURES ANOVA AND FRIEDMAN TEST WITH REAL DATA

By Opeoluwa Bolu Fadeyi

Dissertation submitted to the Graduate School of Wayne State University, Detroit, Michigan, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, 2021. Major: Evaluation and Research.

DEDICATION

To my husband, children, and parents.

ACKNOWLEDGEMENTS

I would like to express my profound gratitude to my dissertation advisor, Dr. Shlomo Sawilowsky, for his insightful guidance. It is difficult for me to comprehend how he made time to review my dissertation progressively and thoroughly. His demise as my study was concluding was shocking and devastating; however, he has written his name in gold in my heart. I sincerely appreciate Dr. Barry Markman for all his helpful comments, and for stepping out of retirement to chair my final defense. The course that I took with Dr. Monte Piliawsky significantly improved the quality of my research. On several occasions, Dr. Aguwa reviewed my work and contributed immensely to its richness. I am very grateful to these professors for serving on my dissertation committee. I would also like to express my love and thankfulness to my husband, Johnson, who spent sleepless nights with me proofreading my work, and to our children, Wisdom, Delight, and Goodness, who endured a busy mom. I sincerely acknowledge the many people, too numerous to name, who have been a part of my life during the doctoral study. Above all, I want to express my unquantifiable gratitude to God: His grace sustained and enabled me to reach the zenith of the academic ladder.

TABLE OF CONTENTS

Dedication
Acknowledgements
List of Tables
List of Figures
Chapter One: Overview of the Parametric Tests
Chapter Two: Theoretical Foundations and Literature Review
Chapter Three: Methodology
Chapter Four: Results and Discussion
Chapter Five: Conclusions and Implications
Appendix A
Appendix B
Appendix C
Abstract
Autobiographical Statement

LIST OF TABLES

Table 1. Hypothesis Table
Table 2. Type I Error Rates when α = 0.05 with G-G correction
Table 3. Type I Error Rates for k = 3, and G-G corrections
Table 4. Type I Error Rates for k = 5, and G-G corrections
Table 5. Type I Error Rates for k = 7, and G-G corrections
Table 6. Partial Eta Squared for RM & Kendall's W for Fr Test
Table 7. The Sphericity Assumption Results
Table 8. The power rates for n = 5, k = 3
Table 9. The power rates for n = 8, k = 3
Table 10. The power rates for n = 12, k = 3
Table 14. The power rates for n = 5, k = 5
Table 15. The power rates for n = 8, k = 5
Table 16. The power rates for n = 12, k = 5
Table 17. The power rates for n = 18, k = 5
Table 18. The power rates for n = 25, k = 5
Table 19. The power rates for n = 33, k = 5
Table 20. The power rates for n = 5, k = 7
Table 21. The power rates for n = 8, k = 7
Table 22. The power rates for n = 12, k = 7
Table 23. The power rates for n = 18, k = 7
Table 24. The power rates for n = 25, k = 7
Table 25. The power rates for n = 33, k = 7
Table 26. Power differences for all samples, when k = 3
Table 27. Power differences for all samples, when k = 5
Table 28. Power differences for all samples, when k = 7

LIST OF FIGURES

Figure 1. Partition of Errors for One-Factor Repeated Measures ANOVA
Figure 2. Gaussian (Normal) Distribution
Figure 3. Chi-Square Distribution
Figure 4. Uniform Distribution
Figure 5. Multivariate Normal Distribution for Sample Size of 5, k = 7
Figure 6. Multivariate Normal Distribution for Sample Size of 8, k = 7
Figure 7. Multivariate Normal Distribution for Sample Size of 12, k = 7
Figure 8. Multivariate Normal Distribution for Sample Size of 18, k = 7
Figure 9. Multivariate Normal Distribution for Sample Size of 25, k = 7
Figure 10. Multivariate Normal Distribution for Sample Size of 33, k = 7
Figure 11. Comparative Power Rate for the RM & Fr for n = 5, k = 3
Figure 12. Comparative Power Rate for the RM & Fr for n = 5, k = 5
Figure 13. Comparative Power Rate for the RM & Fr for n = 5, k = 7
Figure 14. Comparative Power Rate for the RM & Fr for n = 8, k = 3
Figure 15. Comparative Power Rate for the RM & Fr for n = 8, k = 5
Figure 16. Comparative Power Rate for the RM & Fr for n = 8, k = 7
Figure 17. Comparative Power Rate for the RM & Fr for n = 12, k = 3
Figure 18. Comparative Power Rate for the RM & Fr for n = 12, k = 5
Figure 19. Comparative Power Rate for the RM & Fr for n = 12, k = 7
Figure 20. Comparative Power Rate for the RM & Fr for n = 18, k = 3
Figure 21. Comparative Power Rate for the RM & Fr for n = 18, k = 5
Figure 22. Comparative Power Rate for the RM & Fr for n = 18, k = 7
Figure 23. Comparative Power Rate for the RM & Fr for n = 25, k = 3
Figure 24. Comparative Power Rate for the RM & Fr for n = 25, k = 5
Figure 25. Comparative Power Rate for the RM & Fr for n = 25, k = 7
Figure 26. Comparative Power Rate for the RM & Fr for n = 33, k = 3
Figure 27. Comparative Power Rate for the RM & Fr for n = 33, k = 5
Figure 28. Comparative Power Rate for the RM & Fr for n = 33, k = 7

© COPYRIGHT BY OPEOLUWA BOLU FADEYI 2021. All Rights Reserved.

CHAPTER ONE: OVERVIEW OF THE PARAMETRIC TESTS

Parametric tests are those that base their assumptions on the parameters of the underlying population distribution from which the samples are drawn. It is generally believed that parametric tests are robust to the violation of some of these assumptions; that is, the tests retain the power to control the probability of rejecting a false null hypothesis. For example, ANOVA can be used to analyze ordinal-scale data such as Likert scales without adverse consequences (Leys & Schumann, 2010; Nanna & Sawilowsky, 1998; Zimmerman & Zumbo, 1993). Another characteristic of a parametric test is that it can be uniformly most powerful unbiased (UMPU). "This means that when all underlying assumptions are met based on the inference from the samples, no other test has greater ability to detect a true difference for a given sample" (Bridge & Sawilowsky, 1999, p. 229). For example, the t-test is uniformly most powerful unbiased when the assumptions of independence, homoscedasticity, and normality are met (Bradley, 1968b; Kelley & Sawilowsky, 1997). Moreover, a slight shift in the shapes of the distributions of the variables still yields robust results when the number of samples in each treatment group approaches 30 or more (Glass, Peckham, & Sanders, 1972; Leys & Schumann, 2010; Lix, Keselman, & Keselman, 1996; Lumley, Diehr, Emerson, & Chen, 2002). Studies have been carried out to examine the violation of the assumption of homogeneity of variances, which may have a severe impact on the Type I error rate of F-tests, and it has been established that the F-test will yield statistically significant results when the group sample sizes are equal and the size of the groups is greater…
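The kind of comparison the dissertation undertakes can be illustrated with a small Monte Carlo sketch in R. This is not the study's actual code or design: the sample size, number of conditions, replication count, and the normal null data are all illustrative assumptions.

```r
## Empirical Type I error of repeated measures ANOVA vs. the Friedman test
## under a normal null (no treatment effect); base R only.
set.seed(1)
n <- 12; k <- 3; reps <- 2000; alpha <- 0.05
reject <- matrix(FALSE, reps, 2,
                 dimnames = list(NULL, c("RM_ANOVA", "Friedman")))
for (r in 1:reps) {
  y <- matrix(rnorm(n * k), nrow = n)   # rows = subjects, cols = conditions
  d <- data.frame(score   = as.vector(y),
                  subject = factor(rep(1:n, times = k)),
                  cond    = factor(rep(1:k, each = n)))
  fit  <- aov(score ~ cond + Error(subject/cond), data = d)
  p_rm <- summary(fit)[["Error: subject:cond"]][[1]][["Pr(>F)"]][1]
  p_fr <- friedman.test(y)$p.value
  reject[r, ] <- c(p_rm, p_fr) < alpha
}
colMeans(reject)   # both empirical rates should sit near the nominal 0.05
```

Replacing `rnorm` with a skewed or shifted alternative turns the same loop into a power comparison, which is the design the tables and figures above report for real-data distributions.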
Recommended publications
  • 14. Non-Parametric Tests
    14. Non-Parametric Tests You know the story. You’ve tried for hours to normalise your data. But you can’t. And even if you can, your variances turn out to be unequal anyway. But do not dismay. There are statistical tests that you can run with non-normal, heteroscedastic data. All of the tests that we have seen so far (e.g. t-tests, ANOVA, etc.) rely on comparisons with some probability distribution such as a t-distribution or F-distribution. These are known as “parametric statistics,” as they make inferences about population parameters. However, there are other statistical tests that do not have to meet the same assumptions, known as “non-parametric tests.” Many of these tests are based on ranks and/or compare sample medians rather than means (as means of non-normal data are not meaningful). Non-parametric tests are often not as powerful as parametric tests, so you may find that your p-values are slightly larger. However, if your data is highly non-normal, a non-parametric test may detect an effect that is missed in the equivalent parametric test due to falsely large variances caused by the skew of the data. There is, of course, no problem in analyzing normally distributed data with non-parametric statistics either. Non-parametric tests are equally valid (i.e. equally defendable), and most have been around as long as their parametric counterparts. So, if your data does not meet the assumptions of normality or homoscedasticity, and it is impossible (or inappropriate) to transform it, you can use non-parametric tests.
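    To make that trade-off concrete, here is a minimal sketch in base R; the lognormal samples, the shift of 1, and n = 20 are arbitrary choices for illustration, not from the excerpt.

```r
## Strongly right-skewed data: compare a parametric and a rank-based test.
set.seed(42)
x <- rlnorm(20)        # control sample, heavily right-skewed
y <- rlnorm(20) + 1    # sample shifted upward by 1
t.test(x, y)$p.value       # parametric: variance estimate inflated by skew
wilcox.test(x, y)$p.value  # rank-based alternative (Mann-Whitney U)
```

    On data like these, the rank-based test will often return the smaller p-value, for exactly the reason the passage gives.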
  • Nonparametric Statistics for Social and Behavioral Sciences
    Incorporating a hands-on approach, Nonparametric Statistics for Social and Behavioral Sciences presents the concepts, principles, and methods used in performing many nonparametric procedures. It also demonstrates practical applications of the most common nonparametric procedures using IBM’s SPSS software. Emphasizing sound research designs, appropriate statistical analyses, and accurate interpretations of results, the text:
    • Explains a conceptual framework for each statistical procedure
    • Presents examples of relevant research problems, associated research questions, and hypotheses that precede each procedure
    • Details SPSS paths for conducting various analyses
    • Discusses the interpretations of statistical results and conclusions of the research
    With minimal coverage of formulas, the book takes a nonmathematical approach to nonparametric data analysis procedures and shows you how they are used in research contexts. Each chapter includes examples, exercises, and SPSS screen shots illustrating steps of the statistical procedures and resulting output.
    About the Author: Dr. M. Kraska-Miller is a Mildred Cheshire Fraley Distinguished Professor of Research and Statistics in the Department of Educational Foundations, Leadership, and Technology at Auburn University, where she is also the Interim Director of Research for the Center for Disability Research and Service. Dr. Kraska-Miller is the author of four books on teaching and communications. She has published numerous articles in national and international refereed journals. Her research interests include statistical modeling and applications of statistics to theoretical concepts, such as motivation; satisfaction in jobs, services, income, and other areas; and needs assessments, particularly applicable to special populations.
  • Monte Carlo Study on Power Rates of Some Heteroscedasticity Detection Methods in Linear Regression Model with Multicollinearity Problem
    Monte Carlo study on Power Rates of Some Heteroscedasticity Detection Methods in Linear Regression Model with Multicollinearity Problem. O.O. Alabi, Kayode Ayinde, O. E. Babalola, and H.A. Bello, Department of Statistics, Federal University of Technology, P.M.B. 704, Akure, Ondo State, Nigeria. Corresponding Author: O. O. Alabi, [email protected] Abstract: This paper examined the power rates exhibited by some heteroscedasticity detection methods in a linear regression model with a multicollinearity problem. Violation of the equal error variance assumption in a linear regression model leads to the problem of heteroscedasticity, while violation of the assumption of no linear dependency between the exogenous variables leads to the problem of multicollinearity. Whenever these two problems exist, one is faced with estimation and hypothesis-testing problems. To overcome these hurdles, one needs to determine the best method of heteroscedasticity detection, so as to avoid reaching a wrong decision in hypothesis testing. This leads to the question of how to determine the best heteroscedasticity detection method in a linear regression model with a multicollinearity problem via power rates. In practice, the variances of the error terms are unequal and unknown, but there is a need to determine the presence or absence of this problem as a preliminary diagnosis of the data set to be analyzed or tested. Although there are several forms of heteroscedasticity and several detection methods, for any researcher to arrive at a reasonable and correct decision, the best and most consistently performing detection methods under any form or structure of heteroscedasticity must be determined.
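    As a concrete illustration of one common detection method, here is a hedged sketch using the Breusch-Pagan test from R's lmtest package; the simulated model, with error variance growing in x, is an assumption for demonstration, and the excerpt does not say which candidate methods the paper itself compares.

```r
## A minimal sketch, assuming the 'lmtest' package is installed.
library(lmtest)
set.seed(7)
x <- runif(100, 1, 10)
y <- 2 + 3 * x + rnorm(100, sd = x)   # error spread grows with x
fit <- lm(y ~ x)
bptest(fit)   # Breusch-Pagan: a small p-value signals heteroscedasticity
```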
  • The Effects of Simplifying Assumptions in Power Analysis
    University of Nebraska - Lincoln, DigitalCommons@University of Nebraska - Lincoln, Public Access Theses and Dissertations from the College of Education and Human Sciences (CEHS), 4-2011. The Effects of Simplifying Assumptions in Power Analysis. Kevin A. Kupzyk, University of Nebraska-Lincoln, [email protected] Kupzyk, Kevin A., "The Effects of Simplifying Assumptions in Power Analysis" (2011). Public Access Theses and Dissertations from the College of Education and Human Sciences. 106. https://digitalcommons.unl.edu/cehsdiss/106 A dissertation presented to the faculty of the Graduate College at the University of Nebraska in partial fulfillment of requirements for the degree of Doctor of Philosophy. Major: Psychological Studies in Education. Under the supervision of Professor James A. Bovaird. Lincoln, Nebraska, April 2011. Abstract: In experimental research, planning studies that have sufficient probability of detecting important effects is critical. Carrying out an experiment with an inadequate sample size may result in the inability to observe the effect of interest, wasting the resources spent on the experiment.
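    In that spirit, an a priori power calculation can be sketched with base R; the effect size, standard deviation, target power, and α below are assumed values for illustration, not Kupzyk's.

```r
## A minimal sketch, assuming base R: required n per group for a two-sample
## t-test to detect a standardized effect of 0.5 with 80% power at alpha 0.05.
power.t.test(delta = 0.5, sd = 1, power = 0.80, sig.level = 0.05,
             type = "two.sample", alternative = "two.sided")
```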
  • Introduction to Hypothesis Testing
    Introduction to Hypothesis Testing (OPRE 6301). Motivation. The purpose of hypothesis testing is to determine whether there is enough statistical evidence in favor of a certain belief, or hypothesis, about a parameter. Examples: Is there statistical evidence, from a random sample of potential customers, to support the hypothesis that more than 10% of the potential customers will purchase a new product? Is a new drug effective in curing a certain disease? A sample of patients is randomly selected. Half of them are given the drug while the other half are given a placebo. The conditions of the patients are then measured and compared. These questions/hypotheses are similar in spirit to the discrimination example studied earlier. Below, we provide a basic introduction to hypothesis testing. Criminal Trials. The basic concepts in hypothesis testing are actually quite analogous to those in a criminal trial. Consider a person on trial for a “criminal” offense in the United States. Under the US system a jury (or sometimes just the judge) must decide if the person is innocent or guilty, while in fact the person may be innocent or guilty. These combinations are summarized in the table below.

                         Person is:
    Jury Says:      Innocent     Guilty
    Innocent        No Error     Error
    Guilty          Error        No Error

    Notice that there are two types of errors. Are both of these errors equally important? Or, is it as bad to decide that a guilty person is innocent and let them go free as it is to decide an innocent person is guilty and punish them for the crime? Or, is a jury supposed to be totally objective, not assuming that the person is either innocent or guilty, and make their decision based on the weight of the evidence one way or another? In a criminal trial, there actually is a favored assumption, an initial bias if you will.
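    The first example above can be sketched directly in R; the counts (18 of 120 sampled customers saying yes) are hypothetical, since the excerpt gives none.

```r
## A minimal sketch, assuming base R: exact binomial test of
## H0: p = 0.10 against H1: p > 0.10 for hypothetical survey counts.
binom.test(x = 18, n = 120, p = 0.10, alternative = "greater")
```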
  • Power of a Statistical Test
    Power of a Statistical Test. By Smita Skrivanek, Principal Statistician, MoreSteam.com LLC. What is the power of a test? The power of a statistical test gives the likelihood of rejecting the null hypothesis when the null hypothesis is false. Just as the significance level (alpha) of a test gives the probability that the null hypothesis will be rejected when it is actually true (a wrong decision), power quantifies the chance that the null hypothesis will be rejected when it is actually false (a correct decision). Thus, power is the ability of a test to correctly reject the null hypothesis. Why is it important? Although you can conduct a hypothesis test without it, calculating the power of a test beforehand will help you ensure that the sample size is large enough for the purpose of the test. Otherwise, the test may be inconclusive, leading to wasted resources. On rare occasions the power may be calculated after the test is performed, but this is not recommended except to determine an adequate sample size for a follow-up study (if a test failed to detect an effect, it was obviously underpowered – nothing new can be learned by calculating the power at this stage). How is it calculated? As an example, consider testing whether the average time per week spent watching TV is 4 hours versus the alternative that it is greater than 4 hours. We will calculate the power of the test for a specific value under the alternative hypothesis, say, 7 hours: The Null Hypothesis is H0: μ = 4 hours. The Alternative Hypothesis is H1: μ = 7 hours. Where μ = the average time per week spent watching TV.
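    That calculation can be sketched with base R's power.t.test; the standard deviation (say 2 hours) and sample size (say 25) are assumed here because the excerpt does not state them.

```r
## A minimal sketch, assuming sd = 2 and n = 25 (both hypothetical): power of
## a one-sample, one-sided t-test of mu = 4 when the true mean is mu = 7.
power.t.test(n = 25, delta = 7 - 4, sd = 2, sig.level = 0.05,
             type = "one.sample", alternative = "one.sided")
```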
  • Nonparametric Estimation by Convex Programming
    The Annals of Statistics, 2009, Vol. 37, No. 5A, 2278–2300. DOI: 10.1214/08-AOS654. © Institute of Mathematical Statistics, 2009. NONPARAMETRIC ESTIMATION BY CONVEX PROGRAMMING. By Anatoli B. Juditsky and Arkadi S. Nemirovski. Université Grenoble I and Georgia Institute of Technology. The problem we concentrate on is as follows: given (1) a convex compact set $X$ in $\mathbb{R}^n$, an affine mapping $x \mapsto A(x)$, a parametric family $\{p_\mu(\cdot)\}$ of probability densities and (2) $N$ i.i.d. observations of the random variable $\omega$, distributed with the density $p_{A(x)}(\cdot)$ for some (unknown) $x \in X$, estimate the value $g^T x$ of a given linear form at $x$. For several families $\{p_\mu(\cdot)\}$ with no additional assumptions on $X$ and $A$, we develop computationally efficient estimation routines which are minimax optimal, within an absolute constant factor. We then apply these routines to recovering $x$ itself in the Euclidean norm. 1. Introduction. The problem we are interested in is essentially as follows: suppose that we are given a convex compact set $X$ in $\mathbb{R}^n$, an affine mapping $x \mapsto A(x)$ and a parametric family $\{p_\mu(\cdot)\}$ of probability densities. Suppose that $N$ i.i.d. observations of the random variable $\omega$, distributed with the density $p_{A(x)}(\cdot)$ for some (unknown) $x \in X$, are available. Our objective is to estimate the value $g^T x$ of a given linear form at $x$. In nonparametric statistics, there exists an immense literature on various versions of this problem (see, e.g., [10, 11, 12, 13, 15, 17, 18, 21, 22, 23, 24, 25, 26, 27, 28] and the references therein).
  • Statistical Analysis in JASP
    Copyright © 2018 by Mark A Goss-Sampson. All rights reserved. This book or any portion thereof may not be reproduced or used in any manner whatsoever without the express written permission of the author except for the purposes of research, education or private study.
    CONTENTS: Preface; Using the JASP Interface; Descriptive Statistics; Exploring Data Integrity; One Sample T-Test; Binomial Test; Multinomial Test; Chi-Square ‘Goodness-of-Fit’ Test; Multinomial and χ² ‘Goodness-of-Fit’ Test; …
  • Friedman Test
    community project encouraging academics to share statistics support resources. All stcp resources are released under a Creative Commons licence. stcp-marquier-FriedmanR. The following resources are associated: the R dataset ‘Video.csv’ and R script ‘Friedman.R’. Friedman test in R (non-parametric equivalent to repeated measures ANOVA). Dependent variable: Continuous (scale) but not normally distributed, or ordinal. Independent variable: Categorical (time/condition). Common applications: Used when several measurements of the same dependent variable are taken at different time points or under different conditions for each subject, and the assumptions of repeated measures ANOVA have not been met. It can also be used to compare ranked outcomes. Data: The dataset ‘Video’ contains some results from a study comparing videos made to aid understanding of a particular medical condition. Participants watched three videos (A, B, C) and one product demonstration (D) and were asked several Likert-style questions about each. These were summed to give an overall score for each, e.g. TotalAGen below is the total score of the ordinal questions for video A. The Friedman test ranks each person’s scores from lowest to highest (as if participants had been asked to rank the methods from least favourite to favourite) and bases the test on the sum of ranks for each column. For example, person 1 gave C the lowest total score of 13 and A the highest, so SPSS would rank these as 1 and 4 respectively. As the raw data is ranked to carry out the test, the Friedman test can also be used for data which is already ranked, e.g.
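    A minimal sketch of this test in base R, on scores shaped like the study above: only person 1's pattern (C lowest at 13, A highest) comes from the text; the remaining totals are hypothetical.

```r
## Rows are participants (blocks), columns are the four methods A-D.
scores <- matrix(c(20, 15, 13, 18,   # person 1: A highest, C lowest (13)
                   17, 14, 16, 19,   # persons 2 and 3: hypothetical totals
                   21, 12, 14, 20),
                 nrow = 3, byrow = TRUE,
                 dimnames = list(NULL, c("A", "B", "C", "D")))
friedman.test(scores)   # ranks within each row, then sums ranks by column
```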
  • Confidence Intervals and Hypothesis Tests
    Chapter 2: Confidence intervals and hypothesis tests. This chapter focuses on how to draw conclusions about populations from sample data. We'll start by looking at binary data (e.g., polling), and learn how to estimate the true ratio of 1s and 0s with confidence intervals, and then test whether that ratio is significantly different from some baseline value using hypothesis testing. Then, we'll extend what we've learned to continuous measurements. 2.1 Binomial data. Suppose we're conducting a yes/no survey of a few randomly sampled people, and we want to use the results of our survey to determine the answers for the overall population. 2.1.1 The estimator. The obvious first choice is just the fraction of people who said yes. Formally, suppose we have samples $x_1, \ldots, x_n$ that can each be 0 or 1, and the probability that each $x_i$ is 1 is $p$ (in frequentist style, we'll assume $p$ is fixed but unknown: this is what we're interested in finding). We'll assume our samples are independent and identically distributed (i.i.d.), meaning that each one has no dependence on any of the others, and they all have the same probability $p$ of being 1. Then our estimate for $p$, which we'll call $\hat{p}$, or "p-hat," would be

$$\hat{p} = \frac{1}{n} \sum_{i=1}^{n} x_i.$$

    Notice that $\hat{p}$ is a random quantity, since it depends on the random quantities $x_i$. In statistical lingo, $\hat{p}$ is known as an estimator for $p$. Also notice that except for the factor of $1/n$ in front, $\hat{p}$ is almost a binomial random variable (in particular, $n\hat{p} \sim B(n, p)$).
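    A quick numerical sketch of this estimator in base R, together with the usual normal-approximation 95% interval; the true p of 0.6 and n = 50 are invented for illustration.

```r
set.seed(3)
x <- rbinom(50, size = 1, prob = 0.6)        # 50 simulated yes/no answers
p_hat <- mean(x)                             # p-hat = (1/n) * sum of x_i
se <- sqrt(p_hat * (1 - p_hat) / length(x))  # estimated standard error
c(lower = p_hat - 1.96 * se, upper = p_hat + 1.96 * se)  # approx. 95% CI
```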
  • Compare Multiple Related Samples
    Compare Multiple Related Samples. The command compares multiple related samples using the Friedman test (a nonparametric alternative to the one-way ANOVA with repeated measures) and calculates Kendall's coefficient of concordance (also known as Kendall’s W). Kendall's W makes no assumptions about the underlying probability distribution and can handle any number of outcomes, unlike the standard Pearson correlation coefficient. The Friedman test is similar to the Kruskal-Wallis one-way analysis of variance, with the difference that the Friedman test is an alternative to the repeated measures ANOVA with a balanced design. How To. For unstacked data (each column is a sample): o Run the STATISTICS->NONPARAMETRIC STATISTICS -> COMPARE MULTIPLE RELATED SAMPLES [FRIEDMAN ANOVA, CONCORDANCE] command. o Select variables to compare. For stacked data (with a group variable): o Run the STATISTICS->NONPARAMETRIC STATISTICS -> COMPARE MULTIPLE RELATED SAMPLES (WITH GROUP VARIABLE) command. o Select a variable with observations (VARIABLE) and a text or numeric variable with the group names (GROUPS). RESULTS. The report includes the Friedman ANOVA and Kendall’s W test results. THE FRIEDMAN ANOVA tests the null hypothesis that the samples are from identical populations. If the p-value is less than the selected α level, the null hypothesis is rejected. If there are no ties, the Friedman test statistic $F_t$ is defined as:

$$F_t = \frac{12}{nk(k+1)} \sum_{i=1}^{k} R_i^2 - 3n(k+1)$$

    where $n$ is the number of rows, or subjects; $k$ is the number of columns, or conditions; and $R_i$ is the sum of the ranks of the $i$th column. If ranking results in any ties, the Friedman test statistic $F_t$ is defined as:

$$F_t = \frac{n(k-1)\left[\sum_{i=1}^{k} \frac{R_i^2}{n} - C_F\right]}{\sum r_{ij}^2 - C_F}$$

    where $n$ is the number of rows, or subjects; $k$ is the number of columns; $R_i$ is the sum of the ranks from column, or condition, $i$; and $C_F$ is the ties correction (Corder et al., 2009).
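    The no-ties formula is easy to verify against R's built-in implementation. In this hedged sketch the data matrix is randomly generated; the values are continuous, so ties do not occur.

```r
set.seed(5)
y <- matrix(rnorm(8 * 4), nrow = 8)   # 8 subjects (rows), 4 conditions (cols)
R <- t(apply(y, 1, rank))             # rank scores within each subject
n <- nrow(y); k <- ncol(y)
Ft <- 12 / (n * k * (k + 1)) * sum(colSums(R)^2) - 3 * n * (k + 1)
Ft                                    # statistic computed from the formula
friedman.test(y)$statistic            # matches when there are no ties
```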
  • Repeated-Measures ANOVA & Friedman Test Using STATCAL
    Repeated-Measures ANOVA & Friedman Test Using STATCAL (R) & SPSS. Prana Ugiana Gio. Download STATCAL at www.statcal.com
    CONTENT: 1.1 Example of Case; 1.2 Explanation of Some Books About Repeated-Measures ANOVA; 1.3 Repeated-Measures ANOVA & Friedman Test; 1.4 Normality Assumption and Assumption of Equality of Variances (Sphericity); 1.5 Descriptive Statistics Based on SPSS and STATCAL (R); 1.6 Normality Assumption Test Using Kolmogorov-Smirnov Test Based on SPSS & STATCAL (R); 1.7 Assumption Test of Equality of Variances Using Mauchly Test Based on SPSS & STATCAL (R); 1.8 Repeated-Measures ANOVA Based on SPSS & STATCAL (R); 1.9 Multiple Comparison Test Using Bonferroni Test Based on SPSS & STATCAL (R); 1.10 Friedman Test Based on SPSS & STATCAL (R); 1.11 Multiple Comparison Test Using Wilcoxon Test Based on SPSS & STATCAL (R)
    1.1 Example of Case. For example, we are given the weights of 11 persons before and after consuming a diet medicine for one week, two weeks, three weeks and four weeks (Table 1.1.1).

    Table 1.1.1 Data of Weight of 11 Persons

    Name     Before   One Week   Two Weeks   Three Weeks   Four Weeks
    A         89.43      85.54       80.45         78.65        75.45
    B         85.33      82.34       79.43         76.55        71.35
    C         90.86      87.54       85.45         80.54        76.53
    D         91.53      87.43       83.43         80.44        77.64
    E         90.43      84.45       81.34         78.64        75.43
    F         90.52      86.54       85.47         81.44        78.64
    G         87.44      83.34       80.54         78.43        77.43
    H         89.53      86.45       84.54         81.35        78.43
    I         91.34      88.78       85.47         82.43        78.76
    J         88.64      84.36       80.66         78.65        77.43
    K         89.51      85.68       82.68         79.71        76.50
    Average   89.51      85.68       82.68         79.71        76.69

    Based on Table 1.1.1, the person named A has an initial weight of 89.43; after consuming the diet medicine, the weight is 85.54 after one week, 80.45 after two weeks, 78.65 after three weeks, and 75.45 after four weeks.
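    The same data can be entered directly in R and analyzed with the Friedman test, as a minimal sketch of the comparison this booklet walks through in SPSS and STATCAL (the repeated-measures ANOVA step is omitted here).

```r
## Weights from Table 1.1.1: rows are persons A-K, columns the five time points.
weight <- matrix(c(89.43, 85.54, 80.45, 78.65, 75.45,
                   85.33, 82.34, 79.43, 76.55, 71.35,
                   90.86, 87.54, 85.45, 80.54, 76.53,
                   91.53, 87.43, 83.43, 80.44, 77.64,
                   90.43, 84.45, 81.34, 78.64, 75.43,
                   90.52, 86.54, 85.47, 81.44, 78.64,
                   87.44, 83.34, 80.54, 78.43, 77.43,
                   89.53, 86.45, 84.54, 81.35, 78.43,
                   91.34, 88.78, 85.47, 82.43, 78.76,
                   88.64, 84.36, 80.66, 78.65, 77.43,
                   89.51, 85.68, 82.68, 79.71, 76.50),
                 nrow = 11, byrow = TRUE,
                 dimnames = list(LETTERS[1:11],
                   c("Before", "Week1", "Week2", "Week3", "Week4")))
friedman.test(weight)   # tests for any change in weight across the time points
```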