14. Non-Parametric Tests

You know the story. You've tried for hours to normalise your data. But you can't. And even if you can, your variances turn out to be unequal anyway. But do not dismay: there are statistical tests that you can run with non-normal, heteroscedastic data.

All of the tests that we have seen so far (e.g. t-tests, ANOVA, etc.) rely on comparisons with some probability distribution, such as a t-distribution or F-distribution. These are known as "parametric statistics," as they make inferences about population parameters. However, there are other statistical tests that do not have to meet the same assumptions, known as "non-parametric tests." Many of these tests are based on ranks and/or compare sample medians rather than means (as the means of non-normal data are not meaningful).

Non-parametric tests are often not as powerful as parametric tests, so you may find that your p-values are slightly larger. However, if your data is highly non-normal, a non-parametric test may detect an effect that is missed by the equivalent parametric test due to falsely large variances caused by the skew of the data. There is, of course, no problem in analyzing normally distributed data with non-parametric statistics either. Non-parametric tests are equally valid (i.e. equally defensible), and most have been around as long as their parametric counterparts. So, if your data does not meet the assumption of normality or homoscedasticity, and if it is impossible (or inappropriate) to transform it, you can use non-parametric tests. The assumptions of independence and randomization, of course, still apply.

14.1 Non-Normal Data

For this lab, we will work with tree improvement data from a tree breeding program in British Columbia. The data we will work with has two categorical independent variables: three species of spruce tree (SPECIES) and two different site moisture conditions (SITE).
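To see why medians and ranks are preferred when data are skewed, here is a small illustrative sketch. The numbers are invented for illustration, not taken from the lab data:

```r
# Invented, right-skewed sample (not the lab data): one extreme value
x <- c(1.2, 1.4, 1.5, 1.7, 9.8)

mean(x)    # 3.12 -- dragged upward by the single large value
median(x)  # 1.5  -- unaffected by how extreme that value is
rank(x)    # 1 2 3 4 5 -- to a rank-based test, 9.8 is simply "largest"
```

Rank-based tests such as the Wilcoxon work on the rank() values, so once an observation is the largest it makes no difference how extreme it is; this is what makes these tests robust to skew.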
Tree heights were measured after one year (HEIGHT). Download the "non-parametric.csv" data from the website. Import it into R as "data" and attach it.

You can use boxplots to see that the data is not normal for many treatment levels. We will assume for this lab that we cannot or do not want to transform the data (perhaps we want to maintain the original units). You can also use a Shapiro test to verify quantitatively that the data is not normal in many cases:

boxplot(HEIGHT~SPECIES)
boxplot(HEIGHT~SITE)
shapiro.test(data[data$SPECIES=="engelman","HEIGHT"])
shapiro.test(data[data$SPECIES=="black","HEIGHT"])
shapiro.test(data[data$SPECIES=="white","HEIGHT"])
# Repeat for SITE treatment levels

14.2 Wilcoxon Rank Sum Test (alternative to t-test when distributions are similarly shaped)

The Wilcoxon test (a.k.a. the Mann-Whitney test) is very common and is the non-parametric equivalent of a t-test. However, you should only use this test when your distributions are similarly shaped (e.g. when your samples are skewed in the same direction).

To start, we will compare just the two levels of the SITE treatment: xeric (dry) and mesic (moist). We will do this first with a t-test for comparison, then with a Wilcoxon test. Create an R object of the height after the first year (HEIGHT) for each of the SITE treatments:

xeric <- data[data$SITE=="xeric","HEIGHT"]
mesic <- data[data$SITE=="mesic","HEIGHT"]

First, we will run a one-sample t-test equivalent using both the t-test and the Wilcoxon test in R. We will compare a height measurement of 0.46 to the rest of the xeric heights. How do the results of the two tests compare?

t.test(xeric,mu=0.46)
wilcox.test(xeric,mu=0.46)

Now, run the non-parametric equivalent of a two-sample t-test using the Wilcoxon test. (This is also often referred to as a Mann-Whitney test.) Compare the tree heights between the xeric and mesic sites with a t-test (which assumes normality for both samples) and with a Wilcoxon test. What do you notice?
t.test(xeric,mesic)
wilcox.test(xeric,mesic)

Wilcoxon tests can also run one-tailed equivalents. We simply add the alternative hypothesis to the statement, just as we did with the t-test. In this case, our alternative hypothesis is that the xeric site produces larger trees:

wilcox.test(xeric,mesic,alternative="greater")
# And compare to the same t-test:
t.test(xeric,mesic,alternative="greater")

We can also run a paired test with a Wilcoxon test (but not with these data), just as we did with a t-test. To do this, we would simply add the option paired=TRUE to the command (just like a t-test). Note that there is an exact option controlling whether an exact p-value is computed; by default, exact p-values are used for small samples without ties. If you disable it (set exact=FALSE), the procedure uses a normal approximation instead. You may need to do this for large samples; otherwise the test may run for hours or days!

14.3 Kolmogorov-Smirnov Test (alternative to t-test when distributions are of different shape)

The Wilcoxon rank sum test is not suitable if you have distributions/samples that do not look roughly similar in a histogram (i.e. they don't have similar skew). In that case, you can use the (again, less powerful) Kolmogorov-Smirnov test. Coding in R for the Kolmogorov-Smirnov test, using the ks.test() command, uses basically the same syntax as the t.test() and wilcox.test() commands. We'll try using it to compare the same samples (xeric and mesic) as above. As far as I know, there is no way to run a one-sample test with the K-S test (but this is no problem, as you can always use a Wilcoxon test for that).
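As a quick aside on the paired=TRUE option mentioned above, here is a minimal sketch with hypothetical paired data: eight seedlings, each measured in two consecutive years. The numbers are invented, not from the lab file:

```r
# Hypothetical paired measurements (invented data, not the lab dataset):
# the same eight seedlings measured in year 1 and again in year 2
year1 <- c(0.42, 0.51, 0.38, 0.47, 0.55, 0.44, 0.49, 0.40)
year2 <- c(0.55, 0.60, 0.41, 0.58, 0.61, 0.52, 0.56, 0.45)

# paired=TRUE tells R each observation in year1 matches one in year2,
# giving the Wilcoxon signed-rank test instead of the rank sum test
wilcox.test(year1, year2, paired = TRUE)
```

Because every seedling grew, all the paired differences have the same sign, and the exact two-sided p-value comes out well below 0.05.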
# Two-sample, two-tailed test
ks.test(xeric,mesic)

# Two-sample, one-tailed test
# Note that the order of the variables must be reversed (or use "less")
ks.test(mesic,xeric,alternative="greater")

14.4 Kruskal-Wallis Test (alternative to one-way ANOVA for non-normal distributions)

The Kruskal-Wallis test is an extension of the Wilcoxon rank sum test to more than two treatment levels, just as a parametric ANOVA is a kind of extension of the parametric t-test. However, the K-W test may only be used as an equivalent of a one-way ANOVA. It is not capable of comparing more than one treatment (more than one factor). Use a K-W test to compare the heights of the three spruce species:

kruskal.test(HEIGHT~SPECIES)
# And compare the output to a parametric ANOVA
summary(aov(HEIGHT~SPECIES))

As in ANOVA, a significant treatment effect indicates that at least one population median differs from another (we compare medians, as means are somewhat meaningless in skewed data). So, it must be followed up by pair-wise Wilcoxon tests comparing each of the treatment levels, and because of that you will have to adjust your p-values when you run multiple comparisons.

14.5 Friedman's Rank Test (alternative to two-way ANOVA with blocks)

There are no non-parametric statistical tests for multi-factor experimental designs (that I know of, anyway). However, there is one non-parametric test to analyze a randomized complete block design, which is Friedman's Rank Test. In R, you can only execute the Friedman Rank Test if there are no replications within blocks. So, we will add some blocks to the lab data just for the sake of this example:

# Use the built-in LETTERS constant to repeat capital letters in a vector
data$BLOCK = rep(LETTERS[1:8],3)
# You will have to detach & reattach the data to refer to BLOCK
detach(data)
attach(data)

Now, you can run the blocked non-parametric ANOVA with Friedman's test.
Note that BLOCK and SPECIES (or whatever you name the variables) cannot be stated in reverse order:

friedman.test(HEIGHT~SPECIES|BLOCK)
# Compare this to a parametric blocked ANOVA
summary(aov(HEIGHT~SPECIES+BLOCK))

CHALLENGE:
1. Why are parametric tests considered stronger statistical tests?
2. What should your steps be before using a non-parametric test?
3. How do the p-values of a t-test and a Wilcoxon test compare?