A Fast Algorithm for Computing Distance Correlation

Arin Chaudhuri* and Wenhao Hu†
Internet of Things, SAS Institute Inc.

arXiv:1810.11332v2 [stat.CO] 15 Nov 2018

Abstract

Classical dependence measures such as Pearson correlation, Spearman's ρ, and Kendall's τ can detect only monotonic or linear dependence. To overcome these limitations, Székely et al. (2007) proposed distance covariance as a weighted $L_2$ distance between the joint characteristic function and the product of the marginal characteristic functions. The distance covariance is 0 if and only if two random vectors X and Y are independent. This measure has the power to detect the presence of a dependence structure when the sample size is large enough. They further showed that the sample distance covariance can be calculated simply from modified Euclidean distances, which typically requires $O(n^2)$ cost. The quadratic computing time greatly limits the application of distance covariance to large data. In this paper, we present a simple exact $O(n \log n)$ algorithm to calculate the sample distance covariance between two univariate random variables. The proposed method essentially consists of two sorting steps, so it is easy to implement. Empirical results show that the proposed algorithm is significantly faster than state-of-the-art methods. The algorithm's speed will enable researchers to explore complicated dependence structures in large datasets.

Keywords: Distance Correlation; Dependency Measure; Fast Algorithm; Merge Sort

*[email protected]  †[email protected]

1 Introduction

Detecting dependencies between two random vectors X and Y is a fundamental problem in statistics and machine learning. Dependence measures such as Pearson's correlation, Spearman's ρ, and Kendall's τ are used in almost all quantitative areas; example areas are bioinformatics (Guo et al., 2014; Sferra et al., 2017) and time series (Zhou, 2012). However, these classical dependence measures are usually designed to detect one specific dependence structure, such as a monotonic or linear structure. It is easy to construct highly dependent X and Y whose dependence cannot be detected by classical dependence measures.

To overcome these limitations, Székely et al. (2007) and Székely and Rizzo (2009) proposed distance covariance as a weighted $L_2$ distance between the joint characteristic function and the product of the marginal characteristic functions. The distance covariance is 0 if and only if the two random vectors X and Y are independent. A closely related measure is the Hilbert-Schmidt independence criterion (HSIC), which has been extensively studied in the machine learning literature (Gretton et al., 2005, 2008; Pfister et al., 2018). Sejdinovic et al. (2013) established the equivalence of distance covariance and HSIC.

Despite the power of the sample distance covariance to detect a dependence structure, its use for large sample sizes is inhibited by the high computational cost required. Computing the sample distance covariance or HSIC typically requires $O(n^2)$ pairwise distance (kernel) calculations and $O(n^2)$ memory for storing them. This is undesirable and greatly limits the application of distance correlation to large datasets. In the era of big data, it is not rare to see data that consist of millions of observations. For such data, an $O(n^2)$ algorithm is almost impossible to run on a personal computer. To approximate the distance covariance or HSIC for large data, the Nyström approach or the random Fourier feature method is often adopted.
However, the use of these approximations leads to a reduction in power (Zhang et al., 2018). In this article, we describe an exact method to compute the sample distance covariance between two univariate random variables with $O(n \log n)$ computational cost and $O(n)$ memory cost. Our proposed method essentially consists of just two sorting steps, which makes it easy to implement.

A closely related $O(n \log n)$ algorithm for sample distance covariance was proposed by Huo and Székely (2016). Our algorithm differs from Huo and Székely (2016) in the following ways. First, they implicitly assume that there are no ties in the data (see Algorithm 1 and its proof in Huo and Székely (2016)), whereas our proposed method is valid for any pair of real-valued univariate variables. In practice, it is common to see datasets with ties, especially for discrete variables or bootstrap samples. Second, we use a merge sort instead of an AVL tree-type implementation to compute the Frobenius inner product of the distance matrices of x and y. Empirical results show that our proposed method is significantly faster; for example, for one million observations our MATLAB implementation runs 10 times faster on our desktop, finishing in 4 seconds, whereas the implementation in Huo and Székely (2016) requires 40 seconds. Because our implementation consists only of MATLAB code while the key step of the Huo and Székely (2016) routine is implemented in C, even greater speed increases are possible by rewriting the critical parts of our implementation in C.

The rest of the paper is organized as follows. In Section 2, we briefly introduce the definition of distance covariance and its sample estimate. In Section 3, we describe the proposed $O(n \log n)$ algorithm for sample distance covariance. In Section 4, experimental results are presented. Finally, conclusions and remarks are made in Section 5.

2 Some Preliminaries

Denote the joint characteristic function of $X \in \mathbb{R}^p$ and $Y \in \mathbb{R}^q$ as $f_{X,Y}(t,s)$, and denote the marginal characteristic functions of $X$ and $Y$ as $f_X(t)$ and $f_Y(s)$, respectively. Denote $|\cdot|_k$ as the Euclidean norm in $\mathbb{R}^k$. The squared distance covariance is defined as the weighted $L_2$ distance between $f_{X,Y}(t,s)$ and $f_X(t) \cdot f_Y(s)$:

$$\mathcal{V}^2(X, Y) = \int_{\mathbb{R}^{p+q}} |f_{X,Y}(t,s) - f_X(t) f_Y(s)|^2 \, w(t,s) \, dt \, ds,$$

where $w(t,s) = (c_p c_q |t|_p^{1+p} |s|_q^{1+q})^{-1}$, and $c_p$ and $c_q$ are constants. It is obvious that $\mathcal{V}^2(X, Y) = 0$ if and only if $X$ and $Y$ are independent. Under some mild conditions, the squared distance covariance can be defined equivalently as an expectation of Euclidean distances:

$$\mathcal{V}^2(X, Y) = E(|X - X'|_p |Y - Y'|_q) - 2E(|X - X'|_p |Y - Y''|_q) + E(|X - X'|_p)\,E(|Y - Y'|_q), \qquad (1)$$

where $(X, Y)$, $(X', Y')$, and $(X'', Y'')$ are independent and identically distributed copies from the joint distribution of $(X, Y)$. The squared distance correlation is defined by

$$\mathcal{R}^2(X, Y) = \begin{cases} \dfrac{\mathcal{V}^2(X,Y)}{\sqrt{\mathcal{V}^2(X,X)\,\mathcal{V}^2(Y,Y)}} & \text{if } \mathcal{V}^2(X,X)\,\mathcal{V}^2(Y,Y) > 0, \\[1ex] 0 & \text{otherwise.} \end{cases} \qquad (2)$$

Let $\mathbf{X} = (x_1, \ldots, x_n)^\top$ and $\mathbf{Y} = (y_1, \ldots, y_n)^\top$ be the sample collected. Define

$$a_{ij} = |x_i - x_j|_p, \qquad b_{ij} = |y_i - y_j|_q,$$
$$a_{i\cdot} = \sum_{j=1}^n a_{ij}, \qquad b_{i\cdot} = \sum_{j=1}^n b_{ij},$$
$$a_{\cdot\cdot} = \sum_{i=1}^n a_{i\cdot}, \qquad b_{\cdot\cdot} = \sum_{i=1}^n b_{i\cdot},$$
$$D = \sum_{1 \le i,j \le n} a_{ij} b_{ij}.$$

The squared sample distance covariance between $\mathbf{X}$ and $\mathbf{Y}$ is

$$\mathcal{V}_n^2(\mathbf{X}, \mathbf{Y}) = \frac{D}{n^2} - \frac{2}{n^3} \sum_{i=1}^n a_{i\cdot} b_{i\cdot} + \frac{a_{\cdot\cdot} b_{\cdot\cdot}}{n^4}, \qquad (3)$$

which is similar in form to (1). The squared sample distance correlation is given by

$$\mathcal{R}_n^2(\mathbf{X}, \mathbf{Y}) = \begin{cases} \dfrac{\mathcal{V}_n^2(\mathbf{X},\mathbf{Y})}{\sqrt{\mathcal{V}_n^2(\mathbf{X},\mathbf{X})\,\mathcal{V}_n^2(\mathbf{Y},\mathbf{Y})}} & \text{if } \mathcal{V}_n^2(\mathbf{X},\mathbf{X})\,\mathcal{V}_n^2(\mathbf{Y},\mathbf{Y}) > 0, \\[1ex] 0 & \text{otherwise.} \end{cases} \qquad (4)$$
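For concreteness, the estimators in (3) and (4) can be coded directly. The following brute-force NumPy sketch (the function names are ours, not from the paper) forms the full distance matrices, so it has exactly the $O(n^2)$ time and memory cost discussed in the introduction; it is useful as a ground truth for validating faster implementations.

```python
import numpy as np

def dcov2_naive(x, y):
    """Squared sample distance covariance, equation (3), in O(n^2)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    a = np.abs(x[:, None] - x[None, :])          # a_ij = |x_i - x_j|
    b = np.abs(y[:, None] - y[None, :])          # b_ij = |y_i - y_j|
    D = np.sum(a * b)                            # Frobenius inner product of a and b
    a_row, b_row = a.sum(axis=1), b.sum(axis=1)  # a_i. and b_i.
    return (D / n**2
            - 2 * np.dot(a_row, b_row) / n**3
            + a_row.sum() * b_row.sum() / n**4)

def dcor2_naive(x, y):
    """Squared sample distance correlation, equation (4)."""
    v2 = dcov2_naive(x, y)
    vxx, vyy = dcov2_naive(x, x), dcov2_naive(y, y)
    return v2 / np.sqrt(vxx * vyy) if vxx * vyy > 0 else 0.0
```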
From (3), it is easy to see that an $O(n^2)$ brute-force algorithm exists for distance covariance. However, a brute-force implementation cannot handle large datasets. Moreover, the p-value of distance covariance or correlation is typically calculated by using a permutation test, which makes it even more computationally intensive. If we can compute $D$ and all the $a_{i\cdot}, b_{i\cdot}$ for $1 \le i \le n$ in $O(n \log n)$ steps, then we can also compute $\mathcal{V}_n^2(\mathbf{X}, \mathbf{Y})$ in $O(n \log n)$ steps.

In this paper, we consider the case where $X$ and $Y$ are univariate random variables; that is, $p = q = 1$. For the rest of this document we assume that $x_1 \le x_2 \le \cdots \le x_n$ (because after an $O(n \log n)$ sort step, we can ensure that $x_1 \le x_2 \le \cdots \le x_n$).

3 Fast Algorithm for Distance Covariance

Define the function $I(x)$ as

$$I(x) = \begin{cases} 1 & \text{if } x > 0, \\ 0 & \text{otherwise.} \end{cases} \qquad (5)$$

For any two real numbers $x$ and $y$ we have

$$|x - y| = (x - y)(2I(x - y) - 1). \qquad (6)$$

We use (6) extensively in the rest of the paper.

3.1 Fast computation of the $a_{i\cdot}$ and $b_{i\cdot}$

Define $s_i = \sum_{j=1}^{i} x_j$ for $1 \le i \le n$ and note that $s_1, \ldots, s_n$ can be computed in $O(n)$ time. Since $x_1 \le x_2 \le \cdots \le x_n$, we have

$$a_{i\cdot} = \sum_{j<i} (x_i - x_j) + \sum_{j>i} (x_j - x_i) = (2i - n)x_i + (s_n - 2s_i). \qquad (7)$$

So $a_{1\cdot}, \ldots, a_{n\cdot}$ can be computed in $O(n)$ time.

We can use an $O(n \log n)$ sorting algorithm to determine a permutation $\pi(1), \pi(2), \ldots, \pi(n)$ of $1, 2, \ldots, n$ such that $y_{\pi(1)} \le y_{\pi(2)} \le \cdots \le y_{\pi(n)}$. Therefore, as in (7), $b_{\pi(1)\cdot}, \ldots, b_{\pi(n)\cdot}$ can be computed in $O(n)$ time after $y_1, \ldots, y_n$ are sorted.

3.2 Fast computation of D

In this subsection, we describe an $O(n \log n)$ algorithm for computing $D$. First, we have

$$D = \sum_{i=1}^n \sum_{j=1}^n |x_i - x_j|\,|y_i - y_j| = 2 \sum_{i=1}^n \sum_{1 \le j < i} |x_i - x_j|\,|y_i - y_j|. \qquad (8)$$

In (8), note that $|x_i - x_j| = x_i - x_j$ if $1 \le j \le i$, thus showing that

$$\frac{D}{2} = \sum_{i=1}^n \sum_{1 \le j < i} (x_i - x_j)(y_i - y_j)(2I(y_i - y_j) - 1) = 2 \sum_{i=1}^n \sum_{1 \le j < i} (x_i - x_j)(y_i - y_j) I(y_i - y_j) - \sum_{i=1}^n \sum_{1 \le j < i} (x_i - x_j)(y_i - y_j).$$
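Equation (7) translates directly into vectorized code. Below is a minimal NumPy sketch (the function name is ours): one sort, one cumulative sum, and an inverse permutation give all the row sums of a univariate distance matrix in $O(n \log n)$.

```python
import numpy as np

def dist_row_sums(v):
    """Row sums d_i. = sum_j |v_i - v_j| in O(n log n), via equation (7)."""
    v = np.asarray(v, dtype=float)
    n = len(v)
    order = np.argsort(v, kind="mergesort")  # the O(n log n) sort step
    vs = v[order]                            # vs_1 <= vs_2 <= ... <= vs_n
    s = np.cumsum(vs)                        # s_i = vs_1 + ... + vs_i
    i = np.arange(1, n + 1)
    d_sorted = (2 * i - n) * vs + (s[-1] - 2 * s)  # equation (7) on sorted data
    d = np.empty(n)
    d[order] = d_sorted                      # undo the sort permutation
    return d
```

Applying dist_row_sums to x and to y yields all the $a_{i\cdot}$ and $b_{i\cdot}$; when the input is already sorted, the permutation step is the identity and only the $O(n)$ scan remains.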
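The display above reduces $D$ to the plain sum $T = \sum_i \sum_{j<i} (x_i - x_j)(y_i - y_j)$, which prefix sums handle in $O(n)$, plus the order-dependent sum $S = \sum_i \sum_{j<i,\, y_j < y_i} (x_i - x_j)(y_i - y_j)$. Expanding the product shows that, for each $i$, only four running totals over $\{j < i : y_j < y_i\}$ are needed: the pair count, $\sum x_j$, $\sum y_j$, and $\sum x_j y_j$. The paper accumulates these with a merge sort; as an illustration of the same $O(n \log n)$ idea, the sketch below swaps in a Fenwick (binary indexed) tree over the ranks of y. This is not the authors' merge-sort procedure, but it computes the same quantity, and the strict rank query handles ties in y just as $I(y_i - y_j)$ requires.

```python
import numpy as np

class Fenwick:
    """Binary indexed tree: point update and prefix sum, each O(log n)."""
    def __init__(self, n):
        self.t = [0.0] * (n + 1)

    def add(self, i, v):   # 0-based index i
        i += 1
        while i < len(self.t):
            self.t[i] += v
            i += i & -i

    def prefix(self, i):   # sum over indices 0..i; returns 0.0 if i < 0
        i += 1
        total = 0.0
        while i > 0:
            total += self.t[i]
            i -= i & -i
        return total

def frobenius_D(x, y):
    """D = sum_ij |x_i - x_j||y_i - y_j| in O(n log n); x must be nondecreasing."""
    n = len(x)
    # T = sum_{j<i} (x_i - x_j)(y_i - y_j) via O(n) prefix sums over j < i.
    px = np.concatenate(([0.0], np.cumsum(x)[:-1]))
    py = np.concatenate(([0.0], np.cumsum(y)[:-1]))
    pxy = np.concatenate(([0.0], np.cumsum(x * y)[:-1]))
    T = np.sum(np.arange(n) * x * y - x * py - y * px + pxy)
    # S = sum_{j<i, y_j<y_i} (x_i - x_j)(y_i - y_j) via four Fenwick trees
    # indexed by the rank of y; tied y values share a rank, and querying
    # rank - 1 keeps only strictly smaller y_j.
    r = np.searchsorted(np.unique(y), y)
    trees = [Fenwick(int(r.max()) + 1) for _ in range(4)]  # count, sum x, sum y, sum xy
    S = 0.0
    for xi, yi, ri in zip(x, y, r):
        c, sx, sy, sxy = (t.prefix(ri - 1) for t in trees)
        S += c * xi * yi - xi * sy - yi * sx + sxy
        for t, v in zip(trees, (1.0, xi, yi, xi * yi)):
            t.add(ri, v)
    return 2.0 * (2.0 * S - T)  # D/2 = 2S - T, from the display above
```

Combining the pieces under the standing assumption that the data are sorted by x, $\mathcal{V}_n^2(\mathbf{X}, \mathbf{Y})$ follows from (3) and should agree with dcov2_naive from Section 2 up to floating-point error:

```python
# x, y: one-dimensional data arrays of equal length
order = np.argsort(x, kind="mergesort")
xs, ys = np.asarray(x, float)[order], np.asarray(y, float)[order]
n = len(xs)
a_row, b_row = dist_row_sums(xs), dist_row_sums(ys)
v2 = (frobenius_D(xs, ys) / n**2
      - 2 * np.dot(a_row, b_row) / n**3
      + a_row.sum() * b_row.sum() / n**4)
```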
