An Improved Modified Cholesky Decomposition Approach for Precision Matrix Estimation

Xiaoning Kang and Xinwei Deng

Abstract

The modified Cholesky decomposition is commonly used for precision matrix estimation given a specified order of random variables. However, the order of variables is often not available or cannot be pre-determined. In this work, we propose to address the variable order issue in the modified Cholesky decomposition for sparse precision matrix estimation. The key idea is to effectively combine a set of estimates obtained from multiple permutations of variable orders, and to efficiently encourage the sparse structure of the resultant estimate by applying a thresholding technique to the ensemble Cholesky factor matrix. The consistency of the proposed estimate is established under some weak regularity conditions. Simulation studies are conducted to evaluate the performance of the proposed method in comparison with several existing approaches. The proposed method is also applied to linear discriminant analysis of real data for classification.

Keywords: LDA; Precision matrix; Sparsity; Thresholding

Declarations of interest: none

1 Introduction

The estimation of a large sparse precision matrix is of fundamental importance in multivariate analysis and various statistical applications. For example, in classification problems, linear discriminant analysis (LDA) needs the precision matrix to compute the classification rule. In financial applications, portfolio optimization often involves the precision matrix in minimizing the portfolio risk. A sparse estimate of the precision matrix not only provides a parsimonious model structure, but also gives a meaningful interpretation of the conditional independence among the variables.

Suppose that $X = (X_1, \ldots, X_p)'$ is a $p$-dimensional vector of random variables with an unknown covariance matrix $\Sigma$. Without loss of generality, we assume that the expectation of $X$ is zero. Let $x_1, \ldots, x_n$ be $n$ independent and identically distributed observations following a multivariate normal distribution $N(0, \Sigma)$ with mean equal to the zero vector and covariance matrix $\Sigma$. The goal of this work is to estimate the precision matrix $\Omega = (\omega_{ij})_{p \times p} = \Sigma^{-1}$. Of particular interest is to identify the zero entries among the $\omega_{ij}$, since $\omega_{ij} = 0$ implies the conditional independence between $X_i$ and $X_j$ given all the other random variables. One way is to obtain a sparse covariance matrix and then take its inverse. The inverse, however, is often computationally intensive, especially in high-dimensional cases. Moreover, the inverse of a sparse covariance matrix often does not have a sparse structure. Therefore, it is desirable to obtain a sparse estimate directly to capture the underlying sparsity in the precision matrix.

The estimation of a sparse precision matrix has attracted great attention in the literature. Meinshausen and Bühlmann (2006) [1] introduced a neighborhood-based approach: it first estimates each column of the precision matrix by the scaled Lasso or Dantzig selector, and then adjusts the matrix estimator to be symmetric. Yuan and Lin (2007) [2] proposed the Graphical Lasso (Glasso) method, which gives a sparse and shrunken estimator of $\Omega$ by penalizing the negative log-likelihood as
$$\hat{\Omega} = \arg\min_{\Omega} \; -\log|\Omega| + \mathrm{tr}[\Omega S] + \rho \|\Omega\|_1,$$
where $S = \frac{1}{n}\sum_{i=1}^{n} x_i x_i'$ is the sample covariance matrix, $\rho \geq 0$ is a tuning parameter, and $\|\cdot\|_1$ denotes the $L_1$ norm applied to the off-diagonal entries.
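As a point of reference, the Glasso estimator above is available in standard software. Below is a minimal Python illustration using scikit-learn's GraphicalLasso; the library choice and the toy tridiagonal precision matrix are our own, not part of the paper, and the parameter alpha plays the role of the tuning parameter $\rho$.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Toy data: n = 200 draws from N(0, Sigma) with a tridiagonal true precision
p = 10
Omega_true = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
Sigma_true = np.linalg.inv(Omega_true)
X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=200)

# alpha is the L1 penalty weight, i.e., the rho in the objective above
model = GraphicalLasso(alpha=0.1).fit(X)
Omega_hat = model.precision_   # sparse estimate of Omega
print(np.round(Omega_hat, 2))
```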
Hence, the penalty term encourages some of the off-diagonal entries of the estimated $\Omega$ to be exactly zero. Different variations of the Glasso formulation have since been studied [3, 4, 5, 6, 7, 8]. The corresponding theoretical properties of the Glasso method have also been developed [2, 5, 9, 10]. In particular, the results of Raskutti et al. (2008) [5] and Rothman et al. (2008) [9] suggest that, although better than the sample covariance matrix, the Glasso estimate may not perform well when $p$ is larger than the sample size $n$.

In addition, Fan, Fan and Lv (2008) [11] developed a factor model to estimate both the covariance matrix and its inverse. They also studied the estimation in the asymptotic framework where both the dimension $p$ and the sample size $n$ go to infinity. Xue and Zou (2012) [12] introduced a rank-based approach for estimating high-dimensional nonparametric graphical models under a strong sparsity assumption that the true precision matrix has only a few nonzero entries. Wieringen and Peeters (2016) [13] studied the estimation of a high-dimensional precision matrix based on the ridge penalty. There are also a few works focusing on inference for precision matrix estimation. Drton and Perlman (2008) [14] proposed a new model selection strategy for Gaussian graphical models via hypothesis tests of the conditional independence between variables. Sun and Zhang (2012) [15] derived a residual-based estimator to construct confidence intervals for entries of the estimated precision matrix. Some recent Bayesian literature can also be found in the work of [16, 17, 18, 19, 20], among others.

Another set of commonly used methods considers matrix decompositions for estimating a sparse precision matrix. The modified Cholesky decomposition (MCD) approach was developed to estimate $\Omega$ [21, 22]. This method reduces the challenge of estimating a precision matrix to solving a sequence of regression problems, and provides an unconstrained and statistically interpretable parametrization of a precision matrix. Although the MCD approach is statistically meaningful, the resultant estimate depends on the order of the random variables $X_1, \ldots, X_p$. In many applications, the variables often do not have a natural order; that is, the variable order is not available or cannot be pre-determined before the analysis. There are several Cholesky-based methods for estimating the precision matrix without specifying a natural order of the variables [23, 24, 25]. In this work, we propose an improved MCD approach to estimate the sparse precision matrix by addressing the variable order issue using the permutation idea in Zheng et al. (2017) [25]. By considering an ensemble estimate under multiple permutations of the variable orders, Zheng et al. (2017) [25] introduced an order-averaged estimator for the large covariance matrix. However, they did not give the theoretical properties of their estimator. Moreover, their estimator is not sparse, which is a very important and desired property for matrix estimation in high-dimensional cases. Hence, this paper improves the estimate of Zheng et al. (2017) [25] by encouraging a sparse structure in the precision matrix estimate. Specifically, we average the multiple estimates of the Cholesky factor matrix, and consequently construct the final estimate of the precision matrix, as sketched in the code below.
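To make the ensemble idea concrete, the following is a minimal Python sketch under simplifying assumptions, not the authors' implementation: the function names are ours, each permuted sample covariance is taken as the plug-in estimate and is assumed positive definite (so $n > p$), and a single hard-threshold level tau is applied to the averaged factor. For each random order it computes the modified Cholesky factors, maps them back to the original variable order, averages, thresholds, and rebuilds $\Omega$.

```python
import numpy as np

def mcd_factors(S):
    """Modified Cholesky factors of S: returns (T, d) with
    inv(S) = T' diag(1/d) T and T unit lower triangular."""
    L = np.linalg.cholesky(S)        # S = L L'
    sqrt_d = np.diag(L)
    T = np.linalg.inv(L / sqrt_d)    # L / sqrt_d is unit lower triangular
    return T, sqrt_d ** 2

def ensemble_precision(X, n_perm=50, tau=0.1, seed=0):
    """Average MCD factors over random variable orders, hard-threshold
    the averaged factor, and rebuild a sparse precision matrix."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    T_bar, d_bar = np.zeros((p, p)), np.zeros(p)
    for _ in range(n_perm):
        perm = rng.permutation(p)
        inv = np.argsort(perm)                   # inverse permutation
        S_pi = np.cov(X[:, perm], rowvar=False)  # covariance in this order
        T, d = mcd_factors(S_pi)
        T_bar += T[np.ix_(inv, inv)]             # back to original order
        d_bar += d[inv]
    T_bar /= n_perm
    d_bar /= n_perm
    off = ~np.eye(p, dtype=bool)
    T_bar[off & (np.abs(T_bar) < tau)] = 0.0     # hard thresholding
    return T_bar.T @ np.diag(1.0 / d_bar) @ T_bar   # Omega = T' D^{-1} T
```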
Since the averaged Cholesky factor matrix may not have a sparse structure, we adopt the hard thresholding technique on the averaged Cholesky factor matrix to obtain sparsity, thus leading to a sparse structure in the estimated precision matrix. The proposed method provides an accurate estimate and is able to capture the underlying sparse structure of the precision matrix. The sensitivity to the number of permutations of variable orders is studied in simulation. We also establish the consistency of the proposed estimator with respect to the Frobenius norm under some appropriate conditions.

The remainder of the paper is organized as follows. In Section 2, we briefly review the MCD approach for estimating the precision matrix. In Section 3, we address the order issue of the MCD approach and propose an ensemble sparse estimate of $\Omega$. The consistency property is established in Section 4. Simulation studies and illustrative examples of real data are presented in Sections 5 and 6, respectively. We conclude our work with some discussion in Section 7. The technical proofs are given in the Appendix.

2 Revisit of Modified Cholesky Decomposition

The key idea of the modified Cholesky decomposition approach is that the precision matrix $\Omega$ can be decomposed using a unique lower triangular matrix $T$ and a unique diagonal matrix $D$ with positive diagonal entries [21] such that
$$\Omega = T' D^{-1} T.$$
The entries of $T$ and the diagonal of $D$ are unconstrained and interpretable as regression coefficients and corresponding variances when one variable $X_j$ is regressed on its predecessors $X_1, \ldots, X_{j-1}$. Clearly, an order for the variables $X_1, \ldots, X_p$ is pre-specified here. Specifically, consider $X_1 = \epsilon_1$, and for $j = 2, \ldots, p$, define
$$X_j = \sum_{k=1}^{j-1} a_{jk} X_k + \epsilon_j = Z_j' a_j + \epsilon_j, \qquad (2.1)$$
where $Z_j = (X_1, \ldots, X_{j-1})'$, and $a_j = (a_{j1}, \ldots, a_{j,j-1})'$ is the corresponding vector of regression coefficients. The errors $\epsilon_j$ are assumed to be independent with zero mean and variance $d_j^2$. Denote $\epsilon = (\epsilon_1, \ldots, \epsilon_p)'$ and $D = \mathrm{Cov}(\epsilon) = \mathrm{diag}(d_1^2, \ldots, d_p^2)$. Then the $p$ regression models in (2.1) can be expressed in the matrix form $X = AX + \epsilon$, where $A$ is a lower triangular matrix with $a_{jk}$ in the $(j,k)$th position and zeros on its diagonal. Thus one can easily write $TX = \epsilon$ with $T = I - A$ to derive the expression $\Omega = T' D^{-1} T$. The MCD approach therefore reduces the challenge of modeling a precision matrix to the task of fitting $(p-1)$ regression problems, as illustrated in the sketch below.

Note that the MCD-based estimate is affected by the order of the variables $X_1, \ldots, X_p$, since the regressions in (2.1) change when the order changes [23]. Hence, different orders of the variables would result in different regressions, leading to different estimates.
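For a fixed variable order, the decomposition suggests a direct estimator: fit the regressions in (2.1) by ordinary least squares and collect the coefficients and residual variances. The following Python sketch illustrates this; the function name mcd_by_regression is ours, and it assumes $n > p$ so that each least-squares problem is well posed (a high-dimensional setting would instead call for regularized fits).

```python
import numpy as np

def mcd_by_regression(X):
    """Estimate T and D in Omega = T' D^{-1} T by least squares:
    regress each X_j on its predecessors X_1, ..., X_{j-1} (eq. 2.1).
    Columns of X must already be in the chosen variable order; n > p."""
    n, p = X.shape
    X = X - X.mean(axis=0)       # the model assumes zero means
    T = np.eye(p)                # T = I - A, unit lower triangular
    d2 = np.empty(p)
    d2[0] = X[:, 0].var()        # X_1 = eps_1, so d_1^2 = Var(X_1)
    for j in range(1, p):
        Z = X[:, :j]                                       # predecessors
        a, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)    # a_j in (2.1)
        T[j, :j] = -a                                      # row j holds -a_j
        d2[j] = (X[:, j] - Z @ a).var()                    # innovation variance
    Omega = T.T @ np.diag(1.0 / d2) @ T
    return T, d2, Omega
```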