PCA vs. Tensor-Based Dimension Reduction Methods: An Empirical Comparison on Active Shape Models of Organs

Jiun-Hung Chen and Linda G. Shapiro
Computer Science and Engineering, University of Washington, Seattle, WA 98195
{jhchen,shapiro}@cs.washington.edu

Abstract— How to model shape variations plays an important role in active shape models, which are widely used in model-based medical image segmentation, and principal component analysis (PCA) is a common approach for this task. Recently, various tensor-based dimension reduction methods have been proposed and have achieved better performance than PCA in face recognition. However, how they perform in modeling the 3D shape variations of organs, in terms of reconstruction error in medical image analysis, is still unclear. In this paper, we propose to use tensor-based dimension reduction methods to model shape variations. We empirically compare two-dimensional principal component analysis (2DPCA), the parallel factor model (Parafac) and the Tucker decomposition with PCA in terms of reconstruction error. In our experimental results on several different organs, namely livers, spleens and kidneys, 2DPCA performs best among the four compared methods, and the performance differences between 2DPCA and the other methods are statistically significant.
I. INTRODUCTION

Modeling shape variations is a significant step in active shape models [1], which are widely used in model-based medical image segmentation. A standard method for this step is principal component analysis (PCA). Unlike PCA, which uses vector-based representations, various tensor-based dimension reduction methods [2][3][4] have recently been proposed and have achieved better performance than PCA in face recognition. In contrast with the conventional vector representation of a shape, tensor-based dimension reduction methods can represent a shape directly by a two-dimensional matrix, or can represent the whole training set of shapes as a tensor [5]. For example, two-dimensional principal component analysis (2DPCA) [2] constructs the image covariance matrix directly from the original image matrices, without transforming them into 1D vectors, and uses its eigenvectors as principal components. The parallel factor model (Parafac) [6][5] and the Tucker decomposition [7][5] are two major tensor decomposition methods that decompose a tensor into components.

However, we have not seen any work that has used tensor-based dimension reduction methods in medical image analysis, except [8], which compared 2DPCA [2] with PCA on a normal/abnormal left ventricle shape classification task. In addition, in contrast with previous papers that mainly focus on classification, our work requires accurate 3D reconstructions of 3D organs whose shapes can vary significantly.

In this paper, we propose to model shape variations with tensor-based dimension reduction methods. We report on our empirical comparison of four reconstruction methods, PCA, 2DPCA, Parafac and the Tucker decomposition, on several different organs such as livers, spleens and kidneys. In our experimental comparisons, 2DPCA achieves the best performance among the four compared methods, and there are statistically significant differences between the performance of 2DPCA and those of the other methods.

II. METHODS

Assume that we have a training set of N 3D shapes, where each shape is represented by M 3D landmark points. Conventionally, each such shape is represented by a 3M × 1 vector.

A. PCA

The total scatter matrix S is defined as

    S = \sum_{i=1}^{N} (x_i - \bar{x})(x_i - \bar{x})^t    (1)

where x_i is the i-th training shape vector and \bar{x} is the mean shape vector:

    \bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i    (2)

PCA finds a projection axis b that maximizes b^t S b; intuitively, the total scatter of the projected samples is maximized after projecting the samples onto b. The optimal L projection axes b_l, l = 1, ..., L, that maximize this criterion are the eigenvectors of S corresponding to the largest L eigenvalues.¹ For a shape vector x, we can use its reconstruction \tilde{x}, defined below, to approximate it:

    \tilde{x} = \bar{x} + \sum_{l=1}^{L} c_l b_l    (3)

where c_l = (x - \bar{x})^t b_l.

¹ To be consistent, we use L in the following discussions to denote the number of components used in reconstructing a shape.
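As a concrete illustration (our own sketch, not code from the paper), the PCA construction above can be written in a few lines of NumPy; `shapes` is an assumed N × 3M array of training shape vectors:

```python
import numpy as np

def pca_reconstruct(shapes, x, L):
    """Approximate a shape vector x (length 3M) with the top-L
    eigenvectors of the total scatter matrix (Eqs. 1-3)."""
    x_bar = shapes.mean(axis=0)              # mean shape, Eq. (2)
    centered = shapes - x_bar
    S = centered.T @ centered                # total scatter matrix, Eq. (1)
    eigvals, eigvecs = np.linalg.eigh(S)     # S is symmetric
    B = eigvecs[:, np.argsort(eigvals)[::-1][:L]]  # top-L projection axes b_l
    c = B.T @ (x - x_bar)                    # coefficients c_l = (x - x_bar)^t b_l
    return x_bar + B @ c                     # reconstruction, Eq. (3)
```

With L equal to the full dimension the reconstruction is exact; truncating L trades accuracy for compactness, which is exactly the trade-off measured in the experiments below.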
B. Tensor-Based Dimension Reduction Methods

In contrast with the conventional vector representation of a shape, tensor-based dimension reduction methods represent a shape by a two-dimensional matrix; in other words, let X be a 3 × M matrix that represents a shape. In the following, we give a very brief introduction to tensors; for more details, please refer to [5]. A tensor is a generalization of vectors and matrices, and the order (or mode) of a tensor is the number of dimensions. We use a third-order tensor X ∈ R^{3×M×N} to represent the whole training set of N shapes, where the first mode represents the x, y, z dimensions of a point, the second mode represents the order of the points, and the third mode represents different patients. Although we focus on third-order tensors in this paper, it is easy to extend these concepts to higher-order tensors; for example, if the training set changed at regular intervals of time, a fourth-order tensor whose fourth mode represents time could be used.

1) 2DPCA: 2DPCA [2] projects a shape matrix X, which is a 3 × M matrix, onto a vector b, which is an M × 1 vector, by the linear transformation

    c = Xb    (4)

The image scatter matrix G is defined as

    G = \sum_{i=1}^{N} (X_i - \bar{X})^t (X_i - \bar{X})    (5)

where X_i is the shape matrix that represents the i-th training shape and

    \bar{X} = \frac{1}{N} \sum_{i=1}^{N} X_i    (6)

Similar to PCA, the goal of 2DPCA is to find a projection axis that maximizes b^t G b. The optimal L projection axes b_l, l = 1, ..., L, that maximize this criterion are the eigenvectors of G corresponding to the largest L eigenvalues. For a shape matrix X, we can use its reconstruction \tilde{X}, defined below, to approximate it:

    \tilde{X} = \bar{X} + \sum_{l=1}^{L} c_l b_l^t    (7)

where c_l = (X - \bar{X}) b_l.
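A matching sketch for 2DPCA (again our own illustration; `shape_mats` is an assumed N × 3 × M array of training shape matrices):

```python
import numpy as np

def twodpca_reconstruct(shape_mats, X, L):
    """Approximate a 3 x M shape matrix X with the top-L
    eigenvectors of the image scatter matrix (Eqs. 4-7)."""
    X_bar = shape_mats.mean(axis=0)            # mean shape matrix, Eq. (6)
    D = shape_mats - X_bar
    G = np.einsum('nim,nik->mk', D, D)         # image scatter matrix G (M x M), Eq. (5)
    eigvals, eigvecs = np.linalg.eigh(G)
    B = eigvecs[:, np.argsort(eigvals)[::-1][:L]]  # M x L projection axes b_l
    C = (X - X_bar) @ B                        # 3 x L coefficients c_l, Eq. (4)
    return X_bar + C @ B.T                     # reconstruction, Eq. (7)
```

Note that, unlike PCA, each 2DPCA component carries a 3-vector of coefficients (one per spatial coordinate), since c_l = (X - \bar{X}) b_l is 3 × 1.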
2) Parallel Factor Model: Parafac [6][5] factorizes a tensor into a weighted sum of component rank-one tensors [5]. In other words, given a tensor X ∈ R^{I×J×K}, Parafac decomposes it as

    X \approx \sum_{l=1}^{L} \lambda_l \, a_l \circ b_l \circ c_l    (8)

where a_l ∈ R^{I×1}, b_l ∈ R^{J×1}, c_l ∈ R^{K×1} and \lambda_l ∈ R for l = 1, ..., L, and \circ denotes the vector outer product [5]. The alternating least squares (ALS) method [6][5] is commonly used to find the Parafac decomposition.

After the decomposition is computed, for a test shape, different methods [9][10] can be used to find the associated coefficient vectors and to compute the reconstruction that approximates it. In this paper, we follow the linear projection method in [9]: given a shape matrix X, we calculate its reconstruction \tilde{X} = \sum_{l=1}^{L} c_l \lambda_l \, a_l \circ b_l to approximate it by solving

    \min_{\tilde{X}} ||X - \tilde{X}||    (9)

where ||X|| is the Frobenius norm of X.
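The ALS iteration for Parafac can be sketched as follows. This is a minimal NumPy illustration of the technique, not the authors' implementation; the matricization and Khatri-Rao conventions are simply chosen to be internally consistent:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product, matching the unfolding convention above."""
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def cp_als(T, L, n_iter=200, seed=0):
    """Rank-L Parafac decomposition of a 3-way tensor via ALS (Eq. 8).
    Returns weights lambda and factor matrices A, B, C with unit-norm columns."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((T.shape[n], L)) for n in range(3))
    for _ in range(n_iter):
        # Each step solves a linear least-squares problem for one factor.
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    nA, nB, nC = (np.linalg.norm(F, axis=0) for F in (A, B, C))
    return nA * nB * nC, A / nA, B / nB, C / nC
```

For organ shapes, T would be the 3 × M × N training tensor; libraries such as TensorLy provide production-quality implementations of both Parafac and Tucker.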
3) Tucker Decomposition: In contrast with Parafac, which decomposes a tensor into rank-one tensors, the Tucker decomposition is a form of higher-order principal component analysis that decomposes a tensor into a core tensor multiplied by a matrix along each mode [5]. Given a tensor X ∈ R^{I×J×K}, the Tucker decomposition is given by

    X \approx G \times_1 A \times_2 B \times_3 C    (10)
      = \sum_{p=1}^{P} \sum_{q=1}^{Q} \sum_{r=1}^{R} g_{pqr} \, a_p \circ b_q \circ c_r    (11)

where G ∈ R^{P×Q×R} is called the core tensor, A ∈ R^{I×P}, B ∈ R^{J×Q} and C ∈ R^{K×R}, a_p ∈ R^{I×1} is the p-th column of A, b_q ∈ R^{J×1} is the q-th column of B, c_r ∈ R^{K×1} is the r-th column of C, and \times_n is the n-mode matrix product operator for multiplying a tensor by a matrix in mode n [5]. ALS can also be used to find the Tucker decomposition.

Let V_{(3)} be the matrix formed by mode-n matricizing [5] the tensor G \times_1 A \times_2 B with respect to the third mode. Based on the linear projection idea above [9], given a shape vector x, we calculate its reconstruction \tilde{x} = \sum_{l=1}^{L} c_l v_l, where v_l is the l-th column of V_{(3)}, to approximate it by solving

    \min_{\tilde{x}} ||x - \tilde{x}||    (12)

[Fig. 1: The 3D triangular meshes of the different organs used in the experiments: (a) livers, (b) left kidneys, (c) right kidneys, (d) spleens.]

III. EXPERIMENTAL RESULTS AND DISCUSSIONS

We have 3D mesh models of 20 livers, 17 left kidneys, 15 right kidneys, and 18 spleens, as shown in Figure 1. All these 3D triangular meshes were constructed from CT scans of different patients, and the 3D point correspondence problems among the different mesh models of each organ have been solved². All mesh models of the same organ have the same number of vertices (2563) and the same number of faces (5120), and all vertices are used as landmarks to represent shapes.

[Figure: plots of Euclidean distance vs. number of components in use, comparing PCA, 2DPCA, Parafac and Tucker.]
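To make this kind of comparison concrete, the following sketch measures reconstruction error (mean per-landmark Euclidean distance) against the number of components for PCA and 2DPCA. The data here are synthetic stand-ins, and the paper's exact evaluation protocol (e.g., its train/test split) is not shown in this excerpt:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 20, 50                            # synthetic stand-ins for N shapes, M landmarks
train = rng.standard_normal((N, 3, M))   # hypothetical aligned landmark matrices
test = train.mean(axis=0) + 0.1 * rng.standard_normal((3, M))  # held-out shape

def pca_err(L):
    """Mean per-landmark Euclidean distance after rank-L PCA reconstruction."""
    X = train.reshape(N, -1)
    x_bar = X.mean(axis=0)
    D = X - x_bar
    w, V = np.linalg.eigh(D.T @ D)
    B = V[:, np.argsort(w)[::-1][:L]]
    x = test.ravel()
    x_rec = x_bar + B @ (B.T @ (x - x_bar))
    return np.linalg.norm((x - x_rec).reshape(3, M), axis=0).mean()

def twodpca_err(L):
    """Mean per-landmark Euclidean distance after rank-L 2DPCA reconstruction."""
    X_bar = train.mean(axis=0)
    D = train - X_bar
    G = np.einsum('nim,nik->mk', D, D)   # image scatter matrix
    w, V = np.linalg.eigh(G)
    B = V[:, np.argsort(w)[::-1][:L]]
    X_rec = X_bar + (test - X_bar) @ B @ B.T
    return np.linalg.norm(test - X_rec, axis=0).mean()

for L in (5, 10, 15):
    print(L, round(pca_err(L), 3), round(twodpca_err(L), 3))
```

For both methods the error is non-increasing in L, which is the shape of the curves the paper's figures report.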
