Principal Components Analysis

NCSS Statistical Software (NCSS.com), Chapter 425

Introduction

Principal Components Analysis, or PCA, is a data analysis tool that is usually used to reduce the dimensionality (number of variables) of a large number of interrelated variables, while retaining as much of the information (variation) as possible. PCA calculates an uncorrelated set of variables (components or PCs). These components are ordered so that the first few retain most of the variation present in all of the original variables. Unlike its cousin Factor Analysis, PCA always yields the same solution from the same data (apart from arbitrary differences in sign).

The computations of PCA reduce to an eigenvalue-eigenvector problem. NCSS uses a double-precision version of the modern QL algorithm, as described by Press (1986), to solve the eigenvalue-eigenvector problem.

Note that PCA is a data-analytical, rather than statistical, procedure. Hence, you will not find many t-tests or F-tests in PCA. Instead, you will make subjective judgments.

This NCSS program performs a PCA on either a correlation or a covariance matrix. Missing values may be dealt with using one of three methods. The analysis may be carried out using robust estimation techniques.

Chapters on PCA are contained in books dealing with multivariate statistical analysis. Books devoted solely to PCA include Dunteman (1989), Jolliffe (1986), Flury (1988), and Jackson (1991).

Technical Details

Mathematical Development

This section documents the basic formulas used by NCSS in performing a principal components analysis. We begin with an adjusted data matrix, X, which consists of n observations (rows) on p variables (columns). The adjustment is made by subtracting the variable's mean from each value; that is, the mean of each variable is subtracted from all of that variable's values. This adjustment is made because PCA deals with the covariances among the original variables, so the means are irrelevant.
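The mean adjustment can be sketched in a few lines. This is an illustrative example (not NCSS code), using only plain Python:

```python
# Illustrative sketch: mean-adjust a data matrix X, where rows are
# observations and columns are variables, as described above.

def mean_center(X):
    """Subtract each column's mean from every value in that column."""
    n = len(X)
    p = len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    return [[row[j] - means[j] for j in range(p)] for row in X]

X = [[1.0, 10.0],
     [2.0, 20.0],
     [3.0, 30.0]]
Xc = mean_center(X)
# Each column of Xc now sums to zero.
```

After this adjustment, each column has mean zero, so the covariances computed from Xc are unaffected by the original variable means.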
New variables are constructed as weighted averages of the original variables. These new variables are called the principal components, latent variables, or factors. Their specific values on a specific row are referred to as the factor scores, the component scores, or simply the scores. The matrix of scores will be referred to as the matrix Y.

The basic equation of PCA is, in matrix notation, given by:

$$Y = W'X$$

where W is a matrix of coefficients that is determined by PCA. This matrix is provided in NCSS in the Score Coefficients report. For those not familiar with matrix notation, this equation may be thought of as a set of p linear equations that form the components out of the original variables. These equations are also written as:

$$y_{ij} = w_{1i}x_{1j} + w_{2i}x_{2j} + \cdots + w_{pi}x_{pj}$$

As you can see, each component is a weighted average of the original variables. The weights, W, are constructed so that the variance of y1, Var(y1), is maximized; next, so that Var(y2) is maximized subject to the correlation between y1 and y2 being zero. The remaining yi's are calculated so that their variances are maximized, subject to the constraint that the covariance between yi and yj, for all i and j (i not equal to j), is zero.

The matrix of weights, W, is calculated from the variance-covariance matrix, S. This matrix is calculated using the formula:

$$s_{ij} = \frac{\sum_{k=1}^{n}(x_{ik} - \bar{x}_i)(x_{jk} - \bar{x}_j)}{n-1}$$

Later, we will discuss how this equation may be modified both to be robust to outliers and to deal with missing values. The singular value decomposition of S provides the solution to the PCA problem. This may be defined as:

$$U'SU = L$$

where L is a diagonal matrix of the eigenvalues of S, and U is the matrix of eigenvectors of S.
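The development above, together with the component scaling and loading formulas given just below in this section, can be sketched with NumPy. Using NumPy's eigensolver is our assumption for illustration; NCSS itself uses the QL algorithm of Press (1986):

```python
# Sketch (not NCSS code): covariance matrix, eigendecomposition U'SU = L,
# the scaling W = U L^(-1/2), scores Y, and component loadings.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))      # n = 200 cases (rows), p = 3 variables
Xc = X - X.mean(axis=0)            # mean-adjusted data matrix

S = Xc.T @ Xc / (len(Xc) - 1)      # s_ij = sum_k (x_ik - xbar_i)(x_jk - xbar_j)/(n-1)
l, U = np.linalg.eigh(S)           # eigenvalues l and eigenvectors U of S

W = U / np.sqrt(l)                 # W = U L^(-1/2): scale each eigenvector column
Y = Xc @ W                         # component scores, Y = W'X written case-by-case

# Component loadings: r_ij = u_ji * sqrt(l_i) / sqrt(s_jj)
loadings = (U * np.sqrt(l)) / np.sqrt(np.diag(S))[:, None]
```

With this scaling, `np.cov(Y, rowvar=False)` is the identity matrix up to rounding, confirming that the components are uncorrelated with unit variances, and `loadings` reproduces the correlations between each original variable and each component.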
W is calculated from L and U using the relationship:

$$W = UL^{-1/2}$$

It is interesting to note that W is simply the eigenvector matrix U, scaled so that the variance of each component, yi, is one.

The correlation between the ith component and the jth original variable may be computed using the formula:

$$r_{ij} = \frac{u_{ji}\sqrt{l_i}}{\sqrt{s_{jj}}}$$

Here u_ji is an element of U, l_i is a diagonal element of L, and s_jj is a diagonal element of S. These correlations are called the component loadings and are provided in the Component Loadings report.

When the correlation matrix, R, is used instead of the covariance matrix, S, the equation for Y must be modified. The new equation is:

$$Y = W'D^{-1/2}X$$

where D is a diagonal matrix made up of the diagonal elements of S. In this case, the correlation formula may be simplified, since the s_jj are equal to one.

Missing Values

Missing values may be dealt with by ignoring rows with missing values, estimating the missing value with the variable's average, or estimating the missing value by regressing it on variables whose values are not missing. These will now be described in detail. Most of this information comes from Jackson (1991) and Little (1987).

When estimating statistics from data sets with missing values, you should first consider the mechanism that created the missing values. This mechanism determines whether your method of dealing with the missing values is appropriate. The worst case arises when the probability of obtaining a missing value depends on one or more variables in your study. For example, suppose one of your variables was a person's income level. You might suspect that the higher a person's income, the less likely he is to reveal it to you. When the probability of obtaining a missing value is dependent on one or more other variables, serious biases can occur in your results.
A complete discussion of missing-value mechanisms is given in Little (1987). NCSS provides three methods of dealing with missing values. In all three cases, the overall strategy is to deal with the missing values while estimating the covariance matrix, S. Hence, the rest of this section considers the estimation of S.

Complete-Case Missing-Value Analysis

One method of dealing with missing values is to remove all cases (observations or rows) that contain missing values from the analysis. The analysis is then performed only on those cases that are "complete." The advantages of this approach are speed (since no iteration is required), comparability (since univariate statistics, such as the mean, calculated on individual variables will be equal to the results of the multivariate calculations), and simplicity (since the method is easy to explain).

Disadvantages of this approach are inefficiency and bias. The method is inefficient because as the number of missing values increases, the number of discarded cases also increases. In the extreme case, suppose a data set has 100 variables and 200 cases. Suppose one value is missing at random in 80 cases, so these cases are deleted from the study. Hence, of the 20,000 values in the study, 80 values, or 0.4%, were missing. Yet this method has us omit 8,000 values, or 40%, even though 7,920 of those values were actually available. This is similar to the saying that one rotten apple ruins the whole barrel.

A certain amount of bias may occur if the pattern of missing values is related to at least one of the variables in the study. This could lead to gross distortions if that variable were correlated with several other variables. One method of determining whether the complete-case methodology is causing bias is to compare the means of each variable calculated from only complete cases with the corresponding means of each variable calculated from cases that were dropped but had this variable present.
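One way to sketch such a mean comparison is with Welch's t statistic. This is a hypothetical illustration of the idea (not an NCSS procedure), with made-up values:

```python
# Illustrative sketch: compare the mean of one variable between complete
# cases and dropped-but-present cases, as a rough check for bias from
# the missing-value pattern.
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

complete = [5.1, 4.8, 5.3, 5.0, 4.9]   # hypothetical values, complete cases
dropped  = [7.2, 6.9, 7.4, 7.1]        # same variable, dropped cases
t = welch_t(complete, dropped)
# A large |t| suggests the missingness pattern is related to this variable.
```

Here the two groups have clearly different means, so |t| is large, which is the kind of signal the comparison is meant to expose.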
This comparison could be run using a statistic like the t-test, although we would also be interested in comparing the variances, which the t-test does not do. Significant differences would indicate the presence of a strong bias introduced by the pattern of missing values.

A modification of the complete-case method is the pairwise available-case method, in which covariances are calculated one at a time from all cases that are complete for those two variables. This method is not available in this program for three reasons: the univariate statistics change from pair to pair, causing serious numeric problems (such as correlations greater than one); the resulting covariance matrix may not be positive semi-definite; and the method is dominated by other methods that are available in this program.

Filling in Missing Values with Averages

A growing number of programs offer the ability to fill in (or impute) the missing values. The naive choice is to fill in with the variable average. NCSS offers this option, implemented iteratively. During the first iteration, no imputation occurs. On the second, third, and additional iterations, each missing value is estimated using the mean of that variable from the previous iteration. Hence, at the end of each iteration, a new set of means is available for imputation during the next iteration. The process continues until it converges.

The advantages of this method are greater efficiency (since it takes advantage of the cases in which missing values occur) and speed (since it is much faster than the EM algorithm presented next). The disadvantages are bias (it consistently underestimates the variances and covariances), unreliability (simulation studies have shown it to be unreliable in some cases), and domination (it is dominated by the EM algorithm, which does much better, although that method requires more computation).
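The iterative mean-fill scheme just described can be sketched as follows. This is an illustration of the idea, not NCSS's implementation; `None` marks a missing cell:

```python
# Illustrative sketch: iterative mean imputation. On the first pass no
# imputation occurs (means use observed values only); on each later pass,
# missing cells are refilled with the previous iteration's column means.

def mean_impute(X, n_iter=10):
    X = [row[:] for row in X]               # work on a copy
    n, p = len(X), len(X[0])
    missing = [(r, c) for r in range(n) for c in range(p) if X[r][c] is None]
    # First iteration: column means from observed values only.
    means = []
    for c in range(p):
        obs = [row[c] for row in X if row[c] is not None]
        means.append(sum(obs) / len(obs))
    for _ in range(n_iter):
        for r, c in missing:
            X[r][c] = means[c]              # impute with last iteration's means
        # New means, now including the imputed values, for the next pass.
        means = [sum(row[c] for row in X) / n for c in range(p)]
    return X

X = [[1.0, 2.0], [3.0, None], [5.0, 6.0]]
filled = mean_impute(X)
```

In this small example the fill converges immediately to the mean of the observed values in that column (4.0), and one can see why the method understates variance: imputed cells sit exactly at the mean, contributing nothing to the spread.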
