Principal Components Analysis (PCA)
Recommended publications
- A Generalized Linear Model for Principal Component Analysis of Binary Data
Andrew I. Schein, Lawrence K. Saul, Lyle H. Ungar. Department of Computer and Information Science, University of Pennsylvania, Moore School Building, 200 South 33rd Street, Philadelphia, PA 19104-6389. {ais,lsaul,ungar}@cis.upenn.edu

Abstract: We investigate a generalized linear model for dimensionality reduction of binary data. The model is related to principal component analysis (PCA) in the same way that logistic regression is related to linear regression. Thus we refer to the model as logistic PCA. In this paper, we derive an alternating least squares method to estimate the basis vectors and generalized linear coefficients of the logistic PCA model. The resulting updates have a simple closed form and are guaranteed at each iteration to improve the model's likelihood. We evaluate the performance of logistic PCA as…

From the introduction: …they are not generally appropriate for other data types. Recently, Collins et al. [5] derived generalized criteria for dimensionality reduction by appealing to properties of distributions in the exponential family. In their framework, the conventional PCA of real-valued data emerges naturally from assuming a Gaussian distribution over a set of observations, while generalized versions of PCA for binary and nonnegative data emerge respectively by substituting the Bernoulli and Poisson distributions for the Gaussian. For binary data, the generalized model's relationship to PCA is analogous to the relationship between logistic and linear regression [12]. In particular, the model exploits the log-odds as the natural parameter of the Bernoulli distribution and the logistic function as its canonical link.
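The excerpt describes the model through the Bernoulli natural parameter (the log-odds) and the logistic canonical link. Below is a minimal numpy sketch of that likelihood; the paper's closed-form alternating least-squares updates are not reproduced here, so the gradient step is only an illustrative stand-in, and all names are invented for the example.

```python
import numpy as np

def sigmoid(theta):
    return 1.0 / (1.0 + np.exp(-theta))

def logistic_pca_loglik(X, U, V):
    """Bernoulli log-likelihood of binary data X (n x d) whose log-odds are
    factored as Theta = U V^T, with scores U (n x k) and basis vectors V (d x k)."""
    theta = U @ V.T
    # log p(x | theta) = x * theta - log(1 + e^theta), summed over all entries
    return np.sum(X * theta - np.logaddexp(0.0, theta))

def gradient_step_U(X, U, V, lr=0.1):
    """One ascent step on the scores (illustrative only; the paper instead
    derives closed-form alternating least-squares updates)."""
    return U + lr * (X - sigmoid(U @ V.T)) @ V
```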
- Choosing Principal Components: A New Graphical Method Based on Bayesian Model Selection
Philipp Auer (University of Ulm) and Daniel Gervini (University of Wisconsin–Milwaukee). October 31, 2007. Corresponding author: Daniel Gervini. Email: [email protected]. Address: P.O. Box 413, Milwaukee, WI 53201, USA.

Abstract: This article approaches the problem of selecting significant principal components from a Bayesian model selection perspective. The resulting Bayes rule provides a simple graphical technique that can be used instead of (or together with) the popular scree plot to determine the number of significant components to retain. We study the theoretical properties of the new method and show, by examples and simulation, that it provides more clear-cut answers than the scree plot in many interesting situations.

Key Words: Dimension reduction; scree plot; factor analysis; singular value decomposition. MSC: Primary 62H25; secondary 62-09.

1 Introduction. Multivariate datasets usually contain redundant information, due to high correlations among the variables. Sometimes they also contain irrelevant information; that is, sources of variability that can be considered random noise for practical purposes. To obtain uncorrelated, significant variables, several methods have been proposed over the years. Undoubtedly, the most popular is Principal Component Analysis (PCA), introduced by Hotelling (1933); for a comprehensive account of the methodology and its applications, see Jolliffe (2002). In many practical situations the first few components accumulate most of the variability, so it is common practice to discard the remaining components, thus reducing the dimensionality of the data. This has many advantages both from an inferential and from a descriptive point of view, since the leading components are often interpretable indices within the context of the problem and they are statistically more stable than subsequent components, which tend to be more unstructured and harder to estimate.
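The Bayes rule itself is not given in this excerpt, so the sketch below only reproduces the display it is meant to complement: a scree plot of the sample covariance eigenvalues, computed via the singular value decomposition named in the key words.

```python
import numpy as np
import matplotlib.pyplot as plt

def scree_plot(X):
    """Scree plot of the sample covariance eigenvalues of data X (n x p)."""
    Xc = X - X.mean(axis=0)
    eigvals = np.linalg.svd(Xc, compute_uv=False) ** 2 / (len(X) - 1)
    plt.plot(np.arange(1, eigvals.size + 1), eigvals, "o-")
    plt.xlabel("component"); plt.ylabel("eigenvalue"); plt.title("Scree plot")
    plt.show()
```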
- Ph 21.5: Covariance and Principal Component Analysis (PCA)
Ph 21.5: Covariance and Principal Component Analysis (PCA), v20150527.

Introduction. Suppose we make a measurement for which each data sample consists of two measured quantities. A simple example would be temperature ($T$) and pressure ($P$) taken at time ($t$) at constant volume ($V$). The data set is $\{T_i, P_i \mid t_i\}_N$, which represents a set of $N$ measurements. We wish to make sense of the data and determine the dependence of, say, $P$ on $T$. Suppose $P$ and $T$ were for some reason independent of each other; then the two variables would be uncorrelated. (Of course we are well aware that $P$ and $T$ are correlated, and we know the ideal gas law: $PV = nRT$.) How might we infer the correlation from the data?

The tool for quantifying correlations between random variables is the covariance. For two real-valued random variables $(X, Y)$, the covariance is defined as (under certain rather non-restrictive assumptions):

$$\mathrm{Cov}(X, Y) \equiv \sigma^2_{XY} \equiv \langle (X - \langle X \rangle)(Y - \langle Y \rangle) \rangle$$

where $\langle \cdots \rangle$ denotes the expectation (average) value of the quantity in brackets. For the case of $P$ and $T$, we have

$$\mathrm{Cov}(P, T) = \langle (P - \langle P \rangle)(T - \langle T \rangle) \rangle = \langle P T \rangle - \langle P \rangle \langle T \rangle = \frac{1}{N}\sum_{i=0}^{N-1} P_i T_i - \left(\frac{1}{N}\sum_{i=0}^{N-1} P_i\right)\left(\frac{1}{N}\sum_{i=0}^{N-1} T_i\right)$$

The extension of this to real-valued random vectors $(\vec{X}, \vec{Y})$ is straightforward:

$$\mathrm{Cov}(\vec{X}, \vec{Y}) \equiv \sigma^2_{\vec{X}\vec{Y}} \equiv \left\langle (\vec{X} - \langle \vec{X} \rangle)(\vec{Y} - \langle \vec{Y} \rangle)^T \right\rangle$$

This is a matrix, resulting from the product of one vector and the transpose of another vector, where $\vec{X}^T$ denotes the transpose of $\vec{X}$.
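A quick numerical check of the identity $\mathrm{Cov}(P,T) = \langle PT \rangle - \langle P \rangle \langle T \rangle$; the gas-like numbers below are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
T = rng.normal(300.0, 10.0, size=N)            # hypothetical temperatures (K)
P = 0.37 * T + rng.normal(0.0, 1.0, size=N)    # pressures correlated with T

cov_PT = np.mean(P * T) - np.mean(P) * np.mean(T)   # <PT> - <P><T>
print(cov_PT)
print(np.cov(P, T, bias=True)[0, 1])                # same value, 1/N convention
```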
- 6 Probability Density Functions (PDFs)
CSC 411 / CSC D11 / CSC C11: Probability Density Functions (PDFs).

In many cases, we wish to handle data that can be represented as a real-valued random variable, or a real-valued vector $\mathbf{x} = [x_1, x_2, \ldots, x_n]^T$. Most of the intuitions from discrete variables transfer directly to the continuous case, although there are some subtleties. We describe the probabilities of a real-valued scalar variable $x$ with a Probability Density Function (PDF), written $p(x)$. Any real-valued function $p(x)$ that satisfies:

$$p(x) \ge 0 \quad \text{for all } x \qquad (1)$$
$$\int_{-\infty}^{\infty} p(x)\,dx = 1 \qquad (2)$$

is a valid PDF. I will use the convention of upper-case $P$ for discrete probabilities, and lower-case $p$ for PDFs. With the PDF we can specify the probability that the random variable $x$ falls within a given range:

$$P(x_0 \le x \le x_1) = \int_{x_0}^{x_1} p(x)\,dx \qquad (3)$$

This can be visualized by plotting the curve $p(x)$. Then, to determine the probability that $x$ falls within a range, we compute the area under the curve for that range. The PDF can be thought of as the infinite limit of a discrete distribution, i.e., a discrete distribution with an infinite number of possible outcomes. Specifically, suppose we create a discrete distribution with $N$ possible outcomes, each corresponding to a range on the real number line. Then, suppose we increase $N$ towards infinity, so that each outcome shrinks to a single real number; a PDF is defined as the limiting case of this discrete distribution.
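A short numerical check of properties (2) and (3) for a concrete PDF; the standard normal is an assumed example, not one from the notes.

```python
import numpy as np
from scipy import integrate, stats

pdf = stats.norm(loc=0.0, scale=1.0).pdf

total, _ = integrate.quad(pdf, -np.inf, np.inf)   # property (2): integrates to 1
prob, _ = integrate.quad(pdf, -1.0, 1.0)          # equation (3): P(-1 <= x <= 1)
print(total)   # ~1.0
print(prob)    # ~0.683
```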
- Exploratory and Confirmatory Factor Analysis (Part 1: Overview, PCA and Biplots)
Michael Friendly, Psychology 6140.

Course Outline:
1. Principal components analysis: FA vs. PCA; least squares fit to a data matrix; biplots.
2. Basic ideas of factor analysis: parsimony (common variance → small number of factors); linear regression on common factors; partial linear independence; common vs. unique variance.
3. The common factor model: factoring methods (principal factors, unweighted least squares, maximum likelihood); factor rotation. [Path diagram: a common factor ξ with loadings λ1, λ2 on observed variables X1, X2 and unique terms z1, z2.]
4. Confirmatory factor analysis: development of CFA models; applications of CFA.

Part 1: Outline:
1. PCA and Factor Analysis: Overview & Goals (why do factor analysis?; two modes of factor analysis; brief history of factor analysis).
2. Principal components analysis (artificial PCA example).
3. PCA: details.
4. PCA: example.
5. Biplots (low-D views based on PCA).

Why do "Factor Analysis"?
- Data reduction: replace a large number of variables with a smaller number which reflect most of the original data [PCA rather than FA]. Example: in a study of the reactions of cancer patients to radiotherapy, measurements were made on 10 different reaction variables. Because it was difficult to interpret all 10 variables together, PCA was used to find simpler measure(s) of patient response to treatment that contained most of the information in the data.
- Test and scale construction: develop tests and scales which are "pure" measures of some construct. Example: in developing a test of English as a Second Language, investigators calculate correlations among the item scores, and use FA to construct subscales.
- A Resampling Test for Principal Component Analysis of Genotype-by-Environment Interaction
Johannes Forkman.

Abstract. In crop science, genotype-by-environment interaction is often explored using the "genotype main effects and genotype-by-environment interaction effects" (GGE) model. Using this model, a singular value decomposition is performed on the matrix of residuals from a fit of a linear model with main effects of environments. Provided that errors are independent, normally distributed and homoscedastic, the significance of the multiplicative terms of the GGE model can be tested using resampling methods. The GGE method is closely related to principal component analysis (PCA). The present paper describes i) the GGE model, ii) the simple parametric bootstrap method for testing multiplicative genotype-by-environment interaction terms, and iii) how this resampling method can also be used for testing principal components in PCA.

1. Introduction. Forkman and Piepho (2014) proposed a resampling method for testing interaction terms in models for analysis of genotype-by-environment data. The "genotype main effects and genotype-by-environment interaction effects" (GGE) analysis (Yan et al. 2000; Yan and Kang, 2002) is closely related to principal component analysis (PCA). For this reason, the method proposed by Forkman and Piepho (2014), which is called the "simple parametric bootstrap method", can be used for testing principal components in PCA as well. The proposed resampling method is parametric in the sense that it assumes homoscedastic and normally distributed observations. The method is "simple", because it only involves repeated sampling of standard normal distributed values. Specifically, no parameters need to be estimated.
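Because the method "only involves repeated sampling of standard normal distributed values", a toy version is easy to sketch. The test statistic and reference distribution below (share of variance carried by the leading singular value, compared against pure-noise matrices) are assumptions chosen for illustration, not the exact procedure of Forkman and Piepho (2014).

```python
import numpy as np

def first_pc_pvalue(E, n_boot=2000, seed=1):
    """Toy parametric-bootstrap p-value for the first multiplicative term of a
    residual matrix E (n x p): compare the observed leading singular-value
    share against the same share for N(0,1) noise matrices of the same shape."""
    rng = np.random.default_rng(seed)
    s_obs = np.linalg.svd(E, compute_uv=False)
    stat_obs = s_obs[0] ** 2 / np.sum(s_obs ** 2)
    stats = []
    for _ in range(n_boot):
        s = np.linalg.svd(rng.standard_normal(E.shape), compute_uv=False)
        stats.append(s[0] ** 2 / np.sum(s ** 2))
    return np.mean(np.array(stats) >= stat_obs)
```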
- GGL Biplot Analysis of Durum Wheat (Triticum turgidum spp. durum) Yield in Multi-Environment Trials
Bulgarian Journal of Agricultural Science, 19 (No 4) 2013, 756-765. Agricultural Academy. N. Sabaghnia 2*, R. Karimizadeh 1 and M. Mohammadi 1. 1 Department of Agronomy and Plant Breeding, Faculty of Agriculture, University of Maragheh, Maragheh, Iran; 2 Dryland Agricultural Research Institute (DARI), Gachsaran, Iran.

Abstract. Durum wheat (Triticum turgidum spp. durum) breeders have to identify new genotypes whose grain yield responds well to environmental changes. Matching durum wheat genotype selection with its production environment is challenged by the occurrence of significant genotype-by-environment (GE) interaction in multi-environment trials (MET). This investigation was conducted to evaluate 20 durum wheat genotypes for their stability, grown in five different locations across three years using a randomized complete block design with 4 replications. According to the combined analysis of variance, the main effects of genotypes, locations and years were significant, as well as the interaction effects. The first two principal components of the site regression model accounted for 60.3% of the total variation. The polygon view of the genotype plus genotype-by-location (GGL) biplot indicated that there were three winning genotypes in three mega-environments for durum wheat in rain-fed conditions. Genotype G14 was the most favorable genotype for location Gachsaran; the most favorable genotype of mega-environment Kouhdasht and Ilam was G12, while G10 was the most favorable genotype for mega-environment Gonbad and Moghan.

Citation: Sabaghnia, N., R. Karimizadeh and M. Mohammadi, 2013. GGL biplot analysis of durum wheat (Triticum turgidum spp. durum) yield in multi-environment trials. Bulg. J. Agric. Sci., 19: 756-765.
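A sketch of how such a biplot is typically constructed: remove location (environment) main effects, take the SVD of the residuals, and plot genotypes and locations on the first two components. The symmetric singular-value scaling is an assumed choice; the paper's exact scaling is not stated in the excerpt.

```python
import numpy as np
import matplotlib.pyplot as plt

def ggl_biplot(Y, genotypes, locations):
    """GGE/GGL-style biplot of a yield matrix Y (genotypes x locations)."""
    E = Y - Y.mean(axis=0, keepdims=True)     # subtract location main effects
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    G = U[:, :2] * np.sqrt(s[:2])             # genotype scores
    L = Vt[:2].T * np.sqrt(s[:2])             # location scores
    plt.scatter(G[:, 0], G[:, 1])
    for name, (x, y) in zip(genotypes, G):
        plt.annotate(name, (x, y))
    for name, (x, y) in zip(locations, L):
        plt.arrow(0.0, 0.0, x, y)
        plt.annotate(name, (x, y))
    plt.xlabel("PC1"); plt.ylabel("PC2"); plt.show()
```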
- Covariance of Cross-Correlations: Towards Efficient Measures for Large-Scale Structure
Mon. Not. R. Astron. Soc. 400, 851–865 (2009). doi:10.1111/j.1365-2966.2009.15490.x. Robert E. Smith, Institute for Theoretical Physics, University of Zurich, Zurich CH 8037, Switzerland. Accepted 2009 August 4. Received 2009 July 17; in original form 2009 June 13.

ABSTRACT. We study the covariance of the cross-power spectrum of different tracers for the large-scale structure. We develop the counts-in-cells framework for the multitracer approach, and use this to derive expressions for the full non-Gaussian covariance matrix. We show that for the usual autopower statistic, besides the off-diagonal covariance generated through gravitational mode-coupling, the discreteness of the tracers and their associated sampling distribution can generate strong off-diagonal covariance, and that this becomes the dominant source of covariance as spatial frequencies become larger than the fundamental mode of the survey volume. On comparison with the derived expressions for the cross-power covariance, we show that the off-diagonal terms can be suppressed if one cross-correlates a high tracer-density sample with a low one. Taking the effective estimator efficiency to be proportional to the signal-to-noise ratio (S/N), we show that, to probe clustering as a function of physical properties of the sample, i.e. cluster mass or galaxy luminosity, the cross-power approach can outperform the autopower one by factors of a few.
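As a much-simplified illustration of the cross-power statistic the abstract studies, here is a naive 1-D cross-power spectrum estimator for two gridded tracer overdensity fields. The normalization convention and field setup are assumptions, and none of the paper's covariance machinery is reproduced.

```python
import numpy as np

def cross_power_1d(delta_a, delta_b, box_size):
    """Naive 1-D cross-power estimate P_ab(k) for two overdensity fields
    sampled on the same n-point grid of physical length box_size."""
    n = delta_a.size
    fa = np.fft.rfft(delta_a)
    fb = np.fft.rfft(delta_b)
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    p_ab = (fa * np.conj(fb)).real * box_size / n**2   # one common convention
    return k, p_ab
```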
- A Biplot-Based PCA Approach to Study the Relations Between Indoor and Outdoor Air Pollutants Using Case Study Buildings
Buildings (MDPI), Article. He Zhang * and Ravi Srinivasan. UrbSys (Urban Building Energy, Sensing, Controls, Big Data Analysis, and Visualization) Laboratory, M.E. Rinker, Sr. School of Construction Management, University of Florida, Gainesville, FL 32603, USA; sravi@ufl.edu. * Correspondence: rupta00@ufl.edu

Abstract: The 24 h and 14-day relationships between indoor and outdoor PM2.5, PM10, NO2, relative humidity, and temperature were assessed for an elementary school (site 1), a laboratory (site 2), and a residential unit (site 3) in Gainesville, Florida. The primary aim of this study was to introduce a biplot-based PCA approach to visualize and validate the correlation among indoor and outdoor air quality data. The Spearman coefficients showed a stronger correlation among these target environmental measurements at site 1 and site 2, and a weaker correlation at site 3. The biplot-based PCA regression showed higher dependency for site 1 and site 2 (p < 0.001) when compared to the correlation values, and lower dependency for site 3. The results displayed a mismatch between the biplot-based PCA and the correlation analysis for site 3. The method utilized in this paper can be applied in studies that analyze high volumes of multiple building environmental measurements along with optimized visualization.

Keywords: air pollution; indoor air quality; principal component analysis; biplot

Citation: Zhang, H.; Srinivasan, R. A Biplot-Based PCA Approach to Study the Relations between Indoor and Outdoor Air Pollutants Using Case Study Buildings. Buildings 2021, 11.

1. Introduction. The 2020 Global Health Observatory (GHO) statistics show that indoor and outdoor…
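A compact sketch of the two analysis steps the abstract pairs together: Spearman correlation among all measurements, and a PCA whose first two components supply the biplot axes. The column names and synthetic data are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

cols = ["PM2.5_in", "PM2.5_out", "PM10_in", "PM10_out", "NO2_in", "NO2_out"]
rng = np.random.default_rng(7)
df = pd.DataFrame(rng.lognormal(2.0, 0.4, size=(24, len(cols))), columns=cols)

rho, pval = spearmanr(df)   # Spearman correlations among all measurements
print(pd.DataFrame(rho, index=cols, columns=cols).round(2))

# PCA on standardized data; PC1-PC2 scores and loadings form the biplot.
Z = ((df - df.mean()) / df.std(ddof=0)).to_numpy()
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores, loadings = U[:, :2] * s[:2], Vt[:2].T
```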
- Principal Component Analysis (PCA) as a Statistical Tool for Identifying Key Indicators of Nuclear Power Plant Cable Insulation Degradation
Iowa State University Capstones, Theses and Dissertations: Graduate Theses and Dissertations, 2017. By Chamila C. De Silva. A thesis submitted to the graduate faculty in partial fulfillment of the requirements for the degree of Master of Science. Major: Materials Science and Engineering. Program of Study Committee: Nicola Bowler (Major Professor), Richard LeSar, Steve Martin.

Recommended Citation: De Silva, Chamila Chandima, "Principal component analysis (PCA) as a statistical tool for identifying key indicators of nuclear power plant cable insulation degradation" (2017). Graduate Theses and Dissertations. 16120. https://lib.dr.iastate.edu/etd/16120
- Lecture 4: Multivariate Normal Distribution and Multivariate CLT
We start with several simple observations. If $X = (x_1, \ldots, x_k)^T$ is a $k \times 1$ random vector then its expectation is $\mathbb{E}X = (\mathbb{E}x_1, \ldots, \mathbb{E}x_k)^T$ and its covariance matrix is

$$\mathrm{Cov}(X) = \mathbb{E}(X - \mathbb{E}X)(X - \mathbb{E}X)^T.$$

Notice that a covariance matrix is always symmetric, $\mathrm{Cov}(X)^T = \mathrm{Cov}(X)$, and nonnegative definite, i.e. for any $k \times 1$ vector $a$,

$$a^T \mathrm{Cov}(X)\, a = \mathbb{E}\, a^T (X - \mathbb{E}X)(X - \mathbb{E}X)^T a = \mathbb{E}\, |a^T (X - \mathbb{E}X)|^2 \ge 0.$$

We will often use that for any vector $X$ its squared length can be written as $|X|^2 = X^T X$. If we multiply a random $k \times 1$ vector $X$ by a $n \times k$ matrix $A$ then the covariance of $Y = AX$ is a $n \times n$ matrix

$$\mathrm{Cov}(Y) = \mathbb{E}\, A(X - \mathbb{E}X)(X - \mathbb{E}X)^T A^T = A\, \mathrm{Cov}(X)\, A^T.$$

Multivariate normal distribution. Let us consider a $k \times 1$ vector $g = (g_1, \ldots, g_k)^T$ of i.i.d. standard normal random variables. The covariance of $g$ is, obviously, a $k \times k$ identity matrix, $\mathrm{Cov}(g) = I$. Given a $n \times k$ matrix $A$, the covariance of $Ag$ is a $n \times n$ matrix

$$\Sigma := \mathrm{Cov}(Ag) = A I A^T = A A^T.$$

Definition. The distribution of a vector $Ag$ is called a (multivariate) normal distribution with covariance $\Sigma$ and is denoted $N(0, \Sigma)$. One can also shift this distribution: the distribution of $Ag + a$ is called a normal distribution with mean $a$ and covariance $\Sigma$ and is denoted $N(a, \Sigma)$. There is one potential problem with the above definition: we assume that the distribution depends only on the covariance matrix $\Sigma$ and does not depend on the construction, i.e.…
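The definition is constructive, so it doubles as a sampler: draw i.i.d. standard normals $g$ and multiply by $A$. A quick numerical confirmation that $\mathrm{Cov}(Ag) = AA^T$ (the specific matrix sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, m = 3, 2, 200_000

A = rng.normal(size=(n, k))       # any n x k matrix defines N(0, A A^T)
g = rng.standard_normal((k, m))   # i.i.d. standard normal columns
X = A @ g                         # m draws from N(0, Sigma)

print(A @ A.T)                    # Sigma
print(np.cov(X))                  # empirical covariance, close to Sigma
```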
- Exam-SRM-Sample-Questions.pdf
SOCIETY OF ACTUARIES. Exam SRM: Statistics for Risk Modeling. Exam SRM Sample Questions and Solutions.

These questions and solutions are representative of the types of questions that might be asked of candidates sitting for Exam SRM. These questions are intended to represent the depth of understanding required of candidates. The distribution of questions by topic is not intended to represent the distribution of questions on future exams.

Update history:
- April 2019: Question 2, answer III was changed. While the statement was true, it was not directly supported by the required readings.
- July 2019: Questions 29-32 were added.
- September 2019: Questions 33-44 were added.
- December 2019: Question 40 item I was modified.
- January 2020: Question 41 item I was modified.
- November 2020: Questions 6 and 14 were modified and Questions 45-48 were added.
- February 2021: Questions 49-53 were added.
- July 2021: Questions 54-55 were added.

Copyright 2021 by the Society of Actuaries.

QUESTIONS

1. You are given the following four pairs of observations: $x_1 = (-1, 0)$, $x_2 = (1, 1)$, $x_3 = (2, -1)$, and $x_4 = (5, 10)$. A hierarchical clustering algorithm is used with complete linkage and Euclidean distance. Calculate the intercluster dissimilarity between $\{x_1, x_2\}$ and $\{x_4\}$.
(A) 2.2  (B) 3.2  (C) 9.9  (D) 10.8  (E) 11.7

2. Determine which of the following statements is/are true.
I. The number of clusters must be pre-specified for both K-means and hierarchical clustering.
II. The K-means clustering algorithm is less sensitive to the presence of outliers than the hierarchical clustering algorithm.
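A quick check of sample question 1: with complete linkage, the intercluster dissimilarity is the largest pairwise Euclidean distance between points of the two clusters.

```python
import numpy as np

x1, x2, x4 = np.array([-1, 0]), np.array([1, 1]), np.array([5, 10])
d = [np.linalg.norm(x4 - x) for x in (x1, x2)]   # distances from {x4} to {x1, x2}
print(max(d))   # 11.66..., matching answer (E) 11.7
```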