CHAPTER 4 Exploratory Factor Analysis and Principal Components Analysis


Exploratory factor analysis (EFA) and principal components analysis (PCA) are both methods used to help investigators represent a large number of relationships among normally distributed or scale variables in a simpler (more parsimonious) way. Both approaches determine which items, out of a fairly large set, "hang together" as groups, that is, are answered most similarly by the participants. EFA also can help assess the level of construct (factorial) validity in a dataset regarding a measure purported to measure certain constructs. A related approach, confirmatory factor analysis, in which one tests very specific models of how variables are related to underlying constructs (conceptual variables), requires additional software and is beyond the scope of this book, so it is not discussed here.

The primary conceptual difference between exploratory factor analysis and principal components analysis is that in EFA one postulates a smaller set of unobserved (latent) variables or constructs underlying the variables actually observed or measured (this is commonly done to assess validity), whereas in PCA one is simply trying to mathematically derive a relatively small number of variables that convey as much of the information in the observed/measured variables as possible. In other words, EFA is directed at understanding the relations among variables by understanding the constructs that underlie them, whereas PCA is simply directed toward deriving fewer variables that provide the same information one would obtain from the larger set of variables.

There are a number of different ways of computing factors for factor analysis; in this chapter, we will use only one of these methods, principal axis factor analysis (PA). We selected this approach because it is highly similar mathematically to PCA. The primary computational difference between PCA and PA is that in the former the analysis typically is performed on an ordinary correlation matrix, complete with the correlations of each item or variable with itself. In contrast, in PA factor analysis the correlation matrix is modified such that the correlations of each item with itself are replaced with a "communality," a measure of that item's relation to all other items (usually a squared multiple correlation). Thus, with PCA the researcher is trying to reproduce all information (variance and covariance) associated with the set of variables, whereas PA factor analysis is directed at understanding only the covariation among variables.

Conditions for Exploratory Factor Analysis and Principal Components Analysis

There are two main conditions necessary for factor analysis and principal components analysis. The first is that there need to be relationships among the variables. The second is an adequate sample size: the larger the sample, especially in relation to the number of variables, the more reliable the resulting factors. Sample size is less crucial to the extent that the communalities of items with the other items are high, or at least relatively high and variable. Ordinary principal axis factor analysis should never be done if the number of items/variables is greater than the number of participants.
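Because the first condition is that the variables be interrelated, it can be worth inspecting the correlation matrix of the candidate items before factoring. The following syntax is a minimal sketch (not one of the chapter's numbered steps), using the math attitude items from hsbdataNew.sav that are analyzed in Problem 4.1:

* Rough check: inspect pairwise correlations among the math attitude items.
* Many near-zero correlations would argue against factoring these items.
CORRELATIONS
  /VARIABLES=item01 item02 item03 item04 item05 item06 item07
     item08 item09 item10 item11 item12 item13 item14
  /PRINT=TWOTAIL NOSIG
  /MISSING=PAIRWISE.

Requesting Coefficients and Determinant under Descriptives within the factor analysis itself (as done in Problem 4.1) prints this same correlation matrix, so this separate run is optional.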
Assumptions for Exploratory Factor Analysis and Principal Components Analysis

The methods of extracting factors and components that are used in this book do not make strong distributional assumptions; normality is important only to the extent that skewness or outliers affect the observed correlations or if significance tests are performed (which is rare for EFA and PCA). The normality of the distribution can be checked by computing the skewness value of each variable. Maximum likelihood estimation, which we will not cover, does require multivariate normality; the variables need to be normally distributed and the joint distribution of all the variables should be normal. Because both principal axis factor analysis and principal components analysis are based on correlations, independent sampling is required and the variables should be related to each other (in pairs) in a linear fashion. The assumption of linearity can be assessed with matrix scatterplots, as shown in Chapter 2. Finally, each of the variables should be correlated at a moderate level with some of the other variables. Factor analysis and principal components analysis seek to explain or reproduce the correlation matrix, which would not be a sensible thing to do if the correlations all hover around zero. Bartlett's test of sphericity addresses this assumption. However, if correlations are too high, this may cause problems with obtaining a mathematical solution to the factor analysis.

• Retrieve your data file: hsbdataNew.sav.

Problem 4.1: Factor Analysis on Math Attitude Variables

In Problem 4.1, we perform a principal axis factor analysis on the math attitude variables. Factor analysis is more appropriate than PCA when one has the belief that there are latent variables underlying the variables or items measured. In this example, we have beliefs about the constructs underlying the math attitude questions; we believe that there are three constructs: motivation, competence, and pleasure. Now, we want to see if the items that were written to index each of these constructs actually do "hang together"; that is, we wish to determine empirically whether participants' responses to the motivation questions are more similar to each other than to their responses to the competence items, and so on. Conducting factor analysis can assist us in validating the data: if the data do fit into the three constructs that we believe exist, then this gives us support for the construct validity of the math attitude measure in this sample. The analysis is considered exploratory factor analysis even though we have some ideas about the structure of the data, because our hypotheses regarding the model are not very specific; we do not have specific predictions about the size of the relation of each observed variable to each latent variable, etc. Moreover, we "allow" the factor analysis to find factors that best fit the data, even if this deviates from our original predictions.

4.1 Are there three constructs (motivation, competence, and pleasure) underlying the math attitude questions?

To answer this question, we will conduct a factor analysis using the principal axis factoring method and specify the number of factors to be three (because our conceptualization is that there are three math attitude scales or factors: motivation, competence, and pleasure).

• Analyze → Dimension Reduction → Factor… to get Fig. 4.1.
• Next, select the variables item01 through item14.
Do not include item04r or any of the other reversed items because we are including the unreversed versions of those same items.

Fig. 4.1. Factor analysis.

• Now click on Descriptives… to produce Fig. 4.2.
• Then click on the following: Initial solution and Univariate Descriptives (under Statistics); Coefficients, Determinant, and KMO and Bartlett's test of sphericity (under Correlation Matrix).
• Click on Continue to return to Fig. 4.1.

Fig. 4.2. Factor analysis: Descriptives.

• Next, click on Extraction… This will give you Fig. 4.3.
• Select Principal axis factoring from the Method pull-down menu.
• Uncheck Unrotated factor solution (under Display). We will examine this only in Problem 4.2. We also usually would check the Scree plot box; however, again, we will request and interpret the scree plot only in Problem 4.2.
• Click on Fixed number of factors under Extract, and type 3 in the box. This setting instructs the computer to extract three math attitude factors.
• Click on Continue to return to Fig. 4.1.

Fig. 4.3. Extraction method to produce principal axis factoring.

• Now click on Rotation… in Fig. 4.1, which will give you Fig. 4.4.
• Click on Varimax, then make sure Rotated solution is also checked. Varimax rotation creates a solution in which the factors are orthogonal (uncorrelated with one another), which can make results easier to interpret and to replicate with future samples. If you believe that the factors (latent concepts) are correlated, you could choose Direct Oblimin, which will provide an oblique solution allowing the factors to be correlated (a syntax sketch for this option appears at the end of this section).
• Click on Continue.

Fig. 4.4. Factor analysis: Rotation.

• Next, click on Options…, which will give you Fig. 4.5.
• Click on Sorted by size.
• Click on Suppress small coefficients and type .3 (point 3) in the Absolute Value below box (see Fig. 4.5). Suppressing small factor loadings makes the output easier to read.
• Click on Continue, then OK. Compare Output 4.1 with your output and syntax.

Fig. 4.5. Factor analysis: Options.

Output 4.1: Factor Analysis for Math Attitude Questions

FACTOR
  /VARIABLES item01 item02 item03 item04 item05 item06 item07 item08
     item09 item10 item11 item12 item13 item14
  /MISSING LISTWISE
  /ANALYSIS item01 item02 item03 item04 item05 item06 item07 item08
     item09 item10 item11 item12 item13 item14
  /PRINT UNIVARIATE INITIAL CORRELATION DET KMO EXTRACTION ROTATION
  /FORMAT SORT BLANK(.3)
  /CRITERIA FACTORS(3) ITERATE(25)
  /EXTRACTION PAF
  /CRITERIA ITERATE(25)
  /ROTATION VARIMAX
  /METHOD=CORRELATION.

Factor Analysis

Interpretation of Output 4.1

The factor analysis program generates a variety of tables depending on which options you have chosen. The first table includes Descriptive Statistics for each variable and the Analysis N, which in this case is 71 because several items have one or more participants missing. It is especially important to check the Analysis N when you have a small sample, scattered missing data, or one variable with lots of missing data. In the latter case, it may be wise to run the analysis without that variable.

Notes on the Correlation Matrix table (callouts in the output): the correlation matrix indicates how each question is associated (correlated) with each of the other questions; only part of the matrix is shown here. The Determinant, printed below the matrix, should be greater than .0001; if it is very close to zero, collinearity is too high, and if it is zero, no solution is possible.
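As a side note, the skewness and linearity checks described in the Assumptions section can be run with a few lines of syntax before (or alongside) the factor analysis. The following is a rough sketch, not one of the numbered steps; it assumes that item01 through item14 are adjacent in hsbdataNew.sav (if they are not, list the items individually):

* Skewness (and kurtosis) for each item; marked skewness can distort the correlations.
DESCRIPTIVES VARIABLES=item01 TO item14
  /STATISTICS=MEAN STDDEV SKEWNESS KURTOSIS.

* Matrix scatterplot for a few items to check that pairwise relations look roughly linear
* (see Chapter 2 for interpretation of matrix scatterplots).
GRAPH
  /SCATTERPLOT(MATRIX)=item01 item02 item03 item04.

Any subset of the items could be plotted; plotting all 14 at once produces a very dense display.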
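Finally, for the case mentioned under Rotation in which the factors (latent concepts) are expected to be correlated, the syntax below is a sketch of the same analysis with a Direct Oblimin (oblique) rotation substituted for Varimax; everything else matches Output 4.1.

* Same principal axis analysis as Output 4.1, but with an oblique (Direct Oblimin) rotation.
FACTOR
  /VARIABLES item01 item02 item03 item04 item05 item06 item07 item08
     item09 item10 item11 item12 item13 item14
  /MISSING LISTWISE
  /PRINT UNIVARIATE INITIAL CORRELATION DET KMO EXTRACTION ROTATION
  /FORMAT SORT BLANK(.3)
  /CRITERIA FACTORS(3) ITERATE(25)
  /EXTRACTION PAF
  /ROTATION OBLIMIN
  /METHOD=CORRELATION.

With an oblique rotation, the output includes a pattern matrix, a structure matrix, and a factor correlation matrix instead of the single rotated factor matrix produced by Varimax.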