
CPSC 540: Machine Learning
Probabilistic PCA, Factor Analysis, Independent Component Analysis
Mark Schmidt, University of British Columbia, Winter 2018

Outline
1. Probabilistic PCA
2. Factor Analysis
3. Independent Component Analysis

Expectation Maximization with Many Discrete Variables

EM iterations take the form
\Theta^{t+1} = \operatorname*{argmax}_{\Theta} \left\{ \sum_{H} \alpha_H \log p(O, H \mid \Theta) \right\},
and with multiple MAR variables \{H_1, H_2, \dots, H_m\} this means
\Theta^{t+1} = \operatorname*{argmax}_{\Theta} \left\{ \sum_{H_1} \sum_{H_2} \cdots \sum_{H_m} \alpha_H \log p(O, H \mid \Theta) \right\}.
In mixture models, EM sums over all k^n possible cluster assignments.
In binary semi-supervised learning, EM sums over all 2^t assignments to \tilde{y}.
But conditional independence allows efficient calculation in the above cases:
the H are independent given \{O, \Theta\}, which simplifies the sums (see EM notes).
We'll cover the general case when we discuss probabilistic graphical models.

Today: Continuous-Latent Variables

If H is continuous, the sums are replaced by integrals,
\log p(O \mid \Theta) = \log \int_{H} p(O, H \mid \Theta) \, dH \quad \text{(log-likelihood)},
\Theta^{t+1} = \operatorname*{argmax}_{\Theta} \int_{H} \alpha_H \log p(O, H \mid \Theta) \, dH \quad \text{(EM update)},
where if we have 5 hidden variables, \int_{H} means \int_{H_1} \int_{H_2} \int_{H_3} \int_{H_4} \int_{H_5}.
Even with conditional independence these integrals might be hard.
Gaussian assumptions allow efficient calculation of these integrals.
We'll cover the general case when we discuss Bayesian statistics.

Today: Continuous-Latent Variables

In mixture models, we have a discrete latent variable z^i:
in a mixture of Gaussians, if you know the cluster z^i then p(x^i | z^i) is a Gaussian.
In latent-factor models, we have continuous latent variables z^i:
in probabilistic PCA, if you know the latent factors z^i then p(x^i | z^i) is a Gaussian.
But what would a continuous z^i be useful for?
Do we really need to start solving integrals?

Today: Continuous-Latent Variables

Data may live in a low-dimensional manifold:
http://isomap.stanford.edu/handfig.html
Mixtures are inefficient at representing the 2D manifold.

Principal Component Analysis (PCA)

PCA replaces X with a lower-dimensional approximation Z.
The matrix Z has n rows, but typically far fewer columns.
PCA is used for:
Dimensionality reduction: replace X with a lower-dimensional Z.
Outlier detection: if PCA gives a poor approximation of x^i, it could be an outlier.
Basis for linear models: use Z as features in a regression model.
Data visualization: display the z^i in a scatterplot.
Factor discovery: discover important hidden "factors" underlying the data.
http://infoproc.blogspot.ca/2008/11/european-genetic-substructure.html

PCA Notation

PCA approximates the original matrix by factor loadings Z and latent factors W,
X \approx ZW,
where Z \in \mathbb{R}^{n \times k}, W \in \mathbb{R}^{k \times d}, and we assume the columns of X have mean 0.
We're trying to split the redundancy in X into its important "parts".
We typically take k \ll d, so this requires far fewer parameters:
[figure: the n-by-d matrix X approximated by the product of a tall n-by-k matrix Z and a wide k-by-d matrix W]
It is also computationally convenient:
Xv costs O(nd), but Z(Wv) only costs O(nk + dk).
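To make the factorization concrete, here is a minimal NumPy sketch (an illustration, not part of the lecture) that builds a rank-k approximation X ≈ ZW from the SVD of centered synthetic data; the dimensions and data are made up, and W is chosen so that WW^T = I, as assumed later in the notes.

```python
import numpy as np

# Sketch: rank-k factorization X ~= Z W via a truncated SVD of centered data.
# The data and dimensions below are synthetic and only for illustration.
n, d, k = 1000, 50, 5
rng = np.random.default_rng(0)
X = rng.standard_normal((n, k)) @ rng.standard_normal((k, d))  # (roughly) rank-k data
X = X - X.mean(axis=0)                                         # center the columns

U, s, Vt = np.linalg.svd(X, full_matrices=False)               # X = U diag(s) Vt
Z = U[:, :k] * s[:k]    # factor loadings, n x k
W = Vt[:k, :]           # latent factors,  k x d; rows orthonormal, so W @ W.T = I

print(np.linalg.norm(X - Z @ W) / np.linalg.norm(X))           # small reconstruction error

# Computational convenience from the slide: X v costs O(nd), Z (W v) costs O(nk + dk).
v = rng.standard_normal(d)
print(np.allclose(X @ v, Z @ (W @ v)))                         # True when k captures X
```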
PCA Notation

Using X \approx ZW, PCA approximates each example x^i as
x^i \approx W^T z^i.
Usually we only need to estimate W:
if using least squares, then given W we can find z^i from x^i using
z^i = \operatorname*{argmin}_{z} \|x^i - W^T z\|^2 = (WW^T)^{-1} W x^i.
We often assume that W^T is orthogonal:
this means that WW^T = I, and in this case we have z^i = W x^i.
In standard formulations, the solution is only unique up to rotation:
usually, we fit the rows of W sequentially for uniqueness.

Two Classic Views on PCA

PCA approximates the original matrix by latent variables Z and latent factors W,
X \approx ZW,
where Z \in \mathbb{R}^{n \times k} and W \in \mathbb{R}^{k \times d}.
There are two classical interpretations/derivations of PCA (equivalent for orthogonal W^T):
1. Choose latent factors W to minimize error ("synthesis view"):
\operatorname*{argmin}_{Z \in \mathbb{R}^{n \times k},\, W \in \mathbb{R}^{k \times d}} \|X - ZW\|_F^2 = \sum_{i=1}^{n} \sum_{j=1}^{d} (x_j^i - (w_j)^T z^i)^2.
2. Choose latent factors W^T to maximize variance ("analysis view"):
\operatorname*{argmax}_{W \in \mathbb{R}^{k \times d}} \sum_{i=1}^{n} \|z^i - \mu_z\|^2 = \sum_{i=1}^{n} \|W x^i\|^2 \quad (\text{using } z^i = W x^i \text{ and } \mu_z = 0)
= \sum_{i=1}^{n} \operatorname{Tr}((x^i)^T W^T W x^i) = \sum_{i=1}^{n} \operatorname{Tr}(W^T W x^i (x^i)^T) = \operatorname{Tr}(W^T W X^T X),
and we note that X^T X is n times the sample covariance S because the data is centered.

[The next several slides show figures illustrating the synthesis and analysis views of PCA.]

Probabilistic PCA

With zero-mean ("centered") data, in PCA we assume that
x^i \approx W^T z^i.
In probabilistic PCA we assume that
x^i \sim \mathcal{N}(W^T z^i, \sigma^2 I), \quad z^i \sim \mathcal{N}(0, I)
(we can actually use any Gaussian density for z).
We can treat the z^i as nuisance parameters and integrate over them in the likelihood,
p(x^i \mid W) = \int_{z^i} p(x^i, z^i \mid W) \, dz^i.
This looks ugly, but it is the marginal of a Gaussian, so it's Gaussian.
Regular PCA is obtained as the limit of \sigma^2 going to 0.

Manipulating Gaussians

From the assumptions of the previous slide we have (leaving out the i superscripts):
p(x \mid z, W) \propto \exp\left(-\frac{(x - W^T z)^T (x - W^T z)}{2\sigma^2}\right), \quad p(z) \propto \exp\left(-\frac{z^T z}{2}\right).
Multiplying and expanding we get
p(x, z \mid W) = p(x \mid z, W)\, p(z \mid W) = p(x \mid z, W)\, p(z) \quad (z \perp W)
\propto \exp\left(-\frac{(x - W^T z)^T (x - W^T z)}{2\sigma^2} - \frac{z^T z}{2}\right)
= \exp\left(-\frac{x^T x - x^T W^T z - z^T W x + z^T W W^T z}{2\sigma^2} - \frac{z^T z}{2}\right).

Manipulating Gaussians

So the "complete" likelihood satisfies
p(x, z \mid W) \propto \exp\left(-\frac{x^T x - x^T W^T z - z^T W x + z^T W W^T z}{2\sigma^2} - \frac{z^T z}{2}\right)
= \exp\left(-\frac{1}{2}\left[ x^T \left(\tfrac{1}{\sigma^2} I\right) x - x^T \left(\tfrac{1}{\sigma^2} W^T\right) z - z^T \left(\tfrac{1}{\sigma^2} W\right) x + z^T \left(\tfrac{1}{\sigma^2} W W^T + I\right) z \right]\right).
We can re-write the exponent as a quadratic form,
p(x, z \mid W) \propto \exp\left(-\frac{1}{2} \begin{bmatrix} x \\ z \end{bmatrix}^T \begin{bmatrix} \frac{1}{\sigma^2} I & -\frac{1}{\sigma^2} W^T \\ -\frac{1}{\sigma^2} W & \frac{1}{\sigma^2} W W^T + I \end{bmatrix} \begin{bmatrix} x \\ z \end{bmatrix}\right).
This has the form of a Gaussian distribution,
p(v \mid W) \propto \exp\left(-\frac{1}{2}(v - \mu)^T \Sigma^{-1} (v - \mu)\right),
with v = \begin{bmatrix} x \\ z \end{bmatrix}, \mu = 0, and \Sigma^{-1} = \begin{bmatrix} \frac{1}{\sigma^2} I & -\frac{1}{\sigma^2} W^T \\ -\frac{1}{\sigma^2} W & \frac{1}{\sigma^2} W W^T + I \end{bmatrix}.
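As a quick numerical sanity check on this block precision matrix (an illustration, not from the lecture), one can sample from the model z ∼ N(0, I), x | z ∼ N(W^T z, σ²I) and compare the inverse of the empirical covariance of v = [x; z] to the block formula; a NumPy sketch with made-up dimensions:

```python
import numpy as np

# Sketch: Monte Carlo check of the joint precision matrix of v = [x; z].
# Dimensions, sigma^2, and W are arbitrary choices for illustration.
rng = np.random.default_rng(0)
d, k, sigma2, n = 4, 2, 0.5, 500_000
W = rng.standard_normal((k, d))

Z = rng.standard_normal((n, k))                             # z^i ~ N(0, I)
X = Z @ W + np.sqrt(sigma2) * rng.standard_normal((n, d))   # x^i | z^i ~ N(W^T z^i, sigma^2 I)
V = np.hstack([X, Z])                                       # rows are v^i = [x^i; z^i]

empirical_precision = np.linalg.inv(np.cov(V, rowvar=False))

# Block formula derived above:
# [[  I/sigma^2,        -W^T/sigma^2 ],
#  [ -W/sigma^2,  W W^T/sigma^2 + I  ]]
analytic_precision = np.block([
    [np.eye(d) / sigma2, -W.T / sigma2],
    [-W / sigma2,        W @ W.T / sigma2 + np.eye(k)],
])

# Largest entry-wise gap; shrinks like 1/sqrt(n) as the sample size grows.
print(np.abs(empirical_precision - analytic_precision).max())
```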
Manipulating Gaussians

Remember that if we write a multivariate Gaussian in partitioned form,
\begin{bmatrix} x \\ z \end{bmatrix} \sim \mathcal{N}\left( \begin{bmatrix} \mu_x \\ \mu_z \end{bmatrix}, \begin{bmatrix} \Sigma_{xx} & \Sigma_{xz} \\ \Sigma_{zx} & \Sigma_{zz} \end{bmatrix} \right),
then the marginal distribution p(x) (integrating over z) is given by
x \sim \mathcal{N}(\mu_x, \Sigma_{xx}).
https://en.wikipedia.org/wiki/Multivariate_normal_distribution
For probabilistic PCA we assume \mu_x = 0, but we partitioned \Sigma^{-1} instead of \Sigma.
To get \Sigma we can use a partitioned matrix inversion formula,
\Sigma = \begin{bmatrix} \frac{1}{\sigma^2} I & -\frac{1}{\sigma^2} W^T \\ -\frac{1}{\sigma^2} W & \frac{1}{\sigma^2} W W^T + I \end{bmatrix}^{-1} = \begin{bmatrix} W^T W + \sigma^2 I & W^T \\ W & I \end{bmatrix},
which gives that the result of integrating over z is
x \mid W \sim \mathcal{N}(0, W^T W + \sigma^2 I).

Notes on Probabilistic PCA

The NLL of the observed data has the form
-\log p(X \mid W) = \frac{n}{2} \operatorname{Tr}(S\Theta) - \frac{n}{2} \log|\Theta| + \text{const.},
where \Theta = (W^T W + \sigma^2 I)^{-1} and S is the sample covariance.
This is not convex, but non-global stationary points are saddle points.
Equivalence with regular PCA:
consider W^T orthogonal, so that WW^T = I (the usual assumption).
Using the matrix determinant lemma we have
|W^T W + \sigma^2 I| = |I + \tfrac{1}{\sigma^2} \underbrace{W W^T}_{I}| \cdot |\sigma^2 I| = \text{const.}
Using the matrix inversion lemma we have
(W^T W + \sigma^2 I)^{-1} = \frac{1}{\sigma^2} I - \frac{1}{\sigma^2(\sigma^2 + 1)} W^T W,
so minimizing the NLL maximizes \operatorname{Tr}(W^T W S), as in PCA.

Generalizations of Probabilistic PCA

Why do we need a probabilistic interpretation of PCA?
Is it just a good excuse to play with Gaussian identities and matrix formulas?
We now understand that PCA fits a Gaussian with a restricted covariance:
the hope is that W^T W + \sigma^2 I is a good approximation of the full covariance X^T X.
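Returning to the NLL formula from the "Notes on Probabilistic PCA" slide, the sketch below (an illustration, not from the lecture) checks numerically that a direct evaluation of -log p(X | W) under x | W ∼ N(0, W^T W + σ²I) agrees with the (n/2) Tr(SΘ) - (n/2) log|Θ| form; SciPy's Gaussian density is used for the direct evaluation, and the data and dimensions are made up.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Sketch: the PPCA NLL computed from the marginal x | W ~ N(0, W^T W + sigma^2 I)
# matches (n/2) Tr(S Theta) - (n/2) log|Theta| + (nd/2) log(2 pi),
# where Theta = (W^T W + sigma^2 I)^{-1} and S is the sample covariance.
rng = np.random.default_rng(1)
n, d, k, sigma2 = 500, 6, 2, 0.3
W = rng.standard_normal((k, d))
X = rng.standard_normal((n, k)) @ W + np.sqrt(sigma2) * rng.standard_normal((n, d))

Sigma = W.T @ W + sigma2 * np.eye(d)      # marginal covariance of x
Theta = np.linalg.inv(Sigma)              # precision matrix
S = (X.T @ X) / n                         # sample covariance (the model mean is 0)

nll_direct = -multivariate_normal(mean=np.zeros(d), cov=Sigma).logpdf(X).sum()

_, logdet_Theta = np.linalg.slogdet(Theta)
nll_formula = (0.5 * n * np.trace(S @ Theta)
               - 0.5 * n * logdet_Theta
               + 0.5 * n * d * np.log(2 * np.pi))

print(np.isclose(nll_direct, nll_formula))  # True, up to floating-point error
```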