Eigenimages

- Unitary transforms
- Karhunen-Loève transform and eigenimages
- Sirovich and Kirby method
- Eigenfaces for gender recognition
- Fisher linear discriminant analysis
- Fisherimages and varying illumination
- Fisherfaces vs. eigenfaces

Digital Image Processing: Bernd Girod, © 2013-2018 Stanford University -- Eigenimages

Image recognition using linear projection

- To recognize complex patterns (e.g., faces), large portions of an image (say, N pixels) have to be considered.
- The high dimensionality of image space results in a high computational burden for many recognition techniques. Example: nearest-neighbor search requires a pairwise comparison with every image in a database.
- The transform c = W f is a projection onto a J-dimensional linear subspace that greatly reduces the dimensionality of the image space (J << N).
- Idea: tailor the projection to a set of representative training images and preserve the salient features by Principal Component Analysis (PCA):

      W_opt = argmax_W det( W R_ff W^H )

  where W is a JxN projection matrix with orthonormal rows, R_ff is the autocorrelation matrix of the image, and W R_ff W^H contains the mean squared values of the projection.

[Figure, three slides: 2-d example in the (f1, f2) plane. Goal: project samples onto a 1-d subspace, then perform classification.]

Unitary transforms
- Sort the pixels f[x, y] of an image into a column vector f of length N.
- Calculate N transform coefficients c = A f, where A is a matrix of size NxN.
- The transform A is unitary iff A^{-1} = A^{*T} ≡ A^H (the Hermitian conjugate).
- If A is real-valued, i.e., A = A^*, the transform is "orthonormal."

Energy conservation with unitary transforms

- For any unitary transform c = A f we obtain ||c||^2 = c^H c = f^H A^H A f = ||f||^2.
- Interpretation: every unitary transform is simply a rotation of the coordinate system (and, possibly, sign flips).
- Vector length is conserved, and so is energy (the mean squared vector length).

Energy distribution for unitary transforms

- Energy is conserved but, in general, unevenly distributed among the coefficients.
- Autocorrelation matrix of the coefficients: R_cc = E[c c^H] = E[A f f^H A^H] = A R_ff A^H.
- The diagonal of R_cc comprises the mean squared values ("energies") of the coefficients c_i: E[c_i^2] = [R_cc]_{i,i} = [A R_ff A^H]_{i,i}.

Eigenmatrix of the autocorrelation matrix

Definition: the eigenmatrix Φ of the autocorrelation matrix R_ff:
- Φ is unitary.
- The columns of Φ form a set of eigenvectors of R_ff, i.e., R_ff Φ = Φ Λ, where Λ = diag(λ_0, λ_1, ..., λ_{N-1}) is a diagonal matrix of eigenvalues.
- R_ff is a normal matrix, i.e., R_ff^H R_ff = R_ff R_ff^H, hence a unitary eigenmatrix exists ("spectral theorem").
- R_ff is symmetric nonnegative definite, hence λ_i ≥ 0 for all i.

Karhunen-Loève transform

- Unitary transform with matrix A = Φ^H.
- The transform coefficients are pairwise uncorrelated: R_cc = A R_ff A^H = Φ^H R_ff Φ = Φ^H Φ Λ = Λ.
- The columns of Φ are ordered according to decreasing eigenvalues.
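These properties are easy to check numerically. The minimal NumPy sketch below (variable names are my own) estimates an autocorrelation matrix from correlated random samples, forms the KLT matrix A = Φ^H from its sorted eigenvectors, and verifies energy conservation, decorrelation (R_cc = Λ), and energy concentration in the first coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Strongly correlated 2-d samples: f2 is largely a copy of f1 plus noise.
f1 = rng.standard_normal(10000)
f2 = 0.9 * f1 + 0.1 * rng.standard_normal(10000)
F = np.vstack([f1, f2])               # each column is one sample vector f

Rff = F @ F.T / F.shape[1]            # estimated autocorrelation matrix (2x2)

# Eigenmatrix Phi: columns are eigenvectors, sorted by decreasing eigenvalue.
lam, Phi = np.linalg.eigh(Rff)        # eigh returns ascending order
order = np.argsort(lam)[::-1]
lam, Phi = lam[order], Phi[:, order]

A = Phi.conj().T                      # KLT matrix A = Phi^H
C = A @ F                             # transform coefficients c = A f

# Energy conservation: ||c|| = ||f|| for every sample.
assert np.allclose(np.linalg.norm(C, axis=0), np.linalg.norm(F, axis=0))

# Decorrelation: Rcc = A Rff A^H is diagonal and equals Lambda.
Rcc = A @ Rff @ A.conj().T
assert np.allclose(Rcc, np.diag(lam))

# Energy concentration: fraction of total energy in each coefficient.
print(lam / lam.sum())
```

With these synthetic samples, almost all of the energy lands in the first coefficient, illustrating the concentration property the KLT is chosen for.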
- Energy concentration property:
  - No other unitary transform packs as much energy into the first J coefficients.
  - The mean squared approximation error incurred by keeping only the first J coefficients is minimized.
  - This holds for any J.

Illustration of energy concentration

[Figure: the rotation A = ((cos θ, −sin θ), (sin θ, cos θ)) maps strongly correlated samples of equal energies in the (f1, f2) plane to uncorrelated samples in the (c1, c2) plane, with most of the energy in the first coefficient.]

Basis images and eigenimages

- For any transform, the inverse transform f = A^{-1} c can be interpreted as a superposition of the columns of A^{-1} ("basis images").
- For the KL transform, the basis images are the eigenvectors of the autocorrelation matrix R_ff and are called "eigenimages."
- If energy concentration works well, only a limited number of eigenimages is needed to approximate a set of images with small error. These eigenimages span an optimal linear subspace of dimensionality J.
- Eigenimages can be used directly as the rows of the projection matrix W_opt = argmax_W det(W R_ff W^H), the JxN projection matrix with orthonormal rows (R_ff is the autocorrelation matrix of the image).

Eigenimages for face recognition

[Figure: recognition pipeline. A new face image f is normalized by subtracting the mean face, projected onto the eigenface subspace, and matched against a database of eigenface coefficients for K classes using a similarity measure on the coefficient vectors (e.g., a normalized inner product c^T p_k). The class k* of the most similar coefficient vector is the recognition result; dissimilar images are rejected.]

Computing eigenimages from a training set

- How can we obtain the NxN covariance matrix?
  - Use a training set Γ_1, Γ_2, ..., Γ_{L+1} (each column vector represents one image).
  - Let µ be the mean image of all L+1 training images.
  - Define the training set matrix S = (Γ_1 − µ, Γ_2 − µ, Γ_3 − µ, ..., Γ_L − µ) and calculate the scatter matrix R = Σ_{l=1}^{L} (Γ_l − µ)(Γ_l − µ)^H = S S^H.
- Problem 1: the training set size should satisfy L + 1 >> N. If L < N, the scatter matrix R is rank-deficient.
- Problem 2: finding the eigenvectors of an NxN matrix is costly.
- Can we find a small set of the most important eigenimages from a small training set with L << N?

Sirovich and Kirby algorithm

- Instead of the eigenvectors of S S^H, consider the eigenvectors of S^H S, i.e., S^H S v_i = λ_i v_i.
- Premultiplying both sides by S gives S S^H (S v_i) = λ_i (S v_i).
- By inspection, we find that the S v_i are eigenvectors of S S^H.
- Sirovich and Kirby algorithm (for L << N):
  - Compute the LxL matrix S^H S.
  - Compute the L eigenvectors v_i of S^H S.
  - Compute the eigenimages corresponding to the L_0 ≤ L largest eigenvalues as linear combinations of the training images, S v_i.

L. Sirovich and M. Kirby, "Low-dimensional procedure for the characterization of human faces," Journal of the Optical Society of America A, vol. 4, no. 3, pp. 519-524, 1987.

Example: eigenfaces

- The first 8 eigenfaces obtained from a training set of 100 male and 100 female training images. [Figure: mean face and eigenfaces 1-8.]
- They can be used to generate faces by adjusting 8 coefficients.
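The Sirovich and Kirby trick can be verified directly. In this NumPy sketch (array names and sizes are illustrative), L = 10 mean-removed training images of N = 400 pixels form S; the eigenvectors of the small LxL matrix S^H S are mapped through S and checked to be eigenvectors of the large NxN scatter matrix S S^H, which is never eigendecomposed itself:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 400, 10                        # pixels per image, training images (L << N)

Gamma = rng.standard_normal((N, L))   # training images as columns
mu = Gamma.mean(axis=1, keepdims=True)
S = Gamma - mu                        # training set matrix (N x L)

# Small eigenproblem: S^H S v_i = lambda_i v_i (L x L instead of N x N).
lam, V = np.linalg.eigh(S.T @ S)
order = np.argsort(lam)[::-1]
lam, V = lam[order], V[:, order]

# Mean removal makes rank(S) at most L - 1, so keep L0 = L - 1 eigenimages.
L0 = L - 1
U = S @ V[:, :L0]                     # eigenimages as combinations of training images
U /= np.linalg.norm(U, axis=0)        # normalize to unit length

# Check: each u_i is an eigenvector of S S^H with the same eigenvalue lambda_i.
R = S @ S.T                           # N x N scatter matrix (only built for the check)
for i in range(L0):
    assert np.allclose(R @ U[:, i], lam[i] * U[:, i])
```

The check works because S S^H (S v_i) = S (S^H S v_i) = λ_i (S v_i), exactly the premultiplication step above; the eigenimages also come out mutually orthogonal, since they are eigenvectors of a symmetric matrix.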
- They can also be used for face recognition by nearest-neighbor search in the 8-d "face space."

Gender recognition using eigenfaces

[Figure: female and male face samples plotted in "face space"; gender is determined by nearest-neighbor search.]

Fisher linear discriminant analysis

- The eigenimage method maximizes the scatter within the linear subspace over the entire image set, regardless of the classification task: W_opt = argmax_W det(W R W^H).
- Fisher linear discriminant analysis (1936): maximize the between-class scatter while minimizing the within-class scatter:

      R_B = Σ_{i=1}^{c} N_i (µ_i − µ)(µ_i − µ)^H      (N_i samples in class i, with class mean µ_i)
      R_W = Σ_{i=1}^{c} Σ_{Γ_l ∈ Class(i)} (Γ_l − µ_i)(Γ_l − µ_i)^H
      W_opt = argmax_W [ det(W R_B W^H) / det(W R_W W^H) ]

Fisher linear discriminant analysis (cont.)

- Solution: the generalized eigenvectors w_i corresponding to the J largest eigenvalues {λ_i | i = 1, 2, ..., J}, i.e., R_B w_i = λ_i R_W w_i, i = 1, 2, ..., J.
- Problem: the within-class scatter matrix R_W has rank at most L − c and is hence usually singular.
- Remedy: apply the KLT first to reduce the dimensionality of the feature space to L − c (or less), then proceed with Fisher LDA in the lower-dimensional space.

Eigenimages vs. Fisherimages

[Figure: 2-d example in the (f1, f2) plane. Goal: project samples onto a 1-d subspace, then perform classification. The KLT preserves maximum energy, but the two classes are no longer distinguishable after projection.]
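When R_W is invertible (e.g., after the KLT dimensionality reduction above), the generalized eigenproblem R_B w = λ R_W w can be solved via the eigenvectors of R_W^{-1} R_B. The NumPy sketch below (synthetic data; names are my own) mirrors the 2-d example: two classes elongated along f1 but offset along f2, so the Fisher direction separates them while the maximum-energy (KLT) direction does not:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500                                         # samples per class (c = 2 classes)

# Both classes spread strongly along f1, but their means differ along f2.
spread = np.array([[3.0, 0.0], [0.0, 0.3]])
class1 = spread @ rng.standard_normal((2, n)) + np.array([[0.0], [1.0]])
class2 = spread @ rng.standard_normal((2, n)) + np.array([[0.0], [-1.0]])

mu1 = class1.mean(axis=1, keepdims=True)
mu2 = class2.mean(axis=1, keepdims=True)
mu = (mu1 + mu2) / 2                            # overall mean (equal class sizes)

# Between-class and within-class scatter matrices.
RB = n * ((mu1 - mu) @ (mu1 - mu).T + (mu2 - mu) @ (mu2 - mu).T)
RW = (class1 - mu1) @ (class1 - mu1).T + (class2 - mu2) @ (class2 - mu2).T

# Generalized eigenproblem RB w = lambda RW w, solved via eig(RW^{-1} RB); J = 1.
lam, Wvec = np.linalg.eig(np.linalg.inv(RW) @ RB)
w = Wvec[:, np.argmax(lam.real)].real           # Fisher direction

# Projected class means are far apart relative to the within-class spread.
p1, p2 = w @ class1, w @ class2
gap = abs(p1.mean() - p2.mean())
print(gap / (p1.std() + p2.std()))              # large: classes separable

# Contrast: the KLT direction (largest total energy) is the long f1 axis,
# along which the projected class means nearly coincide.
F = np.hstack([class1, class2])
_, evecs = np.linalg.eigh(F @ F.T)
klt = evecs[:, -1]
print(abs((klt @ class1).mean() - (klt @ class2).mean()))  # much smaller than gap
```

The Fisher direction comes out nearly parallel to the f2 axis, i.e., the low-energy axis that the KLT would discard: exactly the failure mode of eigenimages for classification that the slides illustrate.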
Fisher LDA, in contrast, chooses the projection direction so that the two classes remain separable in the 1-d subspace.
