
Mixture Models & EM Algorithm
Lecture 21

David Sontag New York University

Slides adapted from Carlos Guestrin, Dan Klein, Luke Zettlemoyer, Dan Weld, Vibhav Gogate, and Andrew Moore

Evils of “Hard Assignments”?

• Clusters may overlap
• Some clusters may be “wider” than others
• Distances can be deceiving!

Probabilistic Clustering

• Try a probabilistic model!
• Allows overlaps, clusters of different size, etc.
• Can tell a generative story for data
  – P(X|Y) P(Y)
• Challenge: need to estimate model parameters without labeled Ys

   Y     X1     X2
   ??    0.1    2.1
   ??    0.5   -1.1
   ??    0.0    3.0
   ??   -0.1   -2.0
   ??    0.2    1.5
   …     …      …

The General GMM assumption

• P(Y): There are k components
• P(X|Y): Each component generates data from a multivariate

Gaussian with mean μi and covariance matrix Σi

Each data point is sampled from a generative process:
1. Choose component i with probability P(y=i)

2. Generate datapoint x ~ N(μi, Σi)
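A minimal sketch of this two-step generative process in Python/NumPy follows; the component weights, means, and covariances are illustrative assumptions, not values from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters for k = 3 components in 2 dimensions (assumed, not from the slides).
weights = np.array([0.5, 0.3, 0.2])                      # P(y = i)
means = np.array([[0.0, 0.0], [3.0, 3.0], [-3.0, 2.0]])  # mu_i
covs = np.array([np.eye(2),
                 0.5 * np.eye(2),
                 [[1.0, 0.8], [0.8, 1.0]]])              # Sigma_i

def sample_gmm(n):
    """Draw n points: (1) choose component i with probability P(y=i),
    (2) draw x ~ N(mu_i, Sigma_i)."""
    ys = rng.choice(len(weights), size=n, p=weights)
    xs = np.array([rng.multivariate_normal(means[i], covs[i]) for i in ys])
    return xs, ys

X, Y = sample_gmm(500)   # 500 samples plus the (normally hidden) component labels
```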

Gaussian mixture model (GMM)

[Figure: data generated from a three-component GMM with means µ1, µ2, µ3]

What Model Should We Use?

• Depends on X!
• Here, maybe Gaussian Naïve Bayes?
  – Multinomial over clusters Y: P(Y = yk) = θk
  – (Independent) Gaussian for each Xi given Y

   Y     X1     X2
   ??    0.1    2.1
   ??    0.5   -1.1
   ??    0.0    3.0
   ??   -0.1   -2.0
   ??    0.2    1.5
   …     …      …

Could we make fewer assumptions?

• What if the Xi co-vary?
• What if there are multiple peaks?
• Gaussian Mixture Models!
  – P(Y) still multinomial
  – P(X|Y) is a multivariate Gaussian distribution:

P(X = xj | Y = i) = [1 / ((2π)^(m/2) ||Σi||^(1/2))] exp( −(1/2) (xj − µi)^T Σi^(−1) (xj − µi) )

Multivariate Gaussians

P(X = xj | Y = i) = [1 / ((2π)^(m/2) ||Σi||^(1/2))] exp( −(1/2) (xj − µi)^T Σi^(−1) (xj − µi) )

Σ ∝ identity matrix

Multivariate Gaussians

P(X = xj | Y = i) = [1 / ((2π)^(m/2) ||Σi||^(1/2))] exp( −(1/2) (xj − µi)^T Σi^(−1) (xj − µi) )

Σ = diagonal matrix

Xi are independent, à la Gaussian NB

Multivariate Gaussians

P(X = xj | Y = i) = [1 / ((2π)^(m/2) ||Σi||^(1/2))] exp( −(1/2) (xj − µi)^T Σi^(−1) (xj − µi) )

Σ = arbitrary (semidefinite) matrix:
  - specifies rotation (change of basis)
  - eigenvalues specify relative elongation

Multivariate Gaussians

[Figure annotations: covariance matrix Σ = degree to which the xi vary together; eigenvalue λ of Σ]

P(X = xj | Y = i) = [1 / ((2π)^(m/2) ||Σi||^(1/2))] exp( −(1/2) (xj − µi)^T Σi^(−1) (xj − µi) )
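As a sketch, the density above can be evaluated directly with NumPy; the test point and covariances below are assumptions chosen to illustrate the identity / diagonal / arbitrary Σ cases from the preceding slides.

```python
import numpy as np

def gaussian_density(x, mu, Sigma):
    """P(X = x | Y = i) for a multivariate Gaussian with mean mu and covariance Sigma."""
    m = len(mu)
    diff = x - mu
    norm = (2 * np.pi) ** (m / 2) * np.linalg.det(Sigma) ** 0.5
    return np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff)) / norm

x = np.array([0.5, -0.2])
mu = np.zeros(2)

print(gaussian_density(x, mu, np.eye(2)))                    # Sigma ∝ identity: spherical
print(gaussian_density(x, mu, np.diag([1.0, 4.0])))          # Sigma diagonal: independent X_i
print(gaussian_density(x, mu, np.array([[1.0, 0.8],
                                        [0.8, 1.0]])))       # arbitrary Sigma: rotated / elongated
```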

Mixtures of Gaussians (1)

[Figure: Old Faithful data set, time to eruption vs. duration of last eruption]

Mixtures of Gaussians (1)

[Figure: Old Faithful data set fit with a single Gaussian vs. a mixture of two Gaussians]

Mixtures of Gaussians (2)

Combine simple models into a complex model:

p(x) = ∑k=1..K πk N(x | µk, Σk)      (πk: mixing coefficient; N(x | µk, Σk): component)
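A small sketch of evaluating this mixture density with SciPy; the K = 3 weights, means, and covariances are illustrative assumptions, not values from the lecture.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mixture_density(x, weights, means, covs):
    """p(x) = sum_k pi_k * N(x | mu_k, Sigma_k)."""
    return sum(pi * multivariate_normal.pdf(x, mean=mu, cov=S)
               for pi, mu, S in zip(weights, means, covs))

# Illustrative K = 3 mixture in 2D (assumed parameters).
weights = [0.5, 0.3, 0.2]
means = [np.zeros(2), np.array([3.0, 3.0]), np.array([-3.0, 2.0])]
covs = [np.eye(2), 0.5 * np.eye(2), np.array([[1.0, 0.8], [0.8, 1.0]])]

print(mixture_density(np.array([1.0, 1.0]), weights, means, covs))
```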

Mixtures of Gaussians (3)

[Figure: mixture density with K=3 components]

Eliminating Hard Assignments to Clusters

Model data as mixture of multivariate Gaussians

Shown is the posterior probability that a point was generated from the ith Gaussian: Pr(Y = i | x)

ML estimation in the supervised setting

• Univariate Gaussian

• Mixture of Multivariate Gaussians

ML estimate for each of the Multivariate Gaussians is given by:

µk_ML = (1/n) ∑j=1..n xj        Σk_ML = (1/n) ∑j=1..n (xj − µk_ML)(xj − µk_ML)^T

Just sums over x generated from the k’th Gaussian. That was easy!
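A sketch of these supervised ML estimates in NumPy: here the labels Y are observed, and each component's mean and covariance are fit from the points it generated.

```python
import numpy as np

def mle_gaussian(X, Y, k):
    """ML estimates (mu_k, Sigma_k) from the points labeled with class k."""
    Xk = X[Y == k]                        # just the x generated from the k'th Gaussian
    n = len(Xk)
    mu = Xk.mean(axis=0)                  # mu_k = (1/n) sum_j x_j
    diffs = Xk - mu
    Sigma = diffs.T @ diffs / n           # Sigma_k = (1/n) sum_j (x_j - mu_k)(x_j - mu_k)^T
    return mu, Sigma
```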

But what if unobserved data?

• MLE:
  – argmaxθ ∏j P(yj, xj)
  – θ: all model parameters
    • e.g., class probabilities, means, and variances

• But we don’t know the yj’s!!!
• Maximize marginal likelihood:
  – argmaxθ ∏j P(xj) = argmaxθ ∏j ∑k=1..K P(Yj=k, xj)

How do we optimize? Closed Form?

• Maximize marginal likelihood:
  – argmaxθ ∏j P(xj) = argmaxθ ∏j ∑k=1..K P(Yj=k, xj)
• Almost always a hard problem!
  – Usually no closed form solution
  – Even when lg P(X,Y) is convex, lg P(X) generally isn’t…
  – For all but the simplest P(X), we will have to do gradient ascent, in a big messy space with lots of local optima…

Learning general mixtures of Gaussians

P(y = k | xj) ∝ [1 / ((2π)^(m/2) ||Σk||^(1/2))] exp( −(1/2) (xj − µk)^T Σk^(−1) (xj − µk) ) P(y = k)

• Marginal likelihood:

∏j=1..m P(xj) = ∏j=1..m ∑k=1..K P(xj, y = k)
             = ∏j=1..m ∑k=1..K [1 / ((2π)^(m/2) ||Σk||^(1/2))] exp( −(1/2) (xj − µk)^T Σk^(−1) (xj − µk) ) P(y = k)
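For reference, a sketch of evaluating this marginal likelihood (in log space, via log-sum-exp, purely for numerical stability; the function name is my own):

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def log_marginal_likelihood(X, weights, means, covs):
    """sum_j log sum_k P(y=k) N(x_j | mu_k, Sigma_k) for an (n, d) data matrix X."""
    # log_probs[j, k] = log P(y=k) + log N(x_j | mu_k, Sigma_k)
    log_probs = np.column_stack([np.log(pi) + multivariate_normal.logpdf(X, mean=mu, cov=S)
                                 for pi, mu, S in zip(weights, means, covs)])
    return logsumexp(log_probs, axis=1).sum()
```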

• Need to differentiate and solve for μk, Σk, and P(Y=k) for k=1..K
• There will be no closed form solution, the gradient is complex, lots of local optima
• Wouldn’t it be nice if there was a better way!?!

Expectation Maximization

The EM Algorithm

• A clever method for maximizing marginal likelihood:
  – argmaxθ ∏j P(xj) = argmaxθ ∏j ∑k=1..K P(Yj=k, xj)
  – A type of gradient ascent that can be easy to implement (e.g., no line search, learning rates, etc.)
• Alternate between two steps:
  – Compute an expectation
  – Compute a maximization
• Not magic: still optimizing a non-convex function with lots of local optima
  – The computations are just easier (often, significantly so!)

EM: Two Easy Steps

Objective: argmaxθ lg ∏j ∑k=1..K P(Yj=k, xj | θ) = argmaxθ ∑j lg ∑k=1..K P(Yj=k, xj | θ)

Data: {xj | j=1..n}     (Notation is a bit inconsistent: parameters θ = λ)

• E-step: Compute expectations to “fill in” missing y values according to current parameters, θ

– For all examples j and values k for Yj, compute: P(Yj=k | xj, θ)

• M-step: Re-estimate the parameters with “weighted” MLE estimates

– Set θ = argmaxθ ∑j ∑k P(Yj=k | xj, θ) log P(Yj=k, xj |θ)

Especially useful when the E and M steps have closed form solutions!!!

EM algorithm: Pictorial View

Given a set of parameters and training data:
• Estimate the class of each training example using the parameters, yielding new (weighted) training data
• Relearn the parameters based on the new training data (a supervised learning problem)
• Class assignment is probabilistic or weighted (soft EM)
• Class assignment is hard (hard EM)

Simple example: learn means only!

Consider:
• 1D data
• Mixture of k=2 Gaussians
• Variances fixed to σ=1
• Distribution over classes is uniform
• Just need to estimate μ1 and μ2

∏j=1..m ∑k=1..K P(xj, Yj = k) ∝ ∏j=1..m ∑k=1..2 exp( −‖xj − µk‖² / (2σ²) ) P(Yj = k)

EM for GMMs: only learning means

Iterate: On the t’th iteration let our estimates be λt = { μ1(t), μ2(t) … μK(t) }

E-step: Compute “expected” classes of all datapoints

  P(Yj = k | xj, µ1…µK) ∝ exp( −‖xj − µk‖² / (2σ²) ) P(Yj = k)

M-step: Compute most likely new μs given class expectations

  µk = ∑j=1..m P(Yj = k | xj) xj  /  ∑j=1..m P(Yj = k | xj)
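A compact NumPy sketch of these two steps for the 1D, means-only case (random initialization and iteration count are my own choices):

```python
import numpy as np

def em_means_only(x, K=2, sigma=1.0, iters=50, seed=0):
    """EM when only the means are unknown: 1D data, shared variance sigma^2, uniform P(Y=k)."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=K, replace=False)              # initial guesses for mu_1..mu_K
    for _ in range(iters):
        # E-step: P(Y_j = k | x_j) proportional to exp(-(x_j - mu_k)^2 / (2 sigma^2))
        logits = -((x[:, None] - mu[None, :]) ** 2) / (2 * sigma ** 2)
        resp = np.exp(logits - logits.max(axis=1, keepdims=True))   # subtract max for stability
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: mu_k = sum_j P(Y_j=k|x_j) x_j / sum_j P(Y_j=k|x_j)
        mu = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)
    return mu
```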

E.M. for General GMMs

Iterate: On the t’th iteration let our estimates be λt = { μ1(t) … μK(t), Σ1(t) … ΣK(t), p1(t) … pK(t) }, where pk(t) is shorthand for the estimate of P(y=k) on the t’th iteration.

E-step: Compute “expected” classes of all datapoints for each class

  P(Yj = k | xj, λt) ∝ pk(t) p(xj | µk(t), Σk(t))        (just evaluate a Gaussian at xj)

M-step: Compute weighted MLE for µ given expected classes above

  µk(t+1) = ∑j P(Yj = k | xj, λt) xj  /  ∑j P(Yj = k | xj, λt)

  Σk(t+1) = ∑j P(Yj = k | xj, λt) [xj − µk(t+1)][xj − µk(t+1)]^T  /  ∑j P(Yj = k | xj, λt)

  pk(t+1) = ∑j P(Yj = k | xj, λt) / m        (m = #training examples)
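Putting the E and M steps together for the general case, a sketch (the initialization, iteration count, and the small ridge added to Σ for numerical stability are my own choices):

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, K, iters=100, seed=0):
    """EM for a general GMM on an (n, d) data matrix X; returns (p_k, mu_k, Sigma_k)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    p = np.full(K, 1.0 / K)
    mu = X[rng.choice(n, size=K, replace=False)]            # start the means at random data points
    Sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(K)])
    for _ in range(iters):
        # E-step: r[j, k] = P(Y_j = k | x_j, lambda_t) proportional to p_k N(x_j | mu_k, Sigma_k)
        r = np.column_stack([p[k] * multivariate_normal.pdf(X, mean=mu[k], cov=Sigma[k])
                             for k in range(K)])
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted MLE given the expected class memberships
        Nk = r.sum(axis=0)                                   # sum_j P(Y_j = k | x_j, lambda_t)
        mu = (r.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            Sigma[k] = (r[:, k, None] * diff).T @ diff / Nk[k] + 1e-6 * np.eye(d)
        p = Nk / n                                           # p_k^(t+1)
    return p, mu, Sigma
```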

Gaussian Mixture Example

[Figures: start, after 1st, 2nd, 3rd, 4th, 5th, 6th, and 20th iterations]

What if we do hard assignments?

Iterate: On the t’th iteration let our estimates be λt = { μ1(t), μ2(t) … μK(t) }

E-step: Compute “expected” classes of all datapoints

  P(Yj = k | xj, µ1…µK) ∝ exp( −‖xj − µk‖² / (2σ²) ) P(Yj = k)

M-step: Compute most likely new μs given class expectations, where δ represents hard assignment to the “most likely” or nearest cluster

  µk = ∑j=1..m P(Yj = k | xj) xj / ∑j=1..m P(Yj = k | xj)   becomes   µk = ∑j=1..m δ(Yj = k, xj) xj / ∑j=1..m δ(Yj = k, xj)

Equivalent to the k-means clustering algorithm!!!
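A sketch of this hard-assignment variant, which is exactly the k-means update (the initialization and empty-cluster guard are my own choices):

```python
import numpy as np

def hard_em(X, K, iters=50, seed=0):
    """Hard EM for means only: assign each point to its nearest mean (delta), then
    re-estimate each mean as the average of its assigned points, i.e. k-means."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(iters):
        # "E-step": delta(Y_j = k, x_j) -- hard assignment to the nearest cluster
        dists = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)   # (n, K) squared distances
        assign = dists.argmin(axis=1)
        # "M-step": mu_k = mean of the points assigned to cluster k
        mu = np.array([X[assign == k].mean(axis=0) if np.any(assign == k) else mu[k]
                       for k in range(K)])
    return mu, assign
```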

Let’s look at the math behind the magic!

Next lecture, we will argue that EM:
• Optimizes a bound on the likelihood
• Is a type of coordinate ascent
• Is guaranteed to converge to a (often local) optimum