CPSC 540: Machine Learning
Density Estimation
Mark Schmidt, University of British Columbia, Winter 2018

Last Time: Density Estimation

The next topic we'll focus on is density estimation. Given binary training data X and test data \tilde{X},

X = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 \end{bmatrix}, \qquad
\tilde{X} = \begin{bmatrix} ? & ? & ? & ? \\ ? & ? & ? & ? \\ ? & ? & ? & ? \\ ? & ? & ? & ? \\ ? & ? & ? & ? \end{bmatrix},

what is the probability p(x^i) of a generic feature vector x^i?
For the training data this is easy: set p(x^i) to "the number of times x^i appears in the training data" divided by n.
But we're interested in the probability of test data: what is the probability of seeing feature vector \tilde{x}^i for a new example i?

Bernoulli Distribution on Binary Variables

Let's start with the simplest case, a single binary variable x^i \in \{0, 1\} (e.g., coin flips):

X = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}.

For IID data the only choice is the Bernoulli distribution:

p(x^i = 1 \mid \theta) = \theta, \qquad p(x^i = 0 \mid \theta) = 1 - \theta.

We can write both cases together as

p(x^i \mid \theta) = \theta^{I[x^i = 1]} (1 - \theta)^{I[x^i = 0]},

where I[y] = 1 if y is true and I[y] = 0 if y is false.

Maximum Likelihood with Bernoulli Distribution

The MLE for the Bernoulli likelihood is

\operatorname*{argmax}_{0 \le \theta \le 1} p(X \mid \theta)
= \operatorname*{argmax}_{0 \le \theta \le 1} \prod_{i=1}^n p(x^i \mid \theta)
= \operatorname*{argmax}_{0 \le \theta \le 1} \prod_{i=1}^n \theta^{I[x^i = 1]} (1 - \theta)^{I[x^i = 0]}
= \operatorname*{argmax}_{0 \le \theta \le 1} \underbrace{\theta \cdot \theta \cdots \theta}_{\text{number of } x^i = 1} \; \underbrace{(1 - \theta)(1 - \theta) \cdots (1 - \theta)}_{\text{number of } x^i = 0}
= \operatorname*{argmax}_{0 \le \theta \le 1} \theta^{N_1} (1 - \theta)^{N_0},

where N_1 is the number of 1 values and N_0 is the number of 0 values.
If you equate the derivative of the log-likelihood with zero, you get \theta = \frac{N_1}{N_1 + N_0}.
So if you toss a coin 50 times and it lands heads 24 times, your MLE is 24/50.

Multinomial Distribution on Categorical Variables

Consider the multi-category case x^i \in \{1, 2, 3, \dots, k\} (e.g., rolling dice):

X = \begin{bmatrix} 2 \\ 1 \\ 1 \\ 3 \\ 1 \\ 2 \end{bmatrix}.

The categorical distribution is

p(x^i = c \mid \theta_1, \theta_2, \dots, \theta_k) = \theta_c, \quad \text{where } \sum_{c=1}^k \theta_c = 1.

We can write this for a generic x^i as

p(x^i \mid \theta_1, \theta_2, \dots, \theta_k) = \prod_{c=1}^k \theta_c^{I[x^i = c]}.

Using Lagrange multipliers (bonus) to handle the sum-to-one constraint, the MLE is

\theta_c = \frac{N_c}{\sum_{c'} N_{c'}} \quad \text{("fraction of times you rolled a 4")}.

If we never see category 4 in the data, should we assume \theta_4 = 0?
If we assume \theta_4 = 0 and there is a 4 in the test set, our test set likelihood is 0.
To leave room for this possibility we often use "Laplace smoothing",

\theta_c = \frac{N_c + 1}{\sum_{c'} (N_{c'} + 1)}.

This is like adding a "fake" example to the training set for each class.

MAP Estimation with Bernoulli Distributions

In the binary case, a generalization of Laplace smoothing is

\theta = \frac{N_1 + \alpha - 1}{(N_1 + \alpha - 1) + (N_0 + \beta - 1)}.

We get the MLE when \alpha = \beta = 1, and Laplace smoothing with \alpha = \beta = 2.
This is a MAP estimate under a beta prior,

p(\theta \mid \alpha, \beta) = \frac{1}{B(\alpha, \beta)} \theta^{\alpha - 1} (1 - \theta)^{\beta - 1},

where the beta function B makes the probability integrate to one: we want \int_\theta p(\theta \mid \alpha, \beta) \, d\theta = 1, so we define B(\alpha, \beta) = \int_\theta \theta^{\alpha - 1} (1 - \theta)^{\beta - 1} \, d\theta.
Note that B(\alpha, \beta) is constant in terms of \theta, so it doesn't affect the MAP estimate.
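As a concrete illustration of these counting-based estimates, here is a minimal sketch in Python/NumPy (not from the slides) that computes the Bernoulli MLE, the Laplace-smoothed estimate, and the beta-prior MAP estimate from a vector of binary observations; the function name and the coin-flip data are assumptions made for the example.

    import numpy as np

    def bernoulli_estimates(x, alpha=2.0, beta=2.0):
        """Return the MLE, Laplace-smoothed, and Beta(alpha, beta) MAP estimates of theta."""
        x = np.asarray(x)
        n1 = np.sum(x == 1)   # N_1: count of 1s
        n0 = np.sum(x == 0)   # N_0: count of 0s
        mle = n1 / (n1 + n0)                                   # N_1 / (N_1 + N_0)
        laplace = (n1 + 1) / ((n1 + 1) + (n0 + 1))             # add one "fake" example per class
        map_est = (n1 + alpha - 1) / ((n1 + alpha - 1) + (n0 + beta - 1))
        return mle, laplace, map_est

    # Example from the slide: 50 coin tosses, 24 heads.
    x = np.array([1] * 24 + [0] * 26)
    print(bernoulli_estimates(x))  # MLE = 24/50; Laplace = MAP with alpha = beta = 2 = 25/52

Note that the Laplace-smoothed value and the MAP estimate with \alpha = \beta = 2 coincide, matching the claim above.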
MAP Estimation with Categorical Distributions

In the categorical case, a generalization of Laplace smoothing is

\theta_c = \frac{N_c + \alpha_c - 1}{\sum_{c'=1}^k (N_{c'} + \alpha_{c'} - 1)},

which is a MAP estimate under a Dirichlet prior,

p(\theta_1, \theta_2, \dots, \theta_k \mid \alpha_1, \alpha_2, \dots, \alpha_k) = \frac{1}{B(\alpha)} \prod_{c=1}^k \theta_c^{\alpha_c - 1},

where B(\alpha) makes the multivariate distribution integrate to 1 over \theta:

B(\alpha) = \int_{\theta_1} \int_{\theta_2} \cdots \int_{\theta_{k-1}} \int_{\theta_k} \prod_{c=1}^k \theta_c^{\alpha_c - 1} \, d\theta_k \, d\theta_{k-1} \cdots d\theta_2 \, d\theta_1.

Because of the MAP-regularization connection, Laplace smoothing is a form of regularization.

General Discrete Distribution

Now consider the case where x^i \in \{0, 1\}^d (e.g., words in e-mails):

X = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 \end{bmatrix}.

Now there are 2^d possible values of x^i, so we can't afford to even store a \theta for each possible x^i.
With n training examples we see at most n unique x^i values.
But unless we have a small number of repeated x^i values, we'll hopelessly overfit.
With a finite dataset, we'll need to make assumptions...

Product of Independent Distributions

A common assumption is that the variables are independent:

p(x^i_1, x^i_2, \dots, x^i_d \mid \Theta) = \prod_{j=1}^d p(x^i_j \mid \theta_j).

Now we just need to model each column of X as its own dataset:

X = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 \end{bmatrix}
\;\rightarrow\;
X_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}, \;
X_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \;
X_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \\ 1 \end{bmatrix}, \;
X_4 = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 1 \end{bmatrix}.

This is a big assumption, but now you can fit a Bernoulli to each variable (a small sketch of this is given below).
We did this in CPSC 340 for naive Bayes.
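To make the "product of independent Bernoullis" model concrete, here is a minimal sketch in Python/NumPy (not from the slides) that fits one Bernoulli parameter per column, using Laplace smoothing so that unseen values don't get probability zero, and evaluates the probability of a new binary vector; the function names and the extra smoothing choice are assumptions for illustration.

    import numpy as np

    def fit_independent_bernoullis(X):
        """Fit theta_j for each column of a binary matrix X, with Laplace smoothing."""
        n, d = X.shape
        return (X.sum(axis=0) + 1) / (n + 2)   # (N1_j + 1) / ((N1_j + 1) + (N0_j + 1))

    def log_prob(x_new, theta):
        """Log-probability of a binary vector under the product of independent Bernoullis."""
        return np.sum(x_new * np.log(theta) + (1 - x_new) * np.log(1 - theta))

    # Training data matching the slide (5 examples, 4 binary variables).
    X = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 1]])
    theta = fit_independent_bernoullis(X)   # one theta per column
    print(theta, np.exp(log_prob(np.array([1, 0, 1, 0]), theta)))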
Density Estimation and Fundamental Trade-off

"Product of independent" distributions: easy to estimate each \theta_j, but can't model many distributions.
General discrete distribution: hard to estimate the 2^d parameters, but can model any distribution.
This is an unsupervised version of the fundamental trade-off:
Simple models often don't fit the data well, but don't overfit much.
Complex models fit the data well, but often overfit.
We'll consider models that lie between these extremes:
1. Mixture models.
2. Graphical models.
3. Boltzmann machines.

Outline

1. Discrete Variables
2. Continuous Distributions

Univariate Gaussian

Consider the case of a continuous variable x \in \mathbb{R}:

X = \begin{bmatrix} 0.53 \\ 1.83 \\ -2.26 \\ 0.86 \end{bmatrix}.

Even with 1 variable there are many possible distributions.
The most common is the Gaussian (or "normal") distribution:

p(x^i \mid \mu, \sigma^2) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left(-\frac{(x^i - \mu)^2}{2\sigma^2}\right), \quad \text{or} \quad x^i \sim \mathcal{N}(\mu, \sigma^2),

for \mu \in \mathbb{R} and \sigma > 0.

[Figure: Gaussian density curves for different parameter values; see https://en.wikipedia.org/wiki/Gaussian_function]
The mean parameter \mu controls the location of the center of the density.
The variance parameter \sigma^2 controls how spread out the density is.

Why use the Gaussian distribution?
The data might actually follow a Gaussian: a good justification if true, but usually false.
Central limit theorem: mean estimators converge in distribution to a Gaussian. A bad justification: it doesn't imply the data distribution converges to a Gaussian.
It is the maximum-entropy distribution that fits the mean and variance of the data (bonus): it "makes the least assumptions" while matching the first two moments of the data. But for complicated problems, just matching the mean and variance isn't enough.
It has a closed-form maximum likelihood estimate (MLE): the MLE for the mean is the mean of the data (the "sample mean" or "empirical mean"), and the MLE for the variance is the variance of the data (the "sample variance"). "Fast and simple."

Univariate Gaussian MLE

The Gaussian likelihood for an example x^i is

p(x^i \mid \mu, \sigma^2) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left(-\frac{(x^i - \mu)^2}{2\sigma^2}\right).

So the negative log-likelihood for n IID examples is

-\log p(X \mid \mu, \sigma^2) = -\sum_{i=1}^n \log p(x^i \mid \mu, \sigma^2) = \frac{1}{2\sigma^2} \sum_{i=1}^n (x^i - \mu)^2 + n \log(\sigma) + \text{const.}

Setting the derivative with respect to \mu to 0 gives the MLE

\hat{\mu} = \frac{1}{n} \sum_{i=1}^n x^i \quad \text{(for any } \sigma > 0\text{)}.

Plugging in \hat{\mu} and setting the derivative with respect to \sigma to zero gives

\hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^n (x^i - \hat{\mu})^2 \quad \text{(if this is zero, the NLL is unbounded and the MLE doesn't exist)}.

Alternatives to Univariate Gaussian

Why not the Gaussian distribution?
The negative log-likelihood is a quadratic function of \mu,

-\log p(X \mid \mu, \sigma^2) = \frac{1}{2\sigma^2} \sum_{i=1}^n (x^i - \mu)^2 + n \log(\sigma) + \text{const.},

so as with least squares, the Gaussian is not robust to outliers.
It is "light-tailed": it assumes most data is really close to the mean.
Robust alternatives are the Laplace distribution or the student's t-distribution, which are "heavy-tailed": they place non-trivial probability on data far from the mean.
The Gaussian distribution is also unimodal. The Laplace and student t are also unimodal, so they don't fix this issue. Next time we'll discuss "mixture models" that address this.

Product of Independent Gaussians

If we have d variables, we could make each follow an independent Gaussian,

x^i_j \sim \mathcal{N}(\mu_j, \sigma_j^2).

In this case the joint density over all d variables is

p(x^i \mid \mu, \sigma^2) \propto \prod_{j=1}^d \exp\left(-\frac{(x^i_j - \mu_j)^2}{2\sigma_j^2}\right)
= \exp\left(-\frac{1}{2} \sum_{j=1}^d \frac{(x^i_j - \mu_j)^2}{\sigma_j^2}\right) \quad (e^a e^b = e^{a+b})
= \exp\left(-\frac{1}{2} (x^i - \mu)^T \Sigma^{-1} (x^i - \mu)\right) \quad \text{(the sum is a quadratic form with a diagonal matrix)},

where \mu = (\mu_1, \mu_2, \dots, \mu_d) and \Sigma is diagonal with diagonal elements \sigma_j^2.
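As a small illustration of these closed-form estimates, here is a minimal sketch in Python/NumPy (not part of the slides) that computes the per-dimension Gaussian MLE and evaluates the log-density of the resulting product of independent Gaussians; the data matrix and function names are made up for the example.

    import numpy as np

    def fit_independent_gaussians(X):
        """MLE for a product of independent Gaussians: per-column sample mean and variance."""
        mu = X.mean(axis=0)      # mu_j = (1/n) sum_i x^i_j
        sigma2 = X.var(axis=0)   # sigma_j^2 = (1/n) sum_i (x^i_j - mu_j)^2 (the biased MLE)
        return mu, sigma2

    def log_density(x, mu, sigma2):
        """Log-density of x under independent Gaussians (diagonal covariance)."""
        return np.sum(-0.5 * np.log(2 * np.pi * sigma2) - (x - mu) ** 2 / (2 * sigma2))

    # Illustrative data: 4 examples, 2 continuous variables.
    X = np.array([[0.53, 1.2],
                  [1.83, -0.7],
                  [-2.26, 0.4],
                  [0.86, 0.1]])
    mu, sigma2 = fit_independent_gaussians(X)
    print(mu, sigma2, log_density(np.array([0.0, 0.0]), mu, sigma2))

Using np.var with its default divides by n rather than n - 1, which matches the MLE \hat{\sigma}^2 derived above.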
