Maximum Likelihood Theory

Umberto Triacca
Università dell'Aquila
Department of Computer Engineering, Computer Science and Mathematics, University of L'Aquila, L'Aquila, Italy
[email protected]

Contents: Introduction; Likelihood and log-likelihood; Score vector and score function; Fisher information matrix; Information matrix equality; Cramer-Rao inequality; Maximum likelihood estimate/estimator; Invariance; Consistency; Large sample distribution; Asymptotic efficiency.

Lesson 1: The Log-likelihood Function

Probabilistic model

Consider an observable random phenomenon. Suppose that this phenomenon can be appropriately described by a random variable $X$ with probability density function (pdf), or probability mass function (pmf), belonging to the family

\[
\Phi = \{ f(x; \theta);\ \theta \in \Theta \},
\]

where $\Theta$, a subset of the $k$-dimensional Euclidean space $\mathbb{R}^k$, is called the parametric space. The family $\Phi$ is a probabilistic model.

Gaussian model

\[
\Phi = \left\{ f(x; \theta) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2};\ \theta = (\mu, \sigma^2) \in \Theta = \mathbb{R} \times \mathbb{R}^+ \right\}
\]

Bernoulli model

\[
\Phi = \left\{ f(x; \theta) = \theta^x (1-\theta)^{1-x};\ \theta \in \Theta = (0, 1) \right\}
\]

Poisson model

\[
\Phi = \left\{ f(x; \theta) = \frac{e^{-\theta}\theta^x}{x!};\ \theta \in \Theta = (0, \infty) \right\}
\]

Probabilistic model

The starting point of any inferential process is a probabilistic model. Since the functional form of the density functions of the probabilistic model is known, all the uncertainty concerning the random phenomenon is the uncertainty concerning the parameter $\theta$. In order to get information on $\theta$, we consider a sample from the population described by our random variable; in particular, we consider a random sample. What is a random sample?

Random sample

Definition 1. Let $X_n = (X_1, X_2, \dots, X_n)'$ be a vector of $n$ independent, identically distributed (i.i.d.) random variables with pdf (or pmf) belonging to the family $\Phi = \{ f(x; \theta);\ \theta \in \Theta \}$. We say that $X_n = (X_1, X_2, \dots, X_n)'$ is a random sample of size $n$ from $f(x; \theta)$.

The distribution of the random sample $X_n = (X_1, X_2, \dots, X_n)'$ is the joint distribution of the random variables $X_1, X_2, \dots, X_n$, denoted by $f_{1,2,\dots,n}(x_n; \theta) = f_{1,2,\dots,n}(x_1, x_2, \dots, x_n; \theta)$. By independence,

\[
f_{1,2,\dots,n}(x_1, x_2, \dots, x_n; \theta) = f(x_1; \theta) f(x_2; \theta) \cdots f(x_n; \theta) = \prod_{i=1}^{n} f(x_i; \theta).
\]

An example

Let $X = (X_1, X_2, \dots, X_n)'$ be a random sample from an $N(\mu, \sigma^2)$ distribution with $\mu$ and $\sigma^2$ unknown. In this case $\theta = (\mu, \sigma^2) \in \mathbb{R} \times \mathbb{R}^+$, and the distribution of the random sample is

\[
f_{1,2,\dots,n}(x_n; \theta) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ -\frac{1}{2}\left(\frac{x_i - \mu}{\sigma}\right)^2 \right\}
= \left( \frac{1}{\sqrt{2\pi\sigma^2}} \right)^{n} \exp\left\{ -\frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2 \right\}.
\]

Likelihood Function

Definition 2. Let $X_n = (X_1, X_2, \dots, X_n)'$ be a random sample of size $n$ from $f(x; \theta)$. Given a realization $x_n = (x_1, x_2, \dots, x_n)'$ of the random sample $X_n = (X_1, X_2, \dots, X_n)'$, the function $L : \Theta \to [0, \infty)$ defined by

\[
L(\theta; x_n) = f_{1,2,\dots,n}(x_n; \theta)
\]

is called the likelihood function.
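To make Definitions 1 and 2 concrete, here is a minimal numerical sketch, assuming Python with NumPy and SciPy (neither appears in the notes; the sample values are illustrative). The likelihood of a realization of a Gaussian random sample is just the product of the marginal densities, viewed as a function of $\theta = (\mu, \sigma^2)$:

```python
import numpy as np
from scipy.stats import norm

# A realization x_n of a random sample from N(mu, sigma^2)
# (illustrative values, not taken from the notes).
x = np.array([4.2, 5.1, 3.8, 5.6, 4.9])

def likelihood(mu, sigma2, x):
    """L(theta; x) = prod_i f(x_i; theta) for the Gaussian model."""
    return np.prod(norm.pdf(x, loc=mu, scale=np.sqrt(sigma2)))

# The data are fixed; the likelihood is a function of theta alone.
print(likelihood(4.0, 1.0, x))  # support the data give to theta = (4.0, 1.0)
print(likelihood(4.7, 0.5, x))  # support the data give to theta = (4.7, 0.5)
```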
An example

Let $x = (x_1, x_2, \dots, x_n)'$ be a realization of a random sample from an $N(\mu, \sigma^2)$ distribution with $\mu$ and $\sigma^2$ unknown. In this case $\theta = (\mu, \sigma^2) \in \mathbb{R} \times \mathbb{R}^+$, and the likelihood function is

\[
L(\mu, \sigma^2; x) = \left( \frac{1}{\sqrt{2\pi\sigma^2}} \right)^{n} \exp\left\{ -\frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2 \right\}.
\]

Another example

Let $x_n = (x_1, x_2, \dots, x_n)'$ be a realization of a random sample $X = (X_1, X_2, \dots, X_n)'$ from a Bernoulli distribution with probability mass function

\[
f(x; \theta) = \begin{cases} \theta & \text{if } x = 1 \\ 1 - \theta & \text{if } x = 0. \end{cases}
\]

The likelihood function is

\[
L(\theta; x_n) = \theta^{x_1}(1-\theta)^{1-x_1}\, \theta^{x_2}(1-\theta)^{1-x_2} \cdots \theta^{x_n}(1-\theta)^{1-x_n}
= \theta^{\sum_{i=1}^{n} x_i} (1-\theta)^{\,n - \sum_{i=1}^{n} x_i}.
\]

Likelihood Function

Let $x_n = (x_1, x_2, \dots, x_n)'$ and $x_n^* = (x_1^*, x_2^*, \dots, x_n^*)'$ be two different realizations of a random sample $X_n = (X_1, X_2, \dots, X_n)'$. The likelihood function at the point $x_n$ is (generally) a different function from the likelihood function at the point $x_n^*$, that is,

\[
L(\theta; x_n) \neq L(\theta; x_n^*).
\]

For example, consider a realization $x_5 = (x_1, x_2, x_3, x_4, x_5)'$ of a random sample $X_5 = (X_1, X_2, X_3, X_4, X_5)'$ from a Bernoulli distribution with parameter $\theta$.

Suppose $x_5 = (1, 0, 1, 0, 1)'$. The likelihood function is $L(\theta; (1,0,1,0,1)') = \theta^3(1-\theta)^2$.

Suppose $x_5 = (1, 0, 1, 0, 0)'$. The likelihood function is $L(\theta; (1,0,1,0,0)') = \theta^2(1-\theta)^3$.

Suppose $x_5 = (1, 0, 0, 0, 0)'$. The likelihood function is $L(\theta; (1,0,0,0,0)') = \theta(1-\theta)^4$.

Generally, but not always! Consider again two realizations of a random sample $X_5 = (X_1, X_2, X_3, X_4, X_5)'$ from a Bernoulli distribution, $x_5 = (1, 0, 0, 0, 0)'$ and $x_5^* = (0, 0, 0, 0, 1)'$. We have $x_5 \neq x_5^*$, but

\[
L(\theta; x_5) = L(\theta; x_5^*) = \theta(1-\theta)^4.
\]

The likelihood function expresses the plausibilities of different parameter values after we have observed $x_n$. In particular, for $\theta = \theta^*$, the number $L(\theta^*; x_n)$ is considered a measure of the support that the observation $x_n$ gives to the parameter $\theta^*$.

Consider a realization $x_5$ of a random sample $X_5 = (X_1, X_2, X_3, X_4, X_5)'$ from a Bernoulli distribution with parameter $\theta$. Suppose $x_5 = (1, 1, 1, 1, 1)'$ and consider two possible values of $\theta$: $\theta_1 = 1/3$ and $\theta_2 = 2/3$.

The plausibility of $\theta_1$ is $L(\theta_1; (1,1,1,1,1)') = (1/3)^5 = 0.004115226$.

The plausibility of $\theta_2$ is $L(\theta_2; (1,1,1,1,1)') = (2/3)^5 = 0.1316872$.

Clearly $L(\theta_2; (1,1,1,1,1)') > L(\theta_1; (1,1,1,1,1)')$.

Now, suppose $x_5 = (0, 0, 0, 0, 0)'$. The plausibility of $\theta_1$ is $L(\theta_1; (0,0,0,0,0)') = (2/3)^5 = 0.1316872$, while the plausibility of $\theta_2$ is $L(\theta_2; (0,0,0,0,0)') = (1/3)^5 = 0.004115226$. Clearly $L(\theta_1; (0,0,0,0,0)') > L(\theta_2; (0,0,0,0,0)')$.
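These plausibilities are easy to reproduce numerically. A minimal sketch in Python (the helper name bernoulli_likelihood is ours, not from the notes):

```python
# Bernoulli likelihood: L(theta; x) = theta^(sum x) * (1 - theta)^(n - sum x)
def bernoulli_likelihood(theta, x):
    s = sum(x)
    return theta**s * (1 - theta)**(len(x) - s)

x_all_ones  = [1, 1, 1, 1, 1]
x_all_zeros = [0, 0, 0, 0, 0]

print(bernoulli_likelihood(1/3, x_all_ones))   # 0.004115226...
print(bernoulli_likelihood(2/3, x_all_ones))   # 0.131687242...
print(bernoulli_likelihood(1/3, x_all_zeros))  # 0.131687242...
print(bernoulli_likelihood(2/3, x_all_zeros))  # 0.004115226...
```

The all-ones sample supports $\theta_2 = 2/3$ over $\theta_1 = 1/3$, and the all-zeros sample reverses the ranking, exactly as computed above.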
The Log-likelihood Function

Often we work with the natural logarithm of the likelihood function, the so-called log-likelihood function:

\[
\ell(\theta; x) = \ln L(\theta; x) = \sum_{i=1}^{n} \ln f(x_i; \theta).
\]

An example

Let $x = (x_1, x_2, \dots, x_n)'$ be a realization of a random sample from an $N(\mu, \sigma^2)$ distribution with $\mu$ and $\sigma^2$ unknown. In this case $\theta = (\mu, \sigma^2) \in \mathbb{R} \times \mathbb{R}^+$, and the likelihood function is

\[
L(\mu, \sigma^2; x) = \left( \frac{1}{\sqrt{2\pi\sigma^2}} \right)^{n} \exp\left\{ -\frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2 \right\}
= \frac{1}{\sigma^n (2\pi)^{n/2}} \exp\left[ -\frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2 \right],
\]

and the log-likelihood function is given by

\[
\ell(\mu, \sigma^2; x) = -n \ln \sigma - \frac{n}{2} \ln 2\pi - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2.
\]

Lesson 2: Score vector and information matrix

Score function

Definition 3. If the likelihood function $L(\theta; x)$ is differentiable, then the gradient of the log-likelihood,

\[
s(\theta; x) = \frac{\partial \ell(\theta; x)}{\partial \theta} = \frac{\partial \ln L(\theta; x)}{\partial \theta},
\]

is called the score function. The score function can also be obtained through the chain rule:

\[
\frac{\partial \ell(\theta; x)}{\partial \theta} = \frac{1}{L(\theta; x)} \frac{\partial L(\theta; x)}{\partial \theta}.
\]

Score function: an example

Let $x = (x_1, x_2, \dots, x_n)'$ be a realization of a random sample from an $N(\mu, \sigma^2)$ distribution. Here $\theta = (\mu, \sigma^2)$, and the score function is given by

\[
s(\theta; x) = \left( \frac{\sum_{i=1}^{n} (x_i - \mu)}{\sigma^2},\ \frac{\sum_{i=1}^{n} (x_i - \mu)^2}{2\sigma^4} - \frac{n}{2\sigma^2} \right)'.
\]

Score vector

Definition 4.
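As a check on the Gaussian score example above, here is a minimal sketch assuming Python with NumPy (the sample values and the step size h are illustrative choices, not from the notes). It compares the analytic score with a central-difference gradient of the log-likelihood:

```python
import numpy as np

def loglik(mu, sigma2, x):
    """Gaussian log-likelihood l(mu, sigma^2; x)."""
    n = len(x)
    return -n / 2 * np.log(2 * np.pi * sigma2) - np.sum((x - mu)**2) / (2 * sigma2)

def score(mu, sigma2, x):
    """Analytic score s(theta; x) for the Gaussian model."""
    n = len(x)
    return np.array([np.sum(x - mu) / sigma2,
                     np.sum((x - mu)**2) / (2 * sigma2**2) - n / (2 * sigma2)])

x = np.array([4.2, 5.1, 3.8, 5.6, 4.9])
mu, sigma2, h = 4.0, 1.0, 1e-6

# Central-difference gradient of l(theta; x); should match the analytic score.
numerical = np.array([
    (loglik(mu + h, sigma2, x) - loglik(mu - h, sigma2, x)) / (2 * h),
    (loglik(mu, sigma2 + h, x) - loglik(mu, sigma2 - h, x)) / (2 * h),
])
print(score(mu, sigma2, x))  # analytic gradient
print(numerical)             # numerical gradient, approximately equal
```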
