
Chapter 2: Maximum Likelihood Estimation

Advanced Econometrics - HEC Lausanne

Christophe Hurlin

University of Orléans

December 9, 2013

Section 1

Introduction


Maximum Likelihood Estimation (MLE) is a method of estimating the parameters of a model. This estimation method is one of the most widely used.

The method of maximum likelihood selects the set of values of the model parameters that maximizes the likelihood function. Intuitively, this maximizes the "agreement" of the selected model with the observed data.

Maximum likelihood estimation gives a unified approach to estimation.


What are the main properties of the maximum likelihood estimator?

Is it asymptotically unbiased?
Is it asymptotically efficient? Under which condition(s)?
Is it consistent?
What is the asymptotic distribution?

How to apply the maximum likelihood principle to the multiple linear regression model, to the probit/logit models, etc.?

... All of these questions are answered in this lecture...


The outline of this chapter is the following:
Section 2: The principle of maximum likelihood estimation
Section 3: The likelihood function
Section 4: Maximum likelihood estimator
Section 5: Score, Hessian and Fisher information
Section 6: Properties of the maximum likelihood estimator


References

Amemiya T. (1985), Advanced Econometrics. Harvard University Press.

Greene W. (2007), Econometric Analysis, sixth edition, Pearson - Prentice Hall.

Pelgrin, F. (2010), Lecture notes, Advanced Econometrics, HEC Lausanne (a special thank).

Ruud, P. (2000), An Introduction to Classical Econometric Theory, Oxford University Press.

Zivot, E. (2001), Maximum Likelihood Estimation, Lecture notes.

Section 2

The Principle of Maximum Likelihood


Objectives

In this section, we present a simple example in order:

1 To introduce the notations

2 To introduce the notion of likelihood and log-likelihood.

3 To introduce the concept of maximum likelihood estimator

4 To introduce the concept of maximum likelihood estimate


Example

Suppose that $X_1, X_2, \ldots, X_N$ are i.i.d. discrete random variables, such that $X_i \sim \text{Pois}(\theta)$ with a pmf (probability mass function) defined as:

$$\Pr(X_i = x_i) = \frac{\exp(-\theta)\,\theta^{x_i}}{x_i!}$$

where $\theta$ is an unknown parameter to estimate.


Question: What is the probability of observing the particular sample $\{x_1, x_2, \ldots, x_N\}$, assuming that a Poisson distribution with as yet unknown parameter $\theta$ generated the data? This probability is equal to:

$$\Pr\left((X_1 = x_1) \cap \ldots \cap (X_N = x_N)\right)$$


Since the variables $X_i$ are i.i.d., this joint probability is equal to the product of the marginal probabilities:

$$\Pr\left((X_1 = x_1) \cap \ldots \cap (X_N = x_N)\right) = \prod_{i=1}^N \Pr(X_i = x_i)$$

Given the pmf of the Poisson distribution, we have:

$$\Pr\left((X_1 = x_1) \cap \ldots \cap (X_N = x_N)\right) = \prod_{i=1}^N \frac{\exp(-\theta)\,\theta^{x_i}}{x_i!} = \exp(-\theta N)\,\frac{\theta^{\sum_{i=1}^N x_i}}{\prod_{i=1}^N x_i!}$$

Definition
This joint probability is a function of $\theta$ (the unknown parameter) and corresponds to the likelihood of the sample $\{x_1, \ldots, x_N\}$, denoted by:

$$L_N(\theta; x_1, \ldots, x_N) = \Pr\left((X_1 = x_1) \cap \ldots \cap (X_N = x_N)\right)$$

with

$$L_N(\theta; x_1, \ldots, x_N) = \exp(-\theta N)\,\theta^{\sum_{i=1}^N x_i}\left(\prod_{i=1}^N x_i!\right)^{-1}$$

Example
Let us assume that for $N = 10$ we have a realization of the sample equal to $\{5, 0, 1, 1, 0, 3, 2, 3, 4, 1\}$; then:

$$L_N(\theta; x_1, \ldots, x_N) = \Pr\left((X_1 = x_1) \cap \ldots \cap (X_N = x_N)\right) = \frac{e^{-10\theta}\,\theta^{20}}{207{,}360}$$

Question: What value of θ would make this sample most probable?


This figure plots the function $L_N(\theta; x)$ for various values of $\theta$. It has a single peak at $\theta = 2$, which would be the maximum likelihood estimate, or MLE, of $\theta$.

[Figure: the likelihood function $L_N(\theta; x)$ (vertical scale $\times 10^{-8}$) plotted for $\theta \in [0, 4]$; the curve peaks at $\theta = 2$.]

Consider maximizing the likelihood function $L_N(\theta; x_1, \ldots, x_N)$ with respect to $\theta$. Since the log function is monotonically increasing, we usually maximize $\ln L_N(\theta; x_1, \ldots, x_N)$ instead. In this case:

$$\ln L_N(\theta; x_1, \ldots, x_N) = -\theta N + \ln(\theta)\sum_{i=1}^N x_i - \ln\left(\prod_{i=1}^N x_i!\right)$$

$$\frac{\partial \ln L_N(\theta; x_1, \ldots, x_N)}{\partial \theta} = -N + \frac{1}{\theta}\sum_{i=1}^N x_i \qquad \frac{\partial^2 \ln L_N(\theta; x_1, \ldots, x_N)}{\partial \theta^2} = -\frac{1}{\theta^2}\sum_{i=1}^N x_i < 0$$

Under suitable regularity conditions, the maximum likelihood estimate (estimator) is defined as:

$$\hat{\theta} = \arg\max_{\theta \in \mathbb{R}^+} \ln L_N(\theta; x_1, \ldots, x_N)$$

FOC:

$$\frac{\partial \ln L_N(\theta; x_1, \ldots, x_N)}{\partial \theta}\bigg|_{\hat{\theta}} = -N + \frac{1}{\hat{\theta}}\sum_{i=1}^N x_i = 0 \iff \hat{\theta} = \frac{1}{N}\sum_{i=1}^N x_i$$

SOC:

$$\frac{\partial^2 \ln L_N(\theta; x_1, \ldots, x_N)}{\partial \theta^2}\bigg|_{\hat{\theta}} = -\frac{1}{\hat{\theta}^2}\sum_{i=1}^N x_i < 0$$

so $\hat{\theta}$ is a maximum.

The maximum likelihood estimate (realization) is:

$$\hat{\theta} \equiv \hat{\theta}(x) = \frac{1}{N}\sum_{i=1}^N x_i$$

Given the sample $\{5, 0, 1, 1, 0, 3, 2, 3, 4, 1\}$, we have $\hat{\theta}(x) = 2$.

The maximum likelihood estimator (random variable) is:

$$\hat{\theta} = \frac{1}{N}\sum_{i=1}^N X_i$$
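As a quick numerical check of this example, here is a minimal Python sketch (not part of the original slides; it assumes only numpy and scipy). It evaluates the Poisson log-likelihood for the sample above over a grid of values of $\theta$ and confirms that the maximizer coincides with the sample mean.

```python
import numpy as np
from scipy.special import gammaln  # ln(x!) = gammaln(x + 1)

x = np.array([5, 0, 1, 1, 0, 3, 2, 3, 4, 1])  # the sample from the example
N = x.size

def poisson_loglik(theta):
    # ln L_N(theta; x) = -theta*N + ln(theta) * sum(x_i) - sum(ln(x_i!))
    return -theta * N + np.log(theta) * x.sum() - gammaln(x + 1).sum()

grid = np.linspace(0.01, 6.0, 100_000)
print(grid[np.argmax(poisson_loglik(grid))])  # ~ 2.0: numerical maximizer
print(x.mean())                               # 2.0: closed-form MLE (sample mean)
```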


Continuous variables

The reference to the probability of observing the given sample is not exact in a continuous distribution, since a particular sample has probability zero. Nonetheless, the principle is the same.

The likelihood function then corresponds to the pdf associated with the joint distribution of $(X_1, X_2, \ldots, X_N)$ evaluated at the point $(x_1, x_2, \ldots, x_N)$:

$$L_N(\theta; x_1, \ldots, x_N) = f_{X_1,\ldots,X_N}(x_1, x_2, \ldots, x_N; \theta)$$

Continuous variables

If the random variables $\{X_1, X_2, \ldots, X_N\}$ are i.i.d., then we have:

$$L_N(\theta; x_1, \ldots, x_N) = \prod_{i=1}^N f_X(x_i; \theta)$$

where $f_X(x_i; \theta)$ denotes the pdf of the marginal distribution of $X$ (or $X_i$, since all the variables have the same distribution).

The values of the parameters that maximize $L_N(\theta; x_1, \ldots, x_N)$ or its log are the maximum likelihood estimates, denoted $\hat{\theta}(x)$.

Section 3

The Likelihood function

Definitions and Notations


Objectives

1 Introduce the notations for an estimation problem that deals with a marginal distribution or a conditional distribution (model).

2 Define the likelihood and the log-likelihood functions.

3 Introduce the concept of conditional log-likelihood

4 Propose various applications


Notations

Let us consider a continuous random variable $X$, with a pdf denoted $f_X(x; \theta)$, for $x \in \mathbb{R}$.

$\theta = (\theta_1 \ldots \theta_K)^\top$ is a $K \times 1$ vector of unknown parameters. We assume that $\theta \in \Theta \subseteq \mathbb{R}^K$.

Let us consider a sample $\{X_1, \ldots, X_N\}$ of i.i.d. random variables with the same arbitrary distribution as $X$.

The realisation of $\{X_1, \ldots, X_N\}$ (the data set) is denoted $\{x_1, \ldots, x_N\}$ or $x$ for simplicity.

Example (Normal Distribution)
If $X \sim \mathcal{N}(m, \sigma^2)$, then:

$$f_X(z; \theta) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(z - m)^2}{2\sigma^2}\right) \quad \forall z \in \mathbb{R}$$

with $K = 2$ and $\theta = \begin{pmatrix} m \\ \sigma^2 \end{pmatrix}$.

Definition (Likelihood Function)
The likelihood function is defined to be:

$$L_N : \Theta \times \mathbb{R}^N \to \mathbb{R}^+$$
$$(\theta; x_1, \ldots, x_N) \mapsto L_N(\theta; x_1, \ldots, x_N) = \prod_{i=1}^N f_X(x_i; \theta)$$

Definition (Log-Likelihood Function)
The log-likelihood function is defined to be:

$$\ell_N : \Theta \times \mathbb{R}^N \to \mathbb{R}$$
$$(\theta; x_1, \ldots, x_N) \mapsto \ell_N(\theta; x_1, \ldots, x_N) = \sum_{i=1}^N \ln f_X(x_i; \theta)$$

Remark: the (log-)likelihood function depends on two types of arguments: the vector of parameters $\theta$ and the data (the realisation $x$ of the sample).


Notations: In the rest of the chapter, I will use the following alternative notations:

$$L_N(\theta; x) \equiv L(\theta; x_1, \ldots, x_N) \equiv L_N(\theta)$$

$$\ell_N(\theta; x) \equiv \ln L_N(\theta; x) \equiv \ln L(\theta; x_1, \ldots, x_N) \equiv \ln L_N(\theta)$$

Example (Sample of Normal Variables)
We consider a sample $\{Y_1, \ldots, Y_N\}$ $\mathcal{N}$.i.d. $(m, \sigma^2)$ and denote the realisation by $\{y_1, \ldots, y_N\}$ or $y$. Let us define $\theta = (m\ \ \sigma^2)^\top$; then we have:

$$L_N(\theta; y) = \prod_{i=1}^N \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(y_i - m)^2}{2\sigma^2}\right) = \left(\sigma^2 2\pi\right)^{-N/2}\exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^N (y_i - m)^2\right)$$

$$\ell_N(\theta; y) = -\frac{N}{2}\ln\sigma^2 - \frac{N}{2}\ln(2\pi) - \frac{1}{2\sigma^2}\sum_{i=1}^N (y_i - m)^2$$

Definition (Likelihood of one observation)
We can also define the (log-)likelihood of one observation $x_i$:

$$L_i(\theta; x) = f_X(x_i; \theta) \quad \text{with} \quad L_N(\theta; x) = \prod_{i=1}^N L_i(\theta; x)$$

$$\ell_i(\theta; x) = \ln f_X(x_i; \theta) \quad \text{with} \quad \ell_N(\theta; x) = \sum_{i=1}^N \ell_i(\theta; x)$$

Example (Exponential Distribution)

Suppose that $D_1, D_2, \ldots, D_N$ are i.i.d. positive random variables (durations, for instance), with $D_i \sim \text{Exp}(\theta)$, $\theta \geq 0$, and:

$$L_i(\theta; d_i) = f_D(d_i; \theta) = \frac{1}{\theta}\exp\left(-\frac{d_i}{\theta}\right)$$

$$\ell_i(\theta; d_i) = \ln(f_D(d_i; \theta)) = -\ln(\theta) - \frac{d_i}{\theta}$$

Then we have:

$$L_N(\theta; d) = \theta^{-N}\exp\left(-\frac{1}{\theta}\sum_{i=1}^N d_i\right) \qquad \ell_N(\theta; d) = -N\ln(\theta) - \frac{1}{\theta}\sum_{i=1}^N d_i$$
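As an illustrative sketch (the simulated durations and the true value $\theta = 3$ are assumptions made for this example), one can check numerically that $\ell_N(\theta; d)$ is maximized at the sample mean, which solves the likelihood equation $-N/\theta + \sum d_i/\theta^2 = 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = rng.exponential(scale=3.0, size=500)  # simulated durations, true theta = 3

def exp_loglik(theta):
    # l_N(theta; d) = -N ln(theta) - (1/theta) sum(d_i)
    return -d.size * np.log(theta) - d.sum() / theta

grid = np.linspace(0.5, 10.0, 50_000)
print(grid[np.argmax(exp_loglik(grid))])  # numerical maximizer
print(d.mean())                           # closed-form MLE: the sample mean
```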


Remark: The (log-)likelihood and the Maximum Likelihood Estimator are always based on an assumption (bet?) about the distribution of Y .

$$Y_i \sim \text{Distribution with pdf } f_Y(y; \theta) \implies L_N(\theta; y) \text{ and } \ell_N(\theta; y)$$

In practice, generally we have no idea about the true distribution of Yi .... A solution: the Quasi-Maximum Likelihood Estimator


Remark: We can also use the MLE to estimate the parameters of a model (with dependent and explanatory variables) such that:

$$y = g(x; \theta) + \varepsilon$$

where $\theta$ denotes the vector of parameters, $X$ a set of explanatory variables, $\varepsilon$ an error term, and $g(.)$ the link function. In this case, we generally consider the conditional distribution of $Y$ given $X$, which is equivalent to the unconditional distribution of the error term $\varepsilon$:

$$Y | X \sim \mathcal{D} \iff \varepsilon \sim \mathcal{D}$$

Notations (model)

Let us consider two continuous random variables $Y$ and $X$. We assume that $Y$ has a conditional distribution given $X = x$ with a pdf denoted $f_{Y|x}(y; \theta)$, for $y \in \mathbb{R}$.

$\theta = (\theta_1 \ldots \theta_K)^\top$ is a $K \times 1$ vector of unknown parameters. We assume that $\theta \in \Theta \subseteq \mathbb{R}^K$.

Let us consider a sample $\{X_i, Y_i\}_{i=1}^N$ of i.i.d. random variables and a realisation $\{x_i, y_i\}_{i=1}^N$.

Definition (Conditional likelihood function)
The (conditional) likelihood function is defined to be:

$$L_N(\theta; y|x) = \prod_{i=1}^N f_{Y|X}(y_i | x_i; \theta)$$

where $f_{Y|X}(y_i|x_i; \theta)$ denotes the conditional pdf of $Y_i$ given $X_i$.

Remark: The conditional likelihood function is the joint conditional density of the data, in which the unknown parameter is $\theta$.

Definition (Conditional log-likelihood function)
The (conditional) log-likelihood function is defined to be:

$$\ell_N(\theta; y|x) = \sum_{i=1}^N \ln f_{Y|X}(y_i | x_i; \theta)$$

where $f_{Y|X}(y_i|x_i; \theta)$ denotes the conditional pdf of $Y_i$ given $X_i$.

Remark: The conditional density function (pdf) can be denoted by:

$$f_{Y|X}(y|x; \theta) \equiv f_Y(y|X = x; \theta) \equiv f_Y(y|X = x)$$

Example (Linear Regression Model) Consider the following linear regression model:

$$y_i = X_i^\top\beta + \varepsilon_i$$

where $X_i$ is a $K \times 1$ vector of random variables and $\beta = (\beta_1 \ldots \beta_K)^\top$ a $K \times 1$ vector of parameters. We assume that the $\varepsilon_i$ are i.i.d. with $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$. Then, the conditional distribution of $Y_i$ given $X_i = x_i$ is:

$$Y_i | x_i \sim \mathcal{N}\left(x_i^\top\beta, \sigma^2\right)$$

$$L_i(\theta; y_i|x_i) = f_{Y|x}(y_i|x_i; \theta) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(y_i - x_i^\top\beta)^2}{2\sigma^2}\right)$$

where $\theta = (\beta^\top\ \ \sigma^2)^\top$ is a $(K + 1) \times 1$ vector.

Example (Linear Regression Model, cont'd)
Then, if we consider an i.i.d. sample $\{y_i, x_i\}_{i=1}^N$, the corresponding conditional (log-)likelihood is defined to be:

$$L_N(\theta; y|x) = \prod_{i=1}^N f_{Y|X}(y_i|x_i; \theta) = \prod_{i=1}^N \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(y_i - x_i^\top\beta)^2}{2\sigma^2}\right) = \left(\sigma^2 2\pi\right)^{-N/2}\exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^N \left(y_i - x_i^\top\beta\right)^2\right)$$

$$\ell_N(\theta; y|x) = -\frac{N}{2}\ln\sigma^2 - \frac{N}{2}\ln(2\pi) - \frac{1}{2\sigma^2}\sum_{i=1}^N \left(y_i - x_i^\top\beta\right)^2$$
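A minimal sketch of how this conditional log-likelihood can be evaluated in Python (the toy data, seed, and parameter values are assumptions made for illustration only):

```python
import numpy as np

def lm_loglik(beta, sigma2, y, X):
    """Gaussian conditional log-likelihood of the linear model y_i = x_i' beta + eps_i."""
    N = y.size
    resid = y - X @ beta
    return (-N / 2 * np.log(sigma2)
            - N / 2 * np.log(2 * np.pi)
            - resid @ resid / (2 * sigma2))

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=200)])  # constant + one regressor
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=200)
print(lm_loglik(np.array([1.0, 2.0]), 0.25, y, X))
```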


Remark: Given this principle, we can derive the (conditional) likelihood and log-likelihood functions associated with a specific sample for any type of econometric model in which the conditional distribution of the dependent variable is known:
Dichotomic models: probit, logit models, etc.
Censored regression models: Tobit, etc.
Time series models: AR, ARMA, VAR, etc.
GARCH models ...


Example (Probit/Logit Models)

Let us consider a dichotomic variable $Y_i$ such that $Y_i = 1$ if the firm $i$ is in default and $0$ otherwise. $X_i = (X_{i1} \ldots X_{iK})^\top$ denotes a $K \times 1$ vector of individual characteristics. We assume that the conditional probability of default is defined as:

$$\Pr(Y_i = 1 | X_i = x_i) = F\left(x_i^\top\beta\right)$$

where $\beta = (\beta_1 \ldots \beta_K)^\top$ is a vector of parameters and $F(.)$ is a cdf (cumulative distribution function).

$$Y_i = \begin{cases} 1 & \text{with probability } F\left(x_i^\top\beta\right) \\ 0 & \text{with probability } 1 - F\left(x_i^\top\beta\right) \end{cases}$$

Remark: Given the choice of the link function $F(.)$, we get a probit or a logit model.


Definition (Probit Model)

In a probit model, the conditional probability of the event $Y_i = 1$ is:

$$\Pr(Y_i = 1 | X_i = x_i) = \Phi\left(x_i^\top\beta\right) = \int_{-\infty}^{x_i^\top\beta} \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{u^2}{2}\right) du$$

where $\Phi(.)$ denotes the cdf of the standard normal distribution.

Definition (Logit Model)

In a logit model, the conditional probability of the event $Y_i = 1$ is:

$$\Pr(Y_i = 1 | X_i = x_i) = \Lambda\left(x_i^\top\beta\right) = \frac{1}{1 + \exp\left(-x_i^\top\beta\right)}$$

where $\Lambda(.)$ denotes the cdf of the logistic distribution.

Example (Probit/Logit Models, cont'd)
What is the (conditional) log-likelihood of the sample $\{y_i, x_i\}_{i=1}^N$? Whatever the choice of $F(.)$, the conditional distribution of $Y_i$ given $X_i = x_i$ is a Bernoulli distribution since:

$$Y_i = \begin{cases} 1 & \text{with probability } F\left(x_i^\top\beta\right) \\ 0 & \text{with probability } 1 - F\left(x_i^\top\beta\right) \end{cases}$$

Then, for $\theta = \beta$, we have:

$$L_i(\theta; y|x) = f_{Y|x}(y_i|x_i; \theta) = \left[F\left(x_i^\top\beta\right)\right]^{y_i}\left[1 - F\left(x_i^\top\beta\right)\right]^{1 - y_i}$$

where $f_{Y|x}(y_i|x_i; \theta)$ denotes the conditional probability mass function (pmf) of $Y_i$.

Example (Probit/Logit Models, cont'd)
The (conditional) likelihood and log-likelihood of the sample $\{y_i, x_i\}_{i=1}^N$ are defined to be:

$$L_N(\theta; y|x) = \prod_{i=1}^N f_{Y|x}(y_i|x_i; \theta) = \prod_{i=1}^N \left[F\left(x_i^\top\beta\right)\right]^{y_i}\left[1 - F\left(x_i^\top\beta\right)\right]^{1 - y_i}$$

$$\ell_N(\theta; y|x) = \sum_{i=1}^N y_i \ln\left[F\left(x_i^\top\beta\right)\right] + \sum_{i=1}^N (1 - y_i)\ln\left[1 - F\left(x_i^\top\beta\right)\right] = \sum_{i:\, y_i = 1} \ln F\left(x_i^\top\beta\right) + \sum_{i:\, y_i = 0} \ln\left[1 - F\left(x_i^\top\beta\right)\right]$$

where $f_{Y|x}(y_i|x_i; \theta)$ denotes the conditional probability mass function (pmf) of $Y_i$.
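A hedged sketch of this log-likelihood for the logit case, $F = \Lambda$ (the simulated data and all names are assumptions; replacing the logistic cdf by $\Phi$ gives the probit version):

```python
import numpy as np

def logit_loglik(beta, y, X):
    # sum_i [ y_i ln F(x_i'b) + (1 - y_i) ln(1 - F(x_i'b)) ], with F the logistic cdf
    F = 1.0 / (1.0 + np.exp(-(X @ beta)))
    eps = 1e-12  # numerical guard against log(0)
    return np.sum(y * np.log(F + eps) + (1 - y) * np.log(1 - F + eps))

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
p = 1.0 / (1.0 + np.exp(-(X @ np.array([-0.5, 1.0]))))
y = rng.binomial(1, p)  # Bernoulli draws with logistic probabilities
print(logit_loglik(np.array([-0.5, 1.0]), y, X))
```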


Key Concepts

1 Likelihood (of a sample) function

2 Log-likelihood (of a sample) function

3 Conditional Likelihood and log-likelihood function

4 Likelihood and log-likelihood of one observation

Section 4

Maximum Likelihood Estimator


Objectives

1 This section will be concerned with obtaining estimates of the parameters θ.

2 We will de…ne the maximum likelihood estimator (MLE).

3 Before we begin that study, we consider the question of whether estimation of the parameters is possible at all: the question of identification.

4 We will introduce the invariance principle.


Definition (Identification)
The parameter vector $\theta$ is identified (estimable) if for any other parameter vector $\theta^* \neq \theta$, for some data $y$, we have:

$$L_N(\theta^*; y) \neq L_N(\theta; y)$$

Example

Let us consider a latent (continuous and unobservable) variable $Y_i^*$ such that:

$$Y_i^* = X_i^\top\beta + \varepsilon_i$$

with $\beta = (\beta_1 \ldots \beta_K)^\top$, $X_i = (X_{i1} \ldots X_{iK})^\top$, and where the error term $\varepsilon_i$ is i.i.d. such that $E(\varepsilon_i) = 0$ and $V(\varepsilon_i) = \sigma^2$. The distribution of $\varepsilon_i$ is symmetric around 0 and we denote by $G(.)$ the cdf of the standardized error term $\varepsilon_i/\sigma$. We assume that this cdf does not depend on $\sigma$ or $\beta$. Example: $\varepsilon_i/\sigma \sim \mathcal{N}(0, 1)$.

Example (cont’d)

We observe a dichotomic variable $Y_i$ such that:

$$Y_i = \begin{cases} 1 & \text{if } Y_i^* > 0 \\ 0 & \text{otherwise} \end{cases}$$

Problem: are the parameters $\theta = (\beta^\top\ \ \sigma^2)^\top$ identifiable?

Solution: To answer this question, we have to compute the (log-)likelihood of the sample of observed data $\{y_i, x_i\}_{i=1}^N$. We have:

$$\Pr(Y_i = 1 | X_i = x_i) = \Pr(Y_i^* > 0 | X_i = x_i) = \Pr\left(\varepsilon_i > -x_i^\top\beta\right) = 1 - \Pr\left(\varepsilon_i \leq -x_i^\top\beta\right) = 1 - \Pr\left(\frac{\varepsilon_i}{\sigma} \leq -x_i^\top\frac{\beta}{\sigma}\right)$$

If we denote by $G(.)$ the cdf associated with the distribution of $\varepsilon_i/\sigma$, since this distribution is symmetric around 0, then we have:

$$\Pr(Y_i = 1 | X_i = x_i) = G\left(x_i^\top\frac{\beta}{\sigma}\right)$$

Solution (cont'd):
For $\theta = (\beta^\top\ \ \sigma^2)^\top$, we have:

$$\ell_N(\theta; y|x) = \sum_{i=1}^N y_i \ln G\left(x_i^\top\frac{\beta}{\sigma}\right) + \sum_{i=1}^N (1 - y_i)\ln\left(1 - G\left(x_i^\top\frac{\beta}{\sigma}\right)\right)$$

This log-likelihood depends only on the ratio $\beta/\sigma$. So, for $\theta = (\beta^\top\ \ \sigma^2)^\top$ and $\theta^* = (k\beta^\top\ \ k\sigma)^\top$ with $k \neq 1$:

$$\ell_N(\theta; y|x) = \ell_N(\theta^*; y|x)$$

The parameters $\beta$ and $\sigma^2$ cannot be identified. We can only identify the ratio $\beta/\sigma$.

Remark: In this latent variable model, only the ratio $\beta/\sigma$ can be identified, since:

$$\Pr(Y_i = 1 | X_i = x_i) = \Pr\left(\frac{\varepsilon_i}{\sigma} < x_i^\top\frac{\beta}{\sigma}\right) = G\left(x_i^\top\frac{\beta}{\sigma}\right)$$

The choice of a logit or probit model implies a normalisation on the variance of $\varepsilon_i/\sigma$ and hence on $\sigma^2$:

$$\text{probit:} \quad \Pr(Y_i = 1 | X_i = x_i) = \Phi\left(x_i^\top\tilde{\beta}\right) \quad \text{with } \tilde{\beta} = \beta/\sigma \text{ and } V\left(\frac{\varepsilon_i}{\sigma}\right) = 1$$

Definition (Maximum Likelihood Estimator)
A maximum likelihood estimator $\hat{\theta}$ of $\theta \in \Theta$ is a solution to the maximization problem:

$$\hat{\theta} = \arg\max_{\theta \in \Theta} \ell_N(\theta; y|x)$$

or equivalently:

$$\hat{\theta} = \arg\max_{\theta \in \Theta} L_N(\theta; y|x)$$

Remarks

1 Do not confuse the maximum likelihood estimator $\hat{\theta}$ (which is a random variable) and the maximum likelihood estimate $\hat{\theta}(x)$, which corresponds to the realisation of $\hat{\theta}$ on the sample $x$.

2 Generally, it is easier to maximise the log-likelihood than the likelihood (especially for distributions that belong to the exponential family).

3 When we consider an unconditional likelihood, the MLE is defined by:

$$\hat{\theta} = \arg\max_{\theta \in \Theta} \ell_N(\theta; x)$$

Definition (Likelihood equations)
Under suitable regularity conditions, a maximum likelihood estimator (MLE) of $\theta$ is defined to be the solution of the first-order conditions (FOC):

$$\frac{\partial \ell_N(\theta; y|x)}{\partial \theta}\bigg|_{\hat{\theta}} = 0 \quad (K \times 1)$$

or

$$\frac{\partial L_N(\theta; y|x)}{\partial \theta}\bigg|_{\hat{\theta}} = 0$$

These conditions are generally called the likelihood or log-likelihood equations.

Notations
The first derivative (gradient) of the (conditional) log-likelihood evaluated at the point $\hat{\theta}$ satisfies:

$$\frac{\partial \ell_N(\theta; y|x)}{\partial \theta}\bigg|_{\hat{\theta}} \equiv \frac{\partial \ell_N(\hat{\theta}; y|x)}{\partial \theta} \equiv g\left(\hat{\theta}; y|x\right) = 0$$

Remark
The log-likelihood equations correspond to a linear/nonlinear system of $K$ equations with $K$ unknown parameters $\theta_1, \ldots, \theta_K$:

$$\frac{\partial \ell_N(\theta; y|x)}{\partial \theta}\bigg|_{\hat{\theta}} = \begin{pmatrix} \frac{\partial \ell_N(\theta; y|x)}{\partial \theta_1}\big|_{\hat{\theta}} \\ \vdots \\ \frac{\partial \ell_N(\theta; y|x)}{\partial \theta_K}\big|_{\hat{\theta}} \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}$$

Definition (Second Order Conditions)
Second order condition (SOC) of the likelihood maximisation problem: the Hessian matrix evaluated at $\hat{\theta}$ must be negative definite:

$$\frac{\partial^2 \ell_N(\theta; y|x)}{\partial \theta \partial \theta^\top}\bigg|_{\hat{\theta}} \text{ is negative definite}$$

or

$$\frac{\partial^2 L_N(\theta; y|x)}{\partial \theta \partial \theta^\top}\bigg|_{\hat{\theta}} \text{ is negative definite}$$

Remark: The Hessian matrix (realisation) is a $K \times K$ matrix:

$$\frac{\partial^2 \ell_N(\theta; y|x)}{\partial \theta \partial \theta^\top} = \begin{pmatrix} \frac{\partial^2 \ell_N(\theta; y|x)}{\partial \theta_1^2} & \frac{\partial^2 \ell_N(\theta; y|x)}{\partial \theta_1 \partial \theta_2} & \cdots & \frac{\partial^2 \ell_N(\theta; y|x)}{\partial \theta_1 \partial \theta_K} \\ \frac{\partial^2 \ell_N(\theta; y|x)}{\partial \theta_2 \partial \theta_1} & \frac{\partial^2 \ell_N(\theta; y|x)}{\partial \theta_2^2} & \cdots & \vdots \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2 \ell_N(\theta; y|x)}{\partial \theta_K \partial \theta_1} & \cdots & \cdots & \frac{\partial^2 \ell_N(\theta; y|x)}{\partial \theta_K^2} \end{pmatrix}$$

Reminders

A negative definite matrix is a symmetric (Hermitian, if there are complex entries) matrix all of whose eigenvalues are negative.

The $n \times n$ matrix $M$ is said to be negative definite if:

$$x^\top M x < 0$$

for all non-zero $x$ in $\mathbb{R}^n$.

Example (MLE problem with one parameter) Let us consider a real-valued random variable X with a pdf given by:

$$f_X(x; \sigma^2) = \frac{x}{\sigma^2}\exp\left(-\frac{x^2}{2\sigma^2}\right) \quad \forall x \in [0, +\infty)$$

where $\sigma^2$ is an unknown parameter. Let us consider a sample $\{X_1, \ldots, X_N\}$ of i.i.d. random variables with the same arbitrary distribution as $X$. Problem: What is the maximum likelihood estimator (MLE) of $\sigma^2$?

Solution: We have:

$$\ln f_X(x; \sigma^2) = -\frac{x^2}{2\sigma^2} + \ln(x) - \ln(\sigma^2)$$

So, the log-likelihood of the sample $\{x_1, \ldots, x_N\}$ is:

$$\ell_N(\sigma^2; x) = \sum_{i=1}^N \ln f_X(x_i; \sigma^2) = -\frac{1}{2\sigma^2}\sum_{i=1}^N x_i^2 + \sum_{i=1}^N \ln(x_i) - N\ln(\sigma^2)$$

Solution (cont'd): The maximum likelihood estimator $\hat{\sigma}^2$ of $\sigma^2 \in \mathbb{R}^+$ is a solution to the maximization problem:

$$\hat{\sigma}^2 = \arg\max_{\sigma^2 \in \mathbb{R}^+} \ell_N(\sigma^2; x) = \arg\max_{\sigma^2 \in \mathbb{R}^+} \left(-\frac{1}{2\sigma^2}\sum_{i=1}^N x_i^2 + \sum_{i=1}^N \ln(x_i) - N\ln(\sigma^2)\right)$$

$$\frac{\partial \ell_N(\sigma^2; x)}{\partial \sigma^2} = \frac{1}{2\sigma^4}\sum_{i=1}^N x_i^2 - \frac{N}{\sigma^2}$$

FOC (log-likelihood equation):

$$\frac{\partial \ell_N(\sigma^2; x)}{\partial \sigma^2}\bigg|_{\hat{\sigma}^2} = \frac{1}{2\hat{\sigma}^4}\sum_{i=1}^N x_i^2 - \frac{N}{\hat{\sigma}^2} = 0 \iff \hat{\sigma}^2 = \frac{1}{2N}\sum_{i=1}^N x_i^2$$

Solution (cont'd): Check that $\hat{\sigma}^2$ is a maximum:

$$\frac{\partial \ell_N(\sigma^2; x)}{\partial \sigma^2} = \frac{1}{2\sigma^4}\sum_{i=1}^N x_i^2 - \frac{N}{\sigma^2} \qquad \frac{\partial^2 \ell_N(\sigma^2; x)}{\partial \sigma^4} = -\frac{1}{\sigma^6}\sum_{i=1}^N x_i^2 + \frac{N}{\sigma^4}$$

SOC:

$$\frac{\partial^2 \ell_N(\sigma^2; x)}{\partial \sigma^4}\bigg|_{\hat{\sigma}^2} = -\frac{1}{\hat{\sigma}^6}\sum_{i=1}^N x_i^2 + \frac{N}{\hat{\sigma}^4} = -\frac{2N\hat{\sigma}^2}{\hat{\sigma}^6} + \frac{N}{\hat{\sigma}^4} = -\frac{N}{\hat{\sigma}^4} < 0$$

since $\hat{\sigma}^2 = \frac{1}{2N}\sum_{i=1}^N x_i^2$.

Conclusion: The maximum likelihood estimator (MLE) of the parameter $\sigma^2$ is defined by:

$$\hat{\sigma}^2 = \frac{1}{2N}\sum_{i=1}^N X_i^2$$

The maximum likelihood estimate of the parameter $\sigma^2$ is equal to:

$$\hat{\sigma}^2(x) = \frac{1}{2N}\sum_{i=1}^N x_i^2$$
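A numerical sanity check of this example (a minimal sketch; the pdf above is a Rayleigh density, and numpy's Rayleigh generator is parameterised by the scale $\sigma = \sqrt{\sigma^2}$):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
sigma2_true = 4.0
# numpy's Rayleigh pdf is (x / s^2) exp(-x^2 / (2 s^2)) with scale s = sqrt(sigma2)
x = rng.rayleigh(scale=np.sqrt(sigma2_true), size=1000)

def neg_loglik(s2):
    # minus l_N(sigma2; x) = (1/(2 s2)) sum x_i^2 - sum ln(x_i) + N ln(s2)
    return x @ x / (2 * s2) - np.log(x).sum() + x.size * np.log(s2)

print((x @ x) / (2 * x.size))  # closed-form MLE: sum(x_i^2) / (2N)
print(minimize_scalar(neg_loglik, bounds=(0.1, 20.0), method="bounded").x)
```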


Example (Sample of normal variables)
We consider a sample $\{Y_1, \ldots, Y_N\}$ $\mathcal{N}$.i.d. $(m, \sigma^2)$. Problem: what are the MLEs of $m$ and $\sigma^2$?
Solution: Let us define $\theta = (m\ \ \sigma^2)^\top$.

$$\hat{\theta} = \arg\max_{m \in \mathbb{R},\, \sigma^2 \in \mathbb{R}^+} \ell_N(\theta; y)$$

with

$$\ell_N(\theta; y) = -\frac{N}{2}\ln\sigma^2 - \frac{N}{2}\ln(2\pi) - \frac{1}{2\sigma^2}\sum_{i=1}^N (y_i - m)^2$$

Solution (cont’d):

$$\ell_N(\theta; y) = -\frac{N}{2}\ln\sigma^2 - \frac{N}{2}\ln(2\pi) - \frac{1}{2\sigma^2}\sum_{i=1}^N (y_i - m)^2$$

The first derivative of the log-likelihood function is defined by:

$$\frac{\partial \ell_N(\theta; y)}{\partial \theta} = \begin{pmatrix} \frac{\partial \ell_N(\theta; y)}{\partial m} \\ \frac{\partial \ell_N(\theta; y)}{\partial \sigma^2} \end{pmatrix}$$

$$\frac{\partial \ell_N(\theta; y)}{\partial m} = \frac{1}{\sigma^2}\sum_{i=1}^N (y_i - m) \qquad \frac{\partial \ell_N(\theta; y)}{\partial \sigma^2} = -\frac{N}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^N (y_i - m)^2$$

Solution (cont’d): FOC (log-likelihood equations)

$$\frac{\partial \ell_N(\theta; y)}{\partial \theta}\bigg|_{\hat{\theta}} = \begin{pmatrix} \frac{1}{\hat{\sigma}^2}\sum_{i=1}^N (y_i - \hat{m}) \\ -\frac{N}{2\hat{\sigma}^2} + \frac{1}{2\hat{\sigma}^4}\sum_{i=1}^N (y_i - \hat{m})^2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

So, the MLE corresponds to the empirical mean and variance:

$$\hat{\theta} = \begin{pmatrix} \hat{m} \\ \hat{\sigma}^2 \end{pmatrix} \quad \text{with} \quad \hat{m} = \frac{1}{N}\sum_{i=1}^N Y_i \qquad \hat{\sigma}^2 = \frac{1}{N}\sum_{i=1}^N \left(Y_i - \overline{Y}_N\right)^2$$

Solution (cont’d):

$$\frac{\partial \ell_N(\theta; y)}{\partial m} = \frac{1}{\sigma^2}\sum_{i=1}^N (y_i - m) \qquad \frac{\partial \ell_N(\theta; y)}{\partial \sigma^2} = -\frac{N}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^N (y_i - m)^2$$

The Hessian matrix (realization) is:

$$\frac{\partial^2 \ell_N(\theta; y)}{\partial \theta \partial \theta^\top} = \begin{pmatrix} \frac{\partial^2 \ell_N(\theta; y)}{\partial m^2} & \frac{\partial^2 \ell_N(\theta; y)}{\partial m \partial \sigma^2} \\ \frac{\partial^2 \ell_N(\theta; y)}{\partial \sigma^2 \partial m} & \frac{\partial^2 \ell_N(\theta; y)}{\partial \sigma^4} \end{pmatrix} = \begin{pmatrix} -\frac{N}{\sigma^2} & -\frac{1}{\sigma^4}\sum_{i=1}^N (y_i - m) \\ -\frac{1}{\sigma^4}\sum_{i=1}^N (y_i - m) & \frac{N}{2\sigma^4} - \frac{1}{\sigma^6}\sum_{i=1}^N (y_i - m)^2 \end{pmatrix}$$

Solution (cont’d): SOC

$$\frac{\partial^2 \ell_N(\theta; y)}{\partial \theta \partial \theta^\top}\bigg|_{\hat{\theta}} = \begin{pmatrix} -\frac{N}{\hat{\sigma}^2} & -\frac{1}{\hat{\sigma}^4}\sum_{i=1}^N (y_i - \hat{m}) \\ -\frac{1}{\hat{\sigma}^4}\sum_{i=1}^N (y_i - \hat{m}) & \frac{N}{2\hat{\sigma}^4} - \frac{1}{\hat{\sigma}^6}\sum_{i=1}^N (y_i - \hat{m})^2 \end{pmatrix} = \begin{pmatrix} -\frac{N}{\hat{\sigma}^2} & 0 \\ 0 & -\frac{N}{2\hat{\sigma}^4} \end{pmatrix}$$

since $N\hat{m} = \sum_{i=1}^N y_i$ and $N\hat{\sigma}^2 = \sum_{i=1}^N (y_i - \hat{m})^2$. Hence:

$$\frac{\partial^2 \ell_N(\theta; y)}{\partial \theta \partial \theta^\top}\bigg|_{\hat{\theta}} = \begin{pmatrix} -\frac{N}{\hat{\sigma}^2} & 0 \\ 0 & -\frac{N}{2\hat{\sigma}^4} \end{pmatrix} \text{ is negative definite}$$

Example (Linear Regression Model) Consider the linear regression model:

$$y_i = x_i^\top\beta + \varepsilon_i$$

where $x_i = (x_{i1} \ldots x_{iK})^\top$ and $\beta = (\beta_1 \ldots \beta_K)^\top$ are $K \times 1$ vectors. We assume that the $\varepsilon_i$ are $\mathcal{N}$.i.d. $(0, \sigma^2)$. Then, the (conditional) log-likelihood of the observations $\{y_i, x_i\}_{i=1}^N$ is given by:

$$\ell_N(\theta; y|x) = -\frac{N}{2}\ln\sigma^2 - \frac{N}{2}\ln(2\pi) - \frac{1}{2\sigma^2}\sum_{i=1}^N \left(y_i - x_i^\top\beta\right)^2$$

where $\theta = (\beta^\top\ \ \sigma^2)^\top$ is a $(K + 1) \times 1$ vector. Question: what are the MLEs of $\beta$ and $\sigma^2$?

Notation 1: The derivative of a scalar $y$ with respect to a $K \times 1$ vector $x = (x_1 \ldots x_K)^\top$ is the $K \times 1$ vector:

$$\frac{\partial y}{\partial x} = \begin{pmatrix} \frac{\partial y}{\partial x_1} \\ \vdots \\ \frac{\partial y}{\partial x_K} \end{pmatrix}$$

Notation 2: If $x$ and $\beta$ are two $K \times 1$ vectors, then:

$$\frac{\partial \left(x^\top\beta\right)}{\partial \beta} = x \quad (K \times 1)$$

Solution

$$\hat{\theta} = \arg\max_{\beta \in \mathbb{R}^K,\, \sigma^2 \in \mathbb{R}^+} \left(-\frac{N}{2}\ln\sigma^2 - \frac{N}{2}\ln(2\pi) - \frac{1}{2\sigma^2}\sum_{i=1}^N \left(y_i - x_i^\top\beta\right)^2\right)$$

The first derivative of the log-likelihood function is a $(K + 1) \times 1$ vector:

$$\frac{\partial \ell_N(\theta; y|x)}{\partial \theta} = \begin{pmatrix} \frac{\partial \ell_N(\theta; y|x)}{\partial \beta} \\ \frac{\partial \ell_N(\theta; y|x)}{\partial \sigma^2} \end{pmatrix} = \begin{pmatrix} \frac{\partial \ell_N(\theta; y|x)}{\partial \beta_1} \\ \vdots \\ \frac{\partial \ell_N(\theta; y|x)}{\partial \beta_K} \\ \frac{\partial \ell_N(\theta; y|x)}{\partial \sigma^2} \end{pmatrix}$$

Solution (cont’d)

The components of this first derivative are:

$$\frac{\partial \ell_N(\theta; y|x)}{\partial \beta} = \frac{1}{\sigma^2}\sum_{i=1}^N x_i\left(y_i - x_i^\top\beta\right) \quad (K \times 1)$$

$$\frac{\partial \ell_N(\theta; y|x)}{\partial \sigma^2} = -\frac{N}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^N \left(y_i - x_i^\top\beta\right)^2 \quad (1 \times 1)$$

Solution (cont’d): FOC (log-likelihood equations)

$$\frac{\partial \ell_N(\theta; y|x)}{\partial \theta}\bigg|_{\hat{\theta}} = \begin{pmatrix} \frac{1}{\hat{\sigma}^2}\sum_{i=1}^N x_i\left(y_i - x_i^\top\hat{\beta}\right) \\ -\frac{N}{2\hat{\sigma}^2} + \frac{1}{2\hat{\sigma}^4}\sum_{i=1}^N \left(y_i - x_i^\top\hat{\beta}\right)^2 \end{pmatrix} = \begin{pmatrix} 0_K \\ 0 \end{pmatrix}$$

So, the MLE is defined by:

$$\hat{\theta} = \begin{pmatrix} \hat{\beta} \\ \hat{\sigma}^2 \end{pmatrix} \quad \text{with} \quad \hat{\beta} = \left(\sum_{i=1}^N X_i X_i^\top\right)^{-1}\left(\sum_{i=1}^N X_i Y_i\right) \qquad \hat{\sigma}^2 = \frac{1}{N}\sum_{i=1}^N \left(Y_i - X_i^\top\hat{\beta}\right)^2$$
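These closed forms are easy to verify numerically. A minimal sketch with simulated data (all values are illustrative assumptions); note that $\hat{\beta}$ coincides with the OLS estimator, while $\hat{\sigma}^2$ divides by $N$ rather than $N - K$:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 500
X = np.column_stack([np.ones(N), rng.normal(size=(N, 2))])  # K = 3 regressors
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=1.5, size=N)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # (sum x_i x_i')^{-1} sum x_i y_i
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / N                # MLE of sigma^2 (divides by N)
print(beta_hat, sigma2_hat)
```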


Solution (cont'd): The Hessian is a $(K + 1) \times (K + 1)$ matrix:

$$\frac{\partial^2 \ell_N(\theta; y|x)}{\partial \theta \partial \theta^\top} = \begin{pmatrix} \frac{\partial^2 \ell_N(\theta; y|x)}{\partial \beta \partial \beta^\top} & \frac{\partial^2 \ell_N(\theta; y|x)}{\partial \beta \partial \sigma^2} \\ \frac{\partial^2 \ell_N(\theta; y|x)}{\partial \sigma^2 \partial \beta^\top} & \frac{\partial^2 \ell_N(\theta; y|x)}{\partial \sigma^4} \end{pmatrix}$$

with blocks of dimensions $K \times K$, $K \times 1$, $1 \times K$, and $1 \times 1$, respectively.

Solution (cont'd):

$$\frac{\partial \ell_N(\theta; y|x)}{\partial \beta} = \frac{1}{\sigma^2}\sum_{i=1}^N x_i\left(y_i - x_i^\top\beta\right) \qquad \frac{\partial \ell_N(\theta; y|x)}{\partial \sigma^2} = -\frac{N}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^N \left(y_i - x_i^\top\beta\right)^2$$

So, the Hessian matrix (realization) is equal to:

$$\frac{\partial^2 \ell_N(\theta; y|x)}{\partial \theta \partial \theta^\top} = \begin{pmatrix} -\frac{1}{\sigma^2}\sum_{i=1}^N x_i x_i^\top & -\frac{1}{\sigma^4}\sum_{i=1}^N x_i\left(y_i - x_i^\top\beta\right) \\ -\frac{1}{\sigma^4}\sum_{i=1}^N x_i^\top\left(y_i - x_i^\top\beta\right) & \frac{N}{2\sigma^4} - \frac{1}{\sigma^6}\sum_{i=1}^N \left(y_i - x_i^\top\beta\right)^2 \end{pmatrix}$$

Solution (cont'd): Second Order Conditions (SOC)

$$\frac{\partial^2 \ell_N(\theta)}{\partial \theta \partial \theta^\top}\bigg|_{\hat{\theta}} = \begin{pmatrix} -\frac{1}{\hat{\sigma}^2}\sum_{i=1}^N x_i x_i^\top & -\frac{1}{\hat{\sigma}^4}\sum_{i=1}^N x_i\left(y_i - x_i^\top\hat{\beta}\right) \\ -\frac{1}{\hat{\sigma}^4}\sum_{i=1}^N x_i^\top\left(y_i - x_i^\top\hat{\beta}\right) & \frac{N}{2\hat{\sigma}^4} - \frac{1}{\hat{\sigma}^6}\sum_{i=1}^N \left(y_i - x_i^\top\hat{\beta}\right)^2 \end{pmatrix}$$

Since $\sum_{i=1}^N x_i\left(y_i - x_i^\top\hat{\beta}\right) = 0$ (FOC) and $N\hat{\sigma}^2 = \sum_{i=1}^N \left(y_i - x_i^\top\hat{\beta}\right)^2$:

$$\frac{\partial^2 \ell_N(\theta)}{\partial \theta \partial \theta^\top}\bigg|_{\hat{\theta}} = \begin{pmatrix} -\frac{1}{\hat{\sigma}^2}\sum_{i=1}^N x_i x_i^\top & 0 \\ 0 & -\frac{N}{2\hat{\sigma}^4} \end{pmatrix}$$

Solution (cont'd): Second Order Conditions (SOC).

$$\frac{\partial^2 \ell_N(\theta; y|x)}{\partial \theta \partial \theta^\top}\bigg|_{\hat{\theta}} = \begin{pmatrix} -\frac{1}{\hat{\sigma}^2}\sum_{i=1}^N x_i x_i^\top & 0 \\ 0 & -\frac{N}{2\hat{\sigma}^4} \end{pmatrix} \text{ is negative definite}$$

Since $\sum_{i=1}^N x_i x_i^\top$ is positive definite (assumption), the Hessian matrix is negative definite and $\hat{\theta}$ is the MLE of the parameters $\theta$.

Theorem (Equivariance or Invariance Principle)
Under suitable regularity conditions, the maximum likelihood estimator of a function $g(.)$ of the parameter $\theta$ is $g\left(\hat{\theta}\right)$, where $\hat{\theta}$ is the maximum likelihood estimator of $\theta$.

Invariance Principle

The MLE is invariant to one-to-one transformations of $\theta$. Any transformation that is not one-to-one either renders the model inestimable if it is one-to-many or imposes restrictions if it is many-to-one.

For the practitioner, this result is extremely useful. For example, when a parameter appears in a likelihood function in the form $1/\theta$, it is usually worthwhile to reparameterize the model in terms of $\gamma = 1/\theta$. Example: Olsen (1978) and the reparametrisation of the likelihood function of the Tobit model.


Example (Invariance Principle)
Suppose that the normal log-likelihood in the previous example is parameterized in terms of the precision parameter, $\gamma^2 = 1/\sigma^2$. The log-likelihood

$$\ell_N(m, \sigma^2; y) = -\frac{N}{2}\ln\sigma^2 - \frac{N}{2}\ln(2\pi) - \frac{1}{2\sigma^2}\sum_{i=1}^N (y_i - m)^2$$

becomes

$$\ell_N(m, \gamma^2; y) = \frac{N}{2}\ln\gamma^2 - \frac{N}{2}\ln(2\pi) - \frac{\gamma^2}{2}\sum_{i=1}^N (y_i - m)^2$$

Example (Invariance Principle, cont'd)
The MLE for $m$ is clearly still $\overline{Y}_N$. But the likelihood equation for $\gamma^2$ is now:

$$\frac{\partial \ell_N(m, \gamma^2; y)}{\partial \gamma^2} = \frac{N}{2\gamma^2} - \frac{1}{2}\sum_{i=1}^N (y_i - m)^2$$

and the MLE for $\gamma^2$ is now defined by:

$$\hat{\gamma}^2 = \frac{N}{\sum_{i=1}^N (Y_i - \hat{m})^2} = \frac{1}{\hat{\sigma}^2}$$

as expected.
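A small numerical illustration of the invariance principle (a sketch with simulated data; the constant $-\frac{N}{2}\ln(2\pi)$ is dropped since it does not affect the maximizer):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
y = rng.normal(loc=2.0, scale=3.0, size=1000)
m_hat = y.mean()
ssr = ((y - m_hat) ** 2).sum()

def neg_loglik_gamma2(g2):
    # minus l_N(m_hat, gamma2; y), up to the additive constant
    return -(y.size / 2 * np.log(g2) - g2 / 2 * ssr)

g2_hat = minimize_scalar(neg_loglik_gamma2, bounds=(1e-4, 5.0), method="bounded").x
sigma2_hat = ssr / y.size
print(g2_hat, 1 / sigma2_hat)  # approximately equal: gamma2_hat = 1 / sigma2_hat
```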

Key Concepts

1 Identification.

2 Maximum likelihood estimator.

3 Maximum likelihood estimate.

4 Log-likelihood equations.

5 Equivariance or invariance principle.

6 Gradient Vector and Hessian Matrix (deterministic elements).

Section 5

Score, Hessian and Fisher Information


Objectives We aim at introducing the following concepts:

1 Score vector and gradient

2 Hessian matrix

3 Fisher information matrix of the sample

4 Fisher information matrix of one observation for marginal and conditional distributions

5 Average Fisher information matrix of one observation


Definition (Score Vector)
The (conditional) score vector is a $K \times 1$ vector defined by:

$$s_N(\theta; Y|x) \equiv s(\theta) = \frac{\partial \ell_N(\theta; Y|x)}{\partial \theta}$$

Remarks:

The score $s_N(\theta; Y|x)$ is a vector of random elements since it depends on the random variables $Y_1, \ldots, Y_N$.

For an unconditional log-likelihood $\ell_N(\theta; x)$, the score is denoted by:

$$s_N(\theta; X) = \partial \ell_N(\theta; X) / \partial \theta$$

The score is a $K \times 1$ vector such that:

$$s_N(\theta; Y|x) = \begin{pmatrix} \frac{\partial \ell_N(\theta; Y|x)}{\partial \theta_1} \\ \vdots \\ \frac{\partial \ell_N(\theta; Y|x)}{\partial \theta_K} \end{pmatrix}$$

Corollary
By definition, the score vector satisfies:

$$E_\theta(s_N(\theta; Y|x)) = 0_K$$

where $E_\theta$ denotes the expectation with respect to the conditional distribution $Y|X = x$.

Remark: If we consider a variable $X$ with a pdf $f_X(x; \theta)$, $\forall x \in \mathbb{R}$, then $E_\theta(.)$ means the expectation with respect to the distribution of $X$:

$$E_\theta(s_N(\theta; X)) = \int_{-\infty}^{\infty} s_N(\theta; x)\, f_X(x; \theta)\, dx = 0$$

Remark: If we consider a variable $Y$ with a conditional pdf $f_{Y|x}(y; \theta)$, $\forall y \in \mathbb{R}$, then $E_\theta(.)$ means the expectation with respect to the distribution of $Y|X = x$:

$$E_\theta(s_N(\theta; Y|x)) = \int_{-\infty}^{\infty} s_N(\theta; y|x)\, f_{Y|x}(y; \theta)\, dy = 0$$

Proof.

If we consider a variable $X$ with a pdf $f_X(x; \theta)$, $\forall x \in \mathbb{R}$, then:

$$E_\theta(s_N(\theta; X)) = \int s_N(\theta; x)\, f_X(x; \theta)\, dx = N\int \frac{\partial \ln f_X(x; \theta)}{\partial \theta} f_X(x; \theta)\, dx = N\int \frac{1}{f_X(x; \theta)}\frac{\partial f_X(x; \theta)}{\partial \theta} f_X(x; \theta)\, dx = N\,\frac{\partial}{\partial \theta}\int f_X(x; \theta)\, dx = N\,\frac{\partial 1}{\partial \theta} = 0$$

where the interchange of differentiation and integration is justified by the regularity conditions.

Example (Exponential Distribution)

Suppose that $D_1, D_2, \ldots, D_N$ are i.i.d. positive random variables with $D_i \sim \text{Exp}(\theta)$ and $E(D_i) = \theta > 0$:

$$f_D(d; \theta) = \frac{1}{\theta}\exp\left(-\frac{d}{\theta}\right), \quad \forall d \in \mathbb{R}^+$$

$$\ell_N(\theta; d) = -N\ln(\theta) - \frac{1}{\theta}\sum_{i=1}^N d_i$$

The score (scalar) is equal to:

$$s_N(\theta; D) = -\frac{N}{\theta} + \frac{1}{\theta^2}\sum_{i=1}^N D_i$$

Example (Exponential Distribution, cont'd)
By definition:

$$E_\theta(s_N(\theta; D)) = E_\theta\left(-\frac{N}{\theta} + \frac{1}{\theta^2}\sum_{i=1}^N D_i\right) = -\frac{N}{\theta} + \frac{1}{\theta^2}\sum_{i=1}^N E_\theta(D_i) = -\frac{N}{\theta} + \frac{N\theta}{\theta^2} = 0$$
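A Monte Carlo check of this property (a sketch; the true $\theta$, sample size, and number of replications are arbitrary choices): the average of the score over many samples drawn at the true parameter should be close to zero.

```python
import numpy as np

rng = np.random.default_rng(6)
theta, N, reps = 3.0, 50, 20_000

D = rng.exponential(scale=theta, size=(reps, N))  # reps i.i.d. samples of size N
scores = -N / theta + D.sum(axis=1) / theta**2    # s_N(theta; D) for each sample
print(scores.mean())  # ~ 0: E_theta[ s_N(theta; D) ] = 0 at the true theta
```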


Example (Linear Regression Model)

Let us consider the previous linear regression model $y_i = x_i^\top\beta + \varepsilon_i$. The score is defined by:

$$s_N(\theta; Y|x) = \begin{pmatrix} \frac{1}{\sigma^2}\sum_{i=1}^N x_i\left(Y_i - x_i^\top\beta\right) \\ -\frac{N}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^N \left(Y_i - x_i^\top\beta\right)^2 \end{pmatrix}$$

Then, we have:

$$E_\theta(s_N(\theta; Y|x)) = E_\theta\begin{pmatrix} \frac{1}{\sigma^2}\sum_{i=1}^N x_i\left(Y_i - x_i^\top\beta\right) \\ -\frac{N}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^N \left(Y_i - x_i^\top\beta\right)^2 \end{pmatrix}$$

Example (Linear Regression Model, cont’d)

We know that $E_\theta(Y_i|x) = x_i^\top\beta$. So, we have:

$$E_\theta\left(\frac{1}{\sigma^2}\sum_{i=1}^N x_i\left(Y_i - x_i^\top\beta\right)\right) = \frac{1}{\sigma^2}\sum_{i=1}^N x_i\left(E_\theta(Y_i|x) - x_i^\top\beta\right) = \frac{1}{\sigma^2}\sum_{i=1}^N x_i\left(x_i^\top\beta - x_i^\top\beta\right) = 0_K$$

Example (Linear Regression Model, cont’d)

$$E_\theta\left(-\frac{N}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^N \left(Y_i - x_i^\top\beta\right)^2\right) = -\frac{N}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^N E_\theta\left(\left(Y_i - x_i^\top\beta\right)^2\right) = -\frac{N}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^N E_\theta\left(\left(Y_i - E_\theta(Y_i|x)\right)^2\right) = -\frac{N}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^N V_\theta(Y_i|x) = -\frac{N}{2\sigma^2} + \frac{N\sigma^2}{2\sigma^4} = 0$$

Definition (Gradient)
The gradient vector associated with the log-likelihood function is a $K \times 1$ vector defined by:

$$g_N(\theta; y|x) \equiv g(\theta) = \frac{\partial \ell_N(\theta; y|x)}{\partial \theta}$$

Remarks

1 The gradient $g_N(\theta; y|x)$ is a vector of deterministic entries since it depends on the realisation $y_1, \ldots, y_N$.

2 For an unconditional log-likelihood, the gradient is defined by:

$$g_N(\theta; x) = \partial \ell_N(\theta; x) / \partial \theta$$

3 The gradient is a $K \times 1$ vector such that:

$$g_N(\theta; y|x) = \begin{pmatrix} \frac{\partial \ell_N(\theta; y|x)}{\partial \theta_1} \\ \vdots \\ \frac{\partial \ell_N(\theta; y|x)}{\partial \theta_K} \end{pmatrix}$$

Corollary
By definition of the FOC, the gradient vector satisfies:

$$g_N\left(\hat{\theta}; y|x\right) = 0_K$$

where $\hat{\theta} = \hat{\theta}(x)$ is the maximum likelihood estimate of $\theta$.

Example (Linear regression model)
In the linear regression model, the gradient associated with the log-likelihood function is defined to be:

$$g_N(\theta; y|x) = \begin{pmatrix} \frac{1}{\sigma^2}\sum_{i=1}^N x_i\left(y_i - x_i^\top\beta\right) \\ -\frac{N}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^N \left(y_i - x_i^\top\beta\right)^2 \end{pmatrix}$$

Given the FOC, we have:

$$g_N\left(\hat{\theta}; y|x\right) = \begin{pmatrix} \frac{1}{\hat{\sigma}^2}\sum_{i=1}^N x_i\left(y_i - x_i^\top\hat{\beta}\right) \\ -\frac{N}{2\hat{\sigma}^2} + \frac{1}{2\hat{\sigma}^4}\sum_{i=1}^N \left(y_i - x_i^\top\hat{\beta}\right)^2 \end{pmatrix} = \begin{pmatrix} 0_K \\ 0 \end{pmatrix}$$

Definition (Hessian Matrix)
The Hessian matrix (deterministic) is defined to be:

$$H_N(\theta; y|x) = \frac{\partial^2 \ell_N(\theta; y|x)}{\partial \theta \partial \theta^\top}$$

Remark: The matrix $\partial^2 \ell_N(\theta; Y|x) / \partial \theta \partial \theta^\top$ is also called the Hessian matrix, but do not confuse the two matrices: $\partial^2 \ell_N(\theta; Y|x) / \partial \theta \partial \theta^\top$ is random, while $\partial^2 \ell_N(\theta; y|x) / \partial \theta \partial \theta^\top$ is deterministic.

                    Random Variable                                                        Constant (realisation)
Score vector        $\partial \ell_N(\theta; Y|x) / \partial \theta$                       Gradient vector $\partial \ell_N(\theta; y|x) / \partial \theta$
Hessian matrix      $\partial^2 \ell_N(\theta; Y|x) / \partial \theta \partial \theta^\top$   Hessian matrix $\partial^2 \ell_N(\theta; y|x) / \partial \theta \partial \theta^\top$

Definition (Fisher Information Matrix)
The (conditional) Fisher information matrix associated with the sample $\{Y_1, \ldots, Y_N\}$ is the variance-covariance matrix of the score vector:

$$\mathcal{I}_N(\theta) = V_\theta(s_N(\theta; Y|x)) \quad (K \times K)$$

or equivalently:

$$\mathcal{I}_N(\theta) = V_\theta\left(\frac{\partial \ell_N(\theta; Y|x)}{\partial \theta}\right)$$

where $V_\theta$ means the variance with respect to the conditional distribution $Y|X$.

Corollary

Since by definition $E_\theta(s_N(\theta; Y|x)) = 0$, an alternative definition of the Fisher information matrix of the sample $\{Y_1, \ldots, Y_N\}$ is:

$$\mathcal{I}_N(\theta) = E_\theta\left(s_N(\theta; Y|x)\, s_N(\theta; Y|x)^\top\right) \quad (K \times K)$$

Definition (Fisher Information Matrix)

The (conditional) Fisher information matrix of the sample $\{Y_1, \ldots, Y_N\}$ is also given by:

$$\mathcal{I}_N(\theta) = E_\theta\left(-\frac{\partial^2 \ell_N(\theta; Y|x)}{\partial \theta \partial \theta^\top}\right) = E_\theta(-H_N(\theta; Y|x))$$

Definition (Fisher Information Matrix, summary)

The (conditional) Fisher information matrix of the sample $\{Y_1, \ldots, Y_N\}$ can alternatively be defined by:

$$\mathcal{I}_N(\theta) = V_\theta(s_N(\theta; Y|x))$$

$$\mathcal{I}_N(\theta) = E_\theta\left(s_N(\theta; Y|x)\, s_N(\theta; Y|x)^\top\right)$$

$$\mathcal{I}_N(\theta) = E_\theta(-H_N(\theta; Y|x))$$

where $E_\theta$ and $V_\theta$ denote the mean and the variance with respect to the conditional distribution $Y|X$, and where $s_N(\theta; Y|x)$ denotes the score vector and $H_N(\theta; Y|x)$ the Hessian matrix.

Definition (Fisher Information Matrix, summary)

The (conditional) Fisher information matrix of the sample $\{Y_1, \ldots, Y_N\}$ can alternatively be defined by:

$$\mathcal{I}_N(\theta) = V_\theta\left(\frac{\partial \ell_N(\theta; Y|x)}{\partial \theta}\right)$$

$$\mathcal{I}_N(\theta) = E_\theta\left(\frac{\partial \ell_N(\theta; Y|x)}{\partial \theta}\, \frac{\partial \ell_N(\theta; Y|x)}{\partial \theta}^\top\right)$$

$$\mathcal{I}_N(\theta) = E_\theta\left(-\frac{\partial^2 \ell_N(\theta; Y|x)}{\partial \theta \partial \theta^\top}\right)$$

where $E_\theta$ and $V_\theta$ denote the mean and the variance with respect to the conditional distribution $Y|X$.

Remarks

1 There are three equivalent definitions of the Fisher information matrix, and as a consequence three different consistent estimates of the Fisher information matrix (see later).

2 The Fisher information matrix associated with the sample $\{Y_1, \ldots, Y_N\}$ can also be defined from the Fisher information matrix for the observation $i$.

Definition (Fisher Information Matrix)
The (conditional) Fisher information matrix associated with the $i$th individual can be defined by:

$$\mathcal{I}_i(\theta) = V_\theta\left(\frac{\partial \ell_i(\theta; Y_i|x_i)}{\partial \theta}\right)$$

$$\mathcal{I}_i(\theta) = E_\theta\left(\frac{\partial \ell_i(\theta; Y_i|x_i)}{\partial \theta}\, \frac{\partial \ell_i(\theta; Y_i|x_i)}{\partial \theta}^\top\right)$$

$$\mathcal{I}_i(\theta) = E_\theta\left(-\frac{\partial^2 \ell_i(\theta; Y_i|x_i)}{\partial \theta \partial \theta^\top}\right)$$

where $E_\theta$ and $V_\theta$ denote the expectation and variance with respect to the true conditional distribution $Y_i|X_i$.

Definition (Fisher Information Matrix)
The (conditional) Fisher information matrix associated with the $i$th individual can alternatively be defined by:

$$\mathcal{I}_i(\theta) = V_\theta(s_i(\theta; Y_i|x_i))$$

$$\mathcal{I}_i(\theta) = E_\theta\left(s_i(\theta; Y_i|x_i)\, s_i(\theta; Y_i|x_i)^\top\right)$$

$$\mathcal{I}_i(\theta) = E_\theta(-H_i(\theta; Y_i|x_i))$$

where $E_\theta$ and $V_\theta$ denote the expectation and variance with respect to the true conditional distribution $Y_i|X_i$.

Theorem

The Fisher information matrix associated with the sample $\{Y_1, \ldots, Y_N\}$ is equal to the sum of the individual Fisher information matrices:

$$\mathcal{I}_N(\theta) = \sum_{i=1}^N \mathcal{I}_i(\theta)$$

Remark:

1 In the case of a marginal log-likelihood, the Fisher information matrix associated with the variable $X_i$ is the same for all observations $i$:

$$\mathcal{I}_i(\theta) = \mathcal{I}(\theta) \quad \forall i = 1, \ldots, N$$

2 In the case of a conditional log-likelihood, the Fisher information matrix associated with the variable $Y_i$ given $X_i = x_i$ depends on the observation $i$:

$$\mathcal{I}_i(\theta) \neq \mathcal{I}_j(\theta) \quad \forall i \neq j$$

Example (Exponential marginal distribution)

Suppose that $D_1, D_2, \ldots, D_N$ are i.i.d. positive random variables with $D_i \sim \text{Exp}(\theta)$:

$$E(D_i) = \theta \qquad V(D_i) = \theta^2$$

$$f_D(d; \theta) = \frac{1}{\theta}\exp\left(-\frac{d}{\theta}\right), \quad \forall d \in \mathbb{R}^+$$

$$\ell_i(\theta; d_i) = -\ln(\theta) - \frac{d_i}{\theta}$$

Question: what is the Fisher information number (scalar) associated with $D_i$?

Solution

$$\ell_i(\theta; d_i) = -\ln(\theta) - \frac{d_i}{\theta}$$

The score of the observation $D_i$ is defined by:

$$s_i(\theta; D_i) = \frac{\partial \ell_i(\theta; D_i)}{\partial \theta} = -\frac{1}{\theta} + \frac{D_i}{\theta^2}$$

Let us use the three definitions of the information quantity $\mathcal{I}_i(\theta)$:

$$\mathcal{I}_i(\theta) = V_\theta(s_i(\theta; D_i)) = E_\theta\left(s_i^2(\theta; D_i)\right) = E_\theta(-H_i(\theta; D_i))$$

Solution, cont'd

$$s_i(\theta; D_i) = -\frac{1}{\theta} + \frac{D_i}{\theta^2}$$

First definition:

$$\mathcal{I}_i(\theta) = V_\theta(s_i(\theta; D_i)) = V_\theta\left(-\frac{1}{\theta} + \frac{D_i}{\theta^2}\right) = \frac{1}{\theta^4}V_\theta(D_i) = \frac{1}{\theta^2}$$

Conclusion: $\mathcal{I}_i(\theta) = \mathcal{I}(\theta)$ does not depend on $i$.

Solution, cont'd

$$s_i(\theta; D_i) = -\frac{1}{\theta} + \frac{D_i}{\theta^2}$$

Second definition:

$$\mathcal{I}_i(\theta) = E_\theta\left(s_i^2(\theta; D_i)\right) = E_\theta\left(\left(-\frac{1}{\theta} + \frac{D_i}{\theta^2}\right)^2\right) = V_\theta\left(-\frac{1}{\theta} + \frac{D_i}{\theta^2}\right) = \frac{1}{\theta^2}$$

since $E_\theta\left(-\frac{1}{\theta} + \frac{D_i}{\theta^2}\right) = 0$.

Conclusion: $\mathcal{I}_i(\theta) = \mathcal{I}(\theta)$ does not depend on $i$.

Solution, cont'd

$$s_i(\theta; D_i) = -\frac{1}{\theta} + \frac{D_i}{\theta^2} \qquad H_i(\theta; D_i) = \frac{\partial^2 \ell_i(\theta; D_i)}{\partial \theta^2} = \frac{1}{\theta^2} - \frac{2D_i}{\theta^3}$$

Third definition:

$$\mathcal{I}_i(\theta) = E_\theta(-H_i(\theta; D_i)) = E_\theta\left(-\frac{1}{\theta^2} + \frac{2D_i}{\theta^3}\right) = -\frac{1}{\theta^2} + \frac{2}{\theta^3}E_\theta(D_i) = -\frac{1}{\theta^2} + \frac{2}{\theta^3}\theta = \frac{1}{\theta^2}$$

Conclusion: $\mathcal{I}_i(\theta) = \mathcal{I}(\theta)$ does not depend on $i$.
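The three definitions can also be checked by simulation (a sketch; one draw of $D_i$ per replication, so each average approximates the individual information $\mathcal{I}_i(\theta) = 1/\theta^2$):

```python
import numpy as np

rng = np.random.default_rng(7)
theta, reps = 3.0, 200_000
D = rng.exponential(scale=theta, size=reps)

s = -1 / theta + D / theta**2         # individual score s_i(theta; D_i)
H = 1 / theta**2 - 2 * D / theta**3   # individual Hessian H_i(theta; D_i)

print(s.var())        # ~ 1/theta^2  (variance of the score)
print((s**2).mean())  # ~ 1/theta^2  (E[s^2])
print((-H).mean())    # ~ 1/theta^2  (E[-H])
print(1 / theta**2)   # analytic value: 0.111...
```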

Example (Linear regression model)
We have shown that:

$$\frac{\partial^2 \ell_i(\theta; Y_i|x_i)}{\partial \theta \partial \theta^\top} = \begin{pmatrix} -\frac{1}{\sigma^2}x_i x_i^\top & -\frac{1}{\sigma^4}x_i\left(Y_i - x_i^\top\beta\right) \\ -\frac{1}{\sigma^4}x_i^\top\left(Y_i - x_i^\top\beta\right) & \frac{1}{2\sigma^4} - \frac{1}{\sigma^6}\left(Y_i - x_i^\top\beta\right)^2 \end{pmatrix}$$

Question: what is the Fisher information matrix associated with the observation $Y_i$?

Solution
The information matrix is then defined by:

$$\mathcal{I}_i(\theta) = E_\theta\left(-\frac{\partial^2 \ell_i(\theta; Y_i|x_i)}{\partial \theta \partial \theta^\top}\right) = E_\theta(-H_i(\theta; Y_i|x_i)) \quad ((K + 1) \times (K + 1))$$

where $E_\theta$ means the expectation with respect to the conditional distribution $Y_i|X_i = x_i$:

$$\mathcal{I}_i(\theta) = \begin{pmatrix} \frac{1}{\sigma^2}x_i x_i^\top & \frac{1}{\sigma^4}x_i\left(E_\theta(Y_i) - x_i^\top\beta\right) \\ \frac{1}{\sigma^4}x_i^\top\left(E_\theta(Y_i) - x_i^\top\beta\right) & -\frac{1}{2\sigma^4} + \frac{1}{\sigma^6}E_\theta\left(\left(Y_i - x_i^\top\beta\right)^2\right) \end{pmatrix}$$

Solution (cont’d)

Given that $E_\theta(Y_i) = x_i^\top\beta$ and $E_\theta\left(\left(Y_i - x_i^\top\beta\right)^2\right) = \sigma^2$, we have:

$$\mathcal{I}_i(\theta) = \begin{pmatrix} \frac{1}{\sigma^2}x_i x_i^\top & 0 \\ 0 & \frac{1}{2\sigma^4} \end{pmatrix}$$

Conclusion: $\mathcal{I}_i(\theta)$ depends on $x_i$ and $\mathcal{I}_i(\theta) \neq \mathcal{I}_j(\theta)$ for $i \neq j$.

Definition (Average Fisher information matrix)
For a conditional model, the average Fisher information matrix for one observation is defined by:

$$\mathcal{I}(\theta) = E_X(\mathcal{I}_i(\theta))$$

where $E_X$ denotes the expectation with respect to $X$ (the conditioning variable).

Summary: For a conditional model (and only for a conditional model), we have:

$$\mathcal{I}(\theta) = E_X\left(V_\theta\left(\frac{\partial \ell_i(\theta; Y_i|X_i)}{\partial \theta}\right)\right) = E_X(V_\theta(s_i(\theta; Y_i|X_i)))$$

$$\mathcal{I}(\theta) = E_X\left(E_\theta\left(\frac{\partial \ell_i(\theta; Y_i|X_i)}{\partial \theta}\, \frac{\partial \ell_i(\theta; Y_i|X_i)}{\partial \theta}^\top\right)\right) = E_X\left(E_\theta\left(s_i(\theta; Y_i|X_i)\, s_i(\theta; Y_i|X_i)^\top\right)\right)$$

$$\mathcal{I}(\theta) = E_X\left(E_\theta\left(-\frac{\partial^2 \ell_i(\theta; Y_i|X_i)}{\partial \theta \partial \theta^\top}\right)\right) = E_X(E_\theta(-H_i(\theta; Y_i|X_i)))$$

Summary: For a marginal distribution, we have:

$$\mathcal{I}(\theta) = V_\theta\left(\frac{\partial \ell_i(\theta; Y_i)}{\partial \theta}\right) = V_\theta(s_i(\theta; Y_i))$$

$$\mathcal{I}(\theta) = E_\theta\left(\frac{\partial \ell_i(\theta; Y_i)}{\partial \theta}\, \frac{\partial \ell_i(\theta; Y_i)}{\partial \theta}^\top\right) = E_\theta\left(s_i(\theta; Y_i)\, s_i(\theta; Y_i)^\top\right)$$

$$\mathcal{I}(\theta) = E_\theta\left(-\frac{\partial^2 \ell_i(\theta; Y_i)}{\partial \theta \partial \theta^\top}\right) = E_\theta(-H_i(\theta; Y_i))$$

Example (Linear Regression Model)
In the linear regression model, the individual Fisher information matrix is equal to:

$$\mathcal{I}_i(\theta) = \begin{pmatrix} \frac{1}{\sigma^2}x_i x_i^\top & 0 \\ 0 & \frac{1}{2\sigma^4} \end{pmatrix}$$

and the average Fisher information matrix for one observation is defined by:

$$\mathcal{I}(\theta) = E_X(\mathcal{I}_i(\theta)) = \begin{pmatrix} \frac{1}{\sigma^2}E_X\left(X_i X_i^\top\right) & 0 \\ 0 & \frac{1}{2\sigma^4} \end{pmatrix}$$

Summary: in order to compute the average information matrix $\mathcal{I}(\theta)$ for one observation:

Step 1: Compute the Hessian matrix or the score vector for one observation:

$$H_i(\theta; Y_i|x_i) = \frac{\partial^2 \ell_i(\theta; Y_i|x_i)}{\partial \theta \partial \theta^\top} \qquad s_i(\theta; Y_i|x_i) = \frac{\partial \ell_i(\theta; Y_i|x_i)}{\partial \theta}$$

Step 2: Take the expectation (or the variance) with respect to the conditional distribution $Y_i|X_i = x_i$:

$$\mathcal{I}_i(\theta) = V_\theta(s_i(\theta; Y_i|x_i)) = E_\theta(-H_i(\theta; Y_i|x_i))$$

Step 3: Take the expectation with respect to the conditioning variable $X$:

$$\mathcal{I}(\theta) = E_X(\mathcal{I}_i(\theta))$$

Theorem
In a model with i.i.d. observations, one has:

$$\mathcal{I}_N(\theta) = N \times \mathcal{I}(\theta)$$

                        Marginal Distribution                                  Conditional Distribution (model)
pdf                     $f_{X_i}(\theta; x_i)$                                 $f_{Y_i|x_i}(\theta; y|x)$
Score vector            $s_i(\theta; X_i)$                                     $s_i(\theta; Y_i|x_i)$
Hessian matrix          $H_i(\theta; X_i)$                                     $H_i(\theta; Y_i|x_i)$
Information matrix      $\mathcal{I}_i(\theta) = \mathcal{I}(\theta)$          $\mathcal{I}_i(\theta)$
Av. infor. matrix       $\mathcal{I}(\theta) = \mathcal{I}_i(\theta)$          $\mathcal{I}(\theta) = E_X(\mathcal{I}_i(\theta))$

with $\mathcal{I}_i(\theta) = V_\theta(s_i(\theta; Y_i|x_i)) = E_\theta\left(s_i(\theta; Y_i|x_i)\, s_i(\theta; Y_i|x_i)^\top\right) = E_\theta(-H_i(\theta; Y_i|x_i))$.

How to estimate the average Fisher Information Matrix?

This matrix is particularly important, since we will see that its inverse corresponds to the asymptotic variance of the MLE.

Let us assume that we have a consistent estimator $\hat{\theta}$ of the parameter $\theta$; how can we estimate the average Fisher information matrix?

Definition (Estimators of the average Fisher Information Matrix)

If $\hat{\theta}$ converges in probability to $\theta_0$ (the true value), then:

$$\widehat{\mathcal{I}}\left(\hat{\theta}\right) = \frac{1}{N}\sum_{i=1}^N \mathcal{I}_i\left(\hat{\theta}\right)$$

$$\widehat{\mathcal{I}}\left(\hat{\theta}\right) = \frac{1}{N}\sum_{i=1}^N \frac{\partial \ell_i(\theta; y_i|x_i)}{\partial \theta}\bigg|_{\hat{\theta}}\, \frac{\partial \ell_i(\theta; y_i|x_i)}{\partial \theta}\bigg|_{\hat{\theta}}^\top$$

$$\widehat{\mathcal{I}}\left(\hat{\theta}\right) = -\frac{1}{N}\sum_{i=1}^N \frac{\partial^2 \ell_i(\theta; y_i|x_i)}{\partial \theta \partial \theta^\top}\bigg|_{\hat{\theta}}$$

are three consistent estimators of the average Fisher information matrix.

1 The first estimator corresponds to the average of the $N$ Fisher information matrices (for $Y_1, \ldots, Y_N$) evaluated at the estimated value $\hat{\theta}$. This estimator will rarely be available in practice.

2 The second estimator corresponds to the average of the outer products of the individual score vectors evaluated at $\hat{\theta}$. It is known as the BHHH (Berndt, Hall, Hall, and Hausman, 1974) estimator or OPG estimator (outer product of gradients):

$$\widehat{\mathcal{I}}\left(\hat{\theta}\right) = \frac{1}{N}\sum_{i=1}^N g_i\left(\hat{\theta}; y_i|x_i\right)\, g_i\left(\hat{\theta}; y_i|x_i\right)^\top$$

3 The third estimator corresponds to the opposite of the average of the Hessian matrices evaluated at $\hat{\theta}$:

$$\widehat{\mathcal{I}}\left(\hat{\theta}\right) = -\frac{1}{N}\sum_{i=1}^N H_i\left(\hat{\theta}; y_i|x_i\right)$$

Problem
These three estimators are asymptotically equivalent, but they could give different results in finite samples. Available evidence suggests that in small or moderate sized samples, the Hessian is preferable (Greene, 2007). However, in most cases, the BHHH estimator will be the easiest to compute.
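As a sketch of how the last two estimators are computed in practice, consider again the exponential example (simulated data; for this model the Hessian-based estimator equals $1/\hat{\theta}^2$ exactly, since $\bar{d} = \hat{\theta}$, while the BHHH/OPG estimator differs from it in finite samples):

```python
import numpy as np

rng = np.random.default_rng(8)
d = rng.exponential(scale=3.0, size=400)
theta_hat = d.mean()  # MLE of theta in the Exp(theta) model

# individual scores and Hessians evaluated at theta_hat
s = -1 / theta_hat + d / theta_hat**2
H = 1 / theta_hat**2 - 2 * d / theta_hat**3

I_opg = (s**2).mean()  # BHHH / outer product of gradients estimator
I_hess = (-H).mean()   # minus the average Hessian
print(I_opg, I_hess, 1 / theta_hat**2)  # all consistent for I(theta_0) = 1/theta_0^2
```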


Example (CAPM)
The empirical analogue of the CAPM is given by:

$$\tilde{r}_{it} = \alpha_i + \beta_i \tilde{r}_{mt} + \varepsilon_t$$

where $\tilde{r}_{it} = r_{it} - r_{ft}$ is the excess return of security $i$ at time $t$, $\tilde{r}_{mt} = r_{mt} - r_{ft}$ is the market excess return at time $t$, and where $\varepsilon_t$ is an i.i.d. error term with:

$$E(\varepsilon_t) = 0 \qquad V(\varepsilon_t) = \sigma^2 \qquad E(\varepsilon_t | \tilde{r}_{mt}) = 0$$

Example (CAPM, cont'd)
Data (data file: capm.xls): Microsoft, SP500 and Tbill (closing prices) from 11/1/1993 to 04/03/2003.

[Figure: scatter plot of RMSFT against RSP500, and time series plots of the RSP500 and RMSFT returns.]

Example (CAPM, cont’d) We consider the CAPM model rewritten as follows

$$\tilde{r}_{it} = x_t^{\top}\beta + \varepsilon_t \qquad t = 1, \dots, T$$

where $x_t = (1\;\; \tilde{r}_{mt})^{\top}$ is a $2 \times 1$ vector of random variables, $\theta = (\alpha_i : \beta_i : \sigma^2)^{\top} = (\beta^{\top} : \sigma^2)^{\top}$ is a $3 \times 1$ vector of parameters, and the error term $\varepsilon_t$ satisfies $E(\varepsilon_t) = 0$, $V(\varepsilon_t) = \sigma^2$ and $E(\varepsilon_t\,|\,\tilde{r}_{mt}) = 0$.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 140/207 5. Score, Hessian and Fisher Information

Example (CAPM, cont’d) Question: Compute three alternative estimators of the asymptotic variance covariance matrix of the MLE $\widehat{\theta} = (\widehat{\alpha}_i\;\; \widehat{\beta}_i\;\; \widehat{\sigma}^2)^{\top}$, with

$$\widehat{\beta} = \begin{pmatrix} \widehat{\alpha}_i \\ \widehat{\beta}_i \end{pmatrix} = \left(\sum_{t=1}^{T} x_t x_t^{\top}\right)^{-1} \left(\sum_{t=1}^{T} x_t \tilde{r}_{it}\right) \qquad \widehat{\sigma}^2 = \frac{1}{T}\sum_{t=1}^{T}\left(\tilde{r}_{it} - x_t^{\top}\widehat{\beta}\right)^2$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 141/207 5. Score, Hessian and Fisher Information

Solution The ML estimator is defined by:

$$\widehat{\theta} = \underset{\beta \in \mathbb{R}^2,\; \sigma^2 \in \mathbb{R}^+}{\arg\max}\; -\frac{T}{2}\ln\sigma^2 - \frac{T}{2}\ln(2\pi) - \frac{1}{2\sigma^2}\sum_{t=1}^{T}\left(\tilde{r}_{it} - x_t^{\top}\beta\right)^2$$

The problem is regular, so we have:

$$\sqrt{T}\left(\widehat{\theta} - \theta_0\right) \xrightarrow{d} \mathcal{N}\left(0,\; I^{-1}(\theta_0)\right)$$

or equivalently

$$\widehat{\theta} \overset{asy}{\sim} \mathcal{N}\left(\theta_0,\; \frac{1}{T} I^{-1}(\theta_0)\right)$$

The asymptotic variance covariance matrix of $\widehat{\theta}$ is

$$V_{asy}\left(\widehat{\theta}\right) = \frac{1}{T}\, I^{-1}(\theta_0)$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 142/207 5. Score, Hessian and Fisher Information

Solution (cont’d) First estimator: The information matrix at time $t$ is defined by (third definition):

$$I_t(\theta) = E_\theta\left(-\frac{\partial^2 \ell_t(\theta; \tilde{R}_{it}|x_t)}{\partial\theta\,\partial\theta^{\top}}\right) = E_\theta\left(-H_t\left(\theta; \tilde{R}_{it}|x_t\right)\right)$$

where $E_\theta$ denotes the expectation taken with respect to the conditional distribution of $\tilde{R}_{it}$ given $X_t = x_t$:

$$I_t(\theta) = \begin{pmatrix} \frac{1}{\sigma^2}\, x_t x_t^{\top} & \frac{1}{\sigma^4}\, x_t\, E_\theta\left(\tilde{R}_{it} - x_t^{\top}\beta\right) \\ \frac{1}{\sigma^4}\, x_t^{\top}\, E_\theta\left(\tilde{R}_{it} - x_t^{\top}\beta\right) & -\frac{1}{2\sigma^4} + \frac{1}{\sigma^6}\, E_\theta\left(\tilde{R}_{it} - x_t^{\top}\beta\right)^2 \end{pmatrix}$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 143/207 5. Score, Hessian and Fisher Information

Solution (cont’d) First estimator:

Given that $E_\theta\left(\tilde{R}_{it}\right) = x_t^{\top}\beta$ and $E_\theta\left(\tilde{R}_{it} - x_t^{\top}\beta\right)^2 = \sigma^2$, we have:

$$I_t(\theta) = \begin{pmatrix} \frac{1}{\sigma^2}\, x_t x_t^{\top} & 0_{2\times 1} \\ 0_{1\times 2} & \frac{1}{2\sigma^4} \end{pmatrix}$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 144/207 5. Score, Hessian and Fisher Information

Solution (cont’d) First estimator:

An estimator of the asymptotic variance covariance matrix of $\widehat{\theta}$ is given by:

$$\widehat{V}_{asy}\left(\widehat{\theta}\right) = \frac{1}{T}\, \widehat{I}^{-1}\left(\widehat{\theta}\right) \qquad \widehat{I}\left(\widehat{\theta}\right) = \frac{1}{T}\sum_{t=1}^{T} I_t\left(\widehat{\theta}\right) = \begin{pmatrix} \frac{1}{\widehat{\sigma}^2}\, \frac{1}{T}\sum_{t=1}^{T} x_t x_t^{\top} & 0_{2\times 1} \\ 0_{1\times 2} & \frac{1}{2\widehat{\sigma}^4} \end{pmatrix}$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 145/207 5. Score, Hessian and Fisher Information

Solution (cont’d) Second definition (BHHH):

$$\widehat{V}_{asy}\left(\widehat{\theta}\right) = \frac{1}{T}\, \widehat{I}^{-1}\left(\widehat{\theta}\right) \qquad \widehat{I}\left(\widehat{\theta}\right) = \frac{1}{T}\sum_{t=1}^{T} \left.\frac{\partial \ell_t(\theta; \tilde{r}_{it}|x_t)}{\partial\theta}\right|_{\widehat{\theta}} \left.\frac{\partial \ell_t(\theta; \tilde{r}_{it}|x_t)}{\partial\theta}\right|_{\widehat{\theta}}^{\top}$$

with

$$\left.\frac{\partial \ell_t(\theta; \tilde{r}_{it}|x_t)}{\partial\theta}\right|_{\widehat{\theta}} = \begin{pmatrix} \frac{1}{\widehat{\sigma}^2}\, x_t\left(\tilde{r}_{it} - x_t^{\top}\widehat{\beta}\right) \\ -\frac{1}{2\widehat{\sigma}^2} + \frac{1}{2\widehat{\sigma}^4}\left(\tilde{r}_{it} - x_t^{\top}\widehat{\beta}\right)^2 \end{pmatrix} = \begin{pmatrix} \frac{1}{\widehat{\sigma}^2}\, x_t \widehat{\varepsilon}_t \\ -\frac{1}{2\widehat{\sigma}^2} + \frac{1}{2\widehat{\sigma}^4}\,\widehat{\varepsilon}_t^{\,2} \end{pmatrix}$$

where $\widehat{\varepsilon}_t = \tilde{r}_{it} - x_t^{\top}\widehat{\beta}$.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 146/207 5. Score, Hessian and Fisher Information

Solution (cont’d) Second definition (BHHH):

$$\left.\frac{\partial \ell_t}{\partial\theta}\right|_{\widehat{\theta}} \left.\frac{\partial \ell_t}{\partial\theta}\right|_{\widehat{\theta}}^{\top} = \begin{pmatrix} \frac{1}{\widehat{\sigma}^4}\, x_t x_t^{\top}\, \widehat{\varepsilon}_t^{\,2} & \frac{1}{\widehat{\sigma}^2}\, x_t \widehat{\varepsilon}_t \left(-\frac{1}{2\widehat{\sigma}^2} + \frac{1}{2\widehat{\sigma}^4}\,\widehat{\varepsilon}_t^{\,2}\right) \\ \frac{1}{\widehat{\sigma}^2}\, x_t^{\top} \widehat{\varepsilon}_t \left(-\frac{1}{2\widehat{\sigma}^2} + \frac{1}{2\widehat{\sigma}^4}\,\widehat{\varepsilon}_t^{\,2}\right) & \left(-\frac{1}{2\widehat{\sigma}^2} + \frac{1}{2\widehat{\sigma}^4}\,\widehat{\varepsilon}_t^{\,2}\right)^2 \end{pmatrix}$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 147/207 5. Score, Hessian and Fisher Information

Solution (cont’d) Second definition (BHHH): so we have

$$\widehat{V}_{asy}\left(\widehat{\theta}\right) = \frac{1}{T}\, \widehat{I}^{-1}\left(\widehat{\theta}\right)$$

with

$$\widehat{I}\left(\widehat{\theta}\right) = \frac{1}{T}\sum_{t=1}^{T} \begin{pmatrix} \frac{1}{\widehat{\sigma}^4}\, x_t x_t^{\top}\, \widehat{\varepsilon}_t^{\,2} & \frac{1}{\widehat{\sigma}^2}\, x_t \widehat{\varepsilon}_t \left(-\frac{1}{2\widehat{\sigma}^2} + \frac{1}{2\widehat{\sigma}^4}\,\widehat{\varepsilon}_t^{\,2}\right) \\ \frac{1}{\widehat{\sigma}^2}\, x_t^{\top} \widehat{\varepsilon}_t \left(-\frac{1}{2\widehat{\sigma}^2} + \frac{1}{2\widehat{\sigma}^4}\,\widehat{\varepsilon}_t^{\,2}\right) & \left(-\frac{1}{2\widehat{\sigma}^2} + \frac{1}{2\widehat{\sigma}^4}\,\widehat{\varepsilon}_t^{\,2}\right)^2 \end{pmatrix}$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 148/207 5. Score, Hessian and Fisher Information

Solution (cont’d) Third definition (inverse of the Hessian): we know that

$$\widehat{V}_{asy}\left(\widehat{\theta}\right) = \frac{1}{T}\, \widehat{I}^{-1}\left(\widehat{\theta}\right) \qquad \widehat{I}\left(\widehat{\theta}\right) = -\frac{1}{T}\sum_{t=1}^{T} H_t\left(\widehat{\theta}; \tilde{r}_{it}|x_t\right)$$

$$H_t\left(\widehat{\theta}; \tilde{r}_{it}|x_t\right) = \begin{pmatrix} -\frac{1}{\widehat{\sigma}^2}\, x_t x_t^{\top} & -\frac{1}{\widehat{\sigma}^4}\, x_t\left(\tilde{r}_{it} - x_t^{\top}\widehat{\beta}\right) \\ -\frac{1}{\widehat{\sigma}^4}\, x_t^{\top}\left(\tilde{r}_{it} - x_t^{\top}\widehat{\beta}\right) & \frac{1}{2\widehat{\sigma}^4} - \frac{1}{\widehat{\sigma}^6}\left(\tilde{r}_{it} - x_t^{\top}\widehat{\beta}\right)^2 \end{pmatrix}$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 149/207 5. Score, Hessian and Fisher Information

Solution (cont’d) Third definition (inverse of the Hessian): Given the FOC (log-likelihood equations), $\sum_{t=1}^{T} x_t\left(\tilde{r}_{it} - x_t^{\top}\widehat{\beta}\right) = 0$ and $\sum_{t=1}^{T}\left(\tilde{r}_{it} - x_t^{\top}\widehat{\beta}\right)^2 = T\widehat{\sigma}^2$, so:

$$-\sum_{t=1}^{T} H_t\left(\widehat{\theta}; \tilde{r}_{it}|x_t\right) = T \begin{pmatrix} \frac{1}{\widehat{\sigma}^2}\, \frac{1}{T}\sum_{t=1}^{T} x_t x_t^{\top} & 0_{2\times 1} \\ 0_{1\times 2} & \frac{1}{2\widehat{\sigma}^4} \end{pmatrix}$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 150/207 5. Score, Hessian and Fisher Information

Solution (cont’d) Third definition (inverse of the Hessian): So, in this case, the third estimator of $I(\theta_0)$ coincides with the first one:

$$\widehat{V}_{asy}\left(\widehat{\theta}\right) = \frac{1}{T}\, \widehat{I}^{-1}\left(\widehat{\theta}\right) \qquad \widehat{I}\left(\widehat{\theta}\right) = -\frac{1}{T}\sum_{t=1}^{T} H_t\left(\widehat{\theta}; \tilde{r}_{it}|x_t\right) = \begin{pmatrix} \frac{1}{\widehat{\sigma}^2}\, \frac{1}{T}\sum_{t=1}^{T} x_t x_t^{\top} & 0_{2\times 1} \\ 0_{1\times 2} & \frac{1}{2\widehat{\sigma}^4} \end{pmatrix}$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 151/207 5. Score, Hessian and Fisher Information

Solution (cont’d) These three estimators of the asymptotic variance covariance matrix are asymptotically equivalent, but can be quite different in finite samples (see the sketch below):

$$\widehat{V}_{asy}\left(\widehat{\theta}\right) = \frac{1}{T}\, \widehat{I}^{-1}\left(\widehat{\theta}\right)$$

with

$$\widehat{I}\left(\widehat{\theta}\right) = \frac{1}{T}\sum_{t=1}^{T} I_t\left(\widehat{\theta}\right)$$

$$\widehat{I}\left(\widehat{\theta}\right) = \frac{1}{T}\sum_{t=1}^{T} \left.\frac{\partial \ell_t(\theta; \tilde{r}_{it}|x_t)}{\partial\theta}\right|_{\widehat{\theta}} \left.\frac{\partial \ell_t(\theta; \tilde{r}_{it}|x_t)}{\partial\theta}\right|_{\widehat{\theta}}^{\top}$$

$$\widehat{I}\left(\widehat{\theta}\right) = -\frac{1}{T}\sum_{t=1}^{T} H_t\left(\widehat{\theta}; \tilde{r}_{it}|x_t\right)$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 152/207 5. Score, Hessian and Fisher Information
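As a complement (added here, not from the original slides), a minimal sketch of the first/third estimator and the BHHH estimator for this model, on simulated data rather than the capm.xls file; the parameter values and seed are arbitrary assumptions.

```python
# A sketch, assuming simulated CAPM-style data: the two estimators of the
# asymptotic covariance of theta_hat = (alpha, beta, sigma^2) derived above.
import numpy as np

rng = np.random.default_rng(1)
T, alpha, beta, sigma2 = 2_000, 0.001, 1.2, 0.02**2
rm = rng.normal(0.0, 0.01, T)                 # market excess returns (assumed)
ri = alpha + beta * rm + rng.normal(0.0, np.sqrt(sigma2), T)

X = np.column_stack([np.ones(T), rm])         # x_t = (1, rm_t)'
b_hat = np.linalg.solve(X.T @ X, X.T @ ri)    # MLE of (alpha, beta)
e = ri - X @ b_hat
s2_hat = e @ e / T                            # MLE of sigma^2 (divides by T)

# First/third estimator (they coincide here): block-diagonal information
I1 = np.zeros((3, 3))
I1[:2, :2] = (X.T @ X) / (T * s2_hat)
I1[2, 2] = 1.0 / (2.0 * s2_hat**2)

# BHHH / OPG estimator: average outer product of the individual scores
g = np.column_stack([X * (e / s2_hat)[:, None],
                     -0.5 / s2_hat + 0.5 * e**2 / s2_hat**2])
I2 = g.T @ g / T

avar1 = np.linalg.inv(I1) / T                 # first/third estimator of V(theta_hat)
avar2 = np.linalg.inv(I2) / T                 # BHHH estimator
print(np.round(avar1, 10), np.round(avar2, 10), sep="\n")
```

On long simulated samples the two matrices should be close; on short samples they can differ noticeably, which is the point made above.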

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 153/207 5. Score, Hessian and Fisher Information

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 154/207 5. Score, Hessian and Fisher Information

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 155/207 Key Concepts

1 Gradient and Hessian Matrix (deterministic elements).

2 Score Vector (random elements).

3 Hessian Matrix (random elements).

4 Fisher information matrix associated to the sample.

5 (Average) Fisher information matrix for one observation.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 156/207 Section 6

Properties of Maximum Likelihood Estimators

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 157/207 6. Properties of Maximum Likelihood Estimators

Objectives Is the MLE a good estimator? Under which conditions is the MLE unbiased, consistent, and the BUE (Best Unbiased Estimator)? => regularity conditions

Is the MLE consistent?

Is the MLE optimal or efficient?

What is the asymptotic distribution of the MLE? The magic of the MLE...

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 158/207 6. Properties of Maximum Likelihood Estimators

Definition (Regularity conditions) Greene (2007) identifies three regularity conditions:

R1 The first three derivatives of $\ln f_X(\theta; x_i)$ with respect to $\theta$ are continuous and finite for all $x_i$ and for all $\theta$. This condition ensures the existence of a certain Taylor series approximation and the finite variance of the derivatives of $\ell_i(\theta; x_i)$.

R2 The conditions necessary to obtain the expectations of the first and second derivatives of $\ln f_X(\theta; X_i)$ are met.

R3 For all values of $\theta$, $\left|\partial^3 \ln f_X(\theta; x_i) / \partial\theta_i\,\partial\theta_j\,\partial\theta_k\right|$ is less than a function that has a finite expectation. This condition will allow us to truncate the Taylor series.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 159/207 6. Properties of Maximum Likelihood Estimators

Definition (Regularity conditions, Zivot 2001)

A pdf $f_X(\theta; x)$ is regular if and only if:

R1 The support of the random variable $X$, $S_X = \{x : f_X(\theta; x) > 0\}$, does not depend on $\theta$.

R2 $f_X(\theta; x)$ is at least three times differentiable with respect to $\theta$, and these derivatives are continuous.

R3 The true value of $\theta$ lies in a compact set $\Theta$.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 160/207 6. Properties of Maximum Likelihood Estimators

Under these regularity conditions, the maximum likelihood estimator $\widehat{\theta}$ possesses many appealing properties:

1 The maximum likelihood estimator is consistent.

2 The maximum likelihood estimator is asymptotically normal (the magic of the MLE..).

3 The maximum likelihood estimator is asymptotically optimal or efficient.

4 The maximum likelihood estimator is equivariant: if $\widehat{\theta}$ is the MLE of $\theta$, then $g(\widehat{\theta})$ is the MLE of $g(\theta)$.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 161/207 6. Properties of Maximum Likelihood Estimators

Theorem (Consistency) Under regularity conditions, the maximum likelihood estimator is consistent:

$$\widehat{\theta} \xrightarrow[N\to\infty]{p} \theta_0$$

or equivalently:

$$\underset{N\to\infty}{\operatorname{plim}}\; \widehat{\theta} = \theta_0$$

where $\theta_0$ denotes the true value of the parameter $\theta$.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 162/207 6. Properties of Maximum Likelihood Estimators

Sketch of the proof (Greene, 2007) Because $\widehat{\theta}$ is the MLE, in any finite sample, for any $\theta \neq \widehat{\theta}$ (including the true $\theta_0$) it must be true that

$$\ln L_N\left(\widehat{\theta}; y|x\right) \geq \ln L_N(\theta; y|x)$$

Consider, then, the random variable $L_N(\theta; Y|x) / L_N(\theta_0; Y|x)$. Because the log function is strictly concave, from Jensen’s Inequality, we have

$$E_\theta\left[\ln\frac{L_N(\theta; Y|x)}{L_N(\theta_0; Y|x)}\right] \leq \ln E_\theta\left[\frac{L_N(\theta; Y|x)}{L_N(\theta_0; Y|x)}\right]$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 163/207 6. Properties of Maximum Likelihood Estimators

Sketch of the proof, cont’d The expectation on the right-hand side is exactly equal to one, as

$$E_\theta\left[\frac{L_N(\theta; Y|x)}{L_N(\theta_0; Y|x)}\right] = \int \frac{L_N(\theta; y|x)}{L_N(\theta_0; y|x)}\, L_N(\theta_0; y|x)\, dy = \int L_N(\theta; y|x)\, dy = 1$$

which is simply the integral of a joint density.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 164/207 6. Properties of Maximum Likelihood Estimators

Sketch of the proof, cont’d So we have

$$E_\theta\left[\ln\frac{L_N(\theta; Y|x)}{L_N(\theta_0; Y|x)}\right] \leq \ln E_\theta\left[\frac{L_N(\theta; Y|x)}{L_N(\theta_0; Y|x)}\right] = \ln(1) = 0$$

Divide both sides of this inequality by $N$ to produce

$$E_\theta\left[\frac{1}{N}\ln L_N(\theta; Y|x)\right] \leq E_\theta\left[\frac{1}{N}\ln L_N(\theta_0; Y|x)\right]$$

This produces a central result:

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 165/207 6. Properties of Maximum Likelihood Estimators

Theorem (Likelihood Inequality) The expected value of the log-likelihood is maximized at the true value of the parameters. For any $\theta$, including $\widehat{\theta}$:

$$E_\theta\left[\frac{1}{N}\ell_N(\theta_0; Y|x)\right] \geq E_\theta\left[\frac{1}{N}\ell_N(\theta; Y|x)\right]$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 166/207 6. Properties of Maximum Likelihood Estimators

Sketch of the proof, cont’d Notice that

$$\frac{1}{N}\ell_N(\theta; Y|x) = \frac{1}{N}\sum_{i=1}^{N}\ell_i(\theta; Y_i|x_i)$$

where the elements $\ell_i(\theta; Y_i|x_i)$ for $i = 1, \dots, N$ are i.i.d. So, using a law of large numbers, we get:

$$\frac{1}{N}\ell_N(\theta; Y|x) \xrightarrow[N\to\infty]{p} E_\theta\left[\frac{1}{N}\ell_N(\theta; Y|x)\right]$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 167/207 6. Properties of Maximum Likelihood Estimators

Sketch of the proof, cont’d The likelihood inequality for $\theta = \widehat{\theta}$ implies

$$E_\theta\left[\frac{1}{N}\ell_N(\theta_0; Y|x)\right] \geq E_\theta\left[\frac{1}{N}\ell_N\left(\widehat{\theta}; Y|x\right)\right]$$

with

$$\frac{1}{N}\ell_N(\theta_0; Y|x) \xrightarrow[N\to\infty]{p} E_\theta\left[\frac{1}{N}\ell_N(\theta_0; Y|x)\right] \qquad \frac{1}{N}\ell_N\left(\widehat{\theta}; Y|x\right) \xrightarrow[N\to\infty]{p} E_\theta\left[\frac{1}{N}\ell_N\left(\widehat{\theta}; Y|x\right)\right]$$

and thus

$$\lim_{N\to\infty} \Pr\left(\frac{1}{N}\ell_N(\theta_0; Y|x) \geq \frac{1}{N}\ell_N\left(\widehat{\theta}; Y|x\right)\right) = 1$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 168/207 6. Properties of Maximum Likelihood Estimators

Sketch of the proof, cont’d So we have two results:

$$\lim_{N\to\infty} \Pr\left(\frac{1}{N}\ell_N(\theta_0; Y|x) \geq \frac{1}{N}\ell_N\left(\widehat{\theta}; Y|x\right)\right) = 1 \qquad \frac{1}{N}\ell_N\left(\widehat{\theta}; Y|x\right) \geq \frac{1}{N}\ell_N(\theta_0; Y|x) \quad \forall N$$

It necessarily implies that

$$\frac{1}{N}\ell_N\left(\widehat{\theta}; Y|x\right) \xrightarrow[N\to\infty]{p} \frac{1}{N}\ell_N(\theta_0; Y|x)$$

If $\theta$ is a scalar, we have immediately:

$$\widehat{\theta} \xrightarrow[N\to\infty]{p} \theta_0$$

For the more general case with $\dim(\theta) = K$, see a formal proof in Amemiya (1985). Amemiya T. (1985), Advanced Econometrics. Harvard University Press.
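To make the likelihood-inequality argument concrete, here is a small sketch (added, not from the slides) for an assumed exponential sample: the average log-likelihood over a grid of candidate values peaks near the true parameter.

```python
# A sketch, assuming an i.i.d. Exp(theta0) sample with theta0 = 2: the average
# log-likelihood (1/N) l_N(theta; d) is largest near the true value theta0.
import numpy as np

rng = np.random.default_rng(7)
d = rng.exponential(2.0, size=100_000)
grid = np.linspace(0.5, 5.0, 91)
avg_ll = np.array([-np.log(t) - d.mean() / t for t in grid])
print(grid[np.argmax(avg_ll)])     # close to 2.0, as the inequality predicts
```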

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 169/207 6. Properties of Maximum Likelihood Estimators

Remark The proof of the consistency of the MLE is much easier when we have a closed-form expression for the maximum likelihood estimator:

$$\widehat{\theta} = \widehat{\theta}(X_1, \dots, X_N)$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 170/207 6. Properties of Maximum Likelihood Estimators

Example

Suppose that $D_1, D_2, \dots, D_N$ are i.i.d. positive random variables, with $D_i \sim \operatorname{Exp}(\theta_0)$ and

$$f_D(d; \theta) = \frac{1}{\theta}\exp\left(-\frac{d}{\theta}\right),\ \forall d \in \mathbb{R}^+ \qquad E_\theta(D_i) = \theta_0 \qquad V_\theta(D_i) = \theta_0^2$$

where θ0 is the true value of θ. Question: show that the MLE is consistent.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 171/207 6. Properties of Maximum Likelihood Estimators

Solution

The log-likelihood function associated to the sample $\{d_1, \dots, d_N\}$ is defined by:

$$\ell_N(\theta; d) = -N\ln(\theta) - \frac{1}{\theta}\sum_{i=1}^{N} d_i$$

We admit that the maximum likelihood estimator corresponds to the sample mean:

$$\widehat{\theta} = \frac{1}{N}\sum_{i=1}^{N} D_i$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 172/207 6. Properties of Maximum Likelihood Estimators

Solution, cont’d Then, we have:

$$E_\theta\left(\widehat{\theta}\right) = \frac{1}{N}\sum_{i=1}^{N} E_\theta(D_i) = \theta \quad (\widehat{\theta}\ \text{is unbiased}) \qquad V_\theta\left(\widehat{\theta}\right) = \frac{1}{N^2}\sum_{i=1}^{N} V_\theta(D_i) = \frac{\theta^2}{N}$$

As a consequence,

$$E_\theta\left(\widehat{\theta}\right) = \theta \qquad \lim_{N\to\infty} V_\theta\left(\widehat{\theta}\right) = 0$$

and

$$\widehat{\theta} \xrightarrow[N\to\infty]{p} \theta$$
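A small simulation sketch (added, with an arbitrary $\theta_0$ and replication count) makes the shrinking-variance argument visible:

```python
# A sketch, assuming theta0 = 3: the MLE (sample mean) concentrates around
# theta0 as N grows, with variance tracking theta0**2 / N derived above.
import numpy as np

rng = np.random.default_rng(2)
theta0 = 3.0
for N in (10, 100, 1_000, 10_000):
    draws = rng.exponential(theta0, size=(1_000, N))   # 1,000 replications
    theta_hat = draws.mean(axis=1)                     # MLE per replication
    print(N, round(theta_hat.mean(), 3), round(theta_hat.var(), 4))
```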

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 173/207 6. Properties of Maximum Likelihood Estimators

Lemma Under stronger conditions, the maximum likelihood estimator converges almost surely to $\theta_0$:

$$\widehat{\theta} \xrightarrow[N\to\infty]{a.s.} \theta_0 \implies \widehat{\theta} \xrightarrow[N\to\infty]{p} \theta_0$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 174/207 6. Properties of Maximum Likelihood Estimators

1 If we restrict ourselves to the class of unbiased estimators (linear and nonlinear), then we define the best estimator as the one with the smallest variance.

2 With linear estimators (next chapter), the Gauss-Markov theorem tells us that the ordinary least squares (OLS) estimator is best (BLUE).

3 When we expand the class of estimators to include linear and nonlinear estimators, it turns out that we can establish an absolute lower bound on the variance of any unbiased estimator $\widehat{\theta}$ of $\theta$ under certain conditions.

4 Then, if an unbiased estimator $\widehat{\theta}$ has a variance that is equal to the lower bound, we have found the best unbiased estimator (BUE).

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 175/207 6. Properties of Maximum Likelihood Estimators

Definition (Cramer-Rao or FDCR bound)

Let $X_1, \dots, X_N$ be an i.i.d. sample with pdf $f_X(\theta; x)$. Let $\widehat{\theta}$ be an unbiased estimator of $\theta$; i.e., $E_\theta(\widehat{\theta}) = \theta$. If $f_X(\theta; x)$ is regular, then

$$V_\theta\left(\widehat{\theta}\right) \geq I_N^{-1}(\theta_0) \quad \text{(FDCR or Cramer-Rao bound)}$$

where $I_N(\theta_0)$ denotes the Fisher information number for the sample evaluated at the true value $\theta_0$.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 176/207 6. Properties of Maximum Likelihood Estimators

Remarks

1 Hence, the Cramer-Rao bound is the inverse of the information matrix associated to the sample. Reminder: three definitions for $I_N(\theta_0)$:

$$I_N(\theta_0) = V_\theta\left(\left.\frac{\partial \ell_N(\theta; Y|x)}{\partial\theta}\right|_{\theta_0}\right)$$

$$I_N(\theta_0) = E_\theta\left(\left.\frac{\partial \ell_N(\theta; Y|x)}{\partial\theta}\right|_{\theta_0} \left.\frac{\partial \ell_N(\theta; Y|x)}{\partial\theta}\right|_{\theta_0}^{\top}\right)$$

$$I_N(\theta_0) = -E_\theta\left(\left.\frac{\partial^2 \ell_N(\theta; Y|x)}{\partial\theta\,\partial\theta^{\top}}\right|_{\theta_0}\right)$$

2 If $\theta$ is a vector, then $V_\theta(\widehat{\theta}) \geq I_N^{-1}(\theta_0)$ means that $V_\theta(\widehat{\theta}) - I_N^{-1}(\theta_0)$ is positive semi-definite.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 177/207 6. Properties of Maximum Likelihood Estimators

Theorem (Efficiency) Under regularity conditions, the maximum likelihood estimator is asymptotically efficient and attains the FDCR (Frechet - Darmois - Cramer - Rao) or Cramer-Rao bound:

$$V_\theta\left(\widehat{\theta}\right) = I_N^{-1}(\theta_0)$$

where $I_N(\theta_0)$ denotes the Fisher information matrix associated to the sample evaluated at the true value $\theta_0$.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 178/207 6. Properties of Maximum Likelihood Estimators

Example (Exponential Distribution)

Suppose that $D_1, D_2, \dots, D_N$ are i.i.d. positive random variables, with $D_i \sim \operatorname{Exp}(\theta_0)$ and

$$f_D(d; \theta) = \frac{1}{\theta}\exp\left(-\frac{d}{\theta}\right),\ \forall d \in \mathbb{R}^+ \qquad E_\theta(D_i) = \theta_0 \qquad V_\theta(D_i) = \theta_0^2$$

where $\theta_0$ is the true value of $\theta$. Question: show that the MLE is efficient.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 179/207 6. Properties of Maximum Likelihood Estimators

Solution We have shown that the maximum likelihood estimator corresponds to the sample mean,

$$\widehat{\theta} = \frac{1}{N}\sum_{i=1}^{N} D_i \qquad E_\theta\left(\widehat{\theta}\right) = \theta_0 \qquad V_\theta\left(\widehat{\theta}\right) = \frac{\theta_0^2}{N}$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 180/207 6. Properties of Maximum Likelihood Estimators

Solution, cont’d The log-likelihood function is

$$\ell_N(\theta; d) = -N\ln(\theta) - \frac{1}{\theta}\sum_{i=1}^{N} d_i$$

The score vector is defined by:

$$s_N(\theta; D) = \frac{\partial \ell_N(\theta; D)}{\partial\theta} = -\frac{N}{\theta} + \frac{1}{\theta^2}\sum_{i=1}^{N} D_i$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 181/207 6. Properties of Maximum Likelihood Estimators Solution, cont’d

Let us use one of the three definitions of the information quantity $I_N(\theta)$:

$$I_N(\theta) = V_\theta\left(\frac{\partial \ell_N(\theta; D)}{\partial\theta}\right) = V_\theta\left(-\frac{N}{\theta} + \frac{1}{\theta^2}\sum_{i=1}^{N} D_i\right) = \frac{1}{\theta^4}\sum_{i=1}^{N} V_\theta(D_i) = \frac{N\theta^2}{\theta^4} = \frac{N}{\theta^2}$$

Then, $\widehat{\theta}$ is efficient and attains the Cramer-Rao bound:

$$V_\theta\left(\widehat{\theta}\right) = I_N^{-1}(\theta_0) = \frac{\theta_0^2}{N}$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 182/207 6. Properties of Maximum Likelihood Estimators
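As a quick numerical cross-check (added, with arbitrary values), the Monte Carlo variance of the MLE can be compared to the bound $\theta_0^2 / N$:

```python
# A sketch, assuming theta0 = 2 and N = 500: the simulated variance of the
# MLE matches the Cramer-Rao / FDCR bound theta0**2 / N.
import numpy as np

rng = np.random.default_rng(3)
theta0, N, reps = 2.0, 500, 20_000
theta_hat = rng.exponential(theta0, size=(reps, N)).mean(axis=1)
print(theta_hat.var(), theta0**2 / N)   # both close to 0.008
```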

Theorem (Convergence of the MLE) Under suitable regularity conditions, the MLE is asymptotically normally distributed with

$$\sqrt{N}\left(\widehat{\theta} - \theta_0\right) \xrightarrow{d} \mathcal{N}\left(0,\; I^{-1}(\theta_0)\right)$$

where $\theta_0$ denotes the true value of the parameter and $I(\theta_0)$ the (average) Fisher information matrix for one observation.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 183/207 6. Properties of Maximum Likelihood Estimators

Corollary Another way to write this result is to say that, for a large sample size $N$, the MLE $\widehat{\theta}$ is approximately distributed according to a normal distribution:

$$\widehat{\theta} \overset{asy}{\sim} \mathcal{N}\left(\theta_0,\; \frac{1}{N} I^{-1}(\theta_0)\right)$$

or equivalently

$$\widehat{\theta} \overset{asy}{\sim} \mathcal{N}\left(\theta_0,\; I_N^{-1}(\theta_0)\right)$$

where $I_N(\theta_0) = N\, I(\theta_0)$ denotes the Fisher information matrix associated to the sample.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 184/207 6. Properties of Maximum Likelihood Estimators

Definition (Asymptotic Variance) The asymptotic variance of the MLE is defined by:

$$V_{asy}\left(\widehat{\theta}\right) = I_N^{-1}(\theta_0)$$

where $I_N(\theta_0)$ denotes the Fisher information matrix associated to the sample. This asymptotic variance of the MLE corresponds to the Cramer-Rao or FDCR bound.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 185/207 6. Properties of Maximum Likelihood Estimators

The magic of the MLE

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 186/207 6. Properties of Maximum Likelihood Estimators

Proof (MLE convergence) At the maximum likelihood estimator, the gradient of the log-likelihood equals zero (FOC):

$$g_N\left(\widehat{\theta}\right) \equiv g_N\left(\widehat{\theta}; y|x\right) = \left.\frac{\partial \ell_N(\theta; y|x)}{\partial\theta}\right|_{\widehat{\theta}} = 0_{(K,1)}$$

where $\widehat{\theta} = \widehat{\theta}(x)$ denotes here the ML estimate. Expand this set of equations in a Taylor series around the true parameters $\theta_0$. We will use the mean value theorem to truncate the Taylor series at the second term:

$$g_N\left(\widehat{\theta}\right) = g_N(\theta_0) + H_N\left(\bar{\theta}\right)\left(\widehat{\theta} - \theta_0\right) = 0$$

The Hessian is evaluated at a point $\bar{\theta}$ that is between $\widehat{\theta}$ and $\theta_0$, for instance $\bar{\theta} = \omega\widehat{\theta} + (1 - \omega)\theta_0$ for some $0 < \omega < 1$.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 187/207 6. Properties of Maximum Likelihood Estimators

Proof (MLE convergence, cont’d) We then rearrange this equation and multiply the result by $\sqrt{N}$ to obtain:

$$\sqrt{N}\left(\widehat{\theta} - \theta_0\right) = -H_N^{-1}\left(\bar{\theta}\right)\sqrt{N}\, g_N(\theta_0)$$

By dividing $H_N(\bar{\theta})$ and $g_N(\theta_0)$ by $N$, we obtain:

$$\sqrt{N}\left(\widehat{\theta} - \theta_0\right) = -\left[\frac{1}{N} H_N\left(\bar{\theta}\right)\right]^{-1} \sqrt{N}\left[\frac{1}{N} g_N(\theta_0)\right] = -\left[\frac{1}{N} H_N\left(\bar{\theta}\right)\right]^{-1} \sqrt{N}\, \bar{g}(\theta_0)$$

where $\bar{g}(\theta_0)$ denotes the sample mean of the individual gradient vectors:

$$\bar{g}(\theta_0) = \frac{1}{N} g_N(\theta_0) = \frac{1}{N}\sum_{i=1}^{N} g_i(\theta_0; y_i|x_i)$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 188/207 6. Properties of Maximum Likelihood Estimators

Proof (MLE convergence, cont’d) Let us now consider the same expression in terms of random variables: $\widehat{\theta}$ now denotes the ML estimator, $H_N(\bar{\theta}) = H_N(\bar{\theta}; Y|x)$, and $s_N(\theta_0; Y|x)$ the score vector. We have:

$$\sqrt{N}\left(\widehat{\theta} - \theta_0\right) = -\left[\frac{1}{N} H_N\left(\bar{\theta}; Y|x\right)\right]^{-1} \sqrt{N}\, \bar{s}(\theta_0; Y|x)$$

where the score vectors associated to the variables $Y_i$ are i.i.d.:

$$\bar{s}(\theta_0; Y|x) = \frac{1}{N}\sum_{i=1}^{N} s_i(\theta_0; Y_i|x_i)$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 189/207 6. Properties of Maximum Likelihood Estimators

Proof (MLE convergence, cont’d) Let us consider the first element:

$$\bar{s}(\theta_0) = \frac{1}{N}\sum_{i=1}^{N} s_i(\theta_0; Y_i|x_i)$$

The individual scores $s_i(\theta_0; Y_i|x_i)$ are i.i.d. with

$$E_\theta\left(s_i(\theta_0; Y_i|x_i)\right) = 0 \qquad E_X\left[V_\theta\left(s_i(\theta_0; Y_i|x_i)\right)\right] = E_X\left(I_i(\theta_0)\right) = I(\theta_0)$$

By using the Lindeberg-Levy central limit theorem, we have:

$$\sqrt{N}\, \bar{s}(\theta_0) \xrightarrow{d} \mathcal{N}\left(0,\; I(\theta_0)\right)$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 190/207 6. Properties of Maximum Likelihood Estimators

Proof (MLE convergence, cont’d) We know that:

$$\frac{1}{N} H_N\left(\bar{\theta}; Y|x\right) = \frac{1}{N}\sum_{i=1}^{N} H_i\left(\bar{\theta}; Y_i|x_i\right)$$

where the Hessian matrices $H_i(\bar{\theta}; Y_i|x_i)$ are i.i.d. Besides, because $\operatorname{plim}(\widehat{\theta} - \theta_0) = 0$, $\operatorname{plim}(\bar{\theta} - \theta_0) = 0$ as well. By applying a law of large numbers, we get:

$$\frac{1}{N} H_N\left(\bar{\theta}; Y|x\right) \xrightarrow{p} E_X\left[E_\theta\left(H_i(\theta_0; Y_i|x_i)\right)\right]$$

with

$$E_X\left[E_\theta\left(-H_i(\theta_0; Y_i|x_i)\right)\right] = E_X\left[E_\theta\left(-\frac{\partial^2 \ell_i(\theta; Y_i|x_i)}{\partial\theta\,\partial\theta^{\top}}\right)\right] = I(\theta_0)$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 191/207 6. Properties of Maximum Likelihood Estimators

Reminder:

If $X_N$ and $Y_N$ verify

$$\underset{(K,K)}{X_N} \xrightarrow{p} \underset{(K,K)}{X} \qquad \underset{(K,1)}{Y_N} \xrightarrow{d} \mathcal{N}\left(\underset{(K,1)}{0},\; \underset{(K,K)}{\Sigma}\right)$$

then

$$X_N Y_N \xrightarrow{d} \mathcal{N}\left(0,\; X\, \Sigma\, X^{\top}\right)$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 192/207 6. Properties of Maximum Likelihood Estimators

Proof (MLE convergence, cont’d) Here we have

$$\sqrt{N}\left(\widehat{\theta} - \theta_0\right) = -\left[\frac{1}{N} H_N\left(\bar{\theta}; Y|x\right)\right]^{-1} \sqrt{N}\, \bar{s}(\theta_0; Y|x)$$

$$-\left[\frac{1}{N} H_N\left(\bar{\theta}; Y|x\right)\right]^{-1} \xrightarrow{p} I^{-1}(\theta_0) \qquad \sqrt{N}\, \bar{s}(\theta_0) \xrightarrow{d} \mathcal{N}\left(0,\; I(\theta_0)\right)$$

Then, we get:

$$\sqrt{N}\left(\widehat{\theta} - \theta_0\right) \xrightarrow{d} \mathcal{N}\left(0,\; I^{-1}(\theta_0)\, I(\theta_0)\, I^{-1}(\theta_0)\right)$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 193/207 6. Properties of Maximum Likelihood Estimators

Proof (MLE convergence, cont’d) And finally....

$$\sqrt{N}\left(\widehat{\theta} - \theta_0\right) \xrightarrow{d} \mathcal{N}\left(0,\; I^{-1}(\theta_0)\right)$$

The magic of the MLE.....

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 194/207 6. Properties of Maximum Likelihood Estimators

Example (Exponential Distribution)

Suppose that $D_1, D_2, \dots, D_N$ are i.i.d. positive random variables, with $D_i \sim \operatorname{Exp}(\theta_0)$ and

$$f_D(d; \theta) = \frac{1}{\theta}\exp\left(-\frac{d}{\theta}\right),\ \forall d \in \mathbb{R}^+ \qquad E_\theta(D_i) = \theta_0 \qquad V_\theta(D_i) = \theta_0^2$$

where $\theta_0$ is the true value of $\theta$. Question: what is the asymptotic distribution of the MLE? Propose a consistent estimator of the asymptotic variance of $\widehat{\theta}$.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 195/207 6. Properties of Maximum Likelihood Estimators

Solution We have shown that $\widehat{\theta} = \frac{1}{N}\sum_{i=1}^{N} D_i$ and:

$$s_i(\theta; D_i) = \frac{\partial \ell_i(\theta; D_i)}{\partial\theta} = -\frac{1}{\theta} + \frac{D_i}{\theta^2}$$

The (average) Fisher information matrix associated to Di is:

$$I(\theta) = V_\theta\left(-\frac{1}{\theta} + \frac{D_i}{\theta^2}\right) = \frac{1}{\theta^4}\, V_\theta(D_i) = \frac{1}{\theta^2}$$

Then, the asymptotic distribution of $\widehat{\theta}$ is:

$$\sqrt{N}\left(\widehat{\theta} - \theta_0\right) \xrightarrow{d} \mathcal{N}\left(0,\; \theta_0^2\right)$$

or equivalently

$$\widehat{\theta} \overset{asy}{\sim} \mathcal{N}\left(\theta_0,\; \frac{\theta_0^2}{N}\right)$$
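A simulation sketch (added, with assumed values) of this normality result, standardizing the MLE and checking its quantiles against the standard normal:

```python
# A sketch, assuming theta0 = 2: the standardized MLE is close to a standard
# normal, illustrating sqrt(N)(theta_hat - theta0) -> N(0, theta0**2).
import numpy as np

rng = np.random.default_rng(4)
theta0, N, reps = 2.0, 1_000, 10_000
theta_hat = rng.exponential(theta0, size=(reps, N)).mean(axis=1)
z = np.sqrt(N) * (theta_hat - theta0) / theta0    # standardized statistic
print(z.mean(), z.var())                          # close to 0 and 1
print(np.quantile(z, [0.025, 0.5, 0.975]))        # close to (-1.96, 0, 1.96)
```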

Christophe Hurlin (University of Orléans) Advancedb Econometrics - HEC Lausanne December9,2013 196/207 6. Properties of Maximum Likelihood Estimators

Solution, cont’d The asymptotic variance of $\widehat{\theta}$ is:

$$V_{asy}\left(\widehat{\theta}\right) = \frac{\theta_0^2}{N}$$

A consistent estimator of $V_{asy}(\widehat{\theta})$ is simply defined by:

$$\widehat{V}_{asy}\left(\widehat{\theta}\right) = \frac{\widehat{\theta}^2}{N}$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 197/207 6. Properties of Maximum Likelihood Estimators

Example (Linear Regression Model)

Let us consider the previous linear regression model $y_i = x_i^{\top}\beta + \varepsilon_i$, with $\varepsilon_i \sim \mathcal{N}\,\text{i.i.d.}\,(0, \sigma^2)$. Let us denote $\theta$ the $(K+1) \times 1$ vector defined by $\theta = (\beta^{\top}\ \ \sigma^2)^{\top}$. The MLE of $\theta$ is defined by:

$$\widehat{\theta} = \begin{pmatrix} \widehat{\beta} \\ \widehat{\sigma}^2 \end{pmatrix} \qquad \widehat{\beta} = \left(\sum_{i=1}^{N} X_i X_i^{\top}\right)^{-1} \left(\sum_{i=1}^{N} X_i Y_i\right) \qquad \widehat{\sigma}^2 = \frac{1}{N}\sum_{i=1}^{N}\left(Y_i - X_i^{\top}\widehat{\beta}\right)^2$$

Question: what is the asymptotic distribution of $\widehat{\theta}$? Propose an estimator of the asymptotic variance.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 198/207 6. Properties of Maximum Likelihood Estimators

Solution This model satisfies the regularity conditions. We have shown that the average Fisher information matrix is equal to:

$$I(\theta) = \begin{pmatrix} \frac{1}{\sigma^2}\, E_X\left(X_i X_i^{\top}\right) & 0 \\ 0 & \frac{1}{2\sigma^4} \end{pmatrix}$$

From the MLE convergence theorem, we get immediately:

$$\sqrt{N}\left(\widehat{\theta} - \theta_0\right) \xrightarrow{d} \mathcal{N}\left(0,\; I^{-1}(\theta_0)\right)$$

where $\theta_0$ is the true value of $\theta$.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 199/207 6. Properties of Maximum Likelihood Estimators

Solution, cont’d The asymptotic variance covariance matrix of $\widehat{\theta}$ is equal to:

$$V_{asy}\left(\widehat{\theta}\right) = \frac{1}{N}\, I^{-1}(\theta_0) = I_N^{-1}(\theta_0)$$

with

$$I_N(\theta) = \begin{pmatrix} \frac{N}{\sigma^2}\, E_X\left(X_i X_i^{\top}\right) & 0 \\ 0 & \frac{N}{2\sigma^4} \end{pmatrix}$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 200/207 6. Properties of Maximum Likelihood Estimators

Solution, cont’d

A consistent estimate of I N (θ) is:

$$\widehat{I}_N\left(\widehat{\theta}\right) = \widehat{V}_{asy}^{-1}\left(\widehat{\theta}\right) = \begin{pmatrix} \frac{N}{\widehat{\sigma}^2}\, \widehat{Q}_X & 0 \\ 0 & \frac{N}{2\widehat{\sigma}^4} \end{pmatrix} \qquad \text{with} \qquad \widehat{Q}_X = \frac{1}{N}\sum_{i=1}^{N} x_i x_i^{\top}$$

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 201/207 6. Properties of Maximum Likelihood Estimators

Solution, cont’d Thus we get:

$$\widehat{\beta} \overset{asy}{\sim} \mathcal{N}\left(\beta_0,\ \widehat{\sigma}^2\left(\sum_{i=1}^{N} x_i x_i^{\top}\right)^{-1}\right) \qquad \widehat{\sigma}^2 \overset{asy}{\sim} \mathcal{N}\left(\sigma_0^2,\ \frac{2\widehat{\sigma}^4}{N}\right)$$
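A compact sketch (added, on simulated data with assumed parameter values) of these estimated asymptotic standard errors:

```python
# A sketch, assuming beta0 = (1, -0.5) and sigma2_0 = 0.25: the MLE of
# (beta, sigma^2) and the asymptotic variance blocks derived above.
import numpy as np

rng = np.random.default_rng(5)
N, beta0, sigma2_0 = 1_000, np.array([1.0, -0.5]), 0.25
X = np.column_stack([np.ones(N), rng.normal(size=N)])
y = X @ beta0 + rng.normal(0.0, np.sqrt(sigma2_0), N)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)    # MLE of beta (= OLS here)
e = y - X @ beta_hat
s2_hat = e @ e / N                              # MLE of sigma^2 (divides by N)

avar_beta = s2_hat * np.linalg.inv(X.T @ X)     # sigma2_hat * (sum x x')^{-1}
avar_s2 = 2.0 * s2_hat**2 / N                   # 2 * sigma2_hat**2 / N
print(beta_hat, np.sqrt(np.diag(avar_beta)), np.sqrt(avar_s2))
```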

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 202/207 6. Properties of Maximum Likelihood Estimators

Summary Under regularity conditions:

1 The MLE is consistent.

2 The MLE is asymptotically efficient and its variance attains the FDCR or Cramer-Rao bound.

3 The MLE is asymptotically normally distributed.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 203/207 6. Properties of Maximum Likelihood Estimators

But finite sample properties can be very different from large sample properties:

1 The maximum likelihood estimator is consistent but can be severely biased in finite samples.

2 The estimation of the variance-covariance matrix can be seriously doubtful in finite samples.

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 204/207 6. Properties of Maximum Likelihood Estimators

Theorem (Equivariance) Under regularity conditions, if $g(.)$ is a continuously differentiable function of $\theta$ defined from $\mathbb{R}^K$ to $\mathbb{R}^P$, then:

$$g\left(\widehat{\theta}\right) \xrightarrow{p} g(\theta_0)$$

$$\sqrt{N}\left(g\left(\widehat{\theta}\right) - g(\theta_0)\right) \xrightarrow{d} \mathcal{N}\left(0,\ G(\theta_0)\, I^{-1}(\theta_0)\, G(\theta_0)^{\top}\right)$$

where $\theta_0$ is the true value of the parameters and the $(P, K)$ matrix $G(\theta_0)$ is defined by

$$G(\theta) = \frac{\partial g(\theta)}{\partial\theta^{\top}}$$
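A delta-method sketch (added, for the exponential example with an assumed transformation $g(\theta) = 1/\theta$): here $G(\theta) = -1/\theta^2$, so $G\, I^{-1}\, G^{\top} = (1/\theta_0^4)\,\theta_0^2 = 1/\theta_0^2$.

```python
# A sketch, assuming g(theta) = 1/theta in the Exp(theta0) model: the MLE of
# the rate is 1/theta_hat, with asymptotic variance G I^{-1} G' = 1/theta0**2.
import numpy as np

rng = np.random.default_rng(6)
theta0, N, reps = 2.0, 1_000, 10_000
theta_hat = rng.exponential(theta0, size=(reps, N)).mean(axis=1)
z = np.sqrt(N) * (1.0 / theta_hat - 1.0 / theta0)
print(z.var(), 1.0 / theta0**2)    # both close to 0.25
```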

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 205/207 Key Concepts of Chapter 2

1 Likelihood and log-likelihood function

2 Maximum likelihood estimator (MLE) and maximum likelihood estimate

3 Gradient and Hessian matrix (deterministic elements)

4 Score vector and Hessian matrix (random elements)

5 Fisher information matrix associated to the sample

6 (Average) Fisher information matrix for one observation

7 FDCR or Cramer-Rao bound: the notion of efficiency

8 Asymptotic distribution of the MLE

9 Asymptotic variance of the MLE

10 Estimator of the asymptotic variance of the MLE

Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 206/207 End of Chapter 2


Christophe Hurlin (University of Orléans) Advanced Econometrics - HEC Lausanne December9,2013 207/207