
Properties of Estimators

BS2, Lecture 2, Michaelmas Term 2004. Steffen Lauritzen, University of Oxford; October 15, 2004.

1 Notation and setup

X denotes the sample space, typically either finite or countable, or an open subset of R^k. We have observed x ∈ X, which is assumed to be a realisation X = x of a random variable X. The probability mass function (or density) of X is partially unknown, i.e. of the form f(x; θ), where θ is a parameter varying in the parameter space Θ. This lecture is concerned with principles and methods for estimating (guessing) θ on the basis of having observed X = x.

2 Unbiased estimators

An estimator θ̂ = t(X) is said to be unbiased for θ if it equals θ in expectation:

$$E_\theta\{t(X)\} = E\{\hat\theta\} = \theta.$$

Intuitively, an unbiased estimator is ‘right on target’. The bias of an estimator θ̂ = t(X) of θ is

$$\mathrm{bias}(\hat\theta) = E\{t(X) - \theta\}.$$

If bias(θ̂) is of the form cθ, then θ̃ = θ̂/(1 + c) is unbiased for θ. We then say that θ̃ is a bias-corrected version of θ̂.
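As a concrete illustration (my addition, not part of the original notes): for a normal sample, σ̂² = SSD/n has expectation σ²(n − 1)/n, i.e. bias cσ² with c = −1/n, and dividing by 1 + c gives the usual bias-corrected estimator SSD/(n − 1). A minimal simulation sketch in Python, assuming numpy is available and with arbitrary choices of n and σ²:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2 = 10, 4.0                    # arbitrary sample size and true variance
reps = 200_000

x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
ssd = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

var_hat = ssd / n                      # bias of the form c*sigma2 with c = -1/n
var_corrected = ssd / (n - 1)          # bias-corrected: divide by (1 + c) = (n - 1)/n

print(var_hat.mean())                  # approx sigma2 * (n - 1)/n = 3.6
print(var_corrected.mean())            # approx sigma2 = 4.0
```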

3 Unbiased functions

More generally t(X) is unbiased for a function g(θ) if

Eθ{t(X)} = g(θ).

Note that even if θ̂ is an unbiased estimator of θ, g(θ̂) will generally not be an unbiased estimator of g(θ) unless g is linear or affine. This limits the importance of the notion of unbiasedness. It might be at least as important that an estimator is accurate, in the sense that its distribution is highly concentrated around θ. If an unbiased estimator of g(θ) has minimum variance among all unbiased estimators of g(θ), it is called a minimum variance unbiased estimator (MVUE).
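As an illustrative sketch (my addition; numpy assumed, with arbitrary values of n, μ, σ): X̄ is unbiased for μ, but g(X̄) = X̄² is not unbiased for g(μ) = μ², since E(X̄²) = μ² + σ²/n:

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu, sigma = 5, 2.0, 3.0             # arbitrary choices
reps = 500_000

xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)

print(xbar.mean())                     # approx mu = 2.0: xbar is unbiased for mu
print((xbar ** 2).mean())              # approx mu^2 + sigma^2/n = 5.8, not mu^2 = 4.0
```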

4 Is unbiasedness a good thing?

Unbiasedness is important when combining estimates, as averages of unbiased estimators are unbiased (sheet 1).

When combining standard deviations s_1, . . . , s_k with degrees of freedom d_1, . . . , d_k we always average their squares:

$$\bar{s}^2 = \frac{d_1 s_1^2 + \cdots + d_k s_k^2}{d_1 + \cdots + d_k},$$

as each of the s_i² is an unbiased estimator of the variance σ², whereas the s_i are not unbiased estimates of σ. Be careful when averaging biased estimators! It may well be appropriate to make a bias-correction before averaging.
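A small sketch of this point (my addition; numpy assumed, with arbitrary degrees of freedom): pooling the s_i² with weights d_i recovers σ² on average, whereas the individual s_i systematically underestimate σ:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2 = 4.0
d = np.array([3, 7, 12])                      # arbitrary degrees of freedom d_1, ..., d_k
reps = 100_000

# use the chi-squared representation: d_i * s_i^2 / sigma^2 ~ chi^2(d_i)
s2 = sigma2 * rng.chisquare(d, size=(reps, d.size)) / d
s = np.sqrt(s2)

pooled_var = (d * s2).sum(axis=1) / d.sum()   # weighted average of the squares
print(pooled_var.mean())                      # approx sigma^2 = 4.0 (unbiased)
print(s.mean(axis=0))                         # each entry below sigma = 2.0 (s_i biased for sigma)
```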

5 Mean Square Error

One way of measuring the accuracy of an estimator is via its mean square error (MSE):

$$\mathrm{mse}(\hat\theta) = E(\hat\theta - \theta)^2.$$

Since it holds for any random variable Y that E(Y²) = V(Y) + {E(Y)}², the MSE can be decomposed as

$$\mathrm{mse}(\hat\theta) = V(\hat\theta - \theta) + \{E(\hat\theta - \theta)\}^2 = V(\hat\theta) + \{\mathrm{bias}(\hat\theta)\}^2,$$

so getting a small MSE often involves a trade-off between variance and bias. By not insisting on θ̂ being unbiased, the variance can sometimes be drastically reduced. For unbiased estimators the MSE is obviously equal to the variance, mse(θ̂) = V(θ̂), so no trade-off can be made.
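To see the trade-off in action (an illustrative sketch of my own; numpy assumed, parameter values arbitrary): the shrunken estimator aX̄ with 0 < a < 1 is biased for θ, yet for this choice of θ and σ its MSE is well below that of the unbiased X̄:

```python
import numpy as np

rng = np.random.default_rng(3)
n, theta, sigma = 10, 1.0, 3.0         # arbitrary true values
reps = 500_000
a = 0.5                                # arbitrary shrinkage factor, 0 < a < 1

xbar = rng.normal(theta, sigma, size=(reps, n)).mean(axis=1)
shrunk = a * xbar                      # biased towards zero

print(((xbar - theta) ** 2).mean())    # MSE of xbar: approx sigma^2/n = 0.9
print(((shrunk - theta) ** 2).mean())  # approx a^2 * 0.9 + (1 - a)^2 * 1 = 0.475 < 0.9
```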

6 Asymptotic consistency

An estimator θ̂ (more precisely a sequence of estimators θ̂_n) is said to be (weakly) consistent if it converges to θ in probability, i.e. if for all ε > 0,

$$\lim_{n\to\infty} P\{|\hat\theta_n - \theta| > \varepsilon\} = 0.$$

It is consistent in mean square error if lim_{n→∞} mse(θ̂_n) = 0. Both of these notions refer to the asymptotic behaviour of θ̂ and express that, as data accumulate, θ̂ gets closer and closer to the true value of θ. Asymptotic consistency is a good thing, but in a given case, for fixed n, it may be only modestly relevant. Asymptotic inconsistency is generally worrying.
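A minimal simulation sketch of weak consistency (my addition; numpy assumed, with arbitrary ε and sample sizes): for X̄_n based on N(θ, 1) data, the exceedance probability P{|X̄_n − θ| > ε} shrinks towards zero as n grows:

```python
import numpy as np

rng = np.random.default_rng(4)
theta, eps = 0.0, 0.1                  # arbitrary true value and tolerance
reps = 10_000

for n in (10, 100, 1000):
    xbar = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)
    print(n, np.mean(np.abs(xbar - theta) > eps))   # decreases towards 0
```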

7 Fisher consistency

An estimator is Fisher consistent if the estimator is the same functional of the empirical distribution function as the parameter is of the true distribution function:

$$\hat\theta = h(F_n), \qquad \theta = h(F_\theta),$$

where F_n and F_θ are the empirical and theoretical distribution functions:

$$F_n(t) = \frac{1}{n}\sum_{i=1}^{n} 1\{X_i \le t\}, \qquad F_\theta(t) = P_\theta\{X \le t\}.$$

Examples are μ̂ = X̄, which is Fisher consistent for the mean μ, and σ̂² = SSD/n, which is Fisher consistent for σ². Note that s² = SSD/(n − 1) is not Fisher consistent.
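A sketch of the plug-in idea (my addition; numpy assumed): applying the mean and variance functionals to the empirical distribution of an arbitrary sample gives X̄ and SSD/n, whereas s² = SSD/(n − 1) is not a plug-in (Fisher-consistent) estimate:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(3.0, 2.0, size=20)        # arbitrary sample

mu_hat = x.mean()                        # h(F_n) for the mean functional
sigma2_hat = np.mean((x - mu_hat) ** 2)  # h(F_n) for the variance functional = SSD/n

print(sigma2_hat, x.var(ddof=0))         # identical: SSD/n
print(x.var(ddof=1))                     # s^2 = SSD/(n - 1), not a plug-in estimate
```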

8 Consistency relations

If an estimator is mean square consistent, it is weakly consistent. This follows from Chebyshev’s inequality:

$$P\{|\hat\theta - \theta| > \varepsilon\} \le \frac{E(\hat\theta - \theta)^2}{\varepsilon^2} = \frac{\mathrm{mse}(\hat\theta)}{\varepsilon^2},$$

so if mse(θ̂) → 0 as n → ∞, so does P{|θ̂ − θ| > ε}. The relationship between Fisher consistency and asymptotic consistency is less clear. It is generally true that

$$\lim_{n\to\infty} F_n(t) = F_\theta(t) \quad \text{for all continuity points } t \text{ of } F_\theta,$$

so θ̂ = h(F_n) → h(F_θ) = θ if h is a suitably continuous functional.

9 Score

For X = x to be informative about θ, the density (and therefore the likelihood) must vary with θ. If f(x; θ) is smooth and differentiable in θ, this change is quantified to first order by the score function:

$$s(x;\theta) = \frac{\partial}{\partial\theta}\log f(x;\theta) = \frac{f'(x;\theta)}{f(x;\theta)}.$$

If differentiation w.r.t. θ and integration w.r.t. x can be interchanged, the score statistic S(θ) = s(X; θ) has expectation zero:

$$E\{S(\theta)\} = \int \frac{f'(x;\theta)}{f(x;\theta)}\,f(x;\theta)\,dx = \int f'(x;\theta)\,dx = \frac{\partial}{\partial\theta}\int f(x;\theta)\,dx = \frac{\partial}{\partial\theta}\,1 = 0.$$
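A quick numerical check (my addition; numpy assumed), using a Poisson(θ) model where log f(x; θ) = x log θ − θ − log x! and hence s(x; θ) = x/θ − 1: the simulated mean of the score at the true θ is close to zero:

```python
import numpy as np

rng = np.random.default_rng(6)
theta = 2.5                            # arbitrary true parameter
x = rng.poisson(theta, size=1_000_000)

score = x / theta - 1.0                # d/dtheta log f(x; theta) for the Poisson model
print(score.mean())                    # approx 0
```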

10 Fisher information

The variance of S(θ) is the Fisher information about θ:

$$i(\theta) = E\{S(\theta)^2\}.$$

If integration and differentiation can be interchanged,

$$i(\theta) = V\{S(\theta)\} = -E\left\{\frac{\partial}{\partial\theta}S(\theta)\right\} = -E\left\{\frac{\partial^2}{\partial\theta^2}\log f(X;\theta)\right\},$$

since then

$$E\left\{\frac{\partial^2}{\partial\theta^2}\log f(X;\theta)\right\} = \int \frac{f''(x;\theta)}{f(x;\theta)}\,f(x;\theta)\,dx - \int \left\{\frac{f'(x;\theta)}{f(x;\theta)}\right\}^2 f(x;\theta)\,dx = 0 - E\{S(\theta)^2\} = -i(\theta).$$
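Continuing the Poisson sketch (my addition; numpy assumed): here ∂²/∂θ² log f(x; θ) = −x/θ², so both expressions give i(θ) = 1/θ, and simulation confirms that they agree:

```python
import numpy as np

rng = np.random.default_rng(7)
theta = 2.5                            # arbitrary true parameter
x = rng.poisson(theta, size=1_000_000)

score = x / theta - 1.0                # S(theta)
second_deriv = -x / theta**2           # d^2/dtheta^2 log f(x; theta)

print(np.mean(score ** 2))             # E{S(theta)^2}: approx 1/theta = 0.4
print(-np.mean(second_deriv))          # -E{second derivative}: approx 1/theta = 0.4
```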

11 The normal case

It may be illuminating to consider the special case where X ∼ N(θ, σ²) with σ² known and θ unknown. Then

$$\log f(x;\theta) = -\tfrac{1}{2}\log(2\pi\sigma^2) - (x-\theta)^2/(2\sigma^2),$$

so the score statistic and information are

$$s(x;\theta) = (x-\theta)/\sigma^2, \qquad i(\theta) = E(1/\sigma^2) = 1/\sigma^2.$$

So the score statistic can be seen as a linear approximation to the normal case, with the information determining the scale; here it is equal to the inverse of the variance.
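A short check of the normal case (my addition; numpy assumed, with arbitrary θ and σ²): the simulated score has mean zero and variance 1/σ²:

```python
import numpy as np

rng = np.random.default_rng(8)
theta, sigma2 = 1.0, 4.0               # arbitrary mean, known variance
x = rng.normal(theta, np.sqrt(sigma2), size=1_000_000)

score = (x - theta) / sigma2           # s(x; theta) in the normal case
print(score.mean())                    # approx 0
print(score.var())                     # approx i(theta) = 1/sigma2 = 0.25
```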

12 Cramér–Rao’s inequality

The Fisher information yields a lower bound on the variance of an unbiased estimator. We assume suitable smoothness conditions, including that

• the region of positivity of f(x; θ) is constant in θ;
• integration and differentiation can be interchanged.

Then for any unbiased estimator T = t(X) of g(θ) it holds

$$V(T) = V\{\hat g(\theta)\} \ge \{g'(\theta)\}^2 / i(\theta).$$

Note that for g(θ) = θ the lower bound is simply the inverse Fisher information, i(θ)⁻¹.
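For a sample X_1, …, X_n from N(θ, σ²) with σ² known, the information is n/σ² and X̄ is unbiased with variance σ²/n, so the bound is attained. A simulation sketch (my addition; numpy assumed, parameter values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(9)
n, theta, sigma2 = 25, 1.0, 4.0        # arbitrary choices
reps = 200_000

xbar = rng.normal(theta, np.sqrt(sigma2), size=(reps, n)).mean(axis=1)

cr_bound = sigma2 / n                  # 1 / {n i(theta)} = sigma^2 / n = 0.16
print(cr_bound, xbar.var())            # approx equal: xbar attains the bound
```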

13 Proof of Cramér–Rao’s inequality

Since E{S(θ)} = 0, the Cauchy–Schwarz inequality yields

$$|\mathrm{Cov}\{T, S(\theta)\}|^2 \le V(T)\,V\{S(\theta)\} = V(T)\,i(\theta). \qquad (1)$$

Now, since E{S(θ)} = 0,

$$\mathrm{Cov}\{T, S(\theta)\} = E\{T\,S(\theta)\} = \int t(x)\,\frac{f'(x;\theta)}{f(x;\theta)}\,f(x;\theta)\,dx = \int t(x)\,f'(x;\theta)\,dx = \frac{\partial}{\partial\theta}E\{T\} = g'(\theta).$$

Inserting this into the inequality (1) and dividing both sides by i(θ) yields the result.

14 Attaining the lower bound

It is rarely possible to find an estimator which attains the bound. In fact (under the usual conditions), an unbiased estimator of g(θ) with variance {g'(θ)}²/i(θ) exists if and only if the score statistic has the form

$$s(x;\theta) = \frac{i(\theta)\{t(x) - g(\theta)\}}{g'(\theta)}.$$

In the special case where g(θ) = θ we have

$$s(x;\theta) = i(\theta)\{t(x) - \theta\}.$$

15 Proof of the expression for the score statistic

The Cauchy–Schwarz inequality holds with equality only if T is an affine function of S(θ), so that

$$t(x) = \hat g(\theta) = a(\theta)\,s(x;\theta) + b(\theta) \qquad (2)$$

for some a(θ), b(θ). Since t(X) is unbiased for g(θ) and E{S(θ)} = 0, we have b(θ) = g(θ). From the proof of the inequality we have Cov{T, S(θ)} = g'(θ). Combining this with the affine expression (2) gives

$$g'(\theta) = \mathrm{Cov}\{T, S(\theta)\} = a(\theta)\,V\{S(\theta)\} = a(\theta)\,i(\theta),$$

so a(θ) = g'(θ)/i(θ), and solving (2) for s(x; θ) gives the stated form.

16 Efficiency

If an unbiased estimator attains the Cramér–Rao bound, it is said to be efficient. An efficient unbiased estimator is clearly also MVUE. The Bahadur efficiency of an unbiased estimator is the inverse of the ratio between its variance and the bound:

$$0 \le \mathrm{beff}\,\hat g(\theta) = \frac{\{g'(\theta)\}^2}{i(\theta)\,V\{\hat g(\theta)\}} \le 1.$$

Since the bound is rarely attained, it is sometimes more reasonable to compare with the smallest obtainable variance:

$$0 \le \mathrm{eff}\,\hat g(\theta) = \frac{\inf_{\{T : E(T) = g(\theta)\}} V(T)}{V\{\hat g(\theta)\}} \le 1.$$
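As an illustration (my addition, not from the notes; numpy assumed): for a normal mean the sample median is unbiased but has variance roughly πσ²/(2n) for large n, so its efficiency relative to the Cramér–Rao bound σ²/n is about 2/π ≈ 0.64. Since X̄ attains the bound here, this also equals the efficiency relative to the best unbiased estimator:

```python
import numpy as np

rng = np.random.default_rng(10)
n, theta, sigma2 = 101, 0.0, 1.0       # arbitrary; odd n gives a unique sample median
reps = 100_000

x = rng.normal(theta, np.sqrt(sigma2), size=(reps, n))
med = np.median(x, axis=1)

cr_bound = sigma2 / n                  # {g'(theta)}^2 / i(theta) with g(theta) = theta
print(cr_bound / med.var())            # efficiency of the median: approx 2/pi = 0.64
```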
