STOR 654 Homework 4

1. Symmetric formulas for the variance

(a) Let X be a random variable with finite second moment, and let $X'$ be an independent copy of X (a random variable that is independent of X but has the same distribution). Show that $\operatorname{Var}(X) = \tfrac{1}{2}\,\mathbb{E}(X - X')^2$.
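A minimal Monte Carlo sketch of this identity (assuming NumPy and taking X to be Exponential(1) purely for illustration; neither choice comes from the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# X and an independent copy X' drawn from the same distribution (Exponential(1) here).
x = rng.exponential(scale=1.0, size=n)
x_prime = rng.exponential(scale=1.0, size=n)

# Compare Var(X) with (1/2) E[(X - X')^2]; the two estimates should be close.
print("Var(X)              ~", x.var())
print("(1/2) E[(X - X')^2] ~", 0.5 * np.mean((x - x_prime) ** 2))
```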

(b) Let $x = x_1, \dots, x_n \in \mathbb{R}$ be a finite sample with average $\bar x$. Show that
$$\frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar x)^2 \;=\; \frac{1}{2n(n-1)}\sum_{i=1}^{n}\sum_{j=1}^{n} (x_i - x_j)^2.$$
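A quick numerical check of the identity in (b), assuming NumPy (the normal sample of size 50 is just an example; any finite sample works):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=50)                      # any finite real sample will do
n = len(x)

lhs = np.sum((x - x.mean()) ** 2) / (n - 1)  # usual sample variance

# Double sum of (x_i - x_j)^2 over all ordered pairs (i, j).
rhs = np.sum((x[:, None] - x[None, :]) ** 2) / (2 * n * (n - 1))

print(lhs, rhs)                              # should agree up to floating-point error
```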

(c) Use the formula in (b) to show that the sample variance $\tilde S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar X)^2$ is unbiased.

2. Let X be a random variable with $\mathbb{E}X^2$ finite. Show that $\mathbb{E}(X - a)^2$ is minimized when $a = \mathbb{E}X$.

3. Let X be a random variable with density f and $\mathbb{E}|X|$ finite. Show as carefully as you can that $\mathbb{E}|X - a|$ is minimized when a is a median of X.
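A small empirical illustration of problems 2 and 3, assuming NumPy and using a skewed Exponential(1) sample (an arbitrary choice) so that the mean and median differ: the grid minimizer of the average squared deviation should land near the sample mean, and the minimizer of the average absolute deviation near the sample median.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=200_000)   # skewed, so the mean and median differ

grid = np.linspace(0.0, 3.0, 601)
sq_loss = np.array([np.mean((x - a) ** 2) for a in grid])
abs_loss = np.array([np.mean(np.abs(x - a)) for a in grid])

print("argmin of mean squared deviation  ~", grid[sq_loss.argmin()], "  sample mean   ~", x.mean())
print("argmin of mean absolute deviation ~", grid[abs_loss.argmin()], "  sample median ~", np.median(x))
```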

4. Consider a family $\mathcal{P} = \{f(x \mid \theta) : \theta \in \Theta\}$ of densities with $\Theta = \mathbb{R}$. Given $X \sim f(x \mid \theta) \in \mathcal{P}$ we wish to find a Bayesian point estimate of $\theta$ based on a prior density $\pi(\theta)$.

(a) Show that under absolute loss $\ell(\theta, \theta') = |\theta - \theta'|$, the Bayes estimator is the posterior median
$$\hat\theta_\pi(x) = u \quad\text{such that}\quad \int_{-\infty}^{u} \pi(\theta \mid x)\, d\theta = \frac{1}{2}.$$
You may assume the median exists.
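A discretized sanity check of this claim on a toy model, assuming SciPy; the N(0,1) prior, the single N(θ,1) observation, and the observed value 1.3 are all arbitrary choices, not part of the problem. The sketch computes the posterior expected absolute loss on a grid and compares its minimizer to the posterior median.

```python
import numpy as np
from scipy import stats

# Hypothetical setup (not from the problem): theta ~ N(0, 1), X | theta ~ N(theta, 1), x = 1.3 observed.
x_obs = 1.3
theta = np.linspace(-6.0, 6.0, 2401)
post = stats.norm.pdf(x_obs, loc=theta, scale=1.0) * stats.norm.pdf(theta, loc=0.0, scale=1.0)
post /= np.trapz(post, theta)                            # normalized posterior on the grid

# Posterior expected absolute loss for each candidate estimate a, and the posterior median.
risk = np.array([np.trapz(np.abs(theta - a) * post, theta) for a in theta])
cdf = np.cumsum(post) * (theta[1] - theta[0])
print("argmin of posterior risk ~", theta[risk.argmin()])
print("posterior median         ~", theta[np.searchsorted(cdf, 0.5)])
```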

(b) Show that under zero-one loss $\ell(\theta, \theta') = I(\theta \neq \theta')$, the Bayes estimator is the posterior mode

$$\hat\theta_\pi(x) = \operatorname*{argmax}_{\theta \in \Theta} \pi(\theta \mid x).$$
You may assume the maximum exists.

5. Let $\mathcal{D}$ be a family of decision rules for a decision problem. Show that if $d \in \mathcal{D}$ is admissible and has constant risk then it is minimax.

6. Let $X \sim \mathrm{Bin}(n, \theta)$ with $\theta \in (0, 1)$. Suppose that we wish to estimate $\theta$ under the loss
$$\ell(\theta, \theta') = \frac{(\theta - \theta')^2}{\theta(1 - \theta)}.$$
Consider the natural estimator $\hat\theta(x) = x/n$.

(a) Find the risk of $\hat\theta(x)$ under the loss $\ell$.
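One way to get a feel for part (a) before computing the risk exactly is to estimate it by simulation. A sketch assuming NumPy, with n = 20 and the grid of θ values chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 20, 200_000                        # arbitrary sample size and replication count

for theta in [0.1, 0.3, 0.5, 0.7, 0.9]:
    x = rng.binomial(n, theta, size=reps)
    theta_hat = x / n
    loss = (theta_hat - theta) ** 2 / (theta * (1 - theta))
    print(f"theta = {theta:.1f}   estimated risk ~ {loss.mean():.4f}")
```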

Suppose that we place the (uniform) prior π(θ) = 1 on the unknown parameter θ.

(b) Find the marginal distribution $p(x)$ of X and the posterior distribution $\pi(\theta \mid x)$ of $\theta$ given $X = x$ under $\pi$.
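A numerical companion to part (b), assuming SciPy, with n and x chosen arbitrarily: the marginal $p(x)$ and a grid approximation to the posterior can be obtained by integrating the binomial likelihood against the uniform prior, which gives something to compare a closed-form answer against.

```python
import numpy as np
from scipy import stats

n, x = 20, 7                                   # arbitrary illustration values
theta = np.linspace(1e-6, 1 - 1e-6, 5001)

lik = stats.binom.pmf(x, n, theta)             # f(x | theta)
prior = np.ones_like(theta)                    # uniform prior pi(theta) = 1 on (0, 1)

p_x = np.trapz(lik * prior, theta)             # marginal p(x) by numerical integration
posterior = lik * prior / p_x                  # grid approximation to pi(theta | x)

print("p(x) ~", p_x)
print("posterior integrates to ~", np.trapz(posterior, theta))   # should be 1
```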

(c) By considering posterior risk, show that $\hat\theta$ is the Bayes estimator under $\pi$.

(d) What can you say about the minimaxity of $\hat\theta$?

Hint: in parts (b) and (c) you will want to make use of the beta function.

7. (Ancillary statistics). Let $\mathcal{P} = \{f(x \mid \theta) : \theta \in \Theta\}$ be a family of distributions on a sample space $\mathcal{X}$, and let $X \in \mathcal{X}$ have density $f(x \mid \theta) \in \mathcal{P}$. A statistic $T : \mathcal{X} \to \mathcal{T}$ is said to be ancillary if the distribution of $T(X)$ is the same for each $\theta \in \Theta$, i.e., does not depend on $\theta$.

(a) Show that if $X_1, \dots, X_n$ are i.i.d. samples from a location family $\mathcal{P} = \{f(x - \theta) : \theta \in \mathbb{R}\}$, then $T(x) = x_{(n)} - x_{(1)}$ is ancillary.
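A simulation sketch of part (a), assuming NumPy and taking the normal location family N(θ, 1) as a concrete instance (an arbitrary choice): summary statistics of the sample range should look the same across different θ.

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 10, 100_000

for theta in [-5.0, 0.0, 3.0]:
    # Location family instance: X_i = theta + Z_i with Z_i ~ N(0, 1).
    x = rng.normal(loc=theta, scale=1.0, size=(reps, n))
    t = x.max(axis=1) - x.min(axis=1)          # T(x) = x_(n) - x_(1), the sample range
    print(f"theta = {theta:+.1f}   mean of T ~ {t.mean():.3f}   sd of T ~ {t.std():.3f}")
```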

(b) Show that if $X_1, \dots, X_n$ are i.i.d. samples from a scale family $\mathcal{P} = \{\sigma^{-1} f(\sigma^{-1} x) : \sigma > 0\}$, then $T(x) = x_1/(x_1 + x_2)$ is ancillary.
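A matching sketch for part (b), assuming NumPy and an Exponential(1) scale family as the concrete instance (again an arbitrary choice): the quartiles of $X_1/(X_1 + X_2)$ should look the same across different σ.

```python
import numpy as np

rng = np.random.default_rng(5)
reps = 200_000

for sigma in [0.5, 1.0, 4.0]:
    # Scale family instance: X_i = sigma * Z_i with Z_i ~ Exponential(1).
    x1 = sigma * rng.exponential(size=reps)
    x2 = sigma * rng.exponential(size=reps)
    t = x1 / (x1 + x2)                         # T(x) = x_1 / (x_1 + x_2)
    print(f"sigma = {sigma:.1f}   quartiles of T ~ {np.percentile(t, [25, 50, 75]).round(3)}")
```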
