
STAT 571 Assignment 1 solutions

1. If Ω is a set and C a collection of subsets of Ω, let A be the intersection of all σ-algebras that contain C. Prove that A is the σ-algebra generated by C.

Solution: Let {A_α | α ∈ I} be the collection of all σ-algebras that contain C, and set A = ∩_{α∈I} A_α. We first show that A is a σ-algebra. There are three things to prove.

(a) For every α ∈ I, A_α is a σ-algebra, so Ω ∈ A_α, and hence Ω ∈ ∩_{α∈I} A_α = A.

(b) If B ∈ A, then B ∈ A_α for every α ∈ I. Since A_α is a σ-algebra, we have B^c ∈ A_α. But this is true for every α ∈ I, so we have B^c ∈ A.

(c) If B_1, B_2, ... are sets in A, then B_1, B_2, ... belong to A_α for each α ∈ I. Since A_α is a σ-algebra, we have ∪_{n=1}^∞ B_n ∈ A_α. But this is true for every α ∈ I, so we have ∪_{n=1}^∞ B_n ∈ A.

Thus A is a σ-algebra that contains C, and it must be the smallest one since A ⊆ A_α for every α ∈ I.

2. Prove that the set of rational numbers Q is a Borel set in R.

Solution: For every x ∈ R, the set {x} is the complement of an open set, and hence Borel. Since there are only countably many rational numbers (see Proposition 1.3.13 on page 15 of our text), we may express Q as a countable union of Borel sets: Q = ∪_{x∈Q} {x}. Therefore Q is a Borel set.

3. Prove that the countable union of countable sets is countable.

Solution: First we note that a subset of a countable set must be countable. If F is countable there is a function c : F → N that is one-to-one. If E ⊆ F, then the restriction c|_E : E → N is also one-to-one, so E is countable.

Now if E_1, E_2, ... are countable sets, define F_1 = E_1 and F_n = E_n \ (∪_{i=1}^{n−1} F_i) for n ≥ 2. Then the F_n's are countable, disjoint, and ∪_{n=1}^∞ E_n = ∪_{n=1}^∞ F_n. For every f ∈ ∪_{n=1}^∞ F_n, let n(f) denote the unique index so that f ∈ F_{n(f)}. Also, for n ∈ N with F_n ≠ ∅, let c_n be a one-to-one function from F_n into N.

Now define a map c from ∪_{n=1}^∞ F_n into N × N by c(f) = (n(f), c_{n(f)}(f)). Let's convince ourselves that c is one-to-one. Suppose that f_1, f_2 ∈ ∪_{n=1}^∞ F_n and that c(f_1) = c(f_2). Taking the first coordinate of c(f_1) = c(f_2), we find that n(f_1) = n(f_2); let's call the common value n. This means that f_1, f_2 ∈ F_n. The second component of c(f_1) = c(f_2) tells us that c_n(f_1) = c_n(f_2), and since c_n is one-to-one, we conclude that f_1 = f_2. That is, c is one-to-one.

We know that N × N is countable (we proved this in the first lecture), so there is a one-to-one map φ from N × N to N. The composition φ ∘ c is a one-to-one map from ∪_{n=1}^∞ F_n into N.
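As a concrete sketch of the map built above, the snippet below disjointifies a few illustrative finite sets and composes c(f) = (n(f), c_{n(f)}(f)) with the Cantor pairing function, one standard choice for the one-to-one map φ (the sets and helper names here are hypothetical):

```python
# Sketch of the injection from Exercise 3, with the Cantor pairing
# function playing the role of phi : N x N -> N.
def cantor_pair(m, n):
    # Standard Cantor pairing: a bijection from N x N onto N.
    return (m + n) * (m + n + 1) // 2 + n

# Countably many countable sets E_1, E_2, ... (here: finite samples).
E = [[0, 2, 4, 6], [1, 2, 3], [10, 20, 30]]

# Disjointify: F_n = E_n minus everything already seen.
seen, F = set(), []
for En in E:
    Fn = [x for x in En if x not in seen]
    seen.update(Fn)
    F.append(Fn)

# c(f) = phi(n(f), c_{n(f)}(f)): block index, then position within block.
code = {}
for n, Fn in enumerate(F):
    for i, f in enumerate(Fn):
        code[f] = cantor_pair(n, i)

# One-to-one: distinct elements of the union receive distinct codes.
assert len(set(code.values())) == len(code)
```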

4. Let A be the σ-algebra in R generated by the singletons. That is, A = σ(C), where C = {{x} : x ∈ R}. Show that A is a proper subset of the Borel sets on R.

Solution: The solution depends on the fact that we have a concrete way to identify sets in A. Define F = {E ⊆ R | E is countable, or E^c is countable}; we claim that A = F. If E is a countable set, then E = ∪_{x∈E} {x} is the countable union of singletons, and so belongs to σ(C) = A. If E^c is countable, then E^c, and hence E, belongs to σ(C) = A. This shows that F ⊆ A. To prove the other inclusion, we note that C ⊆ F, so it suffices to prove that F is a σ-algebra.

(a) The empty set ∅ is countable, so R = ∅^c ∈ F.

(b) If E is countable, then E^c has countable complement, while if E has countable complement, then E^c is countable. Either way, E ∈ F implies E^c ∈ F.

(c) Suppose that E_1, E_2, ... belong to F. If all of the E_n's are countable, then so is the union (by Exercise 3), and hence it belongs to F. On the other hand, if one of the E_n's, say E_N, has countable complement, then (∪_n E_n)^c = ∩_n E_n^c ⊆ E_N^c is countable, so that ∪_n E_n ∈ F. Either way, ∪_n E_n ∈ F.

Since singletons are Borel sets, so is every member of σ(C) = A. However, the Borel set (0, 1) is not countable (see Proposition 1.3.14 on page 15 of the text) and neither is its complement (−∞, 0] ∪ [1, ∞). Thus (0, 1) is an example of a Borel set that does not belong to A.

5. Prove the following, where (Ω, F, P) is a probability space and all sets are assumed to be in F.

(i) If A ⊆ B, then P(A) ≤ P(B).

(ii) P(∪_{n=1}^∞ A_n) ≤ Σ_{n=1}^∞ P(A_n).

(iii) If A_{n+1} ⊆ A_n for all n, then P(A_n) → P(∩_{n=1}^∞ A_n).

Solution:

(i) B is the disjoint union B = A ∪ (B \ A), so P(B) = P(A) + P(B \ A) ≥ P(A).

(ii) Define A'_1 = A_1 and, for n ≥ 2, A'_n = A_n \ (∪_{i=1}^{n−1} A'_i). Then the A'_n's are disjoint, A'_n ⊆ A_n for each n, and ∪_n A_n = ∪_n A'_n. Therefore

P(∪_n A_n) = P(∪_n A'_n) = Σ_n P(A'_n) ≤ Σ_n P(A_n).

(iii) For every n we can write An as the disjoint union

A_n = (A_n \ A_{n+1}) ∪ (A_{n+1} \ A_{n+2}) ∪ ... ∪ (∩_n A_n),

to obtain

P(A_n) = P(A_n \ A_{n+1}) + P(A_{n+1} \ A_{n+2}) + ··· + P(∩_n A_n).

This shows that P(A_n) − P(∩_n A_n) is the tail of a convergent series, and thus converges to zero as n → ∞.

STAT 571 Assignment 2 solutions

6. Prove that a simple function s (as in Definition 2.1.1) is a random variable (as in Definition 2.1.6).

Solution: Write s = Σ_{k=1}^n a_k 1_{A_k} where the A_k's are disjoint members of F. Then for any λ ∈ R we have

{ω | s(ω) < λ} = ∪_{k | a_k < λ} A_k.

Since this set belongs to F, s is a random variable.

7. Suppose that Ω = {0, 1, 2, ...}, F = all subsets of Ω, and P({n}) = e^{−1}/n! for n ∈ Ω. Calculate E(X) where X(n) = n³ for all n ∈ Ω.

Solution: We need to calculate the infinite sum Σ_{n=0}^∞ n³ e^{−1}/n!. Let's begin with a simpler problem: Σ_{n=0}^∞ n e^{−1}/n!. Here the factor of n cancels nicely with part of the factorial on the bottom to give

Σ_{n=0}^∞ n e^{−1}/n! = Σ_{n=1}^∞ e^{−1}/(n−1)! = Σ_{k=0}^∞ e^{−1}/k! = 1.

Attempting the same trick with n² shows that we will not get the desired cancellation unless we write n² = n(n−1) + n:

Σ_{n=0}^∞ n² e^{−1}/n! = Σ_{n=0}^∞ [n(n−1) + n] e^{−1}/n!

 = Σ_{n=0}^∞ n(n−1) e^{−1}/n! + Σ_{n=0}^∞ n e^{−1}/n!

 = Σ_{n=2}^∞ e^{−1}/(n−2)! + Σ_{n=0}^∞ n e^{−1}/n!

 = Σ_{k=0}^∞ e^{−1}/k! + Σ_{n=0}^∞ n e^{−1}/n!

 = 1 + 1 = 2.

To solve the original question, write n³ = n(n−1)(n−2) + 3n(n−1) + n and repeat the method above to get Σ_{n=0}^∞ n³ e^{−1}/n! = 1 + 3 + 1 = 5.
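The factorial-moment computations above are easy to confirm numerically; this short check (a sketch, truncating the series at 60 terms) recovers the values 1, 2, and 5:

```python
# Numeric check of the series above: sum_n n^k e^{-1}/n! for k = 1, 2, 3.
import math

def moment(k, terms=60):
    # Truncated series; the terms beyond n = 60 are negligibly small.
    return sum(n**k * math.exp(-1) / math.factorial(n) for n in range(terms))

for k, expected in [(1, 1.0), (2, 2.0), (3, 5.0)]:
    assert abs(moment(k) - expected) < 1e-9
```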

8. Show, by example, that X need not be a random variable even if {ω | X(ω) = λ} ∈ F for every λ ∈ R.

Solution: Let Ω = R and F be the σ-algebra generated by the singletons. From the previous assignment, we know that A ∈ F if and only if A or A^c is countable. Therefore the map X from (R, F) to (R, B(R)) given by X(ω) = ω (the identity mapping) is not a random variable. For example, the set A = (0, 1) is a Borel set, but X^{−1}(A) = A ∉ F.

On the other hand, for every singleton we have X^{−1}({λ}) = {λ} ∈ F. This gives the counterexample.

9. Prove that E(X)² ≤ E(X²) for any non-negative random variable X. Hint: First look at simple functions.

Solution: If s = Σ_{k=1}^n a_k 1_{A_k} is a simple function, then so is its square s² = Σ_{k=1}^n a_k² 1_{A_k}, and E(s) = Σ_{k=1}^n a_k P(A_k) and E(s²) = Σ_{k=1}^n a_k² P(A_k). Applying the Cauchy-Schwarz inequality to the vectors x = (a_1 P(A_1)^{1/2}, ..., a_n P(A_n)^{1/2}) and y = (P(A_1)^{1/2}, ..., P(A_n)^{1/2}) gives

E(s)² = ⟨x, y⟩² ≤ ‖x‖² ‖y‖² = E(s²) · 1.

Now, for a general non-negative random variable X, let s_k ∈ F_s^+ be simple functions so that s_k ↑ X. Then s_k² ↑ X², so E(X)² = lim_k E(s_k)² ≤ lim_k E(s_k²) = E(X²).

Here's an even better proof that uses the variance of a random variable.

0 ≤ E((X − E(X))²)

 = E(X² − 2X E(X) + E(X)²)

 = E(X²) − 2E(X)E(X) + E(X)²

 = E(X²) − E(X)².
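For a quick finite sanity check of E(s)² ≤ E(s²), take a simple function with arbitrarily chosen values a_k on disjoint events of probability p_k (the numbers below are illustrative):

```python
# E(s)^2 <= E(s^2) for a simple function s = sum_k a_k 1_{A_k}
# with disjoint events A_k of probability p_k.
a = [0.5, 2.0, 3.0]
p = [0.2, 0.3, 0.5]

Es  = sum(ak * pk for ak, pk in zip(a, p))        # E(s)
Es2 = sum(ak**2 * pk for ak, pk in zip(a, p))     # E(s^2)
assert Es**2 <= Es2
```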

10. An important concept in statistics is the variance of a random variable, defined as

Var (Y) = E[(Y − E(Y))²] if E(Y²) < ∞, and Var (Y) = ∞ otherwise.

Show that if X_n(ω) → X(ω) for every ω ∈ Ω, then Var (X) ≤ lim inf_n Var (X_n).

Solution: We may as well assume that lim inf_n Var (X_n) < ∞, otherwise the conclusion is trivial. At the same time, let's extract a subsequence X_{n'} so that Var (X_{n'}) → lim inf_n Var (X_n) as n' → ∞. In other words, without loss of generality we may assume that sup_n Var (X_n) < ∞; let's call this value K.

Since Var (X_n) < ∞, we have E(X_n²) < ∞, and by the previous exercise this implies E(|X_n|) ≤ E(X_n²)^{1/2} < ∞. In other words, X_n is integrable. Our next job is to show that E(X_n) is a bounded sequence of numbers. The triangle inequality |X(ω) − E(X_n)| ≤ |X(ω) − X_n(ω)| + |X_n(ω) − E(X_n)| implies the following set inclusion for any value M > 0:

{ω : |X(ω) − E(X_n)| > 2M} ⊆ {ω : |X(ω) − X_n(ω)| > M} ∪ {ω : |X_n(ω) − E(X_n)| > M},

and hence

P(|X − E(X_n)| > 2M) ≤ P(|X − X_n| > M) + P(|X_n − E(X_n)| > M).

Take expectations over the inequality 1_{|X_n − E(X_n)| > M} ≤ (X_n − E(X_n))²/M² to give P(|X_n − E(X_n)| > M) ≤ Var (X_n)/M² ≤ K/M². Combined with the previous inequality we obtain

P(|X − E(X_n)| > 2M) ≤ P(|X − X_n| > M) + K/M².

Fix M > 0 so large that K/M² < 1/8. The pointwise convergence of X_n to X implies that the sets (|X − X_n| > M) decrease to ∅ as n → ∞, so that P(|X − X_n| > M) → 0. Therefore we can choose N' so large that n ≥ N' implies P(|X − X_n| > M) ≤ 1/8, and thus P(|X − E(X_n)| > 2M) ≤ 1/4.

The sets (|X| > N) decrease to ∅ as N → ∞, so that for some large N we have P(|X| > N) ≤ 1/4. Now let's define a set of good points:

G = {ω : |X(ω)| ≤ N} ∩ {ω : |X(ω) − E(X_n)| ≤ 2M}.

If G is not empty, then for ω_g ∈ G we have |E(X_n)| ≤ 2M + |X(ω_g)| ≤ 2M + N. Our bounds show us that P(G^c) = P((|X| > N) ∪ (|X − E(X_n)| > 2M)) ≤ 1/4 + 1/4 = 1/2, so that G is non-empty, for all n ≥ N'. In other words, |E(X_n)| ≤ 2M + N for n ≥ N', which implies that E(X_n) is bounded.

Now we have that E(X_n²) = Var (X_n) + E(X_n)² is a bounded sequence. Applying Fatou's lemma to the non-negative random variables X_n², we conclude that X² is integrable and E(X²) ≤ lim inf_n E(X_n²). From problem 4, this also shows that X is integrable since E(|X|) ≤ E(X²)^{1/2} < ∞.

For any random variable Y and constant c > 0, let's define the truncated random variable Y^c = Y 1_{−c ≤ Y ≤ c}. For any c > 0, we have

|E(X_n) − E(X)| ≤ |E(X_n) − E(X_n^c)| + |E(X_n^c) − E(X^c)| + |E(X^c) − E(X)|

 ≤ |E(X_n 1_{|X_n| > c})| + |E(X_n^c) − E(X^c)| + |E(X 1_{|X| > c})|

 ≤ E(X_n²/c) + |E(X_n^c) − E(X^c)| + E(X²/c)

 ≤ sup_n E(X_n²)/c + |E(X_n^c) − E(X^c)| + E(X²)/c.

Now for every c, the sequence X_n^c is dominated by the integrable random variable c 1_Ω and converges pointwise to X^c. Therefore the dominated convergence theorem tells us E(X_n^c) → E(X^c). Letting n → ∞ and then c → ∞ in the above inequality shows that, in fact, E(X_n) → E(X).

Finally, we may apply Fatou's lemma to the sequence (X_n − E(X_n))² to obtain

Var (X) = E[(X − E(X))²] = E[lim inf_n (X_n − E(X_n))²] ≤ lim inf_n E[(X_n − E(X_n))²] = lim inf_n Var (X_n).

Whew!

STAT 571 Assignment 3 solutions

11. (Exercise 3.3.2, page 98) Let P be a finitely additive probability on a Boolean algebra U. Show that the following are equivalent.

(1) P is σ-additive on U.

(2) (A_n)_{n≥1} ⊂ U, A_n ⊃ A_{n+1}, and ∩_{n=1}^∞ A_n = A imply that P(A_n) ↓ P(A) if A ∈ U.

(3) (A_n)_{n≥1} ⊂ U, A_n ⊃ A_{n+1}, and ∩_{n=1}^∞ A_n = ∅ imply that P(A_n) ↓ 0.

(4) If (A_n)_{n≥1} ⊂ U, A_n ⊃ A_{n+1}, and for all n ≥ 1, P(A_n) ≥ δ for some δ > 0, then ∩_{n=1}^∞ A_n ≠ ∅.

(5) (A_n)_{n≥1} ⊂ U, A_n ⊂ A_{n+1}, and ∪_{n=1}^∞ A_n = A imply that P(A_n) ↑ P(A) if A ∈ U.

Solution:

(1) ⇒ (2): For every n we can write A_n as the disjoint union of U sets A_n = (A_n \ A_{n+1}) ∪ (A_{n+1} \ A_{n+2}) ∪ ... ∪ A, and use σ-additivity to obtain P(A_n) = P(A_n \ A_{n+1}) + P(A_{n+1} \ A_{n+2}) + ··· + P(A). This shows that P(A_n) − P(A) is the tail of a convergent series, and thus converges to zero as n → ∞.

(2) ⇒ (3): (3) is a special case of (2).

(3) ⇒ (1): Suppose that (3) holds and that E_m ∈ U are disjoint with E = ∪_{m=1}^∞ E_m ∈ U. For each n ∈ N define the U set A_n = E \ (∪_{m=1}^{n−1} E_m) = ∪_{m=n}^∞ E_m. We have A_n ⊃ A_{n+1} and ∩_{n=1}^∞ A_n = ∅, so we know P(A_n) → 0. On the other hand, by finite additivity we have P(E) = P(E_1) + ··· + P(E_{n−1}) + P(A_n), so letting n → ∞ we obtain P(E) = Σ_{m=1}^∞ P(E_m), which says that P is σ-additive.

(2) ⇔ (5): This follows since U is closed under complementation and P is a finitely additive probability, so that P(A^c) = 1 − P(A).

(3) ⇔ (4): These statements are contrapositives of each other.
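The tail sets A_n in the proof of (3) ⇒ (1) can be illustrated with a concrete σ-additive example; the masses P(E_m) = 2^{−m} below are an arbitrary choice:

```python
# Illustration of (3) => (1): disjoint sets E_m with P(E_m) = 2^{-m}.
# The tails A_n = E_n u E_{n+1} u ... satisfy P(A_n) -> 0, while
# P(E_1) + ... + P(E_{n-1}) + P(A_n) = P(E) = 1 for every n.
for n in range(1, 10):
    head = sum(2.0**-m for m in range(1, n))    # P(E_1)+...+P(E_{n-1})
    tail = sum(2.0**-m for m in range(n, 60))   # P(A_n), truncated
    assert abs(head + tail - 1.0) < 1e-12
    assert tail < 2.0**(1 - n) + 1e-15          # P(A_n) -> 0 geometrically
```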

12. (Exercise 3.6.21 parts (1)–(4), page 135) (Integration by parts) Let µ and ν be two probability measures on B(R) with distribution functions F and G. Let µ(dx) and ν(dx) also be denoted by dF(x) and dG(x). Let B = (a, b] × (a, b], and set B^+ = {(x, y) ∈ B | x < y} and B^− = {(x, y) ∈ B | x ≥ y}.

(1) Use Fubini's theorem to express (µ × ν)(B^−) as two distinct integrals.

(2) Let F(x−) = lim_{y↑x} F(y). Show that µ((a, c)) = F(c−) − F(a).

(3) Use (1) and (2) to show that

(µ × ν)(B) = {F(b) − F(a)}{G(b) − G(a)}

 = ∫_{(a,b]} {F(u−) − F(a)} dG(u) + ∫_{(a,b]} {G(u) − G(a)} dF(u).

(4) Deduce that

F(b)G(b) − F(a)G(a) = ∫_{(a,b]} F(u−) dG(u) + ∫_{(a,b]} G(u) dF(u).

Solution: (1) From Fubini’s theorem we can write

(µ × ν)(B^−) = ∫∫ 1_{B^−}(x, y) (µ × ν)(dx, dy)

 = ∫∫ 1_{(a,b]}(x) 1_{(a,b]}(y) 1_{[y,∞)}(x) µ(dx) ν(dy)

 = ∫_{(a,b]} ∫_{[y,b]} µ(dx) ν(dy)

 = ∫_{(a,b]} µ([y, b]) ν(dy).

Noting that 1_{[y,∞)}(x) = 1_{(−∞,x]}(y), we can use a similar argument to obtain (µ × ν)(B^−) = ∫_{(a,b]} ν((a, x]) µ(dx).

(2) µ((a, c)) = lim_n µ((a, c − 1/n]) = lim_n F(c − 1/n) − F(a) = F(c−) − F(a).

(3) We have two different ways to calculate (µ × ν)(B). The first is by definition: (µ × ν)(B) = µ((a, b]) ν((a, b]) = (F(b) − F(a))(G(b) − G(a)). The second is by adding (µ × ν)(B^−) and (µ × ν)(B^+). We already know that

(µ × ν)(B^−) = ∫_{(a,b]} ν((a, x]) µ(dx) = ∫_{(a,b]} (G(x) − G(a)) F(dx),

and a similar argument shows that

(µ × ν)(B^+) = ∫_{(a,b]} µ((a, x)) ν(dx) = ∫_{(a,b]} (F(x−) − F(a)) G(dx).

Therefore we have

(F(b) − F(a))(G(b) − G(a)) = ∫_{(a,b]} (F(x−) − F(a)) G(dx) + ∫_{(a,b]} (G(x) − G(a)) F(dx).

(4) Starting with the equation above and multiplying out where possible gives

F(b)G(b) − F(a)G(b) − F(b)G(a) + F(a)G(a)

 = ∫_{(a,b]} F(x−) G(dx) − F(a)(G(b) − G(a)) + ∫_{(a,b]} G(x) F(dx) − G(a)(F(b) − F(a))

 = ∫_{(a,b]} F(x−) G(dx) + ∫_{(a,b]} G(x) F(dx) − F(a)G(b) + F(a)G(a) − G(a)F(b) + G(a)F(a).

Cancelling like terms on both sides, this reduces to

F(b)G(b) − F(a)G(a) = ∫_{(a,b]} F(x−) G(dx) + ∫_{(a,b]} G(x) F(dx).
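For purely atomic measures the integrals reduce to finite sums, so identity (4) can be checked directly; the atoms and masses below are an arbitrary illustration:

```python
# Numeric check of the integration-by-parts identity (4) for two purely
# atomic probability measures (atom locations and masses are illustrative).
mu = {0.5: 0.5, 1.5: 0.5}       # measure mu: point -> mass
nu = {1.0: 0.6, 1.7: 0.4}       # measure nu
a, b = 0.0, 2.0

F  = lambda x: sum(m for u, m in mu.items() if u <= x)   # F(x)
Fm = lambda x: sum(m for u, m in mu.items() if u <  x)   # F(x-)
G  = lambda x: sum(m for u, m in nu.items() if u <= x)   # G(x)

lhs = F(b) * G(b) - F(a) * G(a)
# dG and dF become sums over the atoms lying in (a, b].
rhs = sum(Fm(u) * m for u, m in nu.items() if a < u <= b) \
    + sum(G(u) * m for u, m in mu.items() if a < u <= b)
assert abs(lhs - rhs) < 1e-12
```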

STAT 571 Assignment 4 solutions

13. Prove that if X_n → X in L^∞, then X_n → X almost surely.

Solution: For every k ∈ N choose n_k so that n ≥ n_k implies ‖X_n − X‖_∞ ≤ 1/k. Then for n ≥ n_k we have P(|X_n − X| ≤ 1/k) = 1, and taking the intersection over such n gives P(sup_{n≥n_k} |X_n − X| ≤ 1/k) = 1. Now we can also take the intersection over k to obtain P(∩_k [sup_{n≥n_k} |X_n − X| ≤ 1/k]) = 1. Since X_n(ω) → X(ω) for every ω ∈ ∩_k [sup_{n≥n_k} |X_n − X| ≤ 1/k], we have X_n → X almost surely.

14. Prove that if X_n → X in probability, then X_n → X weakly.

Solution: Since X_n → X in probability, we have φ(X_n) → φ(X) in probability for any continuous bounded φ. Using the dominated convergence theorem below, we have E(φ(X_n)) → E(φ(X)), which by definition means X_n → X weakly.

15. (Dominated convergence theorem) Prove that if X_n → X in probability and |X_n| ≤ Y ∈ L¹, then E(X_n) → E(X).

Solution: If E(X_n) does not converge to E(X), then there is ε > 0 and a subsequence n_k so that |E(X_{n_k}) − E(X)| ≥ ε for every k. But X_{n_k} → X in probability, so there is a further subsequence X_{n_{k_j}} that converges to X almost surely. By the usual dominated convergence theorem, we have E(X_{n_{k_j}}) → E(X), which contradicts |E(X_{n_k}) − E(X)| ≥ ε. Therefore it is impossible that E(X_n) fails to converge to E(X), so we have E(X_n) → E(X).

16. Prove or disprove the following implication for convergence in L^p, almost surely, and in probability: X_n → X implies (1/N) Σ_{n=1}^N X_n → X.

Solution: (a.s.) It suffices to show that X_n(ω) → X(ω) implies (1/N) Σ_{n=1}^N X_n(ω) → X(ω). Suppose X_n(ω) → X(ω), and pick ε > 0. Let n̄ be so large that sup_{n≥n̄} |X_n(ω) − X(ω)| ≤ ε. Choose N̄ so large that Σ_{n=1}^{n̄} |X_n(ω) − X(ω)| ≤ N̄ε. Then for N ≥ N̄ we get

|(1/N) Σ_{n=1}^N X_n(ω) − X(ω)| ≤ (1/N) Σ_{n=1}^N |X_n(ω) − X(ω)|

 ≤ (1/N) Σ_{n=1}^{n̄} |X_n(ω) − X(ω)| + (1/N) Σ_{n=n̄+1}^N |X_n(ω) − X(ω)|

 ≤ (N̄ε)/N + ε ≤ 2ε,

which proves (1/N) Σ_{n=1}^N X_n(ω) → X(ω).

Solution: (L^p) Pick ε > 0 and let n̄ be so large that sup_{n≥n̄} ‖X_n − X‖_p ≤ ε. Choose N̄ so large that Σ_{n=1}^{n̄} ‖X_n − X‖_p ≤ N̄ε. Then for N ≥ N̄ we get

‖(1/N) Σ_{n=1}^N X_n − X‖_p ≤ (1/N) Σ_{n=1}^N ‖X_n − X‖_p

 ≤ (1/N) Σ_{n=1}^{n̄} ‖X_n − X‖_p + (1/N) Σ_{n=n̄+1}^N ‖X_n − X‖_p

 ≤ (N̄ε)/N + ε ≤ 2ε,

which proves (1/N) Σ_{n=1}^N X_n → X in L^p.

Solution: (in probability) Let X_n be independent random variables with P(X_n = n) = 1/n and P(X_n = 0) = 1 − 1/n. For any ε > 0 we have P(|X_n| > ε) = 1/n → 0, so X_n → 0 in probability. On the other hand, for any N we have

P(sup_{1≤n≤N} X_n ≤ N/2) = Π_{N/2 < n ≤ N} P(X_n = 0) = Π_{N/2 < n ≤ N} (1 − 1/n) = ⌊N/2⌋/N,

so that P(sup_{1≤n≤N} X_n > N/2) ≥ 1/2. On the event {sup_{1≤n≤N} X_n > N/2}, some X_n exceeds N/2 and hence (1/N) Σ_{n=1}^N X_n > 1/2. We conclude that P((1/N) Σ_{n=1}^N X_n > 1/2) ≥ 1/2 for every N, so (1/N) Σ_{n=1}^N X_n does not converge to 0 in probability.
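The product in the in-probability counterexample telescopes, which the following check confirms for several N (a sketch of the bound, with X_n as above):

```python
# prod_{N/2 < n <= N} (1 - 1/n) = prod (n-1)/n telescopes to floor(N/2)/N,
# so the probability that X_n = 0 for all N/2 < n <= N is about 1/2, and
# P(sup_{n <= N} X_n > N/2) >= 1/2 for every N.
for N in [4, 7, 10, 101, 1000]:
    prod = 1.0
    for n in range(N // 2 + 1, N + 1):
        prod *= 1.0 - 1.0 / n
    assert abs(prod - (N // 2) / N) < 1e-12
    assert 1.0 - prod >= 0.5 - 1e-12
```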

17. Prove that if sup_n E(X_n²) < ∞, then (X_n)_{n∈N} is uniformly integrable.

Solution:

sup_n ∫_{{|X_n|>c}} |X_n| dP ≤ sup_n ∫_{{|X_n|>c}} (X_n²/c) dP ≤ sup_n E(X_n²)/c → 0 as c → ∞.
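A concrete family shows how the bound shrinks: take X_n = n on a set of probability 1/n² and 0 elsewhere (an illustrative choice, not from the text), so E(X_n²) = 1 and the uniform tail is at most 1/c:

```python
# X_n = n with probability 1/n^2, else 0: E(X_n^2) = 1 for every n, and
# the tail expectation E[|X_n| ; |X_n| > c] equals 1/n when n > c.
def tail(n, c):
    # E[|X_n| ; |X_n| > c] for this two-valued X_n
    return n * (1.0 / n**2) if n > c else 0.0

for c in [1, 2, 5, 10, 100]:
    worst = max(tail(n, c) for n in range(1, 1000))
    # uniform tail sup_n E[|X_n| ; |X_n| > c] <= sup_n E(X_n^2)/c = 1/c
    assert worst <= 1.0 / c + 1e-12
```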

STAT 571 Assignment 5 solutions

18. Prove that if X ≥ 0, then E[X|G] ≥ 0.

Solution: Let G = {ω : E[X|G](ω) < 0}. Then E[X|G] 1_G ≤ 0 and X 1_G ≥ 0, but G ∈ G, so 0 ≤ ∫_G X dP = ∫_G E[X|G] dP ≤ 0. A non-positive random variable with a zero integral must be zero: thus E[X|G] 1_G = 0, and we conclude that 1_G = 0, that is, P(G) = 0.

19. (Dominated convergence theorem) Prove that if X_n → X in probability and |X_n| ≤ Y ∈ L¹, then E[X_n|G] → E[X|G] in probability.

Solution: Since |X_n − X| → 0 in probability and |X_n − X| ≤ 2Y, dominated convergence tells us that E(|X_n − X|) → 0 as n → ∞. Since |E[X_n|G] − E[X|G]| = |E[X_n − X | G]| ≤ E[|X_n − X| | G], taking expectations shows that E[X_n|G] → E[X|G] in L¹, and hence also in probability.

20. If X, Y ∈ L², show that E[X E[Y|G]] = E[Y E[X|G]].

Solution: Integrate over the equation E[X E[Y|G] | G] = E[Y|G] E[X|G] = E[Y E[X|G] | G].

21. True or false: If X and Y are independent, then E[X|G] and E[Y|G] are independent for any G.

Solution: This is false. Let X, Y be independent and identically distributed with non-zero variance, and set G = σ(X + Y). Then E[X + Y|G] = X + Y, so by symmetry E[X|G] = E[Y|G] = (X + Y)/2. That is, E[X|G] and E[Y|G] are equal!

22. If (X_n)_{n∈N} is a submartingale, then so is (Y_n)_{n∈N}, where Y_n = (X_n − a)^+ and a is any constant.

Solution: Since Y_{n+1} ≥ X_{n+1} − a, we have E[Y_{n+1}|G_n] ≥ E[X_{n+1} − a|G_n], which is greater than or equal to X_n − a since (X_n)_{n∈N} is a submartingale. Also, Y_{n+1} ≥ 0 implies E[Y_{n+1}|G_n] ≥ 0, so combining the two inequalities we get

E[Y_{n+1}|G_n] ≥ max{0, X_n − a} = (X_n − a)^+ = Y_n,

which shows that (Y_n)_{n∈N} is a submartingale.
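The inequality E[(X_{n+1} − a)^+ | G_n] ≥ (X_n − a)^+ can be spot-checked on a toy martingale step; below, X_{n+1} = s ± 1 with probability 1/2 each, given X_n = s (an illustrative choice, not from the text):

```python
# One step of a symmetric +-1 walk: E[X_{n+1} | X_n = s] = s, so the walk
# is a (sub)martingale, and E[(X_{n+1} - a)^+ | X_n = s] >= (s - a)^+
# by convexity of t -> max(t, 0).
pos = lambda t: max(t, 0.0)

for a in (-2.0, 0.0, 0.5, 3.0):
    for s in [x / 4 for x in range(-20, 21)]:
        cond_exp = 0.5 * pos(s + 1 - a) + 0.5 * pos(s - 1 - a)
        assert cond_exp >= pos(s - a) - 1e-12
```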

STAT 571 Term exam I solutions.

1. Determine the σ-algebra F of P∗-measurable subsets of R for the probability measure whose distribution function is

F(x) = 1 if x ≥ 0, and F(x) = 0 if x < 0.

Solution: Before we try to determine F, let's find out as much as we can about P and P∗. Define A_n = (−1/n, 0] so that A_{n+1} ⊆ A_n for every n and ∩_n A_n = {0}. This implies that P(A_n) → P({0}). Now P(A_n) = F(0) − F(−1/n) = 1 for every n, and so we conclude that P({0}) = 1, and also P(R \ {0}) = 0.

For any subset E ⊆ R with 0 ∈ E we have P∗(E) ≥ P∗({0}) = P({0}) = 1. On the other hand, if 0 ∉ E, then E ⊆ R \ {0} and so P∗(E) ≤ P∗(R \ {0}) = P(R \ {0}) = 0. Therefore we have

P∗(E) = 1 if 0 ∈ E, and P∗(E) = 0 if 0 ∉ E.

Let E and Q be arbitrary subsets of R. If 0 ∈ E, then 0 is contained in exactly one of the sets E ∩ Q and E ∩ Q^c. Therefore

P∗(E ∩ Q) + P∗(E ∩ Q^c) = 1 = P∗(E).

If 0 ∉ E, then 0 belongs to neither E ∩ Q nor E ∩ Q^c, so that

P∗(E ∩ Q) + P∗(E ∩ Q^c) = 0 = P∗(E).

Thus any subset Q is a "good splitter", and so F contains all subsets of R.
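Since P∗ here is the Dirac outer measure at 0, the Carathéodory splitting condition can be checked mechanically on a few illustrative finite sets:

```python
# P* for this distribution: P*(E) = 1 if 0 is in E, else 0 (Dirac at 0).
def P_star(E):
    return 1 if 0 in E else 0

# Caratheodory condition: every Q splits every E additively.
sets = [set(), {0}, {1, 2}, {0, 3}, {-1, 0, 1}, {5}]
for E in sets:
    for Q in sets:
        assert P_star(E) == P_star(E & Q) + P_star(E - Q)
```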

2. If P is a probability measure on (R, B(R)) and F its distribution function, show that F is continuous at x if and only if P({x}) = 0.

Solution: Since F is non-decreasing, it has a left limit at x given by

F(x−) = lim_n F(x − 1/n) = lim_n P((−∞, x − 1/n]) = P((−∞, x)).

Subtracting gives

F(x) − F(x−) = P((−∞, x]) − P((−∞, x)) = P({x}),

which shows that P({x}) = 0 if and only if F(x) = F(x−). This is the same as continuity at x, since F is right-continuous.

3. Show that X is a random variable on (Ω, F, P) if {ω | X(ω) > λ} ∈ F for all λ ∈ R.

Solution: If {X > λ} ∈ F for all λ ∈ R, then {X ≥ λ} = ∩_n {X > λ − 1/n} ∈ F. Therefore {X < λ} = {X ≥ λ}^c ∈ F, which shows that X is a random variable.

4. Let P be a probability measure on (R, B(R)). If Q ⊂ R and A, B are Borel sets so that A ⊂ Q ⊂ B and P(B \ A) = 0, then Q is P∗-measurable.

Solution: Since every Borel set is P∗-measurable, and since Q = A ∪ (Q \ A), it suffices to show that Q \ A is P∗-measurable. Let's see how well Q \ A splits E. For any E ⊆ R we have

P∗(E ∩ (Q \ A)) ≤ P∗(Q \ A) ≤ P∗(B \ A) = P(B \ A) = 0.

Consequently we obtain

P∗(E) ≥ P∗(E ∩ (Q \ A)^c) = P∗(E ∩ (Q \ A)^c) + P∗(E ∩ (Q \ A)).

Subadditivity of P∗ gives the reverse inequality, and shows that Q \ A is P∗-measurable.

5. Give an example of a probability space and integrable random variables X_n so that X_n(ω) → 0 for every ω ∈ Ω, but E[X_n] does not converge to 0 as n → ∞.

Solution: Let Ω = N, F = all subsets of Ω, and P({n}) = 2^{−n} for n ≥ 1. Define random variables by X_n = 2^n 1_{{n}}. For fixed ω, we have X_n(ω) = 0 for all n > ω, so X_n(ω) → 0. On the other hand, E(X_n) = 2^n P({n}) = 2^n 2^{−n} = 1 for all n.
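A quick numeric check of this example (truncating the expectation sum at ω = 50, which is harmless here):

```python
# On Omega = N with P({n}) = 2^{-n}: X_n = 2^n 1_{{n}} vanishes pointwise,
# yet E(X_n) = 2^n * 2^{-n} = 1 for every n.
def X(n, omega):
    return 2**n if omega == n else 0

# Pointwise convergence: X_n(omega) = 0 once n > omega.
assert all(X(n, w) == 0 for w in range(1, 6) for n in range(w + 1, 20))

# Expectations: only omega = n contributes to the sum.
for n in range(1, 10):
    EXn = sum(X(n, w) * 2.0**-w for w in range(1, 50))
    assert EXn == 1.0
```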

STAT 571 Term exam II solutions

1. Give an example where X_n(ω) → X(ω) for every ω ∈ Ω, but E(X) < lim inf_n E(X_n).

Solution: Let Ω = N, F = all subsets of Ω, and P({n}) = 2^{−n} for n ≥ 1. Define random variables by X_n = 2^n 1_{{n}} and X = 0. For fixed ω, we have X_n(ω) = 0 for all n > ω, so X_n(ω) → 0 = X(ω). On the other hand, E(X_n) = 2^n P({n}) = 2^n 2^{−n} = 1 for all n, so E(X) = 0 < 1 = lim inf_n E(X_n).

2. Show that if E[(X − a)²] < ∞ for some a ∈ R, then E[(X − b)²] < ∞ for all b ∈ R.

Solution: If E[(X − a)²] < ∞, then the result follows for any b ∈ R by integrating the inequality (X − b)² ≤ 2(X − a)² + 2(b − a)².

3. Prove that ∫ X dP = sup{∫ s dP | 0 ≤ s ≤ X, s simple} for non-negative X.

Solution: If s ≤ X, then ∫ s dP ≤ ∫ X dP, and so sup{∫ s dP | 0 ≤ s ≤ X, s simple} ≤ ∫ X dP. On the other hand, ∫ X dP is defined as lim_k ∫ s_k dP, where s_k is a sequence of simple functions that increases to X. Therefore

∫ X dP = lim_k ∫ s_k dP ≤ sup{∫ s dP | 0 ≤ s ≤ X, s simple}.

Combining the two inequalities gives the result.
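The standard increasing approximation s_k = min(⌊2^k X⌋/2^k, k) can be watched converging for a concrete choice, say X(ω) = ω² on ((0, 1], Lebesgue), where ∫ X dP = 1/3 (an illustrative example, not from the text):

```python
# s_k = min(floor(2^k X)/2^k, k) is simple, increases to X, and its
# integrals increase to the integral of X.  For X(w) = w^2 on (0,1],
# P(s_k = j/2^k) is the length of {j/2^k <= w^2 < (j+1)/2^k}.
import math

def integral_sk(k):
    total = 0.0
    for j in range(k * 2**k):   # possible values j/2^k below the cap k
        p = math.sqrt(min((j + 1) / 2**k, 1.0)) - math.sqrt(min(j / 2**k, 1.0))
        total += (j / 2**k) * p
    return total

vals = [integral_sk(k) for k in (2, 4, 8, 12)]
assert all(x <= y + 1e-15 for x, y in zip(vals, vals[1:]))  # increasing
assert abs(vals[-1] - 1/3) < 1e-3                           # -> 1/3
```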

4. Let P, Q be probabilities on (R, B(R)), where P has the density function f. Prove that h(z) = ∫_R f(z − x) Q(dx) is the density of the convolution P ⋆ Q.

Solution: For any Borel set B, we have by the definition of convolution and Fubini's theorem

(P ⋆ Q)(B) = ∫_{R²} 1_B(x + y) (P × Q)(dx, dy)

 = ∫_R ∫_R 1_B(x + y) P(dx) Q(dy).

Since P has density f and using the change of variables z = x + y, this is

∫_R ∫_R 1_B(x + y) f(x) dx Q(dy) = ∫_R ∫_R 1_B(z) f(z − y) dz Q(dy)

 = ∫_B ∫_R f(z − y) Q(dy) dz,

which gives the result.

5. Prove that Var (X) = (1/2) ∫_{R²} (x − y)² (P × P)(dx, dy), where P is the distribution of X.

Solution: First we use Fubini's theorem to rewrite the integral on R² as an iterated integral:

∫_{R²} (x − y)² (P × P)(dx, dy) = ∫_R ∫_R (x − y)² P(dx) P(dy) = ∫_R E[(X − y)²] P(dy).

This is true whether or not the value is finite. Now if E[(X − y)²] = ∞ for all y ∈ R, then both Var (X) = E[(X − E(X))²] and ∫_R E[(X − y)²] P(dy) are infinite. Let's suppose that E[(X − y)²] < ∞ for some y ∈ R, and hence, by problem 2, for all y ∈ R. Then expanding the square is justified and we obtain

∫_R E[(X − y)²] P(dy) = ∫_R E[X² − 2yX + y²] P(dy)

 = ∫_R (E[X²] − 2yE[X] + y²) P(dy)

 = E[X²] ∫_R P(dy) − 2E[X] ∫_R y P(dy) + ∫_R y² P(dy)

 = E[X²] − 2E[X]E[X] + E[X²]

 = 2 Var (X).

STAT 571 Final Exam, April 22, 1999

Instructions: This is an open book exam. You can use your text, your notes, or any other book you care to bring. You have three hours.

1. If you roll a fair die, how long on average before the pattern “ ...... ” appears?

2. Let Ω = (0, 1], F = B((0, 1]), and P be Lebesgue measure on (0, 1]. Define the sub-σ-algebra G = σ({(0, 1/4], (1/4, 1/2], (1/2, 1]}) and the random variable X(ω) = ω². Write out an explicit formula for E(X|G)(ω).

3. Prove that E(X | X²) = X², where X has density function

f(x) = (1 + x)/2 if −1 ≤ x ≤ 1, and f(x) = 0 otherwise.

4. For X ∈ L², prove that the random variable E(X|G) has smaller variance than X.

5. Without using any mathematical notation, explain in grammatical English the meaning of a stopping time. Why are they defined in this way, and why do we only consider random times with this special property?

6. Let T be a stopping time and define F_T = {A ∈ F : A ∩ {T ≤ n} ∈ F_n for all n}. Prove that F_T is a σ-algebra.

7. Let S be a stopping time. Prove that T(ω) = inf{n > S(ω) : X_n(ω) ≤ 3} is an (F_n)-stopping time, where (X_n)_{n∈N} is adapted to the filtration (F_n)_{n∈N}.

8. For X ∈ L¹, prove that the collection {E(X|G) : G is a sub-σ-algebra of F} is uniformly integrable.

This is hard and undeserved measure, my lord. – Parolles Act 2, Scene 3: All’s Well That Ends Well