
Lecture 7: Interchange of integration and limit
UW-Madison (Statistics), Stat 609, 2015

Differentiating under an integral sign

To study the properties of a chf, we need some technical results. When can we switch differentiation and integration? If the range of the integral is finite, this switch is usually valid.

Theorem 2.4.1 (Leibniz's rule)
If $f(x;\theta)$, $a(\theta)$, and $b(\theta)$ are differentiable with respect to $\theta$, then
\[ \frac{d}{d\theta}\int_{a(\theta)}^{b(\theta)} f(x;\theta)\,dx = f(b(\theta);\theta)\,b'(\theta) - f(a(\theta);\theta)\,a'(\theta) + \int_{a(\theta)}^{b(\theta)} \frac{\partial f(x;\theta)}{\partial\theta}\,dx. \]
In particular, if $a(\theta)=a$ and $b(\theta)=b$ are constants, then
\[ \frac{d}{d\theta}\int_a^b f(x;\theta)\,dx = \int_a^b \frac{\partial f(x;\theta)}{\partial\theta}\,dx. \]
If the range of integration is infinite, problems can arise: whether differentiation and integration can be switched is the same as whether limit and integration can be interchanged.

Interchange of integration and limit

Note that
\[ \int_{-\infty}^{\infty} \frac{\partial f(x;\theta)}{\partial\theta}\,dx = \int_{-\infty}^{\infty} \lim_{\delta\to 0} \frac{f(x;\theta+\delta)-f(x;\theta)}{\delta}\,dx. \]
Hence, asking whether differentiation and integration can be interchanged is asking whether this equals
\[ \frac{d}{d\theta}\int_{-\infty}^{\infty} f(x;\theta)\,dx = \lim_{\delta\to 0} \int_{-\infty}^{\infty} \frac{f(x;\theta+\delta)-f(x;\theta)}{\delta}\,dx. \]

An example of an invalid interchange of integration and limit
Consider
\[ f(x;\delta) = \begin{cases} \delta^{-1} & 0 < x < \delta \\ 0 & \text{otherwise.} \end{cases} \]
Then $\lim_{\delta\to 0} f(x;\delta) = 0$ for every $x$ and, hence,
\[ 0 = \int_{-\infty}^{\infty} \lim_{\delta\to 0} f(x;\delta)\,dx \ne \lim_{\delta\to 0} \int_{-\infty}^{\infty} f(x;\delta)\,dx = \lim_{\delta\to 0} \int_0^{\delta} \delta^{-1}\,dx = 1. \]

We now give some sufficient conditions under which interchanging integration and limit (or differentiation) is valid. The proof of the main result is technical and beyond the scope of this course.

Theorem 2.4.2 (Lebesgue's dominated convergence theorem)
Suppose that the function $h(x;y)$ is continuous at $y_0$ for each $x$, and there exists a function $g(x)$ satisfying
(i) $|h(x;y)| \le g(x)$ for all $x$ and all $y$ in a neighborhood of $y_0$;
(ii) $\int_{-\infty}^{\infty} g(x)\,dx < \infty$.
Then,
\[ \lim_{y\to y_0} \int_{-\infty}^{\infty} h(x;y)\,dx = \int_{-\infty}^{\infty} \lim_{y\to y_0} h(x;y)\,dx. \]
The key condition in this theorem is the existence of a dominating function $g(x)$ with a finite integral. Applying Theorem 2.4.2 to the difference quotient $[f(x;\theta+\delta) - f(x;\theta)]/\delta$, we obtain the next result.

Theorem 2.4.3
Suppose that $f(x;\theta)$ is differentiable at $\theta = \theta_0$, i.e.,
\[ \lim_{\delta\to 0} \frac{f(x;\theta_0+\delta)-f(x;\theta_0)}{\delta} = \left.\frac{\partial f(x;\theta)}{\partial\theta}\right|_{\theta=\theta_0} \]
exists for every $x$, and there exist a function $g(x;\theta_0)$ and a constant $\delta_0 > 0$ such that
(i) $\left| \frac{f(x;\theta_0+\delta)-f(x;\theta_0)}{\delta} \right| \le g(x;\theta_0)$ for all $x$ and $|\delta| \le \delta_0$;
(ii) $\int_{-\infty}^{\infty} g(x;\theta_0)\,dx < \infty$.
Then
\[ \left. \frac{d}{d\theta}\int_{-\infty}^{\infty} f(x;\theta)\,dx \right|_{\theta=\theta_0} = \int_{-\infty}^{\infty} \left. \frac{\partial f(x;\theta)}{\partial\theta} \right|_{\theta=\theta_0} dx. \]

This result is for $\theta$ at a particular value, $\theta_0$. Typically $f(x;\theta)$ is differentiable at all $\theta$, and by the mean value theorem
\[ \frac{f(x;\theta_0+\delta)-f(x;\theta_0)}{\delta} = \left. \frac{\partial f(x;\theta)}{\partial\theta} \right|_{\theta=\theta_0+\delta^*(x)} \]
for some $\delta^*(x)$ with $|\delta^*(x)| \le \delta_0$, which leads to the next result.

Corollary 2.4.4
Suppose that $f(x;\theta)$ is differentiable in $\theta$ and there exists a function $g(x;\theta)$ such that
(i) $\left| \frac{\partial f(x;\vartheta)}{\partial\vartheta} \right| \le g(x;\theta)$ for all $x$ and all $\vartheta$ with $|\vartheta - \theta| \le \delta_0$;
(ii) $\int_{-\infty}^{\infty} g(x;\theta)\,dx < \infty$ for all $\theta$.
Then
\[ \frac{d}{d\theta}\int_{-\infty}^{\infty} f(x;\theta)\,dx = \int_{-\infty}^{\infty} \frac{\partial f(x;\theta)}{\partial\theta}\,dx. \]

Example 2.4.5
Let $X$ have the exponential pdf
\[ f(x) = \lambda^{-1} e^{-x/\lambda}, \qquad x > 0, \]
where $\lambda > 0$ is a constant. We want to prove the recursion relation
\[ E(X^{n+1}) = \lambda E(X^n) + \lambda^2 \frac{d}{d\lambda} E(X^n), \qquad n = 1, 2, \dots \]
Note that both $E(X^n)$ and $E(X^{n+1})$ are functions of $\lambda$.

If we could move the differentiation inside the integral, we would have
\[ \frac{d}{d\lambda} E(X^n) = \frac{d}{d\lambda} \int_0^{\infty} \frac{x^n}{\lambda}\, e^{-x/\lambda}\,dx = \int_0^{\infty} \frac{\partial}{\partial\lambda} \frac{x^n}{\lambda}\, e^{-x/\lambda}\,dx = \int_0^{\infty} \frac{x^n}{\lambda} \left( \frac{x}{\lambda^2} - \frac{1}{\lambda} \right) e^{-x/\lambda}\,dx = \frac{1}{\lambda^2} E(X^{n+1}) - \frac{1}{\lambda} E(X^n), \]
which is the result we want to show.
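As a quick numeric sketch (not part of the original slides), the closed form $E(X^n) = n!\,\lambda^n$ for this exponential distribution lets us check the recursion, with a central finite difference standing in for the derivative in $\lambda$:

```python
# Numeric sketch (not from the slides): verify
#   E(X^{n+1}) = lam*E(X^n) + lam^2 * d/dlam E(X^n)
# for the exponential pdf f(x) = (1/lam) e^{-x/lam}, using the
# closed form E(X^n) = n! * lam^n and a central-difference derivative.
import math

def EX(n, lam):
    """E(X^n) for the exponential distribution with mean lam."""
    return math.factorial(n) * lam**n

lam, h = 1.7, 1e-6
for n in range(1, 6):
    dEX = (EX(n, lam + h) - EX(n, lam - h)) / (2 * h)  # approximates d/dlam E(X^n)
    rhs = lam * EX(n, lam) + lam**2 * dEX
    assert abs(rhs - EX(n + 1, lam)) < 1e-4 * EX(n + 1, lam)
```

The closed form itself follows by repeated integration by parts; the finite difference plays the role of the derivative that Corollary 2.4.4 lets us move inside the integral.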
To justify the interchange of integration and differentiation, fix $0 < \delta_0 < \lambda$ and take
\[ g(x;\lambda) = \frac{x^n e^{-x/(\lambda+\delta_0)}}{(\lambda-\delta_0)^2} \left( \frac{x}{\lambda-\delta_0} + 1 \right). \]
Then, for all $x > 0$ and all $\vartheta$ with $|\vartheta - \lambda| \le \delta_0$,
\[ \left| \frac{\partial}{\partial\vartheta} \frac{x^n e^{-x/\vartheta}}{\vartheta} \right| = \frac{x^n e^{-x/\vartheta}}{\vartheta^2} \left| \frac{x}{\vartheta} - 1 \right| \le \frac{x^n e^{-x/\vartheta}}{\vartheta^2} \left( \frac{x}{\vartheta} + 1 \right) \le g(x;\lambda), \]
and we can apply Corollary 2.4.4.

In the proof of Theorem 2.3.7 (differentiating the mgf to obtain moments), we interchanged differentiation and integration without justification. We now provide a proof.

Completion of the proof of Theorem 2.3.7
Assuming that $M_X(t)$ exists for $t \in [-h, h]$, $h > 0$, we want to show
\[ \frac{d}{dt} M_X(t) = \frac{d}{dt} \int_{-\infty}^{\infty} e^{tx} f_X(x)\,dx = \int_{-\infty}^{\infty} \frac{\partial}{\partial t}\, e^{tx} f_X(x)\,dx. \]
It is enough to show (why?)
\[ \frac{d}{dt} \int_0^{\infty} e^{tx} f_X(x)\,dx = \int_0^{\infty} \frac{\partial}{\partial t}\, e^{tx} f_X(x)\,dx. \]
For $t \in (-h/2, h/2)$ and $x > 0$,
\[ \left| \frac{\partial}{\partial t}\, e^{tx} f_X(x) \right| = x e^{tx} f_X(x) \le x e^{hx/2} f_X(x) = g(x). \]
For sufficiently large $x > 0$, $x \le e^{hx/2}$, so $g(x) \le e^{hx} f_X(x)$ there; since
\[ \int_0^{\infty} e^{hx} f_X(x)\,dx \le M_X(h) < \infty \]
and $g$ is clearly integrable over any bounded interval, $\int_0^{\infty} g(x)\,dx < \infty$. An application of Corollary 2.4.4 proves the result.

We now complete the proof of Theorem C1 and establish another useful result for inverting a characteristic function.

Completion of the proof of Theorem C1
We give a proof for the case where $X$ has pdf $f_X(x)$. If we can exchange differentiation and integration, then
\[ \frac{d^r \phi_X(t)}{dt^r} = \frac{d^r}{dt^r} \int_{-\infty}^{\infty} e^{itx} f_X(x)\,dx = \int_{-\infty}^{\infty} \frac{d^r}{dt^r}\, e^{itx} f_X(x)\,dx = i^r \int_{-\infty}^{\infty} x^r e^{itx} f_X(x)\,dx, \]
and the result follows by setting $t = 0$ at the beginning and the end of the above expression. The exchange is justified by the dominated convergence theorem, since
\[ \left| \frac{d^r}{dt^r}\, e^{itx} f_X(x) \right| = \left| i^r x^r e^{itx} f_X(x) \right| \le |x|^r f_X(x) \]
and
\[ E|X|^r = \int_{-\infty}^{\infty} |x|^r f_X(x)\,dx < \infty. \]

Theorem C7
Let $\phi(t)$ be a chf satisfying $\int_{-\infty}^{\infty} |\phi(t)|\,dt < \infty$. Then the corresponding cdf $F$ has derivative
\[ F'(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx} \phi(t)\,dt. \]

Proof
Let $x - \delta$ and $x + \delta$ be continuity points of $F$.
By Theorem C2,
\[ F(x+\delta) - F(x-\delta) = \lim_{T\to\infty} \int_{-T}^{T} \frac{e^{-it(x-\delta)} - e^{-it(x+\delta)}}{2\pi i t}\, \phi(t)\,dt = \lim_{T\to\infty} \int_{-T}^{T} \frac{\sin(t\delta)\, e^{-itx}}{\pi t}\, \phi(t)\,dt = \int_{-\infty}^{\infty} \frac{\sin(t\delta)\, e^{-itx}}{\pi t}\, \phi(t)\,dt, \]
where the last equality follows from the fact that
\[ \left| \frac{\sin(t\delta)\, e^{-itx}}{\pi t \delta}\, \phi(t) \right| \le \frac{|\phi(t)|}{\pi}, \]
which is integrable. From the same bound we can apply the dominated convergence theorem to interchange the integration and limit in the following computation and finish the proof:
\[ F'(x) = \lim_{\delta\to 0} \frac{F(x+\delta) - F(x-\delta)}{2\delta} = \lim_{\delta\to 0} \int_{-\infty}^{\infty} \frac{\sin(t\delta)\, e^{-itx}}{2\pi t \delta}\, \phi(t)\,dt = \int_{-\infty}^{\infty} \lim_{\delta\to 0} \frac{\sin(t\delta)\, e^{-itx}}{2\pi t \delta}\, \phi(t)\,dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx} \phi(t)\,dt, \]
since $\sin(t\delta)/(t\delta) \to 1$ as $\delta \to 0$.

Example
We now apply Theorem C7 to obtain the pdf corresponding to the chf $\phi(t) = e^{-|t|}$:
\[ F'(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx} e^{-|t|}\,dt = \frac{1}{\pi} \int_0^{\infty} \cos(tx)\, e^{-t}\,dt. \]
Since $\int e^{at} \cos(bt)\,dt = e^{at} [a\cos(bt) + b\sin(bt)]/(a^2+b^2)$,
\[ F'(x) = \frac{1}{\pi} \left. \frac{e^{-t} [-\cos(xt) + x\sin(xt)]}{1+x^2} \right|_{t=0}^{t=\infty} = \frac{1}{\pi(1+x^2)}. \]

We now turn to the question of when we can interchange differentiation and infinite summation.

Theorem 2.4.8
Suppose that the series $\sum_{x=0}^{\infty} h(\theta;x)$ converges for all $\theta$ in an interval $(a,b) \subset \mathbb{R}$ and
(i) $\frac{\partial}{\partial\theta} h(\theta;x)$ is continuous in $\theta$ for each $x$;
(ii) $\sum_{x=0}^{\infty} \frac{\partial}{\partial\theta} h(\theta;x)$ converges uniformly on every closed bounded subinterval of $(a,b)$.
Then
\[ \frac{d}{d\theta} \sum_{x=0}^{\infty} h(\theta;x) = \sum_{x=0}^{\infty} \frac{\partial}{\partial\theta} h(\theta;x). \]
This theorem is from calculus (or mathematical analysis) and is not easy to apply, because we need to show the uniform convergence of an infinite series. It is more convenient to apply the following dominated convergence theorem for infinite sums.
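As an illustration of Theorem 2.4.8 (a numeric sketch, not part of the original slides), take $h(\theta;x) = \theta^x$ for $\theta \in (0,1)$: the series sums to $1/(1-\theta)$, and term-by-term differentiation gives $\sum_{x=0}^{\infty} x\,\theta^{x-1} = 1/(1-\theta)^2$, which a truncated sum confirms:

```python
# Numeric sketch (not from the slides) of Theorem 2.4.8 with
# h(theta, x) = theta**x on (0, 1):
#   sum_{x>=0} theta**x       = 1/(1 - theta)
#   sum_{x>=0} x*theta**(x-1) = 1/(1 - theta)**2   (term-by-term derivative)
theta, N = 0.6, 200  # N-term truncation; the tail is negligible for theta < 1

series = sum(theta**x for x in range(N))
series_deriv = sum(x * theta**(x - 1) for x in range(N))

assert abs(series - 1.0 / (1.0 - theta)) < 1e-10
assert abs(series_deriv - 1.0 / (1.0 - theta)**2) < 1e-10
```

Here the derivative series converges uniformly on every closed subinterval of $(0,1)$, since its terms are dominated by $x c^{x-1}$ for any $c < 1$, which is exactly condition (ii).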