An Introduction to Stochastic Control


F. J. Silva
Dip. di Matematica Guido Castelnuovo
March 2012

Some scattered but useful results

This section is based on [24, Chapter 1]. Let $\Omega$ be a nonempty set, and let $\mathcal{F} \subseteq 2^{\Omega}$. We say that $\mathcal{F}$ is a $\pi$-system if $A, B \in \mathcal{F} \Rightarrow A \cap B \in \mathcal{F}$. We say that $\mathcal{F}$ is a $\lambda$-system if (i) $\Omega \in \mathcal{F}$; (ii) $A, B \in \mathcal{F}$ and $A \subseteq B$ imply $B \setminus A \in \mathcal{F}$; (iii) $A_i \in \mathcal{F}$, $A_i \uparrow A$ imply $A \in \mathcal{F}$.

Lemma 1. [$\sigma$-field of a $\pi$-system, or $\lambda$-$\pi$ lemma] If a $\pi$-system $\mathcal{A}$ is contained in a $\lambda$-system $\mathcal{F}$, then $\sigma(\mathcal{A}) \subseteq \mathcal{F}$.

Proof: See any standard book on measure theory (e.g. [2]).

Example of application [uniqueness of the extension of a measure defined on a $\pi$-system]: Let $P$ and $Q$ be two probability measures on $(\Omega, \mathcal{F})$ which coincide on a $\pi$-system $\mathcal{A}$. Then they coincide on $\sigma(\mathcal{A})$. In fact, it is enough to define $\mathcal{C} := \{C \in \mathcal{F} \;;\; P(C) = Q(C)\}$ and to verify that it is a $\lambda$-system; since $\mathcal{A} \subseteq \mathcal{C}$, Lemma 1 gives $\sigma(\mathcal{A}) \subseteq \mathcal{C}$.

Corollary 1. [Measurable functions w.r.t. the $\sigma$-field of a $\pi$-system] Let $\mathcal{A}$ be a $\pi$-system, and let $\mathcal{H}$ be a linear space of functions from $\Omega$ to $\mathbb{R}$ such that (i) $1 \in \mathcal{H}$; (ii) $I_A \in \mathcal{H}$ for all $A \in \mathcal{A}$; (iii) $\varphi_i \in \mathcal{H}$, $0 \leq \varphi_i \uparrow \varphi$, $\varphi$ bounded $\Rightarrow \varphi \in \mathcal{H}$. Then $\mathcal{H}$ contains all bounded $\sigma(\mathcal{A})$-measurable functions from $\Omega$ to $\mathbb{R}$.

Proof: Let $\varphi$ be bounded and $\sigma(\mathcal{A})$-measurable. Clearly we have that
$$\varphi^n \uparrow \varphi^+, \quad \text{with} \quad \varphi^n(\omega) := \sum_{j \geq 0} j 2^{-n} I_{\{\varphi^+(\omega) \in [j2^{-n}, (j+1)2^{-n})\}}, \qquad (1)$$
with an analogous approximation for $\varphi^-$. Therefore, by (iii), it is enough to show that $\varphi^n \in \mathcal{H}$. But $\varphi^n$ is a finite linear combination of indicators of elements of $\sigma(\mathcal{A})$ (the sum in (1) is finite since $\varphi$ is bounded). Therefore it is natural to consider the set
$$\mathcal{F} := \{A \subseteq \Omega \;;\; I_A \in \mathcal{H}\}$$
and to prove that $\sigma(\mathcal{A}) \subseteq \mathcal{F}$. But this follows from Lemma 1, since $\mathcal{A}$ is a $\pi$-system contained in $\mathcal{F}$, and $\mathcal{F}$ is easily shown to be a $\lambda$-system.

Theorem 1. [Dynkin theorem] Let $(\Omega, \mathcal{F})$ and $(\Omega', \mathcal{F}')$ be two measurable spaces, and let $(U, d)$ be a Polish space. Let $\xi : \Omega \to \Omega'$ and $\varphi : \Omega \to U$ be r.v.'s. Then $\varphi$ is $\sigma(\xi)$-measurable, i.e. $\varphi^{-1}(\mathcal{B}(U)) \subseteq \xi^{-1}(\mathcal{F}')$, iff there exists a measurable $\eta : \Omega' \to U$ such that
$$\varphi(\omega) = \eta(\xi(\omega)) \quad \text{for all } \omega \in \Omega.$$

Proof: Consider the case $U = \mathbb{R}$ (the general case can be obtained using an isomorphism theorem, see [19]) and define the set
$$\mathcal{H} := \{\eta(\xi) \;;\; \eta : \Omega' \to \mathbb{R} \text{ an } \mathcal{F}'\text{-measurable map}\}.$$
We have to show that every bounded $\sigma(\xi)/\mathcal{B}(\mathbb{R})$-measurable map belongs to $\mathcal{H}$ (the unbounded case then follows by composing with a bounded homeomorphism such as $\arctan$). This can be done by checking the assumptions of Corollary 1 with $\mathcal{A} = \sigma(\xi)$, i.e. by proving that $\mathcal{H}$ satisfies (i), (ii) and (iii).

Exercise: Do the details of the above proof.

Lemma 2. [Borel–Cantelli] Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and consider a sequence of events $A_i \in \mathcal{F}$. We have
$$\sum_i \mathbb{P}(A_i) < \infty \;\Rightarrow\; \mathbb{P}\Big(\bigcap_i \bigcup_{j \geq i} A_j\Big) = 0.$$

Proof: Straightforward. Note that $\mathbb{P}(\bigcap_i \bigcup_{j \geq i} A_j) \leq \mathbb{P}(\bigcup_{j \geq i} A_j) \leq \sum_{j \geq i} \mathbb{P}(A_j)$ for all $i$, and use the convergence of the series.

Lemma 3. [Chebyshev inequality] Consider a nonnegative r.v. $X$. Then, for all $p \in (0, \infty)$ and $\varepsilon > 0$, we have
$$\mathbb{P}(X \geq \varepsilon) \leq \frac{\mathbb{E}(X^p)}{\varepsilon^p}.$$

Proof: It suffices to note that
$$\mathbb{P}(X \geq \varepsilon) = \mathbb{P}(X^p \geq \varepsilon^p) = \int_\Omega I_{\{X^p \geq \varepsilon^p\}} \, d\mathbb{P}(\omega) \leq \int_\Omega \frac{X^p}{\varepsilon^p} \, d\mathbb{P}(\omega).$$
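Lemma 3 is easy to test by simulation. The following minimal Python sketch (an illustration added here, not part of the original notes; the choice $X \sim \mathrm{Exp}(1)$, the sample size, and all identifiers are assumptions) compares the empirical tail probability with the empirical bound. Note that the inequality also holds exactly under the empirical measure, so the assertion below never fails.

    # Monte Carlo sanity check of Lemma 3: P(X >= eps) <= E(X^p) / eps^p
    # for a nonnegative r.v. X. Assumed setup: X ~ Exp(1), 10^6 samples.
    import numpy as np

    rng = np.random.default_rng(seed=0)
    X = rng.exponential(scale=1.0, size=10**6)   # nonnegative samples

    for p in (1.0, 2.0):
        for eps in (1.0, 2.0, 4.0):
            tail = (X >= eps).mean()             # empirical P(X >= eps)
            bound = (X**p).mean() / eps**p       # empirical E(X^p) / eps^p
            assert tail <= bound                 # Chebyshev for the empirical measure
            print(f"p={p}, eps={eps}: P(X >= eps) = {tail:.4f} <= {bound:.4f}")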
Conditional expectation

This section is based on [24, Chapter 1]. Consider a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Let $X \in L^1(\Omega)$ and let $\mathcal{G}$ be a sub-$\sigma$-field of $\mathcal{F}$. Define the signed measure $\mu : \mathcal{G} \to \mathbb{R}$ as
$$\mu(A) := \int_A X(\omega) \, d\mathbb{P}(\omega) \quad \text{for all } A \in \mathcal{G}.$$
Since $\mu$ is absolutely continuous w.r.t. the restriction of $\mathbb{P}$ to $\mathcal{G}$, by the Radon–Nikodym theorem there exists a unique ($\mathbb{P}|_{\mathcal{G}}$-a.s.) r.v. $f \in L^1_{\mathcal{G}}(\Omega)$ (i.e. in particular $\mathcal{G}$-measurable) such that
$$\int_A f \, d\mathbb{P} = \int_A X \, d\mathbb{P} \quad \text{for all } A \in \mathcal{G}. \qquad (2)$$
The function $f$ is called the conditional expectation of $X$ given $\mathcal{G}$, and we write $\mathbb{E}(X|\mathcal{G}) := f$.

Fundamental properties of $\mathbb{E}(\cdot|\mathcal{G})$

All the properties below are simple consequences of (2) (do them as an exercise!).

(i) $\mathbb{E}(\cdot|\mathcal{G})$ is a bounded linear operator.
(ii) For a constant $a \in \mathbb{R}$, we have $\mathbb{E}(a|\mathcal{G}) = a$.
(iii) [Monotonicity] For $X, Y \in L^1_{\mathcal{F}}$ with $X \geq Y$, we have $\mathbb{E}(X|\mathcal{G}) \geq \mathbb{E}(Y|\mathcal{G})$.
(iv) [Take out the measurable part] For $X \in L^2_{\mathcal{F}}$ and $Y \in L^2_{\mathcal{G}}$, we have $\mathbb{E}(YX|\mathcal{G}) = Y\,\mathbb{E}(X|\mathcal{G})$.
(v) [Characterization of independence] $X$ is independent of $\mathcal{G}$ iff for every Borel $f$ such that $f(X) \in L^1(\Omega)$ we have $\mathbb{E}(f(X)|\mathcal{G}) = \mathbb{E}(f(X))$.
(vi) [Tower or "projection" property] If $\mathcal{G}_1 \subseteq \mathcal{G}_2 \subseteq \mathcal{F}$, then
$$\mathbb{E}(\mathbb{E}(X|\mathcal{G}_1)|\mathcal{G}_2) = \mathbb{E}(\mathbb{E}(X|\mathcal{G}_2)|\mathcal{G}_1) = \mathbb{E}(X|\mathcal{G}_1).$$
(vii) [Jensen inequality] Let $\varphi$ be convex and such that $\varphi(X) \in L^1(\Omega)$; then
$$\varphi(\mathbb{E}(X|\mathcal{G})) \leq \mathbb{E}(\varphi(X)|\mathcal{G}).$$

[Conditioning one r.v. $X$ w.r.t. another r.v. $\xi$]

Let $X \in L^1_{\mathcal{F}}$ and $\xi : (\Omega, \mathcal{F}) \to (U, \mathcal{B}(U))$. Note that, by the Dynkin theorem, we can always write
$$\mathbb{E}(X|\xi) := \mathbb{E}\big(X|\xi^{-1}(\mathcal{B}(U))\big) = \eta(\xi) \quad \text{for some } \mathcal{B}(U)/\mathcal{B}(\mathbb{R})\text{-measurable function } \eta.$$
Therefore, it is natural to define $\mathbb{E}(X|\xi = x) := \eta(x)$.

Another way to define this is by appealing to the Radon–Nikodym theorem. In fact, let us define on $\mathcal{B}(U)$ the measure
$$\nu(B) := \int_{\xi^{-1}(B)} X(\omega) \, d\mathbb{P}(\omega),$$
which is absolutely continuous with respect to $\mathbb{P}_\xi := \mathbb{P} \circ \xi^{-1}$ (the image measure of $\mathbb{P}$ under $\xi$). Therefore, there exists a Radon–Nikodym derivative $\frac{d\nu}{d\mathbb{P}_\xi}$ (unique $\mathbb{P}_\xi$-a.s.) such that
$$\int_B \frac{d\nu}{d\mathbb{P}_\xi}(x) \, d\mathbb{P}_\xi(x) = \int_{\xi^{-1}(B)} X(\omega) \, d\mathbb{P}(\omega) \quad \text{for all } B \in \mathcal{B}(U).$$
We define
$$\mathbb{E}(X|\xi = x) := \frac{d\nu}{d\mathbb{P}_\xi}(x).$$
It can be checked that
$$\eta(\xi(\omega)) = \mathbb{E}\big(X|\xi^{-1}(\mathcal{B}(U))\big)(\omega) = \frac{d\nu}{d\mathbb{P}_\xi}(\xi(\omega)) \quad \mathbb{P}|_{\xi^{-1}(\mathcal{B}(U))}\text{-a.s.} \qquad (3)$$
Integrating over a set of the form $\xi^{-1}(A)$ and using the definition of $\mathbb{P}_\xi$, we obtain that
$$\eta(x) = \frac{d\nu}{d\mathbb{P}_\xi}(x) \quad \mathbb{P}_\xi\text{-a.s.}$$
Note that, incidentally, this gives another proof of the Dynkin theorem. Let us prove (3). We have
$$\int_{\xi^{-1}(B)} \frac{d\nu}{d\mathbb{P}_\xi}(\xi(\omega)) \, d\mathbb{P}(\omega) = \int_B \frac{d\nu}{d\mathbb{P}_\xi}(x) \, d\mathbb{P}_\xi(x) = \nu(B) = \int_{\xi^{-1}(B)} X(\omega) \, d\mathbb{P}(\omega) = \int_{\xi^{-1}(B)} \mathbb{E}\big(X|\xi^{-1}(\mathcal{B}(U))\big)(\omega) \, d\mathbb{P}(\omega),$$
which yields the result.

Now, we define the conditional probability w.r.t. a $\sigma$-field $\mathcal{G}$ as
$$\mathbb{P}(A|\mathcal{G}) := \mathbb{E}(I_A|\mathcal{G}).$$
Note that for each $B \in \mathcal{F}$ the r.v. $\mathbb{P}(B|\mathcal{G})$ is only defined $\mathbb{P}|_{\mathcal{G}}$-a.s. (the exceptional null set depends on $B$). Thus, it is not guaranteed that we can find some $A \in \mathcal{G}$ with $\mathbb{P}(A) = 1$ such that, for every fixed $\omega \in A$, the map $B \mapsto \mathbb{P}(B|\mathcal{G})(\omega)$ is a measure on $\mathcal{F}$. However, this can be given a sense using the concept of regular conditional probability (see [2] for more on this).

[Characterization of $\mathbb{E}(\cdot|\xi)$ in terms of $\{g(\xi) \;;\; g \text{ bounded continuous}\}$]

Let us now prove that
$$\mathbb{E}(X|\xi) = 0 \quad \text{iff} \quad \mathbb{E}(g(\xi)X) = 0 \text{ for all bounded continuous } g.$$
The "only if" part is direct. To prove the "if" part, first note that $\mathbb{E}(X|\xi) = 0$ iff $\mathbb{E}(\varphi X) = 0$ for all bounded $\sigma(\xi)$-measurable $\varphi$. Now, consider the set
$$\mathcal{H} := \{\varphi : (\Omega, \mathcal{F}) \to (\mathbb{R}, \mathcal{B}(\mathbb{R})) \;;\; \mathbb{E}(\varphi X) = 0\}.$$
We have to show that $\mathcal{H}$ contains the bounded $\sigma(\xi)$-measurable functions. It is clear that $\mathcal{H}$ satisfies assumptions (i) and (iii) of Corollary 1. We only have to construct a $\pi$-system $\mathcal{A}$ such that $I_A \in \mathcal{H}$ for all $A \in \mathcal{A}$ and $\sigma(\mathcal{A}) = \sigma(\xi)$. Let us take
$$\mathcal{A} := \{\xi^{-1}([a, b]) \;;\; a < b\}.$$
If $A = \xi^{-1}([a, b]) \in \mathcal{A}$, we have $\mathbb{E}(I_A X) = \mathbb{E}(I_{[a,b]}(\xi)X)$. Now, take a uniformly bounded sequence of continuous functions $g^n \to I_{[a,b]}$ pointwise (e.g. $g^n(x) := \max(0, 1 - n \, \mathrm{dist}(x, [a, b]))$). Since $\mathbb{E}(g^n(\xi)X) = 0$ for every $n$, passing to the limit (by dominated convergence) we get that $\mathbb{E}(I_A X) = 0$, and so assumption (ii) of Corollary 1 is verified.

Note that the same proof yields that, if $\mathcal{G} = \sigma(\xi_1, \dots, \xi_n)$, then
$$\mathbb{E}(X|\mathcal{G}) = 0 \quad \text{iff} \quad \mathbb{E}(g(\xi_1, \dots, \xi_n)X) = 0 \text{ for all bounded continuous } g : \mathbb{R}^n \to \mathbb{R}.$$
The interesting fact is that the result remains valid when we condition on a countable set of r.v.'s. More precisely, using the above result and the technique of its proof, we obtain the following.

Proposition 1. [Checking conditions only on a finite set of variables] Consider a sequence of variables $\xi_1, \xi_2, \dots$ and define $\mathcal{G} := \sigma(\{\xi_i \;;\; i \in \mathbb{N}\})$. Then
$$\mathbb{E}(X|\mathcal{G}) = 0 \quad \text{iff} \quad \text{for all } n \in \mathbb{N} \text{ and } g \in C_b(\mathbb{R}^n) \text{ we have } \mathbb{E}(g(\xi_1, \dots, \xi_n)X) = 0.$$
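As a numerical illustration of the characterization above, here is a minimal Python sketch (added, not from the notes; the distributions, the closed form $\eta(x) = x^2$, and all identifiers are assumptions). We take $\xi \sim N(0,1)$ and $Y = \xi^2 + (1 + |\xi|)Z$ with $Z \sim N(0,1)$ independent of $\xi$, so that $\mathbb{E}(Y|\xi) = \xi^2$; the residual $X := Y - \xi^2$ then satisfies $\mathbb{E}(X|\xi) = 0$, and $\mathbb{E}(g(\xi)X)$ should vanish, up to Monte Carlo error, for every bounded continuous $g$.

    # Check: E(X|xi) = 0 implies E(g(xi) X) ~ 0 for bounded continuous g.
    # Assumed setup: xi, Z i.i.d. N(0,1); Y = xi^2 + (1 + |xi|) Z, so that
    # eta(x) = E(Y | xi = x) = x^2 and X = Y - eta(xi) has E(X|xi) = 0.
    import numpy as np

    rng = np.random.default_rng(seed=1)
    n = 10**6
    xi = rng.standard_normal(n)
    Z = rng.standard_normal(n)
    Y = xi**2 + (1 + np.abs(xi)) * Z
    X = Y - xi**2                    # X depends on xi, yet E(X|xi) = 0

    # Each average below should be of order n^(-1/2), i.e. about 1e-3.
    for name, g in [("sin", np.sin), ("arctan", np.arctan),
                    ("tanh(5x)", lambda x: np.tanh(5 * x))]:
        print(f"E({name}(xi) X) = {(g(xi) * X).mean():+.5f}")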
Stochastic processes: basic definitions

Good references for this part are [7, 14]. Let $I$ be a nonempty index set and $(\Omega, \mathcal{F}, \mathbb{P})$ a probability space.
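Before the formal definitions, a concrete example may help fix ideas: a stochastic process over $I$ is a family $\{X_t \;;\; t \in I\}$ of r.v.'s on $(\Omega, \mathcal{F}, \mathbb{P})$. The Python sketch below (added for illustration; the random-walk construction and all identifiers are assumptions) simulates a few trajectories of the simplest discrete-time example, a symmetric random walk with $I = \{0, 1, \dots, T\}$.

    # A first example of a stochastic process: the simple symmetric random
    # walk X_0 = 0, X_t = sum of t i.i.d. +/-1 steps. Each row of `paths`
    # is one trajectory t -> X_t(omega) for a fixed omega.
    import numpy as np

    rng = np.random.default_rng(seed=2)
    T, n_paths = 100, 5
    steps = rng.choice([-1, 1], size=(n_paths, T))       # i.i.d. increments
    paths = np.concatenate(
        [np.zeros((n_paths, 1), dtype=int), steps.cumsum(axis=1)], axis=1
    )
    print(paths[:, :10])                                 # first ten values per path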
