Solution to Selected Problems

Chapter 1. Preliminaries

1. $\forall A \in \mathcal{F}_S$, $\forall t \ge 0$, $A \cap \{T \le t\} = (A \cap \{S \le t\}) \cap \{T \le t\}$, since $\{T \le t\} \subset \{S \le t\}$. Since $A \cap \{S \le t\} \in \mathcal{F}_t$ and $\{T \le t\} \in \mathcal{F}_t$, we get $A \cap \{T \le t\} \in \mathcal{F}_t$. Thus $\mathcal{F}_S \subset \mathcal{F}_T$.

2. Let $\Omega = \mathbb{N}$ and let $\mathcal{F} = \mathcal{P}(\mathbb{N})$ be the power set of the natural numbers. Let $\mathcal{F}_n = \sigma(\{2\}, \{3\}, \dots, \{n+1\})$ for all $n$. Then $(\mathcal{F}_n)_{n \ge 1}$ is a filtration. Let $S = 3 \cdot 1_{\{3\}}$ and $T = 4$. Then $S \le T$ and
$$\{\omega : S(\omega) = n\} = \begin{cases} \{3\} & \text{if } n = 3 \\ \emptyset & \text{otherwise} \end{cases} \qquad \{\omega : T(\omega) = n\} = \begin{cases} \Omega & \text{if } n = 4 \\ \emptyset & \text{otherwise} \end{cases}$$
Hence $\{S = n\} \in \mathcal{F}_n$ and $\{T = n\} \in \mathcal{F}_n$ for all $n$, so $S$ and $T$ are both stopping times. However, $\{\omega : T - S = 1\} = \{\omega : 1_{\{3\}}(\omega) = 1\} = \{3\} \notin \mathcal{F}_1$. Therefore $T - S$ is not a stopping time.

3. Observe that $\{T_n \le t\} \in \mathcal{F}_t$ and $\{T_n < t\} \in \mathcal{F}_t$ for all $n \in \mathbb{N}$, $t \in \mathbb{R}_+$, since each $T_n$ is a stopping time and we assume the usual hypotheses. Then
(1) $\sup_n T_n$ is a stopping time, since $\forall t \ge 0$, $\{\sup_n T_n \le t\} = \bigcap_n \{T_n \le t\} \in \mathcal{F}_t$.
(2) $\inf_n T_n$ is a stopping time, since $\{\inf_n T_n < t\} = \bigcup_n \{T_n < t\} \in \mathcal{F}_t$.
(3) $\limsup_{n\to\infty} T_n$ is a stopping time, since $\limsup_{n\to\infty} T_n = \inf_m \sup_{n \ge m} T_n$ (and (1), (2)).
(4) $\liminf_{n\to\infty} T_n$ is a stopping time, since $\liminf_{n\to\infty} T_n = \sup_m \inf_{n \ge m} T_n$ (and (1), (2)).

4. $T$ is clearly a stopping time by exercise 3, since $T = \liminf_n T_n$. Since $T \le T_n$, we have $\mathcal{F}_T \subset \mathcal{F}_{T_n}$ for all $n$ by exercise 1, hence $\mathcal{F}_T \subset \bigcap_n \mathcal{F}_{T_n}$. Conversely, pick a set $A \in \bigcap_n \mathcal{F}_{T_n}$. For all $n \ge 1$, $A \in \mathcal{F}_{T_n}$ and $A \cap \{T_n \le t\} \in \mathcal{F}_t$. Therefore $A \cap \{T \le t\} = A \cap \left(\bigcap_n \{T_n \le t\}\right) = \bigcap_n \left(A \cap \{T_n \le t\}\right) \in \mathcal{F}_t$. This implies $\bigcap_n \mathcal{F}_{T_n} \subset \mathcal{F}_T$ and completes the proof.

5. (a) By completeness of the $L^p$ space, $X \in L^p$. By Jensen's inequality, $E|M_t|^p = E|E(X|\mathcal{F}_t)|^p \le E\left[E(|X|^p \,|\, \mathcal{F}_t)\right] = E|X|^p < \infty$ for $p > 1$.
(b) By (a), $M_t \in L^p \subset L^1$. For $t \ge s \ge 0$, $E(M_t|\mathcal{F}_s) = E(E(X|\mathcal{F}_t)|\mathcal{F}_s) = E(X|\mathcal{F}_s) = M_s$ a.s., so $\{M_t\}$ is a martingale. Next, we show that $\{M_t\}$ is continuous. By Jensen's inequality, for $p > 1$,
$$E|M_t^n - M_t|^p = E|E(M_\infty^n - X \,|\, \mathcal{F}_t)|^p \le E|M_\infty^n - X|^p, \quad \forall t \ge 0. \tag{1}$$
It follows that $\sup_t E|M_t^n - M_t|^p \le E|M_\infty^n - X|^p \to 0$ as $n \to \infty$. Fix an arbitrary $\varepsilon > 0$. By Chebyshev's and Doob's inequalities,
$$P\Big(\sup_t |M_t^n - M_t| > \varepsilon\Big) \le \frac{1}{\varepsilon^p}\, E\Big(\sup_t |M_t^n - M_t|^p\Big) \le \Big(\frac{p}{p-1}\Big)^p \frac{\sup_t E|M_t^n - M_t|^p}{\varepsilon^p} \to 0. \tag{2}$$
Therefore $M^n$ converges to $M$ uniformly in probability, and there exists a subsequence $\{n_k\}$ such that $M^{n_k}$ converges uniformly to $M$ with probability 1. Then $M$ is continuous since, for almost all $\omega$, it is a limit of uniformly convergent continuous paths.

6. Let $p(n)$ denote the probability mass function of the Poisson distribution with parameter $\lambda t$. Assume $\lambda t$ is an integer, as given. Since $E(N_t - \lambda t) = 0$ and $|x| = x + 2x^-$,
$$\begin{aligned}
E|N_t - \lambda t| &= E(N_t - \lambda t) + 2E(N_t - \lambda t)^- = 2E(N_t - \lambda t)^- = 2\sum_{n=0}^{\lambda t} (\lambda t - n)\, p(n) \\
&= 2e^{-\lambda t} \sum_{n=0}^{\lambda t} (\lambda t - n) \frac{(\lambda t)^n}{n!} = 2\lambda t\, e^{-\lambda t} \left( \sum_{n=0}^{\lambda t} \frac{(\lambda t)^n}{n!} - \sum_{n=0}^{\lambda t - 1} \frac{(\lambda t)^n}{n!} \right) = 2e^{-\lambda t}\, \frac{(\lambda t)^{\lambda t}}{(\lambda t - 1)!}.
\end{aligned} \tag{3}$$

7. Since $N$ has stationary increments, for $t \ge s \ge 0$,
$$E(N_t - N_s)^2 = E N_{t-s}^2 = \mathrm{Var}(N_{t-s}) + (E N_{t-s})^2 = \lambda(t-s)\left[1 + \lambda(t-s)\right]. \tag{4}$$
As $t \downarrow s$ (or $s \uparrow t$), $E(N_t - N_s)^2 \to 0$. Thus $N$ is continuous in $L^2$ and therefore in probability.

8. Suppose $\tau_\alpha$ is a stopping time. Since a 3-dimensional Brownian motion is a strong Markov process and $P(\tau_\alpha < \infty) = 1$, define $W_t := B_{\tau_\alpha + t}$. By the strong Markov property, $W$ is again a 3-dimensional Brownian motion and its augmented filtration $\mathcal{F}_t^W$ is independent of $\mathcal{F}_{\tau_\alpha+} = \mathcal{F}_{\tau_\alpha}$. Observe $\|W_0\| = \|B_{\tau_\alpha}\| = \alpha$. Let $S = \inf\{t > 0 : \|W_t\| \le \alpha\}$. Then $S$ is an $\mathcal{F}_t^W$ stopping time and $\{S \le s\} \in \mathcal{F}_s^W$, so $\{S \le s\}$ has to be independent of every set in $\mathcal{F}_{\tau_\alpha}$. However, $\{\tau_\alpha \le t\} \cap \{S \le s\} = \emptyset$, since the path never re-enters the ball of radius $\alpha$ after the last exit time $\tau_\alpha$; this implies that $\{\tau_\alpha \le t\}$ and $\{S \le s\}$ are dependent. Since $\{\tau_\alpha \le t\} \in \mathcal{F}_{\tau_\alpha}$, this contradicts the fact that $\mathcal{F}_{\tau_\alpha}$ and $\mathcal{F}_t^W$ are independent. Hence $\tau_\alpha$ is not a stopping time.
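As a quick numerical sanity check on the closed form (3) in problem 6, here is a small Monte Carlo sketch (an editorial illustration, not part of the original solutions; the choice $\lambda = 2$, $t = 3$, so that $\lambda t = 6$ is an integer, is arbitrary):

```python
import math
import numpy as np

rng = np.random.default_rng(0)

lam, t = 2.0, 3.0                    # arbitrary choice with lam*t = 6 an integer
lt = int(lam * t)

# Closed form (3): E|N_t - lam*t| = 2 e^{-lam t} (lam t)^{lam t} / (lam t - 1)!
closed_form = 2 * math.exp(-lt) * lt**lt / math.factorial(lt - 1)

# Monte Carlo estimate, using N_t ~ Poisson(lam*t)
samples = rng.poisson(lt, size=1_000_000)
estimate = np.abs(samples - lt).mean()

print(f"closed form (3): {closed_form:.4f}")   # ~1.9275
print(f"Monte Carlo:     {estimate:.4f}")      # agrees up to sampling error
```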
9. (a) Since the $L^2$ space is complete, it suffices to show that $S_t^n = \sum_{i=1}^n M_t^i$ is a Cauchy sequence with respect to $n$. For $m \ge n$, by independence of the $\{M^i\}$,
$$E(S_t^n - S_t^m)^2 = E\left(\sum_{i=n+1}^m M_t^i\right)^2 = \sum_{i=n+1}^m E\left(M_t^i\right)^2 = \sum_{i=n+1}^m \frac{t}{i^2}. \tag{5}$$
Since $\sum_{i=1}^\infty 1/i^2 < \infty$, $E(S_t^n - S_t^m)^2 \to 0$ as $n, m \to \infty$. Hence $\{S_t^n\}_n$ is Cauchy and its limit $M_t$ is well defined for all $t \ge 0$.
(b) First, recall Kolmogorov's convergence criterion: if $\{X_i\}_{i \ge 1}$ is a sequence of independent random variables and $\sum_i \mathrm{Var}(X_i) < \infty$, then $\sum_i (X_i - EX_i)$ converges a.s.
For all $i$ and $t$, $\Delta M_t^i = (1/i)\Delta N_t^i$ and $\Delta M_t^i \ge 0$. By Fubini's theorem and the monotone convergence theorem,
$$\sum_{s \le t} \Delta M_s = \sum_{s \le t} \sum_{i=1}^\infty \Delta M_s^i = \sum_{i=1}^\infty \sum_{s \le t} \Delta M_s^i = \sum_{i=1}^\infty \sum_{s \le t} \frac{\Delta N_s^i}{i} = \sum_{i=1}^\infty \frac{N_t^i}{i}. \tag{6}$$
Let $X_i = (N_t^i - t)/i$. Then $\{X_i\}_i$ is a sequence of independent random variables with $EX_i = 0$ and $\mathrm{Var}(X_i) = t/i^2$, hence $\sum_{i=1}^\infty \mathrm{Var}(X_i) < \infty$. Therefore Kolmogorov's criterion implies that $\sum_{i=1}^\infty X_i$ converges almost surely. On the other hand, $\sum_{i=1}^\infty t/i = \infty$. Therefore
$$\sum_{s \le t} \Delta M_s = \sum_{i=1}^\infty \frac{N_t^i}{i} = \sum_{i=1}^\infty X_i + \sum_{i=1}^\infty \frac{t}{i} = \infty.$$

10. (a) Let $N_t = \sum_i \frac{1}{i}(N_t^i - t)$ and $L_t = \sum_i \frac{1}{i}(L_t^i - t)$. As shown in exercise 9(a), $N$ and $L$ are well defined in the $L^2$ sense. Then, by linearity of the $L^2$ space, $M$ is also well defined in the $L^2$ sense, since
$$M_t = \sum_i \frac{1}{i}\left[(N_t^i - t) - (L_t^i - t)\right] = \sum_i \frac{1}{i}(N_t^i - t) - \sum_i \frac{1}{i}(L_t^i - t) = N_t - L_t. \tag{7}$$
Both terms on the right-hand side are martingales that change only by jumps, as shown in exercise 9(b). Hence $M_t$ is a martingale which changes only by jumps.
(b) First we show that, given two independent Poisson processes $N$ and $L$, $\sum_{s > 0} \Delta N_s \Delta L_s = 0$ a.s., i.e. $N$ and $L$ almost surely do not jump simultaneously. Let $\{T_n\}_{n \ge 1}$ be the sequence of jump times of the process $L$. Then $\sum_{s>0} \Delta N_s \Delta L_s = \sum_n \Delta N_{T_n}$, and we want to show that $\sum_n \Delta N_{T_n} = 0$ a.s. Since $\Delta N_{T_n} \ge 0$, it is enough to show that $E(\Delta N_{T_n}) = 0$ for all $n \ge 1$. Fix $n \ge 1$ and let $\mu_{T_n}$ be the probability measure on $\mathbb{R}_+$ induced by $T_n$. By conditioning,
$$E(\Delta N_{T_n}) = E\left[E\left(\Delta N_{T_n} \,|\, T_n\right)\right] = \int_0^\infty E\left(\Delta N_{T_n} \,|\, T_n = t\right) \mu_{T_n}(dt) = \int_0^\infty E(\Delta N_t)\, \mu_{T_n}(dt), \tag{8}$$
where the last equality holds by independence of $N$ and $T_n$. Since $\Delta N_t \in L^1$ and $P(\Delta N_t = 0) = 1$ by problem 25, $E(\Delta N_t) = 0$, hence $E(\Delta N_{T_n}) = 0$.
Next we show that the previous claim holds even when there are countably many Poisson processes. Assume that there exist countably many independent Poisson processes $\{N^i\}_{i \ge 1}$. Let $A \subset \Omega$ be the set on which two or more of the processes $\{N^i\}_{i \ge 1}$ jump simultaneously, and let $\Omega_{ij}$ denote the set on which $N^i$ and $N^j$ do not jump simultaneously. Then $P(\Omega_{ij}) = 1$ for $i \ne j$ by the previous assertion. Since $A \subset \bigcup_{i > j} \Omega_{ij}^c$, we get $P(A) \le \sum_{i > j} P(\Omega_{ij}^c) = 0$. Therefore jumps do not happen simultaneously, almost surely.
Going back to the main proof: by (a) and the fact that $N$ and $L$ do not jump simultaneously, for all $t > 0$,
$$\sum_{s \le t} |\Delta M_s| = \sum_{s \le t} |\Delta N_s| + \sum_{s \le t} |\Delta L_s| = \infty \quad a.s. \tag{9}$$
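Before moving on to problem 11, the Cauchy estimate (5) from problem 9(a) can be illustrated numerically. The sketch below (an editorial illustration; the values $t = 1$, $n = 50$, $m = 200$ are arbitrary) samples each $M_t^i = (N_t^i - t)/i$ at the single fixed time $t$ and compares the empirical second moment of $S_t^m - S_t^n$ with the tail sum $\sum_{i=n+1}^m t/i^2$:

```python
import numpy as np

rng = np.random.default_rng(1)

t, n_paths, m = 1.0, 200_000, 200
# N_t^1, ..., N_t^m evaluated at the fixed time t: independent Poisson(t) counts
counts = rng.poisson(t, size=(n_paths, m))
terms = (counts - t) / np.arange(1, m + 1)      # i-th column: M_t^i = (N_t^i - t)/i

def S(n):
    """Partial sum S_t^n = sum_{i=1}^n M_t^i, one value per simulated path."""
    return terms[:, :n].sum(axis=1)

n = 50
gap = S(m) - S(n)                               # same draws, so this is the tail sum
print("empirical   E(S_t^n - S_t^m)^2 :", (gap**2).mean())
print("theoretical sum_{i=n+1}^m t/i^2:", t * (1.0 / np.arange(n + 1, m + 1)**2).sum())
```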
11. Continuity: We use the notation adopted in Example 2 of Section 4 (p. 33). Assume $E|U_1| < \infty$. By independence of the $U_i$, an elementary (union bound) inequality, Markov's inequality, and the properties of the Poisson process, we observe
$$\begin{aligned}
\lim_{s \to t} P(|Z_t - Z_s| > \varepsilon) &= \lim_{s \to t} \sum_k P(|Z_t - Z_s| > \varepsilon \mid N_t - N_s = k)\, P(N_t - N_s = k) \\
&\le \lim_{s \to t} \sum_k P\Big(\sum_{i=1}^k |U_i| > \varepsilon\Big) P(N_t - N_s = k) \le \lim_{s \to t} \sum_k k\, P\Big(|U_1| > \frac{\varepsilon}{k}\Big) P(N_t - N_s = k) \\
&\le \lim_{s \to t} \frac{E|U_1|}{\varepsilon} \sum_k k^2\, P(N_t - N_s = k) = \lim_{s \to t} \frac{E|U_1|}{\varepsilon}\, \lambda(t-s)\left[1 + \lambda(t-s)\right] = 0.
\end{aligned} \tag{10}$$

Independent increments: Let $F$ be the distribution function of $U$. Using the independence of $\{U_k\}_k$ and the strong Markov property of $N$, for arbitrary $t \ge s \ge 0$,
$$\begin{aligned}
E\left(e^{iu(Z_t - Z_s) + iv Z_s}\right) &= E\left(e^{iu\sum_{k=N_s+1}^{N_t} U_k + iv\sum_{k=1}^{N_s} U_k}\right) = E\left[E\left(e^{iu\sum_{k=N_s+1}^{N_t} U_k + iv\sum_{k=1}^{N_s} U_k} \,\Big|\, \mathcal{F}_s\right)\right] \\
&= E\left[e^{iv\sum_{k=1}^{N_s} U_k}\, E\left(e^{iu\sum_{k=N_s+1}^{N_t} U_k} \,\Big|\, \mathcal{F}_s\right)\right] = E\left(e^{iv\sum_{k=1}^{N_s} U_k}\right) E\left(e^{iu\sum_{k=N_s+1}^{N_t} U_k}\right) \\
&= E\left(e^{iv Z_s}\right) E\left(e^{iu(Z_t - Z_s)}\right).
\end{aligned} \tag{11}$$
This shows that $Z$ has independent increments.
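The factorization established in (11) can also be observed numerically. The sketch below (an editorial illustration, not part of the original solutions) builds a compound Poisson path using the standard fact that, given $N_t = n$, the $n$ jump times are i.i.d. uniform on $[0, t]$; the jump law $U_k \sim N(0,1)$ and the frequencies $u, v$ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)

lam, s, t, n_paths = 1.5, 1.0, 2.5, 100_000
u, v = 0.7, -1.3                        # arbitrary test frequencies

Zs = np.zeros(n_paths)                  # Z_s
incr = np.zeros(n_paths)                # Z_t - Z_s
for j in range(n_paths):
    n_jumps = rng.poisson(lam * t)
    times = rng.uniform(0.0, t, size=n_jumps)    # jump times, uniform given N_t
    jumps = rng.standard_normal(n_jumps)         # U_k i.i.d.; N(0,1) is one arbitrary choice
    Zs[j] = jumps[times <= s].sum()
    incr[j] = jumps[times > s].sum()

joint = np.exp(1j * (u * incr + v * Zs)).mean()
factored = np.exp(1j * u * incr).mean() * np.exp(1j * v * Zs).mean()
print("E exp(iu(Z_t - Z_s) + iv Z_s):", joint)
print("product of the two factors:  ", factored)   # agree up to Monte Carlo error
```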