
Appendix: A short presentation of stochastic calculus (by P.A. Meyer)

This appendix was written originally as an introduction to semimartingale theory and stochastic calculus for non-probabilists. It may be read independently of the remainder of the book, or as a commentary on Chapter I to provide motivations and examples. In contradistinction to the main text, it isn't restricted to continuous semimartingales. Stochastic differential geometry uses only continuous semimartingales, since the jumps of processes taking values in a manifold cannot be described in local coordinates. On the other hand, for real (or vector) semimartingales, the discontinuous case arises almost as often as the continuous one, and it would be a pity not to mention it at all in this book. We are going to present without proofs the main facts about general semimartingales, with detailed comments and references to the second volume of Dellacherie-Meyer, Probability and Potentials (hereafter quoted as PP). More recent references are mentioned at the end.

There are at least two definitions of semimartingales. The older one follows the historical development of the theory. Square integrable martingales are introduced as models for a "noise", superimposed on a "signal" which is a process of integrable total variation. Then stochastic integrals are studied, general semimartingales and stochastic integrals are defined by localization, and Itô's formula is proved. Finally, it becomes clear at the very end that the class of processes considered is invariant under change of law, that is, semimartingales remain semimartingales if the basic law ℙ is replaced by an equivalent one (i.e., a law which has the same sets of measure 0), though their signal-plus-noise decomposition is altered by this change. The second method starts with the stochastic integral, taking care to have everything depend only on the equivalence class of ℙ, and gets down to the concrete decompositions only at the end. The equivalence of these two points of view is far from trivial. It was proved by Dellacherie-Mokobodzki after much pioneering work by Metivier-Pellaumail, and was also independently discovered by Bichteler (who started from a vector integral approach). We shall follow the second path, since it is more convenient for a concise presentation.

Semimartingales as integrators

1  A stochastic process is a function X(t, ω) (implicitly assumed to be measurable) defined on a product I × Ω, where (Ω, F, ℙ) is a complete probability space and I is an interval of the line, the parameter t ∈ I representing time. It will simplify things a little to assume that I is closed, and we take I = [0, 1] for definiteness. The usual notation for a stochastic process is (X_t), or simply X. Our aim is to define the stochastic integral ∫ f dX = ∫_0^1 f_t(ω) dX_t(ω) of a process f = (f_t) (the integrand) with respect to a process X = (X_t) (the integrator), the result of this integration being a random variable defined a.e. For simplicity, we shall assume from the start that our integrator is right continuous in its time variable t, and has the value 0 at time 0. We shall say that the integrand is a simple process if there exists a dyadic subdivision t_i = i2^{-k} of [0, 1] such that on each interval ]t_i, t_{i+1}] f depends only on ω: f_t(ω) = f_i(ω). Then the "stochastic integral" has an obvious definition

(1)    ∫ f dX = ∑_i f_i (X_{t_{i+1}} - X_{t_i}).

We start with a linear space of uniformly bounded simple processes, called S.
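Before turning S into a σ-field and defining general integrators, here is a minimal numerical sketch of formula (1), in Python with NumPy. The simulated path, the level k = 8, and the name elementary_integral are illustrative assumptions, not part of Meyer's text; the point is only that (1) is a finite sum of values of f times increments of X.

```python
import numpy as np

rng = np.random.default_rng(0)

k = 8                                  # level of the dyadic subdivision t_i = i * 2**-k
n = 2**k
t = np.linspace(0.0, 1.0, n + 1)       # grid points t_0 = 0, ..., t_n = 1

# A right-continuous integrator path with X_0 = 0, sampled at the grid points
# (here just cumulative sums of arbitrary random increments, for illustration).
X = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, 2.0**(-k / 2), n))))

# A simple process: on each interval ]t_i, t_{i+1}] the value f_i depends on
# omega only (one random draw per interval), not on t.
f = rng.uniform(-1.0, 1.0, n)

def elementary_integral(f, X):
    """Formula (1): sum_i f_i * (X_{t_{i+1}} - X_{t_i})."""
    return float(np.sum(f * np.diff(X)))

print(elementary_integral(f, X))       # one realization of the random variable (1)
```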
Processes being functions of two variables (t, ω), S generates a σ-field F(S) on [0, 1] × Ω. Let us make the following definition: the process (X_t) is called an integrator if there is a map (the "stochastic integral") extending to all uniformly bounded F(S)-measurable processes the elementary integral (1), and satisfying the dominated convergence theorem in probability. More explicitly, if a uniformly bounded sequence (f^n) converges pointwise to 0, then the random variables ∫ f^n dX converge to 0 in probability.

Convergence in probability has the advantage of not being altered if ℙ is replaced by an equivalent measure. It has the disadvantage that the corresponding topology isn't locally convex, and the functional analysis involved is rather unusual and delicate.

Let us look for integrators in the deterministic case, Ω consisting of just one point, so that a process is simply a function of time. Take for S the space of all usual step functions on [0, 1]. Then our assumption means that the "stochastic" integral is an ordinary integral, and the integrator X(s) must be a function of bounded variation. As a first attempt to generalize this, we may take some non-degenerate Ω, S being the space of all uniformly bounded simple processes. Then one can show (though the proof isn't quite trivial) that for a.e. ω the sample function X(·, ω) is a function of bounded variation, and the stochastic integral is just a pathwise Stieltjes integral. This extension of the usual integral is useful, but of course it isn't particularly exciting.

2  The situation changes radically, however, if a filtration (F_t) is added to the above picture, the integrator (X_t) being adapted and the class S being restricted to that of adapted simple processes (uniformly bounded). For the convenience of the reader, let us recall that (F_t) is an increasing and right-continuous family of sub-σ-fields of F, each one containing all sets of measure 0 in Ω, and that adaptation of a process (X_t) means that X_t is F_t-measurable for each t. Then it turns out that there usually exist (unless the filtration is nearly trivial) integrators (X_t) which are not of bounded variation. The best known of these integrators is Brownian motion, for which the stochastic integration theory is the celebrated Itô integral.

Integrators with respect to S are then called semimartingales, and the σ-field generated by S is the predictable (or previsible) σ-field. Functions of (t, ω) measurable with respect to this σ-field are called predictable processes. All left continuous adapted processes are predictable, but this isn't generally the case for right continuous ones, which are thus "unpredictable". This seems to be just a pun, but in reality it has a deep significance in nature, being a mathematical translation of the impossibility of predicting exactly phenomena like radioactive disintegrations (or the exact time your children will allow you to use the phone).

A simple, but very useful consequence of the dominated convergence theorem is the following: if f is a uniformly bounded left continuous adapted process, the stochastic integral ∫ f dX is well defined, and is the limit in probability of the natural Riemann sums over dyadic subdivisions (t_i = i2^{-k})

(2)    ∫ f dX = lim_k ∑_i f_{t_i} (X_{t_{i+1}} - X_{t_i})    (limit in probability).

3  Let us pause for comments. There are two kinds of integrators for which integration theory is specially easy. The first one is that of processes whose sample paths are functions of bounded variation; a pathwise sketch of this case is given just below.
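The following Python sketch of this first, pathwise case is an illustration added here, not something from Meyer's text; the compound-Poisson-type pure-jump path, the integrand, and all names are assumptions. For such a bounded variation path the stochastic integral is just the Lebesgue-Stieltjes integral, here a finite sum over the jumps, and the dyadic Riemann sums of formula (2) converge path by path, no probability being needed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical bounded variation integrator: a pure-jump path with X_0 = 0,
# jumping by xi_j at random times T_j in [0, 1] (compound-Poisson type).
n_jumps = rng.poisson(5)
T = np.sort(rng.uniform(0.0, 1.0, n_jumps))
xi = rng.normal(0.0, 1.0, n_jumps)

def X(t):
    """Right-continuous path: X_t = sum of the jumps occurring at times <= t."""
    return xi[T <= t].sum()

f = lambda t: np.sin(2.0 * np.pi * t)      # a bounded continuous integrand

# Pathwise Stieltjes integral of f against X: a finite sum over the jumps.
exact = np.sum(f(T) * xi)

# Dyadic Riemann sums (2) computed on this single path:
# sum_i f(t_i) * (X_{t_{i+1}} - X_{t_i}).
for k in (4, 8, 12):
    t = np.linspace(0.0, 1.0, 2**k + 1)
    Xt = np.array([X(s) for s in t])
    riemann = np.sum(f(t[:-1]) * np.diff(Xt))
    print(f"k = {k:2d}: Riemann sum = {riemann:+.6f}, jump sum = {exact:+.6f}")
```

With left endpoint evaluation and a continuous integrand, each Riemann sum picks up f at a grid point just to the left of each jump, so the sums approach the jump sum as k grows.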
The second one is that of square integrable martingales (M_t) (see n°9 below), for which one can easily extend the classical method Itô used in the case of Brownian motion. Thus all square integrable martingales are integrators. We shall see later that all martingales are in fact integrators, but this is a much more difficult result.

Secondly, since convergence in probability depends only on the equivalence class of the law ℙ, the same is true for the semimartingale notion itself, and for the value of the stochastic integral. This is extremely important for statistical problems in the theory of processes, since in statistics the probability law isn't known a priori, but must be deduced from observation.

A less important point concerns the exceptional sets in the dominated convergence theorem: in classical measure theory, we would allow for convergence except on a set of measure 0. The same can be done here, but the exceptional set to be considered is a subset of [0, 1] × Ω while the law ℙ is on Ω. So we define an evanescent set A ⊂ [0, 1] × Ω as a set whose projection on Ω has measure 0, and we can add the provision "except on an evanescent set" to the statement of the dominated convergence theorem. Also, the usual dominated convergence theorem would allow the sequence to be dominated by an integrable r.v., not just a uniformly bounded one. Extensions of this kind do exist (PP VIII.75), but do not interest us now, since they would refer to a specific semimartingale, while we are concerned with statements valid for all semimartingales. The form of the dominated convergence theorem we have given will suffice us, except for a slight relaxation, to come a little later, of the uniform boundedness requirement.

Finally, let us give a reference to PP for the equivalence of the "integrator" definition of semimartingales and the traditional one. It appears at VIII.79-85, and in fact it assumes less than we did here. Namely, it suffices to ask that the set of all integrals ∫ f dX with f simple and bounded by 1 in absolute value be bounded in probability. Then it is shown that one can replace the law ℙ by an equivalent one so that X becomes a quasimartingale, a reasonable, easy to handle type of process (see n°10 below) which can be readily decomposed into a process with integrable variation plus a martingale. An easy consequence of this decomposition is the fact that semimartingales aren't only right continuous processes: they also have left limits (in the technical jargon, they are càdlàg processes).
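To illustrate the point of n°2-3 that an adapted integrator need not have bounded variation, here is a hedged numerical sketch, again not from Meyer's text (the grid levels and variable names are illustrative assumptions). The dyadic Riemann sums (2) with integrand f = B against a Brownian path B converge, in probability, to the Itô value (B_1² - 1)/2, even though the paths of B have unbounded variation.

```python
import numpy as np

rng = np.random.default_rng(2)

# One Brownian path B sampled on a fine dyadic grid of level K, with B_0 = 0.
K = 16
N = 2**K
dt = 1.0 / N
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), N))))

# Limit of the Riemann sums (2) for the integrand f = B: the Ito integral
# of B dB over [0, 1], equal to (B_1**2 - 1) / 2.
ito_value = 0.5 * (B[-1] ** 2 - 1.0)

# Riemann sums over coarser dyadic subdivisions t_i = i * 2**-k, read off the
# same path: sum_i B_{t_i} * (B_{t_{i+1}} - B_{t_i}).
for k in (4, 8, 12, 16):
    idx = np.arange(0, N + 1, 2 ** (K - k))    # indices of the level-k grid
    Bk = B[idx]
    riemann = np.sum(Bk[:-1] * np.diff(Bk))
    print(f"k = {k:2d}: Riemann sum = {riemann:+.5f}, (B_1^2 - 1)/2 = {ito_value:+.5f}")
```

The small residual gap at k = K reflects the sampling error in the quadratic variation of the simulated path; the "- 1" in the target, rather than B_1²/2 alone, is exactly the trace of the unbounded variation of the paths.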