Lecture 9: Filtration and Martingales (PDF)


MASSACHUSETTS INSTITUTE OF TECHNOLOGY
6.265/15.070J Fall 2013
Lecture 9, 10/2/2013

Conditional expectations, filtration and martingales

Content.
1. Conditional expectations
2. Martingales, sub-martingales and super-martingales

1 Conditional Expectations

1.1 Definition

Recall how we define conditional expectations. Given a random variable X and an event A, we define

    E[X|A] = E[X 1{A}] / P(A).

We can also consider conditional expectations with respect to random variables. For simplicity, say Y is a simple random variable on Ω taking values y1, y2, ..., yn with probabilities P(ω : Y(ω) = yi) = pi. We define the conditional expectation E[X|Y] as the random variable which takes value E[X|Y = yi] with probability pi, where E[X|Y = yi] is understood as the expectation of X conditioned on the event {ω ∈ Ω : Y(ω) = yi}.

It turns out that one can define conditional expectation with respect to a σ-field. This notion includes both conditioning on events and conditioning on random variables as special cases.

Definition 1. Given Ω, two σ-fields G ⊂ F on Ω, and a probability measure P on (Ω, F), suppose X is a random variable measurable with respect to F, but not necessarily with respect to G, and suppose X has a finite L1 norm (that is, E[|X|] < ∞). The conditional expectation E[X|G] is defined to be a random variable Y which satisfies the following properties:

(a) Y is measurable with respect to G.

(b) For every A ∈ G, we have E[X 1{A}] = E[Y 1{A}].

For simplicity, from now on we write Z ∈ F to indicate that Z is measurable with respect to F. Also, let F(Z) denote the smallest σ-field with respect to which Z is measurable.

Theorem 1. The conditional expectation E[X|G] exists and is unique.

Uniqueness means that if Y′ ∈ G is any other random variable satisfying conditions (a) and (b), then Y′ = Y a.s. (with respect to the measure P).
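As a quick numerical illustration (not part of the lecture), the following NumPy sketch estimates E[X|Y] for a simple two-valued Y from samples and checks the defining property (b) on the event {Y = 1}; the distributions of X and Y here are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (our own choice, not from the lecture): Y is a simple
# random variable taking values {0, 1}, and X depends on Y plus noise.
n = 200_000
Y = rng.integers(0, 2, size=n)        # P(Y = 0) = P(Y = 1) = 1/2
X = 2.0 * Y + rng.normal(size=n)      # X given Y = y has mean 2y

# E[X|Y] is itself a random variable: on the event {Y = y} it takes the
# value E[X 1{Y = y}] / P(Y = y), estimated here by sample averages.
cond_exp = {y: X[Y == y].mean() for y in (0, 1)}
print(cond_exp)  # conditional means: close to 0.0 and close to 2.0

# Defining property (b) with A = {Y = 1}: E[X 1{A}] = E[E[X|Y] 1{A}].
A = (Y == 1)
lhs = (X * A).mean()
rhs = (np.where(A, cond_exp[1], cond_exp[0]) * A).mean()
assert abs(lhs - rhs) < 1e-8
```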
We will prove this theorem using the notion of the Radon-Nikodym derivative, whose existence we state without proof below. But before we do this, let us develop some intuition behind this definition.

1.2 Simple properties

• Consider the trivial case G = {Ø, Ω}. We claim that the constant c = E[X] is E[X|G]. Indeed, any constant function is measurable with respect to any σ-field, so (a) holds. For (b), we have E[X 1{Ω}] = E[X] = c and E[c 1{Ω}] = E[c] = c; also E[X 1{Ø}] = 0 and E[c 1{Ø}] = 0.

• At the other extreme, suppose G = F. Then we claim that X = E[X|G]. Condition (b) trivially holds, and condition (a) also holds because of the equality of the two σ-fields.

• Let us go back to our example of conditional expectation with respect to an event A ⊂ Ω. Consider the associated σ-field G = {Ø, A, A^c, Ω} (we established in the first lecture that this is indeed a σ-field). Consider the random variable Y : Ω → R defined as

    Y(ω) = E[X|A] = E[X 1{A}] / P(A) =: c1   for ω ∈ A,

and

    Y(ω) = E[X|A^c] = E[X 1{A^c}] / P(A^c) =: c2   for ω ∈ A^c.

We claim that Y = E[X|G]. First, Y ∈ G. Indeed, assume for simplicity c1 < c2. Then {ω : Y(ω) ≤ x} = Ø when x < c1, = A for c1 ≤ x < c2, and = Ω when x ≥ c2. Thus Y ∈ G. Next we need to check the equality E[X 1{B}] = E[Y 1{B}] for every B = Ø, A, A^c, Ω, which is straightforward to do. For example, say B = A. Then

    E[X 1{A}] = E[X|A] P(A) = c1 P(A).

On the other hand, we defined Y(ω) = c1 for all ω ∈ A. Thus

    E[Y 1{A}] = c1 E[1{A}] = c1 P(A),

and the equality checks.

• Suppose now G corresponds to some partition A1, ..., Am of the sample space Ω. Given X ∈ F, a similar analysis shows that Y = E[X|G] is the random variable which takes value E[X|Aj] for all ω ∈ Aj, for j = 1, 2, ..., m. You will recognize that this is one of our earlier examples, where we conditioned on a simple random variable Y to get E[X|Y]. In fact, this generalizes as follows:

• Given two random variables X, Y : Ω → R, suppose both are in F.
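The partition case is concrete enough to compute directly. Here is a small NumPy sketch (the sample space, weights, and partition are our own illustrative choices) that builds E[X|G] as the block-wise weighted average and checks the defining property (b) on each block:

```python
import numpy as np

# Hypothetical finite sample space (our own illustrative choice): six
# equally likely outcomes, partitioned into A1 = {0,1}, A2 = {2,3,4},
# A3 = {5}.
p = np.full(6, 1 / 6)                          # P(omega)
X = np.array([1.0, 3.0, 2.0, 2.0, 5.0, 7.0])
partition = [[0, 1], [2, 3, 4], [5]]

# E[X|G] is constant on each block Aj, equal to E[X 1{Aj}] / P(Aj).
Y = np.empty_like(X)
for block in partition:
    Y[block] = (X[block] * p[block]).sum() / p[block].sum()

print(Y)  # [2. 2. 3. 3. 3. 7.]

# Defining property (b), checked on each block of the partition:
for block in partition:
    assert np.isclose((X[block] * p[block]).sum(), (Y[block] * p[block]).sum())
```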
Let G = G(Y ) ⊂ F be the field generated by Y . We define E[XjY ] to be E[XjG]. 1.3 Proof of existence We now give a proof sketch of Theorem 1. Proof. Given two probability measures P1; P2 defined on the same (Ω; F), P2 is defined to be absolutely continuous with respect to P1 if for every set A 2 F, P1(A) = 0 implies P2(A) = 0. The following theorem is the main technical part for our proof. It involves using the familiar idea of change of measures. Theorem 2 (Radon-Nikodym Theorem). Suppose P2 is absolutely continuous with respect to P1. Then there exists a non-negative random variable Y :Ω ! R+ such that for every A 2 F P2(A) = EP1 [Y 1fAg]: Function Y is called Radon-Nikodym (RN) derivative and sometimes is denoted dP2=dP1. 3 0 Problem 1. Prove that Y is unique up-to measure zero. That is if Y is also RN 0 derivative, then Y = Y a.s. w.r.t. P1 and hence P2. We now use this theorem to establish the existence of conditional expec­ tations. Thus we have G ⊂ F, P is a probability measure on F and X is measurable with respect to F. We will only consider the case X ≥ 0 such that E[X] < 1. We also assume that X is not constant, so that E[X] > 0. Consider a new probability measure P2 on G defined as follows: EP[X1fAg] P2(A) = ;A 2 G; EP[X] where we write EP in place of E to emphasize that the expectation operator is with respect to the original measure P. Check that this is indeed a probability measure on (Ω; G). Now P also induced a probability measure on (Ω; G). We claim that P2 is absolutely continuous with respect to P. Indeed if P(A) = 0 then the numerator is zero. By the Radon-Nikodym Theorem then there exists Z which is measurable with respect to G such that for any A 2 G P2(A) = EP[Z1fAg]: We now take Y = ZEP[X]. 
Then Y satisfies condition (b) of being a conditional expectation, since for every set B ∈ G,

    E_P[Y 1{B}] = E_P[X] E_P[Z 1{B}] = E_P[X 1{B}].

The second part, the uniqueness property, is proved similarly to the uniqueness of the RN derivative (Problem 1).

2 Properties

Here are some additional properties of conditional expectations.

Linearity. E[aX + Y|G] = a E[X|G] + E[Y|G].

Monotonicity. If X1 ≤ X2 a.s., then E[X1|G] ≤ E[X2|G]. The proof idea is similar to the one you need to use for Problem 1.

Independence. Problem 2. Suppose X is independent from G. Namely, for every measurable A ⊂ R and every B ∈ G, P({X ∈ A} ∩ B) = P(X ∈ A) P(B). Prove that E[X|G] = E[X].

Conditional Jensen's inequality. Let φ be a convex function with E[|X|], E[|φ(X)|] < ∞. Then φ(E[X|G]) ≤ E[φ(X)|G].

Proof. We use the following representation of a convex function, which we do not prove (see Durrett [1]). Let

    A = {(a, b) ∈ Q^2 : ax + b ≤ φ(x) for all x}.

Then φ(x) = sup{ax + b : (a, b) ∈ A}. Now we prove Jensen's inequality. For any pair of rationals (a, b) satisfying the bound above, we have, by monotonicity, E[φ(X)|G] ≥ a E[X|G] + b a.s., implying

    E[φ(X)|G] ≥ sup{a E[X|G] + b : (a, b) ∈ A} = φ(E[X|G]) a.s.

Tower property. Suppose G1 ⊂ G2 ⊂ F. Then

    E[E[X|G1]|G2] = E[X|G1]  and  E[E[X|G2]|G1] = E[X|G1].

That is, the smaller field wins.

Proof. By definition, E[X|G1] is G1-measurable; therefore it is G2-measurable. The first equality then follows from the fact that E[X|G] = X when X ∈ G, which we established earlier. Now fix any A ∈ G1. Denote E[X|G1] by Y1 and E[X|G2] by Y2. Then Y1 ∈ G1 and Y2 ∈ G2, and

    E[Y1 1{A}] = E[X 1{A}],

simply by the definition of Y1 = E[X|G1]. On the other hand, we also have A ∈ G2. Therefore

    E[X 1{A}] = E[Y2 1{A}].

Combining the two equalities, we see that E[Y2 1{A}] = E[Y1 1{A}] for every A ∈ G1. Therefore E[Y2|G1] = Y1, which is the desired result.

An important special case is when G1 is the trivial σ-field {Ø, Ω}.
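The tower property is easy to check numerically on a finite space, taking G1 and G2 to be generated by nested partitions. The sketch below does this; the space, weights, and partitions are our own illustrative choices.

```python
import numpy as np

# Finite Omega with eight equally likely points (our own illustrative
# choice). G2 is generated by a finer partition, G1 by a coarser one
# whose blocks are unions of G2-blocks, so that G1 ⊂ G2.
p = np.full(8, 1 / 8)
X = np.array([1.0, 5.0, 2.0, 4.0, 3.0, 7.0, 0.0, 6.0])
G2_blocks = [[0, 1], [2, 3], [4, 5], [6, 7]]
G1_blocks = [[0, 1, 2, 3], [4, 5, 6, 7]]

def cond_exp(X, blocks, p):
    """E[X|G] for the sigma-field generated by a partition:
    the p-weighted average of X over each block."""
    Y = np.empty_like(X)
    for b in blocks:
        Y[b] = (X[b] * p[b]).sum() / p[b].sum()
    return Y

Y1 = cond_exp(X, G1_blocks, p)   # E[X|G1]
Y2 = cond_exp(X, G2_blocks, p)   # E[X|G2]

# Tower property: the smaller field wins, in both orders.
assert np.allclose(cond_exp(Y1, G2_blocks, p), Y1)  # E[E[X|G1]|G2] = E[X|G1]
assert np.allclose(cond_exp(Y2, G1_blocks, p), Y1)  # E[E[X|G2]|G1] = E[X|G1]
# Special case G1 = {Ø, Ω}: E[E[X|G2]] = E[X].
assert np.isclose((Y2 * p).sum(), (X * p).sum())
print("tower property verified")
```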
We obtain that for every σ-field G,

    E[E[X|G]] = E[X].

3 Filtration and martingales

3.1 Definition

A family of σ-fields {Ft} is defined to be a filtration if Ft1 ⊂ Ft2 whenever t1 ≤ t2. We will consider only the two cases t ∈ Z+ and t ∈ R+. A stochastic process {Xt} is said to be adapted to a filtration {Ft} if Xt ∈ Ft for every t.

Definition 2. A stochastic process {Xt} adapted to a filtration {Ft} is defined to be a martingale if

1. E[|Xt|] < ∞ for all t;
2. E[Xt|Fs] = Xs for all s < t.

When the equality is replaced with ≤, the process is called a supermartingale. When it is replaced with ≥, the process is called a submartingale.

Suppose we have a stochastic process {Xt} adapted to a filtration {Ft}, and suppose for some s′ < s < t we have E[Xt|Fs] = Xs and E[Xs|Fs′] = Xs′.
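A standard first example of a martingale (stated here as an illustration, not taken from the lecture) is the simple symmetric random walk with respect to its natural filtration: the increments are mean-zero and independent of the past, so E[Xt|Fs] = Xs. The Monte Carlo sketch below checks this by conditioning on the value of Xs:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simple symmetric random walk X_n = xi_1 + ... + xi_n with i.i.d.
# steps xi_i = ±1 (equal probability): a martingale with respect to
# its natural filtration, since E[X_t|F_s] = X_s + E[X_t - X_s] = X_s.
n_paths, s, t = 400_000, 5, 10
steps = rng.choice([-1, 1], size=(n_paths, t))
paths = steps.cumsum(axis=1)
Xs, Xt = paths[:, s - 1], paths[:, t - 1]

# Check E[X_t | X_s = k] ≈ k for several reachable levels k (conditioning
# on the value of X_s suffices here because the increments are independent).
for k in (-3, -1, 1, 3):
    sel = (Xs == k)
    print(k, Xt[sel].mean())  # each conditional mean is close to k
```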