The Basics of Stochastic Processes


MASSACHUSETTS INSTITUTE OF TECHNOLOGY
6.436J/15.085J Fall 2018
Lecture 20

THE BASICS OF STOCHASTIC PROCESSES

Contents

1. Stochastic processes: spaces R^∞ and R^[0,∞)
2. The Bernoulli process
3. The Poisson process

We now turn to the study of some simple classes of stochastic processes. Examples and a more leisurely discussion of this material can be found in the corresponding chapter of [BT].

A discrete-time stochastic process is a sequence of random variables {X_n} defined on a common probability space (Ω, F, P). In more detail, a stochastic process is a function X of two variables n and ω. For every n, the function ω ↦ X_n(ω) is a random variable (a measurable function). An alternative perspective is provided by fixing some ω ∈ Ω and viewing X_n(ω) as a function of n (a "time function," or "sample path," or "trajectory").

A continuous-time stochastic process is defined similarly, as a collection of random variables {X_t} defined on a common probability space (Ω, F, P), where t varies over the non-negative reals R_+.

1 SPACES OF TRAJECTORIES: R^∞ AND R^[0,∞)

1.1 σ-algebras on spaces of trajectories

Recall that earlier we defined the Borel σ-algebra B^n on R^n as the smallest σ-algebra containing all measurable rectangles, i.e., events of the form

    B_1 × ··· × B_n = {x ∈ R^n : x_j ∈ B_j for all j ∈ [n]},

where the B_j are (1-dimensional) Borel subsets of R. A generalization is the following:

Definition 1. Let T be an arbitrary set of indices. The product space R^T is defined as

    R^T ≜ ∏_{t∈T} R = {(x_t, t ∈ T)}.

A subset J_S(B) of R^T is called a cylinder with base B on time indices S = {s_1, ..., s_n} if

    J_S(B) = {(x_t) : (x_{s_1}, ..., x_{s_n}) ∈ B},  B ⊂ R^n,   (1)

with B ∈ B^n. The product σ-algebra B^T is the smallest σ-algebra containing all cylinders:

    B^T = σ{J_S(B) : S finite, B ∈ B^S}.

For the special case T = {1, 2, ...} the notation R^∞ and B^∞ will be used.
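The two-variable view of a discrete-time process can be made concrete in a short simulation. In this sketch the sample point ω is informally encoded by the seed of a pseudo-random generator (an illustrative device, not part of the formal definition): fixing the seed yields one Bernoulli(p) sample path n ↦ X_n(ω), while fixing n and varying the seed samples the random variable X_n.

```python
import random

def bernoulli_path(omega_seed, p=0.5, horizon=10):
    """Trajectory n -> X_n(omega) of a Bernoulli(p) process for one
    fixed omega, informally encoded by the generator's seed."""
    rng = random.Random(omega_seed)
    return [1 if rng.random() < p else 0 for _ in range(horizon)]

# Fixing omega (the seed) gives one sample path, a function of n:
path = bernoulli_path(omega_seed=42)
print(path)

# Fixing n = 3 and varying omega samples the random variable X_3;
# the empirical mean over many omegas should be close to p = 0.5:
x3_samples = [bernoulli_path(seed)[3] for seed in range(1000)]
print(sum(x3_samples) / len(x3_samples))
```

The function names and parameters here are hypothetical scaffolding; the point is only that the same object X serves both as a family of random variables (n fixed) and as a random trajectory (ω fixed).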
The following is a measurable subset of R^∞:

    E_0 = {x ∈ R^∞ : x_n converges}.

The following are measurable subsets of R^[0,∞):

    E_1 = {x ∈ R^[0,∞) : x_t = 0 for all t ∈ Q},   (2)
    E_2 = {x ∈ R^[0,∞) : sup_{t∈Q} x_t > 0}.   (3)

The following are not measurable subsets of R^[0,∞):

    E_1′ = {x ∈ R^[0,∞) : x_t = 0 for all t},   (4)
    E_2′ = {x ∈ R^[0,∞) : sup_t x_t > 0},   (5)
    E_3 = {x ∈ R^[0,∞) : t ↦ x_t is continuous}.   (6)

Non-measurability of E_1′ and E_2′ will follow from the next result. We mention that since E_1′ ∩ E_3 = E_1 ∩ E_3, by considering the trace of B^[0,∞) on E_3 the sets E_1′ and E_2′ can be made measurable. This is a typical approach taken in the theory of continuous stochastic processes.

Proposition 1. The following provides information about B^T:

(i) For every measurable set E ∈ B^T there exist a countable set of time indices S = {s_1, ...} and a set B ∈ B^∞ such that

    E = {(x_t) : (x_{s_1}, ..., x_{s_n}, ...) ∈ B}.   (7)

(ii) Every measurable set E ∈ B^T can be approximated within arbitrary ε by a cylinder:

    P[E △ J_S(B)] ≤ ε,

where P is any probability measure on (R^T, B^T).

(iii) If {X_t, t ∈ T} is a collection of random variables on (Ω, F), then the map

    X : Ω → R^T,   (8)
    ω ↦ (X_t(ω), t ∈ T)   (9)

is measurable with respect to B^T.

Proof: For (i), simply notice that the collection of sets of the form (7) contains all cylinders and is closed under countable unions and intersections. To see this, notice that one can without loss of generality assume that every set in, for example, a union F = ∪_n E_n corresponds to the same set of indices in (7) (otherwise extend the index sets first). (ii) follows from the next exercise and the fact that {J_S(B), B ∈ B^S} (for fixed finite S) forms a σ-algebra. For (iii), note that it is sufficient to check that X^{-1}(J_S(B)) ∈ F (since cylinders generate B^T). The latter follows at once from the definition of a cylinder (1) and the fact that {(X_{s_1}, ..., X_{s_n}) ∈ B} is clearly in F.

Exercise 1.
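The contrast between E_2 (measurable) and E_2′ (not) rests on the countability of Q. For a continuous trajectory, the sup over any countable dense set of times agrees with the full sup, which is why restricting to E_3 resolves the issue. A small numeric sketch of that agreement, using dyadic rationals (the particular path sin(10t) on [0, 1] is just an illustrative choice):

```python
import math

def sup_over_dyadics(f, T=1.0, levels=12):
    """Approximate sup_{t in [0, T]} f(t) for a continuous path f by a
    sup over the dyadic rationals T * k / 2^levels, a countable dense set."""
    n = 2 ** levels
    return max(f(T * k / n) for k in range(n + 1))

f = lambda t: math.sin(10 * t)  # one continuous sample path
s = sup_over_dyadics(f)
print(s)  # close to 1.0, the true sup of sin(10t) on [0, 1]
```

For a discontinuous path this can fail badly (a single spike at an irrational time is invisible to every countable grid), which is exactly the obstruction behind (5).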
Let F_α, α ∈ S, be a collection of σ-algebras and let F = ⋁_{α∈S} F_α be the smallest σ-algebra containing all of them. Call a set B finitary if B ∈ ⋁_{α∈S_1} F_α, where S_1 is a finite subset of S. Prove that every E ∈ F is finitary approximable, i.e., that for every ε > 0 there exists a finitary B such that

    P[E △ B] ≤ ε.

(Hint: Let L = {E : E is finitary approximable} and show that L contains the algebra of finitary sets and is closed under monotone limits.)

With these preparations we are ready to give a definition of a stochastic process:

Definition 2. Let (Ω, F, P) be a probability space. A stochastic process with time set T is a measurable map X : (Ω, F) → (R^T, B^T). The pushforward P_X ≜ P ∘ X^{-1} is called the law of X.

1.2 Probability measures on spaces of trajectories

According to Proposition 1 we may define probability measures on R^T by simply computing the induced measure along a map (9). An alternative way to define probabilities on R^T is via the following construction.

Theorem 1 (Kolmogorov). Suppose that for any finite S ⊂ T we have a probability measure P_S on R^S and that these measures are consistent. Namely, if S ⊂ S′ then

    P_S[B] = P_{S′}[B × R^{S′∖S}].

Then there exists a unique probability measure P on R^T such that P[J_S(B)] = P_S[B] for every cylinder J_S(B).

Proof (optional): As a simple exercise, the reader is encouraged to show that it suffices to consider the case of countable T (cf. Proposition 1.(i)). We thus focus on constructing a measure on R^∞. Let A = ∪_{n≥1} F_n, where F_n is the σ-algebra of all cylinders with time indices {1, ..., n}. Clearly A is an algebra. Define a set-function on A via

    P[E] ≜ P_{{1,...,n}}[B]   for all E = {(x_1, ..., x_n) ∈ B}.

The consistency conditions guarantee that this assignment is well-defined and results in a finitely additive set-function. We need to verify countable additivity. Let

    E_n ↘ ∅.   (10)

By repeating sets as needed, we may assume E_n ∈ F_n.
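The consistency condition of Theorem 1 is easy to check numerically for a concrete family of finite-dimensional distributions. The sketch below uses i.i.d. Bernoulli(p) marginals (p = 0.3 is an arbitrary illustrative value) and verifies that summing P_{1,...,4} over its last coordinate recovers P_{1,...,3}:

```python
from itertools import product

p = 0.3  # Bernoulli parameter (illustrative)

def fdd(n):
    """Finite-dimensional distribution P_{1,...,n} of an i.i.d.
    Bernoulli(p) process, as a dict over outcomes in {0,1}^n."""
    return {x: p ** sum(x) * (1 - p) ** (n - sum(x))
            for x in product((0, 1), repeat=n)}

p3, p4 = fdd(3), fdd(4)

# Consistency: marginalizing the last coordinate of P_{1,...,4}
# recovers P_{1,...,3}, matching the hypothesis of Theorem 1.
for x in p3:
    assert abs(sum(p4[x + (b,)] for b in (0, 1)) - p3[x]) < 1e-12
print("consistent")
```

This is precisely the data the theorem consumes: the full law of the Bernoulli process on R^∞ is the unique extension of these finite-dimensional distributions.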
If we can show that

    P[E_n] ↘ 0,   (11)

then Carathéodory's extension theorem guarantees that P extends uniquely to σ(A) = B^∞.

We will use the following facts about R^n:

1. Every finite measure μ on (R^n, B^n) is inner regular, namely for every E ∈ B^n

    μ[E] = sup_{K⊂E} μ[K],   (12)

the supremum taken over all compact subsets K of E.

2. Every decreasing sequence of non-empty compact sets has non-empty intersection:

    K_n ≠ ∅, K_n ↘ K  ⇒  K ≠ ∅.   (13)

3. If f : R^n → R^k is continuous, then f(K) is compact for every compact K.

Then according to (12), for every E_n and every ε > 0 there exists a compact subset K_n′ ⊂ R^n such that

    P[E_n ∖ J_{1,...,n}(K_n′)] ≤ ε 2^{-n}.

Then, define by induction

    K_n = K_n′ ∩ (K_{n-1} × R).

(Note that K_{n-1} ⊂ R^{n-1} and the set K_{n-1} × R is simply the extension of K_{n-1} into R^n obtained by allowing an arbitrary last coordinate.) Since E_n ⊂ E_{n-1} we have

    P[E_n ∖ J_{1,...,n}(K_n)] ≤ ε 2^{-n} + P[E_{n-1} ∖ J_{1,...,n-1}(K_{n-1})].

Thus, continuing by induction, we have shown that

    P[E_n ∖ J_{1,...,n}(K_n)] ≤ ε(2^{-1} + ··· + 2^{-n}) < ε.   (14)

We will show next that K_n = ∅ for all n large enough. Since by construction

    E_n ⊃ J_{1,...,n}(K_n),   (15)

we then have from (14) and K_n = ∅ that

    lim sup_{n→∞} P[E_n] < ε.

By taking ε to 0 we obtain (11) and the Theorem.

It thus remains to show that K_n = ∅ for all large enough n. Suppose otherwise; then by construction

    K_n ⊂ K_{n-1} × R ⊂ K_{n-2} × R^2 ⊂ ··· ⊂ K_1 × R^{n-1}.

Thus, by projecting each K_n onto the first coordinate we get a decreasing sequence of non-empty compact sets, which by (13) has non-empty intersection. Then we can pick a point x_1 ∈ R such that

    x_1 ∈ Proj_{n→1}(K_n) for all n.

Repeating the same argument, but projecting onto the first two coordinates, we can pick x_2 ∈ R such that

    (x_1, x_2) ∈ Proj_{n→2}(K_n) for all n.

Continuing in this fashion we construct a sequence with

    (x_1, x_2, ...) ∈ J_{1,...,n}(K_n) for all n.

By (15) we then have

    (x_1, x_2, ...) ∈ ∩_{n≥1} E_n,

which contradicts (10). Thus, one of the K_n must be empty.

1.3 Tail σ-algebra and Kolmogorov's 0/1 law

Definition 3.
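Fact (12), inner regularity, can be illustrated for the standard normal measure on R: the compact sets K = [−M, M] exhaust the total mass as M grows. This uses the closed form μ([−M, M]) = erf(M/√2); the particular values of M are arbitrary.

```python
from math import erf, sqrt

def normal_mass(M):
    """mu([-M, M]) for the standard normal measure mu: the mass of the
    compact set K = [-M, M], via the closed form erf(M / sqrt(2))."""
    return erf(M / sqrt(2))

masses = [normal_mass(M) for M in (1, 2, 4, 8)]
print(masses)  # increasing toward mu(R) = 1, as (12) with E = R predicts
```

In the proof, (12) is applied in R^n rather than R, but the mechanism is the same: almost all of the mass of each E_n can be trapped inside a compact set, at a cost of at most ε 2^{-n}.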
Consider (R^∞, B^∞) and let F_n be the sub-σ-algebra generated by all cylinders J_{s_1,...,s_k}(B) with s_j ≥ n. Then the σ-algebra

    T ≜ ∩_{n>0} F_n

is called the tail σ-algebra on R^∞. If X : Ω → R^∞ is a stochastic process, then the σ-algebra X^{-1}T is called the tail σ-algebra of X.

Examples of tail events:

    E_1 = {the sequence X_n converges},   (16)
    E_2 = {the series Σ_n X_n converges},   (17)
    E_3 = {lim sup_{n→∞} X_n > 0}.   (18)

An example of an event which is not a tail event:

    E_4 = {lim sup_{n→∞} Σ_{k=1}^{n} X_k > 0}.

Theorem 2 (Kolmogorov's 0/1 law). If X_j, j = 1, ..., are independent, then any event in the tail σ-algebra of X has probability 0 or 1.
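The 0/1 dichotomy can be seen empirically. For i.i.d. Rademacher signs, E_3 = {lim sup X_n > 0} is a tail event, and its probability is 1 (X_n = +1 occurs infinitely often a.s.). The simulation below (trial and horizon counts are arbitrary) checks the finite-horizon proxy "some X_n > 0 with n ≤ horizon", which fails on a given path only with probability 2^{-horizon}:

```python
import random

rng = random.Random(0)
trials, horizon = 1000, 100  # arbitrary simulation sizes

# E_3 = {limsup X_n > 0} is a tail event; Kolmogorov's 0/1 law says its
# probability is 0 or 1, and here it is 1.  Count the sampled paths on
# which the finite-horizon proxy holds:
hits = sum(
    1 for _ in range(trials)
    if any(rng.choice((-1, 1)) > 0 for _ in range(horizon))
)
print(hits / trials)  # 1.0
```

By contrast, no such simulation can settle E_4 = {lim sup Σ_{k≤n} X_k > 0}: its probability for Rademacher signs is strictly between 0 and 1 by symmetry, which is consistent with E_4 not being a tail event.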