On the Suprema of Bernoulli Processes (Slides)


On the boundedness of Bernoulli processes
Rafał Latała (based on joint work with Witold Bednorz)
University of Warsaw
Berkeley, October 1, 2013

Introduction

One of the fundamental issues of probability theory is the study of suprema of stochastic processes. Besides various practical motivations, it is closely related to such theoretical problems as regularity of sample paths of stochastic processes, convergence of orthogonal and random series, estimates of norms of random vectors and random matrices, limit theorems for random vectors and empirical processes, and combinatorial matching theorems. In particular, in many situations one needs to estimate the quantity $\mathbb{E}\sup_{t\in T} X_t$, where $(X_t)_{t\in T}$ is a stochastic process. The modern approach to this problem is based on chaining techniques, present already in the works of Kolmogorov and successfully developed over the last 40 years.

To avoid measurability problems one may either assume that $T$ is countable or define
$$\mathbb{E}\sup_{t\in T} X_t := \sup_{F} \mathbb{E}\sup_{t\in F} X_t,$$
where the supremum is taken over all finite sets $F \subset T$.

Gaussian Processes

Let $(G_t)_{t\in T}$ be a centered Gaussian process and
$$g(T) := \mathbb{E}\sup_{t\in T} G_t.$$
In this case the boundedness of the process (which, by the concentration properties of Gaussian processes, is equivalent to the condition $g(T) < \infty$) is related to the geometry of the metric space $(T, d)$, where $d(t,s) := (\mathbb{E}(G_t - G_s)^2)^{1/2}$.

R. Dudley (1967) derived an upper bound for $g(T)$ in terms of entropy numbers:
$$g(T) \le L \int_0^\infty \log^{1/2} N(T, d, r)\, dr,$$
where $L$ is a universal constant and $N(T, d, r)$ is the minimal number of balls of radius $r$ that cover $T$. Dudley's bound may be reversed for stationary processes, but not in general.

Majorizing Measure Theorem

A two-sided bound for $g(T)$ was found by X. Fernique (1974, upper bound) and M. Talagrand (1987, lower bound) in terms of majorizing measures.
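When $T$ is finite, the quantity $g(T) = \mathbb{E}\sup_{t\in T}\sum_i t_i g_i$ can be estimated by plain Monte Carlo, which is a useful sanity check against the entropy bound. A minimal sketch, assuming NumPy; the set $T$ below is an arbitrary illustrative choice, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# A small finite index set T ⊂ R^3 ⊂ ℓ² (three vectors, chosen arbitrarily).
T = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.5, 0.5, 0.5],
])

def g_estimate(T, n_samples=100_000):
    """Monte Carlo estimate of g(T) = E sup_{t in T} sum_i t_i g_i."""
    G = rng.standard_normal((n_samples, T.shape[1]))  # i.i.d. N(0,1) rows
    return (G @ T.T).max(axis=1).mean()               # sup over t, then average

g_hat = g_estimate(T)
print(round(g_hat, 3))
```

For this $T$ the estimate must land between $\mathbb{E}\max(g_1,g_2) = 1/\sqrt{\pi} \approx 0.564$ (the two orthonormal vectors alone) and the union-bound ceiling $\sqrt{2\log 3} \approx 1.48$.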
Let
$$\gamma_2(T) := \inf_{\mu} \sup_{t\in T} \int_0^\infty \log^{1/2} \frac{1}{\mu(B(t,x))}\, dx,$$
where the infimum is taken over all probability measures $\mu$ on $T$ and $B(t,x)$ denotes the closed ball in $(T,d)$ of radius $x$ centered at $t$.

Theorem (Fernique–Talagrand). There exists a universal constant $L < \infty$ such that for all centered Gaussian processes,
$$\frac{1}{L}\,\gamma_2(T) \le g(T) \le L\,\gamma_2(T).$$
In particular, $g(T) < \infty$ if and only if $\gamma_2(T) < \infty$.

Majorizing measures without measures

In general, finding a majorizing measure in a concrete situation is a highly nontrivial task. Talagrand proposed a more combinatorial approach to this problem. An increasing sequence $(\mathcal{A}_n)_{n\ge 0}$ of partitions of the set $T$ is called admissible if $\mathcal{A}_0 = \{T\}$ and $|\mathcal{A}_n| \le N_n := 2^{2^n}$. Talagrand showed that the quantity $\gamma_2(T)$ is equivalent to
$$\inf \sup_{t\in T} \sum_{n=0}^{\infty} 2^{n/2} \Delta(A_n(t)),$$
where the infimum runs over all admissible sequences of partitions. Here $A_n(t)$ is the unique set in $\mathcal{A}_n$ which contains $t$, and $\Delta(A)$ denotes the diameter of the set $A$. So to bound the supremum of a Gaussian process one may either construct a measure or build a partition.

Bernoulli Processes

Any separable Gaussian process has a canonical Karhunen–Loève type representation $(\sum_{i\ge 1} t_i g_i)_{t\in T}$, where $g_1, g_2, \ldots$ are i.i.d. standard normal $\mathcal{N}(0,1)$ random variables and $T$ is a subset of $\ell^2$. Another fundamental class of processes is obtained when in such a sum one replaces the Gaussian random variables $(g_i)$ by independent random signs.

Let $(\varepsilon_i)_{i\ge 1}$ be a Bernoulli sequence, i.e. a sequence of i.i.d. symmetric random variables taking values $\pm 1$. For any $t \in \ell^2$ the series $\sum_{i\ge 1} t_i \varepsilon_i$ converges a.s., and we may define a Bernoulli process
$$\Big(\sum_{i\ge 1} t_i \varepsilon_i\Big)_{t\in T}, \qquad T \subset \ell^2.$$
It is natural to ask when the Bernoulli process is a.s. bounded or, equivalently, when
$$b(T) := \mathbb{E}\sup_{t\in T} \sum_{i\ge 1} t_i \varepsilon_i < \infty\,?$$

Two easy upper bounds

The first bound follows from the uniform bound $|\sum_i t_i \varepsilon_i| \le \sum_i |t_i| = \|t\|_1$, so that
$$b(T) \le \sup_{t\in T} \|t\|_1.$$
Another bound is based on domination by the canonical Gaussian process $G_t := \sum_i t_i g_i$.
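The uniform $\ell^1$ bound on $b(T)$ is easy to verify by simulation: every realization of $\sup_{t\in T}\sum_i t_i\varepsilon_i$ is at most $\sup_{t\in T}\|t\|_1$, so the Monte Carlo average is too. A hedged sketch for a finite $T$ (the vectors are illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary finite T ⊂ R^3 (illustrative only).
T = np.array([
    [ 0.6, -0.3, 0.1],
    [ 0.2,  0.2, 0.2],
    [-0.5,  0.4, 0.0],
])

def b_estimate(T, n_samples=100_000):
    """Monte Carlo estimate of b(T) = E sup_{t in T} sum_i t_i * eps_i."""
    eps = rng.choice([-1.0, 1.0], size=(n_samples, T.shape[1]))  # random signs
    return (eps @ T.T).max(axis=1).mean()

b_hat = b_estimate(T)
l1_bound = np.abs(T).sum(axis=1).max()  # sup_{t in T} ||t||_1
print(b_hat <= l1_bound)                # the uniform bound b(T) <= sup_t ||t||_1
```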
Assuming independence of $(g_i)$ and $(\varepsilon_i)$, Jensen's inequality implies
$$g(T) = \mathbb{E}\sup_{t\in T} \sum_i t_i g_i = \mathbb{E}\sup_{t\in T} \sum_i t_i \varepsilon_i |g_i| \ge \mathbb{E}\sup_{t\in T} \sum_i t_i \varepsilon_i\, \mathbb{E}|g_i| = \sqrt{\frac{2}{\pi}}\, b(T).$$

Bernoulli Conjecture

Obviously, if $T \subset T_1 + T_2 = \{t^1 + t^2 : t^l \in T_l\}$ then $b(T) \le b(T_1) + b(T_2)$, hence
$$b(T) \le \inf\Big\{\sup_{t\in T_1} \|t\|_1 + \sqrt{\tfrac{\pi}{2}}\, g(T_2) : T \subset T_1 + T_2\Big\} \le \inf\Big\{\sup_{t\in T_1} \|t\|_1 + L\,\gamma_2(T_2) : T \subset T_1 + T_2\Big\}.$$
It was open for about 25 years (under the name of the Bernoulli conjecture) whether the above estimate may be reversed. The next theorem provides a positive solution.

Theorem (Bednorz, Latała 2013+). For any set $T \subset \ell^2$ with $b(T) < \infty$ we may find a decomposition $T \subset T_1 + T_2$ with
$$\sup_{t\in T_1} \sum_{i\ge 1} |t_i| \le L\, b(T) \quad \text{and} \quad g(T_2) \le L\, b(T).$$

Kwapień's Conjecture

Problem. Let $(F, \|\cdot\|)$ be a normed space and $(u_i)$ a sequence of vectors in $F$ such that the series $\sum_{i\ge 1} u_i \varepsilon_i$ converges a.s. Does there exist a universal constant $L$ and a decomposition $u_i = v_i + w_i$ such that
$$\sup_{\eta_i = \pm 1} \Big\|\sum_{i\ge 1} v_i \eta_i\Big\| \le L\, \mathbb{E}\Big\|\sum_{i\ge 1} u_i \varepsilon_i\Big\| \quad \text{and} \quad \mathbb{E}\Big\|\sum_{i\ge 1} w_i g_i\Big\| \le L\, \mathbb{E}\Big\|\sum_{i\ge 1} u_i \varepsilon_i\Big\|\,?$$

The positive solution of the Bernoulli Conjecture shows that the answer is positive for $F = \ell^\infty$; in general one may only assume that $F$ is a subspace of $\ell^\infty$. The difficulty here is that our proof gives very little additional information about the decomposition; in particular, there is no reason for the sets $T_1$ and $T_2$ to be contained in the linear space spanned by the index set $T$.

Another bound for Bernoulli processes

For a random variable $X$ and $p > 0$ we set $\|X\|_p := (\mathbb{E}|X|^p)^{1/p}$.

Corollary. Suppose that $(X_t)_{t\in T}$ is a Bernoulli process with $b(T) < \infty$. Then there exist $t^1, t^2, \ldots \in \ell^2$ such that $T - T \subset \mathrm{conv}\{t^n : n \ge 1\}$ and $\|X_{t^n}\|_{\log(n+2)} \le L\, b(T)$ for all $n \ge 1$.

The converse statement easily follows from the union bound and Chebyshev's inequality. Indeed, suppose that $T - T \subset \mathrm{conv}\{t^n : n \ge 1\}$ and $\|X_{t^n}\|_{\log(n+2)} \le M$.
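The constant $\sqrt{2/\pi}$ in the Jensen step is just $\mathbb{E}|g_1|$ for $g_1 \sim \mathcal{N}(0,1)$, and the resulting comparison $b(T) \le \sqrt{\pi/2}\, g(T)$ can be sanity-checked numerically. A sketch on an arbitrary two-point $T$ (chosen for illustration only):

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# A two-element T ⊂ R^2, picked arbitrarily for illustration.
T = np.array([
    [ 1.0, 0.5],
    [-0.5, 1.0],
])

def sup_mean(T, xi):
    """Estimate E sup_{t in T} <t, xi> from sample rows xi."""
    return (xi @ T.T).max(axis=1).mean()

n = 200_000
g_hat = sup_mean(T, rng.standard_normal((n, 2)))           # estimates g(T)
b_hat = sup_mean(T, rng.choice([-1.0, 1.0], size=(n, 2)))  # estimates b(T)

# E|g| = sqrt(2/pi), so Jensen gives b(T) <= sqrt(pi/2) * g(T).
print(b_hat <= math.sqrt(math.pi / 2) * g_hat)
```

For this $T$ one can compute $b(T) = 3/4$ exactly by enumerating the four sign patterns, which makes a convenient cross-check against the simulation.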
Then
$$\mathbb{P}\Big(\sup_{s\in T-T} X_s \ge uM\Big) \le \sum_{n\ge 1} \mathbb{P}(X_{t^n} \ge uM) \le \sum_{n\ge 1} \mathbb{P}\big(X_{t^n} \ge u\|X_{t^n}\|_{\log(n+2)}\big) \le \sum_{n\ge 1} u^{-\log(n+2)},$$
and integration by parts easily yields, for any $t_0 \in T$,
$$b(T) = \mathbb{E}\sup_{t\in T}(X_t - X_{t_0}) = \mathbb{E}\sup_{t\in T} X_{t-t_0} \le \mathbb{E}\sup_{s\in T-T} X_s \le LM.$$

Proof of the Gaussian case

The proof of the lower bound for $g(T)$ is based on two facts: Gaussian concentration and Sudakov minoration.

Theorem (Gaussian concentration). Let $(G_t)_{t\in T}$ be a centered Gaussian process. Then for any $u > 0$,
$$\mathbb{P}\Big(\Big|\sup_{t\in T} G_t - g(T)\Big| \ge u\Big) \le 2\exp\Big(-\frac{u^2}{2\sigma^2}\Big),$$
where $\sigma^2 := \sup_{t\in T} \mathbb{E}|G_t|^2$.

Theorem (Sudakov minoration). Suppose that $t_1, \ldots, t_m$ are such that $\|G_{t_l} - G_{t_{l'}}\|_2 \ge a$ for all $l \ne l'$. Then
$$\mathbb{E}\sup_{l\le m} G_{t_l} \ge \frac{1}{L}\, a\sqrt{\log m}.$$

Concentration and minoration for Bernoulli processes

Theorem (Talagrand 1988). Let $T \subset \ell^2$ be such that $b(T) < \infty$. Then for any $u > 0$,
$$\mathbb{P}\Big(\Big|\sup_{t\in T} \sum_{i\ge 1} t_i \varepsilon_i - b(T)\Big| \ge u\Big) \le L\exp\Big(-\frac{u^2}{L\sigma^2}\Big),$$
where $\sigma^2 := \sup_{t\in T} \|t\|_2^2$.

Theorem (Talagrand 1993). Suppose that vectors $t_1, \ldots, t_m \in \ell^2(I)$ and numbers $a, b > 0$ satisfy
$$\forall_{l\ne l'}\ \|t_l - t_{l'}\|_2 \ge a \quad \text{and} \quad \forall_l\ \|t_l\|_\infty \le b.$$
Then
$$\mathbb{E}\sup_{l\le m} \sum_{i\in I} t_{l,i}\, \varepsilon_i \ge \frac{1}{L}\min\Big\{a\sqrt{\log m},\ \frac{a^2}{b}\Big\}.$$

About the proof of BC

It is, however, not an easy task to combine the concentration and minoration properties of Bernoulli processes in the right way. Our proof builds on many ideas developed over the years by Michel Talagrand. One of the difficulties lies in the fact that there is no direct way of producing a decomposition $t = t^1 + t^2$ for $t \in T$ such that $\sup_{t\in T} \|t^1\|_1 \le L\, b(T)$ and $\gamma_2(\{t^2 : t \in T\}) \le L\, b(T)$. Following Talagrand, we connect decompositions of the set $T$ with suitable sequences of its admissible partitions $(\mathcal{A}_n)_{n\ge 0}$. To each set $A \in \mathcal{A}_n$ we assign an integer $j_n(A)$ and a point $\pi_n(A) \in T$. The main novelty in the next statement is the introduction of the sets $I_n(A)$.

Partitions and Bernoulli decomposition

Theorem. Suppose that $(\mathcal{A}_n)_{n\ge 0}$ is an admissible sequence of partitions such that
−j (T ) i) kt − sk2 ≤ Mr 0 for t, s ∈ T, 0 ii) if n ≥ 1, An 3 A ⊂ A ∈ An−1 then either 0 0 a) jn(A) = jn−1(A ) and πn(A) = πn−1(A ) or 0 0 b) jn(A) > jn−1(A ), πn(A) ∈ A and X 2 −2jn(A) n −2jn(A) min{(ti − πn(A)i ) , r } ≤ M2 r for all t ∈ A, i∈In(A) where for any t ∈ A, −jk (t) In(A) = In(t) := i : |πk+1(t)i −πk (t)i | ≤ r for 0 ≤ k ≤ n−1 .