Stochastic Processes and Applications


G.A. Pavliotis
Department of Mathematics, Imperial College London
November 11, 2015

Contents

1 Stochastic Processes
  1.1 Definition of a Stochastic Process
  1.2 Stationary Processes
  1.3 Brownian Motion
  1.4 Examples of Stochastic Processes
  1.5 The Karhunen-Loève Expansion
  1.6 Discussion and Bibliography
  1.7 Exercises

2 Diffusion Processes
  2.1 Examples of Markov Processes
  2.2 The Chapman-Kolmogorov Equation
  2.3 The Generator of a Markov Process
  2.4 Ergodic Markov Processes
  2.5 The Kolmogorov Equations
  2.6 Discussion and Bibliography
  2.7 Exercises

3 Introduction to SDEs
  3.1 Introduction
  3.2 The Itô and Stratonovich Stochastic Integrals
  3.3 Solutions of SDEs
  3.4 Itô's Formula
  3.5 Examples of SDEs
  3.6 Lamperti's Transformation and Girsanov's Theorem
  3.7 Linear Stochastic Differential Equations
  3.8 Discussion and Bibliography
  3.9 Exercises

4 The Fokker-Planck Equation
  4.1 Basic Properties of the Fokker-Planck Equation
  4.2 Examples of the Fokker-Planck Equation
  4.3 Diffusion Processes in One Dimension
  4.4 The Ornstein-Uhlenbeck Process
  4.5 The Smoluchowski Equation
  4.6 Reversible Diffusions
  4.7 Eigenfunction Expansions
  4.8 Markov Chain Monte Carlo
  4.9 Reduction to a Schrödinger Operator
  4.10 Discussion and Bibliography
  4.11 Exercises

Appendix A  Frequently Used Notation

Appendix B  Elements of Probability Theory
  B.1 Basic Definitions from Probability Theory
  B.2 Random Variables
  B.3 Conditional Expectation
  B.4 The Characteristic Function
  B.5 Gaussian Random Variables
    B.5.1 Gaussian Measures in Hilbert Spaces
  B.6 Types of Convergence and Limit Theorems
  B.7 Discussion and Bibliography

Index

Bibliography

Chapter 1: Introduction to Stochastic Processes

In this chapter we present some basic results from the theory of stochastic processes and investigate the properties of some of the standard continuous-time stochastic processes. In Section 1.1 we give the definition of a stochastic process. In Section 1.2 we present some properties of stationary stochastic processes. In Section 1.3 we introduce Brownian motion and study some of its properties. Various examples of stochastic processes in continuous time are presented in Section 1.4. The Karhunen-Loève expansion, one of the most useful tools for representing stochastic processes and random fields, is presented in Section 1.5. Further discussion and bibliographical comments are presented in Section 1.6.
Section 1.7 contains exercises.

1.1 Definition of a Stochastic Process

Stochastic processes describe dynamical systems whose time evolution is of probabilistic nature. The precise definition is given below. (The notation and basic definitions from probability theory that we use can be found in Appendix B.)

Definition 1.1 (stochastic process). Let T be an ordered set, (Ω, F, P) a probability space and (E, G) a measurable space. A stochastic process is a collection of random variables X = {X_t; t ∈ T} where, for each fixed t ∈ T, X_t is a random variable from (Ω, F, P) to (E, G). Ω is known as the sample space and E is the state space of the stochastic process X_t.

The set T can be either discrete, for example the set of positive integers Z^+, or continuous, T = R^+. The state space E will usually be R^d equipped with the σ-algebra of Borel sets.

A stochastic process X may be viewed as a function of both t ∈ T and ω ∈ Ω. We will sometimes write X(t), X(t, ω) or X_t(ω) instead of X_t. For a fixed sample point ω ∈ Ω, the function X_t(ω) : T → E is called a sample path (realization, trajectory) of the process X.

Definition 1.2 (finite dimensional distributions). The finite dimensional distributions (fdd) of a stochastic process are the distributions of the E^k-valued random variables (X(t_1), X(t_2), …, X(t_k)) for arbitrary positive integer k and arbitrary times t_i ∈ T, i ∈ {1, …, k}:

  F(x) = P(X(t_i) ≤ x_i, i = 1, …, k),  with x = (x_1, …, x_k).

From experiments or numerical simulations we can only obtain information about the finite dimensional distributions of a process. A natural question arises: are the finite dimensional distributions of a stochastic process sufficient to determine it uniquely? This is true for processes with continuous paths (in fact, all we need is for the stochastic process to be separable; see the discussion in Section 1.6), which is the class of stochastic processes that we study in these notes.

Definition 1.3. We say that two processes X_t and Y_t are equivalent if they have the same finite dimensional distributions.

Gaussian stochastic processes

A very important class of continuous-time processes is that of Gaussian processes, which arise in many applications.

Definition 1.4. A one dimensional continuous time Gaussian process is a stochastic process for which E = R and all the finite dimensional distributions are Gaussian, i.e. every finite dimensional vector (X_{t_1}, X_{t_2}, …, X_{t_k}) is a N(μ_k, K_k) random variable for some vector μ_k and a symmetric nonnegative definite matrix K_k, for all k = 1, 2, … and for all t_1, t_2, …, t_k.

From the above definition we conclude that the finite dimensional distributions of a Gaussian continuous-time stochastic process are Gaussian with probability distribution function

  γ_{μ_k, K_k}(x) = (2π)^{−k/2} (det K_k)^{−1/2} exp( −(1/2) ⟨K_k^{−1}(x − μ_k), x − μ_k⟩ ),

where x = (x_1, x_2, …, x_k). It is straightforward to extend the above definition to arbitrary dimensions. A Gaussian process x(t) is characterized by its mean m(t) := E x(t) and the covariance (or autocorrelation) matrix

  C(t, s) = E[ (x(t) − m(t)) ⊗ (x(s) − m(s)) ].

Thus, the first two moments of a Gaussian process are sufficient for a complete characterization of the process.

It is not difficult to simulate Gaussian stochastic processes on a computer. Given a random number generator that generates N(0, 1) (pseudo)random numbers, we can sample from a Gaussian stochastic process by calculating a square root of the covariance matrix. A simple algorithm for constructing a skeleton of a continuous time Gaussian process is the following:

• Fix Δt and define t_j = (j − 1)Δt, j = 1, …, N.
• Set X_j^N := X(t_j) and define the Gaussian random vector X^N = {X_j^N}_{j=1}^N. Then X^N ∼ N(μ^N, Γ^N) with μ^N = (m(t_1), …, m(t_N)) and Γ^N_{ij} = C(t_i, t_j).
• Then X^N = μ^N + Λ ξ, where ξ ∼ N(0, I) and Γ^N = Λ Λ^T.

We can calculate the square root Λ of the covariance matrix Γ^N either using the Cholesky factorization, via the spectral decomposition of Γ^N, or by using the singular value decomposition (SVD); a sketch of this construction is given below.
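The following is a minimal NumPy sketch of the algorithm above; the function name, grid parameters and covariance choice are illustrative rather than taken from the text. It uses the exponential covariance C(t, s) = λ e^{−α|t−s|} of the Ornstein-Uhlenbeck process listed in the examples below, which is strictly positive definite on distinct grid points, so the Cholesky factorization applies directly.

```python
import numpy as np

def sample_gaussian_skeleton(m, C, dt, N, rng=None):
    """Draw one skeleton (X(t_1), ..., X(t_N)) of a Gaussian process with
    mean function m and covariance function C on the grid t_j = (j - 1) dt,
    using a Cholesky square root of the covariance matrix."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(N) * dt                    # t_j = (j - 1) * dt, j = 1, ..., N
    mu = m(t)                                # mu^N = (m(t_1), ..., m(t_N))
    Gamma = C(t[:, None], t[None, :])        # Gamma_ij = C(t_i, t_j)
    Lam = np.linalg.cholesky(Gamma)          # Gamma = Lam Lam^T
    xi = rng.standard_normal(N)              # xi ~ N(0, I)
    return t, mu + Lam @ xi                  # X^N = mu^N + Lam xi

# Ornstein-Uhlenbeck covariance C(t, s) = lam * exp(-alpha * |t - s|);
# the parameter values alpha = lam = 1 are arbitrary choices for illustration.
alpha, lam = 1.0, 1.0
t, X = sample_gaussian_skeleton(
    m=lambda t: np.zeros_like(t),
    C=lambda t, s: lam * np.exp(-alpha * np.abs(t - s)),
    dt=0.01, N=1000)
```

For covariance matrices that are only positive semidefinite on the grid (for instance C(t, s) = min(t, s) with t_1 = 0), the Cholesky factor can be replaced by a square root built from the spectral decomposition or the SVD.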
Examples of Gaussian stochastic processes

• Random Fourier series: let ξ_i, ζ_i ∼ N(0, 1), i = 1, …, N, and define
  X(t) = Σ_{j=1}^{N} ( ξ_j cos(2πjt) + ζ_j sin(2πjt) ).
• Brownian motion is a Gaussian process with m(t) = 0, C(t, s) = min(t, s).
• The Brownian bridge is a Gaussian process with m(t) = 0, C(t, s) = min(t, s) − ts.
• The Ornstein-Uhlenbeck process is a Gaussian process with m(t) = 0, C(t, s) = λ e^{−α|t−s|}, with α, λ > 0.

1.2 Stationary Processes

In many stochastic processes that appear in applications, the statistics remain invariant under time translations. Such stochastic processes are called stationary. It is possible to develop a quite general theory for stochastic processes that enjoy this symmetry property. It is useful to distinguish between stochastic processes for which all finite dimensional distributions are translation invariant (strictly stationary processes) and processes for which this translation invariance holds only for the first two moments (weakly stationary processes).

Strictly Stationary Processes

Definition 1.5. A stochastic process is called (strictly) stationary if all finite dimensional distributions are invariant under time translation: for any integer k and times t_i ∈ T, the distribution of (X(t_1), X(t_2), …, X(t_k)) is equal to that of (X(s + t_1), X(s + t_2), …, X(s + t_k)) for any s such that s + t_i ∈ T for all i ∈ {1, …, k}. In other words,

  P(X_{t_1+s} ∈ A_1, X_{t_2+s} ∈ A_2, …, X_{t_k+s} ∈ A_k) = P(X_{t_1} ∈ A_1, X_{t_2} ∈ A_2, …, X_{t_k} ∈ A_k)  for all s ∈ T.

Example 1.6. Let Y_0, Y_1, … be a sequence of independent, identically distributed random variables and consider the stochastic process X_n = Y_n. Then X_n is a strictly stationary process (see Exercise 1). Assume furthermore that E Y_0 = μ < +∞. Then, by the strong law of large numbers, Equation (B.26), we have that

  (1/N) Σ_{j=0}^{N−1} X_j = (1/N) Σ_{j=0}^{N−1} Y_j → E Y_0 = μ,

almost surely. In fact, the Birkhoff ergodic theorem states that, for any function f such that E f(Y_0) < +∞, we have that

  lim_{N→+∞} (1/N) Σ_{j=0}^{N−1} f(X_j) = E f(Y_0),     (1.1)

almost surely. The sequence of iid random variables is an example of an ergodic strictly stationary process. We will say that a stationary stochastic process that satisfies (1.1) is ergodic. For such processes we can calculate expectation values of observables, E f(X_t), using a single sample path, provided that it is long enough (N ≫ 1).
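As a quick numerical illustration of (1.1), here is a minimal sketch assuming NumPy; the observable f(x) = x^2 and the standard Gaussian choice for Y_j are only example inputs, not taken from the text. The time average of f along a single long sample path of the iid process X_n = Y_n should be close to E f(Y_0) = 1.

```python
import numpy as np

# Time average of f along one long trajectory of the iid process X_n = Y_n,
# compared with the ensemble average E f(Y_0). For f(x) = x**2 and
# Y_0 ~ N(0, 1) the exact value is E f(Y_0) = 1.
rng = np.random.default_rng(0)
N = 10**6                          # trajectory length (N >> 1)
X = rng.standard_normal(N)         # one sample path of the stationary process
f = lambda x: x**2

time_average = f(X).mean()         # (1/N) * sum_{j=0}^{N-1} f(X_j)
print(time_average)                # approximately 1, in agreement with (1.1)
```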