Chapter 3 Introduction to Stochastic Processes


In this chapter we review the basic concepts of what a stochastic process is. Our aim is not to be rigorous on the mathematical side but rather to focus on the physical insight behind the concepts. The name "stochastic process" is usually associated with a trajectory in phase space which is random enough to demand a probabilistic description. A paradigmatic example is that of Brownian motion.

3.1 Brownian motion

The botanist Robert Brown discovered in 1827 that pollen particles in suspension execute random movements, which he initially even interpreted as some sort of life. It is not so well known that L. Boltzmann knew as early as 1896 the reason for this erratic movement, when he wrote "... very small particles in a gas execute motions which result from the fact that the pressure on the surface of the particles may fluctuate". However, it was A. Einstein who, in 1905, successfully introduced the first mathematical treatment of the erratic movement of Brownian particles. Rather than focusing on the (complicated) trajectory of a single particle, Einstein introduced a probabilistic description valid for an ensemble of Brownian particles. First, Einstein introduced the concept of a coarse-grained description, defined by a time $\tau$ such that different parts of the trajectory separated by a time $\tau$ or larger can be considered independent. No attempt is made to characterize the dynamics at time scales smaller than this coarse-grain time $\tau$; instead, we consider snapshots of the system taken at time intervals $\tau$ (see Figure 3.1). The second concept, probabilistic in nature, introduced by Einstein is the probability density function $f(\vec{\Delta})$ for the three-dimensional displacement $\vec{\Delta} = (\Delta_x, \Delta_y, \Delta_z)$ traveled by the Brownian particle in a time interval $\tau$. One could assume, for instance, a Gaussian form for $f(\vec{\Delta})$, but this is not necessary.
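The point that a Gaussian form is not needed can be illustrated numerically. The sketch below (a hypothetical 1D toy with arbitrary parameters, not part of the original text) compares a Gaussian jump distribution with a two-point one of the same variance; both spread an ensemble of walkers at the same diffusive rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D toy (hypothetical parameters): the detailed shape of f(Delta) does not
# matter for the large-scale spreading -- only its variance does.
N, n_steps = 50_000, 400        # walkers, coarse-grained steps
sigma = 0.5                     # rms jump per step, same for both pdfs

gauss_jumps = rng.normal(0.0, sigma, size=(N, n_steps))            # Gaussian f
binary_jumps = sigma * rng.choice([-1.0, 1.0], size=(N, n_steps))  # two-point f

for jumps in (gauss_jumps, binary_jumps):
    x_final = jumps.sum(axis=1)            # position after n_steps jumps
    msd = np.mean(x_final**2)
    print(msd / (n_steps * sigma**2))      # both ratios are close to 1
```

Only the variance of $f(\vec{\Delta})$ enters the large-scale behavior, which is why the Gaussian assumption is dispensable.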
In fact, the only assumption one needs about the function $f(\vec{\Delta})$ (besides the general conditions of non-negativity and normalization) comes from the fact that the collisions of the fluid molecules with the Brownian particle occur with the same probability in any direction¹. The absence of preferred directions translates into a symmetry condition for $f(\vec{\Delta})$:

$$f(-\vec{\Delta}) = f(\vec{\Delta}). \qquad (3.1)$$

Figure 3.1: Schematic representation of the movement of a Brownian particle.

The third step in this description is to consider an ensemble of $N$ Brownian particles in a large enough system. We also focus on spatial scales much larger than the size of a Brownian particle, so that we can define a density of particles $n(\vec{x}, t)$ such that $n(\vec{x}, t)\,d\vec{x}$ is the number of particles in the interval $(\vec{x}, \vec{x}+d\vec{x})$ at time $t$. From the assumption that parts of the trajectories separated by a time interval $\tau$ are statistically independent, it follows that the number of particles at location $\vec{x}$ at time $t+\tau$ is given by the number of particles at location $\vec{x}-\vec{\Delta}$ at time $t$, multiplied by the probability $f(\vec{\Delta})$ of jumping from $\vec{x}-\vec{\Delta}$ to $\vec{x}$, and integrated over all possible values of $\vec{\Delta}$:

$$n(\vec{x}, t+\tau) = \int_{\mathbb{R}^3} n(\vec{x}-\vec{\Delta}, t)\, f(\vec{\Delta})\, d\vec{\Delta}. \qquad (3.2)$$

This is the basic evolution equation for the number density $n(\vec{x}, t)$. From the physical point of view it is a continuity equation, whose interpretation is that Brownian particles can neither be created nor destroyed by the collisions with the fluid molecules. By Taylor expanding the above expression and making use of the symmetry relation (3.1), one gets the diffusion equation:

$$\frac{\partial n}{\partial t} = D\,\nabla^2 n. \qquad (3.3)$$

¹We also disregard the effect of gravity on the Brownian particle, which would lead to a preferred direction in the movement.
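The evolution equation (3.2) can also be iterated directly on a grid. The following 1D sketch (grid size, jump width, and number of steps are arbitrary choices, not from the text) convolves an initially localized density with a Gaussian jump pdf and checks that the second moment grows linearly with the number of coarse-grained steps, the diffusive behavior encoded in eq. (3.3).

```python
import numpy as np

# 1D sketch of the evolution equation (3.2): the density at time t + tau is
# the density at time t convolved with the jump pdf f. The jump pdf is taken
# Gaussian here, but any symmetric choice would do.
x = np.linspace(-10.0, 10.0, 1001)   # odd-sized grid, centered on 0
dx = x[1] - x[0]
sigma = 0.2                          # rms jump per coarse-grained step tau

f = np.exp(-x**2 / (2.0 * sigma**2))
f /= f.sum() * dx                    # normalized, symmetric: f(-D) = f(D)

n = np.zeros_like(x)
n[len(x) // 2] = 1.0 / dx            # all particles start at the origin

steps = 50
for _ in range(steps):               # iterate eq. (3.2) fifty times
    n = np.convolve(n, f, mode="same") * dx

var = np.sum(x**2 * n) / np.sum(n)   # second moment of the spread density
print(var / (steps * sigma**2))      # diffusive growth: ratio close to 1
```

The variance of the density grows by $\langle \Delta^2 \rangle$ per step, exactly the linear-in-time spreading that the diffusion equation predicts.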
where the diffusion constant $D$ is given in terms of the pdf $f(\vec{\Delta})$ by:

$$D = \frac{1}{2\tau}\int_{-\infty}^{\infty} \vec{\Delta}^2 f(\vec{\Delta})\, d\vec{\Delta} = \frac{\langle \vec{\Delta}^2\rangle}{2\tau}. \qquad (3.4)$$

If the initial condition is that all $N$ particles are located at the origin, $n(\vec{x}, t=0) = N\delta(\vec{x})$, the solution of the diffusion equation is:

$$n(\vec{x}, t) = \frac{N}{(4\pi D t)^{3/2}}\, e^{-\vec{x}^2/4Dt}, \qquad (3.5)$$

from which it follows that the average position of the Brownian particle is $\langle \vec{x}(t)\rangle = 0$ and that the average square position increases linearly with time, namely:

$$\langle \vec{x}(t)^2\rangle = 6Dt. \qquad (3.6)$$

This prediction has been successfully confirmed in experiments and contributed to the acceptance of the atomic theory. However successful Einstein's approach was, it is very phenomenological and cannot yield, for instance, an explicit expression for the diffusion constant in terms of microscopic quantities. Langevin (1908) initiated a different approach which, in some way, can be considered complementary to the previous one. In his approach, Langevin focused on the trajectory of a single Brownian particle and wrote down Newton's equation, force = mass $\times$ acceleration. The trajectory of the Brownian particle is highly erratic, and its description therefore demands a peculiar kind of force. Langevin considered two types of forces acting on the Brownian particle: the usual friction force which, according to Stokes' law, is proportional to the velocity, and a "fluctuating" force $\vec{\xi}(t)$ which represents the erratic force exerted by the fluid molecules on the Brownian particle. The equation of motion then becomes:

$$m\frac{d\vec{v}}{dt} = -6\pi\eta a\,\vec{v} + \vec{\xi}, \qquad (3.7)$$

where $\eta$ is the viscosity coefficient and $a$ is the radius of the Brownian particle (which is assumed to be spherical). Multiplying both sides of eq. (3.7) by $\vec{x}$ one gets:

$$\frac{m}{2}\frac{d^2\vec{x}^2}{dt^2} - m\left(\frac{d\vec{x}}{dt}\right)^2 = -3\pi a\eta\,\frac{d\vec{x}^2}{dt} + \vec{x}\cdot\vec{\xi}. \qquad (3.8)$$

Langevin made two assumptions about the fluctuating force $\vec{\xi}(t)$: that it has zero mean (collisions do not push the Brownian particle in any preferred direction), and that it is uncorrelated with the actual position of the Brownian particle (the action of the fluid molecules on the Brownian particle is the same no matter where the particle is located):

$$\langle \vec{\xi}(t)\rangle = 0, \qquad \langle \vec{x}\cdot\vec{\xi}\rangle = \langle \vec{x}\rangle\cdot\langle \vec{\xi}\rangle = 0. \qquad (3.9)$$

Taking averages in eq. (3.8) with respect to all realizations of the random force $\vec{\xi}(t)$ and using the previous conditions on $\vec{\xi}(t)$, one gets:

$$\frac{m}{2}\frac{d^2\langle \vec{x}^2\rangle}{dt^2} = m\langle \vec{v}^2\rangle - 3\pi a\eta\,\frac{d\langle \vec{x}^2\rangle}{dt}, \qquad (3.10)$$

which is an equation for the average square position of the Brownian particle. Langevin assumed a regime in which thermal equilibrium between the Brownian particle and the surrounding fluid has been reached. In particular, this implies that, according to the equipartition theorem, the average kinetic energy of the Brownian particle is $\langle m\vec{v}^2/2\rangle = 3kT/2$ ($k$ is Boltzmann's constant and $T$ is the fluid temperature). One can now solve eq. (3.10) and find that, after some transient time, the asymptotic mean square displacement is given by:

$$\langle \vec{x}^2\rangle = \frac{kT}{\pi\eta a}\, t. \qquad (3.11)$$

This is nothing but Einstein's diffusion law, but now we have an explicit expression for the diffusion coefficient in terms of other macroscopic variables:

$$D = \frac{kT}{6\pi\eta a}. \qquad (3.12)$$

(For a particle of radius $1\,\mu\mathrm{m}$ in water at room temperature, this gives $D \approx 2\times 10^{-13}\,\mathrm{m^2/s}$.)

Langevin's random force $\vec{\xi}(t)$ is an example of a stochastic process. It is time we proceed to a more precise definition of what a stochastic process is. The natural machinery is that of probability theory.

3.2 Stochastic Processes

In Chapter 1 we introduced the concept of a random variable $\hat{x}$ resulting from a probabilistic experiment. We now define a stochastic process as a family $\hat{x}(t)$ of random variables depending on some continuous real parameter $t$.
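A minimal numerical sketch of this definition (a toy process with made-up parameters, not from the text): three realizations of the same process, all launched from the same initial condition, accumulate independent random increments and end up in different final states.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy illustration of a stochastic process x(t): a family of random
# variables indexed by t. Three realizations ("sample paths") start from
# the same initial condition x0 = 0 but accumulate their own increments.
n_paths, n_times = 3, 101
dt = 0.01
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_times - 1))
paths = np.hstack([np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)])

print(paths[:, 0])    # same initial condition for every realization
print(paths[:, -1])   # three different outcomes at the final time
```

Knowing $x_0$ alone does not determine where a realization ends; each sequence of increments picks out one of the possible trajectories.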
In most applications, $t$ is a physical time, and the stochastic process can be thought of as performing multiple probabilistic experiments, one at each time instant. The trajectory followed by the system depends on the outcome of each probabilistic experiment. As a consequence, knowledge of the initial condition $x_0$ at time $t_0$ is not enough to determine the position in phase space of the system at a later time $t_1$. Instead, the trajectory, and therefore the final state of the system, acquires a probabilistic nature. To fully determine the final state, it is also necessary to know the outcomes of all the successive probabilistic experiments between the initial time $t_0$ and the final time $t_1$. In fact, each possible set of successive outcomes determines a possible trajectory for the system, all starting at $x_0$ at time $t_0$ but ending at different locations at time $t_1$. The stochastic process can be seen as the collection of all these possible trajectories. Probably the best-known example of a stochastic process is the random walk. The probabilistic experiment is now a series of binary results representing, for instance, the outcomes of repeatedly tossing a coin: $(0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, \ldots)$.

Figure 3.2: Example of a random walk trajectory, with $\tau = a = 1$, after a large number ($10^5$) of steps. The inset shows the fine detail, with the discrete jumps occurring at times that are multiples of $\tau$.
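A coin-toss random walk like the one in Figure 3.2 takes only a few lines to simulate; here $\tau = a = 1$ as in the figure, and the random seed is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(42)

# Coin-toss random walk, as in Figure 3.2: one toss every tau = 1 time unit,
# each toss moving the walker a = 1 to the right (heads) or left (tails).
n_steps = 100_000
coins = rng.integers(0, 2, size=n_steps)      # the binary outcomes 0/1
steps = 2 * coins - 1                         # map {0, 1} -> {-1, +1}
x = np.concatenate(([0], np.cumsum(steps)))   # trajectory x(0), x(1), ...

print(x[-1])             # final position after 10^5 tosses
print(np.mean(steps))    # close to 0: no preferred direction
```

With a fair coin the walk has no drift, and the final position is typically of order $\sqrt{n}$, the same diffusive scaling found for the Brownian particle.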