Lectures on Stochastic Processes


William G. Faris

November 8, 2001

Contents

1 Random walk
  1.1 Symmetric simple random walk
  1.2 Simple random walk
  1.3 Problems
2 Gambler's ruin and martingales
  2.1 Gambler's ruin: fair game
  2.2 Gambler's ruin: unfair game
  2.3 The dominated convergence theorem
  2.4 Martingales
  2.5 Problems
3 The single server queue
  3.1 A discrete time queue
  3.2 Emptying the queue
  3.3 The monotone convergence theorem
  3.4 The stationary distribution
  3.5 Problems
4 The branching process
  4.1 The branching model
  4.2 Explosion
  4.3 Extinction
  4.4 Problems
5 Markov chains
  5.1 The Markov property
  5.2 The strong Markov property
  5.3 Transient and recurrent states
  5.4 Recurrent classes
  5.5 Martingales
  5.6 The first passage time
  5.7 Problems
6 Markov chains: stationary distributions
  6.1 Stationary distributions
  6.2 Detailed balance
  6.3 Positive recurrence and stationary distributions
  6.4 The average time in a state
  6.5 Examples
  6.6 Convergence to a stationary distribution
  6.7 Problems
7 The Poisson process
  7.1 The Bernoulli process
  7.2 The Poisson process
  7.3 The Poisson paradox
  7.4 Combining Poisson processes
  7.5 Problems
8 Markov jump processes
  8.1 Jump rates
  8.2 Hitting probabilities
  8.3 Stationary distributions
  8.4 The branching process
  8.5 The N server queue
  8.6 The Ehrenfest chain
  8.7 Transition probabilities
  8.8 The embedded Markov chain
  8.9 Problems
9 The Wiener process
  9.1 The symmetric random walk
  9.2 The Wiener process
  9.3 Continuity and differentiability
  9.4 Stochastic integrals
  9.5 Equilibrium statistical mechanics
  9.6 The Einstein model of Brownian motion
  9.7 Problems
10 The Ornstein-Uhlenbeck process
  10.1 The velocity process
  10.2 The Ornstein-Uhlenbeck position process
  10.3 Stationary Gaussian Markov processes
  10.4 Problems
11 Diffusion and drift
  11.1 Stochastic differential equation
  11.2 Diffusion equations
  11.3 Stationary distributions
  11.4 Boundary conditions
  11.5 Martingales and hitting probabilities
  11.6 Problems
12 Stationary processes
  12.1 Mean and covariance
  12.2 Gaussian processes
  12.3 Mean and covariance of stationary processes
  12.4 Convolution
  12.5 Impulse response functions
  12.6 Problems
13 Spectral analysis
  13.1 Fourier transforms
  13.2 Convolution and Fourier transforms
  13.3 Smoothness and decay
  13.4 Some transforms
  13.5 Spectral densities
  13.6 Frequency response functions
  13.7 Causal response functions
  13.8 White noise
  13.9 1/f noise
  13.10 Problems

Chapter 1. Random walk

1.1 Symmetric simple random walk

Let $X_0 = x$ and
$$X_{n+1} = X_n + \xi_{n+1}. \qquad (1.1)$$
The $\xi_i$ are independent, identically distributed random variables such that $P[\xi_i = \pm 1] = 1/2$. The probabilities for this random walk also depend on $x$, and we shall denote them by $P_x$. We can think of this as a fair gambling game, where at each stage one either wins or loses a fixed amount.

Let $T_y$ be the first time $n \ge 1$ when $X_n = y$, and let $\rho_{xy} = P_x[T_y < \infty]$ be the probability that the walk starting at $x$ ever gets to $y$ at some future time. First we show that $\rho_{12} = 1$.
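Before the proof, the claim can be checked numerically. Below is a minimal simulation sketch (not from the notes; the function name, trial count, and the truncation horizon `max_steps` are illustrative choices): it runs the walk of (1.1) from $x = 1$ and estimates $\rho_{12}$. Any simulation must truncate at a finite horizon, so the estimate only creeps toward 1 as the horizon grows; the argument below shows the exact value is 1.

```python
import random

def hits_target(start, target, max_steps):
    """Run the symmetric simple random walk (1.1) from `start` and
    report whether it reaches `target` within `max_steps` steps."""
    x = start
    for _ in range(max_steps):
        x += random.choice((-1, 1))  # xi = +1 or -1, each with probability 1/2
        if x == target:
            return True
    return False

if __name__ == "__main__":
    trials = 10_000
    for max_steps in (100, 10_000):
        hits = sum(hits_target(1, 2, max_steps) for _ in range(trials))
        print(f"horizon {max_steps:6d}: estimated rho_12 = {hits / trials:.3f}")
```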
This says that in the fair game one is almost sure to eventually get ahead by one unit. It follows from three equations. The first equation says that in the first step the walk either goes from 1 to 2 directly, or it goes from 1 to 0 and then must go from 0 to 2. Thus
$$\rho_{12} = \frac{1}{2} + \frac{1}{2}\rho_{02}. \qquad (1.2)$$
The second equation says that to go from 0 to 2, the walk has to go from 0 to 1 and then from 1 to 2. Furthermore, these two events are independent. Thus
$$\rho_{02} = \rho_{01}\rho_{12}. \qquad (1.3)$$
The third equation says that $\rho_{01} = \rho_{12}$. This is obvious, since the walk is translation invariant. Thus $\rho = \rho_{12}$ satisfies
$$\rho = \frac{1}{2}\rho^2 + \frac{1}{2}. \qquad (1.4)$$
This is a quadratic equation that can also be written as $\rho^2 - 2\rho + 1 = 0$, or $(\rho - 1)^2 = 0$. Its only solution is $\rho = 1$.

How long does it take, on the average, to get ahead by this amount? Let $m_{xy} = E_x[T_y]$, the expected time that the random walk takes to get to $y$, starting at $x$. We have just seen that if $x = 1$, then $T_2 < \infty$ with probability one. Let us do the same kind of computation for $m_{12} = E_1[T_2]$. The first equation says that in the first step the walk either goes from 1 to 2 directly, or it goes from 1 to 0 and then must go from 0 to 2. Thus
$$m_{12} = \frac{1}{2}(1 + m_{02}) + \frac{1}{2} \cdot 1. \qquad (1.5)$$
The second equation says that to go from 0 to 2, the walk has to go from 0 to 1 and then from 1 to 2. Thus
$$m_{02} = m_{01} + m_{12}. \qquad (1.6)$$
The third equation says that $m_{01} = m_{12}$. This is obvious, again by translation invariance. Thus $m = m_{12}$ satisfies
$$m = \frac{1}{2}(1 + 2m) + \frac{1}{2}. \qquad (1.7)$$
This is a linear equation that can also be written as $m = 1 + m$. Its only solution is $m = \infty$. Thus we have seen that $m_{01} = \infty$.

At first this seems strange, but it is quite natural. In the fair game there is a small probability of a bad losing streak. It takes a long time to recover from the losing streak and eventually get ahead. Infinitely long, on the average.

The calculation above uses a rather subtle property of the random walk. Namely, it was assumed that after the walk accomplishes the task of going from 0 to 1, it then has an equally difficult task of going from 1 to 2. Now this is delicate, because the walk going from 1 to 2 starts out at a random time, not at a fixed time. Namely, if we start the walk at 0 and set $T = T_1$ as the first time the walk gets to 1, then from that time on the walk is $X_T, X_{T+1}, X_{T+2}, \ldots$ with $X_T = 1$ by definition. The point is that this has the same distribution as the walk $X_0, X_1, X_2, \ldots$ with $X_0 = 1$. This fact is called the strong Markov property.

The strong Markov property has the following statement. Start the walk out at $x$ and let $T = T_y$. Let $B$ be a set of random walk paths. We must prove that
$$P_x[(X_T, X_{T+1}, X_{T+2}, \ldots) \in B \mid T < \infty] = P_y[(X_0, X_1, X_2, \ldots) \in B]. \qquad (1.8)$$
This says that the walk does not care when and how it got to $y$ for the first time; from that point on it moves just as if it were started there at time zero.

The proof is the following. We write
$$P_x[(X_T, X_{T+1}, X_{T+2}, \ldots) \in B, \; T < \infty] = \sum_{n=1}^{\infty} P_x[(X_T, X_{T+1}, X_{T+2}, \ldots) \in B, \; T = n]. \qquad (1.9)$$
However,
$$P_x[(X_T, X_{T+1}, X_{T+2}, \ldots) \in B, \; T = n] = P_x[(X_n, X_{n+1}, X_{n+2}, \ldots) \in B, \; T = n]. \qquad (1.10)$$
Now the event $T = n$ depends only on the walk $X_1, X_2, \ldots, X_n$, which depends only on $\xi_1, \xi_2, \ldots, \xi_n$. On the other hand, the walk $X_n, X_{n+1}, X_{n+2}, \ldots$ with $X_n = y$ depends only on $\xi_{n+1}, \xi_{n+2}, \ldots$. Thus the events are independent, and we have
$$P_x[(X_T, X_{T+1}, X_{T+2}, \ldots) \in B, \; T = n] = P_x[(X_n, X_{n+1}, X_{n+2}, \ldots) \in B] \, P[T = n], \qquad (1.11)$$
where $X_n = y$. Finally, this is
$$P_x[(X_T, X_{T+1}, X_{T+2}, \ldots) \in B, \; T = n] = P_y[(X_0, X_1, X_2, \ldots) \in B] \, P[T = n]. \qquad (1.12)$$
The proof is finished by summing over $n$.
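The conclusion $m_{01} = \infty$ can also be seen in simulation. The sketch below (again not from the notes; the sample size and the truncation cap are arbitrary choices) draws first-passage times $T_1$ for the walk started at 0. The median is tiny, since half the time the first step already lands on 1, but the sample mean is dominated by rare long excursions and keeps growing as the cap is raised, which is the signature of an infinite expectation.

```python
import random

def first_passage_time(cap):
    """Number of steps for the walk from 0 to first hit 1, truncated at `cap`."""
    x, n = 0, 0
    while x != 1 and n < cap:
        x += random.choice((-1, 1))
        n += 1
    return n

if __name__ == "__main__":
    samples = sorted(first_passage_time(1_000_000) for _ in range(2_000))
    median = samples[len(samples) // 2]
    mean = sum(samples) / len(samples)
    # Median T_1 is 1 (the first step succeeds half the time), yet the
    # truncated sample mean is huge and grows without bound with `cap`.
    print(f"median T_1 = {median}, truncated sample mean = {mean:.0f}")
```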
The picture of the symmetric simple random walk that emerges is the following. Starting from any point, the probability of eventually getting to any other point is one. However, the expected time to accomplish this is infinite. Thus an imbalance in one direction is always compensated, but this random process is incredibly inefficient and can take a huge amount of time to do it.

1.2 Simple random walk

Let $X_0 = x$ and
$$X_{n+1} = X_n + \xi_{n+1}. \qquad (1.13)$$
The $\xi_i$ are independent, identically distributed random variables such that $P[\xi_i = 1] = p$, $P[\xi_i = -1] = q$, and $P[\xi_i = 0] = r$. Here $p + q + r = 1$, so the walk can change position by only one step at a time. The probabilities for this random walk depend on $x$, and we shall denote them by $P_x$. We can think of this as a gambling game, where at each stage one either wins or loses a fixed amount.

Let $T_y$ be the first time $n \ge 1$ when $X_n = y$, and let $\rho_{xy} = P_x[T_y < \infty]$ be the probability that the walk starting at $x$ ever gets to $y$ at some future time.
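The numerical experiment from Section 1.1 contrasts nicely with the biased case. In the sketch below, the parameter values and the `floor` cutoff are illustrative; the cutoff is only a simulation shortcut, since a downward-drifting walk that has fallen far below 0 has an astronomically small chance of ever returning. For $p < q$ the estimate of $\rho_{01}$ comes out near $p/q$, the gambler's-ruin value that a standard computation (not carried out in this excerpt) yields, rather than near 1.

```python
import random

def reaches_up_one(p, q, floor=-60, max_steps=1_000_000):
    """Run the walk (1.13) from 0; True if it hits 1 before drifting
    below `floor` (a practical stand-in for 'never returns')."""
    x = 0
    for _ in range(max_steps):
        u = random.random()
        x += 1 if u < p else (-1 if u < p + q else 0)  # holds still with prob r
        if x == 1:
            return True
        if x < floor:
            return False
    return False

if __name__ == "__main__":
    p, q, r = 0.3, 0.5, 0.2
    trials = 10_000
    est = sum(reaches_up_one(p, q) for _ in range(trials)) / trials
    print(f"estimated rho_01 = {est:.3f}; p/q = {p/q:.3f}")
```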