Chapter 2 Fundamental Concepts


This chapter describes the fundamental concepts in the theory of time series models. In particular, we introduce the concepts of stochastic process, mean and covariance function, stationary process, and autocorrelation function.

2.1 Time Series and Stochastic Processes

The sequence of random variables $\{Z_t : t = 0, \pm 1, \pm 2, \pm 3, \ldots\}$ is called a stochastic process. It is known that the complete probabilistic structure of such a process is determined by the set of distributions of all finite collections of the $Z$'s. Fortunately, we will not have to deal explicitly with these multivariate distributions. Much of the information in these joint distributions can be described in terms of means, variances, and covariances. Consequently, we concentrate our efforts on these first and second moments. (If the joint distributions of the $Z$'s are multivariate normal distributions, then the first and second moments completely determine all the joint distributions.)

2.2 Means, Variances, and Covariances

For a stochastic process $\{Z_t : t = 0, \pm 1, \pm 2, \pm 3, \ldots\}$ the mean function is defined by

\[ \mu_t = E(Z_t) \quad \text{for } t = 0, \pm 1, \pm 2, \ldots \tag{2.2.1} \]

that is, $\mu_t$ is just the expected value of the process at time $t$. In general, $\mu_t$ can be different at each time point $t$.

The autocovariance function, $\gamma_{t,s}$, is defined as

\[ \gamma_{t,s} = \mathrm{Cov}(Z_t, Z_s) \quad \text{for } t, s = 0, \pm 1, \pm 2, \ldots \tag{2.2.2} \]

where $\mathrm{Cov}(Z_t, Z_s) = E[(Z_t - \mu_t)(Z_s - \mu_s)] = E(Z_t Z_s) - \mu_t \mu_s$.

The autocorrelation function, $\rho_{t,s}$, is given by

\[ \rho_{t,s} = \mathrm{Corr}(Z_t, Z_s) \quad \text{for } t, s = 0, \pm 1, \pm 2, \ldots \tag{2.2.3} \]

where

\[ \mathrm{Corr}(Z_t, Z_s) = \frac{\mathrm{Cov}(Z_t, Z_s)}{\sqrt{\mathrm{Var}(Z_t)\,\mathrm{Var}(Z_s)}} = \frac{\gamma_{t,s}}{\sqrt{\gamma_{t,t}\,\gamma_{s,s}}} \tag{2.2.4} \]

We review the basic properties of expectation, variance, covariance, and correlation in "Appendix: Expectation, Variance, Covariance, and Correlation" on page 16. Recall that both covariance and correlation are measures of the (linear) dependence between random variables, but that the unitless correlation is somewhat easier to interpret.

The following important properties follow from known results and our definitions:

\[ \gamma_{t,t} = \mathrm{Var}(Z_t) \qquad\qquad \rho_{t,t} = 1 \]
\[ \gamma_{t,s} = \gamma_{s,t} \qquad\qquad \rho_{t,s} = \rho_{s,t} \tag{2.2.5} \]
\[ |\gamma_{t,s}| \le \sqrt{\gamma_{t,t}\,\gamma_{s,s}} \qquad\qquad |\rho_{t,s}| \le 1 \]

Values of $\rho_{t,s}$ near $\pm 1$ indicate strong (linear) dependence, whereas values near zero indicate weak (linear) dependence. If $\rho_{t,s} = 0$, we say that $Z_t$ and $Z_s$ are uncorrelated.

To investigate the covariance properties of various time series models, the following result will be used repeatedly: if $c_1, c_2, \ldots, c_m$ and $d_1, d_2, \ldots, d_n$ are constants and $t_1, t_2, \ldots, t_m$ and $s_1, s_2, \ldots, s_n$ are time points, then

\[ \mathrm{Cov}\!\left(\sum_{i=1}^{m} c_i Z_{t_i},\ \sum_{j=1}^{n} d_j Z_{s_j}\right) = \sum_{i=1}^{m} \sum_{j=1}^{n} c_i d_j \,\mathrm{Cov}(Z_{t_i}, Z_{s_j}) \tag{2.2.6} \]

The proof of Equation (2.2.6), though tedious, is a straightforward application of the linear properties of expectation. As a special case, we obtain the following well-known result:

\[ \mathrm{Var}\!\left(\sum_{i=1}^{n} c_i Z_{t_i}\right) = \sum_{i=1}^{n} c_i^2 \,\mathrm{Var}(Z_{t_i}) + 2 \sum_{i=2}^{n} \sum_{j=1}^{i-1} c_i c_j \,\mathrm{Cov}(Z_{t_i}, Z_{t_j}) \tag{2.2.7} \]
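As a quick numerical illustration of Equations (2.2.6) and (2.2.7), the short Python sketch below (the particular process, the constants, and the variable names are our own choices, not part of the text) compares the simulated variance of a linear combination of three partial sums of independent noise with the value the formula predicts. For partial sums of iid noise, $\mathrm{Cov}(Z_i, Z_j) = \min(i, j)\,\sigma_a^2$, so both printed numbers should be close to 29.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 1.0                                 # variance of the iid a's
c = np.array([2.0, -1.0, 3.0])               # arbitrary constants c_1, c_2, c_3

# Z_1, Z_2, Z_3 are partial sums of iid noise, so Cov(Z_i, Z_j) = min(i, j) * sigma2
a = rng.normal(scale=np.sqrt(sigma2), size=(100_000, 3))
Z = np.cumsum(a, axis=1)                     # each row is one draw of (Z_1, Z_2, Z_3)
sim_var = np.var(Z @ c)                      # Monte Carlo Var(c_1 Z_1 + c_2 Z_2 + c_3 Z_3)

# Right-hand side of Equation (2.2.7), written compactly as the double sum in (2.2.6)
cov = sigma2 * np.minimum.outer(np.arange(1, 4), np.arange(1, 4))
formula_var = c @ cov @ c

print(sim_var, formula_var)                  # both close to 29.0
```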
The Random Walk

Let $a_1, a_2, \ldots$ be a sequence of independent, identically distributed random variables, each with zero mean and variance $\sigma_a^2$. The observed time series, $\{Z_t : t = 1, 2, \ldots\}$, is constructed as follows:

\[ Z_1 = a_1, \quad Z_2 = a_1 + a_2, \quad \ldots, \quad Z_t = a_1 + a_2 + \cdots + a_t \tag{2.2.8} \]

Alternatively, we can write

\[ Z_t = Z_{t-1} + a_t \tag{2.2.9} \]

If the $a$'s are interpreted as the sizes of the "steps" taken (forward or backward) along a number line, then $Z_t$ is the position of the "random walker" at time $t$. From Equation (2.2.8) we obtain the mean function:

\[ \mu_t = E(Z_t) = E(a_1 + a_2 + \cdots + a_t) = E(a_1) + E(a_2) + \cdots + E(a_t) = 0 + 0 + \cdots + 0 \]

so that

\[ \mu_t = 0 \quad \text{for all } t \tag{2.2.10} \]

We also have

\[ \mathrm{Var}(Z_t) = \mathrm{Var}(a_1 + a_2 + \cdots + a_t) = \mathrm{Var}(a_1) + \mathrm{Var}(a_2) + \cdots + \mathrm{Var}(a_t) = \underbrace{\sigma_a^2 + \sigma_a^2 + \cdots + \sigma_a^2}_{t \text{ terms}} \]

so that

\[ \mathrm{Var}(Z_t) = t\sigma_a^2 \tag{2.2.11} \]

Notice that the process variance increases linearly with time.

To investigate the covariance function, suppose that $1 \le t \le s$. Then we have

\[ \gamma_{t,s} = \mathrm{Cov}(Z_t, Z_s) = \mathrm{Cov}(a_1 + a_2 + \cdots + a_t,\ a_1 + a_2 + \cdots + a_t + a_{t+1} + \cdots + a_s) \]

From Equation (2.2.6) we have

\[ \gamma_{t,s} = \sum_{i=1}^{s} \sum_{j=1}^{t} \mathrm{Cov}(a_i, a_j) \]

However, these covariances are zero unless $i = j$, in which case they equal $\mathrm{Var}(a_i) = \sigma_a^2$. There are exactly $t$ of these, so that $\gamma_{t,s} = t\sigma_a^2$.

Since $\gamma_{t,s} = \gamma_{s,t}$, this specifies the autocovariance function for all time points $t$ and $s$, and we can write

\[ \gamma_{t,s} = t\sigma_a^2 \quad \text{for } 1 \le t \le s \tag{2.2.12} \]

The autocorrelation function for the random walk is now easily obtained as

\[ \rho_{t,s} = \frac{\gamma_{t,s}}{\sqrt{\gamma_{t,t}\,\gamma_{s,s}}} = \sqrt{\frac{t}{s}} \quad \text{for } 1 \le t \le s \tag{2.2.13} \]

The following numerical values help us understand the behavior of the random walk:

\[ \rho_{1,2} = \sqrt{\tfrac{1}{2}} = 0.707 \qquad \rho_{8,9} = \sqrt{\tfrac{8}{9}} = 0.943 \]
\[ \rho_{24,25} = \sqrt{\tfrac{24}{25}} = 0.980 \qquad \rho_{1,25} = \sqrt{\tfrac{1}{25}} = 0.200 \]

The values of $Z$ at neighboring time points are more and more strongly and positively correlated as time goes by. On the other hand, the values of $Z$ at distant time points are less and less correlated.

A simulated random walk is shown in Exhibit 2.1, where the $a$'s were selected from a standard normal distribution. Note that even though the theoretical mean function is zero for all time points, the facts that the variance increases over time and that the correlation between process values nearby in time is nearly 1 indicate that we should expect long excursions of the process away from the mean level of zero.

The simple random walk process provides a good model (at least to a first approximation) for phenomena as diverse as the movement of common stock prices and the position of small particles suspended in a fluid (so-called Brownian motion).

Exhibit 2.1 A Random Walk (time series plot of the simulated values, RandWalk, against time Index 1 through 60)
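A short simulation in the spirit of Exhibit 2.1 (a sketch only; the number of replications and the variable names are our own) confirms two of the results above numerically: with standard normal steps, the sample variance of $Z_t$ across replications is close to $t\sigma_a^2$ (Equation 2.2.11), and the sample correlation between $Z_8$ and $Z_9$ is close to $\sqrt{8/9} \approx 0.943$ (Equation 2.2.13).

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_reps = 60, 50_000

# Each row is one realization of the random walk Z_t = a_1 + ... + a_t
a = rng.standard_normal(size=(n_reps, n_steps))
Z = np.cumsum(a, axis=1)

t, s = 8, 9
print(np.var(Z[:, t - 1]), t)                          # sample Var(Z_8) vs. 8 * sigma_a^2 = 8
print(np.corrcoef(Z[:, t - 1], Z[:, s - 1])[0, 1],     # sample Corr(Z_8, Z_9)
      np.sqrt(t / s))                                  # theoretical value sqrt(8/9)
```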
A Moving Average

As a second example, suppose that $\{Z_t\}$ is constructed as

\[ Z_t = \frac{a_t + a_{t-1}}{2} \tag{2.2.14} \]

where (as always throughout this book) the $a$'s are assumed to be independent and identically distributed with zero mean and variance $\sigma_a^2$. Here

\[ \mu_t = E(Z_t) = E\!\left(\frac{a_t + a_{t-1}}{2}\right) = \frac{E(a_t) + E(a_{t-1})}{2} = 0 \]

and

\[ \mathrm{Var}(Z_t) = \mathrm{Var}\!\left(\frac{a_t + a_{t-1}}{2}\right) = \frac{\mathrm{Var}(a_t) + \mathrm{Var}(a_{t-1})}{4} = 0.5\sigma_a^2 \]

Also,

\[ \mathrm{Cov}(Z_t, Z_{t-1}) = \mathrm{Cov}\!\left(\frac{a_t + a_{t-1}}{2},\ \frac{a_{t-1} + a_{t-2}}{2}\right) = \frac{\mathrm{Cov}(a_t, a_{t-1}) + \mathrm{Cov}(a_t, a_{t-2}) + \mathrm{Cov}(a_{t-1}, a_{t-1}) + \mathrm{Cov}(a_{t-1}, a_{t-2})}{4} \]
\[ = \frac{\mathrm{Cov}(a_{t-1}, a_{t-1})}{4} \quad \text{(as all the other covariances are zero)} \quad = 0.25\sigma_a^2 \]

or

\[ \gamma_{t,t-1} = 0.25\sigma_a^2 \quad \text{for all } t \tag{2.2.15} \]

Furthermore,

\[ \mathrm{Cov}(Z_t, Z_{t-2}) = \mathrm{Cov}\!\left(\frac{a_t + a_{t-1}}{2},\ \frac{a_{t-2} + a_{t-3}}{2}\right) = 0 \quad \text{since the } a\text{'s are independent} \]

Similarly, $\mathrm{Cov}(Z_t, Z_{t-k}) = 0$ for $k > 1$, so we may write

\[ \gamma_{t,s} = \begin{cases} 0.5\sigma_a^2 & \text{for } |t - s| = 0 \\ 0.25\sigma_a^2 & \text{for } |t - s| = 1 \\ 0 & \text{for } |t - s| > 1 \end{cases} \]

For the autocorrelation function, we have

\[ \rho_{t,s} = \begin{cases} 1 & \text{for } |t - s| = 0 \\ 0.5 & \text{for } |t - s| = 1 \\ 0 & \text{for } |t - s| > 1 \end{cases} \tag{2.2.16} \]

since $0.25\sigma_a^2 / 0.5\sigma_a^2 = 0.5$.

Notice that $\rho_{2,1} = \rho_{3,2} = \rho_{4,3} = \rho_{9,8} = 0.5$. Values of $Z$ precisely one time unit apart have exactly the same correlation no matter where they occur in time. Furthermore, $\rho_{3,1} = \rho_{4,2} = \rho_{t,t-2}$ and, more generally, $\rho_{t,t-k}$ is the same for all values of $t$. This leads us to the important concept of stationarity.

2.3 Stationarity

To make statistical inferences about the structure of a stochastic process on the basis of an observed record of that process, we must usually make some simplifying (and presumably reasonable) assumptions about that structure. The most important such assumption is that of stationarity. The basic idea of stationarity is that the probability laws that govern the behavior of the process do not change over time. In a sense, the process is in statistical equilibrium. Specifically, a process $\{Z_t\}$ is said to be strictly stationary if the joint distribution of $Z_{t_1}, Z_{t_2}, \ldots, Z_{t_n}$ is the same as the joint distribution of $Z_{t_1 - k}, Z_{t_2 - k}, \ldots, Z_{t_n - k}$ for all choices of time points $t_1, t_2, \ldots, t_n$ and all choices of time lag $k$.

Thus, when $n = 1$ the (univariate) distribution of $Z_t$ is the same as that of $Z_{t-k}$ for all $t$ and $k$; in other words, the $Z$'s are (marginally) identically distributed. It then follows that $E(Z_t) = E(Z_{t-k})$ for all $t$ and $k$, so that the mean function is constant for all time. Additionally, $\mathrm{Var}(Z_t) = \mathrm{Var}(Z_{t-k})$ for all $t$ and $k$, so that the variance is also constant in time.
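As a check on Equation (2.2.16), and as a preview of what this time-invariance looks like in practice, the sketch below (our own illustration; the replication scheme and names are not from the text) simulates many replications of the moving average (2.2.14) and verifies that the correlation of values one time unit apart is near 0.5 wherever they occur in time, while values two time units apart are essentially uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.standard_normal(size=(100_000, 12))   # iid a's; each row is one replication
Z = (a[:, 1:] + a[:, :-1]) / 2                # Z_t = (a_t + a_{t-1}) / 2 for t = 2, ..., 12

def corr(x, y):
    """Sample correlation across replications."""
    return np.corrcoef(x, y)[0, 1]

# Lag-1 correlations at different time points: both close to 0.5, regardless of t
print(corr(Z[:, 1], Z[:, 0]), corr(Z[:, 8], Z[:, 7]))
# Lag-2 correlations: both close to 0, as in Equation (2.2.16)
print(corr(Z[:, 2], Z[:, 0]), corr(Z[:, 9], Z[:, 7]))
```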