Chapter 1: Stationary Time Series, Conditional Heteroscedasticity, Random Walk, Test for a Unit Root, Endogeneity, Causality and IV Estimation

Financial Econometrics, Michael Hauser, WS18/19

Content

- Notation
- Conditional heteroscedasticity: GARCH, estimation, forecasting
- Random walk, test for unit roots
- Granger causality
- Assumptions of the regression model
- Endogeneity, exogeneity
- Instrumental variable (IV) estimator
- Testing for exogeneity: Hausman test and Hausman-Wu test
- Appendix: OLS, weak stationarity, Wold representation, ACF, AR(1), MA(1), tests for WN, prediction of an AR(1)

Notation

- X, Π … matrices
- x, β … column vectors
- x, β … real values, single variables
- i, j, t … indices
- L … lag operator (we do not use the backward shift operator B)
- WN, RW … white noise, random walk
- asy … asymptotically
- df … degrees of freedom
- e.g. … exempli gratia, for example
- i.e. … id est, that is
- i.g. … in general
- lhs, rhs … left, right hand side
- rv … random variable
- wrt … with respect to

Conditional heteroscedasticity: GARCH models

A stylized fact of stock returns and other financial return series is that periods of high volatility are followed by periods of low volatility. This rise and fall of volatility can be captured by GARCH models. The idea is that the arrival of new information increases the uncertainty in the market and hence the variance; after a while the market participants reach a new consensus and the variance decreases.

We assume, to begin with, that the (log) returns r_t are WN:

    r_t = µ + ε_t  with  ε_t ~ N(0, σ_t²),   σ_t² = E(ε_t² | I_{t−1})

σ_t² is the predictor for ε_t² given the information at period (t−1); it is the conditional variance of ε_t given I_{t−1} = {ε_{t−1}, ε_{t−2}, …, σ²_{t−1}, σ²_{t−2}, …}.

Volatility is commonly measured either by

- ε_t², the 'local' variance (r_t² contains the same information), or
- |ε_t|, the modulus of ε_t.

Generalized autoregressive conditional heteroscedasticity, GARCH

The generalized autoregressive conditional heteroscedasticity (GARCH) model of order (1,1), GARCH(1,1), uses the set of information I_{t−1} = {ε²_{t−1}, σ²_{t−1}}:

    σ_t² = a_0 + a_1 ε²_{t−1} + b_1 σ²_{t−1}

Setting E(σ_t²) = E(ε_t²) = σ², so that σ² = a_0 + a_1 σ² + b_1 σ², and solving wrt σ² gives the unconditional variance of ε_t:

    E(ε_t²) = a_0 / (1 − (a_1 + b_1))

The unconditional variance is constant and exists if a_1 + b_1 < 1. Further, as σ_t² is a variance and hence positive, a_0, a_1 and b_1 have to be positive: a_0, a_1, b_1 > 0.

GARCH(r,s)

A GARCH model of order (r,s), GARCH(r,s), with r ≥ 0, s > 0, is

    σ_t² = a_0 + Σ_{j=1}^{s} a_j ε²_{t−j} + Σ_{j=1}^{r} b_j σ²_{t−j}  with  Σ_{j=1}^{s} a_j + Σ_{j=1}^{r} b_j < 1,  a_j, b_j > 0

An autoregressive conditional heteroscedasticity (ARCH) model of order s is a GARCH(0,s). E.g. ARCH(1) is

    σ_t² = a_0 + a_1 ε²_{t−1}  with  a_1 < 1,  a_0, a_1 > 0

Comment 1: A GARCH(1,0), σ_t² = a_0 + b_1 σ²_{t−1}, is not useful, as it describes a deterministic decay once a starting value for some past σ²_{t−j} is given.

Comment 2: The order s refers to the number of a_j coefficients, r to the number of b_j coefficients.

IGARCH(1,1)

An empirically relevant version is the integrated GARCH, IGARCH(1,1), model

    σ_t² = a_0 + a_1 ε²_{t−1} + b_1 σ²_{t−1}  with  a_1 + b_1 = 1,  a_0, a_1, b_1 > 0

where the conditional variance exists, but the unconditional one does not.
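To see the volatility-clustering mechanism at work, here is a minimal simulation sketch of a GARCH(1,1) process in Python. The parameter values a_0 = 0.1, a_1 = 0.1, b_1 = 0.8 and the mean µ = 0.05 are illustrative (not from the notes); they satisfy a_1 + b_1 < 1, so the unconditional variance exists.

```python
import numpy as np

# Illustrative parameters: a1 + b1 < 1, so the unconditional
# variance a0 / (1 - (a1 + b1)) exists.
a0, a1, b1 = 0.1, 0.1, 0.8
mu, n = 0.05, 2000
rng = np.random.default_rng(42)

eps = np.zeros(n)
sig2 = np.zeros(n)
sig2[0] = a0 / (1.0 - (a1 + b1))   # start at the unconditional variance

for t in range(1, n):
    # conditional variance recursion: sig2_t = a0 + a1*eps_{t-1}^2 + b1*sig2_{t-1}
    sig2[t] = a0 + a1 * eps[t - 1] ** 2 + b1 * sig2[t - 1]
    eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()

r = mu + eps                       # returns: r_t = mu + eps_t

print("sample var:", round(r.var(), 3),
      "| unconditional var:", round(a0 / (1 - (a1 + b1)), 3))
```

A plot of eps shows the typical volatility clusters: a large shock raises σ_t² and, through b_1, keeps it elevated for some time.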
GARCH(1,1): ARMA and ARCH representations

We can define an innovation in the variance as

    ν_t = ε_t² − E(ε_t² | I_{t−1}) = ε_t² − σ_t²

Replacing σ_t² by σ_t² = ε_t² − ν_t in the GARCH(1,1) model, we obtain

    ε_t² = a_0 + (a_1 + b_1) ε²_{t−1} + ν_t − b_1 ν_{t−1}

This model is an ARMA(1,1) for ε_t². So the ACF of r_t² can be inspected to get an impression of the dynamics. The model is stationary if (a_1 + b_1) < 1.

Recursive substitution of lags of σ_t² = a_0 + a_1 ε²_{t−1} + b_1 σ²_{t−1} into σ_t² gives an infinite ARCH representation with a geometric decay:

    σ_t² = a_0 / (1 − b_1) + a_1 Σ_{j=1}^{∞} b_1^{j−1} ε²_{t−j}

Variants of the GARCH model

- t-GARCH: As the unconditional distribution of r_t can be seen as a mixture of normal distributions with different variances, GARCH is able to model fat tails. Empirically, however, the reduction in the kurtosis of r_t achieved by a normal GARCH model is often not sufficient. A simple solution is to use an already fat-tailed distribution for ε_t instead of a normal one; candidates are e.g. the t-distribution with df > 2, or the GED (generalized error distribution) with tail parameter κ > 0.

- ARCH-in-mean, ARCH-M: If market participants are risk averse, they demand a higher average return in uncertain periods than in normal periods. So the mean return should be higher when σ_t² is high:

      r_t = µ_0 + µ_1 σ_t² + ε_t  with  µ_1 > 0,  ε_t ~ N(0, σ_t²)

- Asymmetric GARCH, threshold GARCH, GJR model: The assumption that good and bad news have the same absolute (symmetric) effect might not hold. A useful variant is

      σ_t² = a_0 + a_1 ε²_{t−1} + b_1 σ²_{t−1} + γ D_{t−1} ε²_{t−1}

  where the dummy variable D indicates a positive shock: D_{t−1} = 1 if ε_{t−1} > 0, and D_{t−1} = 0 if ε_{t−1} ≤ 0. The contribution of ε²_{t−1} to σ_t² is therefore (a_1 + γ) if ε_{t−1} > 0, and a_1 if ε_{t−1} ≤ 0. If γ < 0, negative shocks have a larger impact on future volatility than positive shocks. (GJR stands for Glosten, Jagannathan, Runkle.)

- Exponential GARCH, EGARCH: By modeling the log of the conditional variance, log(σ_t²), the EGARCH guarantees positive variances independent of the choice of the parameters. It can also be formulated in an asymmetric way (γ ≠ 0):

      log(σ_t²) = a_0 + a_1 |ε_{t−1}| / σ_{t−1} + b_1 log(σ²_{t−1}) + γ ε_{t−1} / σ_{t−1}

  If γ < 0, positive shocks generate less volatility than negative shocks.

Estimation of the GARCH(r,s) model

Say r_t can be described more generally as

    r_t = x_t' θ + ε_t

with a vector x_t of further variables, x_t = (x_{1t}, …, x_{mt})', possibly including lagged r_t's, seasonal dummies, etc. ε_t with conditional heteroscedasticity can be written as

    ε_t = σ_t ξ_t  with  ξ_t iid N(0,1)

where σ_t² has the form σ_t² = a_0 + a_1 ε²_{t−1} + … + a_s ε²_{t−s} + b_1 σ²_{t−1} + … + b_r σ²_{t−r}. So the conditional density of r_t | x_t, I_{t−1} is given by

    f_N(r_t | x_t, I_{t−1}) = [1 / √(2π σ_t²)] exp(−ε_t² / (2σ_t²)),   t = max(r,s)+1, …, n

Comments: V(σ_t ξ_t | x_t, I_{t−1}) = σ_t² V(ξ_t | x_t, I_{t−1}) = σ_t²; the starting values σ_1², …, σ_r² are chosen appropriately; and ε_t = ε_t(θ), σ_t² = σ_t²(a_0, a_1, …, a_s, b_1, …, b_r).

Estimation of the GARCH model

The ML (maximum likelihood) estimator of the parameter vector θ, a_0, a_1, …, a_s, b_1, …, b_r is given by

    max_{θ, a_0, a_1, …, a_s, b_1, …, b_r}  Π_{t=max(r,s)+1}^{n} f_N(r_t | x_t, I_{t−1})

as the ε's are uncorrelated. The estimates are asy normally distributed; standard t-tests, etc. apply.

Note the requirement of uncorrelated ε's. If autocorrelation in the returns r_t is ignored, the model is misspecified, and you will detect ARCH effects although they might not exist.
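This maximization can be sketched for a GARCH(1,1) with constant mean (µ in place of x_t'θ). The following is a minimal illustration assuming numpy/scipy; the starting values, the choice σ_1² = sample variance of the residuals, and the function names are ours, and a dedicated package would handle the details more carefully.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, r):
    """Negative Gaussian log-likelihood of a GARCH(1,1) with constant mean."""
    mu, a0, a1, b1 = params
    eps = r - mu
    sig2 = np.empty(len(r))
    sig2[0] = eps.var()                # starting value, 'appropriately chosen'
    for t in range(1, len(r)):
        sig2[t] = a0 + a1 * eps[t - 1] ** 2 + b1 * sig2[t - 1]
    # -sum log f_N(r_t | I_{t-1}) = 0.5 * sum( log(2*pi*sig2_t) + eps_t^2 / sig2_t )
    return 0.5 * np.sum(np.log(2 * np.pi * sig2) + eps ** 2 / sig2)

def fit_garch11(r):
    # a1 + b1 < 1 (stationarity) is not imposed here; check it on the result
    x0 = [r.mean(), 0.1 * r.var(), 0.1, 0.8]           # illustrative start
    bounds = [(None, None), (1e-8, None), (0.0, 1.0), (0.0, 1.0)]
    res = minimize(neg_loglik, x0, args=(r,), bounds=bounds, method="L-BFGS-B")
    return res.x                                       # mu, a0, a1, b1
```

Applied to the simulated series r from the sketch above, fit_garch11(r) should return estimates near (µ, a_0, a_1, b_1) = (0.05, 0.1, 0.1, 0.8).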
Because of this, in a first step model the level of r_t by ARMA, e.g., and then fit GARCH models to the residuals.

Remark: If r_t is autocorrelated, then i.g. r_t² is autocorrelated as well.

Forecasting a GARCH(1,1) process

Using σ² = a_0 / (1 − (a_1 + b_1)) in σ_t² = a_0 + a_1 ε²_{t−1} + b_1 σ²_{t−1}, we rewrite the model as

    σ_t² − σ² = a_1 [ε²_{t−1} − σ²] + b_1 [σ²_{t−1} − σ²]

So the 1-step ahead forecast E(ε²_{t+1} | I_t) is

    σ²_{t+1|t} = E(ε²_{t+1} | I_t) = σ² + a_1 [ε_t² − σ²] + b_1 [σ_t² − σ²]

with the 1-step ahead forecast error ε²_{t+1} − σ²_{t+1|t} = ν_{t+1}. The h-step ahead forecast, h ≥ 2, is obtained by replacing both ε_t² and σ_t² by their (h−1)-step ahead forecast σ²_{t+h−1|t}:

    σ²_{t+h|t} = σ² + (a_1 + b_1) [σ²_{t+h−1|t} − σ²] = σ² + (a_1 + b_1)^{h−1} [a_1 (ε_t² − σ²) + b_1 (σ_t² − σ²)]

Random walk and unit root test: random walk, I(1), Dickey-Fuller test

Random walk, RW

A process y_t = α y_{t−1} + ε_t with α = 1 is called a random walk:

    y_t = y_{t−1} + ε_t  with  ε_t WN

Taking variances on both sides gives V(y_t) = V(y_{t−1}) + σ². This has a solution V(y) only if σ² = 0, so no unconditional variance of y_t exists.

Starting the process at t = 0, its explicit form is y_t = y_0 + Σ_{j=1}^{t} ε_j.
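A short simulation sketch (numpy assumed, all values illustrative) makes the explicit form concrete: across many simulated paths the mean stays at y_0, while the variance grows like t·σ² and hence without bound.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, T, sigma, y0 = 10_000, 200, 1.0, 0.0

# y_t = y_0 + eps_1 + ... + eps_t: cumulated WN shocks
eps = sigma * rng.standard_normal((n_paths, T))
y = y0 + np.cumsum(eps, axis=1)

for t in (50, 100, 200):
    print(f"t={t}: mean={y[:, t - 1].mean():.3f}, var/t={y[:, t - 1].var() / t:.3f}")
# var/t stays near sigma^2 = 1, i.e. V(y_t) = t * sigma^2 diverges with t
```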