Stat 6550 (Spring 2016) – Peter F. Craigmile
Introduction to time series models and stationarity
Reading: Brockwell and Davis [2002, Chapter 1, Appendix A]

Outline:

• A time series model
  – Example: IID noise
  – Mean function
  – Autocovariance and autocorrelation functions
• Strictly stationary processes
• (Weakly) stationary processes
  – Autocovariance (ACVF) and autocorrelation function (ACF)
  – The white noise process
  – First order moving average process, MA(1)
  – First order autoregressive process, AR(1)
• The random walk process
• Gaussian processes

A time series model

• Remember that a time series model is a specification of the probabilistic distribution of a sequence of random variables (RVs) {Xt : t ∈ T0}. (The observed time series {xt} is a realization of the RVs.)
• The simplest time series model is IID noise:
  – Here {Xt} is a sequence of independent and identically distributed (IID) RVs with (typically) zero mean.
  – There is no dependence in this model (but it can be used as a building block for constructing more complicated models).
  – We cannot forecast future values from IID noise (there are no dependencies between the random variables at different time points).

Example realizations of IID noise

1. Gaussian (normal) IID noise with mean 0 and variance σ² > 0. The pdf is
   fXt(xt) = (2πσ²)^(−1/2) exp{−xt² / (2σ²)}.
   [Figure: realization of IID Normal(0, 0.25) noise against time.]
2. Bernoulli IID noise with success probability p:
   P(Xt = 1) = p = 1 − P(Xt = 0).
   [Figure: realization of IID Bernoulli(p = 0.5) noise against time.]

Note: the variances are equal in both plots (but the means are not!).

Means and autocovariances

• A time series model could also be a specification of the means and autocovariances of the RVs.
• The mean function of {Xt} is
  µX(t) = E(Xt).
  Think of µX(t) as being the theoretical mean at time t, taken over the possible values that could have generated Xt.
• When Xt is a continuous RV,
  µX(t) = ∫ x fXt(x) dx  (integrating over the real line),
  where fXt(·) is the probability density function (pdf) for Xt.
• When Xt is a discrete RV,
  µX(t) = Σx x fXt(x)  (summing over the possible values x),
  where fXt(·) is the probability mass function (pmf) for Xt.

Examples of mean functions

• Example 1: What is µη(t) for {ηt} an IID N(0, σ²) process?
• Example 2 (straight line trend plus IID noise model): for each t, let Xt = α + βt + ηt, where α and β are constants and {ηt} is defined above. What is µX(t)?

Review: the covariance between two RVs

• The covariance between the RVs X and Y is
  cov(X, Y) = E{(X − µX)(Y − µY)} = E(XY) − µX µY.
  It is a measure of linear dependence between the two RVs.
• When X = Y we have cov(X, X) = var(X).
• For constants a, b, c, and RVs X, Y, Z:
  cov(aX + bY + c, Z) = cov(aX, Z) + cov(bY, Z) = a cov(X, Z) + b cov(Y, Z).
• Thus
  var(X + Y) = cov(X, X) + cov(X, Y) + cov(Y, X) + cov(Y, Y)
             = var(X) + var(Y) + 2 cov(X, Y).

Autocovariance function

• The autocovariance function of {Xt} is
  γX(s, t) = cov(Xs, Xt) = E[(Xs − µX(s))(Xt − µX(t))].
• It measures the strength of linear dependence between the two RVs Xs and Xt.
• Properties:
  1. γX(s, t) = γX(t, s) for each s and t.
  2. When s = t we obtain
     γX(t, t) = cov(Xt, Xt) = var(Xt) = σX²(t),
     the variance function of {Xt}.
  3. γX(s, t) is a non-negative definite function.

Why non-negative definite functions?

• A function f(·, ·) is non-negative definite if
  Σ_{s=1}^{n} Σ_{t=1}^{n} a_s f(s, t) a_t ≥ 0
  for all positive integers n and real-valued constants a_1, …, a_n.
• So, why must γX be a non-negative definite function? Consider the following:
  – For any integer n and real constants a_1, …, a_n, let Y = Σ_{t=1}^{n} a_t Xt.
  – Then the variance of Y is
    var(Y) = cov(Y, Y)
           = cov(Σ_{s=1}^{n} a_s Xs, Σ_{t=1}^{n} a_t Xt)
           = Σ_{s=1}^{n} Σ_{t=1}^{n} a_s a_t cov(Xs, Xt)
           = Σ_{s=1}^{n} Σ_{t=1}^{n} a_s a_t γX(s, t).
  – Since a variance can never be negative, this double sum is ≥ 0 for every choice of n and a_1, …, a_n; that is, γX must be a non-negative definite function.
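As a numerical check of this argument (an added illustration, not part of the original notes), the Python sketch below assumes Gaussian IID noise with σ² = 0.25, for which γX(s, t) = σ² when s = t and 0 otherwise, picks an arbitrary coefficient vector a, and confirms that the double sum Σs Σt a_s a_t γX(s, t) is non-negative and matches the simulated variance of Y = Σt a_t Xt.

```python
import numpy as np

rng = np.random.default_rng(6550)

# IID N(0, sigma^2) noise: gamma_X(s, t) = sigma^2 when s = t, and 0 otherwise.
sigma2 = 0.25
n = 5
a = np.array([1.0, -2.0, 0.5, 3.0, -1.0])   # arbitrary real constants a_1, ..., a_n

# Theoretical variance of Y = sum_t a_t X_t via the autocovariance function:
# var(Y) = sum_s sum_t a_s a_t gamma_X(s, t) = sigma^2 * sum_t a_t^2  (>= 0).
Gamma = sigma2 * np.eye(n)                  # matrix of gamma_X(s, t) values
var_theory = a @ Gamma @ a

# Monte Carlo check: simulate many replications of (X_1, ..., X_n) and form Y.
reps = 200_000
X = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
Y = X @ a
print(var_theory, Y.var())                  # both close to sigma^2 * sum(a**2) = 3.8125
```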
Autocorrelation function

• The autocorrelation function of {Xt} is
  ρX(s, t) = corr(Xs, Xt) = γX(s, t) / √(γX(s, s) γX(t, t)).
• It measures the strength of linear association between the two RVs Xs and Xt.
• Properties:
  1. −1 ≤ ρX(s, t) ≤ 1 for each s and t.
  2. ρX(s, t) = ρX(t, s) for each s and t.
  3. ρX(t, t) = 1 for each t.
  4. ρX(s, t) is a non-negative definite function.

Strictly stationary processes

• In strict stationarity the joint distributions of sets of RVs are unaffected by time shifts.
• A time series {Xt} is strictly stationary if
  (Xt1, …, Xtn) =d (Xt1+h, …, Xtn+h)
  for all integers h, all n ≥ 1, and all time points {tk}. (Here =d means "equal in distribution to".)
• Then:
  1. {Xt} is identically distributed (but not necessarily independent!).
  2. (Xt, Xt+h) =d (X1, X1+h) for all t and h.
  3. When µX is finite, µX(t) = µX is independent of the time t.
  4. When the variance function exists, γX(s, t) = γX(s + h, t + h) for any s, t, and h.

(Weakly) stationary processes

• {Xt} is (weakly) stationary if
  1. E(Xt) = µX(t) = µX for some constant µX which does not depend on t;
  2. cov(Xt, Xt+h) = γX(t, t + h) = γX(h), a finite constant that can depend on h but not on t.
• The quantity h is called the lag.
• Other terms for this type of stationarity include second-order, covariance, and wide-sense stationarity.
• Relating weak and strict stationarity:
  1. A strictly stationary process {Xt} is also weakly stationary as long as var(Xt) is finite for all t.
  2. Weak stationarity does not imply strict stationarity!

Autocovariance function of a stationary process

• The autocovariance function (ACVF) of a stationary process {Xt} is defined to be
  γX(h) = cov(Xt, Xt+h) = E[(Xt − µX)(Xt+h − µX)].
• It measures the amount of dependence between Xt and Xt+h.
• Properties of the ACVF:
  1. γX(0) = var(Xt).
  2. γX(−h) = γX(h) for each h.
  3. γX(s − t), as a function of s and t, is non-negative definite.

Autocorrelation function of a stationary process

• The autocorrelation function (ACF) of a stationary process {Xt} is defined to be
  ρX(h) = γX(h) / γX(0).
• Exercise: prove that this is true from the previous definitions.
• Properties of the ACF:
  1. −1 ≤ ρX(h) ≤ 1 for each h.
  2. ρX(0) = 1.
  3. ρX(−h) = ρX(h) for each h.
  4. ρX(s − t), as a function of s and t, is non-negative definite.

The white noise process

• Assume E(Xt) = µ and var(Xt) = σ² < ∞.
• {Xt} is a white noise or WN(µ, σ²) process if
  γX(h) = 0  for h ≠ 0.
• {Xt} is clearly stationary.
• But the distributions of Xt and Xt+1 can be different!
• All IID noise with finite variance is white noise (but the converse need not be true).

Example realizations of white noise processes

[Figures: a realization and sample ACF (lags 0–10) for IID N(2, 4) noise, and a realization and sample ACF for IID chi-squared(2) noise.]
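To mimic these two examples in code, the following Python sketch (an added illustration, not part of the original slides) simulates IID N(2, 4) and IID chi-squared(2) noise of length 128 and computes each sample ACF out to lag 10; both behave as white noise even though their marginal distributions differ.

```python
import numpy as np

rng = np.random.default_rng(6550)
n = 128

def sample_acf(x, max_lag=10):
    """Sample autocorrelation of a series x at lags 0, 1, ..., max_lag."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    return np.array([np.sum(xc[:len(x) - h] * xc[h:]) / denom
                     for h in range(max_lag + 1)])

# Two IID (hence white) noise processes; both have mean 2 and variance 4,
# but very different marginal distributions.
normal_noise = rng.normal(loc=2.0, scale=2.0, size=n)   # IID N(2, 4)
chisq_noise = rng.chisquare(df=2, size=n)               # IID chi-squared(2)

# Both sample ACFs are 1 at lag 0 and close to 0 at the other lags.
print(np.round(sample_acf(normal_noise), 2))
print(np.round(sample_acf(chisq_noise), 2))
```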
The moving average process of first order, MA(1)

• Let {Zt} be a WN(0, σ²) process and let θ be some real-valued constant. For each integer t let
  Xt = Zt + θZt−1.
• The sequence of RVs {Xt} is called the moving average process of order 1, or MA(1) process. (It extends naturally to the MA(q) process.)
• Exercise: by calculating µX(t) and γX(t, t + h), show that the MA(1) process {Xt} is stationary. Also calculate the ACF.

MA(1) process (cont.)

[The two continuation slides contain only this heading.]

Example realizations of MA(1) processes

• The same {Zt} sequence is used in each case.
[Figures: a realization and sample ACF (lags 0–10) for θ = 0.25, and a realization and sample ACF for θ = 0.75.]

First order autoregressive process, AR(1)

• Let {Zt} be a WN(0, σ²) process, and let −1 < φ < 1 be a constant.
• Assume:
  1. {Xt} is a stationary process with
     Xt = φXt−1 + Zt
     for each integer t;
  2. Xs and Zt are uncorrelated for each s < t (future noise is uncorrelated with the current time point).
• We will see later that there is only one unique solution to this equation.
• Such a sequence {Xt} of RVs is called an AR(1) process.
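The short Python sketch below (added for illustration, not part of the original slides) simulates MA(1) realizations for θ = 0.25 and θ = 0.75 from the same {Zt} sequence, plus an AR(1) realization with φ = 0.7, and prints their sample ACFs; the MA(1) sample ACFs are near zero beyond lag 1, while the AR(1) sample ACF decays gradually. The burn-in length and the Gaussian choice for {Zt} are assumptions made only for this sketch.

```python
import numpy as np

rng = np.random.default_rng(6550)
n, sigma = 128, 1.0
Z = rng.normal(0.0, sigma, size=n + 1)      # WN(0, sigma^2); Gaussian here for simplicity

# MA(1): X_t = Z_t + theta * Z_{t-1}, using the same Z sequence for each theta.
def ma1(theta):
    return Z[1:] + theta * Z[:-1]

# AR(1): X_t = phi * X_{t-1} + Z_t, with a burn-in so the start-up effect is negligible.
def ar1(phi, burn_in=200):
    z = rng.normal(0.0, sigma, size=n + burn_in)
    x = np.zeros(n + burn_in)
    for t in range(1, n + burn_in):
        x[t] = phi * x[t - 1] + z[t]
    return x[burn_in:]

def sample_acf(x, max_lag=10):
    """Sample autocorrelation of a series x at lags 0, 1, ..., max_lag."""
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    return np.array([np.sum(xc[:len(xc) - h] * xc[h:]) / denom
                     for h in range(max_lag + 1)])

print(np.round(sample_acf(ma1(0.25)), 2))
print(np.round(sample_acf(ma1(0.75)), 2))
print(np.round(sample_acf(ar1(0.7)), 2))
```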