ARMA Time-Series Modeling with Graphical Models

Bo Thiesson, David Maxwell Chickering, David Heckerman, Christopher Meek
Microsoft Research
[email protected] [email protected] [email protected] [email protected]

Abstract

We express the classic ARMA time-series model as a directed graphical model. In doing so, we find that the deterministic relationships in the model make it effectively impossible to use the EM algorithm for learning model parameters. To remedy this problem, we replace the deterministic relationships with Gaussian distributions having a small variance, yielding the stochastic ARMA (σARMA) model. This modification allows us to use the EM algorithm to learn parameters and to forecast, even in situations where some data is missing. This modification, in conjunction with the graphical-model approach, also allows us to include cross predictors in situations where there are multiple time series and/or additional non-temporal covariates. More surprisingly, experiments suggest that the move to stochastic ARMA yields improved accuracy through better smoothing. We demonstrate the improvements afforded by cross prediction and better smoothing on real data.

1 Introduction

Graphical models have been used to represent time-series models for almost two decades (e.g., Dean and Kanazawa, 1988; Cooper, Horvitz, and Heckerman, 1988). The benefits of such a representation include the ability to apply standard graphical-model inference algorithms for parameter estimation (including estimation in the presence of missing data), for prediction, and for cross prediction, that is, the use of one time series to help predict another (e.g., Ghahramani, 1998; Meek, Chickering, and Heckerman, 2002; Bach and Jordan, 2004).

In this paper, we express the classic autoregressive moving average (ARMA) time-series model (e.g., Box, Jenkins, and Reinsel, 1994) as a graphical model to achieve similar benefits. As we shall see, the ARMA model includes deterministic relationships, making it effectively impossible to estimate model parameters via the Expectation-Maximization (EM) algorithm. Consequently, we introduce a variation of the ARMA model for which the EM algorithm can be applied. Our modification, called stochastic ARMA (σARMA), replaces the deterministic component of the ARMA model with a Gaussian distribution having a small variance. To our surprise, this addition not only has the desired effect of making the EM algorithm effective for parameter estimation, but our experiments also suggest that it provides a smoothing technique that produces more accurate estimates.

We have chosen to focus on ARMA time-series models in this paper, but our approach applies immediately to autoregressive integrated moving average (ARIMA) time-series models as well. In particular, an ARIMA model for a time series is simply an ARMA model for a preprocessed version of that same time series. The preprocessing consists of d consecutive differencing transformations, where each transformation replaces the observations with the differences between successive observations. For example, when d = 0 an ARIMA model is a regular ARMA model, when d = 1 an ARIMA model is an ARMA model of the differences, and when d = 2 an ARIMA model is an ARMA model of the differences of the differences. In practice, d ≤ 2 is almost always sufficient for good results (Box, Jenkins, and Reinsel, 1994).
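To make the differencing preprocessing concrete, here is a minimal sketch (ours, not code from the paper; the helper name `difference` and the use of Python with NumPy are assumptions) that applies d consecutive differencing transformations:

```python
import numpy as np

def difference(y: np.ndarray, d: int) -> np.ndarray:
    """Apply d consecutive differencing transformations.

    Each pass replaces the observations with the differences between
    successive observations, so the result has len(y) - d values.
    """
    for _ in range(d):
        y = y[1:] - y[:-1]  # differences between successive observations
    return y

y = np.array([1.0, 3.0, 6.0, 10.0, 15.0])
print(difference(y, 1))  # [2. 3. 4. 5.]  -- the differences
print(difference(y, 2))  # [1. 1. 1.]     -- differences of the differences
```

Fitting an ARMA model to difference(y, d) then corresponds to fitting an ARIMA model with d differencing steps to the original series.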
In Section 2, we review the ARMA model and introduce our stochastic variant. We also extend the model to allow for cross prediction, yielding σARMAxp, a model more general than previous extensions to cross-prediction ARMA (e.g., the vector ARMA model; Reinsel, 2003). In Section 3, we describe how the EM algorithm can be applied to σARMAxp for parameter estimation. Our approach is, to our knowledge, the first EM-based alternative to parameter estimation in the ARMA model class. In Section 4, we describe how to predict future observations, that is, forecast, using σARMAxp. In Sections 5 and 6, we demonstrate the utility of our extensions in an evaluation using two real data collections. We show that (1) the stochastic variant of ARMA produces more accurate predictions than those of standard ARMA, (2) estimation and prediction in the face of missing data using our approach yields better forecasts than a heuristic approach in which missing data are filled in by interpolation, and (3) the inclusion of cross predictors can lead to more accurate forecasting.

2 Time-Series Models

In this section, we review the well-known class of autoregressive moving average (ARMA) time-series models and define two closely related stochastic variations, the σARMA and σARMA* classes. We also define a generalization of these two stochastic variations, called σARMAxp and σARMA*xp, which additionally allow for selective cross-predictors from related time series.

We begin by introducing notation and nomenclature. We denote a temporal sequence of observation variables by Y = (Y_1, Y_2, ..., Y_T). Time-series data is a sequence of values for these variables, denoted y = (y_1, y_2, ..., y_T). We suppose that these observations are obtained at discrete, equispaced intervals of time. In this paper we consider incomplete observation sequences in the sense that some of the observation variables may have missing observations. For notational convenience, we will represent such a sequence of observations as a complete sequence; it will be clear from context that the sequence has missing observations.

In the time-series models that we consider in this paper, we associate a latent "white noise" variable with each observable variable. These latent variables are denoted E = (E_1, E_2, ..., E_T).

Some of the models will contain cross-predictor sequences. A cross-predictor sequence is a sequence of observation variables from a related time series, which is used in the predictive model for the time series under consideration. For each cross-predictor sequence in a model, a cross-predictor variable is associated with each observable variable. For instance, Y' = (Y'_1, Y'_2, ..., Y'_{T'}) and Y'' = (Y''_1, Y''_2, ..., Y''_{T''}) may be related time-series sequences where Y'_{t−1}, Y'_{t−12}, and Y''_{t−1} are cross-predictor variables for Y_t. Let C_t denote a vector of cross-predictor variables for Y_t. The set of cross-predictor vectors for all variables Y is denoted C = (C_1, C_2, ..., C_T).

Our stochastic time-series models handle incomplete time-series sequences in the sense that some values in the sequence may be missing. Hence, in real-world situations where the lengths of multiple cross-predicting time series do not match, we have two possibilities: we can introduce observation variables with missing values, or we can shorten a sequence of observation variables, as necessary. In the following we assume that Y, E, and C are all of the same length. For any sequence, say Y, we denote the sub-sequence consisting of the i'th through the j'th elements by Y_i^j = (Y_i, Y_{i+1}, ..., Y_j), i < j.

2.1 ARMA Models

In slightly different notation than usual (see, e.g., Box, Jenkins, and Reinsel, 1994, or Ansley, 1979), the ARMA(p, q) time-series model is defined by the deterministic relation

$$ Y_t = \zeta + \sum_{j=0}^{q} \beta_j E_{t-j} + \sum_{i=1}^{p} \alpha_i Y_{t-i} \qquad (1) $$

where ζ is the intercept, Σ_{i=1}^{p} α_i Y_{t−i} is the autoregressive (AR) part, Σ_{j=0}^{q} β_j E_{t−j} is the moving average (MA) part with β_0 fixed at 1, and E_t ∼ N(0, γ) is "white noise" with the E_t mutually independent for all t. The construction of this model therefore involves estimation of the free parameters ζ, (α_1, ..., α_p), (β_1, ..., β_q), and γ.

For a constructed model, the one-step-ahead forecast Ŷ_t given the past can be computed as

$$ \hat{Y}_t = \zeta + \sum_{j=1}^{q} \beta_j E_{t-j} + \sum_{i=1}^{p} \alpha_i Y_{t-i} \qquad (2) $$

where we exploit the fact that at any time t, the error in the ARMA model can be determined as the difference between the actual observed value and the one-step-ahead forecast:

$$ E_t = Y_t - \hat{Y}_t \qquad (3) $$

The variance for this forecast is γ.
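To make (1)-(3) concrete, the following sketch (ours, not from the paper; the function names and the NumPy-based implementation are assumptions) simulates an ARMA(p, q) series via the deterministic relation (1) and then recovers one-step-ahead forecasts and errors via (2) and (3), conditioning on the first R = max(p, q) values as in the conditional likelihood setting below:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_arma(zeta, alphas, betas, gamma, T):
    """Simulate Y_t = zeta + sum_j beta_j E_{t-j} + sum_i alpha_i Y_{t-i}
    (equation (1)), with beta_0 fixed at 1 and E_t ~ N(0, gamma) i.i.d."""
    p, q = len(alphas), len(betas)
    E = rng.normal(0.0, np.sqrt(gamma), T)  # gamma is a variance
    Y = np.zeros(T)
    for t in range(T):
        ar = sum(alphas[i] * Y[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
        ma = E[t] + sum(betas[j] * E[t - 1 - j] for j in range(q) if t - 1 - j >= 0)
        Y[t] = zeta + ma + ar
    return Y

def one_step_forecasts(y, zeta, alphas, betas):
    """Compute forecasts Yhat_t (equation (2)) and errors E_t = Y_t - Yhat_t
    (equation (3)), conditioning on the first R = max(p, q) observations
    and setting the unobserved earlier errors to zero."""
    p, q = len(alphas), len(betas)
    R = max(p, q)
    E = np.zeros(len(y))
    yhat = np.full(len(y), np.nan)
    for t in range(R, len(y)):
        yhat[t] = (zeta
                   + sum(betas[j] * E[t - 1 - j] for j in range(q))
                   + sum(alphas[i] * y[t - 1 - i] for i in range(p)))
        E[t] = y[t] - yhat[t]
    return yhat, E

y = simulate_arma(zeta=0.1, alphas=[0.5, -0.2], betas=[0.4, 0.3], gamma=1.0, T=200)
yhat, E = one_step_forecasts(y, 0.1, [0.5, -0.2], [0.4, 0.3])
print(np.nanvar(y - yhat))  # approaches gamma = 1.0 for the true parameters
```

For the true parameters and an invertible MA part, the recovered errors converge to the simulated white noise, so the variance of the one-step-ahead forecast errors is close to γ.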
An ARMA model can be represented by a directed graphical model (or Bayes net) with both stochastic and deterministic nodes. A node is deterministic if the value of the variable represented by that node is a deterministic function of the values of the variables represented by the nodes pointing to it in the graphical representation. From the definition of the ARMA models, we see that the observable variables (the Y's) are represented by deterministic nodes and the error variables (the E's) are represented by stochastic nodes. The relations between variables are defined by (1); accordingly, Y_{t−p}, ..., Y_{t−1} and E_{t−q}, ..., E_t all point to Y_t. In this paper we are interested in conditional likelihood models, where we condition on the first R = max(p, q) variables. Relations between variables for t ≤ R can therefore be ignored. The graphical representation for an ARMA(2,2) model is shown in Figure 1. It should be noted that if we artificially extend the time series back in time for R (unobserved) time steps, this model represents what is known in the literature as the exact likelihood model.

2.2 σARMA Models

We define the stochastic ARMA, or σARMA(p, q), model by replacing the deterministic relation (1) with a conditional Gaussian distribution whose mean is the right-hand side of (1) and whose variance σ² is fixed at a small value:

$$ Y_t \sim N\Big(\zeta + \sum_{j=0}^{q} \beta_j E_{t-j} + \sum_{i=1}^{p} \alpha_i Y_{t-i},\; \sigma^2\Big) $$

with β_0 again fixed at 1. A variation of this model is obtained by instead letting β_0 vary freely. We will denote this class of models as σARMA*.

A σARMA (or σARMA*) model has the same graphical representation as the corresponding ARMA model, except that the deterministic nodes are now stochastic.
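The following sketch (ours; the paper gives no code, and the choice σ² = 1e-4 is an illustrative assumption) samples from a σARMA model by drawing each Y_t from a Gaussian whose mean is the right-hand side of (1); taking σ² → 0 recovers the deterministic ARMA relation:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_sigma_arma(zeta, alphas, betas, gamma, sigma2, T):
    """Sample a sigma-ARMA series: Y_t is Gaussian with mean equal to the
    right-hand side of the ARMA relation (1) and a small variance sigma2."""
    p, q = len(alphas), len(betas)
    E = rng.normal(0.0, np.sqrt(gamma), T)  # latent white-noise variables
    Y = np.zeros(T)
    for t in range(T):
        mean = (zeta + E[t]
                + sum(betas[j] * E[t - 1 - j] for j in range(q) if t - 1 - j >= 0)
                + sum(alphas[i] * Y[t - 1 - i] for i in range(p) if t - 1 - i >= 0))
        Y[t] = rng.normal(mean, np.sqrt(sigma2))  # sigma2 -> 0 recovers ARMA
    return Y

y = simulate_sigma_arma(zeta=0.1, alphas=[0.5], betas=[0.4], gamma=1.0,
                        sigma2=1e-4, T=100)
```

Making the Y nodes stochastic in this way is what allows the EM algorithm to be applied for parameter estimation, as described in Section 3.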
2.3 σARMAxp Models