Time Series Forecasting Using a Moving Average Model for Extrapolation of the Number of Tourists
Total pages: 16 · File type: PDF · Size: 1020 KB
Recommended publications
-
Chapter 2 Time Series and Forecasting
2.1 Introduction

Data are frequently recorded at regular time intervals, for instance, daily stock market indices, the monthly rate of inflation or annual profit figures. In this chapter we think about how to display and model such data. We will consider how to detect trends and seasonal effects and then use these to make forecasts. As well as reviewing the methods covered in MAS1403, we will also consider a class of time series models known as autoregressive moving average models. Why is this topic useful? Making forecasts allows organisations to make better decisions and to plan more efficiently: reliable forecasts enable a retail outlet to anticipate demand, hospitals to plan staffing levels and manufacturers to keep appropriate levels of inventory.

2.2 Displaying and describing time series

A time series is a collection of observations made sequentially in time. When observations are made continuously, the time series is said to be continuous; when observations are taken only at specific time points, it is said to be discrete. In this course we consider only discrete time series, where the observations are taken at equal intervals. The first step in the analysis of a time series is usually to plot the data against time, in a time series plot. Suppose we have the following four-monthly sales figures for Turner's Hangover Cure, as described in Practical 2 (in thousands of pounds):

         Jan–Apr   May–Aug   Sep–Dec
  2006      8         10        13
  2007     10         11        14
  2008     10         11        15
  2009     11         13        16

We could enter these data into a single column (say column C1) in Minitab, and then click on Graph > Time Series Plot > Simple > OK; entering C1 in Series and then clicking OK gives the graph shown in Figure 2.1.
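The chapter's worked example uses Minitab; as a language-neutral alternative, here is a minimal matplotlib sketch (Python is an assumption, not the course's tool) that reproduces the same time series plot from the table above:

```python
# A minimal sketch of the time series plot described above; the sales figures
# are the ones given in the table.
import matplotlib.pyplot as plt

sales = [8, 10, 13, 10, 11, 14, 10, 11, 15, 11, 13, 16]  # 2006-2009, three periods per year
periods = range(1, len(sales) + 1)

plt.plot(periods, sales, marker="o")
plt.xlabel("Four-month period (Jan-Apr 2006 = 1)")
plt.ylabel("Sales (thousands of pounds)")
plt.title("Turner's Hangover Cure: four-monthly sales")
plt.show()
```
-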
Demand Forecasting
BIZ2121 Production & Operations Management: Demand Forecasting. Sung Joo Bae, Associate Professor, Yonsei University School of Business.

Unilever Customer Demand Planning (CDP) System. Statistical information (shipment history, current order information) feeds a demand-planning system, together with promotional demand increases and other detailed information (external market research, internal sales projections). Forecast information is relayed to the different distribution channels and other units. Connecting to POS (point-of-sale) data and comparing it to forecast data is a very valuable way to update the system. Results: reduced inventory, better customer service.

Forecasting. Forecasts are critical inputs to business plans, annual plans, and budgets. Finance, human resources, marketing, operations, and supply chain managers need forecasts to plan:
- output levels
- purchases of services and materials
- workforce and output schedules
- inventories
- long-term capacities

Forecasts are made on many different variables:
- Uncertain variables: competitor strategies, regulatory changes, technological changes, processing times, supplier lead times, quality losses.
- Different methods are used: judgment, opinions of knowledgeable people, averages of experience, regression, and time-series techniques.
- No forecast is perfect, so constant updating of plans is important.

Forecasts are important to managing both processes and supply chains: demand forecast information can be used for coordinating supply chain inputs and the design of internal processes (especially …
-
Moving Average Filters
CHAPTER 15: Moving Average Filters

The moving average is the most common filter in DSP, mainly because it is the easiest digital filter to understand and use. In spite of its simplicity, the moving average filter is optimal for a common task: reducing random noise while retaining a sharp step response. This makes it the premier filter for time domain encoded signals. However, the moving average is the worst filter for frequency domain encoded signals, with little ability to separate one band of frequencies from another. Relatives of the moving average filter include the Gaussian, Blackman, and multiple-pass moving average. These have slightly better performance in the frequency domain, at the expense of increased computation time.

Implementation by Convolution. As the name implies, the moving average filter operates by averaging a number of points from the input signal to produce each point in the output signal. In equation form, this is written:

$$y[i] = \frac{1}{M} \sum_{j=0}^{M-1} x[i+j] \qquad \text{(Equation 15-1)}$$

where $x[\,]$ is the input signal, $y[\,]$ is the output signal, and $M$ is the number of points used in the moving average. This equation uses only points on one side of the output sample being calculated. For example, in a 5-point moving average filter, point 80 in the output signal is given by:

$$y[80] = \frac{x[80] + x[81] + x[82] + x[83] + x[84]}{5}$$

As an alternative, the group of points from the input signal can be chosen symmetrically around the output point:

$$y[80] = \frac{x[78] + x[79] + x[80] + x[81] + x[82]}{5}$$

This corresponds to changing the summation in Eq. …
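A minimal NumPy sketch of Equation 15-1; the noisy-step test signal and all names are illustrative assumptions, not the book's code:

```python
import numpy as np

def moving_average(x, M):
    """Implementation by convolution: y[i] = (1/M) * sum_{j=0}^{M-1} x[i+j]."""
    # A uniform kernel makes convolution identical to a sliding mean.
    return np.convolve(x, np.ones(M) / M, mode="valid")

# Noisy step input: the task for which the moving average is optimal.
rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(100), np.ones(100)]) + 0.2 * rng.standard_normal(200)
y = moving_average(x, 5)

# The 5-point example from the text: y[80] = (x[80]+x[81]+x[82]+x[83]+x[84]) / 5
assert np.isclose(y[80], x[80:85].mean())
```
-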
Kalman and Particle Filtering
Abstract: The Kalman and Particle filters are algorithms that recursively update an estimate of the state and find the innovations driving a stochastic process given a sequence of observations. The Kalman filter accomplishes this goal by linear projections, while the Particle filter does so by a sequential Monte Carlo method. With the state estimates, we can forecast and smooth the stochastic process. With the innovations, we can estimate the parameters of the model. The article discusses how to set a dynamic model in a state-space form, derives the Kalman and Particle filters, and explains how to use them for estimation.

Kalman and Particle Filtering

Since both filters start with a state-space representation of the stochastic processes of interest, section 1 presents the state-space form of a dynamic model. Then section 2 introduces the Kalman filter and section 3 develops the Particle filter. For extended expositions of this material, see Doucet, de Freitas, and Gordon (2001), Durbin and Koopman (2001), and Ljungqvist and Sargent (2004).

1. The state-space representation of a dynamic model

A large class of dynamic models can be represented by a state-space form:

$$X_{t+1} = \varphi(X_t, W_{t+1}; \gamma) \quad (1)$$
$$Y_t = g(X_t, V_t; \gamma) \quad (2)$$

This representation handles a stochastic process by finding three objects: a vector that describes the position of the system (a state, $X_t \in X \subset \mathbb{R}^l$) and two functions, one mapping the state today into the state tomorrow (the transition equation, (1)) and one mapping the state into the observables, $Y_t$ (the measurement equation, (2)).
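As a concrete illustration, here is a hedged NumPy sketch of the Kalman recursion for the linear-Gaussian special case of the state-space form above, $X_{t+1} = A X_t + W_{t+1}$, $Y_t = C X_t + V_t$ with $W \sim N(0,Q)$ and $V \sim N(0,R)$; the article treats the general functions $\varphi$ and $g$, and all names below are illustrative:

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Return filtered state means and innovations for a sequence of observation vectors y."""
    x, P = x0, P0
    means, innovations = [], []
    for yt in y:
        # Prediction step: propagate the estimate through the transition equation.
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        # Innovation: the part of y_t not explained by the prediction.
        nu = yt - C @ x_pred
        S = C @ P_pred @ C.T + R
        K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain (linear projection)
        # Update step: correct the prediction with the innovation.
        x = x_pred + K @ nu
        P = (np.eye(len(x0)) - K @ C) @ P_pred
        means.append(x)
        innovations.append(nu)
    return np.array(means), np.array(innovations)
```

With the returned innovations one can evaluate a Gaussian likelihood and so estimate the parameters, which is the use the article describes.
-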
Lecture 19: Wavelet Compression of Time Series and Images
Lecture 19: Wavelet compression of time series and images
© Christopher S. Bretherton, Winter 2014
Ref: Matlab Wavelet Toolbox help.

19.1 Wavelet compression of a time series

The last section of wavelet leleccum notoolbox.m demonstrates the use of wavelet compression on a time series. The idea is to keep the wavelet coefficients of largest amplitude and zero out the small ones.

19.2 Wavelet analysis/compression of an image

Wavelet analysis is easily extended to two-dimensional images or datasets (data matrices) by first doing a wavelet transform of each column of the matrix, then transforming each row of the result (see wavelet image). The wavelet coefficient matrix has the highest-level (largest-scale) averages in the first rows/columns, then successively smaller detail scales further down the rows/columns. The example also shows fine results with 50-fold data compression.

19.3 Continuous Wavelet Transform (CWT)

Given a continuous signal $u(t)$ and an analyzing wavelet $\psi(x)$, the CWT has the form

$$W(\lambda, t) = \lambda^{-1/2} \int_{-\infty}^{\infty} \psi\!\left(\frac{s-t}{\lambda}\right) u(s)\, ds \quad (19.3.1)$$

Here $\lambda$, the scale, is a continuous variable. We insist that $\psi$ have mean zero and that its square integrates to 1. The continuous Haar wavelet is defined:

$$\psi(t) = \begin{cases} 1 & 0 < t < 1/2 \\ -1 & 1/2 < t < 1 \\ 0 & \text{otherwise} \end{cases} \quad (19.3.2)$$

$W(\lambda, t)$ is proportional to the difference of running means of $u$ over successive intervals of length $\lambda/2$. In practice, for a discrete time series, the integral is evaluated as a Riemann sum using the Matlab wavelet toolbox function cwt.
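The notes rely on the Matlab wavelet toolbox; the following self-contained NumPy sketch (an assumption, not the lecture's code) illustrates the compression idea of section 19.1 with a hand-rolled Haar transform: keep the largest-amplitude coefficients and zero out the rest.

```python
import numpy as np

def haar_forward(u):
    """Full Haar transform of a signal whose length is a power of two."""
    u = np.asarray(u, dtype=float).copy()
    coeffs = []
    while len(u) > 1:
        avg = (u[0::2] + u[1::2]) / np.sqrt(2.0)   # running means
        det = (u[0::2] - u[1::2]) / np.sqrt(2.0)   # detail coefficients
        coeffs.append(det)
        u = avg
    coeffs.append(u)                               # final largest-scale average
    return coeffs

def haar_inverse(coeffs):
    u = coeffs[-1].copy()
    for det in reversed(coeffs[:-1]):              # coarsest detail scale first
        out = np.empty(2 * len(u))
        out[0::2] = (u + det) / np.sqrt(2.0)
        out[1::2] = (u - det) / np.sqrt(2.0)
        u = out
    return u

def compress(u, keep_fraction=0.05):
    """Zero out all but the largest-amplitude wavelet coefficients, then invert."""
    coeffs = haar_forward(u)
    flat = np.concatenate(coeffs)
    threshold = np.quantile(np.abs(flat), 1.0 - keep_fraction)
    thresholded = [np.where(np.abs(c) >= threshold, c, 0.0) for c in coeffs]
    return haar_inverse(thresholded)

t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.default_rng(1).standard_normal(1024)
print(np.abs(signal - compress(signal)).max())     # reconstruction error at 20x compression
```
-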
Mathematical Construction of Interpolation and Extrapolation Function by Taylor Polynomials (arXiv:2002.11438v1 [math.NA], 26 Feb 2020)
Nijat Shukurov, Department of Engineering Physics, Ankara University, Ankara, Turkey. E-mail: [email protected], [email protected]

Abstract: In this paper I propose a derivation of a unified interpolation and extrapolation function that predicts new values inside and outside the given range by expanding a direct Taylor series about the middle point of a given data set. The mathematical construction of the experimental model is derived in general form. Trigonometric and power functions are adopted as test functions in the development of the vital aspects of the numerical experiments. The experimental model was interpolated and extrapolated on data sets generated by the test functions, and the results predicted by the derived model are compared with analytical values.

KEYWORDS: Polynomial Interpolation, Extrapolation, Taylor Series, Expansion

1 Introduction

In scientific experiments and engineering applications, collected data are usually discrete and their physical meaning is often unpredictable. To estimate outcomes and to understand the phenomena, analytically controllable functions are desirable. In the mathematical field of numerical analysis such functions are called interpolation and extrapolation functions. Interpolation serves as a prediction tool within the range of a given discrete set; extrapolation functions, by contrast, are designed to predict values outside the range of the given data set. In this paper, a direct Taylor expansion is suggested as an instrument that estimates or approximates new points inside and outside the range from known individual values. The Taylor series is one of the most beautiful analogies in mathematics, making it possible to rewrite every smooth function as an infinite series of Taylor polynomials.
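A hedged NumPy sketch of the central idea: a truncated Taylor-style polynomial in powers of $(x - x_{mid})$ about the middle point of the data, used both inside (interpolation) and outside (extrapolation) the given range. The test function $\sin(x)$ and the degree are assumptions chosen for illustration, not the paper's exact construction:

```python
import numpy as np

x = np.linspace(0.0, 2.0, 21)          # the given data set
y = np.sin(x)                           # a trigonometric test function
x_mid = x.mean()                        # expansion point: middle of the range

# Least-squares fit in the shifted variable; coef[k] plays the role of
# the Taylor coefficient f^(k)(x_mid) / k!.
degree = 8
coef = np.polynomial.polynomial.polyfit(x - x_mid, y, degree)

def taylor_model(x_new):
    return np.polynomial.polynomial.polyval(x_new - x_mid, coef)

print(taylor_model(1.0) - np.sin(1.0))   # interpolation error, inside the range
print(taylor_model(2.5) - np.sin(2.5))   # extrapolation error, outside the range
```
-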
Stable Extrapolation of Analytic Functions
STABLE EXTRAPOLATION OF ANALYTIC FUNCTIONS
LAURENT DEMANET AND ALEX TOWNSEND

Abstract. This paper examines the problem of extrapolation of an analytic function for $x > 1$ given perturbed samples from an equally spaced grid on $[-1, 1]$. Mathematical folklore states that extrapolation is in general hopelessly ill-conditioned, but we show that a more precise statement carries an interesting nuance. For a function $f$ on $[-1, 1]$ that is analytic in a Bernstein ellipse with parameter $\rho > 1$, and for a uniform perturbation level $\varepsilon$ on the function samples, we construct an asymptotically best extrapolant $e(x)$ as a least squares polynomial approximant of degree $M^*$ given explicitly. We show that the extrapolant $e(x)$ converges to $f(x)$ pointwise in the interval $I_\rho = [1, (\rho + \rho^{-1})/2)$ as $\varepsilon \to 0$, at a rate given by an $x$-dependent fractional power of $\varepsilon$. More precisely, for each $x \in I_\rho$ we have

$$|f(x) - e(x)| = O\!\left(\varepsilon^{-\log r(x)/\log \rho}\right), \qquad r(x) = \frac{x + \sqrt{x^2 - 1}}{\rho},$$

up to log factors, provided that the oversampling condition

$$M^* \le \frac{1}{2}\sqrt{N}$$

is satisfied, which is known to be needed from approximation theory. In short, extrapolation enjoys a weak form of stability, up to a fraction of the characteristic smoothness length. The number of function samples, $N + 1$, does not bear on the size of the extrapolation error provided that it obeys the oversampling condition. We also show that one cannot construct an asymptotically more accurate extrapolant from $N + 1$ equally spaced samples than $e(x)$, using any other linear or nonlinear procedure.
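A hedged NumPy sketch of the construction just described: a least squares polynomial fit of degree $M \le \frac{1}{2}\sqrt{N}$ to perturbed equispaced samples on $[-1,1]$, evaluated beyond $x = 1$. The analytic test function and the noise level are assumptions, not the paper's experiments:

```python
import numpy as np

N = 400
eps = 1e-8                                   # uniform perturbation level
x = np.linspace(-1.0, 1.0, N + 1)            # N + 1 equally spaced samples
rng = np.random.default_rng(0)
f = np.exp(x) + eps * rng.uniform(-1, 1, x.shape)   # exp is entire, so analytic

M = int(np.sqrt(N) / 2)                      # oversampling condition: M <= sqrt(N)/2
coef = np.polynomial.chebyshev.chebfit(x, f, M)     # least squares polynomial fit

for xt in (1.05, 1.2, 1.4):
    approx = np.polynomial.chebyshev.chebval(xt, coef)
    # Error grows with distance beyond x = 1, as the pointwise rate predicts.
    print(xt, abs(np.exp(xt) - approx))
```
-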
Time Series and Forecasting
A time series is a sequence of measurements over time, usually obtained at equally spaced intervals: daily, monthly, quarterly or yearly.

[Figure: Dow Jones Industrial Average, daily closing values from 1/3/00 to 9/3/03, ranging roughly from 7000 to 12000.]

Components of a Time Series
- Secular trend (linear or nonlinear)
- Cyclical variation: rises and falls over periods longer than one year
- Seasonal variation: patterns of change within a year, typically repeating themselves
- Residual variation

The components combine as

$$Y_t = T_t + C_t + S_t + R_t$$

A time series with a linear trend follows

$$Y_t = a + b\,t + e_t$$

[Figures: AOL subscribers (millions) by quarter, 1995-2000; average daily visits in August to the emergency room at Richmond Memorial Hospital over ten years; imports (MM), 1986-1998, showing a nonlinear trend.]

Time series with nonlinear trend:
- Data that increase by a constant amount at each successive time period show a linear trend.
- Data that increase by increasing amounts at each successive time period show a curvilinear trend.
- Data that increase by an equal percentage at each successive time period can be made linear by applying a logarithmic transformation (see the sketch after this list).
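A minimal NumPy sketch of the two fits implied above: the linear trend model $Y_t = a + bt + e_t$ and the logarithmic transformation that linearises constant-percentage growth. The sample series are assumptions for illustration:

```python
import numpy as np

t = np.arange(1, 11)
rng = np.random.default_rng(0)

# Linear trend: fit Y_t = a + b t + e_t by least squares.
linear_series = 50 + 3.0 * t + rng.normal(0, 1, 10)
b, a = np.polyfit(t, linear_series, 1)          # slope and intercept of the trend
print(f"trend: Y_t = {a:.1f} + {b:.2f} t")

# Constant-percentage growth (8% per period) is curvilinear in levels
# but linear in logs, so fit the trend to log(Y_t).
growth_series = 100 * 1.08 ** t
b_log, a_log = np.polyfit(t, np.log(growth_series), 1)
print(f"log-linear slope {b_log:.4f} -> implied growth rate {np.exp(b_log) - 1:.2%}")
```
-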
Penalised Regressions vs. Autoregressive Moving Average Models for Forecasting Inflation
Ospina-Holguín and Padilla-Ospina, Económicas CUC, vol. 41, no. 1, pp. 65-80, January-June 2020. DOI: https://doi.org/10.17981/econcuc.41.1.2020.Econ.3
Research article. Received: 07/10/2019. Accepted: 10/11/2019. Published: 15/11/2019.
Javier Humberto Ospina-Holguín, Universidad del Valle, Cali (Colombia), [email protected]

Abstract. This article relates seasonal autoregressive moving average (SARMA) models to linear regression. Based on this relationship, the paper shows that penalised linear models can outperform the out-of-sample forecast accuracy of the best SARMA models in forecasting inflation as a function of past values, thanks to penalisation and cross-validation. The paper constructs a minimal functional example using ridge regression to compare both competing approaches to forecasting monthly inflation in 35 selected countries of the Organisation for Economic Co-operation and Development and in three groups of countries. The results empirically test the hypothesis that penalised linear regression, and ridge regression in particular, can outperform the best standard SARMA models calculated through a grid search when forecasting inflation. A new and effective technique for forecasting inflation based on past values is thus provided for use by financial analysts and investors. The results indicate that more attention should be paid to machine learning techniques for forecasting inflation time series, even ones as basic as penalised linear regressions, because of their superior empirical performance.
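A hedged sketch of the general approach the abstract describes: forecast inflation from its own lags with a penalised (ridge) regression whose penalty is chosen by cross-validation. The synthetic series, lag count, and holdout split are assumptions, not the paper's data or exact procedure:

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
inflation = np.cumsum(rng.normal(0, 0.1, 240)) + 2.0   # stand-in monthly series

p = 12                                   # use the last 12 months as regressors
X = np.column_stack([inflation[i : i + len(inflation) - p] for i in range(p)])
y = inflation[p:]                        # next month's value, given the lags

# Ridge regression with the penalty chosen by cross-validation,
# holding out the final 12 months to check out-of-sample accuracy.
model = RidgeCV(alphas=np.logspace(-3, 3, 25)).fit(X[:-12], y[:-12])
rmse = np.sqrt(np.mean((model.predict(X[-12:]) - y[-12:]) ** 2))
print("held-out RMSE:", rmse)
```
-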
Package 'GMZTests'
Package 'GMZTests' — March 18, 2021

Type: Package
Title: Statistical Tests
Description: A collection of functions to perform statistical tests of the following methods: detrended fluctuation analysis; the RHODCCA coefficient <doi:10.1103/PhysRevE.84.066118>; the DMC coefficient, Silva-Filho et al. (2021) <doi:10.1016/j.physa.2020.125285>; the Delta RHODCCA coefficient, Guedes et al. (2018) <doi:10.1016/j.physa.2018.02.148> and <doi:10.1016/j.dib.2018.03.080>; the Delta DMCA coefficient; and the Delta DMC coefficient.
Version: 0.1.4
Date: 2021-03-19
Maintainer: Everaldo Freitas Guedes <[email protected]>
License: GPL-3
URL: https://github.com/efguedes/GMZTests
BugReports: https://github.com/efguedes/GMZTests
NeedsCompilation: no
Encoding: UTF-8
LazyData: true
Imports: stats, DCCA, PerformanceAnalytics, nonlinearTseries, fitdistrplus, fgpt, tseries
Suggests: xts, zoo, quantmod, fracdiff
RoxygenNote: 7.1.1
Author: Everaldo Freitas Guedes [aut, cre] (https://orcid.org/0000-0002-2986-7367), Aloísio Machado Silva-Filho [aut] (https://orcid.org/0000-0001-8250-1527), Gilney Figueira Zebende [aut] (https://orcid.org/0000-0003-2420-9805)
Repository: CRAN
Date/Publication: 2021-03-18 13:10:04 UTC

R topics documented: deltadmc.test, deltadmca.test, deltarhodcca.test, dfa.test, dmc.test, dmca.test, rhodcca.test

deltadmc.test — Statistical test for the Delta DMC multiple detrended cross-correlation coefficient

Description: This function performs the statistical test for the Delta DMC cross-correlation coefficient from three univariate ARFIMA processes.

Usage: deltadmc.test(x1, x2, y, k, m, nu, rep, method)

Arguments: x1 — a vector containing a univariate time series.
-
Alternative Tests for Time Series Dependence Based on Autocorrelation Coefficients
Alternative Tests for Time Series Dependence Based on Autocorrelation Coefficients
Richard M. Levich and Rosario C. Rizzo*
Current draft: December 1998

Abstract: When autocorrelation is small, existing statistical techniques may not be powerful enough to reject the hypothesis that a series is free of autocorrelation. We propose two new and simple statistical tests (RHO and PHI) based on the unweighted sum of autocorrelation and partial autocorrelation coefficients. We analyze a set of simulated data to show the higher power of RHO and PHI in comparison to conventional tests for autocorrelation, especially in the presence of small but persistent autocorrelation. We show an application of our tests to data on currency futures to demonstrate their practical use. Finally, we indicate how our methodology could be used for a new class of time series models (the Generalized Autoregressive, or GAR, models) that take into account the presence of small but persistent autocorrelation.

An earlier version of this paper was presented at the Symposium on Global Integration and Competition, sponsored by the Center for Japan-U.S. Business and Economic Studies, Stern School of Business, New York University, March 27-28, 1997. We thank Paul Samuelson and the other participants at that conference for useful comments.
* Stern School of Business, New York University and Research Department, Bank of Italy, respectively.

1. Introduction

Economic time series are often characterized by positive …
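A hedged NumPy sketch of the RHO statistic as the abstract describes it: the unweighted sum of the first K sample autocorrelation coefficients (PHI is the analogous sum of partial autocorrelations). K, the demo series, and all names are assumptions; the paper's critical values and test procedure are not reproduced here:

```python
import numpy as np

def autocorrelations(x, K):
    """Sample autocorrelation coefficients at lags 1..K."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.sum(x * x)
    return np.array([np.sum(x[k:] * x[:-k]) / denom for k in range(1, K + 1)])

def rho_statistic(x, K):
    """RHO: the unweighted sum of the first K autocorrelation coefficients."""
    return autocorrelations(x, K).sum()

rng = np.random.default_rng(0)
white = rng.standard_normal(500)
persistent = np.convolve(white, np.full(5, 0.2))[:500]   # small but persistent autocorrelation

print("RHO, white noise:", rho_statistic(white, 10))
print("RHO, persistent :", rho_statistic(persistent, 10))
```
-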
On Sharp Extrapolation Theorems
ON SHARP EXTRAPOLATION THEOREMS

by Dariusz Panek
M.Sc., Mathematics, Jagiellonian University in Kraków, Poland, 1995
M.Sc., Applied Mathematics, University of New Mexico, USA, 2004
Ph.D., Mathematics, University of New Mexico, 2008

DISSERTATION submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Mathematics, The University of New Mexico, Albuquerque, New Mexico, December 2008.
© 2008, Dariusz Panek

Acknowledgments

I would like first to express my gratitude to M. Cristina Pereyra, my advisor, for her unconditional and compact support, her unbounded patience and constant inspiration. I would also like to thank my committee members Dr. Pedro Embid, Dr. Dimiter Vassilev, Dr. Jens Lorens, and Dr. Wilfredo Urbina for their time and positive feedback.

Abstract

Extrapolation is one of the most significant and powerful properties of the weighted theory. It basically states that an estimate on a weighted $L^{p_0}$ space for a single exponent $p_0 \ge 1$ and all weights in the Muckenhoupt class $A_{p_0}$ implies a corresponding $L^p$ estimate for all $p$, $1 < p < \infty$, and all weights in $A_p$. Sharp extrapolation theorems track down the dependence on the $A_p$ characteristic of the weight.