CHAPTER 2 Univariate Time Series Models


2.1 Least Squares Regression

We begin our discussion of univariate and multivariate time series methods by considering the idea of a simple regression model, which we have met before in other contexts. All of the multivariate methods follow, in some sense, from the ideas involved in simple univariate linear regression. In this case, we assume that there is some collection of fixed, known functions of time, say $z_{t1}, z_{t2}, \ldots, z_{tq}$, that are influencing our output $y_t$, which we know to be random. We express this relation between the inputs and outputs as

$$y_t = \beta_1 z_{t1} + \beta_2 z_{t2} + \cdots + \beta_q z_{tq} + e_t \qquad (2.1)$$

at the time points $t = 1, 2, \ldots, n$, where $\beta_1, \ldots, \beta_q$ are unknown fixed regression coefficients and $e_t$ is a random error or noise term, assumed to be white noise; this means that the observations have zero means, equal variances $\sigma^2$, and are independent. We traditionally assume also that the white noise series, $e_t$, is Gaussian, or normally distributed.

Example 2.1: We have assumed implicitly that the model
$$y_t = \beta_1 + \beta_2 t + e_t$$
is reasonable in our discussion of detrending in Chapter 1. This is in the form of the regression model (2.1) when one makes the identification $z_{t1} = 1$, $z_{t2} = t$. The problem in detrending is to estimate the coefficients $\beta_1$ and $\beta_2$ in the above equation and to detrend by constructing the estimated residual series $\hat{e}_t$. We discuss the precise way in which this is accomplished below.

The linear regression model described by Equation (2.1) can be conveniently written in slightly more general matrix notation by defining the column vectors $\boldsymbol{z}_t = (z_{t1}, \ldots, z_{tq})'$ and $\boldsymbol{\beta} = (\beta_1, \ldots, \beta_q)'$, so that we write (2.1) in the alternate form

$$y_t = \boldsymbol{\beta}'\boldsymbol{z}_t + e_t. \qquad (2.2)$$

To find estimators for $\boldsymbol{\beta}$ and $\sigma^2$, it is natural to determine the coefficient vector $\boldsymbol{\beta}$ minimizing $\sum e_t^2$ with respect to $\boldsymbol{\beta}$. This yields the least squares or maximum likelihood estimator $\hat{\boldsymbol{\beta}}$, and the maximum likelihood estimator for $\sigma^2$, which is proportional to the unbiased estimator

$$\hat{\sigma}^2 = \frac{1}{n-q}\sum_{t=1}^{n}\bigl(y_t - \hat{\boldsymbol{\beta}}'\boldsymbol{z}_t\bigr)^2. \qquad (2.3)$$

An alternate way of writing the model (2.2) is as

$$\boldsymbol{y} = Z\boldsymbol{\beta} + \boldsymbol{e}, \qquad (2.4)$$

where $Z = (\boldsymbol{z}_1, \boldsymbol{z}_2, \ldots, \boldsymbol{z}_n)'$ is an $n \times q$ matrix composed of the values of the input variables at the observed time points, $\boldsymbol{y} = (y_1, y_2, \ldots, y_n)'$ is the vector of observed outputs, and the errors are stacked in the vector $\boldsymbol{e} = (e_1, e_2, \ldots, e_n)'$. The ordinary least squares estimators $\hat{\boldsymbol{\beta}}$ are the solutions to the normal equations

$$Z'Z\hat{\boldsymbol{\beta}} = Z'\boldsymbol{y}.$$

You need not be concerned with how the above equation is solved in practice, as all computer packages have efficient software for inverting the $q \times q$ matrix $Z'Z$ to obtain

$$\hat{\boldsymbol{\beta}} = (Z'Z)^{-1}Z'\boldsymbol{y}. \qquad (2.5)$$

An important quantity that all software produces is a measure of uncertainty for the estimated regression coefficients, say

$$\widehat{\mathrm{cov}}\{\hat{\boldsymbol{\beta}}\} = \hat{\sigma}^2(Z'Z)^{-1}. \qquad (2.6)$$

If $c_{ij}$ denotes an element of $C = (Z'Z)^{-1}$, then $\mathrm{cov}(\hat{\beta}_i, \hat{\beta}_j) = \sigma^2 c_{ij}$, and a $100(1-\alpha)\%$ confidence interval for $\beta_i$ is

$$\hat{\beta}_i \pm t_{n-q}(\alpha/2)\,\hat{\sigma}\sqrt{c_{ii}}, \qquad (2.7)$$

where $t_{df}(\alpha/2)$ denotes the upper $100(1-\alpha/2)\%$ point of a $t$ distribution with $df$ degrees of freedom.
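The computations in (2.3) and (2.5)-(2.7) are simple matrix operations. The following is a minimal sketch, assuming NumPy and SciPy are available; the design matrix and the response series below are synthetic placeholders, not the data analyzed in this chapter.

```python
# Sketch of the least squares calculations in (2.3) and (2.5)-(2.7).
# The series y and the design matrix Z are synthetic stand-ins.
import numpy as np
from scipy import stats

n, q = 123, 2
t = np.arange(1, n + 1)
Z = np.column_stack([np.ones(n), t / 100.0])                  # z_t1 = 1, z_t2 = t/100
rng = np.random.default_rng(0)
y = 38.0 + 1.0 * (t / 100.0) + rng.normal(scale=0.3, size=n)  # placeholder y_t

beta_hat = np.linalg.solve(Z.T @ Z, Z.T @ y)                  # normal equations, (2.5)
resid = y - Z @ beta_hat
sigma2_hat = resid @ resid / (n - q)                          # unbiased estimator, (2.3)
C = np.linalg.inv(Z.T @ Z)
cov_beta = sigma2_hat * C                                     # estimated covariance, (2.6)

alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=n - q)                 # t_{n-q}(alpha/2)
half_width = t_crit * np.sqrt(sigma2_hat * np.diag(C))        # CI half-widths, (2.7)
for i, (b, h) in enumerate(zip(beta_hat, half_width), start=1):
    print(f"beta_{i}: {b:.4f} +/- {h:.4f}")
```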
Example 2.2: Consider estimating the possible global warming trend alluded to in Section 1.1.2. The global temperature series, shown previously in Figure 1.3, suggests the possibility of a gradually increasing average temperature over the 123-year period covered by the land-based series. If we fit the model in Example 2.1, replacing $t$ by $t/100$ to convert to a 100-year base so that the increase will be in degrees per 100 years, we obtain $\hat{\beta}_1 = 38.72$ and $\hat{\beta}_2 = .9501$ using (2.5). The error variance, from (2.3), is .0752, with $q = 2$ and $n = 123$. Then (2.6) yields

$$\widehat{\mathrm{cov}}(\hat{\beta}_1, \hat{\beta}_2) = \begin{pmatrix} 1.8272 & -.0941 \\ -.0941 & .0048 \end{pmatrix},$$

leading to an estimated standard error of $\sqrt{.0048} = .0696$ for the slope. The value of $t$ with $n - q = 123 - 2 = 121$ degrees of freedom cutting off 2.5% in the upper tail is about 1.98, leading to a narrow confidence interval of $.95 \pm .138$ for the slope, and hence to a confidence interval for the one-hundred-year increase of about .81 to 1.09 degrees. We would conclude from this analysis that there is a substantial increase in global temperature, amounting to roughly one degree F per 100 years.

[Figure 2.1 here: Autocorrelation functions (ACF) and partial autocorrelation functions (PACF) for the detrended (top panel) and differenced (bottom panel) global temperature series.]

If the model is reasonable, the residuals $\hat{e}_t = y_t - \hat{\beta}_1 - \hat{\beta}_2 t$ should be essentially independent and identically distributed, with no correlation evident. The plot that we have made in Figure 1.3 of the detrended global temperature series shows that this is probably not the case, because of the long, low-frequency pattern in the observed residuals. However, the differenced series, also shown in Figure 1.3 (second panel), appears to be more independent, suggesting that perhaps the apparent global warming is more consistent with a long-term swing in an underlying random walk than with a fixed 100-year trend. If we check the autocorrelation function of the regression residuals, shown here in Figure 2.1, it is clear that the significant values at higher lags imply that there is significant correlation in the residuals. Such correlation can be important, since the estimated standard errors of the coefficients, computed under the assumption that the least squares residuals are uncorrelated, are often too small.

We can partially repair the damage caused by the correlated residuals by looking at a model with correlated errors. The procedure and techniques for dealing with correlated errors are based on the autoregressive moving average (ARMA) models to be considered in the next sections. Another method of reducing correlation is to apply a first difference, $\Delta x_t = x_t - x_{t-1}$, to the global trend data. The ACF of the differenced series, also shown in Figure 2.1, seems to have lower correlations at the higher lags. Figure 1.3 shows qualitatively that this transformation also eliminates the trend in the original series.
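The diagnostics behind Figure 2.1 amount to detrending the series by regression, differencing it, and computing the sample ACF and PACF of each version. The sketch below illustrates this, assuming the statsmodels functions named here are available; the series is a synthetic stand-in, since the global temperature data are not reproduced in the text.

```python
# Detrend-versus-difference diagnostics in the spirit of Figure 2.1.
# x is a synthetic series (trend plus random walk), not the temperature data.
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

rng = np.random.default_rng(1)
n = 123
t = np.arange(1, n + 1)
x = 0.01 * t + np.cumsum(rng.normal(scale=0.1, size=n))   # trend plus random walk

# Detrend: regress on (1, t) and keep the residuals.
Z = np.column_stack([np.ones(n), t])
beta_hat = np.linalg.lstsq(Z, x, rcond=None)[0]
detrended = x - Z @ beta_hat

# First difference: delta x_t = x_t - x_{t-1}.
differenced = np.diff(x)

for name, series in [("detrended", detrended), ("differenced", differenced)]:
    r = acf(series, nlags=20, fft=True)                   # sample ACF out to lag 20
    phi = pacf(series, nlags=20)                          # sample PACF out to lag 20
    print(name, "ACF(1..3):", np.round(r[1:4], 2),
          "PACF(1..3):", np.round(phi[1:4], 2))
```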
Since we have again made some rather arbitrary-looking specifications for the configuration of dependent variables in the above regression examples, the reader may wonder how to select among various plausible models. We mention two criteria that reward reducing the squared error and penalize additional parameters: the Akaike Information Criterion

$$\mathrm{AIC}(K) = \log\hat{\sigma}^2 + \frac{2K}{n} \qquad (2.8)$$

and the Schwarz Information Criterion

$$\mathrm{SIC}(K) = \log\hat{\sigma}^2 + \frac{K\log n}{n} \qquad (2.9)$$

(Schwarz, 1978), where $K$ is the number of parameters fitted (exclusive of variance parameters) and $\hat{\sigma}^2$ is the maximum likelihood estimator for the variance. The SIC is sometimes termed the Bayesian Information Criterion, BIC, and will often yield models with fewer parameters than the other selection methods. A modification to AIC($K$) that is particularly well suited for small samples was suggested by Hurvich and Tsai (1989). This is the corrected AIC, given by

$$\mathrm{AIC}_C(K) = \log\hat{\sigma}^2 + \frac{n+K}{n-K-2}. \qquad (2.10)$$

The rule for all three measures above is to choose the value of $K$ leading to the smallest value of AIC($K$), SIC($K$), or AIC$_C$($K$). We will give an example later comparing the above simple least squares model with a model where the errors have a time series correlation structure.

The organization of this chapter is patterned after the landmark approach to developing models for time series data pioneered by Box and Jenkins (see Box et al., 1994). This approach assumes that there will be a representation of time series data in terms of a difference equation that relates the current value to its past. Such models should be flexible enough to include non-stationary realizations like the random walk given above, as well as seasonal behavior, where the current value is related to past values at multiples of an underlying season; a common one might be multiples of 12 months (1 year) for monthly data. The models are constructed from difference equations driven by random input shocks and are labeled in the most general formulation as ARIMA, i.e., AutoRegressive Integrated Moving Average processes. The analogies with differential equations, which model many physical processes, are obvious. For clarity, we develop the separate components of the model sequentially, considering the integrated, autoregressive, and moving average parts in order, followed by the seasonal modification. The Box-Jenkins approach suggests three steps in a procedure that they summarize as identification, estimation, and forecasting. Identification uses model selection techniques, combining the ACF and PACF as diagnostics with the versions of AIC given above to find a parsimonious (simple) model for the data. Estimation of the parameters in the model will be the next step. Statistical techniques based on maximum likelihood and least squares are paramount for this stage and will only be sketched in this course.
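To make the identification step concrete, the following sketch computes AIC, SIC, and AIC$_C$ as defined in (2.8)-(2.10) for a few candidate trend regressions and reports the values to be minimized. The polynomial candidates and the synthetic series are illustrative assumptions, not models taken from the text.

```python
# Model selection with the information criteria (2.8)-(2.10).
# The candidate models and data below are illustrative only.
import numpy as np

def criteria(sigma2_ml, K, n):
    aic = np.log(sigma2_ml) + 2 * K / n                   # AIC, (2.8)
    sic = np.log(sigma2_ml) + K * np.log(n) / n           # SIC (BIC), (2.9)
    aicc = np.log(sigma2_ml) + (n + K) / (n - K - 2)      # corrected AIC, (2.10)
    return aic, sic, aicc

rng = np.random.default_rng(2)
n = 123
t = np.arange(1, n + 1)
y = 38.0 + 0.95 * (t / 100.0) + rng.normal(scale=0.27, size=n)  # placeholder series

for K in (1, 2, 3):                                       # constant, linear, quadratic trend
    Z = np.column_stack([(t / 100.0) ** p for p in range(K)])
    beta_hat = np.linalg.lstsq(Z, y, rcond=None)[0]
    resid = y - Z @ beta_hat
    sigma2_ml = resid @ resid / n                         # ML variance estimate
    aic, sic, aicc = criteria(sigma2_ml, K, n)
    print(f"K={K}: AIC={aic:.3f}  SIC={sic:.3f}  AICc={aicc:.3f}")
```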