FEEG6017 Lecture: Time Series Analysis, Autocorrelation

[email protected]

Time series analysis
• The basic idea of time series analysis is simple: given an observed sequence, how can we build a model that can predict what comes next?
• There are obvious applications in finance, business, ecology, agriculture, demography, etc.

What's different about time series?
• In most of the contexts we've seen so far, there is an implicit assumption that observations are independent of each other.
• In other words, the fact that subject 27 is 165 cm tall and terrible at basketball says nothing at all about what will happen with subject 28.

What's different about time series?
• In time series data this is not true.
• We're hoping for exactly the opposite: that what happens at time t contains information about what will happen at time t+1.
• Observations are treated first as outcomes and then as predictor variables as we move forward in time.

Ways of dealing with time series
• Despite (or perhaps because of) the practical uses of time series, there is no single universal technique for handling them.
• There are lots of different ways to proceed, depending on the implicit theory of data generation we're proposing.
• This is easiest to illustrate with examples.

Example 1: Lake Huron data
• Our first example data set is a series of annual measurements of the level of Lake Huron, in feet, from 1875 to 1972.
• It's a built-in data set in R, so we only need data(LakeHuron) to access it.
• R already "knows" that this is a time series.

[Figure: the Lake Huron series plotted over time.]

Example 2: Australian beer production
• Our second example is data on monthly Australian beer production, in millions of litres.
• The time series runs from January 1956 to August 1995.
• The data is available in beer.csv.

Example 2: Australian beer production
• R doesn't yet know that this is a time series: the data comes in as a list of numbers.
• We use the ts function to specify that something should be interpreted as a time series, optionally specifying the seasonal period:
  beer = ts(beer[,1], start=1956, freq=12)

[Figure: the monthly beer production series plotted over time.]

Two goals in time series modelling
• We assume there is some structure in the time series data, obscured by random noise.
• Structure = trend + seasonal variation + noise.
• The Lake Huron data has no obvious repetitive structure, but possibly a downward trend. The beer data shows clear seasonality and a trend.
• Structure = Trend + Cycle + Season + Error.

Models of data generation
• The most basic model of data generation is to suppose that there is no structure in the time series at all, and that each observation is an independent random variate.
• An example: white noise.
• In this case, the best we can do is simply to predict the mean value of the data set.

[Figure: Lake Huron - prediction if observations were independent.]

[Figure: beer production - prediction if observations were independent.]

Producing these graphs in R
  # nullBeer is the no-structure (mean-only) model fitted to the beer series;
  # it is not defined on these slides, but is presumably something like
  # nullBeer = arima(beer, order=c(0,0,0)).
  png("BeerMeanPredict.png", width=800, height=400)
  plot(beer, xlim=c(1956,2000), lwd=2, col="blue")
  lines(predict(nullBeer, n.ahead=50)$pred, lwd=2, col="red")
  lines(predict(nullBeer, n.ahead=50)$pred + 1.96*predict(nullBeer, n.ahead=50)$se,
        lwd=2, lty="dotted", col="red")
  lines(predict(nullBeer, n.ahead=50)$pred - 1.96*predict(nullBeer, n.ahead=50)$se,
        lwd=2, lty="dotted", col="red")
  graphics.off()

Simple approach to trends
• We could ignore the seasonal variation and the random noise and simply fit a linear or polynomial model to the data.
• Make predictors: tb = seq(1956, 1995.8, length=length(beer))
• Linear: linearBeer = lm(beer ~ tb)
• Polynomial: polyBeer = lm(beer ~ tb + I(tb^2)) (the I() wrapper is needed so that the squared term is treated arithmetically inside the formula)
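For instance, the two trend regressions could be fitted and overlaid on the beer series along the following lines. This is a sketch rather than the script used for the slides; it assumes beer has already been read from beer.csv and converted with ts() as shown above.

  tb <- seq(1956, 1995.8, length = length(beer))      # calendar-time predictor
  linearBeer <- lm(beer ~ tb)                         # straight-line trend
  polyBeer <- lm(beer ~ tb + I(tb^2))                 # quadratic trend
  plot(tb, as.numeric(beer), type = "l", col = "grey",
       xlab = "Year", ylab = "Beer production (millions of litres)")
  lines(tb, fitted(linearBeer), col = "blue", lwd = 2)   # linear fit
  lines(tb, fitted(polyBeer), col = "red", lwd = 2)      # quadratic fit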
[Figure: polynomial fit of lake level on time.]

[Figure: polynomial fit of beer production on time.]

Regression on time a good idea?
• This is an OK start: it gives us some sense of what the trend line is.
• But we probably don't believe that beer production or lake level is literally a function of the calendar date.
• More likely these things are a function of their own history, and we need methods that can capture that.

Autoregression
• A better approach is to ask whether the next value in the time series can be predicted as some function of its previous values.
• This is called autoregression.
• We want to build a regression model of the current value fitted on one or more previous (lagged) values. But how many?

Autocorrelation and partial autocorrelation
• We can look directly at the time series and ask how much information there is in previous values that helps predict the current value.
• The acf function looks at the correlation between the current value and values at various lags in the past.
• The partial autocorrelation function (pacf) does the same, but "partials out" the other effects to get the unique contribution of each time lag.

[Figure: ACF and PACF, Lake Huron data.]

[Figure: ACF and PACF, beer data.]

ACF & PACF plots
• The ACF shows a correlation that fades as we take longer lags in the Lake Huron time series.
• The ACF shows periodic structure in the beer time series, reflecting its seasonal nature.

ACF & PACF plots
• But if t[0] is correlated with t[-1], and t[-1] is correlated with t[-2], then t[0] will necessarily be correlated with t[-2] also.
• So we need to look at the PACF values.
• We find that only the most recent value is really useful in building an autoregression model for the Lake Huron data, for example.

Autoregression models
• With the ar command we can fit autoregression models and ask R to use AIC to decide how many lagged values should be included in the model.
• For example: arb = ar(beer)
• The Lake Huron model includes only one lagged value; the beer model includes 24.

[Figure: autoregression model, lake data, 1 lagged term.]

[Figure: autoregression model, beer data, 24 lagged terms.]
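The same workflow can be reproduced for the lake data in a few lines. This is a sketch using base R only; note that the autoregression order actually selected by AIC can vary with the estimation method, so it may not match the single-lag model quoted above exactly.

  data(LakeHuron)
  acf(LakeHuron)                       # autocorrelation at increasing lags
  pacf(LakeHuron)                      # partial autocorrelation: unique contribution of each lag
  arLake <- ar(LakeHuron)              # autoregression model, order chosen by AIC
  arLake$order                         # number of lagged terms retained
  predict(arLake, n.ahead = 10)$pred   # forecast the next ten years of lake level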
Automatically separating trends, seasonal effects, and noise
• The stl procedure uses locally weighted regression to separate out a trend line, and parcels out the seasonal effect.
• For example: plot(stl(beer, s.window="periodic"), col="blue", lwd=2)
• If things go well, there should be no autocorrelation structure left in the residuals.

Exponential smoothing
• A reasonable guess about the next value in a series is that it will be an average of previous values, with the most recent values weighted more strongly.
• This assumption constitutes exponential smoothing:
  t[0] = α t[-1] + α(1-α) t[-2] + α(1-α)^2 t[-3] + ...

Holt-Winters procedure
• The same logic can be applied to the basic level of the prediction, to the trend term, and to the seasonal term.
• The Holt-Winters procedure automatically does this for all three; for example: HWB = HoltWinters(beer)

[Figure: Holt-Winters analysis on beer data.]

Holt-Winters analysis on lake data
• The process seems to work well with the seasonal beer data.
• For the lake data we have not specified a seasonal period, and we might also drop the trend term. In HoltWinters, beta controls the trend term and gamma the seasonal term:
  HWLake = HoltWinters(LakeHuron, gamma=FALSE, beta=FALSE)

[Figure: Holt-Winters analysis on lake data.]

Holt-Winters analysis on lake data
• The fitted alpha value is close to 1 (i.e., a very short memory), so the prediction is that the process will stay where it was.
• What if we put the trend term back in? Implicitly beta (trend) = TRUE, while gamma (seasonal) stays FALSE:
  HWLake = HoltWinters(LakeHuron, gamma=FALSE)

Holt-Winters analysis on lake data
• The trend term is perhaps overdoing it (fitted beta = 0.17)?

Differencing
• Some time series techniques (e.g., ARIMA) are based on the assumption that the series is stationary, i.e., that it has constant mean, variance, and autocorrelation over time.
• If we want to use these techniques we may need to work with the differenced values rather than the raw values.

Differencing
• This just means transforming t[1] into t[1] - t[0], and so on.
• We can use the diff command to make this easy.
• To plot the beer data as a differenced series: plot(diff(beer), lwd=2, col="green")

[Figure: the differenced beer production series.]

Some housekeeping in R
• To get access to some relevant ARIMA model-fitting functions, we need to download the "forecast" package:
  install.packages("forecast")
  library(forecast)

Auto-regressive integrated moving-average (ARIMA) models
• ARIMA is a method for putting together all of the techniques we've seen so far.
• A non-seasonal ARIMA model is specified with p, d, and q parameters.
• p: number of autoregression terms. d: number of difference levels. q: number of moving-average (smoothing) terms.

Auto-regressive integrated moving-average (ARIMA) models
• ARIMA(0,0,0) is simply predicting the mean of the overall time series, i.e., no structure.
• ARIMA(0,1,0) works with differences, not raw values, and predicts the next value without any autoregression or smoothing. This is therefore a random walk.
• ARIMA(1,0,0) and ARIMA(24,0,0) are the models we originally fitted to the lake and beer data.

Auto-regressive integrated moving-average (ARIMA) models
• We can also have seasonal ARIMA models: three more terms apply to the seasonal effects.
• The "forecast" library includes a very convenient auto.arima function that uses AIC to find the most parsimonious model in the space of possible models.

[Figure: ARIMA(1,1,2) model of lake data.]

[Figure: ARIMA(2,1,2)(2,0,0)[12] model of beer data.]

Fourier transforms
• There is no time to discuss Fourier transforms here.
• But they're useful when you suspect there are seasonal or cyclic components in the data but you don't yet know the period of those components.
• In the beer example, we already knew the seasonal period was 12, of course.

Additional material
• The beer.csv data set.
• The R script used to do the analyses.
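To round off the ARIMA discussion, here is a minimal sketch of how auto.arima could be run on both series. It assumes the forecast package is installed and that beer has been created with ts() as above; the models it selects are chosen by an information criterion (AICc by default) and may differ slightly from the ARIMA(1,1,2) and ARIMA(2,1,2)(2,0,0)[12] fits shown in the figures, depending on the package version.

  library(forecast)
  data(LakeHuron)
  fitLake <- auto.arima(LakeHuron)   # search over non-seasonal (p,d,q) models
  fitBeer <- auto.arima(beer)        # seasonal terms are considered because freq = 12
  plot(forecast(fitLake, h = 10))    # ten-year forecast with prediction intervals
  plot(forecast(fitBeer, h = 24))    # two-year forecast of monthly production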