Introduction to Time Series Analysis. Lecture 23


1. Lagged regression models.
2. Cross-covariance function, sample CCF.
3. Lagged regression in the time domain: prewhitening.
4. Lagged regression in the frequency domain: Cross spectrum. Coherence.

Lagged regression models

Consider a lagged regression model of the form

    Y_t = Σ_{h=−∞}^{∞} β_h X_{t−h} + V_t,

where X_t is an observed input time series, Y_t is the observed output time series, and V_t is a stationary noise process. This is useful for
• identifying the (best linear) relationship between two time series;
• forecasting one time series from the other.
(We might want β_h = 0 for h < 0.)

In the SOI and recruitment example, we might wish to identify how the values of the recruitment series (the number of new fish) are related to the Southern Oscillation Index. Or we might wish to predict future values of recruitment from the SOI.

Lagged regression models: Agenda

• Multiple, jointly stationary time series in the time domain: cross-covariance function, sample CCF.
• Lagged regression in the time domain: model the input series, extract the white time series driving it ('prewhitening'), regress with the transformed output series.
• Multiple, jointly stationary time series in the frequency domain: cross spectrum, coherence.
• Lagged regression in the frequency domain: calculate the input's spectral density and the cross-spectral density between input and output, and find the transfer function relating them in the frequency domain. Then the regression coefficients are the inverse Fourier transform of the transfer function.
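The lagged regression model is easy to simulate. A minimal numpy sketch (the coefficient values β_1 = 1.2, β_2 = −0.5 and the noise scale are hypothetical, chosen purely for illustration; β_h = 0 for h < 0, so the regression is causal):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical coefficients: beta_1 = 1.2, beta_2 = -0.5, all other beta_h = 0.
beta = {1: 1.2, 2: -0.5}

x = rng.normal(size=n)        # observed input series X_t
v = 0.4 * rng.normal(size=n)  # stationary noise V_t
y = np.zeros(n)
for h, b in beta.items():
    y[h:] += b * x[:n - h]    # accumulate beta_h * X_{t-h}
y += v                        # Y_t = sum_h beta_h X_{t-h} + V_t
```

Each output value is a fixed linear combination of lagged inputs plus noise, which is exactly the structure the estimation methods below try to recover.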
Cross-covariance

Recall that the autocovariance function of a stationary process {X_t} is

    γ_x(h) = E[(X_{t+h} − μ_x)(X_t − μ_x)].

The cross-covariance function of two jointly stationary processes {X_t} and {Y_t} is

    γ_{xy}(h) = E[(X_{t+h} − μ_x)(Y_t − μ_y)].

(Jointly stationary = constant means, autocovariances depending only on the lag h, and cross-covariance depending only on h.)

Cross-correlation

The cross-correlation function of jointly stationary {X_t} and {Y_t} is

    ρ_{xy}(h) = γ_{xy}(h) / √(γ_x(0) γ_y(0)).

Notice that ρ_{xy}(h) = ρ_{yx}(−h) (but ρ_{xy}(h) ≢ ρ_{xy}(−h)).

Example: Suppose that Y_t = βX_{t−ℓ} + W_t for {X_t} stationary and uncorrelated with {W_t}, and {W_t} zero mean and white. Then {X_t} and {Y_t} are jointly stationary, with μ_y = βμ_x and γ_{xy}(h) = βγ_x(h + ℓ). If ℓ > 0, we say x_t leads y_t. If ℓ < 0, we say x_t lags y_t.

Sample cross-covariance and sample CCF

    γ̂_{xy}(h) = (1/n) Σ_{t=1}^{n−h} (x_{t+h} − x̄)(y_t − ȳ)

for h ≥ 0 (and γ̂_{xy}(h) = γ̂_{yx}(−h) for h < 0). The sample CCF is

    ρ̂_{xy}(h) = γ̂_{xy}(h) / √(γ̂_x(0) γ̂_y(0)).

If either of {X_t} or {Y_t} is white, then ρ̂_{xy}(h) ∼ AN(0, 1/√n).

Notice that we can look for peaks in the sample CCF to identify a leading or lagging relation. (Recall that the ACF of the input series peaks at h = 0.)

Example: the CCF of SOI and recruitment (Figure 1.14 in text) has a peak at h = −6, indicating that recruitment at time t has its strongest correlation with SOI at time t − 6. Thus, SOI leads recruitment (by 6 months).
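The sample CCF formulas above translate directly into numpy. This sketch uses a hypothetical simulated pair in which y_t = 2x_{t−3} + noise, so ℓ = 3 and γ_{xy}(h) = 2γ_x(h + 3) should peak at h = −3 (the input leads the output by 3 steps):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# AR(1) input, and an output depending on the input at lag ell = 3.
w = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + w[t]
y = 2.0 * np.concatenate([np.zeros(3), x[:-3]]) + rng.normal(size=n)

def sample_ccf(x, y, max_lag):
    """Sample CCF rho_hat_xy(h) for h = -max_lag, ..., max_lag."""
    n = len(x)
    xd, yd = x - x.mean(), y - y.mean()
    # sqrt(gamma_hat_x(0) * gamma_hat_y(0))
    denom = np.sqrt(np.sum(xd ** 2) * np.sum(yd ** 2)) / n
    def gamma(h):
        if h >= 0:  # (1/n) sum_t (x_{t+h} - xbar)(y_t - ybar)
            return np.sum(xd[h:] * yd[:n - h]) / n
        return np.sum(xd[:n + h] * yd[-h:]) / n
    lags = np.arange(-max_lag, max_lag + 1)
    return lags, np.array([gamma(h) for h in lags]) / denom

lags, ccf = sample_ccf(x, y, max_lag=10)
peak = int(lags[np.argmax(np.abs(ccf))])   # lag of the largest |rho_hat|
```

The peak lag of the sample CCF identifies the leading/lagging relation, just as in the SOI/recruitment example.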
Lagged regression in the time domain (Section 5.6)

Suppose we wish to fit a lagged regression model of the form

    Y_t = α(B)X_t + η_t = Σ_{j=0}^{∞} α_j X_{t−j} + η_t,

where X_t is an observed input time series, Y_t is the observed output time series, and η_t is a stationary noise process, uncorrelated with X_t.

One approach (pioneered by Box and Jenkins) is to fit ARIMA models for X_t and η_t, and then find a simple rational representation for α(B). For example:

    X_t = (θ_x(B)/φ_x(B)) W_t,
    η_t = (θ_η(B)/φ_η(B)) Z_t,
    α(B) = (δ(B)/ω(B)) B^d.

Notice the delay B^d, indicating that Y_t lags X_t by d steps.

How do we choose all of these parameters?
1. Fit θ_x(B), φ_x(B) to model the input series {X_t}.
2. Prewhiten the input series by applying the inverse operator φ_x(B)/θ_x(B):

    Ỹ_t = (φ_x(B)/θ_x(B)) Y_t = α(B)W_t + (φ_x(B)/θ_x(B)) η_t.

3. Calculate the cross-covariance of Ỹ_t with W_t,

    γ_{ỹ,w}(h) = E[ Σ_{j=0}^{∞} α_j W_{t+h−j} W_t ] = σ_w² α_h,

to give an indication of the behavior of α(B) (for instance, the delay).
4. Estimate the coefficients of α(B) and hence fit an ARMA model for the noise series η_t.

Why prewhiten? The prewhitening step inverts the linear filter X_t = (θ_x(B)/φ_x(B)) W_t. Then the lagged regression is between the transformed Y_t and a white series W_t. This makes it easy to determine a suitable lag. For example, in the SOI/recruitment series, we treat SOI as an input, estimate an AR(1) model, prewhiten it (that is, compute the inverse of our AR(1) operator and apply it to the SOI series), and consider the cross-correlation between the transformed recruitment series and the prewhitened SOI. This shows a large peak at lag −5 (corresponding to the SOI series leading the recruitment series). Examples 5.7 and 5.8 in the text then consider α(B) = B^5/(1 − ω_1 B).
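The four steps above can be sketched in numpy for a hypothetical AR(1) input with a pure-delay transfer α(B) = 1.5 B^5 (all parameter values are illustrative). The AR coefficient is estimated by lag-1 Yule–Walker; note that with the convention γ_{ỹ,w}(h) = E[Ỹ_{t+h}W_t] used in step 3, the peak of the cross-covariance sits at h = +5, the delay:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical input: AR(1) with phi_x = 0.7; the output lags it by d = 5 steps.
w = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + w[t]
y = np.concatenate([np.zeros(5), 1.5 * x[:-5]]) + rng.normal(size=n)

# Step 1: fit the AR(1) coefficient by Yule-Walker (lag-1 sample autocorrelation).
xd = x - x.mean()
phi_hat = np.sum(xd[1:] * xd[:-1]) / np.sum(xd ** 2)

# Step 2: prewhiten -- apply the inverse operator (1 - phi_hat B) to both series.
w_hat = xd[1:] - phi_hat * xd[:-1]      # approximately the white series driving x
yd = y - y.mean()
y_tilde = yd[1:] - phi_hat * yd[:-1]    # transformed output series

# Step 3: the cross-covariance of y_tilde with w_hat reveals alpha(B).
def cross_cov(a, b, h):
    # (1/n) sum_t a_{t+h} b_t for h >= 0
    return np.sum(a[h:] * b[:len(b) - h]) / len(a)

cc = np.array([cross_cov(y_tilde, w_hat, h) for h in range(12)])
d_hat = int(np.argmax(np.abs(cc)))      # estimated delay d
alpha_hat = cc[d_hat] / np.var(w_hat)   # estimated alpha_d = gamma(h) / sigma_w^2
```

Step 4 (modeling the residual noise as ARMA) would proceed from the residuals once α(B) is estimated.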
This sequential estimation procedure (φ_x, θ_x, then α, then φ_η, θ_η) is rather ad hoc. State space methods offer an alternative, and they are also convenient for vector-valued input and output series.

Coherence

To analyze lagged regression in the frequency domain, we'll need the notion of coherence, the analog of cross-correlation in the frequency domain. Define the cross-spectrum as the Fourier transform of the cross-covariance,

    f_{xy}(ν) = Σ_{h=−∞}^{∞} γ_{xy}(h) e^{−2πiνh},
    γ_{xy}(h) = ∫_{−1/2}^{1/2} f_{xy}(ν) e^{2πiνh} dν

(provided that Σ_{h=−∞}^{∞} |γ_{xy}(h)| < ∞). Notice that f_{xy}(ν) can be complex-valued. Also, γ_{yx}(h) = γ_{xy}(−h) implies f_{yx}(ν) = f_{xy}(ν)*.

The squared coherence function is

    ρ²_{y,x}(ν) = |f_{yx}(ν)|² / (f_x(ν) f_y(ν)).

Compare this with the correlation ρ_{y,x} = Cov(Y,X)/√(σ_x² σ_y²). We can think of the squared coherence at a frequency ν as the contribution to squared correlation at that frequency. (Recall the interpretation of the spectral density at a frequency ν as the contribution to variance at that frequency.)

Estimating squared coherence

Recall that we estimated the spectral density using the smoothed squared modulus of the DFT of the series,

    f̂_x(ν_k) = (1/L) Σ_{l=−(L−1)/2}^{(L−1)/2} |X(ν_k − l/n)|²
             = (1/L) Σ_{l=−(L−1)/2}^{(L−1)/2} X(ν_k − l/n) X(ν_k − l/n)*.

We can estimate the cross-spectral density using the same kind of sample estimate,

    f̂_{xy}(ν_k) = (1/L) Σ_{l=−(L−1)/2}^{(L−1)/2} X(ν_k − l/n) Y(ν_k − l/n)*.

Also, we can estimate the squared coherence using these estimates,

    ρ̂²_{y,x}(ν) = |f̂_{yx}(ν)|² / (f̂_x(ν) f̂_y(ν)).

(Knowledge of the asymptotic distribution of ρ̂²_{y,x}(ν) under the hypothesis of no coherence, ρ_{y,x}(ν) = 0, allows us to test for coherence.)
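The smoothed-periodogram estimators above can be sketched with numpy's FFT. This is a simplified version (a rectangular smoothing kernel, and edge bins handled by zero-padded rather than circular averaging); the series and the filter relating them are hypothetical:

```python
import numpy as np

def squared_coherence(x, y, L=11):
    """Estimate rho^2_{y,x}(nu_k) by averaging DFT products over L
    neighbouring Fourier frequencies (L odd)."""
    X = np.fft.fft(x - x.mean())
    Y = np.fft.fft(y - y.mean())
    kern = np.ones(L) / L
    f_x = np.convolve(np.abs(X) ** 2, kern, mode='same')   # smoothed |X|^2
    f_y = np.convolve(np.abs(Y) ** 2, kern, mode='same')   # smoothed |Y|^2
    f_yx = np.convolve(Y * np.conj(X), kern, mode='same')  # smoothed Y X*
    return np.abs(f_yx) ** 2 / (f_x * f_y)

rng = np.random.default_rng(2)
n = 1024
x = rng.normal(size=n)
y = 0.9 * np.roll(x, 2) + 0.3 * rng.normal(size=n)  # lagged, noisy copy of x
z = rng.normal(size=n)                              # independent of x

coh_xy = squared_coherence(x, y)  # high: y is linearly related to x
coh_xz = squared_coherence(x, z)  # low: only the O(1/L) bias of the estimator
```

By the Cauchy–Schwarz inequality the estimate always lies in [0, 1], and for independent series it fluctuates around roughly 1/L, which is what the test for no coherence has to account for.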
Lagged regression models in the frequency domain

Consider a lagged regression model of the form

    Y_t = Σ_{j=−∞}^{∞} β_j X_{t−j} + V_t,

where X_t is an observed input time series, Y_t is the observed output time series, and V_t is a stationary noise process. We'd like to estimate the coefficients β_j that determine the relationship between the lagged values of the input series X_t and the output series Y_t.

The projection theorem tells us that the coefficients that minimize the mean squared error,

    E[ (Y_t − Σ_{j=−∞}^{∞} β_j X_{t−j})² ],

satisfy the orthogonality conditions

    E[ (Y_t − Σ_{j=−∞}^{∞} β_j X_{t−j}) X_{t−k} ] = 0,   k = 0, ±1, ±2, ...,

that is,

    Σ_{j=−∞}^{∞} β_j γ_x(k − j) = γ_{yx}(k),   k = 0, ±1, ±2, ....

We could solve these equations for the β_j using the sample autocovariance and sample cross-covariance. But it is more convenient to use estimates of the spectra and the cross-spectrum. (Convolution with {β_j} in the time domain is equivalent to multiplication by the Fourier transform of {β_j} in the frequency domain.)
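Putting the pieces together: estimate f_x and f_yx by smoothed periodograms, form the transfer function estimate B̂(ν_k) = f̂_yx(ν_k)/f̂_x(ν_k), and invert the Fourier transform to recover the β_j. A numpy sketch on hypothetical data with a single nonzero coefficient β_3 = 2 (a circular shift is used so that the lag structure is exact on the DFT grid):

```python
import numpy as np

rng = np.random.default_rng(3)
n, L = 1024, 11
x = rng.normal(size=n)
y = 2.0 * np.roll(x, 3) + rng.normal(size=n)  # beta_3 = 2, all other beta_j = 0

# Smoothed periodogram and cross-periodogram (rectangular kernel of width L).
X, Y = np.fft.fft(x), np.fft.fft(y)
kern = np.ones(L) / L
f_x = np.convolve(np.abs(X) ** 2, kern, mode='same')
f_yx = np.convolve(Y * np.conj(X), kern, mode='same')

B_hat = f_yx / f_x                  # transfer function estimate B(nu_k)
beta_hat = np.fft.ifft(B_hat).real  # inverse Fourier transform -> coefficients
j_star = int(np.argmax(np.abs(beta_hat)))  # lag with the largest coefficient
```

The inverse FFT returns lags j = 0, 1, ..., n − 1 with negative lags wrapped to the end of the array, so a two-sided β sequence would appear split between the two ends.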