Periodicity Detection, Time-Series Correlation, Burst Detection
Total pages: 16. File type: PDF, size: 1020 KB.
Recommended publications
Moving Average Filters
CHAPTER 15: Moving Average Filters. The moving average is the most common filter in DSP, mainly because it is the easiest digital filter to understand and use. In spite of its simplicity, the moving average filter is optimal for a common task: reducing random noise while retaining a sharp step response. This makes it the premier filter for time-domain encoded signals. However, the moving average is the worst filter for frequency-domain encoded signals, with little ability to separate one band of frequencies from another. Relatives of the moving average filter include the Gaussian, Blackman, and multiple-pass moving average; these have slightly better performance in the frequency domain, at the expense of increased computation time.

Implementation by convolution: as the name implies, the moving average filter operates by averaging a number of points from the input signal to produce each point in the output signal. In equation form (Eq. 15-1):

y[i] = \frac{1}{M} \sum_{j=0}^{M-1} x[i+j]

where x[ ] is the input signal, y[ ] is the output signal, and M is the number of points used in the moving average. This form uses only points on one side of the output sample being calculated. For example, in a 5-point moving average filter, point 80 in the output signal is given by:

y[80] = \frac{x[80] + x[81] + x[82] + x[83] + x[84]}{5}

As an alternative, the group of points from the input signal can be chosen symmetrically around the output point:

y[80] = \frac{x[78] + x[79] + x[80] + x[81] + x[82]}{5}

This corresponds to changing the limits of the summation in Eq. 15-1 so that they run symmetrically about the output point.
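As a quick illustration of Eq. 15-1, here is a minimal NumPy sketch (not from the chapter) of the one-sided moving average applied to a noisy step:

```python
import numpy as np

def moving_average(x, M):
    """One-sided M-point moving average: y[i] = mean of x[i], ..., x[i+M-1]."""
    kernel = np.ones(M) / M
    # "valid" keeps only the output points for which all M inputs exist.
    return np.convolve(x, kernel, mode="valid")

# Noisy step: the filter reduces the random noise but keeps the edge sharp.
rng = np.random.default_rng(0)
step = np.concatenate([np.zeros(100), np.ones(100)])
noisy = step + 0.2 * rng.standard_normal(step.size)
smoothed = moving_average(noisy, M=5)
```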
Power Grid Topology Classification Based on Time Domain Analysis
Jia He, Maggie X. Cheng, and Mariesa L. Crow, Fellow, IEEE

Abstract—Power system monitoring is significantly improved with the use of Phasor Measurement Units (PMUs). The availability of PMU data and recent advances in machine learning together enable advanced data analysis that is critical for real-time fault detection and diagnosis. This paper focuses on the problem of power line outage. We use a machine learning framework and consider the problem of line outage identification as a classification problem in machine learning. The method is data-driven and does not rely on the results of other analysis such as power flow analysis and solving power system equations. Since it does not involve using power system parameters to solve equations, it is applicable even when such parameters are unavailable. The proposed method uses only voltage phasor angles obtained from PMUs.

… all depend on the correct network topology information. A topology change, no matter what caused it, must be immediately updated. An unattended topology error may lead to cascading power outages and cause large-scale blackouts [2], [3]. In this paper, we demonstrate that using a machine learning approach and real-time measurement data we can timely and accurately identify the outage location. We leverage PMU data and consider the task of identifying the location of line outage as a classification problem. The methods can be applied in real time: although training a classifier can be time-consuming, the prediction of power line status can be done in real time.
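As a hedged sketch of the formulation described above (each PMU snapshot of voltage phasor angles as a feature vector, the outaged line as the class label), one might set up the classification like this; scikit-learn is assumed and the data here are synthetic placeholders, not the paper's measurements:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic placeholders: each row is one PMU snapshot of bus voltage
# phasor angles (radians); the label is the index of the line assumed out.
rng = np.random.default_rng(1)
n_snapshots, n_buses, n_lines = 2000, 30, 40
X = rng.normal(size=(n_snapshots, n_buses))      # phasor-angle features
y = rng.integers(0, n_lines, size=n_snapshots)   # outaged-line label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Training is the slow, offline step; predicting the status of a new
# snapshot is fast enough to run in real time.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```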
Logistic-Weighted Regression Improves Decoding of Finger Flexion from Electrocorticographic Signals
Weixuan Chen*, IEEE Student Member, Xilin Liu, IEEE Student Member, and Brian Litt, IEEE Senior Member

Abstract—One of the most interesting applications of brain computer interfaces (BCIs) is movement prediction. With the development of invasive recording techniques and decoding algorithms in the past ten years, many single neuron-based and electrocorticography (ECoG)-based studies have been able to decode trajectories of limb movements. As the output variables are continuous in these studies, a regression model is commonly used. However, the decoding of limb movements is not a pure regression problem, because the trajectories can be apparently classified into a motion state and a resting state, which results in a binary property overlooked by previous studies. In this paper, we propose an algorithm called logistic-weighted regression to make use of this property, and apply the algorithm to a BCI system decoding flexion of human fingers from ECoG signals.

… linear Wiener filter [7–10]. To take into account the physiological, physical, and mechanical constraints that affect the flexion of limbs, some studies applied switching models [11] or Bayesian models [12, 13] to the results of the linear regressions above. Other studies have explored the utility of non-linear methods, including neural networks [14–17], multilinear perceptrons [18], and support vector machines [18], but they tend to have difficulty with high dimensional features and limited training data [13]. Nevertheless, the studies of limb movement translation are in fact not pure regression problems, because the limbs are not always in the motion state. Whether it is during an experiment or in daily life, the resting state of the limbs is …
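The abstract only names the algorithm, so the following is a speculative minimal sketch of the general idea it describes (weight a regression's output by a logistic estimate of the motion/rest state), using synthetic data and scikit-learn; it is not the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Synthetic stand-ins: X are ECoG-derived features, y is a finger-flexion
# trace that is essentially flat (resting) except during motion periods.
rng = np.random.default_rng(2)
n, d = 1000, 16
X = rng.normal(size=(n, d))
moving = rng.random(n) < 0.4                       # hidden motion/rest state
y = np.where(moving, X @ rng.normal(size=d), 0.0)  # flexion ~ 0 at rest

# Stage 1: a logistic model estimates P(motion) from the features.
state = LogisticRegression(max_iter=1000).fit(X, moving.astype(int))
# Stage 2: a linear regression is fit on the motion-state samples only.
reg = LinearRegression().fit(X[moving], y[moving])

# Prediction: the regression output is weighted by the logistic probability,
# pulling estimates toward zero whenever the limb is judged to be at rest.
p_motion = state.predict_proba(X)[:, 1]
y_hat = p_motion * reg.predict(X)
```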
Spectrum Estimation Using Periodogram, Bartlett and Welch
Guido Schuster. The slides follow closely chapter 8 of the book "Statistical Digital Signal Processing and Modeling" by Monson H. Hayes, and most of the figures and formulas are taken from there.

Introduction: we want to estimate the power spectral density of a wide-sense stationary random process. Recall that the power spectrum is the Fourier transform of the autocorrelation sequence; for an ergodic process, the autocorrelation (and hence the spectrum) can be obtained from time averages over a single realization. The main problem of power spectrum estimation is that the data x(n) is always finite. There are two basic approaches: nonparametric methods (the periodogram, Bartlett, and Welch methods), which are the most common and are presented here, and parametric approaches, which are not discussed since they are less common.

Nonparametric methods: x(n) is only measured for n = 0, ..., N-1. The autocorrelation is estimated as

\hat{r}_x(k) = \frac{1}{N} \sum_{n=0}^{N-1-k} x(n+k)\, x^*(n), \qquad k = 0, \dots, N-1,

which ensures that values of x(n) falling outside the interval [0, N-1] are excluded; for negative values of k, the conjugate symmetry \hat{r}_x(-k) = \hat{r}_x^*(k) is used.

Periodogram: taking the Fourier transform of this autocorrelation estimate results in an estimate of the power spectrum, known as the periodogram. It can also be expressed directly in terms of the data x(n) using the rectangular-windowed signal x_N(n):

\hat{P}_{per}(e^{j\omega}) = \frac{1}{N} \left| \sum_{n=0}^{N-1} x(n)\, e^{-jn\omega} \right|^2

(A slide shows the periodogram of white noise estimated from 32 samples.)

Performance of the periodogram: if N goes to infinity, does the periodogram converge towards the power spectrum in the mean-squared sense? The necessary conditions are that the estimate be asymptotically unbiased and that its variance go to zero as N grows …
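A short SciPy sketch (not part of the slides) contrasting the raw periodogram with Welch's method, which averages windowed, overlapping segments to reduce the variance at the cost of frequency resolution:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
fs = 1000.0
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 100 * t) + rng.standard_normal(t.size)  # tone in noise

# Raw periodogram: its variance does not decrease as N grows.
f_per, P_per = signal.periodogram(x, fs=fs)

# Welch: average the periodograms of overlapping, windowed segments,
# trading frequency resolution for a much smaller variance.
f_welch, P_welch = signal.welch(x, fs=fs, nperseg=512, noverlap=256)
```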
Random Signals
Chapter 8. Signals can be divided into two main categories: deterministic and random. The term random signal is used primarily to denote signals whose source is random in nature; an example is thermal noise, which is created by the random movement of electrons in an electric conductor. Apart from this, the term random signal is also used for signals falling into other categories, such as periodic signals that have one or several parameters with appropriately random behavior. An example is a periodic sinusoidal signal with a random phase or amplitude.

Signals can be treated either as deterministic or random, depending on the application. Speech, for example, can be considered a deterministic signal if one specific speech waveform is considered. It can also be viewed as a random process if one considers the ensemble of all possible speech waveforms in order to design a system that will optimally process speech signals in general.

The behavior of stochastic signals can be described only in the mean. The description of such signals is as a rule based on terms and concepts borrowed from probability theory. Signals are, however, functions of time, and such a description quickly becomes difficult to manage and impractical. Only a fraction of these signals, known as ergodic signals, can be handled in a relatively simple way; among those excluded is the class of non-stationary signals, which otherwise plays an essential part in practice.

Working in the frequency domain is a powerful technique in signal processing. While the spectrum is directly related to a deterministic signal, the spectrum of a random signal is defined through its correlation function.
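To make the last point concrete, here is a small NumPy sketch (an illustration, not part of the chapter) that estimates the spectrum of a random signal from its autocorrelation, in the spirit of the Wiener-Khinchin relation:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 4096
# A random signal: white noise passed through a short smoothing filter.
x = np.convolve(rng.standard_normal(N), np.ones(8) / 8, mode="same")
x -= x.mean()

# Biased autocorrelation estimate r[k] = (1/N) * sum_n x[n+k] * x[n].
r = np.correlate(x, x, mode="full") / N

# The spectrum of the random signal is defined through this correlation
# function: r is symmetric about its middle sample, so its DFT carries a
# linear phase, and the magnitude gives the power spectral density estimate.
psd = np.abs(np.fft.rfft(r))
```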
An Information Theoretic Algorithm for Finding Periodicities in Stellar Light Curves
Pablo Huijse, Student Member, IEEE, Pablo A. Estévez, Senior Member, IEEE, Pavlos Protopapas, Pablo Zegers, Senior Member, IEEE, and José C. Príncipe, Fellow, IEEE

Abstract—We propose a new information theoretic metric for finding periodicities in stellar light curves. Light curves are astronomical time series of brightness over time, and are characterized as being noisy and unevenly sampled. The proposed metric combines correntropy (generalized correlation) with a periodic kernel to measure similarity among samples separated by a given period. The new metric provides a periodogram, called the Correntropy Kernelized Periodogram (CKP), whose peaks are associated with the fundamental frequencies present in the data. The CKP does not require any resampling, slotting or folding scheme, as it is computed directly from the available samples. The CKP is the main part of a fully-automated pipeline for periodic light curve discrimination to be used in astronomical survey …

… aligned to Earth. Periodic drops in brightness are observed due to the mutual eclipses between the components of the system. Although most stars have at least some variation in luminosity, current ground-based survey estimations indicate that 3% of the stars vary more than the sensitivity of the instruments and ∼1% are periodic [2]. Detecting periodicity and estimating the period of stars is of high importance in astronomy. The period is a key feature for classifying variable stars [3], [4], and for estimating other parameters such as mass and distance to Earth [5].
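The excerpt does not give the CKP formula; purely as a loose illustration of the idea it describes (scoring a trial period by how similar in brightness the samples are when their time separation is near a multiple of that period, via a Gaussian kernel on brightness differences and a periodic kernel on time differences), a sketch might look like this. It is not the published CKP.

```python
import numpy as np

def periodic_similarity_score(t, y, period, sigma_y=0.2, sigma_t=0.1):
    """Score a trial period for an unevenly sampled light curve.

    Pairs of samples whose time separation is close to a multiple of
    `period` (periodic kernel) and whose brightness values are similar
    (Gaussian kernel) raise the score. Loosely inspired by combining a
    generalized-correlation kernel with a periodic kernel; not the CKP.
    """
    dt = t[:, None] - t[None, :]           # pairwise time differences
    dy = y[:, None] - y[None, :]           # pairwise brightness differences
    phase = np.sin(np.pi * dt / period)    # zero when dt is a multiple of period
    k_time = np.exp(-(phase ** 2) / (2 * sigma_t ** 2))
    k_mag = np.exp(-(dy ** 2) / (2 * sigma_y ** 2))
    return float(np.mean(k_time * k_mag))

# Unevenly sampled sinusoidal "light curve" with a true period of 1.7 days.
rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0, 50, 300))
y = np.sin(2 * np.pi * t / 1.7) + 0.1 * rng.standard_normal(t.size)

trial_periods = np.linspace(0.5, 3.0, 400)
scores = [periodic_similarity_score(t, y, p) for p in trial_periods]
best = trial_periods[int(np.argmax(scores))]   # expected to land near 1.7
```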
STATISTICAL FOURIER ANALYSIS: CLARIFICATIONS AND INTERPRETATIONS by D.S.G. Pollock
D.S.G. Pollock (University of Leicester)

This paper expounds some of the results of Fourier theory that are essential to the statistical analysis of time series. It employs the algebra of circulant matrices to expose the structure of the discrete Fourier transform and to elucidate the filtering operations that may be applied to finite data sequences. An ideal filter with a gain of unity throughout the pass band and a gain of zero throughout the stop band is commonly regarded as incapable of being realised in finite samples. It is shown here that, to the contrary, such a filter can be realised both in the time domain and in the frequency domain. The algebra of circulant matrices is also helpful in revealing the nature of statistical processes that are band limited in the frequency domain. In order to apply the conventional techniques of autoregressive moving-average modelling, the data generated by such processes must be subjected to anti-aliasing filtering and subsampling. These techniques are also described. It is argued that band-limited processes are more prevalent in statistical and econometric time series than is commonly recognised.

1. Introduction. Statistical Fourier analysis is an important part of modern time-series analysis, yet it frequently poses an impediment that prevents a full understanding of temporal stochastic processes and of the manipulations to which their data are amenable. This paper provides a survey of the theory that is not overburdened by inessential complications, and it addresses some enduring misapprehensions.
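A small NumPy sketch (not from the paper) of the claim about ideal filters on finite samples: a brick-wall lowpass gain applied to the DFT of a finite sequence, and the same operation realised in the time domain as multiplication by a circulant matrix:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 256
n = np.arange(N)
# A finite data sequence: a slow sinusoid plus high-frequency noise.
x = np.sin(2 * np.pi * 3 * n / N) + 0.5 * rng.standard_normal(N)

# Ideal lowpass filter applied in the frequency domain: gain one in the
# pass band, zero in the stop band, acting circularly on the finite sample.
cutoff = 10
H = np.zeros(N)
H[:cutoff + 1] = 1.0          # non-negative frequencies 0..cutoff
H[-cutoff:] = 1.0             # matching negative frequencies
y_freq = np.fft.ifft(H * np.fft.fft(x)).real

# The same operation in the time domain: multiplication by the circulant
# matrix whose first column is the inverse DFT of the gain sequence H.
h = np.fft.ifft(H).real
C = np.stack([np.roll(h, k) for k in range(N)], axis=1)   # circulant matrix
y_time = C @ x
assert np.allclose(y_freq, y_time)
```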
Use of the Kurtosis Statistic in the Frequency Domain as an Aid in Detecting Random Signals
IEEE Journal of Oceanic Engineering, Vol. OE-9, No. 2, April 1984.

Abstract—Power spectral density estimation is often employed as a method for signal detection. For signals which occur randomly, a frequency domain kurtosis estimate supplements the power spectral density estimate and, in some cases, can be employed to detect their presence. This has been verified from experiments with real data of randomly occurring signals. In order to better understand the detection of randomly occurring signals, sinusoidal and narrow-band Gaussian signals are considered, which, when modeled to represent a fading or multipath environment, are received as non-Gaussian in terms of a frequency domain kurtosis estimate. Several fading and multipath propagation probability density distributions of practical interest are considered, including Rayleigh and log-normal. The model is generalized to handle transient and frequency modulated signals by taking into account the probability of the signal being in a specific frequency range over the total data interval.

… could be utilized in signal processing. The objective of this paper is to compare the PSD technique for signal processing with a new method which computes the frequency domain kurtosis (FDK) [2] for the real and imaginary parts of the complex frequency components. Kurtosis is defined as the ratio of a fourth-order central moment to the square of a second-order central moment. Using the Neyman-Pearson theory in the time domain, Ferguson [3] has shown that kurtosis is a locally optimum detection statistic under certain conditions. The reader is referred to Ferguson's work for the details; however, it can simply be said that it is concerned with detecting outliers …
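As a hedged sketch of a frequency-domain kurtosis estimate (an illustration of the definition quoted above, not the paper's processing chain): split a record into segments, take the DFT of each, and compute the kurtosis of the real and imaginary parts of every bin across segments. Gaussian noise yields values near 3; an intermittently present tone drives its bin much higher.

```python
import numpy as np

def frequency_domain_kurtosis(x, nseg):
    """Kurtosis of the real and imaginary parts of each DFT bin.

    The record is split into nseg segments; for every frequency bin, the
    kurtosis (fourth central moment divided by the squared second central
    moment) of the real and of the imaginary parts across segments is
    returned. Gaussian noise gives values near 3.
    """
    seg_len = len(x) // nseg
    segments = x[: nseg * seg_len].reshape(nseg, seg_len)
    spectra = np.fft.rfft(segments, axis=1)

    def kurt(v):
        c = v - v.mean(axis=0)
        return (c ** 4).mean(axis=0) / (c ** 2).mean(axis=0) ** 2

    return kurt(spectra.real), kurt(spectra.imag)

# Gaussian noise with a tone that is present in only one of 64 segments:
# the randomly occurring signal shows up as a large kurtosis at its bin.
rng = np.random.default_rng(7)
x = rng.standard_normal(64 * 256)
tone = np.cos(2 * np.pi * 50 * np.arange(256) / 256)
x[3 * 256:4 * 256] += 5 * tone
k_real, k_imag = frequency_domain_kurtosis(x, nseg=64)
```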
2D Fourier, Scale, and Cross-Correlation
CS 510 (Image Computation), Lecture #12, February 26th, 2014, Ross Beveridge & Bruce Draper.

Where are we? We can detect objects, but they can only differ in translation and 2D rotation. Then we introduced Fourier analysis. Why? Because Fourier analysis can help us with scale, and because Fourier analysis can make correlation faster.

Review: the Discrete Fourier Transform. Problem: an image is not an analogue signal that we can integrate. Therefore, for 0 ≤ x < N and 0 ≤ u < N/2:

F(u) = \sum_{x=0}^{N-1} f(x)\left[\cos\left(\frac{2\pi u x}{N}\right) - i\sin\left(\frac{2\pi u x}{N}\right)\right]

and the discrete inverse transform is:

f(x) = \frac{1}{N}\sum_{u=0}^{N-1} F(u)\left[\cos\left(\frac{2\pi u x}{N}\right) + i\sin\left(\frac{2\pi u x}{N}\right)\right]

2D Fourier Transform: so far we have looked only at 1D signals. For 2D signals, the continuous generalization is:

F(u,v) \equiv \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y)\left[\cos(2\pi(ux+vy)) - i\sin(2\pi(ux+vy))\right] dx\, dy

Note that frequencies are now two-dimensional (u is the frequency in x, v is the frequency in y), and every frequency (u, v) has a real and an imaginary component.

2D sine waves: these look as you would expect in 2D; note that the frequencies do not have to be equal in the two dimensions. (The slide shows an example image of a 2D sine-wave pattern.)

2D Discrete Fourier Transform:

F(u,v) = \sum_{x=-N/2}^{N/2}\sum_{y=-N/2}^{N/2} f(x,y)\left[\cos\left(\frac{2\pi}{N}(ux+vy)\right) - i\sin\left(\frac{2\pi}{N}(ux+vy)\right)\right]
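A brief NumPy sketch (not part of the slides) of the point that Fourier analysis makes correlation faster: cross-correlating two images via the 2D FFT and reading a translation off the correlation peak.

```python
import numpy as np

def cross_correlate_fft(image, template):
    """Circular cross-correlation of two equal-size 2D arrays via the FFT.

    By the correlation theorem, correlation in the spatial domain becomes
    element-wise multiplication by the complex conjugate in the frequency
    domain, replacing an O(N^4) sum with O(N^2 log N) work.
    """
    F_img = np.fft.fft2(image)
    F_tpl = np.fft.fft2(template)
    return np.fft.ifft2(F_img * np.conj(F_tpl)).real

# Shift a random "image" and recover the shift from the correlation peak.
rng = np.random.default_rng(8)
img = rng.standard_normal((64, 64))
shifted = np.roll(img, shift=(5, 12), axis=(0, 1))
corr = cross_correlate_fft(shifted, img)
peak = np.unravel_index(np.argmax(corr), corr.shape)   # expected (5, 12)
```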
20. The Fourier Transform in Optics, II
Topics: Parseval's Theorem; the Shift Theorem; convolutions and the Convolution Theorem; autocorrelations and the Autocorrelation Theorem; the Shah function in optics; the Fourier transform of a train of pulses; the spectrum of a light wave.

The spectrum of a light wave. The spectrum of a light wave is defined as

S(\omega) = \left| \mathcal{F}\{E(t)\} \right|^2

where \mathcal{F}\{E(t)\} denotes \tilde{E}(\omega), the Fourier transform of E(t). The Fourier transform of E(t) contains the same information as the original function E(t); it is just a different way of representing the signal (in the frequency domain rather than in the time domain). But the spectrum contains less information, because we take the magnitude of \tilde{E}(\omega), therefore losing the phase information.

Parseval's Theorem. Parseval's Theorem (also known as Rayleigh's Identity) says that the energy in a function is the same whether you integrate over time or frequency:

\int_{-\infty}^{\infty} |f(t)|^2\, dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} |F(\omega)|^2\, d\omega

Proof: write f(t) and f^*(t) as inverse transforms,

\int |f(t)|^2 dt = \int f(t)\, f^*(t)\, dt
= \int \left[\frac{1}{2\pi}\int F(\omega)\, e^{i\omega t} d\omega\right]\left[\frac{1}{2\pi}\int F^*(\omega')\, e^{-i\omega' t} d\omega'\right] dt
= \frac{1}{(2\pi)^2}\int\!\!\int F(\omega)\, F^*(\omega') \left[\int e^{i(\omega-\omega')t} dt\right] d\omega'\, d\omega
= \frac{1}{(2\pi)^2}\int\!\!\int F(\omega)\, F^*(\omega')\, \left[2\pi\,\delta(\omega-\omega')\right] d\omega'\, d\omega
= \frac{1}{2\pi}\int |F(\omega)|^2\, d\omega

The Fourier transform of a sum of two functions:

\mathcal{F}\{a f(t) + b g(t)\} = a\,\mathcal{F}\{f(t)\} + b\,\mathcal{F}\{g(t)\}

The FT of a sum is the sum of the FTs, and constants factor out. This property reflects the fact that the Fourier transform is a linear operation.

The Shift Theorem. The Fourier transform of a shifted function f(t-a) is

\mathcal{F}\{f(t-a)\} = e^{-i\omega a} F(\omega)

Proof (with the transform convention F(\omega) = \int f(t)\, e^{-i\omega t}\, dt): change variables u = t - a,

\mathcal{F}\{f(t-a)\} = \int f(t-a)\, e^{-i\omega t}\, dt = \int f(u)\, e^{-i\omega(u+a)}\, du = e^{-i\omega a}\int f(u)\, e^{-i\omega u}\, du = e^{-i\omega a} F(\omega)

This theorem is important in optics because we often encounter functions that are shifting (continuously) along the time axis: they are called waves!

An example of the Shift Theorem in optics: suppose that we are measuring the spectrum of a light wave E(t), but a small fraction of the irradiance of this light takes a different path that also leads to the spectrometer …
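A numerical check (NumPy, not part of the lecture) of the two results above, using the FFT as a discrete stand-in for the continuous transform:

```python
import numpy as np

rng = np.random.default_rng(9)
N, dt = 1024, 0.01
t = np.arange(N) * dt
E = np.exp(-((t - 5.0) ** 2) / 0.5) * np.cos(2 * np.pi * 10 * t)  # a pulse

# Parseval's theorem, discrete analogue:
# sum |E|^2 dt  equals  (1/2pi) * sum |F|^2 domega, with F ~ fft(E) * dt.
F = np.fft.fft(E) * dt
domega = 2 * np.pi / (N * dt)
energy_time = np.sum(np.abs(E) ** 2) * dt
energy_freq = np.sum(np.abs(F) ** 2) * domega / (2 * np.pi)
assert np.isclose(energy_time, energy_freq)

# Shift theorem: delaying E by a multiplies its transform by exp(-i*omega*a).
shift = 50                                       # delay in samples
a = shift * dt
omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)
F_shifted = np.fft.fft(np.roll(E, shift)) * dt   # circularly delayed copy
assert np.allclose(F_shifted, np.exp(-1j * omega * a) * F)
```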
Validity of Spatial Covariance Matrices Over Time and Frequency
Ingo Viering, Helmut Hofstetter, and Wolfgang Utschick. Affiliations: Forschungszentrum Telekommunikation Wien, Donau-City-Straße 1, 1220 Vienna, Austria; Institute for Circuit Theory and Signal Processing, Munich University of Technology, Arcisstraße 21, 80290 Munich, Germany; Siemens AG, Lise-Meitner-Straße 13, 89081 Ulm, Germany.

Abstract—A MIMO channel measurement campaign with a moving mobile has been conducted in Vienna. The measured data will be used to investigate covariance matrices with respect to their dependence on time and frequency. This document focuses on the description of the evaluation techniques which will be applied to the measurement data in the future. The F-eigen-ratio is defined, expressing the degradation due to outdated covariance matrices. Illustrating the described methods, first results based on the measured data are shown for a simple line-of-sight scenario.

I. Introduction: All mobile communication systems incorporating multiple antennas on one or on both sides of the transmission link strongly depend on the spatial structure of the mobile channel. … measurement data in section IV. Finally, section V draws some conclusions.

II. Measurement Setup: The measurements were done with the MIMO-capable wideband vector channel sounder RUSK-ATM, manufactured by MEDAV [1]. The sounder was specifically adapted to operate at a center frequency of 2 GHz with an output power of 2 Watt. The transmitted signal is generated in the frequency domain to ensure a pre-defined spectrum over 120 MHz bandwidth, and approximately a constant envelope over time. In the receiver, the input signal is correlated with the transmitted pulse shape in the frequency domain, resulting in the specific transfer functions.
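The F-eigen-ratio itself is not defined in this excerpt; as a rough, hypothetical illustration of the underlying question (how quickly a spatial covariance matrix estimated from past snapshots becomes outdated as the mobile moves), one might compare dominant eigenvectors of covariance estimates taken at two different times. All names and data below are illustrative, not the paper's measurements.

```python
import numpy as np

def spatial_covariance(H):
    """Sample spatial covariance of channel snapshots.

    H has shape (n_snapshots, n_antennas); each row is one measured
    channel vector. Returns an n_antennas x n_antennas Hermitian matrix.
    """
    return H.conj().T @ H / H.shape[0]

# Synthetic stand-in for measured data: snapshots clustered around a
# dominant spatial direction that drifts as the mobile moves.
rng = np.random.default_rng(10)
n_ant, n_snap = 8, 500

def snapshots(angle):
    steering = np.exp(1j * np.pi * np.arange(n_ant) * np.sin(angle))
    gains = rng.normal(size=(n_snap, 1)) + 1j * rng.normal(size=(n_snap, 1))
    noise = 0.1 * (rng.normal(size=(n_snap, n_ant)) + 1j * rng.normal(size=(n_snap, n_ant)))
    return gains * steering + noise

R_old = spatial_covariance(snapshots(angle=0.30))
R_new = spatial_covariance(snapshots(angle=0.45))   # the channel has moved on

# One crude indicator of how much an outdated covariance matrix degrades
# beamforming: overlap between the old and new dominant eigenvectors.
w_old = np.linalg.eigh(R_old)[1][:, -1]
w_new = np.linalg.eigh(R_new)[1][:, -1]
mismatch = 1.0 - np.abs(w_old.conj() @ w_new) ** 2
```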
Ultrasonic Velocity Measurement Using Phase-Slope and Cross-Correlation Methods
NASA Technical Memorandum 83794. David R. Hull, Harold E. Kautz, and Alex Vary, Lewis Research Center, Cleveland, Ohio. Prepared for the 1984 Spring Conference of the American Society for Nondestructive Testing, Denver, Colorado, May 21-24, 1984.

Summary: Computer-implemented phase-slope and cross-correlation methods are introduced for measuring time delays between pairs of broadband ultrasonic pulse-echo signals for determining velocity in engineering materials. The phase-slope and cross-correlation methods are compared with the overlap method, which is currently in wide use. Comparison of digital versions of the three methods shows similar results for most materials having low ultrasonic attenuation; however, the cross-correlation method is preferred for highly attenuating materials. An analytical basis for the cross-correlation method is presented. Examples are given for the three methods investigated to measure velocity in representative materials in the megahertz range.

Introduction: Ultrasonic velocity measurements are widely used to determine properties and states of materials. In the case of engineering solids, measurements of ultrasonic wave propagation velocities are routinely used to determine elastic constants (refs. 1 to 5). There has been an increasing use of ultrasonic velocity measurements for nondestructive characterization of material microstructures and mechanical properties (refs. 6 to 10). Therefore, it is important to have appropriate practical methods for making velocity measurements on a variety of material samples.
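A minimal NumPy sketch of the cross-correlation method as described in the summary: estimate the time delay between two broadband echoes from the peak of their cross-correlation, then convert it to a velocity. The pulse shape, sampling rate, and specimen thickness below are invented for illustration.

```python
import numpy as np

def cross_correlation_delay(a, b, fs):
    """Delay of signal b relative to signal a, in seconds.

    The lag at which the cross-correlation peaks gives the time shift;
    this is the core of the cross-correlation velocity method.
    """
    corr = np.correlate(b, a, mode="full")
    lag = np.argmax(corr) - (len(a) - 1)
    return lag / fs

# Two synthetic broadband echoes separated by a known round-trip time.
fs = 100e6                                    # 100 MHz sampling rate (assumed)
t = np.arange(2048) / fs
pulse = np.exp(-((t - 2e-6) ** 2) / (0.1e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)
true_delay = 3.2e-6
echo1 = pulse
echo2 = np.interp(t - true_delay, t, pulse, left=0.0)   # delayed copy

delay = cross_correlation_delay(echo1, echo2, fs)        # about 3.2 microseconds
thickness = 0.01                                         # 10 mm specimen (assumed)
velocity = 2 * thickness / delay                         # round-trip path = 2 * d
```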