A Practical Guide to Wavelet Analysis
Recommended publications
A Core Calculus for Provenance (LNCS 7215)
Umut A. Acar (Max Planck Institute for Software Systems; {umut,rolyp}@mpi-sws.org), Amal Ahmed (Indiana University; [email protected]), James Cheney (University of Edinburgh; [email protected]), and Roly Perera (Max Planck Institute for Software Systems).

Abstract. Provenance is an increasing concern due to the revolution in sharing and processing scientific data on the Web and in other computer systems. It is proposed that many computer systems will need to become provenance-aware in order to provide satisfactory accountability, reproducibility, and trust for scientific or other high-value data. To date, there is no consensus concerning appropriate formal models or security properties for provenance. In previous work, we introduced a formal framework for provenance security and proposed formal definitions of properties called disclosure and obfuscation. This paper develops a core calculus for provenance in programming languages. Whereas previous models of provenance have focused on special-purpose languages such as workflows and database queries, we consider a higher-order, functional language with sums, products, and recursive types and functions. We explore the ramifications of using traces based on operational derivations for the purpose of comparing other forms of provenance. We design a rich class of provenance views over traces. Finally, we prove relationships among provenance views and develop some solutions to the disclosure and obfuscation problems.

1 Introduction. Provenance, or meta-information about the origin, history, or derivation of an object, is now recognized as a central challenge in establishing trust and providing security in computer systems, particularly on the Web. Essentially, provenance management involves instrumenting a system with detailed monitoring or logging of auditable records that help explain how results depend on inputs or other (sometimes untrustworthy) sources.
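A toy sketch of the instrumentation idea the introduction describes: values carry a trace recording how they were derived, so a result can "explain" its dependence on inputs. This is illustrative Python, not the paper's calculus; the `Traced` type and the helper names are invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Traced:
    """A value paired with a trace of how it was derived -- a toy
    illustration of provenance-aware evaluation, not the paper's calculus."""
    value: float
    trace: str

def lift(name, x):
    # A source value: its trace is just its name.
    return Traced(x, name)

def add(a, b):
    return Traced(a.value + b.value, f"({a.trace} + {b.trace})")

def mul(a, b):
    return Traced(a.value * b.value, f"({a.trace} * {b.trace})")

x = lift("x", 3.0)
y = lift("y", 4.0)
z = mul(add(x, y), x)
print(z.value, z.trace)   # 21.0 ((x + y) * x): the trace explains the result
```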
Kalman and Particle Filtering
Abstract: The Kalman and Particle filters are algorithms that recursively update an estimate of the state and find the innovations driving a stochastic process given a sequence of observations. The Kalman filter accomplishes this goal by linear projections, while the Particle filter does so by a sequential Monte Carlo method. With the state estimates, we can forecast and smooth the stochastic process. With the innovations, we can estimate the parameters of the model. The article discusses how to set a dynamic model in a state-space form, derives the Kalman and Particle filters, and explains how to use them for estimation.

Since both filters start with a state-space representation of the stochastic processes of interest, section 1 presents the state-space form of a dynamic model. Then, section 2 introduces the Kalman filter and section 3 develops the Particle filter. For extended expositions of this material, see Doucet, de Freitas, and Gordon (2001), Durbin and Koopman (2001), and Ljungqvist and Sargent (2004).

1. The state-space representation of a dynamic model. A large class of dynamic models can be represented by a state-space form:

$$X_{t+1} = \varphi(X_t, W_{t+1}; \gamma) \qquad (1)$$
$$Y_t = g(X_t, V_t; \gamma) \qquad (2)$$

This representation handles a stochastic process by finding three objects: a vector that describes the position of the system (a state, $X_t \in \mathcal{X} \subset \mathbb{R}^l$) and two functions, one mapping the state today into the state tomorrow (the transition equation, (1)) and one mapping the state into observables, $Y_t$ (the measurement equation, (2)).
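The linear-projection update the excerpt describes can be written compactly for the linear-Gaussian special case $X_{t+1} = A X_t + W_{t+1}$, $Y_t = C X_t + V_t$. The sketch below is a minimal filter under those assumptions; the matrix names A, C, Q, R and the AR(1) demo are ours, not the article's.

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Minimal linear-Gaussian Kalman filter.

    State:        x_{t+1} = A x_t + w_t,  w_t ~ N(0, Q)
    Measurement:  y_t     = C x_t + v_t,  v_t ~ N(0, R)
    Returns filtered state means and the innovations.
    """
    x, P = x0, P0
    states, innovations = [], []
    for yt in y:
        # Predict: propagate the estimate through the transition equation.
        x = A @ x
        P = A @ P @ A.T + Q
        # Innovation: the part of y_t not explained by the prediction.
        nu = yt - C @ x
        S = C @ P @ C.T + R
        # Update: linear projection of the state onto the new observation.
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ nu
        P = P - K @ C @ P
        states.append(x.copy())
        innovations.append(nu.copy())
    return np.array(states), np.array(innovations)

# Example: noisy observations of a scalar AR(1) state.
rng = np.random.default_rng(0)
A = np.array([[0.9]]); C = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.5]])
x_true = np.zeros(100); y = np.zeros((100, 1))
for t in range(1, 100):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal(0, np.sqrt(0.1))
    y[t] = x_true[t] + rng.normal(0, np.sqrt(0.5))
xf, nu = kalman_filter(y, A, C, Q, R, x0=np.zeros(1), P0=np.eye(1))
```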
Practice Lesson 4: Round Whole Numbers (Unit 1)
Solve.

Prerequisite: Round Three-Digit Numbers. Study the example showing how to round a three-digit number. Then solve problems 1-6.

Example. Round 154 to the nearest ten. On a number line from 150 to 160, 154 is between 150 and 160, and it is closer to 150. So 154 rounded to the nearest ten is 150. Round 154 to the nearest hundred. On a number line from 100 to 200, 154 is between 100 and 200, and it is closer to 200. So 154 rounded to the nearest hundred is 200.

1. Round 236 to the nearest ten. Which tens is 236 between?

3. Round each number.
   a. 689 rounded to the nearest ten is 690.
   b. 68 rounded to the nearest hundred is 100.
   c. 945 rounded to the nearest ten is 950.
   d. 945 rounded to the nearest hundred is 900.

4. Rachel earned $164 babysitting last month. She earned $95 this month. Rachel rounded each amount to the nearest $10 to estimate how much she earned. What is each amount rounded to the nearest $10? Show your work. (Student work will vary. Students might draw number lines, a hundreds chart, or explain in words.) Solution: $160 and $100.

5. Use the digits in the tiles to create a number that makes each statement true. Use each digit only once. Tiles: 1 2 3 4 5 6 7 8 9. (Possible answer shown.)
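The worksheet's rule (pick the nearer multiple of ten or hundred) can be checked mechanically. A small Python sketch; the examples above never hit an exact half, so the round-half-up convention used here is an assumption.

```python
def round_to(n, base):
    """Round n to the nearest multiple of base (halves round up,
    a common school convention -- the worksheet never tests a half)."""
    return ((n + base // 2) // base) * base

assert round_to(154, 10) == 150    # 154 is closer to 150 than to 160
assert round_to(154, 100) == 200   # 154 is closer to 200 than to 100
assert round_to(689, 10) == 690
assert round_to(945, 10) == 950
assert round_to(945, 100) == 900
assert round_to(164, 10) == 160 and round_to(95, 10) == 100  # problem 4
```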
Lecture 19: Wavelet Compression of Time Series and Images
© Christopher S. Bretherton, Winter 2014. Ref: Matlab Wavelet Toolbox help.

19.1 Wavelet compression of a time series. The last section of wavelet leleccum notoolbox.m demonstrates the use of wavelet compression on a time series. The idea is to keep the wavelet coefficients of largest amplitude and zero out the small ones.

19.2 Wavelet analysis/compression of an image. Wavelet analysis is easily extended to two-dimensional images or datasets (data matrices), by first doing a wavelet transform of each column of the matrix, then transforming each row of the result (see wavelet image). The wavelet coefficient matrix has the highest-level (largest-scale) averages in the first rows/columns, then successively smaller detail scales further down the rows/columns. The example also shows fine results with 50-fold data compression.

19.3 Continuous Wavelet Transform (CWT). Given a continuous signal $u(t)$ and an analyzing wavelet $\psi(x)$, the CWT has the form

$$W(\lambda, t) = \lambda^{-1/2} \int_{-\infty}^{\infty} \psi\!\left(\frac{s - t}{\lambda}\right) u(s)\, ds \qquad (19.3.1)$$

Here $\lambda$, the scale, is a continuous variable. We insist that $\psi$ have mean zero and that its square integrates to 1. The continuous Haar wavelet is defined:

$$\psi(t) = \begin{cases} 1 & 0 < t < 1/2 \\ -1 & 1/2 < t < 1 \\ 0 & \text{otherwise} \end{cases} \qquad (19.3.2)$$

$W(\lambda, t)$ is proportional to the difference of running means of $u$ over successive intervals of length $\lambda/2$.

In practice, for a discrete time series, the integral is evaluated as a Riemann sum using the Matlab wavelet toolbox function cwt.
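As a concrete illustration of section 19.1's recipe (keep the largest-amplitude wavelet coefficients, zero out the rest), here is a self-contained Haar-wavelet sketch in Python rather than the Matlab toolbox the lecture uses; the signal and the 90% cutoff are our choices for the demo.

```python
import numpy as np

def haar_decompose(u, levels):
    """Plain Haar decomposition of a length-2^k signal: repeatedly split
    into scaled running means (approximation) and differences (detail)."""
    coeffs, approx = [], np.asarray(u, dtype=float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        coeffs.append((even - odd) / np.sqrt(2))   # detail at this scale
        approx = (even + odd) / np.sqrt(2)
    coeffs.append(approx)                          # coarsest approximation
    return coeffs

def haar_reconstruct(coeffs):
    approx = coeffs[-1]
    for detail in reversed(coeffs[:-1]):
        out = np.empty(2 * approx.size)
        out[0::2] = (approx + detail) / np.sqrt(2)
        out[1::2] = (approx - detail) / np.sqrt(2)
        approx = out
    return approx

# Compress: keep only the largest-amplitude coefficients, zero the rest.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 256)
u = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(256)
coeffs = haar_decompose(u, levels=4)
cutoff = np.quantile(np.abs(np.concatenate(coeffs)), 0.90)  # keep top 10%
compressed = [np.where(np.abs(c) >= cutoff, c, 0.0) for c in coeffs]
u_hat = haar_reconstruct(compressed)
```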
Wavelet Entropy Energy Measure (WEEM): a Multiscale Measure to Grade a Geophysical System's Predictability
EGU21-703, https://doi.org/10.5194/egusphere-egu21-703, EGU General Assembly 2021. © Author(s) 2021, distributed under the Creative Commons Attribution 4.0 License.

Ravi Kumar Guntu and Ankit Agarwal, Indian Institute of Technology Roorkee, Hydrology, Roorkee, India ([email protected]).

Model-free gradation of the predictability of a geophysical system is essential to quantify how much inherent information is contained within the system and to evaluate different forecasting methods' performance to get the best possible prediction. We conjecture that the multiscale information enclosed in a given geophysical time series is the only input source for any forecast model. In the literature, established entropic measures dealing with grading the predictability of a time series at multiple time scales are limited. Therefore, we need an additional measure to quantify the information at multiple time scales, thereby grading the predictability level. This study introduces a novel measure, the Wavelet Entropy Energy Measure (WEEM), based on wavelet entropy, to investigate a time series's energy distribution. From the WEEM analysis, predictability can be graded from low to high. The difference between the entropy of the wavelet energy distribution of a time series and the entropy of the wavelet energy of white noise is the basis for gradation. The metric quantifies the proportion of the deterministic component of a time series in terms of energy concentration, and its range varies from zero to one. A value of one corresponds to a highly predictable series, due to its high energy concentration, while zero represents a process similar to white noise, with a scattered energy distribution.
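The abstract defines the WEEM only qualitatively, so the following Python sketch is one plausible reading, stated as an assumption: compute the Shannon entropy of the relative Haar-detail energy per scale, compare it to the average entropy of white noise, and map the result to [0, 1].

```python
import numpy as np

def energy_per_scale(u, levels):
    """Haar detail energy at each scale of a length-2^k signal."""
    approx, energies = np.asarray(u, dtype=float), []
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        energies.append(np.sum(((even - odd) / np.sqrt(2)) ** 2))
        approx = (even + odd) / np.sqrt(2)
    return np.array(energies)

def wavelet_entropy(u, levels=6):
    p = energy_per_scale(u, levels)
    p = p / p.sum()                       # relative wavelet energy per scale
    return -np.sum(p * np.log(p + 1e-12))

def weem(u, levels=6, n_noise=200, seed=0):
    """Assumed form: 1 - H(signal)/H(white noise).  The abstract gives only
    the qualitative definition, so treat this normalization as illustrative."""
    rng = np.random.default_rng(seed)
    h_noise = np.mean([wavelet_entropy(rng.standard_normal(len(u)), levels)
                       for _ in range(n_noise)])
    return max(0.0, 1.0 - wavelet_entropy(u, levels) / h_noise)

t = np.linspace(0, 1, 1024)
print(weem(np.sin(2 * np.pi * 8 * t)))                        # near 1: predictable
print(weem(np.random.default_rng(1).standard_normal(1024)))   # near 0: noise-like
```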
Adaptive Wavelet Clustering for Highly Noisy Data
Zengjian Chen (Department of Computer Science, Huazhong University of Science and Technology, Wuhan, China; [email protected]), Jiayi Liu (Department of Computer Science, University of Massachusetts Amherst, Massachusetts, USA; [email protected]), Yihe Deng (Department of Mathematics, University of California, Los Angeles, California, USA; [email protected]), Kun He* (Department of Computer Science, Huazhong University of Science and Technology, Wuhan, China; [email protected]), and John E. Hopcroft (Department of Computer Science, Cornell University, Ithaca, NY, USA; [email protected]).

Abstract — In this paper we make progress on the unsupervised task of mining arbitrarily shaped clusters in highly noisy datasets, which is a task present in many real-world applications. Based on the fundamental work that first applies a wavelet transform to data clustering, we propose an adaptive clustering algorithm, denoted as AdaWave, which exhibits favorable characteristics for clustering. By a self-adaptive thresholding technique, AdaWave is parameter-free and can handle data in various situations. It is deterministic, fast in linear time, order-insensitive, shape-insensitive, robust to highly noisy data, and requires no pre-knowledge of data models.

From the introduction: Based on the pioneering work of Sheikholeslami that applies the wavelet transform, originally used for signal processing, to spatial data clustering [12], we propose a new wavelet-based algorithm called AdaWave that can adaptively and effectively uncover clusters in highly noisy data. To tackle general applications, we assume that the clusters in a dataset do not follow any specific distribution and can be arbitrarily shaped. To show the hardness of the clustering task, we first design ...
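AdaWave's own adaptive threshold is not specified in this excerpt; the sketch below only illustrates the underlying wavelet-grid idea from Sheikholeslami's line of work (quantize points onto a grid, keep the low-frequency wavelet content, threshold, connect dense cells). The bin count and quantile cutoff are arbitrary demo choices, not the paper's.

```python
import numpy as np
from scipy import ndimage

def wavelet_grid_cluster(points, bins=64, keep_quantile=0.95):
    """Illustrative wavelet-grid clustering in the spirit of
    WaveCluster/AdaWave (the papers' adaptive threshold is not
    given in this excerpt).  Steps: quantize points onto a grid,
    keep the Haar approximation (low-frequency) content to suppress
    noise, threshold, then label connected cells as clusters."""
    H, xe, ye = np.histogram2d(points[:, 0], points[:, 1], bins=bins)
    # One Haar analysis step per axis: 2x2 block averages.
    smooth = 0.25 * (H[0::2, 0::2] + H[1::2, 0::2]
                     + H[0::2, 1::2] + H[1::2, 1::2])
    mask = smooth > np.quantile(smooth, keep_quantile)
    labels, n_clusters = ndimage.label(mask)   # connected dense cells
    return labels, n_clusters

rng = np.random.default_rng(0)
blob1 = rng.normal([0, 0], 0.3, size=(500, 2))
blob2 = rng.normal([3, 3], 0.3, size=(500, 2))
noise = rng.uniform(-2, 5, size=(400, 2))     # heavy uniform background noise
pts = np.vstack([blob1, blob2, noise])
labels, k = wavelet_grid_cluster(pts)
print("clusters found:", k)                   # expect 2 despite the noise
```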
Power Comparisons of Five Most Commonly Used Autocorrelation Tests
Pakistan Journal of Statistics and Operation Research, Vol. 16, No. 1, 2020, pp. 119-130. DOI: http://dx.doi.org/10.18187/pjsor.v16i1.2691

Stanislaus S. Uyanto, School of Economics and Business, Atma Jaya Catholic University of Indonesia, [email protected]

Abstract. In regression analysis, autocorrelation of the error terms violates the ordinary least squares assumption that the error terms are uncorrelated. The consequence is that the estimates of coefficients and their standard errors will be wrong if the autocorrelation is ignored. There are many tests for autocorrelation; we want to know which test is more powerful. We use Monte Carlo methods to compare the power of the five most commonly used tests for autocorrelation, namely the Durbin-Watson, Breusch-Godfrey, Box-Pierce, Ljung-Box, and Runs tests, in two different linear regression models. The results indicate that the Durbin-Watson test performs better in the regression model without a lagged dependent variable, although its advantage over the other tests reduces with increasing autocorrelation and sample size. For the model with a lagged dependent variable, the Breusch-Godfrey test is generally superior to the other tests.

Key Words: Correlated error terms; Ordinary least squares assumption; Residuals; Regression diagnostic; Lagged dependent variable. Mathematical Subject Classification: 62J05; 62J20.

1. Introduction. An important assumption of the classical linear regression model states that there is no autocorrelation in the error terms. Autocorrelation of the error terms violates the ordinary least squares assumption that the error terms are uncorrelated, meaning that the Gauss-Markov theorem (Plackett, 1949, 1950; Greene, 2018) does not apply, and that ordinary least squares estimators are no longer the best linear unbiased estimators.
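Three of the five tests compared in the paper are available off the shelf. A sketch assuming the usual statsmodels API (durbin_watson, acorr_breusch_godfrey, acorr_ljungbox), applied to a regression whose errors follow an AR(1) process:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_breusch_godfrey, acorr_ljungbox

# Simulate y = 1 + 2x + e with AR(1) errors (rho = 0.6), then test the
# OLS residuals for autocorrelation.
rng = np.random.default_rng(0)
n = 200
x = rng.standard_normal(n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.standard_normal()
y = 1.0 + 2.0 * x + e

res = sm.OLS(y, sm.add_constant(x)).fit()
print("Durbin-Watson:", durbin_watson(res.resid))   # ~2 under no autocorrelation
print("Breusch-Godfrey (LM stat, p):", acorr_breusch_godfrey(res, nlags=2)[:2])
print(acorr_ljungbox(res.resid, lags=[5]))          # Ljung-Box at lag 5
```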
A Fourier-Wavelet Monte Carlo Method for Fractal Random Fields
Journal of Computational Physics 132, 384-408 (1997), Article No. CP965647.
Frank W. Elliott Jr., David J. Horntrop, and Andrew J. Majda, Courant Institute of Mathematical Sciences, 251 Mercer Street, New York, New York 10012. Received August 2, 1996; revised December 23, 1996.

Abstract. A new hierarchical method for the Monte Carlo simulation of random fields called the Fourier-wavelet method is developed and applied to isotropic Gaussian random fields with power-law spectral density functions. This technique is based upon the orthogonal decomposition of the Fourier stochastic integral representation of the field using wavelets. The Meyer wavelet is used here because its rapid decay properties allow for a very compact representation of the field. The Fourier-wavelet method is shown to be straightforward to implement, given the nature of the necessary precomputations and the run-time calculations, and yields comparable results with scaling behavior over as many decades as the physical space multiwavelet methods.

From the introduction: fractal random fields satisfy the structure-function scaling

$$\langle [v(x) - v(y)]^2 \rangle = C_H\, |x - y|^{2H}, \qquad (1.1)$$

where $0 < H < 1$ is the Hurst exponent and $\langle \cdot \rangle$ denotes the expected value. Here we develop a new Monte Carlo method based upon a wavelet expansion of the Fourier space representation of the fractal random fields in (1.1). This method is capable of generating a velocity field with the Kolmogoroff spectrum ($H = 1/3$ in (1.1)) over many (10 to 15) decades of scaling behavior, comparable to the physical space multiwavelet methods developed recently by two of the authors.
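The Fourier-wavelet method itself requires the paper's Meyer-wavelet machinery. As a baseline illustration of the target statistics in (1.1), here is plain Fourier (spectral) synthesis of a periodic 1D Gaussian field with the corresponding power-law spectrum — explicitly not the paper's method, and limited to far fewer decades of scaling:

```python
import numpy as np

def spectral_fractal_field(n, H, seed=0):
    """Plain Fourier (not Fourier-wavelet) synthesis of a periodic 1D
    Gaussian field with power-law spectrum E(k) ~ k^{-(2H+1)}, which
    yields structure-function scaling <[v(x)-v(y)]^2> ~ |x-y|^{2H}
    over the resolved range.  A baseline sketch, not the paper's method."""
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n, d=1.0 / n)          # integer wavenumbers 0..n/2
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (-(2 * H + 1) / 2)      # sqrt of the power spectrum
    phases = rng.standard_normal(k.size) + 1j * rng.standard_normal(k.size)
    return np.fft.irfft(amp * phases, n=n)

v = spectral_fractal_field(2 ** 14, H=1 / 3)   # Kolmogorov-like exponent
# Check scaling: the second-order structure function vs. separation r
# should grow roughly like r^(2/3) for H = 1/3.
for r in (4, 16, 64):
    print(r, np.mean((v - np.roll(v, r)) ** 2))
```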
LECTURES 2-3: Stochastic Processes, Autocorrelation Function, Stationarity
Important points of Lecture 1: A time series $\{X_t\}$ is a series of observations taken sequentially over time: $x_t$ is an observation recorded at a specific time $t$. Characteristics of time series data: observations are dependent, become available at equally spaced time points, and are time-ordered. This is a discrete time series. The purposes of time series analysis are to model and to predict or forecast future values of a series based on the history of that series.

2.2 Some descriptive techniques. (Based on [BD] §1.3 and §1.4)

Take a step backwards: how do we describe a r.v. or a random vector?
- For a r.v. $X$: d.f. $F_X(x) := P(X \le x)$, mean $\mu = EX$ and variance $\sigma^2 = \mathrm{Var}(X)$.
- For a random vector $(X_1, X_2)$: joint d.f. $F_{X_1,X_2}(x_1, x_2) := P(X_1 \le x_1, X_2 \le x_2)$, marginal d.f. $F_{X_1}(x_1) := P(X_1 \le x_1) \equiv F_{X_1,X_2}(x_1, \infty)$, mean vector $(\mu_1, \mu_2) = (EX_1, EX_2)$, variances $\sigma_1^2 = \mathrm{Var}(X_1)$, $\sigma_2^2 = \mathrm{Var}(X_2)$, and covariance $\mathrm{Cov}(X_1, X_2) = E(X_1 - \mu_1)(X_2 - \mu_2) \equiv E(X_1 X_2) - \mu_1 \mu_2$. Often we use correlation = normalized covariance: $\mathrm{Cor}(X_1, X_2) = \mathrm{Cov}(X_1, X_2) / \{\sigma_1 \sigma_2\}$.

To describe a process $X_1, X_2, \ldots$ we define:
(i) Distribution function: the finite-dimensional (fi-di) d.f. $F_{t_1 \ldots t_n}(x_1, \ldots, x_n) = P(X_{t_1} \le x_1, \ldots, X_{t_n} \le x_n)$; i.e. this is the joint d.f. for the vector $(X_{t_1}, \ldots, X_{t_n})$.
(ii) First- and second-order moments:
- Mean: $\mu_X(t) = EX_t$
- Variance: $\sigma_X^2(t) = E(X_t - \mu_X(t))^2 \equiv EX_t^2 - \mu_X(t)^2$
- Autocovariance function: $\gamma_X(t, s) = \mathrm{Cov}(X_t, X_s) = E[(X_t - \mu_X(t))(X_s - \mu_X(s))] \equiv E(X_t X_s) - \mu_X(t)\mu_X(s)$. (Note: this is an infinite matrix.)
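The autocovariance and autocorrelation definitions above translate directly into sample estimates. A short numpy sketch, checked against the known ACF $\rho(h) = \phi^h$ of an AR(1) process:

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation of a (weakly stationary) series:
    gamma_hat(h) = (1/n) * sum (x_t - xbar)(x_{t+h} - xbar),
    rho_hat(h) = gamma_hat(h) / gamma_hat(0)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()
    gamma = np.array([np.sum(xc[:n - h] * xc[h:]) / n
                      for h in range(max_lag + 1)])
    return gamma / gamma[0]

# AR(1) with phi = 0.7: theoretical ACF is rho(h) = 0.7**h.
rng = np.random.default_rng(0)
x = np.zeros(5000)
for t in range(1, 5000):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()
print(sample_acf(x, 5))   # approx [1, 0.70, 0.49, 0.34, 0.24, 0.17]
```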
Sabermetrics: the Past, the Present, and the Future
Jim Albert, February 12, 2010.

Abstract. This article provides an overview of sabermetrics, the science of learning about baseball through objective evidence. Statistics and baseball have always had a strong kinship, as many famous players are known by their famous statistical accomplishments, such as Joe DiMaggio's 56-game hitting streak and Ted Williams' .406 batting average in the 1941 baseball season. We give an overview of how one measures performance in batting, pitching, and fielding. In baseball, the traditional measures are batting average, slugging percentage, and on-base percentage, but modern measures such as OPS (on-base percentage plus slugging percentage) are better at predicting the number of runs a team will score in a game. Pitching is a harder aspect of performance to measure, since traditional measures such as winning percentage and earned run average are confounded by the abilities of the pitcher's teammates. Modern measures of pitching such as DIPS (defense independent pitching statistics) are helpful in isolating the contributions of a pitcher that do not involve his teammates. It is also challenging to measure the quality of a player's fielding ability, since the standard measure of fielding, the fielding percentage, is not helpful in understanding the range of a player in moving towards a batted ball. New measures of fielding have been developed that are useful in measuring a player's fielding range. Major League Baseball is measuring the game in new ways, and sabermetrics is using this new data to find better measures of player performance.
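The batting measures mentioned above are simple ratios, using the standard definitions (OBP counts walks and hit-by-pitch over plate appearances; SLG is total bases per at-bat). A small sketch computing OPS = OBP + SLG from a hypothetical, illustrative season line:

```python
def obp(h, bb, hbp, ab, sf):
    """On-base percentage: times on base / (AB + BB + HBP + SF)."""
    return (h + bb + hbp) / (ab + bb + hbp + sf)

def slg(singles, doubles, triples, hr, ab):
    """Slugging percentage: total bases per at-bat."""
    return (singles + 2 * doubles + 3 * triples + 4 * hr) / ab

def ops(*, h, bb, hbp, ab, sf, doubles, triples, hr):
    """OPS = OBP + SLG, the modern batting measure the article discusses."""
    singles = h - doubles - triples - hr
    return obp(h, bb, hbp, ab, sf) + slg(singles, doubles, triples, hr, ab)

# Hypothetical season line, for illustration only.
print(round(ops(h=185, bb=70, hbp=5, ab=550, sf=6,
                doubles=35, triples=3, hr=30), 3))
```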
Alternative Tests for Time Series Dependence Based on Autocorrelation Coefficients
Richard M. Levich and Rosario C. Rizzo*. Current draft: December 1998.

Abstract: When autocorrelation is small, existing statistical techniques may not be powerful enough to reject the hypothesis that a series is free of autocorrelation. We propose two new and simple statistical tests (RHO and PHI) based on the unweighted sum of autocorrelation and partial autocorrelation coefficients. We analyze a set of simulated data to show the higher power of RHO and PHI in comparison to conventional tests for autocorrelation, especially in the presence of small but persistent autocorrelation. We show an application of our tests to data on currency futures to demonstrate their practical use. Finally, we indicate how our methodology could be used for a new class of time series models (the Generalized Autoregressive, or GAR models) that take into account the presence of small but persistent autocorrelation.

An earlier version of this paper was presented at the Symposium on Global Integration and Competition, sponsored by the Center for Japan-U.S. Business and Economic Studies, Stern School of Business, New York University, March 27-28, 1997. We thank Paul Samuelson and the other participants at that conference for useful comments. * Stern School of Business, New York University and Research Department, Bank of Italy, respectively.

1. Introduction. Economic time series are often characterized by positive ...
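The RHO statistic is described above only as the unweighted sum of autocorrelation coefficients; its exact standardization is not given in this excerpt, so the scaling in the sketch below (each $\sqrt{n}\,r_k$ is approximately N(0,1) under independence, so the sum over m lags is divided by $\sqrt{m}$) is our assumption for illustration:

```python
import numpy as np

def rho_statistic(x, m=10):
    """Unweighted sum of the first m sample autocorrelations, standardized
    so it is ~N(0,1) under independence.  The paper's exact normalization
    is not given in this excerpt; this version is an assumption."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    acf = np.array([np.sum(xc[:n - k] * xc[k:]) / denom
                    for k in range(1, m + 1)])
    return np.sqrt(n) * acf.sum() / np.sqrt(m)

rng = np.random.default_rng(0)
iid = rng.standard_normal(1000)
ar = np.zeros(1000)
for t in range(1, 1000):
    ar[t] = 0.1 * ar[t - 1] + rng.standard_normal()  # small, persistent dependence
print(rho_statistic(iid), rho_statistic(ar))  # |stat| notably larger for the AR series
```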
Molecular Symmetry
Symmetry helps us understand molecular structure, some chemical properties, and characteristics of physical properties (spectroscopy). It is used with group theory to predict vibrational spectra for the identification of molecular shape, and as a tool for understanding electronic structure and bonding. Symmetrical implies the species possesses a number of indistinguishable configurations.

Group theory: the mathematical treatment of symmetry.
- Symmetry operation: an operation performed on an object which leaves it in a configuration that is indistinguishable from, and superimposable on, the original configuration.
- Symmetry elements: the points, lines, or planes with respect to which a symmetry operation is carried out.

| Element | Operation | Symbol |
|---|---|---|
| Identity | Identity | E |
| Symmetry plane | Reflection in the plane | σ |
| Inversion center | Inversion of a point (x, y, z) to (−x, −y, −z) | i |
| Proper axis | Rotation by (360/n)° | Cn |
| Improper axis | 1. Rotation by (360/n)°; 2. Reflection in the plane perpendicular to the rotation axis | Sn |

Proper axes of rotation (Cn): rotation with respect to a line (axis of rotation).
- Cn is a rotation of (360/n)°.
- C2 = 180° rotation, C3 = 120° rotation, C4 = 90° rotation, C5 = 72° rotation, C6 = 60° rotation, ...
- Each rotation brings you to a state indistinguishable from the original.

H2O has a C2 axis, but rotation by 90° about that same axis does not give back the identical molecule; therefore H2O does NOT possess a C4 symmetry axis. XeF4 is square planar: it has four different C2 axes and a C4 axis out of the page. The C4 axis is called the principal axis because it has the largest n; by convention, the principal axis is in the z-direction.

Reflection through a plane of symmetry (mirror plane): if reflection of all parts of a molecule through a plane produces an indistinguishable configuration, the symmetry element is called a mirror plane or plane of symmetry.
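Symmetry operations act on coordinates as 3×3 matrices, which makes the C2-versus-C4 discussion above easy to verify numerically. A sketch with hypothetical H-atom coordinates for a water-like molecule (O near the origin on the z-axis, for illustration):

```python
import numpy as np

def Cn(n):
    """Proper rotation by 360/n degrees about the z-axis (the conventional
    principal-axis direction), as a 3x3 matrix."""
    a = 2 * np.pi / n
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

SIGMA_XZ = np.diag([1.0, -1.0, 1.0])   # sigma: reflection through the xz-plane
INVERSION = -np.eye(3)                  # i: (x, y, z) -> (-x, -y, -z)

def is_symmetry_op(coords, op, tol=1e-8):
    """True if applying op permutes the (identical) atoms among themselves,
    i.e. gives an indistinguishable, superimposable configuration."""
    new = coords @ op.T
    return all(any(np.allclose(p, q, atol=tol) for q in coords) for p in new)

# Hypothetical H-atom positions, symmetric about the z-axis.
h_atoms = np.array([[0.76, 0.0, -0.59],
                    [-0.76, 0.0, -0.59]])
print(is_symmetry_op(h_atoms, Cn(2)))   # True:  C2 swaps the two H atoms
print(is_symmetry_op(h_atoms, Cn(4)))   # False: no C4 axis, as noted above
```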