Wavelet Transforms in Time Series Analysis


Andrew Tangborn
Global Modeling and Assimilation Office, Goddard Space Flight Center
[email protected], 301-614-6178

Outline

1. Fourier Transforms
2. What is a Wavelet?
3. Continuous and Discrete Wavelet Transforms
4. Construction of Wavelets through Dilation Equations
5. Example: Haar Wavelets
6. Daubechies Compactly Supported Wavelets
7. Data Compression, Efficient Representation
8. Soft Thresholding
9. Continuous Transform: the Morlet Wavelet
10. Applications to Approximating Error Correlations

Fourier Transforms

• A good way to understand how wavelets work, and why they are useful, is to compare them with Fourier transforms.
• The Fourier transform converts a time series into the frequency domain. The continuous transform of a function f(x) is

    \hat{f}(\omega) = \int_{-\infty}^{\infty} f(x) \, e^{-i\omega x} \, dx

  where \hat{f}(\omega) represents the strength of the function at frequency \omega, and \omega is a continuous variable.
• The discrete transform of a function f(x) is

    \hat{f}(k) = \int_{-\infty}^{\infty} f(x) \, e^{-ikx} \, dx

  where k takes discrete values. For discrete data f(x_j), j = 1, ..., N,

    \hat{f}_k = \sum_{j=1}^{N} f_j \, e^{-i 2\pi (k-1)(j-1)/N}

• The Fast Fourier Transform (FFT) takes O(N log N) operations.

Example: Single-Frequency Signal, f(t) = sin(2πt)

[Figures: the signal and its Fourier transform (real and imaginary coefficients).]

• The discrete Fourier transform (DFT) locates the single frequency and its reflection.
• The complex expansion is not exactly a sine series, which causes some spread in the coefficients.

Two-Frequency Signal, f(t) = sin(2πt) + 2 sin(4πt)

[Figures: the signal and its Fourier transform (real and imaginary coefficients).]

• Both signal frequencies are represented by the Fourier coefficients.

Intermittent Signal

    f(t) = sin(2πt) + 2 sin(4πt)   when 0.37 < t < 0.55
    f(t) = sin(2πt)                otherwise

[Figures: the signal and its Fourier transform (real and imaginary coefficients).]

• The first frequency is found, but the higher, intermittent frequency appears as many frequencies and is not clearly identified.
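The point of the intermittent example can be reproduced with a short NumPy sketch (an illustration added here, not part of the original slides; the sampling rate and array names are arbitrary choices):

```python
import numpy as np

# Sample the intermittent signal on t in [0, 10)
N = 1024
t = np.linspace(0.0, 10.0, N, endpoint=False)
f = np.sin(2 * np.pi * t)
burst = (t > 0.37) & (t < 0.55)              # interval quoted on the slide
f[burst] += 2 * np.sin(4 * np.pi * t[burst])

# Discrete Fourier transform; the FFT costs O(N log N)
fhat = np.fft.fft(f) / N
freqs = np.fft.fftfreq(N, d=t[1] - t[0])

# The persistent frequency-1 line is sharp; the intermittent component near
# frequency 2 is smeared over many neighboring coefficients.
peak_1 = np.abs(fhat[np.argmin(np.abs(freqs - 1.0))])
near_2 = np.abs(fhat[(freqs > 1.5) & (freqs < 2.5)])
print(f"|coefficient| at frequency 1: {peak_1:.3f}")
print(f"largest |coefficient| near frequency 2: {near_2.max():.4f} (no sharp line)")
```

The Fourier coefficients say that energy exists near frequency 2, but not when it occurs; that limitation motivates the wavelet transform introduced next.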
What is a Wavelet?

• A function that is localized in time and frequency, generally with a zero mean.
• It is also a tool for decomposing a signal by location and frequency.
• Consider the Fourier transform: a signal is decomposed only into its frequency components, and no information is extracted about location in time.
• What happens when we apply a Fourier transform to a signal whose frequency varies in time?

[Figure: a signal whose frequency changes over time.]

  The Fourier transform will give some information on which frequencies are present, but no information on when they occur.

Schematic Representation of Decomposition

• Original signal (time axis only): the signal is represented by an amplitude that changes in time; there is no explicit information on frequency.
• Fourier transform (frequency axis only): the representation depends only on frequency; it gives no information on time.
• Wavelet transform (time and frequency axes): the representation contains information on both the time location and the frequency content of the signal.

Some Typical (But Not Required) Properties of Wavelets

• Orthogonality: both the wavelet transform matrix and the wavelet functions can be orthogonal. Useful for creating basis functions for computation.
• Zero mean (admissibility condition): forces the wavelet functions to wiggle (oscillate between positive and negative values).
• Compact support: efficient at representing localized data and functions.

How Are Wavelets Defined?

• Families of basis functions built from a given general "mother wavelet" ψ(x) through
  (1) dilations: ψ(x) → ψ(2x), and
  (2) translations: ψ(x) → ψ(x + 1).
• How do we use ψ(x)? The general form is

    \psi_k^j(x) = 2^{j/2} \, \psi(2^j x - k)

  where j is the dilation index, k is the translation index, and the factor 2^{j/2} is needed for normalization.
• How do we get ψ(x)? From dilation equations.

Construction of Wavelets

• We consider here only orthogonal, compactly supported wavelets. Orthogonality means

    \int_{-\infty}^{\infty} \psi_k^j(x) \, \psi_{k'}^{j'}(x) \, dx = \delta_{kk'} \, \delta_{jj'}

• Wavelets are constructed from scaling functions \phi(x), which come from the dilation equation

    \phi(x) = \sum_k c_k \, \phi(2x - k)

  where the c_k are a finite set of filter coefficients.
• General features:
  - Fewer non-zero c_k's give more compact but less smooth functions.
  - More non-zero c_k's give less compact but smoother functions.

Restrictions on the Filter Coefficients

• Normality:

    \int_{-\infty}^{\infty} \phi(x) \, dx = 1

  Substituting the dilation equation and changing variables to \xi = 2x - k,

    \int_{-\infty}^{\infty} \sum_k c_k \phi(2x - k) \, dx
      = \sum_k c_k \int_{-\infty}^{\infty} \phi(2x - k) \, dx
      = \sum_k \frac{c_k}{2} \int_{-\infty}^{\infty} \phi(\xi) \, d\xi
      = \sum_k \frac{c_k}{2} = 1

  so the coefficients must satisfy

    \sum_k c_k = 2

• Simple examples:
  - The smallest number of coefficients is one: we just get \phi(x) = \delta(x), with zero support.
  - Haar scaling function: c_0 = 1, c_1 = 1.

Haar Scaling Function

• The scaling function equation is

    \phi(x) = \phi(2x) + \phi(2x - 1)

  The only function that satisfies this is

    \phi(x) = 1 for 0 ≤ x ≤ 1,   \phi(x) = 0 otherwise.

[Figure: the Haar scaling function, a unit box on [0, 1].]

• Translation and dilation of \phi(x):

    \phi(2x) = 1 for 0 ≤ x ≤ 1/2, and 0 elsewhere
    \phi(2x - 1) = 1 for 1/2 ≤ x ≤ 1, and 0 elsewhere

  so the sum of the two functions reproduces the unit box.

[Figure: \phi(2x) and \phi(2x - 1), two half-width boxes whose sum is \phi(x).]

Haar Wavelet

• Wavelets are constructed by taking differences of scaling functions:

    \psi(x) = \sum_k (-1)^k \, c_{1-k} \, \phi(2x - k)

  The differencing is caused by the factor (-1)^k.

[Figure: the basic Haar wavelet, equal to +1 on the first half of the unit interval and -1 on the second half.]

• The family comes from dilating and translating:

    \psi_k^j(x) = 2^{j/2} \, \psi(2^j x - k)

[Figure: the two j = 1 Haar wavelets.]

Orthogonality of Haar Wavelets

• Translation ⇒ no overlap between wavelets at the same scale.
• Dilation ⇒ cancellation between wavelets at different scales.

[Figures: translated and dilated Haar wavelets illustrating orthogonality.]

Daubechies' Compactly Supported Wavelets

• Wavelets can be created using larger numbers of filter coefficients while keeping all the properties of orthogonal wavelets. For example, D4 has the coefficients

    c_k = \left\{ \tfrac{1}{4}(1 + \sqrt{3}), \; \tfrac{1}{4}(3 + \sqrt{3}), \; \tfrac{1}{4}(3 - \sqrt{3}), \; \tfrac{1}{4}(1 - \sqrt{3}) \right\}

  (Reference: Daubechies, 1988.)

[Figures: the D4, D12, and D20 wavelets.]
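As a quick numerical check (an added illustration, not part of the slides), the D4 coefficients listed above satisfy the normalization derived earlier, \sum_k c_k = 2, as well as the orthogonality condition \sum_k c_k c_{k+2m} = 2\delta_{m0} required of orthogonal scaling functions under this normalization:

```python
import numpy as np

# Daubechies D4 filter coefficients (Daubechies, 1988)
s3 = np.sqrt(3.0)
c = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / 4.0

# Normality: the filter coefficients must sum to 2
print(np.isclose(c.sum(), 2.0))               # True

# Orthogonality: sum_k c_k c_{k+2m} = 2 * delta_{m,0}
print(np.isclose(np.dot(c, c), 2.0))          # m = 0 case: True
print(np.isclose(np.dot(c[:2], c[2:]), 0.0))  # m = 1 case: True
```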
Continuous and Discrete Wavelet Transforms

• Continuous wavelet transform:

    (T^{wav} f)(a, b) = |a|^{-1/2} \int dt \, f(t) \, \psi\!\left(\frac{t - b}{a}\right)

  where a is the dilation (scale) parameter and b is the translation parameter.
• Discrete wavelet transform:

    T_{m,n}(f) = a_0^{-m/2} \int dt \, f(t) \, \psi(a_0^{-m} t - n b_0)

  where m is the dilation parameter, n is the translation parameter, and a_0, b_0 depend on the wavelet used.

Fast Wavelet Transform

(Reference: S. Mallat, 1989)

• Uses the discrete data

    [ f_0  f_1  f_2  f_3  f_4  f_5  f_6  f_7 ]

• Pyramid algorithm: O(N) operations!
• Start at the finest scale and calculate differences and averages; then use the averages at the next coarser scale to get a new set of differences (b_{j,k}) and averages (a_{j,k}), and repeat.

[Figure: pyramid of averages a_{j,k} and differences b_{j,k}, built from the data f_1, ..., f_8 at the finest scale (j = 2) up to a_{0,0} and b_{0,0} at the coarsest scale.]

• The coefficients are simply the differences (b_{j,k}) at every scale, plus the average at the coarsest scale (a_{0,0}):

    [ a_{0,0}  b_{0,0}  b_{1,0}  b_{1,1}  b_{2,0}  b_{2,1}  b_{2,2}  b_{2,3} ]

• For the Haar wavelet:

    a_{j-1,k} = c_0 \, a_{j,2k} + c_1 \, a_{j,2k+1}
    b_{j-1,k} = c_0 \, a_{j,2k} - c_1 \, a_{j,2k+1}

• The differences are the coefficients at each scale; the averages are used for the next coarser scale.
• How do we store all this? (See, e.g., the Numerical Recipes algorithm.)

Intermittent Signal: Wavelet Transform

[Figure: the intermittent signal and its wavelet coefficients, arranged by scale j.]

• The higher-frequency intermittent signal shows up in the j = 5, 6 scales, and only in the middle portion of the domain.
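Here is a minimal NumPy sketch of the Haar pyramid algorithm (an added illustration, not the Numerical Recipes code; it uses the orthonormal filter c_0 = c_1 = 1/\sqrt{2} rather than the unnormalized averages written above, so the transform preserves the signal's energy):

```python
import numpy as np

def haar_pyramid(f):
    """Fast Haar wavelet transform by the pyramid algorithm, O(N) for N = 2^p.

    Returns coefficients ordered [a00, b00, b10, b11, b20, b21, ...]:
    the coarsest average first, then differences from coarse to fine.
    """
    a = np.asarray(f, dtype=float)
    assert a.size > 0 and (a.size & (a.size - 1)) == 0, "length must be a power of two"
    c0 = c1 = 1.0 / np.sqrt(2.0)            # orthonormal Haar filter coefficients
    details = []
    while a.size > 1:
        avg = c0 * a[0::2] + c1 * a[1::2]   # a_{j-1,k} = c0 a_{j,2k} + c1 a_{j,2k+1}
        dif = c0 * a[0::2] - c1 * a[1::2]   # b_{j-1,k} = c0 a_{j,2k} - c1 a_{j,2k+1}
        details.insert(0, dif)              # keep differences; coarser scales go first
        a = avg                             # averages feed the next coarser scale
    return np.concatenate([a] + details)

# Example with the eight data values f_0, ..., f_7 from the slide
f = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
print(haar_pyramid(f))    # [a00, b00, b10, b11, b20, b21, b22, b23]
```

Applying the same transform to the intermittent signal concentrates the burst's energy in a handful of fine-scale coefficients located in the middle of the domain, which is exactly the behavior shown in the figure above.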
Recommended publications
  • Kalman and Particle Filtering
    Abstract: The Kalman and Particle filters are algorithms that recursively update an estimate of the state and find the innovations driving a stochastic process given a sequence of observations. The Kalman filter accomplishes this goal by linear projections, while the Particle filter does so by a sequential Monte Carlo method. With the state estimates, we can forecast and smooth the stochastic process. With the innovations, we can estimate the parameters of the model. The article discusses how to set a dynamic model in a state-space form, derives the Kalman and Particle filters, and explains how to use them for estimation. Since both filters start with a state-space representation of the stochastic processes of interest, section 1 presents the state-space form of a dynamic model. Then, section 2 introduces the Kalman filter and section 3 develops the Particle filter. For extended expositions of this material, see Doucet, de Freitas, and Gordon (2001), Durbin and Koopman (2001), and Ljungqvist and Sargent (2004).
    1. The state-space representation of a dynamic model. A large class of dynamic models can be represented by a state-space form:

      X_{t+1} = \varphi(X_t, W_{t+1}; \gamma)   (1)
      Y_t = g(X_t, V_t; \gamma)                 (2)

    This representation handles a stochastic process by finding three objects: a vector that describes the position of the system (a state, X_t \in X \subset R^l) and two functions, one mapping the state today into the state tomorrow (the transition equation, (1)) and one mapping the state into observables, Y_t (the measurement equation, (2)).
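    As a concrete illustration (not code from the article), here is a minimal Kalman filter for a linear-Gaussian special case of the state-space form above, x_{t+1} = A x_t + w_t and y_t = C x_t + v_t with w_t ~ N(0, Q) and v_t ~ N(0, R); the function and variable names are choices made for this sketch.

```python
import numpy as np

def kalman_filter(ys, A, C, Q, R, x0, P0):
    """Linear-Gaussian Kalman filter: x_{t+1} = A x_t + w_t,  y_t = C x_t + v_t."""
    x, P = x0, P0
    states, innovations = [], []
    for y in ys:
        # Predict the state forward one step
        x = A @ x
        P = A @ P @ A.T + Q
        # Innovation (one-step-ahead forecast error) and its covariance
        e = y - C @ x
        S = C @ P @ C.T + R
        # Update by linear projection (Kalman gain)
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ e
        P = P - K @ C @ P
        states.append(x)
        innovations.append(e)
    return np.array(states), np.array(innovations)

# Toy example: a scalar AR(1) state observed with noise
rng = np.random.default_rng(0)
A, C = np.array([[0.9]]), np.array([[1.0]])
Q, R = np.array([[0.1]]), np.array([[0.5]])
x_true, ys = 0.0, []
for _ in range(200):
    x_true = 0.9 * x_true + rng.normal(0.0, np.sqrt(0.1))
    ys.append(np.array([x_true + rng.normal(0.0, np.sqrt(0.5))]))
xs, es = kalman_filter(ys, A, C, Q, R, x0=np.zeros(1), P0=np.eye(1))
```

    The filtered states xs can be used for forecasting and smoothing, and the innovations es feed the likelihood used for parameter estimation, as the article describes.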
  • Lecture 19: Wavelet Compression of Time Series and Images
    Lecture 19: Wavelet compression of time series and images. © Christopher S. Bretherton, Winter 2014. Ref: Matlab Wavelet Toolbox help.
    19.1 Wavelet compression of a time series. The last section of wavelet leleccum notoolbox.m demonstrates the use of wavelet compression on a time series. The idea is to keep the wavelet coefficients of largest amplitude and zero out the small ones.
    19.2 Wavelet analysis/compression of an image. Wavelet analysis is easily extended to two-dimensional images or datasets (data matrices) by first doing a wavelet transform of each column of the matrix, then transforming each row of the result (see wavelet image). The wavelet coefficient matrix has the highest-level (largest-scale) averages in the first rows/columns, then successively smaller detail scales further down the rows/columns. The example also shows fine results with 50-fold data compression.
    19.3 Continuous Wavelet Transform (CWT). Given a continuous signal u(t) and an analyzing wavelet \psi(x), the CWT has the form

      W(\lambda, t) = \lambda^{-1/2} \int_{-\infty}^{\infty} \psi\!\left(\frac{s - t}{\lambda}\right) u(s) \, ds   (19.3.1)

    Here \lambda, the scale, is a continuous variable. We insist that \psi have mean zero and that its square integrates to 1. The continuous Haar wavelet is defined:

      \psi(t) = 1 for 0 < t < 1/2,   \psi(t) = -1 for 1/2 < t < 1,   \psi(t) = 0 otherwise   (19.3.2)

    W(\lambda, t) is proportional to the difference of running means of u over successive intervals of length \lambda/2. In practice, for a discrete time series, the integral is evaluated as a Riemann sum using the Matlab wavelet toolbox function cwt.
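    The thresholding idea in 19.1 (keep the largest-amplitude wavelet coefficients and zero out the small ones) can be sketched as follows. This example uses the PyWavelets package in place of the Matlab Wavelet Toolbox referenced in the lecture, and the toy signal and 5% keep fraction are arbitrary choices.

```python
import numpy as np
import pywt  # PyWavelets, standing in for the Matlab Wavelet Toolbox

def wavelet_compress(x, wavelet="haar", keep=0.05):
    """Keep the largest `keep` fraction of wavelet coefficients, zero the rest."""
    coeffs = pywt.wavedec(x, wavelet)
    flat = np.concatenate(coeffs)
    thresh = np.quantile(np.abs(flat), 1.0 - keep)      # amplitude cutoff
    kept = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]
    return pywt.waverec(kept, wavelet)

# Toy signal: a smooth oscillation plus a short localized burst
t = np.linspace(0.0, 1.0, 1024)
x = np.sin(2 * np.pi * 5 * t) + (np.abs(t - 0.5) < 0.02).astype(float)
x_hat = wavelet_compress(x, keep=0.05)                  # roughly 20-fold compression
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```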
  • Adaptive Wavelet Clustering for Highly Noisy Data
    Adaptive Wavelet Clustering for Highly Noisy Data. Zengjian Chen (Department of Computer Science, Huazhong University of Science and Technology, Wuhan, China), Jiayi Liu (Department of Computer Science, University of Massachusetts Amherst, Massachusetts, USA), Yihe Deng (Department of Mathematics, University of California, Los Angeles, California, USA), Kun He* (Department of Computer Science, Huazhong University of Science and Technology, Wuhan, China), John E. Hopcroft (Department of Computer Science, Cornell University, Ithaca, NY, USA).
    Abstract: In this paper we make progress on the unsupervised task of mining arbitrarily shaped clusters in highly noisy datasets, which is a task present in many real-world applications. Based on the fundamental work that first applies a wavelet transform to data clustering, we propose an adaptive clustering algorithm, denoted as AdaWave, which exhibits favorable characteristics for clustering. By a self-adaptive thresholding technique, AdaWave is parameter free and can handle data in various situations. It is deterministic, fast in linear time, order-insensitive, shape-insensitive, robust to highly noisy data, and requires no pre-knowledge on data models.
    Based on the pioneering work of Sheikholeslami that applies wavelet transform, originally used for signal processing, on spatial data clustering [12], we propose a new wavelet-based algorithm called AdaWave that can adaptively and effectively uncover clusters in highly noisy data. To tackle general applications, we assume that the clusters in a dataset do not follow any specific distribution and can be arbitrarily shaped. To show the hardness of the clustering task, we first design ...
  • A Fourier-Wavelet Monte Carlo Method for Fractal Random Fields
    JOURNAL OF COMPUTATIONAL PHYSICS 132, 384–408 (1997), Article No. CP965647. A Fourier-Wavelet Monte Carlo Method for Fractal Random Fields. Frank W. Elliott Jr., David J. Horntrop, and Andrew J. Majda, Courant Institute of Mathematical Sciences, 251 Mercer Street, New York, New York 10012. Received August 2, 1996; revised December 23, 1996.
    A new hierarchical method for the Monte Carlo simulation of random fields called the Fourier-wavelet method is developed and applied to isotropic Gaussian random fields with power law spectral density functions. This technique is based upon the orthogonal decomposition of the Fourier stochastic integral representation of the field using wavelets. The Meyer wavelet is used here because its rapid decay properties allow for a very compact representation of the field. The Fourier-wavelet method is shown to be straightforward to implement, given the nature of the necessary precomputations and the run-time calculations, and yields comparable results with scaling behavior over as many decades as the physical space multiwavelet methods developed recently by two of the authors.
    The fractal random fields considered satisfy

      \langle [v(x) - v(y)]^2 \rangle = C_H |x - y|^{2H},   (1.1)

    where 0 < H < 1 is the Hurst exponent and \langle \cdot \rangle denotes the expected value. Here we develop a new Monte Carlo method based upon a wavelet expansion of the Fourier space representation of the fractal random fields in (1.1). This method is capable of generating a velocity field with the Kolmogoroff spectrum (H = 1/3 in (1.1)) over many (10 to 15) decades of scaling behavior comparable to the physical space multiwavelet methods.
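    For intuition only (this is a plain Fourier spectral synthesis, not the paper's Fourier-wavelet method, and the grid size and Hurst exponent are arbitrary choices), a 1D Gaussian field approximately consistent with the power-law structure function (1.1) can be generated by giving random Fourier modes amplitudes proportional to |k|^{-(2H+1)/2}, the spectral slope associated with Hurst exponent H, and inverting the FFT.

```python
import numpy as np

def fractal_field_1d(n, H, rng=None):
    """Approximate 1D Gaussian fractal field via plain Fourier spectral synthesis.

    Random Fourier modes get amplitudes |k|^{-(2H+1)/2}, the power-law spectrum
    associated with a structure function ~ |x - y|^{2H} (Hurst exponent H).
    """
    rng = rng or np.random.default_rng()
    k = np.fft.rfftfreq(n, d=1.0 / n)            # integer wavenumbers 0, 1, ..., n/2
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (-(2.0 * H + 1.0) / 2.0)  # power-law amplitudes (skip k = 0)
    phases = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
    field = np.fft.irfft(amp * phases, n=n)
    return field - field[0]                      # pin the field to zero at x = 0

v = fractal_field_1d(4096, H=1.0 / 3.0)          # H = 1/3 gives Kolmogorov-type scaling
```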
  • Alternative Tests for Time Series Dependence Based on Autocorrelation Coefficients
    Alternative Tests for Time Series Dependence Based on Autocorrelation Coefficients. Richard M. Levich and Rosario C. Rizzo.* Current Draft: December 1998.
    Abstract: When autocorrelation is small, existing statistical techniques may not be powerful enough to reject the hypothesis that a series is free of autocorrelation. We propose two new and simple statistical tests (RHO and PHI) based on the unweighted sum of autocorrelation and partial autocorrelation coefficients. We analyze a set of simulated data to show the higher power of RHO and PHI in comparison to conventional tests for autocorrelation, especially in the presence of small but persistent autocorrelation. We show an application of our tests to data on currency futures to demonstrate their practical use. Finally, we indicate how our methodology could be used for a new class of time series models (the Generalized Autoregressive, or GAR models) that take into account the presence of small but persistent autocorrelation.
    (Footnote: An earlier version of this paper was presented at the Symposium on Global Integration and Competition, sponsored by the Center for Japan-U.S. Business and Economic Studies, Stern School of Business, New York University, March 27-28, 1997. We thank Paul Samuelson and the other participants at that conference for useful comments. * Stern School of Business, New York University and Research Department, Bank of Italy, respectively.)
    1. Introduction. Economic time series are often characterized by positive ...
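    To illustrate the construction (only a sketch of the idea; the exact standardization and critical values are in the paper, and the lag count m = 10 is an arbitrary choice), a RHO-type statistic is the unweighted sum of the first m sample autocorrelations:

```python
import numpy as np

def sample_acf(x, m):
    """Sample autocorrelations r_1, ..., r_m of a series x."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, m + 1)])

def rho_statistic(x, m=10):
    """Unweighted sum of the first m autocorrelation coefficients (RHO-type idea)."""
    return sample_acf(x, m).sum()

# Compare white noise with a series whose positive autocorrelation is spread
# over several lags (induced here by a short moving average).
rng = np.random.default_rng(1)
noise = rng.normal(size=2000)
smoothed = np.convolve(noise, np.full(5, 0.3), mode="same")
print(rho_statistic(noise, m=10), rho_statistic(smoothed, m=10))
```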
  • Time Series: Co-Integration
    Time Series: Co-integration. See also: Series: Economic Forecasting; Time Series: General; Time Series: Nonstationary Distributions and Unit Roots; Time Series: Seasonal Adjustment. N. H. Chan.
    Time Series: Cycles. Time series data in economics and other fields of social science often exhibit cyclical behavior. For example, aggregate retail sales are high in November and December and follow a seasonal cycle; voter registrations are high before each presidential election and follow an election cycle; and aggregate macroeconomic activity falls into recession every six to eight years and follows a business cycle. In spite of this cyclicality, these series are not perfectly predictable, and the cycles are not strictly periodic.
  • A Practical Guide to Wavelet Analysis
    A Practical Guide to Wavelet Analysis. Christopher Torrence and Gilbert P. Compo, Program in Atmospheric and Oceanic Sciences, University of Colorado, Boulder, Colorado.
    ABSTRACT: A practical step-by-step guide to wavelet analysis is given, with examples taken from time series of the El Niño-Southern Oscillation (ENSO). The guide includes a comparison to the windowed Fourier transform, the choice of an appropriate wavelet basis function, edge effects due to finite-length time series, and the relationship between wavelet scale and Fourier frequency. New statistical significance tests for wavelet power spectra are developed by deriving theoretical wavelet spectra for white and red noise processes and using these to establish significance levels and confidence intervals. It is shown that smoothing in time or scale can be used to increase the confidence of the wavelet spectrum. Empirical formulas are given for the effect of smoothing on significance levels and confidence intervals. Extensions to wavelet analysis such as filtering, the power Hovmöller, cross-wavelet spectra, and coherence are described. The statistical significance tests are used to give a quantitative measure of changes in ENSO variance on interdecadal timescales. Using new datasets that extend back to 1871, the Niño3 sea surface temperature and the Southern Oscillation index show significantly higher power during 1880–1920 and 1960–90, and lower power during 1920–60, as well as a possible 15-yr modulation of variance. The power Hovmöller of sea level pressure shows significant variations in 2–8-yr wavelet power in both longitude and time.
    1. Introduction. Wavelet analysis is becoming a common tool for analyzing localized variations of power within a time series. ... A complete description of geophysical applications can be found in Foufoula-Georgiou and Kumar (1995), while a theoretical treatment of wavelet analysis is given in Daubechies (1992).
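    A minimal example of the kind of analysis the guide walks through (this uses the PyWavelets package rather than the authors' own code, and the toy signal and scale range are arbitrary choices): compute a Morlet continuous wavelet transform and form the wavelet power spectrum.

```python
import numpy as np
import pywt  # PyWavelets, an assumed tool; Torrence and Compo distribute their own code

# Toy ENSO-like signal: a 4-"year" oscillation whose amplitude grows with time,
# sampled quarterly (dt = 0.25 years)
dt = 0.25
t = np.arange(0.0, 120.0, dt)
x = (0.5 + t / t.max()) * np.sin(2 * np.pi * t / 4.0)

scales = np.arange(1, 128)
coefs, freqs = pywt.cwt(x, scales, "morl", sampling_period=dt)
power = np.abs(coefs) ** 2                 # wavelet power as a function of scale and time
print(power.shape)                         # (number of scales, number of times)
```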
  • An Overview of Wavelet Transform Concepts and Applications
    An overview of wavelet transform concepts and applications. Christopher Liner, University of Houston. February 26, 2010.
    Abstract: The continuous wavelet transform utilizing a complex Morlet analyzing wavelet has a close connection to the Fourier transform and is a powerful analysis tool for decomposing broadband wavefield data. A wide range of seismic wavelet applications have been reported over the last three decades, and the free Seismic Unix processing system now contains a code (succwt) based on the work reported here.
    Introduction. The continuous wavelet transform (CWT) is one method of investigating the time-frequency details of data whose spectral content varies with time (non-stationary time series). Motivation for the CWT can be found in Goupillaud et al. [12], along with a discussion of its relationship to the Fourier and Gabor transforms. As a brief overview, we note that French geophysicist J. Morlet worked with non-stationary time series in the late 1970's to find an alternative to the short-time Fourier transform (STFT). The STFT was known to have poor localization in both time and frequency, although it was a first step beyond the standard Fourier transform in the analysis of such data. Morlet's original wavelet transform idea was developed in collaboration with theoretical physicist A. Grossmann, whose contributions included an exact inversion formula. A series of fundamental papers flowed from this collaboration [16, 12, 13], and connections were soon recognized between Morlet's wavelet transform and earlier methods, including harmonic analysis, scale-space representations, and conjugated quadrature filters. For further details, the interested reader is referred to Daubechies' [7] account of the early history of the wavelet transform.
  • A Low Bit Rate Audio Codec Using Wavelet Transform
    Navpreet Singh, Mandeep Kaur, Rajveer Kaur / International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, www.ijera.com, Vol. 3, Issue 4, Jul-Aug 2013, pp. 2222-2228.
    An Enhanced Low Bit Rate Audio Codec Using Discrete Wavelet Transform. Navpreet Singh (1), Mandeep Kaur (2), Rajveer Kaur (3). (1, 2) M.Tech. students, Department of ECE, Guru Kashi University, Talwandi Sabo (BTI.), Punjab, India; (3) Asst. Prof., Department of ECE, Guru Kashi University, Talwandi Sabo (BTI.), Punjab, India.
    Abstract: Audio coding is the technology to represent audio in digital form with as few bits as possible while maintaining the intelligibility and quality required for particular application. Interest in audio coding is motivated by the evolution to digital communications and the requirement to minimize bit rate, and hence conserve bandwidth. There is always a tradeoff between lowering the bit rate and maintaining the delivered audio quality and intelligibility. The wavelet transform has proven to be a valuable tool in many application areas for analysis of nonstationary signals such as image and audio signals. In this paper a low bit rate audio codec algorithm using wavelet transform has been ...
    ... information and perceptually irrelevant signal components can be separated and later removed. This class includes techniques such as subband coding, transform coding, critical band analysis, and masking effects. The second class takes advantage of the statistical redundancy in audio signal and applies some form of digital encoding. Examples of this class include entropy coding in lossless compression and scalar/vector quantization in lossy compression [1]. Digital audio compression allows the efficient storage and transmission of audio data. The various audio compression techniques offer different levels of complexity, compressed audio quality, and amount of data compression.
  • Generating Time Series with Diverse and Controllable Characteristics
    GRATIS: GeneRAting TIme Series with diverse and controllable characteristics. Yanfei Kang, Rob J Hyndman, and Feng Li.
    Abstract: The explosion of time series data in recent years has brought a flourish of new time series analysis methods, for forecasting, clustering, classification and other tasks. The evaluation of these new methods requires either collecting or simulating a diverse set of time series benchmarking data to enable reliable comparisons against alternative approaches. We propose GeneRAting TIme Series with diverse and controllable characteristics, named GRATIS, with the use of mixture autoregressive (MAR) models. We simulate sets of time series using MAR models and investigate the diversity and coverage of the generated time series in a time series feature space. By tuning the parameters of the MAR models, GRATIS is also able to efficiently generate new time series with controllable features. In general, as a costless surrogate to the traditional data collection approach, GRATIS can be used as an evaluation tool for tasks such as time series forecasting and classification. We illustrate the usefulness of our time series generation process through a time series forecasting application. Keywords: Time series features; Time series generation; Mixture autoregressive models; Time series forecasting; Simulation.
    1. Introduction. With the widespread collection of time series data via scanners, monitors and other automated data collection devices, there has been an explosion of time series analysis methods developed in the past decade or two. Paradoxically, the large datasets are often also relatively homogeneous in the industry domain, which limits their use for evaluation of general time series analysis methods (Keogh and Kasetty, 2003; Muñoz et al., 2018; Kang et al., 2017).
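    To illustrate the generating mechanism (a simplified sketch, not the GRATIS implementation; the two-component parameter values are arbitrary choices), a mixture autoregressive series can be simulated by selecting one AR(1) component at random at each time step:

```python
import numpy as np

def simulate_mar(n, weights, phis, sigmas, rng=None):
    """Simulate a simple mixture autoregressive (MAR) series: at each step one
    AR(1) component is chosen with the given mixing weights."""
    rng = rng or np.random.default_rng()
    weights = np.asarray(weights, dtype=float)
    x = np.zeros(n)
    for t in range(1, n):
        k = rng.choice(len(weights), p=weights)            # pick a component
        x[t] = phis[k] * x[t - 1] + rng.normal(0.0, sigmas[k])
    return x

# Two components: one smooth and persistent, one noisy and mean-reverting
x = simulate_mar(500, weights=[0.7, 0.3], phis=[0.95, 0.1], sigmas=[0.5, 2.0])
```

    Tuning the weights, autoregressive coefficients, and noise scales moves the simulated series around the feature space, which is the idea GRATIS exploits for controllable generation.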
  • Fourier, Wavelet and Monte Carlo Methods in Computational Finance
    Fourier, Wavelet and Monte Carlo Methods in Computational Finance. Kees Oosterlee (CWI, Amsterdam, and Delft University of Technology, the Netherlands). AANMPDE-9-16, 7/7/2016. Joint work with Fang Fang, Marjon Ruijter, Luis Ortiz, Shashi Jain, Alvaro Leitao, Fei Cong, Qian Feng.
    Agenda: derivatives pricing and the Feynman-Kac Theorem; Fourier methods (basics of the COS method, basics of the SWIFT method); options with early-exercise features (COS method for Bermudan options); Monte Carlo method; BSDEs and the BCOS method (very briefly).
    Feynman-Kac Theorem. The linear partial differential equation

      \frac{\partial v(t,x)}{\partial t} + L v(t,x) + g(t,x) = 0, \qquad v(T,x) = h(x),

    with operator

      L v(t,x) = \mu(x) D v(t,x) + \tfrac{1}{2} \sigma^2(x) D^2 v(t,x),

    has the probabilistic representation

      v(t,x) = E\!\left[ \int_t^T g(s, X_s) \, ds + h(X_T) \right],

    where X_s is the solution to the forward SDE

      dX_s = \mu(X_s) \, ds + \sigma(X_s) \, d\omega_s, \qquad X_t = x.
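    As a small illustration of the Monte Carlo side (a sketch under assumed toy dynamics, not the COS, SWIFT, or BCOS methods from the talk), the Feynman-Kac representation can be estimated by simulating the forward SDE with an Euler-Maruyama scheme and averaging:

```python
import numpy as np

def feynman_kac_mc(x0, t, T, mu, sigma, g, h, n_paths=100_000, n_steps=200, rng=None):
    """Monte Carlo estimate of v(t,x) = E[ int_t^T g(s, X_s) ds + h(X_T) ],
    where dX_s = mu(X_s) ds + sigma(X_s) dW_s and X_t = x0 (Euler-Maruyama)."""
    rng = rng or np.random.default_rng(0)
    dt = (T - t) / n_steps
    X = np.full(n_paths, float(x0))
    running = np.zeros(n_paths)
    s = t
    for _ in range(n_steps):
        running += g(s, X) * dt                          # accumulate the source term
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X = X + mu(X) * dt + sigma(X) * dW               # Euler-Maruyama step
        s += dt
    return np.mean(running + h(X))

# Toy check: geometric Brownian motion with g = 0 and payoff h(x) = x,
# for which v(0, x0) = x0 * exp(rate * T)
rate, vol, T = 0.05, 0.2, 1.0
v = feynman_kac_mc(100.0, 0.0, T,
                   mu=lambda x: rate * x, sigma=lambda x: vol * x,
                   g=lambda s, x: 0.0 * x, h=lambda x: x)
print(v, 100.0 * np.exp(rate * T))
```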
  • Image Compression Techniques by Using Wavelet Transform
    Journal of Information Engineering and Applications, www.iiste.org, ISSN 2224-5782 (print), ISSN 2225-0506 (online), Vol 2, No. 5, 2012.
    Image Compression Techniques by using Wavelet Transform. V. V. Sunil Kumar (1), M. Indra Sena Reddy (2). (1) Dept. of CSE, PBR Visvodaya Institute of Tech & Science, Kavali, Nellore (Dt), AP, INDIA; (2) School of Computer Science & Engineering, RGM College of Engineering & Tech., Nandyal, A.P, India. E-mail of the corresponding author: [email protected]
    Abstract: This paper is concerned with a certain type of compression techniques by using wavelet transforms. Wavelets are used to characterize a complex pattern as a series of simple patterns and coefficients that, when multiplied and summed, reproduce the original pattern. The data compression schemes can be divided into lossless and lossy compression. Lossy compression generally provides much higher compression than lossless compression. Wavelets are a class of functions used to localize a given signal in both space and scaling domains. A MinImage was originally created to test one type of wavelet and the additional functionality was added to Image to support other wavelet types, and the EZW coding algorithm was implemented to achieve better compression. Keywords: Wavelet Transforms, Image Compression, Lossless Compression, Lossy Compression.
    1. Introduction. Digital images are widely used in computer applications. Uncompressed digital images require considerable storage capacity and transmission bandwidth. Efficient image compression solutions are becoming more critical with the recent growth of data intensive, multimedia based web applications.
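    As a compact illustration of the lossy pipeline described above (keeping only the largest detail coefficients; this sketch uses the PyWavelets package and a synthetic image rather than the paper's MinImage/EZW implementation, and the 2% keep fraction is an arbitrary choice):

```python
import numpy as np
import pywt  # PyWavelets; stands in for the paper's MinImage/EZW implementation

def compress_image(img, wavelet="haar", level=3, keep=0.02):
    """2D wavelet transform; keep the largest `keep` fraction of coefficients."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < thresh] = 0.0                      # lossy step: drop small details
    kept = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(kept, wavelet)

# Synthetic 256x256 test image: a smooth gradient with a bright square
img = np.add.outer(np.linspace(0.0, 1.0, 256), np.linspace(0.0, 1.0, 256))
img[96:160, 96:160] += 1.0
rec = compress_image(img, keep=0.02)                     # roughly 50-fold compression
print("relative error:", np.linalg.norm(img - rec) / np.linalg.norm(img))
```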