The Hilbert Transform and Empirical Mode Decomposition as Tools for Data Analysis

Susan Tolwinski
First-Year RTG Project
University of Arizona Program in Applied Mathematics
Advisor: Professor Flaschka
Spring 2007

Abstract

In this paper, I introduce the Hilbert transform and explain its usefulness in the context of signal processing. I also outline a recent development in signal processing methods called Empirical Mode Decomposition (EMD), which makes use of the Hilbert transform. Finally, I illustrate EMD by using the method to analyze temperature data from western Massachusetts.

1 Real Signals and the Hilbert Transform

1.1 Definition of the Hilbert Transform from Contour Integration

The Hilbert transform and its inverse relate the real and imaginary parts of a complex function defined on the real line. The relationship given by this operation is easily derived by applying Cauchy's Integral Theorem to a function f(z) which is analytic in the upper half-plane and which decays to zero at infinity. For a point z^* inside the contour depicted in Fig. 1, Cauchy's theorem tells us that

f(z^*) = \frac{1}{2\pi i} \oint_\Gamma \frac{f(z)}{z - z^*}\, dz    (1)

[Figure 1: The contour Γ for the integral in eqn. (2): the interval [−R, R] on the real axis, closed by the semicircle C_R of radius R in the upper half-plane.]

Writing z^* as the sum of its real and imaginary parts, and the integral as the sum of integrals along the semicircle of radius R and the real interval [−R, R], Cauchy's Theorem becomes

f(z^*) = \frac{1}{2\pi i} \int_{-R}^{R} \frac{f(z)}{(x - x^*) - i y^*}\, dx + \frac{1}{2\pi i} \int_{C_R} \frac{f(z)}{(x - x^*) + i(y - y^*)}\, dz    (2)

As R → ∞, the second term drops out (since f vanishes at infinity by assumption). Now, if we replace z^* with its conjugate in eqn. (2), the integral yields zero, since the pole created at \bar{z}^* is not contained within the contour. Thus we can add such a term to our integrand with impunity, so that

f(z^*) = \frac{1}{2\pi i} \int_{-\infty}^{\infty} \left[ \frac{f(z)}{(x - x^*) - i y^*} + \frac{f(z)}{(x - x^*) + i y^*} \right] dx    (3)

Finally, we rewrite f(z) as the sum of its real and imaginary components, f(x, y) = u(x, y) + i v(x, y), and take y^* = 0, since the aim is to relate u(x, y) and v(x, y) on the real line. For u(x, 0), v(x, 0) integrable and differentiable on R,

f(z^*) = \frac{1}{2\pi i} \int_{-\infty}^{\infty} \left[ \frac{u(x,y) + i v(x,y)}{(x - x^*) - i y^*} + \frac{u(x,y) + i v(x,y)}{(x - x^*) + i y^*} \right] dx    (4)

= \frac{1}{\pi i} \int_{-\infty}^{\infty} \frac{\bigl(u(x,y) + i v(x,y)\bigr)(x - x^*)}{(x - x^*)^2 + y^{*2}}\, dx    (5)

\Rightarrow f(x^*) = \frac{1}{\pi i}\, \mathrm{P.V.} \int_{-\infty}^{\infty} \frac{u(x,0) + i v(x,0)}{x - x^*}\, dx    (6)

u(x^*, 0) + i v(x^*, 0) = \frac{1}{\pi}\, \mathrm{P.V.} \int_{-\infty}^{\infty} \frac{v(x,0)}{x - x^*}\, dx - \frac{i}{\pi}\, \mathrm{P.V.} \int_{-\infty}^{\infty} \frac{u(x,0)}{x - x^*}\, dx    (7)

Identifying real and imaginary parts, this gives

u(x^*, 0) = \frac{1}{\pi}\, \mathrm{P.V.} \int_{-\infty}^{\infty} \frac{v(x,0)}{x - x^*}\, dx    (8)

v(x^*, 0) = -\frac{1}{\pi}\, \mathrm{P.V.} \int_{-\infty}^{\infty} \frac{u(x,0)}{x - x^*}\, dx    (9)

This last result shows that on the real line, one need only specify the real component of an analytic function to uniquely determine its imaginary part, and vice versa. Note that because of the singularity at x^*, the integral is defined in the sense of principal values. Hence the identity is well-defined for functions which are integrable and differentiable. We take a small leap of abstraction and, in the spirit of the preceding derivation, define the Hilbert transform of a real function of a real variable, φ(x):

Definition. Let φ(x) be integrable and differentiable on R. Then the Hilbert transform of φ(x), denoted H{φ(x)}, is given by

H\{\phi(x)\} = \frac{1}{\pi}\, \mathrm{P.V.} \int_{-\infty}^{\infty} \frac{\phi(\xi)}{\xi - x}\, d\xi    (10)

Comparing this definition to eqn. (8), we see that if φ is the imaginary part of some analytic function on the real line, then H{φ(x)} gives the real component of the same function on the real line.
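As a quick numerical sanity check of this definition (a minimal sketch of my own, not part of the paper), the code below approximates the principal-value integral in eqn. (10) on a uniform grid, simply skipping the singular node so that the odd singularity cancels symmetrically. The test function φ(x) = 1/(1 + x²) is the real part on R of f(z) = i/(z + i), which is analytic in the upper half-plane, so by eqn. (9) the transform should come out close to −x/(1 + x²). The grid, window, and sample points are arbitrary choices.

```python
import numpy as np

# Crude check of eqn. (10): H{phi}(x) = (1/pi) P.V. int phi(xi)/(xi - x) dxi.
# Expected answer for phi(x) = 1/(1+x^2) is -x/(1+x^2), by eqn. (9) applied
# to f(z) = i/(z+i), which is analytic in the upper half-plane.

xi = np.linspace(-200.0, 200.0, 400_001)   # wide integration window, spacing 0.001
dxi = xi[1] - xi[0]
phi = 1.0 / (1.0 + xi**2)

def hilbert_pv(x):
    """Principal-value quadrature: drop the grid point xi == x."""
    keep = np.abs(xi - x) > 0.5 * dxi
    return np.sum(phi[keep] / (xi[keep] - x)) * dxi / np.pi

for x in (-2.0, -0.5, 0.0, 1.0, 3.0):      # targets chosen to lie on the grid
    print(f"x = {x:+.1f}   numeric = {hilbert_pv(x):+.5f}   exact = {-x / (1 + x**2):+.5f}")
```

The agreement is limited only by the grid spacing and the truncation of the tails; running the same check with φ(x) = x/(1 + x²) illustrates eqn. (8) in the same way.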
It seems natural to suspect that a form like eqn. (9) will give the inverse Hilbert transform; we will see later on that this is indeed the case.

1.2 Properties of the Hilbert Transform and Inverse

The form of definition (10) suggests an alternate view of the Hilbert transform of φ(x): as a convolution of φ(x) with −1/(πx). Using the convolution theorem on this definition provides some intuition as to the action of the Hilbert transform on a function in the frequency domain. Recall that Fourier transforms are useful for computing convolutions, since the transform of a convolution is equal to the product of the transforms. Let F{·} denote the Fourier transform operation. In this paper, we adopt the convention of engineers and define

F\{f(t)\} = \hat{f}(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt    (11)

F^{-1}\{\hat{f}(\omega)\} = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{f}(\omega)\, e^{i\omega t}\, d\omega    (12)

Then

F\{H\{\phi(t)\}\} = F\left\{ \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{\phi(\xi)}{\xi - t}\, d\xi \right\}    (13)

= F\left\{ \phi(t) * \left( -\frac{1}{\pi t} \right) \right\}    (14)

= \hat{\phi}(\omega)\, F\left\{ -\frac{1}{\pi t} \right\}    (15)

But what is the Fourier transform of −1/(πt)? It is worth memorizing that F{−1/(πt)} = i sgn(ω). To see this, consider the inverse transform of sgn(ω). This function can be written as 2H(ω) − 1, with H the Heaviside distribution. Regularizing the positive-frequency part with a factor e^{−εω}, ε > 0, we can approximate the inverse transform by the sequence of distributional actions

F^{-1}\{2H(\omega) - 1\} = \frac{2}{2\pi} \int_0^{\infty} e^{-\epsilon\omega}\, e^{i\omega t}\, d\omega - \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i\omega t}\, d\omega    (16)

= \frac{1}{\pi} \left[ \frac{e^{i\omega(t + i\epsilon)}}{i(t + i\epsilon)} \right]_0^{\infty} - \delta(t)    (17)

= \frac{i}{\pi}\, \frac{1}{t + i\epsilon} - \delta(t)    (18)

Now, taking the limit of this expression as ε → 0, we should obtain the expression whose Fourier transform is the signum function. The first term gives

\lim_{\epsilon \to 0} \frac{i}{\pi}\, \frac{1}{t + i\epsilon} = \frac{i}{\pi}\, \frac{1}{t + i0} \equiv \frac{i}{\pi} \left[ \mathrm{P.V.}\, \frac{1}{t} - i\pi\delta(t) \right]    (19)

so that

F^{-1}\{2H(\omega) - 1\} = \frac{i}{\pi} \left[ \mathrm{P.V.}\, \frac{1}{t} - i\pi\delta(t) \right] - \delta(t)    (20)

= \frac{i}{\pi t}    (21)

That is, F^{-1}{sgn(ω)} = i/(πt), and therefore F{−1/(πt)} = i sgn(ω). This confirms that taking the Hilbert transform of a function of t is equivalent to multiplying the function's Fourier transform by i sgn(ω) at every point in the frequency domain. In other words, the Hilbert transform rotates the transform of a signal by π/2 in frequency space, either clockwise or counter-clockwise depending on the sign of ω.

A related fact: like the Fourier transform, the Hilbert transform preserves the "energy" of a function. Define a function's energy by

E_f = \int_{-\infty}^{\infty} |f(t)|^2\, dt    (22)

Since the Hilbert transform just multiplies the Fourier transform by i sgn(ω), a factor of unit modulus, it clearly leaves the energy of the transform unchanged. By Parseval's theorem, E_{\hat{f}} is a fixed constant multiple of E_f, and so E_{H\{f\}} = E_f.

Some other useful properties of the Hilbert transform are easily verified:

• The Hilbert transform and its inverse are linear operations.
• An extension of the property described above is that F{H^n f(t)} = (i sgn(ω))^n \hat{f}(ω).
• Taking multiple Hilbert transformations of a function reveals the following identities: H² = −I, hence H⁴ = I and H^{−1} = H³ = −H.

From this last property, the definition of the inverse Hilbert transform is apparent:

Definition. The inverse Hilbert transform of ψ(x), denoted H^{−1}{ψ(x)}, is given by

H^{-1}\{\psi(x)\} = -\frac{1}{\pi}\, \mathrm{P.V.} \int_{-\infty}^{\infty} \frac{\psi(\xi)}{\xi - x}\, d\xi    (23)

As suggested by the result in eqn. (9), we now see that the inverse transform of ψ(x) indeed gives the imaginary part on R of an analytic function whose real part on R is ψ(x).

1.3 Real and Analytic Signals

It is necessary to introduce some terminology from signal processing:

• A signal is any time-varying quantity carrying information we would like to analyze.
• To engineers, an analytic signal is one with only positive frequency components (a numerical construction of such a signal is sketched below).
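The last bullet can be made concrete with a short sketch (my own illustration, not from the paper): starting from a real signal, zero out the negative-frequency half of its discrete spectrum and transform back. This is also what scipy.signal.hilbert does internally; note that, despite its name, that routine returns the analytic signal rather than the Hilbert transform itself. The sampling rate and test signal are arbitrary choices.

```python
import numpy as np
from scipy.signal import hilbert   # returns the analytic signal, not H{s} itself

fs = 1000.0                                   # sampling rate in Hz (arbitrary)
t = np.arange(0.0, 1.0, 1.0 / fs)             # n = 1000 samples (even)
s = (1.0 + 0.5 * np.cos(2 * np.pi * 3 * t)) * np.cos(2 * np.pi * 60 * t)

# Keep DC and the Nyquist bin, double the positive frequencies, drop the rest.
n = len(s)
w = np.zeros(n)
w[0] = 1.0
w[1:n // 2] = 2.0
w[n // 2] = 1.0
analytic_manual = np.fft.ifft(np.fft.fft(s) * w)

analytic_scipy = hilbert(s)                   # same construction, done by scipy
print(np.allclose(analytic_manual, analytic_scipy))   # expect True
print(np.allclose(analytic_manual.real, s))           # real part recovers the signal
```

With the analytic signal in hand, its modulus and argument give the amplitude A(t) and phase θ(t) that appear in the representation discussed next.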
For such signals, it is possible (we shall soon see) to give a representation of the complex form

\Phi(t) = A(t)\, e^{i\theta(t)}    (24)

where the original signal is the projection of the complex signal onto the real axis. The engineering notion of analyticity is closely related to the mathematician's notion of what it means for a function to be analytic. For a function with a strictly positive frequency spectrum,

f(t) = \frac{1}{2\pi} \int_0^{\infty} \hat{f}(\omega)\, e^{i t \omega}\, d\omega    (25)

If we wish to extend this function on R to the entire complex plane, extend t → z = t + is ∈ C. Then the function on the complex plane may be expressed

f(z) = \frac{1}{2\pi} \int_0^{\infty} \hat{f}(\omega)\, e^{i(t + is)\omega}\, d\omega    (26)

= \frac{1}{2\pi} \int_0^{\infty} \hat{f}(\omega)\, e^{-s\omega}\, e^{i t \omega}\, d\omega    (27)

Thus, it is plausible that the integral will return an analytic function so long as s > 0. In other words, a function with a positive frequency spectrum will have an analytic extension in the upper half-plane. So an engineer's analytic function corresponds to an extended function which is analytic to the mathematician for Im(z) > 0.

• Once we have such an analytic representation of the form A(t)e^{iθ(t)}, we can talk about the instantaneous phase θ(t) of the signal, and its time-varying amplitude A(t).
• The instantaneous frequency is the rate of change of the phase, θ'(t). We can compute the expected value of the frequency in the time domain by

\langle \omega \rangle = \int_{-\infty}^{\infty} \omega(t)\, |\Phi(t)|^2\, dt    (28)

= \int_{-\infty}^{\infty} \omega(t)\, A^2(t)\, dt    (29)

On the other hand, we may also calculate it in frequency space:

\langle \omega \rangle = \int_{-\infty}^{\infty} \omega\, |\hat{\Phi}(\omega)|^2\, d\omega    (30)

= \int_{-\infty}^{\infty} \omega\, \hat{\Phi}(\omega)\, \hat{\Phi}^*(\omega)\, d\omega    (31)

= \int_{-\infty}^{\infty} F^{-1}\{\omega \hat{\Phi}(\omega)\}\, \Phi^*(t)\, dt \quad \text{(by Parseval's Theorem, up to the constant factor set by this convention)}    (32)

= -i \int_{-\infty}^{\infty} \Phi^*(t)\, \Phi'(t)\, dt    (33)

= \int_{-\infty}^{\infty} \left[ \theta'(t) - i\, \frac{A'(t)}{A(t)} \right] |A(t)|^2\, dt    (34)

Now, since A(t) is real and in L²(R), the second integrand forms a perfect derivative, which vanishes when integrated over the whole real line. What remains is ⟨ω⟩ = ∫ θ'(t) A²(t) dt, which matches eqn. (29) and justifies interpreting θ'(t) as the instantaneous frequency of the signal.
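This identity between the two averages can be checked numerically. In the sketch below (my own illustration; the Gaussian-windowed chirp, sampling rate, and window width are arbitrary choices), the analytic signal is built with scipy.signal.hilbert, θ'(t) is estimated by finite differences, and the A²-weighted mean instantaneous frequency is compared with the power-spectrum-weighted mean frequency. Both averages are normalized by the total energy, which sidesteps the constant 2π of the Fourier convention used above.

```python
import numpy as np
from scipy.signal import hilbert

fs = 2000.0                                   # sampling rate in Hz (arbitrary)
t = np.arange(0.0, 2.0, 1.0 / fs)

# Gaussian-enveloped chirp sweeping roughly 80 -> 120 Hz; the envelope keeps A(t)
# negligible at the ends, mimicking the decay assumed in the derivation above.
envelope = np.exp(-((t - 1.0) ** 2) / (2 * 0.15**2))
s = envelope * np.cos(2 * np.pi * (80.0 * t + 10.0 * t**2))

Phi = hilbert(s)                              # analytic signal Phi(t) = A(t) e^{i theta(t)}
A = np.abs(Phi)
theta = np.unwrap(np.angle(Phi))
omega_inst = np.gradient(theta, t)            # theta'(t), in rad/s

# Time-domain average, eqn. (29): A^2-weighted mean of theta'(t).
mean_omega_time = np.sum(omega_inst * A**2) / np.sum(A**2)

# Frequency-domain average, eqn. (30): power-spectrum-weighted mean frequency.
Phi_hat = np.fft.fft(Phi)
omega = 2 * np.pi * np.fft.fftfreq(len(Phi), d=1.0 / fs)
mean_omega_freq = np.sum(omega * np.abs(Phi_hat) ** 2) / np.sum(np.abs(Phi_hat) ** 2)

# Both should land near 2*pi*100 rad/s, i.e. about 100 Hz for this chirp.
print(mean_omega_time / (2 * np.pi), mean_omega_freq / (2 * np.pi))
```

The two printed values agree to within the accuracy of the finite-difference phase derivative, which is the discrete counterpart of the equality between eqns. (29) and (34).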
