Applications of the Fourier Transform: Convolution and Correlation


Applications of the Fourier transform

So far we have only considered the Fourier transform as a way to obtain the frequency spectrum of a function or signal. There are, however, other important applications:

Convolution: Real physical systems smear out input signals because of the finite response time of the apparatus; if you send an impulse (a δ-function) in, the output is a broadened pulse rather than an impulse. Fourier analysis allows us to deconvolve the response of the apparatus and recover the true input signal.

Correlation: Correlations are used to compare two signals and test whether they are related (cross-correlation). This is useful for velocimetry and for sonar/radar ranging. A signal can also be correlated with itself (autocorrelation), which is especially useful for signals corrupted by noise.

Convolution

Convolution arises when we try to predict the response of a linear physical system to a given input. Physical systems have a non-ideal response: for example, the capacitance of a detector causes the input signal to appear smeared out in the output. If the response of the system is known (or measured), one can deconvolve the output signal using Fourier analysis to obtain the true signal. Convolution is also useful for image processing, e.g. for finding particular features in an image or for filtering.

Example: RC circuit. The current I through the resistor is

    I = \frac{V_{in} - V_{out}}{R},

while the current I through the capacitor is

    I = C \frac{dV_{out}}{dt}.

Both currents are the same, therefore

    \frac{V_{in} - V_{out}}{R} = C \frac{dV_{out}}{dt},

and we obtain the differential equation

    \frac{dV_{out}}{dt} + \frac{V_{out}}{RC} = \frac{V_{in}}{RC}.

Multiplying both sides by e^{t/RC} we get

    \frac{dV_{out}}{dt} e^{t/RC} + \frac{V_{out}}{RC} e^{t/RC} = \frac{V_{in}}{RC} e^{t/RC}

    \frac{d}{dt}\left( V_{out}\, e^{t/RC} \right) = \frac{V_{in}}{RC} e^{t/RC}.

Integrating, we obtain the analytic solution:

    V_{out}(t) = \frac{e^{-t/RC}}{RC} \int_{-\infty}^{t} e^{\tau/RC}\, V_{in}(\tau)\, d\tau + C_1          (1)

               = \frac{1}{RC} \int_{-\infty}^{t} e^{-(t-\tau)/RC}\, V_{in}(\tau)\, d\tau,                 (2)

where we set the constant of integration C_1 = 0.

Now consider a δ-function input, V_{in}(t) = \delta(t). Performing the integration we get

    V_{out}(t) = \begin{cases} 0, & t < 0 \\ \frac{1}{RC}\, e^{-t/RC}, & t \ge 0 \end{cases}

Succession of δ pulses

Consider a train of δ-function inputs. Since the system is linear, V_{out} is just the sum of the responses to the individual pulses. If the pulses are close together, their responses overlap: the signal gets convolved with the exponential response.

As the pulse separation becomes smaller and smaller, we pass to the continuous case. We can write

    V_{in}(t) = \int_{-\infty}^{\infty} V_{in}(\tau)\, \delta(t - \tau)\, d\tau.

In our example the response r to a δ pulse is just an exponential, r(t) \propto e^{-t/RC}. The output can therefore be written as

    V_{out}(t) = \int_{-\infty}^{\infty} V_{in}(\tau)\, r(t - \tau)\, d\tau          (3)

               = V_{in} \otimes r,                                                   (4)

which is the convolution of the input signal with the response function of the system (compare with the analytic solution, eqn. (2)). A numerical sketch of this convolution is given below.
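To make eqns. (3)-(4) concrete, here is a minimal Python sketch of the RC example (assuming NumPy; the grid spacing, RC value, and pulse positions/amplitudes are invented for illustration):

```python
import numpy as np

dt = 1e-3                         # sample spacing (s), illustrative
t = np.arange(0.0, 2.0, dt)
RC = 0.05
r = np.exp(-t / RC) / RC          # response r(t) = (1/RC) exp(-t/RC) for t >= 0

# Input: a train of narrow pulses approximating delta functions
v_in = np.zeros_like(t)
for pulse_time, area in [(0.2, 1.0), (0.25, 0.5), (0.8, 0.8)]:
    v_in[int(pulse_time / dt)] = area / dt

# Discrete form of eqn. (3): the output is the input convolved with r
v_out = np.convolve(v_in, r)[:len(t)] * dt

print(v_out.max())                # 1/RC = 20: height of the strongest single-pulse response
```

Plotting v_out against t shows each δ pulse replaced by a decaying exponential, with the responses of the closely spaced pulses at 0.2 s and 0.25 s merging into one smeared feature.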
Convolution and the Fourier transform

The convolution of two functions p and q is defined as

    p \otimes q = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} p(\tau)\, q(t - \tau)\, d\tau.          (5)

What does this have to do with Fourier transforms? Let us apply the Fourier transform to the convolution:

    F[p \otimes q] = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} [p \otimes q]\, e^{-i\omega t}\, dt

                   = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} p(\tau)\, q(t - \tau)\, d\tau\; e^{-i\omega t}\, dt

                   = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} p(\tau) \left[ \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} q(t - \tau)\, e^{-i\omega t}\, dt \right] d\tau.

Fourier convolution theorem

Using the shifting property of the Fourier transform for the term in square brackets,

    \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} q(t - \tau)\, e^{-i\omega t}\, dt = e^{-i\omega\tau}\, Q(\omega),

where Q(\omega) = F[q]. Hence

    F[p \otimes q] = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} p(\tau)\, e^{-i\omega\tau}\, Q(\omega)\, d\tau = P(\omega)\, Q(\omega).

Therefore

    F[p \otimes q] = F[p] \cdot F[q].          (6)

This is the Fourier convolution theorem: a convolution integral in the time domain is simply a product in the frequency domain.

Typically this is used to deconvolve a signal. If the system is linear and its response r to a δ pulse is known or measured, we can use the theorem to deconvolve the output signal V_{out}. Since

    V_{out} = V_{in} \otimes r,

we have

    F[V_{out}] = F[V_{in} \otimes r] = F[V_{in}]\, F[r],

and finally

    F[V_{in}] = \frac{F[V_{out}]}{F[r]}, \qquad V_{in} = F^{-1}\!\left[ \frac{F[V_{out}]}{F[r]} \right].

Convolution - final remarks

Deconvolution only works for linear systems, where superposition holds. Convolution is commutative: p \otimes q = q \otimes p. There are many applications, such as "cleaning up" a smeared signal by deconvolution or finding particular features in an image. In real-world applications the signal is not only smeared out by the response function but also carries noise on top of it; this can be addressed using Wiener deconvolution (next week).
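Here is a minimal numerical check of the convolution theorem (6) and of the deconvolution recipe above, as a sketch assuming NumPy (the test signal, grid, and RC value are invented for illustration; zero-padding is used so that the FFT product corresponds to the same linear convolution as eqn. (3)):

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 2.0, dt)
RC = 0.05
n = 2 * len(t)                    # zero-padded FFT length: circular convolution then equals linear convolution

# A smooth test input and the exponential response of the RC example
v_in = np.exp(-((t - 0.5) / 0.1) ** 2) * np.sin(2 * np.pi * 5 * t)
r = np.exp(-t / RC) / RC

# Time-domain convolution (eqn. (3)), kept at full length
v_out = np.convolve(v_in, r) * dt

# Convolution theorem (eqn. (6)): F[v_in ⊗ r] = F[v_in] F[r]
lhs = np.fft.fft(v_out, n)
rhs = np.fft.fft(v_in, n) * np.fft.fft(r, n) * dt
print(np.allclose(lhs, rhs))      # True

# Deconvolution: divide in the frequency domain and transform back
v_rec = np.fft.ifft(np.fft.fft(v_out, n) / (np.fft.fft(r, n) * dt)).real[:len(t)]
print(np.max(np.abs(v_rec - v_in)))   # tiny residual: recovery is essentially exact without noise
```

If noise is added to v_out, the plain division amplifies frequencies where F[r] is small, which is why the Wiener deconvolution mentioned above is needed in practice.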
Correlation

Correlation provides a measure of the similarity between two signals. Mathematically it is defined as

    p \odot q = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} p^{*}(\tau)\, q(t + \tau)\, d\tau.          (7)

Note the difference between correlation and convolution:

    p \otimes q = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} p(\tau)\, q(t - \tau)\, d\tau.

The correlation is a function of the lag time t. A function correlated with itself is called the autocorrelation:

    p \odot p = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} p^{*}(\tau)\, p(t + \tau)\, d\tau.          (8)

Unlike convolution, correlation is not commutative: p ⊙ q ≠ q ⊙ p.

Correlation of two functions - example

Consider the two functions p(t) and q(t):

    p(t) = \begin{cases} 0, & t < 0 \\ 1, & 0 < t < 1 \\ 0, & t > 1 \end{cases}
    \qquad\text{and}\qquad
    q(t) = \begin{cases} 0, & t < 0 \\ 1 - t, & 0 < t < 1 \\ 0, & t > 1 \end{cases}

(The graphical illustration of the correlation integral for this pair, and the resulting correlation function, are shown in the accompanying figures.)

Average correlation function

If the functions being correlated are not of finite duration and do not vanish as t \to \pm\infty, the correlation integral may not exist. In that case we can define an average correlation function:

    [p \odot q]_{avg} = \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} p^{*}(\tau)\, q(t + \tau)\, d\tau.          (9)

If the functions p and q are periodic with period T_0, set T = T_0 in the above definition.

What does it mean if two functions are uncorrelated?

Let us write

    p(t) = \langle p \rangle + \Delta p(t) \qquad\text{and}\qquad q(t) = \langle q \rangle + \Delta q(t),

where we have decomposed each function into its mean and its time-dependent deviation from the mean, and we assume that both functions are real. Then

    [p \odot q]_{avg} = \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} [\langle p \rangle + \Delta p(\tau)]\,[\langle q \rangle + \Delta q(t + \tau)]\, d\tau

                      = \langle p \rangle \langle q \rangle
                        + \langle p \rangle \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} \Delta q(t + \tau)\, d\tau
                        + \langle q \rangle \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} \Delta p(\tau)\, d\tau
                        + \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} \Delta p(\tau)\, \Delta q(t + \tau)\, d\tau.

Uncorrelated functions

By definition,

    \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} \Delta q(t + \tau)\, d\tau = \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} \Delta p(\tau)\, d\tau = 0,

since the deviations from the mean have to average to zero:

    \langle p \rangle \equiv \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} p(\tau)\, d\tau
                     = \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} [\langle p \rangle + \Delta p(\tau)]\, d\tau
                     = \langle p \rangle + \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} \Delta p(\tau)\, d\tau,

so the remaining limit must vanish. Since those two terms are zero, the correlation function reduces to

    [p \odot q]_{avg} = \langle p \rangle \langle q \rangle + \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} \Delta p(\tau)\, \Delta q(t + \tau)\, d\tau.

If the variations in p are unrelated to the variations in q (for example, if one of them is noise), the remaining integral is zero and the functions are said to be uncorrelated:

    [p \odot q]_{avg} = \langle p \rangle \langle q \rangle.

The correlation is then constant and reduces to the product of the two mean values. In particular, if the mean value of either p or q is zero, the correlation is also zero, e.g. if one of them is white noise.

Example - Sonar/Radar ranging

By measuring the time delay between the transmission of a signal and the reception of its echo off an object, one can infer the distance, knowing the speed of the wave. Problem: the echo is weak, since the intensity falls off as 1/r^4, and it is corrupted by noise. Solution: rather than looking for the echo directly, cross-correlate the echo with the original reference signal. The correlation will be large at the lag time corresponding to the travel time of the signal. As the sketch below shows, correlation works well even when the signal-to-noise ratio is low.

Let the signal pulse s(t) be several cycles of a sine wave. The echo e(t) has two components: the attenuated original signal (α < 1), delayed in time by ∆, plus some noise n(t):

    e(t) = \alpha\, s(t - \Delta) + n(t).

In this example we chose α = 0.1 and a signal-to-noise ratio of about 0.03.
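A minimal Python sketch of this ranging idea (assuming NumPy; the sample rate, burst length, delay, attenuation, and noise level are invented for illustration and are less extreme than the values quoted above):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 5_000                              # sample rate in Hz
t = np.arange(0.0, 2.0, 1.0 / fs)

# Reference pulse: a burst of a 500 Hz sine wave
pulse = np.where(t < 0.4, np.sin(2 * np.pi * 500 * t), 0.0)

# Echo: attenuated copy of the pulse, delayed by 0.35 s, buried in noise
alpha, delay = 0.1, 0.35
echo = alpha * np.roll(pulse, int(delay * fs)) + 0.3 * rng.standard_normal(len(t))

# Cross-correlate the echo with the reference and locate the peak lag
corr = np.correlate(echo, pulse, mode="full")
lags = np.arange(-len(t) + 1, len(t))
print(lags[np.argmax(corr)] / fs)       # ~0.35 s, although the echo is much weaker than the noise
```

The peak of the cross-correlation stands well above the noise floor because the correlation averages over the full length of the reference pulse, so the delay (and hence the range) can be read off even though the echo itself is invisible in the raw trace.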