Periodic Eigenfunctions of the Fourier Transform Operator

COMLAN DE SOUZA AND DAVID W. KAMMLER

Abstract. Let the generalized function (tempered distribution) $f$ on $\mathbb{R}$ be a $p$-periodic eigenfunction of the Fourier transform operator $F$, i.e., $f(x+p) = f(x)$, $Ff = \lambda f$ for some $\lambda \in \mathbb{C}$. We show that $\lambda = 1, -i, -1,$ or $+i$; that $p = \sqrt{N}$ for some $N = 1, 2, \dots$; and that $f$ has the representation
\[
f(x) = \sum_{m=-\infty}^{\infty} \sum_{n=0}^{N-1} \gamma[n]\, \delta\!\left(x - \frac{n}{p} - mp\right),
\]
where $\delta$ is the Dirac functional and $\gamma$ is an eigenfunction of the discrete Fourier transform operator $F_N$ with
\[
(F_N \gamma)[k] = \frac{1}{N} \sum_{n=0}^{N-1} \gamma[n]\, e^{-2\pi i k n/N} = \frac{\lambda}{\sqrt{N}}\, \gamma[k], \qquad k = 0, 1, \dots, N-1.
\]
We generalize this result to $p_1,p_2$-periodic eigenfunctions of $F$ on $\mathbb{R}^2$ and to $p_1,p_2,p_3$-periodic eigenfunctions of $F$ on $\mathbb{R}^3$. We present numerous illustrations of such periodic eigenfunctions of $F$ defined on the plane $\mathbb{R}^2$, and show how, within this context, these can be used to create interesting quasiperiodic eigenfunctions of $F$ having $m$-fold rotational symmetry.

Date: April 28, 2011.
1991 Mathematics Subject Classification. Primary 42C15, 42B99; Secondary 42A16, 42A75.
Key words and phrases. periodic, eigenfunction, Fourier transform operator.

1. Introduction

In this paper we will study certain generalizations of the Dirac comb (or Ш functional, see [2, p. 117])
\[
(1) \qquad Ш(x) := \sum_{n=-\infty}^{\infty} \delta(x - n),
\]
where $\delta$ is the Dirac functional. We work within the context of the Schwartz theory of distributions [10] as developed in [13], [8, Chapters 4, 8], [3, Chapter 7], [2, Chapter 3]. For purposes of manipulation we use "function" notation for $\delta$, $Ш$, and related functionals. More precise functional notation, e.g.,
\[
\delta\{\phi\} := \phi(0), \quad \phi \in S(\mathbb{R}), \qquad
Ш\{\phi\} := \sum_{n=-\infty}^{\infty} \phi(n), \quad \phi \in S(\mathbb{R}),
\]
is used where appropriate. Here
\[
(2) \qquad S(\mathbb{R}) := \left\{\phi \in C^{\infty}(\mathbb{R}) : \sup_x |x^m \phi^{(n)}(x)| < \infty,\ m, n = 0, 1, 2, \dots\right\}
\]
is the linear space of univariate Schwartz functions. Various useful properties of $\delta$ and $Ш$ are developed in [13], [8, Chapters 4, 8], [3, Chapter 7], [2, Chapter 3]. For example,
\[
(3) \qquad \delta\!\left(\frac{x}{a}\right) = |a|\, \delta(x), \qquad a > 0 \text{ or } a < 0.
\]
The Fourier transform of the Dirac comb can be obtained by using the Poisson sum formula
\[
\sum_{n=-\infty}^{\infty} \phi(n) = \sum_{k=-\infty}^{\infty} \hat{\phi}(k), \qquad \phi \in S(\mathbb{R}),
\]
where we use the superscript $\hat{\ }$ for the Fourier transform
\[
\hat{\phi}(s) := \int_{-\infty}^{\infty} e^{-2\pi i s x} \phi(x)\, dx.
\]
Thus
\begin{align*}
\hat{Ш}\{\phi\} &:= \int_{-\infty}^{\infty} \hat{Ш}(x)\, \phi(x)\, dx && \text{integral notation for the functional} \\
&:= \int_{-\infty}^{\infty} Ш(s)\, \hat{\phi}(s)\, ds && \text{definition of the Fourier transform} \\
&=: \sum_{k=-\infty}^{\infty} \hat{\phi}(k) && \text{action of } Ш \\
&= \sum_{n=-\infty}^{\infty} \phi(n) && \text{Poisson sum formula} \\
&=: Ш\{\phi\}, \quad \phi \in S(\mathbb{R}) && \text{action of } Ш.
\end{align*}
We assume knowledge of the remaining properties of the $\delta$ functional, such as the sifting property
\[
(4) \qquad \int_{-\infty}^{\infty} \delta(x - b)\, \phi(x)\, dx = \phi(b), \qquad b \in \mathbb{R},\ \phi \in S(\mathbb{R}),
\]
where the "integral" refers to the action of the functional $\delta(x-b)$, and the identity
\[
(5) \qquad g(x)\, \delta(x - a) = g(a)\, \delta(x - a)
\]
(which holds when $g, g', g'', \dots$ are all continuous functions of slow growth at $\pm\infty$).

The $Ш$ functional is used in the study of sampling, periodization, etc., see [8, Chapter 5], [3, Chapters 7–9], [2, Chapter 7]. We will illustrate this process using a notation that can be generalized to an $n$-dimensional setting.
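The Poisson-sum argument above can be checked numerically. The following sketch (not from the paper; the test function $\phi(x) = e^{-\pi a x^2}$, the width $a$, and the truncation limits are illustrative choices) compares $\sum_n \phi(n)$ with $\sum_k \hat{\phi}(k)$ for a Schwartz function whose transform is known in closed form, consistent with the identity $\hat{Ш} = Ш$.

```python
import numpy as np

# Test function phi(x) = exp(-pi*a*x^2); with the e^{-2*pi*i*s*x} convention
# used in the paper, its Fourier transform is phi_hat(s) = exp(-pi*s^2/a)/sqrt(a).
a = 0.73                      # arbitrary positive width parameter (illustrative)
n = np.arange(-50, 51)        # truncation of the doubly infinite sums

lhs = np.sum(np.exp(-np.pi * a * n**2))                # sum of phi over the integers
rhs = np.sum(np.exp(-np.pi * n**2 / a)) / np.sqrt(a)   # sum of phi_hat over the integers

print(lhs, rhs)               # the two sums agree to machine precision (Poisson sum formula)
```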
Let $a_1 \in \mathbb{R}$ with $a_1 \neq 0$, and let $A_1 := \frac{1}{a_1}$. We define the lattice $L_{a_1} := \{n a_1 : n \in \mathbb{Z}\}$ and the corresponding $a_1$-periodic Dirac comb
\[
(6) \qquad \operatorname{grid}_{a_1}(x) := \sum_{a \in L_{a_1}} \delta(x - a).
\]
We use (3) and (1) to write
\[
\operatorname{grid}_{a_1}(x) := \sum_{n=-\infty}^{\infty} \delta(x - n a_1)
= \frac{1}{|a_1|} \sum_{n=-\infty}^{\infty} \delta\!\left(\frac{x}{a_1} - n\right)
= \frac{1}{|a_1|}\, Ш\!\left(\frac{x}{a_1}\right),
\]
and thereby find
\begin{align*}
\widehat{\operatorname{grid}}_{a_1}(s) &= Ш(a_1 s)
= \sum_{k=-\infty}^{\infty} \delta(a_1 s - k)
= \frac{1}{|a_1|} \sum_{k=-\infty}^{\infty} \delta\!\left(s - \frac{k}{a_1}\right) \\
(7) \qquad &= |A_1|\, \operatorname{grid}_{A_1}(s).
\end{align*}
Let $g$ be any univariate distribution with compact support. We can periodize $g$ by writing
\begin{align*}
f(x) &:= \sum_{n=-\infty}^{\infty} g(x - n a_1)
= \left(\sum_{n=-\infty}^{\infty} \delta(x - n a_1)\right) * g(x) \\
(8) \qquad &= \operatorname{grid}_{a_1}(x) * g(x),
\end{align*}
where $*$ represents the convolution product. We then use (8), (7), and (5) as we write
\[
\hat{f}(s) = |A_1|\, \operatorname{grid}_{A_1}(s)\, \hat{g}(s)
= |A_1| \sum_{k=-\infty}^{\infty} \delta(s - k A_1)\, \hat{g}(s)
= |A_1| \sum_{k=-\infty}^{\infty} \hat{g}(k A_1)\, \delta(s - k A_1),
\]
and thereby obtain the weakly convergent Fourier series
\[
(9) \qquad f(x) = |A_1| \sum_{k=-\infty}^{\infty} \hat{g}(k A_1)\, e^{2\pi i k A_1 x}.
\]
We observe that $\operatorname{grid}_{a_1}$ has support at the points $n a_1$, $n = 0, \pm 1, \pm 2, \dots$ of the lattice $L_{a_1}$, while the Fourier transform $|A_1| \operatorname{grid}_{A_1}$ has support at the points $\frac{n}{a_1}$, $n = 0, \pm 1, \pm 2, \dots$ of the lattice $L_{A_1}$. It follows that $\widehat{\operatorname{grid}}_{a_1} = \operatorname{grid}_{a_1}$ if and only if $a_1 = \pm 1$, i.e., if and only if
\[
(10) \qquad \operatorname{grid}_{a_1} = Ш.
\]

Let $F$ be the Fourier transform operator on the space of tempered distributions. It is well known [2], [3], [8] that $F$ is linear and that
\[
(11) \qquad F^4 = I,
\]
where $I$ denotes the identity operator on the space of tempered distributions. We are interested in tempered distributions $f$ such that
\[
(12) \qquad Ff = \lambda f,
\]
where $\lambda$ is a scalar. Any distribution $f$ that satisfies (12), which we will call an eigenfunction of $F$, must also satisfy
\[
(13) \qquad F^n f = \lambda^n f
\]
for any positive integer $n$, due to the linearity of the operator $F$. When $n = 4$ we have
\[
F^4 f = \lambda^4 f, \qquad If = \lambda^4 f, \qquad f = \lambda^4 f,
\]
so $\lambda^4 = 1$. Thus the eigenvalues of the operator $F$ are $1, -1, i, -i$.

2. Preliminaries

We would like to characterize all periodic eigenfunctions $f$ of the Fourier transform operator $F$, i.e.,
\[
Ff = \lambda f, \qquad f \neq 0,
\]
within the context of 1, 2, 3 dimensions. Such eigenfunctions are really tempered distributions, but we will continue to use "function" nomenclature.

2.1. Periodic eigenfunctions of $F$ on $\mathbb{R}$.

2.1.1. Eigenfunctions of $F_N$. We first consider the periodic eigenfunctions of the discrete Fourier transform operator $F_N$ since, as we will see later, they can be used to construct all periodic eigenfunctions of the Fourier transform operator $F$.

Definition 2.1. Let $N = 1, 2, \dots$. The matrix
\[
F_N := \frac{1}{N}
\begin{bmatrix}
1 & 1 & 1 & \dots & 1 \\
1 & \omega & \omega^2 & \dots & \omega^{N-1} \\
1 & \omega^2 & \omega^4 & \dots & \omega^{2N-2} \\
\vdots & & & & \vdots \\
1 & \omega^{N-1} & \omega^{2N-2} & \dots & \omega^{(N-1)(N-1)}
\end{bmatrix},
\qquad \omega := e^{-2\pi i/N},
\]
is said to be the discrete Fourier transform operator. In component form we have
\[
(14) \qquad (F_N f)[k] := \frac{1}{N} \sum_{n=0}^{N-1} e^{-2\pi i k n/N} f[n], \qquad k = 0, 1, 2, \dots, N-1.
\]
We often use (14) in a context where both $f$ and $F_N f$ are $N$-periodic functions on $\mathbb{Z}$, see [3, Chapters 1, 4, 6], and we say that $f, F_N f$ are functions on $\mathbb{P}_N$. In such cases the summation can be taken over any $N$ consecutive integers, and we allow $k$ to take all values from $\mathbb{Z}$.

It is easy to verify the operator identity
\[
F_N^2 = \frac{1}{N} R_N,
\]
where
\[
R_N :=
\begin{bmatrix}
1 & 0 & 0 & \dots & 0 & 0 \\
0 & 0 & 0 & \dots & 0 & 1 \\
0 & 0 & 0 & \dots & 1 & 0 \\
\vdots & & & & & \vdots \\
0 & 0 & 1 & \dots & 0 & 0 \\
0 & 1 & 0 & \dots & 0 & 0
\end{bmatrix}
\]
is the reflection operator, i.e.,
\[
(R_N f)[n] := f[-n], \qquad n = 0, 1, \dots, N-1
\]
(with arguments taken modulo $N$). We then find
\[
F_N^4 = \left(\frac{1}{N} R_N\right)^2 = \frac{1}{N^2} R_N^2 = \frac{1}{N^2} I_N,
\]
where $I_N$ is the $N \times N$ identity matrix. In this way we see that if $F_N f = \lambda f$, $f \neq 0$, then
\[
\lambda^4 - \frac{1}{N^2} = 0,
\]
so $\lambda$ must take one of the values $\pm 1/\sqrt{N}$, $\pm i/\sqrt{N}$.
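These operator identities are easy to confirm numerically. The sketch below (an illustrative check, not part of the paper; $N = 8$ is an arbitrary choice) builds $F_N$ from (14), builds the reflection operator $R_N$, and verifies $F_N^2 = \frac{1}{N} R_N$, $F_N^4 = \frac{1}{N^2} I_N$, and that every eigenvalue lies in $\{\pm 1/\sqrt{N}, \pm i/\sqrt{N}\}$.

```python
import numpy as np

N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / N   # (F_N)[k, n] = e^{-2*pi*i*k*n/N} / N
R = np.zeros((N, N))
R[n, (-n) % N] = 1.0                               # reflection: (R_N f)[n] = f[-n mod N]

print(np.allclose(F @ F, R / N))                   # F_N^2 = (1/N) R_N
print(np.allclose(np.linalg.matrix_power(F, 4),
                  np.eye(N) / N**2))               # F_N^4 = (1/N^2) I_N

eigvals = np.linalg.eigvals(F)
print(np.round(eigvals * np.sqrt(N), 6))           # each entry is (approximately) 1, -1, i, or -i
```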
The columns (and rows) of $F_N$ are pairwise orthogonal, with
\[
F_N^T \overline{F_N} = \frac{1}{N} I_N = \overline{F_N}^T F_N.
\]
Thus $F_N$ is normal, so there is an orthonormal basis for $\mathbb{C}^N$ consisting of eigenvectors of $F_N$, [12, p. 226]. Now if
\[
F_N f = \pm\frac{1}{\sqrt{N}} f, \qquad f \neq 0,
\]
then
\[
R_N f = (N F_N^2) f = N \left(\pm\frac{1}{\sqrt{N}}\right)^2 f = f,
\]
i.e., $f$ is even, and if
\[
F_N f = \pm\frac{i}{\sqrt{N}} f, \qquad f \neq 0,
\]
then
\[
R_N f = (N F_N^2) f = N \left(\pm\frac{i}{\sqrt{N}}\right)^2 f = -f,
\]
i.e., $f$ is odd. By using Euler's identity we can write
\[
F_N = C_N - i S_N,
\]
where
\[
(15) \qquad C_N := \frac{1}{N} \left[\cos\!\left(2\pi \frac{nk}{N}\right)\right]_{n,k=0}^{N-1},
\qquad\qquad
(16) \qquad S_N := \frac{1}{N} \left[\sin\!\left(2\pi \frac{nk}{N}\right)\right]_{n,k=0}^{N-1}
\]
are symmetric real matrices. We easily verify that
\[
C_N R_N = C_N = R_N C_N, \qquad S_N R_N = -S_N = R_N S_N,
\]
and thereby conclude that if
\[
F_N f = \pm\frac{1}{\sqrt{N}} f \quad \text{then} \quad C_N f = \pm\frac{1}{\sqrt{N}} f,
\]
and if
\[
F_N f = \pm\frac{i}{\sqrt{N}} f \quad \text{then} \quad S_N f = \mp\frac{1}{\sqrt{N}} f.
\]
Since $C_N, S_N$ are real and symmetric, this guarantees that we can find a real orthonormal basis for $\mathbb{C}^N$ consisting of eigenvectors of $F_N$.
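The decomposition $F_N = C_N - i S_N$ and the relations with $R_N$ can likewise be spot-checked. The sketch below (illustrative only; $N = 8$ is again an arbitrary choice) constructs $C_N$ and $S_N$ from (15) and (16) and verifies their symmetry, Euler's identity entrywise, and the commutation relations used above.

```python
import numpy as np

N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / N
C = np.cos(2 * np.pi * np.outer(n, n) / N) / N         # C_N as in (15)
S = np.sin(2 * np.pi * np.outer(n, n) / N) / N         # S_N as in (16)
R = np.zeros((N, N))
R[n, (-n) % N] = 1.0                                   # reflection operator R_N

print(np.allclose(F, C - 1j * S))                      # F_N = C_N - i S_N
print(np.allclose(C, C.T), np.allclose(S, S.T))        # C_N, S_N are real symmetric
print(np.allclose(C @ R, C), np.allclose(R @ C, C))    # C_N R_N = C_N = R_N C_N
print(np.allclose(S @ R, -S), np.allclose(R @ S, -S))  # S_N R_N = -S_N = R_N S_N
```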