Fourier Transforms, Delta Functions and Gaussian Integrals


Math Methods for Polymer Science
Lecture 2: Fourier Transforms, Delta Functions and Gaussian Integrals

In the first lecture we reviewed the Taylor and Fourier series. These were both essentially ways of decomposing a given function into a different, more convenient, or more meaningful form. In this lecture we review the generalization of the Fourier series to the Fourier transform. In this context it is also natural to review two special functions, the Dirac delta function and the Gaussian, as these functions commonly arise in problems of Fourier analysis and are otherwise essential in polymer physics. For additional reading on Fourier transforms, delta functions and Gaussian integrals see Chapters 15, 1 and 8 of Arfken and Weber's text, Mathematical Methods for Physicists.

1 Fourier Transforms

Conceptually, the Fourier transform is a straightforward generalization of the Fourier series, which represents a function on a finite domain of size L by an infinite sum over a discrete set of functions, $\sin(2\pi n x/L)$ and $\cos(2\pi n x/L)$. In the Fourier transform, the size of the domain is taken to infinity, so that the domain becomes all positive and negative values of x (see Fig. 1). The key differences between Taylor series, Fourier series and Fourier transforms are summarized as follows:

Taylor series - series representation in polynomials; a "local" representation.

Fourier series - series representation in sines and cosines; a "global" representation (over a finite or periodic domain).

Fourier transform - integral representation in sines and cosines over an infinite domain ($L \to \infty$).

How do we generalize a Fourier series to an infinite domain? For the Fourier transform it is more convenient to use the complex representation of sine and cosine:

    e^{ix} = \cos x + i \sin x    (1)

Figure 1: Schematic of the Fourier transform of a function f(x), where
$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left(\frac{2\pi n}{L}x\right) + \sum_{n=1}^{\infty} b_n \sin\left(\frac{2\pi n}{L}x\right)$.

Using this we can rewrite the Fourier series:

    f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left(\frac{2\pi n}{L}x\right) + \sum_{n=1}^{\infty} b_n \sin\left(\frac{2\pi n}{L}x\right)
         = \sum_{n=-\infty}^{\infty} c_n \exp\left(\frac{i 2\pi n x}{L}\right)    (2)

where

    c_n = \begin{cases} (a_n - i b_n)/2 & n > 0 \\ (a_{-n} + i b_{-n})/2 & n < 0 \end{cases}    (3)

    c_0 = \frac{a_0}{2}.    (4)

Notice also that the complex functions $e^{i 2\pi n x/L}$ are orthogonal:

    \int_{-L/2}^{L/2} dx \, \exp\left(\frac{i 2\pi m}{L}x\right) \exp\left(\frac{i 2\pi n}{L}x\right)
        = \frac{L}{2\pi (m+n) i} \left[ \exp\left(\frac{i 2\pi (m+n)}{L}x\right) \right]_{-L/2}^{L/2}
        = \frac{L}{\pi (m+n)} \sin[\pi (m+n)]
        = \begin{cases} 0 & m + n \neq 0 \\ L & m + n = 0 \end{cases}    (5)

Using this, we can extract the Fourier coefficients:

    c_n = \frac{1}{L} \int_{-L/2}^{L/2} dx \, \exp\left(-\frac{i 2\pi n}{L}x\right) f(x)    (6)

Note that the complex notation takes care of the factors of 2, etc.

We want to define the Fourier transform as the $L \to \infty$ limit of a Fourier series. This limit is unusual because we take $L \to \infty$ while

    \lim_{L \to \infty} \frac{2\pi n}{L} \equiv k \quad \text{(finite)}.    (7)

Here, k is referred to as the wavenumber of the Fourier mode $e^{ikx}$. The Fourier transform of a function f(x) is defined as

    \tilde{f}(k) = \lim_{L \to \infty} (c_n L) = \int_{-\infty}^{\infty} dx \, e^{-ikx} f(x).    (8)

Often the Fourier transform is written as

    \mathcal{F}[f(x)] = \tilde{f}(k)    (9)

where $\mathcal{F}$ denotes the Fourier transform. Notice that the Fourier transform takes f(x), a function of the "real space" variable x, and outputs $\tilde{f}(k)$, a function of the "Fourier space" variable k.

The Fourier transform can be inverted using the definition of the Fourier series:

    f(x) = \sum_{n=-\infty}^{\infty} c_n e^{i k(n) x} = \sum_{n=-\infty}^{\infty} \frac{1}{L} (c_n L) e^{i k(n) x}.    (10)

Now as $L \to \infty$, k takes on a continuum of values:

    \lim_{L \to \infty} \Delta k = k(n+1) - k(n) = \frac{2\pi}{L} [(n+1) - n] = \frac{2\pi}{L} \to 0.

That is, $\Delta k$ becomes infinitesimally small in the $L \to \infty$ limit. This means we can write the sum as

    \lim_{L \to \infty} \frac{1}{L} \sum_{n=-\infty}^{\infty} = \frac{1}{2\pi} \lim_{\Delta k \to 0} \sum_{n=-\infty}^{\infty} \Delta k = \frac{1}{2\pi} \int_{-\infty}^{\infty} dk    (11)

The last step is Riemann's definition of an integral (see Fig. 2). And thus we get

    f(x) = \int_{-\infty}^{\infty} \frac{dk}{2\pi} \tilde{f}(k) e^{ikx}    (12)

Eq. (12) is the inverse Fourier transform, or

    \mathcal{F}^{-1}[\tilde{f}(k)] = f(x)    (13)

where $\mathcal{F}^{-1}$ denotes the inverse Fourier transform; here $\tilde{f}(k)$ is a function of "Fourier space" and f(x) is a function of "real space".
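The transform pair, eqs. (8) and (12), can be checked numerically by direct quadrature. The sketch below is our own illustration, not part of the original notes: it assumes NumPy is available and uses the test function $f(x) = e^{-x^2/2}$, whose transform should be $\sqrt{2\pi}\, e^{-k^2/2}$ (a special case of the Gaussian result derived in Sec. 1.2). It computes $\tilde{f}(k)$ on a grid and then inverts it, recovering f(x).

```python
import numpy as np

# Test function f(x) = exp(-x^2/2); exact transform is sqrt(2*pi)*exp(-k^2/2).
x = np.linspace(-20.0, 20.0, 2001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2)

k = np.linspace(-10.0, 10.0, 201)
dk = k[1] - k[0]

# Forward transform, eq. (8): f~(k) = \int dx e^{-ikx} f(x), as a Riemann sum.
ft = np.exp(-1j * np.outer(k, x)) @ f * dx

# Inverse transform, eq. (12): f(x) = \int dk/(2 pi) e^{ikx} f~(k).
f_back = np.exp(1j * np.outer(x, k)) @ ft * dk / (2 * np.pi)

print(np.abs(ft - np.sqrt(2 * np.pi) * np.exp(-k**2 / 2)).max())  # small
print(np.abs(f_back - f).max())                                    # small
```

Because the integrand is smooth and decays rapidly, the plain Riemann sum converges very quickly; the recovered f(x) agrees with the original to near machine precision.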
Figure 2: A figure showing that the Fourier series becomes an integral of the continuous function $\tilde{f}(k)$ in the limit $\Delta k \to 0$.

1.1 Delta Function

Related to the Fourier transform is a special function called the Dirac delta function, $\delta(x)$. Its essential properties can be deduced from the Fourier transform and inverse Fourier transform. Here, we simply insert the definition of the Fourier transform, eq. (8), into the equation for the inverse transform, eq. (12):

    f(x) = \int_{-\infty}^{\infty} \frac{dk}{2\pi} \tilde{f}(k) e^{ikx}
         = \int_{-\infty}^{\infty} \frac{dk}{2\pi} e^{ikx} \int_{-\infty}^{\infty} dx' \, e^{-ikx'} f(x')
         = \int_{-\infty}^{\infty} dx' \left[ \int_{-\infty}^{\infty} \frac{dk}{2\pi} e^{ik(x - x')} \right] f(x')
         = \int_{-\infty}^{\infty} dx' \, \delta(x - x') f(x') = f(x)    (14)

The last line defines the properties of the delta function, which is given implicitly by the bracketed integral on the second line. When you integrate the product of the Dirac delta function with another function, it returns the value of that function at the point where the argument of the delta function vanishes. Geometrically, you can think of it as an infinitely tall and infinitely narrowly peaked function with unit area under the curve (see Fig. 3).

Figure 3: Sketch of a Dirac delta function.

Using this definition of $\delta(x)$ we can derive the Fourier transforms of oscillatory functions.

Example 1: Compute the Fourier transform of $A \cos(qx)$.

    \tilde{f}(k) = \int_{-\infty}^{\infty} dx \, A \cos(qx) \, e^{-ikx}
                 = \frac{A}{2} \int_{-\infty}^{\infty} dx \, \left( e^{iqx} + e^{-iqx} \right) e^{-ikx}
                 = \frac{A}{2} \int_{-\infty}^{\infty} dx \, \left[ e^{i(q-k)x} + e^{-i(q+k)x} \right]
                 = \pi A \left[ \delta(q - k) + \delta(q + k) \right]

which is non-zero only for $k = \pm q$.

The Fourier transform is particularly useful for studying the properties of a function which is non-zero only over a finite region of space, for example the density or probability distribution of a polymer chain (see Fig. 4).

Figure 4: Plot of f(x), which is only non-zero near x = 0.

For this case, we can use a Taylor series expansion to cast light on what the Fourier transform tells us:

Figure 5: Plot of f(x).

    \tilde{f}(k) = \int_{-\infty}^{\infty} dx \, e^{-ikx} f(x)
                 = \int_{-\infty}^{\infty} dx \left[ 1 - ikx - \frac{1}{2!} k^2 x^2 + \frac{i}{3!} k^3 x^3 + \frac{1}{4!} k^4 x^4 + \cdots \right] f(x)    (15)
Clearly,

    \tilde{f}(k = 0) = \int_{-\infty}^{\infty} dx \, f(x)    (16)

which is the total area under the curve; call it N. But we see from the Taylor series that the various powers of k represent certain averages. That is,

    \tilde{f}(k) = N \left[ 1 - ik\langle x \rangle - \frac{1}{2!} k^2 \langle x^2 \rangle + \frac{i}{3!} k^3 \langle x^3 \rangle + \frac{1}{4!} k^4 \langle x^4 \rangle + \cdots \right]    (17)

where

    \langle x^n \rangle = \frac{\int_{-\infty}^{\infty} dx \, x^n f(x)}{\int_{-\infty}^{\infty} dx \, f(x)}    (18)

is the average of $x^n$ weighted by f(x). This is what we mean when we say that $\tilde{f}(k)$ is a global representation: $\tilde{f}(k)$ encodes properties of the function over its entire range, not just locally.

Let's try an example. Compute the Fourier transform of

    f(x) = \begin{cases} A & |x| \leq a \\ 0 & |x| > a \end{cases}

Then

    \tilde{f}(k) = \int_{-\infty}^{\infty} dx \, f(x) e^{-ikx} = A \int_{-a}^{a} dx \, e^{-ikx}
                 = \frac{A}{ik} \left[ e^{ika} - e^{-ika} \right] = \frac{2A \sin(ka)}{k}    (19)

Now that we have the full expression, let's examine the small-k behavior:

    \lim_{k \to 0} \tilde{f}(k) = \frac{2A \left[ ka - \frac{1}{3!} (ka)^3 + \cdots \right]}{k} = 2Aa \left[ 1 - \frac{1}{3!} (ka)^2 + \cdots \right]    (20)

This is just the form we derived above. Note that $2Aa = N$ (the area), and you can check that

    \langle x^2 \rangle = \frac{\int_{-a}^{a} dx \, x^2}{\int_{-a}^{a} dx} = \frac{\left[ x^3/3 \right]_{-a}^{a}}{2a} = \frac{a^2}{3}    (21)

1.2 Gaussian Integral

Let's use the Fourier transform to study an important function, the Gaussian bump

    f(x) = A e^{-x^2/2a^2}    (22)

Figure 6: Plot of the Gaussian function.

This function is very important in random systems, especially in polymer physics.

Aside: Integrating a Gaussian function (a trick!).

    I = \int_{-\infty}^{\infty} dx \, \exp\left(-\frac{x^2}{2a^2}\right)    (23)

    I^2 = \left[ \int_{-\infty}^{\infty} dx \, \exp\left(-\frac{x^2}{2a^2}\right) \right]^2 = \int_{-\infty}^{\infty} dx \int_{-\infty}^{\infty} dy \, \exp\left(-\frac{x^2 + y^2}{2a^2}\right)    (24)

This double integral is carried out over the whole x-y plane. Let's do the same integral in polar coordinates: $r = \sqrt{x^2 + y^2}$, $x = r\cos\phi$ and $y = r\sin\phi$:

    I^2 = \int_0^{2\pi} d\phi \int_0^{\infty} dr \, r \exp\left(-\frac{r^2}{2a^2}\right)
        = 2\pi \int_0^{\infty} dr \, \frac{d}{dr} \left[ -a^2 \exp\left(-\frac{r^2}{2a^2}\right) \right]
        = 2\pi \left[ -a^2 \exp\left(-\frac{r^2}{2a^2}\right) \right]_0^{\infty} = 2\pi a^2    (25)

Thus $I = \sqrt{2\pi}\, a$.

Let's go back to the Fourier transform of the Gaussian bump.
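The claim that the small-k expansion of $\tilde{f}(k)$ encodes the moments of f(x) can be checked numerically for the box function of eq. (19). The sketch below is our own illustration (assuming NumPy; the values A = 1.5 and a = 2.0 are arbitrary test choices): it compares the exact transform $2A\sin(ka)/k$ with the first two terms of the moment expansion, $N(1 - k^2 \langle x^2 \rangle / 2)$, using $N = 2Aa$ and $\langle x^2 \rangle = a^2/3$ from eq. (21).

```python
import numpy as np

A, a = 1.5, 2.0          # arbitrary test values for the box f(x) = A on |x| <= a
N = 2 * A * a            # eq. (16): total area under the box
x2 = a**2 / 3            # eq. (21): <x^2> for the box

k = 1e-3                 # small wavenumber, where the expansion should hold
exact = 2 * A * np.sin(k * a) / k      # eq. (19), exact transform
approx = N * (1 - k**2 * x2 / 2)       # first two terms of eq. (17)

print(abs(exact - approx))  # residual is the (ka)^4 correction, tiny here
```

The residual is of order $N k^4 \langle x^4 \rangle / 4!$, consistent with the next term of eq. (17).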
    \tilde{f}(k) = A \int_{-\infty}^{\infty} dx \, e^{-x^2/2a^2} e^{-ikx}    (26)

This can be done by "completing the square" of the argument of the exponential:

    \frac{x^2}{2a^2} + ikx = \frac{(x + ika^2)^2}{2a^2} + \frac{k^2 a^2}{2}    (27)

Then

    \tilde{f}(k) = A \int_{-\infty}^{\infty} dx \, \exp\left(-\frac{(x + ika^2)^2}{2a^2}\right) \exp\left(-\frac{k^2 a^2}{2}\right)
                 = A \exp\left(-\frac{k^2 a^2}{2}\right) \int_{-\infty}^{\infty} du \, \exp\left(-\frac{u^2}{2a^2}\right)
                 = A \sqrt{2\pi}\, a \exp\left(-\frac{k^2 a^2}{2}\right)    (28)

Note that this was done by changing variables $u = x + ika^2$, $du = dx$.
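The closed form of eq. (28) can be verified by direct numerical integration of eq. (26). The sketch below is our own check, not part of the notes (assuming NumPy; A = 2.0 and a = 0.7 are arbitrary test values): for several wavenumbers it compares the quadrature of $A e^{-x^2/2a^2} e^{-ikx}$ against $A\sqrt{2\pi}\, a\, e^{-k^2 a^2/2}$.

```python
import numpy as np

A, a = 2.0, 0.7                       # arbitrary test values
x = np.linspace(-30.0, 30.0, 6001)    # wide enough that the Gaussian tail is negligible
dx = x[1] - x[0]
f = A * np.exp(-x**2 / (2 * a**2))    # eq. (22), the Gaussian bump

for k in (0.0, 0.5, 2.0):
    ft = np.sum(f * np.exp(-1j * k * x)) * dx                      # eq. (26), Riemann sum
    exact = A * np.sqrt(2 * np.pi) * a * np.exp(-k**2 * a**2 / 2)  # eq. (28)
    print(k, abs(ft - exact))  # agreement to near machine precision
```

Note that the numerical transform also confirms the general pattern of eq. (28): the transform of a Gaussian of width a is again a Gaussian, of width 1/a in k.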