Lecture Notes on Dirac Delta Function, Fourier Transform, Laplace Transform


Luca Salasnich
Department of Physics and Astronomy "Galileo Galilei", University of Padua

Contents

1 Dirac Delta Function
2 Fourier Transform
3 Laplace Transform

Chapter 1: Dirac Delta Function

In 1880 the self-taught electrical scientist Oliver Heaviside introduced the following function

\[ \Theta(x) = \begin{cases} 1 & \text{for } x > 0 \\ 0 & \text{for } x < 0 \end{cases} \tag{1.1} \]

which is now called the Heaviside step function. This is a discontinuous function, with a discontinuity of the first kind (a jump) at x = 0, which is often used in the analysis of electric signals. Moreover, it is important to stress that the Heaviside step function also appears in quantum statistical physics. In fact, the Fermi-Dirac function (or Fermi-Dirac distribution)

\[ F_\beta(x) = \frac{1}{e^{\beta x} + 1}, \tag{1.2} \]

proposed in 1926 by Enrico Fermi and Paul Dirac to describe the quantum statistical distribution of electrons in metals, becomes the function Θ(−x) in the limit of very small temperature T. Here β = 1/(k_B T) is the inverse of the absolute temperature T (with k_B the Boltzmann constant), and x = ε − μ is the energy ε of the electron measured with respect to the chemical potential μ. Namely,

\[ \lim_{\beta \to +\infty} F_\beta(x) = \Theta(-x) = \begin{cases} 0 & \text{for } x > 0 \\ 1 & \text{for } x < 0 \end{cases}. \tag{1.3} \]

Inspired by the work of Heaviside, and with the purpose of describing an extremely localized charge density, in 1930 Paul Dirac investigated the following "function"

\[ \delta(x) = \begin{cases} +\infty & \text{for } x = 0 \\ 0 & \text{for } x \neq 0 \end{cases} \tag{1.4} \]

imposing that

\[ \int_{-\infty}^{+\infty} \delta(x)\, dx = 1. \tag{1.5} \]

Unfortunately, this property of δ(x) is not compatible with the definition (1.4): from Eq. (1.4) it follows that the integral must be equal to zero. In other words, there does not exist a function δ(x) which satisfies both Eq. (1.4) and Eq. (1.5). Dirac suggested that a way to circumvent this problem is to interpret the integral of Eq.
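The zero-temperature limit in Eq. (1.3) is easy to verify numerically. The following Python sketch is our own illustration, not part of the original notes (the helper name `fermi_dirac` is ours); it evaluates F_β(x) at a point above and a point below the chemical potential for increasing β:

```python
import math

def fermi_dirac(x, beta):
    """Fermi-Dirac distribution F_beta(x) = 1/(exp(beta*x) + 1), Eq. (1.2)."""
    if beta * x > 700.0:  # avoid overflow of exp for large beta*x
        return 0.0
    return 1.0 / (math.exp(beta * x) + 1.0)

# As beta -> +infinity, F_beta(x) -> Theta(-x), Eq. (1.3):
# it tends to 0 for x > 0 and to 1 for x < 0.
for beta in (1.0, 10.0, 1000.0):
    print(beta, fermi_dirac(+0.5, beta), fermi_dirac(-0.5, beta))
```

At β = 1 the distribution is smooth around x = 0; by β = 1000 it is numerically indistinguishable from the step Θ(−x).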
(1.5) as

\[ \int_{-\infty}^{+\infty} \delta(x)\, dx = \lim_{\epsilon \to 0^+} \int_{-\infty}^{+\infty} \delta_\epsilon(x)\, dx, \tag{1.6} \]

where δ_ε(x) is a generic function of both x and ε such that

\[ \lim_{\epsilon \to 0^+} \delta_\epsilon(x) = \begin{cases} +\infty & \text{for } x = 0 \\ 0 & \text{for } x \neq 0 \end{cases}, \tag{1.7} \]

\[ \int_{-\infty}^{+\infty} \delta_\epsilon(x)\, dx = 1. \tag{1.8} \]

Thus the Dirac delta function δ(x) is a "generalized function" (but, strictly speaking, not a function) which satisfies Eqs. (1.4) and (1.5), with the caveat that the integral in Eq. (1.5) must be interpreted according to Eq. (1.6), where the functions δ_ε(x) satisfy Eqs. (1.7) and (1.8).

There are infinitely many functions δ_ε(x) which satisfy Eqs. (1.7) and (1.8). Among them there is, for instance, the following Gaussian,

\[ \delta_\epsilon(x) = \frac{1}{\epsilon \sqrt{\pi}}\, e^{-x^2/\epsilon^2}, \tag{1.9} \]

which clearly satisfies Eq. (1.7) and whose integral is equal to 1 for any value of ε. Another example is the function

\[ \delta_\epsilon(x) = \begin{cases} \dfrac{1}{\epsilon} & \text{for } |x| \le \epsilon/2 \\[1ex] 0 & \text{for } |x| > \epsilon/2 \end{cases}, \tag{1.10} \]

which again satisfies Eq. (1.7) and whose integral is equal to 1 for any value of ε. In the following we shall use Eq. (1.10) to study the properties of the Dirac delta function.

According to the approach of Dirac, an integral involving δ(x) must be interpreted as the limit of the corresponding integral involving δ_ε(x), namely

\[ \int_{-\infty}^{+\infty} \delta(x)\, f(x)\, dx = \lim_{\epsilon \to 0^+} \int_{-\infty}^{+\infty} \delta_\epsilon(x)\, f(x)\, dx, \tag{1.11} \]

for any function f(x). It is then easy to prove, by using Eq. (1.10) and the mean value theorem, that

\[ \int_{-\infty}^{+\infty} \delta(x)\, f(x)\, dx = f(0). \tag{1.12} \]

Similarly one finds

\[ \int_{-\infty}^{+\infty} \delta(x - c)\, f(x)\, dx = f(c). \tag{1.13} \]

Several other properties of the Dirac delta function δ(x) follow from its definition. In particular,

\[ \delta(-x) = \delta(x), \tag{1.14} \]

\[ \delta(a x) = \frac{1}{|a|}\, \delta(x) \quad \text{with } a \neq 0, \tag{1.15} \]

\[ \delta(f(x)) = \sum_i \frac{1}{|f'(x_i)|}\, \delta(x - x_i) \quad \text{with } f(x_i) = 0. \tag{1.16} \]

Up to now we have considered the Dirac delta function δ(x) of only one variable x. It is not difficult to define a Dirac delta function δ^(D)(r) in the case of a D-dimensional domain R^D, where r = (x_1, x_2, ..., x_D) ∈ R^D is a D-dimensional vector:

\[ \delta^{(D)}(\mathbf{r}) = \begin{cases} +\infty & \text{for } \mathbf{r} = \mathbf{0} \\ 0 & \text{for } \mathbf{r} \neq \mathbf{0} \end{cases} \tag{1.17} \]

and

\[ \int_{\mathbb{R}^D} \delta^{(D)}(\mathbf{r})\, d^D\mathbf{r} = 1. \tag{1.18} \]
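The sifting property (1.11)-(1.12) can be checked numerically with the rectangular approximant of Eq. (1.10). This is an illustrative sketch of ours, not from the notes; `delta_eps` and the midpoint-rule helper `integrate` are names we introduce here:

```python
import math

def delta_eps(x, eps):
    """Rectangular approximant of Eq. (1.10): 1/eps for |x| <= eps/2, else 0."""
    return 1.0 / eps if abs(x) <= eps / 2.0 else 0.0

def integrate(g, a, b, n=20000):
    """Midpoint-rule quadrature of g over [a, b]."""
    h = (b - a) / n
    return h * sum(g(a + (k + 0.5) * h) for k in range(n))

# Eqs. (1.11)-(1.12): as eps -> 0+, the integral of delta_eps(x) f(x)
# tends to f(0); here f(x) = cos(x), so the limit is cos(0) = 1.
for eps in (1.0, 0.1, 0.01):
    print(eps, integrate(lambda x: delta_eps(x, eps) * math.cos(x), -1.0, 1.0))
```

Shrinking ε drives the value toward f(0) = 1, up to quadrature error, exactly as the mean value theorem argument predicts.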
Notice that sometimes δ^(D)(r) is written using the simpler notation δ(r). Clearly, also in this case one must interpret the integral of Eq. (1.18) as

\[ \int_{\mathbb{R}^D} \delta^{(D)}(\mathbf{r})\, d^D\mathbf{r} = \lim_{\epsilon \to 0^+} \int_{\mathbb{R}^D} \delta^{(D)}_\epsilon(\mathbf{r})\, d^D\mathbf{r}, \tag{1.19} \]

where δ_ε^(D)(r) is a generic function of both r and ε such that

\[ \lim_{\epsilon \to 0^+} \delta^{(D)}_\epsilon(\mathbf{r}) = \begin{cases} +\infty & \text{for } \mathbf{r} = \mathbf{0} \\ 0 & \text{for } \mathbf{r} \neq \mathbf{0} \end{cases}, \tag{1.20} \]

\[ \lim_{\epsilon \to 0^+} \int_{\mathbb{R}^D} \delta^{(D)}_\epsilon(\mathbf{r})\, d^D\mathbf{r} = 1. \tag{1.21} \]

Several properties of δ(x) remain valid also for δ^(D)(r). Nevertheless, some properties of δ^(D)(r) depend on the space dimension D. For instance, one can prove the remarkable formula

\[ \delta^{(D)}(\mathbf{r}) = \begin{cases} \dfrac{1}{2\pi} \nabla^2 \left( \ln |\mathbf{r}| \right) & \text{for } D = 2 \\[2ex] -\dfrac{1}{D(D-2)\, V_D} \nabla^2 \dfrac{1}{|\mathbf{r}|^{D-2}} & \text{for } D \ge 3 \end{cases}, \tag{1.22} \]

where \( \nabla^2 = \frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2} + \dots + \frac{\partial^2}{\partial x_D^2} \) and V_D = π^{D/2}/Γ(1 + D/2) is the volume of a D-dimensional hypersphere of unit radius, with Γ(x) the Euler Gamma function. In the case D = 3 the previous formula becomes

\[ \delta^{(3)}(\mathbf{r}) = -\frac{1}{4\pi} \nabla^2 \frac{1}{|\mathbf{r}|}, \tag{1.23} \]

which can be used to transform the Gauss law of electromagnetism from its integral form to its differential form.

Chapter 2: Fourier Transform

It was known from the times of Archimedes that, in some cases, the infinite sum of decreasing numbers can produce a finite result. But it was only in 1593 that the mathematician François Viète gave the first example of a function, f(x) = 1/(1 − x), written as an infinite sum of power functions. This function is nothing else than the geometric series, given by

\[ \frac{1}{1 - x} = \sum_{n=0}^{\infty} x^n, \quad \text{for } |x| < 1. \tag{2.1} \]

In 1714 Brook Taylor suggested that any real function f(x) which is infinitely differentiable at x_0 and sufficiently regular can be written as a series of powers, i.e.

\[ f(x) = \sum_{n=0}^{\infty} c_n\, (x - x_0)^n, \tag{2.2} \]

where the coefficients c_n are given by

\[ c_n = \frac{1}{n!}\, f^{(n)}(x_0), \tag{2.3} \]

with f^(n)(x) the n-th derivative of the function f(x). The series (2.2) is now called the Taylor series; it becomes the so-called Maclaurin series if x_0 = 0.
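As a quick numerical illustration of ours (not the author's), the partial sums of the geometric series (2.1), which is the Maclaurin series of 1/(1 − x) with all c_n = 1, converge rapidly for |x| < 1:

```python
def geometric_partial_sum(x, n_terms):
    """Partial sum of the geometric series, Eq. (2.1): sum of x**n for n < n_terms."""
    return sum(x**n for n in range(n_terms))

x = 0.5
exact = 1.0 / (1.0 - x)  # the function the series represents, f(x) = 1/(1 - x)
for n_terms in (5, 10, 40):
    print(n_terms, geometric_partial_sum(x, n_terms), exact)
```

The error of the N-term partial sum is x^N/(1 − x), so for x = 0.5 each extra term halves the remaining error.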
Clearly, the geometric series (2.1) is nothing else than the Maclaurin series of f(x) = 1/(1 − x), for which c_n = 1. We observe that it is quite easy to prove the Taylor series: it is sufficient to suppose that Eq. (2.2) is valid and then to derive the coefficients c_n by calculating the derivatives of f(x) at x = x_0; in this way one gets Eq. (2.3).

In 1807 Jean Baptiste Joseph Fourier, who was interested in wave propagation and periodic phenomena, found that any sufficiently regular real function f(x) which is periodic, i.e. such that

\[ f(x + L) = f(x), \tag{2.4} \]

where L is the period, can be written as an infinite sum of sinusoidal functions, namely

\[ f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos\!\left( n \frac{2\pi}{L} x \right) + b_n \sin\!\left( n \frac{2\pi}{L} x \right) \right], \tag{2.5} \]

where

\[ a_n = \frac{2}{L} \int_{-L/2}^{L/2} f(y) \cos\!\left( n \frac{2\pi}{L} y \right) dy, \tag{2.6} \]

\[ b_n = \frac{2}{L} \int_{-L/2}^{L/2} f(y) \sin\!\left( n \frac{2\pi}{L} y \right) dy. \tag{2.7} \]

It is quite easy to prove the series (2.5), which is now called the Fourier series. In fact, it is sufficient to suppose that Eq. (2.5) is valid and then to derive the coefficients a_n and b_n by multiplying both sides of Eq. (2.5) by cos(n 2π x/L) and sin(n 2π x/L) respectively, and integrating over one period L; in this way one gets Eqs. (2.6) and (2.7).

It is important to stress that, in general, the real variable x of the function f(x) can represent a space coordinate but also a time coordinate. In the former case L gives the spatial periodicity and 2π/L is the wavenumber, while in the latter case L is the time periodicity and 2π/L the angular frequency.

Taking into account the Euler formula

\[ e^{i n \frac{2\pi}{L} x} = \cos\!\left( n \frac{2\pi}{L} x \right) + i \sin\!\left( n \frac{2\pi}{L} x \right), \tag{2.8} \]

with i = √(−1) the imaginary unit, Fourier observed that his series (2.5) can be rewritten in the very elegant form

\[ f(x) = \sum_{n=-\infty}^{+\infty} f_n\, e^{i n \frac{2\pi}{L} x}, \tag{2.9} \]

where

\[ f_n = \frac{1}{L} \int_{-L/2}^{L/2} f(y)\, e^{-i n \frac{2\pi}{L} y}\, dy \tag{2.10} \]

are complex coefficients, with f_0 = a_0/2, f_n = (a_n − i b_n)/2 if n > 0, and f_n = (a_{−n} + i b_{−n})/2 if n < 0; thus f_n^* = f_{−n}.
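As a worked example of Eqs. (2.5)-(2.7) (our own illustration, not from the notes), the Fourier coefficients of an odd square wave can be computed by direct quadrature; for this wave all a_n vanish and the exact sine coefficients are b_n = 4/(nπ) for odd n and 0 for even n:

```python
import math

L = 2.0  # period of the square wave

def f(y):
    """Odd square wave on one period: +1 on (0, L/2), -1 on (-L/2, 0)."""
    return 1.0 if y > 0.0 else -1.0

def b_n(n, n_pts=20000):
    """Fourier sine coefficient b_n of Eq. (2.7), by midpoint quadrature."""
    h = L / n_pts
    total = 0.0
    for k in range(n_pts):
        y = -L / 2.0 + (k + 0.5) * h
        total += f(y) * math.sin(n * 2.0 * math.pi * y / L)
    return (2.0 / L) * h * total

# Exact coefficients for this wave: b_n = 4/(n*pi) for odd n, 0 for even n.
for n in (1, 2, 3):
    print(n, b_n(n), 4.0 / (n * math.pi) if n % 2 == 1 else 0.0)
```

Truncating the series (2.5) at a few such coefficients already reproduces the square wave away from its jumps; near the jumps the partial sums overshoot (the Gibbs phenomenon).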