A Journey in Signal Processing with Jupyter

Total Pages: 16

File Type: pdf, Size: 1020 KB

A Journey in Signal Processing with Jupyter
Jean-François Bercher, ESIEE-PARIS

Contents

1 A basic introduction to signals and systems
  1.1 Effects of delays and scaling on signals
  1.2 A basic introduction to filtering
    1.2.1 Transformations of signals - Examples of difference equations
    1.2.2 Filters
2 Introduction to the Fourier representation
  2.1 Simple examples
    2.1.1 Decomposition on basis - scalar products
  2.2 Decomposition of periodic functions – Fourier series
  2.3 Complex Fourier series
    2.3.1 Introduction
    2.3.2 Computer experiment
3 From Fourier Series to Fourier transforms
  3.1 Introduction and definitions
  3.2 Examples
    3.2.1 The Fourier transform of a rectangular window
    3.2.2 Fourier transform of a sine wave
  3.3 Symmetries of the Fourier transform
  3.4 Table of Fourier transform properties
4 Filters and convolution
  4.1 Representation formula
  4.2 The convolution operation
    4.2.1 Definition
    4.2.2 Illustration
    4.2.3 Exercises
5 Transfer function
  5.1 The Plancherel relation
  5.2 Consequences
6 Basic representations for digital signals and systems
  6.1 Study in the time domain
  6.2 Study in the frequency domain
7 Filtering
8 Lab – Basic representations for digital signals and systems
  8.1 Study in the time domain
    8.1.1 The function scipy.signal.lfilter()
  8.2 Display of results
  8.3 Study in the frequency domain
  8.4 Filtering
    8.4.1 Analysis in the time domain
    8.4.2 Frequency representation
9 The continuous time case
  9.1 The continuous time Fourier transform
    9.1.1 Definition
    9.1.2 Example - The Fourier transform of a rectangular pulse
    9.1.3 Table of Fourier transform properties
    9.1.4 Symmetries of the Fourier transform
  9.2 Dirac impulse, representation formula and convolution
    9.2.1 Dirac impulse
    9.2.2 Representation formula
10 Periodization, discretization and sampling
  10.1 Periodization-discretization duality
    10.1.1 Relation between Fourier series and Fourier transform
    10.1.2 Poisson summation formulas
  10.2 The Discrete Fourier Transform
    10.2.1 The Discrete Fourier Transform: Sampling the discrete-time Fourier transform
    10.2.2 The DFT as a change of basis
    10.2.3 Time-shift property
    10.2.4 Circular convolution
  10.3 (Sub)-Sampling of time signals
  10.4 Illustration 1
  10.5 Illustration 2
  10.6 The sampling theorem
    10.6.1 Derivation in the case of discrete-time signals
    10.6.2 Case of continuous-time signals
    10.6.3 Illustrations
    10.6.4 Sampling of band-pass signals
  10.7 Lab on basics in image processing
    10.7.1 Introduction
11 Digital filters
    11.0.1 Introduction
    11.0.2 The z-transform
  11.1 Pole-zero locations and transfer function behavior
    11.1.1 Analysis of no-pole transfer functions
    11.1.2 Analysis of all-pole transfer functions
    11.1.3 General transfer functions
    11.1.4 Appendix – listing of the class ZerosPolesPlay
  11.2 Synthesis of FIR filters
    11.2.1 Synthesis by sampling in the frequency domain
    11.2.2 Synthesis by the window method
  11.3 Synthesis of IIR filters by the bilinear transformation method
    11.3.1 The bilinear transform
    11.3.2 Synthesis of low-pass filters – procedure
    11.3.3 Synthesis of other types of filters
    11.3.4 Numerical results
  11.4 Lab – Basic Filtering
    11.4.1 Analysis of the data
    11.4.2 Filtering
    11.4.3 Design and implementation of the lowpass averaging filter
    11.4.4 Second part: Boost of a frequency band
  11.5 Theoretical Part
    11.5.1 Lowpass [0–250 Hz] filtering by the window method
12 Random Signals
  12.1 Introduction to Random Signals
  12.2 Fundamental properties
    12.2.1 Stationarity
    12.2.2 Ergodicity
    12.2.3 Examples of random signals
    12.2.4 White noise
  12.3 Second order analysis
    12.3.1 Correlation functions
  12.4 Filtering
    12.4.1 General relations for cross-correlations
    12.4.2 By-products
    12.4.3 Examples
    12.4.4 Correlation matrix
    12.4.5 Identification of a filter by cross-correlation
  12.5 Analysis in the frequency domain
    12.5.1 Notion of power spectral density
    12.5.2 Power spectrum estimation
  12.6 Applications
    12.6.1 Matched filter
    12.6.2 Wiener filtering
13 Adaptive Filters
  13.1 A general filtering problem
    13.1.1 Introduction
    13.1.2 The Linear Minimum Mean Square Error Estimator
    13.1.3 The Least Square Error Estimator
    13.1.4 Application to filter identification
  13.2 The steepest descent algorithm
  13.3 Application to the iterative resolution of the normal equations
    13.3.1 Convergence analysis
    13.3.2 An alternative view of the Steepest Descent Algorithm
  13.4 Adaptive versions
    13.4.1 The Least Mean Square (LMS) Algorithm
    13.4.2 Illustration of the LMS in an identification problem
    13.4.3 Convergence properties of the LMS
    13.4.4 The normalized LMS
    13.4.5 Other variants of the LMS
    13.4.6 Recursive Least Squares

1 A basic introduction to signals and systems

1.1 Effects of delays and scaling on signals

In this simple exercise, we recall the effect of delays and scaling on signals. It is important for students to experiment with these operations, to ensure that they master these simple transformations. Study the code below and experiment with the parameters.

    import numpy as np
    import matplotlib.pyplot as plt

    # Define a simple function
    def f(t):
        return np.exp(-0.25 * t) if t > 0 else 0

    T = np.linspace(-10, 20, 200)
    L = len(T)
    x = np.zeros(L)          # reserve some space for x
    t0 = 0; a = 1            # initial values
    # Compute x as f(a*t + t0)
    k = 0
    for t in T:
        x[k] = f(a * t + t0)
        k = k + 1
    # Plotting the signal
    plt.plot(T, x)
    plt.xlabel("Time")       # the label string is truncated in this excerpt; "Time" is assumed
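The loop above mirrors the definition x(t) = f(a t + t0) one sample at a time. As a complement (not from the original text), here is a vectorized sketch of the same experiment, assuming an array-aware version of f and a few arbitrary (a, t0) pairs to try; np.where evaluates the one-sided exponential on the whole time grid at once.

    import numpy as np
    import matplotlib.pyplot as plt

    def f_vec(t):
        # same one-sided decaying exponential as f(), written for numpy arrays
        t = np.asarray(t, dtype=float)
        return np.where(t > 0, np.exp(-0.25 * t), 0.0)

    T = np.linspace(-10, 20, 200)
    for a, t0 in [(1, 0), (1, 5), (2, 0), (-1, 0)]:   # arbitrary scale/delay choices
        plt.plot(T, f_vec(a * T + t0), label=f"a={a}, t0={t0}")
    plt.xlabel("Time")
    plt.legend()
    plt.show()

A positive t0 advances the waveform (it appears earlier), |a| > 1 compresses it, and a negative a reverses it in time; the plot makes these effects immediately visible.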
Recommended publications
  • Fast Ewald Summation for Electrostatic Potentials with Arbitrary Periodicity (arXiv:1712.04732v2 [math.NA], 17 Jan 2021)
    Fast Ewald summation for electrostatic potentials with arbitrary periodicity
    D. S. Shamshirgar, J. Bagge, and A.-K. Tornberg
    KTH Mathematics, Swedish e-Science Research Centre, 100 44 Stockholm, Sweden

    A unified treatment for fast and spectrally accurate evaluation of electrostatic potentials subject to periodic boundary conditions in any or none of the three spatial dimensions is presented. Ewald decomposition is used to split the problem into a real-space and a Fourier-space part, and the FFT-based Spectral Ewald (SE) method is used to accelerate the computation of the latter. A key component in the unified treatment is an FFT-based solution technique for the free-space Poisson problem in three, two or one dimensions, depending on the number of non-periodic directions. The computational cost is furthermore reduced by employing an adaptive FFT for the doubly and singly periodic cases, allowing for different local upsampling factors. The SE method will always be most efficient for the triply periodic case as the cost of computing FFTs will then be the smallest, whereas the computational cost of the rest of the algorithm is essentially independent of periodicity. We show that the cost of removing periodic boundary conditions from one or two directions out of three will only moderately increase the total runtime. Our comparisons also show that the computational cost of the SE method in the free-space case is around four times that of the triply periodic case. The Gaussian window function previously used in the SE method is here compared to a piecewise polynomial approximation of the Kaiser-Bessel window function. With the new window function, runtimes for the SE method can be further reduced.
  • Observational Astrophysics II January 18, 2007
    Observational Astrophysics II
    JOHAN BLEEKER, SRON Netherlands Institute for Space Research / Astronomical Institute Utrecht
    FRANK VERBUNT, Astronomical Institute Utrecht
    January 18, 2007

    Contents
    1 Radiation Fields
      1.1 Stochastic processes: distribution functions, mean and variance
      1.2 Autocorrelation and autocovariance
      1.3 Wide-sense stationary and ergodic signals
      1.4 Power spectral density
      1.5 Intrinsic stochastic nature of a radiation beam: Application of Bose-Einstein statistics
        1.5.1 Intermezzo: Bose-Einstein statistics
        1.5.2 Intermezzo: Fermi-Dirac statistics
      1.6 Stochastic description of a radiation field in the thermal limit
      1.7 Stochastic description of a radiation field in the quantum limit
    2 Astronomical measuring process: information transfer
      2.1 Integral response function for astronomical measurements
      2.2 Time filtering
        2.2.1 Finite exposure and time resolution
        2.2.2 Error assessment in sample average μ_T
      2.3 Data sampling in a bandlimited measuring system: Critical or Nyquist frequency
    3 Indirect Imaging and Spectroscopy
      3.1 Coherence
        3.1.1 The Visibility function
        3.1.2 Young's dual beam interference experiment
        3.1.3 The mutual coherence function
        3.1.4 Interference
  • Fourier Analysis of Discrete-Time Signals
    Fourier analysis of discrete-time signals (Lathi Chapt. 10 and these slides)

    Towards the discrete-time Fourier transform
    • How we will get there?
    • Periodic discrete-time signal representation by Discrete-time Fourier series
    • Extension to non-periodic DT signals using the "periodization trick"
    • Derivation of the Discrete Time Fourier Transform (DTFT)
    • Discrete Fourier Transform

    Discrete-time periodic signals
    • A periodic DT signal of period N0 is called an N0-periodic signal: f[n + kN0] = f[n].
    • For the frequency it is customary to use a different notation: the frequency of a DT sinusoid with period N0 is Ω0 = 2π/N0.

    Fourier series representation of DT periodic signals
    • DT N0-periodic signals can be represented by a DTFS with fundamental frequency Ω0 = 2π/N0 and its multiples.
    • The exponential DT basis functions are 1, e^{±jΩ0k}, e^{±j2Ω0k}, ..., e^{±jnΩ0k} (discrete time), to be compared with 1, e^{±jω0t}, e^{±j2ω0t}, ..., e^{±jnω0t} (continuous time).
    • Important difference with respect to the continuous case: only a finite number of exponentials are different! This is because the DT exponential series is periodic of period 2π: e^{±j(Ω±2π)k} = e^{±jΩk}.

    Increasing the frequency: continuous time
    • Consider a continuous-time sinusoid with increasing frequency: the number of oscillations per unit time increases with frequency.

    Increasing the frequency: discrete time
    • Discrete-time sinusoid s[n] = sin(Ω0 n).
    • Changing the frequency by 2π leaves the signal unchanged: s[n] = sin((Ω0 + 2π)n) = sin(Ω0 n + 2πn) = sin(Ω0 n).
    • Thus when the frequency increases from zero, the number of oscillations per unit time increases until the frequency reaches π, then decreases again towards the value that it had at zero.
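    As a quick numerical check of the 2π-periodicity in frequency described in the excerpt above (an added illustration, not part of the slides; Omega0 is an arbitrary test frequency):

        import numpy as np

        n = np.arange(32)                         # discrete time indices
        Omega0 = 0.4 * np.pi                      # arbitrary test frequency
        s1 = np.sin(Omega0 * n)
        s2 = np.sin((Omega0 + 2 * np.pi) * n)     # frequency shifted by 2*pi
        print(np.allclose(s1, s2))                # True: same discrete-time signal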
  • On the Duality of Regular and Local Functions
    On the Duality of Regular and Local Functions
    Jens V. Fischer, Institute of Mathematics, Ludwig Maximilians University Munich, 80333 Munich, Germany; E-Mail: jens.fi[email protected]; Tel.: +49-8196-934918
    Preprints (www.preprints.org), posted 24 May 2017, doi:10.20944/preprints201705.0175.v1

    Abstract: In this paper, we relate Poisson's summation formula to Heisenberg's uncertainty principle. They both express Fourier dualities within the space of tempered distributions and these dualities are furthermore the inverses of one another. While Poisson's summation formula expresses a duality between discretization and periodization, Heisenberg's uncertainty principle expresses a duality between regularization and localization. We define regularization and localization on generalized functions and show that the Fourier transforms of regular functions are local functions and, vice versa, the Fourier transforms of local functions are regular functions.
    Keywords: generalized functions; tempered distributions; regular functions; local functions; regularization-localization duality; regularity; Heisenberg's uncertainty principle
    Classification: MSC 42B05, 46F10, 46F12

    1. Introduction
    Regularization is a popular trick in applied mathematics, see [1] for example. It is the technique "to approximate functions by more differentiable ones" [2]. Its terminology coincides moreover with the terminology used in generalized function spaces. They contain two kinds of functions, "regular functions" and "generalized functions". While regular functions are functions in the ordinary functions sense which are infinitely differentiable in the ordinary functions sense, all other functions become "infinitely differentiable" in the "generalized functions sense" [3].
  • Algorithmic Framework for X-Ray Nanocrystallographic Reconstruction in the Presence of the Indexing Ambiguity
    Algorithmic framework for X-ray nanocrystallographic reconstruction in the presence of the indexing ambiguity
    Jeffrey J. Donatelli and James A. Sethian
    Department of Mathematics and Lawrence Berkeley National Laboratory, University of California, Berkeley, CA 94720
    Contributed by James A. Sethian, November 21, 2013 (sent for review September 23, 2013)

    X-ray nanocrystallography allows the structure of a macromolecule to be determined from a large ensemble of nanocrystals. However, several parameters, including crystal sizes, orientations, and incident photon flux densities, are initially unknown and images are highly corrupted with noise. Autoindexing techniques, commonly used in conventional crystallography, can determine orientations using Bragg peak patterns, but only up to crystal lattice symmetry. This limitation results in an ambiguity in the orientations, known as the indexing ambiguity, when the diffraction pattern displays less symmetry than the lattice and leads to data that appear twinned if left unresolved. Furthermore, missing phase information must be recovered to determine the imaged object's structure.

    ... over multiple orientations. Although there has been some success in determining structure from perfectly twinned data, reconstruction is often infeasible without a good initial atomic model of the structure. We present an algorithmic framework for X-ray nanocrystallographic reconstruction which is based on directly reducing data variance and resolving the indexing ambiguity. First, we design an autoindexing technique that uses both Bragg and non-Bragg data to compute precise orientations, up to lattice symmetry. Crystal sizes are then determined by performing a Fourier analysis around Bragg peak neighborhoods from a finely sampled low-angle image, such as from a rear detector (Fig. 1). Next, we model structure factor magnitudes for each reciprocal lattice point with ...
  • Understanding the Lomb-Scargle Periodogram
    Understanding the Lomb-Scargle Periodogram
    Jacob T. VanderPlas

    ABSTRACT: The Lomb-Scargle periodogram is a well-known algorithm for detecting and characterizing periodic signals in unevenly-sampled data. This paper presents a conceptual introduction to the Lomb-Scargle periodogram and important practical considerations for its use. Rather than a rigorous mathematical treatment, the goal of this paper is to build intuition about what assumptions are implicit in the use of the Lomb-Scargle periodogram and related estimators of periodicity, so as to motivate important practical considerations required in its proper application and interpretation.
    Subject headings: methods: data analysis | methods: statistical

    1. Introduction
    The Lomb-Scargle periodogram (Lomb 1976; Scargle 1982) is a well-known algorithm for detecting and characterizing periodicity in unevenly-sampled time-series, and has seen particularly wide use within the astronomy community. As an example of a typical application of this method, consider the data shown in Figure 1: this is an irregularly-sampled timeseries showing a single object from the LINEAR survey (Sesar et al. 2011; Palaversa et al. 2013), with un-filtered magnitude measured 280 times over the course of five and a half years. By eye, it is clear that the brightness of the object varies in time with a range spanning approximately 0.8 magnitudes, but what is not immediately clear is that this variation is periodic in time. The Lomb-Scargle periodogram is a method that allows efficient computation of a Fourier-like power spectrum estimator from such unevenly-sampled data, resulting in an intuitive means of determining the period of oscillation.
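    As an added illustration (not from the paper), a minimal Lomb-Scargle computation on synthetic unevenly-sampled data using scipy.signal.lombscargle; the 1.7 Hz test tone, noise level and frequency grid are arbitrary choices:

        import numpy as np
        from scipy.signal import lombscargle

        rng = np.random.default_rng(0)
        t = np.sort(rng.uniform(0, 10, 120))           # irregular sampling times
        y = np.sin(2 * np.pi * 1.7 * t) + 0.3 * rng.standard_normal(t.size)
        freqs = np.linspace(0.1, 5, 500)               # trial frequencies in Hz
        pgram = lombscargle(t, y, 2 * np.pi * freqs)   # lombscargle expects angular frequencies
        print(freqs[np.argmax(pgram)])                 # peak close to 1.7 Hz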
  • Fractional Calculus and Generalised Norms in Condition Monitoring of a Load Haul Dumper
    Fractional calculus and generalised norms in condition monitoring of a load haul dumper
    Master's thesis, Juhani Nissilä (1975600), Department of Mathematical Sciences, University of Oulu, Autumn 2014

    Contents
    Introduction
    1 Classical analysis
      1.1 Norms and function spaces
      1.2 Convergence of function sequences
      1.3 Complex analysis
      1.4 Important theorems of analysis
      1.5 Distributions
      1.6 Iterated differences and derivatives
      1.7 Iterated integrals
    2 Mathematical background for fractional calculus
      2.1 Gamma function
      2.2 Fourier series
        2.2.1 Series in L_{1,loc} and L_{2,loc}
        2.2.2 Pointwise convergence
      2.3 Fourier transform
        2.3.1 Transform in S and L_1
        2.3.2 Transform in L_2
        2.3.3 Transform in S'
        2.3.4 Properties
      2.4 Poisson summation formula and the sampling theorem
      2.5 Discrete Fourier transform
      2.6 Window functions
    3 Fractional calculus in time domain
      3.1 Riemann-Liouville fractional integral and derivative
      3.2 Caputo fractional derivative
      3.3 Grünwald-Letnikov fractional derivative and integral
      3.4 Equivalences of definitions
    4 Fractional calculus in frequency domain
      4.1 Fourier differintegrals
      4.2 Weyl differintegrals
      4.3 Equivalences of definitions
      4.4 Numerical algorithm
    5 Norms, means and other features of signals
      5.1 Generalised l^p norms and Hölder means
      5.2 Measurement index
    6 Condition monitoring of a load haul dumper front axle
      6.1 Data acquisition and acceleration signals
  • The Poisson Summation Formula, the Sampling Theorem, and Dirac Combs
    The Poisson summation formula, the sampling theorem, and Dirac combs
    Jordan Bell, [email protected], Department of Mathematics, University of Toronto
    April 3, 2014

    1. Poisson summation formula
    Let S(R) be the set of all infinitely differentiable functions f on R such that for all nonnegative integers m and n,
        ν_{m,n}(f) = sup_{x∈R} |x|^m |f^{(n)}(x)|
    is finite. For each m and n, ν_{m,n} is a seminorm. This is a countable collection of seminorms, and S(R) is a Fréchet space. (One has to prove for f_k ∈ S(R) that if for each fixed m, n the sequence ν_{m,n}(f_k) is a Cauchy sequence (of numbers), then there exists some f ∈ S(R) such that for each m, n we have ν_{m,n}(f − f_k) → 0.) I mention that S(R) is a Fréchet space merely because it is satisfying to give a structure to a set with which we are working: if I call this set Schwartz space I would like to know what type of space it is, in the same sense that an L^p space is a Banach space. For f ∈ S(R), define
        f̂(ξ) = ∫_R e^{−iξx} f(x) dx,
    and then
        f(x) = (1/2π) ∫_R e^{ixξ} f̂(ξ) dξ.
    Let T = R/2πZ. For φ ∈ C^∞(T), define
        φ̂(n) = (1/2π) ∫_0^{2π} e^{−int} φ(t) dt,
    and then
        φ(t) = Σ_{n∈Z} φ̂(n) e^{int}.
    For f ∈ S(R) and λ ≠ 0, define φ_λ : T → C by
        φ_λ(t) = 2π Σ_{j∈Z} f_λ(t + 2πj),
    where f_λ(x) = λ f(λx).
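    For reference (an addition, not part of the excerpt), the identity these definitions build toward, stated with the same Fourier transform convention f̂(ξ) = ∫_R e^{−iξx} f(x) dx, is the Poisson summation formula; in LaTeX form:

        \sum_{n \in \mathbb{Z}} f(n) \;=\; \sum_{k \in \mathbb{Z}} \hat{f}(2\pi k), \qquad f \in \mathcal{S}(\mathbb{R}),

    and, after rescaling with f_λ(x) = λ f(λx) as above,

        \sum_{n \in \mathbb{Z}} f(\lambda n) \;=\; \frac{1}{|\lambda|} \sum_{k \in \mathbb{Z}} \hat{f}\!\left(\frac{2\pi k}{\lambda}\right), \qquad \lambda \neq 0.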
  • Distribution Theory by Riemann Integrals Arxiv:1810.04420V1
    Distribution Theory by Riemann Integrals
    Hans G. Feichtinger and Mads S. Jakobsen
    October 11, 2018

    Abstract: It is the purpose of this article to outline a syllabus for a course that can be given to engineers looking for an understandable mathematical description of the foundations of distribution theory and the necessary functional analytic methods. Arguably, these are needed for a deeper understanding of basic questions in signal analysis. Objects such as the Dirac delta and the Dirac comb should have a proper definition, and it should be possible to explain how one can reconstruct a band-limited function from its samples by means of simple series expansions. It should also be useful for graduate mathematics students who want to see how functional analysis can help to understand fairly practical problems, or teachers who want to offer a course related to the "Mathematical Foundations of Signal Processing" at their institutions. The course requires only an understanding of the basic terms from linear functional analysis, namely Banach spaces and their duals, bounded linear operators and a simple version of w*-convergence. As a matter of fact we use a set of function spaces which is quite different from the collection of Lebesgue spaces (L^p(R^d), ‖·‖_p) used normally. We thus avoid the use of Lebesgue integration theory. Furthermore we avoid topological vector spaces in the form of the Schwartz space. Although practically all the tools developed and presented can be realized in the context of LCA (locally compact Abelian) groups, i.e. in the most general setting where a (commutative) Fourier transform makes sense, we restrict our attention in the current presentation to the Euclidean setting, where we have (generalized) functions over R^d.
  • Generalized Poisson Summation Formula for Tempered Distributions
    2015 International Conference on Sampling Theory and Applications (SampTA)
    Generalized Poisson Summation Formula for Tempered Distributions
    Ha Q. Nguyen and Michael Unser
    Biomedical Imaging Group, École Polytechnique Fédérale de Lausanne (EPFL), Station 17, CH-1015, Lausanne, Switzerland
    {ha.nguyen, michael.unser}@epfl.ch

    Abstract: The Poisson summation formula (PSF), which relates the sampling of an analog signal with the periodization of its Fourier transform, plays a key role in the classical sampling theory. In its current forms, the formula is only applicable to a limited class of signals in L_1. However, this assumption on the signals is too strict for many applications in signal processing that require sampling of non-decaying signals. In this paper we generalize the PSF for functions living in weighted Sobolev spaces that do not impose any decay on the functions. The only requirement is that the signal to be sampled and its weak derivatives up to order 1/2 + ε, for arbitrarily small ε > 0, grow slower than a polynomial in the L_2 sense.

    In the mathematics literature, the PSF is often stated in a dual form with the sampling occurring in the Fourier domain, and under strict conditions on the function f. Various versions of (PSF) have been proven when both f and f̂ are in appropriate subspaces of L_1(R) ∩ C(R). In its most general form, when f, f̂ ∈ L_1(R) ∩ C(R), the RHS of (PSF) is a well-defined periodic function in L_1([0, 1]) whose (possibly divergent) Fourier series is the LHS. If f and f̂ additionally satisfy |f(x)| + |f̂(x)| ≤ C(1 + |x|)^{−1−ε} for all x ∈ R, for some C, ε > 0, ...
  • Notes on Distributions
    Notes on Distributions
    Peter Woit, Department of Mathematics, Columbia University, [email protected]
    April 26, 2020

    1. Introduction
    This is a set of notes written for the 2020 spring semester Fourier Analysis class, covering material on distributions. This is an important topic not covered in Stein-Shakarchi. These notes will supplement two textbooks you can consult for more details: A Guide to Distribution Theory and Fourier Transforms [2], by Robert Strichartz. The discussion of distributions in this book is quite comprehensive, and at roughly the same level of rigor as this course. Much of the motivating material comes from physics. Lectures on the Fourier Transform and its Applications [1], by Brad Osgood, chapters 4 and 5. This book is wordier than Strichartz, has a wealth of pictures and motivational examples from the field of signal processing. The first three chapters provide an excellent review of the material we have covered so far, in a form more accessible than Stein-Shakarchi. One should think of distributions as mathematical objects generalizing the notion of a function (and the term "generalized function" is often used interchangeably with "distribution"). A function will give one a distribution, but many important examples of distributions do not come from a function in this way. Some of the properties of distributions that make them highly useful are: Distributions are always infinitely differentiable, one never needs to worry about whether derivatives exist. One can look for solutions of differential equations that are distributions, and for many examples of differential equations the simplest solutions are given by distributions.
  • Lecture 5. FFT and Divide and Conquer
    CS711008Z Algorithm Design and Analysis
    Lecture 5. FFT and Divide and Conquer
    Dongbo Bu, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China

    Outline
    • DFT: evaluate a polynomial at n special points;
    • FFT: an efficient implementation of DFT;
    • Applications of FFT: multiplying two polynomials (and multiplying two n-bit integers); time-frequency transform; solving partial differential equations;
    • Appendix: relationship between continuous and discrete Fourier transforms.

    DFT: Discrete Fourier Transform
    DFT evaluates a polynomial A(x) = a_0 + a_1 x + ... + a_{n−1} x^{n−1} at n distinct points 1, ω, ω^2, ..., ω^{n−1}, where ω = e^{2πi/n} is the n-th complex root of unity. Thus, it transforms the complex vector a_0, a_1, ..., a_{n−1} into another complex vector y_0, y_1, ..., y_{n−1}, where y_k = A(ω^k), i.e.,
        y_0     = a_0 + a_1           + a_2            + ... + a_{n−1}
        y_1     = a_0 + a_1 ω         + a_2 ω^2        + ... + a_{n−1} ω^{n−1}
        ...
        y_{n−1} = a_0 + a_1 ω^{n−1}   + a_2 ω^{2(n−1)} + ... + a_{n−1} ω^{(n−1)^2}
    Matrix form:
        [y_0, y_1, ..., y_{n−1}]^T = W [a_0, a_1, ..., a_{n−1}]^T,
    where W is the n × n matrix with entries W_{km} = ω^{km}, i.e. row k is (1, ω^k, ω^{2k}, ..., ω^{(n−1)k}) for k = 0, ..., n−1.

    FFT: a fast way to implement DFT [Cooley and Tukey, 1965]
    Direct matrix-vector multiplication requires O(n^2) operations, even when using Horner's method, i.e., A(x) = a_0 + x(a_1 + x(a_2 + ... + x a_{n−1})).
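    As a complement (not part of the lecture notes), a direct O(n^2) evaluation of the DFT exactly as defined above, with ω = e^{2πi/n} (positive-exponent convention), checked against NumPy; the test vector is arbitrary. NumPy's fft uses the opposite sign convention, so the comparison is made against n * ifft(a).

        import numpy as np

        def dft_direct(a):
            # Evaluate y_k = A(omega^k) with omega = exp(2*pi*i/n), as on the slides (O(n^2)).
            n = len(a)
            omega = np.exp(2j * np.pi / n)
            k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            W = omega ** (k * m)                   # matrix with entries omega^(k*m)
            return W @ np.asarray(a, dtype=complex)

        a = np.array([1.0, 2.0, 0.0, -1.0, 3.0, 0.5])     # arbitrary coefficients
        y = dft_direct(a)
        print(np.allclose(y, len(a) * np.fft.ifft(a)))    # True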