A Journey in Signal Processing with Jupyter
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Fast Ewald Summation for Electrostatic Potentials with Arbitrary Periodicity (arXiv:1712.04732v2 [math.NA], 17 Jan 2021)
D. S. Shamshirgar, J. Bagge, and A.-K. Tornberg, KTH Mathematics, Swedish e-Science Research Centre, 100 44 Stockholm, Sweden. A unified treatment for fast and spectrally accurate evaluation of electrostatic potentials subject to periodic boundary conditions in any or none of the three spatial dimensions is presented. Ewald decomposition is used to split the problem into a real-space and a Fourier-space part, and the FFT-based Spectral Ewald (SE) method is used to accelerate the computation of the latter. A key component in the unified treatment is an FFT-based solution technique for the free-space Poisson problem in three, two or one dimensions, depending on the number of non-periodic directions. The computational cost is furthermore reduced by employing an adaptive FFT for the doubly and singly periodic cases, allowing for different local upsampling factors. The SE method will always be most efficient for the triply periodic case, as the cost of computing FFTs will then be the smallest, whereas the computational cost of the rest of the algorithm is essentially independent of periodicity. We show that the cost of removing periodic boundary conditions from one or two directions out of three will only moderately increase the total runtime. Our comparisons also show that the computational cost of the SE method in the free-space case is around four times that of the triply periodic case. The Gaussian window function previously used in the SE method is here compared to a piecewise polynomial approximation of the Kaiser-Bessel window function. With the new window function, runtimes for the SE method can be further reduced.
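For readers unfamiliar with the decomposition mentioned above, the classical Ewald splitting of the Coulomb kernel can be written as follows (a standard identity, not quoted from the paper; the splitting parameter ξ is a notational assumption):

```latex
% Ewald splitting of the Coulomb kernel with splitting parameter \xi > 0:
% the erfc term decays rapidly and is summed directly in real space, while the
% smooth erf term is evaluated in Fourier space (accelerated by FFTs in the SE method).
\frac{1}{r} \;=\; \frac{\operatorname{erfc}(\xi r)}{r} \;+\; \frac{\operatorname{erf}(\xi r)}{r}
```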
Observational Astrophysics II January 18, 2007
Johan Bleeker (SRON Netherlands Institute for Space Research / Astronomical Institute Utrecht) and Frank Verbunt (Astronomical Institute Utrecht)

Contents
1 Radiation Fields
1.1 Stochastic processes: distribution functions, mean and variance
1.2 Autocorrelation and autocovariance
1.3 Wide-sense stationary and ergodic signals
1.4 Power spectral density
1.5 Intrinsic stochastic nature of a radiation beam: Application of Bose-Einstein statistics
1.5.1 Intermezzo: Bose-Einstein statistics
1.5.2 Intermezzo: Fermi-Dirac statistics
1.6 Stochastic description of a radiation field in the thermal limit
1.7 Stochastic description of a radiation field in the quantum limit
2 Astronomical measuring process: information transfer
2.1 Integral response function for astronomical measurements
2.2 Time filtering
2.2.1 Finite exposure and time resolution
2.2.2 Error assessment in sample average μ_T
2.3 Data sampling in a bandlimited measuring system: Critical or Nyquist frequency
3 Indirect Imaging and Spectroscopy
3.1 Coherence
3.1.1 The Visibility function
3.1.2 Young's dual beam interference experiment
3.1.3 The mutual coherence function
3.1.4 Interference
Fourier Analysis of Discrete-Time Signals
(Lathi, Chapter 10, and these slides)

Towards the discrete-time Fourier transform. How will we get there?
• Periodic discrete-time signal representation by the discrete-time Fourier series (DTFS)
• Extension to non-periodic DT signals using the "periodization trick"
• Derivation of the discrete-time Fourier transform (DTFT)
• Discrete Fourier transform (DFT)

Discrete-time periodic signals. A periodic DT signal of period N_0 is called an N_0-periodic signal: f[n + kN_0] = f[n]. [Figure: an N_0-periodic discrete-time signal f[n].] For the frequency it is customary to use a different notation: the frequency of a DT sinusoid with period N_0 is Ω_0 = 2π/N_0.

Fourier series representation of DT periodic signals. DT N_0-periodic signals can be represented by a DTFS with fundamental frequency Ω_0 = 2π/N_0 and its multiples. The DT exponential basis functions are
e^{j0k}, e^{±jΩ_0 k}, e^{±j2Ω_0 k}, …, e^{±jnΩ_0 k}   (discrete time)
compared with
e^{j0t}, e^{±jω_0 t}, e^{±j2ω_0 t}, …, e^{±jnω_0 t}   (continuous time).
An important difference with respect to the continuous case is that only a finite number of the exponentials are distinct. This is because the DT exponential is periodic in frequency with period 2π: e^{±j(Ω±2π)k} = e^{±jΩk}.

Increasing the frequency: continuous time. For a continuous-time sinusoid with increasing frequency, the number of oscillations per unit time increases with the frequency.

Increasing the frequency: discrete time. For a discrete-time sinusoid s[n] = sin(Ω_0 n), changing the frequency by 2π leaves the signal unchanged: s[n] = sin((Ω_0 + 2π)n) = sin(Ω_0 n + 2πn) = sin(Ω_0 n). Thus, as the frequency increases from zero, the number of oscillations per unit time increases until the frequency reaches π, then decreases again towards the value it had at zero.
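To make the 2π-periodicity in frequency concrete, here is a minimal NumPy sketch (not part of the original slides). The DTFS convention D_r = (1/N_0) Σ_n f[n] e^{−jrΩ_0 n} is an assumption, since the excerpt stops before fixing one.

```python
# Minimal sketch: (1) a DT sinusoid is unchanged when its frequency shifts by 2*pi,
# (2) DTFS coefficients of an N0-periodic signal via an N0-point FFT
# (convention D_r = (1/N0) * sum_n f[n] * exp(-1j*r*Omega0*n), an assumption).
import numpy as np

N0 = 8                                   # period of the discrete-time signal
Omega0 = 2 * np.pi / N0                  # fundamental frequency
n = np.arange(N0)

s1 = np.sin(Omega0 * n)
s2 = np.sin((Omega0 + 2 * np.pi) * n)    # frequency shifted by 2*pi
print(np.allclose(s1, s2))               # True: the two sequences are identical

f = np.cos(Omega0 * n) + 0.5 * np.cos(2 * Omega0 * n)
D = np.fft.fft(f) / N0                   # DTFS coefficients D_0 ... D_{N0-1}
print(np.round(D.real, 3))               # nonzero only at r = 1, 2 and their aliases N0-2, N0-1
```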
On the Duality of Regular and Local Functions
Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 24 May 2017, doi:10.20944/preprints201705.0175.v1
mathematics, ISSN 2227-7390, www.mdpi.com/journal/mathematics
Article: On the Duality of Regular and Local Functions
Jens V. Fischer, Institute of Mathematics, Ludwig Maximilians University Munich, 80333 Munich, Germany; Tel.: +49-8196-934918

Abstract: In this paper, we relate Poisson's summation formula to Heisenberg's uncertainty principle. They both express Fourier dualities within the space of tempered distributions, and these dualities are furthermore the inverses of one another. While Poisson's summation formula expresses a duality between discretization and periodization, Heisenberg's uncertainty principle expresses a duality between regularization and localization. We define regularization and localization on generalized functions and show that the Fourier transforms of regular functions are local functions and, vice versa, the Fourier transforms of local functions are regular functions.

Keywords: generalized functions; tempered distributions; regular functions; local functions; regularization-localization duality; regularity; Heisenberg's uncertainty principle

Classification: MSC 42B05, 46F10, 46F12

1. Introduction
Regularization is a popular trick in applied mathematics, see [1] for example. It is the technique "to approximate functions by more differentiable ones" [2]. Its terminology moreover coincides with the terminology used in generalized function spaces. They contain two kinds of functions, "regular functions" and "generalized functions". While regular functions are functions in the ordinary functions sense which are infinitely differentiable in the ordinary functions sense, all other functions become "infinitely differentiable" in the "generalized functions sense" [3].
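For readers who want to see the discretization-periodization duality mentioned in the abstract written out, one standard form of Poisson's summation formula is the following (stated here under the normalization f̂(σ) = ∫ f(t) e^{−2πiσt} dt, which the excerpt does not fix):

```latex
% Discretization on one side corresponds to periodization on the other:
% sampling f at the integers yields the Fourier series of the periodization of \hat{f}.
\sum_{k \in \mathbb{Z}} f(k)\, e^{-2\pi i k \sigma}
  \;=\; \sum_{n \in \mathbb{Z}} \hat{f}(\sigma + n),
\qquad \hat{f}(\sigma) = \int_{\mathbb{R}} f(t)\, e^{-2\pi i \sigma t}\, dt .
```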
Algorithmic Framework for X-Ray Nanocrystallographic Reconstruction in the Presence of the Indexing Ambiguity
Jeffrey J. Donatelli and James A. Sethian, Department of Mathematics and Lawrence Berkeley National Laboratory, University of California, Berkeley, CA 94720. Contributed by James A. Sethian, November 21, 2013 (sent for review September 23, 2013).
X-ray nanocrystallography allows the structure of a macromolecule to be determined from a large ensemble of nanocrystals. However, several parameters, including crystal sizes, orientations, and incident photon flux densities, are initially unknown and images are highly corrupted with noise. Autoindexing techniques, commonly used in conventional crystallography, can determine orientations using Bragg peak patterns, but only up to crystal lattice symmetry. This limitation results in an ambiguity in the orientations, known as the indexing ambiguity, when the diffraction pattern displays less symmetry than the lattice and leads to data that appear twinned if left unresolved. Furthermore, missing phase information must be recovered to determine the imaged object's structure. … over multiple orientations. Although there has been some success in determining structure from perfectly twinned data, reconstruction is often infeasible without a good initial atomic model of the structure. We present an algorithmic framework for X-ray nanocrystallographic reconstruction which is based on directly reducing data variance and resolving the indexing ambiguity. First, we design an autoindexing technique that uses both Bragg and non-Bragg data to compute precise orientations, up to lattice symmetry. Crystal sizes are then determined by performing a Fourier analysis around Bragg peak neighborhoods from a finely sampled low-angle image, such as from a rear detector (Fig. 1). Next, we model structure factor magnitudes for each reciprocal lattice point with …
Understanding the Lomb-Scargle Periodogram
Jacob T. VanderPlas

ABSTRACT
The Lomb-Scargle periodogram is a well-known algorithm for detecting and characterizing periodic signals in unevenly-sampled data. This paper presents a conceptual introduction to the Lomb-Scargle periodogram and important practical considerations for its use. Rather than a rigorous mathematical treatment, the goal of this paper is to build intuition about what assumptions are implicit in the use of the Lomb-Scargle periodogram and related estimators of periodicity, so as to motivate important practical considerations required in its proper application and interpretation.
Subject headings: methods: data analysis | methods: statistical

1. Introduction
The Lomb-Scargle periodogram (Lomb 1976; Scargle 1982) is a well-known algorithm for detecting and characterizing periodicity in unevenly-sampled time series, and has seen particularly wide use within the astronomy community. As an example of a typical application of this method, consider the data shown in Figure 1: this is an irregularly-sampled time series showing a single object from the LINEAR survey (Sesar et al. 2011; Palaversa et al. 2013), with un-filtered magnitude measured 280 times over the course of five and a half years. By eye, it is clear that the brightness of the object varies in time with a range spanning approximately 0.8 magnitudes, but what is not immediately clear is that this variation is periodic in time. The Lomb-Scargle periodogram is a method that allows efficient computation of a Fourier-like power spectrum estimator from such unevenly-sampled data, resulting in an intuitive means of determining the period of oscillation.
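As a quick illustration of the kind of computation the paper discusses, the sketch below uses Astropy's LombScargle implementation on synthetic, unevenly-sampled data. The library choice, the 0.61-day period, and the noise level are all assumptions; the LINEAR light curve itself is not included here.

```python
# Minimal sketch: Lomb-Scargle periodogram of synthetic, unevenly-sampled data.
# Astropy's implementation is used as one convenient choice; the paper itself is a
# conceptual introduction and does not prescribe a particular library.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 5.5 * 365.25, 280))   # 280 epochs over ~5.5 years (days)
true_period = 0.61                               # hypothetical period in days
y = 0.4 * np.sin(2 * np.pi * t / true_period) + 0.05 * rng.standard_normal(t.size)

# Fourier-like power spectrum estimate over an explicit frequency range (cycles/day)
frequency, power = LombScargle(t, y).autopower(minimum_frequency=0.01,
                                               maximum_frequency=5.0)
best_period = 1 / frequency[np.argmax(power)]
print(f"periodogram peak at ~{best_period:.3f} days")   # close to the input 0.61 d
```

The frequency grid is passed explicitly here; choosing that grid and interpreting the peak are exactly the practical considerations the paper is about.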
Fractional Calculus and Generalised Norms in Condition Monitoring of a Load Haul Dumper
Master's thesis, Juhani Nissilä (1975600), Department of Mathematical Sciences, University of Oulu, Autumn 2014.

Contents
Introduction
1 Classical analysis
1.1 Norms and function spaces
1.2 Convergence of function sequences
1.3 Complex analysis
1.4 Important theorems of analysis
1.5 Distributions
1.6 Iterated differences and derivatives
1.7 Iterated integrals
2 Mathematical background for fractional calculus
2.1 Gamma function
2.2 Fourier series
2.2.1 Series in L_{1,loc} and L_{2,loc}
2.2.2 Pointwise convergence
2.3 Fourier transform
2.3.1 Transform in S and L_1
2.3.2 Transform in L_2
2.3.3 Transform in S'
2.3.4 Properties
2.4 Poisson summation formula and the sampling theorem
2.5 Discrete Fourier transform
2.6 Window functions
3 Fractional calculus in time domain
3.1 Riemann-Liouville fractional integral and derivative
3.2 Caputo fractional derivative
3.3 Grünwald-Letnikov fractional derivative and integral
3.4 Equivalences of definitions
4 Fractional calculus in frequency domain
4.1 Fourier differintegrals
4.2 Weyl differintegrals
4.3 Equivalences of definitions
4.4 Numerical algorithm
5 Norms, means and other features of signals
5.1 Generalised l^p norms and Hölder means
5.2 Measurement index
6 Condition monitoring of a load haul dumper front axle
6.1 Data acquisition and acceleration signals
The Poisson Summation Formula, the Sampling Theorem, and Dirac Combs
Jordan Bell, Department of Mathematics, University of Toronto, April 3, 2014

1 Poisson summation formula
Let S(R) be the set of all infinitely differentiable functions f on R such that for all nonnegative integers m and n,
ν_{m,n}(f) = sup_{x∈R} |x|^m |f^{(n)}(x)|
is finite. For each m and n, ν_{m,n} is a seminorm. This is a countable collection of seminorms, and S(R) is a Fréchet space. (One has to prove for f_k ∈ S(R) that if for each fixed m, n the sequence ν_{m,n}(f_k) is a Cauchy sequence (of numbers), then there exists some f ∈ S(R) such that for each m, n we have ν_{m,n}(f − f_k) → 0.) I mention that S(R) is a Fréchet space merely because it is satisfying to give a structure to a set with which we are working: if I call this set Schwartz space, I would like to know what type of space it is, in the same sense that an L^p space is a Banach space.
For f ∈ S(R), define
f̂(ξ) = ∫_R e^{−iξx} f(x) dx,
and then
f(x) = (1/2π) ∫_R e^{ixξ} f̂(ξ) dξ.
Let T = R/2πZ. For φ ∈ C^∞(T), define
φ̂(n) = (1/2π) ∫_0^{2π} e^{−int} φ(t) dt,
and then
φ(t) = Σ_{n∈Z} φ̂(n) e^{int}.
For f ∈ S(R) and λ ≠ 0, define φ_λ : T → C by
φ_λ(t) = 2π Σ_{j∈Z} f_λ(t + 2πj),
where f_λ(x) = λ f(λx).
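The excerpt ends before the formula itself is stated. For orientation, the classical Poisson summation formula that these definitions lead to, in the normalization f̂(ξ) = ∫_R e^{−iξx} f(x) dx used above, is the following (a standard statement, not quoted from the notes):

```latex
% Poisson summation formula for f in the Schwartz space S(R),
% with \hat{f}(\xi) = \int_{\mathbb{R}} e^{-i\xi x} f(x)\,dx as defined above.
\sum_{n \in \mathbb{Z}} f(n) \;=\; \sum_{k \in \mathbb{Z}} \hat{f}(2\pi k)
```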
Distribution Theory by Riemann Integrals (arXiv:1810.04420v1)
Hans G. Feichtinger and Mads S. Jakobsen, October 11, 2018

Abstract: It is the purpose of this article to outline a syllabus for a course that can be given to engineers looking for an understandable mathematical description of the foundations of distribution theory and the necessary functional analytic methods. Arguably, these are needed for a deeper understanding of basic questions in signal analysis. Objects such as the Dirac delta and the Dirac comb should have a proper definition, and it should be possible to explain how one can reconstruct a band-limited function from its samples by means of simple series expansions. It should also be useful for graduate mathematics students who want to see how functional analysis can help to understand fairly practical problems, or teachers who want to offer a course related to the "Mathematical Foundations of Signal Processing" at their institutions. The course requires only an understanding of the basic terms from linear functional analysis, namely Banach spaces and their duals, bounded linear operators and a simple version of w*-convergence. As a matter of fact we use a set of function spaces which is quite different from the collection of Lebesgue spaces (L^p(R^d), ‖·‖_p) used normally. We thus avoid the use of Lebesgue integration theory. Furthermore we avoid topological vector spaces in the form of the Schwartz space. Although practically all the tools developed and presented can be realized in the context of LCA (locally compact Abelian) groups, i.e. in the most general setting where a (commutative) Fourier transform makes sense, we restrict our attention in the current presentation to the Euclidean setting, where we have (generalized) functions over R^d.
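The band-limited reconstruction mentioned above is, in one standard normalization (samples at the integers, spectrum supported in [−1/2, 1/2]; the article's own normalization is not shown in this excerpt), the classical cardinal series:

```latex
% Shannon/Whittaker cardinal series: a band-limited f with \hat{f} supported in
% [-1/2, 1/2] is recovered from its integer samples, with sinc(u) = \sin(\pi u)/(\pi u).
f(x) \;=\; \sum_{n \in \mathbb{Z}} f(n)\, \operatorname{sinc}(x - n)
```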
Generalized Poisson Summation Formula for Tempered Distributions
2015 International Conference on Sampling Theory and Applications (SampTA)
Ha Q. Nguyen and Michael Unser, Biomedical Imaging Group, École Polytechnique Fédérale de Lausanne (EPFL), Station 17, CH-1015, Lausanne, Switzerland; {ha.nguyen, michael.unser}@epfl.ch

Abstract: The Poisson summation formula (PSF), which relates the sampling of an analog signal with the periodization of its Fourier transform, plays a key role in the classical sampling theory. In its current forms, the formula is only applicable to a limited class of signals in L_1. However, this assumption on the signals is too strict for many applications in signal processing that require sampling of non-decaying signals. In this paper we generalize the PSF for functions living in weighted Sobolev spaces that do not impose any decay on the functions. The only requirement is that the signal to be sampled and its weak derivatives up to order 1/2 + ε, for arbitrarily small ε > 0, grow slower than a polynomial in the L_2 sense.

In the mathematics literature, the PSF is often stated in a dual form with the sampling occurring in the Fourier domain, and under strict conditions on the function f. Various versions of (PSF) have been proven when both f and f̂ are in appropriate subspaces of L_1(R) ∩ C(R). In its most general form, when f, f̂ ∈ L_1(R) ∩ C(R), the RHS of (PSF) is a well-defined periodic function in L_1([0, 1]) whose (possibly divergent) Fourier series is the LHS. If f and f̂ additionally satisfy |f(x)| + |f̂(x)| ≤ C(1 + |x|)^{−1−ε} for all x ∈ R and some C, ε > 0, …
Notes on Distributions
Peter Woit, Department of Mathematics, Columbia University, April 26, 2020

1 Introduction
This is a set of notes written for the 2020 spring semester Fourier Analysis class, covering material on distributions. This is an important topic not covered in Stein-Shakarchi. These notes will supplement two textbooks you can consult for more details:
• A Guide to Distribution Theory and Fourier Transforms [2], by Robert Strichartz. The discussion of distributions in this book is quite comprehensive, and at roughly the same level of rigor as this course. Much of the motivating material comes from physics.
• Lectures on the Fourier Transform and its Applications [1], by Brad Osgood, chapters 4 and 5. This book is wordier than Strichartz, but has a wealth of pictures and motivational examples from the field of signal processing. The first three chapters provide an excellent review of the material we have covered so far, in a form more accessible than Stein-Shakarchi.
One should think of distributions as mathematical objects generalizing the notion of a function (and the term "generalized function" is often used interchangeably with "distribution"). A function will give one a distribution, but many important examples of distributions do not come from a function in this way. Some of the properties of distributions that make them highly useful are:
• Distributions are always infinitely differentiable; one never needs to worry about whether derivatives exist (see the example below).
• One can look for solutions of differential equations that are distributions, and for many examples of differential equations the simplest solutions are given by distributions.
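As a one-line illustration of the first property, the distributional derivative of the Heaviside step function H (a standard textbook example, not taken from these notes) is the Dirac delta:

```latex
% H is not differentiable as an ordinary function, but as a distribution its
% derivative is defined by moving the derivative onto a test function \varphi:
\langle H', \varphi \rangle
  = -\langle H, \varphi' \rangle
  = -\int_{0}^{\infty} \varphi'(x)\, dx
  = \varphi(0)
  = \langle \delta, \varphi \rangle ,
\qquad \text{so } H' = \delta .
```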
Lecture 5. FFT and Divide and Conquer
CS711008Z Algorithm Design and Analysis. Dongbo Bu, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China.

Outline
• DFT: evaluate a polynomial at n special points;
• FFT: an efficient implementation of DFT;
• Applications of FFT: multiplying two polynomials (and multiplying two n-bit integers); time-frequency transforms; solving partial differential equations;
• Appendix: relationship between continuous and discrete Fourier transforms.

DFT: Discrete Fourier Transform
DFT evaluates a polynomial A(x) = a_0 + a_1 x + … + a_{n−1} x^{n−1} at n distinct points 1, ω, ω^2, …, ω^{n−1}, where ω = e^{2πi/n} is the n-th complex root of unity. Thus, it transforms the complex vector a_0, a_1, …, a_{n−1} into another complex vector y_0, y_1, …, y_{n−1}, where y_k = A(ω^k), i.e.
y_0 = a_0 + a_1 + a_2 + … + a_{n−1}
y_1 = a_0 + a_1 ω + a_2 ω^2 + … + a_{n−1} ω^{n−1}
…
y_{n−1} = a_0 + a_1 ω^{n−1} + a_2 ω^{2(n−1)} + … + a_{n−1} ω^{(n−1)^2}
In matrix form, y = F a, where F is the n × n Vandermonde matrix whose entry in row k and column j (both indexed from 0) is ω^{kj}.

FFT: a fast way to implement DFT [Cooley and Tukey, 1965]
Direct evaluation, whether by the matrix-vector product above or by applying Horner's method A(x) = a_0 + x(a_1 + x(a_2 + … + x a_{n−1})) at each of the n points, requires O(n^2) operations.
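To complement the outline, here is a minimal recursive radix-2 Cooley-Tukey FFT sketch in Python (an illustration only, assuming n is a power of two; it is not the lecture's own code):

```python
# Minimal recursive radix-2 Cooley-Tukey FFT: evaluates
# A(x) = a[0] + a[1]*x + ... + a[n-1]*x^(n-1) at the n-th roots of unity w^k,
# assuming n is a power of two. Runs in O(n log n) instead of the O(n^2) direct method.
import cmath

def fft(a):
    n = len(a)
    if n == 1:
        return a[:]                            # a degree-0 polynomial is its own DFT
    even = fft(a[0::2])                        # DFT of even-indexed coefficients
    odd = fft(a[1::2])                         # DFT of odd-indexed coefficients
    y = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(2j * cmath.pi * k / n)   # twiddle factor w^k
        y[k] = even[k] + w * odd[k]            # A(w^k)       = A_even(w^2k) + w^k A_odd(w^2k)
        y[k + n // 2] = even[k] - w * odd[k]   # A(w^(k+n/2)) = A_even(w^2k) - w^k A_odd(w^2k)
    return y

# Example: DFT of the coefficient vector [1, 2, 3, 4]
print(fft([1, 2, 3, 4]))
```

This follows the slides' convention y_k = A(ω^k) with ω = e^{2πi/n}; NumPy's np.fft.fft uses the opposite sign in the exponent, so the result above corresponds to n * np.fft.ifft(a) rather than np.fft.fft(a).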