On the Duality of Regular and Local Functions


Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 24 May 2017 | doi:10.20944/preprints201705.0175.v1
mathematics, ISSN 2227-7390, www.mdpi.com/journal/mathematics

Article

On the Duality of Regular and Local Functions

Jens V. Fischer
Institute of Mathematics, Ludwig Maximilians University Munich, 80333 Munich, Germany; E-Mail: jens.fi[email protected]; Tel.: +49-8196-934918

Abstract: In this paper, we relate Poisson's summation formula to Heisenberg's uncertainty principle. Both express Fourier dualities within the space of tempered distributions, and these dualities are furthermore the inverses of one another. While Poisson's summation formula expresses a duality between discretization and periodization, Heisenberg's uncertainty principle expresses a duality between regularization and localization. We define regularization and localization on generalized functions and show that the Fourier transform of regular functions are local functions and, vice versa, the Fourier transform of local functions are regular functions.

Keywords: generalized functions; tempered distributions; regular functions; local functions; regularization-localization duality; regularity; Heisenberg's uncertainty principle

Classification: MSC 42B05, 46F10, 46F12

© 2017 by the author(s). Distributed under a Creative Commons CC BY license.

1. Introduction

Regularization is a popular trick in applied mathematics, see [1] for example. It is the technique "to approximate functions by more differentiable ones" [2]. Its terminology moreover coincides with the terminology used in generalized function spaces. These spaces contain two kinds of functions, "regular functions" and "generalized functions". While regular functions are functions in the ordinary sense which are infinitely differentiable in the ordinary sense, all other functions are "infinitely differentiable" in the "generalized functions sense" [3]. In this way, all functions become infinitely differentiable. Localization, in contrast, is another popular technique. It allows, for example, the integration of functions which could not be integrated otherwise, if we think of "locally integrable" functions or of the Short-Time Fourier Transform (STFT), which is capable of analyzing infinitely extended signals. Although regularization and localization appear to be quite different, a connection between these operations is no surprise; it is ubiquitous in the literature. The theorem below, however, applies in a wider sense: it holds within the space of tempered distributions and is directly related to Heisenberg's uncertainty principle. It is moreover the inverse of an already known discretization-periodization duality.

Section 2 provides an introduction to the notations used and to previous results. Section 3 presents a justification for Section 4, where regularization and localization within the space of tempered distributions are defined. Section 5 provides symbolic calculation rules based on these definitions, needed to prove the theorem in Section 6. Section 7 connects these results to those of a previous study, and Section 8, finally, concludes this study and provides an outlook.
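As a rough numerical aside (not taken from the paper), the localizing effect mentioned above can be made tangible by windowing: a pure cosine has no finite integral over the real line, but after multiplication with a Gaussian window, as in the STFT, it decays rapidly and becomes integrable. The grid, window and frequency below are illustrative assumptions only.

```python
import numpy as np

# Sketch of "localization": an infinitely extended signal becomes integrable
# once multiplied by a rapidly decaying window (as in the STFT).
# All parameters here are illustrative assumptions, not taken from the paper.
t = np.linspace(-50.0, 50.0, 20001)
dt = t[1] - t[0]

signal = np.cos(2.0 * np.pi * 3.0 * t)   # does not decay, not integrable over R
window = np.exp(-np.pi * t**2)           # Gaussian window, a Schwartz function
localized = window * signal              # decays rapidly, hence integrable

print(np.sum(np.abs(localized)) * dt)    # finite approximation of the L1 norm
```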
2. Preliminaries

Let δ_{kT} be the Dirac impulse shifted by k ∈ Z^n units of T ∈ R^n_+ = {t ∈ R^n : 0 < t_ν < ∞, 1 ≤ ν ≤ n}, kT being componentwise multiplication, within the space S′ ≡ S′(R^n) of tempered distributions (generalized functions that do not grow faster than polynomials), and let

    III_T := Σ_{k ∈ Z^n} δ_{kT}

be the Dirac comb. Then δ_{kT} ∈ S′ and III_T ∈ S′ for any T ∈ R^n_+ are tempered distributions [4–6]. We shortly write δ instead of δ_{kT} if k = 0. The Fourier transform F in S′ is defined as usual and such that F1 = δ and Fδ = 1, where 1 is the function that is constantly one [3,4,7–12]. The Dirac comb is moreover known for its excellent discretization (sampling) and periodization properties [7,11–13]. While the multiplication III_T · f in S′ samples a function f ∈ S′, the corresponding convolution product III_T ∗ f periodizes f in S′.

The following two lemmas summarize the demands that must be put on f ∈ S′ such that f can be sampled or periodized in S′. Recall that smoothness, i.e., infinite differentiability, is not a demand; it is a given fact for all functions in generalized function spaces. Also recall that O_M is the space of multiplication operators in S′ and O_C′ is the space of convolution operators in S′ according to Laurent Schwartz' theory of distributions [4,5,14–20].

Lemma 1 (Discretization). Generalized functions f ∈ S′ can be sampled in S′ if and only if f ∈ O_M.

Proof. Any uniform discretization (sampling) in S′ corresponds to forming the product III_T · f in S′, where III_T ∈ S′ is the Dirac comb. Furthermore, III_T is no regular function, i.e., III_T ∈ S′ \ O_M. On the other hand, for any multiplication product in S′, it is required that at least one of the two factors is in O_M. Hence, f ∈ O_M ⊂ S′; otherwise the product does not exist. Vice versa, if f ∈ O_M then III_T · f exists due to S′ · O_M ⊂ S′.

An equivalent statement is the following lemma.

Lemma 2 (Periodization). Generalized functions f ∈ S′ can be periodized in S′ if and only if f ∈ O_C′.

Proof. Any periodization in S′ corresponds to forming the convolution product III_T ∗ f in S′, where III_T ∈ S′ is the Dirac comb. Furthermore, III_T is of no rapid descent, i.e., III_T ∈ S′ \ O_C′. On the other hand, for any convolution product in S′, it is required that at least one of the two factors is in O_C′. Hence, f ∈ O_C′ ⊂ S′; otherwise the convolution product does not exist. Vice versa, if f ∈ O_C′ then III_T ∗ f exists due to S′ ∗ O_C′ ⊂ S′.

In a previous study [19], we used these insights in order to define operations of discretization ⊥⊥⊥_T and periodization △△△_T in S′. While discretization is an operation ⊥⊥⊥_T : O_M → S′, f ↦ ⊥⊥⊥_T f := III_T · f, periodization is an operation △△△_T : O_C′ → S′, g ↦ △△△_T g := III_T ∗ g, respectively. Starting from these two definitions, we proved that

    F(⊥⊥⊥ f) = △△△(F f)    (1)
    F(△△△ g) = ⊥⊥⊥(F g)    (2)

hold in S′, both being expressions of Poisson's summation formula. We shortly write ⊥⊥⊥ and △△△ instead of ⊥⊥⊥_T and △△△_T if T_ν = 1 for all 1 ≤ ν ≤ n.
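As a quick numerical sanity check (not part of the paper), Poisson's summation formula behind Equations (1) and (2) can be verified on a Gaussian. The sketch below assumes the ordinary-frequency convention Ff(σ) = ∫ f(t) e^{-2πitσ} dt, which is the convention compatible with F1 = δ and Fδ = 1 used above; the width parameter and the truncation of the sums are arbitrary choices for the demonstration.

```python
import numpy as np

# Numerical check of Poisson's summation formula  sum_k f(k) = sum_m (Ff)(m)
# for the Gaussian f(x) = exp(-pi (x/a)^2), whose Fourier transform under the
# convention Ff(s) = \int f(x) exp(-2 pi i x s) dx is (Ff)(s) = a exp(-pi (a s)^2).
a = 1.7                               # arbitrary width (an assumption for the demo)
k = np.arange(-50, 51)                # truncation of both rapidly converging sums

lhs = np.sum(np.exp(-np.pi * (k / a) ** 2))      # samples of f ("discretize f")
rhs = np.sum(a * np.exp(-np.pi * (a * k) ** 2))  # samples of Ff (periodized Ff at 0)

print(lhs, rhs)                       # the two sums agree to machine precision
```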
Recall moreover that these rules are a consequence of the Fourier duality

    F(α · f) = Fα ∗ Ff    (3)
    F(g ∗ f) = Fg · Ff    (4)

in S′ for any α ∈ O_M, g ∈ O_C′ and f ∈ S′, which is, according to Laurent Schwartz' theory of generalized functions, the widest possible comprehension of both multiplication and convolution within the space of tempered distributions [4,14,18]. It lies at the very heart of S′. Many calculation rules in S′, including Equations (1), (2), (7), (8), (11), (12) and Lemmas 1, 2, 3, 4, rely on it.

3. Feasibilities

The following two lemmas provide justifications for the way we will define regularizations and localizations in S′ below. They will allow us to invert discretizations and periodizations in S′.

Lemma 3 (Regularization). Let ϕ ∈ S. Then for any f ∈ S′, ϕ ∗ f can be sampled.

Proof. This is a consequence of the fact that S ∗ S′ ⊂ O_M [4,10,14,18] and Lemma 1.

An equivalent statement is the following lemma.

Lemma 4 (Localization). Let ϕ ∈ S. Then for any f ∈ S′, ϕ · f can be periodized.

Proof. It follows from the fact that S · S′ ⊂ O_C′ [4,18], which is the Fourier dual F(S ∗ S′) ⊂ F(O_M) of S ∗ S′ ⊂ O_M, and Lemma 2.

It is interesting to observe that ϕ ∗ and ϕ · stretch and compress f ∈ S′, respectively. This property is moreover independent of the actual choice of ϕ ∈ S. It can therefore be attributed to the operations of convolution and multiplication themselves.

4. Definitions

"One of the main applications of convolution is the regularization of a distribution" [14], or the regularization of ordinary functions which are not infinitely differentiable in the conventional sense. Its actual importance lies furthermore in the fact that it is the reversal of discretization. Regularization is usually understood as the approximation of generalized functions via approximate identities [2,8–10,14,21–23]. In this paper, however, we extend this idea by allowing any ϕ ∈ S and by allowing even ordinary functions f ∈ S′ to be used for regularizations in S′. This approach naturally includes the special case of choosing approximate identities without unnecessarily restricting our theorem below. Lemmas 3 and 4 above justify the following two definitions.

Definition 5 (Regularization). Let ϕ ∈ S. Then for any tempered distribution f ∈ S′ we define another tempered distribution by

    ∩_ϕ f := ϕ ∗ f    (5)

which is a regular, slowly growing function in O_M ⊂ S′. The operation ∩_ϕ is called regularization, approximation, interpolation or smoothing of f by means of ϕ. It is a linear, continuous operation ∩_ϕ : S′ → O_M, f ↦ ∩_ϕ f. The result of ∩_ϕ is called the regular function of f in S′.

Figure 1. The regularization of a generalized function f yields the regular function ∩_ϕ f.
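As a discrete sketch of Definition 5 (an illustration, not the paper's construction), one can mimic ϕ ∗ f on a finite grid: convolving a jump function with a unit-mass Gaussian produces a smooth, slowly growing function that can then be sampled, in line with Lemmas 3 and 1. The grid, the choice f = sgn and the sampling step are assumptions made only for this demonstration.

```python
import numpy as np

# Sketch of regularization (Definition 5): convolving a non-smooth tempered
# distribution f with a Schwartz function phi yields a smooth regular function
# that can then be sampled (Lemmas 3 and 1). Grid and parameters are assumptions.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

f = np.sign(x)                                         # jump at 0, hence not smooth
phi = np.exp(-np.pi * x**2)                            # phi in S with unit integral
regularized = np.convolve(f, phi, mode="same") * dx    # discrete stand-in for phi * f

# Away from the window edges the result is a smooth transition through 0,
# and it can be sampled pointwise, e.g. on the integer grid:
samples = regularized[::200]                           # spacing 200 * dx = 1.0
print(samples[8:13])                                   # values around the smoothed jump at x = 0
```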