Observational Astrophysics II January 18, 2007


Johan Bleeker, SRON Netherlands Institute for Space Research and Astronomical Institute Utrecht
Frank Verbunt, Astronomical Institute Utrecht

Contents

1 Radiation Fields
  1.1 Stochastic processes: distribution functions, mean and variance
  1.2 Autocorrelation and autocovariance
  1.3 Wide-sense stationary and ergodic signals
  1.4 Power spectral density
  1.5 Intrinsic stochastic nature of a radiation beam: application of Bose-Einstein statistics
    1.5.1 Intermezzo: Bose-Einstein statistics
    1.5.2 Intermezzo: Fermi-Dirac statistics
  1.6 Stochastic description of a radiation field in the thermal limit
  1.7 Stochastic description of a radiation field in the quantum limit

2 Astronomical measuring process: information transfer
  2.1 Integral response function for astronomical measurements
  2.2 Time filtering
    2.2.1 Finite exposure and time resolution
    2.2.2 Error assessment in sample average μ_T
  2.3 Data sampling in a bandlimited measuring system: critical or Nyquist frequency

3 Indirect Imaging and Spectroscopy
  3.1 Coherence
    3.1.1 The visibility function
    3.1.2 Young's dual beam interference experiment
    3.1.3 The mutual coherence function
    3.1.4 Interference law for a partially coherent radiation field: the complex degree of coherence
  3.2 Indirect spectroscopy
    3.2.1 Temporal coherence
    3.2.2 Longitudinal correlation
  3.3 Indirect imaging
    3.3.1 Quasi-monochromatic point source: spatial response function of a two-element interferometer
    3.3.2 Quasi-monochromatic extended source: spatial or lateral coherence
    3.3.3 The Van Cittert-Zernike theorem
    3.3.4 Etendue of coherence
  3.4 Aperture synthesis
    3.4.1 Quasi-monochromatic point source: spatial response function (PSF) and optical transfer function (OTF) of a multi-element interferometer
    3.4.2 Earth Rotation Aperture Synthesis (ERAS)
    3.4.3 The Westerbork Radio Synthesis Telescope (WSRT)
    3.4.4 Maximum allowable bandwidth of a quasi-monochromatic source
    3.4.5 The general case: arbitrary baselines and a field of view in arbitrary direction

4 Radiation sensing: a selection
  4.1 General
  4.2 Heterodyne detection
  4.3 Frequency limited noise in the thermal limit: signal-to-noise ratio and limiting sensitivity
  4.4 Photoconductive detection
    4.4.1 Operation principle
    4.4.2 Responsivity of a photoconductor
    4.4.3 Temporal frequency response of a photoconductor
  4.5 Frequency limited noise in the quantum limit
    4.5.1 Frequency limited shot noise in the signal-photon limit
    4.5.2 Example: shot noise in the signal-photon limit for a photoconductor
    4.5.3 Shot noise in the background-photon limit for a photoconductor: the normalized detectivity D*
  4.6 Charge Coupled Devices
    4.6.1 Operation principle
    4.6.2 Charge storage in a CCD
    4.6.3 Charge transport in a CCD
    4.6.4 Charge capacity and transfer speed in CCD structures
    4.6.5 Focal plane architectures
    4.6.6 Wavelength response of CCDs

5 Fitting observed data
  5.1 Errors
    5.1.1 Accuracy versus precision
    5.1.2 Computing the various distributions
  5.2 Error propagation
    5.2.1 Examples of error propagation
  5.3 Errors distributed as a Gaussian and the least squares method
    5.3.1 Weighted averages
    5.3.2 Fitting a straight line
    5.3.3 Non-linear models: Levenberg-Marquardt
    5.3.4 Optimal extraction of a spectrum
  5.4 Errors distributed according to a Poisson distribution and maximum likelihood
    5.4.1 A constant background
    5.4.2 Constant background plus one source
  5.5 General methods
    5.5.1 Amoebe
    5.5.2 Genetic algorithms
  5.6 Exercises

6 Looking for variability and periodicity
  6.1 Fitting sine-functions: Lomb-Scargle
  6.2 Period-folding: Stellingwerf
  6.3 Variability through Kolmogorov-Smirnov tests
  6.4 Fourier transforms
    6.4.1 The discrete Fourier transform
    6.4.2 From continuous to discrete
    6.4.3 The statistics of power spectra
    6.4.4 Detecting and quantifying a signal
  6.5 Exercises

Chapter 1: Radiation Fields

1.1 Stochastic processes: distribution functions, mean and variance

A stochastic process is defined as an infinite series of stochastic variables, one for each value of the time t. For a specific value of t, the stochastic variable X(t) possesses a certain probability density distribution. The numerical value of the stochastic variable X(t) at time t corresponds to a particular draw (outcome) from this probability distribution at time t. The time series of draws represents a single time function and is commonly referred to as a realisation of the stochastic process. The full set of all realisations is called the ensemble of time functions (see Fig. 1.1).

[Figure 1.1: The ensemble of time functions of a stochastic process.]

At each time t, the stochastic variable X(t) describing the stochastic process is distributed by a momentary cumulative distribution, called the first-order distribution:

    F(x; t) = P{X(t) ≤ x}    (1.1)

which gives the probability that the outcome at time t will not exceed the numerical value x. The probability density function, or first-order density, of X(t) follows from the derivative of F(x; t):

    f(x; t) ≡ ∂F(x; t)/∂x    (1.2)

This probability function may be a binomial, Poisson or normal (Gaussian) distribution.
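To make the ensemble picture concrete, here is a minimal numerical sketch (not part of the original notes; the toy process, sample sizes and threshold values are invented for illustration). It simulates an ensemble of realisations and estimates the first-order distribution F(x; t0) of eq. (1.1) across the ensemble at one fixed time t0:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy ensemble: realisations of a sinusoid with random phase plus noise.
n_real, n_samp = 10_000, 256
t = np.linspace(0.0, 1.0, n_samp)
phase = rng.uniform(0.0, 2.0 * np.pi, size=(n_real, 1))
X = np.sin(2.0 * np.pi * 5.0 * t + phase) \
    + 0.3 * rng.standard_normal((n_real, n_samp))

# First-order distribution F(x; t0) = P{X(t0) <= x}, estimated over the
# ensemble at a fixed time t0 (eq. 1.1); the density f(x; t0) of eq. (1.2)
# would follow by differencing F over a grid of x values.
i0 = 100                                   # index of the fixed time t0
for x in (-1.0, 0.0, 1.0):
    F = np.mean(X[:, i0] <= x)
    print(f"F({x:+.1f}; t0={t[i0]:.3f}) ~= {F:.3f}")
```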
For the determination of the statistical properties of a stochastic process it suffices in many applications to consider only certain averages, in particular the expected values of X(t) and of X²(t).

The mean or average μ(t) of X(t) is the expected value of X(t) and is defined as

    μ(t) = E{X(t)} = ∫_{−∞}^{+∞} x f(x; t) dx    (1.3)

The variance of X(t) is the expected value of the square of the difference between X(t) and μ(t), and is by definition equal to the square of the standard deviation:

    σ²(t) = E{(X(t) − μ(t))²} = E{X²(t)} − μ²(t)    (1.4)

(in the case of real functions X(t) and μ(t)).

1.2 Autocorrelation and autocovariance

This distribution can be generalized to higher-order distributions; for example, the second-order distribution of the process X(t) is the joint distribution

    G(x₁, x₂; t₁, t₂) = P{X(t₁) ≤ x₁, X(t₂) ≤ x₂}    (1.5)

The corresponding second-order density follows from the second derivative:

    g(x₁, x₂; t₁, t₂) ≡ ∂²G(x₁, x₂; t₁, t₂) / (∂x₁ ∂x₂)    (1.6)

The autocorrelation R(t₁, t₂) of X(t) is the expected value of the product X(t₁)·X(t₂), or X(t₁)·X*(t₂) if we deal with complex quantities:

    R(t₁, t₂) = E{X(t₁)·X*(t₂)} = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} x₁ x₂* g(x₁, x₂; t₁, t₂) dx₁ dx₂    (1.7)

with the asterisk (*) indicating the complex conjugate in the case of complex variables (see Fig. 1.2). The value of R(t₁, t₂) on the diagonal t₁ = t₂ = t represents the average power of the signal at time t:

    R(t) = R(t, t) = E{X²(t)} = E{|X(t)|²}    (1.8)

[Figure 1.2: The autocorrelation of a stochastic process.]

Since the average of X(t) is in general not zero, another quantity is introduced, the autocovariance C(t₁, t₂), which is centered around the averages μ(t₁) and μ(t₂):

    C(t₁, t₂) = E{(X(t₁) − μ(t₁))·(X(t₂) − μ(t₂))*}    (1.9)

For t₁ = t₂ = t (verify yourself):

    C(t) = C(t, t) = R(t, t) − |μ(t)|² = σ²(t)    (1.10)

C(t) represents the average power contained in the fluctuations of the signal around its mean value at time t.

1.3 Wide-sense stationary and ergodic signals

Next, consider a signal for which the average does not depend on time, and for which the autocorrelation depends only on the time difference τ ≡ t₂ − t₁. This is called a wide-sense stationary (wss) signal. The following relations hold:

    signal average:   μ(t) = μ = constant    (1.11)
    autocorrelation:  R(t₁, t₂) = R(τ)    (1.12)
    autocovariance:   C(t₁, t₂) = C(τ) = R(τ) − μ²    (1.13)

For τ = 0:

    R(0) = μ² + C(0) = μ² + σ²    (1.14)

which expresses that the total power in the signal equals the power in the average signal plus the power in its fluctuations around the average value. Note that both the autocorrelation and the autocovariance are even functions, i.e. R(−τ) = R(τ) and C(−τ) = C(τ).

Finally, a wss stochastic process X(t) is considered ergodic when its momentaneous ensemble averages equal the corresponding time averages over a single realisation.
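The relations of Sections 1.1-1.3 are easy to verify numerically. The sketch below is my own illustration (the AR(1) toy signal and all parameter values are made up): it estimates R(τ) and C(τ) from a single realisation of a wss signal, checks eq. (1.14), R(0) = μ² + σ², and illustrates ergodicity in that the time average of one realisation approaches the ensemble mean μ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Wide-sense stationary toy signal: constant mean mu plus AR(1) fluctuations.
mu, phi, n = 2.0, 0.9, 200_000
w = rng.standard_normal(n)
x = np.empty(n)
x[0] = w[0]
for i in range(1, n):
    x[i] = phi * x[i - 1] + w[i]
x += mu

# R(tau) = E{X(t) X(t+tau)} estimated from one realisation (lags 0..4),
# and C(tau) = R(tau) - mu^2 (eq. 1.13).
R = np.array([np.mean(x[: n - k] * x[k:]) for k in range(5)])
C = R - np.mean(x) ** 2

print("time average of one realisation:", np.mean(x))                 # ~mu (ergodicity)
print("R(0):                           ", R[0])
print("mu^2 + sigma^2:                 ", np.mean(x) ** 2 + np.var(x))  # eq. (1.14)
```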
Recommended publications
  • On the Duality of Regular and Local Functions
Preprints (www.preprints.org), not peer-reviewed, posted 24 May 2017, doi:10.20944/preprints201705.0175.v1; Mathematics, ISSN 2227-7390, www.mdpi.com/journal/mathematics. Jens V. Fischer, Institute of Mathematics, Ludwig Maximilians University Munich, 80333 Munich, Germany.

Abstract: In this paper, we relate Poisson's summation formula to Heisenberg's uncertainty principle. They both express Fourier dualities within the space of tempered distributions, and these dualities are furthermore the inverses of one another. While Poisson's summation formula expresses a duality between discretization and periodization, Heisenberg's uncertainty principle expresses a duality between regularization and localization. We define regularization and localization on generalized functions and show that the Fourier transform of regular functions are local functions and, vice versa, the Fourier transform of local functions are regular functions.

Keywords: generalized functions; tempered distributions; regular functions; local functions; regularization-localization duality; regularity; Heisenberg's uncertainty principle. Classification: MSC 42B05, 46F10, 46F12.

Introduction: Regularization is a popular trick in applied mathematics, see [1] for example. It is the technique "to approximate functions by more differentiable ones" [2]. Its terminology moreover coincides with the terminology used in generalized function spaces, which contain two kinds of functions: "regular functions" and "generalized functions". While regular functions are functions in the ordinary sense that are infinitely differentiable in the ordinary sense, all other functions become "infinitely differentiable" in the "generalized functions sense" [3].
  • Algorithmic Framework for X-Ray Nanocrystallographic Reconstruction in the Presence of the Indexing Ambiguity
Jeffrey J. Donatelli and James A. Sethian, Department of Mathematics and Lawrence Berkeley National Laboratory, University of California, Berkeley, CA 94720. Contributed by James A. Sethian, November 21, 2013 (sent for review September 23, 2013).

X-ray nanocrystallography allows the structure of a macromolecule to be determined from a large ensemble of nanocrystals. However, several parameters, including crystal sizes, orientations, and incident photon flux densities, are initially unknown and images are highly corrupted with noise. Autoindexing techniques, commonly used in conventional crystallography, can determine orientations using Bragg peak patterns, but only up to crystal lattice symmetry. This limitation results in an ambiguity in the orientations, known as the indexing ambiguity, when the diffraction pattern displays less symmetry than the lattice, and leads to data that appear twinned if left unresolved. Furthermore, missing phase information must be recovered to determine the imaged object's structure. Although there has been some success in determining structure from perfectly twinned data, reconstruction is often infeasible without a good initial atomic model of the structure. We present an algorithmic framework for X-ray nanocrystallographic reconstruction which is based on directly reducing data variance and resolving the indexing ambiguity. First, we design an autoindexing technique that uses both Bragg and non-Bragg data to compute precise orientations, up to lattice symmetry. Crystal sizes are then determined by performing a Fourier analysis around Bragg peak neighborhoods from a finely sampled low-angle image, such as from a rear detector (Fig. 1). Next, we model structure factor magnitudes for each reciprocal lattice point ...
    [Show full text]
  • Understanding the Lomb-Scargle Periodogram
Understanding the Lomb-Scargle Periodogram. Jacob T. VanderPlas.

Abstract: The Lomb-Scargle periodogram is a well-known algorithm for detecting and characterizing periodic signals in unevenly-sampled data. This paper presents a conceptual introduction to the Lomb-Scargle periodogram and important practical considerations for its use. Rather than a rigorous mathematical treatment, the goal of this paper is to build intuition about what assumptions are implicit in the use of the Lomb-Scargle periodogram and related estimators of periodicity, so as to motivate important practical considerations required in its proper application and interpretation. Subject headings: methods: data analysis; methods: statistical.

Introduction: The Lomb-Scargle periodogram (Lomb 1976; Scargle 1982) is a well-known algorithm for detecting and characterizing periodicity in unevenly-sampled time series, and has seen particularly wide use within the astronomy community. As an example of a typical application of this method, consider the data shown in Figure 1: this is an irregularly-sampled time series showing a single object from the LINEAR survey (Sesar et al. 2011; Palaversa et al. 2013), with unfiltered magnitude measured 280 times over the course of five and a half years. By eye, it is clear that the brightness of the object varies in time with a range spanning approximately 0.8 magnitudes, but what is not immediately clear is that this variation is periodic in time. The Lomb-Scargle periodogram is a method that allows efficient computation of a Fourier-like power spectrum estimator from such unevenly-sampled data, resulting in an intuitive means of determining the period of oscillation.
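A minimal sketch of the method in practice (my own illustration, not code from the paper; the data are synthetic and SciPy's real scipy.signal.lombscargle routine stands in for the estimator discussed in the text):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

# Irregularly sampled sinusoid: period 2.5 days, amplitude 0.4 mag, plus noise,
# observed at 280 random epochs over roughly 5.5 years.
t = np.sort(rng.uniform(0.0, 2000.0, 280))
y = 0.4 * np.sin(2.0 * np.pi * t / 2.5) + 0.05 * rng.standard_normal(t.size)

periods = np.linspace(0.1, 10.0, 200_000)
omega = 2.0 * np.pi / periods                  # angular trial frequencies
power = lombscargle(t, y - y.mean(), omega)    # mean-subtract before scanning

print(f"best-fit period: {periods[np.argmax(power)]:.4f} d")   # ~2.5
```

Note the dense frequency grid: with a 2000-day baseline, the periodogram peaks are narrow, so an undersampled grid can miss the true period entirely, one of the practical considerations the paper emphasizes.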
  • The Poisson Summation Formula, the Sampling Theorem, and Dirac Combs
The Poisson summation formula, the sampling theorem, and Dirac combs. Jordan Bell ([email protected]), Department of Mathematics, University of Toronto, April 3, 2014.

1. Poisson summation formula. Let S(R) be the set of all infinitely differentiable functions f on R such that for all nonnegative integers m and n,

    ν_{m,n}(f) = sup_{x ∈ R} |x|^m |f^(n)(x)|

is finite. For each m and n, ν_{m,n} is a seminorm. This is a countable collection of seminorms, and S(R) is a Fréchet space. (One has to prove for f_k ∈ S(R) that if for each fixed m, n the sequence ν_{m,n}(f_k) is a Cauchy sequence of numbers, then there exists some f ∈ S(R) such that for each m, n we have ν_{m,n}(f − f_k) → 0.) I mention that S(R) is a Fréchet space merely because it is satisfying to give a structure to a set with which we are working: if I call this set Schwartz space I would like to know what type of space it is, in the same sense that an L^p space is a Banach space. For f ∈ S(R), define

    f̂(ξ) = ∫_R e^{−iξx} f(x) dx,

and then

    f(x) = (1/2π) ∫_R e^{ixξ} f̂(ξ) dξ.

Let T = R/2πZ. For φ ∈ C^∞(T), define

    φ̂(n) = (1/2π) ∫_0^{2π} e^{−int} φ(t) dt,

and then

    φ(t) = Σ_{n ∈ Z} φ̂(n) e^{int}.

For f ∈ S(R) and λ ≠ 0, define φ_λ : T → C by

    φ_λ(t) = 2π Σ_{j ∈ Z} f_λ(t + 2πj),

where f_λ(x) = λ f(λx).
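With this Fourier convention, the Poisson summation formula reads Σ_{n∈Z} f(n) = Σ_{k∈Z} f̂(2πk). A quick numerical sanity check (my own snippet, not from the notes), using f(x) = e^{−x²}, whose transform in this convention is f̂(ξ) = √π e^{−ξ²/4}:

```python
import numpy as np

# Poisson summation in the convention fhat(xi) = ∫ exp(-i*xi*x) f(x) dx:
#     sum_n f(n) = sum_k fhat(2*pi*k).
# For f(x) = exp(-x^2):  fhat(xi) = sqrt(pi) * exp(-xi^2 / 4).
n = np.arange(-20, 21)
lhs = np.sum(np.exp(-n.astype(float) ** 2))
rhs = np.sum(np.sqrt(np.pi) * np.exp(-((2 * np.pi * n) ** 2) / 4.0))
print(lhs, rhs, abs(lhs - rhs))   # agreement to machine precision
```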
  • Distribution Theory by Riemann Integrals (arXiv:1810.04420v1)
Distribution Theory by Riemann Integrals. Hans G. Feichtinger and Mads S. Jakobsen, October 11, 2018.

Abstract: It is the purpose of this article to outline a syllabus for a course that can be given to engineers looking for an understandable mathematical description of the foundations of distribution theory and the necessary functional analytic methods. Arguably, these are needed for a deeper understanding of basic questions in signal analysis. Objects such as the Dirac delta and the Dirac comb should have a proper definition, and it should be possible to explain how one can reconstruct a band-limited function from its samples by means of simple series expansions. It should also be useful for graduate mathematics students who want to see how functional analysis can help to understand fairly practical problems, or teachers who want to offer a course related to the "Mathematical Foundations of Signal Processing" at their institutions. The course requires only an understanding of the basic terms from linear functional analysis, namely Banach spaces and their duals, bounded linear operators and a simple version of w*-convergence. As a matter of fact we use a set of function spaces which is quite different from the collection of Lebesgue spaces (L^p(R^d), ‖·‖_p) used normally. We thus avoid the use of Lebesgue integration theory. Furthermore we avoid topological vector spaces in the form of the Schwartz space. Although practically all the tools developed and presented can be realized in the context of LCA (locally compact Abelian) groups, i.e. in the most general setting where a (commutative) Fourier transform makes sense, we restrict our attention in the current presentation to the Euclidean setting, where we have (generalized) functions over R^d.
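One payoff mentioned in the abstract, reconstructing a band-limited function from its samples by a simple series expansion, is easy to demonstrate numerically. The sketch below is my own illustration of a truncated Shannon sinc series with a made-up test signal, not code from the article:

```python
import numpy as np

# Truncated Shannon series: f(t) ~= sum_n f(nT) * sinc((t - nT)/T), T = 1/(2B).
B = 4.0                               # assumed band limit (Hz)
T = 1.0 / (2.0 * B)                   # sample spacing at the Nyquist rate
f = lambda t: np.sin(2 * np.pi * 3.0 * t) + 0.5 * np.cos(2 * np.pi * 1.5 * t)

n = np.arange(-400, 401)              # truncation of the infinite series
t = np.linspace(-1.0, 1.0, 2001)
series = np.sum(f(n * T)[:, None] * np.sinc((t[None, :] - n[:, None] * T) / T),
                axis=0)               # np.sinc(x) = sin(pi x)/(pi x)

print("max reconstruction error:", np.max(np.abs(series - f(t))))  # small; set by truncation
```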
  • Generalized Poisson Summation Formula for Tempered Distributions
2015 International Conference on Sampling Theory and Applications (SampTA). Generalized Poisson Summation Formula for Tempered Distributions. Ha Q. Nguyen and Michael Unser, Biomedical Imaging Group, École Polytechnique Fédérale de Lausanne (EPFL), Station 17, CH-1015 Lausanne, Switzerland ({ha.nguyen, michael.unser}@epfl.ch).

Abstract: The Poisson summation formula (PSF), which relates the sampling of an analog signal with the periodization of its Fourier transform, plays a key role in the classical sampling theory. In its current forms, the formula is only applicable to a limited class of signals in L¹. However, this assumption on the signals is too strict for many applications in signal processing that require sampling of non-decaying signals. In this paper we generalize the PSF for functions living in weighted Sobolev spaces that do not impose any decay on the functions. The only requirement is that the signal to be sampled and its weak derivatives up to order 1/2 + ε, for arbitrarily small ε > 0, grow slower than a polynomial in the L² sense.

In the mathematics literature, the PSF is often stated in a dual form with the sampling occurring in the Fourier domain, and under strict conditions on the function f. Various versions of the PSF have been proven when both f and f̂ are in appropriate subspaces of L¹(R) ∩ C(R). In its most general form, when f, f̂ ∈ L¹(R) ∩ C(R), the RHS of the PSF is a well-defined periodic function in L¹([0, 1]) whose (possibly divergent) Fourier series is the LHS. If f and f̂ additionally satisfy |f(x)| + |f̂(x)| ≤ C(1 + |x|)^{−1−ε} for all x ∈ R, for some C, ε > 0, ...
  • Notes on Distributions
Notes on Distributions. Peter Woit, Department of Mathematics, Columbia University ([email protected]), April 26, 2020.

1. Introduction. This is a set of notes written for the 2020 spring semester Fourier Analysis class, covering material on distributions. This is an important topic not covered in Stein-Shakarchi. These notes will supplement two textbooks you can consult for more details:

A Guide to Distribution Theory and Fourier Transforms [2], by Robert Strichartz. The discussion of distributions in this book is quite comprehensive, and at roughly the same level of rigor as this course. Much of the motivating material comes from physics.

Lectures on the Fourier Transform and its Applications [1], by Brad Osgood, chapters 4 and 5. This book is wordier than Strichartz, and has a wealth of pictures and motivational examples from the field of signal processing. The first three chapters provide an excellent review of the material we have covered so far, in a form more accessible than Stein-Shakarchi.

One should think of distributions as mathematical objects generalizing the notion of a function (and the term "generalized function" is often used interchangeably with "distribution"). A function will give one a distribution, but many important examples of distributions do not come from a function in this way. Some of the properties of distributions that make them highly useful are: distributions are always infinitely differentiable, so one never needs to worry about whether derivatives exist; and one can look for solutions of differential equations that are distributions, for many examples of differential equations the simplest solutions are given by distributions.
  • Dirac Comb and Flavors of Fourier Transforms
ESS 522 (2014). 6. Dirac Comb and Flavors of Fourier Transforms.

Consider a periodic function that comprises pulses of amplitude A and duration τ, spaced a time T apart. We can define it over one period as

    y(t) = A for −τ/2 ≤ t ≤ τ/2, and y(t) = 0 elsewhere between −T/2 and T/2.    (6-1)

The Fourier series for y(t) is defined as

    y(t) = Σ_{k=−∞}^{∞} c_k exp(i k 2π t / T)    (6-2)

with

    c_k = (1/T) ∫_{−T/2}^{T/2} y(t) exp(−i k 2π t / T) dt    (6-3)

Evaluating this integral gives

    c_k = (1/T) ∫_{−τ/2}^{τ/2} A exp(−i k 2π t / T) dt
        = (A/T) [ (T/(−i k 2π)) exp(−i k 2π t / T) ]_{−τ/2}^{τ/2}
        = (A/T) (T/(kπ)) sin(kπτ/T)
        = (Aτ/T) · sin(kπτ/T)/(kπτ/T) = (Aτ/T) sinc(kπτ/T)    (6-4)

where we have used the relationship exp(iy) = cos(y) + i sin(y) to evaluate the integral, with the cosine terms cancelling because of symmetry. The Fourier series coefficients of a pulse train are thus given by a sinc function.

The largest-amplitude terms in the Fourier series have k < T/τ. Also, if T = τ then the time series has a constant amplitude and all the coefficients except c_0 are equal to zero (the equivalent of the inverse Fourier transform of a Dirac delta function in frequency). Now if we allow each pulse to become a delta function, which can be written mathematically by letting τ → 0 with A = 1/τ, we obtain the simple result

    c_k = 1/T    (τ → 0, A = 1/τ)    (6-5)

A row of delta functions in the time domain, spaced apart by time T, is represented by a row of delta functions in the frequency domain, scaled by 1/T and spaced apart in frequency by 1/T (remember f = k/T).
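Eq. (6-4) can be checked directly by evaluating the coefficient integral (6-3) numerically; the snippet below is my own verification sketch (pulse parameters are arbitrary), not part of the course notes:

```python
import numpy as np

# Check Eq. (6-4): c_k = (A*tau/T) * sinc(k*pi*tau/T), with sinc(x) = sin(x)/x.
A, tau, T = 1.0, 0.2, 1.0
t = np.linspace(-T / 2, T / 2, 400_001)
dt = t[1] - t[0]
y = np.where(np.abs(t) <= tau / 2, A, 0.0)

for k in range(6):
    # Riemann-sum approximation of Eq. (6-3).
    c_num = np.sum(y * np.exp(-1j * 2 * np.pi * k * t / T)) * dt / T
    c_ana = A * tau / T * np.sinc(k * tau / T)   # np.sinc(x) = sin(pi x)/(pi x)
    print(k, round(c_num.real, 4), round(c_ana, 4))   # columns agree
```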
  • Introduction to Image Processing #5/7
Introduction to Image Processing #5/7. Thierry Géraud, EPITA Research and Development Laboratory (LRDE), 2006.

Outline: 1 Introduction; 2 Distributions (about the Dirac delta function; some useful functions and distributions); 3 Fourier and Convolution; 4 Sampling; 5 Convolution and Linear Filtering; 6 Some 2D Linear Filters (gradients; Laplacian).
  • The Mellin Transform Jacqueline Bertrand, Pierre Bertrand, Jean-Philippe Ovarlez
The Mellin Transform. Jacqueline Bertrand (CNRS and University Paris VII, LPTM, 75251 Paris, France), Pierre Bertrand (ONERA/DES, BP 72, 92322 Châtillon, France), Jean-Philippe Ovarlez (id.). Published as Chapter 12 in The Transforms and Applications Handbook, ed. A. D. Poularikas, CRC Press, 1995 (a volume of The Electrical Engineering Handbook series). HAL Id: hal-03152634, https://hal.archives-ouvertes.fr/hal-03152634, submitted 25 Feb 2021.

Contents: 1 Introduction. 2 The classical approach and its developments; 2.1 Generalities on the transformation: 2.1.1 Definition and relation to other transformations; 2.1.2 Inversion formula; 2.1.3 Transformation of distributions; 2.1.4 Some properties of the transformation; 2.1.5 Relation to multiplicative convolution; 2.1.6 Hints for a practical inversion of the Mellin transformation.
  • 11S Poisson Summation Formula, Masatsugu Sei Suzuki, Department of Physics, SUNY at Binghamton (Date: January 08, 2011)
Chapter 11S: Poisson summation formula. Masatsugu Sei Suzuki, Department of Physics, SUNY at Binghamton (January 08, 2011). Topics: Poisson summation formula; Fourier series; convolution; random walk; diffusion; magnetization; Gaussian distribution function.

11S.1 Poisson summation formula. For appropriate functions f(x), the Poisson summation formula may be stated as

    Σ_n f(x + n) = √(2π) Σ_m F(k = 2πm) e^{i2πmx},    (1)

where m and n are integers, and F(k) is the Fourier transform of f(x), defined by

    F(k) = F[f(x)] = (1/√(2π)) ∫ e^{−ikx} f(x) dx.

Note that the inverse Fourier transform is given by

    f(x) = (1/√(2π)) ∫ e^{ikx} F(k) dk.

The factor √(2π) on the right-hand side of Eq. (1) arises from the definition of the Fourier transform.

(Proof) The proof of Eq. (1) is given as follows:

    Σ_n f(x + n) = (1/√(2π)) ∫ F(k) e^{ikx} Σ_n e^{ikn} dk.

We evaluate the factor I = Σ_n e^{ikn}. It is evident that I is non-zero only when k = 2πm (m an integer). Therefore I can be expressed as

    I = A Σ_m δ(k − 2πm),

where A is a normalization factor. Then

    Σ_n f(x + n) = (A/√(2π)) Σ_m F(k = 2πm) e^{i2πmx}.

The normalization factor is readily shown to be A = 2π by considering the symmetrical case f(x) = e^{−x²}, for which F(k) = e^{−k²/4}/√2, and evaluating both sides of Eq. (1) directly.

(Mathematica)
f[x_] := Exp[-x^2];
FourierTransform[f[x], x, k, FourierParameters -> {0, -1}]
(* E^(-k^2/4)/Sqrt[2] *)
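A quick numerical check of Eq. (1) in this √(2π) convention (my own snippet, not from the chapter); unlike the integer-only check given earlier for the other convention, this one exercises the e^{i2πmx} factor at an arbitrary x:

```python
import numpy as np

# Verify Eq. (1): sum_n f(x+n) = sqrt(2 pi) * sum_m F(2 pi m) * exp(i 2 pi m x)
# for f(x) = exp(-x^2), whose transform in the convention
#   F(k) = (1/sqrt(2 pi)) * ∫ exp(-i k x) f(x) dx
# is F(k) = exp(-k^2/4) / sqrt(2).
x = 0.3
m = np.arange(-30, 31)
lhs = np.sum(np.exp(-(x + m) ** 2))
F = np.exp(-(2 * np.pi * m) ** 2 / 4.0) / np.sqrt(2.0)
rhs = np.sqrt(2 * np.pi) * np.sum(F * np.exp(1j * 2 * np.pi * m * x))
print(lhs, rhs.real, abs(lhs - rhs.real))   # agreement to machine precision
```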
  • Dirac Brushes (Or, the Fractional Fourier Transform of Dirac Combs)
Dirac brushes (or, the fractional Fourier transform of Dirac combs). Joe Viola, Université de Nantes, CNRS, Laboratoire de Mathématiques Jean Leray (LMJL, UMR 6629), F-44000 Nantes, France ([email protected]). arXiv:1812.05346v1 [math.CA], 13 Dec 2018.

Abstract: In analogy with the Poisson summation formula, we identify when the fractional Fourier transform, applied to a Dirac comb in dimension one, gives a discretely supported measure. We describe the resulting series of complex multiples of delta functions, and through either the metaplectic representation of SL(2, Z) or the Bargmann transform, we see that the identification of these measures is equivalent to the functional equation for the Jacobi theta functions. In tracing the values of the antiderivative in certain small-angle limits, we observe Euler spirals, and on a smaller scale these spirals are made up of Gauss sums, which give the coefficient in the aforementioned functional equation.

Contents: 1 Introduction; 2 Direct proof of Theorems 1.2 and 1.3; 3 Using the metaplectic representation of SL(2, R); 4 Evaluation of the coefficient μ; 5 The Bargmann transform; 6 Periodicity and parity; 7 Questions of continuity in α; 8 Numerics around approximations to irrational cotangents; References.

1. Introduction. 1.1 Setting and main results. For r > 0, we consider the Dirac comb

    Ш_r(x) = √r Σ_{k ∈ Z} δ(x − rk)    (1.1)

as a distribution in the dual Schwartz space S′(R), meaning that the Dirac comb is defined via its action f ↦ √r Σ_{k ∈ Z} f(rk) on rapidly decaying smooth functions. We write the Fourier transform as

    Ff(ξ) = i^{−1/2} ∫ e^{−2πixξ} f(x) dx    (1.2)