Dirac Delta Function

APPENDIX E
(Source: Ideas of Quantum Chemistry, Second Edition, © 2014 Elsevier B.V. http://dx.doi.org/10.1016/B978-0-444-59436-5.00025-8)

Paul Dirac introduced some useful formal tools, such as his notation for integrals and operators. One of them is the Dirac delta function δ(x), an object then unknown to mathematicians, which turned out to be very useful in physics. We may think of it as a function with the following characteristics:¹

• It is nonzero only in the immediate vicinity of x = 0, where its value is +∞.
• The area under its plot equals 1, which is expressed by the symbolic equation

    \int_{-\infty}^{\infty} \delta(x)\, dx = 1.

A thin, straight reed protruding from a lake (with the water level taken as 0) gives a picture of something similar to the Dirac delta function. The only task of the Dirac delta function is its specific behavior when the product f(x)δ(x) is integrated over a range that includes the point x = 0; namely,

    \int_a^b f(x)\,\delta(x)\, dx = f(0).    (E.1)

This result is quite understandable: the integral represents the area under the curve f(x)δ(x), but since δ(x) is so strongly concentrated at x = 0, only those x that are "extremely close" to x = 0 need to be taken seriously. There, f(x) equals f(0). The constant f(0) can be taken out of the integral, and the integral that remains is \int_{-\infty}^{\infty} \delta(x)\, dx = 1. This is why we get the right-hand side of Eq. (E.1). Of course, δ(x − c) represents the same narrow peak, but centered at x = c. Therefore, for a ≤ c ≤ b, we have

    \int_a^b f(x)\,\delta(x - c)\, dx = f(c).    (E.2)

Approximations to δ(x)

The Dirac delta function δ(x) can be approximated by many functions that depend on a parameter and have the following properties:

• As the parameter approaches its limit, the values of the function for x distant from 0 become smaller and smaller, while for x close to 0 they become larger and larger (the function peaks near x = 0).
• The integral of the function tends to 1 (or close to 1) as the parameter approaches its limit value.

Here are several functions that approximate the Dirac delta function:

• A rectangular function centered at x = 0, with rectangle area equal to 1 (a → 0):

    f_1(x; a) = \begin{cases} 1/a & \text{for } -a/2 \le x \le a/2 \\ 0 & \text{otherwise.} \end{cases}

• A Gaussian function² normalized to 1 (a → ∞):

    f_2(x; a) = \sqrt{a/\pi}\, e^{-ax^2}.

• Another function (a → ∞):

    f_3(x; a) = \frac{1}{\pi}\, \frac{\sin ax}{x}.

• The last function, which we will use when considering the interaction of matter with radiation,³ is (a → ∞):

    f_4(x; a) = \frac{1}{\pi a}\, \frac{\sin^2 ax}{x^2}.

¹ More precisely, this is not a function; it is called a distribution. The theory of distributions was developed by mathematicians only after Dirac's work.

² Let us see how the approximation f_2(x; a) = \sqrt{a/\pi}\, e^{-ax^2} does the job of the Dirac delta function when a → ∞. Take the function f(x) = (x − 5)² and consider the following integral. Expanding (x − 5)² = x² − 10x + 25, the odd middle term integrates to zero, and we obtain

    \int_{-\infty}^{\infty} f(x)\, f_2(x; a)\, dx
        = \sqrt{\frac{a}{\pi}} \int_{-\infty}^{\infty} (x - 5)^2\, e^{-ax^2}\, dx
        = \sqrt{\frac{a}{\pi}} \left( \frac{1}{2}\sqrt{\frac{\pi}{a^3}} + 0 + 25\sqrt{\frac{\pi}{a}} \right)
        = \frac{1}{2a} + 25.

When a → ∞, the value of the integral tends to 25 = f(0), as it must if f_2 is used in place of the Dirac delta function.

³ The function under the limit symbol may be treated as A\sin^2(ax) with an amplitude A = 1/x² that decays as |x| → ∞. For small x, \sin(ax) ≈ ax (as seen from its Taylor expansion), so there the function approaches a². This means that when a → ∞ there is a dominant peak close to x = 0, although some smaller side peaks cluster around it. The area of the dominant peak may be approximated by a triangle of base 2π/a and height a², giving an area of πa. Hence the "approximate normalization factor" 1/(πa) in f_4.
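The check in footnote 2 is easy to reproduce numerically. Here is a minimal sketch (an illustration added to this text, not part of the original appendix; it assumes NumPy and SciPy are available) that integrates f(x) f_2(x; a) for f(x) = (x − 5)² at increasing values of a and compares the result with the closed form 1/(2a) + 25, which tends to f(0) = 25:

```python
import numpy as np
from scipy.integrate import quad

def f(x):
    # Test function from footnote 2: f(x) = (x - 5)^2, so f(0) = 25.
    return (x - 5.0) ** 2

def f2(x, a):
    # Gaussian approximation to delta(x), normalized to 1.
    return np.sqrt(a / np.pi) * np.exp(-a * x**2)

for a in (1.0, 10.0, 100.0, 1000.0):
    numeric, _ = quad(lambda x: f(x) * f2(x, a), -np.inf, np.inf)
    exact = 1.0 / (2.0 * a) + 25.0  # closed form derived in footnote 2
    print(f"a = {a:8.1f}:  integral = {numeric:.6f}  (exact: {exact:.6f})")
```

As a grows, the printed values approach 25, which is exactly the sifting behavior of Eq. (E.1).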
Properties of δ(x)

Function δ(cx)

Let us see what δ(cx) is, using the Gaussian representation f_2:

    \delta(cx) = \lim_{a\to\infty} \sqrt{\frac{a}{\pi}}\, e^{-ac^2x^2}
               = \lim_{a\to\infty} \frac{1}{|c|} \sqrt{\frac{ac^2}{\pi}}\, e^{-ac^2x^2}
               = \frac{1}{|c|} \lim_{ac^2\to\infty} \sqrt{\frac{ac^2}{\pi}}\, e^{-ac^2x^2}
               = \frac{1}{|c|}\, \delta(x).

Therefore,

    \delta(cx) = \frac{1}{|c|}\, \delta(x).    (E.3)

Dirac δ in 3-D

The 3-D Dirac delta function is defined in the Cartesian coordinate system as

    \delta(\mathbf{r}) = \delta(x)\, \delta(y)\, \delta(z),

where r = (x, y, z). Then δ(r) denotes a peak of infinite height at r = 0, and δ(r − A) denotes an identical peak at the position indicated by the vector A from the origin. Each of the peaks is normalized to 1; i.e., the integral over the whole 3-D space equals 1. This means that Eq. (E.1) is satisfied, but this time x ∈ R³.

An Application of the Dirac Delta Function

When may such a concept as the Dirac delta function be useful? Here is an example. Imagine that we have (in 3-D space) two molecular charge distributions, ρ_A(r) and ρ_B(r). Each distribution consists of an electronic part and a nuclear part. How can such charge distributions be represented mathematically? There is no problem with the mathematical representation of the electronic parts: they are simply functions of the position r in space, −ρ_{el,A}(r) and −ρ_{el,B}(r) for the two molecules, respectively. The integrals of the corresponding electronic distributions yield, of course, −N_A and −N_B (in a.u.), i.e., minus the number of electrons (because the electrons carry negative charge). How, then, do we write the nuclear charge distribution as a function of r? There is no way to do this without the Dirac delta function. With it, the task becomes simple:

    \rho_{nucl,A}(\mathbf{r}) = \sum_{a \in A} Z_{A,a}\, \delta(\mathbf{r} - \mathbf{r}_a),
    \rho_{nucl,B}(\mathbf{r}) = \sum_{b \in B} Z_{B,b}\, \delta(\mathbf{r} - \mathbf{r}_b).

At the nuclear positions we put delta functions with "intensities" equal to the nuclear charges. For neutral molecules, \int \rho_{nucl,A}(\mathbf{r})\, d\mathbf{r} and \int \rho_{nucl,B}(\mathbf{r})\, d\mathbf{r} have to give +N_A and +N_B, respectively. Indeed, we have

    \int \rho_{nucl,A}(\mathbf{r})\, d\mathbf{r} = \sum_{a \in A} Z_{A,a} \int \delta(\mathbf{r} - \mathbf{r}_a)\, d\mathbf{r} = \sum_{a \in A} Z_{A,a} = N_A,
    \int \rho_{nucl,B}(\mathbf{r})\, d\mathbf{r} = \sum_{b \in B} Z_{B,b} \int \delta(\mathbf{r} - \mathbf{r}_b)\, d\mathbf{r} = \sum_{b \in B} Z_{B,b} = N_B.

Thus the Dirac delta function enables us to write the total charge distributions, and their interactions, in an elegant way:

    \rho_A(\mathbf{r}) = -\rho_{el,A}(\mathbf{r}) + \rho_{nucl,A}(\mathbf{r}),
    \rho_B(\mathbf{r}) = -\rho_{el,B}(\mathbf{r}) + \rho_{nucl,B}(\mathbf{r}).

To demonstrate the difference, let us first write the electrostatic interaction of the two charge distributions without the Dirac delta function:

    E_{inter} = \sum_{a \in A} \sum_{b \in B} \frac{Z_{A,a} Z_{B,b}}{|\mathbf{r}_a - \mathbf{r}_b|}
              - \sum_{a \in A} \int \frac{Z_{A,a}}{|\mathbf{r}_a - \mathbf{r}|}\, \rho_{el,B}(\mathbf{r})\, d\mathbf{r}
              - \sum_{b \in B} \int \frac{Z_{B,b}}{|\mathbf{r}_b - \mathbf{r}|}\, \rho_{el,A}(\mathbf{r})\, d\mathbf{r}
              + \iint \frac{\rho_{el,A}(\mathbf{r})\, \rho_{el,B}(\mathbf{r}')}{|\mathbf{r}' - \mathbf{r}|}\, d\mathbf{r}\, d\mathbf{r}'.

The four terms represent, respectively, the interactions of the nuclei of A with the nuclei of B, the nuclei of A with the electrons of B, the electrons of A with the nuclei of B, and the electrons of A with the electrons of B. With the Dirac delta function, the same expression reads:

    E_{inter} = \iint \frac{\rho_A(\mathbf{r})\, \rho_B(\mathbf{r}')}{|\mathbf{r}' - \mathbf{r}|}\, d\mathbf{r}\, d\mathbf{r}'.

The last expression follows from the definition of the Coulomb interaction and the definition of the integral.⁴ No matter what the charge distributions look like, whether diffuse (like the electronic ones) or pointlike (like those of the nuclei), the formula is always the same.

⁴ Of course, the two notations are equivalent, because inserting the total charge distributions into the last integral and using the properties of the Dirac delta function recovers the first expression for E_{inter}.
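On a computer, a nuclear charge distribution \sum_a Z_{A,a}\, \delta(\mathbf{r} - \mathbf{r}_a) is best kept symbolic: it is fully specified by the list of charges and positions, and by Eq. (E.2) every integral against it collapses to a sum over the nuclei. The sketch below (an added illustration, not from the original text; the water-like geometry and the lone proton are example inputs only) verifies the normalization of ρ_{nucl,A} and evaluates the nucleus-nucleus part of E_{inter}, which the delta functions reduce to the familiar point-charge sum:

```python
import numpy as np

# A distribution sum_a Z_a * delta(r - r_a) is fully determined by its
# "intensities" Z_a and centers r_a; no spatial grid is needed.
# Example geometry in a.u. (illustrative numbers only):
nuclei_A = [(8.0, np.array([ 0.00, 0.00, 0.0])),   # O
            (1.0, np.array([ 1.43, 1.11, 0.0])),   # H
            (1.0, np.array([-1.43, 1.11, 0.0]))]   # H
nuclei_B = [(1.0, np.array([ 0.00, 0.00, 5.0]))]   # a distant proton

# Integrating rho_nucl,A over all space (Eq. E.2 with f = 1) just sums
# the intensities, giving +N_A for the neutral molecule.
N_A = sum(Z for Z, _ in nuclei_A)
print("total nuclear charge of A:", N_A)  # 10, the number of electrons

# In the nucleus-nucleus part of E_inter, the double integral of
# rho_nucl,A(r) * rho_nucl,B(r') / |r - r'| collapses, delta by delta,
# into the point-charge sum, i.e., the first of the four terms above.
E_nn = sum(Za * Zb / np.linalg.norm(ra - rb)
           for Za, ra in nuclei_A
           for Zb, rb in nuclei_B)
print("nucleus-nucleus interaction (a.u.):", E_nn)
```

Whichever way a charge distribution is stored (point charges for the nuclei, a function or grid for the electrons), the single formula for E_{inter} applies unchanged; only the bookkeeping differs.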