When functions have no value(s): Delta functions and distributions
Steven G. Johnson, MIT course 18.303 notes
Created October 2010, updated March 8, 2017.

Abstract

These notes give a brief introduction to the motivations, concepts, and properties of distributions, which generalize the notion of functions f(x) to allow derivatives of discontinuities, "delta" functions, and other nice things. This generalization is increasingly important the more you work with linear PDEs, as we do in 18.303. For example, Green's functions are extremely cumbersome if one does not allow delta functions. Moreover, solving PDEs with functions that are not classically differentiable is of great practical importance (e.g. a plucked string with a triangle shape is not twice differentiable, making the wave equation problematic with traditional functions). Any serious work with PDEs will eventually run into the concept of a "weak solution," which is essentially a version of the distribution concept.

1 What's wrong with functions?

The most familiar notion of a function f(x) is a map from real numbers ℝ to real numbers ℝ (or maybe complex numbers ℂ); that is, for every x you have a value y = f(x). Of course, you may know that one can define functions from any set to any other set, but at first glance it seems that ℝ → ℝ functions (and multi-dimensional generalizations thereof) are the best-suited concept for describing most physical quantities—for example, velocity as a function of time, or pressure or density as a function of position in a fluid, and so on. Unfortunately, such functions have some severe drawbacks that, eventually, lead them to be replaced in some contexts by another concept: distributions (also called generalized functions). What are these drawbacks?

1.1 No delta functions

For lots of applications, such as those involving PDEs and Green's functions, one would like to have a function δ(x) whose integral is "concentrated" at the point x = 0. That is, one would like the function δ(x) = 0 for all x ≠ 0, but with ∫ δ(x) dx = 1 for any integration region that includes x = 0; this concept is called a "Dirac delta function" or simply a "delta function." δ(x) is usually the simplest right-hand side for which to solve differential equations, yielding a Green's function. It is also the simplest way to consider physical effects that are concentrated within very small volumes or times, for which you don't actually want to worry about the microscopic details in this volume—for example, think of the concepts of a "point charge," a "point mass," a force plucking a string at "one point," a "kick" that "suddenly" imparts some momentum to an object, and so on. The problem is that there is no classical function δ(x) having these properties. For example, one could imagine constructing this function as the limit

$$\delta(x) = \lim_{\Delta x\to 0^+}\begin{cases}\frac{1}{\Delta x} & 0\le x<\Delta x\\ 0 & \text{otherwise}\end{cases} = \lim_{\Delta x\to 0^+}\delta_{\Delta x}(x)?$$

For any Δx > 0, the function δ_Δx(x) above has integral = 1 and is zero except near x = 0. Unfortunately, the Δx → 0⁺ limit does not exist as an ordinary function: δ_Δx(x) approaches ∞ for x = 0, but of course ∞ is not a real number.

Informally, one often sees "definitions" of δ(x) that describe it as some mysterious object that is "not quite" a function, which = 0 for x ≠ 0 but is undefined at x = 0, and which is "only really defined inside an integral" (where it "integrates" to 1).¹ This may leave you with a queasy feeling that δ(x) is somehow not real or rigorous (and therefore anything based on it may be suspect). For example, integration is an operation that is classically only defined for ordinary functions, so it may not even be clear (yet) what "∫" means when we write "∫ δ(x) dx".

¹ Historically, this unsatisfactory description is precisely how δ(x) first appeared, and for many years there was a corresponding cloud of guilt and uncertainty surrounding any usage of it. Most famously, an informal δ(x) notion was popularized by physicist Paul Dirac, who in his Principles of Quantum Mechanics (1930) wrote: "Thus δ(x) is not a quantity which can be generally used in mathematical analysis like an ordinary function, but its use must be confined to certain simple types of expression for which it is obvious that no inconsistency can arise." You know you are on shaky ground, in mathematics, when you are forced to appeal to the "obvious."

1.2 Not all functions are differentiable

Most physical laws can be written in the form of derivatives, but lots of functions are not differentiable. Discontinuous functions arise all of the time at the interface between two materials (e.g. think of the density at the interface between glass and air). Of course, at a microscopic level you could argue that the quantum wavefunctions might be continuous, but one hardly wishes to resort to atoms and quantum mechanics every time a material has a boundary! The classic example of a discontinuous function is the Heaviside step function:

$$S(x) = \begin{cases}1 & x\ge 0\\ 0 & x<0.\end{cases}$$

The derivative S′(x) is zero everywhere except at x = 0, where the derivative does not exist—the slope at x = 0 is "infinity." Notice that S′(x) very much resembles δ(x), and ∫₋∞ˣ δ(x′) dx′ would certainly look something like S(x) if it existed (since the "integral" of δ should be 1 for x > 0)—this is not a coincidence, and it would be a shame not to exploit it!

A function doesn't need to be discontinuous to lack a derivative. Consider the function |x|: its derivative is +1 for x > 0 and −1 for x < 0, but at x = 0 the derivative doesn't exist. We say that |x| is only "piecewise" differentiable.

Note that S(x) is very useful for writing down all sorts of functions with discontinuities. |x| = xS(x) − xS(−x), for example, and the δ_Δx(x) "box" function on the right-hand side of the δ(x) "definition" above can be written δ_Δx(x) = [S(x) − S(x − Δx)]/Δx.

1.3 Nagging worries about discrepancies at isolated points

When we try to do linear algebra with functions, we continually find ourselves worrying about excluding odd caveats and counterexamples that have to do with finite discrepancies at isolated points. For example, a Fourier series of a square-integrable function converges everywhere… except at isolated points of discontinuity [like the point x = 0 for S(x), where a Fourier series would converge to 0.5]. As another example, ⟨u, v⟩ = ∫ u v̄ defines an inner product on functions and a norm ‖u‖² = ⟨u, u⟩ > 0 for u ≠ 0… except for u(x) that are only nonzero for isolated points. And with functions like S(x) there are all sorts of apparently pointless questions about what value to assign exactly at the discontinuity. It may likewise seem odd to care about what value to assign the slope of |x| at x = 0; surely any value in [−1, 1] should do?

1.4 Limits and derivatives cannot always be interchanged

In numerical and analytical PDE methods, we are continually writing functions as limits. A Fourier series is a limit as the number of terms goes to infinity. A finite-difference method solves the problem in the limit as Δx → 0. We initially find the Green's function as the limit of the response to a box-like function that is nonzero only inside a width Δx, and then take the limit Δx → 0. After all of these kinds of things, we eventually substitute our solution back into the PDE, and we assume that the limit of the solution is still a solution. In doing so, however, we usually end up interchanging the limits and the differentiation. For example, we usually differentiate a Fourier series by:

$$\frac{d}{dx}\sum_{n=1}^{\infty} a_n\sin(n\pi x) = \sum_{n=1}^{\infty}\frac{d}{dx}\,a_n\sin(n\pi x) = \sum_{n=1}^{\infty} n\pi a_n\cos(n\pi x),$$

but this is not always true if we interpret "=" in the usual sense of being true for every x. The general problem is that one cannot always interchange limits and derivatives, i.e. (d/dx) lim ≠ lim (d/dx). (Note that Σₙ₌₁^∞ is really a limit lim_{N→∞} Σₙ₌₁^N.)

A simple example of this is the function

$$f_\varepsilon(x)=\begin{cases}-\varepsilon & x<-\varepsilon\\ x & -\varepsilon\le x\le\varepsilon\\ \varepsilon & x>\varepsilon,\end{cases}\qquad f_\varepsilon'(x)=\begin{cases}1 & -\varepsilon<x<\varepsilon\\ 0 & |x|>\varepsilon.\end{cases}$$

The limit as ε → 0 of f_ε(x) is simply zero. But f_ε′(0) = 1 for all ε, so

$$0=\left.\frac{d}{dx}\lim_{\varepsilon\to 0}f_\varepsilon\right|_{x=0}\ne 1=\left.\lim_{\varepsilon\to 0}\frac{d}{dx}f_\varepsilon\right|_{x=0}.$$

Notice, however, that this mismatch only occurs at one isolated point x = 0. The same thing happens for Fourier series: differentiation term-by-term works except at isolated points. Thus, this problem returns to the complaint in the previous section 1.3.

1.5 Too much information

A function f(x) gives us a value at every point x, but does this really correspond to a measurable quantity in the physical universe? How would you measure the velocity at one instant in time, or the density at one point in a fluid? Of course, your measurement device could be very precise, and very small, and very fast, but in the end all you can ever measure are averages of f(x) over a small region of space and/or time.

[The generalization to functions φ(x⃗) for x⃗ ∈ ℝᵈ, with the distributions corresponding to d-dimensional integrals, is very straightforward, but we will stick to d = 1 for simplicity.] Second,⁴ we require that f{φ} act like integration in that it must be linear:
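The box-function construction of δ(x) from section 1.1 is easy to probe numerically. Below is a minimal sketch (the helper names `delta_box` and `integrate` are mine, not from the notes): the integral of δ_Δx stays equal to 1 for every width Δx, and integrating δ_Δx against a smooth test function picks out approximately f(0), yet the pointwise value at x = 0 blows up as 1/Δx, so there is no ordinary limit function.

```python
# Numerical sketch (helper names are assumptions, not from the notes):
# the box function delta_box has integral 1 for every width dx, and
# integrating it against a smooth f approaches f(0) as dx -> 0 --
# the sense in which delta is "only really defined inside an integral."
import math

def delta_box(x, dx):
    """The box approximation: 1/dx on [0, dx), 0 elsewhere."""
    return 1.0 / dx if 0 <= x < dx else 0.0

def integrate(g, a, b, n=100_000):
    """Simple midpoint-rule quadrature of g on [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = math.cos  # a smooth test function with f(0) = 1

for dx in [1.0, 0.1, 0.01]:
    total = integrate(lambda x: delta_box(x, dx), -1, 1)
    moment = integrate(lambda x: delta_box(x, dx) * f(x), -1, 1)
    print(f"dx={dx}: integral={total:.4f}, average of f={moment:.6f}")
# The integral stays ~1 while the weighted average approaches f(0) = 1;
# meanwhile delta_box(0, dx) = 1/dx diverges, so no pointwise limit exists.
```

As the notes emphasize, only the integrated quantities behave well in the limit; the pointwise values do not converge to any function.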
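The isolated-point caveat of section 1.3 can also be checked directly. A small sketch, using the standard odd-harmonic Fourier expansion of the step function on (−1, 1), namely 1/2 + (2/π) Σ_{n odd} sin(nπx)/n (this expansion is a textbook fact I am supplying, not a formula from this chunk of the notes): away from the jump the partial sums converge to S(x), but at x = 0 every partial sum equals exactly 0.5.

```python
# Numerical sketch (assumed expansion: Fourier series of the step S(x)
# on (-1, 1) is 1/2 + (2/pi) * sum over odd n of sin(n*pi*x)/n).
import math

def step_partial_sum(x, n_max):
    """Partial Fourier sum for S(x) on (-1, 1), using odd harmonics up to n_max."""
    total = 0.5
    for n in range(1, n_max + 1, 2):  # odd harmonics only
        total += (2 / math.pi) * math.sin(n * math.pi * x) / n
    return total

for n_max in [9, 99, 999]:
    print(f"N={n_max}: at x=0.3 -> {step_partial_sum(0.3, n_max):.4f}, "
          f"at x=0 -> {step_partial_sum(0.0, n_max):.4f}")
# The x = 0.3 values tend to S(0.3) = 1, while at the jump x = 0 the
# partial sum is 0.5 for every N -- the isolated-point discrepancy above.
```

This is exactly the "converges everywhere… except at isolated points of discontinuity" behavior described in section 1.3.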
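Finally, the interchange-of-limits counterexample f_ε from section 1.4 can be sketched in a few lines (function names are mine): f_ε tends uniformly to the zero function as ε → 0, yet its slope at x = 0 equals 1 for every ε > 0, so differentiating the limit and taking the limit of the derivatives disagree at that isolated point.

```python
# Numerical sketch of f_eps(x): -eps for x < -eps, x on [-eps, eps],
# +eps for x > eps. Its maximum magnitude is eps (so the limit function
# is identically 0), but its slope at x = 0 is 1 for every eps.
def f_eps(x, eps):
    """Clamp x to [-eps, eps]."""
    return max(-eps, min(eps, x))

def slope_at_zero(eps, h=1e-9):
    """Centered finite difference at x = 0 (h chosen well below eps)."""
    return (f_eps(h, eps) - f_eps(-h, eps)) / (2 * h)

for eps in [1.0, 0.1, 0.001]:
    # max over x of |f_eps(x)| is exactly eps, so f_eps -> 0 uniformly
    print(f"eps={eps}: max|f_eps| = {eps}, slope at 0 = {slope_at_zero(eps)}")
# max|f_eps| -> 0 (the limit function is 0, with slope 0 everywhere),
# but the slope at x = 0 is 1 for every eps > 0: d/dx lim != lim d/dx.
```

The mismatch is confined to the single point x = 0, mirroring the term-by-term differentiation of Fourier series discussed in the notes.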