Delta Functions and Distributions


When functions have no value(s): Delta functions and distributions

Steven G. Johnson, MIT course 18.303 notes

Created October 2010, updated March 8, 2017.

Abstract

These notes give a brief introduction to the motivations, concepts, and properties of distributions, which generalize the notion of functions f(x) to allow derivatives of discontinuities, "delta" functions, and other nice things. This generalization is increasingly important the more you work with linear PDEs, as we do in 18.303. For example, Green's functions are extremely cumbersome if one does not allow delta functions. Moreover, solving PDEs with functions that are not classically differentiable is of great practical importance (e.g. a plucked string with a triangle shape is not twice differentiable, making the wave equation problematic with traditional functions). Any serious work with PDEs will eventually run into the concept of a "weak solution," which is essentially a version of the distribution concept.

1 What's wrong with functions?

The most familiar notion of a function f(x) is a map from real numbers R to real numbers R (or maybe complex numbers C); that is, for every x you have a value y = f(x). Of course, you may know that one can define functions from any set to any other set, but at first glance it seems that R → R functions (and multi-dimensional generalizations thereof) are the best-suited concept for describing most physical quantities—for example, velocity as a function of time, or pressure or density as a function of position in a fluid, and so on. Unfortunately, such functions have some severe drawbacks that, eventually, lead them to be replaced in some contexts by another concept: distributions (also called generalized functions). What are these drawbacks?

1.1 No delta functions

For lots of applications, such as those involving PDEs and Green's functions, one would like to have a function δ(x) whose integral is "concentrated" at the point x = 0. That is, one would like the function δ(x) = 0 for all x ≠ 0, but with ∫ δ(x) dx = 1 for any integration region that includes x = 0; this concept is called a "Dirac delta function" or simply a "delta function." δ(x) is usually the simplest right-hand side for which to solve differential equations, yielding a Green's function. It is also the simplest way to consider physical effects that are concentrated within very small volumes or times, for which you don't actually want to worry about the microscopic details in this volume—for example, think of the concepts of a "point charge," a "point mass," a force plucking a string at "one point," a "kick" that "suddenly" imparts some momentum to an object, and so on. The problem is that there is no classical function δ(x) having these properties.

For example, one could imagine constructing this function as the limit

δ(x) = lim_{∆x→0+} δ_∆x(x) ?   where   δ_∆x(x) = { 1/∆x for 0 ≤ x < ∆x; 0 otherwise }.

For any ∆x > 0, the box function δ_∆x(x) defined above has integral = 1 and is zero except near x = 0. Unfortunately, the ∆x → 0 limit does not exist as an ordinary function: δ_∆x(x) approaches ∞ for x = 0, but of course ∞ is not a real number.
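A small illustrative Python sketch of this behavior (the grid and the particular values of ∆x are arbitrary choices made here, not part of the notes): each box δ_∆x integrates to 1, but its peak value 1/∆x grows without bound as ∆x shrinks, so the boxes cannot converge to any ordinary function.

import numpy as np

def delta_box(x, dx):
    """Box approximation to the delta function: 1/dx on [0, dx), 0 elsewhere."""
    return np.where((x >= 0) & (x < dx), 1.0 / dx, 0.0)

x = np.linspace(-1.0, 1.0, 2_000_001)   # fine grid on [-1, 1]
spacing = x[1] - x[0]

for dx in [0.1, 0.01, 0.001]:
    y = delta_box(x, dx)
    integral = y.sum() * spacing   # Riemann sum, stays ≈ 1 for every ∆x
    peak = y.max()                 # equals 1/∆x, grows without bound as ∆x shrinks
    print(f"dx = {dx:6.3f}   integral ≈ {integral:.4f}   peak = {peak:.1f}")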
Informally, one often sees "definitions" of δ(x) that describe it as some mysterious object that is "not quite" a function, which = 0 for x ≠ 0 but is undefined at x = 0, and which is "only really defined inside an integral" (where it "integrates" to 1).¹ This may leave you with a queasy feeling that δ(x) is somehow not real or rigorous (and therefore anything based on it may be suspect). For example, integration is an operation that is classically only defined for ordinary functions, so it may not even be clear (yet) what "∫" means when we write "∫ δ(x) dx".

¹ Historically, this unsatisfactory description is precisely how δ(x) first appeared, and for many years there was a corresponding cloud of guilt and uncertainty surrounding any usage of it. Most famously, an informal δ(x) notion was popularized by physicist Paul Dirac, who in his Principles of Quantum Mechanics (1930) wrote: "Thus δ(x) is not a quantity which can be generally used in mathematical analysis like an ordinary function, but its use must be confined to certain simple types of expression for which it is obvious that no inconsistency can arise." You know you are on shaky ground, in mathematics, when you are forced to appeal to the "obvious."

1.2 Not all functions are differentiable

Most physical laws can be written in the form of derivatives, but lots of functions are not differentiable. Discontinuous functions arise all of the time at the interface between two materials (e.g. think of the density at the interface between glass and air). Of course, at a microscopic level you could argue that the quantum wavefunctions might be continuous, but one hardly wishes to resort to atoms and quantum mechanics every time a material has a boundary!

The classic example of a discontinuous function is the Heaviside step function:

S(x) = { 1 for x ≥ 0; 0 for x < 0 }.

The derivative S′(x) is zero everywhere except at x = 0, where the derivative does not exist—the slope at x = 0 is "infinity." Notice that S′(x) very much resembles δ(x), and ∫_{−∞}^{x} δ(x′) dx′ would certainly look something like S(x) if it existed (since the "integral" of δ should be 1 for x > 0)—this is not a coincidence, and it would be a shame not to exploit it!

A function doesn't need to be discontinuous to lack a derivative. Consider the function |x|: its derivative is +1 for x > 0 and −1 for x < 0, but at x = 0 the derivative doesn't exist. We say that |x| is only "piecewise" differentiable.

Note that S(x) is very useful for writing down all sorts of functions with discontinuities. |x| = xS(x) − xS(−x), for example, and the δ_∆x(x) "box" function on the right-hand side of the δ(x) "definition" above can be written δ_∆x(x) = [S(x) − S(x − ∆x)]/∆x.

1.3 Nagging worries about discrepancies at isolated points

When we try to do linear algebra with functions, we continually find ourselves worrying about excluding odd caveats and counterexamples that have to do with finite discrepancies at isolated points. For example, a Fourier series of a square-integrable function converges everywhere… except at isolated points of discontinuity [like the point x = 0 for S(x), where a Fourier series would converge to 0.5]. As another example, ⟨u, v⟩ = ∫ ū v defines an inner product on functions and a norm ‖u‖² = ⟨u, u⟩ > 0 for u ≠ 0… except for u(x) that are only nonzero for isolated points. And with functions like S(x) there are all sorts of apparently pointless questions about what value to assign exactly at the discontinuity. It may likewise seem odd to care about what value to assign the slope of |x| at x = 0; surely any value in [−1, 1] should do?
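A small illustrative Python sketch of the Fourier-series caveat (using the standard expansion of S(x) on the interval [−1, 1]; the interval and the numbers of terms are choices made here for concreteness): the partial sums approach 1 at a point just to the right of the jump, but at the jump itself they stay at 0.5 rather than at S(0) = 1.

import numpy as np

def step_partial_sum(x, n_terms):
    """Partial Fourier series of the Heaviside step S(x) on [-1, 1]:
    S(x) ~ 1/2 + (2/pi) * sum over odd k of sin(k*pi*x)/k."""
    total = 0.5
    for k in range(1, 2 * n_terms, 2):   # odd k = 1, 3, 5, ...
        total += (2.0 / (np.pi * k)) * np.sin(k * np.pi * x)
    return total

for n in [10, 100, 1000]:
    at_jump = step_partial_sum(0.0, n)      # stays at exactly 0.5
    near_jump = step_partial_sum(0.01, n)   # approaches S(0.01) = 1
    print(f"N = {n:5d}   at x=0: {at_jump:.4f}   at x=0.01: {near_jump:.4f}")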
1.4 Limits and derivatives cannot always be interchanged

In numerical and analytical PDE methods, we are continually writing functions as limits. A Fourier series is a limit as the number of terms goes to infinity. A finite-difference method solves the problem in the limit as ∆x → 0. We initially find the Green's function as the limit of the response to a box-like function that is nonzero only within a width ∆x, and then take the limit ∆x → 0. After all of these kinds of things, we eventually substitute our solution back into the PDE, and we assume that the limit of the solution is still a solution. In doing so, however, we usually end up interchanging the limits and the differentiation. For example, we usually differentiate a Fourier series by

d/dx Σ_{n=1}^{∞} a_n sin(nπx) = Σ_{n=1}^{∞} d/dx [a_n sin(nπx)] = Σ_{n=1}^{∞} nπ a_n cos(nπx),

but this is not always true if we interpret "=" in the usual sense of being true for every x. The general problem is that one cannot always interchange limits and derivatives, i.e. (d/dx) lim ≠ lim (d/dx). (Note that Σ_{n=1}^{∞} is really a limit lim_{N→∞} Σ_{n=1}^{N}.)

A simple example of this is the function

f_ε(x) = { −ε for x < −ε;  x for −ε ≤ x ≤ ε;  ε for x > ε },   with   f_ε′(x) = { 0 for |x| > ε;  1 for −ε < x < ε }.

The limit as ε → 0 of f_ε(x) is simply zero. But f_ε′(0) = 1 for all ε, so

0 = d/dx [lim_{ε→0} f_ε] |_{x=0}  ≠  1 = lim_{ε→0} [d f_ε/dx] |_{x=0}.

Notice, however, that this mismatch only occurs at one isolated point x = 0. The same thing happens for Fourier series: differentiation term-by-term works except at isolated points. Thus, this problem returns to the complaint in the previous section 1.3.

1.5 Too much information

A function f(x) gives us a value at every point x, but does this really correspond to a measurable quantity in the physical universe? How would you measure the velocity at one instant in time, or the density at one point in a fluid? Of course, your measurement device could be very precise, and very small, and very fast, but in the end all you can ever measure are averages of f(x) over a small region of space and/or time.
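This observation, that only local averages are measurable, is also a good way to preview the distribution idea that these notes go on to develop: instead of asking for the value of δ at a point, one asks what number it produces when integrated against a smooth "test function" φ. A small illustrative Python sketch (the Gaussian test function and the values of ∆x are arbitrary choices): the pairing ∫ δ_∆x(x) φ(x) dx approaches φ(0) as ∆x → 0, even though δ_∆x(x) itself has no pointwise limit.

import numpy as np
from scipy.integrate import quad

def delta_box(x, dx):
    """Box approximation to the delta function: 1/dx on [0, dx), zero elsewhere."""
    return (1.0 / dx) if 0.0 <= x < dx else 0.0

def phi(x):
    """An arbitrary smooth test function (a Gaussian bump)."""
    return np.exp(-x**2)

for dx in [0.5, 0.05, 0.005]:
    # "Measure" delta_box with the test function: integrate their product.
    pairing, _ = quad(lambda x: delta_box(x, dx) * phi(x), 0.0, dx)
    print(f"dx = {dx:7.3f}   integral of delta_box * phi ≈ {pairing:.6f}")

print(f"phi(0) = {phi(0.0):.6f}   # the pairing approaches this value")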
Recommended publications
  • Sobolev Spaces, Theory and Applications
    Sobolev spaces, theory and applications. Piotr Hajłasz. Introduction. These are the notes that I prepared for the participants of the Summer School in Mathematics in Jyväskylä, August 1998. I thank Pekka Koskela for his kind invitation. This is the second summer course that I delivered in Finland. Last August I delivered a similar course entitled Sobolev spaces and calculus of variations in Helsinki. The subject was similar, so it was not possible to avoid overlapping. However, the overlapping is little. I estimate it as 25%. While preparing the notes I used partially the notes that I prepared for the previous course. Moreover, Lectures 9 and 10 are based on the text of my joint work with Pekka Koskela [33]. The notes probably will not cover all the material presented during the course and at the same time not all the material written here will be presented during the School. This is, however, not so bad: if some of the results presented in the lectures go beyond the notes, then there will be some reason to listen to the course, and at the same time if some of the results are explained in more detail in the notes, then it might be worth looking at them. The notes were prepared in a hurry and so there are many bugs and they are not complete. Some of the sections and theorems are unfinished. At the end of the notes I enclosed some references together with comments. This section was also prepared in a hurry and so probably many of the authors who contributed to the subject were not mentioned.
  • Discontinuous Forcing Functions
    Math 135A, Winter 2012. Discontinuous forcing functions. 1 Preliminaries. If f(t) is defined on the interval [0, ∞), then its Laplace transform is defined to be F(s) = L(f(t)) = ∫_0^∞ e^(−st) f(t) dt, as long as this integral is defined and converges. In particular, if f is of exponential order and is piecewise continuous, the Laplace transform of f(t) will be defined. • f is of exponential order if there are constants M and c so that |f(t)| ≤ M e^(ct). Since the integral ∫_0^∞ e^(−st) M e^(ct) dt converges if s > c, then by a comparison test (like (11.7.2) in Salas–Hille–Etgen), the integral defining the Laplace transform of f(t) will converge. • f is piecewise continuous if over each interval [0, b], f(t) has only finitely many discontinuities, and at each point a in [0, b], both of the limits lim_{t→a−} f(t) and lim_{t→a+} f(t) exist. They need not be equal, but they must exist. (At the endpoints 0 and b, the appropriate one-sided limits must exist.) 2 Step functions. Define u(t) to be the function u(t) = { 0 if t < 0; 1 if t ≥ 0 }. Then u(t) is called the step function, or sometimes the Heaviside step function: it jumps from 0 to 1 at t = 0. Note that for any number a > 0, the graph of the function u(t − a) is the same as the graph of u(t), but translated right by a: u(t − a) jumps from 0 to 1 at t = a.
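    Since L(u(t − a))(s) = e^(−as)/s for s > 0 is the standard closed form, the definition above is easy to check numerically. A small illustrative Python sketch (the values of a and s are arbitrary choices, and the improper integral is truncated at t = 50, where e^(−st) is negligible):

    import numpy as np
    from scipy.integrate import quad

    def u(t):
        """Heaviside step function: 0 for t < 0, 1 for t >= 0."""
        return 1.0 if t >= 0 else 0.0

    a, s = 2.0, 1.5
    # Numerically evaluate the defining integral of L(u(t - a))(s).
    numeric, _ = quad(lambda t: np.exp(-s * t) * u(t - a), 0.0, 50.0, points=[a])
    closed_form = np.exp(-a * s) / s
    print(f"numeric ≈ {numeric:.6f}, closed form e^(-as)/s = {closed_form:.6f}")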
  • Notes on Partial Differential Equations John K. Hunter
    Notes on Partial Differential Equations. John K. Hunter, Department of Mathematics, University of California at Davis. Contents: Chapter 1. Preliminaries 1; 1.1. Euclidean space 1; 1.2. Spaces of continuous functions 1; 1.3. Hölder spaces 2; 1.4. L^p spaces 3; 1.5. Compactness 6; 1.6. Averages 7; 1.7. Convolutions 7; 1.8. Derivatives and multi-index notation 8; 1.9. Mollifiers 10; 1.10. Boundaries of open sets 12; 1.11. Change of variables 16; 1.12. Divergence theorem 16; Chapter 2. Laplace's equation 19; 2.1. Mean value theorem 20; 2.2. Derivative estimates and analyticity 23; 2.3. Maximum principle 26; 2.4. Harnack's inequality 31; 2.5. Green's identities 32; 2.6. Fundamental solution 33; 2.7. The Newtonian potential 34; 2.8. Singular integral operators 43; Chapter 3. Sobolev spaces 47; 3.1. Weak derivatives 47; 3.2. Examples 47; 3.3. Distributions 50; 3.4. Properties of weak derivatives 53; 3.5. Sobolev spaces 56; 3.6. Approximation of Sobolev functions 57; 3.7. Sobolev embedding: p < n 57; 3.8. Sobolev embedding: p > n 66; 3.9. Boundary values of Sobolev functions 69; 3.10. Compactness results 71; 3.11. Sobolev functions on Ω ⊂ R^n 73; 3.A. Lipschitz functions 75; 3.B. Absolutely continuous functions 76; 3.C. Functions of bounded variation 78; 3.D. Borel measures on R 80; 3.E. Radon measures on R 82; 3.F. Lebesgue-Stieltjes measures 83; 3.G. Integration 84; 3.H. Summary 86; Chapter 4.
  • Elliptic PDEs
    CHAPTER 4. Elliptic PDEs. One of the main advantages of extending the class of solutions of a PDE from classical solutions with continuous derivatives to weak solutions with weak derivatives is that it is easier to prove the existence of weak solutions. Having established the existence of weak solutions, one may then study their properties, such as uniqueness and regularity, and perhaps prove under appropriate assumptions that the weak solutions are, in fact, classical solutions. There is often considerable freedom in how one defines a weak solution of a PDE; for example, the function space to which a solution is required to belong is not given a priori by the PDE itself. Typically, we look for a weak formulation that reduces to the classical formulation under appropriate smoothness assumptions and which is amenable to a mathematical analysis; the notion of solution and the spaces to which solutions belong are dictated by the available estimates and analysis. 4.1. Weak formulation of the Dirichlet problem. Let us consider the Dirichlet problem for the Laplacian with homogeneous boundary conditions on a bounded domain Ω in R^n: (4.1) −∆u = f in Ω, (4.2) u = 0 on ∂Ω. First, suppose that the boundary of Ω is smooth and u, f : Ω → R are smooth functions. Multiplying (4.1) by a test function φ, integrating the result over Ω, and using the divergence theorem, we get (4.3) ∫_Ω Du · Dφ dx = ∫_Ω fφ dx for all φ ∈ C_c^∞(Ω). The boundary terms vanish because φ = 0 on the boundary.
  • On Stochastic Distributions and Currents
    Mathematics and Mechanics of Complex Systems, vol. 4, no. 3-4, 2016, dx.doi.org/10.2140/memocs.2016.4.373. Vincenzo Capasso and Franco Flandoli, On Stochastic Distributions and Currents. Dedicated to Lucio Russo, on the occasion of his 70th birthday. In many applications, it is of great importance to handle random closed sets of different (even though integer) Hausdorff dimensions, including local information about initial conditions and growth parameters. Following a standard approach in geometric measure theory, such sets may be described in terms of suitable measures. For a random closed set of lower dimension with respect to the environment space, the relevant measures induced by its realizations are singular with respect to the Lebesgue measure, and so their usual Radon–Nikodym derivatives are zero almost everywhere. In this paper, how to cope with these difficulties has been suggested by introducing random generalized densities (distributions) à la Dirac–Schwartz, for both the deterministic case and the stochastic case. For the last one, mean generalized densities are analyzed, and they have been related to densities of the expected values of the relevant measures. Actually, distributions are a subclass of the larger class of currents; in the usual Euclidean space of dimension d, currents of any order k ∈ {0, 1, …, d} or k-currents may be introduced.
  • A Corrective View of Neural Networks: Representation, Memorization and Learning
    Proceedings of Machine Learning Research vol 125:1–54, 2020. 33rd Annual Conference on Learning Theory. A Corrective View of Neural Networks: Representation, Memorization and Learning. Guy Bresler ([email protected]), Department of EECS, MIT, Cambridge, MA, USA. Dheeraj Nagaraj ([email protected]), Department of EECS, MIT, Cambridge, MA, USA. Editors: Jacob Abernethy and Shivani Agarwal. Abstract. We develop a corrective mechanism for neural network approximation: the total available non-linear units are divided into multiple groups and the first group approximates the function under consideration, the second approximates the error in approximation produced by the first group and corrects it, the third group approximates the error produced by the first and second groups together, and so on. This technique yields several new representation and learning results for neural networks: 1. Two-layer neural networks in the random features regime (RF) can memorize arbitrary labels for n arbitrary points in R^d with Õ(n/θ⁴) ReLUs, where θ is the minimum distance between two different points. This bound can be shown to be optimal in n up to logarithmic factors. 2. Two-layer neural networks with ReLUs and smoothed ReLUs can represent functions with an error of at most ε with O(C(a, d) ε^(−1/(a+1))) units for a ∈ N ∪ {0} when the function has Θ(ad) bounded derivatives. In certain cases d can be replaced with effective dimension q ≪ d. Our results indicate that neural networks with only a single nonlinear layer are surprisingly powerful with regards to representation, and show that in contrast to what is suggested in recent work, depth is not needed in order to represent highly smooth functions.
  • Lecture Notes for MATH 592A
    Lecture Notes for MATH 592A. Vladislav Panferov. February 25, 2008. 1. Introduction. As a motivation for developing the theory let's consider the following boundary-value problem, −u″ + u = f(x), x ∈ Ω = (0, 1), u(0) = 0, u(1) = 0, where f is a given (smooth) function. We know that the solution is unique, satisfies the stability estimate following from the maximum principle, and it can be expressed explicitly through Green's function. However, there is another way "to say something about the solution", quite independent of what we've done before. Let's multiply the differential equation by u and integrate by parts. We get −[u′u]_{x=0}^{x=1} + ∫_0^1 (u′² + u²) dx = ∫_0^1 f u dx. The boundary terms vanish because of the boundary conditions. We introduce the following notations: ‖u‖² = ∫_0^1 u² dx (the norm), and (f, u) = ∫_0^1 f u dx (the inner product). Then the integral identity above can be written in the short form as ‖u′‖² + ‖u‖² = (f, u). Now we notice the following inequality: |(f, u)| ≤ ‖f‖ ‖u‖ (Cauchy–Schwarz). This may be familiar from linear algebra. For a quick proof notice that ‖f + λu‖² = ‖f‖² + 2λ(f, u) + λ²‖u‖² ≥ 0 for any λ. This expression is a quadratic function in λ which has a minimum for λ = −(f, u)/‖u‖². Using this value of λ and rearranging the terms we get (f, u)² ≤ ‖f‖² ‖u‖².
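    Carrying out that final rearrangement explicitly (a routine step spelled out here for convenience, assuming u ≠ 0; the inequality is trivial when u = 0): substituting λ = −(f, u)/‖u‖² into ‖f + λu‖² ≥ 0 gives

    0 ≤ ‖f‖² − 2 (f, u)²/‖u‖² + (f, u)²/‖u‖² = ‖f‖² − (f, u)²/‖u‖²,   and hence   (f, u)² ≤ ‖f‖² ‖u‖².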
  • SOME ALGEBRAIC DEFINITIONS and CONSTRUCTIONS Definition
    SOME ALGEBRAIC DEFINITIONS AND CONSTRUCTIONS. Definition 1. A monoid is a set M with an element e and an associative multiplication M × M → M for which e is a two-sided identity element: em = m = me for all m ∈ M. A group is a monoid in which each element m has an inverse element m⁻¹, so that mm⁻¹ = e = m⁻¹m. A homomorphism f : M → N of monoids is a function f such that f(mn) = f(m)f(n) and f(e_M) = e_N. A "homomorphism" of any kind of algebraic structure is a function that preserves all of the structure that goes into the definition. When M is commutative, mn = nm for all m, n ∈ M, we often write the product as +, the identity element as 0, and the inverse of m as −m. As a convention, it is convenient to say that a commutative monoid is "Abelian" when we choose to think of its product as "addition", but to use the word "commutative" when we choose to think of its product as "multiplication"; in the latter case, we write the identity element as 1. Definition 2. The Grothendieck construction on an Abelian monoid is an Abelian group G(M) together with a homomorphism of Abelian monoids i : M → G(M) such that, for any Abelian group A and homomorphism of Abelian monoids f : M → A, there exists a unique homomorphism of Abelian groups f̃ : G(M) → A such that f̃ ∘ i = f. We construct G(M) explicitly by taking equivalence classes of ordered pairs (m, n) of elements of M, thought of as "m − n", under the equivalence relation generated by (m, n) ∼ (m′, n′) if m + n′ = n + m′.
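    A concrete illustration of the explicit construction (a small Python sketch of my own, taking M to be the Abelian monoid of natural numbers under addition): the pairs (m, n) with the relation m + n′ = n + m′ behave exactly like the integers m − n.

    class GClass:
        """Element of G(M) for the Abelian monoid M = (N, +):
        the equivalence class of the pair (m, n), thought of as 'm - n'."""

        def __init__(self, m, n):
            self.m, self.n = m, n

        def __eq__(self, other):
            # (m, n) ~ (m', n') iff m + n' = n + m'
            return self.m + other.n == self.n + other.m

        def __add__(self, other):
            # The group operation is defined componentwise on representatives.
            return GClass(self.m + other.m, self.n + other.n)

        def inverse(self):
            # The inverse of [(m, n)] is [(n, m)].
            return GClass(self.n, self.m)

    def i(m):
        """The canonical homomorphism i : M -> G(M), sending m to [(m, 0)]."""
        return GClass(m, 0)

    zero = GClass(0, 0)
    print(i(2) + i(5) == i(7))               # True: i respects addition
    print(i(5) + i(5).inverse() == zero)     # True: inverses now exist
    print(GClass(3, 5) == GClass(10, 12))    # True: both classes represent "-2"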
  • Smooth Bumps, a Borel Theorem and Partitions of Smooth Functions on P.C.F. Fractals
    SMOOTH BUMPS, A BOREL THEOREM AND PARTITIONS OF SMOOTH FUNCTIONS ON P.C.F. FRACTALS. LUKE G. ROGERS, ROBERT S. STRICHARTZ, AND ALEXANDER TEPLYAEV. Abstract. We provide two methods for constructing smooth bump functions and for smoothly cutting off smooth functions on fractals, one using a probabilistic approach and sub-Gaussian estimates for the heat operator, and the other using the analytic theory for p.c.f. fractals and a fixed point argument. The heat semigroup (probabilistic) method is applicable to a more general class of metric measure spaces with Laplacian, including certain infinitely ramified fractals, however the cut off technique involves some loss in smoothness. From the analytic approach we establish a Borel theorem for p.c.f. fractals, showing that to any prescribed jet at a junction point there is a smooth function with that jet. As a consequence we prove that on p.c.f. fractals smooth functions may be cut off with no loss of smoothness, and thus can be smoothly decomposed subordinate to an open cover. The latter result provides a replacement for classical partition of unity arguments in the p.c.f. fractal setting. 1. Introduction. Recent years have seen considerable developments in the theory of analysis on certain fractal sets from both probabilistic and analytic viewpoints [1, 14, 26]. In this theory, either a Dirichlet energy form or a diffusion on the fractal is used to construct a weak Laplacian with respect to an appropriate measure, and thereby to define smooth functions. As a result the Laplacian eigenfunctions are well understood, but we have little knowledge of other basic smooth functions except in the case where the fractal is the Sierpinski Gasket [19, 5, 20].
  • Advanced Partial Differential Equations Prof. Dr. Thomas
    Advanced Partial Differential Equations. Prof. Dr. Thomas Sørensen, summer term 2015. Marcel Schaub, July 2, 2015. Contents: 0 Recall PDE 1 & Motivation 3; 0.1 Recall PDE 1 3; 1 Weak derivatives and Sobolev spaces 7; 1.1 Sobolev spaces 8; 1.2 Approximation by smooth functions 11; 1.3 Extension of Sobolev functions 13; 1.4 Traces 15; 1.5 Sobolev inequalities 17; 2 Linear 2nd order elliptic PDE 25; 2.1 Linear 2nd order elliptic partial differential operators 25; 2.2 Weak solutions 26; 2.3 Existence via Lax–Milgram 28; 2.4 Inhomogeneous boundary value problems 35; 2.5 The space H⁻¹(U) 36; 2.6 Regularity of weak solutions 39; A Tutorials 58; A.1 Tutorial 1: Review of Integration 58; A.2 Tutorial 2 59; A.3 Tutorial 3: Norms 61; A.4 Tutorial 4 62; A.5 Tutorial 6 (Sheet 7) 65; A.6 Tutorial 7 65; A.7 Tutorial 9 67; A.8 Tutorium 11 67; B Solutions of the problem sheets 70; B.1 Solution to Sheet 1 70; B.2 Solution to Sheet 2 71; B.3 Solution to Problem Sheet 3 73; B.4 Solution to Problem Sheet 4 76; B.5 Solution to Exercise Sheet 5 77; B.6 Solution to Exercise Sheet 7 81; B.7 Solution to problem sheet 8 84; B.8 Solution to Exercise Sheet 9 87. 0 Recall PDE 1 & Motivation. 0.1 Recall PDE 1. We mainly studied linear 2nd order equations – specifically, elliptic, parabolic and hyperbolic equations. Concretely: • The Laplace equation ∆u = 0 (elliptic) • The Poisson equation −∆u = f (elliptic) • The Heat equation u_t − ∆u = 0, u_t − ∆u = f (parabolic) • The Wave equation u_tt − ∆u = 0, u_tt − ∆u = f (hyperbolic) We studied ("main motivation; goal") well-posedness (à la Hadamard) 1.
  • Laplace Transforms: Theory, Problems, and Solutions
    Laplace Transforms: Theory, Problems, and Solutions. Marcel B. Finan, Arkansas Tech University. © All Rights Reserved. Contents: 43 The Laplace Transform: Basic Definitions and Results 3; 44 Further Studies of Laplace Transform 15; 45 The Laplace Transform and the Method of Partial Fractions 28; 46 Laplace Transforms of Periodic Functions 35; 47 Convolution Integrals 45; 48 The Dirac Delta Function and Impulse Response 53; 49 Solving Systems of Differential Equations Using Laplace Transform 61; 50 Solutions to Problems 68. 43 The Laplace Transform: Basic Definitions and Results. Laplace transform is yet another operational tool for solving constant coefficients linear differential equations. The process of solution consists of three main steps: • The given "hard" problem is transformed into a "simple" equation. • This simple equation is solved by purely algebraic manipulations. • The solution of the simple equation is transformed back to obtain the solution of the given problem. In this way the Laplace transformation reduces the problem of solving a differential equation to an algebraic problem. The third step is made easier by tables, whose role is similar to that of integral tables in integration. The above procedure can be summarized by Figure 43.1. In this section we introduce the concept of Laplace transform and discuss some of its properties. The Laplace transform is defined in the following way. Let f(t) be defined for t ≥ 0. Then the Laplace transform of f, which is denoted by L[f(t)] or by F(s), is defined by the following equation: L[f(t)] = F(s) = lim_{T→∞} ∫_0^T f(t) e^(−st) dt = ∫_0^∞ f(t) e^(−st) dt. The integral which defined a Laplace transform is an improper integral.