The Laplace Operator in Polar Coordinates


We now consider the Laplace operator with Dirichlet boundary conditions on a circular region $\Omega = \{(x,y) \mid x^2 + y^2 \le A^2\}$. Our goal is to compute eigenvalues and eigenfunctions of the Laplace operator on this region:
$$-\Delta u = \lambda u$$
As before, we seek a solution by the method of separation of variables. However, given the geometry of the region it does not make sense to seek solutions of the form $u(x,y) = u_1(x)\,u_2(y)$. Instead, we will change to polar coordinates and seek solutions of the form
$$u(r,\theta) = R(r)\,\Theta(\theta)$$
Given the geometry of the problem at hand, this is a more natural coordinate system to use, and as we will see, the problem does separate cleanly into ODEs in $r$ and $\theta$.

Our first task is to rewrite the Laplace operator in polar coordinates. Initially we have
$$\Delta u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}$$
To change coordinates from $(x,y)$ to $(r,\theta)$ we introduce the change of variable formulas
$$r = \sqrt{x^2 + y^2} \qquad \tan(\theta) = y/x$$
and use the chain rule to rewrite $x$ and $y$ derivatives as $r$ and $\theta$ derivatives. To start the process we develop some basic partial derivative facts:
$$\frac{\partial r}{\partial x} = \frac{x}{\sqrt{x^2+y^2}} = \frac{r\cos(\theta)}{r} = \cos(\theta)$$
$$\frac{\partial r}{\partial y} = \frac{y}{\sqrt{x^2+y^2}} = \frac{r\sin(\theta)}{r} = \sin(\theta)$$
$$\sec^2(\theta)\,\frac{\partial\theta}{\partial x} = -\frac{y}{x^2} \quad\Longrightarrow\quad \frac{\partial\theta}{\partial x} = \cos^2(\theta)\left(\frac{-r\sin(\theta)}{r^2\cos^2(\theta)}\right) = -\frac{\sin(\theta)}{r}$$
$$\sec^2(\theta)\,\frac{\partial\theta}{\partial y} = \frac{1}{x} \quad\Longrightarrow\quad \frac{\partial\theta}{\partial y} = \cos^2(\theta)\,\frac{1}{r\cos(\theta)} = \frac{\cos(\theta)}{r}$$
We are now in position to start applying the chain rule:
$$\frac{\partial u}{\partial x} = \frac{\partial u}{\partial r}\frac{\partial r}{\partial x} + \frac{\partial u}{\partial\theta}\frac{\partial\theta}{\partial x} = \cos(\theta)\,\frac{\partial u}{\partial r} - \frac{\sin(\theta)}{r}\,\frac{\partial u}{\partial\theta}$$
$$\frac{\partial u}{\partial y} = \frac{\partial u}{\partial r}\frac{\partial r}{\partial y} + \frac{\partial u}{\partial\theta}\frac{\partial\theta}{\partial y} = \sin(\theta)\,\frac{\partial u}{\partial r} + \frac{\cos(\theta)}{r}\,\frac{\partial u}{\partial\theta}$$
The higher order derivatives proceed similarly; I refer you to the text for further details. At the end of the process we learn that
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = \frac{\partial^2 u}{\partial r^2} + \frac{1}{r^2}\frac{\partial^2 u}{\partial\theta^2} + \frac{1}{r}\frac{\partial u}{\partial r}$$

Separation of Variables

We are now in position to implement the separation of variables.
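Before relying on the polar form of the Laplacian derived above, it can be sanity-checked symbolically. The following sketch (assuming SymPy is available; the test function $u$ is an arbitrary polynomial chosen for the check) verifies that the Cartesian and polar forms of $\Delta u$ agree:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x, y = sp.symbols('x y')

# An arbitrary smooth test function in Cartesian coordinates
u_xy = x**3 * y - x * y**2 + x**2
lap_xy = sp.diff(u_xy, x, 2) + sp.diff(u_xy, y, 2)

# The same function expressed in polar coordinates
sub = {x: r * sp.cos(theta), y: r * sp.sin(theta)}
u_rt = u_xy.subs(sub)

# Polar form of the Laplacian: u_rr + (1/r^2) u_theta_theta + (1/r) u_r
lap_rt = (sp.diff(u_rt, r, 2)
          + sp.diff(u_rt, theta, 2) / r**2
          + sp.diff(u_rt, r) / r)

# The two expressions should agree once written in the same variables
mismatch = sp.simplify(lap_rt - lap_xy.subs(sub))
print(mismatch)  # 0 if the polar form is correct
```

The check is not a proof, but it catches sign and factor errors in the $\tfrac{1}{r}\partial u/\partial r$ and $\tfrac{1}{r^2}\partial^2 u/\partial\theta^2$ terms immediately.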
Once again, we want to compute eigenvalues and eigenfunctions of the Laplace operator:
$$-\Delta u = \lambda u$$
To apply separation of variables we assume that $u(r,\theta) = R(r)\,\Theta(\theta)$ and substitute into the polar form of the Laplace operator:
$$-\Delta u = -\frac{\partial^2 u}{\partial r^2} - \frac{1}{r^2}\frac{\partial^2 u}{\partial\theta^2} - \frac{1}{r}\frac{\partial u}{\partial r} = -\Theta(\theta)\,\frac{d^2R(r)}{dr^2} - \frac{1}{r^2}\,R(r)\,\frac{d^2\Theta(\theta)}{d\theta^2} - \frac{1}{r}\,\Theta(\theta)\,\frac{dR(r)}{dr} = \lambda u = \lambda\,R(r)\,\Theta(\theta)$$
Dividing both sides by a factor of $R(r)\,\Theta(\theta)$ and multiplying both sides by $r^2$ gives
$$-\frac{r^2}{R(r)}\frac{d^2R(r)}{dr^2} - \frac{r}{R(r)}\frac{dR(r)}{dr} - \frac{1}{\Theta(\theta)}\frac{d^2\Theta(\theta)}{d\theta^2} = \lambda r^2$$
If we move all of the terms that depend on $r$ and $R(r)$ to one side and all of the terms that depend on $\theta$ and $\Theta(\theta)$ to the other we get
$$\frac{r^2}{R(r)}\frac{d^2R(r)}{dr^2} + \frac{r}{R(r)}\frac{dR(r)}{dr} + \lambda r^2 = -\frac{1}{\Theta(\theta)}\frac{d^2\Theta(\theta)}{d\theta^2}$$
The only way for these two expressions to be equal for all possible values of $r$ and $\theta$ is to have them both equal a constant, $\gamma$:
$$-\frac{1}{\Theta(\theta)}\frac{d^2\Theta(\theta)}{d\theta^2} = \gamma$$
$$\frac{r^2}{R(r)}\frac{d^2R(r)}{dr^2} + \frac{r}{R(r)}\frac{dR(r)}{dr} + \lambda r^2 = \gamma$$
For convenience, these two equations are typically rewritten as
$$\frac{d^2\Theta(\theta)}{d\theta^2} + \gamma\,\Theta(\theta) = 0$$
$$\frac{d^2R(r)}{dr^2} + \frac{1}{r}\frac{dR(r)}{dr} + \left(\lambda - \frac{\gamma}{r^2}\right)R(r) = 0$$
The problem is now fully separated. The first of these two problems is the easier to work with. The appropriate boundary conditions for this problem state that the function $\Theta(\theta)$ and its first derivative with respect to $\theta$ are periodic in $\theta$:
$$\Theta(-\pi) = \Theta(\pi) \qquad \frac{d\Theta}{d\theta}(-\pi) = \frac{d\Theta}{d\theta}(\pi)$$
This ODE has eigenvalue $\gamma = 0$ with associated eigenfunction $\Theta(\theta) = 1$, and eigenvalues $\gamma = n^2$ with associated eigenfunctions
$$\Theta(\theta) = \sin(n\theta) \qquad \Theta(\theta) = \cos(n\theta)$$
As a side effect of solving the $\Theta(\theta)$ problem we have now determined all of the legal values of $\gamma$: $\gamma = n^2$ for $n = 0, 1, 2, \ldots$

We can now focus our attention on the $R(r)$ equation:
$$\frac{d^2R(r)}{dr^2} + \frac{1}{r}\frac{dR(r)}{dr} + \left(\lambda - \frac{n^2}{r^2}\right)R(r) = 0$$
The standard technique for solving this equation starts by multiplying both sides of the equation by a factor of $r^2$.
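Before continuing with the radial equation, the angular eigenfunctions found above can be verified directly. A short sketch (assuming SymPy is available; the symbol `n` is declared as a nonnegative integer so SymPy can simplify $\sin(n\pi)$ to zero) checks that $\sin(n\theta)$ and $\cos(n\theta)$ satisfy both the ODE $\Theta'' + n^2\Theta = 0$ and the periodic boundary conditions:

```python
import sympy as sp

theta = sp.symbols('theta')
n = sp.symbols('n', integer=True, nonnegative=True)

checks = []
for Theta in (sp.cos(n * theta), sp.sin(n * theta)):
    # ODE residual Theta'' + n^2 * Theta should vanish identically
    ode_ok = sp.simplify(sp.diff(Theta, theta, 2) + n**2 * Theta) == 0
    # periodicity of Theta at theta = -pi and theta = pi
    bc_ok = sp.simplify(Theta.subs(theta, sp.pi)
                        - Theta.subs(theta, -sp.pi)) == 0
    # periodicity of the derivative Theta'
    dTheta = sp.diff(Theta, theta)
    dbc_ok = sp.simplify(dTheta.subs(theta, sp.pi)
                         - dTheta.subs(theta, -sp.pi)) == 0
    checks.append(ode_ok and bc_ok and dbc_ok)

print(all(checks))
```

Note that the periodicity checks are exactly where the restriction to integer $n$ enters: for non-integer $n$, $\sin(n\theta)$ fails the boundary conditions.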
$$r^2\,\frac{d^2R(r)}{dr^2} + r\,\frac{dR(r)}{dr} + (\lambda r^2 - n^2)\,R(r) = 0$$
Next, and for reasons that will become clear below, we introduce a simple change of variables:
$$s = \sqrt{\lambda}\,r$$
With this change of variables we will rewrite the equation in terms of a function
$$S(s) = R(r) = R\!\left(\frac{s}{\sqrt{\lambda}}\right)$$
The equation uses various derivatives of $R(r)$, so we will have to determine what happens to those derivatives once we make the change of variables:
$$\frac{dR(r)}{dr} = \frac{dS(s)}{ds}\,\frac{ds}{dr} = \sqrt{\lambda}\,\frac{dS(s)}{ds}$$
$$\frac{d^2R(r)}{dr^2} = \frac{d}{ds}\!\left(\frac{dR(r)}{dr}\right)\frac{ds}{dr} = \left(\sqrt{\lambda}\right)^2\frac{d^2S(s)}{ds^2} = \lambda\,\frac{d^2S(s)}{ds^2}$$
We now have
$$\left(\frac{s}{\sqrt{\lambda}}\right)^{\!2}\lambda\,\frac{d^2S(s)}{ds^2} + \frac{s}{\sqrt{\lambda}}\,\sqrt{\lambda}\,\frac{dS(s)}{ds} + (s^2 - n^2)\,S(s) = 0$$
$$s^2\,\frac{d^2S(s)}{ds^2} + s\,\frac{dS(s)}{ds} + (s^2 - n^2)\,S(s) = 0$$
This equation is a well-known ODE, the Bessel equation of order $n$.

Solving the Bessel Equation

The Bessel equation can be solved by assuming that the solution takes the form
$$S(s) = s^\alpha \sum_{k=0}^{\infty} a_k s^k$$
Substituting this into the equation gives
$$s^2\sum_{k=0}^{\infty} a_k (k+\alpha)(k+\alpha-1)\,s^{k+\alpha-2} + s\sum_{k=0}^{\infty} a_k (k+\alpha)\,s^{k+\alpha-1} + s^2\sum_{k=0}^{\infty} a_k s^{k+\alpha} - n^2\sum_{k=0}^{\infty} a_k s^{k+\alpha} = 0$$
To facilitate consolidating these sums into a single sum we shift the index of summation in the third term:
$$\sum_{k=0}^{\infty} a_k (k+\alpha)(k+\alpha-1)\,s^{k+\alpha} + \sum_{k=0}^{\infty} a_k (k+\alpha)\,s^{k+\alpha} + \sum_{k=2}^{\infty} a_{k-2}\,s^{k+\alpha} - n^2\sum_{k=0}^{\infty} a_k s^{k+\alpha} = 0$$
If we consider only the terms with the form $s^\alpha$ we see that
$$a_0\left(\alpha(\alpha-1) + \alpha - n^2\right) = 0$$
or
$$\alpha^2 - n^2 = 0$$
This tells us that solutions must have the form
$$S(s) = s^n \sum_{k=0}^{\infty} a_k s^k \qquad\text{or}\qquad S(s) = s^{-n} \sum_{k=0}^{\infty} a_k s^k$$
The latter form does not produce reasonable solutions, since for $n \ge 1$ that function is unbounded at $s = 0$. Thus we will continue with the assumption that $\alpha = n$.
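The series we are about to construct will turn out to be the standard Bessel function of the first kind. As an independent sanity check on the Bessel equation itself, the following sketch (assuming SciPy is available, using `scipy.special.jv` and its derivative helper `jvp`) confirms numerically that $J_n$ satisfies $s^2 S'' + s S' + (s^2 - n^2)S = 0$:

```python
import numpy as np
from scipy.special import jv, jvp

# Sample points away from s = 0, where the equation is regular
s = np.linspace(0.5, 20.0, 200)
for n in range(4):
    S = jv(n, s)          # J_n(s)
    S1 = jvp(n, s, 1)     # first derivative J_n'(s)
    S2 = jvp(n, s, 2)     # second derivative J_n''(s)
    residual = s**2 * S2 + s * S1 + (s**2 - n**2) * S
    # residual should vanish up to floating-point roundoff
    assert np.max(np.abs(residual)) < 1e-10
```

The residual is at the level of machine precision, which is what we expect if $J_n$ is an exact solution evaluated in floating point.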
With this assumption we have that
$$\sum_{k=0}^{\infty} a_k (k+n)(k+n-1)\,s^{k+n} + \sum_{k=0}^{\infty} a_k (k+n)\,s^{k+n} + \sum_{k=2}^{\infty} a_{k-2}\,s^{k+n} - n^2\sum_{k=0}^{\infty} a_k s^{k+n} = 0$$
Considering the terms with factor $s^{n+1}$ we see that
$$a_1\left((n+1)n + (n+1) - n^2\right) = 0 \qquad\text{or}\qquad a_1\,(2n+1) = 0 \qquad\text{or}\qquad a_1 = 0$$
For the terms $s^{k+n}$ with $k \ge 2$ we have
$$a_k\left((k+n)(k+n-1) + (k+n) - n^2\right) + a_{k-2} = 0$$
or
$$a_k\left((k+n)^2 - n^2\right) + a_{k-2} = 0$$
This leads to a recurrence relation that allows us to compute the coefficients $a_k$ in terms of $a_{k-2}$:
$$a_k = -\frac{a_{k-2}}{(k+n)^2 - n^2} = -\frac{a_{k-2}}{k\,(k+2n)}$$
Since $a_1 = 0$, the recurrence relation tells us that $a_k = 0$ for all odd $k$. For the even $k$ we have
$$a_2 = -\frac{a_0}{2\,(2+2n)}$$
$$a_4 = -\frac{a_2}{4\,(4+2n)} = \frac{a_0}{2!\;2^4\,(n+1)(n+2)}$$
$$a_6 = -\frac{a_4}{6\,(6+2n)} = -\frac{a_0}{3!\;2^6\,(n+1)(n+2)(n+3)}$$
More generally, for $k = 2j$ we have
$$a_{2j} = (-1)^j\,\frac{a_0}{j!\;2^{2j}\,(n+1)(n+2)\cdots(n+j)}$$
Since we are free to pick $a_0$, we select for our convenience
$$a_0 = \frac{1}{2^n\,n!}$$
so that
$$a_{2j} = \frac{(-1)^j}{2^{2j+n}\,j!\,(n+j)!}$$
This now gives
$$S(s) = \sum_{j=0}^{\infty} \frac{(-1)^j}{j!\,(n+j)!}\left(\frac{s}{2}\right)^{2j+n}$$
This function is known as the Bessel function of order $n$, usually written $J_n(s)$.

Eigenvalues and Eigenfunctions of the Laplacian

Following the reasoning above we have that $u(r,\theta) = R(r)\,\Theta(\theta)$, where
$$\Theta(\theta) = \sin(n\theta) \qquad\text{or}\qquad \Theta(\theta) = \cos(n\theta)$$
and
$$R(r) = S(s) = J_n(\sqrt{\lambda}\,r)$$
If we are going to require that $u(A,\theta) = 0$ we must have that
$$R(A) = S(\sqrt{\lambda}\,A) = J_n(\sqrt{\lambda}\,A) = 0$$
that is, $\sqrt{\lambda}\,A$ must be a zero of $J_n(s)$. It turns out that for each value of $n \ge 0$ the Bessel function $J_n(s)$ has an infinite sequence $s_{n,m}$ of zeroes.
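In practice the zeros $s_{n,m}$, and hence the eigenvalues $\lambda_{n,m} = (s_{n,m}/A)^2$ that follow from the boundary condition above, are computed numerically. A sketch assuming SciPy is available (the disk radius `A = 2.0` is an arbitrary choice for illustration):

```python
import numpy as np
from scipy.special import jv, jn_zeros

A = 2.0  # hypothetical disk radius; any A > 0 works the same way
for n in range(3):
    zeros = jn_zeros(n, 4)       # first four zeros s_{n,1}, ..., s_{n,4} of J_n
    lam = (zeros / A) ** 2       # Dirichlet eigenvalues lambda_{n,m}
    # the radial factor R(r) = J_n(sqrt(lambda) r) vanishes on the boundary r = A
    assert np.allclose(jv(n, np.sqrt(lam) * A), 0.0, atol=1e-10)
# the smallest zero of J_0 is s_{0,1} ~ 2.4048, which gives the lowest eigenvalue
```

Each eigenvalue $\lambda_{n,m}$ with $n \ge 1$ comes with two independent eigenfunctions, $J_n(\sqrt{\lambda_{n,m}}\,r)\cos(n\theta)$ and $J_n(\sqrt{\lambda_{n,m}}\,r)\sin(n\theta)$.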