INVARIANCE OF THE LAPLACE OPERATOR

The goal of this handout is to give a coordinate-free proof of the invariance of the Laplace operator under orthogonal transformations of $\mathbb{R}^n$ (and to explain what this means).

Theorem. Let $D \subset \mathbb{R}^n$ be an open set, $f \in C^2(D)$, and let $U \in O(n)$ be an orthogonal linear transformation leaving $D$ invariant ($U(D) = D$). Then:
$$\Delta(f \circ U) = (\Delta f) \circ U.$$

Before giving the proof we recall some basic mathematical facts.

Orthogonal linear transformations. An invertible linear transformation $U \in L(\mathbb{R}^n)$ is orthogonal if $U^*U = UU^* = I_n$ (equivalently, $U^* = U^{-1}$). Recall that, given $A \in L(\mathbb{R}^n)$, the adjoint $A^* \in L(\mathbb{R}^n)$ is defined by:
$$A^*v \cdot w = v \cdot Aw, \qquad \forall v, w \in \mathbb{R}^n.$$
Orthogonal linear transformations are isometries of $\mathbb{R}^n$: they preserve the inner product,
$$Uv \cdot Uw = v \cdot w, \qquad \forall v, w \in \mathbb{R}^n, \text{ if } U \in O(n),$$
and therefore preserve the lengths of vectors and the angles between vectors. Orthogonal linear transformations form a group (the inverse of an orthogonal transformation is orthogonal, and so is the composition of two orthogonal transformations), denoted by $O(n)$. The matrix of an orthogonal transformation with respect to an orthonormal basis of $\mathbb{R}^n$ is an orthogonal matrix: $U^t U = I_n$, so that the columns of $U$ (or the rows of $U$) give an orthonormal basis of $\mathbb{R}^n$.

If $U \in O(n)$ then $\det(U) = \pm 1$. The orthogonal transformations with determinant one can be thought of as rotations of $\mathbb{R}^n$. If $n = 2$, they are exactly the rotation matrices $R_\theta$:
$$R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.$$
If $n = 3$, then $1$ is always an eigenvalue, with a one-dimensional eigenspace (the axis of rotation); on the (two-dimensional) orthogonal complement of this eigenspace, $U$ acts as rotation by some angle $\theta$.

Change of variables in multiple integrals. Let $A \in L(\mathbb{R}^n)$ be an invertible linear transformation. If $D \subset \mathbb{R}^n$ and $f : A(D) \to \mathbb{R}$ is integrable, then so is $f \circ A$, and:
$$\int_{A(D)} f\, dV = \int_D (f \circ A)\, |\det A|\, dV.$$
(This is a special case of the change of variables theorem.)

Multivariable integration by parts. Let $D \subset \mathbb{R}^n$ be a bounded open set with $C^1$ boundary, $f \in C^2(D)$, $g \in C^1_0(D)$. Then:
$$\int_D (\Delta f)\, g\, dV = -\int_D \nabla f \cdot \nabla g\, dV.$$
Here $C^1_0(D)$ denotes the set of $C^1$ functions in $D$ which are zero near the boundary of $D$. This follows directly from Green's first identity, which in turn is a direct consequence of the divergence theorem.

Note that, given $D \subset \mathbb{R}^n$ open and $P \in D$, it is easy to find a function $\varphi \in C^1_0(D)$ with $\varphi \ge 0$ everywhere and $\varphi(P) = 1$. For example, one could let:
$$\varphi_P(x) = \max\{0,\; 1 - \|P - x\|^2/\epsilon^2\}.$$
This works for $\epsilon > 0$ small enough, since $\varphi_P$ is positive only in the ball of radius $\epsilon$ centered at $P$ (which does not touch the boundary of $D$ if $\epsilon$ is small).

This can be used to observe that if $h$ is continuous in $D$, then $h \equiv 0$ in $D$ if, and only if, for every $\varphi \in C^1_0(D)$ we have:
$$\int_D h\varphi\, dV = 0.$$
Indeed, if $h(P) \ne 0$ for some $P \in D$, we have (say) $h(P) > 0$, hence $h > 0$ in a sufficiently small ball $B$ centered at $P$ (by continuity of $h$). But then $h\varphi_P \ge 0$ everywhere in $D$, and yet (since $\varphi_P \in C^1_0(D)$) we must have $\int_D h\varphi_P\, dV = 0$, so that $h\varphi_P \equiv 0$ in $D$, contradicting the fact that it is positive on $B$.
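Before turning to the proof, the statement of the theorem can be sanity-checked symbolically in one concrete case. The SymPy sketch below is an editorial addition, not part of the original handout: it verifies $\Delta(f \circ U) = (\Delta f) \circ U$ in $\mathbb{R}^2$ for a general rotation $R_\theta$ and one sample function $f$; the choice of $f$ and the helper names (laplacian, compose_with_U) are purely illustrative.

```python
# Symbolic sanity check of the theorem in R^2 (illustrative sketch, not from the handout).
# We verify Delta(f o U) = (Delta f) o U for a general rotation U = R_theta
# and one sample (non-radial) function f.
import sympy as sp

x, y, u, v, theta = sp.symbols('x y u v theta', real=True)

f = x**3 * y + sp.exp(x) * sp.cos(y)          # sample C^2 function; any smooth f would do

def laplacian(g, variables):
    """Sum of the pure second partial derivatives of g."""
    return sum(sp.diff(g, w, 2) for w in variables)

# Rotation matrix R_theta, an element of O(2) with determinant +1.
U = sp.Matrix([[sp.cos(theta), -sp.sin(theta)],
               [sp.sin(theta),  sp.cos(theta)]])
Ux, Uy = U * sp.Matrix([x, y])                # coordinates of U(x, y)

def compose_with_U(g):
    """Return g o U, i.e. g(U(x, y)); renaming to (u, v) first avoids variable clashes."""
    return g.subs({x: u, y: v}).subs({u: Ux, v: Uy})

lhs = laplacian(compose_with_U(f), (x, y))    # Delta(f o U)
rhs = compose_with_U(laplacian(f, (x, y)))    # (Delta f) o U

print(sp.simplify(lhs - rhs))                 # expected output: 0
```

This is only a spot check in one case; the coordinate-free proof below covers every dimension and every $U \in O(n)$.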
Proof of Theorem. By the observation just made, it is enough to show that, for all $\varphi \in C^1_0(D)$:
$$\int_D \Delta(f \circ U)\, \varphi\, dV = \int_D [(\Delta f) \circ U]\, \varphi\, dV.$$
By multivariable integration by parts, for the left-hand side:
$$\int_D \Delta(f \circ U)\, \varphi\, dV = -\int_D \nabla(f \circ U) \cdot \nabla\varphi\, dV = -\int_D [(\nabla f)U] \cdot \nabla\varphi\, dV,$$
where we also used the chain rule to assert that $\nabla(f \circ U) = (\nabla f)U$ (with $\nabla f$ evaluated at $Ux$ and viewed as a row vector).

On the other hand, for the right-hand side:
$$\int_D [(\Delta f) \circ U]\, \varphi\, dV = \int_D \big[(\Delta f)(\varphi \circ U^{-1})\big] \circ U\, dV = \int_D (\Delta f)(\varphi \circ U^{-1})\, dV,$$
using the change of variables theorem and the facts that $D$ is invariant under $U$ and $|\det U| = 1$. Again from integration by parts and the chain rule we have:
$$\int_D (\Delta f)(\varphi \circ U^{-1})\, dV = -\int_D \nabla f \cdot \nabla(\varphi \circ U^{-1})\, dV = -\int_D \nabla f \cdot [(\nabla\varphi)U^t]\, dV,$$
where we also use the fact that $U^{-1} = U^t$. Thus we have reduced the proof to showing that:
$$\int_D [(\nabla f)U] \cdot \nabla\varphi\, dV = \int_D \nabla f \cdot [(\nabla\varphi)U^t]\, dV.$$
But this follows from the (easily verified) fact that for any $n \times n$ matrix $A$ we have:
$$(vA) \cdot w = v \cdot (wA^t), \qquad \forall v, w \in \mathbb{R}^n.$$
This completes the proof.

Application: Laplacian of radial functions. A function $f : B \to \mathbb{R}$ (where $B = \{x \in \mathbb{R}^n : |x| \le R\}$ is a ball of some radius) is radial if it is invariant under the orthogonal group: $f = f \circ U$ for all $U \in O(n)$. This implies $f$ is a function of the distance to the origin only, that is (abusing the notation):
$$f = f(r), \quad \text{if } x = r\omega \text{ with } r = |x|,\ \omega \in S \text{ (the unit sphere)}.$$
The theorem implies that the Laplacian of a radial function is also radial:
$$f = f \circ U \ \ \forall U \quad \Longrightarrow \quad \Delta f = \Delta(f \circ U) = (\Delta f) \circ U \ \ \forall U,$$
or $(\Delta f)(r\omega) = g(r)$ for some function $g(r)$ depending on $f$, which we now compute.

From the divergence theorem on a ball $B_R$ of radius $R$ centered at $0$:
$$\int_{B_R} \Delta f\, dV = \int_{S_R} \frac{\partial f}{\partial n}\, dS,$$
or:
$$\int_0^R \int_S g(r)\, r^{n-1}\, d\omega\, dr = \int_S f_r(R)\, R^{n-1}\, d\omega,$$
and since both integrals in $\omega$ just give the $(n-1)$-dimensional area of $S$:
$$\int_0^R g(r)\, r^{n-1}\, dr = f_r(R)\, R^{n-1}.$$
Differentiating in $R$ we find:
$$g(R)\, R^{n-1} = f_{rr}(R)\, R^{n-1} + (n-1)\, f_r(R)\, R^{n-2}$$
(we assume $n \ge 2$), so finally
$$g(r) = \Delta f(r) = f_{rr} + \frac{n-1}{r}\, f_r.$$

Remark. For general (not necessarily radial) functions we have, in polar coordinates $x = r\omega$:
$$\Delta f = f_{rr} + \frac{n-1}{r}\, f_r + \frac{1}{r^2}\, \Delta_S f,$$
where $\Delta_S$ is the spherical Laplacian:
$$\Delta_S f = f_{\theta\theta} \ \ (n = 2); \qquad \Delta_S f = f_{\phi\phi} + \frac{\cos\phi}{\sin\phi}\, f_\phi + \frac{1}{\sin^2\phi}\, f_{\theta\theta} \ \ (n = 3),$$
in polar coordinates $(r, \theta)$ (resp. spherical coordinates $(r, \phi, \theta)$, $\phi \in [0, \pi]$, $\theta \in [0, 2\pi]$). Note the analogy between $\Delta_S$ for $n = 3$ and the Laplacian for $n = 2$:
$$\Delta f = f_{rr} + \frac{1}{r}\, f_r + \frac{1}{r^2}\, \Delta_S f \quad (n = 2).$$
Making the substitutions:
$$r \to \sin\phi, \qquad \frac{1}{r} = (\log r)_r \to \frac{\cos\phi}{\sin\phi} = (\log \sin\phi)_\phi, \qquad \Delta_S f \to f_{\theta\theta},$$
we obtain $\Delta_S f$ for $n = 3$.

More generally, the spherical Laplacian $\Delta_{S^n}$ on the unit sphere $S^n \subset \mathbb{R}^{n+1}$ can be described inductively in terms of the spherical Laplacian on the unit sphere $S^{n-1} \subset \mathbb{R}^n$. To do this, write $\omega \in S^n$ in spherical coordinates: as a vector in $\mathbb{R}^{n+1}$,
$$\omega = (\cos\phi, (\sin\phi)\theta) \in \mathbb{R} \times \mathbb{R}^n, \qquad \theta \in S^{n-1},\ \phi \in [0, \pi].$$
(This represents $S^{n-1}$ as the equator $\phi = \pi/2$ in $S^n$.) In these coordinates we have, for a function $f = f(\phi, \theta)$ defined on $S^n$:
$$\Delta_{S^n} f = f_{\phi\phi} + (n-1)\, \frac{\cos\phi}{\sin\phi}\, f_\phi + \frac{1}{\sin^2\phi}\, \Delta_{S^{n-1}} f.$$
Here the operator $\Delta_{S^{n-1}} f$ involves only differentiation in the variables $\theta \in S^{n-1}$.
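As a complement to the application above, the radial formula $\Delta f = f_{rr} + \frac{n-1}{r} f_r$ can be checked symbolically dimension by dimension. The SymPy sketch below is an editorial addition, not part of the original handout; the profile $e^{-r^2}$ and the range of dimensions are arbitrary illustrative choices.

```python
# Symbolic check of the radial formula  Delta f = f_rr + (n - 1)/r * f_r
# (illustrative sketch, not from the handout).
import sympy as sp

r = sp.symbols('r', positive=True)
profile = sp.exp(-r**2)                       # a sample radial profile f(r)

for n in range(2, 6):                         # check dimensions n = 2, 3, 4, 5
    xs = sp.symbols(f'x1:{n + 1}', real=True)             # Cartesian coordinates x1, ..., xn
    radius = sp.sqrt(sum(xi**2 for xi in xs))

    # Left-hand side: the Cartesian Laplacian of f(|x|).
    f_cart = profile.subs(r, radius)
    lap_cart = sum(sp.diff(f_cart, xi, 2) for xi in xs)

    # Right-hand side: the radial formula, rewritten in Cartesian coordinates.
    lap_radial = sp.diff(profile, r, 2) + (n - 1) / r * sp.diff(profile, r)
    lap_radial_cart = lap_radial.subs(r, radius)

    print(n, sp.simplify(lap_cart - lap_radial_cart))     # expected: 0 in every dimension
```

Replacing the profile by $r^2$ reproduces the familiar identity $\Delta |x|^2 = 2n$.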