Multivariable Analysis
Roland van der Veen
Groningen, 20-1-2020

Contents

1 Introduction
  1.1 Basic notions and notation
2 How to solve equations
  2.1 Linear algebra
  2.2 Derivative
  2.3 Elementary Riemann integration
  2.4 Mean value theorem and Banach contraction
  2.5 Inverse and implicit function theorems
  2.6 Picard's theorem on existence of solutions to ODEs
3 Multivariable fundamental theorem of calculus
  3.1 Exterior algebra
  3.2 Differential forms
  3.3 Integration
  3.4 More on cubes and their boundary
  3.5 Exterior derivative
  3.6 The fundamental theorem of calculus (Stokes theorem)
  3.7 Fundamental theorem of calculus: Poincaré lemma

Chapter 1
Introduction

The goal of these notes is to explore the notions of differentiation and integration in arbitrarily many variables. The material is focused on answering two basic questions:

1. How to solve an equation? How many solutions can one expect?
2. Is there a higher-dimensional analogue of the fundamental theorem of calculus? Can one find a primitive?

The equations we will address are systems of non-linear equations in finitely many variables, and also ordinary differential equations. The approach will be mostly theoretical, sketching a framework in which one can predict how many solutions there will be without necessarily solving the equation. The key assumption is that everything we do can locally be approximated by linear functions; in other words, everything will be differentiable. One of the main results is that the linearization of the equation predicts the number of solutions and approximates them well locally. This is known as the implicit function theorem. For ordinary differential equations we will prove a similar result on the existence and uniqueness of solutions.

To introduce the second question, recall what the fundamental theorem of calculus says.
$$\int_a^b f'(x)\,dx = f(b) - f(a)$$

What if $f$ is now a function depending on two or more variables? In two and three dimensions, vector calculus gives some partial answers involving div, grad, curl and the theorems of Gauss, Green and Stokes. How can one make sense of these, and are there any more such theorems, perhaps in higher dimensions? The key to understanding this question is to pass from functions to differential forms. In the example above this means passing from $f(x)$ to the differential form $f(x)\,dx$. Taking the $dx$ part of our integrands seriously clarifies all formulas and shows the way to a general fundamental theorem of calculus that works in any dimension, known as the (generalized) Stokes theorem:

$$\int_\Omega d\omega = \int_{\partial\Omega} \omega$$

All the results mentioned in this paragraph are special cases of this powerful theorem.

This is not calculus: we made an attempt to prove everything we say, so that no black boxes have to be accepted on faith. This self-sufficiency is one of the great strengths of mathematics. The reader is asked to at least try the exercises. Doing exercises (and necessarily failing some!) is an integral part of mathematics.

1.1 Basic notions and notation

Most of the material is standard and can be found in references such as Calculus on Manifolds by M. Spivak. We will mostly work in $\mathbb{R}^n$, whose elements are vectors $\mathbb{R}^n \ni x = (x^1, \dots, x^n) = \sum_{i=1}^n x^i e_i$, where $e_i$ is the $i$-th standard basis vector, whose coordinates are all 0 except a 1 at the $i$-th place. Throughout the text I try to write functions as $A \ni a \stackrel{f}{\longmapsto} a^2 + a + 1 \in B$, instead of $f : A \to B$ defined by $f(a) = a^2 + a + 1$. An important function is the (Euclidean) norm $\mathbb{R}^n \ni x \stackrel{|\cdot|}{\longmapsto} |x| \in \mathbb{R}$, where $|(x^1, x^2, \dots, x^n)| = \sqrt{(x^1)^2 + (x^2)^2 + \cdots + (x^n)^2}$. As the name suggests, it satisfies the triangle inequality $|x + y| \le |x| + |y|$.

Another piece of notation is that of open and closed subsets of $\mathbb{R}^n$. A set $S \subset \mathbb{R}^n$ is called open if it is the union of (possibly infinitely many) open balls $B_r(p) = \{x \in \mathbb{R}^n : |x - p| < r\}$.
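The notions above are easy to experiment with on a computer. The following sketch (using numpy, which is of course not part of the notes' formal development) checks the triangle inequality on random vectors and tests open-ball membership; the helper `in_open_ball` is a name introduced here for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# The Euclidean norm |x| = sqrt((x^1)^2 + ... + (x^n)^2) is np.linalg.norm.
for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    # Triangle inequality |x + y| <= |x| + |y| (tiny slack for rounding).
    assert np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y) + 1e-12

def in_open_ball(x, p, r):
    """Membership test for the open ball B_r(p) = {x : |x - p| < r}."""
    return np.linalg.norm(np.asarray(x, float) - np.asarray(p, float)) < r

print(in_open_ball([0.5, 0.5], [0, 0], 1))  # strictly inside the unit ball
print(in_open_ball([1.0, 0.0], [0, 0], 1))  # on the boundary, hence not inside
```

Note that the strict inequality matters: a boundary point of $B_r(p)$ does not belong to the open ball, which is exactly why the closed disk below is not open.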
The set $S \subset \mathbb{R}^n$ is closed if its complement $\mathbb{R}^n - S$ is open. For example the closed disk $\bar{B}_r(p) = \{x \in \mathbb{R}^n : |x - p| \le r\}$ is closed.

For a sequence of points $(p_k)$ in $\mathbb{R}^n$ and $q \in \mathbb{R}^n$ we write $\lim_{k\to\infty} p_k = q$ if for all $i$ we have $\lim_{k\to\infty} p_k^i = q^i$. Here $q^i$ is the $i$-th coordinate of $q$ (not the $i$-th power!). For a function $\mathbb{R}^n \supset A \stackrel{f}{\to} B \subset \mathbb{R}^m$ we say $\lim_{p\to q} f(p) = s$ if for any sequence $(p_k)$ converging to $q$ in $\mathbb{R}^n$ we have $\lim_{k\to\infty} f(p_k) = s$. The function $f$ is said to be continuous at $q$ if $\lim_{p\to q} f(p) = f(q)$, and $f$ is continuous if it is continuous at every point. $f$ is uniformly continuous on a set $U$ if for all $\epsilon > 0$ there is a $\delta > 0$ such that for all $u, v \in U$: $|u - v| < \delta \Rightarrow |f(u) - f(v)| < \epsilon$. A continuous function on a closed bounded $U \subset \mathbb{R}^n$ is uniformly continuous. Moreover, continuous functions $\mathbb{R}^m \to \mathbb{R}^n$ send closed bounded subsets to closed bounded subsets. For more on open sets and continuity we refer to the course on metric and topological spaces next block. The above results and definitions should all be familiar, at least in the case $n = 1$. When formulated in terms of the norm, all proofs are identical in the case of $\mathbb{R}^n$.

Exercises

Exercise 0. True or false:
1. $B_1(1) \cup B_\pi(5)$ is open.
2. $[0,1]^{10}$ is closed.
3. $[0,1)$ is open.
4. $\{(x^1, \dots, x^n) \in \mathbb{R}^n : x^1 > 0\}$ is open.
5. $\{x \in \mathbb{R}^7 : (x^1)^2 + \cdots + (x^7)^2 = 1\}$ is open.

Chapter 2
How to solve equations

Under what conditions can a system of $n$ real equations in $k + n$ variables be solved? Naively one may hope that each equation can be used to determine a variable, so that in the end $k$ variables are left undetermined and all others are functions of those. For example, consider the two systems of two equations on the left and on the right ($k = 1$, $n = 2$):

$$x + y + z = 0 \qquad\qquad \sin(x + y) - \log(1 - z) = 0 \qquad (2.1)$$
$$-x + y + z = 0 \qquad\qquad e^y - \frac{1}{1 + x - z} = 0 \qquad (2.2)$$

[Figure 2.1: Solutions to the two systems. The yellow surface is the solution to the first equation, blue the second. The positive $x, y, z$ axes are drawn in red, green, blue respectively.]

The system on the left is linear and easy to solve: we get $x = 0$ and $y = -z$. The system on the right is hard to solve explicitly but looks very similar near $(0, 0, 0)$, since $\sin(x + y) \approx x + y$ and $\log(1 - z) \approx -z$ near zero. We will be able to show that, just like in the linear situation, a curve of solutions passes through the origin. The key point is that the derivative of the complicated-looking functions at the origin is precisely the linear function shown on the left.

We will look at equations involving only differentiable functions. This means that locally they can be approximated well by linear functions. The goal of the chapter is to prove the implicit function theorem. Basically it says that the linear approximation decides whether or not a system of equations is solvable locally, and if so, how many solutions it has. This is illustrated in the figures above.

Exercises

Exercise 0. The non-linear equation $\sin(x - y) = 0$ has a solution $(x, y) = (0, 0)$. Find a linear equation that approximates the non-linear equation well near $(0, 0)$.

Exercise 1. (Linear case) Is it always true that a system of two linear equations in three unknowns has a line of solutions? Prove or provide a counterexample.

Exercise 2. (Non-linear equation) Solve for $z$ and $y$ as functions of $x$ subject to the equations:
$$x^2 + y^2 + z^2 = 1$$
$$x + y^2 + z^3 = 0$$

Exercise 3. (Linear equation) Write the system below as a matrix equation $Ax = b$ for a matrix $A$ and vectors $x, b$.
$$x + 3y + 2z = 1$$
$$x + y + z = 0$$
$$-x + y + 4z = 0$$

Exercise 4. (Three holes) Give a single equation in three unknowns such that the solution set is a bounded subset of $\mathbb{R}^3$, looks smooth and two-dimensional everywhere, and has a hole. Harder: can you increase the number of holes to three?

2.1 Linear algebra

The basis for our investigation of equations is the linear case. Linear equations can neatly be summarized in terms of a single matrix equation $Av = b$.
Here $v$ is a vector in $\mathbb{R}^{k+n}$, $b \in \mathbb{R}^n$, and $A$ is an $n \times (k + n)$ matrix. In case $b = 0$ we call the equation homogeneous, and the solution set is a linear subspace $\ker A = \{v \in \mathbb{R}^{k+n} : Av = 0\}$, the kernel of the map defined by $A$. In general, given a single solution $p \in \mathbb{R}^{k+n}$ such that $Ap = b$, the entire solution set $\{v \in \mathbb{R}^{k+n} : Av = b\}$ is the affine linear subspace $(\ker A) + p = \{s + p : s \in \ker A\}$. In discussing the qualitative properties of linear equations it is more convenient to think in terms of linear maps.
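As a concrete sanity check, one can compute $\ker A$ for the linear system on the left of (2.1)–(2.2) and confirm that it is the line $x = 0$, $y = -z$ found above. The sketch below (using numpy; the SVD-based null-space computation is one standard choice, not the method of the notes) does this:

```python
import numpy as np

# The left-hand system of (2.1)-(2.2): x + y + z = 0 and -x + y + z = 0,
# i.e. Av = 0 with n = 2 equations in k + n = 3 unknowns (k = 1, b = 0).
A = np.array([[ 1.0, 1.0, 1.0],
              [-1.0, 1.0, 1.0]])

# ker A via the SVD: the rows of Vt beyond the rank span the null space.
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-12))
null_space = Vt[rank:].T        # columns form an orthonormal basis of ker A

v = null_space[:, 0]            # a homogeneous solution direction
print(np.allclose(A @ v, 0))    # v really satisfies both equations
print(null_space.shape[1])      # dim ker A = k = 1: a line of solutions

# Up to sign and scale, v is (0, 1, -1): precisely x = 0, y = -z.
print(np.allclose(np.abs(v), [0, 1/np.sqrt(2), 1/np.sqrt(2)]))
```

Since $b = 0$ here, the particular solution can be taken to be $p = 0$ and the full solution set is the kernel itself; for $b \ne 0$ one would add any particular solution $p$ to each kernel element, as described above.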