
MAT 314 LECTURE NOTES

1. Analysis on metric spaces

1.1. Definitions, and open sets.

A metric space is, essentially, a set of points together with a rule for saying how far apart two such points are:

Definition 1.1. A metric space consists of a set $X$ together with a function $d\colon X \times X \to \mathbb{R}$ such that:
(1) For each $x, y \in X$, $d(x,y) \geq 0$, and $d(x,y) = 0$ if and only if $x = y$.
(2) For each $x, y \in X$, $d(x,y) = d(y,x)$.
(3) For each $x, y, z \in X$, $d(x,z) \leq d(x,y) + d(y,z)$.

The last condition is known as the triangle inequality. If we think of $d(x,y)$ as representing, say, the smallest possible amount of time that it takes to get from $x$ to $y$, the triangle inequality should make sense, since one way of getting from $x$ to $z$ is to first go from $x$ to $y$ and then from $y$ to $z$, and doing this can take as little time as $d(x,y) + d(y,z)$. Of course, it will often be the case that the quickest path from $x$ to $z$ doesn't pass through $y$, in which case the inequality will be strict.

The first part of the course is concerned with developing notions like convergence and continuity in the context of a general metric space $(X,d)$. You've probably already had some exposure to these concepts, at least in the context of the real numbers, which indeed are the first example:

Example 1.2. Take $X = \mathbb{R}$, and define $d(x,y) = |x - y|$. It should be fairly obvious that the three axioms for a metric space are satisfied in this case. If you've had a good real analysis course, then a lot (though not all) of the proofs below should look somewhat familiar, essentially with absolute value signs replaced by $d$'s.

Example 1.3. If $X = \mathbb{R}^n$, there are actually many metrics $d$ that we can use which generalize the absolute value metric on $\mathbb{R}$. The most famous of these is surely
\[
d_2((x_1,\dots,x_n),(y_1,\dots,y_n)) = \left( \sum_{i=1}^n (x_i - y_i)^2 \right)^{1/2},
\]
which gives the distance from $(x_1,\dots,x_n)$ to $(y_1,\dots,y_n)$ that is dictated by the Pythagorean theorem.
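To make Example 1.3 concrete, here is a small Python sketch (the function name `d2` and the sample points are my own illustration, not from the notes) that computes the Euclidean metric and spot-checks the three axioms of Definition 1.1 on a few points:

```python
import math

def d2(x, y):
    """Euclidean (Pythagorean) distance between two points of R^n."""
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

x, y, z = (0.0, 0.0), (3.0, 4.0), (6.0, 8.0)

# Axiom (1): nonnegativity, and d(x, x) = 0 exactly when the points coincide
assert d2(x, y) > 0 and d2(x, x) == 0
# Axiom (2): symmetry
assert d2(x, y) == d2(y, x)
# Axiom (3): triangle inequality
assert d2(x, z) <= d2(x, y) + d2(y, z)

print(d2(x, y))  # 5.0, by the 3-4-5 right triangle
```

Note that with these particular (collinear) points the triangle inequality holds with equality, matching the remark above that strictness fails exactly when the quickest path from $x$ to $z$ passes through $y$.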
Although (in view of the Pythagorean theorem) $d_2$ seems like a very natural sort of distance, the triangle inequality for it isn't completely obvious, but rather depends on the Cauchy-Schwarz inequality, which states that, where for $\vec v = (v_1,\dots,v_n), \vec w = (w_1,\dots,w_n) \in \mathbb{R}^n$ we write
\[
\langle \vec v, \vec w \rangle = \sum_{i=1}^n v_i w_i,
\]
one has
\[
\langle \vec v, \vec w \rangle^2 \leq \langle \vec v, \vec v \rangle \langle \vec w, \vec w \rangle.
\]

Proof of the triangle inequality for $d_2$, assuming the Cauchy-Schwarz inequality. We have
\begin{align*}
d_2(\vec x, \vec z)^2 &= \langle \vec x - \vec z, \vec x - \vec z \rangle = \langle (\vec x - \vec y) + (\vec y - \vec z), (\vec x - \vec y) + (\vec y - \vec z) \rangle \\
&= \langle \vec x - \vec y, \vec x - \vec y \rangle + 2\langle \vec x - \vec y, \vec y - \vec z \rangle + \langle \vec y - \vec z, \vec y - \vec z \rangle \\
&\leq \langle \vec x - \vec y, \vec x - \vec y \rangle + 2\left( \langle \vec x - \vec y, \vec x - \vec y \rangle \langle \vec y - \vec z, \vec y - \vec z \rangle \right)^{1/2} + \langle \vec y - \vec z, \vec y - \vec z \rangle \\
&= d_2(\vec x, \vec y)^2 + 2\,d_2(\vec x, \vec y)\,d_2(\vec y, \vec z) + d_2(\vec y, \vec z)^2 = \left( d_2(\vec x, \vec y) + d_2(\vec y, \vec z) \right)^2,
\end{align*}
and then taking the square root of both sides implies the triangle inequality.

Proof of the Cauchy-Schwarz inequality. If either $\vec v$ or $\vec w$ is the zero vector, then both sides of the Cauchy-Schwarz inequality ($\langle \vec v, \vec w \rangle^2 \leq \langle \vec v, \vec v \rangle \langle \vec w, \vec w \rangle$) are zero, so it holds in that case. So let us assume that $\vec v$ and $\vec w$ are both nonzero, so we can form the vectors
\[
\vec v_0 = \frac{1}{\langle \vec v, \vec v \rangle^{1/2}}\,\vec v, \qquad \vec w_0 = \frac{1}{\langle \vec w, \vec w \rangle^{1/2}}\,\vec w,
\]
which satisfy $\langle \vec v_0, \vec v_0 \rangle = \langle \vec w_0, \vec w_0 \rangle = 1$. Notice that
\[
0 \leq \langle \vec v_0 \pm \vec w_0, \vec v_0 \pm \vec w_0 \rangle = \langle \vec v_0, \vec v_0 \rangle \pm 2\langle \vec v_0, \vec w_0 \rangle + \langle \vec w_0, \vec w_0 \rangle = 2 \pm 2\langle \vec v_0, \vec w_0 \rangle,
\]
i.e.,
\[
\pm \langle \vec v_0, \vec w_0 \rangle \leq 1 = \langle \vec v_0, \vec v_0 \rangle \langle \vec w_0, \vec w_0 \rangle.
\]
Now square this last inequality and then multiply it by $\langle \vec v, \vec v \rangle \langle \vec w, \vec w \rangle$ to get the desired inequality $\langle \vec v, \vec w \rangle^2 \leq \langle \vec v, \vec v \rangle \langle \vec w, \vec w \rangle$.

Example 1.4. Here are two examples of metrics on $\mathbb{R}^n$ for which the triangle inequality is more obvious than for the standard Pythagorean distance $d_2$:
\[
d_1(\vec x, \vec y) = \sum_{i=1}^n |x_i - y_i|, \qquad d_\infty(\vec x, \vec y) = \max_{1 \leq i \leq n} |x_i - y_i|.
\]
The first of these, $d_1$, is colloquially known as the "taxicab metric" (why?). Note that each of $d_1, d_2, d_\infty$ specializes to the usual absolute value metric when $n = 1$. In fact, there's an infinite family of metrics
\[
d_p(\vec x, \vec y) = \left( \sum_{i=1}^n |x_i - y_i|^p \right)^{1/p},
\]
defined for any real number $p \geq 1$.
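The Cauchy-Schwarz inequality is easy to sanity-check numerically. The following sketch (my own illustration, not part of the notes; the tolerance `1e-9` only guards against floating-point roundoff) compares $\langle \vec v, \vec w \rangle^2$ against $\langle \vec v, \vec v \rangle \langle \vec w, \vec w \rangle$ for many random vectors:

```python
import random

def inner(v, w):
    """Standard inner product <v, w> on R^n."""
    return sum(vi * wi for vi, wi in zip(v, w))

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 5)
    v = [random.uniform(-10, 10) for _ in range(n)]
    w = [random.uniform(-10, 10) for _ in range(n)]
    # Cauchy-Schwarz: <v, w>^2 <= <v, v><w, w>, up to roundoff
    assert inner(v, w) ** 2 <= inner(v, v) * inner(w, w) + 1e-9

print("Cauchy-Schwarz held in all 1000 random trials")
```

Of course, no amount of random testing substitutes for the proof above; this is only a check that the statement has been transcribed correctly.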
The triangle inequality for these follows from a generalization of the Cauchy-Schwarz inequality called the Minkowski inequality, which we'll see later on in the course. Also later in the course (perhaps in a problem set) we'll prove that
\[
\lim_{p \to \infty} d_p(\vec x, \vec y) = d_\infty(\vec x, \vec y),
\]
which explains the notation for $d_\infty$.

Of course, for $0 < p < 1$ one could try to define a metric $d_p$ by the same formula above, but it turns out that for those values of $p$ the "triangle inequality" would point in the wrong direction.

Although there are many metrics on $\mathbb{R}^n$, $d_2$ is generally the one that is used unless explicit mention otherwise is made.

Example 1.5. Let $[a,b]$ be any closed interval in $\mathbb{R}$. I hope that you're familiar with the fact that if $h\colon [a,b] \to \mathbb{R}$ is a continuous function then there is some $x_0 \in [a,b]$ such that
\[
h(x_0) = \max_{x \in [a,b]} h(x)
\]
(if you're not, we'll be proving a generalization of this fact later on). In light of this, where
\[
X = C([a,b]) := \{ f\colon [a,b] \to \mathbb{R} \mid f \text{ is continuous} \},
\]
we can define a metric on $X$ by
\[
d(f,g) = \max_{x \in [a,b]} |f(x) - g(x)|.
\]
(Check for yourself that this is indeed a metric.)

As this example illustrates, metric space concepts apply not just to spaces whose elements are thought of as geometric points, but also sometimes to spaces of functions. Indeed, one of the major tasks later in the course, when we discuss Lebesgue integration theory, will be to understand convergence in various metric spaces of functions.

In calculus on $\mathbb{R}$, a fundamental role is played by those subsets of $\mathbb{R}$ which are intervals. The analogues of open intervals in general metric spaces are the following:

Definition 1.6. If $(X,d)$ is a metric space, $p \in X$, and $r > 0$, the open ball of radius $r$ around $p$ is
\[
B_r(p) = \{ q \in X \mid d(p,q) < r \}.
\]

Exercise 1.7. In $\mathbb{R}^2$, draw a picture of the open ball of radius 1 around the origin in the metrics $d_2$, $d_1$, and $d_\infty$.
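The limit $\lim_{p \to \infty} d_p = d_\infty$ mentioned above can be observed numerically. This sketch (my own illustration, with an arbitrarily chosen pair of points) shows $d_p$ approaching the largest coordinate difference as $p$ grows, since that coordinate dominates the sum inside the $p$-th root:

```python
def dp(x, y, p):
    """The l^p metric on R^n, for real p >= 1."""
    return sum(abs(xi - yi) ** p for xi, yi in zip(x, y)) ** (1.0 / p)

def dinf(x, y):
    """The sup (l^infinity) metric on R^n."""
    return max(abs(xi - yi) for xi, yi in zip(x, y))

x, y = (0.0, 0.0, 0.0), (1.0, 2.0, 3.0)
# d_p decreases toward d_infinity as p grows
for p in (1, 2, 4, 16, 64):
    print(p, dp(x, y, p))
print("limit:", dinf(x, y))  # 3.0: the largest coordinate gap
```

Here $d_1 = 6$, $d_2 = \sqrt{14} \approx 3.742$, and already $d_{16} \approx 3.001$, consistent with the claimed limit.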
One of the biggest themes of the whole unit on metric spaces in this course is that a major role is played by the following kinds of sets:

Definition 1.8. If $(X,d)$ is a metric space and $U \subset X$, $U$ is called an open subset of $X$ if, for every $p \in U$, there is some $\epsilon > 0$ such that $B_\epsilon(p) \subset U$. (Of course, $\epsilon$ will typically depend on $p$.)

Fortunately, the terminology in the previous two definitions is consistent, because:

Proposition 1.9. If $(X,d)$ is a metric space, every open ball $B_r(p)$ is an open subset of $X$.

Proof. Let $p \in X$ and $r > 0$. We need to show that if $q \in B_r(p)$ then some open ball $B_\epsilon(q)$ ($\epsilon > 0$) around $q$ is contained in $B_r(p)$. Now by the definition of $B_r(p)$, the fact that $q \in B_r(p)$ means that $d(p,q) < r$, so if we set $\epsilon = r - d(p,q)$ we have $\epsilon > 0$. If $x \in B_\epsilon(q)$, then the triangle inequality shows
\[
d(p,x) \leq d(p,q) + d(q,x) < d(p,q) + \epsilon = r,
\]
and so $x \in B_r(p)$. Thus $B_\epsilon(q) \subset B_r(p)$, as desired.

Open balls are conceptually fairly simple, but they don't behave very well when you apply set-theoretic operations to them (for instance, the union of two open balls is only rarely an open ball). The more general notion of an open set is better in this regard.

Lemma 1.10. Let $(X,d)$ be a metric space.
(i) If $\{ U_\alpha \mid \alpha \in A \}$ is any collection of open sets in $X$, then $\bigcup_{\alpha \in A} U_\alpha$ is open.
(ii) If $\{ U_1, \dots, U_n \}$ is a finite collection of open sets in $X$, then $\bigcap_{i=1}^n U_i$ is an open set.

Proof. For (i), if $x \in \bigcup_{\alpha \in A} U_\alpha$, then for some $\beta \in A$ we have $x \in U_\beta$ (this is just the definition of the union of a collection of sets). Since $U_\beta$ is open, we have $B_\epsilon(x) \subset U_\beta$ for some $\epsilon > 0$. But of course $U_\beta \subset \bigcup_{\alpha \in A} U_\alpha$, so this shows that $B_\epsilon(x) \subset \bigcup_{\alpha \in A} U_\alpha$. Since $x$ was an arbitrary element of $\bigcup_{\alpha \in A} U_\alpha$, this completes the proof of (i).

As for (ii), if $x \in \bigcap_{i=1}^n U_i$, then for each $i$ there is $\epsilon_i > 0$ such that $B_{\epsilon_i}(x) \subset U_i$. Let
\[
\epsilon = \min_{1 \leq i \leq n} \epsilon_i.
\]
Then (since there are only finitely many $\epsilon_i$!) $\epsilon > 0$, and we have, for each $i$,
\[
B_\epsilon(x) \subset B_{\epsilon_i}(x) \subset U_i,
\]
so that
\[
B_\epsilon(x) \subset \bigcap_{i=1}^n U_i.
\]
Note the asymmetry between unions and intersections here; arbitrary unions of open sets are open, but we can only say that finite intersections of open sets are open.
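The finiteness hypothesis in (ii) cannot be dropped. The following standard counterexample (not worked out in the notes above, but a routine check against Definition 1.8) exhibits an infinite intersection of open sets in $\mathbb{R}$ that fails to be open:
\[
\bigcap_{n=1}^{\infty} \left( -\tfrac{1}{n}, \tfrac{1}{n} \right) = \{0\},
\]
since any $x \neq 0$ satisfies $|x| \geq \tfrac{1}{n}$ once $n$ is large enough. But the single point $\{0\}$ is not open in $\mathbb{R}$: every ball $B_\epsilon(0) = (-\epsilon, \epsilon)$ contains nonzero points, so no ball around $0$ is contained in $\{0\}$. The proof of (ii) breaks down here exactly because $\min_{n} \tfrac{1}{n}$ over infinitely many $n$ would be $0$, not a positive $\epsilon$.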