
In this section, we assume that you're familiar with $\mathbb{R}^2$ as the set of pairs of real numbers, as well as the "standard" addition and scalar multiplication that you have seen in your linear algebra class.

1 Analysis on $\mathbb{R}^2$

From the previous section, an important element of analysis is the idea of measuring distance: the definition of convergence of a sequence of real numbers used $|x_n - x|$ to measure how close $x_n$ was to $x$. As it turns out, there are many different ways of measuring distance on $\mathbb{R}^2$.

Definition 1.1. Suppose $x = (x_1, x_2) \in \mathbb{R}^2$.

1. Euclidean Size: $\|x\|_2 := (x_1^2 + x_2^2)^{1/2} = \left(\sum_{i=1}^{2} x_i^2\right)^{1/2}$.
2. Maximum Size: $\|x\|_\infty := \max\{|x_1|, |x_2|\}$.
3. Sum Size: $\|x\|_1 := |x_1| + |x_2| = \sum_{i=1}^{2} |x_i|$.

Notice that if we think of $\mathbb{R}$ as the subspace of $\mathbb{R}^2$ whose second component is $0$, then each of the norms reduces to the absolute value. Thus, they are all generalizations of the absolute value as a measure of size.

Theorem 1.2. $\|\cdot\|_2$ satisfies (1-3):

1. $\|x\|_2 > 0$ for all $x \in \mathbb{R}^2 \setminus \{0\}$.
2. $\|ax\|_2 = |a|\,\|x\|_2$ for all $x \in \mathbb{R}^2$ and all $a \in \mathbb{R}$.
3. $\|x + y\|_2 \le \|x\|_2 + \|y\|_2$ for all $x, y \in \mathbb{R}^2$.

These assertions are also true if we replace $\|\cdot\|_2$ by either $\|\cdot\|_1$ or $\|\cdot\|_\infty$.

Before we prove Theorem 1.2, we need the following lemma:

Lemma 1.3. Suppose $x = (x_1, x_2)$ and $y = (y_1, y_2)$. Then $x_1 y_1 + x_2 y_2 \le \|x\|_2 \|y\|_2$ for any $x, y \in \mathbb{R}^2$.

Proof. By taking square roots, it will be enough to show that
\[
(x_1^2 + x_2^2)(y_1^2 + y_2^2) \ge (x_1 y_1 + x_2 y_2)^2,
\]
or equivalently
\[
(x_1^2 + x_2^2)(y_1^2 + y_2^2) - (x_1 y_1 + x_2 y_2)^2 \ge 0.
\]
We have
\begin{align*}
(x_1^2 + x_2^2)(y_1^2 + y_2^2) - (x_1 y_1 + x_2 y_2)^2
&= x_1^2 y_1^2 + x_1^2 y_2^2 + x_2^2 y_1^2 + x_2^2 y_2^2 - \left( x_1^2 y_1^2 + 2 x_1 y_1 x_2 y_2 + x_2^2 y_2^2 \right) \\
&= x_1^2 y_2^2 - 2 x_1 y_1 x_2 y_2 + x_2^2 y_1^2 \\
&= (x_1 y_2 - x_2 y_1)^2,
\end{align*}
and this last quantity is clearly nonnegative.

We are now in position to prove Theorem 1.2.

Proof. (1.) Suppose that $x \ne 0$. Then at least one of $x_1$ or $x_2$ is not zero, and so $x_1^2 + x_2^2 > 0$. Taking square roots yields $\|x\|_2 > 0$.

(2.) Let $x \in \mathbb{R}^2$ and $a \in \mathbb{R}$ be arbitrary.
Since $ax = (a x_1, a x_2)$, we will have
\[
\|ax\|_2^2 = (a x_1)^2 + (a x_2)^2 = a^2 \left( x_1^2 + x_2^2 \right) = a^2 \|x\|_2^2.
\]
Taking square roots (and bearing in mind that $\sqrt{a^2} = |a|$) finishes the proof of (2.).

(3.) We will show that $\|x + y\|_2^2 \le (\|x\|_2 + \|y\|_2)^2$. Since $x + y = (x_1 + y_1, x_2 + y_2)$, we have
\begin{align*}
\|x + y\|_2^2 &= (x_1 + y_1)^2 + (x_2 + y_2)^2 \\
&= x_1^2 + 2 x_1 y_1 + y_1^2 + x_2^2 + 2 x_2 y_2 + y_2^2 \\
&= (x_1^2 + x_2^2) + 2 \left( x_1 y_1 + x_2 y_2 \right) + (y_1^2 + y_2^2) \\
&= \|x\|_2^2 + 2 \left( x_1 y_1 + x_2 y_2 \right) + \|y\|_2^2 \\
&\le \|x\|_2^2 + 2 \|x\|_2 \|y\|_2 + \|y\|_2^2 = (\|x\|_2 + \|y\|_2)^2,
\end{align*}
where we have used Lemma 1.3.

Exercise: Prove the version of Theorem 1.2 for $\|\cdot\|_1$ and $\|\cdot\|_\infty$.

Now that we have a way of measuring size, we can also measure distance between "points" (really the vectors) in $\mathbb{R}^2$:

Definition 1.4. Suppose $x, y \in \mathbb{R}^2$. Then

1. the Euclidean Distance between $x$ and $y$ is $\|x - y\|_2$.
2. the Max Distance between $x$ and $y$ is $\|x - y\|_\infty$.
3. the Sum Distance between $x$ and $y$ is $\|x - y\|_1$.

This definition simply generalizes our idea of distance from $\mathbb{R}$: the distance between $x, y \in \mathbb{R}$ is $|x - y|$. It's almost like we're just replacing $|\cdot|$ with $\|\cdot\|$. Notice that the properties of the various sizes are generalizations of the properties of the absolute value - including the triangle inequality! There is an important distinction: unlike $\mathbb{R}$, $\mathbb{R}^2$ has many different distances!

Exercise: Suppose $x = (x_1, x_2)$. Does $\|x\| := 2|x_1| + 3|x_2|$ satisfy the three properties of Theorem 1.2? Why or why not?

Exercise: Suppose $A$ is a $2 \times 2$ matrix. Does $\|x\|_A := \|Ax\|_2$ satisfy (1-3) of Theorem 1.2? Why or why not?

1.1 Convergence in $\mathbb{R}^2$

Once we have a way of measuring distance, we can define convergence:

Definition 1.5. Suppose $x_n$ is a sequence in $\mathbb{R}^2$ and $x \in \mathbb{R}^2$. We say that $x_n$ converges to $x$ with respect to $\|\cdot\|_i$ ($i = 1$, $2$ or $\infty$) and write either $x_n \to x$ or $\lim_{n \to \infty} x_n = x$ if for every $\varepsilon > 0$, there is an $N$ such that for all $n \in \mathbb{N}$, if $n > N$, then $\|x_n - x\|_i < \varepsilon$.
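The properties in Theorem 1.2 (and the inequality of Lemma 1.3) can be checked numerically. The sketch below is not part of the notes; the random test vectors and tolerances are arbitrary choices, and a numerical check is of course no substitute for the proofs above.

```python
import math
import random

def norm1(x):
    # Sum size: ||x||_1 = |x1| + |x2|
    return abs(x[0]) + abs(x[1])

def norm2(x):
    # Euclidean size: ||x||_2 = sqrt(x1^2 + x2^2)
    return math.sqrt(x[0]**2 + x[1]**2)

def norm_inf(x):
    # Maximum size: ||x||_inf = max{|x1|, |x2|}
    return max(abs(x[0]), abs(x[1]))

norms = {"1": norm1, "2": norm2, "inf": norm_inf}

random.seed(0)
for _ in range(1000):
    x = (random.uniform(-10, 10), random.uniform(-10, 10))
    y = (random.uniform(-10, 10), random.uniform(-10, 10))
    a = random.uniform(-10, 10)
    s = (x[0] + y[0], x[1] + y[1])      # x + y
    ax = (a * x[0], a * x[1])           # a x

    # Lemma 1.3: x1*y1 + x2*y2 <= ||x||_2 * ||y||_2
    dot = x[0]*y[0] + x[1]*y[1]
    assert dot <= norm2(x) * norm2(y) + 1e-9

    for name, n in norms.items():
        # Property (1): positivity away from 0
        assert n(x) > 0
        # Property (2): ||a x|| = |a| ||x||
        assert math.isclose(n(ax), abs(a) * n(x), rel_tol=1e-9, abs_tol=1e-12)
        # Property (3): triangle inequality
        assert n(s) <= n(x) + n(y) + 1e-9

print("Theorem 1.2 and Lemma 1.3 hold on all test vectors")
```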
On the face of things, it looks like we could perhaps have a sequence that converges with respect to one of our distances, but not with respect to the other. As it turns out, this doesn't happen in $\mathbb{R}^2$.

Lemma 1.6. There are positive constants $C$, $K$ and $M$ such that for all $x \in \mathbb{R}^2$
\begin{align*}
\frac{1}{C} \|x\|_1 &\le \|x\|_2 \le C \|x\|_1 \\
\frac{1}{K} \|x\|_\infty &\le \|x\|_2 \le K \|x\|_\infty \\
\frac{1}{M} \|x\|_1 &\le \|x\|_\infty \le M \|x\|_1
\end{align*}

Proof. We prove only the first. Suppose $x = (x, y)$. By definition, $\|x\|_1 = |x| + |y|$ and $\|x\|_2 = \sqrt{x^2 + y^2}$. We clearly have
\begin{align*}
|x| &\le \sqrt{x^2 + y^2} = \|x\|_2 \\
|y| &\le \sqrt{x^2 + y^2} = \|x\|_2,
\end{align*}
and adding these inequalities yields $\|x\|_1 \le 2\|x\|_2$, and so $\frac{1}{2}\|x\|_1 \le \|x\|_2$. It remains only to show that $\|x\|_2 \le 2\|x\|_1$. Notice that
\[
\|x\|_2 = \sqrt{x^2 + y^2} \le \sqrt{x^2 + 2|x||y| + y^2} = \sqrt{(|x| + |y|)^2} = \|x\|_1 < 2\|x\|_1,
\]
as desired.

Using the previous lemma, we can show the following:

Proposition 1.7. $x_n \to x$ with respect to $\|\cdot\|_2$ if and only if $x_n \to x$ with respect to $\|\cdot\|_1$.

Proof. We first show that if $x_n \to x$ with respect to $\|\cdot\|_2$, then $x_n \to x$ with respect to $\|\cdot\|_1$. Let $\varepsilon > 0$ be given. Since $x_n \to x$ with respect to $\|\cdot\|_2$, there is an $N$ such that
\[
\|x_n - x\|_2 < \frac{\varepsilon}{C} \quad \text{whenever } n > N.
\]
Suppose now that $n > N$ is arbitrary. By Lemma 1.6, we then have
\[
\frac{1}{C} \|x_n - x\|_1 \le \|x_n - x\|_2 < \frac{\varepsilon}{C},
\]
and so $\|x_n - x\|_1 < \varepsilon$. Thus, the $N$ from the convergence of $x_n$ to $x$ with respect to $\|\cdot\|_2$ also works to show convergence of $x_n$ to $x$ with respect to $\|\cdot\|_1$. The other direction is left to you.

The preceding proposition shows that in $\mathbb{R}^2$, to determine the convergence of $x_n$ to $x$, it doesn't matter which measure of size we use - so long as it satisfies (1-3) of Theorem 1.2. Thus, from now on, when we say $x_n \to x$, it is unnecessary to specify with respect to which norm we mean! The other advantage of this is that we get the following:

Proposition 1.8. Suppose $\mathbf{x}_n = (x_n, y_n)$ and $\mathbf{x} = (x, y)$. Then $\mathbf{x}_n \to \mathbf{x}$ if and only if $x_n \to x$ and $y_n \to y$.

This proposition says that to figure out if a sequence converges in $\mathbb{R}^2$, we need only look at the behavior of the components.
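The proof of Lemma 1.6 shows that $C = 2$ works for the first pair of inequalities; a quick computation shows $K = 2$ and $M = 2$ work as well (these particular constants are not stated in the notes for $K$ and $M$, so treat them as an assumption being tested). A short numerical sanity check:

```python
import math
import random

def norm1(x):
    return abs(x[0]) + abs(x[1])          # sum size

def norm2(x):
    return math.sqrt(x[0]**2 + x[1]**2)   # Euclidean size

def norm_inf(x):
    return max(abs(x[0]), abs(x[1]))      # maximum size

# Candidate constants for Lemma 1.6 (C = 2 comes from the proof;
# K = M = 2 are assumed here and verified below).
C = K = M = 2

random.seed(0)
for _ in range(1000):
    x = (random.uniform(-100, 100), random.uniform(-100, 100))
    # (1/C)||x||_1 <= ||x||_2 <= C||x||_1
    assert norm1(x) / C <= norm2(x) + 1e-9
    assert norm2(x) <= C * norm1(x) + 1e-9
    # (1/K)||x||_inf <= ||x||_2 <= K||x||_inf
    assert norm_inf(x) / K <= norm2(x) + 1e-9
    assert norm2(x) <= K * norm_inf(x) + 1e-9
    # (1/M)||x||_1 <= ||x||_inf <= M||x||_1
    assert norm1(x) / M <= norm_inf(x) + 1e-9
    assert norm_inf(x) <= M * norm1(x) + 1e-9

print("Lemma 1.6 holds with C = K = M = 2 on all test vectors")
```

In fact sharper constants exist (e.g. $\|x\|_2 \le \sqrt{2}\,\|x\|_\infty$), but any finite constants suffice for the equivalence of convergence in Proposition 1.7.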
This is a very nice property, since we can then use all of our convergence properties in $\mathbb{R}$.

Proof. Suppose $\mathbf{x}_n \to \mathbf{x}$. We need to show that $x_n \to x$ and $y_n \to y$. Since Proposition 1.7 implies it doesn't matter what size we use in $\mathbb{R}^2$, we use $\|\cdot\|_1$. Notice that
\[
|x_n - x| \le |x_n - x| + |y_n - y| = \|\mathbf{x}_n - \mathbf{x}\|_1 \quad \text{for all } n \in \mathbb{N}.
\]
Suppose now that $\varepsilon > 0$. Since $\mathbf{x}_n \to \mathbf{x}$, there is an $N_1$ such that
\[
\|\mathbf{x}_n - \mathbf{x}\|_1 < \varepsilon \quad \text{whenever } n > N_1.
\]
By the preceding inequality, we then have
\[
|x_n - x| \le \|\mathbf{x}_n - \mathbf{x}\|_1 < \varepsilon \quad \text{whenever } n > N_1.
\]
Thus, the $N_1$ that works for $\mathbf{x}_n$ also works for $x_n$. Thus, $\mathbf{x}_n \to \mathbf{x}$ implies that $x_n \to x$. The proof that $y_n \to y$ is similar.

We next suppose that $x_n \to x$ and $y_n \to y$, and show that $\mathbf{x}_n \to \mathbf{x}$ in $\|\cdot\|_1$. Let $\varepsilon > 0$ be given. By assumption, there exist $N_x$ and $N_y$ such that
\[
|x_n - x| < \frac{\varepsilon}{2} \quad \text{whenever } n > N_x
\]
and
\[
|y_n - y| < \frac{\varepsilon}{2} \quad \text{whenever } n > N_y.
\]
Let $N := \max\{N_x, N_y\}$. Then we will have
\[
\|\mathbf{x}_n - \mathbf{x}\|_1 = |x_n - x| + |y_n - y| < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon \quad \text{whenever } n > N.
\]
Thus, $\mathbf{x}_n \to \mathbf{x}$ in $\|\cdot\|_1$.

The preceding implies the following, whose proof is left to you:

Proposition 1.9. Suppose $\mathbf{x}_n \to \mathbf{x}$ and $\mathbf{y}_n \to \mathbf{y}$ in $\mathbb{R}^2$, and suppose $a_n \to a$ in $\mathbb{R}$. Then

1. $\mathbf{x}_n + \mathbf{y}_n \to \mathbf{x} + \mathbf{y}$
2. $a_n \mathbf{x}_n \to a \mathbf{x}$.

1.2 Continuity in $\mathbb{R}^2$

Notice that once we have convergence of sequences in $\mathbb{R}^2$, we can define continuity for functions. Unlike the case of $\mathbb{R}$, we have two types of functions to consider: $f : \Omega \to \mathbb{R}$ and $f : \Omega \to \mathbb{R}^2$, where $\Omega \subseteq \mathbb{R}^2$. However, the definition for each is in essence the same: continuous functions must map convergent sequences to convergent sequences.

Definition 1.10. Suppose $\Omega \subseteq \mathbb{R}^2$. Then

1. $f : \Omega \to \mathbb{R}$ is continuous at $a \in \Omega$ if $f(x_n) \to f(a)$ for every sequence $x_n$ in $\Omega$ that converges to $a$.
2. $f$ is continuous in $\Omega$ if $f$ is continuous at every point in $\Omega$.
3. $f : \Omega \to \mathbb{R}^2$ is continuous at $a \in \Omega$ if $f(x_n) \to f(a)$ for every sequence $x_n$ in $\Omega$ that converges to $a$.
4. $f : \Omega \to \mathbb{R}^2$ is continuous in $\Omega$ if $f$ is continuous at every point in $\Omega$.

Example 1.11. Suppose $f : (x, y) \mapsto x^2 - 2y$.
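Definition 1.10 and Proposition 1.8 can be illustrated numerically with the function of Example 1.11. The particular sequence $\mathbf{x}_n = (1 + 1/n,\ 2 - 1/n)$ and the limit point $(1, 2)$ below are arbitrary choices made for this sketch; the computation only illustrates the sequence characterization, it does not prove continuity.

```python
import math

def f(x, y):
    # The function of Example 1.11: f(x, y) = x^2 - 2y
    return x**2 - 2*y

# A sequence x_n = (1 + 1/n, 2 - 1/n). Its components converge to 1
# and 2, so by Proposition 1.8 the sequence converges to a = (1, 2)
# in any of the three norms.
a = (1.0, 2.0)
fa = f(*a)  # f(1, 2) = 1 - 4 = -3

for n in (10, 100, 1000, 10000):
    xn = (1 + 1/n, 2 - 1/n)
    # If f is continuous at a, then f(x_n) -> f(a) = -3.
    print(n, abs(f(*xn) - fa))

# The gap |f(x_n) - f(a)| shrinks as n grows, consistent with
# continuity at (1, 2).
assert abs(f(1 + 1e-6, 2 - 1e-6) - fa) < 1e-4
```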