
Lecture 1: Isomorphisms of Vector Spaces (pages 246-249)

Definition: A linear mapping L : U → V is said to be one-to-one if L(u1) = L(u2) implies u1 = u2.

Example: The mapping L : R4 → R2 defined by L(a, b, c, d) = (a, d) is not one-to-one. One counterexample is that L(1, 2, 1, 2) = (1, 2) and L(1, −1, −2, 2) = (1, 2), so L(1, 2, 1, 2) = L(1, −1, −2, 2) even though (1, 2, 1, 2) ≠ (1, −1, −2, 2).

The mapping L : R2 → R4 defined by L(a, b) = (a, b, a, b) is one-to-one. To prove this, we assume that L(a1, b1) = L(a2, b2). That means (a1, b1, a1, b1) = (a2, b2, a2, b2). From this we have a1 = a2 and b1 = b2, which means (a1, b1) = (a2, b2). And so we have shown that L(a1, b1) = L(a2, b2) implies (a1, b1) = (a2, b2).

The mapping L : R3 → P2 defined by L(a, b, c) = a + (a + b)x + (a − c)x² is one-to-one. To prove this, we assume that L(a1, b1, c1) = L(a2, b2, c2). That means a1 + (a1 + b1)x + (a1 − c1)x² = a2 + (a2 + b2)x + (a2 − c2)x². This means that a1 = a2, a1 + b1 = a2 + b2, and a1 − c1 = a2 − c2. Plugging the fact that a1 = a2 into the other two equations gives us a1 + b1 = a1 + b2 ⇒ b1 = b2 and a1 − c1 = a1 − c2 ⇒ c1 = c2. And since a1 = a2, b1 = b2, and c1 = c2, we have (a1, b1, c1) = (a2, b2, c2), and thus L is one-to-one.
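Aside: one way to check these conclusions concretely is to write each map as a matrix acting on coordinate vectors and evaluate it numerically. The sketch below does this in Python; the matrix encodings and the names L_proj and L_dup are just illustrative choices.

```python
import numpy as np

# A minimal numerical sketch of the examples above.  The maps are written
# as matrices acting on coordinate vectors; the names L_proj and L_dup
# are illustrative, not from the text.
L_proj = np.array([[1, 0, 0, 0],      # L(a, b, c, d) = (a, d)
                   [0, 0, 0, 1]])
L_dup = np.array([[1, 0],             # L(a, b) = (a, b, a, b)
                  [0, 1],
                  [1, 0],
                  [0, 1]])

# The counterexample from the text: two different inputs with the same
# image, so L_proj is not one-to-one.
u1, u2 = np.array([1, 2, 1, 2]), np.array([1, -1, -2, 2])
print(L_proj @ u1, L_proj @ u2)       # both print [1 2]

# For L_dup, equal outputs force equal inputs: the output carries (a, b)
# in its first two slots, so the input can be read off directly.
v = L_dup @ np.array([3, 7])
print(v[:2])                          # [3 7], the original input
```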

Lemma 4.7.1: L is one-to-one if and only if Null(L) = {0}.

Proof of Lemma 4.7.1: We will need to prove both directions of this if and only if statement:

⇒ (If L is one-to-one, then Null(L) = {0}): Suppose that L is one-to-one, and let x ∈ Null(L). Then L(x) = 0. But we also know that L(0) = 0. Since L is one-to-one and L(x) = L(0), we have x = 0. As such, we see that 0 is the only element of Null(L).

⇐ (If Null(L) = {0}, then L is one-to-one): Suppose that Null(L) = {0}, and further suppose that L(u1) = L(u2). Then L(u1 − u2) = L(u1) − L(u2) = 0. This means that u1 − u2 is in the nullspace of L. But 0 is the only element of the nullspace, so we have u1 − u2 = 0. By the uniqueness of the additive inverse, this means that u1 = u2, and this shows that L is one-to-one.
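Aside: Lemma 4.7.1 also gives a practical way to test whether a linear map is one-to-one: compute its nullspace. A minimal sketch, assuming each map is stored as its standard matrix (with polynomials identified with their coefficient vectors):

```python
from sympy import Matrix

# Lemma 4.7.1 in computational form: a linear map is one-to-one exactly
# when its nullspace is {0}.  The matrices are the standard matrices of
# the three examples above (coefficient-vector encoding for P2 is an
# assumption of this sketch).
maps = {
    "L(a,b,c,d) = (a,d)":               Matrix([[1, 0, 0, 0], [0, 0, 0, 1]]),
    "L(a,b) = (a,b,a,b)":               Matrix([[1, 0], [0, 1], [1, 0], [0, 1]]),
    "L(a,b,c) = a + (a+b)x + (a-c)x^2": Matrix([[1, 0, 0], [1, 1, 0], [1, 0, -1]]),
}
for name, M in maps.items():
    null_basis = M.nullspace()        # a basis for Null(L)
    print(name, "-> Null(L) = {0}?", len(null_basis) == 0)
```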

Definition: L : U → V is said to be onto if for every v ∈ V there exists some u ∈ U such that L(u) = v. That is, Range(L) = V.

Example: The mapping L : R4 → R2 defined by L(a, b, c, d) = (a, d) is onto. To prove this, let (s1, s2) ∈ R2. Then (s1, 0, 0, s2) ∈ R4 is such that L(s1, 0, 0, s2) = (s1, s2). (Note: I could have also used (s1, 1, 1, s2) or (s1, s1, s2, s2), or countless other vectors from R4. We need to be able to find at least one vector u such that L(u) = v, not exactly one.)

The mapping L : R2 → R4 defined by L(a, b) = (a, b, a, b) is not onto. One counterexample is that there is no (a, b) ∈ R2 such that L(a, b) = (1, 2, 3, 4), since we would need to have a = 1 and a = 3, which is not possible.

The mapping L : R3 → P2 defined by L(a, b, c) = a + (a + b)x + (a − c)x² is onto. To prove this, let s0 + s1x + s2x² ∈ P2. To show that L is onto, we need to find (a, b, c) ∈ R3 such that L(a, b, c) = s0 + s1x + s2x². That is, we need to find a, b, c ∈ R such that a + (a + b)x + (a − c)x² = s0 + s1x + s2x². Setting the coefficients equal to each other, we see that we need a = s0, a + b = s1, and a − c = s2. Plugging a = s0 into the other two equations, we get that s0 + b = s1 ⇒ b = s1 − s0, and s0 − c = s2 ⇒ c = s0 − s2. And so we have found (s0, s1 − s0, s0 − s2) ∈ R3 such that L(s0, s1 − s0, s0 − s2) = s0 + (s0 + (s1 − s0))x + (s0 − (s0 − s2))x² = s0 + s1x + s2x².
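Aside: ontoness can also be checked computationally. For a linear map stored as a matrix, being onto is equivalent to the rank equalling the dimension of the codomain; the sketch below (with illustrative names) checks the three examples and verifies the preimage formula found above.

```python
import numpy as np

# A numerical check of the onto examples, under the same standard-matrix
# identification as before: a linear map is onto exactly when the rank of
# its matrix equals the dimension of the codomain.
L_proj = np.array([[1, 0, 0, 0], [0, 0, 0, 1]])        # R4 -> R2
L_dup = np.array([[1, 0], [0, 1], [1, 0], [0, 1]])     # R2 -> R4
L_poly = np.array([[1, 0, 0], [1, 1, 0], [1, 0, -1]])  # R3 -> P2

for name, M in [("L_proj", L_proj), ("L_dup", L_dup), ("L_poly", L_poly)]:
    print(name, "onto?", np.linalg.matrix_rank(M) == M.shape[0])

# For the P2 example, the preimage found in the text is (s0, s1-s0, s0-s2).
s = np.array([2.0, 5.0, -1.0])                 # target s0 + s1*x + s2*x^2
preimage = np.array([s[0], s[1] - s[0], s[0] - s[2]])
print(np.allclose(L_poly @ preimage, s))       # True
```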

Theorem 4.7.2: The linear mapping L : U → V has an inverse mapping L⁻¹ : V → U if and only if L is one-to-one and onto.

Proof of Theorem 4.7.2: First, let's suppose that L has an inverse mapping. To see that L is one-to-one, we assume that L(u1) = L(u2). Then u1 = L⁻¹(L(u1)) = L⁻¹(L(u2)) = u2, and so we see that L is one-to-one. To see that L is onto, let v ∈ V. Then L⁻¹(v) ∈ U is such that L(L⁻¹(v)) = v, and this means that L is onto.

Now, let's suppose that L is one-to-one and onto, and let's try to define a mapping M : V → U that will turn out to be L⁻¹. There are two things we need to do to define any mapping from V to U. The first is to make sure we define M for every element of V. So, let v ∈ V. Then, since L is onto, we know that there is some u ∈ U such that L(u) = v. So if we say M(v) = u such that L(u) = v, then we are defining M for every element of V. But the other thing we have to do when defining a mapping is to make sure that we don't send our vector v to two different vectors in U. However, because L is one-to-one, we know that if L(u1) = v and L(u2) = v, then L(u1) = L(u2), so u1 = u2. And so, we can define M(v) to be the unique vector u ∈ U such that L(u) = v. And this mapping M satisfies M(L(u)) = M(v) = u and L(M(v)) = L(u) = v, which means that M = L⁻¹.
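Aside: for the one example map above that is both one-to-one and onto, Theorem 4.7.2 can be seen concretely: the standard matrix of L(a, b, c) = a + (a + b)x + (a − c)x² is invertible, and its inverse matrix implements L⁻¹. A minimal sketch:

```python
import numpy as np

# Theorem 4.7.2, sketched for the example map that is both one-to-one and
# onto: L(a, b, c) = a + (a + b)x + (a - c)x^2.  Its standard matrix is
# square and invertible, and the inverse matrix plays the role of L^-1.
L_poly = np.array([[1.0, 0.0, 0.0],
                   [1.0, 1.0, 0.0],
                   [1.0, 0.0, -1.0]])
L_inv = np.linalg.inv(L_poly)

u = np.array([2.0, -3.0, 4.0])                 # an arbitrary input
v = L_poly @ u                                 # its image (coefficients in P2)
print(np.allclose(L_inv @ v, u))               # L^-1(L(u)) = u  -> True
print(np.allclose(L_poly @ (L_inv @ v), v))    # L(L^-1(v)) = v  -> True
```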

Definition: If U and V are vector spaces over R, and if L : U → V is a linear, one-to-one, and onto mapping, then L is called an isomorphism (or a vector isomorphism), and U and V are said to be isomorphic.

Note: The reason that we include the alternate name "vector isomorphism" is that there are lots of different definitions for an isomorphism in the world of mathematics. You may have encountered a definition that only requires a function that is one-to-one and onto (but not linear). The word "isomorphism" comes from the Greek words meaning "same form", so the definition of an isomorphism depends on what form you want to have be the same. In linear algebra, linear combinations are an important part of the form of a vector space, so we add the requirement that our function preserve linear combinations. You'll find that different areas of study have a different concept of form. You get used to it...

Example: We've seen that the linear mapping L : R3 → P2 defined by L(a, b, c) = a + (a + b)x + (a − c)x² is both one-to-one and onto, so L is an isomorphism, and R3 and P2 are isomorphic.

Theorem 4.7.3: Suppose that U and V are finite-dimensional vector spaces over R. Then U and V are isomorphic if and only if they are of the same dimension.

Proof of Theorem 4.7.3: We will need to prove both directions of this if and only if statement:

⇒ (If U and V are isomorphic, then they have the same dimension): Suppose that U and V are isomorphic, let L be an isomorphism from U to V, and let B = {u1,..., un} be a basis for U. Then I claim that C = {L(u1),...,L(un)} is a basis for V. First, let's show that C is linearly independent. To that end, let t1, . . . , tn ∈ R be such that

t1L(u1) + ··· + tnL(un) = 0V

Then, by the linearity of L,

L(t1u1 + ··· + tnun) = 0V

and so we see that t1u1 + ··· + tnun ∈ Null(L). But, since L is an isomorphism,

L is one-to-one, so we know that Null(L) = {0U}. This means that

t1u1 + ··· + tnun = 0U

Since the vectors {u1,..., un} are a basis for U, they are linearly independent, so we know that t1 = ··· = tn = 0. As such, we've shown that the vectors {L(u1),...,L(un)} are also linearly independent. Now we need to show that they are a spanning set for V. That is, given any v ∈ V, we need to find s1, . . . , sn ∈ R such that

s1L(u1) + ··· + snL(un) = v

Since L is an isomorphism, we know that L is onto, so there is a u ∈ U such that L(u) = v. And since {u1,..., un} is a basis for U, we know that there are r1, . . . , rn ∈ R such that

r1u1 + ··· + rnun = u

Then

v = L(u) = L(r1u1 + ··· + rnun) = r1L(u1) + ··· + rnL(un)

and so we see that v is in the span of C = {L(u1),...,L(un)}. And since we have shown that C is a linearly independent spanning set for V, we know that it is a basis for V. And this means that the dimension of V is n, which is the same as the dimension of U.

⇐ (If U and V have the same dimension, then they are isomorphic): Let U and V have the same dimension, let B = {u1,..., un} be a basis for U, and let C = {v1,..., vn} be a basis for V. Then I claim that the linear mapping L : U → V defined by

L(t1u1 + ··· + tnun) = t1v1 + ··· + tnvn

is an isomorphism. I will take for granted that it is a linear mapping, and instead focus on showing that it is one-to-one and onto. To see that L is one-to-one, suppose that

L(r1u1 + ··· + rnun) = L(s1u1 + ··· + snun)

Then

r1v1 + ··· + rnvn = s1v1 + ··· + snvn

Bringing everything to one side, we get that

(r1 − s1)v1 + ··· + (rn − sn)vn = 0V

But since {v1,..., vn} is a basis for V, it is linearly independent, so we have that ri − si = 0 for all i, or that ri = si for all i. Which means that

r1u1 + ··· + rnun = s1u1 + ··· + snun

and so L is one-to-one.

Now let's see that L is onto. To that end, let v ∈ V. Then there are s1, . . . , sn ∈ R such that

v = s1v1 + ··· + snvn

And we then have

L(s1u1 + ··· + snun) = s1v1 + ··· + snvn

so s1u1 + ··· + snun ∈ U is such that L(s1u1 + ··· + snun) = v, and we see that L is onto.
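Aside: the map L(t1u1 + ··· + tnun) = t1v1 + ··· + tnvn built in this direction of the proof can be computed concretely: find the coordinates of the input relative to B, then reuse them relative to C. The bases in the sketch below are my own small example.

```python
import numpy as np

# A concrete instance of the map built in the second half of the proof,
# with bases chosen purely for illustration:
#   B = {(1, 1), (1, -1)}   a basis for U = R2
#   C = {1, 1 + x}          a basis for V = P1, stored as coefficient vectors
# L sends t1*u1 + t2*u2 to t1*v1 + t2*v2.
P = np.array([[1.0, 1.0],
              [1.0, -1.0]])           # columns are the basis B
Q = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # columns are the basis C

def L(u):
    t = np.linalg.solve(P, u)         # coordinates of u relative to B
    return Q @ t                      # the same coordinates relative to C

print(L(np.array([1.0, 1.0])))        # u1 -> v1 = 1, i.e. [1. 0.]
print(L(np.array([2.0, 0.0])))        # u1 + u2 -> v1 + v2 = 2 + x, i.e. [2. 1.]
```

Note that L depends on the chosen bases B and C; different choices give different (but equally valid) isomorphisms.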

For once the book has generalized more than I would, because I find the following statement to be quite important in the study of linear algebra.

Corollary to Theorem 4.7.3: If U is a vector space with dimension n, then U is isomorphic to Rn. That is, every finite dimensional vector space is really just the same as Rn. Which is fabulous, because we know a lot of things about Rn, and Rn is easy to work with. I’ll discuss the wonders of this new revelation in the next lecture. But for now, we finish with one last theorem.
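Aside: before stating it, here is the corollary made concrete for P2: the coordinate map relative to the basis {1, x, x²} is an isomorphism from P2 to R3. The helper functions below are just one illustrative encoding.

```python
from sympy import symbols, expand

x = symbols('x')

# The corollary made concrete: dim(P2) = 3, so P2 is isomorphic to R3 via
# the coordinate map relative to the basis {1, x, x^2}.  The helper names
# are illustrative.
def to_R3(p):
    p = expand(p)
    return [p.coeff(x, k) for k in range(3)]       # [a0, a1, a2]

def to_P2(c):
    return c[0] + c[1]*x + c[2]*x**2               # a0 + a1*x + a2*x^2

p, q = 1 + 2*x**2, 3 - x + 5*x**2
# The coordinate map preserves linear combinations: the coordinates of
# 2p + 3q are 2*(coordinates of p) + 3*(coordinates of q).
print(to_R3(2*p + 3*q))                                   # [11, -3, 19]
print([2*a + 3*b for a, b in zip(to_R3(p), to_R3(q))])    # [11, -3, 19]
print(to_P2(to_R3(2*p + 3*q)))                            # 19*x**2 - 3*x + 11
```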

Theorem 4.7.4: If U and V are n-dimensional vector spaces over R, then a linear mapping L : U → V is one-to-one if and only if it is onto.

Proof of Theorem 4.7.4: Let U and V be n-dimensional vector spaces over R, and let B = {u1,..., un} be a basis for U and C = {v1,..., vn} be a basis for V. Now suppose that L is one-to-one. To show that L is onto, I am first going to show that the set C1 = {L(u1),...,L(un)} is a basis for V. And to do that, I first want to show that C1 is linearly independent. To that end, suppose that t1, . . . , tn ∈ R are such that

t1L(u1) + ··· + tnL(un) = 0

Using the linearity of L we get

L(t1u1 + ··· + tnun) = 0

and so we see that t1u1 + ··· + tnun is in the nullspace of L. But, since L is one-to-one, Lemma 4.7.1 tells us that Null(L) = {0}. So we have

t1u1 + ··· + tnun = 0

At this point we use the fact that {u1,..., un} is a basis for U, and therefore is linearly independent, to see that t1 = ··· = tn = 0. And this means that our set C1 = {L(u1),...,L(un)} is also linearly independent. We also know that C1 contains n vectors. This is actually not immediately obvious, since in general one could have that L(ui) = L(uj) for some i ≠ j, but because L is one-to-one, we know that L(ui) ≠ L(uj) whenever i ≠ j, because the vectors ui are all distinct. An even easier way to see this is to note that the set C1 is linearly independent, and a linearly independent set cannot contain duplicate vectors.

Either way, we have found that C1 is a linearly independent set containing dim(V) elements of V, so by the two-out-of-three rule (aka Theorem 4.3.4(3)), C1 is also a spanning set for V, and therefore is a basis for V.

But what I actually need is the fact that C1 is a spanning set for V. Because now, given any v ∈ V, there are scalars s1, . . . , sn such that

s1L(u1) + ··· + snL(un) = v

This means that s1u1 + ··· + snun ∈ U is such that

L(s1u1 + ··· + snun) = s1L(u1) + ··· + snL(un) = v

and at long last we have shown that L is onto.

Now, we turn to the other direction of our proof. So, let us assume that L is onto, and show that L is also one-to-one. The first step in that process will be to note that, because L is onto, for every vector vi in our basis C = {v1,..., vn} for V, there is a vector wi in U such that L(wi) = vi. I claim that the set B1 = {w1,..., wn} of these vectors is a basis for U. To prove this claim, I will first show that B1 is linearly independent. To that end, let t1, . . . , tn ∈ R be such that

t1w1 + ··· + tnwn = 0

Then we see that

L(t1w1 + ··· + tnwn) = L(0) = 0, so t1L(w1) + ··· + tnL(wn) = 0, so t1v1 + ··· + tnvn = 0

Since {v1,..., vn} is linearly independent, we have t1 = ··· = tn = 0. And so we have shown that B1 = {w1,..., wn} is also linearly independent. And, as noted earlier, this means that B1 does not contain any repeated entries, which means that B1 has n = dim(U) elements. Again using the two-out-of-three rule, B1 is a spanning set for U, and thus is a basis for U.

But again, it is the fact that B1 is a spanning set for U that I want to use. Now, to show that L is one-to-one, suppose that L(u) = L(w). Since B1 is a spanning set for U, we know that there are scalars a1, . . . , an and b1, . . . , bn such that

u = a1w1 + ··· + anwn and w = b1w1 + ··· + bnwn

Then

L(u) = L(a1w1 + ··· + anwn) = a1L(w1) + ··· + anL(wn) = a1v1 + ··· + anvn

and similarly

L(w) = L(b1w1 + ··· + bnwn) = b1L(w1) + ··· + bnL(wn) = b1v1 + ··· + bnvn

which means that

a1v1 + ··· + anvn = b1v1 + ··· + bnvn

and from this we get

(a1 − b1)v1 + ··· + (an − bn)vn = 0

Since {v1,..., vn} is linearly independent, we have (a1 − b1) = ··· = (an − bn) = 0. And this means that ai = bi for all 1 ≤ i ≤ n, which means that u = w, and so we have shown that L is one-to-one.
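Aside: Theorem 4.7.4 is easy to see numerically once a map between n-dimensional spaces is stored as an n × n matrix: one-to-one and onto both come down to the matrix having rank n, so they hold or fail together. A small sketch with illustrative matrices:

```python
import numpy as np

# Theorem 4.7.4, checked numerically: for an n x n matrix, "one-to-one"
# (trivial nullspace) and "onto" (columns span the codomain) are both the
# single condition rank = n, so they hold or fail together.
A = np.array([[1.0, 0.0, 0.0],        # the isomorphism from the earlier example
              [1.0, 1.0, 0.0],
              [1.0, 0.0, -1.0]])
B = np.array([[1.0, 2.0, 3.0],        # a rank-deficient square matrix
              [2.0, 4.0, 6.0],
              [0.0, 0.0, 1.0]])

for name, M in [("A", A), ("B", B)]:
    r = np.linalg.matrix_rank(M)
    one_to_one = (r == M.shape[1])    # trivial nullspace
    onto = (r == M.shape[0])          # columns span the codomain
    print(name, "one-to-one:", one_to_one, "onto:", onto)   # always equal here
```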
