Linear Algebra (part 2): Vector Spaces and Linear Transformations (by Evan Dummit, 2016, v. 2.35)

Contents

3 Vector Spaces and Linear Transformations
  3.1 Review of Vectors in R^n
  3.2 The Formal Definition of a Vector Space
  3.3 Subspaces
    3.3.1 The Subspace Criterion
    3.3.2 Additional Examples of Subspaces
  3.4 Span, Linear Independence, Bases, Dimension
    3.4.1 Linear Combinations and Span
    3.4.2 Linear Independence and Linear Dependence
    3.4.3 Bases and Dimension
    3.4.4 Finding Bases for R^n, Row Spaces, Column Spaces, and Nullspaces
  3.5 Linear Transformations
    3.5.1 Definition and Examples
    3.5.2 Kernel and Image
    3.5.3 Isomorphisms of Vector Spaces, Matrices Associated to Linear Transformations

3 Vector Spaces and Linear Transformations

In this chapter we will introduce the notion of an abstract vector space, which is, ultimately, a generalization of the ideas inherent in studying vectors in 2- or 3-dimensional space. We will study vector spaces from an axiomatic perspective, discuss the notions of span and linear independence, and ultimately explain why every vector space possesses a linearly independent spanning set called a basis. We will then discuss linear transformations, which are the most natural kind of map from one vector space to another, and show how they are intimately related with matrices. In our discussions we will give concrete examples as often as possible, and use the general properties we have shown about vector spaces to motivate results relevant to solving systems of linear equations and solving differential equations.

3.1 Review of Vectors in R^n

• A vector, as we typically think of it, is a quantity which has both a magnitude and a direction. This is in contrast to a scalar, which carries only a magnitude.

  ◦ Real-valued vectors are extremely useful in just about every aspect of the physical sciences, since just about everything in Newtonian physics is a vector: position, velocity, acceleration, forces, etc. There is also vector calculus (namely, calculus in the context of vector fields), which is typically part of a multivariable calculus course; it has many applications to physics as well.

• We often think of vectors geometrically, as a directed line segment (having a starting point and an endpoint).

• Algebraically, we write a vector as an ordered tuple of coordinates: we denote the n-dimensional vector from the origin to the point $(a_1, a_2, \ldots, a_n)$ as $v = \langle a_1, a_2, \ldots, a_n \rangle$, where the $a_i$ are real-number scalars.

  ◦ Some vectors: $\langle 1, 2 \rangle$, $\langle 3, 5, -1 \rangle$, $\langle -\pi, e^2, 27, 3, \frac{4}{3}, 0, 0, -1 \rangle$.

  ◦ Notation: We use angle brackets $\langle \cdot \rangle$ rather than parentheses $(\cdot)$ so as to draw a visual distinction between a vector and the coordinates of a point in space. We also draw arrows above vectors (as $\vec{v}$) or typeset them in boldface (as $\mathbf{v}$) in order to set them apart from scalars. Boldface is hard to produce without a computer, so it is highly recommended to use the arrow notation $\vec{v}$ when writing by hand.

  ◦ Note: Vectors are a little bit different from directed line segments, because we don't care where a vector starts: we only care about the difference between the starting and ending positions. Thus, the directed segment whose start is (0, 0) and end is (1, 1) and the segment starting at (1, 1) and ending at (2, 2) represent the same vector $\langle 1, 1 \rangle$.
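To make the point about directed segments concrete, here is a minimal sketch in plain Python (the helper name `segment_to_vector` is ad hoc, not from the notes): it recovers the vector represented by a directed segment by subtracting coordinates, and confirms that the two segments mentioned above give the same vector $\langle 1, 1 \rangle$.

```python
# A vector in R^n is an ordered tuple of coordinates; a directed segment is
# determined only by the difference between its endpoints.

def segment_to_vector(start, end):
    """Return the vector from `start` to `end`, computed componentwise as end - start."""
    return tuple(e - s for s, e in zip(start, end))

# The segment from (0, 0) to (1, 1) and the segment from (1, 1) to (2, 2)
# represent the same vector <1, 1>:
print(segment_to_vector((0, 0), (1, 1)))  # (1, 1)
print(segment_to_vector((1, 1), (2, 2)))  # (1, 1)
```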
• We can add vectors (provided they have the same number of components!) in the obvious way, one component at a time: if $v = \langle a_1, \ldots, a_n \rangle$ and $w = \langle b_1, \ldots, b_n \rangle$ then $v + w = \langle a_1 + b_1, \ldots, a_n + b_n \rangle$.

  ◦ We can justify this using our geometric idea of what a vector does: v moves us from the origin to the point $(a_1, \ldots, a_n)$. Then w tells us to add $\langle b_1, \ldots, b_n \rangle$ to the coordinates of our current position, and so w moves us from $(a_1, \ldots, a_n)$ to $(a_1 + b_1, \ldots, a_n + b_n)$. So the net result is that the sum vector v + w moves us from the origin to $(a_1 + b_1, \ldots, a_n + b_n)$, meaning that it is just the vector $\langle a_1 + b_1, \ldots, a_n + b_n \rangle$.

  ◦ Geometrically, we can think of vector addition using a parallelogram whose pairs of parallel sides are v and w and whose diagonal is v + w.

• We can also "scale" a vector by a scalar, one component at a time: if r is a scalar, then we have $r v = \langle r a_1, \ldots, r a_n \rangle$.

  ◦ Again, we can justify this by our geometric idea of what a vector does: if v moves us some amount in a direction, then $\frac{1}{2} v$ should move us half as far in that direction. Analogously, $2v$ should move us twice as far in that direction, while $-v$ should move us exactly as far, but in the opposite direction.

• Example: If $v = \langle -1, 2, 2 \rangle$ and $w = \langle 3, 0, -4 \rangle$, then $2w = \langle 6, 0, -8 \rangle$ and $v + w = \langle 2, 2, -2 \rangle$. Furthermore, $v - 2w = \langle -7, 2, 10 \rangle$.
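As a quick concrete check of the componentwise formulas, here is a small Python sketch (the function names `add` and `scale` are ad hoc helpers, not from the notes) that reproduces the worked example above with $v = \langle -1, 2, 2 \rangle$ and $w = \langle 3, 0, -4 \rangle$.

```python
# Componentwise vector arithmetic on plain Python tuples.

def add(v, w):
    """Componentwise sum; v and w must have the same number of components."""
    if len(v) != len(w):
        raise ValueError("vectors must have the same number of components")
    return tuple(a + b for a, b in zip(v, w))

def scale(r, v):
    """Scalar multiple r*v, one component at a time."""
    return tuple(r * a for a in v)

v = (-1, 2, 2)
w = (3, 0, -4)

print(scale(2, w))           # (6, 0, -8)   i.e. 2w
print(add(v, w))             # (2, 2, -2)   i.e. v + w
print(add(v, scale(-2, w)))  # (-7, 2, 10)  i.e. v - 2w
```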
• The arithmetic of vectors in $\mathbb{R}^n$ satisfies several algebraic properties that follow more or less directly from the definition:

  ◦ Addition of vectors is commutative and associative.

  ◦ There is a zero vector (namely, the vector with all entries zero), and every vector has an additive inverse.

  ◦ Scalar multiplication distributes over addition of both vectors and scalars.

3.2 The Formal Definition of a Vector Space

• The two operations of addition and scalar multiplication (and the various algebraic properties they satisfy) are the key properties of vectors in $\mathbb{R}^n$. We would like to investigate other collections of things which possess those same properties.

• Definition: A (real) vector space is a collection V of vectors together with two binary operations, addition of vectors (+) and scalar multiplication of a vector by a real number (·), satisfying the following axioms:

  ◦ Let v, $v_1$, $v_2$, $v_3$ be any vectors in V and $\alpha$, $\alpha_1$, $\alpha_2$ be any (real number) scalars.

  ◦ Note: The statement that + and · are binary operations means that $v_1 + v_2$ and $\alpha \cdot v$ are always defined; that is, they are both vectors in V.

  ◦ [A1] Addition is commutative: $v_1 + v_2 = v_2 + v_1$.

  ◦ [A2] Addition is associative: $(v_1 + v_2) + v_3 = v_1 + (v_2 + v_3)$.

  ◦ [A3] There exists a zero vector 0, with $v + 0 = v$.

  ◦ [A4] Every vector v has an additive inverse $-v$, with $v + (-v) = 0$.

  ◦ [M1] Scalar multiplication is consistent with regular multiplication: $\alpha_1 \cdot (\alpha_2 \cdot v) = (\alpha_1 \alpha_2) \cdot v$.

  ◦ [M2] Addition of scalars distributes: $(\alpha_1 + \alpha_2) \cdot v = \alpha_1 \cdot v + \alpha_2 \cdot v$.

  ◦ [M3] Addition of vectors distributes: $\alpha \cdot (v_1 + v_2) = \alpha \cdot v_1 + \alpha \cdot v_2$.

  ◦ [M4] The scalar 1 acts like the identity on vectors: $1 \cdot v = v$.

• Remark: One may also consider vector spaces where the collection of scalars is something other than the real numbers: for example, there exists an equally important notion of a complex vector space, whose scalars are the complex numbers. (The axioms are the same, except we allow the scalars to be complex numbers.)

  ◦ We will primarily work with real vector spaces, in which the scalars are the real numbers.

  ◦ The most general notion of a vector space involves scalars from a field, which is a collection of numbers possessing addition and multiplication operations that are commutative, associative, and distributive, with an additive identity 0 and a multiplicative identity 1, such that every element has an additive inverse and every nonzero element has a multiplicative inverse.

  ◦ Aside from the real and complex numbers, another example of a field is the rational numbers (fractions).

  ◦ One can formulate an equally interesting theory of vector spaces over any field.

• Here are some examples of vector spaces:

• Example: The vectors in $\mathbb{R}^n$ form a vector space, for any $n > 0$. (This had better be true!)

  ◦ For simplicity we will demonstrate all of the axioms for vectors in $\mathbb{R}^2$; there, the vectors are of the form $\langle x, y \rangle$ and scalar multiplication is defined as $\alpha \cdot \langle x, y \rangle = \langle \alpha x, \alpha y \rangle$.

  ◦ [A1]: We have $\langle x_1, y_1 \rangle + \langle x_2, y_2 \rangle = \langle x_1 + x_2, y_1 + y_2 \rangle = \langle x_2, y_2 \rangle + \langle x_1, y_1 \rangle$.

  ◦ [A2]: We have $(\langle x_1, y_1 \rangle + \langle x_2, y_2 \rangle) + \langle x_3, y_3 \rangle = \langle x_1 + x_2 + x_3, y_1 + y_2 + y_3 \rangle = \langle x_1, y_1 \rangle + (\langle x_2, y_2 \rangle + \langle x_3, y_3 \rangle)$.

  ◦ [A3]: The zero vector is $\langle 0, 0 \rangle$, and clearly $\langle x, y \rangle + \langle 0, 0 \rangle = \langle x, y \rangle$.

  ◦ [A4]: The additive inverse of $\langle x, y \rangle$ is $\langle -x, -y \rangle$, since $\langle x, y \rangle + \langle -x, -y \rangle = \langle 0, 0 \rangle$.

  ◦ [M1]: We have $\alpha_1 \cdot (\alpha_2 \cdot \langle x, y \rangle) = \langle \alpha_1 \alpha_2 x, \alpha_1 \alpha_2 y \rangle = (\alpha_1 \alpha_2) \cdot \langle x, y \rangle$.

  ◦ [M2]: We have $(\alpha_1 + \alpha_2) \cdot \langle x, y \rangle = \langle (\alpha_1 + \alpha_2) x, (\alpha_1 + \alpha_2) y \rangle = \alpha_1 \cdot \langle x, y \rangle + \alpha_2 \cdot \langle x, y \rangle$.

  ◦ [M3]: We have $\alpha \cdot (\langle x_1, y_1 \rangle + \langle x_2, y_2 \rangle) = \langle \alpha (x_1 + x_2), \alpha (y_1 + y_2) \rangle = \alpha \cdot \langle x_1, y_1 \rangle + \alpha \cdot \langle x_2, y_2 \rangle$.

  ◦ [M4]: Finally, we have $1 \cdot \langle x, y \rangle = \langle x, y \rangle$.

• Example: The set of $m \times n$ matrices, for any m and any n, forms a vector space.

  ◦ The various algebraic properties we know about matrix addition give [A1] and [A2] along with [M1], [M2], [M3], and [M4].

  ◦ The zero vector in this vector space is the zero matrix (with all entries zero), and [A3] and [A4] follow easily.

  ◦ Note of course that in some cases we can also multiply matrices by other matrices. However, the requirements for being a vector space don't care that we can multiply matrices by other matrices! (All we need to be able to do is add them and multiply them by scalars.)

• Example: The complex numbers (the numbers of the form $a + bi$ for real a and b, and where $i^2 = -1$) are a vector space.

  ◦ The axioms all follow from the standard properties of complex numbers.
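The algebraic verifications above are the actual proof; as a complementary, non-rigorous illustration, the following Python sketch spot-checks a few of the axioms numerically for randomly sampled vectors in $\mathbb{R}^2$. The helper names (`vec_add`, `vec_scale`, `close`) and the tolerance are assumptions of this sketch; floating-point arithmetic only satisfies some of the identities approximately, which is why a small tolerance is used.

```python
# Numerical spot-check (not a proof) of selected vector space axioms in R^2.
import random

def vec_add(v, w):
    # componentwise sum of two vectors of the same length
    return tuple(a + b for a, b in zip(v, w))

def vec_scale(r, v):
    # scalar multiple, one component at a time
    return tuple(r * a for a in v)

def close(v, w, tol=1e-9):
    # entrywise equality up to a small floating-point tolerance
    return all(abs(a - b) <= tol for a, b in zip(v, w))

for _ in range(1000):
    v = tuple(random.uniform(-10, 10) for _ in range(2))
    w = tuple(random.uniform(-10, 10) for _ in range(2))
    a1, a2 = random.uniform(-5, 5), random.uniform(-5, 5)

    assert close(vec_add(v, w), vec_add(w, v))                            # [A1]
    assert close(vec_scale(a1, vec_scale(a2, v)), vec_scale(a1 * a2, v))  # [M1]
    assert close(vec_scale(a1 + a2, v),
                 vec_add(vec_scale(a1, v), vec_scale(a2, v)))             # [M2]
    assert close(vec_scale(1, v), v)                                      # [M4]

print("all sampled axiom checks passed")
```

The same kind of sampling check could be adapted to the other examples (matrices under entrywise addition, or complex numbers with real scalars), but it is only a sanity check, not a substitute for the axiom-by-axiom argument given in the text.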
