Math 2331 – Linear Algebra 4.3 Linearly Independent Sets; Bases


Jiwen He
Department of Mathematics, University of Houston
[email protected]
math.uh.edu/~jiwenhe/math2331

Outline:
- Linearly Independent Sets: Definition, Facts, Examples
- A Basis Set: Definition
- A Basis Set: Examples
- Nul A: Examples and Theorem
- Col A: Examples and Theorem
- The Spanning Set Theorem

Linearly Independent Sets

Definition
A set of vectors $\{v_1, v_2, \dots, v_p\}$ in a vector space $V$ is said to be linearly independent if the vector equation
$$c_1 v_1 + c_2 v_2 + \cdots + c_p v_p = 0$$
has only the trivial solution $c_1 = 0, \dots, c_p = 0$. The set $\{v_1, v_2, \dots, v_p\}$ is said to be linearly dependent if there exist weights $c_1, \dots, c_p$, not all 0, such that $c_1 v_1 + c_2 v_2 + \cdots + c_p v_p = 0$.

Linearly Independent Sets: Facts

The following results from Section 1.7 are still true for more general vector spaces.

Fact 1. A set containing the zero vector is linearly dependent.

Fact 2. A set of two vectors is linearly dependent if and only if one is a multiple of the other.

Fact 3. A set consisting of a single vector $v$ is linearly independent if and only if $v \ne 0$.

Linearly Independent Sets: Examples

Example
$\left\{ \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 3 & 2 \\ 3 & 0 \end{bmatrix} \right\}$ is a linearly dependent set, since it contains the zero matrix (Fact 1).

Example
$\left\{ \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \begin{bmatrix} 3 & 6 \\ 9 & 11 \end{bmatrix} \right\}$ is a linearly independent set, since $\begin{bmatrix} 3 & 6 \\ 9 & 11 \end{bmatrix}$ is not a multiple of $\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$ (Fact 2).

Theorem (4)
An indexed set $\{v_1, v_2, \dots, v_p\}$ of two or more vectors, with $v_1 \ne 0$, is linearly dependent if and only if some vector $v_j$ ($j > 1$) is a linear combination of the preceding vectors $v_1, \dots, v_{j-1}$.

Example
Let $\{p_1, p_2, p_3\}$ be a set of vectors in $\mathbb{P}_2$ where $p_1(t) = t$, $p_2(t) = t^2$, and $p_3(t) = 4t + 2t^2$. Is this a linearly dependent set?

Solution: Since $p_3 = 4p_1 + 2p_2$, the set $\{p_1, p_2, p_3\}$ is linearly dependent.

A Basis Set

Let $H$ be the plane illustrated below (figure omitted). Which of the following are valid descriptions of $H$?
(a) $H = \mathrm{Span}\{v_1, v_2\}$
(b) $H = \mathrm{Span}\{v_1, v_3\}$
(c) $H = \mathrm{Span}\{v_2, v_3\}$
(d) $H = \mathrm{Span}\{v_1, v_2, v_3\}$

A basis set is an "efficient" spanning set containing no unnecessary vectors. In this case, we would consider the linearly independent sets $\{v_1, v_2\}$ and $\{v_1, v_3\}$ to both be examples of basis sets, or bases (plural of basis), for $H$.

A Basis Set: Definition

Let $H$ be a subspace of a vector space $V$. An indexed set of vectors $\beta = \{b_1, \dots, b_p\}$ in $V$ is a basis for $H$ if
i. $\beta$ is a linearly independent set, and
ii. $H = \mathrm{Span}\{b_1, \dots, b_p\}$.
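The two matrix examples above can also be checked numerically: a set of vectors is linearly independent exactly when the matrix whose columns are those vectors has rank equal to the number of vectors. The following is a minimal sketch, not part of the original slides; the helper `is_independent` and the flattening of $2 \times 2$ matrices into $\mathbb{R}^4$ are illustrative choices, assuming NumPy is available.

```python
import numpy as np

# Illustrative helper (not from the slides): a set is linearly independent
# iff the matrix having the vectors as columns has rank == number of vectors.
# 2x2 matrices are treated as vectors in R^4 by flattening.
def is_independent(vectors):
    M = np.column_stack([np.asarray(v, dtype=float).flatten() for v in vectors])
    return np.linalg.matrix_rank(M) == len(vectors)

A1 = np.array([[1, 2], [3, 4]])
Z  = np.zeros((2, 2))
A2 = np.array([[3, 2], [3, 0]])
B  = np.array([[3, 6], [9, 11]])

print(is_independent([A1, Z, A2]))  # False: the set contains the zero matrix (Fact 1)
print(is_independent([A1, B]))      # True: neither matrix is a multiple of the other (Fact 2)
```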
A Basis Set: Examples

Example
Let $e_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$, $e_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$, $e_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$. Show that $\{e_1, e_2, e_3\}$ is a basis for $\mathbb{R}^3$. The set $\{e_1, e_2, e_3\}$ is called the standard basis for $\mathbb{R}^3$.

Solution: (Review the IMT, page 112.) Let
$$A = [e_1\ e_2\ e_3] = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
Since $A$ has 3 pivots, the columns of $A$ are linearly independent by the IMT, and the columns of $A$ span $\mathbb{R}^3$ by the IMT; therefore, $\{e_1, e_2, e_3\}$ is a basis for $\mathbb{R}^3$.

Example
Let $S = \{1, t, t^2, \dots, t^n\}$. Show that $S$ is a basis for $\mathbb{P}_n$.

Solution: Any polynomial in $\mathbb{P}_n$ is in the span of $S$. To show that $S$ is linearly independent, assume
$$c_0 \cdot 1 + c_1 t + \cdots + c_n t^n = 0 \quad \text{(the zero polynomial).}$$
Since this equality must hold for every value of $t$, it follows that $c_0 = c_1 = \cdots = c_n = 0$. Hence $S$ is a basis for $\mathbb{P}_n$.

A Basis Set: Example

Example
Let $v_1 = \begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix}$, $v_2 = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}$, $v_3 = \begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix}$. Is $\{v_1, v_2, v_3\}$ a basis for $\mathbb{R}^3$?

Solution: Let $A = [v_1\ v_2\ v_3] = \begin{bmatrix} 1 & 0 & 1 \\ 2 & 1 & 0 \\ 0 & 1 & 3 \end{bmatrix}$. By row reduction,
$$\begin{bmatrix} 1 & 0 & 1 \\ 2 & 1 & 0 \\ 0 & 1 & 3 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & -2 \\ 0 & 1 & 3 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & -2 \\ 0 & 0 & 5 \end{bmatrix},$$
and since there are 3 pivots, the columns of $A$ are linearly independent and they span $\mathbb{R}^3$ by the IMT. Therefore $\{v_1, v_2, v_3\}$ is a basis for $\mathbb{R}^3$.

A Basis Set: Example

Example
Explain why each of the following sets is not a basis for $\mathbb{R}^3$:
(a) $\left\{ \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \begin{bmatrix} 4 \\ 5 \\ 7 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ -3 \\ 7 \end{bmatrix} \right\}$ — any set of four vectors in $\mathbb{R}^3$ is linearly dependent.
(b) $\left\{ \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} \right\}$ — two vectors cannot span $\mathbb{R}^3$.

Bases for Nul A: Example

Example
Find a basis for Nul $A$ where
$$A = \begin{bmatrix} 3 & 6 & 6 & 3 & 9 \\ 6 & 12 & 13 & 0 & 3 \end{bmatrix}.$$

Solution: Row reduce $[A\ \ 0]$:
$$\begin{bmatrix} 1 & 2 & 0 & 13 & 33 & 0 \\ 0 & 0 & 1 & -6 & -15 & 0 \end{bmatrix} \implies \begin{array}{l} x_1 = -2x_2 - 13x_4 - 33x_5 \\ x_3 = 6x_4 + 15x_5 \\ x_2,\ x_4\ \text{and}\ x_5\ \text{are free.} \end{array}$$

Then
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix} = \begin{bmatrix} -2x_2 - 13x_4 - 33x_5 \\ x_2 \\ 6x_4 + 15x_5 \\ x_4 \\ x_5 \end{bmatrix} = x_2 \underbrace{\begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}}_{u} + x_4 \underbrace{\begin{bmatrix} -13 \\ 0 \\ 6 \\ 1 \\ 0 \end{bmatrix}}_{v} + x_5 \underbrace{\begin{bmatrix} -33 \\ 0 \\ 15 \\ 0 \\ 1 \end{bmatrix}}_{w}.$$

Therefore $\{u, v, w\}$ is a spanning set for Nul $A$. In the last section we observed that this set is linearly independent; therefore $\{u, v, w\}$ is a basis for Nul $A$. The technique used here always provides a linearly independent set.
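As a quick check, not part of the original slides, SymPy's `nullspace()` carries out exactly this technique: row reduce, solve for the basic variables in terms of the free ones, and return one basis vector per free variable.

```python
from sympy import Matrix

# The matrix A from the example above.
A = Matrix([[3,  6,  6, 3, 9],
            [6, 12, 13, 0, 3]])

# One basis vector per free variable (x2, x4, x5).
for vec in A.nullspace():
    print(vec.T)
# Expected output (matching u, v, w above):
# Matrix([[-2, 1, 0, 0, 0]])
# Matrix([[-13, 0, 6, 1, 0]])
# Matrix([[-33, 0, 15, 0, 1]])
```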
The Spanning Set Theorem

A basis can be constructed from a spanning set of vectors by discarding vectors which are linear combinations of preceding vectors in the indexed set.

Example
Suppose $v_1 = \begin{bmatrix} -1 \\ 0 \end{bmatrix}$, $v_2 = \begin{bmatrix} 0 \\ -1 \end{bmatrix}$ and $v_3 = \begin{bmatrix} -2 \\ -3 \end{bmatrix}$. Show that $\mathrm{Span}\{v_1, v_2, v_3\} = \mathrm{Span}\{v_1, v_2\}$.

Solution: Note that $v_3 = 2v_1 + 3v_2$. If $x$ is in $\mathrm{Span}\{v_1, v_2, v_3\}$, then
$$x = c_1 v_1 + c_2 v_2 + c_3 v_3 = c_1 v_1 + c_2 v_2 + c_3(2v_1 + 3v_2) = (c_1 + 2c_3)v_1 + (c_2 + 3c_3)v_2.$$
Therefore, $\mathrm{Span}\{v_1, v_2, v_3\} = \mathrm{Span}\{v_1, v_2\}$.

Theorem (5, The Spanning Set Theorem)
Let $S = \{v_1, \dots, v_p\}$ be a set in $V$ and let $H = \mathrm{Span}\{v_1, \dots, v_p\}$.
a. If one of the vectors in $S$ — say $v_k$ — is a linear combination of the remaining vectors in $S$, then the set formed from $S$ by removing $v_k$ still spans $H$.
b. If $H \ne \{0\}$, some subset of $S$ is a basis for $H$.

Bases for Col A: Examples

Example
Find a basis for Col $A$, where
$$A = [a_1\ a_2\ a_3\ a_4] = \begin{bmatrix} 1 & 2 & 0 & 4 \\ 2 & 4 & -1 & 3 \\ 3 & 6 & 2 & 22 \\ 4 & 8 & 0 & 16 \end{bmatrix}.$$

Solution: Row reduce:
$$[a_1\ a_2\ a_3\ a_4] \sim \cdots \sim \begin{bmatrix} 1 & 2 & 0 & 4 \\ 0 & 0 & 1 & 5 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} = [b_1\ b_2\ b_3\ b_4].$$

Note that
- $b_2 = 2b_1$ and $a_2 = 2a_1$;
- $b_4 = 4b_1 + 5b_3$ and $a_4 = 4a_1 + 5a_3$;
- $b_1$ and $b_3$ are not multiples of each other, and neither are $a_1$ and $a_3$.

Elementary row operations on a matrix do not affect the linear dependence relations among the columns of the matrix. Therefore the pivot columns $\{a_1, a_3\}$ of $A$ are linearly independent and form a basis for Col $A$.
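The same conclusion can be reached computationally. Below is a sketch, again not from the original slides, using SymPy; note that the basis must consist of the pivot columns of $A$ itself, not of the echelon form $B$, since row operations change the column space.

```python
from sympy import Matrix

# The matrix A from the example above.
A = Matrix([[1, 2,  0,  4],
            [2, 4, -1,  3],
            [3, 6,  2, 22],
            [4, 8,  0, 16]])

# rref() returns the echelon form together with the pivot column indices.
_, pivots = A.rref()                 # pivots == (0, 2), i.e. columns a1 and a3
basis = [A.col(j) for j in pivots]   # take pivot columns of A, not of B
print([vec.T for vec in basis])      # [Matrix([[1, 2, 3, 4]]), Matrix([[0, -1, 2, 0]])]
# SymPy's A.columnspace() returns the same basis directly.
```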