4. Matrix Inverses


L. Vandenberghe, ECE133A (Spring 2021)

Outline

• left and right inverse
• linear independence
• nonsingular matrices
• matrices with linearly independent columns
• matrices with linearly independent rows

Matrix inverses 4.1

Left and right inverse

XA ≠ AX in general, so we have to distinguish two types of inverses

Left inverse: X is a left inverse of A if

    XA = I

A is left-invertible if it has at least one left inverse

Right inverse: X is a right inverse of A if

    AX = I

A is right-invertible if it has at least one right inverse

Matrix inverses 4.2

Examples

    A = [ -3  -4 ]        B = [ 1  0  1 ]
        [  4   6 ],           [ 0  1  1 ]
        [  1   1 ]

• A is left-invertible; the following matrices are left inverses:

    (1/9) [ -11  -10   16 ]        [ 0  -1/2   3 ]
          [   7    8  -11 ],       [ 0   1/2  -2 ]

• B is right-invertible; the following matrices are right inverses:

    (1/2) [  1  -1 ]       [ 1  0 ]       [ 1  -1 ]
          [ -1   1 ],      [ 0  1 ],      [ 0   0 ]
          [  1   1 ]       [ 0  0 ]       [ 0   1 ]

Matrix inverses 4.3

Some immediate properties

Dimensions: a left or right inverse of an m × n matrix must have size n × m

Left and right inverse of (conjugate) transpose

• X is a left inverse of A if and only if X^T is a right inverse of A^T:

    A^T X^T = (XA)^T = I

• X is a left inverse of A if and only if X^H is a right inverse of A^H:

    A^H X^H = (XA)^H = I

Matrix inverses 4.4

Inverse

if A has a left and a right inverse, then they are equal and unique:

    XA = I,  AY = I   ⟹   X = X(AY) = (XA)Y = Y

• in this case, we call X = Y the inverse of A (notation: A^{-1})
• A is invertible if its inverse exists

Example

    A = [ -1   1  -3 ]        A^{-1} = (1/4) [  2   4   1 ]
        [  1  -1   1 ],                      [  0  -2   1 ]
        [  2   2   2 ]                       [ -2  -2   0 ]

Matrix inverses 4.5

Linear equations

set of m linear equations in n variables

    A_11 x_1 + A_12 x_2 + · · · + A_1n x_n = b_1
    A_21 x_1 + A_22 x_2 + · · · + A_2n x_n = b_2
        ...
    A_m1 x_1 + A_m2 x_2 + · · · + A_mn x_n = b_m

• in matrix form: Ax = b
• may have no solution, a unique solution, infinitely many solutions

Matrix inverses 4.6

Linear equations and matrix inverse

Left-invertible matrix: if X is a left inverse of A, then

    Ax = b   ⟹   x = XAx = Xb

there is at most one solution (if there is a solution, it must be equal to Xb)

Right-invertible matrix: if X is a right inverse of A, then

    x = Xb   ⟹   Ax = AXb = b

there is at least one solution (namely, x = Xb)

Invertible matrix: if A is invertible, then

    Ax = b   ⟺   x = A^{-1} b

there is a unique solution

Matrix inverses 4.7
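As a quick numerical check (an aside, not part of the original slides), the NumPy sketch below verifies these three facts with the matrices A and B and one left inverse X and right inverse Y taken from page 4.3:

    import numpy as np

    # matrices from page 4.3: A is tall (3 x 2), B is wide (2 x 3)
    A = np.array([[-3., -4.], [4., 6.], [1., 1.]])
    B = np.array([[1., 0., 1.], [0., 1., 1.]])

    # a left inverse of A and a right inverse of B (also from page 4.3)
    X = np.array([[0., -0.5, 3.], [0., 0.5, -2.]])          # X @ A = I
    Y = 0.5 * np.array([[1., -1.], [-1., 1.], [1., 1.]])    # B @ Y = I

    print(np.allclose(X @ A, np.eye(2)))   # True: X is a left inverse of A
    print(np.allclose(B @ Y, np.eye(2)))   # True: Y is a right inverse of B

    # left-invertible A: if Ax = b is solvable, the solution is x = Xb
    x_true = np.array([2., -1.])
    b = A @ x_true                          # b is in the range of A by construction
    print(np.allclose(X @ b, x_true))       # True: the unique solution is recovered

    # right-invertible B: x = Yb solves Bx = b for every b
    b = np.array([3., 5.])
    x = Y @ b
    print(np.allclose(B @ x, b))            # True: a solution always exists

Since A is tall, Ax = b has no solution for most b; the recovery x = Xb is only meaningful when b lies in the range of A, which is why b is built as A @ x_true above.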
Linear combination

a linear combination of vectors a_1, ..., a_n is a sum of scalar-vector products

    x_1 a_1 + x_2 a_2 + · · · + x_n a_n

• the scalars x_i are the coefficients of the linear combination
• can be written as a matrix-vector product

    x_1 a_1 + x_2 a_2 + · · · + x_n a_n = [ a_1  a_2  · · ·  a_n ] [ x_1 ]
                                                                   [ x_2 ]
                                                                   [  .  ]
                                                                   [ x_n ]

• the trivial linear combination has coefficients x_1 = · · · = x_n = 0

(same definition holds for real and complex vectors/scalars)

Matrix inverses 4.8

Linear dependence

a collection of vectors a_1, a_2, ..., a_n is linearly dependent if

    x_1 a_1 + x_2 a_2 + · · · + x_n a_n = 0

for some scalars x_1, ..., x_n, not all zero

• the vector 0 can be written as a nontrivial linear combination of a_1, ..., a_n
• equivalently, at least one vector a_i is a linear combination of the other vectors:

    a_i = -(x_1/x_i) a_1 - · · · - (x_{i-1}/x_i) a_{i-1} - (x_{i+1}/x_i) a_{i+1} - · · · - (x_n/x_i) a_n

  if x_i ≠ 0

Matrix inverses 4.9

Example

the vectors

    a_1 = [ 0.2 ]      a_2 = [ -0.1 ]      a_3 = [  0  ]
          [ -7  ],           [   2  ],           [ -1  ]
          [ 8.6 ]            [  -1  ]            [ 2.2 ]

are linearly dependent

• 0 can be expressed as a nontrivial linear combination of a_1, a_2, a_3:

    0 = a_1 + 2 a_2 - 3 a_3

• a_1 can be expressed as a linear combination of a_2, a_3:

    a_1 = -2 a_2 + 3 a_3

  (and similarly a_2 and a_3)

Matrix inverses 4.10

Linear independence

vectors a_1, ..., a_n are linearly independent if they are not linearly dependent

• the zero vector cannot be written as a nontrivial linear combination:

    x_1 a_1 + x_2 a_2 + · · · + x_n a_n = 0   ⟹   x_1 = x_2 = · · · = x_n = 0

• none of the vectors a_i is a linear combination of the other vectors

Matrix with linearly independent columns

A = [ a_1  a_2  · · ·  a_n ] has linearly independent columns if

    Ax = 0   ⟹   x = 0

Matrix inverses 4.11

Example

the vectors

    a_1 = [  1 ]      a_2 = [ -1 ]      a_3 = [ 0 ]
          [ -2 ],           [  0 ],           [ 1 ]
          [  0 ]            [  1 ]            [ 1 ]

are linearly independent:

    x_1 a_1 + x_2 a_2 + x_3 a_3 = [ x_1 - x_2    ] = 0
                                  [ -2 x_1 + x_3 ]
                                  [ x_2 + x_3    ]

holds only if x_1 = x_2 = x_3 = 0

Matrix inverses 4.12

Dimension inequality

if n vectors a_1, a_2, ..., a_n of length m are linearly independent, then n ≤ m

(proof is in textbook)

• if an m × n matrix has linearly independent columns then m ≥ n
• if an m × n matrix has linearly independent rows then m ≤ n

Matrix inverses 4.13
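As a numerical aside (not from the slides), linear independence is usually tested in floating point by comparing the numerical rank of the matrix with its number of columns; the sketch below assumes NumPy and reuses the vectors from pages 4.10 and 4.12:

    import numpy as np

    def has_independent_columns(A, tol=None):
        """Return True if the columns of A are (numerically) linearly independent."""
        return np.linalg.matrix_rank(A, tol=tol) == A.shape[1]

    # vectors from page 4.10: linearly dependent (a_1 + 2 a_2 - 3 a_3 = 0)
    D = np.column_stack(([0.2, -7, 8.6], [-0.1, 2, -1], [0, -1, 2.2]))
    print(has_independent_columns(D))    # False

    # vectors from page 4.12: linearly independent
    E = np.column_stack(([1, -2, 0], [-1, 0, 1], [0, 1, 1]))
    print(has_independent_columns(E))    # True

    # for a square matrix: independent columns <=> nonsingular <=> invertible
    print(np.linalg.matrix_rank(E) == E.shape[0])   # True, so E is nonsingular
    print(np.linalg.inv(E) @ E)                     # approximately the identity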
Nonsingular matrix

for a square matrix A the following four properties are equivalent

1. A is left-invertible
2. the columns of A are linearly independent
3. A is right-invertible
4. the rows of A are linearly independent

a square matrix with these properties is called nonsingular

Nonsingular = invertible

• if properties 1 and 3 hold, then A is invertible (page 4.5)
• if A is invertible, properties 1 and 3 hold (by definition of invertibility)

Matrix inverses 4.14

Proof

    (a)  left-invertible  ⟹  linearly independent columns
    (b)  linearly independent columns  ⟹  right-invertible
    (a') right-invertible  ⟹  linearly independent rows
    (b') linearly independent rows  ⟹  left-invertible

• we show that (a) holds in general
• we show that (b) holds for square matrices
• (a') and (b') follow from (a) and (b) applied to A^T

Matrix inverses 4.15

Part a: suppose A is left-invertible

• if B is a left inverse of A (satisfies BA = I), then

    Ax = 0   ⟹   BAx = 0   ⟹   x = 0

• this means that the columns of A are linearly independent: if A = [ a_1  a_2  · · ·  a_n ] then

    x_1 a_1 + x_2 a_2 + · · · + x_n a_n = 0

  holds only for the trivial linear combination x_1 = x_2 = · · · = x_n = 0

Matrix inverses 4.16

Part b: suppose A is square with linearly independent columns a_1, ..., a_n

• for every n-vector b the vectors a_1, ..., a_n, b are linearly dependent
  (from the dimension inequality on page 4.13)
• hence for every b there exists a nontrivial linear combination

    x_1 a_1 + x_2 a_2 + · · · + x_n a_n + x_{n+1} b = 0

• we must have x_{n+1} ≠ 0 because a_1, ..., a_n are linearly independent
• hence every b can be written as a linear combination of a_1, ..., a_n
• in particular, there exist n-vectors c_1, ..., c_n such that

    Ac_1 = e_1,  Ac_2 = e_2,  ...,  Ac_n = e_n

• the matrix C = [ c_1  c_2  · · ·  c_n ] is a right inverse of A:

    A [ c_1  c_2  · · ·  c_n ] = [ e_1  e_2  · · ·  e_n ] = I

Matrix inverses 4.17

Examples

    A = [  1  -1   1 ]        B = [  1  -1   1  -1 ]
        [ -1   1   1 ],           [ -1   1  -1   1 ]
        [  1   1  -1 ]            [  1   1  -1  -1 ]
                                  [ -1  -1   1   1 ]

• A is nonsingular because its columns are linearly independent:

    x_1 - x_2 + x_3 = 0,   -x_1 + x_2 + x_3 = 0,   x_1 + x_2 - x_3 = 0

  is only possible if x_1 = x_2 = x_3 = 0

• B is singular because its columns are linearly dependent: Bx = 0 for x = (1, 1, 1, 1)

Matrix inverses 4.18

Example: Vandermonde matrix

    A = [ 1  t_1  t_1^2  · · ·  t_1^{n-1} ]
        [ 1  t_2  t_2^2  · · ·  t_2^{n-1} ]      with t_i ≠ t_j for i ≠ j
        [ .   .     .             .       ]
        [ 1  t_n  t_n^2  · · ·  t_n^{n-1} ]

we show that A is nonsingular by showing that Ax = 0 only if x = 0

• Ax = 0 means p(t_1) = p(t_2) = · · · = p(t_n) = 0 where

    p(t) = x_1 + x_2 t + x_3 t^2 + · · · + x_n t^{n-1}

  p(t) is a polynomial of degree n - 1 or less

• if x ≠ 0, then p(t) cannot have more than n - 1 distinct real roots
• therefore p(t_1) = · · · = p(t_n) = 0 is only possible if x = 0

Matrix inverses 4.19

Inverse of transpose and product

Transpose and conjugate transpose: if A is nonsingular, then A^T and A^H are nonsingular and

    (A^T)^{-1} = (A^{-1})^T,    (A^H)^{-1} = (A^{-1})^H

we write these as A^{-T} and A^{-H}

Product: if A and B are nonsingular and of equal size, then AB is nonsingular with

    (AB)^{-1} = B^{-1} A^{-1}

Matrix inverses 4.20

Gram matrix

the Gram matrix associated with a matrix A = [ a_1  a_2  · · ·  a_n ] is the matrix of column inner products

• for real matrices:

    A^T A = [ a_1^T a_1   a_1^T a_2   · · ·   a_1^T a_n ]
            [ a_2^T a_1   a_2^T a_2   · · ·   a_2^T a_n ]
            [     .           .                   .     ]
            [ a_n^T a_1   a_n^T a_2   · · ·   a_n^T a_n ]

• for complex matrices:

    A^H A = [ a_1^H a_1   a_1^H a_2   · · ·   a_1^H a_n ]
            [ a_2^H a_1   a_2^H a_2   · · ·   a_2^H a_n ]
            [     .           .                   .     ]
            [ a_n^H a_1   a_n^H a_2   · · ·   a_n^H a_n ]
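To close with one more aside not in the slides: a short NumPy sketch of the Gram matrix computation, together with a check on the Vandermonde example of page 4.19; np.vander with increasing=True is assumed to order the columns as 1, t, ..., t^{n-1}.

    import numpy as np

    # Gram matrix of a complex matrix: entry (i, j) is the inner product a_i^H a_j
    A = np.array([[1.0 + 1j, 2.0], [0.0, 1.0 - 1j], [1.0, 3.0]])
    G = A.conj().T @ A
    print(np.allclose(G[0, 1], np.vdot(A[:, 0], A[:, 1])))   # True: column inner product
    print(np.allclose(G, G.conj().T))                        # True: G is Hermitian

    # a Vandermonde matrix with distinct points (page 4.19) is nonsingular
    t = np.array([0.0, 1.0, 2.0, 3.0])
    V = np.vander(t, increasing=True)      # columns 1, t, t^2, t^3
    print(np.linalg.matrix_rank(V) == 4)   # True: V is nonsingular

    # solving Vx = b fits the unique cubic interpolant through the points (t_i, b_i)
    b = np.array([1.0, 2.0, 0.0, 5.0])
    x = np.linalg.solve(V, b)
    print(np.allclose(V @ x, b))           # True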