Discover Linear Algebra: Incomplete Preliminary Draft

Total Pages: 16

File Type: PDF, Size: 1020 KB

Discover Linear Algebra: Incomplete Preliminary Draft. Date: November 28, 2017. László Babai, in collaboration with Noah Halford. All rights reserved. Approved for instructional use only. Commercial distribution prohibited. © 2016 László Babai. Last updated: November 10, 2016. Preface: TO BE WRITTEN. (This chapter last updated August 21, 2016.)

Contents

Notation

Part I: Matrix Theory

Introduction to Part I
1 (F, R) Column Vectors: 1.1 (F) Column vector basics; 1.1.1 The domain of scalars; 1.2 (F) Subspaces and span; 1.3 (F) Linear independence and the First Miracle of Linear Algebra; 1.4 (F) Dot product; 1.5 (R) Dot product over R; 1.6 (F) Additional exercises
2 (F) Matrices: 2.1 Matrix basics; 2.2 Matrix multiplication; 2.3 Arithmetic of diagonal and triangular matrices; 2.4 Permutation matrices; 2.5 Additional exercises
3 (F) Matrix Rank: 3.1 Column and row rank; 3.2 Elementary operations and Gaussian elimination; 3.3 Invariance of column and row rank, the Second Miracle of Linear Algebra; 3.4 Matrix rank and invertibility; 3.5 Codimension (optional); 3.6 Additional exercises
4 (F) Theory of Systems of Linear Equations I: Qualitative Theory: 4.1 Homogeneous systems of linear equations; 4.2 General systems of linear equations
5 (F, R) Affine and Convex Combinations (optional): 5.1 (F) Affine combinations; 5.2 (F) Hyperplanes; 5.3 (R) Convex combinations; 5.4 (R) Helly's Theorem; 5.5 (F, R) Additional exercises
6 (F) The Determinant: 6.1 Permutations; 6.2 Defining the determinant; 6.3 Cofactor expansion; 6.4 Matrix inverses and the determinant; 6.5 Additional exercises
7 (F) Theory of Systems of Linear Equations II: Cramer's Rule: 7.1 Cramer's Rule
8 (F) Eigenvectors and Eigenvalues: 8.1 Eigenvector and eigenvalue basics; 8.2 Similar matrices and diagonalizability; 8.3 Polynomial basics; 8.4 The characteristic polynomial; 8.5 The Cayley-Hamilton Theorem; 8.6 Additional exercises
9 (R) Orthogonal Matrices: 9.1 Orthogonal matrices; 9.2 Orthogonal similarity; 9.3 Additional exercises
10 (R) The Spectral Theorem: 10.1 Statement of the Spectral Theorem; 10.2 Applications of the Spectral Theorem
11 (F, R) Bilinear and Quadratic Forms: 11.1 (F) Linear and bilinear forms; 11.2 (F) Multivariate polynomials; 11.3 (R) Quadratic forms; 11.4 (F) Geometric algebra (optional)
12 (C) Complex Matrices: 12.1 Complex numbers; 12.2 Hermitian dot product in C^n; 12.3 Hermitian and unitary matrices; 12.4 Normal matrices and unitary similarity; 12.5 Additional exercises
13 (C, R) Matrix Norms: 13.1 (R) Operator norm; 13.2 (R) Frobenius norm; 13.3 (C) Complex matrices

Part II: Linear Algebra of Vector Spaces

Introduction to Part II
14 (F) Preliminaries: 14.1 Modular arithmetic; 14.2 Abelian groups; 14.3 Fields; 14.4 Polynomials
15 (F) Vector Spaces: Basic Concepts: 15.1 Vector spaces; 15.2 Subspaces and span; 15.3 Linear independence and bases; 15.4 The First Miracle of Linear Algebra; 15.5 Direct sums
16 (F) Linear Maps: 16.1 Linear map basics; 16.2 Isomorphisms; 16.3 The Rank-Nullity Theorem; 16.4 Linear transformations; 16.4.1 Eigenvectors, eigenvalues, eigenspaces; 16.4.2 Invariant subspaces; 16.5 Coordinatization; 16.6 Change of basis
17 (F) Block Matrices (optional): 17.1 Block matrix basics; 17.2 Arithmetic of block-diagonal and block-triangular matrices
18 (F) Minimal Polynomials of Matrices and Linear Transformations (optional): 18.1 The minimal polynomial; 18.2 Minimal polynomials of linear transformations
19 (R) Euclidean Spaces: 19.1 Inner products; 19.2 Gram-Schmidt orthogonalization; 19.3 Isometries and orthogonal transformations; 19.4 First proof of the Spectral Theorem
20 (C) Hermitian Spaces: 20.1 Hermitian spaces; 20.2 Hermitian transformations; 20.3 Unitary transformations; 20.4 Adjoint transformations in Hermitian spaces; 20.5 Normal transformations; 20.6 The Complex Spectral Theorem for normal transformations
21 (R, C) The Singular Value Decomposition: 21.1 The Singular Value Decomposition; 21.2 Low-rank approximation
22 (R) Finite Markov Chains: 22.1 Stochastic matrices; 22.2 Finite Markov Chains; 22.3 Digraphs; 22.4 Digraphs and Markov Chains; 22.5 Finite Markov Chains and undirected graphs; 22.6 Additional exercises
23 More Chapters
24 Hints: 24.1 Column Vectors …
Recommended publications
  • Linear Independence, Span, and Basis of a Set of Vectors: What Is Linear Independence?
    LECTURE NOTES · Purdue University · MA 26500 · Kyle Kloster · Linear Algebra · October 22, 2014

    Linear Independence, Span, and Basis of a Set of Vectors

    What is linear independence? A set of vectors S = {v_1, ..., v_k} is linearly independent if none of the vectors v_j can be written as a linear combination of the other vectors. Suppose instead that some v_j can be written as a linear combination of the others, i.e. there exist scalars α_i such that v_j = Σ_{i ≠ j} α_i v_i. (This is equivalent to saying that the vectors v_1, ..., v_k are linearly dependent.) Subtracting v_j moves it to the other side and gives a relation 0 = α_1 v_1 + ... + α_k v_k in which the coefficient of v_j is now −1, so not all coefficients are zero. In other words, the condition that "the set of vectors S = {v_1, ..., v_k} is linearly dependent" is equivalent to the condition that there exist α_i, not all zero, such that

    $$\mathbf{0} = \begin{bmatrix} v_1 & v_2 & \cdots & v_k \end{bmatrix} \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_k \end{bmatrix}.$$

    More concisely, form the matrix V whose columns are the vectors v_i. Then the set S is linearly dependent exactly when there is a nonzero solution x to V x = 0. Equivalently, "the set S = {v_1, ..., v_k} is linearly independent" means that the only solution x to the equation V x = 0 is the zero vector, x = 0. How do you determine if a set is lin…
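    In practice this test is carried out by comparing the rank of V with the number of columns: the columns are independent exactly when rank(V) = k. A minimal NumPy sketch (the example vectors are ours, not from the notes):

    ```python
    import numpy as np

    # Columns of V are the vectors v1, v2, v3 (illustrative choices).
    V = np.column_stack([
        [1, 0, 2],
        [0, 1, 1],
        [1, 1, 3],   # equals v1 + v2, so the set is linearly dependent
    ])

    k = V.shape[1]

    # The only solution of V x = 0 is x = 0  iff  rank(V) = k.
    if np.linalg.matrix_rank(V) == k:
        print("linearly independent")
    else:
        print("linearly dependent")
    ```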
  • Sharp Finiteness Principles for Lipschitz Selections: Long Version
    Sharp finiteness principles for Lipschitz selections: long version. By Charles Fefferman, Department of Mathematics, Princeton University, Fine Hall, Washington Road, Princeton, NJ 08544, USA, e-mail: [email protected]; and Pavel Shvartsman, Department of Mathematics, Technion - Israel Institute of Technology, 32000 Haifa, Israel, e-mail: [email protected]. (arXiv:1708.00811v2 [math.FA] 21 Oct 2017)

    Abstract: Let (M, ρ) be a metric space and let Y be a Banach space. Given a positive integer m, let F be a set-valued mapping from M into the family of all compact convex subsets of Y of dimension at most m. In this paper we prove a finiteness principle for the existence of a Lipschitz selection of F with the sharp value of the finiteness number.

    Contents: 1. Introduction (1.1. Main definitions and main results; 1.2. Main ideas of our approach); 2. Nagata condition and Whitney partitions on metric spaces (2.1. Metric trees and Nagata condition; 2.2. Whitney partitions; 2.3. Patching Lemma); 3. Basic Convex Sets, Labels and Bases (3.1. Main properties of Basic Convex Sets; 3.2. Statement of the Finiteness Theorem for bounded Nagata dimension).

    Math Subject Classification: 46E35. Key words and phrases: set-valued mapping, Lipschitz selection, metric tree, Helly's theorem, Nagata dimension, Whitney partition, Steiner-type point. This research was supported by Grant No. 2014055 from the United States-Israel Binational Science Foundation (BSF). The first author was also supported in part by NSF grant DMS-1265524 and AFOSR grant FA9550-12-1-0425.
  • The Continuity of Additive and Convex Functions Which Are Upper Bounded on Non-Flat Continua in R^n
    THE CONTINUITY OF ADDITIVE AND CONVEX FUNCTIONS WHICH ARE UPPER BOUNDED ON NON-FLAT CONTINUA IN R^n. Taras Banakh, Eliza Jabłońska, Wojciech Jabłoński.

    Abstract. We prove that for a continuum K ⊂ R^n the sum K^{+n} of n copies of K has non-empty interior in R^n if and only if K is not flat, in the sense that the affine hull of K coincides with R^n. Moreover, if K is locally connected and each non-empty open subset of K is not flat, then for any (analytic) non-meager subset A ⊂ K the sum A^{+n} of n copies of A is not meager in R^n (and then the sum A^{+2n} of 2n copies of the analytic set A has non-empty interior in R^n, and the set (A − A)^{+n} is a neighborhood of zero in R^n). This implies that a mid-convex function f : D → R, defined on an open convex subset D ⊂ R^n, is continuous if it is upper bounded on some non-flat continuum in D or on a non-meager analytic subset of a locally connected nowhere flat subset of D.

    Let X be a linear topological space over the field of real numbers. A function f : X → R is called additive if f(x + y) = f(x) + f(y) for all x, y ∈ X. A function f : D → R defined on a convex subset D ⊂ X is called mid-convex if

    $$f\!\left(\frac{x+y}{2}\right) \le \frac{f(x)+f(y)}{2} \quad \text{for all } x, y \in D.$$

    Many classical results concerning additive or mid-convex functions state that the boundedness of such functions on "sufficiently large" sets implies their continuity.
  • Linear Independence, the Wronskian, and Variation of Parameters
    LINEAR INDEPENDENCE, THE WRONSKIAN, AND VARIATION OF PARAMETERS. JAMES KEESLING.

    In this post we determine when a set of solutions of a linear differential equation is linearly independent. We first discuss the linear space of solutions for a homogeneous differential equation.

    1. Homogeneous Linear Differential Equations. We start with homogeneous linear nth-order ordinary differential equations with general coefficients. The form of the nth-order equation is the following:

    $$\text{(1)} \quad a_n(t)\,\frac{d^n x}{dt^n} + a_{n-1}(t)\,\frac{d^{n-1} x}{dt^{n-1}} + \cdots + a_0(t)\,x = 0$$

    It is straightforward to solve such an equation if the functions a_i(t) are all constants. However, for general functions as above, it may not be so easy. However, we do have a useful principle. Because the equation is linear and homogeneous, if we have a set of solutions {x_1(t), ..., x_n(t)}, then any linear combination of the solutions is also a solution. That is,

    (2) x(t) = C_1 x_1(t) + C_2 x_2(t) + ... + C_n x_n(t)

    is also a solution for any choice of constants {C_1, C_2, ..., C_n}. Now if the solutions {x_1(t), ..., x_n(t)} are linearly independent, then (2) is the general solution of the differential equation. We will explain why later. What does it mean for the functions {x_1(t), ..., x_n(t)} to be linearly independent? The simple straightforward answer is that

    (3) C_1 x_1(t) + C_2 x_2(t) + ... + C_n x_n(t) = 0

    implies that C_1 = 0, C_2 = 0, ..., and C_n = 0, where the C_i are arbitrary constants. This is the definition, but it is not so easy to determine from it just when the condition holds to show that a given set of functions {x_1(t), x_2(t), ..., x_n(t)} is linearly independent.
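    The computational test the title points to is the Wronskian: if the Wronskian determinant of the solutions is nonzero at some point, the solutions are linearly independent. A minimal SymPy sketch, using the solutions e^t and e^{-t} of x'' − x = 0 as an illustrative example (not taken from the post):

    ```python
    from sympy import exp, symbols, wronskian

    t = symbols("t")

    # x1(t) = e^t and x2(t) = e^(-t) both solve x'' - x = 0.
    x1, x2 = exp(t), exp(-t)

    # W(x1, x2)(t) = det([[x1, x2], [x1', x2']])
    W = wronskian([x1, x2], t)
    print(W.simplify())  # -2: nonzero, so x1 and x2 are linearly independent
    ```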
  • Span, Linear Independence and Basis; Rank and Nullity
    Remarks for Exam 2 in Linear Algebra

    Span, linear independence and basis. The span of a set of vectors is the set of all linear combinations of the vectors. A set of vectors is linearly independent if the only solution to c_1 v_1 + ... + c_k v_k = 0 is c_i = 0 for all i. Given a set of vectors, you can determine if they are linearly independent by writing the vectors as the columns of a matrix A and solving Ax = 0. If there are any non-zero solutions, then the vectors are linearly dependent. If the only solution is x = 0, then they are linearly independent. A basis for a subspace S of R^n is a set of vectors that spans S and is linearly independent. There are many bases, but every basis must have exactly k = dim(S) vectors. A spanning set in S must contain at least k vectors, and a linearly independent set in S can contain at most k vectors. A spanning set in S with exactly k vectors is a basis. A linearly independent set in S with exactly k vectors is a basis.

    Rank and nullity. The span of the rows of a matrix A is the row space of A. The span of the columns of A is the column space C(A). The row and column spaces always have the same dimension, called the rank of A. Let r = rank(A). Then r is the maximal number of linearly independent row vectors and the maximal number of linearly independent column vectors. So, for an m × n matrix A, if r < n then the columns are linearly dependent; if r < m then the rows are linearly dependent.
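    Both computations are one-liners numerically; a small NumPy sketch with an illustrative matrix (not from the exam notes):

    ```python
    import numpy as np

    # 3x4 illustrative matrix; the third row is the sum of the first two.
    A = np.array([[1, 2, 0, 1],
                  [0, 1, 1, 2],
                  [1, 3, 1, 3]])

    m, n = A.shape
    r = np.linalg.matrix_rank(A)  # common dimension of row and column space

    print(r)       # 2
    print(r < n)   # True: n = 4 columns, so the columns are linearly dependent
    print(r < m)   # True: m = 3 rows, so the rows are linearly dependent
    ```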
  • Discrete Analytic Convex Geometry: Introduction
    Discrete Analytic Convex Geometry: Introduction. Martin Henk, Otto-von-Guericke-Universität Magdeburg, winter term 2012/13.

    Contents: Preface; 0 Some basic and convex facts; 1 Support and separate; 2 Radon, Helly, Carathéodory and (a few) relatives; Index.

    Preface. The material presented here is stolen from different excellent sources: First of all, a manuscript of Ulrich Betke on convexity, which is partially based on lecture notes given by Peter McMullen. The inspiring books by Alexander Barvinok, "A Course in Convexity"; Günter Ewald, "Combinatorial Convexity and Algebraic Geometry"; Peter M. Gruber, "Convex and Discrete Geometry"; Peter M. Gruber and Gerrit G. Lekkerkerker, "Geometry of Numbers"; Jiří Matoušek, "Discrete Geometry"; Rolf Schneider, "Convex Bodies: The Brunn-Minkowski Theory"; Günter M. Ziegler, "Lectures on Polytopes"; and some original papers. They are part of lecture notes on "Discrete and Convex Geometry" jointly written with María Hernández Cifre but not finished yet.

    0 Some basic and convex facts.

    0.1 Notation. R^n = {x = (x_1, ..., x_n)^T : x_i ∈ R} denotes the n-dimensional Euclidean space equipped with the Euclidean inner product ⟨x, y⟩ = Σ_{i=1}^n x_i y_i, x, y ∈ R^n, and the Euclidean norm |x| = √⟨x, x⟩.

    0.2 Definition [Linear, affine, positive and convex combination]. Let m ∈ N and let x_i ∈ R^n, λ_i ∈ R, 1 ≤ i ≤ m.
    i) Σ_{i=1}^m λ_i x_i is called a linear combination of x_1, ..., x_m.
    ii) If Σ_{i=1}^m λ_i = 1, then Σ_{i=1}^m λ_i x_i is called an affine combination of x_1, ..., x_m.
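    To make Definition 0.2 ii) concrete: testing whether a point is an affine combination of x_1, ..., x_m amounts to solving a linear system with the side constraint Σλ_i = 1, which can be encoded by appending a row of ones. A minimal NumPy sketch (the helper name and example points are ours, for illustration only):

    ```python
    import numpy as np

    def is_affine_combination(points, target, tol=1e-10):
        """Test whether `target` is an affine combination of `points`.

        Appends the constraint sum(lambda) = 1 as a row of ones and
        checks the least-squares residual of the augmented system.
        """
        X = np.column_stack(points)               # columns x_1, ..., x_m
        A = np.vstack([X, np.ones(X.shape[1])])   # add the sum-to-one row
        b = np.append(np.asarray(target, float), 1.0)
        lam, *_ = np.linalg.lstsq(A, b, rcond=None)
        return np.linalg.norm(A @ lam - b) < tol

    # (1, 1) is the midpoint of (0, 0) and (2, 2): an affine combination.
    print(is_affine_combination([[0, 0], [2, 2]], [1, 1]))  # True
    ```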
  • Special Issue: Scientific Programming Techniques and Algorithms for Data-Intensive Engineering Environments
    Scientific Programming, special issue: Scientific Programming Techniques and Algorithms for Data-Intensive Engineering Environments. Lead Guest Editor: Giner Alor-Hernández; Guest Editors: Jezreel Mejía-Miranda and José María Álvarez-Rodríguez. Copyright © 2018 Hindawi. All rights reserved. This is a special issue published in "Scientific Programming." All articles are open access articles distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

    Editorial Board: M. E. Acacio Sánchez, Spain; Marco Aldinucci, Italy; Davide Ancona, Italy; Ferruccio Damiani, Italy; Sergio Di Martino, Italy; Basilio B. Fraguela, Spain; Carmine Gravino, Italy; Gianluigi Greco, Italy; Bormin Huang, USA; Chin-Yu Huang, Taiwan; Christoph Kessler, Sweden; Harald Köstler, Germany; José E. Labra, Spain; Thomas Leich, Germany; Piotr Luszczek, USA; Tomàs Margalef, Spain; Cristian Mateos, Argentina; Roberto Natella, Italy; Francisco Ortin, Spain; Can Özturan, Turkey; Danilo Pianini, Italy; Fabrizio Riguzzi, Italy; Michele Risi, Italy; Damian Rouson, USA; Giuseppe Scanniello, Italy; Ahmet Soylu, Norway; Emiliano Tramontana, Italy; Autilia Vitiello, Italy; Jan Weglarz, Poland
  • On Modified l_1-Minimization Problems in Compressed Sensing (Man Bahadur Basnet, Iowa State University)
    Iowa State University Capstones, Theses and Dissertations: Graduate Theses and Dissertations, 2013. On Modified l_1-Minimization Problems in Compressed Sensing. Man Bahadur Basnet, Iowa State University. Follow this and additional works at: https://lib.dr.iastate.edu/etd. Part of the Applied Mathematics Commons, and the Mathematics Commons.

    Recommended Citation: Basnet, Man Bahadur, "On Modified l_1-Minimization Problems in Compressed Sensing" (2013). Graduate Theses and Dissertations. 13473. https://lib.dr.iastate.edu/etd/13473

    This Dissertation is brought to you for free and open access by the Iowa State University Capstones, Theses and Dissertations at Iowa State University Digital Repository. It has been accepted for inclusion in Graduate Theses and Dissertations by an authorized administrator of Iowa State University Digital Repository. For more information, please contact [email protected].

    On modified l_1-minimization problems in compressed sensing, by Man Bahadur Basnet. A dissertation submitted to the graduate faculty in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY. Major: Mathematics (Applied Mathematics). Program of Study Committee: Fritz Keinert, Co-major Professor; Namrata Vaswani, Co-major Professor; Eric Weber; Jennifer Davidson; Alexander Roitershtein. Iowa State University, Ames, Iowa, 2013. Copyright © Man Bahadur Basnet, 2013. All rights reserved.

    DEDICATION: I would like to dedicate this thesis to my parents Jaya and Chandra and to my family (wife Sita, son Manas and daughter Manasi), without whose support, encouragement and love I would not have been able to complete this work.

    TABLE OF CONTENTS: List of Tables; List of Figures; Acknowledgements; Abstract; Chapter 1. Introduction …
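    For readers unfamiliar with the unmodified problem the thesis builds on: basis pursuit, min ‖x‖₁ subject to Ax = b, can be recast as a linear program by splitting x = x⁺ − x⁻ with x⁺, x⁻ ≥ 0. A minimal SciPy sketch of that reduction (the random instance is illustrative and not taken from the thesis):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    m, n = 10, 30
    A = rng.standard_normal((m, n))
    x_true = np.zeros(n)
    x_true[[3, 17]] = [1.5, -2.0]   # a 2-sparse signal
    b = A @ x_true

    # min ||x||_1  s.t.  Ax = b, using the split x = xp - xn, xp, xn >= 0:
    # minimize 1'xp + 1'xn  subject to  [A, -A] [xp; xn] = b.
    c = np.ones(2 * n)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
    x_hat = res.x[:n] - res.x[n:]

    # Small when recovery succeeds (high probability for sparse signals).
    print(np.linalg.norm(x_hat - x_true))
    ```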
  • Non-Linear Inner Structure of Topological Vector Spaces
    mathematics, Article: Non-Linear Inner Structure of Topological Vector Spaces. Francisco Javier García-Pacheco 1,*,†, Soledad Moreno-Pulido 1,†, Enrique Naranjo-Guerra 1,† and Alberto Sánchez-Alzola 2,†. 1 Department of Mathematics, College of Engineering, University of Cadiz, 11519 Puerto Real, CA, Spain; [email protected] (S.M.-P.); [email protected] (E.N.-G.). 2 Department of Statistics and Operation Research, College of Engineering, University of Cadiz, 11519 Puerto Real (CA), Spain; [email protected]. * Correspondence: [email protected]. † These authors contributed equally to this work.

    Abstract: Inner structure appeared in the literature of topological vector spaces as a tool to characterize the extremal structure of convex sets. For instance, in recent years, inner structure has been used to provide a solution to The Faceless Problem and to characterize the finest locally convex vector topology on a real vector space. This manuscript goes one step further by settling the bases for studying the inner structure of non-convex sets. First, we observe that the good behaviour of the extremal structure of convex sets with respect to the inner structure does not carry over to non-convex sets, in the following sense: it has already been proved that if a face of a convex set intersects the inner points, then the face is the whole convex set; however, in the non-convex setting, we find an example of a non-convex set with a proper extremal subset that intersects the inner points. Conversely, we prove that if an extremal subset of a not-necessarily-convex set intersects the affine internal points, then the extremal subset coincides with the whole set.
  • Memory Capacity of Neural Turing Machines with Matrix Representation
    Memory Capacity of Neural Turing Machines with Matrix Representation (arXiv:2104.07454v1 [cs.LG] 11 Apr 2021)

    Animesh Renanse* ([email protected]), Department of Electronics & Electrical Engineering, Indian Institute of Technology Guwahati, Assam, India; Rohitash Chandra* ([email protected]), School of Mathematics and Statistics, University of New South Wales, Sydney, NSW 2052, Australia; Alok Sharma ([email protected]), Laboratory for Medical Science Mathematics, RIKEN Center for Integrative Medical Sciences, Yokohama, Japan. *Equal contributions.

    Abstract: It is well known that recurrent neural networks (RNNs) face limitations in learning long-term dependencies, which have been addressed by memory structures in long short-term memory (LSTM) networks. Matrix neural networks feature a matrix representation that inherently preserves the spatial structure of data, and so have the potential to provide better memory structures than canonical neural networks that use vector representations. Neural Turing machines (NTMs) are novel RNNs that implement the notion of a programmable computer with a neural network controller, featuring algorithms for tasks such as copying, sorting, and associative recall. In this paper, we study the augmentation of memory capacity with matrix representation in RNNs and NTMs (MatNTMs). We investigate whether matrix representation has a better memory capacity than the vector representations in conventional neural networks. We use a probabilistic model of memory capacity based on Fisher information and investigate how the memory capacity of matrix-representation networks is limited under various constraints, and in general without any constraints. In the case of memory capacity without any constraints, we find the upper bound on memory capacity to be N^2 for an N × N state matrix.
  • Homogeneous Systems (1.5); Linear Independence and Dependence (1.7)
    Math 20F, 2015SS1 / TA: Jor-el Briones / Sec: A01 / Handout, Page 1 of 3

    Homogeneous systems (1.5). Homogeneous systems are linear systems of the form Ax = 0, where 0 is the zero vector. Given a system Ax = b, suppose x = α + t_1 α_1 + t_2 α_2 + ... + t_k α_k is a solution (in parametric form) to the system, for any values of t_1, t_2, ..., t_k. Then α is a solution to the system Ax = b (seen by setting t_1 = ... = t_k = 0), and α_1, α_2, ..., α_k are solutions to the homogeneous system Ax = 0.

    Terms you should know:
    - The zero solution (trivial solution): the zero vector (a vector with all entries 0), which is always a solution to the homogeneous system.
    - Particular solution: given a system Ax = b with solution x = α + t_1 α_1 + t_2 α_2 + ... + t_k α_k (in parametric form), α is the particular solution to the system. The other terms constitute the solution to the associated homogeneous system Ax = 0.

    Properties of a homogeneous system:
    1. A homogeneous system is ALWAYS consistent, since the zero solution, aka the trivial solution, is always a solution to that system.
    2. A homogeneous system with at least one free variable has infinitely many solutions.
    3. A homogeneous system with more unknowns than equations has infinitely many solutions.
    4. If α_1, α_2, ..., α_k are solutions to a homogeneous system, then ANY linear combination of α_1, α_2, ..., α_k is also a solution to the homogeneous system.

    Important theorems to know: Theorem (Chapter 1, Theorem 6). Given a consistent system Ax = b, suppose α is a solution to the system…
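    Numerically, the solution set of Ax = 0 is the null space of A, and an orthonormal basis for it can be computed directly; a minimal SciPy sketch (the matrix is an illustrative example, not from the handout):

    ```python
    import numpy as np
    from scipy.linalg import null_space

    # Two equations, three unknowns (and the rows are proportional),
    # so infinitely many solutions are guaranteed.
    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0]])

    N = null_space(A)   # columns: orthonormal basis of {x : Ax = 0}
    print(N.shape[1])   # 2 = nullity (3 unknowns minus rank 1)
    print(np.allclose(A @ N, 0))   # each basis vector solves Ax = 0

    # Any linear combination of the columns is again a solution (property 4).
    x = N @ np.array([1.7, -0.3])
    print(np.allclose(A @ x, 0))   # True
    ```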