Linear Combination Linearly Independent/Dependent Span

Definition (Vectors): Linear Combination
A linear combination of vectors v1, v2, ..., vn is a sum of scalar multiples of the vectors: x1v1 + x2v2 + ··· + xnvn (the xi's are scalars).

Definition (Vectors): Linearly Independent/Dependent
A set of vectors {v1, ..., vn} is linearly independent if the only solution to x1v1 + ··· + xnvn = 0 is the trivial solution: x1 = ··· = xn = 0. If there is more than one solution to this equation, then the set of vectors is linearly dependent.

Definition (Vectors): Span
The span of a set of vectors {v1, ..., vn} (denoted span{v1, ..., vn}) is the set of all linear combinations of v1, ..., vn.

Definition (Matrices): Inverse/Invertible Matrix
An n × n matrix A is invertible if there exists another n × n matrix A^(-1) such that AA^(-1) = A^(-1)A = I.

Definition (Matrices): Matrix Transpose
If A is an m × n matrix, then the transpose of A (denoted A^T) is an n × m matrix whose i-th row is the i-th column of A:

    [ v1 ··· vn ]^T = [ v1^T ]
                      [  ... ]
                      [ vn^T ]

Definition (Matrices): Elementary Row Operation
There are three types of elementary row operations for matrices:
1. Swapping two rows of the matrix.
2. Adding a multiple of one row to another row.
3. Scaling a row by a non-zero constant.

Definition (Matrices): Row Equivalent
Two m × n matrices are row equivalent if one can be transformed into the other by a sequence of elementary row operations.

Definition (Matrices): Reduced Row Echelon Form
The reduced row echelon form of an m × n matrix A (denoted rref A) is the m × n matrix which is row equivalent to A and satisfies the following:
1. The leading non-zero entry of each row is 1.
2. All other entries in the columns with leading 1's are 0's.
3. The leading 1 in any row is to the right of all leading 1's in the rows above.
4. Rows of 0's are at the bottom of the matrix.
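As a quick check of these definitions, linear independence of a finite set of vectors in R^n can be tested numerically: stack the vectors as columns of a matrix and compare its rank to the number of vectors. A minimal sketch using NumPy; the vectors here are made-up examples, not from the cards:

```python
import numpy as np

# Candidate vectors v1, v2, v3 (hypothetical example data).
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 2.0])   # v3 = v1 + v2, so the set is dependent

# Stack the vectors as columns: x1*v1 + x2*v2 + x3*v3 = 0 becomes Ax = 0.
A = np.column_stack([v1, v2, v3])

# The set is linearly independent iff Ax = 0 has only the trivial
# solution, i.e. iff rank(A) equals the number of vectors.
independent = np.linalg.matrix_rank(A) == A.shape[1]
print(independent)  # False
```

Note that with floating-point data `matrix_rank` relies on a singular-value tolerance, so this is a numerical test rather than an exact one; for exact arithmetic one would row-reduce over the rationals instead.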
Definition (Matrices): Pivot (Position)
A pivot position (or simply, pivot) in a matrix A is a position corresponding to a leading 1 in rref A.

Definition (Matrices): Pivot/Non-pivot Column
A column of a matrix is a pivot column if the column contains a pivot. Otherwise the column is a non-pivot column.

Definition (Matrices): Column Space
The column space of a matrix A (denoted Col A) is the span of the columns of A.

Definition (Matrices): Rank
The rank of a matrix A (denoted rank A) is the dimension of the column space of the matrix, which is the number of pivots/pivot columns in A.

Definition (Matrices): Null Space
The null space of a matrix A (denoted Nul A) is the set of all vectors x satisfying Ax = 0.

Definition (Matrices): Nullity
The nullity of a matrix A is the dimension of the null space of A, which is the number of non-pivot columns in A.

Definition (Matrices): Row Space
The row space of a matrix A is the column space of A^T.

Definition (Matrices): Diagonal Matrix
A diagonal matrix is an n × n matrix where all the entries off the diagonal are 0's.

Definition (Matrices): Identity Matrix
The n × n identity matrix is a diagonal matrix with 1's on its diagonal.

Definition (Vector Spaces): Vector Space
A vector space is a set of objects (e.g. vectors) along with two operations, vector addition and scalar multiplication, which satisfy ten axioms (not listed here).

Definition (Vector Spaces): Subspace
Let V be a vector space. A subspace H of V is a subset of V that satisfies:
1. 0 ∈ H;
2. H is closed under vector addition (if u, v ∈ H, then u + v ∈ H);
3. H is closed under scalar multiplication (if u ∈ H and k is a scalar, then ku ∈ H).
Every subspace is a vector space.

Definition (Vector Spaces): Basis
A basis for a vector space V is a set of linearly independent vectors that spans V (i.e. the span of the vectors is V).
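The rank and nullity definitions above can be verified mechanically: SymPy's `Matrix.rref` returns both the reduced row echelon form and the indices of the pivot columns, so rank (pivot columns) and nullity (non-pivot columns) fall out by counting. A sketch on a made-up 3 × 4 matrix, assuming SymPy is installed:

```python
import sympy as sp

# Hypothetical example matrix (not from the cards).
A = sp.Matrix([[1, 2, 0, 3],
               [2, 4, 1, 7],
               [0, 0, 1, 1]])

R, pivot_cols = A.rref()   # rref A and the indices of its pivot columns

rank = len(pivot_cols)     # dim Col A = number of pivot columns
nullity = A.cols - rank    # dim Nul A = number of non-pivot columns

print(pivot_cols)          # (0, 2)
print(rank, nullity)       # 2 2

# Cross-check: nullspace() returns a basis for Nul A,
# so its length must equal the nullity.
assert len(A.nullspace()) == nullity
```

This also illustrates the rank-nullity relationship implicit in the two cards: every column is either a pivot column or a non-pivot column, so rank + nullity equals the number of columns.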
Definition (Vector Spaces): Standard Basis for R^n
The standard basis for R^n is the set of columns of the n × n identity matrix. (The i-th standard basis vector is denoted e_i.)

Definition (Vector Spaces): Dimension
If {v1, ..., vn} is a basis for a vector space V, then the dimension of V is n, the number of vectors in the basis. (A vector space may also be infinite dimensional; it may not be possible to write down a basis for the vector space in this case.)

Definition (Functions and Linear Transformations): Domain/Codomain
If T : V → W is a function, then V is the domain of T, and W is the codomain of T.

Definition (Functions and Linear Transformations): Image (Range)
If T : V → W is a function, then the image (or range) of T (denoted im T) is the set of all outputs of T. That is, the image of T is the set of all vectors b for which T(x) = b has a solution. If A is the standard matrix for T, then the image of T is the column space of A, which is a subspace of W.

Definition (Functions and Linear Transformations): Kernel
If T : V → W is a function, then the kernel of T is the set of all solutions to T(x) = 0. If A is the standard matrix for T, then the kernel of T is the null space of A, which is a subspace of V.

Definition (Functions and Linear Transformations): Onto
If T : V → W is a function, then T is onto if T(x) = b has at least one solution for each b ∈ W. In this case, the image of T is W.

Definition (Functions and Linear Transformations): One-to-one
If T : V → W is a function, then T is one-to-one if T(x) = b has at most one solution for each b ∈ W.

Definition (Functions and Linear Transformations): Bijection and Inverse
If T : V → W is a function that is both one-to-one and onto, then T is a bijection. Moreover, there exists a function T^(-1) : W → V satisfying T^(-1)(T(x)) = x and T(T^(-1)(b)) = b for all x ∈ V and b ∈ W. The function T^(-1) is the inverse of T.

Definition (Functions and Linear Transformations): Linear Transformation
Let V and W be vector spaces. A function T : V → W is a linear transformation if
1. T(u + v) = T(u) + T(v) and
2. T(ku) = kT(u)
for any u, v ∈ V and any scalar k.

Definition (Functions and Linear Transformations): Standard Matrix for a Linear Transformation
If T : V → W is a linear transformation, dim V = m and dim W = n, then there is an n × m matrix A that satisfies T(x) = Ax. This matrix A is the standard matrix for T. Moreover, the columns of A are the images of the standard basis vectors under T:

    A = [ T(e1) T(e2) ··· T(em) ]

Definition (Coordinate Systems): Coordinates
If B = {b1, ..., bn} is a basis for a vector space V, then each b ∈ V is expressible as a unique linear combination of the basis elements: b = x1b1 + ··· + xnbn. The coordinates for b are the coefficients in this expansion, and we write

    [b]_B = (x1, x2, ..., xn)^T

Definition (Coordinate Systems): Change of Basis Matrix
Let B = {b1, ..., bn} and D = {d1, ..., dn} be bases for a vector space V. Then the change of basis matrix from B to D is

    P_{D←B} = [ [b1]_D [b2]_D ··· [bn]_D ]

For any vector v ∈ V, this matrix satisfies P_{D←B}[v]_B = [v]_D.

Definition (Coordinate Systems): B-matrix for a Linear Transformation
Let T : V → W be a linear transformation between vector spaces. Let B = {b1, ..., bn} be a basis for V and D = {d1, ..., dn} be a basis for W. Then the B-matrix for the linear transformation is

    B = [ [T(b1)]_D [T(b2)]_D ··· [T(bn)]_D ]

Definition (Matrix Invariant): Trace
The trace of a square matrix is the sum of the entries on the diagonal of the matrix.

Theorem (Matrix Invariant): Eigenvalue Formula for Trace
For any square matrix, the trace of the matrix is equal to the sum of its eigenvalues.

Definition (Matrix Invariant): Determinant
The determinant of a square matrix A (denoted det A) is the scalar computed by cofactor expansion along any row or column of A.

Theorem (Matrix Invariant): Eigenvalue Formula for Determinant
For any square matrix, the determinant of the matrix is equal to the product of its eigenvalues.

Definition (Eigenvalues and Eigenvectors): Characteristic Polynomial
Let A be a square matrix. The characteristic polynomial of A is det(A − λI).

Definition (Eigenvalues and Eigenvectors): Eigenvalue
Let A be a square matrix. The eigenvalues of A are the roots of the characteristic polynomial of A.

Definition (Eigenvalues and Eigenvectors): Multiplicity of an Eigenvalue
Let A be an n × n matrix, and let λ1, λ2, ..., λk be the distinct eigenvalues of A. Then the characteristic polynomial of A factors as

    det(A − λI) = (λ1 − λ)^α1 (λ2 − λ)^α2 ··· (λk − λ)^αk

The multiplicity of the eigenvalue λi is αi.
