Matrix Rank

Consider the linear combination $c_1\mathbf{a}_1 + c_2\mathbf{a}_2 + \cdots + c_m\mathbf{a}_m$, where $c_1,\dots,c_m$ are scalars and $\mathbf{a}_1,\dots,\mathbf{a}_m$ are $n$-dimensional vectors ($n \ne m$ in general). Next consider the equation

$$c_1\mathbf{a}_1 + c_2\mathbf{a}_2 + \cdots + c_m\mathbf{a}_m = \mathbf{0} \qquad (1)$$

Definition: The set of vectors $\mathbf{a}_1,\dots,\mathbf{a}_m$ is linearly independent if the only solution to (1) is $c_1 = c_2 = \cdots = c_m = 0$. Otherwise, the vectors are linearly dependent.

Remark: Linear dependence allows us to write (at least) one of the vectors as a combination of the others. Suppose $c_1 \ne 0$; then

$$\mathbf{a}_1 = -(c_2/c_1)\,\mathbf{a}_2 - (c_3/c_1)\,\mathbf{a}_3 - \cdots - (c_m/c_1)\,\mathbf{a}_m$$

Example: The three vectors

$$\mathbf{a}_1 = [3\ \ 0\ \ 2\ \ 2],\quad \mathbf{a}_2 = [-6\ \ 42\ \ 24\ \ 54],\quad \mathbf{a}_3 = [21\ \ {-21}\ \ 0\ \ {-15}]$$

are linearly dependent because $6\mathbf{a}_1 - \tfrac{1}{2}\mathbf{a}_2 - \mathbf{a}_3 = \mathbf{0}$.

Definition: The maximum number of linearly independent row vectors of a matrix $A = [a_{jk}]$ is called the rank of $A$ and is denoted by rank $A$.

Example: The matrix

$$A = \begin{bmatrix} 3 & 0 & 2 & 2 \\ -6 & 42 & 24 & 54 \\ 21 & -21 & 0 & -15 \end{bmatrix}$$

is rank 2, or rank $A$ = 2. (How do we know that it is not less than 2?)

MATLAB verification:

>> A = [3 0 2 2; -6 42 24 54; 21 -21 0 -15]
>> rank(A)

Theorem: The rank of a matrix $A$ equals the number of linearly independent column vectors. Hence, rank $A$ = rank $A^T$.

Proof: Let $A = [a_{jk}]$ be an $m \times n$ matrix and let rank $A = r$. Then $A$ has a linearly independent set of row vectors $\mathbf{v}_1,\dots,\mathbf{v}_r$, and all row vectors $\mathbf{a}_1,\dots,\mathbf{a}_m$ of $A$ are linear combinations of these vectors, so we may write

$$\begin{aligned}
\mathbf{a}_1 &= c_{11}\mathbf{v}_1 + c_{12}\mathbf{v}_2 + \cdots + c_{1r}\mathbf{v}_r \\
\mathbf{a}_2 &= c_{21}\mathbf{v}_1 + c_{22}\mathbf{v}_2 + \cdots + c_{2r}\mathbf{v}_r \\
&\;\;\vdots \\
\mathbf{a}_m &= c_{m1}\mathbf{v}_1 + c_{m2}\mathbf{v}_2 + \cdots + c_{mr}\mathbf{v}_r
\end{aligned}$$

We may now write equations for the $k$-th component of the $\mathbf{a}_j$ in terms of the $k$-th component of the $\mathbf{v}_j$:

$$\begin{aligned}
a_{1k} &= c_{11}v_{1k} + c_{12}v_{2k} + \cdots + c_{1r}v_{rk} \\
a_{2k} &= c_{21}v_{1k} + c_{22}v_{2k} + \cdots + c_{2r}v_{rk} \\
&\;\;\vdots \\
a_{mk} &= c_{m1}v_{1k} + c_{m2}v_{2k} + \cdots + c_{mr}v_{rk}
\end{aligned}$$

which we may also write as

$$\begin{bmatrix} a_{1k} \\ a_{2k} \\ \vdots \\ a_{mk} \end{bmatrix}
= v_{1k}\begin{bmatrix} c_{11} \\ c_{21} \\ \vdots \\ c_{m1} \end{bmatrix}
+ v_{2k}\begin{bmatrix} c_{12} \\ c_{22} \\ \vdots \\ c_{m2} \end{bmatrix}
+ \cdots
+ v_{rk}\begin{bmatrix} c_{1r} \\ c_{2r} \\ \vdots \\ c_{mr} \end{bmatrix}$$

We see that $[a_{1k}\ a_{2k}\ \cdots\ a_{mk}]^T$ is the $k$-th column vector of $A$. Hence, each column vector is a linear combination of the $r$ $m$-dimensional column vectors $\mathbf{c}_j = [c_{1j}\ c_{2j}\ \cdots\ c_{mj}]^T$.
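The MATLAB verification above has a straightforward NumPy analogue. The sketch below (NumPy assumed available) also confirms the row-dependence relation from the example and the equality rank $A$ = rank $A^T$ asserted by the theorem:

```python
import numpy as np

A = np.array([[3, 0, 2, 2],
              [-6, 42, 24, 54],
              [21, -21, 0, -15]], dtype=float)

a1, a2, a3 = A  # the three row vectors of A

# Row dependence from the example: 6*a1 - (1/2)*a2 - a3 = 0
print(np.allclose(6*a1 - 0.5*a2 - a3, 0))   # True

# The theorem: row rank equals column rank
print(np.linalg.matrix_rank(A))     # 2
print(np.linalg.matrix_rank(A.T))   # 2
```

Note that `matrix_rank` decides rank numerically via singular values with a tolerance, which is the practical analogue of asking how many rows are "truly" independent.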
Consequently, the number of linearly independent column vectors is at most $r$.

Proof (continued): We may carry out the same argument with $A^T$. Since the rows of $A$ are the columns of $A^T$ and vice versa, we find the reverse inequality as well, and hence the number of linearly independent columns equals $r$. ∎

Example: Consider again

$$A = \begin{bmatrix} 3 & 0 & 2 & 2 \\ -6 & 42 & 24 & 54 \\ 21 & -21 & 0 & -15 \end{bmatrix}$$

We see

$$\begin{bmatrix} 2 \\ 24 \\ 0 \end{bmatrix} = \frac{2}{3}\begin{bmatrix} 3 \\ -6 \\ 21 \end{bmatrix} + \frac{2}{3}\begin{bmatrix} 0 \\ 42 \\ -21 \end{bmatrix}
\quad\text{and}\quad
\begin{bmatrix} 2 \\ 54 \\ -15 \end{bmatrix} = \frac{2}{3}\begin{bmatrix} 3 \\ -6 \\ 21 \end{bmatrix} + \frac{29}{21}\begin{bmatrix} 0 \\ 42 \\ -21 \end{bmatrix}$$

So, only two column vectors are linearly independent (rank $A$ = 2, again).

How do we determine the rank? Gauss elimination… but how do we prove that?

Finite-Dimensional, Real Vector Spaces

Definition: A non-empty set $V$ of elements $\mathbf{a}, \mathbf{b}, \dots$ is called a real vector space, and these elements are called vectors, if in $V$ there are defined two algebraic operations (vector addition and scalar multiplication) over the field of real numbers that satisfy the following relations:

(I) Vector addition associates to every pair of elements $\mathbf{a}$ and $\mathbf{b}$ a unique element of $V$ that is denoted $\mathbf{a} + \mathbf{b}$.
(I.1) Commutativity: For any $\mathbf{a}, \mathbf{b}$: $\mathbf{a} + \mathbf{b} = \mathbf{b} + \mathbf{a}$
(I.2) Associativity: For any three vectors $\mathbf{a}, \mathbf{b}, \mathbf{c}$: $(\mathbf{a} + \mathbf{b}) + \mathbf{c} = \mathbf{a} + (\mathbf{b} + \mathbf{c})$
(I.3) There is a unique zero vector $\mathbf{0}$ in $V$ such that $\mathbf{a} + \mathbf{0} = \mathbf{a}$
(I.4) For every $\mathbf{a}$, there is a unique vector $-\mathbf{a}$ such that $\mathbf{a} + (-\mathbf{a}) = \mathbf{0}$
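For $V = \mathbb{R}^n$ with componentwise operations, the group (I) axioms can be spot-checked numerically. A small NumPy sketch, reusing the example vectors (a numerical check illustrates the axioms but is of course not a proof):

```python
import numpy as np

a = np.array([3., 0., 2., 2.])      # sample vectors from R^4
b = np.array([-6., 42., 24., 54.])
c = np.array([21., -21., 0., -15.])
zero = np.zeros(4)                  # the zero vector of (I.3)

assert np.allclose(a + b, b + a)               # (I.1) commutativity
assert np.allclose((a + b) + c, a + (b + c))   # (I.2) associativity
assert np.allclose(a + zero, a)                # (I.3) zero vector
assert np.allclose(a + (-a), zero)             # (I.4) additive inverse
print("group (I) axioms hold for these samples")
```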
(II) Scalar multiplication associates to every $\mathbf{a} \in V$ and every scalar $c$ a unique element of $V$ that is denoted $c\mathbf{a}$.
(II.1) Distributivity: For any $c, \mathbf{a}, \mathbf{b}$: $c(\mathbf{a} + \mathbf{b}) = c\mathbf{a} + c\mathbf{b}$
(II.2) Distributivity: For any $c, k, \mathbf{a}$: $(c + k)\mathbf{a} = c\mathbf{a} + k\mathbf{a}$
(II.3) Associativity: For any $c, k, \mathbf{a}$: $c(k\mathbf{a}) = (ck)\mathbf{a}$
(II.4) For every $\mathbf{a}$: $1\mathbf{a} = \mathbf{a}$

The vector space is finite-dimensional or $n$-dimensional if each vector has $n$ elements, where $n \in \mathbb{Z}^+$.

Definition: A field is a set of mathematical “objects” (the standard terminology of mathematical texts) that is endowed with two mathematical operations, addition (+) and multiplication (·), that satisfy the following axioms:

Axiom 1 (Commutative laws): $x + y = y + x$, $xy = yx$
Axiom 2 (Associative laws): $x + (y + z) = (x + y) + z$, $x(yz) = (xy)z$
Axiom 3 (Identity elements): There are unique elements 0 and 1 such that $x + 0 = x$ and $x \cdot 1 = x$
Axiom 4 (Inverses): For each $x$, there is a unique element $-x$ such that $x + (-x) = 0$; for each $x \ne 0$, there is a unique element $x^{-1}$ such that $x \cdot x^{-1} = 1$
Axiom 5 (Distributive law): $x(y + z) = xy + xz$

Remark: The only fields that are currently important in applications are the real and complex numbers.

Remarks:
(1) The scalar field can be any arithmetic field that obeys the field axioms. The complex field is often used. Quaternions are sometimes used.
(2) A vector space $V$ can consist of any objects that obey the axioms. These can be column vectors, row vectors, or matrices (of the same size), among other possibilities. This concept is much more general than the concept of row vectors or column vectors!
(3) The most important abstract example of a real, finite-dimensional vector space is $\mathbb{R}^n$, the space of all ordered $n$-tuples of real numbers.
(4) The powerful concept of a vector space can be (and will be) extended to infinite dimensions. An important example: all functions whose squares are Lebesgue-integrable between 0 and 1, denoted $L^2(0,1)$.

Definitions:
(1) The maximum number of linearly independent vectors in $V$ is called the dimension of $V$ and is denoted dim $V$. In a finite-dimensional vector space, this number is always a non-negative integer.
(2) A linearly independent set of vectors that has the largest possible number of elements is called a basis for $V$. The number of these elements equals dim $V$.
(3) The set of all linear combinations of a non-empty set of vectors $\mathbf{a}_1, \dots, \mathbf{a}_p \in V$ is called the span of these vectors. It is a subspace of $V$ and constitutes a vector space in its own right. The subspace’s dimension is less than or equal to dim $V$.

Example: The span of the three vectors from the earlier example,

$$\mathbf{a}_1 = [3\ \ 0\ \ 2\ \ 2],\quad \mathbf{a}_2 = [-6\ \ 42\ \ 24\ \ 54],\quad \mathbf{a}_3 = [21\ \ {-21}\ \ 0\ \ {-15}]$$

is a vector space of dimension 2. Any two of these vectors are linearly independent and may be chosen as a basis, e.g., $\mathbf{a}_1$ and $\mathbf{a}_2$.

Matrix Theorems

Theorem 1: Row-equivalent matrices have the same rank. Consequently, we may determine the rank by reducing the matrix to upper triangular (echelon) form. The number of non-zero rows equals the rank.

Example: Consider again

$$A = \begin{bmatrix} 3 & 0 & 2 & 2 \\ -6 & 42 & 24 & 54 \\ 21 & -21 & 0 & -15 \end{bmatrix}$$

We find

$$\begin{bmatrix} 3 & 0 & 2 & 2 \\ -6 & 42 & 24 & 54 \\ 21 & -21 & 0 & -15 \end{bmatrix}
\to
\begin{bmatrix} 3 & 0 & 2 & 2 \\ 0 & 42 & 28 & 58 \\ 0 & -21 & -14 & -29 \end{bmatrix}
\to
\begin{bmatrix} 3 & 0 & 2 & 2 \\ 0 & 42 & 28 & 58 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

and conclude once again that the matrix has rank 2.

Theorem 2 (Linear dependence and independence): The $p$ vectors $\mathbf{x}_1,\dots,\mathbf{x}_p$ with $n$ components each are linearly independent if the matrix with row vectors $\mathbf{x}_1,\dots,\mathbf{x}_p$ has rank $p$. They are linearly dependent if the rank is less than $p$.

Theorem 3: A set of $p$ vectors with $n$ components each, where $n < p$, is always linearly dependent.

Theorem 4: The vector space $\mathbb{R}^n$, which consists of all vectors with $n$ components, has dimension $n$.

Theorem 5 (Fundamental theorem for linear systems):
(a) Existence. A linear system of $m$ equations in $n$ unknowns $x_1,\dots,x_n$ has solutions if and only if the coefficient matrix $A$ and the augmented matrix $\hat{A}$ have the same rank. We recall

$$A\mathbf{x} = \mathbf{b}: \quad
\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
= \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}, \qquad
\hat{A} = \left[\begin{array}{cccc|c} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & & & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{array}\right]$$

(b) Uniqueness. The system has precisely one solution if and only if rank $A$ = rank $\hat{A}$ = $n$.
(c) Infinitely many solutions. If $r$ = rank $A$ = rank $\hat{A}$ < $n$, the system has infinitely many solutions. We may choose $n - r$ of these unknowns arbitrarily.
Once that is done, the remaining $r$ unknowns are determined.

(d) Gauss elimination. If solutions exist, they can all be obtained by Gauss elimination.

Definition: The system $A\mathbf{x} = \mathbf{b}$ is called homogeneous if $\mathbf{b} = \mathbf{0}$. Otherwise, it is called inhomogeneous.

Theorem 6 (Homogeneous systems): A homogeneous system always has the trivial solution $x_1 = 0, x_2 = 0, \dots, x_n = 0$.
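Theorems 1, 5, and 6 can be illustrated together in NumPy. The sketch below implements rank-by-elimination in the spirit of Theorem 1 (the helper name `rank_by_elimination` and the small 2×2 demonstration system are my own, for illustration only), then applies the existence test of Theorem 5(a):

```python
import numpy as np

def rank_by_elimination(M, tol=1e-10):
    """Reduce M to echelon form and count non-zero rows (Theorem 1).
    An illustrative helper, not from the slides."""
    M = np.array(M, dtype=float)
    m, n = M.shape
    row = 0
    for col in range(n):
        if row >= m:
            break
        p = row + np.argmax(np.abs(M[row:, col]))  # partial pivoting
        if abs(M[p, col]) < tol:
            continue                               # no pivot in this column
        M[[row, p]] = M[[p, row]]                  # swap pivot row into place
        M[row+1:] -= np.outer(M[row+1:, col] / M[row, col], M[row])
        row += 1
    return row

# Theorem 1: the running example reduces to two non-zero rows
A = [[3, 0, 2, 2], [-6, 42, 24, 54], [21, -21, 0, -15]]
print(rank_by_elimination(A))   # 2

# Theorem 5(a): Ax = b is solvable iff rank A == rank [A | b]
B = np.array([[1., 2.], [2., 4.]])   # rank 1 (second row = 2 * first)
b_good = np.array([3., 6.])          # in the column space -> solvable
b_bad  = np.array([3., 7.])          # not in the column space

def solvable(A, b):
    aug = np.column_stack([A, b])
    return rank_by_elimination(A) == rank_by_elimination(aug)

print(solvable(B, b_good))   # True  (r = 1 < n = 2: infinitely many solutions)
print(solvable(B, b_bad))    # False (no solution)

# Theorem 6: the homogeneous system always has the trivial solution x = 0
print(np.allclose(B @ np.zeros(2), 0))   # True
```

Counting non-zero rows after elimination is exactly the procedure Theorem 1 licenses; the floating-point tolerance `tol` stands in for "exactly zero" in finite-precision arithmetic.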
