Math 640 — Midterm Practice Examination

1. Give a proof that if $A, B \in M_n$ and $AB$ has rank $n$, then $A$ has rank $n$.

2. We say a matrix $A \in M_n$ has a square root $B \in M_n$ if $B^2 = A$. Prove that every diagonalizable matrix has a square root.

3. We say a matrix $A \in M_n$ is diagonally dominant if $\sum_{j=1,\, j \ne i}^{n} |a_{ij}| < |a_{ii}|$ for all $i = 1, \dots, n$.
   (i) Prove that diagonally dominant matrices are invertible.
   (ii) Prove that if $A$ is diagonally dominant, then $\|A\|_\infty \le 2 \max_{1 \le i \le n} |a_{ii}|$.

4. Prove that if $A \in M_{m,p}$ and $B \in M_{p,n}$, then $\operatorname{rank}(AB) \ge \operatorname{rank}(A) + \operatorname{rank}(B) - p$.

5. Suppose $A \in M_n$. Define the function $\|\cdot\|$ on $M_n$ by $\|A\| = \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}|$.
   (i) Prove that $\|\cdot\|$ is a norm on $M_n$.
   (ii) Prove that $\|\cdot\|$ is the matrix norm pertaining to the $\infty$-norm on $\mathbb{C}^n$.

6. Suppose $A \in M_n$ and there is an invertible matrix $S \in M_n$ for which $S^{-1} A S = D$ is a diagonal matrix. Prove that the columns of $S$ are eigenvectors of $A$. (Give a complete proof.)

7. We say a matrix $A \in M_n(\mathbb{R})$ is column stochastic if all its elements are nonnegative and $\sum_{j=1}^{n} a_{ji} = 1$ for all $i = 1, \dots, n$.
   (i) Prove that the set of column stochastic matrices is multiplicatively closed.
   (ii) Prove that the spectral radius $\rho(A) = 1$.

8. Let $A, D \in M_n(\mathbb{R})$, where $D$ is diagonal with distinct diagonal elements. Prove that $A$ and $D$ commute if and only if $A$ is also diagonal.

9. Let $A \in M_n(\mathbb{R})$ be skew-symmetric. Show that $A$ is diagonalizable.

10. Prove that unitary matrices are closed in the pointwise topology. That is, if $\{U_n\}$ is a sequence of unitary matrices and $\lim U_n = V$ in each entry, then $V$ is unitary.

11. Which Householder transformations on $\mathbb{R}^2$ are rotations?

12. Show that for every matrix $A \in M_n$ there is a lower triangular matrix $L$ and an upper triangular matrix $U$ such that $LA = U$.

13. Prove that the unitary matrices are closed in the norm. That is, if the sequence $\{U_n\} \subset M_n$ are all unitary and $\lim_{n \to \infty} U_n = U$ in the 2-norm, then $U$ is also unitary.

14. Given two finite sequences $\{c_k\}_{k=1}^{n}$ and $\{d_k\}_{k=1}^{n}$, prove that $\sum_k |c_k d_k| \le \max_k |d_k| \sum_k |c_k|$.

15. Verify that a matrix norm which is subordinate to a vector norm satisfies norm conditions (i) and (ii).

16. Let $A \in M_n$. Show that the matrix norm subordinate to the vector norm $\|\cdot\|_\infty$ is given by $\|A\| = \max_i \|r_i(A)\|_1$, where as usual $r_i(A)$ denotes the $i$th row of the matrix $A$ and $\|\cdot\|_1$ is the $\ell^1$ norm.

17. The Hilbert matrix $H_n$ of order $n$ is defined by $h_{ij} = \frac{1}{i + j - 1}$, $1 \le i, j \le n$.

18. Prove or disprove that $\|H_n\|_1 \le \ln n$.

19. Show that $\|H_n\|_1 \ge 1$.

20. Show that $\|H_n^{-1}\|_2 \ge 2^n$ as $n \to \infty$.

21. Show that the spectral radius of $H_n$ is bounded by $1 + \ln n$.

22. Show that for each $\varepsilon > 0$ there exists an integer $N$ such that if $n \ge N$, there is a vector $x \in \mathbb{R}^n$ with $\|x\|_2 = 1$ such that $\|H_n x\|_2 < \varepsilon$.

23. Same as the previous question, except that you need to show that $N = \lceil 1/\varepsilon^2 \rceil + 1$ will work.

24. Prove that the product of two isometries is an isometry.

25. Let $A \in M_n$. If $\operatorname{tr} A^k = 0$ for $k = 1, \dots, n$, then $A$ is nilpotent.

26. Let $A \in M_n$. If $1 \notin \sigma(A)$, we can define the Cayley transform of $A$ by $C(A) = (I - A)^{-1}(I + A)$. Show that if $A$ is skew-Hermitian, then $C(A)$ is unitary, and conversely.

27. Let $A \in M_n$. Prove that if $\sigma(A)$ is contained in the open left half plane, then $C(A)$ is convergent; that is, $\lim_{m \to \infty} C(A)^m = 0$. (This is a little difficult.)
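The Hilbert-matrix questions (17-22) invite numerical experimentation before attempting proofs. The NumPy sketch below is a study aid, not part of the original exam; the helper `hilbert` and the printed quantities are my own choices, tabulating the norms and the decay of $\|H_n e_n\|_2$ as reconstructed above.

```python
# Numerical sanity checks for the Hilbert-matrix problems (17-22).
# Exploration only: these printouts suggest, but do not prove, the claims.
import numpy as np

def hilbert(n: int) -> np.ndarray:
    """H_n with entries h_ij = 1/(i + j - 1), using 1-based i, j."""
    i, j = np.indices((n, n)) + 1
    return 1.0 / (i + j - 1)

for n in (5, 10, 50, 200):
    H = hilbert(n)
    col_norm = np.linalg.norm(H, 1)        # max column sum = ||H_n||_1
    rho = max(abs(np.linalg.eigvalsh(H)))  # spectral radius (H_n is symmetric)
    tail = np.linalg.norm(H[:, -1])        # ||H_n e_n||_2, the unit vector of problem 22
    print(f"n={n:4d}  ||H||_1={col_norm:.4f}  ln n={np.log(n):.4f}  "
          f"rho={rho:.4f}  1+ln n={1 + np.log(n):.4f}  ||H e_n||_2={tail:.4f}")
```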
Recommended publications
  • The Invertible Matrix Theorem
    The Invertible Matrix Theorem
    Ryan C. Daileda, Trinity University. Linear Algebra.

    Introduction. It is important to recognize when a square matrix is invertible. We can now characterize invertibility in terms of every one of the concepts we have encountered, and we will continue to develop criteria for invertibility, adding them to our list as we go. The invertibility of a matrix is also related to the invertibility of linear transformations, which we discuss below.

    Theorem 1 (The Invertible Matrix Theorem). For a square (n × n) matrix A, TFAE:
    a. A is invertible.
    b. A has a pivot in each row/column.
    c. The RREF of A is I.
    d. The equation Ax = 0 only has the solution x = 0.
    e. The columns of A are linearly independent.
    f. Null A = {0}.
    g. A has a left inverse (BA = I_n for some B).
    h. The transformation x ↦ Ax is one-to-one.
    i. The equation Ax = b has a (unique) solution for any b.
    j. Col A = R^n.
    k. A has a right inverse (AC = I_n for some C).
    l. The transformation x ↦ Ax is onto.
    m. A^T is invertible.

    Inverse Transforms. Definition: A linear transformation T : R^n → R^n (also called an endomorphism of R^n) is called invertible iff it is both one-to-one and onto. If [T] is the standard matrix for T, then we know T is given by x ↦ [T]x. The Invertible Matrix Theorem tells us that this transformation is invertible iff [T] is invertible.
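As a quick companion to the theorem, the following NumPy sketch (illustration only; floating-point rank checks are heuristic, not proof, and the random matrix is my own choice) confirms that several of the equivalent conditions hold simultaneously:

```python
# Empirical companion to the Invertible Matrix Theorem: for a random square A,
# several of the equivalent invertibility conditions are checked at once.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

full_rank = np.linalg.matrix_rank(A) == 4          # pivot in every row/column
det_nonzero = not np.isclose(np.linalg.det(A), 0)  # nonzero determinant
B = np.linalg.inv(A)                               # exists iff A is invertible
two_sided = np.allclose(A @ B, np.eye(4)) and np.allclose(B @ A, np.eye(4))

print(full_rank, det_nonzero, two_sided)           # all True for invertible A
```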
  • Math 54. Selected Solutions for Week 8, Section 6.1 (Page 282)
    Math 54. Selected Solutions for Week 8. Section 6.1 (Page 282).

    22. Let $\vec u = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}$. Explain why $\vec u \cdot \vec u \ge 0$. When is $\vec u \cdot \vec u = 0$?

    We have $\vec u \cdot \vec u = u_1^2 + u_2^2 + u_3^2$, which is $\ge 0$ because it is a sum of squares (all of which are $\ge 0$). It is zero if and only if $\vec u = \vec 0$. Indeed, if $\vec u = \vec 0$ then $\vec u \cdot \vec u = 0$, as can be seen directly from the formula. Conversely, if $\vec u \cdot \vec u = 0$ then all the terms $u_i^2$ must be zero, so each $u_i$ must be zero. This implies $\vec u = \vec 0$.

    26. Let $\vec u = \begin{bmatrix} 5 \\ -6 \\ 7 \end{bmatrix}$, and let $W$ be the set of all $\vec x$ in $\mathbb{R}^3$ such that $\vec u \cdot \vec x = 0$. What theorem in Chapter 4 can be used to show that $W$ is a subspace of $\mathbb{R}^3$? Describe $W$ in geometric language.

    The condition $\vec u \cdot \vec x = 0$ is equivalent to $\vec x \in \operatorname{Nul} \vec u^{\,T}$, and this is a subspace of $\mathbb{R}^3$ by Theorem 2 on page 187. Geometrically, it is the plane perpendicular to $\vec u$ and passing through the origin.

    30. Let $W$ be a subspace of $\mathbb{R}^n$, and let $W^\perp$ be the set of all vectors orthogonal to $W$. Show that $W^\perp$ is a subspace of $\mathbb{R}^n$ using the following steps.
    (a) Take $\vec z \in W^\perp$, and let $\vec u$ represent any element of $W$. Then $\vec z \cdot \vec u = 0$. Take any scalar $c$ and show that $c\vec z$ is orthogonal to $\vec u$. (Since $\vec u$ was an arbitrary element of $W$, this will show that $c\vec z$ is in $W^\perp$.)
    (b) ...
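Problem 26 can also be checked numerically. The sketch below is an assumed setup, not part of the posted solutions: it recovers a basis for the plane W as the null space of the 1 × 3 matrix u^T via the SVD.

```python
# Problem 26 numerically: W = {x in R^3 : u . x = 0} is Nul(u^T), the plane
# through the origin perpendicular to u. A basis can be read off the SVD.
import numpy as np

u = np.array([5.0, -6.0, 7.0])
# Null space of the 1x3 matrix u^T: right singular vectors beyond the rank.
_, s, Vt = np.linalg.svd(u.reshape(1, 3))
W_basis = Vt[1:]                 # two vectors spanning the plane W

print(W_basis @ u)               # ~[0, 0]: both basis vectors are orthogonal to u
```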
  • The Invertible Matrix Theorem II
    CHAPTER 4. Vector Spaces

    For Problems 9-12, determine the solution set to $Ax = b$, and show that all solutions are of the form (4.9.3).

    9. $A = \begin{bmatrix} 1 & 3 & -1 \\ 2 & 7 & 9 \\ 1 & 5 & 21 \end{bmatrix}$, $b = \begin{bmatrix} 4 \\ 11 \\ 10 \end{bmatrix}$.

    10. $A = \begin{bmatrix} 2 & -1 & 1 & 4 \\ 1 & -1 & 2 & 3 \\ 1 & -2 & 5 & 5 \end{bmatrix}$, $b = \begin{bmatrix} 5 \\ 6 \\ 13 \end{bmatrix}$.

    11. $A = \begin{bmatrix} 1 & 1 & -2 \\ 3 & -1 & -7 \\ 1 & 1 & 1 \\ 2 & 2 & -4 \end{bmatrix}$, $b = \begin{bmatrix} -3 \\ 2 \\ 0 \\ -6 \end{bmatrix}$.

    12. $A = \begin{bmatrix} 1 & 1 & -1 & 5 \\ 0 & 2 & -1 & 7 \\ 4 & 2 & -3 & 13 \end{bmatrix}$, $b = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$.

    13. Show that a 3 × 7 matrix A with nullity(A) = 4 must have colspace(A) = R^3. Is rowspace(A) = R^3?

    14. Show that a 6 × 4 matrix A with nullity(A) = 0 must have rowspace(A) = R^4. Is colspace(A) = R^4?

    15. Prove that if rowspace(A) = nullspace(A), then A contains an even number of columns.

    16. Show that a 5 × 7 matrix A must have 2 ≤ nullity(A) ≤ 7. Give an example of a 5 × 7 matrix A with nullity(A) = 2 and an example of a 5 × 7 matrix A with nullity(A) = 7.

    17. Show that a 3 × 8 matrix A must have 5 ≤ nullity(A) ≤ 8. Give an example of a 3 × 8 matrix A with nullity(A) = 5 and an example of a 3 × 8 matrix A with nullity(A) = 8.

    18. Prove that if A and B are n × n matrices and A is invertible, then nullity(AB) = nullity(B). [Hint: Bx = 0 if and only if ABx = 0.]

    4.10 The Invertible Matrix Theorem II. In Section 2.8, we gave a list of characterizations of invertible matrices (Theorem 2.8.1).
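With the matrices as reconstructed above, Problem 9's coefficient matrix is singular with rank 2, so the solution set is a one-parameter family, which is exactly the structure (4.9.3) describes. The NumPy sketch below is an illustration of that structure, not a substitute for row reduction; the spot check of Problem 18 uses matrices of my own choosing.

```python
# Problem 9 numerically: A has rank 2, so Ax = b has solutions x = x_p + t*v
# with v spanning Nul(A).
import numpy as np

A = np.array([[1, 3, -1],
              [2, 7,  9],
              [1, 5, 21]], dtype=float)
b = np.array([4.0, 11.0, 10.0])

x_p, *_ = np.linalg.lstsq(A, b, rcond=None)   # one particular solution
_, s, Vt = np.linalg.svd(A)
v = Vt[-1]                                    # basis for Nul(A), since rank(A) = 2
print(np.allclose(A @ x_p, b), np.allclose(A @ v, 0))

# Problem 18's identity, spot-checked: nullity(PB) = nullity(B) for invertible P.
nullity = lambda M: M.shape[1] - np.linalg.matrix_rank(M)
P = np.array([[2.0, 1.0], [1.0, 1.0]])        # invertible 2x2
B = np.array([[1.0, 2.0], [2.0, 4.0]])        # rank 1, nullity 1
print(nullity(P @ B) == nullity(B))           # True
```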
  • Rotation Matrix - Wikipedia, the Free Encyclopedia
    Rotation matrix. From Wikipedia, the free encyclopedia.

    In linear algebra, a rotation matrix is a matrix that is used to perform a rotation in Euclidean space. For example, the matrix

    $R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$

    rotates points in the xy-Cartesian plane counterclockwise through an angle θ about the origin of the Cartesian coordinate system. To perform the rotation, the position of each point must be represented by a column vector v, containing the coordinates of the point. A rotated vector is obtained by using the matrix multiplication Rv (see below for details).

    In two and three dimensions, rotation matrices are among the simplest algebraic descriptions of rotations, and are used extensively for computations in geometry, physics, and computer graphics. Though most applications involve rotations in two or three dimensions, rotation matrices can be defined for n-dimensional space.

    Rotation matrices are always square, with real entries. Algebraically, a rotation matrix in n dimensions is an n × n special orthogonal matrix, i.e. an orthogonal matrix whose determinant is 1: $R^T = R^{-1}$, $\det R = 1$. The set of all rotation matrices forms a group, known as the rotation group or the special orthogonal group. It is a subset of the orthogonal group, which includes reflections and consists of all orthogonal matrices with determinant 1 or -1, and of the special linear group, which includes all volume-preserving transformations and consists of matrices with determinant 1.
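The 2 × 2 rotation matrix translates directly into code. A minimal NumPy sketch (the function name and test vector are my own):

```python
# R(theta) rotates column vectors counterclockwise by theta about the origin.
import numpy as np

def rotation(theta: float) -> np.ndarray:
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

R = rotation(np.pi / 2)                    # quarter turn counterclockwise
print(R @ np.array([1.0, 0.0]))            # -> ~[0, 1]
print(np.isclose(np.linalg.det(R), 1.0))   # special orthogonal: det R = 1
print(np.allclose(R.T @ R, np.eye(2)))     # orthogonal: R^T R = I
```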
  • On Bounds and Closed Form Expressions for Capacities of Discrete Memoryless Channels with Invertible Positive Matrices
    On Bounds and Closed Form Expressions for Capacities of Discrete Memoryless Channels with Invertible Positive Matrices
    Thuan Nguyen and Thinh Nguyen, School of Electrical and Computer Engineering, Oregon State University, Corvallis, OR 97331. Email: [email protected], [email protected]

    Abstract: While capacities of discrete memoryless channels are well studied, it is still not possible to obtain a closed form expression for the capacity of an arbitrary discrete memoryless channel. This paper describes an elementary technique based on Karush-Kuhn-Tucker (KKT) conditions to obtain (1) a good upper bound of a discrete memoryless channel having an invertible positive channel matrix and (2) a closed form expression for the capacity if the channel matrix satisfies certain conditions related to its singular value and its Gershgorin's disk.

    Index Terms: Wireless Communication, Convex Optimization, Channel Capacity, Mutual Information.

    I. INTRODUCTION

    Discrete memoryless channels (DMC) play a critical role in the early development of information theory and its applications.

    That said, it is still beneficial to find the channel capacity in closed form expression for a number of reasons. These include: (1) formulas can often provide a good intuition about the relationship between the capacity and different channel parameters; (2) formulas offer a faster way to determine the capacity than algorithms; and (3) formulas are useful for analytical derivations where a closed form expression of the capacity is needed in the intermediate steps. To that end, our paper describes an elementary technique based on the theory of convex optimization to find closed form expressions for (1) a new upper bound on capacities of discrete memoryless channels with positive invertible channel matrix and (2) the optimality conditions of the channel matrix such that the upper bound is precisely the capacity.
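For contrast with the closed-form approach the paper pursues, the capacity of a small DMC can be computed numerically with the classical Blahut-Arimoto iteration. The sketch below is that standard algorithm, not the authors' KKT-based technique; the helper name and the binary-symmetric-channel example are my own.

```python
# Blahut-Arimoto iteration for the capacity of a discrete memoryless channel.
import numpy as np

def blahut_arimoto(W: np.ndarray, iters: int = 500) -> float:
    """Capacity (bits/use) of a DMC with row-stochastic W[x, y] = P(y|x)."""
    n_in = W.shape[0]
    p = np.full(n_in, 1.0 / n_in)   # input distribution, refined each round
    for _ in range(iters):
        q = p @ W                   # induced output distribution
        # d[x] = D( W(.|x) || q ) in bits, skipping zero channel entries
        d = np.sum(W * np.log2(W / q, where=W > 0, out=np.zeros_like(W)), axis=1)
        p = p * np.exp2(d)          # multiplicative update
        p /= p.sum()
    q = p @ W
    d = np.sum(W * np.log2(W / q, where=W > 0, out=np.zeros_like(W)), axis=1)
    return float(p @ d)

# Binary symmetric channel, crossover 0.1: capacity = 1 - H(0.1) ~ 0.531 bits.
W = np.array([[0.9, 0.1],
              [0.1, 0.9]])
print(blahut_arimoto(W))
```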
  • Chapter 9 Eigenvectors and Eigenvalues
    Chapter 9. Eigenvectors and Eigenvalues

    9.1 Eigenvectors and Eigenvalues of a Linear Map

    Given a finite-dimensional vector space E, let f : E → E be any linear map. If, by luck, there is a basis (e_1, ..., e_n) of E with respect to which f is represented by a diagonal matrix

    $D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_n \end{pmatrix},$

    then the action of f on E is very simple; in every "direction" e_i, we have f(e_i) = λ_i e_i.

    We can think of f as a transformation that stretches or shrinks space along the directions e_1, ..., e_n (at least if E is a real vector space). In terms of matrices, the above property translates into the fact that there is an invertible matrix P and a diagonal matrix D such that a matrix A can be factored as A = PDP^{-1}.

    When this happens, we say that f (or A) is diagonalizable, the λ_i are called the eigenvalues of f, and the e_i are eigenvectors of f. For example, we will see that every symmetric matrix can be diagonalized.

    Unfortunately, not every matrix can be diagonalized. For example, the matrix

    $A_1 = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$

    can't be diagonalized. Sometimes, a matrix fails to be diagonalizable because its eigenvalues do not belong to the field of coefficients, such as

    $A_2 = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},$

    whose eigenvalues are ±i. This is not a serious problem because A_2 can be diagonalized over the complex numbers.
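A numerical sketch of the two situations in the excerpt (a demonstration, not a proof; the symmetric example matrix is my own): a diagonalizable matrix factors as A = PDP^{-1}, while A_2 has the complex eigenvalues ±i.

```python
# Diagonalization in code, plus the excerpt's example with complex eigenvalues.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # symmetric, hence diagonalizable
evals, P = np.linalg.eig(A)     # columns of P are eigenvectors
D = np.diag(evals)
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True: A = P D P^{-1}

A2 = np.array([[0.0, -1.0],
               [1.0,  0.0]])
print(np.linalg.eigvals(A2))    # -> [0.+1.j, 0.-1.j], i.e. +-i
```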
  • The Invertible Matrix Theorem
    Math 240. TA: Shuyi Weng. Winter 2017. February 6, 2017.

    The Invertible Matrix Theorem

    Theorem 1. Let A ∈ R^{n×n}. Then the following statements are equivalent.
    1. A is invertible.
    2. A is row equivalent to I_n.
    3. A has n pivots in its reduced echelon form.
    4. The matrix equation Ax = 0 has only the trivial solution.
    5. The columns of A are linearly independent.
    6. The linear transformation T defined by T(x) = Ax is one-to-one.
    7. The equation Ax = b has at least one solution for every b ∈ R^n.
    8. The columns of A span R^n.
    9. The linear transformation T defined by T(x) = Ax is onto.
    10. There exists an n × n matrix B such that AB = I_n.
    11. There exists an n × n matrix C such that CA = I_n.
    12. A^T is invertible.

    Theorem 2. Let T : R^n → R^n be defined by T(x) = Ax. Then T is one-to-one and onto if and only if A is an invertible matrix.

    Problem. True or false (all matrices are assumed to be n × n, unless otherwise specified).
    1. The identity matrix is invertible.
    2. If A can be row reduced to the identity matrix, then it is invertible.
    3. If both A and B are invertible, so is AB.
    4. If A is invertible, then the matrix equation Ax = b is consistent for every b ∈ R^n.
    5. If A is an n × n matrix such that the equation Ax = e_i is consistent for each column e_i ∈ R^n of the n × n identity matrix, then A is invertible.
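True/false item 5 has a constructive flavor that is easy to illustrate: solving Ax = e_i for each column of the identity assembles a right inverse. A minimal sketch (the random matrix, almost surely invertible, is my own choice):

```python
# If Ax = e_i is solvable for every column e_i of I, the solutions stack into
# a matrix C with AC = I, exhibiting A as invertible.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

# Solve Ax = e_i column by column and stack the solutions into C.
C = np.column_stack([np.linalg.solve(A, e) for e in np.eye(3).T])
print(np.allclose(A @ C, np.eye(3)))   # True: C is a right (hence two-sided) inverse
```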
  • Block Matrices in Linear Algebra
    PRIMUS: Problems, Resources, and Issues in Mathematics Undergraduate Studies. ISSN: 1051-1970 (Print), 1935-4053 (Online). Journal homepage: https://www.tandfonline.com/loi/upri20

    Block Matrices in Linear Algebra
    Stephan Ramon Garcia and Roger A. Horn

    To cite this article: Stephan Ramon Garcia & Roger A. Horn (2020) Block Matrices in Linear Algebra, PRIMUS, 30:3, 285-306, DOI: 10.1080/10511970.2019.1567214. To link to this article: https://doi.org/10.1080/10511970.2019.1567214. Accepted author version posted online: 05 Feb 2019; published online: 13 May 2019.

    Abstract: Linear algebra is best done with block matrices. As evidence in support of this thesis, we present numerous examples suitable for classroom presentation.

    Keywords: Matrix, matrix multiplication, block matrix, Kronecker product, rank, eigenvalues

    1. INTRODUCTION

    This paper is addressed to instructors of a first course in linear algebra, who need not be specialists in the field. We aim to convince the reader that linear algebra is best done with block matrices. In particular, flexible thinking about the process of matrix multiplication can reveal concise proofs of important theorems and expose new results. Viewing linear algebra from a block-matrix perspective gives an instructor access to useful techniques, exercises, and examples.
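In NumPy, block matrices are assembled with np.block, and block-wise multiplication agrees with ordinary multiplication, which is the paper's central theme. A small hedged example (the blocks are chosen arbitrarily):

```python
# Block matrices in code: assemble M from blocks, then check that squaring M
# reproduces the block-multiplication formula for its top-right block.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.eye(2)
Z = np.zeros((2, 2))

M = np.block([[A, B],
              [Z, A]])           # 4x4 block upper-triangular matrix

# By block multiplication, (M^2)'s top-right block is A@B + B@A.
top_right = (M @ M)[:2, 2:]
print(np.allclose(top_right, A @ B + B @ A))   # True
```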
  • Lecture 12 Elementary Matrices and Finding Inverses
    Mathematics 13: Lecture 12. Elementary Matrices and Finding Inverses.
    Dan Sloughter, Furman University. January 30, 2008.

    Result from previous lecture. The following statements are equivalent for an n × n matrix A:
    - A is invertible.
    - The only solution to Ax = 0 is the trivial solution x = 0.
    - The reduced row-echelon form of A is I_n.
    - Ax = b has a solution for every b in R^n.
    - There exists an n × n matrix C for which AC = I_n.
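The lecture's subject, elementary matrices, can be demonstrated concretely: each row operation is left multiplication by an elementary matrix, and the product of those matrices is the inverse. A sketch for one fixed 2 × 2 example of my own choosing:

```python
# Row-reducing A to I via elementary matrices; their product is A^{-1}.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

E1 = np.array([[1.0, 0.0], [-3.0, 1.0]])   # R2 <- R2 - 3 R1
E2 = np.array([[1.0, 0.0], [0.0, -0.5]])   # R2 <- -(1/2) R2
E3 = np.array([[1.0, -2.0], [0.0, 1.0]])   # R1 <- R1 - 2 R2

print(E3 @ E2 @ E1 @ A)                    # -> identity matrix
A_inv = E3 @ E2 @ E1                       # product of the elementary matrices
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```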
  • Similar Matrices and Jordan Form
    Similar matrices and Jordan form

    We've nearly covered the entire heart of linear algebra - once we've finished singular value decompositions we'll have seen all the most central topics.

    A^T A is positive definite

    A matrix is positive definite if x^T Ax > 0 for all x ≠ 0. This is a very important class of matrices; positive definite matrices appear in the form of A^T A when computing least squares solutions. In many situations, a rectangular matrix is multiplied by its transpose to get a square matrix.

    Given a symmetric positive definite matrix A, is its inverse also symmetric and positive definite? Yes, because if the (positive) eigenvalues of A are λ_1, λ_2, ..., λ_n, then the eigenvalues 1/λ_1, 1/λ_2, ..., 1/λ_n of A^{-1} are also positive.

    If A and B are positive definite, is A + B positive definite? We don't know much about the eigenvalues of A + B, but we can use the properties x^T Ax > 0 and x^T Bx > 0 to show that x^T(A + B)x > 0 for x ≠ 0, and so A + B is also positive definite.

    Now suppose A is a rectangular (m by n) matrix. A is almost certainly not symmetric, but A^T A is square and symmetric. Is A^T A positive definite? We'd rather not try to find the eigenvalues or the pivots of this matrix, so we ask when x^T A^T Ax is positive. Simplifying x^T A^T Ax is just a matter of moving parentheses: x^T(A^T A)x = (Ax)^T(Ax) = |Ax|^2 ≥ 0.
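The closing identity x^T(A^T A)x = |Ax|^2 ≥ 0 is easy to check numerically (illustration only; the random rectangular matrix is my choice):

```python
# For rectangular A, A^T A is symmetric positive semidefinite:
# x^T (A^T A) x = |Ax|^2 >= 0, and all eigenvalues of A^T A are nonnegative.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))   # rectangular, almost surely full column rank
S = A.T @ A                       # 3x3 symmetric

x = rng.standard_normal(3)
print(np.isclose(x @ S @ x, np.linalg.norm(A @ x) ** 2))   # identity holds
print(np.all(np.linalg.eigvalsh(S) >= -1e-12))             # eigenvalues >= 0
```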
  • MATH. 513. JORDAN FORM
    MATH. 513. JORDAN FORM

    Let $A_1, \dots, A_k$ be square matrices of size $n_1, \dots, n_k$, respectively, with entries in a field F. We define the matrix $A_1 \oplus \cdots \oplus A_k$ of size $n = n_1 + \cdots + n_k$ as the block matrix

    $A_1 \oplus \cdots \oplus A_k = \begin{pmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & A_k \end{pmatrix}.$

    It is called the direct sum of the matrices $A_1, \dots, A_k$. A matrix of the form

    $\begin{pmatrix} \lambda & 1 & 0 & \cdots & 0 \\ 0 & \lambda & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & & & \lambda & 1 \\ 0 & \cdots & & 0 & \lambda \end{pmatrix}$

    is called a Jordan block. If k is its size, it is denoted by $J_k(\lambda)$. A direct sum $J = J_{k_1}(\lambda_1) \oplus \cdots \oplus J_{k_r}(\lambda_r)$ of Jordan blocks is called a Jordan matrix.

    Theorem. Let T : V → V be a linear operator on a finite-dimensional vector space over a field F. Assume that the characteristic polynomial of T is a product of linear polynomials. Then there exists a basis E in V such that $[T]_E$ is a Jordan matrix.

    Corollary. Let $A \in M_n(F)$. Assume that its characteristic polynomial is a product of linear polynomials. Then there exists a Jordan matrix J and an invertible matrix C such that $A = CJC^{-1}$.

    Notice that the Jordan matrix J (which is called a Jordan form of A) is not defined uniquely. For example, we can permute its Jordan blocks. Otherwise the matrix J is defined uniquely (see Problem 7). On the other hand, there are many choices for C. We have seen this already in the diagonalization process. What is good about it? We have, as in the case when A is diagonalizable, $A^N = CJ^N C^{-1}$.
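SymPy computes Jordan forms symbolically. The sketch below (the 4 × 4 example matrix is my own, borrowed from standard expositions) verifies the factorization A = CJC^{-1} that the corollary asserts:

```python
# Jordan form with SymPy: A = C * J * C^{-1}, so powers of A reduce to powers
# of the Jordan matrix J, as the notes point out.
import sympy as sp

A = sp.Matrix([[ 5,  4,  2,  1],
               [ 0,  1, -1, -1],
               [-1, -1,  3,  0],
               [ 1,  1, -1,  2]])

C, J = A.jordan_form()            # returns (C, J) with A = C*J*C^{-1}
sp.pprint(J)                      # Jordan blocks along the diagonal
print(sp.simplify(C * J * C.inv() - A) == sp.zeros(4, 4))   # True
```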
  • Matrix Inverses
    Matrix inverses

    Recall...

    Definition. A square matrix A is invertible (or nonsingular) if ∃ a matrix B such that AB = I and BA = I. (We say B is an inverse of A.)

    Remark. Not all square matrices are invertible.

    Theorem. If A is invertible, then its inverse is unique.

    Remark. When A is invertible, we denote its inverse as A^{-1}.

    Theorem. If A is an n × n invertible matrix, then the system of linear equations given by Ax = b has the unique solution x = A^{-1}b.

    Proof. Assume A is an invertible matrix. Then we have A(A^{-1}b) = (AA^{-1})b = Ib = b, using, in turn, associativity of matrix multiplication, the definition of the inverse, and the definition of the identity. Thus, x = A^{-1}b is a solution to Ax = b. If y is another solution to the linear system, it follows that Ay = b, but multiplying both sides by A^{-1} gives y = A^{-1}b = x.

    Theorem (Properties of matrix inverse).
    (a) If A is invertible, then A^{-1} is itself invertible and (A^{-1})^{-1} = A.
    (b) If A is invertible and c ≠ 0 is a scalar, then cA is invertible and (cA)^{-1} = (1/c)A^{-1}.
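The uniqueness theorem is exactly what np.linalg.solve relies on. A minimal sketch (the 2 × 2 system is my own example):

```python
# For invertible A, x = A^{-1} b is the unique solution of Ax = b.
# np.linalg.solve computes it without forming A^{-1} explicitly.
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])        # det = 1, so A is invertible
b = np.array([4.0, 11.0])

x = np.linalg.solve(A, b)         # preferred over inv(A) @ b numerically
print(x, np.allclose(A @ x, b))
print(np.allclose(x, np.linalg.inv(A) @ b))   # agrees with A^{-1} b
```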