Numerical Linear Algebra


NUMERICAL LINEAR ALGEBRA
Robert D. Skeel
© 2006 Robert D. Skeel
February 10, 2006

Chapter 1  DIRECT METHODS—PART I

1.1 Linear Systems of Equations
1.2 Data Error
1.3 Gaussian Elimination
1.4 LU Factorization
1.5 Symmetric Positive Definite Matrices

1.1 Linear Systems of Equations

1.1.1 Network analysis

A resistance network

[Figure: a two-node resistance network with node potentials $E_1$, $E_2$; branches of 4, 5, and 3 ohms carrying currents $I_1$, $I_2$, $I_3$; and a 2 V source.]

has unknown currents in its branches and unknown potentials at its nodes. A computer representation might look like

branch  from  to  R  V
  1       2    1  4   2
  2       1    2  5  −4
  3       1    2  3   0

Ohm's Law $E_{\mathrm{to}} = E_{\mathrm{from}} + V - RI$ gives equations

branch 1: $E_1 = E_2 + 2 - 4I_1$
branch 2: $E_2 = E_1 - 4 - 5I_2$
branch 3: $E_2 = E_1 - 3I_3$

Kirchhoff's Current Law (a conservation law) gives equations

node 1: $I_1 - I_2 - I_3 = 0$
node 2: $-I_1 + I_2 + I_3 = 0$

These equations are redundant; hence set $E_2 = 0$. Usually these equations are reduced to a smaller system by one of two techniques:

1. loop analysis—complicated to program;
2. nodal analysis—eliminate currents and solve for nodal potentials, i.e., use the 3 branch equations to substitute for the currents in the 1st node equation:

$$\Big(\tfrac{1}{2} - \tfrac{1}{4}E_1\Big) - \Big(-\tfrac{4}{5} + \tfrac{1}{5}E_1\Big) - \Big(0 + \tfrac{1}{3}E_1\Big) = 0.$$

Here are some other applications which give rise to linear systems of equations:

• AC networks: capacitance, inductance, complex numbers;
• hydraulic networks: pressure, rate of flow (flux);
• framed structures: displacements, forces, stiffness, Newton's 1st Law, Hooke's Law;
• surveying networks.

The last 3 are nonlinear.

1.1.2 Matrices

To quote Jennings, "they provide a concise and simple method of describing lengthy and otherwise complicated computations." A matrix $A \in \mathbb{R}^{m\times n}$ is expressed

$$A = [a_{ij}] = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}.$$

For example, the directed graph

[Figure: a directed graph with 5 nodes and 8 directed branches, numbered 1–8.]

can be represented by the node–branch incidence matrix (rows indexed by nodes, columns by branches)

$$\begin{pmatrix} -1 & -1 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & -1 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 1 & -1 & 0 & -1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & -1 \end{pmatrix}.$$

Special types of matrices are a column vector

$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{pmatrix}, \quad x \in \mathbb{R}^m,$$

a diagonal matrix

$$D = \begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{pmatrix} = \mathrm{diag}(d_1, d_2, \ldots, d_n),$$

and an identity matrix $I = \mathrm{diag}(1, 1, \ldots, 1)$, whose $k$th column $e_k = (0, \ldots, 0, 1, 0, \ldots, 0)^T$ has its 1 in position $k$.

Three operations are defined for matrices:

(i) $\alpha A$;
(ii) $A + B$;
(iii) $AB =: C$, where $A$ is $m \times n$, $B$ is $n \times p$, $C$ is $m \times p$, and

$$c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}.$$

Note that

$$\alpha \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} [\alpha].$$

Generally $AB \neq BA$.

We define the transpose by $A^T =: C$ where $c_{ij} = a_{ji}$. This permits a compact definition of a column vector by means of $x = (x_1, x_2, \ldots, x_n)^T$. Note that $(AB)^T = B^T A^T$.

A lower or left triangular matrix, typically denoted by the symbol $L$, has the form

$$\begin{pmatrix} \times & & & \\ \times & \times & & \\ \times & \times & \times & \\ \times & \times & \times & \times \end{pmatrix}.$$

An upper or right triangular matrix, typically denoted by $U$ or $R$, has the form

$$\begin{pmatrix} \times & \times & \times & \times \\ & \times & \times & \times \\ & & \times & \times \\ & & & \times \end{pmatrix}.$$

A unit triangular matrix has ones on the diagonal.

The determinant $\det(A)$, whose definition is complicated, has the properties

$$\det(\alpha A) = \alpha^n \det(A), \quad \det(A^T) = \det(A), \quad \det(AB) = \det(A)\det(B), \quad \det(A) \neq 0 \iff A^{-1} \text{ exists}.$$

Also $(AB)^{-1} = B^{-1}A^{-1}$.

1.1.3 Partitioned Matrices

This notion makes it possible to express many ideas without introducing the clutter of subscripts. An example of partitioning a matrix into blocks is

$$M = \begin{pmatrix} 0 & A^T \\ A & R \end{pmatrix} = \begin{pmatrix} 0 & 0 & -1 & -1 \\ 0 & 0 & 1 & 1 \\ -1 & 1 & 5 & 0 \\ -1 & 1 & 0 & 7 \end{pmatrix} \quad \text{where} \quad A = \begin{pmatrix} -1 & 1 \\ -1 & 1 \end{pmatrix}, \quad R = \begin{pmatrix} 5 & 0 \\ 0 & 7 \end{pmatrix}.$$

In matrix operations, blocks can be treated as scalars except that multiplication is noncommutative. The product

$$\begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1q} \\ A_{21} & A_{22} & \cdots & A_{2q} \\ \vdots & & & \vdots \\ A_{p1} & A_{p2} & \cdots & A_{pq} \end{pmatrix} \begin{pmatrix} B_{11} & B_{12} & \cdots & B_{1r} \\ B_{21} & B_{22} & \cdots & B_{2r} \\ \vdots & & & \vdots \\ B_{q1} & B_{q2} & \cdots & B_{qr} \end{pmatrix},$$

where the block rows of $A$ have heights $l_1, l_2, \ldots, l_p$, the block columns of $A$ and the block rows of $B$ share the sizes $m_1, m_2, \ldots, m_q$, and the block columns of $B$ have widths $n_1, n_2, \ldots, n_r$, can be conveniently expressed because the partitioning is conformable. The (1,1)-block of the product is

$$A_{11}B_{11} + A_{12}B_{21} + \cdots + A_{1q}B_{q1}.$$
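The conformability requirement and the block formula above are easy to check numerically. Here is a minimal sketch (my illustration, not part of the notes), assuming NumPy is available; the block sizes are arbitrary choices:

```python
import numpy as np

# Partition a 5x7 matrix A into 2x2 blocks with row heights (2, 3) and
# column widths (3, 4); partition a 7x4 matrix B conformably, i.e., with
# row heights (3, 4) matching A's column widths, and column widths (2, 2).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 7))
B = rng.standard_normal((7, 4))

A11, A12 = A[:2, :3], A[:2, 3:]
A21, A22 = A[2:, :3], A[2:, 3:]
B11, B12 = B[:3, :2], B[:3, 2:]
B21, B22 = B[3:, :2], B[3:, 2:]

C = A @ B
# The (1,1)-block of the product equals A11 B11 + A12 B21.
print(np.allclose(C[:2, :2], A11 @ B11 + A12 @ B21))   # True

# The same block rule reproduces the whole product.
C_blocks = np.block([[A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
                     [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22]])
print(np.allclose(C, C_blocks))                         # True
```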
Here are some examples:

$$Ax =: \begin{pmatrix} c_1 & c_2 & \cdots & c_n \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = x_1 c_1 + x_2 c_2 + \cdots + x_n c_n,$$

$$Ax =: \begin{pmatrix} r_1^T \\ r_2^T \\ \vdots \\ r_m^T \end{pmatrix} x = \begin{pmatrix} r_1^T x \\ r_2^T x \\ \vdots \\ r_m^T x \end{pmatrix}$$

(the two preceding examples are computational alternatives),

$$AB =: A \begin{pmatrix} b_1 & b_2 & \cdots & b_p \end{pmatrix} = \begin{pmatrix} Ab_1 & Ab_2 & \cdots & Ab_p \end{pmatrix},$$

$$AB =: \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \begin{pmatrix} r_1^T \\ r_2^T \\ r_3^T \end{pmatrix} = \begin{pmatrix} a_{11} r_1^T + a_{12} r_2^T + a_{13} r_3^T \\ a_{21} r_1^T + a_{22} r_2^T + a_{23} r_3^T \\ a_{31} r_1^T + a_{32} r_2^T + a_{33} r_3^T \end{pmatrix}.$$

The last example states that the rows of $AB$ are linear combinations of the rows of $B$. Therefore, matrix premultiplication ⇔ row operations.

1.1.4 Linear Spaces

A multiset¹ of vectors $x_1, x_2, \ldots, x_k$ is linearly independent if

$$\alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_k x_k = 0 \;\Rightarrow\; \alpha_1 = \alpha_2 = \cdots = \alpha_k = 0;$$

otherwise it is linearly dependent, which implies that one of them is a linear combination of the others.

¹A multiset, or bag, of $k$ elements is a $k$-tuple in which the ordering of elements does not matter.

$\mathbb{R}^n$ is a vector space. A subspace $S$ is a subset which is also a vector space, i.e., $x \in S \Rightarrow \alpha x \in S$ and $x, y \in S \Rightarrow x + y \in S$. (What are the possible subspaces for $n = 3$?)

Recall that $\mathrm{span}\{x_1, x_2, \ldots, x_k\} := \cdots$. The dimension of $S$ := the maximum number of linearly independent vectors in $S$. A linearly independent multiset having that many elements is a basis; e.g., $\mathbb{R}^n$ has a basis $e_1, e_2, \ldots, e_n$. If $y_1, y_2, \ldots, y_k$ is a basis for $S$, then for any $x \in S$ there exist unique $\alpha_1, \alpha_2, \ldots, \alpha_k$ such that $x = \alpha_1 y_1 + \alpha_2 y_2 + \cdots + \alpha_k y_k$, or

$$x = \underbrace{[y_1, y_2, \ldots, y_k]}_{\text{basis}} \; \underbrace{\begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_k \end{pmatrix}}_{\text{coordinates}},$$

which we note is a conformable partitioning.

Consider a matrix $A \in \mathbb{R}^{m\times n}$ expressed as

$$A = [c_1, c_2, \ldots, c_n] \quad\text{and}\quad A = \begin{pmatrix} r_1^T \\ r_2^T \\ \vdots \\ r_m^T \end{pmatrix}.$$

The range

$$\mathcal{R}(A) = \mathrm{span}\{c_1, c_2, \ldots, c_n\} = \{Ax : x \in \mathbb{R}^n\}.$$

The null space

$$\mathcal{N}(A) = \{x \in \mathbb{R}^n : Ax = 0\}.$$

Also $\mathrm{rank}(A) = \dim[\mathcal{R}(A)]$. It can be shown that $\mathrm{rank}(A^T) = \mathrm{rank}(A)$.

The problem $Ax = b$ can be written

$$c_1 x_1 + c_2 x_2 + \cdots + c_n x_n = b.$$

It has a solution if $b \in \mathcal{R}(A)$. There is always a solution if $\mathcal{R}(A) = \mathbb{R}^m$, i.e., $\mathrm{rank}(A) = m$, which implies $n \ge m$. The solution is unique if $c_1, c_2, \ldots, c_n$ is a basis, which implies $n = m$ and $x = A^{-1}b$.

The inner product for $x, y \in \mathbb{R}^n$ is $x^T y$. The outer product for $x \in \mathbb{R}^m$, $y \in \mathbb{R}^n$ is $xy^T \in \mathbb{R}^{m\times n}$; e.g., $I = e_1 e_1^T + e_2 e_2^T + \cdots + e_n e_n^T$.
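As a quick illustration (mine, not the notes'), the column and row views of $Ax$ and the outer-product expansion of $I$ can be verified in NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
x = rng.standard_normal(4)

# View 1: Ax as a linear combination of the columns of A.
by_columns = sum(x[j] * A[:, j] for j in range(A.shape[1]))
# View 2: Ax as the stack of row inner products r_i^T x.
by_rows = np.array([A[i, :] @ x for i in range(A.shape[0])])
print(np.allclose(A @ x, by_columns), np.allclose(A @ x, by_rows))  # True True

# The outer-product identity I = e1 e1^T + e2 e2^T + ... + en en^T.
n = 4
E = np.eye(n)
I_sum = sum(np.outer(E[:, k], E[:, k]) for k in range(n))
print(np.allclose(I_sum, np.eye(n)))                                # True
```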
Review questions

1. Give a complete definition of the product $AB$ where $A$ is an $m \times n$ matrix with elements $a_{ij}$ and $B$ is an $n \times p$ matrix with elements $b_{ij}$.
2. If $AB$ is a square matrix, when is it true that $\det(AB) = \det(A)\det(B)$?
3. Give an expression for the product $AB$ where $B$ is partitioned into columns.
4. Define what it means for a multiset of vectors $x_1, x_2, \ldots, x_k$ to be linearly independent.
5. Define a subspace of a linear space.
6. Define $\mathrm{span}\{x_1, x_2, \ldots, x_k\}$.
7. Define the dimension of a subspace.
8. Define a basis for a subspace.
9. Define the range of a matrix.
10. Define the null space of a matrix.
11. Define the column rank of a matrix.
12. If $A$ is an $m \times n$ matrix, what can we say about its row rank and its column rank?

Exercises

1. In the resistance network of section 1.1.1 do the currents $I_1, I_2, I_3$ depend on the arbitrary choice for $E_2$?
2. Let $A$ denote the incidence matrix given in the example, let $i = [I_1, I_2, \ldots, I_8]^T$, $v = [V_1, V_2, \ldots, V_8]^T$, $R = \mathrm{diag}(R_1, R_2, \ldots, R_8)$, and $e = [E_1, E_2, E_3, E_4, E_5]^T$, where $I_j, V_j, R_j$ are the current, voltage source, and resistance along branch $j$ and $E_i$ is the potential at node $i$. Use these vectors and matrices to express Ohm's Law and Kirchhoff's Current Law as vector equations. Then eliminate $i$ to get a single vector equation for $e$.
3. In an electrical resistance network such as given in the example, we can represent a simple loop by a column vector $b$ of dimension 8 with elements $-1$, $0$, and $1$. What would be the meaning of the values $-1$, $0$, and $1$? Let $A^T$ be the node–branch incidence matrix. What can we say about $A^T b$? Explain.
4. Show that if $x, y \in \mathbb{R}^n$, then $(xy^T)^k = (x^T y)^{k-1} xy^T$.
5. Prove the Woodbury formula
$$(A + UV^T)^{-1} = A^{-1} - A^{-1}U(I + V^T A^{-1} U)^{-1} V^T A^{-1}$$
where $A \in \mathbb{R}^{n\times n}$, $U, V \in \mathbb{R}^{n\times k}$, and both $A$ and $I + V^T A^{-1} U$ are nonsingular.
6. Show that $(A^T)^{-1} = (A^{-1})^T$.
7. Show that $(AB)^T = B^T A^T$.
8. Use Exercise 1.10 to show that $(ABC)^T = C^T B^T A^T$. Avoid subscripts.
9. (Stewart, p. 66) Prove that the equation $Ax = b$ has a solution if and only if for any $y$, $y^T A = 0$ implies $y^T b = 0$.
10. (advanced) (Stewart, p. 67) Show that if $A \in \mathbb{R}^{m\times n}$ and $\mathrm{rank}(A) = r$, then $A = UV^T$ where $U$ and $V$ are of full rank $r$. Do not use the approach suggested by Stewart; rather, partition $A$ and $U$ by columns and $V$ by rows and columns.
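A numerical spot-check can build confidence before attempting the proofs in Exercises 4 and 5. The sketch below is my own (it assumes NumPy and randomly generated data, with the matrices almost surely nonsingular); it verifies both identities to rounding error, which is a sanity check, not a proof:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 5, 2

# Exercise 4: (x y^T)^k = (x^T y)^(k-1) x y^T, checked here for k = 3.
x, y = rng.standard_normal(n), rng.standard_normal(n)
lhs = np.linalg.matrix_power(np.outer(x, y), 3)
rhs = (x @ y) ** 2 * np.outer(x, y)
print(np.allclose(lhs, rhs))   # True

# Exercise 5: the Woodbury formula for (A + U V^T)^(-1).
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # safely nonsingular
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))
Ainv = np.linalg.inv(A)
woodbury = Ainv - Ainv @ U @ np.linalg.inv(np.eye(k) + V.T @ Ainv @ U) @ V.T @ Ainv
print(np.allclose(np.linalg.inv(A + U @ V.T), woodbury))   # True
```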