CS 515: Homework 1

Yingwei Wang*
Department of Mathematics, Purdue University, West Lafayette, IN, USA

*E-mail address: [email protected]; Tel: 765 237 7149

1 Matrix product

1.1 (a)

$$\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} (3, 2, 1) = \begin{pmatrix} 3 & 2 & 1 \\ 6 & 4 & 2 \\ 9 & 6 & 3 \end{pmatrix}.$$

1.2 (b)

$$(3, 2, 1) \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} = 10.$$

1.3 (c)

$$\begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} = \begin{pmatrix} 7 & 8 & 9 \\ 4 & 5 & 6 \\ 1 & 2 & 3 \end{pmatrix}.$$

2 Symmetric matrices

Give an example to show that the product of two symmetric matrices need not be symmetric.

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix} \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 2 & 0 \\ 3 & 0 & 3 \end{pmatrix}.$$

3 Elementary reflector

Let $H = I - 2xx^T$, where $x^T x = 1$.

3.1 H is symmetric

$$H^T = I^T - 2(xx^T)^T = I - 2xx^T = H.$$

So $H$ is symmetric.

3.2 H is involutory

$$H^2 = (I - 2xx^T)(I - 2xx^T) = I^2 - 4xx^T + 4xx^T xx^T = I - 4xx^T + 4x(x^T x)x^T = I - 4xx^T + 4xx^T = I.$$

So $H$ is involutory.

3.3 H is orthogonal

Since $H$ is symmetric and involutory, we have $H^T H = H^2 = I$ and $HH^T = H^2 = I$. So $H$ is orthogonal.

Remark 3.1. $H$ is often called an "elementary reflector" or a "Householder transformation".

4 Property about zero vectors

Proposition 4.1. Let $A \in \mathbb{R}^{m \times n}$. Show that $x^T A^T A x = 0$ if and only if $Ax = 0$.

Proof. If $Ax = 0$, then it is obvious that $x^T A^T A x = 0$.

Conversely, let $y = Ax = (y_1, \cdots, y_j, \cdots, y_m)^T$ be a column vector. Then
$$x^T A^T A x = 0 \;\Rightarrow\; y^T y = 0 \;\Rightarrow\; \sum_{j=1}^m y_j^2 = 0 \;\Rightarrow\; y_j = 0 \ \forall j \;\Rightarrow\; y = 0 \;\Rightarrow\; Ax = 0.$$

5 Block matrices

Assuming the partitions conform, compute

5.1 (a)

$$(X, Y)^T A\, (X, Y) = \begin{pmatrix} X^T A \\ Y^T A \end{pmatrix} (X, Y) = \begin{pmatrix} X^T A X & X^T A Y \\ Y^T A X & Y^T A Y \end{pmatrix}.$$

5.2 (b)

$$(\alpha, x^T) \begin{pmatrix} \beta & b^T \\ b & B \end{pmatrix} \begin{pmatrix} \alpha \\ x \end{pmatrix} = (\alpha\beta + x^T b, \ \alpha b^T + x^T B) \begin{pmatrix} \alpha \\ x \end{pmatrix} = \alpha^2 \beta + \alpha x^T b + \alpha b^T x + x^T B x.$$

5.3 (c)

$$\begin{pmatrix} \alpha & a^T \\ a & A \end{pmatrix} \begin{pmatrix} \beta & b^T \\ b & B \end{pmatrix} = \begin{pmatrix} \alpha\beta + a^T b & \alpha b^T + a^T B \\ \beta a + Ab & ab^T + AB \end{pmatrix}.$$

6 Matrix equation

Proposition 6.1. Let $x \in \mathbb{R}^n$ be partitioned in the form $x = (\xi_1, y^T)^T$ where $y = (\xi_2, \xi_3, \cdots, \xi_n)^T$. Show that if $\xi_1 \neq 0$, there is a unique vector $b \in \mathbb{R}^{n-1}$ such that
$$\begin{pmatrix} 1 & 0 \\ b & I_{n-1} \end{pmatrix} \begin{pmatrix} \xi_1 \\ y \end{pmatrix} = \begin{pmatrix} \xi_1 \\ 0 \end{pmatrix}. \qquad (6.1)$$

Proof. Choose $b = -y/\xi_1$. Then
$$\begin{pmatrix} 1 & 0 \\ b & I_{n-1} \end{pmatrix} \begin{pmatrix} \xi_1 \\ y \end{pmatrix} = \begin{pmatrix} \xi_1 \\ \xi_1 b + y \end{pmatrix} = \begin{pmatrix} \xi_1 \\ 0 \end{pmatrix}.$$

If two vectors $b_1, b_2$ satisfy (6.1), then $\xi_1 b_1 + y = 0$ and $\xi_1 b_2 + y = 0$, so $\xi_1(b_1 - b_2) = 0$. Since $\xi_1 \neq 0$, we get $b_1 = b_2$.

7 Nonsingular matrix

Proposition 7.1. Let $A = \mathrm{diag}(\lambda_1, \lambda_2, \cdots, \lambda_n)$. Show that $A$ is nonsingular if and only if $\lambda_i \neq 0$ for $1 \le i \le n$.

Proof. Since $\det(A) = \prod_{i=1}^n \lambda_i$, we know that if $A$ is nonsingular then $\det(A) \neq 0$, which implies $\lambda_i \neq 0$ for $1 \le i \le n$. Conversely, if $\lambda_i \neq 0$ for $1 \le i \le n$, then $\det(A) \neq 0$, which means $A$ is nonsingular.

Besides, the inverse of $A$ is $A^{-1} = \mathrm{diag}(\lambda_1^{-1}, \lambda_2^{-1}, \cdots, \lambda_n^{-1})$.

8 Inverse

Proposition 8.1. Let $u, v \in \mathbb{R}^n$ and $\sigma \neq 0$. Suppose that $v^T u - \sigma^{-1} \neq 0$. Show that $(I - \sigma u v^T)$ is nonsingular and has an inverse given by $(I - \tau u v^T)$, where $\sigma^{-1} + \tau^{-1} = v^T u$.

Proof. Let $A_n = I_n - \sigma u v^T$ and $D_n = \det(A_n)$. Let $u = (u_1, u_2, \cdots, u_n)^T$ and $v = (v_1, v_2, \cdots, v_n)^T$. Then
$$D_n = \det \begin{pmatrix} 1 - \sigma u_1 v_1 & -\sigma u_1 v_2 & \cdots & -\sigma u_1 v_n \\ -\sigma u_2 v_1 & 1 - \sigma u_2 v_2 & \cdots & -\sigma u_2 v_n \\ \vdots & & & \vdots \\ -\sigma u_n v_1 & -\sigma u_n v_2 & \cdots & 1 - \sigma u_n v_n \end{pmatrix}. \qquad (8.1)$$

Since the last column of $A_n$ is $e_n - \sigma v_n (u_1, \cdots, u_n)^T$, linearity of the determinant in the last column gives
$$D_n = \det \begin{pmatrix} 1 - \sigma u_1 v_1 & -\sigma u_1 v_2 & \cdots & 0 \\ -\sigma u_2 v_1 & 1 - \sigma u_2 v_2 & \cdots & 0 \\ \vdots & & & \vdots \\ -\sigma u_n v_1 & -\sigma u_n v_2 & \cdots & 1 \end{pmatrix} - \det \begin{pmatrix} 1 - \sigma u_1 v_1 & -\sigma u_1 v_2 & \cdots & \sigma u_1 v_n \\ -\sigma u_2 v_1 & 1 - \sigma u_2 v_2 & \cdots & \sigma u_2 v_n \\ \vdots & & & \vdots \\ -\sigma u_n v_1 & -\sigma u_n v_2 & \cdots & \sigma u_n v_n \end{pmatrix} \qquad (8.2)$$

Expanding the first determinant along its last column, and reducing the second one by row operations with its last row (see Remark 8.1),
$$D_n = D_{n-1} - \det \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & & \vdots \\ -\sigma u_n v_1 & -\sigma u_n v_2 & \cdots & \sigma u_n v_n \end{pmatrix} = D_{n-1} - \sigma u_n v_n, \qquad (8.3)$$

which means
$$D_n = D_{n-1} - \sigma u_n v_n. \qquad (8.4)$$

Besides, we know that $D_1 = 1 - \sigma u_1 v_1$. Now we can get
$$D_n = 1 - \sigma \sum_{j=1}^n u_j v_j = 1 - \sigma v^T u.$$

Since $v^T u - \sigma^{-1} \neq 0$, we have $D_n = 1 - \sigma v^T u = -\sigma (v^T u - \sigma^{-1}) \neq 0$, so $(I - \sigma u v^T)$ is nonsingular.

Now we want to find the inverse of $(I - \sigma u v^T)$. Choose $\tau$ such that
$$\frac{1}{\sigma} + \frac{1}{\tau} = v^T u \;\Rightarrow\; \frac{\sigma + \tau}{\sigma\tau} = v^T u \;\Rightarrow\; \sigma + \tau = \sigma\tau\, v^T u.$$

Consider the product $(I - \tau u v^T)(I - \sigma u v^T)$:
$$(I - \tau u v^T)(I - \sigma u v^T) = I - (\sigma + \tau) u v^T + \sigma\tau (u v^T)^2 = I - \sigma\tau\, v^T u\; u v^T + \sigma\tau (u v^T)^2.$$

Let $X = v^T u\; u v^T = (x_{pq})_{n \times n}$ and $Y = (u v^T)^2 = (y_{pq})_{n \times n}$. Then
$$x_{pq} = \left( \sum_{k=1}^n u_k v_k \right) u_p v_q, \qquad y_{pq} = \sum_{k=1}^n (u_p v_k)(u_k v_q) = \left( \sum_{k=1}^n u_k v_k \right) u_p v_q.$$

It follows that $X = Y$ and further $(I - \tau u v^T)(I - \sigma u v^T) = I$. Similarly, we can prove that $(I - \sigma u v^T)(I - \tau u v^T) = I$. Then we are done.

Remark 8.1. From (8.2) to (8.3), we assume that $u_n$ and $v_n$ are not 0. But if $u_n = 0$ or $v_n = 0$, then (8.4) is still true, so in any case (8.4) holds. Besides, from (8.2) to (8.3), if $u_j = 0$ then the $j$th row is already $e_j^T$; if $u_j \neq 0$, we can use the last row to make the $j$th row become $e_j^T = (0, \cdots, 0, 1, 0, \cdots, 0)$, where the 1 is in the $j$th place. Finally, from (8.1) to (8.2), it is just a common trick for computing $\det(A_n)$; you can find it in any textbook on linear algebra.
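As a quick numerical sanity check of Proposition 8.1, the inverse formula and the determinant formula can be verified together. The following is a minimal sketch assuming NumPy is available; the test vectors and the value of $\sigma$ are arbitrary illustrative choices, not part of the original assignment.

```python
import numpy as np

# Illustrative test data: any u, v and sigma != 0 with v^T u != 1/sigma will do.
rng = np.random.default_rng(0)
n = 5
u = rng.standard_normal(n)
v = rng.standard_normal(n)
sigma = 2.0

tau = 1.0 / (v @ u - 1.0 / sigma)          # from 1/sigma + 1/tau = v^T u
A = np.eye(n) - sigma * np.outer(u, v)     # I - sigma*u*v^T
A_inv = np.eye(n) - tau * np.outer(u, v)   # claimed inverse I - tau*u*v^T

print(np.allclose(A @ A_inv, np.eye(n)))                   # expect True
print(np.allclose(np.linalg.det(A), 1 - sigma * (v @ u)))  # checks D_n = 1 - sigma*v^T u
```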
9 Matrices

Proposition 9.1. Let $B \in \mathbb{R}^{n \times n}$, $S \in \mathbb{R}^{k \times k}$ and $U, V \in \mathbb{R}^{n \times k}$. Assuming that $B$, $S$ and $[V^T B^{-1} U - S^{-1}]$ are nonsingular, show that
$$(B - U S V^T)^{-1} = B^{-1} - B^{-1} U T V^T B^{-1},$$
where $S^{-1} + T^{-1} = V^T B^{-1} U$.

Proof. On one hand,
$$(B - U S V^T)(B^{-1} - B^{-1} U T V^T B^{-1}) = I - U T V^T B^{-1} - U S V^T B^{-1} + U S V^T B^{-1} U T V^T B^{-1}$$
$$= I - U(T + S) V^T B^{-1} + U S (S^{-1} + T^{-1}) T V^T B^{-1} = I.$$

On the other hand,
$$(B^{-1} - B^{-1} U T V^T B^{-1})(B - U S V^T) = I - B^{-1} U S V^T - B^{-1} U T V^T + B^{-1} U T V^T B^{-1} U S V^T$$
$$= I - B^{-1} U(S + T) V^T + B^{-1} U T (S^{-1} + T^{-1}) S V^T = I.$$

Then we are done.

10 Hermitian matrix

Question: $A \in \mathbb{C}^{n \times n}$ is Hermitian if $A^H = A$. If $A = B + iC$ with $B, C$ real, then it is easy to show that $B^T = B$ and $C^T = -C$. Suppose that we represent $A$ in an array A.herm with the property that A.herm(i, j) houses $b_{ij}$ if $i \ge j$ and $c_{ij}$ if $j > i$. Using this data structure, write a matrix-vector multiply function that computes $\Re(z)$ and $\Im(z)$ from $\Re(x)$ and $\Im(x)$, so that $z = Ax$.

Solution: Let $x = (x_j)$ be a column vector, and write each element as $x_j = p_j + i q_j$, where $p_j = \Re(x_j)$ and $q_j = \Im(x_j)$. Let $A = B + iC$ be Hermitian, with A.herm $= (a_{jk})$, $B = (b_{jk})$, $C = (c_{jk})$. We know that
$$a_{jk} = b_{jk} \ \text{if} \ j \ge k, \qquad a_{jk} = c_{jk} \ \text{if} \ j < k.$$
Besides, $b_{jk} = b_{kj}$ and $c_{jk} = -c_{kj}$ (in particular $c_{jj} = 0$).

Let $z = (z_j)$ be a column vector. Then from $z = Ax$ we know that
$$z_j = \sum_{k=1}^n (b_{jk} + i c_{jk})(p_k + i q_k) = \sum_{k=1}^n (b_{jk} p_k - c_{jk} q_k) + i (c_{jk} p_k + b_{jk} q_k).$$

Reading $b_{jk}$ and $c_{jk}$ out of A.herm (using $b_{jk} = a_{kj}$ for $k > j$ and $c_{jk} = -a_{kj}$ for $k < j$), we obtain
$$\Re(z_j) = \sum_{k=1}^{j} a_{jk} p_k + \sum_{k=j+1}^{n} a_{kj} p_k + \sum_{k=1}^{j-1} a_{kj} q_k - \sum_{k=j+1}^{n} a_{jk} q_k, \qquad (10.1)$$
$$\Im(z_j) = -\sum_{k=1}^{j-1} a_{kj} p_k + \sum_{k=j+1}^{n} a_{jk} p_k + \sum_{k=1}^{j} a_{jk} q_k + \sum_{k=j+1}^{n} a_{kj} q_k. \qquad (10.2)$$

From formulas (10.1) and (10.2) we can write the required routine in any language.
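For instance, here is a minimal sketch in Python/NumPy (0-based indexing; the function name herm_matvec and the random test harness are illustrative choices, not prescribed by the problem). It recovers $b_{jk}$ and $c_{jk}$ from the packed array entry by entry, exactly as in the derivation above.

```python
import numpy as np

def herm_matvec(Aherm, p, q):
    """Compute z = A x for Hermitian A = B + iC and x = p + iq, using only the
    packed array Aherm (B on/below the diagonal, C strictly above it).
    Returns (Re(z), Im(z))."""
    n = len(p)
    re = np.zeros(n)
    im = np.zeros(n)
    for j in range(n):
        for k in range(n):
            if j >= k:                              # b_jk stored at (j, k)
                b = Aherm[j, k]
                c = -Aherm[k, j] if k < j else 0.0  # c_jk = -c_kj, and c_jj = 0
            else:                                   # k > j
                b = Aherm[k, j]                     # b_jk = b_kj
                c = Aherm[j, k]                     # c_jk stored at (j, k)
            re[j] += b * p[k] - c * q[k]
            im[j] += c * p[k] + b * q[k]
    return re, im

# Quick check against an explicitly formed Hermitian matrix.
rng = np.random.default_rng(1)
n = 4
B = rng.standard_normal((n, n)); B = (B + B.T) / 2   # symmetric part
C = rng.standard_normal((n, n)); C = (C - C.T) / 2   # skew-symmetric part
A = B + 1j * C
Aherm = np.tril(B) + np.triu(C, 1)                   # packed storage A.herm
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
re, im = herm_matvec(Aherm, x.real, x.imag)
print(np.allclose(re + 1j * im, A @ x))              # expect True
```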
11 The algebra of triangular matrices

Proposition 11.1. Here are some properties of the products and inverses of triangular and unit triangular matrices.

1. The inverse of an upper (lower) triangular matrix is upper (lower) triangular.
2. The product of two upper (lower) triangular matrices is upper (lower) triangular.
3. The inverse of a unit upper (lower) triangular matrix is unit upper (lower) triangular.
4. The product of two unit upper (lower) triangular matrices is unit upper (lower) triangular.

Proof. (1). Let $T \in \mathbb{R}^{n \times n}$ be upper triangular, so that $T(i, j) = 0$ if $i > j$. We use induction on $n$. If $n = 1$, the claim is obviously true.

Suppose that the inverse of any $(n-1) \times (n-1)$ upper triangular matrix is upper triangular. Then, solving the equation $TS = I$ blockwise, we get
$$S(n, n) = 1/T(n, n),$$
$$S(n, 1{:}n-1) = 0,$$
$$S(1{:}n-1, n) = -T(1{:}n-1, 1{:}n-1)^{-1}\, T(1{:}n-1, n)\, S(n, n),$$
$$S(1{:}n-1, 1{:}n-1) = T(1{:}n-1, 1{:}n-1)^{-1},$$
which is upper triangular by the induction hypothesis. Hence $S = T^{-1}$ is upper triangular.
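The recursion in this proof maps directly onto code: invert the leading $(n-1) \times (n-1)$ block, then fill in the last column. Below is a minimal sketch assuming NumPy (0-based indexing; the function name upper_tri_inv and the test matrix are illustrative choices).

```python
import numpy as np

def upper_tri_inv(T):
    """Invert an upper triangular matrix by the recursion in the proof of (1)."""
    T = np.asarray(T, dtype=float)
    n = T.shape[0]
    S = np.zeros((n, n))
    if n == 1:
        S[0, 0] = 1.0 / T[0, 0]
        return S
    S11 = upper_tri_inv(T[:n-1, :n-1])                # S(1:n-1,1:n-1) = T(1:n-1,1:n-1)^{-1}
    S[:n-1, :n-1] = S11
    S[n-1, n-1] = 1.0 / T[n-1, n-1]                   # S(n,n) = 1/T(n,n)
    S[:n-1, n-1] = -S11 @ T[:n-1, n-1] * S[n-1, n-1]  # S(1:n-1,n)
    # S(n, 1:n-1) stays zero, so S is again upper triangular.
    return S

# Quick check on a small nonsingular upper triangular matrix.
T = np.triu(np.arange(1.0, 10.0).reshape(3, 3)) + np.eye(3)
print(np.allclose(upper_tri_inv(T) @ T, np.eye(3)))   # expect True
```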