Lecture 7: Location and Perturbation of Eigenvalues (Chapter 6 + Appendix D); Nonnegative Matrices (Chapter 8)


KTH Royal Institute of Technology
Lecture 7: Chapter 6 + Appendix D, Location and perturbation of eigenvalues; Chapter 8, Nonnegative matrices
Magnus Jansson / Mats Bengtsson, May 11, 2016

Eigenvalue perturbation results: motivation

We know from a previous lecture that ρ(A) ≤ |||A||| for any matrix norm. That is, all eigenvalues lie in a circular disk whose radius is upper bounded by any matrix norm. Can we obtain more precise results? And what can be said about the eigenvalues and eigenvectors of A + εB when ε is small?

Outline:
- Chapter 6 + Appendix D: Location and perturbation of eigenvalues
- Some other results on perturbed eigenvalue problems
- Chapter 8: Nonnegative matrices

Geršgorin circles

Geršgorin's Thm: Let A = D + B, where D = diag(d_1, ..., d_n) and B = [b_ij] ∈ M_n has zeros on the diagonal. Define

    r_i'(B) = Σ_{j≠i} |b_ij|,
    C_i(D, B) = { z ∈ C : |z − d_i| ≤ r_i'(B) }.

Then all eigenvalues of A are located in

    λ_k(A) ∈ G(A) = ∪_{i=1}^n C_i(D, B)   for all k.

The C_i(D, B) are called Geršgorin circles.

Geršgorin circles, cont.

If G(A) contains a region of k circles that are disjoint from the rest, then there are exactly k eigenvalues in that region.

[Figure: Geršgorin circles and eigenvalues plotted in the complex plane, real(λ) vs. imag(λ).]

Geršgorin, improvements

Since A^T has the same eigenvalues as A, we can do the same but summing over columns instead of rows. We conclude that

    λ_i(A) ∈ G(A) ∩ G(A^T)   for all i.

Since S^{-1}AS has the same eigenvalues as A, the above can be "improved" to

    λ_i(A) ∈ G(S^{-1}AS) ∩ G((S^{-1}AS)^T)   for all i,

for any choice of S. For this to be useful, S should be "simple", e.g., diagonal (see e.g. Corollaries 6.1.6 and 6.1.8).

Invertibility and stability

If A ∈ M_n is strictly diagonally dominant, i.e.,

    |a_ii| > Σ_{j≠i} |a_ij|   for all i,

then
1. A is invertible.
2. If all main diagonal elements are real and positive, then all eigenvalues are in the right half plane.
3. If A is Hermitian with all diagonal elements positive, then all eigenvalues are real and positive.

Reducible matrices

A matrix A ∈ M_n is called reducible if
- n = 1 and A = 0, or
- n ≥ 2 and there is a permutation matrix P ∈ M_n and some integer 1 ≤ r ≤ n − 1 such that

    P^T A P = [ B  C ]
              [ 0  D ],

  where B is r × r, D is (n − r) × (n − r), C is r × (n − r), and 0 is an (n − r) × r zero block.

A matrix A ∈ M_n that is not reducible is called irreducible. A matrix is irreducible iff it describes a strongly connected directed graph ("A has the SC property").

Irreducibly diagonally dominant

A ∈ M_n is called irreducibly diagonally dominant if
i) A is irreducible (= A has the SC property),
ii) A is diagonally dominant,

    |a_ii| ≥ Σ_{j≠i} |a_ij|   for all i,

iii) for at least one row i,

    |a_ii| > Σ_{j≠i} |a_ij|.

Invertibility and stability, stronger result

If A ∈ M_n is irreducibly diagonally dominant, then
1. A is invertible.
2. If all main diagonal elements are real and positive, then all eigenvalues are in the right half plane.
3. If A is Hermitian with all diagonal elements positive, then all eigenvalues are real and positive.

Perturbation theorems

Thm: Let A, E ∈ M_n and let A be diagonalizable, A = SΛS^{-1}. Further, let λ̂ be an eigenvalue of A + E. Then there is some eigenvalue λ_i of A such that

    |λ̂ − λ_i| ≤ |||S||| |||S^{-1}||| |||E||| = κ(S) |||E|||

for some particular matrix norms (e.g., |||·|||_1, |||·|||_2, |||·|||_∞).

Cor: If A is a normal matrix, S is unitary, so |||S|||_2 = |||S^{-1}|||_2 = 1. This gives

    |λ̂ − λ_i| ≤ |||E|||_2,

indicating that normal matrices are perfectly conditioned for eigenvalue computations.

Perturbation cont'd

If both A and E are Hermitian, we can use Weyl's theorem (here we assume the eigenvalues are indexed in non-decreasing order):

    λ_1(E) ≤ λ_k(A + E) − λ_k(A) ≤ λ_n(E)   for all k.

We also have for this case

    ( Σ_{k=1}^n |λ_k(A + E) − λ_k(A)|² )^{1/2} ≤ ||E||_2,

where ||·||_2 is the Frobenius norm.

Perturbation of a simple eigenvalue

Let λ be a simple eigenvalue of A ∈ M_n and let y and x be the corresponding left and right eigenvectors. Then y*x ≠ 0.

Thm: Let A(t) ∈ M_n be differentiable at t = 0 and assume λ is a simple eigenvalue of A(0) with left and right eigenvectors y and x. If λ(t) is an eigenvalue of A(t) for small t such that λ(0) = λ, then

    λ'(0) = y*A'(0)x / (y*x).

Example: A(t) = A + tE gives λ'(0) = y*Ex / (y*x).

Perturbation of eigenvalues cont'd

Errors in eigenvalues may also be related to the residual r = Ax̂ − λ̂x̂. Assume for example that A is diagonalizable, A = SΛS^{-1}, and let x̂ and λ̂ be a given complex vector and scalar, respectively. Then there is some eigenvalue λ_i of A such that

    |λ̂ − λ_i| ≤ κ(S) ||r|| / ||x̂||

(for details and conditions see the book). We conclude that a small residual implies a good approximation of the eigenvalue.

Literature with perturbation results

- J. R. Magnus and H. Neudecker. Matrix Differential Calculus with Applications in Statistics and Econometrics. John Wiley & Sons Ltd., 1988, rev. 1999.
- H. Krim and P. Forster. Projections on unstructured subspaces. IEEE Trans. SP, 44(10):2634-2637, Oct. 1996.
- J. Moro, J. V. Burke, and M. L. Overton. On the Lidskii-Vishik-Lyusternik perturbation theory for eigenvalues of matrices with arbitrary Jordan structure. SIAM Journ. Matrix Anal. and Appl., 18(4):793-817, 1997.
- F. Rellich. Perturbation Theory of Eigenvalue Problems. Gordon & Breach, 1969.
- J. Wilkinson. The Algebraic Eigenvalue Problem. Clarendon Press, 1965.

Perturbation of eigenvectors with simple eigenvalues

Thm: Let A(t) ∈ M_n be differentiable at t = 0 and assume λ_0 is a simple eigenvalue of A(0) with left and right eigenvectors y_0 and x_0. If λ(t) is an eigenvalue of A(t), it has a right eigenvector x(t) for small t, normalized such that x_0*x(t) = 1, with derivative

    x'(0) = (λ_0 I − A(0))^† ( I − x_0 y_0*/(y_0* x_0) ) A'(0) x_0.

Here B^† denotes the Moore-Penrose pseudoinverse of a matrix B.

Warning: the derivative is non-unique in the complex-valued case! See, e.g., J. R. Magnus and H. Neudecker, Matrix Differential Calculus with Applications in Statistics and Econometrics.
Warning, warning, warning: there is no extension to multiple eigenvalues!

Perturbation of eigenvectors with simple eigenvalues: the real symmetric case

Assume that A ∈ M_n(R) is a real symmetric matrix with normalized eigenvectors x_i and eigenvalues λ_i. Further assume that λ_1 is a simple (distinct) eigenvalue. Let Â = A + εB, where ε is a small scalar and B is real symmetric, and let x̂_1 be an eigenvector of Â that approaches x_1 as ε → 0. Then a first-order approximation (in ε) is

    x̂_1 = x_1 + ε Σ_{k=2}^n ( x_k^T B x_1 / (λ_1 − λ_k) ) x_k.

Chapter 8: Element-wise nonnegative matrices

Def: A matrix A = [a_ij] ∈ M_{n,r} is nonnegative if a_ij ≥ 0 for all i, j, and we write this as A ≥ 0. (Note that this should not be confused with the matrix being nonnegative definite!) If a_ij > 0 for all i, j, we say that A is positive and write this as A > 0. (We write A > B to mean A − B > 0, etc.) We also define |A| = [|a_ij|].

Typical applications where nonnegative or positive matrices occur are problems in which the matrix elements correspond to
- probabilities (e.g., Markov chains),
- power levels or power gain factors (e.g., in power control for wireless systems),
- nonnegative weights/costs in graphs.

Nonnegative matrices: some properties

Let A, B ∈ M_n and x ∈ C^n. Then
- |Ax| ≤ |A| |x|;
- |AB| ≤ |A| |B|;
- if A ≥ 0, then A^m ≥ 0; if A > 0, then A^m > 0;
- if A ≥ 0, x > 0, and Ax = 0, then A = 0;
- if |A| ≤ |B|, then ‖A‖ ≤ ‖B‖ for any absolute norm ‖·‖, that is, a norm for which ‖A‖ = ‖|A|‖.

Nonnegative matrices: spectral radius

Lemma: If A ∈ M_n, A ≥ 0, and the row sums of A are constant, then ρ(A) = |||A|||_∞. If the column sums are constant, then ρ(A) = |||A|||_1.

Thm: Let A ∈ M_n and A ≥ 0. Then

    min_i Σ_{j=1}^n a_ij ≤ ρ(A) ≤ max_i Σ_{j=1}^n a_ij,
    min_j Σ_{i=1}^n a_ij ≤ ρ(A) ≤ max_j Σ_{i=1}^n a_ij.

The following theorem can be used to give upper and lower bounds on the spectral radius of arbitrary matrices.

Thm: Let A, B ∈ M_n. If |A| ≤ B, then ρ(A) ≤ ρ(|A|) ≤ ρ(B).

Thm: Let A ∈ M_n and A ≥ 0. If Ax = λx and x > 0, then λ = ρ(A).

Positive matrices

For positive matrices we can say a little more.

Perron's theorem: If A ∈ M_n and A > 0, then
1. ρ(A) > 0;
2. ρ(A) is an eigenvalue of A;
3. there is an x ∈ R^n with x > 0 such that Ax = ρ(A)x;
4. ρ(A) is an algebraically (and geometrically) simple eigenvalue of A;
5. |λ| < ρ(A) for every eigenvalue λ ≠ ρ(A) of A;
6. [A/ρ(A)]^m → L as m → ∞, where L = xy^T, Ax = ρ(A)x, y^T A = ρ(A)y^T, x > 0, y > 0, and x^T y = 1.

Nonnegative matrices

Is there a generalization of Perron's theorem to general nonnegative matrices?

Thm: If A ∈ M_n and A ≥ 0, then
1. ρ(A) is an eigenvalue of A;
2. there is a non-zero x ∈ R^n with x ≥ 0 such that Ax = ρ(A)x.

For stronger results, we need a stronger assumption on A.
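Geršgorin's theorem is easy to check numerically. The following sketch (not part of the lecture; the 3x3 matrix is a made-up example, and NumPy is assumed) builds the disc centers and radii and verifies that every eigenvalue lands in the union of the discs:

```python
import numpy as np

# Made-up example: diagonal D = diag(1, 2, 3) plus a small off-diagonal part B.
A = np.array([[1.0, 0.1, 0.2],
              [0.0, 2.0, 0.3],
              [0.1, 0.1, 3.0]])

d = np.diag(A)                                # disc centers d_i
radii = np.abs(A).sum(axis=1) - np.abs(d)     # radii r_i'(B) = sum_{j != i} |a_ij|

# Every eigenvalue of A must lie in the union of the discs |z - d_i| <= r_i'(B).
eigs = np.linalg.eigvals(A)
in_union = all(any(abs(lam - d[i]) <= radii[i] + 1e-12 for i in range(3))
               for lam in eigs)
```

Here the discs (center 1, radius 0.3; center 2, radius 0.3; center 3, radius 0.2) are disjoint, so the refinement above also tells us there is exactly one eigenvalue in each.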
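The strict diagonal dominance theorem can likewise be illustrated on a small made-up matrix with real, positive diagonal (again a sketch, assuming NumPy):

```python
import numpy as np

# Made-up strictly diagonally dominant matrix with real, positive diagonal entries.
A = np.array([[4.0, 1.0, -2.0],
              [0.5, 3.0,  1.0],
              [1.0, 1.0,  5.0]])

diag = np.abs(np.diag(A))
off = np.abs(A).sum(axis=1) - diag
strictly_dominant = bool(np.all(diag > off))            # |a_ii| > sum_{j != i} |a_ij|

invertible = abs(np.linalg.det(A)) > 0                  # guaranteed by the theorem
in_right_half_plane = bool(np.all(np.linalg.eigvals(A).real > 0))
```

Note the connection to Geršgorin: strict dominance keeps every disc away from the origin, which is exactly why 0 cannot be an eigenvalue.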
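The perturbation bound |λ̂ − λ_i| ≤ κ(S) |||E||| can be verified numerically in the spectral norm. A sketch with made-up random matrices (a fixed seed is assumed; A is almost surely diagonalizable since its diagonal entries are well separated):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([1.0, 2.0, 4.0]) + 0.1 * rng.standard_normal((3, 3))  # diagonalizable
E = 1e-3 * rng.standard_normal((3, 3))                            # small perturbation

lam, S = np.linalg.eig(A)                              # A = S Lambda S^{-1}
bound = np.linalg.cond(S, 2) * np.linalg.norm(E, 2)    # kappa(S) * ||E||_2

# Every eigenvalue of A + E must be within the bound of some eigenvalue of A.
ok = all(np.min(np.abs(mu - lam)) <= bound + 1e-12
         for mu in np.linalg.eigvals(A + E))
```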
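The derivative formula λ'(0) = y*A'(0)x / (y*x) for a simple eigenvalue can be checked against a finite difference. In this sketch (values made up; everything real, so y* = y^T) we construct A with a known eigendecomposition so that the left and right eigenvectors are available exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
S = np.eye(3) + 0.3 * rng.standard_normal((3, 3))    # made-up invertible similarity
A = S @ np.diag([1.0, 2.0, 5.0]) @ np.linalg.inv(S)  # lambda = 5 is simple
E = rng.standard_normal((3, 3))                      # A(t) = A + tE, so A'(0) = E

x = S[:, 2]                     # right eigenvector for lambda = 5
y = np.linalg.inv(S)[2, :]      # left eigenvector: y^T = row 2 of S^{-1}, y^T x = 1

deriv = (y @ E @ x) / (y @ x)   # y^T A'(0) x / (y^T x)

# Finite-difference check against the eigenvalue of A + tE nearest to 5.
t = 1e-6
mu = np.linalg.eigvals(A + t * E)
fd = (mu[np.argmin(np.abs(mu - 5.0))] - 5.0) / t
close = abs(deriv - fd) < 1e-2  # agreement up to the O(t) finite-difference error
```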
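The first-order eigenvector formula for the real symmetric case can be tested the same way; the error against an exactly computed eigenvector of Â = A + εB should be O(ε²). A sketch with a made-up symmetric A and perturbation direction B:

```python
import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = Q @ np.diag([1.0, 3.0, 6.0, 10.0]) @ Q.T   # symmetric, simple eigenvalues
B = rng.standard_normal((4, 4))
B = (B + B.T) / 2                              # symmetric perturbation direction
eps = 1e-5

lam, X = np.linalg.eigh(A)                     # ascending order; x_1 is column 0
x1 = X[:, 0]
# First-order term: sum_{k>=2} (x_k^T B x_1)/(lambda_1 - lambda_k) x_k
corr = sum((X[:, k] @ B @ x1) / (lam[0] - lam[k]) * X[:, k] for k in range(1, 4))
x1_first_order = x1 + eps * corr

x1_hat = np.linalg.eigh(A + eps * B)[1][:, 0]
if x1_hat @ x1 < 0:                            # eigenvectors are defined up to sign
    x1_hat = -x1_hat
err = np.linalg.norm(x1_hat - x1_first_order)  # should be O(eps^2)
```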
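The row-sum and column-sum bounds on ρ(A) for a nonnegative matrix are immediate to check (matrix values made up; NumPy assumed):

```python
import numpy as np

# Made-up nonnegative matrix.
A = np.array([[0.2, 0.5, 0.1],
              [0.3, 0.1, 0.4],
              [0.2, 0.2, 0.2]])

rho = max(abs(np.linalg.eigvals(A)))   # spectral radius
rows = A.sum(axis=1)                   # row sums: 0.8, 0.8, 0.6
cols = A.sum(axis=0)                   # column sums: 0.7, 0.8, 0.7

row_bounds_hold = rows.min() <= rho <= rows.max()
col_bounds_hold = cols.min() <= rho <= cols.max()
```

Combining both, ρ(A) is pinned between 0.7 and 0.8 here without computing any eigenvalue at all.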
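Finally, item 6 of Perron's theorem, [A/ρ(A)]^m → L = xy^T, can be observed directly for a small positive matrix (a sketch; the 2x2 example is made up, and the left eigenvector is obtained from A^T):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])                 # a positive matrix

lam, V = np.linalg.eig(A)
i = int(np.argmax(lam.real))
rho = lam.real[i]                          # Perron root

x = V[:, i].real                           # right Perron vector: A x = rho x
mu, W = np.linalg.eig(A.T)                 # left eigenvectors of A via A^T
w = W[:, int(np.argmax(mu.real))].real     # y^T A = rho y^T
y = w / (x @ w)                            # normalize so that x^T y = 1

L = np.outer(x, y)                         # rank-one limit L = x y^T
M = np.linalg.matrix_power(A / rho, 200)   # [A / rho(A)]^m for large m
converged = np.allclose(M, L, atol=1e-8)
```

The convergence is fast here because the second eigenvalue is much smaller than ρ(A) in modulus, so |λ_2/ρ(A)|^m decays rapidly.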