EXPLICIT BOUNDS FOR THE PSEUDOSPECTRA OF VARIOUS CLASSES OF MATRICES AND OPERATORS

FEIXUE GONG¹, OLIVIA MEYERSON², JEREMY MEZA³, ABIGAIL WARD⁴, MIHAI STOICIU⁵ (ADVISOR)
SMALL 2014 - MATHEMATICAL PHYSICS GROUP

Abstract. We study the ε-pseudospectra σ_ε(A) of square matrices A ∈ C^{N×N}. We give a complete characterization of the ε-pseudospectrum of any 2×2 matrix and describe the asymptotic behavior (as ε → 0) of σ_ε(A) for any square matrix A. We also present explicit upper and lower bounds for the ε-pseudospectra of bidiagonal and tridiagonal matrices, as well as for finite rank operators.

Contents
1. Introduction
2. Pseudospectra
2.1. Motivation and Definitions
2.2. Normal Matrices
2.3. Non-normal Diagonalizable Matrices
2.4. Non-diagonalizable Matrices
3. Pseudospectra of 2×2 Matrices
3.1. Non-diagonalizable 2×2 Matrices
3.2. Diagonalizable 2×2 Matrices
4. Asymptotic Union of Disks Theorem
5. Pseudospectra of the N×N Jordan Block
6. Pseudospectra of Bidiagonal Matrices
6.1. Asymptotic Bounds for Periodic Bidiagonal Matrices
6.2. Examples of Bidiagonal Matrices
7. Tridiagonal Matrices
8. Finite Rank Operators
References

Date: October 30, 2014.
¹,²,⁵ Williams College, ³ Carnegie Mellon University, ⁴ University of Chicago.

1. Introduction

The pseudospectra of matrices and operators are important mathematical objects that have found applications in various areas of mathematics: linear algebra, functional analysis, numerical analysis, and differential equations. An overview of the main results on pseudospectra can be found in [14]. In this paper we investigate a few classes of matrices and operators and provide explicit bounds on their ε-pseudospectra. We study 2×2 matrices, bidiagonal and tridiagonal matrices, as well as finite rank operators.
We also describe the asymptotic behavior of the ε-pseudospectrum σ_ε(A) of any n×n matrix A. The paper is organized as follows: in Section 2 we give the three standard equivalent definitions of the pseudospectrum and present the "classical" results on the ε-pseudospectra of normal and diagonalizable matrices (the Bauer-Fike theorems). Section 3 contains a detailed analysis of the ε-pseudospectrum of 2×2 matrices, including both the non-diagonalizable case (Subsection 3.1) and the diagonalizable case (Subsection 3.2). The asymptotic behavior (as ε → 0) of the ε-pseudospectrum of any n×n matrix is described in Section 4, where we show (in Theorem 4.1) that, for any square matrix A, the ε-pseudospectrum converges, as ε → 0, to a union of disks.

The main result of Section 4 is applied to several classes of matrices: Jordan blocks (Section 5), bidiagonal matrices (Section 6), and tridiagonal matrices (Section 7). In Section 5 we derive explicit lower bounds for σ_ε(J), where J is any Jordan block, while Section 6 is dedicated to the analysis of arbitrary periodic bidiagonal matrices A. We derive explicit formulas (in terms of the coefficients of A) for the asymptotic radii of the ε-pseudospectrum of A, as ε → 0. We continue with a brief investigation of tridiagonal matrices in Section 7, while in the last section (Section 8) we consider finite rank operators and show that the ε-pseudospectrum of an operator of rank m is at most as big as Cε^{1/m}, as ε → 0.

2. Pseudospectra

2.1. Motivation and Definitions. The concept of the spectrum of a matrix A ∈ C^{N×N} provides a fundamental tool for understanding the behavior of A.
As is well known, a complex number z ∈ C is in the spectrum of A (denoted σ(A)) whenever zI − A (which we will denote by z − A) is not invertible, i.e., the characteristic polynomial of A has z as a root. As slightly perturbing the coefficients of A will change the roots of the characteristic polynomial, the property of "membership in the set of eigenvalues" is not well suited for many purposes, especially those in numerical analysis. We thus want to find a characterization of when a complex number is close to an eigenvalue, and we do this by considering the set of complex numbers z such that ‖(z − A)^{-1}‖ is large, where the norm here is the usual operator norm induced by the Euclidean norm, i.e.,

  ‖A‖ = sup_{‖v‖=1} ‖Av‖.

The motivation for considering this question comes from the following observation.

Proposition 2.1. Let A ∈ C^{N×N}, let λ be an eigenvalue of A, and let (z_n) be a sequence of complex numbers converging to λ with z_n ≠ λ for all n. Then ‖(z_n − A)^{-1}‖ → ∞ as n → ∞.

Proof. Let v be an eigenvector corresponding to λ with norm 1, so Av = λv. Define

  v_n = (z_n − A)v / ‖(z_n − A)v‖

(note that (z_n − A)v ≠ 0 by assumption). Now,

  ‖(z_n − A)^{-1}‖ ≥ ‖(z_n − A)^{-1} v_n‖
                 = ‖(z_n − A)^{-1}(z_n − A)v‖ / ‖(z_n − A)v‖
                 = ‖v‖ / ‖(z_n − A)v‖
                 = 1 / |λ − z_n|,

and since the last quantity gets arbitrarily large for large n, we obtain the result. □

We call the operator (z − A)^{-1} the resolvent of A. The observation that the norm of the resolvent is large when z is close to an eigenvalue of A leads us to the first definition of the ε-pseudospectrum of an operator.

Definition 2.1. Let A ∈ C^{N×N}, and let ε > 0. The ε-pseudospectrum of A is the set of z ∈ C such that

  ‖(z − A)^{-1}‖ > 1/ε.

Note that the boundary of the ε-pseudospectrum is exactly the 1/ε level curve of the function z ↦ ‖(z − A)^{-1}‖. Fig. 2.1 depicts the behavior of this function near the eigenvalues, as described in Prop. 2.1.
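Proposition 2.1 can be checked numerically. The following sketch (assuming NumPy is available; the matrix A is a hypothetical example, not one taken from the paper) evaluates the resolvent norm for points z_n approaching the eigenvalue λ = 1 of a 2×2 Jordan block:

```python
import numpy as np

# Hypothetical example: a 2x2 Jordan block with eigenvalue lambda = 1.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam = 1.0

# As z_n -> lambda, Proposition 2.1 says ||(z_n - A)^{-1}|| -> infinity.
# For this Jordan block the growth is on the order of 1/|z_n - lambda|^2.
for n in range(1, 5):
    zn = lam + 10.0 ** (-n)
    norm = np.linalg.norm(np.linalg.inv(zn * np.eye(2) - A), 2)  # operator 2-norm
    print(f"|z_n - lambda| = 1e-{n}:  ||(z_n - A)^-1|| = {norm:.2e}")
```

The quadratic growth rate (rather than 1/|z_n − λ|) is a first hint of the non-normal behavior studied in Sections 3 and 5.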
The resolvent norm has singularities in the complex plane, and as we approach these points, the resolvent norm grows to infinity. Conversely, if ‖(z − A)^{-1}‖ approaches infinity, then z must approach some eigenvalue of A [14, Thm 2.4]. (It is also possible to develop a theory of pseudospectra for operators on Banach spaces, and it is important to note that this converse does not necessarily hold for such operators; that is, there are operators [4, 5] such that ‖(z − A)^{-1}‖ approaches infinity, but z does not approach the spectrum of A.)

The second definition of the ε-pseudospectrum arises from eigenvalue perturbation theory [8].

Figure 2.1. Contour plot of the resolvent norm.

Definition 2.2. Let A ∈ C^{N×N}. The ε-pseudospectrum of A is the set of z ∈ C such that

  z ∈ σ(A + E)

for some E with ‖E‖ < ε.

Finally, the third definition of the ε-pseudospectrum is the following:

Definition 2.3. Let A ∈ C^{N×N}. The ε-pseudospectrum of A is the set of z ∈ C such that

  ‖(z − A)v‖ < ε

for some unit vector v.

This definition is similar to our first definition in that it quantifies how close z is to an eigenvalue of A. In addition to this, it also gives us the notion of an ε-pseudoeigenvector.

Theorem 2.1 (Equivalence of the definitions of pseudospectra). For any matrix A ∈ C^{N×N}, the three definitions above are equivalent.

The proof of this theorem follows [14, §2].

Proof. Define the following sets:

  M_ε = {z ∈ C : ‖(z − A)^{-1}‖ > ε^{-1}},
  N_ε = {z ∈ C : z ∈ σ(A + E) for some E ∈ C^{N×N}, ‖E‖ < ε},
  P_ε = {z ∈ C : ‖(z − A)v‖ < ε for some v ∈ C^N, ‖v‖ = 1}.

To show equivalence, we will show that N_ε ⊆ P_ε ⊆ M_ε ⊆ N_ε.

N_ε ⊆ P_ε: Let z ∈ N_ε. Let E be such that z ∈ σ(A + E), with ‖E‖ < ε. Let v be an eigenvector of norm one for z, that is, ‖v‖ = 1 and (A + E)v = zv. Then

  ‖(z − A)v‖ = ‖(z − A − E)v + Ev‖ = ‖Ev‖ ≤ ‖E‖ ‖v‖ < ε,

and hence z ∈ P_ε.
P_ε ⊆ M_ε: Let z ∈ P_ε, and let v ∈ C^N be such that ‖v‖ = 1 and ‖(z − A)v‖ < ε. Write (z − A)v = s u, where we define s = ‖(z − A)v‖ and u = (z − A)v / ‖(z − A)v‖. Then s^{-1} v = (z − A)^{-1} u, which implies that

  ‖(z − A)^{-1}‖ ≥ ‖(z − A)^{-1} u‖ = s^{-1} ‖v‖ = s^{-1} > ε^{-1},

and hence z ∈ M_ε.

M_ε ⊆ N_ε: Let z ∈ M_ε. We will construct an E with ‖E‖ < ε and z ∈ σ(A + E). Since ‖(z − A)^{-1}‖ = sup_{‖v‖=1} ‖(z − A)^{-1}v‖ > ε^{-1}, there exists some v with ‖v‖ = 1 such that ‖(z − A)^{-1}v‖ > ε^{-1}. Let u be the unit vector (z − A)^{-1}v / ‖(z − A)^{-1}v‖, so that

  (z − A)u = v / ‖(z − A)^{-1}v‖, i.e., zu − Au = v / ‖(z − A)^{-1}v‖.

Define E = (1 / ‖(z − A)^{-1}v‖) v u*. Note that Eu = v / ‖(z − A)^{-1}v‖ and ‖E‖ = 1 / ‖(z − A)^{-1}v‖ < ε. Then zu = Au + Eu, and so z ∈ σ(A + E).

As all three definitions are now shown to be equivalent, we can unambiguously denote the ε-pseudospectrum of A by σ_ε(A). □
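The inclusion M_ε ⊆ N_ε is constructive: the rank-one perturbation E = v u*/‖(z − A)^{-1}v‖ built in the proof can be formed explicitly. The following sketch (assuming NumPy; the matrix A, the point z, and ε are hypothetical illustrative choices) builds E and verifies that ‖E‖ < ε and that z is an eigenvalue of A + E:

```python
import numpy as np

# Hypothetical example: a 2x2 Jordan block and a point z inside M_eps for eps = 0.1.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
z = 1.0 + 0.05j
eps = 0.1
N = A.shape[0]

R = np.linalg.inv(z * np.eye(N) - A)   # the resolvent (z - A)^{-1}
# A unit vector v maximizing ||R v|| is the top right-singular vector of R,
# so ||R v|| = ||(z - A)^{-1}|| > 1/eps.
_, s, Vh = np.linalg.svd(R)
v = Vh[0].conj()
Rv = R @ v
u = Rv / np.linalg.norm(Rv)            # the unit vector u from the proof
E = np.outer(v, u.conj()) / np.linalg.norm(Rv)   # rank one, ||E|| = 1/||R v||

# (z - A)u = v/||R v|| = E u, hence (A + E)u = z u: z is an eigenvalue of A + E.
print("||E|| =", np.linalg.norm(E, 2))
print("dist(z, sigma(A+E)) =", min(abs(np.linalg.eigvals(A + E) - z)))  # ~ 0 up to roundoff
```

This also illustrates Definition 2.2 directly: the pseudospectrum sweeps out exactly the eigenvalues reachable by perturbations of norm less than ε, and rank-one perturbations already suffice.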