What Does the Spectral Theorem Say? Author(s): P. R. Halmos

Total Pages: 16

File Type: pdf, Size: 1020 KB

What Does the Spectral Theorem Say? Author(s): P. R. Halmos. Source: The American Mathematical Monthly, Vol. 70, No. 3 (Mar., 1963), pp. 241-247. Published by: Mathematical Association of America. Stable URL: http://www.jstor.org/stable/2313117. Accessed: 29/07/2011 13:16.

WHAT DOES THE SPECTRAL THEOREM SAY?

P. R. HALMOS, University of Michigan

Most students of mathematics learn quite early and most mathematicians remember till quite late that every Hermitian matrix (and, in particular, every real symmetric matrix) may be put into diagonal form. A more precise statement of the result is that every Hermitian matrix is unitarily equivalent to a diagonal one. The spectral theorem is widely and correctly regarded as the generalization of this assertion to operators on Hilbert space. It is unfortunate therefore that even the bare statement of the spectral theorem is widely regarded as somewhat mysterious and deep, and probably inaccessible to the nonspecialist. The purpose of this paper is to try to dispel some of the mystery.
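As a quick numerical companion to the matrix form of the theorem (an illustration added here, not part of Halmos's article), the following sketch uses NumPy to exhibit the unitary equivalence of an arbitrary Hermitian matrix to a diagonal one; the test matrix and random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build an arbitrary Hermitian matrix A = B + B*.
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T

# eigh returns real eigenvalues and a unitary matrix of eigenvectors.
eigenvalues, U = np.linalg.eigh(A)
D = np.diag(eigenvalues)

# Unitary equivalence: U* A U is the diagonal matrix of eigenvalues.
assert np.allclose(U.conj().T @ A @ U, D)
assert np.allclose(U.conj().T @ U, np.eye(4))
print(eigenvalues)  # real, since A is Hermitian
```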
Probably the main reason the general operator theorem frightens most people is that it does not obviously include the special matrix theorem. To see the relation between the two, the description of the finite-dimensional situation has to be distorted almost beyond recognition. The result is not intuitive in any language; neither Stieltjes integrals with unorthodox multiplicative properties, nor bounded operator representations of function algebras, are in the daily tool-kit of every working mathematician. In contrast, the formulation of the spectral theorem given below uses only the relatively elementary concepts of measure theory. This formulation has been part of the oral tradition of Hilbert space for quite some time (for an explicit treatment see [6]), but it has not been called the spectral theorem; it usually occurs in the much deeper "multiplicity theory." Since the statement uses simple concepts only, this aspect of the present formulation is an advantage, not a drawback; its effect is to make the spirit of one of the harder parts of the subject accessible to the student of the easier parts.

Another reason the spectral theorem is thought to be hard is that its proof is hard. An assessment of difficulty is, of course, a subjective matter, but, in any case, there is no magic new technique in the pages that follow. It is the statement of the spectral theorem that is the main concern of the exposition, not the proof. The proof is essentially the same as it always was; most of the standard methods used to establish the spectral theorem can be adapted to the present formulation.

Let φ be a complex-valued bounded measurable function on a measure space X with measure μ. (All measure-theoretic statements, equations, and relations, e.g., "φ is bounded," are to be interpreted in the "almost everywhere" sense.) An operator A is defined on the Hilbert space L²(μ) by (Af)(x) = φ(x)f(x), x ∈ X; the operator A is called the multiplication induced by φ. The study of the relation between A and φ is an instructive exercise. It turns out, for instance, that the adjoint A* of A is the multiplication induced by the complex conjugate of φ. If ψ also is a bounded measurable function on X, with induced multiplication B, then the multiplication induced by the product function φψ is the product operator AB. It follows that a multiplication is always normal; it is Hermitian if and only if the function that induces it is real. (For the elementary concepts of operator theory, such as Hermitian operators, normal operators, projections, and spectra, see [3]. For present purposes a concept is called elementary if it is discussed in [3] before the spectral theorem, i.e., before p. 56.)

As a special case let X be a finite set (with n points, say), and let μ be the "counting measure" in X (so that μ({x}) = 1 for each x in X). In this case L²(μ) is n-dimensional complex Euclidean space; it is customary and convenient to indicate the values of a function in L²(μ) by indices instead of parenthetical arguments. With this notation the action on f of the multiplication A induced by φ can be described by A(f1, ..., fn) = (φ1 f1, ..., φn fn). To say this with matrices, note that the characteristic functions of the singletons in X form an orthonormal basis in L²(μ); the assertion is that the matrix of A with respect to that basis is diag(φ1, ..., φn).

The general notation is now established and the special role of the finite-dimensional situation within it is clear; everything is ready for the principal statement.

SPECTRAL THEOREM. Every Hermitian operator is unitarily equivalent to a multiplication.

In complete detail the theorem says that if A is a Hermitian operator on a Hilbert space H, then there exists a (real-valued) bounded measurable function φ on some measure space X with measure μ, and there exists an isometry U from L²(μ) onto H, such that (U⁻¹AUf)(x) = φ(x)f(x), x ∈ X, for each f in L²(μ). What follows is an outline of a proof of the spectral theorem, a brief discussion of its relation to the version involving spectral measures, and an illustration of its application.
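The finite-dimensional special case is easy to check numerically. In the sketch below (an added illustration, not from the article), the multiplication induced by φ on a finite set X with counting measure is just a diagonal matrix, and the spectral theorem reduces to unitary diagonalization; the test matrices are arbitrary.

```python
import numpy as np

# Finite-dimensional special case: X = {1, ..., n} with counting measure,
# so L^2(mu) is C^n and the multiplication induced by phi is a diagonal matrix.
phi = np.array([2.0, -1.0, 0.5, 3.0])
multiplication = np.diag(phi)          # (Af)(x) = phi(x) f(x)

# Spectral theorem, matrix form: a Hermitian A is unitarily equivalent
# to the multiplication induced by its (real) eigenvalue function.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                     # arbitrary Hermitian operator on C^4

eigenvalues, U = np.linalg.eigh(A)     # U plays the role of the isometry L^2(mu) -> H
assert np.allclose(U.conj().T @ A @ U, np.diag(eigenvalues))
```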
Three tools are needed for the proof of the spectral theorem.

(1) The equality of norm and spectral radius. If the spectrum of A is Λ(A), then the spectral radius r(A) is defined by r(A) = sup { |λ| : λ ∈ Λ(A) }. It is always true that r(A) ≤ ‖A‖ ([3, Theorem 2, p. 52]); the useful fact here is that if A is Hermitian, then r(A) = ‖A‖ ([3, Theorem 2, p. 55]).

(2) The Riesz representation theorem for compact sets in the line. If L is a positive linear functional defined for all real-valued continuous functions on a compact subset X of the real line, then there exists a unique finite measure μ on the Borel sets of X such that L(f) = ∫ f dμ for all f in the domain of L. (To say that L is linear means of course that L(αf + βg) = αL(f) + βL(g) whenever f and g are in the domain of L and α and β are real scalars; to say that L is positive means that L(f) ≥ 0 whenever f is in the domain of L and f ≥ 0.) For a proof, see [4, Theorem D, p. 247].

(3) The Weierstrass approximation theorem for compact sets in the line. Each real-valued continuous function on a compact subset of the real line is the uniform limit of polynomials. For a pleasant elementary discussion and proof see [1, p. 102].

Consider now a Hermitian operator A on a Hilbert space H. A vector ξ in H is a cyclic vector for A if the set of all vectors of the form q(A)ξ, where q runs over polynomials with complex coefficients, is dense in H. Cyclic vectors may not exist, but an easy transfinite argument shows that H is always the direct sum of a family of subspaces, each of which reduces A, such that the restriction of A to each of them does have a cyclic vector. Once the spectral theorem is known for each such restriction, it follows easily for A itself; the measure spaces that serve for the direct summands of H have a natural direct sum, which serves for H itself. Conclusion: there is no loss of generality in assuming that A has a cyclic vector, say ξ.

For each real polynomial p write L(p) = (p(A)ξ, ξ). Clearly L is a linear functional; since

|L(p)| ≤ ‖p(A)‖·‖ξ‖² = r(p(A))·‖ξ‖² = sup { |λ| : λ ∈ Λ(p(A)) }·‖ξ‖² = sup { |p(λ)| : λ ∈ Λ(A) }·‖ξ‖²,

the functional L is bounded for polynomials. (The last step uses the spectral mapping theorem; cf. [3, Theorem 3, p. 55].) It follows (by the Weierstrass theorem) that L has a bounded extension to all real-valued continuous functions on Λ(A). To prove that L is positive, observe first that if p is a real polynomial, then ((p(A))²ξ, ξ) = ‖p(A)ξ‖² ≥ 0. If f is an arbitrary positive continuous function on Λ(A), then approximate √f uniformly by real polynomials; the inequality just proved implies that L(f) ≥ 0 (since f is then uniformly approximated by squares of real polynomials). The Riesz theorem now yields the existence of a finite measure μ such that (p(A)ξ, ξ) = ∫ p dμ for every real polynomial p.

For each (possibly complex) polynomial q write Uq = q(A)ξ. Since A is Hermitian, (q(A))* (= q̄(A)) is a polynomial in A, and so is (q(A))*q(A) (= |q|²(A)); it follows that

∫ |q|² dμ = (|q|²(A)ξ, ξ) = ((q(A))*q(A)ξ, ξ) = ‖q(A)ξ‖² = ‖Uq‖².

This means that the linear transformation U from a dense subset of L²(μ) into H is an isometry, and hence that it has a unique isometric extension that maps L²(μ) into H.
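In finite dimensions the measure μ produced by this argument is easy to exhibit: assuming distinct eigenvalues, it places the weight |⟨u_i, ξ⟩|² at each eigenvalue λ_i of A. The sketch below (an illustration added here, not part of the article) checks the identity (p(A)ξ, ξ) = ∫ p dμ for a sample polynomial; the matrix, vector, and polynomial are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B + B.conj().T                                          # Hermitian operator on C^n
xi = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # generic vector (cyclic with probability 1)

lam, U = np.linalg.eigh(A)
# With distinct eigenvalues, the measure mu produced by the Riesz step puts
# the weight |<u_i, xi>|^2 at the eigenvalue lambda_i.
weights = np.abs(U.conj().T @ xi) ** 2

def p_of_matrix(coeffs, M):
    """Evaluate p(M) = sum_k coeffs[k] M^k by Horner's scheme."""
    result = np.zeros_like(M)
    for c in reversed(coeffs):
        result = result @ M + c * np.eye(M.shape[0])
    return result

p = [0.0, 1.5, -2.0, 1.0]                                 # p(t) = 1.5 t - 2 t^2 + t^3
lhs = np.vdot(xi, p_of_matrix(p, A) @ xi).real            # (p(A) xi, xi)
rhs = float(np.sum(np.polyval(p[::-1], lam) * weights))   # integral of p with respect to mu
assert np.isclose(lhs, rhs)
```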
Recommended publications
  • Singular Value Decomposition (SVD)
    San José State University, Math 253: Mathematical Methods for Data Visualization, Lecture 5: Singular Value Decomposition (SVD), Dr. Guangliang Chen. Outline: • Matrix SVD. Introduction: We have seen that symmetric matrices are always (orthogonally) diagonalizable. That is, for any symmetric matrix A ∈ R^{n×n}, there exist an orthogonal matrix Q = [q1 ... qn] and a diagonal matrix Λ = diag(λ1, ..., λn), both real and square, such that A = QΛQ^T. We have pointed out that the λi's are the eigenvalues of A and the qi's the corresponding eigenvectors (which are orthogonal to each other and have unit norm). Thus, such a factorization is called the eigendecomposition of A, also called the spectral decomposition of A. What about general rectangular matrices? Existence of the SVD for general matrices. Theorem: For any matrix X ∈ R^{n×d}, there exist two orthogonal matrices U ∈ R^{n×n}, V ∈ R^{d×d} and a nonnegative, "diagonal" matrix Σ ∈ R^{n×d} (of the same size as X) such that X = UΣV^T. Remark. This is called the Singular Value Decomposition (SVD) of X: • The diagonals of Σ are called the singular values of X (often sorted in decreasing order). • The columns of U are called the left singular vectors of X. • The columns of V are called the right singular vectors of X. [Figure: block shapes of U, Σ and V^T in the cases n > d and n < d.]
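A minimal NumPy sketch of the factorization stated in the theorem above (the test matrix and its size are arbitrary choices, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))          # arbitrary rectangular matrix, n > d

# full_matrices=True gives U (6x6), V^T (4x4) and the singular values of the
# 6x4 "diagonal" factor Sigma, matching the theorem quoted above.
U, s, Vt = np.linalg.svd(X, full_matrices=True)

Sigma = np.zeros(X.shape)
np.fill_diagonal(Sigma, s)               # nonnegative, sorted in decreasing order

assert np.allclose(X, U @ Sigma @ Vt)
assert np.allclose(U.T @ U, np.eye(6)) and np.allclose(Vt @ Vt.T, np.eye(4))
```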
  • Operator Theory in the C∗-Algebra Framework
    Operator theory in the C∗-algebra framework. S. L. Woronowicz and K. Napiórkowski, Department of Mathematical Methods in Physics, Faculty of Physics, Warsaw University, Hoża 74, 00-682 Warszawa. Abstract: Properties of operators affiliated with a C∗-algebra are studied. A functional calculus of normal elements is constructed. Representations of locally compact groups in a C∗-algebra are considered. Generalizations of Stone and Nelson theorems are investigated. It is shown that the tensor product of affiliated elements is affiliated with the tensor product of corresponding algebras. 0. Introduction. Let H be a Hilbert space and CB(H) be the algebra of all compact operators acting on H. It was pointed out in [17] that the classical theory of unbounded closed operators acting in H [8, 9, 3] is in a sense related to CB(H). It seems to be interesting to replace in this context CB(H) by any non-unital C∗-algebra. A step in this direction is done in the present paper. We shall deal with the following topics: the functional calculus of normal elements (Section 1), the representation theory of Lie groups including the Stone theorem (Sections 2, 3 and 4) and the extensions of symmetric elements (Section 5). Section 6 contains elementary results related to tensor products. The perturbation theory (in the spirit of T. Kato) is not covered in this paper. The elementary results in this direction are contained in the first author's previous paper (cf. [17, Examples 1, 2 and 3, pp. 412–413]). To fix the notation we remind the basic definitions and results [17].
  • Notes on the Spectral Theorem
    Math 108b: Notes on the Spectral Theorem. From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T* is defined as the unique linear operator on V such that ⟨T(x), y⟩ = ⟨x, T*(y)⟩ for every x, y ∈ V – see Theorems 6.8 and 6.9.) When V is infinite dimensional, the adjoint T* may or may not exist. One useful fact (Theorem 6.10) is that if β is an orthonormal basis for a finite dimensional inner product space V, then [T*]_β = ([T]_β)*. That is, the matrix representation of the operator T* is equal to the conjugate transpose of the matrix representation of T. For a general vector space V and a linear operator T, we have already asked the question "when is there a basis of V consisting only of eigenvectors of T?" – this is exactly when T is diagonalizable. Now, for an inner product space V, we know how to check whether vectors are orthogonal, and we know how to define the norms of vectors, so we can ask "when is there an orthonormal basis of V consisting only of eigenvectors of T?" Clearly, if there is such a basis, T is diagonalizable – and moreover, eigenvectors with distinct eigenvalues must be orthogonal. Definitions. Let V be an inner product space. Let T ∈ L(V). (a) T is normal if T*T = TT*. (b) T is self-adjoint if T* = T. For the next two definitions, assume V is finite-dimensional: Then, (c) T is unitary if F = C and ‖T(x)‖ = ‖x‖ for every x ∈ V. (d) T is orthogonal if F = R and ‖T(x)‖ = ‖x‖ for every x ∈ V. Notes 1.
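A small sketch, added here for illustration, that checks definitions (a)–(c) above for concrete matrices over F = C; the example matrices are arbitrary choices.

```python
import numpy as np

def is_normal(T, tol=1e-12):
    return np.allclose(T.conj().T @ T, T @ T.conj().T, atol=tol)

def is_self_adjoint(T, tol=1e-12):
    return np.allclose(T, T.conj().T, atol=tol)

def is_unitary(T, tol=1e-12):
    return np.allclose(T.conj().T @ T, np.eye(T.shape[0]), atol=tol)

H = np.array([[2.0, 1 - 1j], [1 + 1j, 3.0]])     # self-adjoint, hence normal
U = np.array([[0, -1], [1, 0]], dtype=complex)   # unitary (a rotation): normal but not self-adjoint

print(is_self_adjoint(H), is_normal(H))              # True True
print(is_self_adjoint(U), is_unitary(U), is_normal(U))  # False True True
```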
  • Asymptotic Spectral Measures: Between Quantum Theory and E-theory
    Asymptotic Spectral Measures: Between Quantum Theory and E-theory. Jody Trout, Department of Mathematics, Dartmouth College, Hanover, NH 03755. Email: [email protected]. Abstract— We review the relationship between positive operator-valued measures (POVMs) in quantum measurement theory and asymptotic morphisms in the C∗-algebra E-theory of Connes and Higson. The theory of asymptotic spectral measures, as introduced by Martinez and Trout [1], is integrally related to positive asymptotic morphisms on locally compact spaces via an asymptotic Riesz Representation Theorem. Examples and applications to quantum physics, including quantum noise models, semiclassical limits, pure spin one-half systems and quantum information processing will also be discussed. I. INTRODUCTION. In the von Neumann Hilbert space model [2] of quantum mechanics, quantum observables are modeled as self-adjoint operators on the Hilbert space of states of the quantum system. The Spectral Theorem relates this theoretical view of a quantum observable to the more operational one of a projection- […] of classical observables to the C∗-algebra of quantum observables. See the papers [12]–[14] and the books [15], [16] for more on the connections between operator algebra K-theory, E-theory, and quantization. In [1], Martinez and Trout showed that there is a fundamental quantum-E-theory relationship by introducing the concept of an asymptotic spectral measure (ASM or asymptotic PVM) A = {A_ħ : Σ → B(H)}, ħ ∈ (0,1], associated to a measurable space (X, Σ). (See Definition 4.1.) Roughly, this is a continuous family of POV-measures which are "asymptotically" projective (or quasiprojective) as ħ → 0: A_ħ(Δ ∩ Δ′) − A_ħ(Δ)A_ħ(Δ′) → 0 as ħ → 0, for certain measurable sets Δ, Δ′ ∈ Σ.
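As a toy finite-dimensional illustration of the displayed quasiprojectivity condition only (not the construction in the paper), the sketch below applies smoothed indicator functions of intervals to a fixed self-adjoint matrix and watches the defect A_ħ(Δ ∩ Δ′) − A_ħ(Δ)A_ħ(Δ′) shrink as ħ → 0; the smoothing profile and the matrix are arbitrary assumptions.

```python
import numpy as np

def smoothed_indicator(t, a, b, hbar):
    """A [0,1]-valued mollified indicator of the interval (a, b); sharpens as hbar -> 0."""
    return 0.25 * (1 + np.tanh((t - a) / hbar)) * (1 - np.tanh((t - b) / hbar))

rng = np.random.default_rng(3)
B = rng.standard_normal((8, 8))
H = (B + B.T) / 2                         # a self-adjoint "observable"
lam, Q = np.linalg.eigh(H)

def A(interval, hbar):
    """A_hbar(interval): functional calculus applied to H with a smoothed indicator."""
    a, b = interval
    return Q @ np.diag(smoothed_indicator(lam, a, b, hbar)) @ Q.T

D1, D2 = (-2.0, 1.0), (0.0, 3.0)          # two intervals whose intersection is (0, 1)
for hbar in (0.5, 0.1, 0.02):
    defect = A((0.0, 1.0), hbar) - A(D1, hbar) @ A(D2, hbar)
    print(hbar, np.linalg.norm(defect, 2))  # tends to 0 as hbar -> 0 (no eigenvalue on an endpoint)
```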
  • Computing the Singular Value Decomposition to High Relative Accuracy
    Computing the Singular Value Decomposition to High Relative Accuracy. James Demmel, Department of Mathematics, Department of Electrical Engineering and Computer Science, University of California - Berkeley, [email protected]. Plamen Koev, Department of Mathematics, University of California - Berkeley, [email protected]. Structured Matrices in Operator Theory, Numerical Analysis, Control, Signal and Image Processing, Boulder, Colorado, June 27-July 1, 1999. Supported by NSF and DOE. INTRODUCTION. • High Relative Accuracy means computing the correct SIGN and LEADING DIGITS. • Singular Value Decomposition (SVD): A = UΣV^T where U, V are orthogonal, Σ = diag(σ1, σ2, ..., σn), and σ1 ≥ σ2 ≥ ... ≥ σn ≥ 0. • GOAL: Compute all σi with high relative accuracy, even when σi ≪ σ1. • It all comes down to being able to compute determinants to high relative accuracy. Example: 100 by 100 Hilbert Matrix H(i, j) = 1/(i + j − 1). • Singular values range from 1 down to 10^{-150}. • Old algorithm, New algorithm, both in 16 digits. [Figure: singular values of Hilb(100); blue = accurate (new algorithm), red = usual (old algorithm).] • D = log(cond(A)) = log(σ1/σn) (here D = 150). • Cost of Old algorithm = O(n^3 D^2). • Cost of New algorithm = O(n^3), independent of D. – Run in double, not bignums as in Mathematica. – New hundreds of times faster than Old. • When does it work? Not for all matrices ... • Why bother? Why do we want tiny singular values accurately? 1. When they are determined accurately by the data. • Hilbert: H(i, j) = 1/(i + j − 1). • Cauchy: C(i, j) = 1/(xi + yj). 2.
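A short sketch (added here, not from the slides) of the loss of relative accuracy the talk is about: a conventional double-precision SVD of the Hilbert matrix cannot resolve singular values below roughly σ1 times machine epsilon, while the true values continue down to about 1e-150.

```python
import numpy as np

n = 100
i, j = np.ogrid[1:n + 1, 1:n + 1]
H = 1.0 / (i + j - 1)                      # the 100-by-100 Hilbert matrix H(i, j) = 1/(i + j - 1)

s = np.linalg.svd(H, compute_uv=False)     # conventional double-precision SVD

# The true singular values decay to roughly 1e-150, but a conventional SVD can
# only resolve values down to about sigma_1 * machine epsilon; anything below
# that floor is roundoff, i.e. its sign and leading digits carry no information.
print("largest:", s[0])
print("smallest computed:", s[-1])         # vastly larger than the true ~1e-150
print("roundoff floor:", s[0] * np.finfo(float).eps)
```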
  • The Spectral Theorem for Self-Adjoint and Unitary Operators, Michael Taylor
    The Spectral Theorem for Self-Adjoint and Unitary Operators. Michael Taylor. Contents: 1. Introduction; 2. Functions of a self-adjoint operator; 3. Spectral theorem for bounded self-adjoint operators; 4. Functions of unitary operators; 5. Spectral theorem for unitary operators; 6. Alternative approach; 7. From Theorem 1.2 to Theorem 1.1; A. Spectral projections; B. Unbounded self-adjoint operators; C. Von Neumann's mean ergodic theorem. 1. Introduction. If H is a Hilbert space, a bounded linear operator A : H → H (A ∈ L(H)) has an adjoint A* : H → H defined by (1.1) (Au, v) = (u, A*v), u, v ∈ H. We say A is self-adjoint if A = A*. We say U ∈ L(H) is unitary if U* = U⁻¹. More generally, if H̃ is another Hilbert space, we say Φ ∈ L(H, H̃) is unitary provided Φ is one-to-one and onto, and (Φu, Φv)_H̃ = (u, v)_H, for all u, v ∈ H. If dim H = n < ∞, each self-adjoint A ∈ L(H) has the property that H has an orthonormal basis of eigenvectors of A. The same holds for each unitary U ∈ L(H). Proofs can be found in §§11–12, Chapter 2, of [T3]. Here, we aim to prove the following infinite dimensional variant of such a result, called the Spectral Theorem. Theorem 1.1. If A ∈ L(H) is self-adjoint, there exists a measure space (X, F, μ), a unitary map Φ : H → L²(X, μ), and a ∈ L∞(X, μ), such that (1.2) ΦAΦ⁻¹f(x) = a(x)f(x), ∀ f ∈ L²(X, μ). Here, a is real valued, and ‖a‖_{L∞} = ‖A‖.
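In finite dimensions the unitary Φ of Theorem 1.1 can be taken to be the eigenvector change of basis, and "functions of a self-adjoint operator" reduce to applying the function to the eigenvalues. A small NumPy sketch of that idea, with an arbitrary test matrix (not taken from the notes):

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = (B + B.conj().T) / 2                 # a self-adjoint operator (finite-dimensional stand-in)

lam, Phi = np.linalg.eigh(A)             # A = Phi diag(a) Phi*, with a real valued

def f_of_A(f):
    """Continuous functional calculus: f(A) = Phi diag(f(a)) Phi*."""
    return Phi @ np.diag(f(lam)) @ Phi.conj().T

# Sanity checks: exp(A) exp(-A) = I, and sup|a| equals ||A|| as in Theorem 1.1.
assert np.allclose(f_of_A(np.exp) @ f_of_A(lambda t: np.exp(-t)), np.eye(5))
assert np.isclose(np.max(np.abs(lam)), np.linalg.norm(A, 2))
```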
  • Variational Calculus of Supervariables and Related Algebraic Structures
    Variational Calculus of Supervariables and Related Algebraic Structures. Xiaoping Xu, Department of Mathematics, The Hong Kong University of Science & Technology, Clear Water Bay, Kowloon, Hong Kong. (arXiv:math/9911191v1 [math.QA] 24 Nov 1999.) Abstract: We establish a formal variational calculus of supervariables, which is a combination of the bosonic theory of Gel'fand-Dikii and the fermionic theory in our earlier work. Certain interesting new algebraic structures are found in connection with Hamiltonian superoperators in terms of our theory. In particular, we find connections between Hamiltonian superoperators and Novikov-Poisson algebras that we introduced in our earlier work in order to establish a tensor theory of Novikov algebras. Furthermore, we prove that an odd linear Hamiltonian superoperator in our variational calculus induces a Lie superalgebra, which is a natural generalization of the Super-Virasoro algebra under certain conditions. 1 Introduction. Formal variational calculus was introduced by Gel'fand and Dikii [GDi1-2] in studying Hamiltonian systems related to certain nonlinear partial differential equations, such as the KdV equations. Invoking the variational derivatives, they found certain interesting Poisson structures. Moreover, Gel'fand and Dorfman [GDo] found more connections between Hamiltonian operators and algebraic structures. Balinskii and Novikov [BN] studied similar Poisson structures from another point of view. The nature of Gel'fand and Dikii's formal variational calculus is bosonic. In [X3], we presented a general frame of Hamiltonian superoperators and a purely fermionic formal variational calculus. Our work [X3] was based on pure algebraic analogy. In this paper, we shall present a formal variational calculus of supervariables, which is a combination
  • 7 Spectral Properties of Matrices
    7 Spectral Properties of Matrices. 7.1 Introduction. The existence of directions that are preserved by linear transformations (which are referred to as eigenvectors) was discovered by L. Euler in his study of movements of rigid bodies. This work was continued by Lagrange, Cauchy, Fourier, and Hermite. The study of eigenvectors and eigenvalues acquired increasing significance through its applications in heat propagation and stability theory. Later, Hilbert initiated the study of eigenvalues in functional analysis (in the theory of integral operators). He introduced the terms eigenvalue and eigenvector. The term eigenvalue is a German-English hybrid formed from the German word eigen, which means "own," and the English word "value". It is interesting that Cauchy referred to the same concept as characteristic value, and the term characteristic polynomial of a matrix (which we introduce in Definition 7.1) was derived from this naming. We present the notions of geometric and algebraic multiplicities of eigenvalues, examine properties of spectra of special matrices, discuss variational characterizations of spectra and the relationships between matrix norms and eigenvalues. We conclude this chapter with a section dedicated to singular values of matrices. 7.2 Eigenvalues and Eigenvectors. Let A ∈ C^{n×n} be a square matrix. An eigenpair of A is a pair (λ, x) ∈ C × (C^n − {0}) such that Ax = λx. We refer to λ as an eigenvalue and to x as an eigenvector. The set of eigenvalues of A is the spectrum of A and will be denoted by spec(A). If (λ, x) is an eigenpair of A, the linear system Ax = λx has a non-trivial solution in x.
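A short illustration (not from the chapter) of eigenpairs, the spectrum, and the distinction between algebraic and geometric multiplicity, using a small Jordan-type matrix:

```python
import numpy as np

# A 3x3 matrix with eigenvalue 2 of algebraic multiplicity 2 but geometric
# multiplicity 1 (a Jordan block), plus a simple eigenvalue 5.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

eigenvalues = np.linalg.eigvals(A)
spectrum = set(np.round(eigenvalues, 10))
print(sorted(spectrum))                            # [2.0, 5.0]

for lam in sorted(spectrum):
    algebraic = np.sum(np.isclose(eigenvalues, lam))
    geometric = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(3))
    print(lam, algebraic, geometric)               # 2.0 2 1   and   5.0 1 1
```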
  • Several Versions of the Spectral Theorem for Normal Operators in Hilbert Spaces
    10. The Spectral Theorem. The big moment has arrived, and we are now ready to prove several versions of the spectral theorem for normal operators in Hilbert spaces. Throughout this chapter, it should be helpful to compare our results with the more familiar special case when the Hilbert space is finite-dimensional. In this setting, the spectral theorem says that every normal matrix T ∈ C^{n×n} can be diagonalized by a unitary transformation. This can be rephrased as follows: There are numbers zj ∈ C (the eigenvalues) and orthogonal projections Pj ∈ B(C^n) such that T = Σ_{j=1}^m zj Pj. The subspaces R(Pj) are orthogonal to each other. From this representation of T, it is then also clear that Pj is the projection onto the eigenspace belonging to zj. In fact, we have already proved one version of the (general) spectral theorem: The Gelfand theory of the commutative C∗-algebra A ⊆ B(H) that is generated by a normal operator T ∈ B(H) provides a functional calculus: We can define f(T), for f ∈ C(σ(T)), in such a way that the map C(σ(T)) → A, f ↦ f(T), is an isometric ∗-isomorphism between C∗-algebras, and this is the spectral theorem in one of its many disguises! See Theorem 9.13 and the discussion that follows. As a warm-up, let us use this material to give a quick proof of the result about normal matrices T ∈ C^{n×n} that was stated above. Consider the C∗-algebra A ⊆ C^{n×n} that is generated by T.
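The finite-dimensional statement T = Σ_{j=1}^m zj Pj is easy to verify numerically. A sketch (added here, not from the text) with an arbitrary normal matrix, namely a permutation matrix:

```python
import numpy as np

# A normal (here: unitary) matrix: a cyclic permutation matrix on C^3.
T = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=complex)
assert np.allclose(T @ T.conj().T, T.conj().T @ T)    # normality

w, V = np.linalg.eig(T)

# Group (numerically equal) eigenvalues and build the orthogonal projection
# onto each eigenspace: P_j = Q Q* for an orthonormal basis Q of that space.
reconstruction = np.zeros_like(T)
for z in np.unique(np.round(w, 10)):
    cols = V[:, np.isclose(w, z)]
    Q, _ = np.linalg.qr(cols)            # re-orthonormalize within the eigenspace
    P = Q @ Q.conj().T
    assert np.allclose(P @ P, P) and np.allclose(P, P.conj().T)   # orthogonal projection
    reconstruction = reconstruction + z * P

assert np.allclose(T, reconstruction)                 # T = sum_j z_j P_j
```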
  • Chapter 6: The Singular Value Decomposition (Ax=b version of 11 April 2019)
    Matrix Methods for Computational Modeling and Data Analytics, Virginia Tech, Spring 2019 · Mark Embree, [email protected]. Chapter 6: The Singular Value Decomposition. Ax=b version of 11 April 2019. The singular value decomposition (SVD) is among the most important and widely applicable matrix factorizations. It provides a natural way to untangle a matrix into its four fundamental subspaces, and reveals the relative importance of each direction within those subspaces. Thus the singular value decomposition is a vital tool for analyzing data, and it provides a slick way to understand (and prove) many fundamental results in matrix theory. It is the perfect tool for solving least squares problems, and provides the best way to approximate a matrix with one of lower rank. These notes construct the SVD in various forms, then describe a few of its most compelling applications. 6.1 Eigenvalues and eigenvectors of symmetric matrices. To derive the singular value decomposition of a general (rectangular) matrix A ∈ R^{m×n}, we shall rely on several special properties of the square, symmetric matrix A^T A. While this course assumes you are well acquainted with eigenvalues and eigenvectors, we will recall some fundamental concepts, especially pertaining to symmetric matrices. 6.1.1 A passing nod to complex numbers. Recall that even if a matrix has real number entries, it could have eigenvalues that are complex numbers; the corresponding eigenvectors will also have complex entries. Consider, for example, the matrix S = [[0, −1], [1, 0]]. To find the eigenvalues of S, form the characteristic polynomial det(λI − S) = λ² + 1.
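Two quick numerical checks of the points made above (an added sketch; the random matrix is an arbitrary choice): the real matrix S has the complex eigenvalues ±i, and the eigenvalues of A^T A are the squares of the singular values of A.

```python
import numpy as np

# The matrix from the excerpt: real entries, complex eigenvalues.
S = np.array([[0.0, -1.0],
              [1.0,  0.0]])
print(np.linalg.eigvals(S))              # +/- i, the roots of lambda^2 + 1

# The route to the SVD sketched in the chapter: the symmetric matrix A^T A has
# real nonnegative eigenvalues whose square roots are the singular values of A.
rng = np.random.default_rng(5)
A = rng.standard_normal((7, 3))
gram_eigs = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
singular_values = np.linalg.svd(A, compute_uv=False)
assert np.allclose(np.sqrt(gram_eigs), singular_values)
```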
  • ASYMPTOTICALLY ISOSPECTRAL QUANTUM GRAPHS and TRIGONOMETRIC POLYNOMIALS. Pavel Kurasov, Rune Suhr
    ISSN: 1401-5617. ASYMPTOTICALLY ISOSPECTRAL QUANTUM GRAPHS AND TRIGONOMETRIC POLYNOMIALS. Pavel Kurasov, Rune Suhr. Research Reports in Mathematics, Number 2, 2018, Department of Mathematics, Stockholm University. Electronic version of this document is available at http://www.math.su.se/reports/2018/2. Date of publication: May 16, 2018. 2010 Mathematics Subject Classification: Primary 34L25, 81U40; Secondary 35P25, 81V99. Keywords: Quantum graphs, almost periodic functions. Postal address: Department of Mathematics, Stockholm University, S-106 91 Stockholm, Sweden. Electronic addresses: http://www.math.su.se/, [email protected]. Asymptotically isospectral quantum graphs and generalised trigonometric polynomials. Pavel Kurasov and Rune Suhr, Dept. of Mathematics, Stockholm Univ., 106 91 Stockholm, SWEDEN, [email protected], [email protected]. Abstract: The theory of almost periodic functions is used to investigate spectral properties of Schrödinger operators on metric graphs, also known as quantum graphs. In particular we prove that two Schrödinger operators may have asymptotically close spectra if and only if the corresponding reference Laplacians are isospectral. Our result implies that a Schrödinger operator is isospectral to the standard Laplacian on a (possibly different) metric graph only if the potential is identically equal to zero. Keywords: Quantum graphs, almost periodic functions. 2000 MSC: 34L15, 35R30, 81Q10. Introduction. The current paper is devoted to the spectral theory of quantum graphs, more precisely to the direct and inverse spectral theory of Schrödinger operators on metric graphs [3, 20, 24]. Such operators are defined by three parameters: • a finite compact metric graph Γ; • a real integrable potential q ∈ L₁(Γ); • vertex conditions, which can be parametrised by unitary matrices S.
  • Lecture Notes on Spectra and Pseudospectra of Matrices and Operators
    Lecture Notes on Spectra and Pseudospectra of Matrices and Operators. Arne Jensen, Department of Mathematical Sciences, Aalborg University, © 2009. Abstract: We give a short introduction to the pseudospectra of matrices and operators. We also review a number of results concerning matrices and bounded linear operators on a Hilbert space, and in particular results related to spectra. A few applications of the results are discussed. Contents: 1 Introduction (p. 2); 2 Results from linear algebra (p. 2); 3 Some matrix results. Similarity transforms (p. 7); 4 Results from operator theory (p. 10); 5 Pseudospectra (p. 16); 6 Examples I (p. 20); 7 Perturbation Theory (p. 27); 8 Applications of pseudospectra I (p. 34); 9 Applications of pseudospectra II (p. 41); 10 Examples II (p. 43); 11 Some infinite dimensional examples (p. 54). 1 Introduction. We give an introduction to the pseudospectra of matrices and operators, and give a few applications. Since these notes are intended for a wide audience, some elementary concepts are reviewed. We also note that one can understand the main points concerning pseudospectra already in the finite dimensional case. So the reader not familiar with operators on a separable Hilbert space can assume that the space is finite dimensional. Let us briefly outline the contents of these lecture notes. In Section 2 we recall some results from linear algebra, mainly to fix notation, and to recall some results that may not be included in standard courses on linear algebra. In Section 4 we state some results from the theory of bounded operators on a Hilbert space. We have decided to limit the exposition to the case of bounded operators.
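A minimal sketch (added here, not from the notes) of the standard grid-based way to explore pseudospectra, using the characterization σ_ε(A) = { z : s_min(zI − A) ≤ ε }; the test matrix, grid, and ε are arbitrary choices.

```python
import numpy as np

# sigma_eps(A) = { z : smallest singular value of (zI - A) <= eps }.
# Evaluate that quantity on a grid around the spectrum of a non-normal test
# matrix (a nilpotent Jordan block), whose pseudospectra are much larger
# than its single eigenvalue {0}.
A = np.diag(np.ones(4), k=1)             # 5x5 Jordan block, spectrum {0}

x = np.linspace(-1.0, 1.0, 101)
X, Y = np.meshgrid(x, x)
smin = np.empty_like(X)
for idx in np.ndindex(X.shape):
    z = X[idx] + 1j * Y[idx]
    smin[idx] = np.linalg.svd(z * np.eye(5) - A, compute_uv=False)[-1]

eps = 1e-2
inside = smin <= eps
print(inside.sum(), "grid points lie in the 1e-2 pseudospectrum")  # far more than the lone eigenvalue
```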