λ-Aluthge Iteration and Spectral Radius


Integr. equ. oper. theory 99 (9999), 1–6
0378-620X/99000-0, DOI 10.1007/s00020-003-0000
© 2008 Birkhäuser Verlag Basel/Switzerland
Integral Equations and Operator Theory

λ-Aluthge iteration and spectral radius

Tin-Yau Tam

Abstract. Let T ∈ B(H) be an invertible operator on the complex Hilbert space H. For 0 < λ < 1, we extend Yamazaki's formula of the spectral radius in terms of the λ-Aluthge transform Δ_λ(T) := |T|^λ U |T|^{1−λ}, where T = U|T| is the polar decomposition of T. Namely, we prove that lim_{n→∞} |||Δ_λ^n(T)||| = r(T), where r(T) is the spectral radius of T and |||·||| is a unitarily invariant norm such that (B(H), |||·|||) is a Banach algebra with |||I||| = 1.

2000 Mathematics Subject Classification. Primary 47A30, 47A63.

In memory of my brother-in-law, Johnny Kei-Sun Man, who passed away on January 16, 2008, at the age of fifty-nine.

1. Introduction

Let H be a complex Hilbert space and let B(H) be the algebra of all bounded linear operators on H. For 0 < λ < 1, the λ-Aluthge transform of T [2, 5, 11, 15] is

    Δ_λ(T) := |T|^λ U |T|^{1−λ},

where T = U|T| is the polar decomposition of T, that is, U is a partial isometry and |T| = (T*T)^{1/2}. Set Δ_λ^n(T) := Δ_λ(Δ_λ^{n−1}(T)) for n ≥ 1, and Δ_λ^0(T) := T. When λ = 1/2, it is called the Aluthge transform [1] of T. See [3, 4, 7, 8, 11, 12, 13, 19].

Yamazaki [18] established the following interesting result:

    lim_{n→∞} ||Δ_{1/2}^n(T)|| = r(T),    (1.1)

where r(T) is the spectral radius of T and ||T|| is the spectral norm of T. Wang [17] then gave an elegant, simple proof of (1.1), but apparently there is a gap; see Remark 2.6 for details and Remark 2.9 for a fix.

Clearly

    ||Δ_λ(T)|| ≤ ||T||    (1.2)

and thus {||Δ_λ^n(T)||}_{n∈N} is nonincreasing. Since the spectra of T and Δ_λ(T) are identical [11, 5],

    r(Δ_λ(T)) = r(T).    (1.3)

Antezana, Massey and Stojanoff [5] proved that for any square matrix X,

    lim_{n→∞} ||Δ_λ^n(X)|| = r(X).    (1.4)

When T is invertible, the polar decomposition T = U|T| is unique [16, p. 315], in which U is unitary and |T| is invertible positive, so that Δ_λ(T) is also invertible. Our goal is to show that (1.4) remains true for an invertible operator T ∈ B(H) and a unitarily invariant norm under which B(H) is a Banach algebra with |||I||| = 1 [16, pp. 227–228], by extending the ideas in [17].

2. Main results

The following inequalities are known as Heinz's inequalities [10, 14].

Lemma 2.1. (Heinz) Let |||·||| be a unitarily invariant norm on B(H). For positive A, B ∈ B(H), X ∈ B(H) and 0 ≤ α ≤ 1,

    |||A^α X B^{1−α}||| ≤ |||AX|||^α |||XB|||^{1−α},    (2.1)
    |||A^α X B^α||| ≤ |||AXB|||^α |||X|||^{1−α}.    (2.2)

Remark 2.2. In [17] inequality (2.2) and McIntosh's inequality

    |||A* X B||| ≤ |||AA*X|||^{1/2} |||XBB*|||^{1/2}

are used in the proof of (1.1). Yamazaki's proof [18] uses (2.2). We note that (2.2) and McIntosh's inequality follow from (2.1).

We simply write T_0 := T and T_n := Δ_λ^n(T), n ∈ N, once we fix 0 ≤ λ ≤ 1.

Lemma 2.3. Let T ∈ B(H) and 0 ≤ λ ≤ 1. Let |||·||| be a unitarily invariant norm on B(H). Then for k, n ∈ N,

    |||(T_{n+1})^k||| ≤ |||(T_n)^k|||.    (2.3)

So for each k ∈ N, lim_{n→∞} |||(T_n)^k||| exists. Moreover, if T is invertible, then

    |||(T_{n+1})^k |T_n|^{2λ−1}||| ≤ |||(T_n)^{k+1}|||^λ |||(T_n)^{k−1}|||^{1−λ},    (2.4)
    ||| |T_n|^{1−2λ} (T_{n+1})^k ||| ≤ |||(T_n)^{k+1}|||^{1−λ} |||(T_n)^{k−1}|||^λ.    (2.5)

Proof. Denote by T_n = U_n|T_n| the polar decomposition of T_n.
Since T_{n+1} = |T_n|^λ U_n |T_n|^{1−λ},

    (T_{n+1})^k = |T_n|^λ (T_n)^{k−1} U_n |T_n|^{1−λ}.    (2.6)

By (2.1),

    |||(T_{n+1})^k||| ≤ ||| |T_n|(T_n)^{k−1} U_n |||^λ ||| (T_n)^{k−1} U_n |T_n| |||^{1−λ} = |||(T_n)^k|||,

since |||·||| is unitarily invariant. So (2.3) is established.

Suppose that T is invertible, so that |T_n|^{2λ−1} exists for 0 ≤ λ ≤ 1. From (2.6),

    (T_{n+1})^k |T_n|^{2λ−1} = |T_n|^λ (T_n)^{k−1} U_n |T_n|^λ.

Since |||·||| is unitarily invariant and U_n is unitary, by (2.2)

    |||(T_{n+1})^k |T_n|^{2λ−1}||| ≤ ||| |T_n|(T_n)^{k−1} U_n |T_n| |||^λ |||(T_n)^{k−1} U_n|||^{1−λ}
                                  = |||(T_n)^{k+1}|||^λ |||(T_n)^{k−1}|||^{1−λ}.

Similarly (2.5) is established. □

When λ = 1/2, |T_n|^{2λ−1} = I for any T ∈ B(H). But the above computation does not work without assuming that T is invertible, since the polar decomposition of a general T ∈ B(H) only yields T = U|T| where U is only a partial isometry [16, p. 316]. The spectral norm enjoys ||T|| = || |T| ||, which is not valid for a general unitarily invariant norm |||·|||.

Lemma 2.4. (Spectral radius formula) [16, p. 235] Suppose that B(H) is a Banach algebra with respect to the norm |||·||| (not necessarily unitarily invariant). For T ∈ B(H),

    r(T) = lim_{k→∞} |||T^k|||^{1/k} = inf_{k∈N} |||T^k|||^{1/k}.

In particular, |||T||| ≥ r(T).

For B(H) to be a Banach algebra with respect to |||·|||, the norm in Lemma 2.4 has to be submultiplicative, i.e., |||ST||| ≤ |||S||| |||T||| [16, p. 227]. The unitarily invariant norm |||·||| in Lemma 2.3 need not be so. The condition |||I||| = 1 is inessential for the formula r(T) = lim_{k→∞} |||T^k|||^{1/k}; it remains valid even when |||I||| > 1. The formula r(T) = inf_{k∈N} |||T^k|||^{1/k} is valid for any normed algebra [6, p. 236].

Lemma 2.5. Suppose that B(H) is a Banach algebra with respect to the unitarily invariant norm |||·||| and |||I||| = 1. Let 0 < λ < 1. Let s_k := lim_{n→∞} |||(T_n)^k||| and s := s_1. If T ∈ B(H) is non-quasinilpotent, then s > 0. Moreover, if T ∈ B(H) is invertible, then s_k = s^k for each k ∈ N.

Proof. Let T ∈ B(H). By (2.3), for each k ∈ N the sequence {|||(T_n)^k|||}_{n∈N} is nonincreasing, so that s_k := lim_{n→∞} |||(T_n)^k||| exists. By Lemma 2.4 and (1.3), |||T_n||| ≥ r(T_n) = r(T). The spectrum σ(T) of T is a compact nonempty set. If T is non-quasinilpotent, i.e., r(T) > 0, then

    s := s_1 = lim_{n→∞} |||T_n||| ≥ r(T) > 0.    (2.7)

Now assume that T is invertible. We proceed by induction to show that s_k = s^k for all k ∈ N. When k = 1, the statement is trivial. Suppose that the statement is true for 1 ≤ k ≤ m.

Case 1: 0 < λ ≤ 1/2. By (2.1) (with A = |T_n|, X = B = I) we have

    0 < ||| |T_n|^{1−2λ} ||| ≤ ||| |T_n| |||^{1−2λ},

since |||I||| = 1 and 0 ≤ 1 − 2λ < 1. Since T is invertible, |T_n| is also invertible and thus |T_n|^{2λ−1} exists. So

    |||(T_{n+1})^m||| / ||| |T_n| |||^{1−2λ}
        ≤ |||(T_{n+1})^m||| / ||| |T_n|^{1−2λ} |||
        ≤ |||(T_{n+1})^m |T_n|^{2λ−1}|||                              since |||·||| is submultiplicative
        ≤ |||(T_n)^{m+1}|||^λ |||(T_n)^{m−1}|||^{1−λ}                  by (2.4)
        ≤ |||(T_n)^m|||^λ |||T_n|||^λ |||(T_n)^{m−1}|||^{1−λ}.

By the induction hypothesis, taking limits as n → ∞ yields

    s_m / s^{1−2λ} ≤ s_{m+1}^λ s^{(m−1)(1−λ)} ≤ s^{mλ} s^λ s^{(m−1)(1−λ)},

where s > 0 by (2.7). We have s^{(m+1)λ} ≤ s_{m+1}^λ ≤ s^{(m+1)λ}, and hence s_{m+1} = s^{m+1}.

Case 2: 1/2 < λ < 1.
Similar to Case 1, by (2.1) we have

    ||| |T_n|^{2λ−1} ||| ≤ ||| |T_n| |||^{2λ−1}

and

    |||(T_{n+1})^m||| / ||| |T_n| |||^{2λ−1}
        ≤ |||(T_{n+1})^m||| / ||| |T_n|^{2λ−1} |||
        ≤ ||| |T_n|^{1−2λ} (T_{n+1})^m |||
        ≤ |||(T_n)^{m+1}|||^{1−λ} |||(T_n)^{m−1}|||^λ                  by (2.5)
        ≤ |||(T_n)^m|||^{1−λ} |||T_n|||^{1−λ} |||(T_n)^{m−1}|||^λ.

So

    s_m / s^{2λ−1} ≤ s_{m+1}^{1−λ} s^{(m−1)λ} ≤ s^{m(1−λ)} s^{1−λ} s^{(m−1)λ},

which leads to s_{m+1} = s^{m+1}. □

Remark 2.6. Suppose λ = 1/2. In the proof of [17, Lemma 4], the possibility that s = 0 is not considered (the spectral norm ||·|| is the norm under consideration there). It amounts to r(T) = 0, that is, T is quasinilpotent [9, p. 50], [13, p. 381]. In the above induction proof, if λ = 1/2, one cannot deduce that s_{m+1} = s^{m+1} for |||·|||, granted that s^m ≤ s_{m+1}^{1/2} s^{(m−1)/2} ≤ s^m (which relies on (2.4), valid under the assumption that T is invertible) holds.
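To make Lemma 2.1 concrete, here is a small numerical sketch (not part of the paper) that checks inequality (2.1) for random positive definite A, B, an arbitrary X, and the trace norm, which is unitarily invariant. The test matrices, the exponents α, and the choice of norm are illustrative assumptions only.

```python
import numpy as np

def mat_power(A, t):
    """A^t for a Hermitian positive definite A, via its spectral decomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(w ** t) @ V.conj().T

def trace_norm(X):
    """Trace norm (sum of singular values) -- a unitarily invariant norm."""
    return np.linalg.svd(X, compute_uv=False).sum()

rng = np.random.default_rng(0)
n = 5
G1, G2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A = G1 @ G1.T + np.eye(n)   # positive definite
B = G2 @ G2.T + np.eye(n)   # positive definite
X = rng.standard_normal((n, n))

# Check (2.1): |||A^a X B^(1-a)||| <= |||AX|||^a |||XB|||^(1-a) for several a.
for a in (0.1, 0.25, 0.5, 0.75, 0.9):
    lhs = trace_norm(mat_power(A, a) @ X @ mat_power(B, 1 - a))
    rhs = trace_norm(A @ X) ** a * trace_norm(X @ B) ** (1 - a)
    print(f"alpha = {a:4.2f}:  lhs = {lhs:10.4f}  <=  rhs = {rhs:10.4f}")
```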
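Lemma 2.4 can likewise be observed numerically. The sketch below (an illustration, not from the paper) compares r(T) with ||T^k||^{1/k} for the spectral norm, which is submultiplicative and gives ||I|| = 1, so the matrix algebra is a Banach algebra under it; the test matrix and the cutoff k ≤ 40 are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((5, 5))
r = np.abs(np.linalg.eigvals(T)).max()   # spectral radius r(T)

vals = []
P = np.eye(5)
for k in range(1, 41):
    P = P @ T                                         # P = T^k
    vals.append(np.linalg.norm(P, 2) ** (1.0 / k))    # ||T^k||^(1/k), spectral norm

print("r(T)             =", r)
print("||T^40||^(1/40)  =", vals[-1])
print("inf over k <= 40 =", min(vals))   # always >= r(T); tends to r(T) as k grows
```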
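Finally, a minimal numerical sketch of the λ-Aluthge iteration itself in the matrix case (1.4): starting from a generically invertible random matrix, the spectral norm of the iterates Δ_λ^n(T) is nonincreasing by (1.2) and tends to the spectral radius r(T). The spectral norm is used because it satisfies all the hypotheses of the main result (unitarily invariant, submultiplicative, norm of the identity equal to 1); the test matrix, λ = 0.3, and the iteration count are arbitrary choices, not taken from the text.

```python
import numpy as np

def aluthge(T, lam):
    """The lambda-Aluthge transform D_lam(T) = |T|^lam U |T|^(1-lam), where T = U|T|."""
    # Polar decomposition via the SVD: T = W S V*  =>  U = W V*, |T| = V S V*.
    W, s, Vh = np.linalg.svd(T)
    U = W @ Vh
    V = Vh.conj().T
    power = lambda t: V @ np.diag(s ** t) @ Vh   # |T|^t, since |T| = V diag(s) V*
    return power(lam) @ U @ power(1.0 - lam)

rng = np.random.default_rng(2)
T = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))  # generically invertible
lam = 0.3
r = np.abs(np.linalg.eigvals(T)).max()   # spectral radius r(T), preserved by (1.3)

X = T
norms = [np.linalg.norm(X, 2)]
for n in range(100):
    X = aluthge(X, lam)
    norms.append(np.linalg.norm(X, 2))   # spectral norm of the n-th iterate

print("r(T)            =", r)
print("||D^100(T)||    =", norms[-1])
print("nonincreasing?  =", all(a >= b - 1e-12 for a, b in zip(norms, norms[1:])))
```

The gap between the two printed values shrinks as the number of iterations grows, in line with (1.1) and (1.4).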