13 Spectral theory

For operators on finite dimensional vector spaces, we can often find a basis of eigenvectors (which we use to diagonalize the matrix). If the operator is symmetric, this is always possible. In any case, we can always find a basis of generalized eigenvectors (which we can use to write the matrix in Jordan canonical form). The goal of this section is to prove the appropriate generalization of diagonalization for arbitrary bounded self-adjoint operators on Hilbert space. Unlike the case of compact operators, there are generally 'not enough' eigenvectors. We will define the spectrum σ(T) of an operator T, which contains the set of eigenvalues but can be larger, and then the spectral resolution, which describes an increasing family of subspaces on which T acts in a particularly simple way.

You may like to read some alternative sources for this material. Fortunately there are many good sources!

• Gilbert Helmberg's book 'Introduction to Spectral Theory in Hilbert Space'.
• John Erdos's notes on 'Operators on Hilbert Space', available at http://www.mth.kcl.ac.uk/~jerdos/OpTh/FP.htm.
• Dana Mendelson and Alexandre Tomberg's article in the McGill Undergraduate Mathematics Journal, available at http://www.math.mcgill.ca/jakobson/courses/ma667/mendelsontomberg-spectral.pdf.
• Chapter 6.6 of Stein & Shakarchi vol. III.
• Michael Taylor's notes on 'The Spectral Theorem for Self-Adjoint and Unitary Operators', available at http://www.unc.edu/math/Faculty/met/specthm.pdf.

13.1 The spectrum of an operator

Definition 13.1. The spectrum of an operator T is

σ(T) = {λ ∈ ℂ : T − λI is not invertible with bounded inverse}.

In finite dimensions the spectrum is just the set of eigenvalues. The operator T − λI can fail to be invertible in several ways. First, if λ is an eigenvalue, then the operator is not one-to-one. Even if the operator is one-to-one, its range may not be all of H. (This phenomenon does not occur in finite dimensions.)
The standard example is the shift operator on ℓ²(ℕ). Later, we'll see that omitting the phrase 'with bounded inverse' does not change anything; invertible bounded linear operators automatically have bounded inverses.

The complement of the spectrum is often referred to as the 'resolvent set', and the operator (T − λI)⁻¹ as the 'resolvent at λ'.

Lemma 13.2. If ‖A‖ < 1 then I − A is invertible, with bounded inverse Σ_{k=0}^∞ Aᵏ.

Proof: One easily checks that the given series converges to some bounded operator. Now

I − (I − A) Σ_{k=0}^n Aᵏ = Aⁿ⁺¹,

with norm converging to 0. Thus (I − A) Σ_{k=0}^∞ Aᵏ = I, and similarly (Σ_{k=0}^∞ Aᵏ)(I − A) = I. □

Lemma 13.3. If A is invertible and ‖A − B‖ < ‖A⁻¹‖⁻¹ then B is also invertible.

Proof: Apply Lemma 13.2 to A⁻¹B = I − (I − A⁻¹B). □

Lemma 13.4. The spectrum of an operator T is a closed subset of ℂ.

Proof: Take λ in the resolvent set of T, and μ within ‖(T − λI)⁻¹‖⁻¹ of λ. Then apply Lemma 13.3 with A = T − λI and B = T − μI, to see that T − μI is also invertible, so μ is in the resolvent set of T. □

Lemma 13.5. If ‖Af‖, ‖A*f‖ ≥ c‖f‖ for some constant c > 0, then A has a bounded inverse.

Proof: The inequality shows that A and A* are injective. Moreover, A has closed range, since if fₙ = Agₙ converge to some f, by the estimate the gₙ converge to some g, and then f = Ag. The range must be all of H: if z ⊥ range A, then

0 = ⟨z, Aw⟩ = ⟨A*z, w⟩

for all w, and since A* is injective, z = 0. Thus the operator A has an inverse, and that inverse is bounded, by the inequality. □

Lemma 13.6. The spectrum of a self-adjoint linear operator T is contained in the real line.

Proof: We want to construct a bounded inverse for T − (x + iy)I for any x ∈ ℝ and 0 ≠ y ∈ ℝ. It suffices to do this for T − iI. We calculate

‖(T − iI)f‖² = ⟨(T − iI)f, (T − iI)f⟩ = ‖Tf‖² + ⟨Tf, −if⟩ + ⟨−if, Tf⟩ + ‖f‖² ≥ ‖f‖²,

since the cross terms cancel (⟨Tf, f⟩ is real, as T is self-adjoint). The same argument applies to (T − iI)* = T + iI, and so by Lemma 13.5 T − iI has a bounded inverse.
□

(A similar argument shows that the spectrum of a unitary operator is contained in the unit circle in the complex plane.)

Lemma 13.7. The spectrum of a bounded linear operator T is compact, and in particular bounded by its norm: σ(T) ⊂ B_{‖T‖}(0).

Proof: Suppose |λ| > ‖T‖. Then I − λ⁻¹T is invertible with a bounded inverse by Lemma 13.2, and hence T − λI also has a bounded inverse, so λ is in the resolvent set. This shows that the spectrum is a bounded set, and combined with Lemma 13.4 and the Heine–Borel theorem, we see that the spectrum is compact. □

Theorem 13.8. The norm of a bounded self-adjoint linear operator T is the largest absolute value of a point in the spectrum:

‖T‖ = max_{λ∈σ(T)} |λ|.

Proof: Define

α = inf_{‖f‖=1} ⟨Tf, f⟩,  β = sup_{‖f‖=1} ⟨Tf, f⟩.

Observe that ‖T‖ = max{|α|, |β|}, and since β ≥ α, in fact ‖T‖ = max{−α, β}. We'll first show that σ(T) ⊂ [α, β], and then that α, β ∈ σ(T).

If λ ∉ [α, β], then another application of Lemma 13.5 shows that T − λI has a bounded inverse, so λ ∉ σ(T). In order to apply that lemma, we need to show that T − λI (which is self-adjoint) is bounded below. Pick a unit vector f, and compute:

‖(T − λI)f‖² = ‖(T − ⟨Tf, f⟩)f + (⟨Tf, f⟩ − λ)f‖² = ‖(T − ⟨Tf, f⟩)f‖² + |⟨Tf, f⟩ − λ|²

since the cross terms vanish, because ⟨(T − ⟨Tf, f⟩)f, f⟩ = 0, and then

‖(T − λI)f‖² ≥ |⟨Tf, f⟩ − λ|² ≥ dist(λ, [α, β])².

We next want to show that α, β ∈ σ(T). Suppose first that ‖T‖ = β. (The other case is similar, and we omit it.) We choose unit vectors fₙ so that ⟨Tfₙ, fₙ⟩ → β as n → ∞. In fact, we then have

‖(T − βI)fₙ‖² ≤ ‖T‖² − 2β⟨Tfₙ, fₙ⟩ + β² = 2‖T‖(‖T‖ − ⟨Tfₙ, fₙ⟩) → 0 as n → ∞.

Observe this makes it impossible for T − βI to have a bounded inverse, as if it did we could compute

1 = ‖fₙ‖² = ‖(T − βI)⁻¹(T − βI)fₙ‖² ≤ ‖(T − βI)⁻¹‖² ‖(T − βI)fₙ‖² → 0 as n → ∞.

Thus β ∈ σ(T).
Next consider the operator S = βI − T, which has

α_S = inf_{‖f‖=1} ⟨(βI − T)f, f⟩ = 0,  β_S = sup_{‖f‖=1} ⟨(βI − T)f, f⟩ = β − α,

and ‖S‖ = β − α. Repeating the exact same argument as above, we see that β − α ∈ σ(S), which is equivalent to α ∈ σ(T), because both say that T − αI does not have a bounded inverse. □

Theorem 13.8 does not hold for non-self-adjoint operators; the general statement for bounded linear operators is that the 'spectral radius' is equal to lim_{n→∞} ‖Tⁿ‖^{1/n}. Consider the finite dimensional example

T = (1 1; 0 1),

for which ‖Tⁿ‖ ≥ √(n² + 1) (apply Tⁿ to the second basis vector), so ‖Tⁿ‖^{1/n} → 1 while ‖T‖ > 1.

Theorem 13.9 (Spectral mapping theorem). Let p be a polynomial. Then σ(p(T)) = p(σ(T)).

Proof: If p is constant, the result is trivial. Otherwise, take λ ∈ σ(T), and observe that λ is a root of the polynomial p(x) − p(λ), so we have p(x) − p(λ) = (x − λ)q(x) for some polynomial q, and hence

p(T) − p(λ)I = (T − λI)q(T) = q(T)(T − λI).

Then p(λ) cannot be in the resolvent set of p(T), for then (p(T) − p(λ)I)⁻¹q(T) would give a bounded inverse for T − λI (all of these operators commute). Thus p(λ) ∈ σ(p(T)).

Conversely, suppose μ ∈ σ(p(T)). Factor p(x) − μ = c ∏ᵢ (x − xᵢ) into a product of linear factors, so we also have p(T) − μI = c ∏ᵢ (T − xᵢI). If every xᵢ were in the resolvent set of T, this expression would be invertible, giving a contradiction. Thus some xᵢ ∈ σ(T), and then p(xᵢ) − μ = 0, giving the result. □

Theorem 13.10. Suppose T is a bounded self-adjoint linear operator and p is a polynomial. Then

‖p(T)‖ = max_{t∈σ(T)} |p(t)|.

Proof: First suppose p is a real polynomial. We use Theorem 13.8 to see ‖p(T)‖ = max_{λ∈σ(p(T))} |λ|, and then the spectral mapping theorem (Theorem 13.9) to obtain the conclusion. For the general case, note that ‖A‖² = ‖A*A‖ for all bounded operators A, and then ‖p(T)‖² = ‖p(T)*p(T)‖ = ‖(p̄p)(T)‖, at which point we can use the result for the real polynomial p̄p. □

13.2 Positive operators

We'll now introduce another class of linear operators, the positive operators.
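Theorems 13.8 and 13.10 can be sanity-checked in finite dimensions, where a real symmetric matrix is a bounded self-adjoint operator and its spectrum is its set of eigenvalues. A minimal NumPy sketch (the matrix and the polynomial p(t) = t² − 2t are arbitrary illustrations, not from the text):

```python
import numpy as np

# A symmetric (hence self-adjoint) matrix; its spectrum is its eigenvalues.
T = np.array([[2.0, 1.0],
              [1.0, 3.0]])
eigs = np.linalg.eigvalsh(T)

# Theorem 13.8: the operator norm equals the largest |eigenvalue|.
norm_T = np.linalg.norm(T, 2)
spec_radius = max(abs(eigs))

# Theorem 13.10 for p(t) = t^2 - 2t: ||p(T)|| = max of |p(t)| over the spectrum.
pT = T @ T - 2 * T
lhs = np.linalg.norm(pT, 2)
rhs = max(abs(t**2 - 2 * t) for t in eigs)
```

Both pairs agree to machine precision, exactly as the theorems predict for self-adjoint operators.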
AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

JOEL A. TROPP

Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula

ρ(A) = lim_{n→∞} ‖Aⁿ‖^{1/n},

where ‖·‖ represents any matrix norm.

1. Introduction

It is a well-known fact from the theory of Banach algebras that the spectral radius of any element A is given by the formula

ρ(A) = lim_{n→∞} ‖Aⁿ‖^{1/n}.  (1.1)

For a matrix, the spectrum is just the collection of eigenvalues, so this formula yields a technique for estimating the top eigenvalue. The proof of Equation 1.1 is beautiful but advanced. See, for example, Rudin's treatment in his Functional Analysis. It turns out that elementary techniques suffice to develop the formula for matrices.

2. Preliminaries

For completeness, we shall briefly introduce the major concepts required in the proof. It is expected that the reader is already familiar with these ideas.

2.1. Norms. A norm is a mapping ‖·‖ from a vector space X into the nonnegative real numbers ℝ₊ which has three properties:
(1) ‖x‖ = 0 if and only if x = 0;
(2) ‖αx‖ = |α| ‖x‖ for any scalar α and vector x; and
(3) ‖x + y‖ ≤ ‖x‖ + ‖y‖ for any vectors x and y.

The most fundamental example of a norm is the Euclidean norm ‖·‖₂, which corresponds to the standard topology on ℝⁿ. It is defined by

‖x‖₂² = ‖(x₁, x₂, …, xₙ)‖₂² = x₁² + x₂² + ⋯ + xₙ².

Date: 30 November 2001.
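Formula (1.1) is easy to observe numerically. A small sketch (the matrix is an illustration; its norm exceeds its spectral radius, yet ‖Aⁿ‖^{1/n} still approaches ρ(A) = 1):

```python
import numpy as np

# A non-normal matrix: both eigenvalues are 1, so rho(A) = 1, but ||A|| > 1.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
rho = max(abs(np.linalg.eigvals(A)))

# ||A^n||^(1/n) creeps down toward rho(A) as n grows.
n = 500
approx = np.linalg.norm(np.linalg.matrix_power(A, n), 2) ** (1.0 / n)
```

Here ‖Aⁿ‖ grows only linearly in n, so its n-th root tends to 1 even though every fixed power has norm larger than 1.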
1 Bounded and unbounded operators

1. Let X, Y be Banach spaces and D ⊂ X a linear subspace, not necessarily closed.
2. A linear operator is any linear map T : D → Y.
3. D is the domain of T, sometimes written Dom(T) or D(T).
4. T(D) is the range of T, Ran(T).
5. The graph of T is Γ(T) = {(x, Tx) : x ∈ D(T)}.
6. The kernel of T is Ker(T) = {x ∈ D(T) : Tx = 0}.

1.1 Operations

1. aT₁ + bT₂ is defined on D(T₁) ∩ D(T₂).
2. If T₁ : D(T₁) ⊂ X → Y and T₂ : D(T₂) ⊂ Y → Z, then T₂T₁ is defined on {x ∈ D(T₁) : T₁(x) ∈ D(T₂)}. In particular, inductively, D(Tⁿ) = {x ∈ D(Tⁿ⁻¹) : T(x) ∈ D(T)}. The domain may become trivial.
3. Inverse. The inverse is defined if Ker(T) = {0}. This implies T is injective, hence bijective onto its range. Then T⁻¹ : Ran(T) → D(T) is defined as the usual function inverse, and is clearly linear. ∂ is not invertible on C¹[0, 1]: Ker ∂ = ℂ, the constants.
4. Closable operators. It is natural to extend functions by continuity, when possible. If xₙ → x and Txₙ → y, we want to see whether we can define Tx = y. Clearly, we must have

xₙ → 0 and Txₙ → y  ⇒  y = 0,  (1)

since T(0) = 0. Conversely, (1) implies that the extension Tx := y, whenever xₙ → x and Txₙ → y, is consistent and defines a linear operator.
San José State University
Math 253: Mathematical Methods for Data Visualization
Lecture 5: Singular Value Decomposition (SVD)
Dr. Guangliang Chen | Mathematics & Statistics, San José State University

Outline

• Matrix SVD

Introduction

We have seen that symmetric matrices are always (orthogonally) diagonalizable. That is, for any symmetric matrix A ∈ ℝⁿˣⁿ, there exist an orthogonal matrix Q = [q₁ … qₙ] and a diagonal matrix Λ = diag(λ₁, …, λₙ), both real and square, such that A = QΛQᵀ. We have pointed out that the λᵢ are the eigenvalues of A and the qᵢ the corresponding eigenvectors (which are orthogonal to each other and have unit norm). Thus, such a factorization is called the eigendecomposition of A, also called the spectral decomposition of A.

What about general rectangular matrices?

Existence of the SVD for general matrices

Theorem: For any matrix X ∈ ℝⁿˣᵈ, there exist two orthogonal matrices U ∈ ℝⁿˣⁿ, V ∈ ℝᵈˣᵈ and a nonnegative, "diagonal" matrix Σ ∈ ℝⁿˣᵈ (of the same size as X) such that

X = U Σ Vᵀ,  with U of size n×n, Σ of size n×d, and Vᵀ of size d×d.

Remark. This is called the Singular Value Decomposition (SVD) of X:
• The diagonal entries of Σ are called the singular values of X (often sorted in decreasing order).
• The columns of U are called the left singular vectors of X.
• The columns of V are called the right singular vectors of X.

[Figure: block diagrams of the factorization X = UΣVᵀ in the two cases n > d and n < d.]
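The existence theorem can be checked directly with a numerical SVD routine; a short sketch (the random matrix is an arbitrary illustration):

```python
import numpy as np

# Any n x d matrix admits an SVD: X = U @ Sigma @ V.T with U, V orthogonal.
rng = np.random.default_rng(0)
n, d = 5, 3
X = rng.standard_normal((n, d))
U, s, Vt = np.linalg.svd(X, full_matrices=True)   # U: n x n, Vt: d x d

# Rebuild the n x d "diagonal" Sigma and verify the factorization.
Sigma = np.zeros((n, d))
Sigma[:d, :d] = np.diag(s)
recon_err = np.linalg.norm(U @ Sigma @ Vt - X)
orth_err = (np.linalg.norm(U.T @ U - np.eye(n))
            + np.linalg.norm(Vt @ Vt.T - np.eye(d)))
sorted_desc = bool(np.all(s[:-1] >= s[1:]))       # singular values decreasing
```

Note that `np.linalg.svd` returns only the vector of singular values, so the rectangular Σ must be rebuilt by hand when n ≠ d.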
Banach J. Math. Anal. 2 (2008), no. 2, 59–67
Banach Journal of Mathematical Analysis
ISSN: 1735-8787 (electronic)
http://www.math-analysis.org

OPERATOR-VALUED INNER PRODUCT AND OPERATOR INEQUALITIES

JUN ICHI FUJII¹

This paper is dedicated to Professor Josip E. Pečarić

Submitted by M. S. Moslehian

Abstract. The Schwarz inequality and Jensen's one are fundamental in a Hilbert space. Regarding a sesquilinear map B(X, Y) = Y*X as an operator-valued inner product, we discuss operator versions for the above inequalities and give simple conditions that the equalities hold.

1. Introduction

Inequality plays a basic role in analysis and consequently in Mathematics. As surveyed briefly in [6], operator inequalities on a Hilbert space have been discussed particularly since the Furuta inequality was established. But it is not easy in general to give a simple condition that the equality in an operator inequality holds. In this note, we observe basic operator inequalities and discuss the equality conditions. To show this, we consider simple linear algebraic structure in operator spaces: For Hilbert spaces H and K, the symbol B(H, K) denotes all the (bounded linear) operators from H to K, and B(H) ≡ B(H, H). Then, consider an operator matrix A = (Aᵢⱼ) ∈ B(Hⁿ), a vector X = (Xⱼ) ∈ B(H, Hⁿ) with operator entries Xⱼ ∈ B(H), and an operator-valued inner product

Y*X = Σ_{j=1}^n Yⱼ*Xⱼ.

Date: Received: 29 March 2008; Accepted: 13 May 2008.
2000 Mathematics Subject Classification. Primary 47A63; Secondary 47A75, 47A80.
Key words and phrases. Schwarz inequality, Jensen inequality, operator inequality.
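In finite dimensions the operator-valued inner product Y*X = Σⱼ Yⱼ*Xⱼ can be computed directly, and two of its inner-product-like properties checked numerically. A small sketch (the dimensions and the random complex entries are arbitrary illustrations):

```python
import numpy as np

# Tuples of operators X = (X_1,...,X_n), Y = (Y_1,...,Y_n) on a finite
# dimensional H, with the operator-valued inner product Y*X = sum_j Y_j* X_j.
rng = np.random.default_rng(1)
n, d = 3, 2   # n operator entries, each a d x d complex matrix
X = rng.standard_normal((n, d, d)) + 1j * rng.standard_normal((n, d, d))
Y = rng.standard_normal((n, d, d)) + 1j * rng.standard_normal((n, d, d))

inner_YX = sum(Y[j].conj().T @ X[j] for j in range(n))
inner_XY = sum(X[j].conj().T @ Y[j] for j in range(n))

# Inner-product-like properties: (Y*X)* = X*Y, and X*X is positive semidefinite.
gram = sum(X[j].conj().T @ X[j] for j in range(n))
min_eig = min(np.linalg.eigvalsh(gram))
```

Positivity of X*X is what makes operator versions of the Schwarz inequality meaningful in this setting.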
U.U.D.M. Project Report 2008:1
On the origin and early history of functional analysis
Jens Lindström

Degree project in mathematics, 30 credits. Supervisor and examiner: Sten Kaijser. January 2008.
Department of Mathematics, Uppsala University

Abstract

In this report we will study the origins and history of functional analysis up until 1918. We begin by studying ordinary and partial differential equations in the 18th and 19th centuries to see why there was a need to develop the concepts of functions and limits. We will see how a general theory of infinite systems of equations and determinants by Helge von Koch was used in Ivar Fredholm's 1900 paper on the integral equation

φ(s) = f(s) + λ ∫_a^b K(s, t)f(t) dt  (1)

which resulted in a vast study of integral equations. One of the most enthusiastic followers of Fredholm and integral equation theory was David Hilbert, and we will see how he further developed the theory of integral equations and spectral theory. The concept introduced by Fredholm to study sets of transformations, or operators, made Maurice Fréchet realize that the focus should be shifted from particular objects to sets of objects and the algebraic properties of these sets. This led him to introduce abstract spaces, and we will see how he introduced the axioms that define them. Finally, we will investigate how the Lebesgue theory of integration was used by Frigyes Riesz, who was able to connect all the theory of Fredholm, Fréchet and Lebesgue to form a general theory, and a new discipline of mathematics, now known as functional analysis.
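Equations of the form (1) can be explored numerically by discretizing the integral with a quadrature rule; reading f as the unknown and φ as the data turns (1) into a linear system. A minimal sketch, in which the kernel, the grid, λ, and the manufactured solution are all illustrative assumptions:

```python
import numpy as np

# Discretize phi = f + lambda * int_a^b K(s,t) f(t) dt on a uniform grid
# (rectangle rule), reading f as the unknown, and solve the linear system.
a, b, m, lam = 0.0, 1.0, 200, 0.5
t = np.linspace(a, b, m)
w = (b - a) / m                                    # quadrature weight
K = np.exp(-np.abs(t[:, None] - t[None, :]))       # sample kernel K(s, t)

f_true = np.sin(np.pi * t)                         # manufactured solution
phi = f_true + lam * w * (K @ f_true)              # corresponding data

f_solved = np.linalg.solve(np.eye(m) + lam * w * K, phi)
err = np.max(np.abs(f_solved - f_true))
```

This "discretize and solve a finite linear system" viewpoint is essentially how Fredholm's theory connected integral equations to von Koch's infinite systems of equations and determinants.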
STAT 309: MATHEMATICAL COMPUTATIONS I
FALL 2018
LECTURE 4

1. Spectral radius

• The matrix 2-norm is also known as the spectral norm.
• The name is connected to the fact that the norm is given by the square root of the largest eigenvalue of AᵀA, i.e., the largest singular value of A (more on this later).
• In general, the spectral radius ρ(A) of a matrix A ∈ ℂⁿˣⁿ is defined in terms of its largest eigenvalue:

ρ(A) = max{|λᵢ| : Axᵢ = λᵢxᵢ, xᵢ ≠ 0}.

• Note that the spectral radius does not define a norm on ℂⁿˣⁿ. For example, the nonzero matrix

J = (0 1; 0 0)

has ρ(J) = 0, since both its eigenvalues are 0.
• There are some relationships between the norm of a matrix and its spectral radius. The easiest one is that

ρ(A) ≤ ‖A‖

for any matrix norm that satisfies the inequality ‖Ax‖ ≤ ‖A‖‖x‖ for all x ∈ ℂⁿ, i.e., a consistent norm.
  – Here's a proof: Axᵢ = λᵢxᵢ; taking norms, ‖Axᵢ‖ = ‖λᵢxᵢ‖ = |λᵢ|‖xᵢ‖, and therefore

|λᵢ| = ‖Axᵢ‖/‖xᵢ‖ ≤ ‖A‖.

Since this holds for any eigenvalue of A, it follows that maxᵢ |λᵢ| = ρ(A) ≤ ‖A‖.
  – In particular this is true for any operator norm.
  – This is in general not true for norms that do not satisfy the consistency inequality ‖Ax‖ ≤ ‖A‖‖x‖ (thanks to Likai Chen for pointing this out); for example the matrix

A = (1/√2 1/√2; 1/√2 −1/√2)

is orthogonal, and therefore ρ(A) = 1, but ‖A‖_{H,∞} = 1/√2, and so ρ(A) > ‖A‖_{H,∞}.
  – Exercise: show that any eigenvalue of a unitary or an orthogonal matrix must have absolute value 1.

Date: October 10, 2018, version 1.0.
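The two facts above, that ρ fails to be a norm and that ρ(A) ≤ ‖A‖ for consistent norms, are quick to verify numerically. A small sketch (the second matrix is an arbitrary illustration):

```python
import numpy as np

# The nonzero matrix J has rho(J) = 0, so rho is not a norm.
J = np.array([[0.0, 1.0],
              [0.0, 0.0]])
rho_J = max(abs(np.linalg.eigvals(J)))

# rho(A) <= ||A|| for consistent norms (the 2-, 1-, and inf-norms all qualify).
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
rho_A = max(abs(np.linalg.eigvals(A)))
bounds = [np.linalg.norm(A, 2), np.linalg.norm(A, 1), np.linalg.norm(A, np.inf)]
```

Each of the three induced norms gives a valid upper bound on the spectral radius, consistent with the proof above.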
Inverse and Implicit Function Theorems for Noncommutative Functions on Operator Domains

Mark E. Mancuso

Abstract

Classically, a noncommutative function is defined on a graded domain of tuples of square matrices. In this note, we introduce a notion of a noncommutative function defined on a domain Ω ⊂ B(H)ᵈ, where H is an infinite dimensional Hilbert space. Inverse and implicit function theorems in this setting are established. When these operatorial noncommutative functions are suitably continuous in the strong operator topology, a noncommutative dilation-theoretic construction is used to show that the assumptions on their derivatives may be relaxed from boundedness below to injectivity.

Keywords: Noncommutative functions, operator noncommutative functions, free analysis, inverse and implicit function theorems, strong operator topology, dilation theory.
MSC (2010): Primary 46L52; Secondary 47A56, 47J07.

arXiv:1804.01040v2 [math.FA] 7 Aug 2019

INTRODUCTION

Polynomials in d noncommuting indeterminates can naturally be evaluated on d-tuples of square matrices of any size. The resulting function is graded (tuples of n × n matrices are mapped to n × n matrices) and preserves direct sums and similarities. Along with polynomials, noncommutative rational functions and power series, the convergence of which has been studied for example in [9], [14], [15], serve as prototypical examples of a more general class of functions called noncommutative functions. The theory of noncommutative functions finds its origin in the 1973 work of J. L. Taylor [17], who studied the functional calculus of noncommuting operators. Roughly speaking, noncommutative functions are to polynomials in noncommuting variables as holomorphic functions from complex analysis are to polynomials in commuting variables.
Math 108b: Notes on the Spectral Theorem

From Section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T* is defined as the unique linear operator on V such that ⟨T(x), y⟩ = ⟨x, T*(y)⟩ for every x, y ∈ V; see Theorems 6.8 and 6.9.) When V is infinite dimensional, the adjoint T* may or may not exist.

One useful fact (Theorem 6.10) is that if β is an orthonormal basis for a finite dimensional inner product space V, then [T*]_β = ([T]_β)*. That is, the matrix representation of the operator T* is equal to the conjugate transpose of the matrix representation of T.

For a general vector space V and a linear operator T, we have already asked the question "when is there a basis of V consisting only of eigenvectors of T?"; this is exactly when T is diagonalizable. Now, for an inner product space V, we know how to check whether vectors are orthogonal, and we know how to define the norms of vectors, so we can ask "when is there an orthonormal basis of V consisting only of eigenvectors of T?" Clearly, if there is such a basis, T is diagonalizable; moreover, eigenvectors with distinct eigenvalues must be orthogonal.

Definitions. Let V be an inner product space, and let T ∈ L(V).
(a) T is normal if T*T = TT*.
(b) T is self-adjoint if T* = T.
For the next two definitions, assume V is finite dimensional. Then:
(c) T is unitary if F = ℂ and ‖T(x)‖ = ‖x‖ for every x ∈ V.
(d) T is orthogonal if F = ℝ and ‖T(x)‖ = ‖x‖ for every x ∈ V.

Notes 1.
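The definitions above can be illustrated numerically: in the standard (orthonormal) basis the adjoint is the conjugate transpose, and for a normal operator with distinct eigenvalues, unit eigenvectors are automatically orthonormal. A small sketch (the rotation matrix is an arbitrary illustration):

```python
import numpy as np

# A rotation matrix: normal (T T* = T* T); in the standard orthonormal basis
# the adjoint is just the conjugate transpose.
T = np.array([[0.0, -1.0],
              [1.0,  0.0]])
T_star = T.conj().T
normal_defect = np.linalg.norm(T @ T_star - T_star @ T)

# Its eigenvalues +-i are distinct, so unit eigenvectors are orthonormal.
eigvals, V = np.linalg.eig(T)
orth_defect = np.linalg.norm(V.conj().T @ V - np.eye(2))
```

This matrix is also orthogonal (it preserves norms on ℝ²), so its eigenvalues land on the unit circle, matching definitions (c) and (d).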
This 1984 technical report was published as

N. J. Higham. Computing the polar decomposition---with applications. SIAM J. Sci. Stat. Comput., 7(4):1160-1174, 1986.

The appendix contains an alternative proof omitted in the published paper. Pages 19-20 are missing from the copy that was scanned.

The report is mainly of interest for how it was produced.
- The cover was prepared in Vizawrite 64 on a Commodore 64 and was printed on an Epson FX-80 dot matrix printer.
- The main text was prepared in Vuwriter on an Apricot PC and printed on an unknown dot matrix printer.
- The bibliography was prepared in Superbase on a Commodore 64 and printed on an Epson FX-80 dot matrix printer.

COMPUTING THE POLAR DECOMPOSITION - WITH APPLICATIONS

N. J. Higham

Numerical Analysis Report No. 94
November 1984

This work was carried out with the support of a SERC Research Studentship.

Department of Mathematics
University of Manchester
Manchester M13 9PL
ENGLAND

University of Manchester/UMIST Joint Numerical Analysis Reports: Department of Mathematics, The Victoria University of Manchester; Department of Mathematics, University of Manchester Institute of Science and Technology. Requests for individual technical reports may be addressed to Dr C.T.H. Baker, Department of Mathematics, The University, Manchester M13 9PL.

ABSTRACT

A quadratically convergent Newton method for computing the polar decomposition of a full-rank matrix is presented and analysed. Acceleration parameters are introduced so as to enhance the initial rate of convergence, and it is shown how reliable estimates of the optimal parameters may be computed in practice.
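The Newton iteration for the polar decomposition, X ← (X + X⁻ᵀ)/2 starting from X = A, can be sketched in a few lines. This is the plain, unaccelerated iteration; the acceleration parameters analysed in the report are omitted, and the matrix and iteration count are illustrative:

```python
import numpy as np

# Newton iteration for the orthogonal polar factor of a real full-rank
# matrix A = U H:  X <- (X + X^{-T}) / 2, starting from X = A.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
X = A.copy()
for _ in range(50):
    X = 0.5 * (X + np.linalg.inv(X).T)

U = X                     # orthogonal polar factor
H = U.T @ A               # symmetric positive definite factor
orth_err = np.linalg.norm(U.T @ U - np.eye(4))
sym_err = np.linalg.norm(H - H.T)
min_eig = min(np.linalg.eigvalsh(0.5 * (H + H.T)))
```

Convergence is quadratic once the iterates are near orthogonality, which is why the report's acceleration parameters target the initial phase of the iteration.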
Hilbert's projective metric in quantum information theory
D. Reeb, M. J. Kastoryano and M. M. Wolf
REPORT No. 31, 2010/2011, fall
ISSN 1103-467X
ISRN IML-R- -31-10/11- -SE+fall

HILBERT'S PROJECTIVE METRIC IN QUANTUM INFORMATION THEORY

David Reeb, Michael J. Kastoryano, and Michael M. Wolf
Department of Mathematics, Technische Universität München, 85748 Garching, Germany
Niels Bohr Institute, University of Copenhagen, 2100 Copenhagen, Denmark
(Dated: August 15, 2011)

We introduce and apply Hilbert's projective metric in the context of quantum information theory. The metric is induced by convex cones such as the sets of positive, separable or PPT operators. It provides bounds on measures for statistical distinguishability of quantum states and on the decrease of entanglement under LOCC protocols or other cone-preserving operations. The results are formulated in terms of general cones and base norms and lead to contractivity bounds for quantum channels, for instance improving Ruskai's trace-norm contraction inequality. A new duality between distinguishability measures and base norms is provided. For two given pairs of quantum states we show that the contraction of Hilbert's projective metric is necessary and sufficient for the existence of a probabilistic quantum operation that maps one pair onto the other. Inequalities between Hilbert's projective metric and the Chernoff bound, the fidelity and various norms are proven.

Contents
I. Introduction
II. Basic concepts
III. Base norms and negativities
IV. Contractivity properties of positive maps
V. Distinguishability measures
VI. Fidelity and Chernoff bound inequalities
VII. Operational interpretation
VIII.
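On the simplest cone, the positive orthant of ℝⁿ, Hilbert's projective metric takes the classical form d(x, y) = log(maxᵢ xᵢ/yᵢ · maxⱼ yⱼ/xⱼ): it vanishes exactly on proportional vectors, and cone-preserving (here, entrywise positive) maps never increase it. A small numerical sketch, with vectors and matrix chosen purely as illustrations:

```python
import numpy as np

def hilbert_metric(x, y):
    """Hilbert's projective metric on the open positive orthant."""
    r = x / y
    return float(np.log(r.max() / r.min()))

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])      # proportional to x
z = np.array([1.0, 1.0, 4.0])

d_xy = hilbert_metric(x, y)        # 0: the metric ignores overall scaling
M = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 2.0]])    # entrywise positive, hence cone-preserving
d_before = hilbert_metric(x, z)
d_after = hilbert_metric(M @ x, M @ z)
```

The contraction d_after < d_before is an instance of Birkhoff's theorem for positive maps, the finite dimensional prototype of the contractivity bounds studied in the paper.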
The notion of observable and the moment problem for ∗-algebras and their GNS representations

Nicolò Drago^a, Valter Moretti^b
Department of Mathematics, University of Trento, and INFN-TIFPA
via Sommarive 14, I-38123 Povo (Trento), Italy.
^a nicolo.drago@unitn.it, ^b valter.moretti@unitn.it
February 17, 2020

Abstract

We address some usually overlooked issues concerning the use of ∗-algebras in quantum theory and their physical interpretation. If A is a ∗-algebra describing a quantum system and ω : A → ℂ a state, we focus in particular on the interpretation of ω(a) as expectation value for an algebraic observable a = a* ∈ A, studying the problem of finding a probability measure reproducing the moments {ω(aⁿ)}_{n∈ℕ}. This problem enjoys a close relation with the selfadjointness of the (in general only symmetric) operator π_ω(a) in the GNS representation of ω, and thus it has important consequences for the interpretation of a as an observable. We provide physical examples (also from QFT) where the moment problem for {ω(aⁿ)}_{n∈ℕ} does not admit a unique solution. To reduce this ambiguity, we consider the moment problem for the sequences {ω_b(aⁿ)}_{n∈ℕ}, where b ∈ A and ω_b(·) := ω(b*·b). Letting μ_{ω_b}^{(a)} be a solution of the moment problem for the sequence {ω_b(aⁿ)}_{n∈ℕ}, we introduce a consistency relation on the family {μ_{ω_b}^{(a)}}_{b∈A}. We prove a 1-1 correspondence between consistent families {μ_{ω_b}^{(a)}}_{b∈A} and positive operator-valued measures (POVMs) associated with the symmetric operator π_ω(a). In particular, there exists a unique consistent family {μ_{ω_b}^{(a)}}_{b∈A} if and only if π_ω(a) is maximally symmetric.
Asymptotic Spectral Measures: Between Quantum Theory and E-theory

Jody Trout
Department of Mathematics
Dartmouth College
Hanover, NH 03755
Email: jody.trout@dartmouth.edu

Abstract— We review the relationship between positive operator-valued measures (POVMs) in quantum measurement theory and asymptotic morphisms in the C∗-algebra E-theory of Connes and Higson. The theory of asymptotic spectral measures, as introduced by Martinez and Trout [1], is integrally related to positive asymptotic morphisms on locally compact spaces via an asymptotic Riesz Representation Theorem. Examples and applications to quantum physics, including quantum noise models, semiclassical limits, pure spin one-half systems and quantum information processing, will also be discussed.

I. INTRODUCTION

In the von Neumann Hilbert space model [2] of quantum mechanics, quantum observables are modeled as self-adjoint operators on the Hilbert space of states of the quantum system. The Spectral Theorem relates this theoretical view of a quantum observable to the more operational one of a projection-valued measure. See the papers [12]–[14] and the books [15], [16] for more on the connections between operator algebra K-theory, E-theory, and quantization (from the algebra of classical observables to the C∗-algebra of quantum observables).

In [1], Martinez and Trout showed that there is a fundamental quantum–E-theory relationship by introducing the concept of an asymptotic spectral measure (ASM or asymptotic PVM)

{A_ℏ}_{ℏ∈(0,1]} : Σ → B(H)

associated to a measurable space (X, Σ). (See Definition 4.1.) Roughly, this is a continuous family of POV-measures which are "asymptotically" projective (or quasiprojective) as ℏ → 0:

A_ℏ(Δ ∩ Δ′) − A_ℏ(Δ)A_ℏ(Δ′) → 0 as ℏ → 0

for certain measurable sets Δ, Δ′ ∈ Σ.