7. Operator Theory on Hilbert Spaces


In this section we take a closer look at linear continuous maps between Hilbert spaces. These are often called bounded operators, and the branch of Functional Analysis that studies these objects is called "Operator Theory." The standard notations in Operator Theory are as follows.

Notations. If $H_1$ and $H_2$ are Hilbert spaces, the Banach space
$$L(H_1, H_2) = \{ T : H_1 \to H_2 \ : \ T \text{ linear continuous} \}$$
will be denoted by $B(H_1, H_2)$. In the case of one Hilbert space $H$, the space $L(H, H)$ is simply denoted by $B(H)$. Given $T \in B(H_1, H_2)$ and $S \in B(H_2, H_3)$, their composition $S \circ T \in B(H_1, H_3)$ will simply be denoted by $ST$.

We know that $B(H)$ is a unital Banach algebra. This Banach algebra will be our main tool for investigating bounded operators. The general theme of this section is to study the interplay between the operator theoretic properties of a bounded linear operator, say $T : H \to H$, and the algebraic properties of $T$ as an element of the Banach algebra $B(H)$. Some of the concepts associated with bounded linear operators are in fact defined using this Banach algebra. For example, for $T \in B(H)$ we denote its spectrum in $B(H)$ by $\mathrm{Spec}_H(T)$, that is,
$$\mathrm{Spec}_H(T) = \{ \lambda \in \mathbb{C} \ : \ T - \lambda I : H \to H \text{ not invertible} \},$$
and we define its spectral radius
$$\mathrm{rad}_H(T) = \max\{ |\lambda| \ : \ \lambda \in \mathrm{Spec}_H(T) \}.$$

We start with an important technical result, for which we need the following.

Lemma 7.1. Let $X$ and $Y$ be normed vector spaces. Equip $X \times Y$ with the product topology. For a sesquilinear map $\phi : X \times Y \to \mathbb{C}$, the following are equivalent:

(i) $\phi$ is continuous;

(ii) $\phi$ is continuous at $(0, 0)$;

(iii) $\sup\{ |\phi(x, y)| \ : \ (x, y) \in X \times Y,\ \|x\|, \|y\| \leq 1 \} < \infty$;

(iv) there exists some constant $C \geq 0$, such that
$$|\phi(x, y)| \leq C \cdot \|x\| \cdot \|y\|, \quad \forall\, (x, y) \in X \times Y.$$

Moreover, the number in (iii) is equal to
$$\min\{ C \geq 0 \ : \ |\phi(x, y)| \leq C \cdot \|x\| \cdot \|y\|,\ \forall\, (x, y) \in X \times Y \}. \tag{1}$$

Proof. The implication (i) ⇒ (ii) is trivial.

(ii) ⇒ (iii). Assume $\phi$ is continuous at $(0, 0)$. We prove (iii) by contradiction. Assume that, for each integer $n \geq 1$, there are vectors $x_n \in X$ and $y_n \in Y$ with $\|x_n\|, \|y_n\| \leq 1$, but such that $|\phi(x_n, y_n)| \geq n^2$. If we take $v_n = \frac{1}{n} x_n$ and $w_n = \frac{1}{n} y_n$, then on the one hand we have $\|v_n\|, \|w_n\| \leq \frac{1}{n}$, $\forall\, n \geq 1$, which forces $\lim_{n \to \infty} (v_n, w_n) = (0, 0)$ in $X \times Y$, so by (ii) we have $\lim_{n \to \infty} \phi(v_n, w_n) = 0$. On the other hand, we also have
$$|\phi(v_n, w_n)| = \frac{|\phi(x_n, y_n)|}{n^2} \geq 1, \quad \forall\, n \geq 1,$$
which is impossible.

(iii) ⇒ (iv). Assume $\phi$ has property (iii), and denote the number $\sup\{ |\phi(x, y)| : (x, y) \in X \times Y,\ \|x\|, \|y\| \leq 1 \}$ simply by $M$. In order to prove (iv) we are going to prove the inequality
$$|\phi(x, y)| \leq M \cdot \|x\| \cdot \|y\|, \quad \forall\, (x, y) \in X \times Y. \tag{2}$$
Fix $(x, y) \in X \times Y$. If either $x = 0$ or $y = 0$, the above inequality is trivial, so we can assume both $x$ and $y$ are non-zero. Consider the vectors $v = \frac{1}{\|x\|} x$ and $w = \frac{1}{\|y\|} y$. We clearly have
$$|\phi(x, y)| = |\phi(\|x\| v, \|y\| w)| = \|x\| \cdot \|y\| \cdot |\phi(v, w)|.$$
Since $\|v\| = \|w\| = 1$, we have $|\phi(v, w)| \leq M$, so the above equality gives (2).

(iv) ⇒ (i). Assume $\phi$ has property (iv), and let us show that $\phi$ is continuous. Let $C \geq 0$ be as in (iv). Let $(x_n)_{n \geq 1} \subset X$ and $(y_n)_{n \geq 1} \subset Y$ be convergent sequences with $\lim_{n \to \infty} x_n = x$ and $\lim_{n \to \infty} y_n = y$, and let us prove that $\lim_{n \to \infty} \phi(x_n, y_n) = \phi(x, y)$. Using (iv) we have
$$|\phi(x_n, y_n) - \phi(x, y)| \leq |\phi(x_n, y_n) - \phi(x_n, y)| + |\phi(x_n, y) - \phi(x, y)| = |\phi(x_n, y_n - y)| + |\phi(x_n - x, y)| \leq C \cdot \|x_n\| \cdot \|y_n - y\| + C \cdot \|x_n - x\| \cdot \|y\|, \quad \forall\, n \geq 1,$$
which clearly forces $\lim_{n \to \infty} |\phi(x_n, y_n) - \phi(x, y)| = 0$, and we are done.
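As a small numerical sketch (an addition, not part of the original notes), take $X = Y = \mathbb{C}^n$ and the sesquilinear map $\phi(x, y) = (Ax \mid y)$ built from an illustrative matrix $A$. In this finite-dimensional setting the supremum in (iii), equivalently the minimal constant in (1), is the largest singular value of $A$; since only $|\phi|$ is involved, the conclusion does not depend on which variable carries the complex conjugation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Sesquilinear form phi(x, y) = (Ax | y); np.vdot conjugates its first argument.
def phi(x, y):
    return np.vdot(A @ x, y)

# The minimal constant C in (1) is the operator norm of A (largest singular value).
C_best = np.linalg.norm(A, 2)

# Estimate the supremum in (iii) by sampling unit vectors; it approaches C_best from below.
sup_estimate = 0.0
for _ in range(20000):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    sup_estimate = max(sup_estimate, abs(phi(x / np.linalg.norm(x), y / np.linalg.norm(y))))

# The supremum is attained at the top singular vectors: |phi(v1, u1)| = sigma_1.
U, s, Vh = np.linalg.svd(A)
print(C_best, sup_estimate, abs(phi(Vh[0].conj(), U[:, 0])))
```

The sampled value stays below the exact constant, while the value at the top singular vectors matches it, consistent with the identification of the quantity in (iii) with the minimum in (1).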
To prove the last assertion we observe first that every $C \geq 0$ satisfying
$$|\phi(x, y)| \leq C \cdot \|x\| \cdot \|y\|, \quad \forall\, (x, y) \in X \times Y,$$
automatically satisfies the inequality $C \geq M$. This is a consequence of the above inequality, restricted to those $(x, y) \in X \times Y$ with $\|x\|, \|y\| \leq 1$. To finish the proof, all we have to prove is the fact that $C = M$ satisfies (iv). But this has already been obtained when we proved the implication (iii) ⇒ (iv). □

Notation. With the notations above, the number defined in (iii), which is also equal to the quantity (1), is denoted by $\|\phi\|$. This is justified by the following.

Exercise 1. Let $X$ and $Y$ be normed vector spaces over $\mathbb{C}$. Prove that the space
$$S(X, Y) = \{ \phi : X \times Y \to \mathbb{C} \ : \ \phi \text{ sesquilinear continuous} \}$$
is a vector space, when equipped with pointwise addition and scalar multiplication. Prove that the map $S(X, Y) \ni \phi \mapsto \|\phi\| \in [0, \infty)$ defines a norm.

With this terminology, we have the following technical result.

Theorem 7.1. Let $H_1$ and $H_2$ be Hilbert spaces, and let $\phi : H_1 \times H_2 \to \mathbb{C}$ be a sesquilinear map. The following are equivalent (equip $H_1 \times H_2$ with the product topology):

(i) $\phi$ is continuous;

(ii) there exists $T \in B(H_1, H_2)$, such that
$$\phi(\xi_1, \xi_2) = (T\xi_1 \mid \xi_2)_{H_2}, \quad \forall\, (\xi_1, \xi_2) \in H_1 \times H_2,$$
where $(\,\cdot \mid \cdot\,)_{H_2}$ denotes the inner product on $H_2$.

Moreover, the operator $T \in B(H_1, H_2)$ is unique, and has norm $\|T\| = \|\phi\|$.

Proof. (i) ⇒ (ii). Assume $\phi$ is continuous, so by Lemma 7.1 we have
$$|\phi(\xi, \zeta)| \leq \|\phi\| \cdot \|\xi\| \cdot \|\zeta\|, \quad \forall\, \xi \in H_1,\ \zeta \in H_2. \tag{3}$$
Fix for the moment $\xi \in H_1$, and consider the map $\phi_\xi : H_2 \ni \zeta \mapsto \phi(\xi, \zeta) \in \mathbb{C}$. Using (3), it is clear that $\phi_\xi : H_2 \to \mathbb{C}$ is linear continuous, and has norm $\|\phi_\xi\| \leq \|\phi\| \cdot \|\xi\|$. Using Riesz' Theorem (see Section 3), it follows that there exists a unique vector $\tilde{\xi} \in H_2$, such that
$$\phi_\xi(\zeta) = (\tilde{\xi} \mid \zeta)_{H_2}, \quad \forall\, \zeta \in H_2.$$
Moreover, one has the equality
$$\|\tilde{\xi}\|_{H_2} = \|\phi_\xi\| \leq \|\phi\| \cdot \|\xi\|_{H_1}. \tag{4}$$
Remark that, if we start with two vectors $\xi, \eta \in H_1$, then we have
$$(\tilde{\xi} \mid \zeta)_{H_2} + (\tilde{\eta} \mid \zeta)_{H_2} = \phi(\xi, \zeta) + \phi(\eta, \zeta) = \phi(\xi + \eta, \zeta) = \phi_{\xi + \eta}(\zeta), \quad \forall\, \zeta \in H_2,$$
so by the uniqueness part in Riesz' Theorem we get the equality $\widetilde{\xi + \eta} = \tilde{\xi} + \tilde{\eta}$. Likewise, if $\xi \in H_1$ and $\lambda \in \mathbb{C}$, we have
$$(\lambda \tilde{\xi} \mid \zeta)_{H_2} = \bar{\lambda}\,(\tilde{\xi} \mid \zeta)_{H_2} = \bar{\lambda}\,\phi(\xi, \zeta) = \phi(\lambda \xi, \zeta) = \phi_{\lambda \xi}(\zeta), \quad \forall\, \zeta \in H_2,$$
which forces $\widetilde{\lambda \xi} = \lambda \tilde{\xi}$. This way we have defined a linear map
$$T : H_1 \ni \xi \longmapsto \tilde{\xi} \in H_2, \quad \text{with} \quad \phi(\xi, \zeta) = (T\xi \mid \zeta)_{H_2}, \ \forall\, (\xi, \zeta) \in H_1 \times H_2.$$
Using (4) we also have $\|T\xi\|_{H_2} \leq \|\phi\| \cdot \|\xi\|_{H_1}$, $\forall\, \xi \in H_1$, so $T$ is indeed continuous, and it has norm $\|T\| \leq \|\phi\|$. The uniqueness of $T$ is obvious.

(ii) ⇒ (i). Assume $\phi$ has property (ii), and let us prove that $\phi$ is continuous. This is pretty clear, because if we take $T \in B(H_1, H_2)$ as in (ii), then using the Cauchy-Bunyakovsky-Schwarz inequality we have
$$|\phi(\xi_1, \xi_2)| = |(T\xi_1 \mid \xi_2)_{H_2}| \leq \|T\xi_1\| \cdot \|\xi_2\| \leq \|T\| \cdot \|\xi_1\| \cdot \|\xi_2\|, \quad \forall\, (\xi_1, \xi_2) \in H_1 \times H_2,$$
so we can apply Lemma 7.1. Notice that this also proves the inequality $\|\phi\| \leq \|T\|$. Since by the proof of the implication (i) ⇒ (ii) we already know that $\|T\| \leq \|\phi\|$, it follows that in fact we have the equality $\|T\| = \|\phi\|$. □

Remark 7.1. For a sesquilinear map $\phi : H_1 \times H_2 \to \mathbb{C}$, the conditions (i), (ii) are also equivalent to

(ii') there exists $S \in B(H_2, H_1)$, such that
$$\phi(\xi_1, \xi_2) = (\xi_1 \mid S\xi_2)_{H_1}, \quad \forall\, (\xi_1, \xi_2) \in H_1 \times H_2.$$

Moreover, $S \in B(H_2, H_1)$ is unique, and satisfies $\|S\| = \|T\|$. This follows from the Theorem, applied to the sesquilinear map $\psi : H_2 \times H_1 \to \mathbb{C}$ defined by
$$\psi(\xi_2, \xi_1) = \overline{\phi(\xi_1, \xi_2)}, \quad \forall\, (\xi_2, \xi_1) \in H_2 \times H_1.$$
Notice that one clearly has $\|\phi\| = \|\psi\|$, so we also get $\|T\| = \|S\|$.
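Continuing in the same illustrative finite-dimensional vein (again an addition, not from the notes), Theorem 7.1 says that every continuous sesquilinear form on $\mathbb{C}^m \times \mathbb{C}^n$ comes from a unique operator $T$. The sketch below assumes the inner product is conjugate-linear in its first variable (which is what `np.vdot` computes); the matrix `B`, the basis-testing recovery formula $T_{ij} = \overline{\phi(e_j, f_i)}$, and the dimensions are illustrative choices, not anything fixed by the notes.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 3   # dim H1 = m, dim H2 = n

# A continuous sesquilinear form on C^m x C^n, treated as a black box.
B = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
def phi(xi1, xi2):
    return np.vdot(B @ xi1, xi2)          # plays the role of (B xi1 | xi2)_{H2}

# Recover the unique T with phi(xi1, xi2) = (T xi1 | xi2)_{H2} by testing phi
# against the standard bases: T[i, j] = conj(phi(e_j, f_i)).
T = np.zeros((n, m), dtype=complex)
for j in range(m):
    e_j = np.zeros(m); e_j[j] = 1.0
    for i in range(n):
        f_i = np.zeros(n); f_i[i] = 1.0
        T[i, j] = np.conj(phi(e_j, f_i))

print(np.allclose(T, B))                                 # True: T is recovered exactly
x = rng.standard_normal(m) + 1j * rng.standard_normal(m)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
print(np.isclose(phi(x, y), np.vdot(T @ x, y)))          # the representation (ii) holds
print(np.linalg.norm(T, 2))                              # ||T||, which equals ||phi||
```

The nested loop plays the role of the Riesz-representation step in the proof: fixing the first argument and testing against a basis of the second space determines the vector $T\xi$ coordinate by coordinate.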
In particular, if we start with an operator $T \in B(H_1, H_2)$ and we consider the continuous sesquilinear map
$$H_1 \times H_2 \ni (\xi_1, \xi_2) \longmapsto (T\xi_1 \mid \xi_2)_{H_2} \in \mathbb{C},$$
then there exists a unique operator $S \in B(H_2, H_1)$, such that
$$(T\xi_1 \mid \xi_2)_{H_2} = (\xi_1 \mid S\xi_2)_{H_1}, \quad \forall\, (\xi_1, \xi_2) \in H_1 \times H_2. \tag{5}$$
This operator will satisfy $\|S\| = \|T\|$.

Definition. Given an operator $T \in B(H_1, H_2)$, the unique operator $S \in B(H_2, H_1)$ that satisfies (5) is called the adjoint of $T$, and is denoted by $T^*$. By the above Remark, for any two vectors $\xi_1 \in H_1$, $\xi_2 \in H_2$, we have the identities
$$(T\xi_1 \mid \xi_2)_{H_2} = (\xi_1 \mid T^*\xi_2)_{H_1}, \qquad (\xi_2 \mid T\xi_1)_{H_2} = (T^*\xi_2 \mid \xi_1)_{H_1}.$$
(The second identity follows from the first one by taking complex conjugates.)

The following result collects a first set of properties of the adjoint operation.

Proposition 7.1. A. For two Hilbert spaces $H_1$, $H_2$, one has
$$\|T^*\| = \|T\|, \quad \forall\, T \in B(H_1, H_2); \tag{6}$$
$$(T^*)^* = T, \quad \forall\, T \in B(H_1, H_2); \tag{7}$$
$$(S + T)^* = S^* + T^*, \quad \forall\, S, T \in B(H_1, H_2); \tag{8}$$
$$(\lambda T)^* = \bar{\lambda}\, T^*, \quad \forall\, T \in B(H_1, H_2),\ \lambda \in \mathbb{C}. \tag{9}$$
B. Given three Hilbert spaces $H_1$, $H_2$, and $H_3$, one has
$$(ST)^* = T^* S^*, \quad \forall\, T \in B(H_1, H_2),\ S \in B(H_2, H_3). \tag{10}$$

Proof.
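The source breaks off at the start of this proof. As a hedged numerical aside (an addition, not part of the original text): for matrix operators between finite-dimensional spaces the adjoint is the conjugate transpose, and the defining identity (5) together with the identities (6)-(10) can be verified directly. The dimensions, matrices, and the scalar `lam` below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
def rand_op(rows, cols):
    return rng.standard_normal((rows, cols)) + 1j * rng.standard_normal((rows, cols))

T = rand_op(4, 3)          # T in B(H1, H2), with dim H1 = 3, dim H2 = 4
S = rand_op(2, 4)          # S in B(H2, H3), with dim H3 = 2
lam = 1.5 - 0.7j

adj = lambda A: A.conj().T  # the adjoint of a matrix operator is its conjugate transpose

# Defining identity (5): (T xi1 | xi2)_{H2} = (xi1 | T* xi2)_{H1}.
xi1, xi2 = rand_op(3, 1), rand_op(4, 1)
print(np.isclose(np.vdot(T @ xi1, xi2), np.vdot(xi1, adj(T) @ xi2)))   # True

# Proposition 7.1, checked numerically:
print(np.isclose(np.linalg.norm(adj(T), 2), np.linalg.norm(T, 2)))     # (6)  ||T*|| = ||T||
print(np.allclose(adj(adj(T)), T))                                     # (7)  (T*)* = T
T2 = rand_op(4, 3)
print(np.allclose(adj(T + T2), adj(T) + adj(T2)))                      # (8)  (S + T)* = S* + T*
print(np.allclose(adj(lam * T), np.conj(lam) * adj(T)))                # (9)  (lam T)* = conj(lam) T*
print(np.allclose(adj(S @ T), adj(T) @ adj(S)))                        # (10) (ST)* = T* S*
```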