CHAPTER IV
OPERATORS ON INNER PRODUCT SPACES

§1. Complex Inner Product Spaces

1.1. Let us recall the inner product (or the dot product) for the real $n$-dimensional Euclidean space $\mathbb{R}^n$: for vectors $x = (x_1, x_2, \dots, x_n)$ and $y = (y_1, y_2, \dots, y_n)$ in $\mathbb{R}^n$, the inner product $\langle x, y\rangle$ (denoted by $x \cdot y$ in some books) is defined to be
$$\langle x, y\rangle = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n,$$
and the norm (or the magnitude) $\|x\|$ is given by
$$\|x\| = \sqrt{\langle x, x\rangle} = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}.$$

For complex vectors, we cannot copy this definition directly. We need to use complex conjugation to modify this definition in such a way that $\langle x, x\rangle \ge 0$, so that the definition of magnitude $\|x\| = \sqrt{\langle x, x\rangle}$ still makes sense. Recall that the conjugate of a complex number $z = a + ib$, where $a$ and $b$ are real, is given by $\bar{z} = a - ib$, and
$$\bar{z}\,z = (a - ib)(a + ib) = a^2 + b^2 = |z|^2.$$
The identity $\bar{z}z = |z|^2$ turns out to be very useful and should be kept in mind.

Recall that the addition and the scalar multiplication of vectors in $\mathbb{C}^n$ are defined as follows: for $x = (x_1, x_2, \dots, x_n)$ and $y = (y_1, y_2, \dots, y_n)$ in $\mathbb{C}^n$, and $a$ in $\mathbb{C}$,
$$x + y = (x_1 + y_1, x_2 + y_2, \dots, x_n + y_n) \quad\text{and}\quad ax = (ax_1, ax_2, \dots, ax_n).$$
The inner product (or the scalar product) $\langle x, y\rangle$ of vectors $x$ and $y$ is defined by
$$\langle x, y\rangle = x_1\bar{y}_1 + x_2\bar{y}_2 + \cdots + x_n\bar{y}_n. \tag{1.1.1}$$
Notice that $\langle x, x\rangle = x_1\bar{x}_1 + x_2\bar{x}_2 + \cdots + x_n\bar{x}_n = |x_1|^2 + |x_2|^2 + \cdots + |x_n|^2 \ge 0$, which is what we ask for. The norm of $x$ is given by
$$\|x\| = \langle x, x\rangle^{1/2} = \sqrt{|x_1|^2 + |x_2|^2 + \cdots + |x_n|^2}.$$

Remark: In (1.1.1), it is not clear why we prefer to take complex conjugates of the components of $y$ instead of the components of $x$. Actually this is more or less due to the tradition of mathematics, rather than our preference. (Physicists have a different tradition!)

The space $\mathbb{C}^n$ provides us with the typical example of complex inner product spaces, defined as follows:

Definition. By an inner product on a complex vector space $V$ we mean a rule assigning to each pair of vectors $x$ and $y$ a complex number, denoted by $\langle x, y\rangle$, such that the following conditions are satisfied:

(C1) $\langle x, x\rangle \ge 0$, and $\langle x, x\rangle = 0$ if and only if $x = 0$.

(C2) $\langle y, x\rangle = \overline{\langle x, y\rangle}$.

(C3) The inner product is a "sesquilinear map", i.e.
$$\langle a_1 x_1 + a_2 x_2,\, y\rangle = a_1\langle x_1, y\rangle + a_2\langle x_2, y\rangle,$$
$$\langle x,\, b_1 y_1 + b_2 y_2\rangle = \bar{b}_1\langle x, y_1\rangle + \bar{b}_2\langle x, y_2\rangle.$$

(Actually the second identity of (C3) above is a consequence of the first, together with (C2).)

Inner products for real vector spaces can be defined in a similar fashion. It is slightly simpler because there is no need to take complex conjugation. This is simply because the conjugate of a real number is just itself.

Besides $\mathbb{C}^n$, another example of a complex inner product space is given as follows. Consider a space $\mathcal{F}$ of well-behaved complex-valued functions over an interval, say $[a, b]$ (here we do not specify the technical meaning of being well-behaved). The inner product $\langle f, g\rangle$ of $f, g \in \mathcal{F}$ is given by
$$\langle f, g\rangle = \frac{1}{b-a}\int_a^b f(t)\,\overline{g(t)}\,dt, \qquad\text{for } f, g \in \mathcal{F}.$$
(On the right-hand side, $1/(b-a)$ is a normalization factor added for convenience in the future.) The norm induced by this inner product is
$$\|f\| \equiv \langle f, f\rangle^{1/2} = \left(\frac{1}{b-a}\int_a^b |f(t)|^2\,dt\right)^{1/2} \qquad\text{for } f \in \mathcal{F}.$$
In the future we will take $\mathcal{F}$ to be the space of trigonometric polynomials and $[a, b]$ any interval of length $2\pi$, such as $[0, 2\pi]$ and $[-\pi, \pi]$.
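To make (1.1.1) concrete, here is a minimal numerical sketch of the inner product and induced norm on $\mathbb{C}^n$. The use of Python/NumPy and the sample vectors are assumptions of mine for illustration; the notes themselves are language-agnostic.

```python
import numpy as np

def inner(x, y):
    """Inner product (1.1.1) on C^n: <x, y> = sum_k x_k * conj(y_k)."""
    return np.sum(x * np.conj(y))

def norm(x):
    """Induced norm ||x|| = <x, x>^(1/2)."""
    # <x, x> is real and nonnegative; .real merely drops the zero
    # imaginary part left over by floating-point arithmetic.
    return np.sqrt(inner(x, x).real)

# Hypothetical sample vectors in C^2:
x = np.array([1 + 2j, 3 - 1j])
y = np.array([2 - 1j, 1j])

print(inner(x, y))   # (1+2j)(2+1j) + (3-1j)(-1j) = -1+2j
print(inner(x, x))   # |1+2j|^2 + |3-1j|^2 = 15, real and >= 0 as required
print(norm(x))       # sqrt(15)
```

NumPy's built-in `np.vdot(y, x)` computes the same quantity, since `vdot` conjugates its first argument while (1.1.1) conjugates the second.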
1.2. Let $V$ be a complex vector space with an inner product $\langle\cdot,\cdot\rangle$. We say that two vectors $x$ and $y$ in $V$ are orthogonal or perpendicular if their inner product is zero, and we write $x \perp y$ in this case. Thus, by our definition here,
$$x \perp y \iff \langle x, y\rangle = 0.$$

From the definition of orthogonality you should recognize that, first, the zero vector $0$ is orthogonal to every vector (indeed, for each vector $x$ in $V$, $\langle 0, x\rangle = \langle 0 + 0, x\rangle = \langle 0, x\rangle + \langle 0, x\rangle$ by (C3), and hence $\langle 0, x\rangle = 0$); second, $0$ is the only vector orthogonal to itself (this follows from (C1)) and hence $0$ is the only vector orthogonal to every vector; third, $x \perp y$ implies $y \perp x$ (indeed, if $\langle x, y\rangle = 0$, then $\langle y, x\rangle = \overline{\langle x, y\rangle} = \bar{0} = 0$).

A set $\mathcal{S}$ of nonzero vectors is called an orthogonal system if each vector in $\mathcal{S}$ is orthogonal to all other vectors in $\mathcal{S}$. If, furthermore, each vector in $\mathcal{S}$ has length 1, then $\mathcal{S}$ is called an orthonormal system. (Notice the difference in the endings of the words "orthogonal" and "orthonormal".)

We have the following generalized Pythagoras theorem: if $v_1, v_2, \dots, v_n$ form an orthogonal system, then
$$\|v_1 + v_2 + \cdots + v_n\|^2 = \|v_1\|^2 + \|v_2\|^2 + \cdots + \|v_n\|^2. \tag{1.2.1}$$
We prove this by induction on $n$. When $n = 1$, (1.2.1) becomes $\|v_1\|^2 = \|v_1\|^2$ and there is nothing to prove. So let $n \ge 2$ and assume that the theorem is true for $n - 1$ vectors. Let $w = v_2 + v_3 + \cdots + v_n$. Then, by our induction hypothesis, $\|w\|^2 = \sum_{k=2}^n \|v_k\|^2$. Thus (1.2.1) becomes $\|v_1 + w\|^2 = \|v_1\|^2 + \|w\|^2$, which remains to be verified. Notice that
$$\langle v_1, w\rangle = \langle v_1, v_2\rangle + \langle v_1, v_3\rangle + \cdots + \langle v_1, v_n\rangle = 0.$$
Hence
$$\begin{aligned}
\|v_1 + w\|^2 &= \langle v_1 + w,\, v_1 + w\rangle \\
&= \langle v_1, v_1\rangle + \langle v_1, w\rangle + \langle w, v_1\rangle + \langle w, w\rangle \\
&= \langle v_1, v_1\rangle + \langle v_1, w\rangle + \overline{\langle v_1, w\rangle} + \langle w, w\rangle \\
&= \langle v_1, v_1\rangle + \langle w, w\rangle = \|v_1\|^2 + \|w\|^2.
\end{aligned}$$
Hence (1.2.1) is valid.

Given an orthonormal system $\mathcal{E} = \{e_1, e_2, \dots, e_n\}$ in $V$, and a vector $v$ which can be written as a linear combination of vectors in $\mathcal{E}$, say
$$v = v_1 e_1 + v_2 e_2 + \cdots + v_n e_n \equiv \sum_{k=1}^n v_k e_k,$$
we look for an explicit expression for the coefficients $v_k$ in this linear combination. By the linearity in the "first slot" of the inner product, we have
$$\langle v, e_j\rangle = \Big\langle \sum_{k=1}^n v_k e_k,\, e_j\Big\rangle = \sum_{k=1}^n v_k \langle e_k, e_j\rangle.$$
Note that $\langle e_k, e_j\rangle$ is zero except when $k = j$, in which case it equals 1 (in short, $\langle e_k, e_j\rangle = \delta_{kj}$). So the above identity becomes $\langle v, e_j\rangle = v_j$. Thus
$$v = \sum_{k=1}^n \langle v, e_k\rangle e_k = \langle v, e_1\rangle e_1 + \langle v, e_2\rangle e_2 + \cdots + \langle v, e_n\rangle e_n. \tag{1.2.2}$$
Since $\|\langle v, e_k\rangle e_k\| = |\langle v, e_k\rangle|\,\|e_k\| = |\langle v, e_k\rangle|$, the generalized Pythagoras theorem gives
$$\|v\|^2 = |\langle v, e_1\rangle|^2 + |\langle v, e_2\rangle|^2 + \cdots + |\langle v, e_n\rangle|^2, \tag{1.2.3}$$
if $v$ is in the linear span of the orthonormal system $\mathcal{E} = \{e_1, e_2, \dots, e_n\}$. The last identity is a general fact about orthonormal systems that should be kept in mind.
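Identities (1.2.2) and (1.2.3) are easy to check numerically. The sketch below does so for a hypothetical orthonormal system spanning $\mathbb{C}^2$; the system and the vector $v$ are my own illustrative choices, not taken from the notes.

```python
import numpy as np

def inner(x, y):
    # Inner product (1.1.1): <x, y> = sum_k x_k * conj(y_k)
    return np.sum(x * np.conj(y))

# A hypothetical orthonormal system spanning C^2:
e1 = np.array([1, 1j]) / np.sqrt(2)
e2 = np.array([1, -1j]) / np.sqrt(2)

# Orthonormality: <e_k, e_j> = delta_kj
assert np.isclose(inner(e1, e2), 0)
assert np.isclose(inner(e1, e1), 1) and np.isclose(inner(e2, e2), 1)

v = np.array([2 + 1j, 3 - 4j])

# Coefficients from (1.2.2): v_k = <v, e_k>
c1, c2 = inner(v, e1), inner(v, e2)
assert np.allclose(c1 * e1 + c2 * e2, v)

# Generalized Pythagoras (1.2.3): ||v||^2 = |c1|^2 + |c2|^2
assert np.isclose(inner(v, v).real, abs(c1)**2 + abs(c2)**2)
print("identities (1.2.2) and (1.2.3) verified numerically")
```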
1.3. Next we consider a slightly more general problem: given a vector $v$ in an inner product space $V$ and a subspace $W$ of $V$, spanned by a given orthogonal system $S = \{w_1, w_2, \dots, w_r\}$ of nonzero vectors ($\langle w_k, w_j\rangle = 0$ for $k \ne j$ and $\langle w_k, w_k\rangle \ne 0$, where $k$ and $j$ run between 1 and $r$), find the so-called orthogonal decomposition of $v$:
$$v = w + h, \tag{1.3.1}$$
where $w \in W$ and $h \perp W$ (that is, $h$ is perpendicular to all vectors in $W$). The vector $w$ here will be called the (orthogonal) projection of $v$ onto $W$.

Since $w$ is in $W$ and $W$ is spanned by $w_1, w_2, \dots, w_r$, we can write
$$w = a_1 w_1 + a_2 w_2 + \cdots + a_r w_r. \tag{1.3.2}$$
We have to find $a_1, a_2, \dots, a_r$. Identity (1.3.1) can be rewritten as
$$v = w + h = \sum_{k=1}^r a_k w_k + h.$$
Take any vector from $w_1, w_2, \dots, w_r$, say $w_j$, and form the inner product of each side of the above identity with $w_j$. By the linearity of the "first slot" of the inner product, we have
$$\langle v, w_j\rangle = \sum_{k=1}^r a_k \langle w_k, w_j\rangle + \langle h, w_j\rangle.$$
Note that $\langle w_k, w_j\rangle$ is zero except when $k = j$, so $\sum_{k=1}^r a_k\langle w_k, w_j\rangle$ reduces to $a_j\langle w_j, w_j\rangle$. On the other hand, $\langle h, w_j\rangle = 0$ because $h$ is perpendicular to $W$ and $w_j$ is in $W$. Thus we arrive at $\langle v, w_j\rangle = a_j\langle w_j, w_j\rangle$, or $a_j = \langle v, w_j\rangle / \langle w_j, w_j\rangle$. Substituting this expression for $a_j$ into (1.3.2), switching the index $j$ to $k$, we obtain
$$w = \sum_{k=1}^r \frac{\langle v, w_k\rangle}{\langle w_k, w_k\rangle}\, w_k \equiv \frac{\langle v, w_1\rangle}{\langle w_1, w_1\rangle}\, w_1 + \frac{\langle v, w_2\rangle}{\langle w_2, w_2\rangle}\, w_2 + \cdots + \frac{\langle v, w_r\rangle}{\langle w_r, w_r\rangle}\, w_r, \tag{1.3.3}$$
which is the required projection.
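Formula (1.3.3) translates directly into code. Below is a minimal sketch of the projection together with a check that the remainder $h = v - w$ is indeed perpendicular to $W$; the spanning vectors are hypothetical examples, and Python/NumPy is again an assumption of mine.

```python
import numpy as np

def inner(x, y):
    # Inner product (1.1.1): <x, y> = sum_k x_k * conj(y_k)
    return np.sum(x * np.conj(y))

def project(v, ws):
    """Orthogonal projection (1.3.3) of v onto span{w_1, ..., w_r},
    where ws is an orthogonal system of nonzero vectors."""
    return sum((inner(v, w) / inner(w, w)) * w for w in ws)

# A hypothetical orthogonal (not normalized) system spanning a plane W in C^3:
w1 = np.array([1, 1, 0], dtype=complex)
w2 = np.array([1, -1, 2j])
assert np.isclose(inner(w1, w2), 0)

v = np.array([3, 1j, 2], dtype=complex)
w = project(v, [w1, w2])   # the projection of v onto W
h = v - w                  # the remainder

# h is perpendicular to each spanning vector, hence to all of W,
# so v = w + h is the orthogonal decomposition (1.3.1).
assert np.isclose(inner(h, w1), 0) and np.isclose(inner(h, w2), 0)
```

Note that normalizing each $w_k$ would reduce (1.3.3) to the orthonormal expansion (1.2.2); keeping the denominators $\langle w_k, w_k\rangle$ avoids that extra step.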