Linear Differential Equations and Functions of Operators

Andreas Rosén (formerly Axelsson), Linköping University, February 2011

The simplest example

Given initial data $x(0) \in \mathbb{C}^2$ and coefficient matrix $A := \begin{pmatrix} 7 & -9 \\ 3 & -5 \end{pmatrix}$, solve the linear system of first order ODEs
$$x'(t) + Ax(t) = 0 \quad \text{for } x : \mathbb{R} \to \mathbb{C}^2.$$
Coordinates in the eigenbasis, $y(t) := V^{-1}x(t)$ with $V := \begin{pmatrix} 3 & 1 \\ 1 & 1 \end{pmatrix}$, give the decoupled equations
$$y'(t) + \begin{pmatrix} 4 & 0 \\ 0 & -2 \end{pmatrix} y(t) = 0 \quad \text{for } y : \mathbb{R} \to \mathbb{C}^2.$$
The solution to the initial ODE is
$$x(t) = V \begin{pmatrix} e^{-4t} & 0 \\ 0 & e^{2t} \end{pmatrix} V^{-1} x(0).$$

Definition. $A = V \begin{pmatrix} 4 & 0 \\ 0 & -2 \end{pmatrix} V^{-1}$ gives $e^{-tA} := V \begin{pmatrix} e^{-4t} & 0 \\ 0 & e^{2t} \end{pmatrix} V^{-1}$.

Functional calculus through diagonalization of matrices

A generic matrix $A \in \mathbb{C}^{n\times n}$ is diagonalizable,
$$A = V \operatorname{diag}(\lambda_1, \dots, \lambda_n)\, V^{-1},$$
for some invertible "change-of-basis" matrix $V \in \mathbb{C}^{n\times n}$.

Definition. For a function $\phi : \sigma(A) = \{\lambda_1, \dots, \lambda_n\} \to \mathbb{C}$ defined on the spectrum of $A$, define the matrix
$$\phi(A) := V \operatorname{diag}\big(\phi(\lambda_1), \dots, \phi(\lambda_n)\big)\, V^{-1}.$$

For fixed $A$, the map $\phi \mapsto \phi(A)$ from symbol $\phi : \sigma(A) \to \mathbb{C}$ to matrix $\phi(A) \in \mathbb{C}^{n\times n}$ is an algebra homomorphism: $(\phi\psi)(A) = \phi(A)\psi(A)$.

Example: $\{e^{-tA}\}_{t\in\mathbb{R}}$ is a group of matrices, $e^{-tA}e^{-sA} = e^{-(t+s)A}$.

Outline:
1. Functions of matrices
2. Functions of linear Hilbert space operators
3. Spectral projections and the Kato conjecture
4. Operational calculus and maximal regularity

Functional calculus for general matrices

For a general $A \in \mathbb{C}^{n\times n}$ there are natural definitions of the matrices
$$A^2,\ A^3,\ A^4,\ \dots, \qquad (\lambda I - A)^{-1},\ \lambda \notin \sigma(A).$$
We want to define in a natural way a matrix $\phi(A)$ for any
$$\phi \in H(\sigma(A)) := \{\phi : \Omega \to \mathbb{C} \text{ holomorphic},\ \Omega \supset \sigma(A) \text{ open}\}.$$

Definition. For a function $\phi \in H(\sigma(A))$, define the matrix
$$\phi(A) := \frac{1}{2\pi i} \int_\gamma \phi(\lambda)\,(\lambda I - A)^{-1}\, d\lambda,$$
where $\gamma$ is a closed curve in $\Omega$ running counterclockwise around $\sigma(A)$. The holomorphic functional calculus of $A$ is the map $H(\sigma(A)) \ni \phi \mapsto \phi(A) \in \mathbb{C}^{n\times n}$.

Properties of the holomorphic functional calculus

1. The holomorphic functional calculus of $A$ is an algebra homomorphism and its range is a commutative subalgebra of $\mathbb{C}^{n\times n}$: $\phi(A)\psi(A) = (\phi\psi)(A) = (\psi\phi)(A) = \psi(A)\phi(A)$.
2. For polynomials we have $(z^k)(A) = A^k$ for $k = 0, 1, 2, \dots$
3. If $H(\sigma(A)) \ni \phi_k \to \phi$ uniformly on compact subsets of $\Omega$, then $\phi_k(A) \to \phi(A)$.
4. For diagonal matrices we have $\phi\big(\operatorname{diag}(\lambda_1, \dots, \lambda_n)\big) = \operatorname{diag}\big(\phi(\lambda_1), \dots, \phi(\lambda_n)\big)$.
5. For any invertible "change-of-basis" matrix $V$, we have $\phi(VAV^{-1}) = V\phi(A)V^{-1}$.

Example: a Jordan block

Fix $a \in \mathbb{C}$ and consider $A = aI + N$, where $N$ is nilpotent: $N^m = 0$ for some $m$. The spectrum is $\sigma(A) = \{a\}$ and
$$\frac{1}{\lambda I - (aI + N)} = \frac{1}{\lambda - a}\cdot\frac{1}{I - N/(\lambda - a)} = \sum_{k=0}^{m-1} \frac{N^k}{(\lambda - a)^{k+1}}.$$
Cauchy's formula for derivatives gives
$$\phi(A) = \sum_{k=0}^{m-1} \left( \frac{1}{2\pi i}\int_\gamma \frac{\phi(\lambda)}{(\lambda - a)^{k+1}}\, d\lambda \right) N^k = \sum_{k=0}^{m-1} \frac{\phi^{(k)}(a)}{k!}\, N^k.$$
For example
$$\phi\begin{pmatrix} 3 & 1 & 0 & 0 \\ 0 & 3 & 1 & 0 \\ 0 & 0 & 3 & 1 \\ 0 & 0 & 0 & 3 \end{pmatrix} = \begin{pmatrix} \phi(3) & \phi'(3) & \tfrac{1}{2}\phi''(3) & \tfrac{1}{6}\phi^{(3)}(3) \\ 0 & \phi(3) & \phi'(3) & \tfrac{1}{2}\phi''(3) \\ 0 & 0 & \phi(3) & \phi'(3) \\ 0 & 0 & 0 & \phi(3) \end{pmatrix}.$$
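The diagonalization recipe above is easy to test numerically. The following is a minimal sketch, not part of the slides, assuming NumPy and SciPy are available; `phi_of_matrix` is a hypothetical helper implementing $\phi(A) = V \operatorname{diag}(\phi(\lambda_1),\dots,\phi(\lambda_n))V^{-1}$, checked against `scipy.linalg.expm` for the $2\times 2$ matrix $A$ from the first slide.

```python
# Sketch: functional calculus by diagonalization, assuming NumPy/SciPy.
import numpy as np
from scipy.linalg import expm

def phi_of_matrix(A, phi):
    """Evaluate phi(A) = V diag(phi(lambda_i)) V^{-1} for a diagonalizable A."""
    lam, V = np.linalg.eig(A)                      # columns of V are eigenvectors
    return V @ np.diag(phi(lam)) @ np.linalg.inv(V)

A = np.array([[7.0, -9.0],
              [3.0, -5.0]])                        # eigenvalues 4 and -2
t = 0.3

E_diag = phi_of_matrix(A, lambda lam: np.exp(-t * lam))  # e^{-tA} via diagonalization
E_ref = expm(-t * A)                                     # reference matrix exponential
print(np.allclose(E_diag, E_ref))                        # expected: True

# x(t) = e^{-tA} x(0) solves x'(t) + A x(t) = 0 with x(0) = x0:
x0 = np.array([1.0, 2.0])
print(E_diag @ x0)
```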
Generalization to bounded operators on Hilbert spaces

Letting $n \to \infty$, we extend the above holomorphic functional calculus from matrices $A \in \mathbb{C}^{n\times n}$ to bounded linear operators $T : \mathcal{H} \to \mathcal{H}$ on a Hilbert space $\mathcal{H}$ (or more generally to Banach space operators). The Dunford (or Riesz–Dunford or Dunford–Taylor) integral
$$\phi(T) := \frac{1}{2\pi i} \int_\gamma \phi(\lambda)\,(\lambda I - T)^{-1}\, d\lambda$$
defines a bounded linear operator $\phi(T) : \mathcal{H} \to \mathcal{H}$. Here the spectrum $\sigma(T) \subset \mathbb{C}$ is compact (but typically not finite), the symbol $\phi : \Omega \to \mathbb{C}$ is holomorphic on an open neighbourhood $\Omega \supset \sigma(T)$, and the closed curve $\gamma \subset \Omega$ encircles $\sigma(T)$ counterclockwise.

Example: self-adjoint operators

Let $T : \mathcal{H} \to \mathcal{H}$ be self-adjoint: $\langle Tf, g\rangle = \langle f, Tg\rangle$ for $f, g \in \mathcal{H}$. The spectral theorem shows
$$T = V M_\lambda V^{-1},$$
where $V : L_2(X, d\mu) \to \mathcal{H}$ is a bijective isometry, with $d\mu$ a Borel measure on a $\sigma$-compact space $X$, and $M_\lambda : L_2(X, d\mu) \to L_2(X, d\mu)$ is multiplication by a real-valued function $\lambda \in L_\infty(X, d\mu)$:
$$(M_\lambda f)(x) := \lambda(x) f(x) \quad \text{for } f \in L_2(X, d\mu) \text{ and a.e. } x \in X.$$
Then
1. the spectrum $\sigma(T) \subset \mathbb{R}$ is the essential range of $\lambda : X \to \mathbb{R}$, and
2. $\phi(T)$ is similar to multiplication by $\phi(\lambda(x))$; more precisely, $\phi(T) = V M_{\phi\circ\lambda} V^{-1}$.

A boundedness conjecture

Even though $\phi$ is assumed to be holomorphic on a neighbourhood of $\sigma(T)$ in the definition of $\phi(T)$, it is natural to ask whether $\phi(T)$ only depends on $\phi|_{\sigma(T)}$. Do we have the estimate
$$\|\phi(T)\|_{\mathcal{H}\to\mathcal{H}} \le \sup_{\lambda\in\sigma(T)} |\phi(\lambda)| \ ?$$
For self-adjoint operators we do have
$$\|\phi(T)\|_{\mathcal{H}\to\mathcal{H}} = \|M_{\phi\circ\lambda}\|_{L_2\to L_2} = \sup_{\lambda\in\sigma(T)} |\phi(\lambda)|.$$

The conjecture is false!

The estimate
$$\|\phi(T)\|_{\mathcal{H}\to\mathcal{H}} \le \sup_{\lambda\in\sigma(T)} |\phi(\lambda)|$$
cannot be true for general non-selfadjoint operators, since changing to an equivalent norm on $\mathcal{H}$ may change the left-hand side, but not the right-hand side (the spectrum $\sigma(T)$ does not change).

Example. A simple counterexample is $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$. We have $A^2 = 0$, so $\sigma(A) = \{0\}$. However,
$$\phi(A) = \begin{pmatrix} \phi(0) & \phi'(0) \\ 0 & \phi(0) \end{pmatrix}.$$
We cannot bound $\phi(A)$ (that is, $\phi(0)$ and $\phi'(0)$) by $|\phi(0)|$, uniformly in $\phi$.

A classical positive result

The estimate
$$\|\phi(T)\|_{\mathcal{H}\to\mathcal{H}} \le \sup |\phi(\lambda)|$$
does hold if we replace the spectrum $\sigma(T)$ on the right-hand side by the disk $\{|\lambda| \le \|T\|\} \supset \sigma(T)$. The following is a classical result by J. von Neumann (Eine Spektraltheorie für allgemeine Operatoren eines unitären Raumes. Math. Nachr., 1951).

Theorem (von Neumann). Let $T : \mathcal{H} \to \mathcal{H}$ be a bounded linear operator on a Hilbert space $\mathcal{H}$. Then
$$\|\phi(T)\|_{\mathcal{H}\to\mathcal{H}} \le \sup_{|\lambda| \le \|T\|} |\phi(\lambda)|$$
holds for all $\phi$ holomorphic on some neighbourhood of $\{|\lambda| \le \|T\|\}$.

A sharpening of von Neumann's theorem

M. Crouzeix (Numerical range and functional calculus in Hilbert space. J. Funct. Anal., 2007) has proved the following.

Theorem (Crouzeix). Let $W(T) := \{\langle Tf, f\rangle \,;\, \|f\| = 1\}$ be the numerical range of a linear Hilbert space operator $T : \mathcal{H} \to \mathcal{H}$. Then
$$\|\phi(T)\|_{\mathcal{H}\to\mathcal{H}} \le 11.08 \sup_{\lambda\in W(T)} |\phi(\lambda)|$$
holds uniformly for all $\phi$ holomorphic on some neighbourhood of $W(T)$.

Hausdorff–Toeplitz theorem: $W(T)$ is a convex set. Moreover,
$$\sigma(T) \subset \overline{W(T)} \subset \{|\lambda| \le \|T\|\},$$
and the second inclusion is quite sharp: $\|T\| \le 2 \sup\{|\lambda| \,;\, \lambda \in W(T)\}$. For a normal matrix $A$ (that is, $A^*A = AA^*$), $W(A)$ is the convex hull of $\sigma(A)$.
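To make the Dunford integral and the counterexample above concrete, here is a hedged numerical sketch, again assuming NumPy; `dunford` is a hypothetical helper (not from the slides) that discretizes the contour integral over a circle enclosing $\sigma(T)$ with the trapezoid rule and evaluates it for the nilpotent matrix $A$ above.

```python
# Sketch: trapezoid-rule approximation of the Dunford integral, assuming NumPy.
import numpy as np

def dunford(T, phi, radius=2.0, nodes=400):
    """Approximate (1/(2*pi*i)) * integral of phi(z) (zI - T)^{-1} dz
    over the circle |z| = radius, which must enclose sigma(T)."""
    n = T.shape[0]
    total = np.zeros((n, n), dtype=complex)
    for theta in 2 * np.pi * np.arange(nodes) / nodes:
        z = radius * np.exp(1j * theta)
        total += phi(z) * np.linalg.inv(z * np.eye(n) - T) * (1j * z)  # dz = i z dtheta
    return total * (2 * np.pi / nodes) / (2j * np.pi)

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])            # nilpotent: A^2 = 0, sigma(A) = {0}

print(np.round(dunford(A, np.exp), 6))             # approx [[1, 1], [0, 1]] = e^A
print(np.round(dunford(A, lambda z: 5.0 * z), 6))  # approx [[0, 5], [0, 0]] = 5A
```

The second call illustrates the failure of the conjectured bound: for $\phi(z) = 5z$ the supremum of $|\phi|$ over $\sigma(A) = \{0\}$ is $0$, yet $\phi(A) = 5A \neq 0$ because $\phi(A)$ also involves $\phi'(0)$.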
Spectral projections: a simple example

Often one needs to apply symbols $\phi$ which are not holomorphic on a convex neighbourhood of $\sigma(T)$ (such as $W(T)$).

Example. Let $A := \begin{pmatrix} 3 & 0 & 0 \\ 0 & 7 & 1 \\ 0 & 0 & 7 \end{pmatrix}$, so that $\sigma(A) = \{3, 7\}$ (with algebraic multiplicity 2 for $\lambda = 7$). For
$$\phi(\lambda) = \begin{cases} 0, & |\lambda - 3| < 1, \\ 1, & |\lambda - 7| < 1, \end{cases}$$
we have $\phi(A) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$, the spectral projection onto the two-dimensional generalized eigenspace at $\lambda = 7$. Von Neumann's and Crouzeix's theorems do not apply, but still
$$\|\phi(A)\| \le C \sup\{|\phi(\lambda)| \,;\, |\lambda - 3| < 1 \text{ or } |\lambda - 7| < 1\}$$
is clear from the Dunford integral!

Partial differential equations (PDEs)

Applications to PDEs typically involve functional calculus of unbounded differential operators $D$ on a space like $\mathcal{H} = L_2(\mathbb{R}^n)$ (rather than bounded operators $T$). Consider the positive Laplace operator $-\Delta = -\sum_{k=1}^n \partial_k^2$, with spectrum $\sigma(-\Delta) = [0, \infty)$.

The heat equation $\partial_t f_t - \Delta f_t = 0$ has solution
$$f_t = e^{-t(-\Delta)} f_0, \quad t > 0.$$
The wave equation $\partial_t^2 f_t - \Delta f_t = 0$ has solution
$$f_t = \cos(t\sqrt{-\Delta})\, f_0 + \frac{\sin(t\sqrt{-\Delta})}{\sqrt{-\Delta}}\, (\partial_t f)_0, \quad t \in \mathbb{R}.$$
For functions $f_t(x) = f(t, x)$ we write $x \in \mathbb{R}^n$ for the space variable and $t$ for the time/evolution variable.
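These formulas can be tried out in the multiplication-operator picture from the self-adjoint example: on a periodic interval the Fourier transform diagonalizes $-\Delta$ as multiplication by $\xi^2$, so $\phi(-\Delta) f = \mathcal{F}^{-1}\big[\phi(\xi^2)\,\widehat{f}\,\big]$. The following is a minimal illustrative sketch, not from the slides, assuming NumPy and using a 1D periodic grid instead of $\mathbb{R}^n$; `phi_of_laplacian` is a hypothetical helper name.

```python
# Sketch: functions of -Delta on the torus [0, 2*pi) via the FFT, assuming NumPy.
import numpy as np

N = 256
x = 2 * np.pi * np.arange(N) / N
xi = np.fft.fftfreq(N, d=1.0 / N)           # integer frequencies on the torus

def phi_of_laplacian(phi, f):
    """Apply phi(-Delta) to samples f of a 2*pi-periodic function,
    using that -Delta acts as multiplication by xi^2 on the Fourier side."""
    return np.fft.ifft(phi(xi**2) * np.fft.fft(f)).real

f0 = np.exp(np.cos(x))                      # some smooth initial datum
t = 0.1

heat = phi_of_laplacian(lambda mu: np.exp(-t * mu), f0)          # e^{t*Delta} f0
wave = phi_of_laplacian(lambda mu: np.cos(t * np.sqrt(mu)), f0)  # cos(t*sqrt(-Delta)) f0

print(heat[:4])
print(wave[:4])
```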