Chapter 7. The Eigenvalue Problem, Section 7.1

Section 7.1. The general case

§1. Introduction. The eigenvalue equation of an operator $\hat{O}$ is

$\hat{O}\,|x\rangle = \lambda\,|x\rangle$    (1)

where $\lambda$ is an unknown number and $|x\rangle$ is an unknown ket. This equation has many solutions $|x_\alpha\rangle$, which are called eigenstates, or eigenkets, or eigenvectors of $\hat{O}$. For each solution $|x_\alpha\rangle$, there is a number $\lambda_\alpha$, called the eigenvalue corresponding to the eigenket $|x_\alpha\rangle$.

I remind you that if $\hat{A}$ represents an observable then the eigenstates of $\hat{A}$ are the pure states of that observable. In addition, the spectrum of $A$ (the values that the observable $A$ can take) consists of the eigenvalues of $\hat{A}$. Solving the eigenvalue equation for the operators representing observables is therefore of the greatest importance in quantum mechanics. In most cases encountered in practice, you will know the operator and will want to calculate some of the eigenvalues and eigenkets.

If one works with the Schrödinger representation (which is the coordinate representation), the operators of interest are differential operators acting on wave functions that are elements of $L^2$. For example, the energy operator for a harmonic oscillator is

$\hat{H} = -\frac{\hbar^2}{2\mu}\frac{d^2}{dx^2} + \frac{kx^2}{2}$    (2)

and the eigenvalue equation is

$\hat{H}\psi(x) \equiv -\frac{\hbar^2}{2\mu}\frac{d^2\psi(x)}{dx^2} + \frac{kx^2}{2}\,\psi(x) = E\,\psi(x)$    (3)

The eigenvalue equation for the energy of the harmonic oscillator has an analytic solution (see Metiu, Quantum Mechanics), and so do the equations for the particle in a box and for the hydrogen atom. This situation is not general.

One of the most frequently used methods for solving such equations numerically is to turn them into a matrix equation. This is why we dedicate a chapter to studying the eigenvalue problem for matrices. In practice, we use computers and software provided by experts to solve the eigenvalue problem for a given matrix. Because of this, we focus here on understanding various aspects of the eigenvalue problem for matrices and do not discuss numerical methods for solving it. When we need a numerical solution, we use Mathematica. For demanding calculations, we must use Fortran, C, or C++, since Mathematica is less efficient. These languages have libraries of programs designed to give you the eigenvalues and the eigenvectors when you give them the matrix.

§2. Some general considerations. It turns out that the eigenvalue problem $\hat{A}|\psi\rangle = \lambda|\psi\rangle$ has many solutions, and not all of them are physically meaningful. We are interested in this eigenvalue problem because, if it arises from an observable, the eigenvalues are the spectrum of the observable and some of the eigenfunctions are the pure states of the observable. In addition, the operator representing the observable is Hermitian, because its spectrum, which consists of all the values the observable can take, contains only real numbers. Before we try to solve the eigenvalue problem, we need to decide how to recognize which eigenstates correspond to pure states: these are the only eigenstates of interest to us.

In Chapter 2, we examined the definition of probability and concluded that kets $|a_i\rangle$ and $|a_j\rangle$ representing pure states must satisfy the orthonormalization condition

$\langle a_i | a_j \rangle = \delta_{ij}$    (4)

§3. Normalization. Let us assume that we are interested in solving the eigenvalue equation

$\hat{A}|\psi\rangle = \lambda|\psi\rangle$    (5)

It is easy to see that if $\alpha$ is a complex number and $|\psi\rangle$ satisfies Eq. 5, then $|\eta\rangle = \alpha|\psi\rangle$ satisfies the same equation. For every given eigenvalue we therefore have as many eigenstates as complex numbers. The one that is a pure state must be normalized. We can determine the value of $\alpha$ by requiring that $|\eta\rangle$ satisfy

$\langle \eta | \eta \rangle = 1$    (6)

Inserting $|\eta\rangle = \alpha|\psi\rangle$ into Eq. 6 (and using the properties of the scalar product) leads to

$\alpha^*\alpha\,\langle \psi | \psi \rangle = 1$    (7)

or

$\alpha^*\alpha = \frac{1}{\langle \psi | \psi \rangle}$    (8)

This gives

$\alpha = \frac{e^{i\phi}}{\sqrt{\langle \psi | \psi \rangle}}$    (9)

where $\phi$ is a real number, which cannot be determined by solving Eq. 8. The ket

$|\eta\rangle = \frac{e^{i\phi}}{\sqrt{\langle \psi | \psi \rangle}}\,|\psi\rangle$    (10)

is normalized and therefore is a pure state of $\hat{A}$ corresponding to the eigenvalue $\lambda$. If the system is in the state $|\eta\rangle$ and we measure $A$, we are certain that the result is $\lambda$.

In practice, most computer programs that calculate eigenstates do not normalize them. If the computer produces $|\psi\rangle$ as the eigenket corresponding to the eigenvalue $\lambda$, you must use Eq. 10 to find the eigenstate $|\eta\rangle$ that is a pure state. Since $e^{i\phi}$ in Eq. 10 is an irrelevant phase factor, we can drop it (i.e., take $\phi = 0$); the pure state corresponding to $\lambda$ is

$|\eta\rangle = \frac{|\psi\rangle}{\sqrt{\langle \psi | \psi \rangle}}$    (11)
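The normalization in Eq. 11 is easy to carry out numerically. Below is a minimal sketch in Python with NumPy (standing in for the Mathematica workflow mentioned above); the matrix entries are invented for illustration. NumPy's general eigensolver happens to return unit-norm eigenvectors already, so the explicit division is shown only to make Eq. 11 concrete:

```python
import numpy as np

# A Hermitian matrix standing in for an observable (entries made up).
A = np.array([[2.0, 1.0j],
              [-1.0j, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
psi = eigenvectors[:, 0]            # eigenket returned by the solver

# Eq. 11: divide by sqrt(<psi|psi>); np.vdot conjugates its first
# argument, so np.vdot(psi, psi) is the scalar product <psi|psi>.
eta = psi / np.sqrt(np.vdot(psi, psi).real)

print(np.vdot(eta, eta).real)       # ~1.0: eta is normalized
```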
§4. Orthogonality. The pure states must also be orthogonal. Because an operator $\hat{A}$ representing an observable $A$ is always Hermitian, one can prove that eigenstates corresponding to different eigenvalues must be orthogonal. That is, if

$\hat{A}|\eta_i\rangle = \lambda_i|\eta_i\rangle$ and $\hat{A}|\eta_j\rangle = \lambda_j|\eta_j\rangle$    (12)

and

$\lambda_i \neq \lambda_j$    (13)

then

$\langle \eta_i | \eta_j \rangle = 0$    (14)

The proof is easy. We have

$\langle \hat{A}\eta_i | \eta_j \rangle - \langle \eta_i | \hat{A}\eta_j \rangle = 0$    (15)

because $\hat{A}$ is Hermitian. Using Eqs. 12 in Eq. 15 gives

$(\lambda_i - \lambda_j)\,\langle \eta_i | \eta_j \rangle = 0$

When $\lambda_i \neq \lambda_j$, we must have $\langle \eta_i | \eta_j \rangle = 0$, which is what we wanted to prove.

So, when you solve an eigenvalue equation for an observable, the eigenstates corresponding to different eigenvalues must be orthogonal. If they are not, there is something wrong with the program calculating the eigenvectors.

What happens if some eigenvalues are equal to each other? Physicists call such eigenvalues "degenerate"; mathematicians call them "multiple". If $|\eta_i\rangle$ and $|\eta_j\rangle$ belong to the same eigenvalue (i.e., if $\hat{A}|\eta_i\rangle = \lambda|\eta_i\rangle$ and $\hat{A}|\eta_j\rangle = \lambda|\eta_j\rangle$), then $|\eta_i\rangle$ and $|\eta_j\rangle$ are under no obligation to be orthogonal to each other (they are only obliged, by the result proved above, to be orthogonal to all eigenvectors belonging to eigenvalues that differ from $\lambda$). Most numerical procedures for solving eigenvalue problems will produce non-orthogonal degenerate eigenvectors. Since any linear combination of degenerate eigenvectors is again an eigenvector with the same eigenvalue, we are free to replace them by orthogonal combinations: they can be orthogonalized with the Gram-Schmidt procedure and then normalized. The resulting eigenvectors are now pure states. You will see how this is done in an example given later in this chapter; a small numerical sketch also follows below.

It is important to keep in mind this distinction between eigenstates and pure states. All pure states are eigenstates, but not all eigenstates are pure states.
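Here is a minimal sketch of that orthogonalization (Python/NumPy again; the two vectors are invented and simply play the role of non-orthogonal eigenvectors sharing one eigenvalue). It assumes the input vectors are linearly independent:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors,
    e.g. degenerate eigenvectors belonging to the same eigenvalue."""
    ortho = []
    for v in vectors:
        w = v.astype(complex)
        for u in ortho:
            w = w - np.vdot(u, w) * u                  # remove the component along u
        ortho.append(w / np.sqrt(np.vdot(w, w).real))  # normalize, as in Eq. 11
    return ortho

# Two non-orthogonal vectors spanning a degenerate eigenspace (made up).
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, 0.0, 0.0])

e1, e2 = gram_schmidt([v1, v2])
print(np.vdot(e1, e2).real)   # ~0.0: the new vectors are orthogonal
```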
§5. The eigenvalue problem in matrix representation: a review. Let $|a_1\rangle, |a_2\rangle, \ldots, |a_N\rangle$ be a complete orthonormal basis set. This set need not be a set of pure states of an observable; we can use any complete basis set that a mathematician can construct. Orthonormality means that

$\langle a_i | a_j \rangle = \delta_{ij}$ for all $i, j$    (16)

Because the set is complete, we have

$\hat{I} = \sum_{n=1}^{N} |a_n\rangle\langle a_n|$    (17)

Eq. 17 ignores the continuous spectrum and truncates the infinite sum in the completeness relation. We can act with each $\langle a_m|$, $m = 1, 2, \ldots, N$, on the eigenvalue equation to turn it into the following $N$ equations:

$\langle a_m | \hat{A} | \psi \rangle = \lambda\,\langle a_m | \psi \rangle, \quad m = 1, 2, \ldots, N$    (18)

Now insert $\hat{I}$, as given by Eq. 17, between $\hat{A}$ and $|\psi\rangle$ to obtain

$\sum_{n=1}^{N} \langle a_m | \hat{A} | a_n \rangle \langle a_n | \psi \rangle = \lambda\,\langle a_m | \psi \rangle, \quad m = 1, 2, \ldots, N$    (19)

The complex numbers $\langle a_n | \psi \rangle$ are the coordinates of $|\psi\rangle$ in the $\{|a_n\rangle\}_{n=1}^{N}$ representation. If we know them, we can write $|\psi\rangle$ as

$|\psi\rangle = \sum_{n=1}^{N} |a_n\rangle \langle a_n | \psi \rangle$    (20)

It is easy to rewrite Eq. 19 in a form familiar from linear matrix algebra. Let us denote

$\psi_i \equiv \langle a_i | \psi \rangle, \quad i = 1, 2, \ldots, N$    (21)

and

$A_{mn} \equiv \langle a_m | \hat{A} | a_n \rangle$    (22)

With this notation, Eq. 19 becomes

$\sum_{n=1}^{N} A_{mn}\,\psi_n = \lambda\,\psi_m, \quad m = 1, 2, \ldots, N$    (23)

and Eq. 20,

$|\psi\rangle = \sum_{n=1}^{N} |a_n\rangle\,\psi_n$    (24)

The sum in Eq. 23 is the rule by which the matrix $A$, having the elements $A_{mn}$, acts on the vector $\psi$, having the coordinates $\psi_n$. This equation is called the eigenvalue problem for the matrix $A$, and it is often written as

$A\psi = \lambda\psi$    (25)

(which resembles the operator equation $\hat{A}|\psi\rangle = \lambda|\psi\rangle$) or as

$\begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1N} \\ A_{21} & A_{22} & \cdots & A_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ A_{N1} & A_{N2} & \cdots & A_{NN} \end{pmatrix} \begin{pmatrix} \psi_1 \\ \psi_2 \\ \vdots \\ \psi_N \end{pmatrix} = \lambda \begin{pmatrix} \psi_1 \\ \psi_2 \\ \vdots \\ \psi_N \end{pmatrix}$    (26)

Eqs. 25 and 26 are different ways of writing Eq. 23, which, in turn, is the representation of the equation $\hat{A}|\psi\rangle = \lambda|\psi\rangle$ in the basis set $\{|a_1\rangle, \ldots, |a_N\rangle\}$.

Normally in quantum mechanics we choose the basis set $\{|a_1\rangle, \ldots, |a_N\rangle\}$ and know the operator $\hat{A}$. This means that we can calculate $A_{mn} = \langle a_m | \hat{A} | a_n \rangle$. We do not know $|\psi\rangle$ or $\lambda$, and our mission is to find them from $\hat{A}|\psi\rangle = \lambda|\psi\rangle$. We have converted this operator equation, by using the orthonormal basis set $\{|a_1\rangle, \ldots, |a_N\rangle\}$, into the matrix equation Eq. 23 (or Eq. 25 or Eq. 26). In Eq. 23 we know $A_{mn}$, but we do not know $\psi = \{\psi_1, \psi_2, \ldots, \psi_N\}$ or $\lambda$. We must calculate them from Eq. 23. Once we know $\psi_1, \psi_2, \ldots, \psi_N$, we can calculate $|\psi\rangle$ from

$|\psi\rangle = \sum_{n=1}^{N} |a_n\rangle \langle a_n | \psi \rangle \equiv \sum_{n=1}^{N} |a_n\rangle\,\psi_n$    (27)

§6. Use a computer. Calculating eigenvalues and eigenvectors of a matrix "by hand" is extremely tedious, especially since the dimension $N$ is often very large. Most computer languages (including Fortran, C, C++, Mathematica, Maple, Mathcad, Basic) have libraries of functions that, given a matrix, will return its eigenvalues and eigenvectors. I will not discuss the numerical algorithms used for finding eigenvalues; quantum mechanics could leave that to the experts without suffering much harm.
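To make §6 concrete, here is a minimal sketch of such a library call (Python/NumPy standing in for the Mathematica, Fortran, or C libraries named above; the matrix $A_{mn}$ is invented). Since a matrix representing an observable is Hermitian, the appropriate routine is the Hermitian eigensolver:

```python
import numpy as np

# A small Hermitian matrix A_mn = <a_m|A|a_n> (entries made up for illustration).
A = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.5],
              [0.0, 0.5, 1.0]])

# eigh is NumPy's solver for Hermitian matrices: it returns real eigenvalues
# in ascending order and orthonormal eigenvectors as the columns of `psi`.
lam, psi = np.linalg.eigh(A)

print(lam)         # the spectrum of the observable
print(psi[:, 0])   # coordinates <a_n|psi> of the eigenstate with lowest eigenvalue

# Verify Eq. 25 column by column: A psi_k = lambda_k psi_k.
print(np.allclose(A @ psi, psi * lam))   # True
```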