Sturm–Liouville Theory

2 Sturm–Liouville Theory

So far, we've examined the Fourier decomposition of functions defined on some interval (often scaled to be from $-\pi$ to $\pi$). We viewed this expansion as an infinite dimensional analogue of expanding a finite dimensional vector into its components in an orthonormal basis. But this is just the tip of the iceberg. Recalling other games we play in linear algebra, you might well be wondering whether we couldn't have found some other basis in which to expand our functions. You might also wonder whether there shouldn't be some role for matrices in this story. If so, read on!

2.1 Self-adjoint matrices

We'll begin by reviewing some facts about matrices. Let $V$ and $W$ be finite dimensional vector spaces (defined, say, over the complex numbers) with $\dim V = n$ and $\dim W = m$. Suppose we have a linear map $M : V \to W$. By linearity, we know what $M$ does to any vector $v \in V$ if we know what it does to a complete set $\{v_1, v_2, \ldots, v_n\}$ of basis vectors in $V$. Furthermore, given a basis $\{w_1, w_2, \ldots, w_m\}$ of $W$, we can represent the map $M$ in terms of an $m \times n$ matrix $M$ whose components are

$$ M_{ai} = (w_a, M v_i) \qquad \text{for } a = 1, \ldots, m \text{ and } i = 1, \ldots, n, \tag{2.1} $$

where $(\ ,\ )$ is the inner product in $W$.

We'll be particularly interested in the case $m = n$, when the matrix $M$ is square and the map $M : V \to W \cong V$ sends $V$ to itself. For any $n \times n$ matrix $M$ we define its eigenvalues $\{\lambda_1, \lambda_2, \ldots, \lambda_n\}$ to be the roots of the characteristic equation $P(\lambda) \equiv \det(M - \lambda I) = 0$, where $I$ is the identity matrix. This characteristic equation has degree $n$, and the fundamental theorem of algebra assures us that we'll always be able to find $n$ roots (generically complex, and not necessarily distinct). The eigenvector $v_i$ of $M$ that corresponds to the eigenvalue $\lambda_i$ is then defined by $M v_i = \lambda_i v_i$ (at least for non-degenerate eigenvalues).

Given a complex $n \times n$ matrix $M$, its Hermitian conjugate $M^\dagger$ is defined to be the complex conjugate of the transpose matrix, $M^\dagger \equiv (M^T)^*$, where the complex conjugation acts on each entry of $M^T$. A matrix is said to be Hermitian or self-adjoint if $M^\dagger = M$. There's a neater way to define this: since for two vectors we have $(u, v) = u^\dagger \cdot v$, we see that a matrix $B$ is the adjoint of a matrix $A$ iff

$$ (Bu, v) = (u, Av) \tag{2.2} $$

because the vector $(Bu)^\dagger = u^\dagger B^\dagger$. The advantages of this definition are that i) it does not require that we pick any particular components in which to write the matrix, and ii) it applies whenever we have a definition of an inner product $(\ ,\ )$.

Self-adjoint matrices have a number of very important properties. Firstly, since

$$ \lambda_i (v_i, v_i) = (v_i, M v_i) = (M v_i, v_i) = \lambda_i^* (v_i, v_i), \tag{2.3} $$

the eigenvalues of a self-adjoint matrix are always real. Secondly, we have

$$ \lambda_i (v_j, v_i) = (v_j, M v_i) = (M v_j, v_i) = \lambda_j (v_j, v_i), \tag{2.4} $$

or in other words

$$ (\lambda_i - \lambda_j)(v_j, v_i) = 0, \tag{2.5} $$

so that eigenvectors corresponding to distinct eigenvalues are orthogonal with respect to the inner product $(\ ,\ )$. After rescaling the eigenvectors to have unit norm, we can express any $v \in V$ as a linear combination of the orthonormal set $\{v_1, v_2, \ldots, v_n\}$ of eigenvectors of some self-adjoint $M$. If $M$ has degenerate eigenvalues (i.e. two or more distinct vectors have the same eigenvalue), then the set of vectors sharing an eigenvalue forms a vector subspace of $V$, and we simply choose an orthonormal basis for each of these subspaces. In any case, the important point here is that self-adjoint matrices provide a natural way to pick a basis on our vector space.
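Both properties are easy to check numerically. The following is a minimal sketch using numpy; the matrix here is an arbitrary random Hermitian example, not anything specific from the text. `np.linalg.eigh` is the eigensolver specialised to self-adjoint matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary complex matrix A, symmetrised into M = A + A^dagger,
# which is self-adjoint by construction.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
M = A + A.conj().T

# eigh returns the eigenvalues as a real array, and the columns of V
# are an orthonormal set of eigenvectors.
lam, V = np.linalg.eigh(M)

print(lam)                                     # real eigenvalues
print(np.allclose(V.conj().T @ V, np.eye(4)))  # True: orthonormal basis
print(np.allclose(M @ V, V * lam))             # True: M v_i = lambda_i v_i
```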
A self-adjoint matrix $M$ is non-singular ($\det M \neq 0$, so that $M^{-1}$ exists) if and only if all its eigenvalues are non-zero. In this case, we can solve the linear equation $Mu = f$ for a fixed vector $f$ and unknown $u$. Formally, the solution is $u = M^{-1} f$, but a practical way to determine $u$ proceeds as follows. Suppose $\{v_1, v_2, \ldots, v_n\}$ is an orthonormal basis of eigenvectors of $M$. Then we can write

$$ f = \sum_{i=1}^n f_i v_i \qquad \text{and} \qquad u = \sum_{i=1}^n u_i v_i, $$

where $f_i = (v_i, f)$ etc. as before. We will know the vector $u$ if we can find all its coefficients $u_i$ in the $\{v_i\}$ basis. But by linearity

$$ Mu = \sum_{i=1}^n u_i M v_i = \sum_{i=1}^n u_i \lambda_i v_i = f = \sum_{i=1}^n f_i v_i, \tag{2.6} $$

and taking the inner product of this equation with $v_j$ gives

$$ \sum_{i=1}^n u_i \lambda_i (v_j, v_i) = u_j \lambda_j = \sum_{i=1}^n f_i (v_j, v_i) = f_j, \tag{2.7} $$

using the orthonormality of the basis. Provided $\lambda_j \neq 0$, we deduce $u_j = f_j / \lambda_j$, so that

$$ u = \sum_{i=1}^n \frac{f_i}{\lambda_i}\, v_i. \tag{2.8} $$

If $M$ is singular, then $Mu = f$ either has no solution or has a non-unique solution (which it is depends on the choice of $f$).

2.2 Differential operators

In the previous chapter we learned to think of functions as infinite dimensional vectors. We'd now like to think of the analogue of matrices. Sturm and Liouville realised that these could be thought of as linear differential operators $\mathcal{L}$. This is just a linear combination of derivatives with coefficients that can also be functions of $x$; i.e., $\mathcal{L}$ is a linear differential operator of order $p$ if

$$ \mathcal{L} = A_p(x) \frac{d^p}{dx^p} + A_{p-1}(x) \frac{d^{p-1}}{dx^{p-1}} + \cdots + A_1(x) \frac{d}{dx} + A_0(x). $$

When it acts on a (sufficiently smooth) function $y(x)$, it gives us back some other function $\mathcal{L}y(x)$ obtained by differentiating $y(x)$ in the obvious way. This is a linear map between spaces of functions because for two ($p$-times differentiable) functions $y_{1,2}(x)$ and constants $c_{1,2}$ we have $\mathcal{L}(c_1 y_1 + c_2 y_2) = c_1 \mathcal{L} y_1 + c_2 \mathcal{L} y_2$. The analogue of the matrix equation $Mu = f$ is then the differential equation $\mathcal{L}y(x) = f(x)$, where we assume that both the coefficient functions $A_p(x), \ldots, A_0(x)$ in $\mathcal{L}$ and the function $f(x)$ are known, and that we wish to find the unknown function $y(x)$.

For most of our applications in mathematical physics, we'll be interested in second order[9] linear differential operators[10]

$$ \mathcal{L} = P(x) \frac{d^2}{dx^2} + R(x) \frac{d}{dx} - Q(x). \tag{2.9} $$

Recall that for any such operator, the homogeneous equation $\mathcal{L}y(x) = 0$ has precisely two non-trivial linearly independent solutions, say $y = y_1(x)$ and $y = y_2(x)$, and the general solution $y(x) = c_1 y_1(x) + c_2 y_2(x)$ with $c_i \in \mathbb{C}$ is known as the complementary function. When dealing with the inhomogeneous equation $\mathcal{L}y = f$, we seek any single solution $y(x) = y_p(x)$, and the general solution is then the combination

$$ y(x) = y_p(x) + c_1 y_1(x) + c_2 y_2(x) $$

of the particular and complementary solutions. In many physical applications, the function $f$ represents a driving force for a system that obeys $\mathcal{L}y(x) = 0$ if left undisturbed.

In the cases (I assume) you've seen up to now, actually finding the particular solution required a good deal of either luck or inspired guesswork – you 'noticed' that if you differentiated such-and-such a function you'd get something that looked pretty close to the solution you're after, and perhaps you could then refine this guess to find an exact solution. Sturm–Liouville theory provides a more systematic approach, analogous to solving the matrix equation $Mu = f$ above.

[9] It's a beautiful question to ask 'why only second order?', particularly in quantum theory.
[10] The sign in front of $Q(x)$ is just a convention.
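As a concrete reminder of the matrix version we are about to generalise, here is a short numpy sketch of the eigenbasis solution (2.8) for $Mu = f$. The matrix and right-hand side are arbitrary illustrative choices; a random Hermitian matrix is non-singular with probability one, which this sketch assumes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary illustrative self-adjoint matrix and right-hand side.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
M = A + A.conj().T
f = rng.standard_normal(4) + 1j * rng.standard_normal(4)

lam, V = np.linalg.eigh(M)   # orthonormal eigenvectors in the columns of V

f_coeffs = V.conj().T @ f    # f_i = (v_i, f), as in (2.7)
u = V @ (f_coeffs / lam)     # u = sum_i (f_i / lambda_i) v_i, eq. (2.8)

print(np.allclose(M @ u, f))                  # True
print(np.allclose(u, np.linalg.solve(M, f)))  # True: agrees with direct solve
```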
2.3 Self-adjoint differential operators

The second-order differential operators considered by Sturm and Liouville take the form

$$ \mathcal{L}y \equiv \frac{d}{dx}\left( p(x) \frac{dy}{dx} \right) - q(x)\, y, \tag{2.10} $$

where $p(x)$ is real (and once differentiable) and $q(x)$ is real and continuous. This may look to be a tremendous specialization of the general form (2.9), with $R(x)$ restricted to be $P'(x)$, but actually this isn't the case. Provided $P(x) \neq 0$, starting from (2.9) we divide through by $P(x)$ to obtain

$$ \frac{d^2}{dx^2} + \frac{R(x)}{P(x)} \frac{d}{dx} - \frac{Q(x)}{P(x)} = e^{-\int_0^x R(t)/P(t)\, dt}\, \frac{d}{dx}\left( e^{\int_0^x R(t)/P(t)\, dt}\, \frac{d}{dx} \right) - \frac{Q(x)}{P(x)}. \tag{2.11} $$

Thus, setting $p(x)$ to be the integrating factor $p(x) = \exp\left( \int_0^x R(t)/P(t)\, dt \right)$ and likewise setting $q(x) = Q(x)\, p(x)/P(x)$, we see that the forms (2.10) and (2.9) are equivalent. However, for most purposes (2.10) will be more convenient.

The beautiful feature of these Sturm–Liouville operators is that they are self-adjoint with respect to the inner product

$$ (f, g) = \int_a^b f(x)^* g(x)\, dx, \tag{2.12} $$

provided the functions on which they act obey appropriate boundary conditions. To see this, we simply integrate by parts twice:

$$ \begin{aligned} (\mathcal{L}f, g) &= \int_a^b \left[ \frac{d}{dx}\left( p(x) \frac{df^*}{dx} \right) - q(x) f^*(x) \right] g(x)\, dx \\ &= \left[ p\, \frac{df^*}{dx}\, g \right]_a^b - \int_a^b \left[ p(x) \frac{df^*}{dx} \frac{dg}{dx} - q(x) f(x)^* g(x) \right] dx \\ &= \left[ p\, \frac{df^*}{dx}\, g - p f^* \frac{dg}{dx} \right]_a^b + \int_a^b f(x)^* \left[ \frac{d}{dx}\left( p(x) \frac{dg}{dx} \right) - q(x) g(x) \right] dx \\ &= \left[ p(x) \left( \frac{df^*}{dx}\, g - f^* \frac{dg}{dx} \right) \right]_a^b + (f, \mathcal{L}g), \end{aligned} \tag{2.13} $$

where in the first line we have used the fact that $p$ and $q$ are real for a Sturm–Liouville operator.
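The identity (2.13) can be checked directly in a concrete case. The sketch below uses sympy with assumed illustrative choices $p(x) = 1 + x$ and $q(x) = x$ on $[0, 1]$, and real test functions obeying Dirichlet boundary conditions, so the boundary term in (2.13) vanishes and $(\mathcal{L}f, g) = (f, \mathcal{L}g)$ exactly.

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Assumed Sturm-Liouville data on [0, 1]: p real and once differentiable,
# q real and continuous (both chosen purely for illustration).
p = 1 + x
q = x
a, b = 0, 1

def L(y):
    """Sturm-Liouville operator L y = (p y')' - q y, as in (2.10)."""
    return sp.diff(p * sp.diff(y, x), x) - q * y

# Test functions vanishing at both endpoints (Dirichlet conditions),
# so the boundary term in (2.13) drops out. Both are real, so f* = f.
f = sp.sin(sp.pi * x)
g = sp.sin(2 * sp.pi * x)

Lf_g = sp.integrate(L(f) * g, (x, a, b))  # (Lf, g)
f_Lg = sp.integrate(f * L(g), (x, a, b))  # (f, Lg)

print(sp.simplify(Lf_g - f_Lg))  # 0: self-adjoint on these functions
```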