[ENTRY FUNCTIONAL ANALYSIS] Author: Oliver Knill, 2002. Literature: Various Notes
Total Pages: 16
File Type: pdf, Size: 1020 KB
Recommended publications
- FUNCTIONAL ANALYSIS: 1. Banach and Hilbert Spaces
FUNCTIONAL ANALYSIS. PIOTR HAJLASZ. 1. Banach and Hilbert spaces. In what follows $K$ will denote $\mathbb{R}$ or $\mathbb{C}$. Definition. A normed space is a pair $(X, \|\cdot\|)$, where $X$ is a linear space over $K$ and $\|\cdot\| : X \to [0,\infty)$ is a function, called a norm, such that (1) $\|x+y\| \le \|x\| + \|y\|$ for all $x, y \in X$; (2) $\|\alpha x\| = |\alpha|\,\|x\|$ for all $x \in X$ and $\alpha \in K$; (3) $\|x\| = 0$ if and only if $x = 0$. Since $\|x-y\| \le \|x-z\| + \|z-y\|$ for all $x, y, z \in X$, $d(x,y) = \|x-y\|$ defines a metric in a normed space. In what follows normed spaces will always be regarded as metric spaces with respect to the metric $d$. A normed space is called a Banach space if it is complete with respect to the metric $d$. Definition. Let $X$ be a linear space over $K$ ($=\mathbb{R}$ or $\mathbb{C}$). The inner product (scalar product) is a function $\langle\cdot,\cdot\rangle : X \times X \to K$ such that (1) $\langle x,x\rangle \ge 0$; (2) $\langle x,x\rangle = 0$ if and only if $x = 0$; (3) $\langle \alpha x, y\rangle = \alpha\langle x,y\rangle$; (4) $\langle x_1 + x_2, y\rangle = \langle x_1,y\rangle + \langle x_2,y\rangle$; (5) $\langle x,y\rangle = \overline{\langle y,x\rangle}$, for all $x, x_1, x_2, y \in X$ and all $\alpha \in K$. As an obvious corollary we obtain $\langle x, y_1 + y_2\rangle = \langle x,y_1\rangle + \langle x,y_2\rangle$ and $\langle x, \alpha y\rangle = \bar{\alpha}\langle x,y\rangle$, for all $x, y_1, y_2 \in X$ and $\alpha \in K$. For a space with an inner product we define $\|x\| = \sqrt{\langle x,x\rangle}$. Lemma 1.1 (Schwarz inequality).
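A quick numerical sanity check of these axioms (my own illustration, not part of the cited notes), using the standard inner product on $\mathbb{C}^5$ with randomly drawn vectors; the induced norm then obeys the Schwarz and triangle inequalities:

```python
import numpy as np

# Standard inner product on C^5, linear in the first slot and conjugate-linear
# in the second, matching axioms (1)-(5) above; vectors are drawn at random.
rng = np.random.default_rng(0)
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
y = rng.standard_normal(5) + 1j * rng.standard_normal(5)

inner = lambda a, b: np.vdot(b, a)          # <a, b> = sum_i a_i * conj(b_i)
norm = lambda a: np.sqrt(inner(a, a).real)  # ||a|| = sqrt(<a, a>)

assert abs(inner(x, y)) <= norm(x) * norm(y) + 1e-12   # Schwarz inequality
assert norm(x + y) <= norm(x) + norm(y) + 1e-12        # triangle inequality, norm axiom (1)
```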
- Using Functional Analysis and Sobolev Spaces to Solve Poisson’s Equation
USING FUNCTIONAL ANALYSIS AND SOBOLEV SPACES TO SOLVE POISSON'S EQUATION. YI WANG. Abstract. We study Banach and Hilbert spaces with an eye towards defining weak solutions to elliptic PDE. Using Lax-Milgram we prove that weak solutions to Poisson's equation exist under certain conditions. Contents: 1. Introduction; 2. Banach spaces; 3. Weak topology, weak star topology and reflexivity; 4. Lower semicontinuity; 5. Hilbert spaces; 6. Sobolev spaces; References. 1. Introduction. We will discuss the following problem in this paper: let $\Omega$ be an open and connected subset in $\mathbb{R}^n$ and $f$ be an $L^2$ function on $\Omega$; is there a solution to Poisson's equation (1) $-\Delta u = f$? From elementary partial differential equations class, we know if $\Omega = \mathbb{R}^n$, we can solve Poisson's equation using the fundamental solution to Laplace's equation. However, if we just take $\Omega$ to be an open and connected set, the above method is no longer useful. In addition, for arbitrary $\Omega$ and $f$, a $C^2$ solution does not always exist. Therefore, instead of finding a strong solution, i.e., a $C^2$ function which satisfies (1), we integrate (1) against a test function $\varphi$ (a test function is a smooth function compactly supported in $\Omega$), integrate by parts, and arrive at the equation (2) $\int_\Omega \nabla u \cdot \nabla\varphi = \int_\Omega f\varphi$ for all $\varphi \in C_c^\infty(\Omega)$. So intuitively we want to find a function which satisfies (2) for all test functions, and this is the place where Hilbert spaces come into play.
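For a concrete feel for the weak formulation (2), here is a minimal one-dimensional sketch of my own (not from the paper): approximating the solution of $-u'' = f$ on $(0,1)$ with zero boundary values by a standard finite-difference discretization, which is one common way such solutions are computed in practice.

```python
import numpy as np

# Minimal sketch: solve -u'' = f on (0, 1) with u(0) = u(1) = 0 using a
# second-order finite-difference discretization on n interior grid points.
n = 99
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.sin(np.pi * x)                      # sample right-hand side

# Tridiagonal matrix approximating -d^2/dx^2 with Dirichlet boundary conditions.
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
u = np.linalg.solve(A, f)

# For this f the exact solution is sin(pi x) / pi^2; the discrete error is O(h^2).
print(np.max(np.abs(u - np.sin(np.pi * x) / np.pi**2)))   # roughly 1e-5
```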
- The Spectral Theorem for Self-Adjoint and Unitary Operators (Michael Taylor)
The Spectral Theorem for Self-Adjoint and Unitary Operators. Michael Taylor. Contents: 1. Introduction; 2. Functions of a self-adjoint operator; 3. Spectral theorem for bounded self-adjoint operators; 4. Functions of unitary operators; 5. Spectral theorem for unitary operators; 6. Alternative approach; 7. From Theorem 1.2 to Theorem 1.1; A. Spectral projections; B. Unbounded self-adjoint operators; C. Von Neumann's mean ergodic theorem. 1. Introduction. If $H$ is a Hilbert space, a bounded linear operator $A : H \to H$ ($A \in \mathcal{L}(H)$) has an adjoint $A^* : H \to H$ defined by (1.1) $(Au, v) = (u, A^*v)$, $u, v \in H$. We say $A$ is self-adjoint if $A = A^*$. We say $U \in \mathcal{L}(H)$ is unitary if $U^* = U^{-1}$. More generally, if $\tilde{H}$ is another Hilbert space, we say $\Phi \in \mathcal{L}(H, \tilde{H})$ is unitary provided $\Phi$ is one-to-one and onto, and $(\Phi u, \Phi v)_{\tilde{H}} = (u, v)_H$, for all $u, v \in H$. If $\dim H = n < \infty$, each self-adjoint $A \in \mathcal{L}(H)$ has the property that $H$ has an orthonormal basis of eigenvectors of $A$. The same holds for each unitary $U \in \mathcal{L}(H)$. Proofs can be found in Sections 11-12, Chapter 2, of [T3]. Here, we aim to prove the following infinite-dimensional variant of such a result, called the Spectral Theorem. Theorem 1.1. If $A \in \mathcal{L}(H)$ is self-adjoint, there exists a measure space $(X, \mathcal{F}, \mu)$, a unitary map $\Phi : H \to L^2(X, \mu)$, and $a \in L^\infty(X, \mu)$, such that (1.2) $\Phi A \Phi^{-1} f(x) = a(x) f(x)$ for all $f \in L^2(X, \mu)$. Here, $a$ is real valued, and $\|a\|_{L^\infty} = \|A\|$.
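A finite-dimensional analogue of Theorem 1.1 (my own illustration, using a random Hermitian matrix as the self-adjoint operator): an orthonormal eigenbasis turns $A$ into multiplication by a real "symbol" $a$, and the operator norm equals $\max|a|$.

```python
import numpy as np

# Finite-dimensional spectral theorem: diagonalize a Hermitian matrix and check
# the "multiplication operator" form and the norm identity ||A|| = ||a||_infinity.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (B + B.conj().T) / 2                    # self-adjoint: A = A*

a, U = np.linalg.eigh(A)                    # a is real; columns of U are orthonormal eigenvectors
assert np.allclose(U.conj().T @ A @ U, np.diag(a))
assert np.isclose(np.linalg.norm(A, 2), np.max(np.abs(a)))
```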
- Linear Operators
CHAPTER 10. Linear Operators. Recall that a linear transformation $T \in L(V)$ of a vector space into itself is called a (linear) operator. In this chapter we shall elaborate somewhat on the theory of operators. In so doing, we will define several important types of operators, and we will also prove some important diagonalization theorems. Much of this material is directly useful in physics and engineering as well as in mathematics. While some of this chapter overlaps with Chapter 8, we assume that the reader has studied at least Section 8.1. 10.1 LINEAR FUNCTIONALS AND ADJOINTS. Recall that in Theorem 9.3 we showed that for a finite-dimensional real inner product space $V$, the mapping $u \mapsto L_u = \langle u, \cdot\rangle$ was an isomorphism of $V$ onto $V^*$. This mapping had the property that $L_{au}v = \langle au, v\rangle = a\langle u, v\rangle = aL_u v$, and hence $L_{au} = aL_u$ for all $u \in V$ and $a \in \mathbb{R}$. However, if $V$ is a complex space with a Hermitian inner product, then $L_{au}v = \langle au, v\rangle = a^*\langle u, v\rangle = a^*L_u v$, and hence $L_{au} = a^*L_u$, which is not even linear (this was the definition of an antilinear (or conjugate linear) transformation given in Section 9.2). Fortunately, there is a closely related result that holds even for complex vector spaces. Let $V$ be finite-dimensional over $\mathbb{C}$, and assume that $V$ has an inner product $\langle\cdot,\cdot\rangle$ defined on it (this is just a positive definite Hermitian form on $V$). Thus for any $X, Y \in V$ we have $\langle X, Y\rangle \in \mathbb{C}$.
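A small numerical check of the point above (my own, assuming the Hermitian inner product $\langle u, v\rangle = \sum_i \bar{u}_i v_i$, conjugate-linear in the first argument): the map $u \mapsto L_u = \langle u, \cdot\rangle$ is conjugate-linear, so $L_{au} = a^* L_u$ rather than $aL_u$.

```python
import numpy as np

# L_w(x) = <w, x> with <w, x> = sum conj(w_i) x_i; scaling w by a scales L_w by conj(a).
rng = np.random.default_rng(7)
u = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
a = 2.0 + 3.0j

L = lambda w: (lambda x: np.vdot(w, x))     # np.vdot conjugates its first argument
assert np.isclose(L(a * u)(v), np.conj(a) * L(u)(v))
```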
- Hilbert Space Methods for Partial Differential Equations
Hilbert Space Methods for Partial Differential Equations. R. E. Showalter. Electronic Journal of Differential Equations, Monograph 01, 1994. Preface. This book is an outgrowth of a course which we have given almost periodically over the last eight years. It is addressed to beginning graduate students of mathematics, engineering, and the physical sciences. Thus, we have attempted to present it while presupposing a minimal background: the reader is assumed to have some prior acquaintance with the concepts of “linear” and “continuous” and also to believe $L^2$ is complete. An undergraduate mathematics training through Lebesgue integration is an ideal background but we dare not assume it without turning away many of our best students. The formal prerequisite consists of a good advanced calculus course and a motivation to study partial differential equations. A problem is called well-posed if for each set of data there exists exactly one solution and this dependence of the solution on the data is continuous. To make this precise we must indicate the space from which the solution is obtained, the space from which the data may come, and the corresponding notion of continuity. Our goal in this book is to show that various types of problems are well-posed. These include boundary value problems for (stationary) elliptic partial differential equations and initial-boundary value problems for (time-dependent) equations of parabolic, hyperbolic, and pseudo-parabolic types. Also, we consider some nonlinear elliptic boundary value problems, variational or unilateral problems, and some methods of numerical approximation of solutions. We briefly describe the contents of the various chapters.
- 3. Hilbert Spaces
3. Hilbert spaces. In this section we examine a special type of Banach spaces. We start with some algebraic preliminaries. Definition. Let $K$ be either $\mathbb{R}$ or $\mathbb{C}$, and let $X$ and $Y$ be vector spaces over $K$. A map $\varphi : X \times Y \to K$ is said to be $K$-sesquilinear if: • for every $x \in X$, the map $\varphi_x : Y \ni y \mapsto \varphi(x, y) \in K$ is linear; • for every $y \in Y$, the map $\varphi^y : X \ni x \mapsto \varphi(x, y) \in K$ is conjugate linear, i.e. the map $X \ni x \mapsto \overline{\varphi^y(x)} \in K$ is linear. In the case $K = \mathbb{R}$ the above properties are equivalent to the fact that $\varphi$ is bilinear. Remark 3.1. Let $X$ be a vector space over $\mathbb{C}$, and let $\varphi : X \times X \to \mathbb{C}$ be a $\mathbb{C}$-sesquilinear map. Then $\varphi$ is completely determined by the map $Q_\varphi : X \ni x \mapsto \varphi(x, x) \in \mathbb{C}$. This can be seen by computing, for $k \in \{0, 1, 2, 3\}$, the quantity $Q_\varphi(x + i^k y) = \varphi(x + i^k y, x + i^k y) = \varphi(x, x) + \varphi(x, i^k y) + \varphi(i^k y, x) + \varphi(i^k y, i^k y) = \varphi(x, x) + i^k\varphi(x, y) + i^{-k}\varphi(y, x) + \varphi(y, y)$, which then gives (1) $\varphi(x, y) = \frac{1}{4}\sum_{k=0}^{3} i^{-k} Q_\varphi(x + i^k y)$, for all $x, y \in X$. The map $Q_\varphi : X \to \mathbb{C}$ is called the quadratic form determined by $\varphi$. The identity (1) is referred to as the Polarization Identity.
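A direct numerical check of the Polarization Identity (1) (my own, for a concrete sesquilinear form on $\mathbb{C}^3$ that is conjugate-linear in the first variable and linear in the second, built from an arbitrary complex matrix $B$):

```python
import numpy as np

# phi(x, y) = conj(x)^T B y is sesquilinear; its quadratic form Q(x) = phi(x, x)
# recovers phi via the Polarization Identity (1).
rng = np.random.default_rng(8)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
phi = lambda x, y: np.conj(x) @ B @ y
Q = lambda x: phi(x, x)

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)
rhs = 0.25 * sum((1j) ** (-k) * Q(x + (1j) ** k * y) for k in range(4))
assert np.isclose(phi(x, y), rhs)
```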
- Hermitian, Orthogonal, and Unitary Operators
CHAPTER 6. HERMITIAN, ORTHOGONAL, AND UNITARY OPERATORS. In Chapter 4, we saw advantages in using bases consisting of eigenvectors of linear operators in a number of applications. Chapter 5 illustrated the benefit of orthonormal bases. Unfortunately, eigenvectors of linear operators are not usually orthogonal, and vectors in an orthonormal basis are not likely to be eigenvectors of any pertinent linear operator. There are operators, however, for which eigenvectors are orthogonal, and hence it is possible to have a basis that is simultaneously orthonormal and consists of eigenvectors. This chapter introduces some of these operators. §6.1 Hermitian Operators. When the basis for an $n$-dimensional real, inner product space is orthonormal, the inner product of two vectors $u$ and $v$ can be calculated with formula 5.48. If $v$ not only represents a vector, but also denotes its representation as a column matrix, we can write the inner product as the product of two matrices, one a row matrix and the other a column matrix, $(u, v) = u^T v$. If $A$ is an $n \times n$ real matrix, the inner product of $u$ and the vector $Av$ is $(u, Av) = u^T(Av) = (u^T A)v = (A^T u)^T v = (A^T u, v)$. (6.1) This result, $(u, Av) = (A^T u, v)$, (6.2) allows us to move the matrix $A$ from the second term to the first term in the inner product, but it must be replaced by its transpose $A^T$. A similar result can be derived for complex, inner product spaces. When $A$ is a complex matrix, we can use equation 5.50 to write $(u, Av) = \overline{u}^{\,T}(Av) = (\overline{u}^{\,T}A)v = (A^T\overline{u})^T v = (\overline{\overline{A}^{\,T}u})^{\,T} v = (\overline{A}^{\,T}u, v)$.
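A quick check of the complex version of (6.2) with random data (my own sketch, using the convention $(u, v) = \bar{u}^T v$ as in the chain above): moving $A$ across the inner product replaces it by its conjugate transpose.

```python
import numpy as np

# (u, Av) = (conj(A)^T u, v) for a complex matrix A, with (u, v) = conj(u)^T v.
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
u = rng.standard_normal(3) + 1j * rng.standard_normal(3)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)

inner = lambda a, b: np.conj(a) @ b
assert np.isclose(inner(u, A @ v), inner(A.conj().T @ u, v))
```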
- Complex Inner Product Spaces
Complex Inner Product Spaces. A vector space $V$ over the field $\mathbb{C}$ of complex numbers is called a (complex) inner product space if it is equipped with a pairing $\langle\cdot,\cdot\rangle : V \times V \to \mathbb{C}$ which has the following properties: 1. (Sesqui-symmetry) For every $v, w$ in $V$ we have $\langle w, v\rangle = \overline{\langle v, w\rangle}$. 2. (Linearity) For every $u, v, w$ in $V$ and $a$ in $\mathbb{C}$ we have $\langle au + v, w\rangle = a\langle u, w\rangle + \langle v, w\rangle$. 3. (Positive definiteness) For every $v$ in $V$, we have $\langle v, v\rangle \ge 0$. Moreover, if $\langle v, v\rangle = 0$, then $v$ must be $0$. Sometimes the third condition is replaced by the following weaker condition: 3'. (Non-degeneracy) For every $v$ in $V$, there is a $w$ in $V$ so that $\langle v, w\rangle \ne 0$. We will generally work with the stronger condition of positive definiteness, as is conventional. A basis $f_1, f_2, \ldots$ of $V$ is called orthogonal if $\langle f_i, f_j\rangle = 0$ if $i < j$. An orthogonal basis of $V$ is called an orthonormal (or unitary) basis if, in addition, $\langle f_i, f_i\rangle = 1$. Given a linear transformation $A : V \to V$ and another linear transformation $B : V \to V$, we say that $B$ is the adjoint of $A$ if for all $v, w$ in $V$ we have $\langle Av, w\rangle = \langle v, Bw\rangle$. By sesqui-symmetry we see that $A$ is then the adjoint of $B$ as well. Moreover, by positive definiteness, we see that if $A$ has an adjoint $B$, then this adjoint is unique. Note that, when $V$ is infinite dimensional, we may not be able to find an adjoint for $A$! In case it does exist it is denoted as $A^*$.
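One standard way to produce an orthonormal basis of the kind defined above is Gram-Schmidt; here is a small sketch of my own (not from the text), written for the convention above, where the pairing is linear in the first slot and conjugate-linear in the second.

```python
import numpy as np

def inner(u, v):
    # <u, v> = sum_i u_i * conj(v_i): linear in u, conjugate-linear in v.
    return np.vdot(v, u)                    # np.vdot conjugates its first argument

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = v - sum(inner(v, b) * b for b in basis)   # subtract components along earlier vectors
        basis.append(w / np.sqrt(inner(w, w).real))   # normalize so <f_i, f_i> = 1
    return basis

rng = np.random.default_rng(3)
f = gram_schmidt(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
print(np.round([[inner(fi, fj) for fj in f] for fi in f], 10))   # approximately the identity matrix
```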
- Notes for Functional Analysis
Notes for Functional Analysis. Wang Zuoqin (typed by Xiyu Zhai). November 13, 2015. Lecture 18. 1.1 Characterizations of reflexive spaces. Recall that a Banach space $X$ is reflexive if the inclusion $X \subset X^{**}$ is a Banach space isomorphism. The following theorem of Kakutani gives us a very useful criterion for reflexive spaces: Theorem 1.1. A Banach space $X$ is reflexive iff the closed unit ball $B = \{x \in X : \|x\| \le 1\}$ is weakly compact. Proof. First assume $X$ is reflexive. Then $B$ is the closed unit ball in $X^{**}$. By the Banach-Alaoglu theorem, $B$ is compact with respect to the weak-* topology of $X^{**}$. But by definition, in this case the weak-* topology on $X^{**}$ coincides with the weak topology on $X$ (since both are the weakest topology on $X = X^{**}$ making elements in $X^*$ continuous). So $B$ is compact with respect to the weak topology on $X$. Conversely, suppose $B$ is weakly compact. Let $\iota : X \hookrightarrow X^{**}$ be the canonical inclusion map that sends $x$ to $\mathrm{ev}_x$. Then we've seen that $\iota$ preserves the norm, and thus is continuous (with respect to the norm topologies). By PSet 8-1 Problem 3, $\iota : X_{\mathrm{weak}} \hookrightarrow X^{**}_{\mathrm{weak}}$ is continuous. Since the weak-* topology on $X^{**}$ is weaker than the weak topology on $X^{**}$ (since $X^{***} = (X^*)^{**} \supset X^*$. OH MY GOD!) we thus conclude that $\iota : X_{\mathrm{weak}} \hookrightarrow X^{**}_{\mathrm{weak}\text{-}*}$ is continuous. But by assumption $B \subset X_{\mathrm{weak}}$ is compact, so the image $\iota(B)$ is compact in $X^{**}_{\mathrm{weak}\text{-}*}$. In particular, $\iota(B)$ is weak-* closed.
- The Riesz Representation Theorem
The Riesz Representation Theorem. MA 466, Kurt Bryan. Let $H$ be a Hilbert space over $\mathbb{R}$ or $\mathbb{C}$, and $T$ a bounded linear functional on $H$ (a bounded operator from $H$ to the field, $\mathbb{R}$ or $\mathbb{C}$, over which $H$ is defined). The following is called the Riesz Representation Theorem: Theorem 1. If $T$ is a bounded linear functional on a Hilbert space $H$ then there exists some $g \in H$ such that for every $f \in H$ we have $T(f) = \langle f, g\rangle$. Moreover, $\|T\| = \|g\|$ (here $\|T\|$ denotes the operator norm of $T$, while $\|g\|$ is the Hilbert space norm of $g$). Proof: Let's assume that $H$ is separable for now. It's not really much harder to prove for any Hilbert space, but the separable case makes nice use of the ideas we developed regarding Fourier analysis. Also, let's just work over $\mathbb{R}$. Since $H$ is separable we can choose an orthonormal basis $\varphi_j$, $j \ge 1$, for $H$. Let $T$ be a bounded linear functional and set $a_j = T(\varphi_j)$. Choose $f \in H$, let $c_j = \langle f, \varphi_j\rangle$, and define $f_n = \sum_{j=1}^n c_j\varphi_j$. Since the $\varphi_j$ form a basis we know that $\|f - f_n\| \to 0$ as $n \to \infty$. Since $T$ is linear we have (1) $T(f_n) = \sum_{j=1}^n a_j c_j$. Since $T$ is bounded, say with norm $\|T\| < \infty$, we have (2) $|T(f) - T(f_n)| \le \|T\|\,\|f - f_n\|$. Because $\|f - f_n\| \to 0$ as $n \to \infty$ we conclude from equations (1) and (2) that (3) $T(f) = \lim_{n\to\infty} T(f_n) = \sum_{j=1}^\infty a_j c_j$. In fact, the sequence $a_j$ must itself be square-summable.
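A finite-dimensional illustration of the proof idea (my own, on $\mathbb{R}^n$ with the standard basis playing the role of $\varphi_j$): setting $g_j = T(\varphi_j)$ gives the representing vector, $|T(f)| \le \|g\|\,\|f\|$, and the bound is attained at $f = g/\|g\|$, so $\|T\| = \|g\|$.

```python
import numpy as np

# A bounded linear functional on R^n and its Riesz representative g.
n = 5
rng = np.random.default_rng(4)
w = rng.standard_normal(n)
T = lambda f: w @ f                          # some bounded linear functional

g = np.array([T(e) for e in np.eye(n)])      # a_j = T(phi_j) with phi_j the standard basis
f = rng.standard_normal(n)

assert np.isclose(T(f), f @ g)                                        # T(f) = <f, g>
assert abs(T(f)) <= np.linalg.norm(g) * np.linalg.norm(f) + 1e-12     # |T(f)| <= ||g|| ||f||
assert np.isclose(T(g / np.linalg.norm(g)), np.linalg.norm(g))        # norm attained, so ||T|| = ||g||
```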
- Unitary Operators, the Evolution Operator, the Schrödinger and the Heisenberg Pictures, and Wigner’s Formula
Appendix 5: Unitary Operators, the Evolution Operator, the Schrödinger and the Heisenberg Pictures, and Wigner’s Formula. A5.1 Unitary Operators, the Evolution Operator. In ordinary 3-dimensional space, there are operators, such as an operator that simply rotates vectors, which alter ordinary vectors while preserving their lengths. Unitary operators do much the same thing in n-dimensional complex spaces.[1] Let us start with some definitions. Given an operator (matrix) $A$, its inverse $A^{-1}$ is such that $AA^{-1} = A^{-1}A = I$.[2] (A5.1.1) An operator (matrix) $U$ is unitary if its inverse is equal to its Hermitian conjugate (its adjoint), so that $U^{-1} = U^{+}$, (A5.1.2) and therefore $UU^{+} = U^{+}U = I$. (A5.1.3) Since the evolution operator $U(t, t_0)$ and TDSE do the same thing, $U(t, t_0)$ must preserve normalization and therefore unsurprisingly it can be shown that the evolution operator is unitary. Moreover, $U(t_n, t_i) = U^{+}(t_i, t_n) = U^{-1}(t_i, t_n)$. (A5.1.4) In addition, $U(t_n, t_1) = U(t_n, t_{n-1})U(t_{n-1}, t_{n-2})\cdots U(t_2, t_1)$, (A5.1.5) that is, the evolution operator for the system's transition in the interval $t_n - t_1$ is equal to the product of the evolution operators in the intermediate intervals. [1] As we know, the state vector is normalized (has length one), and stays normalized. Hence, the only way for it to undergo linear change is to rotate in n-dimensional space. [2] It is worth noticing that a matrix has an inverse if and only if its determinant is different from zero.
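As a small sketch of (A5.1.2)-(A5.1.5) (my own, taking a random Hermitian matrix as a stand-in Hamiltonian and a time-independent evolution $U(t) = e^{-iHt}$, computed with scipy):

```python
import numpy as np
from scipy.linalg import expm

# U(t) = exp(-i H t) for Hermitian H is unitary and composes over subintervals.
rng = np.random.default_rng(5)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = (B + B.conj().T) / 2                     # stand-in Hermitian "Hamiltonian"
U = lambda t: expm(-1j * H * t)

assert np.allclose(U(0.7).conj().T @ U(0.7), np.eye(3))   # U+ U = I, as in (A5.1.3)
assert np.allclose(U(0.7), U(0.4) @ U(0.3))               # composition, as in (A5.1.5)
```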
- Linear Maps on Hilbert Spaces
Chapter 10. Linear Maps on Hilbert Spaces. A special tool called the adjoint helps provide insight into the behavior of linear maps on Hilbert spaces. This chapter begins with a study of the adjoint and its connection to the null space and range of a linear map. Then we discuss various issues connected with the invertibility of operators on Hilbert spaces. These issues lead to the spectrum, which is a set of numbers that gives important information about an operator. This chapter then looks at special classes of operators on Hilbert spaces: self-adjoint operators, normal operators, isometries, unitary operators, integral operators, and compact operators. Even on infinite-dimensional Hilbert spaces, compact operators display many characteristics expected from finite-dimensional linear algebra. We will see that the powerful Spectral Theorem for compact operators greatly resembles the finite-dimensional version. Also, we develop the Singular Value Decomposition for an arbitrary compact operator, again quite similar to the finite-dimensional result. [Figure caption: The Botanical Garden at Uppsala University (the oldest university in Sweden, founded in 1477), where Erik Fredholm (1866-1927) was a student. The theorem called the Fredholm Alternative, which we prove in this chapter, states that a compact operator minus a nonzero scalar multiple of the identity operator is injective if and only if it is surjective.] (S. Axler, Measure, Integration & Real Analysis, Graduate Texts in Mathematics 282, https://doi.org/10.1007/978-3-030-33143-6_10.) 10A Adjoints and Invertibility. Adjoints of Linear Maps on Hilbert Spaces. The next definition provides a key tool for studying linear maps on Hilbert spaces.
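A finite-dimensional analogue of the Singular Value Decomposition mentioned above (my own sketch): any complex matrix factors as $A = U\,\mathrm{diag}(s)\,V^*$ with nonnegative singular values $s$; for a compact operator the same structure holds with the singular values tending to zero.

```python
import numpy as np

# SVD of a random complex matrix: A = U diag(s) V*, with U, V having orthonormal columns.
rng = np.random.default_rng(6)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

U, s, Vh = np.linalg.svd(A, full_matrices=False)
assert np.allclose(A, U @ np.diag(s) @ Vh)
assert np.allclose(U.conj().T @ U, np.eye(3))
assert np.allclose(Vh @ Vh.conj().T, np.eye(3))
```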