Several Important Theorems
by Francis J. Narcowich
November, 2014

1 The Projection Theorem

Let $H$ be a Hilbert space. When $V$ is a finite-dimensional subspace of $H$ and $f \in H$, we can always find a unique $p \in V$ such that $\|f - p\| = \min_{v \in V} \|f - v\|$. This fact is the foundation of least-squares approximation. What happens when we allow $V$ to be infinite dimensional? We will see that the minimization problem can be solved if and only if $V$ is closed.

Theorem 1.1 (The Projection Theorem). Let $H$ be a Hilbert space and let $V$ be a subspace of $H$. For every $f \in H$ there is a unique $p \in V$ such that $\|f - p\| = \min_{v \in V} \|f - v\|$ if and only if $V$ is a closed subspace of $H$.

To prove this, we need the following lemma.

Lemma 1.2 (Polarization Identity). Let $H$ be a Hilbert space. For every pair $f, g \in H$, we have
$$\|f + g\|^2 + \|f - g\|^2 = 2\bigl(\|f\|^2 + \|g\|^2\bigr).$$

Proof. Adding the $\pm$ identities $\|f \pm g\|^2 = \|f\|^2 \pm \langle f, g\rangle \pm \langle g, f\rangle + \|g\|^2$ yields the result. $\square$

The polarization identity is an easy consequence of having an inner product. It is surprising that if a norm satisfies the polarization identity, then the norm comes from an inner product.[1]

[1] Jordan, P.; von Neumann, J. On inner products in linear, metric spaces. Ann. of Math. (2) 36 (1935), no. 3, 719-723.

Proof (Projection Theorem). Showing that the existence of a minimizer implies that $V$ is closed is left as an exercise. So we assume that $V$ is closed. For $f \in H$, let $\alpha := \inf_{v \in V} \|v - f\|$. It is a little easier to work with this in an equivalent form, $\alpha^2 = \inf_{v \in V} \|v - f\|^2$. Thus, for every $\varepsilon > 0$ there is a $v_\varepsilon \in V$ such that $\alpha^2 \le \|v_\varepsilon - f\|^2 < \alpha^2 + \varepsilon$. By choosing $\varepsilon = 1/n$, where $n$ is a positive integer, we can find a sequence $\{v_n\}_{n=1}^\infty$ in $V$ such that
$$0 \le \|v_n - f\|^2 - \alpha^2 < \frac{1}{n}. \tag{1.1}$$
Of course, the same inequality holds for a possibly different integer $m$: $0 \le \|v_m - f\|^2 - \alpha^2 < \frac{1}{m}$. Adding the two yields this:
$$0 \le \|v_n - f\|^2 + \|v_m - f\|^2 - 2\alpha^2 < \frac{1}{n} + \frac{1}{m}. \tag{1.2}$$
By the polarization identity and a simple manipulation, we have
$$\|v_n - v_m\|^2 + 4\Bigl\|f - \frac{v_n + v_m}{2}\Bigr\|^2 = 2\bigl(\|f - v_n\|^2 + \|f - v_m\|^2\bigr).$$
We can subtract $4\alpha^2$ from both sides and use (1.2) to get
$$\|v_n - v_m\|^2 + 4\Bigl(\Bigl\|f - \frac{v_n + v_m}{2}\Bigr\|^2 - \alpha^2\Bigr) = 2\bigl(\|f - v_n\|^2 + \|f - v_m\|^2 - 2\alpha^2\bigr) < \frac{2}{n} + \frac{2}{m}.$$
Because $\frac{1}{2}(v_n + v_m) \in V$, we have $\|f - \frac{v_n + v_m}{2}\|^2 \ge \inf_{v \in V} \|v - f\|^2 = \alpha^2$. It follows that the second term on the left is nonnegative. Dropping it makes the left side smaller:
$$\|v_n - v_m\|^2 < \frac{2}{n} + \frac{2}{m}. \tag{1.3}$$
As $n, m \to \infty$, we see that $\|v_n - v_m\| \to 0$. Thus $\{v_n\}_{n=1}^\infty$ is a Cauchy sequence in $H$ and is therefore convergent to a vector $p \in H$. Since $V$ is closed, $p \in V$. Furthermore, taking limits in (1.1) implies that $\|p - f\| = \inf_{v \in V} \|v - f\|$. The uniqueness of $p$ is left as an exercise. $\square$

There are two important corollaries to this theorem; they follow from problem 3 of Assignment 1 and Theorem 1.1. We list them below.

Corollary 1.3. Let $V$ be a subspace of $H$. There exists an orthogonal projection $P : H \to V$ for which $\|f - Pf\| = \min_{v \in V} \|f - v\|$ if and only if $V$ is closed.

Corollary 1.4. Let $V$ be a closed subspace of $H$. Then $H = V \oplus V^\perp$ and $(V^\perp)^\perp = V$.
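The finite-dimensional case that motivates the theorem can be checked numerically. The following sketch is an added illustration, not part of the original notes; the subspace $V$ (the column span of a random $5 \times 2$ matrix), the vector $f$, and the use of NumPy's lstsq are assumptions made for the example. It computes the projection $p = Pf$ by least squares, verifies that $f - p$ is orthogonal to $V$ (Corollaries 1.3 and 1.4), and checks that $p$ minimizes $\|f - v\|$ over $V$ (Theorem 1.1).

```python
import numpy as np

rng = np.random.default_rng(0)

# A two-dimensional subspace V = span of the columns of A inside R^5
# (the subspace and the vector f are made up for this illustration).
A = rng.standard_normal((5, 2))
f = rng.standard_normal(5)

# Least-squares coefficients give the projection p = Pf of f onto V.
c, *_ = np.linalg.lstsq(A, f, rcond=None)
p = A @ c

# Corollaries 1.3/1.4: the residual f - p is orthogonal to V.
print("A^T (f - p) =", A.T @ (f - p))            # numerically ~ 0

# Theorem 1.1: p minimizes ||f - v|| over v in V.
dist_p = np.linalg.norm(f - p)
for _ in range(5):
    v = A @ rng.standard_normal(2)               # a random element of V
    assert dist_p <= np.linalg.norm(f - v) + 1e-12
print("minimum distance, attained at p:", dist_p)
```

The orthogonality of the residual is exactly the normal-equations characterization of the least-squares minimizer, which is how the projection is computed in practice.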
2 The Riesz Representation Theorem

Let $V$ be a Banach space. A bounded linear transformation $\Phi$ that maps $V$ into $\mathbb{R}$ or $\mathbb{C}$ is called a linear functional on $V$. The linear functionals form a Banach space $V^*$, called the dual space of $V$, with norm defined by
$$\|\Phi\|_{V^*} := \sup_{v \ne 0} \frac{|\Phi(v)|}{\|v\|_V}.$$

2.1 The linear functionals on Hilbert space

Theorem 2.1 (The Riesz Representation Theorem). Let $H$ be a Hilbert space and let $\Phi : H \to \mathbb{C}$ (or $\mathbb{R}$) be a bounded linear functional on $H$. Then there is a unique $g \in H$ such that, for all $f \in H$, $\Phi(f) = \langle f, g\rangle$.

Proof. The functional $\Phi$ is a bounded operator that maps $H$ into the scalars. It follows from our discussion of bounded operators that the null space of $\Phi$, $N(\Phi)$, is closed. If $N(\Phi) = H$, then $\Phi(f) = 0$ for all $f \in H$, hence $\Phi = 0$, and we may take $g = 0$. If $N(\Phi) \ne H$, then, since $N(\Phi)$ is closed, we have that $H = N(\Phi) \oplus N(\Phi)^\perp$. In addition, since $N(\Phi) \ne H$, there exists a nonzero vector $h \in N(\Phi)^\perp$. Moreover, $\Phi(h) \ne 0$, because $h$ is not in the null space $N(\Phi)$. Next, note that for $f \in H$, the vector $w := \Phi(h)f - \Phi(f)h$ is in $N(\Phi)$. To see this, observe that
$$\Phi(w) = \Phi\bigl(\Phi(h)f - \Phi(f)h\bigr) = \Phi(h)\Phi(f) - \Phi(f)\Phi(h) = 0.$$
Because $w = \Phi(h)f - \Phi(f)h \in N(\Phi)$ is orthogonal to $h \in N(\Phi)^\perp$, we have that
$$0 = \langle \Phi(h)f - \Phi(f)h,\, h\rangle = \Phi(h)\langle f, h\rangle - \Phi(f)\|h\|^2.$$
Solving this equation for $\Phi(f)$ yields $\Phi(f) = \bigl\langle f, \tfrac{\overline{\Phi(h)}}{\|h\|^2}\, h\bigr\rangle$. The vector $g := \tfrac{\overline{\Phi(h)}}{\|h\|^2}\, h$ then satisfies $\Phi(f) = \langle f, g\rangle$. To show uniqueness, suppose $g_1, g_2 \in H$ satisfy $\Phi(f) = \langle f, g_1\rangle$ and $\Phi(f) = \langle f, g_2\rangle$. Subtracting these two gives $\langle f, g_2 - g_1\rangle = 0$ for all $f \in H$. Letting $f = g_2 - g_1$ results in $\langle g_2 - g_1, g_2 - g_1\rangle = 0$. Consequently, $g_2 = g_1$. $\square$

2.2 Adjoints of bounded linear operators

We now turn to the problem of showing that an adjoint for a bounded operator always exists. This is just a corollary of the Riesz Representation Theorem.

Corollary 2.2. Let $L : H \to H$ be a bounded linear operator. Then there exists a bounded linear operator $L^* : H \to H$, called the adjoint of $L$, such that $\langle Lf, h\rangle = \langle f, L^*h\rangle$ for all $f, h \in H$.

Proof. Fix $h \in H$ and define the linear functional $\Phi_h(f) = \langle Lf, h\rangle$. Using the boundedness of $L$ and Schwarz's inequality, we have $|\Phi_h(f)| \le \|L\|\,\|f\|\,\|h\| = K\|f\|$, and so $\Phi_h$ is bounded. By Theorem 2.1, there is a unique vector $g \in H$ for which $\Phi_h(f) = \langle f, g\rangle$. The vector $g$ is uniquely determined by $\Phi_h$; thus $g = g_h$ is a function of $h$. We claim that $g_h$ is a linear function of $h$. Consider $h = a h_1 + b h_2$. Note that $\Phi_h(f) = \langle Lf, a h_1 + b h_2\rangle = \bar a\,\Phi_{h_1}(f) + \bar b\,\Phi_{h_2}(f)$. Since $\Phi_{h_1}(f) = \langle f, g_{h_1}\rangle$ and $\Phi_{h_2}(f) = \langle f, g_{h_2}\rangle$, we see that
$$\Phi_h(f) = \langle f, g_h\rangle = \bar a\,\Phi_{h_1}(f) + \bar b\,\Phi_{h_2}(f) = \langle f, a g_{h_1} + b g_{h_2}\rangle.$$
It follows that $g_h = a g_{h_1} + b g_{h_2}$ and that $g_h$ is a linear function of $h$. It is also bounded: if $f = g_h$, then $\Phi_h(g_h) = \|g_h\|^2$. From the bound $|\Phi_h(f)| \le \|L\|\,\|f\|\,\|h\|$, we have $\|g_h\|^2 \le \|L\|\,\|g_h\|\,\|h\|$. Dividing by $\|g_h\|$ then yields $\|g_h\| \le \|L\|\,\|h\|$. Thus the correspondence $h \to g_h$ is a bounded linear function on $H$. Denote this function by $L^*$. Since $\langle Lf, h\rangle = \langle f, g_h\rangle$, we have that $\langle Lf, h\rangle = \langle f, L^*h\rangle$. $\square$

Corollary 2.3. $\|L^*\| = \|L\|$.

Proof. By problem 5 in Assignment 8, $\|L\| = \sup_{f,h} |\langle Lf, h\rangle|$, where $\|h\| = \|f\| = 1$. On the other hand, $\|L^*\| = \sup_{f,h} |\langle L^*h, f\rangle|$. Since $\langle L^*h, f\rangle = \overline{\langle f, L^*h\rangle} = \overline{\langle Lf, h\rangle}$, we have that $\sup_{f,h} |\langle L^*h, f\rangle| = \sup_{f,h} |\langle Lf, h\rangle|$. It immediately follows that $\|L^*\| = \|L\|$. $\square$

Example 2.4. Let $R = [0,1] \times [0,1]$ and suppose that $k$ is a Hilbert-Schmidt kernel on $R$. If $Lu(x) = \int_0^1 k(x,y)\,u(y)\,dy$, then $L^*v(x) = \int_0^1 \overline{k(y,x)}\,v(y)\,dy$.

Proof. We will use $s, t$ as the integration variables and switch back at the end, to avoid confusion. We begin with $\langle Lu, v\rangle = \int_0^1 \bigl(\int_0^1 k(s,t)\,u(t)\,dt\bigr)\,\overline{v(s)}\,ds$. By Fubini's theorem, we may switch the order of integration to get this:
$$\int_0^1\!\!\int_0^1 k(s,t)\,u(t)\,\overline{v(s)}\,dt\,ds = \int_0^1 \Bigl(\int_0^1 k(s,t)\,\overline{v(s)}\,ds\Bigr) u(t)\,dt = \int_0^1 \overline{\underbrace{\int_0^1 \overline{k(s,t)}\,v(s)\,ds}_{L^*v(t)}}\; u(t)\,dt = \langle u, L^*v\rangle.$$
The result follows by changing variables from $t, s$ to $x, y$ in the second equation above. $\square$
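Example 2.4 lends itself to a quick numerical sanity check. The sketch below is an added illustration, not from the notes; the kernel $k(s,t) = e^{ist}$, the test functions $u$ and $v$, and the midpoint-rule discretization are assumptions chosen for the example. After discretization, the adjoint kernel $\overline{k(y,x)}$ corresponds to the conjugate transpose of the kernel matrix, and the two inner products $\langle Lu, v\rangle$ and $\langle u, L^*v\rangle$ agree.

```python
import numpy as np

n = 400
h = 1.0 / n
x = (np.arange(n) + 0.5) * h                 # midpoint grid on [0, 1]

k = lambda s, t: np.exp(1j * s * t)          # a sample Hilbert-Schmidt kernel (assumed)
K = k(x[:, None], x[None, :])                # K[i, j] = k(x_i, x_j)

u = np.sin(2 * np.pi * x) + 1j * x           # arbitrary test functions
v = np.cos(np.pi * x) + 0j

inner = lambda a, b: h * np.vdot(b, a)       # <a, b> = int a(s) conj(b(s)) ds, midpoint rule

Lu      = h * (K @ u)                        # (Lu)(x_i)  ~ int k(x_i, y) u(y) dy
Lstar_v = h * (K.conj().T @ v)               # (L*v)(x_j) ~ int conj(k(y, x_j)) v(y) dy

print(inner(Lu, v))                          # <Lu, v>
print(inner(u, Lstar_v))                     # <u, L*v>, matches <Lu, v>
```

In the discrete setting the agreement is exact (up to rounding), since taking the conjugate transpose of the quadrature matrix is precisely the matrix adjoint.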
3 The Fredholm Alternative

Theorem 3.1 (The Fredholm Alternative). Let $L : H \to H$ be a bounded linear operator whose range, $R(L)$, is closed. Then the equation $Lf = g$ can be solved if and only if $\langle g, v\rangle = 0$ for all $v \in N(L^*)$. Equivalently, $R(L) = N(L^*)^\perp$.

Proof. Let $g \in R(L)$, so that there is an $h \in H$ such that $g = Lh$. If $v \in N(L^*)$, then $\langle g, v\rangle = \langle Lh, v\rangle = \langle h, L^*v\rangle = 0$. Consequently, $R(L) \subseteq N(L^*)^\perp$. Conversely, let $f \in N(L^*)^\perp$. Since $R(L)$ is closed, the projection theorem, Theorem 1.1, and Corollary 1.3 imply that there exists an orthogonal projection $P$ onto $R(L)$ such that $Pf \in R(L)$ and $f' = f - Pf \in R(L)^\perp$. For every $w \in H$ we then have $0 = \langle Lw, f'\rangle = \langle w, L^*f'\rangle$, so $L^*f' = 0$ and $f' \in N(L^*)$. On the other hand, $f \in N(L^*)^\perp$ and $Pf \in R(L) \subseteq N(L^*)^\perp$, so $f' = f - Pf \in N(L^*)^\perp$ as well. Hence $\langle f', f'\rangle = 0$, so $f' = 0$ and $f = Pf \in R(L)$. Therefore $N(L^*)^\perp \subseteq R(L)$, and $R(L) = N(L^*)^\perp$. $\square$
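In finite dimensions every range is closed, so Theorem 3.1 can be verified directly. The sketch below is an added illustration, not part of the notes; the particular rank-deficient matrix $L$ and the tolerances are assumptions made for the example. It extracts a basis of $N(L^*)$ from the SVD of $L^{\mathsf T}$ and confirms that $Lf = g$ has a solution exactly when $g$ is orthogonal to $N(L^*)$.

```python
import numpy as np

# A rank-deficient operator on R^3 (entries made up for this illustration).
L = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 1., 1.]])

# Basis of N(L*) (here L* = L^T), read off from the SVD of L^T.
_, s, Vt = np.linalg.svd(L.T)
null_Lstar = Vt[s < 1e-10]                   # rows spanning the null space of L^T

def solvable(g, tol=1e-10):
    """Lf = g has a solution iff the least-squares residual vanishes."""
    f, *_ = np.linalg.lstsq(L, g, rcond=None)
    return np.linalg.norm(L @ f - g) < tol

g_in_range  = L @ np.array([1., 0., 0.])     # certainly in R(L)
g_not_range = null_Lstar[0]                  # lies in N(L*), hence not in R(L)

for g in (g_in_range, g_not_range):
    print("g orthogonal to N(L*):", bool(np.allclose(null_Lstar @ g, 0)),
          "| Lf = g solvable:", bool(solvable(g)))
```

The first right-hand side is orthogonal to $N(L^*)$ and is solvable; the second lies in $N(L^*)$ itself and is not, which is exactly the alternative stated in Theorem 3.1.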