4.2 Supplement


Introduction to Functional Analysis
Chapter 4. Hilbert Spaces
4.2. Semi-Inner Products—Proofs of Theorems
June 22, 2021

Table of contents
1. Lemma 4.2. Basic Identity
2. Theorem 4.3. Cauchy-Schwarz Inequality
3. Theorem 4.4. Triangle Inequality
4. Theorem 4.5. Parallelogram Law
5. Proposition 4.7. Polarization Identity
6. Theorem 4.9. Continuity of Inner Product

Lemma 4.2. Basic Identity. Let X be a semi-inner product space. Then the semi-norm induced by the semi-inner product satisfies: for all x, y ∈ X, we have

  ‖x + y‖² = ‖x‖² + ‖y‖² + 2 Re⟨x, y⟩.

Proof. We have

  ‖x + y‖² = ⟨x + y, x + y⟩
           = ⟨x, x + y⟩ + ⟨y, x + y⟩
           = ⟨x, x⟩ + ⟨x, y⟩ + ⟨y, x⟩ + ⟨y, y⟩
           = ‖x‖² + 2 Re⟨x, y⟩ + ‖y‖²,

since ⟨y, x⟩ is the complex conjugate of ⟨x, y⟩, so ⟨x, y⟩ + ⟨y, x⟩ = 2 Re⟨x, y⟩.
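As a quick numerical illustration (not part of the proof), the Basic Identity can be checked for the standard inner product on ℂⁿ. The following is a minimal sketch assuming NumPy; the helper names `inner` and `norm` and the convention of linearity in the first argument are choices made here rather than taken from the notes, and the choice of convention does not matter since only Re⟨x, y⟩ appears.

```python
import numpy as np

# Illustrative check of Lemma 4.2 for the standard inner product on C^n.
# Convention here: inner(x, y) is linear in x and conjugate-linear in y.
def inner(x, y):
    return np.vdot(y, x)                  # sum_k conj(y_k) * x_k

def norm(x):
    return np.sqrt(inner(x, x).real)

rng = np.random.default_rng(0)
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
y = rng.standard_normal(5) + 1j * rng.standard_normal(5)

lhs = norm(x + y) ** 2
rhs = norm(x) ** 2 + norm(y) ** 2 + 2 * inner(x, y).real
assert np.isclose(lhs, rhs)               # ||x + y||^2 = ||x||^2 + ||y||^2 + 2 Re<x, y>
```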
Theorem 4.3. Cauchy-Schwarz Inequality. Given a semi-inner product on X, for all x, y ∈ X we have |⟨x, y⟩| ≤ ‖x‖‖y‖. If ⟨·, ·⟩ is an inner product, then equality holds if and only if x is a scalar multiple of y (that is, x and y are linearly dependent).

Proof. Notice that if we scale x or y by a nonzero amount, the Cauchy-Schwarz Inequality is unchanged (the scaling factor appears on both sides and can be divided out).

Suppose ‖y‖ ≠ 0. If ⟨x, y⟩ = 0, the result follows trivially. If ⟨x, y⟩ ≠ 0, then x and y can be scaled so that ‖y‖ = 1 and ⟨x, y⟩ = 1 (the second condition possibly requiring a complex scaling factor). Then by Lemma 4.2, with y replaced by −y, we have

  0 ≤ ‖x − y‖² = ‖x‖² + ‖−y‖² + 2 Re⟨x, −y⟩ = ‖x‖² + 1 + 2(−1) = ‖x‖² − 1.

So ‖x‖ ≥ 1, and the result holds in the case ‖y‖ ≠ 0 (or, symmetrically, in the case ‖x‖ ≠ 0).

If ‖x‖ = ‖y‖ = 0 but ⟨x, y⟩ ≠ 0, then we can scale x and y so that ⟨x, y⟩ = 1. Then by Lemma 4.2, again with y replaced by −y,

  0 ≤ ‖x − y‖² = ‖x‖² + ‖−y‖² + 2 Re⟨x, −y⟩ = 0 + 0 − 2,

a contradiction. So in this case ⟨x, y⟩ = 0 and the inequality follows.

Now suppose ⟨·, ·⟩ is an inner product. If y = αx, then

  |⟨x, y⟩| = |⟨x, αx⟩| = |α| |⟨x, x⟩| = |α|‖x‖² = ‖x‖ ‖αx‖ = ‖x‖‖y‖,

so equality holds when x and y are linearly dependent.

Conversely, suppose that equality holds. Notice that ⟨0, x⟩ = ⟨x − x, x⟩ = ⟨x, x⟩ − ⟨x, x⟩ = 0, so if either ‖x‖ = 0 or ‖y‖ = 0, then x and y are linearly dependent since x = 0 or y = 0 (we are assuming ⟨·, ·⟩ is an inner product and ‖·‖ is a norm in this discussion of equality). If neither x nor y is 0, then again we can scale y so that ‖y‖ = 1 and then scale x so that ⟨x, y⟩ = 1. Equality in the Cauchy-Schwarz Inequality then implies ‖x‖ = 1. As above, with −y replacing y in Lemma 4.2,

  0 ≤ ‖x − y‖² = ‖x‖² + ‖y‖² + 2 Re⟨x, −y⟩ = 1 + 1 + 2(−1) = 0,

so ‖x − y‖ = 0 and x = y. Since the scaled copies of x and y are equal, the original x and y are scalar multiples of one another, that is, linearly dependent.
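The heart of the proof is the normalization step: scale y to a unit vector, then scale x by a (possibly complex) factor so that ⟨x, y⟩ = 1, and read the inequality off Lemma 4.2. Below is a small sketch of that step under the same assumptions as the previous snippet (NumPy, standard inner product on ℂⁿ, inner product taken linear in its first argument); it illustrates the mechanics rather than reproducing the proof.

```python
import numpy as np

def inner(x, y):
    return np.vdot(y, x)                  # linear in the first argument

def norm(x):
    return np.sqrt(inner(x, x).real)

rng = np.random.default_rng(1)
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
y = rng.standard_normal(5) + 1j * rng.standard_normal(5)

# Normalization step from the proof: make ||y|| = 1 and <x, y> = 1.
y1 = y / norm(y)
x1 = x / inner(x, y1)                     # complex scaling factor
assert np.isclose(norm(y1), 1.0)
assert np.isclose(inner(x1, y1), 1.0)

# Lemma 4.2 with y replaced by -y gives 0 <= ||x1 - y1||^2 = ||x1||^2 - 1,
# so ||x1|| >= 1; undoing the scaling, this is |<x, y>| <= ||x|| ||y||.
assert norm(x1) >= 1.0 - 1e-12
assert abs(inner(x, y)) <= norm(x) * norm(y) + 1e-12
```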
Theorem 4.4. Triangle Inequality. Let X be a semi-inner product space. Then the semi-norm induced by the semi-inner product satisfies: for all x, y ∈ X, we have ‖x + y‖ ≤ ‖x‖ + ‖y‖.

Proof. We simply have

  ‖x + y‖² = ‖x‖² + ‖y‖² + 2 Re⟨x, y⟩      (by Lemma 4.2)
           ≤ ‖x‖² + ‖y‖² + 2|⟨x, y⟩|
           ≤ ‖x‖² + ‖y‖² + 2‖x‖‖y‖         (by the Cauchy-Schwarz Inequality)
           = (‖x‖ + ‖y‖)².

Taking square roots of both sides gives the result.
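Each link in the chain of (in)equalities above can likewise be checked numerically. A minimal sketch with the same assumed helpers as before (the names and the ℂⁿ setting are illustrative, not taken from the notes):

```python
import numpy as np

def inner(x, y):
    return np.vdot(y, x)

def norm(x):
    return np.sqrt(inner(x, x).real)

rng = np.random.default_rng(2)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

a = norm(x + y) ** 2                                        # ||x + y||^2
b = norm(x) ** 2 + norm(y) ** 2 + 2 * inner(x, y).real      # Lemma 4.2
c = norm(x) ** 2 + norm(y) ** 2 + 2 * abs(inner(x, y))      # Re z <= |z|
d = norm(x) ** 2 + norm(y) ** 2 + 2 * norm(x) * norm(y)     # Cauchy-Schwarz
e = (norm(x) + norm(y)) ** 2

assert np.isclose(a, b) and b <= c + 1e-12 and c <= d + 1e-12 and np.isclose(d, e)
assert norm(x + y) <= norm(x) + norm(y) + 1e-12             # Triangle Inequality
```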