A Presentation: Group of Unitary Operators
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
-
The Heisenberg Group Fourier Transform
THE HEISENBERG GROUP FOURIER TRANSFORM. NEIL LYALL.

Contents: 1. Fourier transform on R^n; 2. Fourier analysis on the Heisenberg group (2.1. Representations of the Heisenberg group; 2.2. Group Fourier transform; 2.3. Convolution and twisted convolution); 3. Hermite and Laguerre functions (3.1. Hermite polynomials; 3.2. Laguerre polynomials; 3.3. Special Hermite functions); 4. Group Fourier transform of radial functions on the Heisenberg group; References.

1. Fourier transform on R^n. We start by presenting some standard properties of the Euclidean Fourier transform; see for example [6] and [4]. Given $f \in L^1(\mathbb{R}^n)$, we define its Fourier transform by setting
$$\hat{f}(\xi) = \int_{\mathbb{R}^n} e^{-ix\cdot\xi} f(x)\,dx.$$
If for $h \in \mathbb{R}^n$ we let $(\tau_h f)(x) = f(x+h)$, then it follows that $\widehat{\tau_h f}(\xi) = e^{ih\cdot\xi}\hat{f}(\xi)$. Now for suitable $f$ the inversion formula
$$f(x) = (2\pi)^{-n} \int_{\mathbb{R}^n} e^{ix\cdot\xi}\hat{f}(\xi)\,d\xi$$
holds, and we see that the Fourier transform decomposes a function into a continuous sum of characters (eigenfunctions for translations). If $A$ is an orthogonal matrix and $\xi$ is a column vector, then $\widehat{f\circ A}(\xi) = \hat{f}(A\xi)$, and from this it follows that the Fourier transform of a radial function is again radial. In particular, the Fourier transform of Gaussians takes a particularly nice form: if $G(x) = e^{-|x|^2/2}$, then $\hat{G}(\xi) = (2\pi)^{n/2} G(\xi)$. In general, the Fourier transform of a radial function can always be explicitly expressed in terms of a Bessel transform: if $g(x) = g_0(|x|)$ for some function $g_0$, then
$$\hat{g}(\xi) = (2\pi)^{n/2} \int_0^\infty g_0(r)\,(r|\xi|)^{\frac{2-n}{2}} J_{\frac{n-2}{2}}(r|\xi|)\, r^{n-1}\,dr,$$
where $J_{\frac{n-2}{2}}$ is a Bessel function.
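A quick numerical illustration (added alongside this excerpt; not part of Lyall's notes): the Python sketch below approximates the one-dimensional Fourier integral above by a Riemann sum and checks it against the Gaussian identity $\hat{G}(\xi) = (2\pi)^{1/2}G(\xi)$ for $n = 1$. NumPy is assumed to be available.

```python
import numpy as np

# Approximate f^(xi) = \int e^{-i x xi} f(x) dx for the Gaussian f(x) = exp(-x^2/2)
# and compare with the predicted value (2*pi)^(1/2) * exp(-xi^2/2).
x = np.linspace(-20.0, 20.0, 40001)   # wide grid; the Gaussian is negligible beyond |x| ~ 10
dx = x[1] - x[0]
f = np.exp(-x**2 / 2.0)

for xi in (0.0, 0.5, 1.0, 2.0):
    numeric = np.sum(np.exp(-1j * x * xi) * f) * dx      # Riemann-sum Fourier transform
    exact = np.sqrt(2.0 * np.pi) * np.exp(-xi**2 / 2.0)  # Gaussian identity from the excerpt
    print(f"xi = {xi:3.1f}: numeric = {numeric.real:.6f}, exact = {exact:.6f}")
```

-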
Lecture Notes: Harmonic Analysis
Lecture notes: harmonic analysis. Russell Brown, Department of Mathematics, University of Kentucky, Lexington, KY 40506-0027. August 14, 2009.

Contents: Preface; 1. The Fourier transform on L^1 (1.1 Definition and symmetry properties; 1.2 The Fourier inversion theorem); 2. Tempered distributions (2.1 Test functions; 2.2 Tempered distributions; 2.3 Operations on tempered distributions; 2.4 The Fourier transform; 2.5 More distributions); 3. The Fourier transform on L^2 (3.1 Plancherel's theorem; 3.2 Multiplier operators; 3.3 Sobolev spaces); 4. Interpolation of operators (4.1 The Riesz-Thorin theorem; 4.2 Interpolation for analytic families of operators; 4.3 Real methods); 5. The Hardy-Littlewood maximal function (5.1 The L^p-inequalities; 5.2 Differentiation theorems); 6. Singular integrals (6.1 Calderón-Zygmund kernels; 6.2 Some multiplier operators); 7. Littlewood-Paley theory (7.1 A square function that characterizes L^p; 7.2 Variations); 8. Fractional integration (8.1 The Hardy-Littlewood-Sobolev theorem; 8.2 A Sobolev inequality); 9. Singular multipliers (9.1 Estimates for an operator with a singular symbol; 9.2 A trace theorem); 10. The Dirichlet problem for elliptic equations (10.1 Domains in R^n; 10.2 The weak Dirichlet problem); 11. Inverse problems: boundary identifiability (11.1 The Dirichlet to Neumann map; 11.2 Identifiability); 12. Inverse problem: global uniqueness (12.1 A Schrödinger equation).
-
Convergence of Fourier Series in Lp Space
CONVERGENCE OF FOURIER SERIES IN L^p SPACE. JING MIAO.

Abstract. The convergence of Fourier series of trigonometric functions is easy to see, but the same question for general functions is not simple to answer. We study the convergence of Fourier series in L^p spaces. This result gives us a criterion that determines whether certain partial differential equations have solutions or not. We will follow closely the ideas from Schlag and Muscalu's Classical and Multilinear Harmonic Analysis.

Contents: 1. Fourier Series, Partial Sums, and Dirichlet Kernel; 2. Convolution; 3. Fejér Kernel and Approximate Identity; 4. L^p Convergence of Partial Sums; Appendix A. Proofs of Theorems and Lemma; Acknowledgments; References.

1. Fourier Series, Partial Sums, and Dirichlet Kernel. Let $\mathbb{T} = \mathbb{R}/\mathbb{Z}$ be the one-dimensional torus (in other words, the circle). We consider various function spaces on the torus $\mathbb{T}$, namely the space of continuous functions $C(\mathbb{T})$, the space of Hölder continuous functions $C^\alpha(\mathbb{T})$ where $0 < \alpha \le 1$, and the Lebesgue spaces $L^p(\mathbb{T})$ where $1 \le p \le \infty$. Let $f$ be an $L^1$ function. Then the associated Fourier series of $f$ is
$$f(x) \sim \sum_{n=-\infty}^{\infty} \hat{f}(n)\, e^{2\pi i n x} \quad (1.1)$$
and
$$\hat{f}(n) = \int_0^1 f(x)\, e^{-2\pi i n x}\,dx. \quad (1.2)$$
The symbol $\sim$ in (1.1) is formal, and simply means that the series on the right-hand side is associated with $f$. One interesting and important question is when $f$ equals the right-hand side in (1.1). Note that if we start from a trigonometric polynomial
$$f(x) = \sum_{n=-\infty}^{\infty} a_n e^{2\pi i n x}, \quad (1.3)$$
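To make (1.1) and (1.2) concrete, here is a small illustration (added here; not part of Miao's paper): it approximates the coefficients $\hat{f}(n)$ of a step function on the torus by a Riemann sum and evaluates the partial sums $S_N f(x) = \sum_{|n|\le N}\hat{f}(n)e^{2\pi i n x}$. The test function and the use of NumPy are assumptions made for the example.

```python
import numpy as np

def fourier_coefficient(f, n, num_points=4096):
    """Approximate f^(n) = int_0^1 f(x) exp(-2 pi i n x) dx by a Riemann sum."""
    x = np.arange(num_points) / num_points
    return np.mean(f(x) * np.exp(-2j * np.pi * n * x))

def partial_sum(f, N, x):
    """Evaluate S_N f(x) = sum over |n| <= N of f^(n) exp(2 pi i n x)."""
    return sum(fourier_coefficient(f, n) * np.exp(2j * np.pi * n * x)
               for n in range(-N, N + 1))

# A step function on the torus T = R/Z; it jumps at 0 and 1/2, so convergence is slow.
step = lambda t: np.where(t < 0.5, 1.0, -1.0)

x = np.array([0.1, 0.25, 0.4])
for N in (5, 50, 500):
    print(N, np.round(partial_sum(step, N, x).real, 4))   # tends to step(x) = 1 at these points
```

-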
The Spectral Theorem for Self-Adjoint and Unitary Operators
The Spectral Theorem for Self-Adjoint and Unitary Operators. Michael Taylor.

Contents: 1. Introduction; 2. Functions of a self-adjoint operator; 3. Spectral theorem for bounded self-adjoint operators; 4. Functions of unitary operators; 5. Spectral theorem for unitary operators; 6. Alternative approach; 7. From Theorem 1.2 to Theorem 1.1; A. Spectral projections; B. Unbounded self-adjoint operators; C. Von Neumann's mean ergodic theorem.

1. Introduction. If $H$ is a Hilbert space, a bounded linear operator $A : H \to H$ ($A \in \mathcal{L}(H)$) has an adjoint $A^* : H \to H$ defined by
$$(Au, v) = (u, A^*v), \quad u, v \in H. \quad (1.1)$$
We say $A$ is self-adjoint if $A = A^*$. We say $U \in \mathcal{L}(H)$ is unitary if $U^* = U^{-1}$. More generally, if $\tilde{H}$ is another Hilbert space, we say $\Phi \in \mathcal{L}(H, \tilde{H})$ is unitary provided $\Phi$ is one-to-one and onto, and $(\Phi u, \Phi v)_{\tilde{H}} = (u, v)_H$ for all $u, v \in H$. If $\dim H = n < \infty$, each self-adjoint $A \in \mathcal{L}(H)$ has the property that $H$ has an orthonormal basis of eigenvectors of $A$. The same holds for each unitary $U \in \mathcal{L}(H)$. Proofs can be found in §§11-12, Chapter 2, of [T3]. Here, we aim to prove the following infinite-dimensional variant of such a result, called the Spectral Theorem.

Theorem 1.1. If $A \in \mathcal{L}(H)$ is self-adjoint, there exists a measure space $(X, \mathfrak{F}, \mu)$, a unitary map $\Phi : H \to L^2(X, \mu)$, and $a \in L^\infty(X, \mu)$, such that
$$\Phi A \Phi^{-1} f(x) = a(x) f(x), \quad \forall\, f \in L^2(X, \mu). \quad (1.2)$$
Here, $a$ is real valued, and $\|a\|_{L^\infty} = \|A\|$.
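The finite-dimensional statement quoted above (an orthonormal basis of eigenvectors for a self-adjoint operator) is easy to check numerically. The sketch below, added here as an illustration rather than taken from Taylor's notes, diagonalizes a random Hermitian matrix and verifies the finite-dimensional analogue of (1.2): in the eigenbasis, $A$ acts by multiplication.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2                       # a self-adjoint (Hermitian) matrix

eigvals, U = np.linalg.eigh(A)                 # columns of U: orthonormal eigenvectors; eigvals are real
print(np.allclose(U.conj().T @ U, np.eye(n)))              # U is unitary
print(np.allclose(U @ np.diag(eigvals) @ U.conj().T, A))   # A = U diag(lambda) U*
print(np.allclose(U.conj().T @ A @ U, np.diag(eigvals)))   # in the eigenbasis, A is a multiplication operator
```

-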
Index
Index (symbols): A : B (inner product on matrices), 41; A • B (inner product on matrices), 41; Ā (closure of A), 308; A ≼ B (matrix ordering), 184; A∗ (adjoint operator), 313; B_X (open unit ball), 17; C( ; X) (continuous maps → X), 142, 184; C_0^∞(R^d) (space of test functions), 316; co(A) (convex hull), 314; cone(A) (cone generated by A), 337; D(R^d) (space of distributions), 316; D^α (derivative with multi-index), 322; d_H(A, B) (Hausdorff metric), 21; δ (Dirac-δ function), 3, 80, 317; δ_H(A, B) (one-sided Hausdorff semimetric), 21; diam F (diameter of set F), 319; div σ (divergence of σ), 234; dµ/dν (Radon–Nikodym derivative), 320; dom φ (domain of convex function), 54, 327; epi f (epigraph), 19, 327; ε (strain tensor), 232; f^{−1}(E) (inverse set), 308; f g (inf-convolution), 348; graph (graph of ), 21; Hess f (Hessian matrix), 81; K_∞ (asymptotic cone), 19; K∗ (dual cone), 19; L(X, Y) (linear maps X → Y), 312; L^∞( ), 311; L^p( ), 311; λ (Lebesgue measure), 317; λ_min(B) (minimum eigenvalue of B), 315; liminf_{n→∞} x_n (limit inferior), 309; limsup_{n→∞} x_n (limit superior), 309; M(A) (space of measures), 318; N_K(x) (normal cone), 19; O(g(s)) (asymptotic "big Oh"), 204; ∂f (subdifferential of f), 340; P(X) (power set; set of subsets of X), 20; +(U) (strong inverse), 22; −(U) (weak inverse), 22; K (projection map), 19, 329; R^n_+ (nonnegative orthant), 19; S(R^d) (space of tempered test functions), 317; σ (stress tensor), 233; σ_K (support function), 328; sup(A) (supremum of A ⊆ R), 309; T_K(x) (tangent cone), 19; (u, v)_H (inner product), 17; ⟨u, v⟩ (duality pairing), 17; W^{m,p}( ) (Sobolev space), 322.
-
Linear Operators
CHAPTER 10: Linear Operators

Recall that a linear transformation $T \in L(V)$ of a vector space into itself is called a (linear) operator. In this chapter we shall elaborate somewhat on the theory of operators. In so doing, we will define several important types of operators, and we will also prove some important diagonalization theorems. Much of this material is directly useful in physics and engineering as well as in mathematics. While some of this chapter overlaps with Chapter 8, we assume that the reader has studied at least Section 8.1.

10.1 LINEAR FUNCTIONALS AND ADJOINTS. Recall that in Theorem 9.3 we showed that for a finite-dimensional real inner product space $V$, the mapping $u \mapsto L_u = \langle u, \cdot\rangle$ was an isomorphism of $V$ onto $V^*$. This mapping had the property that $L_{au}v = \langle au, v\rangle = a\langle u, v\rangle = aL_u v$, and hence $L_{au} = aL_u$ for all $u \in V$ and $a \in \mathbb{R}$. However, if $V$ is a complex space with a Hermitian inner product, then $L_{au}v = \langle au, v\rangle = a^*\langle u, v\rangle = a^*L_u v$, and hence $L_{au} = a^*L_u$, which is not even linear (this was the definition of an antilinear (or conjugate linear) transformation given in Section 9.2). Fortunately, there is a closely related result that holds even for complex vector spaces. Let $V$ be finite-dimensional over $\mathbb{C}$, and assume that $V$ has an inner product $\langle\,,\,\rangle$ defined on it (this is just a positive definite Hermitian form on $V$). Thus for any $X, Y \in V$ we have $\langle X, Y\rangle \in \mathbb{C}$.
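As a quick numerical companion to the adjoint discussion in this chapter (added here; not part of the book's text), the sketch below uses the excerpt's convention that the inner product is conjugate-linear in its first argument and checks that the conjugate transpose acts as the adjoint: $\langle Au, v\rangle = \langle u, A^\dagger v\rangle$.

```python
import numpy as np

def inner(u, v):
    """Hermitian inner product, conjugate-linear in the first slot: <u, v> = sum conj(u_i) v_i."""
    return np.vdot(u, v)

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

A_dag = A.conj().T                                         # candidate adjoint: the conjugate transpose
print(np.isclose(inner(A @ u, v), inner(u, A_dag @ v)))    # <Au, v> = <u, A_dag v>
```

-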
The Plancherel Formula for Parabolic Subgroups
ISRAEL JOURNAL OF MATHEMATICS, Vol. 28, Nos. 1-2, 1977

THE PLANCHEREL FORMULA FOR PARABOLIC SUBGROUPS, by Frederick W. Keene, Ronald L. Lipsman, and Joseph A. Wolf.

ABSTRACT. We prove an explicit Plancherel formula for the parabolic subgroups of the simple Lie groups of real rank one. The key point of the formula is that the operator which compensates lack of unimodularity is given, not as a family of implicitly defined operators on the representation spaces, but rather as an explicit pseudo-differential operator on the group itself. That operator is a fractional power of the Laplacian of the center of the unipotent radical, and the proof of our formula is based on the study of its analytic properties and its interaction with the group operations.

I. Introduction. The Plancherel theorem for non-unimodular groups has been developed and studied rather intensively during the past five years (see [7], [11], [8], [4]). Furthermore, there has been significant progress in the computation of the ingredients of the theorem (see [5], [2]), at least in the case of solvable groups. In this paper we shall give a completely explicit description of these ingredients for an interesting family of non-solvable groups. The groups we consider are the parabolic subgroups MAN of the real rank 1 simple Lie groups. As an intermediate step we also obtain the Plancherel formula for the exponential solvable groups AN (see also [6]). In that case our results are more extensive than those of [5], since in addition to the "infinitesimal" unbounded operators we also obtain explicitly the "global" unbounded operator on L²(AN).
-
Hermitian, Orthogonal, and Unitary Operators
CHAPTER 6: HERMITIAN, ORTHOGONAL, AND UNITARY OPERATORS

In Chapter 4, we saw advantages in using bases consisting of eigenvectors of linear operators in a number of applications. Chapter 5 illustrated the benefit of orthonormal bases. Unfortunately, eigenvectors of linear operators are not usually orthogonal, and vectors in an orthonormal basis are not likely to be eigenvectors of any pertinent linear operator. There are operators, however, for which eigenvectors are orthogonal, and hence it is possible to have a basis that is simultaneously orthonormal and consists of eigenvectors. This chapter introduces some of these operators.

§6.1 Hermitian Operators. When the basis for an n-dimensional real inner product space is orthonormal, the inner product of two vectors u and v can be calculated with formula 5.48. If v not only represents a vector, but also denotes its representation as a column matrix, we can write the inner product as the product of two matrices, one a row matrix and the other a column matrix, $(u, v) = u^T v$. If A is an $n \times n$ real matrix, the inner product of u and the vector Av is
$$(u, Av) = u^T(Av) = (u^T A)v = (A^T u)^T v = (A^T u, v). \quad (6.1)$$
This result,
$$(u, Av) = (A^T u, v), \quad (6.2)$$
allows us to move the matrix A from the second term to the first term in the inner product, but it must be replaced by its transpose $A^T$. A similar result can be derived for complex inner product spaces: when A is a complex matrix, we can use equation 5.50 to write $(u, Av) = u^T\overline{(Av)} = (u^T\overline{A})\,\overline{v} = (\overline{A}^T u)^T\,\overline{v} = (\overline{A}^T u, v)$.
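A short numerical check of (6.1)-(6.2), added here as an illustration rather than taken from the text: for a real matrix the transpose moves across the inner product, and in the complex case (with the inner product written as $(u, v) = u^T\bar{v}$, the convention assumed for this sketch) the conjugate transpose plays the same role.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

# Real case, as in (6.2): (u, Av) = (A^T u, v) with (u, v) = u^T v.
A = rng.standard_normal((n, n))
u, v = rng.standard_normal(n), rng.standard_normal(n)
print(np.isclose(u @ (A @ v), (A.T @ u) @ v))

# Complex case with (u, v) = u^T conj(v): the conjugate transpose replaces A^T.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
w = rng.standard_normal(n) + 1j * rng.standard_normal(n)
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
inner = lambda a, b: a @ np.conj(b)
print(np.isclose(inner(w, B @ z), inner(B.conj().T @ w, z)))
```

-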
Complex Inner Product Spaces
Complex Inner Product Spaces

A vector space $V$ over the field $\mathbb{C}$ of complex numbers is called a (complex) inner product space if it is equipped with a pairing $\langle\,,\,\rangle : V \times V \to \mathbb{C}$ which has the following properties:

1. (Sesqui-symmetry) For every $v, w$ in $V$ we have $\langle w, v\rangle = \overline{\langle v, w\rangle}$.
2. (Linearity) For every $u, v, w$ in $V$ and $a$ in $\mathbb{C}$ we have $\langle au + v, w\rangle = a\langle u, w\rangle + \langle v, w\rangle$.
3. (Positive definiteness) For every $v$ in $V$, we have $\langle v, v\rangle \ge 0$. Moreover, if $\langle v, v\rangle = 0$, then $v$ must be $0$.

Sometimes the third condition is replaced by the following weaker condition:

3'. (Non-degeneracy) For every $v$ in $V$, there is a $w$ in $V$ so that $\langle v, w\rangle \ne 0$.

We will generally work with the stronger condition of positive definiteness, as is conventional. A basis $f_1, f_2, \ldots$ of $V$ is called orthogonal if $\langle f_i, f_j\rangle = 0$ if $i < j$. An orthogonal basis of $V$ is called an orthonormal (or unitary) basis if, in addition, $\langle f_i, f_i\rangle = 1$. Given a linear transformation $A : V \to V$ and another linear transformation $B : V \to V$, we say that $B$ is the adjoint of $A$ if, for all $v, w$ in $V$, we have $\langle Av, w\rangle = \langle v, Bw\rangle$. By sesqui-symmetry we see that $A$ is then the adjoint of $B$ as well. Moreover, by positive definiteness, we see that if $A$ has an adjoint $B$, then this adjoint is unique. Note that, when $V$ is infinite dimensional, we may not be able to find an adjoint for $A$! In case it does exist it is denoted as $A^*$.
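To see the orthonormal-basis notion above in the concrete space $\mathbb{C}^n$ (an example added here, not part of the original notes), the following sketch runs Gram-Schmidt on a random basis, using the excerpt's convention that the pairing is linear in the first slot, and checks that the result satisfies $\langle f_i, f_j\rangle = 0$ for $i \ne j$ and $\langle f_i, f_i\rangle = 1$.

```python
import numpy as np

def inner(u, v):
    """<u, v> = sum u_i conj(v_i): linear in the first argument, as in the excerpt."""
    return np.sum(u * np.conj(v))

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors in C^n."""
    basis = []
    for v in vectors:
        w = v - sum(inner(v, f) * f for f in basis)      # subtract projections onto earlier f_i
        basis.append(w / np.sqrt(inner(w, w).real))      # normalize so <f, f> = 1
    return basis

rng = np.random.default_rng(3)
n = 4
vectors = [rng.standard_normal(n) + 1j * rng.standard_normal(n) for _ in range(n)]
f = gram_schmidt(vectors)

gram = np.array([[inner(fi, fj) for fj in f] for fi in f])
print(np.allclose(gram, np.eye(n)))   # True: the f_i form an orthonormal (unitary) basis
```

-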
Functional Analysis: An Elementary Introduction
Functional Analysis: An Elementary Introduction. Markus Haase. Graduate Studies in Mathematics, Volume 156. American Mathematical Society, Providence, Rhode Island. https://doi.org/10.1090/gsm/156. Editorial Committee: Dan Abramovich, Daniel S. Freed, Rafe Mazzeo (Chair), Gigliola Staffilani. 2010 Mathematics Subject Classification: Primary 46-01, 46Cxx, 46N20, 35Jxx, 35Pxx. For additional information and updates on this book, visit www.ams.org/bookpages/gsm-156. Library of Congress Cataloging-in-Publication Data: Haase, Markus, 1970–. Functional analysis: an elementary introduction / Markus Haase. (Graduate Studies in Mathematics; volume 156). Includes bibliographical references and indexes. ISBN 978-0-8218-9171-1 (alk. paper). 1. Functional analysis -- Textbooks. 2. Differential equations, Partial -- Textbooks. QA320.H23 2014, 515.7 dc23, 2014015166. Copying and reprinting: fair use by individual readers and nonprofit libraries is permitted; republication, systematic copying, or multiple reproduction requires permission from the American Mathematical Society ([email protected]). © 2014 by the American Mathematical Society.
-
Methods of Applied Mathematics
Methods of Applied Mathematics. Todd Arbogast and Jerry L. Bona. Department of Mathematics, and Institute for Computational Engineering and Sciences, The University of Texas at Austin. Copyright 1999–2001, 2004–2005, 2007–2008 (corrected version) by T. Arbogast and J. Bona.

Contents: Chapter 1. Preliminaries (1.1. Elementary Topology; 1.2. Lebesgue Measure and Integration; 1.3. Exercises); Chapter 2. Normed Linear Spaces and Banach Spaces (2.1. Basic Concepts and Definitions; 2.2. Some Important Examples; 2.3. Hahn-Banach Theorems; 2.4. Applications of Hahn-Banach; 2.5. The Embedding of X into its Double Dual X∗∗; 2.6. The Open Mapping Theorem; 2.7. Uniform Boundedness Principle; 2.8. Compactness and Weak Convergence in a NLS; 2.9. The Dual of an Operator; 2.10. Exercises); Chapter 3. Hilbert Spaces (3.1. Basic Properties of Inner-Products; 3.2. Best Approximation and Orthogonal Projections; 3.3. The Dual Space; 3.4. Orthonormal Subsets; 3.5. Weak Convergence in a Hilbert Space; 3.6. Exercises); Chapter 4. Spectral Theory and Compact Operators (4.1. Definitions of the Resolvent and Spectrum; 4.2. Basic Spectral Theory in Banach Spaces; 4.3. Compact Operators on a Banach Space; 4.4. Bounded Self-Adjoint Linear Operators on a Hilbert Space; 4.5. Compact Self-Adjoint Operators on a Hilbert Space; 4.6. The Ascoli-Arzelà Theorem; 4.7. Sturm-Liouville Theory; 4.8.
-
Appendix 5: Unitary Operators, the Evolution Operator, the Schrödinger and the Heisenberg Pictures, and Wigner's Formula
Appendix 5: Unitary Operators, the Evolution Operator, the Schrödinger and the Heisenberg Pictures, and Wigner's Formula

A5.1 Unitary Operators, the Evolution Operator. In ordinary 3-dimensional space, there are operators, such as an operator that simply rotates vectors, which alter ordinary vectors while preserving their lengths. Unitary operators do much the same thing in n-dimensional complex spaces.[1] Let us start with some definitions. Given an operator (matrix) $A$, its inverse $A^{-1}$ is such that
$$AA^{-1} = A^{-1}A = I. \quad (A5.1.1)$$
An operator (matrix) $U$ is unitary if its inverse is equal to its Hermitian conjugate (its adjoint),[2] so that
$$U^{-1} = U^{+}, \quad (A5.1.2)$$
and therefore
$$UU^{+} = U^{+}U = I. \quad (A5.1.3)$$
Since the evolution operator $U(t, t_0)$ and the TDSE do the same thing, $U(t, t_0)$ must preserve normalization, and therefore, unsurprisingly, it can be shown that the evolution operator is unitary. Moreover,
$$U(t_n, t_i) = U^{+}(t_i, t_n) = U^{-1}(t_i, t_n). \quad (A5.1.4)$$
In addition,
$$U(t_n, t_1) = U(t_n, t_{n-1})\, U(t_{n-1}, t_{n-2}) \cdots U(t_2, t_1), \quad (A5.1.5)$$
that is, the evolution operator for the system's transition in the interval $t_n - t_1$ is equal to the product of the evolution operators in the intermediate intervals.

[1] As we know, the state vector is normalized (has length one), and stays normalized. Hence, the only way for it to undergo linear change is to rotate in n-dimensional space.
[2] It is worth noticing that a matrix has an inverse if and only if its determinant is different from zero.
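A small numerical illustration of (A5.1.3)-(A5.1.5), added here rather than taken from the appendix: for a time-independent Hamiltonian the evolution operator can be taken as $U(t, t_0) = e^{-iH(t-t_0)}$ (with $\hbar = 1$; the specific random Hermitian $H$ below is an assumption made for the example), and unitarity together with the composition property can then be checked directly.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
n = 3
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (B + B.conj().T) / 2                 # a Hermitian "Hamiltonian" (hbar = 1)

def U(t, t0):
    """Evolution operator U(t, t0) = exp(-i H (t - t0)) for a time-independent H."""
    return expm(-1j * H * (t - t0))

t0, t1, t2 = 0.0, 0.7, 1.5
U20 = U(t2, t0)

print(np.allclose(U20 @ U20.conj().T, np.eye(n)))    # (A5.1.3): U U+ = I
print(np.allclose(U20.conj().T, U(t0, t2)))          # (A5.1.4): U+(t2, t0) = U(t0, t2)
print(np.allclose(U20, U(t2, t1) @ U(t1, t0)))       # (A5.1.5): composition over intermediate times
```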