Singular Fourier Transforms and the Integral Representation of the Dirac Delta Function


Physics 116C
Singular Fourier transforms and the Integral Representation of the Dirac Delta Function
Peter Young
(Dated: November 10, 2013)

I. INTRODUCTION

You will recall that the Fourier transform, g(k), of a function f(x) is defined by

    g(k) = \int_{-\infty}^{\infty} f(x)\, e^{ikx}\, dx,    (1)

and that there is a very similar relation, the inverse Fourier transform,[1] transforming g(k) back to f(x):

    f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} g(k)\, e^{-ikx}\, dk.    (2)

One can show that, for the Fourier transform to converge as the limits of integration tend to \pm\infty, we must have f(x) \to 0 as |x| \to \infty. In addition, consider the Fourier transform of the derivative f'(x), which we call g_1(k), i.e.

    g_1(k) = \int_{-\infty}^{\infty} f'(x)\, e^{ikx}\, dx.    (3)

Integrating by parts gives

    g_1(k) = \Big[ f(x)\, e^{ikx} \Big]_{-\infty}^{\infty} - (ik) \int_{-\infty}^{\infty} f(x)\, e^{ikx}\, dx,    (4)

which shows that the standard relation between g_1(k) and g(k),

    g_1(k) = -ik\, g(k),    (5)

also only holds if f(x) vanishes for |x| \to \infty. Hence standard Fourier transforms only apply to functions which vanish at infinity.

[1] Sometimes the Fourier transform and its inverse are made equivalent by incorporating a factor of 1/\sqrt{2\pi} into the definition of g(k), namely

    g(k) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x)\, e^{ikx}\, dx,
    f(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} g(k)\, e^{-ikx}\, dk.

Nonetheless, Fourier transforms are so useful that it is desirable to apply them to some functions which do not satisfy this condition. These transforms are known as "singular Fourier transforms" and need some form of "regularization" to make the integrals converge.

NOTE: for this course, the important sections are I, II, III, and V.

II. A SINGULAR FOURIER TRANSFORM INVOLVING A DELTA FUNCTION

As an example consider f(x) = 1. In order that the Fourier transform g(k) exists, we regularize the integral by putting in the "convergence factor" e^{-\epsilon|x|}, where \epsilon is small and positive. Eventually we will let \epsilon tend to zero. Hence we determine the Fourier transform of

    f_\epsilon(x) = e^{-\epsilon|x|},    (6)

which is

    g_\epsilon(k) = \int_{-\infty}^{\infty} e^{ikx}\, e^{-\epsilon|x|}\, dx.

We separate the integral into the negative-x region and the positive-x region to find

    g_\epsilon(k) = \int_{-\infty}^{0} e^{ikx}\, e^{\epsilon x}\, dx + \int_{0}^{\infty} e^{ikx}\, e^{-\epsilon x}\, dx
                  = \frac{1}{ik + \epsilon} - \frac{1}{ik - \epsilon} = \frac{2\epsilon}{\epsilon^2 + k^2}.    (7)

For \epsilon \to 0, g_\epsilon(k) becomes a narrow, high peak, the area under which is

    \int_{-\infty}^{\infty} \frac{2\epsilon}{\epsilon^2 + k^2}\, dk = 2 \tan^{-1}(k/\epsilon) \Big|_{-\infty}^{\infty} = 2\pi.

It is therefore convenient to define a quantity \delta_\epsilon(k) by

    \delta_\epsilon(k) = \frac{1}{\pi} \frac{\epsilon}{\epsilon^2 + k^2},    (8)

which has unit area under it.

We now consider the limit \epsilon \to 0^+, for which \delta_\epsilon(k) is a representation of what is known as the Dirac delta function \delta(k). This is an "infinitely high, infinitely narrow" peak with unit area under it. It is defined by the two relations

    \delta(x) = 0, \quad (x \neq 0),    (9)
    \int \delta(x)\, dx = 1, \quad (if the region of integration includes x = 0).    (10)

From these, it is straightforward to prove the following results:

    \int \delta(x - a)\, f(x)\, dx = f(a),    (11)
    \delta(cx) = \frac{\delta(x)}{|c|},    (12)

where the region of integration in Eq. (11) includes x = a. You should have seen Eqs. (11) and (12) before. If you are unfamiliar with them, you should take the trouble to derive them.

From Eqs. (7) and (8) we have

    \int_{-\infty}^{\infty} e^{ikx}\, e^{-\epsilon|x|}\, dx = 2\pi\, \delta_\epsilon(k).    (13)

This is a Fourier transform, for which I use the following notation:

    e^{-\epsilon|x|} \;\xrightarrow{\;FT\;}\; 2\pi\, \delta_\epsilon(k).    (14)

The integral in Eq. (13) is well defined because the e^{-\epsilon|x|} factor ensures convergence. Equations like Eq. (13) are generally used in situations where they are multiplied on both sides by a smooth function of k, u(k) say, and integrated, i.e.

    \int_{-\infty}^{\infty} u(k) \left[ \int_{-\infty}^{\infty} e^{ikx}\, e^{-\epsilon|x|}\, dx \right] dk = 2\pi \int_{-\infty}^{\infty} u(k)\, \delta_\epsilon(k)\, dk.    (15)

It turns out that this equation is well behaved if \epsilon is set to zero on the LHS (and the limit \epsilon \to 0 is taken on the RHS). This gives

    \int_{-\infty}^{\infty} u(k) \left[ \int_{-\infty}^{\infty} e^{ikx}\, dx \right] dk = 2\pi \int_{-\infty}^{\infty} u(k)\, \delta(k)\, dk = 2\pi\, u(0),    (16)

which is known as Fourier's integral. We used Eq. (11) to get the last expression.

One is often tempted to set \epsilon to zero also in Eq. (13) (i.e. without multiplying it by a smooth function and integrating), in which case we write

    \int_{-\infty}^{\infty} e^{ikx}\, dx = 2\pi\, \delta(k).    (17)

However, as it stands, Eq. (17) does not make sense because the integral does not exist. We therefore have to understand Eq. (17) in one of the following two senses:

• As a stand-alone equation, in which case it has to be regularized by the convergence factor e^{-\epsilon|x|}, so Eq. (17) really means Eq. (13) for \epsilon tending to zero (but not strictly zero).

• Multiplied by a smooth function u(k) and integrated over k as in Eq. (15), in which case the convergence factor is unnecessary. Equation (17) is then really a shorthand for Eq. (16). This is normally the sense in which we understand Eq. (17).

Since Eq. (17) is a Fourier transform, we can write it as

    1 \;\xrightarrow{\;FT\;}\; 2\pi\, \delta(k),    (18)

which should be compared with Eq. (14). The inverse transform of the delta function then gives

    f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} 2\pi\, \delta(k)\, e^{-ikx}\, dk = 1,

as required.
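As a quick numerical illustration (this check is not part of the original notes), the short Python script below evaluates the regularized transform of Eq. (13) on a finite grid and compares it with the Lorentzian of Eq. (7), and also checks that \delta_\epsilon(k) of Eq. (8) has unit area. The value \epsilon = 0.05 and the grid sizes are arbitrary illustrative choices.

```python
# Numerical check of Eqs. (7), (8) and (13); not from the notes.
# eps = 0.05 and the integration grids are illustrative choices.
import numpy as np

eps = 0.05

# g_eps(k) = \int e^{ikx} e^{-eps|x|} dx, approximated by a Riemann sum
# on a grid wide enough that e^{-eps|x|} has decayed at the endpoints.
x = np.linspace(-300.0, 300.0, 60001)
dx = x[1] - x[0]
f_eps = np.exp(-eps * np.abs(x))

k_vals = np.linspace(-1.0, 1.0, 201)
g_eps = np.array([np.sum(f_eps * np.exp(1j * k * x)) * dx for k in k_vals])

lorentzian = 2.0 * eps / (eps**2 + k_vals**2)                     # Eq. (7)
print("max |g_eps(k) - 2 eps/(eps^2+k^2)| :", np.max(np.abs(g_eps - lorentzian)))

# delta_eps(k) = (1/pi) eps/(eps^2 + k^2) should have unit area, Eq. (8).
k = np.linspace(-200.0, 200.0, 400001)
delta_eps = eps / (np.pi * (eps**2 + k**2))
print("area under delta_eps(k)            :", np.sum(delta_eps) * (k[1] - k[0]))
```

The first printed number is small (quadrature and truncation error only), and the second stays close to 1 however small \epsilon is chosen, which is what the limit \epsilon \to 0^+ above relies on.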
III. APPLICATIONS OF THE INTEGRAL REPRESENTATION OF THE DELTA FUNCTION

In this section we give some applications of the integral representation of the delta function, Eq. (17).

A. Convolution Theorem

In a previous class, you have already met the convolution theorem, that is, if

    F(x) = \int_{-\infty}^{\infty} f(y)\, f(x - y)\, dy,    (19)

which is the convolution of f with itself, then G(k), the Fourier transform of F(x), is simply related to g(k), the Fourier transform of f(x), by

    G(k) = g(k)^2.    (20)

We will now give a simple alternative derivation of this result using the integral representation of the delta function, and then use this method to obtain a generalized convolution theorem. We can write Eq. (19) as

    F(x) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} dx_1\, dx_2\, f(x_1)\, f(x_2)\, \delta(x_1 + x_2 - x).    (21)

Using Eq. (17) we have

    F(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} dx_1\, dx_2\, dk\, f(x_1)\, f(x_2)\, e^{ik(x_1 + x_2 - x)}.    (22)

The integrals over x_1 and x_2 are now independent of each other and can be carried out, with the result that

    F(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} f(t)\, e^{ikt}\, dt \right]^2 e^{-ikx}\, dk = \frac{1}{2\pi} \int_{-\infty}^{\infty} g(k)^2\, e^{-ikx}\, dk,    (23)

which shows that F(x) is the inverse transform of g(k)^2, i.e. that g(k)^2 is the Fourier transform of F(x). Hence we have obtained Eq. (20).

The derivation immediately generalizes to the case of two different functions f_1(x) and f_2(x), with Fourier transforms g_1(k) and g_2(k), namely, the Fourier transform of the convolution

    F(x) = \int_{-\infty}^{\infty} f_1(y)\, f_2(x - y)\, dy

is given by G(k), where

    G(k) = g_1(k)\, g_2(k).    (24)

We can now generalize this result to the case where the convolution, rather than involving two variables as in Eq. (21), involves n variables, x_1, x_2, \ldots, x_n, but with the same constraint that the sum must equal some prescribed value x, i.e.

    F(x) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} dx_1 \cdots dx_n\, f(x_1) \cdots f(x_n)\, \delta(x_1 + \cdots + x_n - x).    (25)

Using the integral representation of the delta function, as before, the integrals over the x_i decouple, and we find

    F(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} f(t)\, e^{ikt}\, dt \right]^n e^{-ikx}\, dk = \frac{1}{2\pi} \int_{-\infty}^{\infty} g(k)^n\, e^{-ikx}\, dk,    (26)

which shows that the Fourier transform of F(x) is

    G(k) = g(k)^n,    (27)

a remarkably simple result. Equation (27) is the desired generalization of the convolution theorem, Eq. (20), to n variables. We shall use this result in class to solve a problem in statistics.

Note: With the alternative definition of Fourier transforms given in footnote 1, which puts in factors of \sqrt{2\pi} to make the transform and the inverse transform symmetric with respect to each other, the convolution theorem for n variables is

    \sqrt{2\pi}\, G(k) = \left[ \sqrt{2\pi}\, g(k) \right]^n.    (28)

B. Parseval's Theorem

Related to the convolution theorem is another useful theorem associated with the name of Parseval. (You may recall that there is a Parseval's theorem for Fourier series, which is actually closely related.) Using the Fourier transform

    g(k) = \int_{-\infty}^{\infty} f(x)\, e^{ikx}\, dx    (29)

we have

    \int_{-\infty}^{\infty} g(k)\, g^{\star}(k)\, dk = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} dx_1\, dx_2\, dk\, f(x_1)\, f^{\star}(x_2)\, e^{ik(x_1 - x_2)}.    (30)

Doing the integral over k using Eq. (17) gives 2\pi\, \delta(x_1 - x_2), so

    \int_{-\infty}^{\infty} |g(k)|^2\, dk = 2\pi \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} dx_1\, dx_2\, f(x_1)\, f^{\star}(x_2)\, \delta(x_1 - x_2) = 2\pi \int_{-\infty}^{\infty} |f(x)|^2\, dx,    (31)

which is Parseval's theorem.

Note: With the alternative definition of Fourier transforms, the factor of 2\pi in Eq. (31) is missing, so there is complete symmetry between the two sides.
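Both theorems of this section are easy to check numerically. The Python sketch below (not from the notes) does this for the illustrative choice f(x) = e^{-x^2}, for which all the integrals converge rapidly; it verifies Eq. (20) (the n = 2 case of Eq. (27)) and Eq. (31) on a finite grid, using the conventions of Eqs. (1) and (2). The grids and quadrature are arbitrary choices.

```python
# Numerical check of the convolution theorem, Eq. (20), and Parseval's
# theorem, Eq. (31), for the illustrative test function f(x) = exp(-x^2).
# The grids and the quadrature (simple Riemann sums) are arbitrary choices.
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
k = np.linspace(-10.0, 10.0, 401)
dk = k[1] - k[0]

f = np.exp(-x**2)

def fourier(h):
    # g(k) = \int h(x) e^{+ikx} dx, the convention of Eq. (1)
    return np.array([np.sum(h * np.exp(1j * kk * x)) * dx for kk in k])

g = fourier(f)

# F(x) = \int f(y) f(x - y) dy, Eq. (19), evaluated on the same grid
F = np.array([np.sum(f * np.exp(-(xx - x)**2)) * dx for xx in x])
G = fourier(F)

print("max |G(k) - g(k)^2| :", np.max(np.abs(G - g**2)))            # Eq. (20)

# \int |g(k)|^2 dk  versus  2 pi \int |f(x)|^2 dx, Eq. (31)
print("int |g|^2 dk        :", np.sum(np.abs(g)**2) * dk)
print("2 pi int |f|^2 dx   :", 2.0 * np.pi * np.sum(np.abs(f)**2) * dx)
```

The last two numbers agree to the accuracy of the quadrature, and repeating the convolution step n - 1 times and comparing with g(k)^n gives the same kind of check for Eq. (27).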
IV. EXAMPLES OF SINGULAR FOURIER TRANSFORMS INVOLVING A STEP FUNCTION

It is also interesting to consider singular Fourier transforms of functions involving the (Heaviside) step function

    \theta(x) = \begin{cases} 0, & (x < 0) \\ 1, & (x > 0) \end{cases}

which is denoted H(x) in the book.
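The worked examples of this section are not reproduced above, so the Python sketch below is only an indication of the kind of regularized transform involved, not the section's own calculation. It applies the convergence factor of Section II to \theta(x), i.e. \theta_\epsilon(x) = \theta(x)\, e^{-\epsilon x}, whose Fourier transform has the closed form \int_0^\infty e^{ikx} e^{-\epsilon x}\, dx = 1/(\epsilon - ik), with real part \epsilon/(\epsilon^2 + k^2) = \pi\, \delta_\epsilon(k) from Eq. (8). The value \epsilon = 0.1 and the grids are illustrative choices.

```python
# Sketch (not from the notes): regularize the step function with the same
# convergence factor as in Section II, theta_eps(x) = theta(x) exp(-eps x),
# and compare its transform with the closed form 1/(eps - i k).
import numpy as np

eps = 0.1
x = np.linspace(0.0, 200.0, 200001)          # theta(x) = 0 for x < 0
dx = x[1] - x[0]
theta_eps = np.exp(-eps * x)

k_vals = np.linspace(-2.0, 2.0, 201)
g_eps = np.array([np.sum(theta_eps * np.exp(1j * k * x)) * dx for k in k_vals])

closed_form = 1.0 / (eps - 1j * k_vals)
print("max |g_eps - 1/(eps - ik)|       :", np.max(np.abs(g_eps - closed_form)))

# The real part is eps/(eps^2 + k^2) = pi * delta_eps(k), cf. Eq. (8);
# the imaginary part, k/(eps^2 + k^2), stays finite as eps -> 0 for k != 0.
print("max |Re g_eps - pi delta_eps(k)| :",
      np.max(np.abs(g_eps.real - eps / (eps**2 + k_vals**2))))
```

Since \theta(x) vanishes for x < 0, only the behaviour at x \to +\infty needs to be regularized here.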