Function Spaces

A function space is a set of functions $\mathcal{F}$ that has some structure. Often a nonparametric regression function or classifier is chosen to lie in some function space, where the assumed structure is exploited by algorithms and theoretical analysis. Here we review some basic facts about function spaces.

As motivation, consider nonparametric regression. We observe $(X_1, Y_1), \ldots, (X_n, Y_n)$ and we want to estimate $m(x) = \mathbb{E}(Y \mid X = x)$. We cannot simply choose $m$ to minimize the training error $\sum_i (Y_i - m(X_i))^2$, as this will lead to interpolating the data. One approach is to minimize $\sum_i (Y_i - m(X_i))^2$ while restricting $m$ to be in a well-behaved function space.

1 Hilbert Spaces

Let $V$ be a vector space. A norm is a mapping $\| \cdot \| : V \to [0, \infty)$ that satisfies

1. $\|x + y\| \leq \|x\| + \|y\|$,
2. $\|ax\| = |a| \, \|x\|$ for all $a \in \mathbb{R}$,
3. $\|x\| = 0$ implies that $x = 0$.

An example of a norm on $V = \mathbb{R}^k$ is the Euclidean norm $\|x\| = \sqrt{\sum_i x_i^2}$. A sequence $x_1, x_2, \ldots$ in a normed space is a Cauchy sequence if $\|x_m - x_n\| \to 0$ as $m, n \to \infty$. The space is complete if every Cauchy sequence converges to a limit. A complete normed space is called a Banach space.

An inner product is a mapping $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}$ that satisfies, for all $x, y, z \in V$ and $a \in \mathbb{R}$:

1. $\langle x, x \rangle \geq 0$ and $\langle x, x \rangle = 0$ if and only if $x = 0$,
2. $\langle x, y + z \rangle = \langle x, y \rangle + \langle x, z \rangle$,
3. $\langle x, ay \rangle = a \langle x, y \rangle$,
4. $\langle x, y \rangle = \langle y, x \rangle$.

An example of an inner product on $V = \mathbb{R}^k$ is $\langle x, y \rangle = \sum_i x_i y_i$. Two vectors $x$ and $y$ are orthogonal if $\langle x, y \rangle = 0$. An inner product defines a norm $\|v\| = \sqrt{\langle v, v \rangle}$. We then have the Cauchy-Schwarz inequality
\[ |\langle x, y \rangle| \leq \|x\| \, \|y\|. \]

A Hilbert space is a complete inner product space. Every Hilbert space is a Banach space but the reverse is not true in general. In a Hilbert space, we write $f_n \to f$ to mean that $\|f_n - f\| \to 0$ as $n \to \infty$. Note that $\|f_n - f\| \to 0$ does NOT imply that $f_n(x) \to f(x)$. For this to be true, we need the space to be a reproducing kernel Hilbert space, which we discuss later.

If $V$ is a Hilbert space and $L$ is a closed subspace, then for any $v \in V$ there is a unique $y \in L$, called the projection of $v$ onto $L$, which minimizes $\|v - z\|$ over $z \in L$. The set of elements orthogonal to every $z \in L$ is denoted by $L^\perp$. Every $v \in V$ can be written uniquely as $v = w + z$ where $z$ is the projection of $v$ onto $L$ and $w \in L^\perp$. In general, if $L$ and $M$ are subspaces such that every $\ell \in L$ is orthogonal to every $m \in M$, then we define the orthogonal sum (or direct sum) as
\[ L \oplus M = \{ \ell + m : \ell \in L, \ m \in M \}. \]

A set of vectors $\{e_t : t \in T\}$ is orthonormal if $\langle e_s, e_t \rangle = 0$ when $s \neq t$ and $\|e_t\| = 1$ for all $t \in T$. If $\{e_t : t \in T\}$ is orthonormal, and the only vector orthogonal to each $e_t$ is the zero vector, then $\{e_t : t \in T\}$ is called an orthonormal basis. Every Hilbert space has an orthonormal basis. A Hilbert space is separable if there exists a countable orthonormal basis.

Theorem 1. Let $V$ be a separable Hilbert space with countable orthonormal basis $\{e_1, e_2, \ldots\}$. Then, for any $x \in V$, we have $x = \sum_{j=1}^\infty \theta_j e_j$ where $\theta_j = \langle x, e_j \rangle$. Furthermore, $\|x\|^2 = \sum_{j=1}^\infty \theta_j^2$, which is known as Parseval's identity.

The coefficients $\theta_j = \langle x, e_j \rangle$ are called Fourier coefficients.

The set $\mathbb{R}^d$ with inner product $\langle v, w \rangle = \sum_j v_j w_j$ is a Hilbert space. Another example of a Hilbert space is the set of functions $f : [a, b] \to \mathbb{R}$ such that $\int_a^b f^2(x)\, dx < \infty$, with inner product $\int f(x) g(x)\, dx$. This space is denoted by $L_2(a, b)$.
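Theorem 1 is easy to check numerically in the finite-dimensional case. The following is a small sketch, not part of the original notes, assuming NumPy is available; the variable names are our own. It builds a random orthonormal basis of $\mathbb{R}^d$, computes the Fourier coefficients $\theta_j = \langle x, e_j \rangle$, and verifies the expansion and Parseval's identity.

```python
import numpy as np

# Minimal numerical check of Theorem 1 in the finite-dimensional Hilbert
# space R^d with the usual inner product <v, w> = sum_j v_j w_j.
# (Illustration only; names are ours, not the notes'.)

rng = np.random.default_rng(0)
d = 5

# Build an orthonormal basis e_1, ..., e_d of R^d via a QR factorization;
# the columns of E are the basis vectors.
E, _ = np.linalg.qr(rng.normal(size=(d, d)))

x = rng.normal(size=d)

# Fourier coefficients theta_j = <x, e_j>.
theta = E.T @ x

# x should equal sum_j theta_j e_j, and ||x||^2 should equal sum_j theta_j^2.
reconstruction = E @ theta
print(np.allclose(reconstruction, x))              # True
print(np.isclose(np.sum(theta**2), np.sum(x**2)))  # True (Parseval)
```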
2 $L_p$ Spaces

Let $\mathcal{F}$ be a collection of functions taking $[a, b]$ into $\mathbb{R}$. The $L_p$ norm on $\mathcal{F}$ is defined by
\[ \|f\|_p = \left( \int_a^b |f(x)|^p \, dx \right)^{1/p} \]
where $0 < p < \infty$. For $p = \infty$ we define
\[ \|f\|_\infty = \sup_x |f(x)|. \]
Sometimes we write $\|f\|_2$ simply as $\|f\|$. The space $L_p(a, b)$ is defined as follows:
\[ L_p(a, b) = \Big\{ f : [a, b] \to \mathbb{R} : \|f\|_p < \infty \Big\}. \]
Every $L_p$ with $p \geq 1$ is a Banach space. Some useful inequalities are:

Cauchy-Schwarz: $\left( \int f(x) g(x)\, dx \right)^2 \leq \int f^2(x)\, dx \int g^2(x)\, dx$.
Minkowski: $\|f + g\|_p \leq \|f\|_p + \|g\|_p$ where $p \geq 1$.
Hölder: $\|fg\|_1 \leq \|f\|_p \|g\|_q$ where $(1/p) + (1/q) = 1$.

Special Properties of $L_2$. As we mentioned earlier, the space $L_2(a, b)$ is a Hilbert space. The inner product between two functions $f$ and $g$ in $L_2(a, b)$ is $\int_a^b f(x) g(x)\, dx$ and the norm of $f$ satisfies $\|f\|^2 = \int_a^b f^2(x)\, dx$. With this inner product, $L_2(a, b)$ is a separable Hilbert space. Thus we can find a countable orthonormal basis $\phi_1, \phi_2, \ldots$; that is, $\|\phi_j\| = 1$ for all $j$, $\int_a^b \phi_i(x) \phi_j(x)\, dx = 0$ for $i \neq j$, and the only function that is orthogonal to each $\phi_j$ is the zero function. (In fact, there are many such bases.) It follows that if $f \in L_2(a, b)$ then
\[ f(x) = \sum_{j=1}^\infty \theta_j \phi_j(x) \]
where
\[ \theta_j = \int_a^b f(x) \phi_j(x)\, dx \]
are the coefficients. Also, recall Parseval's identity
\[ \int_a^b f^2(x)\, dx = \sum_{j=1}^\infty \theta_j^2. \]

The set of functions
\[ \left\{ \sum_{j=1}^n a_j \phi_j(x) : a_1, \ldots, a_n \in \mathbb{R} \right\} \]
is called the span of $\{\phi_1, \ldots, \phi_n\}$. The projection of $f = \sum_{j=1}^\infty \theta_j \phi_j(x)$ onto the span of $\{\phi_1, \ldots, \phi_n\}$ is $f_n = \sum_{j=1}^n \theta_j \phi_j(x)$. We call $f_n$ the $n$-term linear approximation of $f$. Let $\Lambda_n$ denote all functions of the form $g = \sum_{j=1}^\infty a_j \phi_j(x)$ such that at most $n$ of the $a_j$'s are nonzero. Note that $\Lambda_n$ is not a linear space, since if $g_1, g_2 \in \Lambda_n$ it does not follow that $g_1 + g_2$ is in $\Lambda_n$. The best approximation to $f$ in $\Lambda_n$ is $f_n = \sum_{j \in A_n} \theta_j \phi_j(x)$ where $A_n$ are the indices corresponding to the $n$ largest $|\theta_j|$'s. We call $f_n$ the $n$-term nonlinear approximation of $f$.

The Fourier basis on $[0, 1]$ is defined by setting $\phi_1(x) = 1$ and
\[ \phi_{2j}(x) = \sqrt{2} \cos(2 j \pi x), \qquad \phi_{2j+1}(x) = \sqrt{2} \sin(2 j \pi x), \qquad j = 1, 2, \ldots \]

The cosine basis on $[0, 1]$ is defined by
\[ \phi_0(x) = 1, \qquad \phi_j(x) = \sqrt{2} \cos(\pi j x), \qquad j = 1, 2, \ldots \]

The Legendre basis on $(-1, 1)$ is defined by
\[ P_0(x) = 1, \quad P_1(x) = x, \quad P_2(x) = \tfrac{1}{2}(3x^2 - 1), \quad P_3(x) = \tfrac{1}{2}(5x^3 - 3x), \ldots \]
These polynomials are defined by the relation
\[ P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n} (x^2 - 1)^n. \]
The Legendre polynomials are orthogonal but not orthonormal, since
\[ \int_{-1}^1 P_n^2(x)\, dx = \frac{2}{2n + 1}. \]
However, we can define modified Legendre polynomials $Q_n(x) = \sqrt{(2n+1)/2}\, P_n(x)$ which then form an orthonormal basis for $L_2(-1, 1)$.

The Haar basis on $[0, 1]$ consists of the functions
\[ \Big\{ \phi(x), \ \psi_{jk}(x) : j = 0, 1, \ldots, \ k = 0, 1, \ldots, 2^j - 1 \Big\} \]
where
\[ \phi(x) = \begin{cases} 1 & \text{if } 0 \leq x < 1 \\ 0 & \text{otherwise,} \end{cases} \]
$\psi_{jk}(x) = 2^{j/2} \psi(2^j x - k)$, and
\[ \psi(x) = \begin{cases} -1 & \text{if } 0 \leq x \leq \tfrac{1}{2} \\ 1 & \text{if } \tfrac{1}{2} < x \leq 1. \end{cases} \]
This is a doubly indexed set of functions, so when $f$ is expanded in this basis we write
\[ f(x) = \alpha \phi(x) + \sum_{j=0}^\infty \sum_{k=0}^{2^j - 1} \beta_{jk} \psi_{jk}(x) \]
where $\alpha = \int_0^1 f(x) \phi(x)\, dx$ and $\beta_{jk} = \int_0^1 f(x) \psi_{jk}(x)\, dx$. The Haar basis is an example of a wavelet basis.

Let $[a, b]^d = [a, b] \times \cdots \times [a, b]$ be the $d$-dimensional cube and define
\[ L_2\big([a, b]^d\big) = \left\{ f : [a, b]^d \to \mathbb{R} : \int_{[a,b]^d} f^2(x_1, \ldots, x_d)\, dx_1 \cdots dx_d < \infty \right\}. \]
Suppose that $B = \{\phi_1, \phi_2, \ldots\}$ is an orthonormal basis for $L_2([a, b])$. Then the set of functions
\[ B_d = B \otimes \cdots \otimes B = \Big\{ \phi_{i_1}(x_1)\, \phi_{i_2}(x_2) \cdots \phi_{i_d}(x_d) : i_1, i_2, \ldots, i_d \in \{1, 2, \ldots\} \Big\} \]
is called the tensor product of $B$, and forms an orthonormal basis for $L_2([a, b]^d)$.
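To make the linear versus nonlinear $n$-term approximation concrete, here is a rough numerical sketch (our own illustration, not from the notes), assuming NumPy; the test function, grid size, and number of coefficients are arbitrary choices. It expands a function in the cosine basis above and compares the two approximations in the $L_2(0,1)$ norm.

```python
import numpy as np

# Expand a test function in the cosine basis phi_0 = 1, phi_j = sqrt(2) cos(pi j x)
# on [0, 1], then compare the n-term linear and n-term nonlinear approximations.
# (Sketch only; integrals are approximated by Riemann sums on a fine grid.)

grid = np.linspace(0.0, 1.0, 10001)
dx = grid[1] - grid[0]

def phi(j, x):
    # Cosine basis functions as defined above.
    return np.ones_like(x) if j == 0 else np.sqrt(2.0) * np.cos(np.pi * j * x)

f = np.exp(-3.0 * grid) * np.sin(7.0 * grid)   # an arbitrary test function

J = 50                                         # number of coefficients computed
theta = np.array([np.sum(f * phi(j, grid)) * dx for j in range(J)])  # theta_j ~ int f phi_j

n = 10
linear_idx = np.arange(n)                      # keep the first n coefficients
nonlinear_idx = np.argsort(-np.abs(theta))[:n] # keep the n largest |theta_j|

def approx(idx):
    return sum(theta[j] * phi(j, grid) for j in idx)

err = lambda g: np.sqrt(np.sum((f - g) ** 2) * dx)  # approximate L2(0,1) error
print("linear    n-term error:", err(approx(linear_idx)))
print("nonlinear n-term error:", err(approx(nonlinear_idx)))
```

Among the first $J$ coefficients, keeping the $n$ largest $|\theta_j|$ discards the least possible energy, so (up to discretization error) the nonlinear approximation does at least as well as the linear one.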
3 Hölder Spaces

Let $\beta$ be a positive integer and let $T \subset \mathbb{R}$. The Hölder space $H(\beta, L)$ is the set of functions $g : T \to \mathbb{R}$ such that
\[ |g^{(\beta - 1)}(y) - g^{(\beta - 1)}(x)| \leq L |x - y| \quad \text{for all } x, y \in T. \]
The special case $\beta = 1$ is sometimes called the Lipschitz space. If $\beta = 2$ then we have
\[ |g'(x) - g'(y)| \leq L |x - y| \quad \text{for all } x, y. \]
Roughly speaking, this means that the functions have bounded second derivatives.
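As a quick numerical illustration (ours, not from the notes, and assuming NumPy), the $\beta = 2$ condition can be checked for a specific function: for $g(x) = \sin x$ on $T = [0, 1]$ we have $|g''(x)| = |\sin x| \leq \sin 1 < 1$, so $g \in H(2, 1)$.

```python
import numpy as np

# Check the beta = 2 Hoelder condition |g'(x) - g'(y)| <= L |x - y| on a grid
# for g(x) = sin(x) on [0, 1], where g'(x) = cos(x). Since |g''| <= sin(1) on
# [0, 1], the constant L = 1 should suffice.

x = np.linspace(0.0, 1.0, 400)
gprime = np.cos(x)

# Largest observed ratio |g'(x) - g'(y)| / |x - y| over pairs of grid points.
diff_g = np.abs(gprime[:, None] - gprime[None, :])
diff_x = np.abs(x[:, None] - x[None, :])
mask = diff_x > 0
ratio = np.max(diff_g[mask] / diff_x[mask])

print(ratio)          # roughly 0.84, i.e. close to sin(1) and below L = 1
print(ratio <= 1.0)   # True
```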