Math 563 Lecture Notes: Approximation with Orthogonal Bases
Total Pages: 16
File Type: pdf, Size: 1020 KB
Recommended publications
Orthogonal Functions, Radial Basis Functions (RBF) and Cross-Validation (CV)
Exercise 6: Orthogonal Functions, Radial Basis Functions (RBF) and Cross-Validation (CV). CSElab Computational Science & Engineering Laboratory. Outline: 1. Goals, 2. Theory/Examples, 3. Questions.

Goals: orthonormal basis functions (how to construct an orthonormal basis, and what is the benefit of using one), radial basis functions (RBF), and cross-validation (CV).

Motivation, orthonormal basis functions: computation of interpolation coefficients can be compute-intensive, and adding or removing basis functions normally requires a re-computation of all coefficients. Both problems are overcome by using orthonormal basis functions.

Gram-Schmidt orthonormalization. Given a set of vectors $v_1, \dots, v_k \subset \mathbb{R}^n$, the goal is to generate a set of vectors $u_1, \dots, u_k \subset \mathbb{R}^n$ such that $u_1, \dots, u_k$ is orthonormal and spans the same subspace as $v_1, \dots, v_k$.

Recap, projection of vectors: denote the dot product as $\vec{a} \cdot \vec{b} = \langle \vec{a}, \vec{b} \rangle$ (a bilinear form). This implies the norm $\|\vec{a}\| = \langle \vec{a}, \vec{a} \rangle^{1/2}$. Define the scalar projection of $v$ onto $u$ as $P_u(v) = \langle u, v \rangle / \|u\|$. Hence $\frac{\langle u, v \rangle}{\|u\|} \frac{u}{\|u\|}$ is the part of $v$ pointing in the direction of $u$, and $v^* = v - \langle \frac{u}{\|u\|}, v \rangle \frac{u}{\|u\|}$ is orthogonal to $u$.

Gram-Schmidt for vectors. Given a set of vectors $v_1, \dots, v_k \subset \mathbb{R}^n$:
$$u_1 = \frac{v_1}{\|v_1\|}, \qquad u_2 = v_2 - \frac{\langle v_2, u_1 \rangle}{\langle u_1, u_1 \rangle} u_1 = v_2 - \langle v_2, u_1 \rangle u_1 \;\text{(as $u_1$ is normalized)}, \qquad u_3 = v_3 - \langle v_3, u_1 \rangle u_1.$$
What's missing? This $u_3$ is orthogonal to $u_1$ but not yet to $u_2$, so the remaining component is subtracted as well: $u_3 \leftarrow u_3 - \langle u_3, u_2 \rangle u_2$ (and each $u_i$ is normalized in turn).

Gram-Schmidt for functions. Given a set of functions $g_1, \dots, g_k$, we want orthonormal functions $\phi_1, \dots, \phi_k$. The bridge to Gram-Schmidt for vectors is the inner product $\langle g_i, g_j \rangle = \int_{-\infty}^{\infty} g_i(x)\, g_j(x)\, dx$; the corresponding norm is $\|g\| = \langle g, g \rangle^{1/2}$.
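The slides describe the Gram-Schmidt procedure step by step; as a companion, here is a minimal NumPy sketch (mine, not part of the exercise) of the vector version, with normalization at every step:

```python
import numpy as np

def gram_schmidt(V):
    """Orthonormalize the columns of V (classical Gram-Schmidt).

    Returns U with orthonormal columns spanning the same subspace as V.
    Assumes the columns of V are linearly independent.
    """
    n, k = V.shape
    U = np.zeros((n, k))
    for j in range(k):
        u = V[:, j].copy()
        # Subtract the projection onto each previously computed direction.
        for i in range(j):
            u -= np.dot(U[:, i], V[:, j]) * U[:, i]
        U[:, j] = u / np.linalg.norm(u)
    return U

# Usage: U.T @ U should be (close to) the identity matrix.
V = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
U = gram_schmidt(V)
print(np.round(U.T @ U, 10))
```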
Chebyshev Polynomials of the Second, Third and Fourth Kinds in Approximation, Indefinite Integration, and Integral Transforms *
Journal of Computational and Applied Mathematics 49 (1993) 169-178, North-Holland. CAM 1429. J.C. Mason, Applied and Computational Mathematics Group, Royal Military College of Science, Shrivenham, Swindon, Wiltshire, United Kingdom. Dedicated to Dr. D.F. Mayers on the occasion of his 60th birthday. Received 18 February 1992; revised 31 March 1992.

Abstract. Mason, J.C., Chebyshev polynomials of the second, third and fourth kinds in approximation, indefinite integration, and integral transforms, Journal of Computational and Applied Mathematics 49 (1993) 169-178. Chebyshev polynomials of the third and fourth kinds, orthogonal with respect to $(1+x)^{1/2}(1-x)^{-1/2}$ and $(1-x)^{1/2}(1+x)^{-1/2}$, respectively, on $[-1, 1]$, are less well known than traditional first- and second-kind polynomials. We therefore summarise basic properties of all four polynomials, and then show how some well-known properties of first-kind polynomials extend to cover second-, third- and fourth-kind polynomials. Specifically, we summarise a recent set of first-, second-, third- and fourth-kind results for near-minimax constrained approximation by series and interpolation criteria, then we give new uniform convergence results for the indefinite integration of functions weighted by $(1+x)^{-1/2}$ or $(1-x)^{-1/2}$ using third- or fourth-kind polynomial expansions, and finally we establish a set of logarithmically singular integral transforms for which weighted first-, second-, third- and fourth-kind polynomials are eigenfunctions.
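Mason's abstract assumes familiarity with all four kinds; for readers who want to experiment, the following sketch (not from the paper) evaluates them via the standard trigonometric forms with $x = \cos\theta$, which are consistent with the weights quoted above:

```python
import numpy as np

def chebyshev(n, x, kind):
    """Evaluate the degree-n Chebyshev polynomial of the given kind at x,
    for x strictly inside (-1, 1), using the trigonometric forms with
    x = cos(theta): T_n, U_n, V_n (third kind), W_n (fourth kind)."""
    t = np.arccos(x)
    if kind == 1:
        return np.cos(n * t)
    if kind == 2:
        return np.sin((n + 1) * t) / np.sin(t)
    if kind == 3:
        return np.cos((n + 0.5) * t) / np.cos(t / 2)
    if kind == 4:
        return np.sin((n + 0.5) * t) / np.sin(t / 2)
    raise ValueError("kind must be 1, 2, 3 or 4")

x = 0.3
print([chebyshev(2, x, k) for k in (1, 2, 3, 4)])
# Quick check: T_2(0.3) = 2*(0.3)**2 - 1 = -0.82
```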
Anatomically Informed Basis Functions — Anatomisch Informierte Basisfunktionen
Dissertation for the academic degree doctor rerum naturalium (Dr. rer. nat.), approved by the Fakultät für Naturwissenschaften of the Otto-von-Guericke-Universität Magdeburg, by Diplom-Informatiker Stefan Kiebel, born 10 June 1968 in Wadern/Saar. Reviewers: Prof. Dr. Lutz Jäncke, Prof. Dr. Hermann Hinrichs, Prof. Dr. Cornelius Weiller. Submitted: 27 February 2001. Defended: 9 August 2001.

Abstract. In this thesis, a method is presented that incorporates anatomical information into the statistical analysis of functional neuroimaging data. Available anatomical information is used to explicitly specify spatial components within a functional volume that are assumed to carry evidence of functional activation. After estimating the activity by fitting the same spatial model to each functional volume and projecting the estimates back into voxel-space, one can proceed with a conventional time-series analysis such as statistical parametric mapping (SPM). The anatomical information used in this work comprised the reconstructed grey matter surface, derived from high-resolution T1-weighted magnetic resonance images (MRI). The spatial components specified in the model were of low spatial frequency and confined to the grey matter surface. By explaining the observed activity in terms of these components, one efficiently captures spatially smooth response components induced by underlying neuronal activations localised close to or within the grey matter sheet. Effectively, the method implements a spatially variable anatomically informed deconvolution, and consequently the method was named anatomically informed basis functions (AIBF). AIBF can be used for the analysis of any functional imaging modality. In this thesis it was applied to simulated and real functional MRI (fMRI) and positron emission tomography (PET) data.
Generalizations of Chebyshev Polynomials and Polynomial Mappings
TRANSACTIONS OF THE AMERICAN MATHEMATICAL SOCIETY, Volume 359, Number 10, October 2007, Pages 4787-4828. S 0002-9947(07)04022-6. Article electronically published on May 17, 2007. GENERALIZATIONS OF CHEBYSHEV POLYNOMIALS AND POLYNOMIAL MAPPINGS. YANG CHEN, JAMES GRIFFIN, AND MOURAD E.H. ISMAIL.

Abstract. In this paper we show how polynomial mappings of degree $K$ from a union of disjoint intervals onto $[-1, 1]$ generate a countable number of special cases of generalizations of Chebyshev polynomials. We also derive a new expression for these generalized Chebyshev polynomials for any genus $g$, from which the coefficients of $x^n$ can be found explicitly in terms of the branch points and the recurrence coefficients. We find that this representation is useful for specializing to polynomial mapping cases for small $K$, where we will have explicit expressions for the recurrence coefficients in terms of the branch points. We study in detail certain special cases of the polynomials for small degree mappings and prove a theorem concerning the location of the zeroes of the polynomials. We also derive an explicit expression for the discriminant for the genus 1 case of our Chebyshev polynomials that is valid for any configuration of the branch points.

1. Introduction and preliminaries. Akhiezer [2], [1] and Akhiezer and Tomčuk [3] introduced orthogonal polynomials on two intervals which generalize the Chebyshev polynomials. He observed that the study of properties of these polynomials requires the use of elliptic functions. In the case of more than two intervals, Tomčuk [17] investigated their Bernstein-Szegő asymptotics, with the theory of hyperelliptic integrals, and found expressions in terms of a certain Abelian integral of the third kind.
A Local Radial Basis Function Method for the Numerical Solution of Partial Differential Equations Maggie Elizabeth Chenoweth [email protected]
Marshall University, Marshall Digital Scholar: Theses, Dissertations and Capstones, 1-1-2012. Maggie Elizabeth Chenoweth, [email protected].

Recommended citation: Chenoweth, Maggie Elizabeth, "A Local Radial Basis Function Method for the Numerical Solution of Partial Differential Equations" (2012). Theses, Dissertations and Capstones. Paper 243.

A LOCAL RADIAL BASIS FUNCTION METHOD FOR THE NUMERICAL SOLUTION OF PARTIAL DIFFERENTIAL EQUATIONS. A thesis submitted to the Graduate College of Marshall University in partial fulfillment of the requirements for the degree of Master of Arts in Mathematics by Maggie Elizabeth Chenoweth. Approved by Dr. Scott Sarra, Committee Chairperson; Dr. Anna Mummert; Dr. Carl Mummert. Marshall University, May 2012. Copyright by Maggie Elizabeth Chenoweth, 2012.

ACKNOWLEDGMENTS. I would like to begin by expressing my sincerest appreciation to my thesis advisor, Dr. Scott Sarra. His knowledge and expertise have guided me during my research endeavors and the process of writing this thesis. Dr. Sarra has also served as my teaching mentor, and I am grateful for all of his encouragement and advice. It has been an honor to work with him. I would also like to thank the other members of my thesis committee, Dr. …
Chebyshev and Fourier Spectral Methods 2000
Chebyshev and Fourier Spectral Methods, Second Edition. John P. Boyd, University of Michigan, Ann Arbor, Michigan 48109-2143. email: [email protected], http://www-personal.engin.umich.edu/jpboyd/. 2000, DOVER Publications, Inc., 31 East 2nd Street, Mineola, New York 11501.

Dedication: To Marilyn, Ian, and Emma. "A computation is a temptation that should be resisted as long as possible." — J. P. Boyd, paraphrasing T. S. Eliot.

Contents: Preface; Acknowledgments; Errata and Extended-Bibliography. 1 Introduction: 1.1 Series expansions; 1.2 First Example; 1.3 Comparison with finite element methods; 1.4 Comparisons with Finite Differences; 1.5 Parallel Computers; 1.6 Choice of basis functions; 1.7 Boundary conditions; 1.8 Non-Interpolating and Pseudospectral; 1.9 Nonlinearity; 1.10 Time-dependent problems; 1.11 FAQ: Frequently Asked Questions; 1.12 The Chrysalis. 2 Chebyshev & Fourier Series: 2.1 Introduction; 2.2 Fourier series; 2.3 Orders of Convergence; 2.4 Convergence Order; 2.5 Assumption of Equal Errors …
Dymore User's Manual Chebyshev Polynomials
Dymore User's Manual: Chebyshev polynomials. Olivier A. Bauchau, August 27, 2019.

Contents: 1 Definition (1.1 Zeros and extrema; 1.2 Orthogonality relationships; 1.3 Derivatives of Chebyshev polynomials; 1.4 Integral of Chebyshev polynomials; 1.5 Products of Chebyshev polynomials); 2 Chebyshev approximation of functions of a single variable (2.1 Expansion of a function in Chebyshev polynomials; 2.2 Evaluation of Chebyshev expansions: Clenshaw's recurrence; 2.3 Derivatives and integrals of Chebyshev expansions; 2.4 Products of Chebyshev expansions; 2.5 Examples; 2.6 Clenshaw-Curtis quadrature); 3 Chebyshev approximation of functions of two variables (3.1 Expansion of a function in Chebyshev polynomials; 3.2 Evaluation of Chebyshev expansions: Clenshaw's recurrence; 3.3 Derivatives of Chebyshev expansions); 4 Chebyshev polynomials (4.1 Examples).

1 Definition. Chebyshev polynomials [1, 2] form a series of orthogonal polynomials, which play an important role in the theory of approximation. The lowest polynomials are
$$T_0(x) = 1, \quad T_1(x) = x, \quad T_2(x) = 2x^2 - 1, \quad T_3(x) = 4x^3 - 3x, \quad T_4(x) = 8x^4 - 8x^2 + 1, \; \ldots \quad (1)$$
and are depicted in Figure 1. The polynomials can be generated from the following recurrence relationship
$$T_{n+1} = 2x T_n - T_{n-1}, \quad n \ge 1. \quad (2)$$
[Figure 1: The seven lowest order Chebyshev polynomials.]
It is possible to give an explicit expression of Chebyshev polynomials as
$$T_n(x) = \cos(n \arccos x). \quad (3)$$
This equation can be verified by using elementary trigonometric identities.
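Section 2.2 of the manual covers evaluation by Clenshaw's recurrence; the sketch below (my own, not Dymore code) implements it for an expansion $f(x) = \sum_{n=0}^{N} c_n T_n(x)$ and checks it against the explicit form (3):

```python
import numpy as np

def clenshaw_chebyshev(c, x):
    """Evaluate f(x) = sum_{n=0}^{N} c[n] * T_n(x) by Clenshaw's recurrence."""
    b1, b2 = 0.0, 0.0
    # Run b_k = 2*x*b_{k+1} - b_{k+2} + c_k downward from k = N to k = 1.
    for ck in c[:0:-1]:
        b1, b2 = 2.0 * x * b1 - b2 + ck, b1
    return x * b1 - b2 + c[0]

# Quick check against T_n(x) = cos(n*arccos(x)):
x = 0.37
c = [0.5, -1.0, 0.25, 2.0]  # arbitrary test coefficients
direct = sum(cn * np.cos(n * np.arccos(x)) for n, cn in enumerate(c))
print(clenshaw_chebyshev(c, x), direct)  # the two values should agree
```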
Approximation Atkinson Chapter 4, Dahlquist & Bjork Section 4.5
Approximation. Atkinson Chapter 4, Dahlquist & Björck Section 4.5, Trefethen's book. Topics marked with * are not on the exam.

1. In approximation theory we want to find a function $p(x)$ that is "close" to another function $f(x)$. We can define closeness using any metric or norm, e.g.
$$\|f(x) - p(x)\|_2^2 = \int (f(x) - p(x))^2\, dx, \qquad \|f(x) - p(x)\|_\infty = \sup |f(x) - p(x)|, \qquad \|f(x) - p(x)\|_1 = \int |f(x) - p(x)|\, dx.$$
In order for these norms to make sense we need to restrict the functions $f$ and $p$ to suitable function spaces. The polynomial approximation problem takes the form: find a polynomial of degree at most $n$ that minimizes the norm of the error. Naturally we will consider (i) whether a solution exists and is unique, and (ii) whether the approximation converges as $n \to \infty$. In our section on approximation (loosely following Atkinson, Chapter 4), we will first focus on approximation in the infinity norm, then in the 2-norm and related norms.

2. Existence for optimal polynomial approximation. Theorem (no reference): For every $n \ge 0$ and $f \in C([a, b])$ there is a polynomial of degree $\le n$ that minimizes $\|f(x) - p(x)\|$, where $\|\cdot\|$ is some norm on $C([a, b])$. Proof: To show that a minimum/minimizer exists, we want to find some compact subset of the set of polynomials of degree $\le n$ (which is a finite-dimensional space) and show that the inf over this subset is less than the inf over everything else. …
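A quick numerical illustration of the three norms (a sketch under the assumption that discretizing the integrals and the sup on a fine grid is acceptable; the choice `f = np.exp` and the degree are arbitrary, not from the notes):

```python
import numpy as np

f = np.exp
n = 3
x = np.linspace(-1.0, 1.0, 2001)
dx = x[1] - x[0]

# A discrete least-squares fit approximately minimizes the 2-norm of the error.
p = np.polynomial.Polynomial.fit(x, f(x), deg=n)
err = f(x) - p(x)

print("2-norm   ", np.sqrt(np.sum(err**2) * dx))
print("inf-norm ", np.max(np.abs(err)))
print("1-norm   ", np.sum(np.abs(err)) * dx)
```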
Linear Basis Function Models
Linear basis function models. Linear models for regression. Francesco Corona, FC - Fortaleza.

The focus so far was on unsupervised learning; we turn now to supervised learning, specifically regression. The goal of regression is to predict the value of one or more continuous target variables $t$, given the value of a $D$-dimensional vector $x$ of input variables, e.g. polynomial curve fitting.

Training data of $N = 10$ points (blue circles in the slides' figure), each comprising an observation of the input variable $x$ along with the corresponding target variable $t$. The unknown function $\sin(2\pi x)$ is used to generate the data (green curve). Goal: predict the value of $t$ for some new value of $x$, without knowledge of the green curve. The input training data $x$ was generated by choosing values of $x_n$, for $n = 1, \dots, N$, that are spaced uniformly in the range $[0, 1]$. The target training data $t$ was obtained by computing values $\sin(2\pi x_n)$ of the function and adding a small level of Gaussian noise.

We shall fit the data using a polynomial function of the form
$$y(x, w) = w_0 + w_1 x + w_2 x^2 + \cdots + w_M x^M = \sum_{j=0}^{M} w_j x^j \quad (1)$$
where $M$ is the polynomial order and $x^j$ is $x$ raised to the power of $j$. The polynomial coefficients $w_0, \dots, w_M$ are collected …
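A minimal sketch (mine, not from the slides) of fitting model (1) by least squares on the setup described above; the noise level 0.1 and the order M = 3 are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 10, 3
x = np.linspace(0.0, 1.0, N)                               # uniformly spaced inputs
t = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(N)   # noisy targets

# Design matrix with polynomial basis functions phi_j(x) = x^j, j = 0..M.
Phi = np.vander(x, M + 1, increasing=True)

# Least-squares solution of Phi @ w = t (equivalent to the normal equations).
w, *_ = np.linalg.lstsq(Phi, t, rcond=None)
print("w =", w)

x_new = np.array([0.25])
print("prediction at x = 0.25:", np.vander(x_new, M + 1, increasing=True) @ w)
```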
Introduction to Radial Basis Function Networks
Introduction to Radial Basis Function Networks. Mark J. L. Orr, Centre for Cognitive Science, University of Edinburgh, Buccleuch Place, Edinburgh EH8 9LW, Scotland. April 1996.

Abstract. This document is an introduction to radial basis function (RBF) networks, a type of artificial neural network for application to problems of supervised learning (e.g. regression, classification and time series prediction). It is now only available in PostScript; an older and now unsupported hypertext version may be available for a while longer. The document was first published in 1996 along with a package of Matlab functions implementing the methods described. In 1999 a new document, Recent Advances in Radial Basis Function Networks, became available, with a second and improved version of the Matlab package.

mjo@anc.ed.ac.uk
www.anc.ed.ac.uk/mjo/papers/intro.ps
www.anc.ed.ac.uk/mjo/intro/intro.html
www.anc.ed.ac.uk/mjo/software/rbf.zip
www.anc.ed.ac.uk/mjo/papers/recad.ps
www.anc.ed.ac.uk/mjo/software/rbf.zip

Contents: Introduction; Supervised Learning; Nonparametric Regression; Classification and Time Series Prediction; Linear Models; Radial Functions; Radial Basis Function Networks; Least Squares; The Optimal Weight Vector; The Projection Matrix; Incremental Operations; The Effective Number of Parameters; Example; Model Selection Criteria; Cross-Validation; Generalised …
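Orr's document treats an RBF network as a linear model trained by least squares; the following sketch is my reconstruction of that idea in Python (not the accompanying Matlab package), with the Gaussian width `sigma` and the centre placement chosen arbitrarily:

```python
import numpy as np

# Minimal RBF network sketch: Gaussian basis functions at fixed centres,
# output weights fitted by linear least squares (the linear-model view).
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 40)
y = np.cos(2.0 * x) + 0.05 * rng.standard_normal(x.size)

centres = np.linspace(-1.0, 1.0, 8)  # chosen by hand; sigma is an assumption
sigma = 0.3

def design_matrix(x, centres, sigma):
    # H[i, j] = exp(-(x_i - c_j)^2 / (2 * sigma^2))
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2.0 * sigma**2))

H = design_matrix(x, centres, sigma)
w, *_ = np.linalg.lstsq(H, y, rcond=None)  # the optimal weight vector

x_test = np.array([0.1, 0.5])
print(design_matrix(x_test, centres, sigma) @ w)  # network predictions
```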
Linear Models for Regression
Machine Learning (Srihari). Linear Models for Regression. Sargur Srihari, [email protected].

Topics in linear regression: What is regression? Polynomial curve fitting with scalar input; linear basis function models (maximum likelihood and least squares, stochastic gradient descent, regularized least squares).

The regression task. It is a supervised learning task. Goal of regression: predict the value of one or more target variables $t$, given a $d$-dimensional vector $x$ of input variables, with a dataset of known inputs and outputs $(x_1, t_1), \dots, (x_N, t_N)$, where $x_i$ is an input (possibly a vector) known as the predictor and $t_i$ is the target output (or response) for case $i$, which is real-valued. The goal is to predict $t$ from $x$ for some future test case. We are not trying to model the distribution of $x$. We don't expect the predictor to be a linear function of $x$, so ordinary linear regression of the inputs will not work; we need to allow for a nonlinear function of $x$, and we don't have a theory of what form this function should take.

An example problem. Fifty points generated (a one-dimensional problem), with $x$ uniform from $(0, 1)$ and $y$ generated from the formula $y = \sin(1 + x^2) + \text{noise}$, where the noise has a $N(0, 0.03^2)$ distribution. The noise-free true function and the data points are as shown in the slides' figure.

Applications of regression: 1. Expected claim amount an insured person will make (used to set insurance premiums), or prediction of future prices of securities. 2. Also used for algorithmic …
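The topic list mentions stochastic gradient descent, which the excerpt cuts off before reaching; as an illustration only (my sketch, not Srihari's slides), here is SGD on a linear basis function model, using the example data $y = \sin(1 + x^2) + \text{noise}$ described above:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50
x = rng.uniform(0.0, 1.0, N)
t = np.sin(1.0 + x**2) + 0.03 * rng.standard_normal(N)

# Polynomial basis phi(x) = (1, x, x^2, x^3); weights trained by SGD on the
# squared error: w <- w + eta * (t_n - w . phi_n) * phi_n for each case n.
Phi = np.vander(x, 4, increasing=True)
w = np.zeros(4)
eta = 0.1
for epoch in range(200):
    for n in rng.permutation(N):
        err = t[n] - Phi[n] @ w
        w += eta * err * Phi[n]

print("SGD weights:", w)
print("max |residual|:", np.max(np.abs(t - Phi @ w)))
```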
Orthogonal Polynomials on the Unit Circle Associated with the Laguerre Polynomials
PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY, Volume 129, Number 3, Pages 873-879. S 0002-9939(00)05821-4. Article electronically published on October 11, 2000. ORTHOGONAL POLYNOMIALS ON THE UNIT CIRCLE ASSOCIATED WITH THE LAGUERRE POLYNOMIALS. LI-CHIEN SHEN. (Communicated by Hal L. Smith.)

Abstract. Using the well-known fact that the Fourier transform is unitary, we obtain a class of orthogonal polynomials on the unit circle from the Fourier transform of the Laguerre polynomials (with suitable weights attached). Some related extremal problems which arise naturally in this setting are investigated.

1. Introduction. This paper deals with a class of orthogonal polynomials which arise from an application of the Fourier transform on the Laguerre polynomials. We shall briefly describe the essence of our method. Let $\Pi^+$ denote the upper half plane $\{z : z = x + iy,\ y > 0\}$ and let
$$H(\Pi^+) = \left\{ f : f \text{ is analytic in } \Pi^+ \text{ and } \sup_{0 < y < \infty} \int_{-\infty}^{\infty} |f(x + yi)|^2\, dx < \infty \right\}.$$
It is well known that, from the Paley-Wiener Theorem [4, p. 368], the Fourier transform provides a unitary isometry between the spaces $L^2(0, \infty)$ and $H(\Pi^+)$. Since the Laguerre polynomials form a complete orthogonal basis for $L^2([0, \infty); x^\alpha e^{-x}\, dx)$, the application of the Fourier transform to the Laguerre polynomials (with suitable weight attached) generates a class of orthogonal rational functions which are complete in $H(\Pi^+)$; and by composition of which with the fractional linear transformation $z = (2t - i)/(2t + i)$ (which maps $\Pi^+$ conformally to the unit disc), we obtain a family of polynomials which are orthogonal with respect to the weight $|\sin \frac{t}{2}|^\alpha\, dt$ on the boundary $|z| = 1$ of the unit disc.
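The abstract's starting point, that the Laguerre polynomials are orthogonal with respect to $x^\alpha e^{-x}\,dx$ on $[0, \infty)$, is easy to verify numerically; this sketch (mine, using SciPy's generalized Laguerre evaluator, with an arbitrary alpha) does so for a few low degrees:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_genlaguerre

# Check that L_m^(alpha) and L_n^(alpha) are orthogonal on [0, inf)
# with respect to the weight x^alpha * exp(-x).
alpha = 0.5

def inner(m, n):
    integrand = lambda x: (eval_genlaguerre(m, alpha, x)
                           * eval_genlaguerre(n, alpha, x)
                           * x**alpha * np.exp(-x))
    val, _ = quad(integrand, 0.0, np.inf)
    return val

for m in range(3):
    print([round(inner(m, n), 8) for n in range(3)])
# Off-diagonal entries should vanish; diagonals equal Gamma(n + alpha + 1) / n!.
```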