Uniform Convergence of Rank-Weighted Learning


Justin Khim, Liu Leqi, Adarsh Prasad, Pradeep Ravikumar
Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA. Correspondence to: Justin Khim <[email protected]>.
Proceedings of the 37th International Conference on Machine Learning, Online, PMLR 119, 2020. Copyright 2020 by the author(s).

Abstract

The decision-theoretic foundations of classical machine learning models have largely focused on estimating model parameters that minimize the expectation of a given loss function. However, as machine learning models are deployed in varied contexts, such as in high-stakes decision-making and societal settings, it is clear that these models are not evaluated only by their average performance. In this work, we propose and study a novel notion of L-Risk based on the classical idea of rank-weighted learning. These L-Risks, induced by rank-dependent weighting functions with bounded variation, unify popular risk measures such as conditional value-at-risk and those defined by cumulative prospect theory. We give uniform convergence bounds for this broad class of risk measures and study their consequences on a logistic regression example.

1. Introduction

The statistical decision-theoretic foundations of modern machine learning have largely focused on solving tasks by minimizing the expectation of some loss function. This ensures that the resulting models have high average-case performance. However, as machine learning models are deployed alongside humans in decision-making, it is clear that they are evaluated not only by their average-case performance but also by properties like fairness. There has been increasing interest in capturing these additional properties via appropriate modifications of the classical objective of expected loss (Duchi et al., 2019; García & Fernández, 2015; Sra et al., 2012).

In parallel, there is a long line of work exploring alternatives to expected-loss-based risk measures in decision-making (Howard & Matheson, 1972) and in reinforcement learning, where percentile-based risk measures have been used to quantify the tail risk of models. A recent line of work borrows classical ideas from behavioral economics for use in machine learning to make models more human-aligned. In particular, Prashanth et al. (2016) have brought ideas from cumulative prospect theory (Tversky & Kahneman, 1992) into reinforcement learning and bandits, and Leqi et al. (2019) have used cumulative prospect theory to introduce the notion of human-aligned risk measures.

A common theme in these prior works is the notion of rank-weighted risks. The aforementioned risk measures weight each loss by its relative rank and are based upon classical rank-dependent expected utility theory (Diecidue & Wakker, 2001). These rank-dependent utilities have also been used in several other contexts in machine learning. For example, rank-weighted risks have been used to speed up the training of deep networks (Jiang et al., 2019). They have also played a key role in designing estimators that are robust to outliers in the data. In particular, trimming procedures that simply throw away data with high losses have been used to design outlier-robust estimators (Daniell, 1920; Bhatia et al., 2015; Lai et al., 2016; Prasad et al., 2018; Lugosi & Mendelson, 2019; Prasad et al., 2019; Shah et al., 2020).
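To make the trimming idea concrete, the following is a minimal sketch (our own illustration, not code from the paper; the function name and trimming level are assumptions) of a trimmed empirical risk that discards the largest losses before averaging. Up to normalization, this corresponds to a rank-dependent weighting function that is zero on the top-ranked losses and constant elsewhere.

```python
import numpy as np

def trimmed_risk(losses, trim_frac=0.1):
    """Average the losses after discarding the largest trim_frac fraction
    (a simple rank-based trimming procedure for outlier robustness)."""
    losses = np.sort(np.asarray(losses))           # order the losses
    n_drop = int(np.floor(trim_frac * len(losses)))
    kept = losses[:len(losses) - n_drop] if n_drop > 0 else losses
    return float(kept.mean())

# A single gross outlier dominates the plain average but barely moves
# the trimmed risk.
losses = [0.2, 0.25, 0.3, 0.4, 50.0]
print(np.mean(losses))            # ~10.23, dominated by the outlier
print(trimmed_risk(losses, 0.2))  # ~0.29, outlier discarded
```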
While these rank-based objectives have found widespread use in machine learning, establishing their statistical properties has remained elusive. On the other hand, we have a clear understanding of the generalization properties of average-case risk measures: loosely speaking, given a collection of models and finite training data, if we choose a model by minimizing the average training error, then we can roughly guarantee how well this chosen model performs in an average sense. Such guarantees are typically obtained by studying uniform convergence of average-case risk measures.

However, as noted before, uniform convergence of rank-weighted risk measures has not been explored in detail. The difficulty comes from the weights being dependent on the whole data, thereby inducing complex dependencies. Hence, existing work on generalization has either established the weaker notion of pointwise concentration bounds (Bhat, 2019; Duchi & Namkoong, 2018) or focused on specific forms of rank-weighted risk measures such as conditional value-at-risk (CVaR) (Duchi & Namkoong, 2018).

Contributions. In this work, we propose the study of rank-weighted risks and prove uniform convergence results. In particular, we propose a new notion of L-Risk in Section 2 that unifies existing rank-dependent risk measures, including CVaR and those defined by cumulative prospect theory. In Section 3, we present uniform convergence results, and we observe that the learning rate depends on the weighting function. In particular, when the weighting function is Lipschitz, we recover the standard $O(n^{-1/2})$ convergence rate. Finally, we instantiate our result on logistic regression in Section 4 and empirically study the convergence performance of the L-Risks in Section 6.

2. Background and Problem Setup

In this section, we provide the necessary background on rank-weighted risk minimization and introduce the notion of bounded variation functions that we consider in this work. Additionally, we provide standard learning-theoretic definitions and notation before moving on to our main results.

2.1. L-Risk Estimation

We assume that there is a joint probability distribution $P(X, Y)$ over the space $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$, and our goal is to learn a function $f : \mathcal{X} \to \mathcal{Y}$. In the standard decision-theoretic framework, $f$ is chosen from a class of functions $\mathcal{F}$ using a non-negative loss function $\ell : \mathcal{F} \times \mathcal{Z} \to \mathbb{R}_+$. The framework of risk minimization is a central paradigm of statistical estimation and is widely applicable.

Classical Risk Estimation. In the traditional setting of risk minimization, the population risk of a function $f$ is given by the expectation of the loss $\ell(f, Z)$ when $Z$ is drawn according to the distribution $P$:

$$R(f) = \mathbb{E}_{Z \sim P}[\ell(f, Z)].$$

Given $n$ i.i.d. samples $\{Z_i\}_{i=1}^n$, empirical risk minimization substitutes the empirical risk for the population risk in the risk minimization objective:

$$R_n(f) = \frac{1}{n} \sum_{i=1}^n \ell(f, Z_i).$$

L-Risk. As noted in Section 1, there are many scenarios in machine learning where we want to evaluate a function by metrics other than the average loss. To this end, we first define the notion of an L-statistic, which dates back to the classical work of Daniell (1920).

Definition 1. Let $X_{(1)} \le X_{(2)} \le \dots \le X_{(n)}$ be the order statistics of the sample $X_1, X_2, \dots, X_n$. Then the L-statistic is a linear combination of order statistics,

$$T_w(\{X_i\}_{i=1}^n) = \frac{1}{n} \sum_{i=1}^n w\!\left(\frac{i}{n}\right) X_{(i)},$$

where $w : [0, 1] \to [0, \infty)$ is a scoring function.

With the above notion of an empirical L-statistic at hand, we define the empirical L-risk of any function $f$ by simply replacing the empirical average of the losses with their corresponding L-statistic.

Definition 2. The empirical L-risk of $f$ is

$$LR_{w,n}(f) = \frac{1}{n} \sum_{i=1}^n w\!\left(\frac{i}{n}\right) \ell_{(i)}(f),$$

where $\ell_{(1)}(f) \le \dots \le \ell_{(n)}(f)$ are the order statistics of the sample losses $\ell(f, Z_1), \dots, \ell(f, Z_n)$.

Note that the empirical L-risk can alternatively be written in terms of the empirical cumulative distribution function $F_{f,n}$ of the sample losses $\{\ell(f, Z_i)\}_{i=1}^n$:

$$LR_{w,n} = \frac{1}{n} \sum_{i=1}^n \ell(f, Z_i)\, w(F_{f,n}(\ell(f, Z_i))). \tag{1}$$

Accordingly, the population L-risk of any function $f$ can be defined as

$$LR_w(f) = \mathbb{E}_{Z \sim P}\left[\ell(f, Z)\, w(F_f(\ell(f, Z)))\right], \tag{2}$$

where $F_f(\cdot)$ is the cumulative distribution function of $\ell(f, Z)$ for $Z$ drawn from $P$.
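The empirical L-risk of Definition 2 is straightforward to compute: sort the losses and weight the $i$-th order statistic by $w(i/n)$. Below is a minimal sketch (the function name is ours, not from the paper):

```python
import numpy as np

def empirical_l_risk(losses, w):
    """Empirical L-risk of Definition 2: weight the i-th smallest loss
    (the i-th order statistic) by w(i/n) and average."""
    losses = np.sort(np.asarray(losses))   # order statistics l_(1) <= ... <= l_(n)
    n = len(losses)
    ranks = np.arange(1, n + 1) / n        # i/n for i = 1, ..., n
    return float(np.mean(w(ranks) * losses))

# With the constant weighting function w(t) = 1, the L-risk reduces to
# the ordinary empirical risk (the average loss).
rng = np.random.default_rng(0)
losses = rng.exponential(size=1000)
print(empirical_l_risk(losses, lambda t: np.ones_like(t)))  # == losses.mean()
print(losses.mean())
```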
2.2. Illustrative Examples of L-Risk

In this section, we provide illustrative examples showing that the L-risk generalizes the classical risk and encompasses several other notions of risk measures. To begin with, observe that by simply setting the weighting function to $w(t) = 1$ for all $t \in [0, 1]$, L-Risk minimization corresponds to classical empirical risk minimization.

Conditional Value-at-Risk (CVaR). As noted in Section 1, in settings where low-probability events have catastrophic losses, using the classical risk is inappropriate. Conditional value-at-risk was introduced to handle such tail events and measures the expected loss conditioned on the event that the loss exceeds a certain threshold. Moreover, CVaR has several desirable properties as a risk measure; in particular, it is convex and coherent (Krokhmal et al., 2013). Hence, CVaR is studied across a number of fields, such as mathematical finance (Rockafellar et al., 2000), decision making, and, more recently, machine learning (Duchi et al., 2019). Formally, the CVaR of a function $f$ at confidence level $1 - \alpha$ is defined as

$$R_{\mathrm{CVaR},\alpha}(f) = \mathbb{E}_{Z \sim P}\left[\ell(f, Z) \mid \ell(f, Z) \ge \mathrm{VaR}_\alpha(\ell(f, Z))\right], \tag{3}$$

where $\mathrm{VaR}_\alpha(\ell(f, Z)) = \inf_{x : F_f(x) \ge 1 - \alpha} x$ is the value-at-risk. Observe that CVaR is a special case of the L-Risk in (2) and can be obtained by choosing $w(t) = \frac{1}{\alpha} \mathbb{1}\{t \ge 1 - \alpha\}$, where $\mathbb{1}\{\cdot\}$ is the indicator function.

Human-Aligned Risk (HRM). Cumulative prospect theory (CPT), which is motivated by empirical studies of human decision-making from behavioral economics (Tversky & Kahneman, 1992), has recently been studied in machine learning (Prashanth et al., 2016; Gopalan et al., 2017; Leqi et al., 2019).

[Figure 1. An illustration of the partition argument. Here, we have $\Delta_i = \varepsilon_i$ and $\|\Delta\|_1 = \varepsilon$. The blue points, $\{(i+1)/n,\ (i+1)/n + \varepsilon_{i+1}\}$, and the red points, $\{i/n,\ i/n + \varepsilon_i,\ (i+2)/n\}$, are in two separate partitions.]
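Returning to the CVaR example, its weighting function plugs directly into the empirical L-risk computation of Definition 2. The following check is our own illustration (the value of $\alpha$ and the synthetic losses are arbitrary choices, not from the paper):

```python
import numpy as np

def empirical_l_risk(losses, w):
    """Empirical L-risk (Definition 2), repeated here for self-containment."""
    losses = np.sort(np.asarray(losses))
    ranks = np.arange(1, len(losses) + 1) / len(losses)
    return float(np.mean(w(ranks) * losses))

def cvar_weight(alpha):
    """CVaR weighting function w(t) = (1/alpha) * 1{t >= 1 - alpha}."""
    return lambda t: (t >= 1 - alpha).astype(float) / alpha

# With n * alpha an integer, the empirical L-risk under this weight matches
# the mean of the top alpha-fraction of losses (the empirical CVaR), up to
# the single order statistic sitting exactly at the 1 - alpha boundary.
rng = np.random.default_rng(1)
losses = rng.exponential(size=1000)
alpha = 0.05
lr = empirical_l_risk(losses, cvar_weight(alpha))
tail = np.sort(losses)[-int(alpha * len(losses)):]  # top 5% of the losses
print(lr, tail.mean())                              # approximately equal
```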