Smoothing and Weighted Average Techniques


Shreyan Ganguly, Jeffrey Gory, Meng Li, Matt Pratola, and Angela Dean
SSES Reading Group, October 28, 2014

General Overview: Non-Stationarity

• Second-order stationarity implies that the covariance between two distinct points in the spatial domain is a function only of the separation between them.
• Under non-stationarity, this is no longer the case.
• One method of estimating the covariance structure of a non-stationary spatial process is with smoothing or weighted average techniques.

General Overview: Notable Papers

• Fuentes (2001) represented a non-stationary spatial process as a weighted average of orthogonal local stationary processes.
• Fuentes and Smith (2001) extended this idea to a continuous convolution of local stationary processes.
• Kim, Mallick, and Holmes (2005) proposed a similar approach with piecewise Gaussian processes that allows for sharp transitions in the covariance structure.
• Nott and Dunsmuir (2002) used a weighted average of stationary processes to describe the conditional behaviour of a process given the values of the process at a set of monitoring sites.

Fuentes (2001): High-Frequency Kriging Procedure

• Suppose Z(·) is a non-stationary process observed on a given spatial region D.
• The region D is divided into k disjoint subregions S_i, i = 1, 2, ..., k, such that

    D = \bigcup_{i=1}^{k} S_i

• Here k can itself be treated as a parameter to be estimated.

Fuentes (2001): Representation of Z

• Z is represented as a weighted average of orthogonal local stationary processes Z_i:

    Z(s) = \sum_{i=1}^{k} Z_i(s) w_i(s)

• Each w_i(·) is a weight function (e.g., the inverse of the distance between s and S_i).
• Each Z_i is a stationary process whose spatial covariance represents the spatial structure of Z in the subregion S_i.
• The non-stationary covariance of Z is defined as

    \mathrm{cov}(Z(s), Z(t)) = \sum_{i=1}^{k} w_i(s) w_i(t) \, \mathrm{cov}(Z_i(s), Z_i(t)),

  where cov(Z_i(s), Z_i(t)) = K_{θ_i}(s − t) and K_{θ_i}(·) is a Matérn covariance.

Fuentes (2001): Algorithm to Find k

• Start with a small number of subregions (i.e., a small value of k).
• Increase k by dividing each subregion S_i in half.
• Repeat until a subregion would contain fewer than 36 observations, or until the AIC suggests no significant improvement in the estimation of θ_i, i = 1, ..., k, from increasing the number of subregions k.

Fuentes (2001): More Details on Finding k

• If the data lie on a regular grid, the Whittle (1954) approximation to the likelihoods of the local stationary processes is used to calculate the global likelihood and obtain the AIC value.
• If the data do not lie on a regular grid, the methods proposed by Kitanidis (1993) and by Mardia and Marshall (1984) are used to efficiently maximize the global likelihood and obtain the AIC.
• The minimum number of observations in a subregion is set at 36 because calculating the periodogram is not recommended when fewer than 36 sample points are available to compute the sample covariance (see Haas 1990; Journel and Huijbregts 1978; SAS/STAT Technical Report 1996).
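To make the weighted-average representation concrete, here is a minimal Python sketch of cov(Z(s), Z(t)) = Σ_i w_i(s) w_i(t) K_{θ_i}(s − t). It assumes inverse-distance weights normalized to sum to one and the common (σ², ρ, ν) parametrization of the Matérn family; the subregion centers and parameter values are illustrative, not taken from Fuentes (2001).

```python
import numpy as np
from scipy.special import gamma, kv

def matern(h, sigma2=1.0, rho=1.0, nu=0.5):
    """Matérn covariance K_theta(h) for distances h >= 0 (assumed parametrization)."""
    h = np.atleast_1d(np.asarray(h, dtype=float))
    c = np.full_like(h, sigma2)                     # covariance at lag 0
    pos = h > 0
    u = np.sqrt(2.0 * nu) * h[pos] / rho
    c[pos] = sigma2 * (2.0 ** (1.0 - nu) / gamma(nu)) * u ** nu * kv(nu, u)
    return c

def inv_dist_weights(s, centers, eps=1e-6):
    """Normalized inverse-distance weights w_i(s) to the subregion centers."""
    d = np.linalg.norm(centers - s, axis=1)
    w = 1.0 / (d + eps)
    return w / w.sum()

def nonstat_cov(s, t, centers, thetas):
    """cov(Z(s), Z(t)) = sum_i w_i(s) w_i(t) K_{theta_i}(s - t)."""
    ws, wt = inv_dist_weights(s, centers), inv_dist_weights(t, centers)
    h = np.linalg.norm(s - t)
    return sum(w1 * w2 * matern(h, **th)[0]
               for w1, w2, th in zip(ws, wt, thetas))

# Illustration: k = 4 subregions of the unit square, each with its own theta_i.
centers = np.array([[0.25, 0.25], [0.25, 0.75], [0.75, 0.25], [0.75, 0.75]])
thetas = [dict(sigma2=1.0, rho=r, nu=nu)
          for r, nu in [(0.1, 0.5), (0.2, 1.5), (0.3, 0.5), (0.15, 2.5)]]
print(nonstat_cov(np.array([0.2, 0.3]), np.array([0.6, 0.7]), centers, thetas))
```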
Fuentes (2001): Estimation of Covariance Parameters

• Recall that the non-stationary covariance of Z is represented as

    \mathrm{cov}(Z(s), Z(t)) = \sum_{i=1}^{k} w_i(s) w_i(t) \, \mathrm{cov}(Z_i(s), Z_i(t))

• Assuming cov(Z_i(s), Z_i(t)) = K_{θ_i}(s − t) is a Matérn covariance structure, we have

    \mathrm{cov}(Z(s), Z(t)) = \sum_{i=1}^{k} w_i(s) w_i(t) \, K_{\theta_i}(s - t)

• We need to estimate θ_i for each S_i.

Fuentes (2001): Estimation of Covariance Parameters (cont.)

• Using a spectral approach, assume for Z_i on each subgrid S_i the spectral density

    f_i(\omega) = \phi_i \left( |\omega|^2 \right)^{-\nu_i - d/2},

  so that θ_i = (φ_i, ν_i)^T.
• Let G_i be the generalized covariance corresponding to f_i.
• The estimates of φ_i and ν_i are obtained using a weighted non-linear least squares technique in the spectral domain.
• With these estimates we get the estimated generalized covariance

    \hat{G}(Z(s), Z(t)) = \sum_{i=1}^{k} w_i(s) w_i(t) \, \hat{G}_i(Z_i(s), Z_i(t))

Fuentes and Smith (2001): Continuous Case

• Spatially weighted average of local stationary processes:

    Z(s) = \sum_{i=1}^{k} Z_i(s) w_i(s),

  where the w_i are weight functions and the Z_i are orthogonal stationary processes with Z_i(s) ⊥ Z_j(t) for any i ≠ j.
• Continuous convolution of local stationary processes:

    Z(s) = \int_D K(s - u) Z_{\theta(u)}(s) \, du,

  where K is a kernel function and Z_{θ(u)}, u ∈ D, is a family of independent stationary Gaussian processes indexed by θ, which is assumed to vary smoothly across space.

Fuentes and Smith (2001): Kernel Smoothing Method

• Stationary Gaussian processes can be represented in the form

    Z(s) = \int_D K(s - u) X(u) \, du,

  where K(·) is some kernel function and X(·) is a Gaussian white noise process.
• This representation can be extended to non-stationary processes.

Fuentes and Smith (2001): Kernel Smoothing Models

• Model 1 (process convolution model):

    Z(s) = \int_D K_s(u) X(u) \, du,    (1)

  where the kernel K_s depends on the location s.
• Model 2 (continuous convolution of local stationary processes):

    Z(s) = \int_D K(s - u) Z_{\theta(u)}(s) \, du,    (2)

  where Z_{θ(u)}(s), s ∈ D, is a family of independent stationary Gaussian processes indexed by θ(u).

Fuentes and Smith (2001): Kernel Smoothing Models (cont.)

• In general, Models (1) and (2) are not equivalent.
• Since Z_{θ(u)} is a family of independent stationary Gaussian processes, it can be written in the form

    Z_{\theta(u)}(s) = \int_{\mathbb{R}^2} K_u(x - s) X_u(x) \, dx,

  where K_u is a kernel for each location u, and X_u is an independent white noise process for each location u.
• Only in the case that the process X_u is common across all u (instead of being an independent white noise process for each location u) can Models (1) and (2) be expressed interchangeably.
• However, the processes Z_{θ(u)}(s), s ∈ D, are correlated through the kernel smoothing.

Fuentes and Smith (2001): Covariance Structure of the Non-Stationary Process Z

• Let Z_{θ(u)} be a stationary Gaussian process with covariance function C_{θ(u)}, that is,

    \mathrm{cov}(Z_{\theta(u)}(x_1), Z_{\theta(u)}(x_2)) = C_{\theta(u)}(x_1 - x_2)

• The covariance between locations x_1 and x_2 of the non-stationary process Z is a convolution of the local covariances C_{θ(u)}(x_1 − x_2):

    C(x_1, x_2; \theta) = \int_D K(x_1 - u) K(x_2 - u) C_{\theta(u)}(x_1 - x_2) \, du

• Assume that θ(u) is a continuous function of u.
• In implementation, the integral can be approximated by a Riemann sum or by Monte Carlo integration, as sketched below.
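As a concrete illustration of the Riemann-sum approximation, the sketch below discretizes D = [0, 1]² with an m × m midpoint grid. The Gaussian kernel K, the exponential local covariance C_{θ(u)}, and the smoothly varying range field θ(u) are all assumptions made for illustration; the paper leaves these choices open.

```python
import numpy as np

def gauss_kernel(h, bw=0.2):
    """Isotropic Gaussian smoothing kernel with bandwidth bw (assumed form)."""
    return np.exp(-0.5 * np.sum(h ** 2, axis=-1) / bw ** 2)

def local_cov(lag, theta_u):
    """Exponential local covariance C_{theta(u)}(lag) with range theta(u)."""
    return np.exp(-np.linalg.norm(lag) / theta_u)

def theta(u):
    """A range field varying smoothly over the unit square (illustrative)."""
    return 0.1 + 0.4 * u[..., 0]

def nonstat_cov(x1, x2, m=50):
    """Riemann sum for C(x1, x2) = int_D K(x1-u) K(x2-u) C_{theta(u)}(x1-x2) du."""
    g = (np.arange(m) + 0.5) / m                             # midpoints in [0, 1]
    U = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)  # grid of u values
    du = 1.0 / m ** 2                                        # area of each cell
    lag = x1 - x2
    w = gauss_kernel(x1 - U) * gauss_kernel(x2 - U)
    c = np.array([local_cov(lag, t) for t in theta(U)])
    return du * np.sum(w * c)

print(nonstat_cov(np.array([0.2, 0.3]), np.array([0.6, 0.7])))
```

A Monte Carlo alternative would instead draw points u uniformly on D and average K(x_1 − u) K(x_2 − u) C_{θ(u)}(x_1 − x_2) over the draws, scaled by the area of D.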
Fuentes and Smith (2001): Hierarchical Bayesian Model

    Z(s) = \int_D K(s - u) Z_{\theta(u)}(s) \, du

• Stage 1: Conditional on the model parameters {θ(u), u ∈ D}, Z is Gaussian.
• Stage 2: θ(u) = a + r_i + c_j + ε(u), where ε(u) is a stationary spatial process with mean zero whose covariance structure can be modeled by some well-known covariance function with unknown parameters.
• Another parameter that needs to be estimated is the bandwidth h that defines the kernel function K.

Kim, Mallick, and Holmes (2005): Piecewise Gaussian Processes

• Much like Fuentes (2001), the region is partitioned into several subregions and a stationary process is fit within each subregion.
• The approach is adapted to deal with sharp transitions in the covariance structure, so the processes are assumed independent across subregions.
• The resulting non-stationary model is not smooth.
• One possible application is modeling soil permeability.
• A Voronoi tessellation (Green and Sibson 1978) is used to determine the number of subregions and their centers.
• The model is fit within a Bayesian framework.

Kim, Mallick, and Holmes (2005): Voronoi Tessellation

[Figure: example of a Voronoi tessellation partitioning the spatial region.]

Nott and Dunsmuir (2002): General Idea

• Measurements of a spatial process are available at a set of monitoring sites.
• An empirical covariance matrix can be estimated from these measurements.
• We want to extend this covariance matrix to a covariance function over the entire region of interest.
• This is done with a weighted average of local stationary processes.

Nott and Dunsmuir (2002): Illustration

[Figure: illustration of a monitored spatial region; image from Haas 1990.]

Nott and Dunsmuir (2002): Notation

• Z = {Z(s), s ∈ R^d} is the spatial process of interest.
• s_1, ..., s_n are the sites at which Z is observed.
• Γ is the covariance matrix of (Z(s_1), ..., Z(s_n))^T.
• W_i(s), i = 1, ..., I, are a collection of independent, zero-mean, stationary processes that describe the behavior of Z locally about a collection z_1, ..., z_I of I (unmonitored) locations.
• R_i(h), h ∈ R^d, is the covariance function corresponding to W_i(s).
• C_i = [R_i(s_j − s_k)]_{j,k=1}^{n} is the covariance matrix of W_i = (W_i(s_1), ..., W_i(s_n))^T.
• c_i(s) = (R_i(s − s_1), ..., R_i(s − s_n))^T is the vector of cross-covariances between W_i(s) and W_i. (These building blocks are sketched in code below.)
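The notation lends itself to a short sketch: for a given stationary covariance function R_i, build the matrix C_i = [R_i(s_j − s_k)] and the cross-covariance vector c_i(s). The exponential form of R_i and the random site layout below are assumptions for illustration only; the paper leaves R_i as a generic stationary covariance.

```python
import numpy as np

def R(h, rho=0.3):
    """An assumed exponential covariance function R_i(h) with range rho."""
    return np.exp(-np.linalg.norm(h, axis=-1) / rho)

def C_matrix(sites, rho):
    """C_i = [R_i(s_j - s_k)], the n x n covariance matrix of W_i."""
    diff = sites[:, None, :] - sites[None, :, :]   # all pairwise lags s_j - s_k
    return R(diff, rho)

def c_vector(s, sites, rho):
    """c_i(s) = (R_i(s - s_1), ..., R_i(s - s_n))^T."""
    return R(s - sites, rho)

rng = np.random.default_rng(0)
sites = rng.uniform(size=(10, 2))          # n = 10 monitoring sites (illustrative)
rhos = [0.2, 0.5]                          # ranges for I = 2 local processes W_i
Cs = [C_matrix(sites, r) for r in rhos]    # one C_i per local process
cs = [c_vector(np.array([0.4, 0.6]), sites, r) for r in rhos]
print(Cs[0].shape, cs[0].shape)            # (10, 10) and (10,)
```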