Elementary Numerical Methods and Computing with Python


Elementary Numerical Methods and Computing with Python

Steven Pav [1] and Mark McClure [2]

April 14, 2016

[1] Portions Copyright © 2004-2006 Steven E. Pav. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.

[2] Copyright © 2016 Mark McClure. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License".

Contents

Preface

1 Introduction
  1.1 Examples
  1.2 Iteration
  1.3 Topics

2 Some mathematical preliminaries
  2.1 Series
    2.1.1 Geometric series
    2.1.2 The integral test
    2.1.3 Alternating Series
    2.1.4 Taylor's Theorem
  Exercises

3 Computer arithmetic
  3.1 Strange arithmetic
  3.2 Error
  3.3 Computer numbers
    3.3.1 Types of numbers
    3.3.2 Floating point numbers
    3.3.3 Distribution of computer numbers
    3.3.4 Exploring numbers with Python
  3.4 Loss of Significance
  Exercises

4 Finding Roots
  4.1 Bisection
    4.1.1 Modifications
    4.1.2 Convergence
    4.1.3 Implementation
  4.2 Functional iteration
  4.3 Newton's Method
    4.3.1 Connection with functional iteration
    4.3.2 Implementation
    4.3.3 Problems
    4.3.4 Convergence
    4.3.5 Using Newton's Method
  4.4 Secant Method
    4.4.1 Problems
    4.4.2 Convergence
  Exercises

5 Interpolation
  5.1 Polynomial Interpolation
    5.1.1 Lagrange's Method
    5.1.2 Newton's Method
    5.1.3 Newton's Nested Form
    5.1.4 Divided Differences
  5.2 Errors in Polynomial Interpolation
    5.2.1 Interpolation Error Theorem
    5.2.2 Interpolation Error for Equally Spaced Nodes
  Exercises

6 Spline Interpolation
  6.1 First and Second Degree Splines
    6.1.1 First Degree Spline Accuracy
    6.1.2 Second Degree Splines
    6.1.3 Computing Second Degree Splines
  6.2 (Natural) Cubic Splines
    6.2.1 Why Natural Cubic Splines?
    6.2.2 Computing Cubic Splines
  6.3 B Splines
  Exercises

7 Solving Linear Systems
  7.1 Gaussian Elimination with Naïve Pivoting
    7.1.1 Elementary Row Operations
    7.1.2 Algorithm Terminology
    7.1.3 Algorithm Problems
  7.2 Pivoting Strategies for Gaussian Elimination
    7.2.1 Scaled Partial Pivoting
    7.2.2 An Example
    7.2.3 Another Example and A Real Algorithm
  7.3 LU Factorization
    7.3.1 An Example
    7.3.2 Using LU Factorizations
    7.3.3 Some Theory
    7.3.4 Computing Inverses
  7.4 Iterative Solutions
    7.4.1 An Operation Count for Gaussian Elimination
    7.4.2 Dividing by Multiplying
    7.4.3 Impossible Iteration
    7.4.4 Richardson Iteration
    7.4.5 Jacobi Iteration
    7.4.6 Gauss Seidel Iteration
    7.4.7 Error Analysis
    7.4.8 A Free Lunch?
  Exercises

8 Least Squares
  8.1 Least Squares
    8.1.1 The Definition of Ordinary Least Squares
    8.1.2 Linear Least Squares
    8.1.3 Least Squares from Basis Functions
  8.2 Orthonormal Bases
    8.2.1 Alternatives to Normal Equations
  8.3 Orthogonal Least Squares
    8.3.1 Computing the Orthogonal Least Squares Approximant
    8.3.2 Principal Component Analysis
  Exercises

9 Approximating Derivatives
  9.1 Finite Differences
    9.1.1 Approximating the Second Derivative
  9.2 Richardson Extrapolation
    9.2.1 Abstracting Richardson's Method
    9.2.2 Using Richardson Extrapolation
  Exercises

10 Integrals and Quadrature
  10.1 The Definite Integral
    10.1.1 Upper and Lower Sums
    10.1.2 Approximating the Integral
    10.1.3 Simple and Composite Rules
  10.2 Trapezoidal Rule
    10.2.1 How Good is the Composite Trapezoidal Rule?
    10.2.2 Using the Error Bound
  10.3 Romberg Algorithm
    10.3.1 Recursive Trapezoidal Rule
  10.4 Gaussian Quadrature
    10.4.1 Determining Weights (Lagrange Polynomial Method)
    10.4.2 Determining Weights (Method of Undetermined Coefficients)
    10.4.3 Gaussian Nodes
    10.4.4 Determining Gaussian Nodes
    10.4.5 Reinventing the Wheel
  Exercises

11 Ordinary Differential Equations
  11.1 Elementary Methods
    11.1.1 Integration and 'Stepping'
    11.1.2 Taylor's Series Methods
    11.1.3 Euler's Method
    11.1.4 Higher Order Methods
    11.1.5 A basic error estimate
    11.1.6 Error theorems
    11.1.7 Examples
    11.1.8 Stability
    11.1.9 Backwards Euler's Method
  11.2 Runge-Kutta Methods
    11.2.1 Taylor's Series Redux
    11.2.2 Deriving the Runge-Kutta Methods
    11.2.3 Examples
  11.3 Systems of ODEs
    11.3.1 Larger Systems
    11.3.2 Recasting Single ODE Methods
    11.3.3 It's Only Systems
    11.3.4 It's Only Autonomous Systems
  Exercises

A GNU Free Documentation License
  1. APPLICABILITY AND DEFINITIONS
  2. VERBATIM COPYING
  3. COPYING IN QUANTITY
  4. MODIFICATIONS
  5. COMBINING DOCUMENTS
  6. COLLECTIONS OF DOCUMENTS
  7. AGGREGATION WITH INDEPENDENT WORKS
  8. TRANSLATION
  9. TERMINATION
  10. FUTURE REVISIONS OF THIS LICENSE
  ADDENDUM: How to use this License for your documents

Preface

This preliminary version 0.2.1 of an open numerical methods text has been assembled by Mark McClure for use in UNCA's Math/CS 441 course in Numerical Analysis. It draws largely on Steven Pav's Numerical Methods Course Notes. Most of chapters 4 through 8 of this version, as well as parts of chapters 2 and 3, are drawn directly from that work. The little that remains is my own, though I hope to expand it as I teach the subject.

History

0.1, 2005: Pav's complete original.

0.2, January 11, 2016: First new draft. Chapters 1, 2, and 3 are largely new, though section 2.1.4 on "Taylor series" and section 3.4 on "Loss of significance" are mostly Pav's work.

0.2.1, January 16, 2016: Incremental changes to sections:
  • 2.1.1 Geometric Series
  • 2.1.2 The integral test
  • 2.1.3 Alternating series

0.2.2, January 31, 2016: Addition of sections:
  • 3.3.3 Distribution of computer numbers
  • 4.2 Functional iteration
Modifications to section 4.1 (The bisection method), including Python code. Addition of several exercises to chapters 3 and 4. Fixed many typos and some hyperlinks.

License

As Pav's work is licensed under the GNU FDL (Version 1.2 or any later version), so is this. A full copy of the license may be found at the end of the text.
Recommended publications
  • Simpson's Rule
    Simpson's rule. In numerical analysis, Simpson's rule is a method for numerical integration, the numerical approximation of definite integrals. Specifically, it is the following approximation for n equally spaced subdivisions (where n is even):

      ∫_a^b f(x) dx ≈ (h/3) [f(x_0) + 4 f(x_1) + 2 f(x_2) + 4 f(x_3) + ... + 2 f(x_{n-2}) + 4 f(x_{n-1}) + f(x_n)]   (General Form),

    where h = (b − a)/n and x_i = a + i h. Simpson's rule also corresponds to the three-point Newton-Cotes quadrature rule. [Figure: Simpson's rule can be derived by approximating the integrand f(x) (in blue) by the quadratic interpolant P(x) (in red); an animation shows how the approximation improves with more strips.] In English, the method is credited to the mathematician Thomas Simpson (1710–1761) of Leicestershire, England. However, Johannes Kepler used similar formulas over 100 years prior, and for this reason the method is sometimes called Kepler's rule, or Keplersche Fassregel (Kepler's barrel rule) in German.

    Quadratic interpolation. One derivation replaces the integrand by the quadratic polynomial (i.e. parabola) P(x) which takes the same values as f(x) at the end points a and b and the midpoint m = (a + b)/2. One can use Lagrange polynomial interpolation to find an expression for this polynomial. Using integration by substitution one can show that [1]

      ∫_a^b P(x) dx = ((b − a)/6) [f(a) + 4 f(m) + f(b)].

    Introducing the step size h = (b − a)/2, this is also commonly written as (h/3) [f(a) + 4 f(m) + f(b)]. Because of the factor 1/3, Simpson's rule is also referred to as Simpson's 1/3 rule (see below for generalization).
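    To make the general form concrete, here is a minimal Python sketch of the composite rule; the function name and interface are my own choices, not from the excerpt:

        def simpson(f, a, b, n):
            """Composite Simpson's 1/3 rule on [a, b] with n subintervals (n even)."""
            if n % 2 != 0:
                raise ValueError("n must be even")
            h = (b - a) / n
            # Endpoints get weight 1; odd-indexed interior nodes weight 4; even-indexed weight 2.
            total = f(a) + f(b)
            for i in range(1, n):
                total += (4 if i % 2 == 1 else 2) * f(a + i * h)
            return total * h / 3

    As a sanity check, simpson(lambda x: x**3, 0.0, 1.0, 2) returns 0.25 exactly, since Simpson's rule has degree of exactness three.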
  • Finding the Roots Or Solving Nonlinear Equations
    Chapter 6: Finding the Roots or Solving Nonlinear Equations. Instructor: Dr. Ming Ye.

    Solving Nonlinear Equations and Sets of Linear Equations:
    • Some equations can be solved analytically: the solution of y = mx + b, or of ax^2 + bx + c = 0.
    • Large systems of linear equations.
    • Higher order equations, e.g. the specific energy equation.
    • Nonlinear algebraic equations x = g(x), e.g. Manning's equation.

    Topics Covered in This Chapter: Preliminaries; Fixed-Point Iteration; Bisection; Newton's Method; The Secant Method; Hybrid Methods; Roots of Polynomials.

    General Considerations for Roots of f(x) = 0:
    • Is this a special function that will be evaluated often? If so, some analytical study and possibly some algebraic manipulation of the function will be a good idea. Devise a custom iterative scheme.
    • How much precision is needed? Engineering calculations often require only a few significant figures.
    • How fast and robust must the method be? A robust procedure is relatively immune to the initial guess, and it converges quickly without being overly sensitive to the behavior of f(x) in the expected parameter space.
    • Is the function a polynomial? There are special procedures for finding the roots of polynomials.
    • Does the function have singularities? Some root-finding procedures will converge to a singularity, as well as converge to a root.

    There is no single root-finding method that is best for all situations.

    The Basic Root-Finding Procedure. [Figure: plot of f(h) (m^3) versus h (m).] The basic strategy is: 1. Plot the function. The plot provides an initial guess, and an indication of potential problems.
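    Since the chapter lists bisection among its topics, a minimal Python sketch of that bracketing strategy may help fix ideas; the interface and tolerances are my own choices, not code from the excerpt:

        def bisect(f, lo, hi, tol=1e-10, max_iter=200):
            """Bisection: f must be continuous with f(lo), f(hi) of opposite sign."""
            if f(lo) * f(hi) > 0:
                raise ValueError("f must change sign on [lo, hi]")
            for _ in range(max_iter):
                mid = (lo + hi) / 2
                if f(mid) == 0 or (hi - lo) / 2 < tol:
                    return mid
                if f(lo) * f(mid) < 0:
                    hi = mid  # root lies in the left half
                else:
                    lo = mid  # root lies in the right half
            return (lo + hi) / 2

    Fixed-point iteration, Newton's method, and the secant method then trade this guaranteed-but-slow bracketing for faster local convergence.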
  • Applied Quantitative Finance
    Applied Quantitative Finance. Wolfgang Härdle, Torsten Kleinow, Gerhard Stahl. In cooperation with Gökhan Aydınlı, Oliver Jim Blaskowitz, Song Xi Chen, Matthias Fengler, Jürgen Franke, Christoph Frisch, Helmut Herwartz, Harriet Holzberger, Steffi Höse, Stefan Huschens, Kim Huynh, Stefan R. Jaschke, Yuze Jiang, Pierre Kervella, Rüdiger Kiesel, Germar Knöchlein, Sven Knoth, Jens Lüssem, Danilo Mercurio, Marlene Müller, Jörn Rank, Peter Schmidt, Rainer Schulz, Jürgen Schumacher, Thomas Siegl, Robert Wania, Axel Werwatz, Jun Zheng. June 20, 2002.

    Contents: Preface; Contributors; Frequently Used Notation.

    Part I: Value at Risk.
    1 Approximating Value at Risk in Conditional Gaussian Models (Stefan R. Jaschke and Yuze Jiang): 1.1 Introduction (1.1.1 The Practical Need; 1.1.2 Statistical Modeling for VaR; 1.1.3 VaR Approximations; 1.1.4 Pros and Cons of Delta-Gamma Approximations); 1.2 General Properties of Delta-Gamma-Normal Models; 1.3 Cornish-Fisher Approximations (1.3.1 Derivation; 1.3.2 Properties); 1.4 Fourier Inversion (1.4.1 Error Analysis; 1.4.2 Tail Behavior; 1.4.3 Inversion of the cdf minus the Gaussian Approximation); 1.5 Variance Reduction Techniques in Monte-Carlo Simulation (1.5.1 Monte-Carlo Sampling Method; 1.5.2 Partial Monte-Carlo with Importance Sampling; 1.5.3 XploRe Examples).
    2 Applications of Copulas for the Calculation of Value-at-Risk (Jörn Rank and Thomas Siegl): 2.1 Copulas (2.1.1 Definition; ...)
  • An Improved Hybrid Algorithm to Bisection Method and Newton-Raphson Method 1 Introduction
    Applied Mathematical Sciences, Vol. 11, 2017, no. 56, 2789-2797. HIKARI Ltd, www.m-hikari.com. https://doi.org/10.12988/ams.2017.710302

    An Improved Hybrid Algorithm to Bisection Method and Newton-Raphson Method. Jeongwon Kim, Taehoon Noh, Wonjun Oh, Seung Park (Incheon Academy of Science and Arts, Incheon 22009, Korea); Nahmwoo Hahm, corresponding author (Department of Mathematics, Incheon National University, Incheon 22012, Korea). This research was supported by Incheon Academy of Science and Arts Fund, 2017.

    Copyright © 2017 Jeongwon Kim, Taehoon Noh, Wonjun Oh, Seung Park and Nahmwoo Hahm. This article is distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

    Abstract: In this paper, we investigate the hybrid algorithm to the bisection method and the Newton-Raphson method suggested in [1]. We suggest an improved hybrid algorithm that is suitable for functions which are not applicable to the hybrid algorithm of [1]. We give some numerical results to justify our algorithm. Mathematics Subject Classification: 65H04, 65H99. Keywords: Hybrid Algorithm, Bisection Method, Newton-Raphson Method.

    1 Introduction. Finding an approximated solution to the root of a nonlinear equation is one of the main topics of numerical analysis ([4], [6]). An algorithm for finding x such that f(x) = 0 is called a root-finding algorithm. The bisection method using the intermediate value theorem is the simplest root-finding algorithm. Note that the bisection method converges slowly but it is reliable.
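    The improved algorithm itself is not reproduced in this excerpt. As background only, here is a generic bisection/Newton-Raphson hybrid sketched in Python: keep a bracket, and accept a Newton step only when it lands inside it. The names and the fallback rule are my assumptions, not the authors' method:

        def hybrid_root(f, df, lo, hi, tol=1e-12, max_iter=100):
            """Generic hybrid: Newton step when it stays in the bracket, else bisect."""
            fa, fb = f(lo), f(hi)
            if fa * fb > 0:
                raise ValueError("f must change sign on [lo, hi]")
            x = (lo + hi) / 2
            for _ in range(max_iter):
                fx = f(x)
                if abs(fx) < tol:
                    return x
                # Shrink the bracket around the sign change.
                if fa * fx < 0:
                    hi, fb = x, fx
                else:
                    lo, fa = x, fx
                d = df(x)
                newton_ok = d != 0 and lo < x - fx / d < hi
                x = x - fx / d if newton_ok else (lo + hi) / 2
            return x

    A blend of this kind keeps bisection's reliability while recovering Newton's fast convergence near the root.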
  • MAT 2310. Computational Mathematics
    MAT 2310. Computational Mathematics. Wm C Bauldry. Fall, 2012.

    Introduction to Computational Mathematics. "1 + 1 = 3 for large enough values of 1."

    Table of Contents: I. Computer Arithmetic; II. Control Structures; S I. Special Topics: Computation Cost and Horner's Form; III. Numerical Differentiation; IV. Root Finding Algorithms; S II. Special Topics: Modified Newton's Method; V. Numerical Integration; VI. Polynomial Interpolation; S III. Case Study: TI Calculator Numerics; VII. Projects.

    I. Computer Arithmetic. Sections: 1. Scientific Notation; 2. Converting to Different Bases; 3. Floating Point Numbers; 4. IEEE-754 Floating Point Standard; 5. Maple's Floating Point Representation; 6. Error; Exercises.

    II. Control Structures. Sections: 1. Control Structures; 2. A Common Example; 3. Control ...
  • Approximation of Finite Population Totals Using Lagrange Polynomial
    Open Journal of Statistics, 2017, 7, 689-701. http://www.scirp.org/journal/ojs. ISSN Online: 2161-7198; ISSN Print: 2161-718X.

    Approximation of Finite Population Totals Using Lagrange Polynomial. Lamin Kabareh (Pan African University Institute for Basic Sciences, Technology and Innovation (PAUSTI), Nairobi, Kenya), Thomas Mageto and Benjamin Muema (Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya).

    How to cite this paper: Kabareh, L., Mageto, T. and Muema, B. (2017) Approximation of Finite Population Totals Using Lagrange Polynomial. Open Journal of Statistics, 7, 689-701. https://doi.org/10.4236/ojs.2017.74048. Received: July 9, 2017; Accepted: August 15, 2017; Published: August 18, 2017. Copyright © 2017 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY 4.0). http://creativecommons.org/licenses/by/4.0/

    Abstract: Approximation of finite population totals in the presence of auxiliary information is considered. A polynomial based on Lagrange polynomial is proposed. Like the local polynomial regression, Horvitz Thompson and ratio estimators, this approximation technique is based on annual population total in order to fit in the best approximating polynomial within a given period of time (years) in this study. This proposed technique has shown to be unbiased under a linear polynomial. The use of real data indicated that the polynomial is efficient and can approximate properly even when the data is unevenly spaced.

    Keywords: Lagrange Polynomial, Approximation, Finite Population Total, Auxiliary Information, Local Polynomial Regression, Horvitz Thompson and Ratio Estimator.

    1. Introduction. This study is using an approximation technique to approximate the finite population total called the Lagrange polynomial that doesn't require any selection of bandwidth as in the case of local polynomial regression estimator.
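    For reference, the basic tool behind the paper's estimator is the Lagrange interpolating polynomial. A minimal Python sketch of its evaluation follows; the function name and the data are illustrative assumptions, not the paper's:

        def lagrange_eval(xs, ys, x):
            """Evaluate the Lagrange polynomial through points (xs[i], ys[i]) at x."""
            total = 0.0
            for i in range(len(xs)):
                # i-th basis polynomial: product of (x - xs[j]) / (xs[i] - xs[j]) over j != i
                li = 1.0
                for j in range(len(xs)):
                    if j != i:
                        li *= (x - xs[j]) / (xs[i] - xs[j])
                total += ys[i] * li
            return total

        # Hypothetical annual totals at unevenly spaced years:
        print(lagrange_eval([2001, 2003, 2006], [120.0, 150.0, 210.0], 2004))

    Note the evaluation needs no bandwidth choice, which is the convenience the authors cite over local polynomial regression.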
  • Polynomial Interpolation 1
    CREACOMP: e-Training of Creativity and Problem-Solving Competence (original title: "CREACOMP e-Schulung von Kreativität und Problemlösungskompetenz"). Prof. Dr. Bruno Buchberger, Prof. Dr. Erich Peter Klement, Prof. Dr. Günter Pilz, and staff of the ALGEBRA, FLLL, and RISC institutes.

    CREACOMP final report contents: Summary; Using the CREACOMP software; Short descriptions of the learning units: Theorema; Elementary Set Theory; Relations; Equivalence Relations; Factoring Integers; Polynomial Interpolation 1; Polynomial Interpolation 2; Real Sequences 1; Real Sequences 2.
  • Language Program for Lagrange's Interpolation
    International Journal of Emerging Trends & Technology in Computer Science (IJETTCS). Web Site: www.ijettcs.org. Email: [email protected]. Volume 5, Issue 6, November - December 2016. ISSN 2278-6856.

    ICT Tool: 'C' Language Program for Lagrange's Interpolation. Pradip P. Kolhe (Assistant Professor (Computer Sci.), ARIS Cell, Dr. PDKV, Akola (MS), India), Dr Prakash R. Kolhe (Associate Professor (Computer Sci.) (CAS), Dr. BSKKV, Dapoli (MS), India), Sanjay C. Gawande (Assistant Professor (Mathematics), College of Agriculture, Dr. PDKV, Akola).

    Abstract: In mathematics, to estimate an unknown data by analysing the given referenced data, Lagrange's Interpolation technique is used. For the given data (say 'y' at various 'x' in tabulated form), the 'y' value corresponding to 'x' values can be found by interpolation. Interpolation is the process of estimation of an unknown data by analysing the given reference data. In case of equally spaced 'x' values, a number of interpolation methods are available, such as Newton's forward and backward interpolation, Gauss's forward and backward interpolation, Bessel's formula, Laplace-Everett's formula, etc.

    Although named after Joseph Louis Lagrange, who published it in 1795, the method was first discovered in 1779 by Edward Waring and it is also an easy consequence of a formula published in 1783 by Leonhard Euler. Lagrange interpolation is susceptible to Runge's phenomenon, and the fact that changing the interpolation points requires recalculating the entire interpolant can make Newton polynomials easier to use. Lagrange polynomials are used in the Newton–Cotes method of numerical integration and in Shamir's secret sharing scheme in cryptography.
  • Pricing Options and Computing Implied Volatilities Using Neural Networks
    Risks (journal article): Pricing Options and Computing Implied Volatilities using Neural Networks. Shuaiqiang Liu 1,*, Cornelis W. Oosterlee 1,2 and Sander M. Bohte 2. 1 Delft Institute of Applied Mathematics (DIAM), Delft University of Technology, Building 28, Mourik Broekmanweg 6, 2628 XE Delft, Netherlands. 2 Centrum Wiskunde & Informatica, Science Park 123, 1098 XG Amsterdam, Netherlands. * Correspondence: [email protected]. Received: 8 January 2019; Accepted: 6 February 2019; Published: date.

    Abstract: This paper proposes a data-driven approach, by means of an Artificial Neural Network (ANN), to value financial options and to calculate implied volatilities with the aim of accelerating the corresponding numerical methods. With ANNs being universal function approximators, this method trains an optimized ANN on a data set generated by a sophisticated financial model, and runs the trained ANN as an agent of the original solver in a fast and efficient way. We test this approach on three different types of solvers, including the analytic solution for the Black-Scholes equation, the COS method for the Heston stochastic volatility model and Brent's iterative root-finding method for the calculation of implied volatilities. The numerical results show that the ANN solver can reduce the computing time significantly.

    Keywords: machine learning; neural networks; computational finance; option pricing; implied volatility; GPU; Black-Scholes; Heston.

    1. Introduction. In computational finance, numerical methods are commonly used for the valuation of financial derivatives and also in modern risk management. Generally speaking, advanced financial asset models are able to capture nonlinear features that are observed in the financial markets.
  • Blended Root Finding Algorithm Outperforms Bisection and Regula Falsi Algorithms
    Missouri University of Science and Technology, Scholars' Mine, Computer Science Faculty Research & Creative Works. 01 Nov 2019.

    Blended Root Finding Algorithm Outperforms Bisection and Regula Falsi Algorithms. Chaman Lal Sabharwal, Computer Science Department, Missouri University of Science and Technology, Rolla, MO 65409, USA; [email protected]. Received: 5 October 2019; Accepted: 11 November 2019; Published: 16 November 2019.

    Recommended Citation: C. Sabharwal, "Blended Root Finding Algorithm Outperforms Bisection and Regula Falsi Algorithms," Mathematics, vol. 7, no. 11, MDPI AG, Nov 2019. The definitive version is available at https://doi.org/10.3390/math7111118. This work is licensed under a Creative Commons Attribution 4.0 License.

    Abstract: Finding the roots of an equation is a fundamental problem in various fields, including numerical computing, social and physical sciences. Numerical techniques are used when an analytic solution is not available. There is not a single algorithm that works best for every function. We designed and implemented a new algorithm that is a dynamic blend of the bisection and regula falsi algorithms.
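    The blended algorithm itself is not shown in this excerpt. As background, here is a minimal Python sketch of regula falsi (false position), the second of the two ingredient methods; a bisection sketch appears earlier in this list, and the interface here is my own:

        def regula_falsi(f, lo, hi, tol=1e-12, max_iter=200):
            """False position: replace the midpoint by the zero of the secant
            through (lo, f(lo)) and (hi, f(hi)), keeping the sign-change bracket."""
            fl, fh = f(lo), f(hi)
            if fl * fh > 0:
                raise ValueError("f must change sign on [lo, hi]")
            x = lo
            for _ in range(max_iter):
                x = hi - fh * (hi - lo) / (fh - fl)  # secant crossing
                fx = f(x)
                if abs(fx) < tol:
                    return x
                if fl * fx < 0:
                    hi, fh = x, fx
                else:
                    lo, fl = x, fx
            return x

    A dynamic blend such as the paper's switches between this step and a bisection step; the precise switching rule is in the full text.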
  • Chapter 1 Root Finding Algorithms
    Chapter 1: Root finding algorithms.

    1.1 Root finding problem. In this chapter, we will discuss several general algorithms that find a solution of the following canonical equation:

      f(x) = 0    (1.1)

    Here f(x) is any function of one scalar variable x (methods to solve a more general equation of the vector form f(x) = 0, with x a vector, are considered in chapter 13, which covers optimization), and an x which satisfies the above equation is called a root of the function f. Often, we are given a problem which looks slightly different:

      h(x) = g(x)    (1.2)

    But it is easy to transform to the canonical form with the following relabeling:

      f(x) = h(x) − g(x) = 0    (1.3)

    Example: 3x^3 + 2 = sin x becomes 3x^3 + 2 − sin x = 0.    (1.4)

    For some problems of this type, there are methods to find analytical or closed-form solutions. Recall, for example, the quadratic equation problem, which we discussed in detail in chapter 4. Whenever it is possible, we should use the closed-form solutions. They are usually exact, and their implementations are much faster. However, an analytical solution might not exist for a general equation, i.e. our problem is transcendental. Example: the following equation is transcendental:

      e^x − 10x = 0    (1.5)

    In the following text, we will formulate methods which are agnostic to the functional form of eq. (1.1).

    1.2 Trial and error method. Broadly speaking, all methods presented in this chapter are of the trial and error type. One can attempt to obtain the solution by just guessing it, and eventually one would succeed.
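    A short Python illustration of the relabeling and of trial-and-error bracketing, using the text's equations (1.4) and (1.5); the grid scan and its names are my own additions:

        import math

        # Eq. (1.4): 3x^3 + 2 = sin x, relabeled to canonical form f(x) = 0.
        f = lambda x: 3 * x**3 + 2 - math.sin(x)

        # Eq. (1.5): a transcendental equation with no closed-form solution.
        g = lambda x: math.exp(x) - 10 * x

        def sign_changes(fn, a, b, steps=1000):
            """Scan a grid; adjacent points with opposite signs bracket a root."""
            xs = [a + (b - a) * k / steps for k in range(steps + 1)]
            return [(x0, x1) for x0, x1 in zip(xs, xs[1:]) if fn(x0) * fn(x1) < 0]

        print(sign_changes(g, 0.0, 4.0))  # two brackets, near x ≈ 0.112 and x ≈ 3.577

    Such a scan is the systematic version of "just guessing": it supplies starting brackets for the algorithms in the rest of the chapter.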
  • The Chebyshev Points of the First Kind
    Applied Numerical Mathematics 102 (2016) 17-30. Contents lists available at ScienceDirect: Applied Numerical Mathematics, www.elsevier.com/locate/apnum.

    The Chebyshev points of the first kind. Kuan Xu, Mathematical Institute, University of Oxford, Oxford OX2 6GG, UK. Article history: Received 12 April 2015; Received in revised form 31 August 2015; Accepted 23 December 2015; Available online 11 January 2016. © 2016 IMACS. Published by Elsevier B.V. All rights reserved. Keywords: Chebyshev points; Chebyshev nodes; Approximation theory; Spectral methods.

    Abstract: In the last thirty years, the Chebyshev points of the first kind have not been given as much attention for numerical applications as the second-kind ones. This survey summarizes theorems and algorithms for first-kind Chebyshev points with references to the existing literature. Benefits from using the first-kind Chebyshev points in various contexts are discussed.

    1. Introduction. The Chebyshev polynomials of the first kind, Tn(x) = cos(n arccos x), were introduced by Pafnuty Chebyshev in a paper on hinge mechanisms in 1853 [11]. The zeros and the extrema of these polynomials were investigated in 1859 in a paper by Chebyshev on best approximation [12]. The zeros of Chebyshev polynomials are called Chebyshev points of the first kind, Chebyshev nodes, or, more formally, Chebyshev–Gauss points; they are given by

      x_k = cos θ_k,  θ_k = (2k + 1)π / (2n),  k = 0, ..., n − 1.    (1)

    The extrema, given by

      y_k = cos φ_k,  φ_k = kπ / (n − 1),  k = 0, 1, ..., n − 1,    (2)

    are called the Chebyshev points of the second kind, or Chebyshev extreme points, or Chebyshev–Lobatto points.
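    Formulas (1) and (2) translate directly into Python; a minimal sketch with function names of my choosing:

        import math

        def chebyshev_first_kind(n):
            """Zeros of T_n: x_k = cos((2k + 1)π / (2n)), k = 0, ..., n − 1."""
            return [math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]

        def chebyshev_second_kind(n):
            """Extrema y_k = cos(kπ / (n − 1)), k = 0, ..., n − 1 (needs n >= 2)."""
            return [math.cos(k * math.pi / (n - 1)) for k in range(n)]

        print(chebyshev_first_kind(4))  # ≈ [0.924, 0.383, -0.383, -0.924]

    Note that the second-kind points include the endpoints ±1, while the first-kind points are strictly interior to [−1, 1].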