MATH 4513 Numerical Analysis Chapter 2. Solutions of Equations in One Variable

Xu Zhang, Assistant Professor
Department of Mathematics, Oklahoma State University
Fall 2020

Textbook: Numerical Analysis (10th edition), R. L. Burden, D. J. Faires, A. M. Burden

Contents
2.1 The Bisection Method
2.2 Fixed-Point Iteration
2.3 Newton's Method and Its Extensions
2.4 Error Analysis for Iterative Methods

Section 2.1 The Bisection Method

Starting from this section, we study the most basic mathematics problem: the root-finding problem

    f(x) = 0.

The first numerical method, based on the Intermediate Value Theorem (IVT), is called the Bisection Method.

Suppose that f(x) is continuous on [a, b], and that f(a) and f(b) have opposite signs. By the IVT, there exists a number p in (a, b) such that f(p) = 0; that is, f(x) has a root in (a, b).

Idea of the Bisection Method: repeatedly halve the subinterval of [a, b], and at each step locate the half containing the root.

[Figure 2.1: graph of y = f(x) with the root p, the successive midpoints p1, p2, p3, and the nested intervals (a1, b1), (a2, b2), (a3, b3).]

Set a1 = a and b1 = b. Calculate the midpoint p1 = (a1 + b1)/2.
- If f(p1) = 0, then p = p1 and we are done.
- If f(p1) ≠ 0, then f(p1) has the same sign as either f(a) or f(b).
  - If f(p1) and f(a) have the same sign, then p is in (p1, b1). Set a2 = p1 and b2 = b1.
  - If f(p1) and f(b) have the same sign, then p is in (a1, p1). Set a2 = a1 and b2 = p1.
Repeat the process on [a2, b2].

ALGORITHM 2.1 Bisection
To find a solution to f(x) = 0 given the continuous function f on the interval [a, b], where f(a) and f(b) have opposite signs:

INPUT   endpoints a, b; tolerance TOL; maximum number of iterations N0.
OUTPUT  approximate solution p or message of failure.

Step 1  Set i = 1; FA = f(a).
Step 2  While i ≤ N0 do Steps 3-6.
Step 3    Set p = a + (b − a)/2;  (Compute p_i.)
          FP = f(p).
Step 4    If FP = 0 or (b − a)/2 < TOL then
            OUTPUT (p);  (Procedure completed successfully.)
            STOP.
Step 5    Set i = i + 1.
Step 6    If FA · FP > 0 then set a = p; FA = FP  (Compute a_i, b_i.)
          else set b = p.  (FA is unchanged.)
Step 7  OUTPUT ('Method failed after N0 iterations, N0 =', N0);
        (The procedure was unsuccessful.)
        STOP.

Other stopping procedures can be applied in Step 4 of Algorithm 2.1, or in any of the iterative techniques in this chapter. For example, we can select a tolerance ε > 0 and generate p1, ..., pN until one of the stopping criteria listed in the remarks later in this section is met.
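For readers who prefer running code to pseudocode, the following is a minimal MATLAB sketch that transcribes Algorithm 2.1 step by step. It is not taken from the slides: the function name bisection_alg21 and the NaN returned on failure are our own choices. The simplified classroom versions bisect0.m, bisect1.m, and bisect.m appear below.

function p = bisection_alg21(fun, a, b, TOL, N0)
% Sketch: a direct transcription of ALGORITHM 2.1 above.
% (The function name and the NaN returned on failure are our own choices.)
i = 1; FA = fun(a);                  % Step 1
while i <= N0                        % Step 2
    p = a + (b - a)/2;               % Step 3: compute p_i
    FP = fun(p);
    if FP == 0 || (b - a)/2 < TOL    % Step 4: procedure completed successfully
        return;
    end
    i = i + 1;                       % Step 5
    if FA*FP > 0                     % Step 6: the root lies in [p, b]
        a = p; FA = FP;
    else                             %         the root lies in [a, p]; FA unchanged
        b = p;
    end
end
fprintf('Method failed after N0 iterations, N0 = %d\n', N0);   % Step 7
p = NaN;
end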
ALGORITHM – Bisection (Preliminary Version)

USAGE: p = bisect0(f, a, b) finds a solution to f(x) = 0 on the interval [a, b].

For n = 1, 2, 3, ..., 20, do the following:
Step 1  Set p = (a + b)/2.
Step 2  Calculate FA = f(a), FB = f(b), and FP = f(p).
Step 3  If FA · FP > 0, set a = p.
        If FB · FP > 0, set b = p.
        Go back to Step 1.

Remark. The above algorithm performs 20 bisection iterations; the number 20 is arbitrary.

Example 1. Show that f(x) = x^3 + 4x^2 − 10 = 0 has a root in [1, 2] and use the Bisection method to find an approximate root.

Solution. Because f(1) = −5 and f(2) = 14, the IVT ensures that this continuous function has a root in [1, 2]. To proceed with the Bisection method, we write a simple MATLAB code.

Matlab Code for Bisection (Preliminary Version): bisect0.m

function p = bisect0(fun,a,b)
% This is a preliminary version of Bisection Method
for n = 1:20 % Set max number of iterations to be 20
    p = (a+b)/2;
    FA = fun(a); FB = fun(b); FP = fun(p);
    if FA*FP > 0
        a = p;
    elseif FB*FP > 0
        b = p;
    end
end

A "Driver" File: ex2_1_0.m

% Driver File: Example 2.1.1 in the Textbook
%% Inputs
fun = @(x) x^3+4*x^2-10;
a = 1; b = 2;
%% Call the subroutine: bisect0.m
p = bisect0(fun,a,b)

After 20 iterations, we obtain the solution p ≈ 1.365229606628418.

To display more information from the whole iteration process, we modify the MATLAB subroutine file.

Matlab Code for Bisection (Preliminary Version with more outputs): bisect1.m

function p = bisect1(fun,a,b)
% This is a preliminary version of Bisection Method
% This version displays intermediate outputs nicely
disp('Bisection Methods')
disp('-----------------------------------------------------------------')
disp(' n      a_n            b_n            p_n            f(p_n)')
disp('-----------------------------------------------------------------')
formatSpec = '%2d % .9f % .9f % .9f % .9f \n';
for n = 1:20 % Set max number of iterations to be 20
    p = (a+b)/2;
    FA = fun(a); FB = fun(b); FP = fun(p);
    fprintf(formatSpec,[n,a,b,p,fun(p)]) % Printing output
    if FA*FP > 0
        a = p;
    elseif FB*FP > 0
        b = p;
    end
end

Some Remarks on the Bisection Method

- To start, an interval [a, b] must be found with f(a) · f(b) < 0. Otherwise, there may be no root in that interval.
- It is good to set a maximum iteration number "maxit", in case the iteration enters an endless loop.
- It is good to set a tolerance or stopping criterion to avoid unnecessary computational effort, such as
  1. (b_n − a_n)/2 < tol
  2. |p_n − p_{n+1}| < tol
  3. |p_n − p_{n+1}| / |p_n| < tol
  4. |f(p_n)| < tol
  (A sketch of how such a criterion can replace the fixed iteration count is given after this list.)
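The following sketch, not taken from the slides, shows how criterion 2, |p_n − p_{n+1}| < tol, could replace the fixed 20 iterations of bisect1.m; the function name bisect_tol and the variable p_old are our own.

function p = bisect_tol(fun, a, b, tol, maxIt)
% Sketch: bisection that stops once successive midpoints agree to within tol.
% (The function name and the variable p_old are our own, not from the slides.)
p_old = Inf;                  % no previous midpoint yet
for n = 1:maxIt
    p = (a + b)/2;
    if abs(p - p_old) < tol   % criterion 2: |p_n - p_{n+1}| < tol
        return;
    end
    if fun(a)*fun(p) > 0      % f(a) and f(p) have the same sign: root in [p, b]
        a = p;
    else                      % otherwise the root is in [a, p]
        b = p;
    end
    p_old = p;
end
end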
A more robust Matlab code for the Bisection method: bisect.m

function [p,flag] = bisect(fun,a,b,tol,maxIt)
%% This is a more robust version of Bisection Method than bisect1.m
flag = 0; % Use a flag to tell if the output is reliable
if fun(a)*fun(b) > 0 % Check f(a) and f(b) have different signs
    error('f(a) and f(b) must have different signs');
end
disp('Bisection Methods')
disp('-----------------------------------------------------------------')
disp(' n      a_n            b_n            p_n            f(p_n)')
disp('-----------------------------------------------------------------')
formatSpec = '%2d % .9f % .9f % .9f % .9f \n';
for n = 1:maxIt
    p = (a+b)/2;
    FA = fun(a); FP = fun(p);
    fprintf(formatSpec,[n,a,b,p,fun(p)]) % Printing output
    if abs(FP) <= 10^(-15) || (b-a)/2 < tol
        flag = 1;
        break; % Break out of the for loop.
    else
        if FA*FP > 0
            a = p;
        else
            b = p;
        end
    end
end

Example 2. Use the Bisection method to find a root of f(x) = x^3 + 4x^2 − 10 = 0 in the interval [1, 2] that is accurate to within 10^(-4).

Solution. We write a MATLAB driver file for this test problem: ex2_1_1.m

% Example 2.1.1 in the Textbook
fun = @(x) x^3+4*x^2-10;
a = 1; b = 2;
tol = 1E-4; maxIt = 40;
[p,flag] = bisect(fun,a,b,tol,maxIt);

In this driver file, we specify all five inputs (fun, a, b, tol, maxIt) and call the Bisection method code bisect.m.

Outputs from the MATLAB Command Window

>> ex2_1_1
Bisection Methods
-----------------------------------------------------------------
 n      a_n            b_n            p_n            f(p_n)
-----------------------------------------------------------------
 1  1.000000000  2.000000000  1.500000000  2.375000000
 2  1.000000000  1.500000000  1.250000000 -1.796875000
 3  1.250000000  1.500000000  1.375000000  0.162109375
 4  1.250000000  1.375000000  1.312500000 -0.848388672
 5  1.312500000  1.375000000  1.343750000 -0.350982666
 6  1.343750000  1.375000000  1.359375000 -0.096408844
 7  1.359375000  1.375000000  1.367187500  0.032355785
 8  1.359375000  1.367187500  1.363281250 -0.032149971
 9  1.363281250  1.367187500  1.365234375  0.000072025
10  1.363281250  1.365234375  1.364257812 -0.016046691
11  1.364257812  1.365234375  1.364746094 -0.007989263
12  1.364746094  1.365234375  1.364990234 -0.003959102
13  1.364990234  1.365234375  1.365112305 -0.001943659
14  1.365112305  1.365234375  1.365173340 -0.000935847
>>

The approximations p_n converge to the true solution p = 1.365230013...
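A quick sanity check (ours, not from the slides): each bisection step halves the bracket, so the stopping test (b − a)/2 < tol in bisect.m first succeeds once (b0 − a0)/2^n < tol, i.e. after n = ceil(log2((b0 − a0)/tol)) iterations. For a0 = 1, b0 = 2, tol = 10^(-4) this gives n = 14, which is exactly where the table above stops.

% One-line check of the expected iteration count for Example 2 (ours, not from the slides)
a0 = 1; b0 = 2; tol = 1e-4;
nmin = ceil(log2((b0 - a0)/tol))   % displays nmin = 14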