Lecture 18 Linear Approximation & Taylor's Formula


Lecture 18: Linear approximation & Taylor's formula
Layali Al-Mashhadani (e-mail: [email protected])
MAA149, Spring 2019, Mälardalen University, 14 March 2019

Contents of the lecture
1. Linear Approximations
   1.1 Error Analysis
2. Taylor and Maclaurin Polynomials

1. Linear Approximations

Linear approximation (or tangent line approximation) estimates the value of a function near a known point $x = a$ on its graph by substituting the tangent line for the curve. Such a simplification is usually good only on a small interval around the given point, that is, when $x$ is close to $a$.

Suppose we are given a function $f(x)$, and suppose $(x_0, y_0) = (a, f(a))$ is a known point on the graph of $f(x)$. The line tangent to the graph $y = f(x)$ at $a$ has slope $f'(a)$, so the point-slope form of the equation of the tangent line is

  $y = f(a) + f'(a)(x - a)$.

That is, when $x$ is near $a$,

  $f(x) \approx f(a) + f'(a)(x - a)$.

The expression $L(x) = f(a) + f'(a)(x - a)$ is called the linearization of $f(x)$ at $a$.

In the picture, the tangent line to $y = x^{1/3}$ at $x = 8$ is viewed as an approximation to the original curve; $y = L(x)$ is the equation of the tangent line, and the error is the difference $L(x) - x^{1/3}$ between the approximate and true values:

  error $E(x)$ = approximate value $-$ true value.

Theorem. Suppose that $y = f(x)$ is differentiable at $x = a$. Then the tangent line at $x = a$ has equation

  $y = f(a) + f'(a)(x - a)$.

We call this the linear approximation, or linearization, of $y = f(x)$ at the point $(a, f(a))$, and write

  $f(x) \approx L(x) = f(a) + f'(a)(x - a)$.

Example 1. Consider the cube root function above, $y = f(x) = \sqrt[3]{x}$, and approximate near $x = 8$. We have

  $f(8) = 2$, and $f'(x) = \tfrac{1}{3}x^{-2/3}$, so $f'(8) = \tfrac{1}{12}$.

The linear approximation is $L(x) = f(8) + f'(8)(x - 8) = 2 + \tfrac{1}{12}(x - 8)$. This can be used, for example, to approximate cube roots without using a calculator, e.g.
  $\sqrt[3]{8.1} \approx 2 + \tfrac{0.1}{12} \approx 2 + 0.00833 = 2.00833$.

Example 2. Find an approximation to $\sqrt{15}$.

Solution. Linearizing $f(x) = \sqrt{x}$ about $a = 9$ gives $L_9(x) = 3 + \tfrac{1}{6}(x - 9)$, so $L_9(15) = 4$; linearizing about $a = 16$ gives $L_{16}(x) = 4 + \tfrac{1}{8}(x - 16)$, so $L_{16}(15) = 3.875$. Since 15 is much closer to 16 than to 9, we expect the approximation 3.875 to be the superior estimate. Using a calculator, $\sqrt{15} \approx 3.873$.

Errors. The error in an approximation $f(x) \approx L_a(x)$ is the difference $E_a(x) = L_a(x) - f(x)$. In the above example, the errors using the two approximations are

  $E_9(15) = 4 - 3.873 = 0.127$ and $E_{16}(15) = 3.875 - 3.873 = 0.002$.

1.1 Error Analysis

For any approximation,

  error = true value $-$ approximate value.

With this (opposite) sign convention, the error in approximating $f(x)$ by its linearization $L(x)$ about $x = a$ is denoted $E_1(x)$:

  $E_1(x) = f(x) - L(x) = f(x) - f(a) - f'(a)(x - a)$.

It is the vertical distance at $x$ between the graph of $f$ and the tangent line to that graph at $x = a$, as shown in the figure.

Theorem (an error formula for linearization). If $f''(t)$ exists for all $t$ in an interval containing $a$ and $x$, then there exists some point $X$ between $a$ and $x$ such that the error $E_1(x) = f(x) - L(x)$ in the linear approximation $f(x) \approx L(x) = f(a) + f'(a)(x - a)$ satisfies

  $E_1(x) = \dfrac{f''(X)}{2}(x - a)^2$.

Three corollaries, shown in the accompanying figures, are consequences of this theorem.

Example 3.
i. Use the linearization of $\sqrt{x}$ about $x = 25$ to find an approximate value for $\sqrt{26}$.
ii. Determine the sign and estimate the size of the error in the approximation of $\sqrt{26}$.

Solution. (Worked in the accompanying figures.)

2. Taylor's formula

The tangent line approximation of $f(x)$ for $x$ near $a$ is called the first-degree Taylor polynomial of $f(x)$:

  $P_1(x) = L(x) = f(a) + f'(a)(x - a)$.

We can obtain even better approximations to $f(x)$ by using quadratic or higher-degree polynomials at $x = a$. If $f(x)$ is twice differentiable near $x = a$, the quadratic approximation is

  $P_2(x) = f(a) + f'(a)(x - a) + \dfrac{f''(a)}{2}(x - a)^2$.

The Taylor polynomial of third degree is

  $P_3(x) = f(a) + f'(a)(x - a) + \dfrac{f''(a)}{2!}(x - a)^2 + \dfrac{f'''(a)}{3!}(x - a)^3$.

In general, suppose $f(x)$ is a function with derivatives of all orders throughout some interval containing $a$ as an interior point.
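As a quick numerical sanity check of $P_1$ and $P_2$ (a Python sketch added here for illustration, not part of the original notes), take $f(x) = \sqrt{x}$ about $a = 25$; the values $f(25) = 5$, $f'(25) = 1/10$ and $f''(25) = -1/500$ are computed by hand:

```python
import math

def p1(f_a, df_a, a, x):
    """First-degree Taylor polynomial (the tangent line): f(a) + f'(a)(x - a)."""
    return f_a + df_a * (x - a)

def p2(f_a, df_a, d2f_a, a, x):
    """Second-degree Taylor polynomial: p1 plus the f''(a)/2 * (x - a)^2 term."""
    return p1(f_a, df_a, a, x) + d2f_a / 2 * (x - a) ** 2

# f(x) = sqrt(x) about a = 25: f(25) = 5, f'(25) = 1/10, f''(25) = -1/500
lin = p1(5, 1 / 10, 25, 26)             # 5.1
quad = p2(5, 1 / 10, -1 / 500, 25, 26)  # 5.099
exact = math.sqrt(26)                   # 5.0990195...

print(lin - exact)   # linear error, about 9.8e-4 (within the |f''|/2 bound of 1e-3)
print(quad - exact)  # quadratic error, about -2.0e-5
```

Adding the quadratic term shrinks the error by a factor of about fifty, consistent with the error formula above.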
Then the $n$th-degree Taylor polynomial of $f$ at $x = a$ is given by

  $P_n(x) = f(a) + \dfrac{f'(a)}{1!}(x - a) + \dfrac{f''(a)}{2!}(x - a)^2 + \dfrac{f'''(a)}{3!}(x - a)^3 + \cdots + \dfrac{f^{(n)}(a)}{n!}(x - a)^n$.

The error $E_n(x) = f(x) - P_n(x)$ in the approximation $f(x) \approx P_n(x)$ is called the Lagrange remainder, and it yields an error bound:

  $E_n(x) = \dfrac{f^{(n+1)}(X)}{(n+1)!}(x - a)^{n+1}$,

where $X$ is some number between $a$ and $x$.

Example 4. Find the following Taylor polynomials:
(a) $P_2(x)$ for $f(x) = \sqrt{x}$ about $x = 25$; use it to approximate $\sqrt{26}$, estimate the size of the error, and specify an interval that you can be sure contains $\sqrt{26}$.
(b) $P_3(x)$ for $f(x) = \ln x$ about $x = e$.

Solution.

(a) $P_2(x) = f(25) + \dfrac{f'(25)}{1!}(x - 25) + \dfrac{f''(25)}{2!}(x - 25)^2$, where

  $f(25) = \sqrt{25} = 5$,
  $f'(x) = \dfrac{1}{2\sqrt{x}}$, so $f'(25) = \dfrac{1}{2\sqrt{25}} = \dfrac{1}{10}$,
  $f''(x) = -\dfrac{1}{4}x^{-3/2}$, so $f''(25) = -\dfrac{1}{4} \cdot \dfrac{1}{125} = -\dfrac{1}{500}$.

So

  $P_2(x) = 5 + \dfrac{1}{10}(x - 25) - \dfrac{1}{1000}(x - 25)^2$.

The required approximation is then

  $\sqrt{26} = f(26) \approx P_2(26) = 5 + \dfrac{1}{10}(1) - \dfrac{1}{1000}(1)^2 = 5.099$.

The error in the Taylor approximation $f(26) \approx P_2(26)$ is

  $E_2(26) = \dfrac{f'''(X)}{3!}(26 - 25)^3$ for some $25 < X < 26$.

Since $f'''(x) = \dfrac{3}{8}x^{-5/2}$ is positive and decreasing, $0 < f'''(X) < f'''(25) = \dfrac{3}{8 \times 3125} = \dfrac{3}{25000}$, so

  $0 < E_2(26) < \dfrac{1}{3!} \cdot \dfrac{3}{25000} = \dfrac{1}{50000} = 0.00002$.

In particular the error is positive, so $\sqrt{26} > 5.099$, and an interval that is sure to contain $\sqrt{26}$ is

  $(5.099 - 0.00002,\ 5.099 + 0.00002) = (5.09898,\ 5.09902)$.

(b) $P_3(x) = f(e) + \dfrac{f'(e)}{1!}(x - e) + \dfrac{f''(e)}{2!}(x - e)^2 + \dfrac{f'''(e)}{3!}(x - e)^3$, where

  $f(e) = \ln e = 1$,
  $f'(x) = \dfrac{1}{x}$, so $f'(e) = \dfrac{1}{e}$,
  $f''(x) = -\dfrac{1}{x^2}$, so $f''(e) = -\dfrac{1}{e^2}$,
  $f'''(x) = \dfrac{2}{x^3}$, so $f'''(e) = \dfrac{2}{e^3}$.

So

  $P_3(x) = 1 + \dfrac{1}{e}(x - e) - \dfrac{1}{2e^2}(x - e)^2 + \dfrac{1}{3e^3}(x - e)^3$.

2.
Taylor and Maclaurin Polynomials

Some common Maclaurin series with errors in big-O form. As $x \to 0$:

  $\dfrac{1}{1 - x} = 1 + x + x^2 + x^3 + \cdots + x^n + O(x^{n+1})$

  $\dfrac{1}{1 + x} = \dfrac{1}{1 - (-x)} = 1 - x + x^2 - x^3 + \cdots + (-1)^n x^n + O(x^{n+1})$

  $\ln(1 + x) = x - \dfrac{x^2}{2} + \dfrac{x^3}{3} - \cdots + (-1)^{n-1}\dfrac{x^n}{n} + O(x^{n+1})$

  $\ln(1 - x) = -x - \dfrac{x^2}{2} - \dfrac{x^3}{3} - \cdots - \dfrac{x^n}{n} + O(x^{n+1})$ (replace $x$ by $-x$ in the previous series)

  $e^x = 1 + x + \dfrac{x^2}{2!} + \dfrac{x^3}{3!} + \cdots + \dfrac{x^n}{n!} + O(x^{n+1})$

  $e^{ax} = e^{a(x-1+1)} = e^a\,e^{a(x-1)} = e^a\Big[1 + a(x-1) + \dfrac{(a(x-1))^2}{2!} + \dfrac{(a(x-1))^3}{3!} + \cdots + \dfrac{(a(x-1))^n}{n!}\Big] + O\big((x-1)^{n+1}\big)$ as $x \to 1$ (this one is an expansion about $x = 1$, not $0$)

  $\sin x = x - \dfrac{x^3}{3!} + \dfrac{x^5}{5!} - \cdots + (-1)^n \dfrac{x^{2n+1}}{(2n+1)!} + O(x^{2n+3})$

  $\cos x = 1 - \dfrac{x^2}{2!} + \dfrac{x^4}{4!} - \cdots + (-1)^n \dfrac{x^{2n}}{(2n)!} + O(x^{2n+2})$

  $\tan^{-1} x = x - \dfrac{x^3}{3} + \dfrac{x^5}{5} - \cdots + (-1)^n \dfrac{x^{2n+1}}{2n+1} + O(x^{2n+3})$

Big-O Notation

Suppose you have a function $f(x)$ with $f(a) = 0$ and you want to describe how quickly the function goes to zero near $a$. Ideally you would find a simple function $g$ (for example $g(x) = (x - a)^n$) which also vanishes at $a$ and which is almost equal to $f$ near $a$. Big-O notation expresses something like this, but states only that $f$ goes to zero at least as fast as $g$.

Definition. We write $f(x) = O(u(x))$ as $x \to a$ (read "$f$ is big-O of $u(x)$ as $x$ approaches $a$") provided that

  $|f(x)| \le K|u(x)|$

holds for some constant $K$ on some open interval containing $x = a$. Similarly, $f(x) = g(x) + O(u(x))$ as $x \to a$ if $f(x) - g(x) = O(u(x))$ as $x \to a$, that is, if $|f(x) - g(x)| \le K|u(x)|$ near $x = a$.

For example, $\sin x = O(x)$ as $x \to 0$, because $|\sin x| \le |x|$ near 0.

Another example: $\sin x - x = O(x^3)$ as $x \to 0$, since $P_3(x)$ for $\sin x$ is $x - \dfrac{x^3}{3!}$. Thus we write $\sin x = x + O(x^3)$ as $x \to 0$. Using $P_5(x)$ we get $\sin x = x - \dfrac{x^3}{3!} + O(x^5)$ as $x \to 0$.
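The definition can be illustrated numerically: for $\sin x - x = O(x^3)$, the ratio $|\sin x - x| / |x|^3$ must stay below some constant $K$ near 0. A small sketch (Python, added for illustration; the choice $K = 1/6$ is suggested by the first omitted term $x^3/3!$):

```python
import math

# sin x - x = O(x^3) as x -> 0: the ratio |sin x - x| / |x|^3 stays bounded.
K = 1 / 6  # candidate constant, suggested by the omitted term x^3/3!
for x in [0.5, 0.1, 0.01, 0.001]:
    ratio = abs(math.sin(x) - x) / abs(x) ** 3
    print(x, ratio)    # ratios approach 1/6 ~ 0.1667 from below
    assert ratio <= K  # the big-O bound |f(x)| <= K|u(x)| holds here
```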
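The tabulated Maclaurin expansions can likewise be checked against library functions. The sketch below (Python, an added illustration, not from the notes) sums the displayed terms for $e^x$, $\sin x$ and $\ln(1 + x)$ and verifies that each truncation error is no larger than the power of $x$ in its big-O term:

```python
import math

def exp_series(x, n):
    """Partial sum of e^x: x^k/k! for k = 0..n; error O(x^(n+1))."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

def sin_series(x, n):
    """Partial sum of sin x through (-1)^n x^(2n+1)/(2n+1)!; error O(x^(2n+3))."""
    return sum((-1)**k * x**(2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n + 1))

def log1p_series(x, n):
    """Partial sum of ln(1+x) through (-1)^(n-1) x^n/n; error O(x^(n+1))."""
    return sum((-1)**(k - 1) * x**k / k for k in range(1, n + 1))

x = 0.1
assert abs(math.exp(x) - exp_series(x, 3)) < x**4       # error is O(x^4)
assert abs(math.sin(x) - sin_series(x, 1)) < x**5       # error is O(x^5)
assert abs(math.log(1 + x) - log1p_series(x, 4)) < x**5  # error is O(x^5)
```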