Tangent Planes, Linear Approximations and Differentiability

Jim Lambers
MAT 280 Spring Semester 2009-10
Lecture 5 Notes

These notes correspond to Section 11.4 in Stewart and Section 2.3 in Marsden and Tromba.

Now that we have learned how to compute partial derivatives of functions of several independent variables, in order to measure their instantaneous rates of change with respect to these variables, we will discuss another essential application of derivatives: the approximation of functions by linear functions. Linear functions are the simplest to work with, and for this reason there are many instances in which a function is replaced by a linear approximation in the course of solving a problem, such as a differential equation.

Tangent Planes and Linear Approximations

In single-variable calculus, we learned that the graph of a function $f(x)$ can be approximated near a point $x_0$ by its tangent line, which has the equation
$$y = f(x_0) + f'(x_0)(x - x_0).$$
For this reason, the function $L_f(x) = f(x_0) + f'(x_0)(x - x_0)$ is also referred to as the linearization, or linear approximation, of $f(x)$ at $x_0$.

Now, suppose that we have a function of two variables, $f : D \subseteq \mathbb{R}^2 \to \mathbb{R}$, and a point $(x_0, y_0) \in D$. Furthermore, suppose that the first partial derivatives of $f$, $f_x$ and $f_y$, exist at $(x_0, y_0)$. Because the graph of this function is a surface, it follows that a linear function that approximates $f$ near $(x_0, y_0)$ would have a graph that is a plane.

Just as the tangent line of $f(x)$ at $x_0$ passes through the point $(x_0, f(x_0))$ and has slope equal to $f'(x_0)$, the instantaneous rate of change of $f(x)$ with respect to $x$ at $x_0$, a plane that best approximates $f(x, y)$ at $(x_0, y_0)$ must pass through the point $(x_0, y_0, f(x_0, y_0))$, and its slopes in the $x$- and $y$-directions must equal the values of $f_x(x_0, y_0)$ and $f_y(x_0, y_0)$, respectively. Since a general linear function of two variables can be described by the formula
$$L_f(x, y) = A(x - x_0) + B(y - y_0) + C,$$
so that $L_f(x_0, y_0) = C$, and a simple differentiation yields
$$\frac{\partial L_f}{\partial x} = A, \qquad \frac{\partial L_f}{\partial y} = B,$$
we conclude that the linear function that best approximates $f(x, y)$ near $(x_0, y_0)$ is the linear approximation
$$L_f(x, y) = f(x_0, y_0) + \frac{\partial f}{\partial x}(x_0, y_0)(x - x_0) + \frac{\partial f}{\partial y}(x_0, y_0)(y - y_0).$$
Furthermore, the graph of this function is called the tangent plane of $f(x, y)$ at $(x_0, y_0)$. Its equation is
$$z - z_0 = \frac{\partial f}{\partial x}(x_0, y_0)(x - x_0) + \frac{\partial f}{\partial y}(x_0, y_0)(y - y_0).$$

Example. Let $f(x, y) = 2x^2 y + 3y^2$, and let $(x_0, y_0) = (1, 1)$. Then $f(x_0, y_0) = 5$, and the first partial derivatives at $(x_0, y_0)$ are
$$f_x(1, 1) = 4xy\big|_{x=1,\,y=1} = 4, \qquad f_y(1, 1) = 2x^2 + 6y\big|_{x=1,\,y=1} = 8.$$
It follows that the tangent plane at $(1, 1)$ has the equation
$$z - 5 = 4(x - 1) + 8(y - 1),$$
and the linearization of $f$ at $(1, 1)$ is
$$L_f(x, y) = 5 + 4(x - 1) + 8(y - 1).$$
Let $(x, y) = (1.1, 1.1)$. Then $f(x, y) = 6.292$, while $L_f(x, y) = 6.2$, for an error of $6.292 - 6.2 = 0.092$. However, if $(x, y) = (1.01, 1.01)$, then $f(x, y) = 5.120902$, while $L_f(x, y) = 5.12$, for an error of $5.120902 - 5.12 = 0.000902$. That is, moving 10 times as close to $(1, 1)$ decreased the error by a factor of over 100. □
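The factor-of-100 error reduction is easy to verify numerically. The following short Python sketch (an illustration, not part of the original notes) compares $f$ with its linearization at $(1, 1)$:

```python
# Numerical check of the example above: f(x, y) = 2x^2 y + 3y^2,
# linearized at (x0, y0) = (1, 1), where f = 5, f_x = 4, f_y = 8.

def f(x, y):
    return 2 * x**2 * y + 3 * y**2

def L_f(x, y):
    # Linearization of f at (1, 1): L_f(x, y) = 5 + 4(x - 1) + 8(y - 1)
    return 5 + 4 * (x - 1) + 8 * (y - 1)

for x, y in [(1.1, 1.1), (1.01, 1.01)]:
    err = f(x, y) - L_f(x, y)
    print(f"(x, y) = ({x}, {y}): f = {f(x, y):.6f}, L_f = {L_f(x, y):.6f}, error = {err:.6f}")

# Prints errors 0.092 and 0.000902: shrinking the distance to (1, 1)
# by a factor of 10 shrinks the error by a factor of about 100,
# reflecting the quadratic decay of the linearization error.
```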
Another useful application of a linear approximation is to estimate the error in the value of a function, given estimates of the error in its inputs. Given a function $z = f(x, y)$ and its linearization $L_f(x, y)$ around a point $(x_0, y_0)$, if $x_0$ and $y_0$ are measured values and $dx = x - x_0$ and $dy = y - y_0$ are regarded as errors in $x_0$ and $y_0$, then the error in $z$ can be estimated by computing
$$\begin{aligned}
dz = z - z_0 &= L_f(x, y) - f(x_0, y_0) \\
&= [f(x_0, y_0) + f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0)] - f(x_0, y_0) \\
&= f_x(x_0, y_0)\,dx + f_y(x_0, y_0)\,dy.
\end{aligned}$$
The variables $dx$ and $dy$ are called differentials, and $dz$ is called the total differential, as it depends on the values of $dx$ and $dy$. The total differential $dz$ is only an estimate of the error in $z$; the actual error is given by $\Delta z = f(x, y) - f(x_0, y_0)$, when the actual errors in $x$ and $y$, $\Delta x = x - x_0$ and $\Delta y = y - y_0$, are known. Since this is rarely the case in practice, one instead estimates the error in $z$ from estimates $dx$ and $dy$ of the errors in $x$ and $y$.

Example. Recall that the volume of a cylinder with radius $r$ and height $h$ is $V = \pi r^2 h$. Suppose that $r = 5$ cm and $h = 10$ cm. Then the volume is $V = 250\pi$ cm³. If the measurement error in $r$ and $h$ is at most 0.1 cm, then, to estimate the error in the computed volume, we first compute
$$V_r = 2\pi r h = 100\pi, \qquad V_h = \pi r^2 = 25\pi.$$
It follows that the error in $V$ is approximately
$$dV = V_r\,dr + V_h\,dh = 0.1(100\pi + 25\pi) = 12.5\pi \text{ cm}^3.$$
If we specify $\Delta r = 0.1$ and $\Delta h = 0.1$, and compute the actual volume using radius $r + \Delta r = 5.1$ and height $h + \Delta h = 10.1$, we obtain
$$V + \Delta V = \pi(5.1)^2(10.1) = 262.701\pi \text{ cm}^3,$$
which yields the actual error
$$\Delta V = 262.701\pi - 250\pi = 12.701\pi \text{ cm}^3.$$
Therefore, the estimate of the error, $dV$, is quite accurate. □
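To see how closely the total differential tracks the actual error, here is a minimal Python sketch (not from the original notes) reproducing the cylinder computation above:

```python
import math

# Error estimate for the cylinder volume V = pi r^2 h at r = 5, h = 10,
# with measurement errors dr = dh = 0.1, as in the example above.

def V(r, h):
    return math.pi * r**2 * h

r, h, dr, dh = 5.0, 10.0, 0.1, 0.1

# Partial derivatives: V_r = 2 pi r h, V_h = pi r^2
V_r = 2 * math.pi * r * h   # 100 pi
V_h = math.pi * r**2        # 25 pi

dV = V_r * dr + V_h * dh              # total differential: 12.5 pi ~ 39.27 cm^3
actual = V(r + dr, h + dh) - V(r, h)  # actual error: 12.701 pi ~ 39.90 cm^3

print(f"estimated error dV = {dV:.3f} cm^3, actual error = {actual:.3f} cm^3")
```

The two values agree to within about 1.6%, confirming that the differential is a good first-order estimate of the error.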
Functions of More than Two Variables

The concepts of a tangent plane and linear approximation generalize to more than two variables in a straightforward manner. Specifically, given $f : D \subseteq \mathbb{R}^n \to \mathbb{R}$ and $p_0 = (x_1^{(0)}, x_2^{(0)}, \ldots, x_n^{(0)}) \in D$, we define the tangent space of $f(x_1, x_2, \ldots, x_n)$ at $p_0$ to be the $n$-dimensional hyperplane in $\mathbb{R}^{n+1}$ whose points $(x_1, x_2, \ldots, x_n, y)$ satisfy the equation
$$y - y_0 = \frac{\partial f}{\partial x_1}(p_0)(x_1 - x_1^{(0)}) + \frac{\partial f}{\partial x_2}(p_0)(x_2 - x_2^{(0)}) + \cdots + \frac{\partial f}{\partial x_n}(p_0)(x_n - x_n^{(0)}),$$
where $y_0 = f(p_0)$. Similarly, the linearization of $f$ at $p_0$ is the function $L_f(x_1, x_2, \ldots, x_n)$ defined by
$$L_f(x_1, x_2, \ldots, x_n) = y_0 + \frac{\partial f}{\partial x_1}(p_0)(x_1 - x_1^{(0)}) + \frac{\partial f}{\partial x_2}(p_0)(x_2 - x_2^{(0)}) + \cdots + \frac{\partial f}{\partial x_n}(p_0)(x_n - x_n^{(0)}).$$

The Gradient Vector

It can be seen from the above definitions that writing formulas that involve the partial derivatives of functions of $n$ variables can be cumbersome. This can be addressed by expressing collections of partial derivatives of functions of several variables using vectors and matrices, especially for vector-valued functions of several variables.

By convention, a point $p_0 = (x_1^{(0)}, x_2^{(0)}, \ldots, x_n^{(0)})$, which can be identified with the position vector $\mathbf{p}_0 = \langle x_1^{(0)}, x_2^{(0)}, \ldots, x_n^{(0)} \rangle$, is considered to be a column vector
$$\mathbf{p}_0 = \begin{bmatrix} x_1^{(0)} \\ x_2^{(0)} \\ \vdots \\ x_n^{(0)} \end{bmatrix}.$$
Also, by convention, given a function of $n$ variables, $f : D \subseteq \mathbb{R}^n \to \mathbb{R}$, the collection of its partial derivatives with respect to all of its variables is written as a row vector
$$\nabla f(p_0) = \begin{bmatrix} \frac{\partial f}{\partial x_1}(p_0) & \frac{\partial f}{\partial x_2}(p_0) & \cdots & \frac{\partial f}{\partial x_n}(p_0) \end{bmatrix}.$$
This vector is called the gradient of $f$ at $p_0$. Viewing the partial derivatives of $f$ as a vector allows us to use vector operations to describe, much more concisely, the linearization of $f$. Specifically, the linearization of $f$ at $p_0$, evaluated at a point $p = (x_1, x_2, \ldots, x_n)$, can be written as
$$\begin{aligned}
L_f(p) &= f(p_0) + \frac{\partial f}{\partial x_1}(p_0)(x_1 - x_1^{(0)}) + \frac{\partial f}{\partial x_2}(p_0)(x_2 - x_2^{(0)}) + \cdots + \frac{\partial f}{\partial x_n}(p_0)(x_n - x_n^{(0)}) \\
&= f(p_0) + \sum_{i=1}^n \frac{\partial f}{\partial x_i}(p_0)(x_i - x_i^{(0)}) \\
&= f(p_0) + \nabla f(p_0) \cdot (p - p_0),
\end{aligned}$$
where $\nabla f(p_0) \cdot (p - p_0)$ is the dot product, also known as the inner product, of the vectors $\nabla f(p_0)$ and $p - p_0$. Recall that given two vectors $\mathbf{u} = \langle u_1, u_2, \ldots, u_n \rangle$ and $\mathbf{v} = \langle v_1, v_2, \ldots, v_n \rangle$, the dot product of $\mathbf{u}$ and $\mathbf{v}$, denoted by $\mathbf{u} \cdot \mathbf{v}$, is defined by
$$\mathbf{u} \cdot \mathbf{v} = \sum_{i=1}^n u_i v_i = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n = \|\mathbf{u}\| \|\mathbf{v}\| \cos\theta,$$
where $\theta$ is the angle between $\mathbf{u}$ and $\mathbf{v}$.

Example. Let $f : \mathbb{R}^3 \to \mathbb{R}$ be defined by
$$f(x, y, z) = 3x^2 y^3 z^4.$$
Then
$$\nabla f(x, y, z) = \begin{bmatrix} f_x & f_y & f_z \end{bmatrix} = \begin{bmatrix} 6xy^3z^4 & 9x^2y^2z^4 & 12x^2y^3z^3 \end{bmatrix}.$$
Let $(x_0, y_0, z_0) = (1, 2, -1)$. Then
$$\nabla f(x_0, y_0, z_0) = \nabla f(1, 2, -1) = \begin{bmatrix} f_x(1, 2, -1) & f_y(1, 2, -1) & f_z(1, 2, -1) \end{bmatrix} = \begin{bmatrix} 48 & 36 & -96 \end{bmatrix}.$$
It follows that the linearization of $f$ at $(x_0, y_0, z_0)$ is
$$\begin{aligned}
L_f(x, y, z) &= f(1, 2, -1) + \nabla f(1, 2, -1) \cdot \langle x - 1, y - 2, z + 1 \rangle \\
&= 24 + \langle 48, 36, -96 \rangle \cdot \langle x - 1, y - 2, z + 1 \rangle \\
&= 24 + 48(x - 1) + 36(y - 2) - 96(z + 1) \\
&= 48x + 36y - 96z - 192.
\end{aligned}$$
At the point $(1.1, 1.9, -1.1)$, we have $f(1.1, 1.9, -1.1) \approx 36.5$, while $L_f(1.1, 1.9, -1.1) = 34.8$.
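As a quick numerical check of this example, the following Python sketch (an illustration, not part of the notes) evaluates the gradient form of the linearization, $L_f(p) = f(p_0) + \nabla f(p_0) \cdot (p - p_0)$:

```python
# Sketch: linearization via the gradient for f(x, y, z) = 3 x^2 y^3 z^4
# at p0 = (1, 2, -1), checked against the example above.

def f(x, y, z):
    return 3 * x**2 * y**3 * z**4

def grad_f(x, y, z):
    # (f_x, f_y, f_z) = (6xy^3z^4, 9x^2y^2z^4, 12x^2y^3z^3)
    return (6 * x * y**3 * z**4, 9 * x**2 * y**2 * z**4, 12 * x**2 * y**3 * z**3)

p0 = (1.0, 2.0, -1.0)
p = (1.1, 1.9, -1.1)

g = grad_f(*p0)  # (48, 36, -96)

# Dot product grad f(p0) . (p - p0), then L_f(p) = f(p0) + that
dot = sum(gi * (pi - p0i) for gi, pi, p0i in zip(g, p, p0))
L = f(*p0) + dot  # 24 + 10.8 = 34.8

print(f"f(p) = {f(*p):.4f}, L_f(p) = {L:.4f}")  # f ~ 36.45, L_f = 34.8
```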