Tangent Planes and Linear Approximations

PARTIAL DERIVATIVES
15.4 Tangent Planes and Linear Approximations

In this section, we will learn how to approximate functions using tangent planes and linear functions.

TANGENT PLANES

Suppose a surface S has equation z = f(x, y), where f has continuous first partial derivatives, and let P(x0, y0, z0) be a point on S.

Equation 2. An equation of the tangent plane to the surface z = f(x, y) at the point P(x0, y0, z0) is:

z – z0 = fx(x0, y0)(x – x0) + fy(x0, y0)(y – y0)

Example 1. Find the tangent plane to the elliptic paraboloid z = 2x² + y² at the point (1, 1, 3).

Let f(x, y) = 2x² + y². Then

fx(x, y) = 4x    fy(x, y) = 2y
fx(1, 1) = 4     fy(1, 1) = 2

So, Equation 2 gives the equation of the tangent plane at (1, 1, 3) as:

z – 3 = 4(x – 1) + 2(y – 1)    or    z = 4x + 2y – 3

The figure shows the elliptic paraboloid and its tangent plane at (1, 1, 3) that we found in Example 1.

LINEAR APPROXIMATIONS

In Example 1, we found that an equation of the tangent plane to the graph of the function f(x, y) = 2x² + y² at the point (1, 1, 3) is z = 4x + 2y – 3. Thus, in view of the visual evidence in the previous two figures, the linear function of two variables

L(x, y) = 4x + 2y – 3

is a good approximation to f(x, y) when (x, y) is near (1, 1).

LINEARIZATION & LINEAR APPROXIMATION

The function L is called the linearization of f at (1, 1). The approximation

f(x, y) ≈ 4x + 2y – 3

is called the linear approximation or tangent plane approximation of f at (1, 1).

For instance, at the point (1.1, 0.95), the linear approximation gives:

f(1.1, 0.95) ≈ 4(1.1) + 2(0.95) – 3 = 3.3

This is quite close to the true value of f(1.1, 0.95) = 2(1.1)² + (0.95)² = 3.3225.

However, if we take a point farther away from (1, 1), such as (2, 3), we no longer get a good approximation. In fact, L(2, 3) = 11, whereas f(2, 3) = 17.

In general, we know from Equation 2 that an equation of the tangent plane to the graph of a function f of two variables at the point (a, b, f(a, b)) is:

z = f(a, b) + fx(a, b)(x – a) + fy(a, b)(y – b)

LINEARIZATION (Equation 3). The linear function whose graph is this tangent plane, namely

L(x, y) = f(a, b) + fx(a, b)(x – a) + fy(a, b)(y – b)

is called the linearization of f at (a, b).

LINEAR APPROXIMATION (Equation 4). The approximation

f(x, y) ≈ f(a, b) + fx(a, b)(x – a) + fy(a, b)(y – b)

is called the linear approximation or the tangent plane approximation of f at (a, b).

THEOREM 8. If the partial derivatives fx and fy exist near (a, b) and are continuous at (a, b), then f is differentiable at (a, b).

Example 2. Show that f(x, y) = xe^(xy) is differentiable at (1, 0) and find its linearization there. Then, use it to approximate f(1.1, –0.1).

The partial derivatives are:

fx(x, y) = e^(xy) + xye^(xy)    fy(x, y) = x²e^(xy)
fx(1, 0) = 1                    fy(1, 0) = 1

Both fx and fy are continuous functions, so f is differentiable by Theorem 8. The linearization is:

L(x, y) = f(1, 0) + fx(1, 0)(x – 1) + fy(1, 0)(y – 0)
        = 1 + 1(x – 1) + 1·y
        = x + y

The corresponding linear approximation is:

xe^(xy) ≈ x + y

So,

f(1.1, –0.1) ≈ 1.1 – 0.1 = 1

Compare this with the actual value of f(1.1, –0.1) = 1.1e^(–0.11) ≈ 0.98542.
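As a quick numerical check on Example 2, here is a minimal Python sketch that builds the linearization from Equation 3 and compares it with the exact value at (1.1, –0.1). The helper names f, fx, fy, and linearization are illustrative choices, not part of the original notes.

```python
import math

# f(x, y) = x e^(xy) from Example 2, together with its partial derivatives.
def f(x, y):
    return x * math.exp(x * y)

def fx(x, y):
    return math.exp(x * y) + x * y * math.exp(x * y)

def fy(x, y):
    return x**2 * math.exp(x * y)

# Generic linearization at (a, b), as in Equation 3:
# L(x, y) = f(a, b) + fx(a, b)(x - a) + fy(a, b)(y - b)
def linearization(a, b):
    fa, fxa, fyb = f(a, b), fx(a, b), fy(a, b)
    return lambda x, y: fa + fxa * (x - a) + fyb * (y - b)

L = linearization(1.0, 0.0)   # here L(x, y) = x + y
print(L(1.1, -0.1))           # ≈ 1.0      (the linear approximation)
print(f(1.1, -0.1))           # ≈ 0.98542  (the actual value, 1.1 e^(-0.11))
```

The same linearization helper reproduces Example 1 as well: linearizing f(x, y) = 2x² + y² at (1, 1) gives L(x, y) = 4x + 2y – 3.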
DIFFERENTIALS

For a differentiable function of one variable, y = f(x), we define the differential dx to be an independent variable. That is, dx can be given the value of any real number.

Equation 9. Then, the differential of y is defined as:

dy = f'(x) dx

(See Section 3.10.) The figure shows the relationship between the increment ∆y and the differential dy: ∆y represents the change in height of the curve y = f(x), and dy represents the change in height of the tangent line when x changes by an amount dx = ∆x.

For a differentiable function of two variables, z = f(x, y), we define the differentials dx and dy to be independent variables. That is, they can be given any values.

TOTAL DIFFERENTIAL (Equation 10). Then the differential dz, also called the total differential, is defined by:

dz = fx(x, y) dx + fy(x, y) dy = (∂z/∂x) dx + (∂z/∂y) dy

Compare with Equation 9. Sometimes, the notation df is used in place of dz.

If we take dx = ∆x = x – a and dy = ∆y = y – b in Equation 10, then the differential of z is:

dz = fx(a, b)(x – a) + fy(a, b)(y – b)

So, in the notation of differentials, the linear approximation in Equation 4 can be written as:

f(x, y) ≈ f(a, b) + dz

The figure is the three-dimensional counterpart of the previous figure. It shows the geometric interpretation of the differential dz and the increment ∆z: dz is the change in height of the tangent plane, whereas ∆z represents the change in height of the surface z = f(x, y) when (x, y) changes from (a, b) to (a + ∆x, b + ∆y).

Example 4.
a. If z = f(x, y) = x² + 3xy – y², find the differential dz.
b. If x changes from 2 to 2.05 and y changes from 3 to 2.96, compare ∆z and dz.

a. Definition 10 gives:

dz = (∂z/∂x) dx + (∂z/∂y) dy = (2x + 3y) dx + (3x – 2y) dy

b. Putting x = 2, dx = ∆x = 0.05, y = 3, dy = ∆y = –0.04, we get:

dz = [2(2) + 3(3)](0.05) + [3(2) – 2(3)](–0.04) = 0.65

The increment of z is:

∆z = f(2.05, 2.96) – f(2, 3)
   = [(2.05)² + 3(2.05)(2.96) – (2.96)²] – [2² + 3(2)(3) – 3²]
   = 0.6449

Notice that ∆z ≈ dz, but dz is easier to compute. In Example 4, dz is close to ∆z because the tangent plane is a good approximation to the surface z = x² + 3xy – y² near (2, 3, 13).

Example 5. The base radius and height of a right circular cone are measured as 10 cm and 25 cm, respectively, with a possible error in measurement of as much as 0.1 cm in each. Use differentials to estimate the maximum error in the calculated volume of the cone.

The volume V of a cone with base radius r and height h is V = πr²h/3. So, the differential of V is:

dV = (∂V/∂r) dr + (∂V/∂h) dh = (2πrh/3) dr + (πr²/3) dh

Each error is at most 0.1 cm, so |∆r| ≤ 0.1 and |∆h| ≤ 0.1. To find the largest error in the volume, we take the largest error in the measurement of r and of h. Therefore, we take dr = 0.1 and dh = 0.1 along with r = 10, h = 25. That gives:

dV = (500π/3)(0.1) + (100π/3)(0.1) = 20π

So, the maximum error in the calculated volume is about 20π cm³ ≈ 63 cm³.
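Both Example 4 and Example 5 come down to the same comparison between a true increment and a differential, which is easy to check numerically. The short Python sketch below is a supplement to the notes: the helper compare and the lambda expressions are illustrative names, and the partial derivatives are simply typed in from the worked examples.

```python
import math

# Compare the true increment Δf with the differential df = fx*da + fy*db
# for a function of two variables.
def compare(f, fx, fy, a, b, da, db):
    increment = f(a + da, b + db) - f(a, b)
    differential = fx(a, b) * da + fy(a, b) * db
    return increment, differential

# Example 4: z = x^2 + 3xy - y^2, (x, y) changes from (2, 3) to (2.05, 2.96).
print(compare(lambda x, y: x**2 + 3*x*y - y**2,
              lambda x, y: 2*x + 3*y,
              lambda x, y: 3*x - 2*y,
              2.0, 3.0, 0.05, -0.04))      # ≈ (0.6449, 0.65)

# Example 5: V = pi r^2 h / 3 with worst-case errors dr = dh = 0.1.
print(compare(lambda r, h: math.pi * r**2 * h / 3,
              lambda r, h: 2 * math.pi * r * h / 3,
              lambda r, h: math.pi * r**2 / 3,
              10.0, 25.0, 0.1, 0.1))       # ≈ (63.30, 62.83), and 62.83 ≈ 20π
```

In both cases the differential is very close to the increment, which is exactly why it is a useful (and much easier to compute) error estimate.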
FUNCTIONS OF THREE OR MORE VARIABLES

Differentials can be defined in the same way for functions of three or more variables. For w = f(x, y, z), the differential dw is defined in terms of the differentials dx, dy, and dz of the independent variables by:

dw = (∂w/∂x) dx + (∂w/∂y) dy + (∂w/∂z) dz

Example 6. The dimensions of a rectangular box are measured to be 75 cm, 60 cm, and 40 cm, and each measurement is correct to within 0.2 cm. Use differentials to estimate the largest possible error when the volume of the box is calculated from these measurements.

If the dimensions of the box are x, y, and z, its volume is V = xyz. Thus,

dV = (∂V/∂x) dx + (∂V/∂y) dy + (∂V/∂z) dz = yz dx + xz dy + xy dz

We are given that |∆x| ≤ 0.2, |∆y| ≤ 0.2, and |∆z| ≤ 0.2. To find the largest error in the volume, we use dx = 0.2, dy = 0.2, dz = 0.2 together with x = 75, y = 60, z = 40. Thus,

∆V ≈ dV = (60)(40)(0.2) + (75)(40)(0.2) + (75)(60)(0.2) = 1980

So, an error of only 0.2 cm in measuring each dimension could lead to an error of as much as 1980 cm³ in the calculated volume. This may seem like a large error. However, it's only about 1% of the volume of the box.
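To see how good the differential estimate in Example 6 really is, the sketch below (again an illustrative supplement with made-up helper names) compares dV with the exact worst-case change in volume when each dimension is off by the full 0.2 cm, and with the total volume of the box.

```python
# V = xyz for the box in Example 6.
def volume(x, y, z):
    return x * y * z

# Total differential dV = yz dx + xz dy + xy dz.
def dV(x, y, z, dx, dy, dz):
    return y * z * dx + x * z * dy + x * y * dz

x, y, z = 75.0, 60.0, 40.0
err = 0.2

estimate = dV(x, y, z, err, err, err)                        # ≈ 1980 cm^3
worst = volume(x + err, y + err, z + err) - volume(x, y, z)  # ≈ 1987 cm^3
relative = estimate / volume(x, y, z)                        # ≈ 0.011, about 1%

print(estimate, worst, relative)
```

The differential slightly underestimates the exact worst-case error (1980 versus roughly 1987 cm³), but the two agree to within a fraction of a percent, and both are only about 1% of the 180,000 cm³ volume of the box.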