
Lectures on Numerical Analysis

Dennis Deturck and Herbert S. Wilf
Department of Mathematics
University of Pennsylvania
Philadelphia, PA 19104-6395

Copyright 2002, Dennis Deturck and Herbert Wilf
April 30, 2002

Contents

1  Differential and Difference Equations
   1.1  Introduction
   1.2  Linear equations with constant coefficients
   1.3  Difference equations
   1.4  Computing with difference equations
   1.5  Stability theory
   1.6  Stability theory of difference equations

2  The Numerical Solution of Differential Equations
   2.1  Euler's method
   2.2  Software notes
   2.3  Systems and equations of higher order
   2.4  How to document a program
   2.5  The midpoint and trapezoidal rules
   2.6  Comparison of the methods
   2.7  Predictor-corrector methods
   2.8  Truncation error and step size
   2.9  Controlling the step size
   2.10 Case study: Rocket to the moon
   2.11 Maple programs for the trapezoidal rule
        2.11.1 Example: Computing the cosine function
        2.11.2 Example: The moon rocket in one dimension
   2.12 The big leagues
   2.13 Lagrange and Adams formulas

3  Numerical linear algebra
   3.1  Vector spaces and linear mappings
   3.2  Linear systems
   3.3  Building blocks for the linear equation solver
   3.4  How big is zero?
   3.5  Operation count
   3.6  To unscramble the eggs
   3.7  Eigenvalues and eigenvectors of matrices
   3.8  The orthogonal matrices of Jacobi
   3.9  Convergence of the Jacobi method
   3.10 Corbató's idea and the implementation of the Jacobi algorithm
   3.11 Getting it together
   3.12 Remarks

Chapter 1

Differential and Difference Equations

1.1 Introduction

In this chapter we are going to study differential equations, with particular emphasis on how to solve them with computers. We assume that the reader has previously met differential equations, so we're going to review the most basic facts about them rather quickly.

A differential equation is an equation in an unknown function, say y(x), where the equation contains various derivatives of y and various known functions of x. The problem is to "find" the unknown function. The order of a differential equation is the order of the highest derivative that appears in it. Here's an easy equation of first order:

    y'(x) = 0.                                   (1.1.1)

The unknown function is y(x) = constant, so we have solved the given equation (1.1.1). The next one is a little harder:

    y'(x) = 2y(x).                               (1.1.2)

A solution will, no doubt, arrive after a bit of thought, namely y(x) = e^{2x}. But, if y(x) is a solution of (1.1.2), then so is 10y(x), or 49.6y(x), or in fact cy(x) for any constant c. Hence y = ce^{2x} is a solution of (1.1.2). Are there any other solutions? No there aren't, because if y is any function that satisfies (1.1.2) then

    (y e^{-2x})' = e^{-2x} (y' - 2y) = 0,        (1.1.3)

and so y e^{-2x} must be a constant, C.
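Readers who like to check such a claim by machine can already do so. Later chapters of these notes use Maple for computing; as a minimal sketch (assuming a standard Maple session, and not part of the original argument), the single command

    # ask Maple for the general solution of y' = 2y
    dsolve(diff(y(x), x) = 2*y(x), y(x));

returns y(x) = _C1*exp(2*x), which is exactly the one-parameter family y = ce^{2x} found above.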
In general, we can expect that if a differential equation is of the first order, then the most general solution will involve one arbitrary constant C. This is not always the case, since we can write down differential equations that have no solutions at all. We would have, for instance, a fairly hard time (why?) finding a real function y(x) for which

    (y')^2 = -y^2 - 2.                           (1.1.4)

There are certain special kinds of differential equations that can always be solved, and it's often important to be able to recognize them. Among these are the "first-order linear" equations

    y'(x) + a(x) y(x) = 0,                       (1.1.5)

where a(x) is a given function of x.

Before we describe the solution of these equations, let's discuss the word linear. To say that an equation is linear is to say that if we have any two solutions y1(x) and y2(x) of the equation, then c1 y1(x) + c2 y2(x) is also a solution of the equation, where c1 and c2 are any two constants (in other words, the set of solutions forms a vector space). Equation (1.1.1) is linear; in fact, y1(x) = 7 and y2(x) = 23 are both solutions, and so is 7c1 + 23c2. Less trivially, the equation

    y''(x) + y(x) = 0                            (1.1.6)

is linear. The linearity of (1.1.6) can be checked right from the equation itself, without knowing what the solutions are (do it!). For an example, though, we might note that y = sin x is a solution of (1.1.6), that y = cos x is another solution of (1.1.6), and finally, by linearity, that the function y = c1 sin x + c2 cos x is a solution, whatever the constants c1 and c2.

Now let's consider an instance of the first order linear equation (1.1.5):

    y'(x) + x y(x) = 0.                          (1.1.7)

So we're looking for a function whose derivative is -x times the function. Evidently y = e^{-x^2/2} will do, and the general solution is y(x) = c e^{-x^2/2}. If instead of (1.1.7) we had

    y'(x) + x^2 y(x) = 0,

then we would have found the general solution c e^{-x^3/3}.

As a last example, take

    y'(x) - (cos x) y(x) = 0.                    (1.1.8)

The right medicine is now y(x) = e^{sin x}. In the next paragraph we'll give the general rule of which the above are three examples. The reader might like to put down the book at this point and try to formulate the rule for solving (1.1.5) before going on to read about it.

Ready? What we need is to choose some antiderivative A(x) of a(x), and then the solution is y(x) = c e^{-A(x)}.

Since that was so easy, next let's put a more interesting right hand side into (1.1.5), by considering the equation

    y'(x) + a(x) y(x) = b(x)                     (1.1.9)

where now b(x) is also a given function of x (Is (1.1.9) a linear equation? Are you sure?). To solve (1.1.9), once again choose some antiderivative A(x) of a(x), and then note that we can rewrite (1.1.9) in the equivalent form

    e^{-A(x)} \frac{d}{dx} ( e^{A(x)} y(x) ) = b(x).

Now if we multiply through by e^{A(x)} we see that

    \frac{d}{dx} ( e^{A(x)} y(x) ) = b(x) e^{A(x)}          (1.1.10)

so, if we integrate both sides,

    e^{A(x)} y(x) = \int^x b(t) e^{A(t)} dt + const.,       (1.1.11)

where on the right side we mean any antiderivative of the function under the integral sign. Consequently

    y(x) = e^{-A(x)} ( \int^x b(t) e^{A(t)} dt + const. ).  (1.1.12)

As an example, consider the equation

    y' + y/x = x + 1.                            (1.1.13)

We find that A(x) = log x, and then from (1.1.12) we get

    y(x) = (1/x) ( \int^x (t + 1) t dt + C )
         = x^2/3 + x/2 + C/x.                    (1.1.14)
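The same kind of machine check works for this inhomogeneous example. As a sketch (again in Maple, assuming a standard session), we can hand (1.1.13) to dsolve directly:

    # define the equation y' + y/x = x + 1 and solve it symbolically
    ode := diff(y(x), x) + y(x)/x = x + 1;
    dsolve(ode, y(x));

Maple reports y(x) = x^2/3 + x/2 + _C1/x (up to how it chooses to display the terms), which is (1.1.14) with the constant C renamed _C1.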
We may be doing a disservice to the reader by beginning with this discussion of certain types of differential equations that can be solved analytically, because it would be erroneous to assume that most, or even many, such equations can be dealt with by these techniques. Indeed, the reason for the importance of the numerical methods that are the main subject of this chapter is precisely that most equations that arise in "real" problems are quite intractable by analytical means, so the computer is the only hope.

Despite the above disclaimer, in the next section we will study yet another important family of differential equations that can be handled analytically, namely linear equations with constant coefficients.

Exercises 1.1

1. Find the general solution of each of the following equations:
   (a) y' = 2 cos x
   (b) y' + (2/x) y = 0
   (c) y' + x y = 3
   (d) y' + (1/x) y = x + 5
   (e) 2 y y' = x + 1

2. Show that the equation (1.1.4) has no real solutions.

3. Go to your computer or terminal and familiarize yourself with the equipment, the operating system, and the specific software you will be using. Then write a program that will calculate and print the sum of the squares of the integers 1, 2, ..., 100. Run this program.

4. For each part of problem 1, find the solution for which y(1) = 1.

1.2 Linear equations with constant coefficients

One particularly pleasant, and important, type of linear differential equation is the variety with constant coefficients, such as

    y'' + 3y' + 2y = 0.                          (1.2.1)

It turns out that what we have to do to solve these equations is to try a solution of a certain form, and we will then find that all of the solutions indeed are of that form.

Let's see if the function y(x) = e^{αx} is a solution of (1.2.1). If we substitute in (1.2.1), and then cancel the common factor e^{αx}, we are left with the quadratic equation

    α^2 + 3α + 2 = 0

whose solutions are α = -2 and α = -1. Hence for those two values of α our trial function y(x) = e^{αx} is indeed a solution of (1.2.1). In other words, e^{-2x} is a solution, e^{-x} is a solution, and since the equation is linear,

    y(x) = c1 e^{-2x} + c2 e^{-x}                (1.2.2)

is also a solution, where c1 and c2 are arbitrary constants. Finally, (1.2.2) must be the most general solution since it has the "right" number of arbitrary constants, namely two.

Trying a solution in the form of an exponential is always the correct first step in solving linear equations with constant coefficients. Various complications can develop, however, as illustrated by the equation

    y'' + 4y' + 4y = 0.                          (1.2.3)

Again, let's see if there is a solution of the form y = e^{αx}.
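Before taking up that complication, note that the constant-coefficient example already solved can be checked by machine in the same way as the first-order examples above. As a sketch (Maple again, assuming a standard session), the command

    # general solution of y'' + 3y' + 2y = 0
    dsolve(diff(y(x), x, x) + 3*diff(y(x), x) + 2*y(x) = 0, y(x));

returns a combination of exp(-x) and exp(-2*x) with two arbitrary constants _C1 and _C2, the same two-parameter family as (1.2.2).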