
Princeton University—Physics 205
Notes on Approximations

Introduction

One aspect of the course that many students find particularly vexing is the need to make approximations. Unlike freshman physics, where problems are simple enough to be exactly soluble, many of the problems in Physics 205 can be solved only by making suitable approximations. This can make life seem much more complicated. After all, limiting oneself to what is exactly true has a certain simplicity. However, the use of approximations makes tractable a large number of difficult-to-solve or impossible-to-solve problems, so it's worth some effort to learn how to do it.

In the days before computers and pocket calculators, approximations were indispensable, since even something as simple as evaluating a sine function could be a really big deal (values had to be looked up in tables). Although this is not such a big factor anymore, the ability to extract simple analytical solutions is still generally a big plus, even if those solutions are only approximately correct. Analytical solutions are often very useful for providing insight—e.g., how things change if one or another variable changes.

Believe it or not, there actually is some rhyme and reason in the art of approximation making, which when mastered will increase your comfort level. In what follows I'll try to present some general guidelines.

When and Why to Make Approximations

There is no set rule, but generally approximations are useful when they render a difficult or impossible integral or differential equation soluble. A classic example is the simple pendulum. Starting with the Lagrangian for this system,

$$L = T - V = \frac{1}{2} m \ell^2 \dot{\theta}^2 + m g \ell \cos\theta,$$

Lagrange's equation yields

$$\frac{d}{dt}\frac{\partial L}{\partial \dot{\theta}} - \frac{\partial L}{\partial \theta} = m \ell^2 \ddot{\theta} + m g \ell \sin\theta = 0,$$

or

$$\ddot{\theta} + \frac{g}{\ell}\sin\theta = 0.$$

This problem can actually be solved exactly in terms of "elliptic integrals" (not a pretty sight! See M&T Section 4.4), but we immediately gain insight by making the approximation

$$\sin\theta \approx \theta, \qquad \theta \ll 1,$$

at which point we obtain

$$\ddot{\theta} + \frac{g}{\ell}\theta = 0,$$

which is just simple harmonic motion with $\omega = \sqrt{g/\ell}$.

One hallmark of a good approximation is whether or not it approaches the exact solution as its underlying assumption becomes more and more true. In the simple pendulum example, the exact result for the period is

$$\tau = 2\pi\sqrt{\frac{\ell}{g}}\left(1 + \frac{k^2}{4} + \frac{9}{64}k^4 + \dots\right),$$

where $k = \sin(\theta_0/2)$ is a measure of the pendulum's amplitude. For an amplitude of $\theta_0 = 45^\circ$, $k = 0.3827$ and the period of the pendulum is about 4% larger than what is predicted using the $\sin\theta \approx \theta$ approximation. This sort of accuracy is already close enough for many purposes. If the amplitude is small, say $1^\circ$, the exact and approximate results differ by less than 20 parts per million. At that level many other considerations come into play: air resistance, the mass of the pendulum string, etc.

Exercise: Show that the small-$\theta$ equation of motion above can be obtained by expanding the Lagrangian in a Taylor series around $\theta = \theta_0 = 0$. In other words, show that the small-$\theta$ approximation corresponds to the "standard trick" of approximating a potential energy function by a parabola.
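Numerical claims like the 4% and 20 parts-per-million figures are easy to verify. The following is a minimal Python sketch (the variable names and the loop over two amplitudes are mine, not from the notes) that evaluates the ratio of the exact period to the small-angle period using the series above; $g$ and $\ell$ drop out of the ratio.

    import math

    # Check the amplitude dependence of the pendulum period quoted above:
    # tau/tau0 = 1 + k^2/4 + 9*k^4/64 + ...,  with  k = sin(theta0/2),
    # where tau0 = 2*pi*sqrt(l/g) is the small-angle period.
    for theta0_deg in (45.0, 1.0):
        k = math.sin(math.radians(theta0_deg) / 2.0)
        ratio = 1.0 + k**2 / 4.0 + 9.0 * k**4 / 64.0
        print(f"theta0 = {theta0_deg:4.1f} deg:  tau/tau0 - 1 = {ratio - 1.0:.2e}")
    # Prints about 4e-02 (i.e. ~4%) for 45 degrees and about 1.9e-05 (~19 ppm)
    # for 1 degree, consistent with the numbers quoted in the text.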
Other Standard Approximations

In the example and exercise above, trigonometric functions (sine or cosine) were replaced by their Taylor series expansions. Another common practice is to approximate the sum of two (or more) terms raised to some power using the binomial expansion. This works best when one of the two terms is much smaller than the other. For example, if we have $z = \sqrt{x+y}$ and $x \gg y$, we can write

$$z = \sqrt{x}\left(1 + \frac{y}{x}\right)^{1/2} \approx \sqrt{x}\left(1 + \frac{1}{2}\frac{y}{x}\right).$$

If $x = 10y$, this approximation is accurate to about one part per thousand. In general we have

$$(1+x)^n \approx 1 + nx + \frac{n(n-1)}{2!}x^2 + \frac{n(n-1)(n-2)}{3!}x^3 + \dots$$

If $n$ is a positive integer, the series terminates after a finite number of terms ($n+1$ of them). Note that this is just a special case of the Taylor series and is applicable for positive and negative $n$ and for integer and non-integer $n$. This approximation is used frequently since it "linearizes" expressions, often making them much more amenable to algebraic solution.

One also frequently encounters exponentials and logarithms, which can be expanded as

$$e^x = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \dots \qquad \text{and} \qquad \log(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \dots$$

The list goes on and on. Indeed, Taylor's theorem tells us that essentially everything has its expansion—a good thing, since all computers can do is multiply, divide, add, and subtract. Don't be fooled by that "sin" key on your calculator! In the end what it needs to do is evaluate the appropriate series expansion. (Question: Most such series are infinite. How can the calculator complete the calculation in a finite time?)

Some jargon and some pitfalls

One often hears scientists (and others) say things like "We estimate to first order that ...". What is meant is that the estimate to be described is accurate up to corrections of order the square of the "expansion parameter," which is assumed to be small. For example, the first order estimate for $\sin\theta$ is $\theta$. Note that there is an implicit caveat in such statements, because $\theta$ might not be small, or the series being expanded might not converge nicely. One way to check the validity of such estimates is to evaluate the "higher order" terms. In the case of the sine we have

$$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \dots$$

Thus for $x = 0.1$ radians, the first order estimate for $\sin x$ is 0.1, the second order correction is zero, and the third order correction is $1.67 \times 10^{-4}$. Here is a case where the first order estimate is pretty good. (The fact that the next order correction is small is usually, although not always, a pretty good indicator that the first-order estimate is reliable.) If, however, $x = 1.0$, the first order estimate for $\sin x$ (i.e., 1.0) is subject to a 16% higher order correction, since the exact value is $\sin(1.0) = 0.841471$. Note that including the next term in the expansion gives a value (0.833333) that is accurate to within 1%. Once again we could have guessed this by noting that the next term in the series, $x^5/5! \approx 0.008333$, is pretty small.

One thing one needs to watch out for is the case where two leading order terms cancel. Consider, for example, the expression $y = 1000(x - \sin x)$ for $x \ll 1$. The second term in the parentheses is usually approximated by $x$, which is also quite obviously an excellent approximation to the first term. One might argue, therefore, that $y \approx 0$. In some cases this might be an acceptable answer, but in other cases it might not. For example, if $y$ represents the probability of a major nuclear disaster in a crowded urban area and $x = 0.1$, the exact value of 16.6% is unacceptably high. To get a better answer one needs to "go out to higher order"—i.e., to write

$$y \approx 1000\left(x - x + \frac{x^3}{3!}\right) = 1000\,\frac{x^3}{3!} = 0.166.$$

Even though the higher order answer is still off by a tad, the basic conclusion is not altered—i.e., the risk is unacceptably high. Whether it's 16% or 16.5% is a fine point.
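Here is a minimal Python sketch checking the numbers quoted in the last two paragraphs (the variable names are mine; the inputs $x = 1.0$ and $x = 0.1$ come from the text).

    import math

    # How good is the first-order estimate of sin(x) at x = 1.0?
    x = 1.0
    first = x                      # first-order estimate
    third = x - x**3 / 6.0         # keep the x^3/3! term as well
    exact = math.sin(x)
    print((exact - first) / first)   # ~ -0.16: the first-order value needs a ~16% correction
    print((exact - third) / third)   # ~ +0.0098: accurate to about 1% with the x^3 term kept

    # The cancellation pitfall: y = 1000*(x - sin x) for x = 0.1.
    x = 0.1
    print(1000.0 * (x - math.sin(x)))   # ~0.1666  (the "16.6%" quoted above)
    print(1000.0 * x**3 / 6.0)          # ~0.1667  (keeping x^3/3!; the naive answer y ~ 0 is useless)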
It is also important to carry out calculations to a consistent order. An example of this occurs in the solar oblateness problem in Homework 3. In that problem, one needs to expand the quantity

$$\frac{1}{\sqrt{r^2 + a^2 - 2ar\cos\phi}},$$

where $r \gg a$. If we define

$$x \equiv \left(\frac{a}{r}\right)^2 - \frac{2a}{r}\cos\phi,$$

it is tempting (but wrong!) to write

$$\frac{1}{\sqrt{r^2 + a^2 - 2ar\cos\phi}} = \frac{1}{r}(1+x)^{-1/2} \approx \frac{1}{r}\left(1 - \frac{x}{2}\right) = \frac{1}{r}\left[1 - \frac{1}{2}\left(\frac{a}{r}\right)^2 + \frac{a}{r}\cos\phi\right].$$

The reason this won't work is that the next term in the square root expansion ($3x^2/8$) also contains terms of order $(a/r)^2$. As it turns out, the leading order term ($\propto (a/r)\cos\phi$) cancels out during the integration that follows, making the higher order corrections quite important. Fortunately, this sort of cancellation isn't too common, but one does need to watch out.

Determining the "Order" of a Term

It is difficult to come up with a well defined recipe for doing this, but it may be helpful to consider an example in some detail. The example in question is the "Not-So-Simple Pendulum" (see Lecture I Notes, pg. 11). Lagrange's equations yield two coupled differential equations, the first of which is

$$m\ddot{x} + m\ell\ddot{\theta}\cos\theta - m\ell\dot{\theta}^2\sin\theta + kx = 0.$$

In the limit where $\theta$ and $x$ are small, this becomes

$$m\ddot{x} + m\ell\ddot{\theta} + kx \approx 0.$$

The first term is first order (one power of $x$ and/or $\theta$); the second term is also first order, once we use $\cos\theta \approx 1$. Note that this would not have been a good approximation in the potential energy term of the Lagrangian. The reason is that the constant part of $\cos\theta \approx 1 - \theta^2/2$ (i.e., the "1") has no effect at all when added to the potential energy, so there it is the $\theta^2/2$ piece that matters; whereas by the time we get to the differential equation, $\cos\theta$ appears as a multiplicative factor, and the $\theta^2/2$ correction can therefore be safely neglected (i.e., $\cos\theta$ replaced by 1) provided $\theta$ is small enough. The third term in the differential equation is third order: two powers of $\theta$ appear in the $\dot{\theta}^2$ factor, and a third appears in the $\sin\theta \approx \theta$ factor.
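As a crude sanity check on this sort of bookkeeping, one can plug representative magnitudes into the three mass-dependent terms and watch how each scales as the amplitude shrinks. The Python sketch below assumes an order-one frequency and placeholder parameter values of my own choosing; it is not a solution of the coupled equations, just a scaling illustration.

    import math

    # Let theta and x both have representative small size eps, with time derivatives
    # of the same order (frequency taken to be 1), and compare term magnitudes in
    #   m*x'' + m*l*theta''*cos(theta) - m*l*theta'^2*sin(theta) + k*x = 0.
    m, l, omega = 1.0, 1.0, 1.0          # placeholder values
    for eps in (0.1, 0.01):
        theta, thetadot, thetaddot = eps, omega * eps, omega**2 * eps
        xddot = omega**2 * eps
        t1 = m * xddot                               # first order in the small quantities
        t2 = m * l * thetaddot * math.cos(theta)     # first order (cos theta ~ 1)
        t3 = m * l * thetadot**2 * math.sin(theta)   # third order: theta-dot^2 times sin theta
        print(f"eps = {eps}:  {t1:.1e}  {t2:.1e}  {t3:.1e}")
    # Shrinking eps by a factor of 10 shrinks the first two terms by 10 but the last
    # by roughly 1000, which is why it is the term dropped in the small-angle equation.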