Composition Functionals in Calculus of Variations. Application to Products and Quotients


Preprint 11/2005
E.T.S. Ingenieros Industriales, Universidad de Castilla-La Mancha
13071 Ciudad Real, October 2005
http://matematicas.uclm.es/omeva/

Enrique Castillo*, Alberto Luceño* and Pablo Pedregal†

* Department of Applied Mathematics and Computational Sciences, University of Cantabria, Spain
† Department of Mathematics, University of Castilla-La Mancha, Spain

Abstract

This paper deals with the problem of the Calculus of Variations for a functional which is the composition of a certain scalar function $H$ with the integral of a vector-valued field $f$, i.e., of the form
\[
H\left( \int_{x_0}^{x_1} f(x, y(x), y'(x))\, dx \right),
\]
where $H : \mathbb{R}^n \to \mathbb{R}$ and $f : \mathbb{R}^3 \to \mathbb{R}^n$. The integral of $f$ is calculated here componentwise. We examine sufficient conditions for the existence of optimal solutions, and provide rules to find the Euler-Lagrange and the natural, transversality, Weierstrass-Erdmann and junction conditions for such a functional. Particular attention is paid to the cases of the product and the quotient, as we take these as model situations. Finally, the theory is illustrated with a slope stability problem and an example coming from Economics.

Key Words: coercivity, weak lower semicontinuity, Euler-Lagrange equations, product functionals, quotient functionals, transversality conditions, natural conditions, Weierstrass-Erdmann conditions, slope stability.

1 Introduction

Many problems in science and technology can be formulated as problems of the Calculus of Variations where a certain functional is minimized.
The Calculus of Variations has been traditionally concerned with functionals of the form
\[
I(x_0, x_1, y(x)) = \int_{x_0}^{x_1} f(x, y(x), y'(x))\, dx, \tag{1}
\]
where $(x_0, x_1, y(x))$ is assumed to belong to adequate spaces for the problem to be tractable. Necessary and sufficient conditions for the existence of a minimum are well known (see for example Euler (1744); Elsgolc (1962); Gelfand and Fomin (1963); Forray (1968); Bolza (1973)). In particular, first-order necessary conditions such as the following appear in the existing literature:

1. The Euler-Lagrange equation, to be satisfied by any extremal that minimizes (1):
\[
f_y - \frac{d}{dx} f_{y'} = 0. \tag{2}
\]

2. The natural condition, to be satisfied at an end point $x = x_0$ when $y(x_0)$ is free:
\[
f_{y'}(x_0, y_0(x_0), y_0'(x_0)) = 0. \tag{3}
\]

3. The transversality condition, to be satisfied when an end point $x = x_0$ must be on a given curve $y = \bar{y}(x)$:
\[
f(x_0, y_0(x_0), y_0'(x_0)) + \left( \bar{y}'(x_0) - y_0'(x_0) \right) f_{y'}(x_0, y_0(x_0), y_0'(x_0)) = 0. \tag{4}
\]

4. The Weierstrass-Erdmann conditions, to be satisfied at any angular (corner) point $x = c$:
\[
f_{y'}(x, y_0(x), y_0'(x)) \big|_{x=c-0} - f_{y'}(x, y_0(x), y_0'(x)) \big|_{x=c+0} = 0, \tag{5}
\]
\[
\left( f(x, y_0(x), y_0'(x)) - y_0' f_{y'}(x, y_0(x), y_0'(x)) \right) \big|_{x=c-0} - \left( f(x, y_0(x), y_0'(x)) - y_0' f_{y'}(x, y_0(x), y_0'(x)) \right) \big|_{x=c+0} = 0. \tag{6}
\]

5. The junction condition for unilateral constraints of the type $y \ge \phi(x)$, to be satisfied at the junction point $\bar{x}$:
\[
\left[ f(x, y_0(x), y_0'(x)) - f(x, y_0(x), \phi'(x)) - \left( \phi'(x) - y_0'(x) \right) f_{y'}(x, y_0(x), y_0'(x)) \right] \big|_{x=\bar{x}} = 0. \tag{7}
\]

In addition, other second-order necessary conditions, such as Legendre's or Jacobi's, and sufficient conditions, such as Legendre's or Hilbert's conditions, are well known for functionals (1). Moreover, sufficient conditions for the existence of optimal solutions involve the so-called "direct method" (Dacorogna (1989, 1992)), which amounts basically to two main ingredients: coercivity and convexity. More precisely, if the integrand $f(x, y, \lambda) : (x_0, x_1) \times \mathbb{R} \times \mathbb{R} \to$
$\mathbb{R}$ is continuous in $(y, \lambda)$, measurable in $x$, convex in $\lambda$ (for every fixed pair $(x, y)$) and satisfies the coercivity condition
\[
\lim_{|\lambda| \to \infty} \frac{f(x, y, \lambda)}{|\lambda|} = +\infty
\]
uniformly in $(x, y)$, then there are global optimal solutions for the minimization problem (1). This is classical and has been extended to the much more complex vector situation (Dacorogna (1989)).

There are, however, interesting problems in which the functional to be minimized is not of the form (1). In this paper we deal with general (non-classical) functionals of the form
\[
H(x_0, x_1, y(x)) = H\left( \int_{x_0}^{x_1} f(x, y(x), y'(x))\, dx \right), \tag{8}
\]
where $f$ has $n$ components $f = (f_1, \ldots, f_n)$ and $H$ has $n$ independent variables. In particular, two interesting cases for $n = 2$ are the product functional
\[
P(x_0, x_1, y(x)) = \left( \int_{x_0}^{x_1} f_1(x, y(x), y'(x))\, dx \right) \left( \int_{x_0}^{x_1} f_2(x, y(x), y'(x))\, dx \right), \tag{9}
\]
and the ratio or quotient functional
\[
Q(x_0, x_1, y(x)) = \frac{\displaystyle \int_{x_0}^{x_1} f_1(x, y(x), y'(x))\, dx}{\displaystyle \int_{x_0}^{x_1} f_2(x, y(x), y'(x))\, dx}. \tag{10}
\]
Functionals of the form (8) were dealt with by Euler (1744). However, it is really surprising that the scientific community seems to be unaware of this work. In fact, only very few authors, such as Petrov (1968) and Goldstine (1980), cite this important work of Euler's, and they do so as if it were a secondary contribution.

Functionals of the form (10) have appeared in the past in relation to Soil Mechanics problems (see for example Garber (1973), Revilla and Castillo (1977) and Luceño (1979)). The product functional is a very particular case of a non-local cost functional, as it can trivially be written as the double integral
\[
P(x_0, x_1, y(x)) = \int_{x_0}^{x_1} \int_{x_0}^{x_1} f_1(t, y(t), y'(t))\, f_2(s, y(s), y'(s))\, dt\, ds.
\]
In general, some non-local integral functionals can be expressed as a double (or multiple) integral of the form
\[
\int_{x_0}^{x_1} \int_{x_0}^{x_1} F(t, s, y(t), y(s), y'(t), y'(s))\, dt\, ds.
\]
See Pedregal (1997a) for more information on these non-local examples. In this paper, we deal with the problem of minimizing this general type of functionals.
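For concreteness, the product (9) and quotient (10) can be evaluated numerically for a trial function by computing the component integrals of $f = (f_1, f_2)$ and composing them. The sketch below is our own illustration: the integrands $f_1$, $f_2$ and the trial function are arbitrary choices, not examples taken from the paper.

```python
# Numerical evaluation of the product functional (9) and the quotient
# functional (10) for a trial function y(x).  The integrands f1, f2 and
# the trial function are illustrative assumptions, not the paper's examples.
import numpy as np

def trapezoid(fx, x):
    # composite trapezoidal rule for samples fx on the grid x
    return float(np.sum((fx[1:] + fx[:-1]) * np.diff(x)) / 2.0)

def product_and_quotient(y, dy, x0=0.0, x1=1.0, n=2001):
    x = np.linspace(x0, x1, n)
    f1 = np.sqrt(1.0 + dy(x) ** 2)   # f1: arc-length density (assumed)
    f2 = y(x) ** 2                   # f2: a simple weighting term (assumed)
    I1 = trapezoid(f1, x)            # componentwise integrals of f = (f1, f2)
    I2 = trapezoid(f2, x)
    return I1 * I2, I1 / I2          # P = I1*I2 (eq. 9), Q = I1/I2 (eq. 10)

# Trial function y(x) = x on [0, 1]: here I1 = sqrt(2) and I2 = 1/3,
# so P = sqrt(2)/3 and Q = 3*sqrt(2).
P, Q = product_and_quotient(lambda x: x, lambda x: np.ones_like(x))
```

Note that minimizing $P$ or $Q$ over a family of trial functions in this way is only a crude direct search; the variational machinery developed below gives the exact optimality conditions.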
In particular, one is interested in answering basic questions such as:

1. What are the corresponding Euler-Lagrange equations for these problems?

2. What are the corresponding natural, transversality, Weierstrass-Erdmann and junction conditions for these problems?

3. Can these problems be reduced to other Calculus of Variations problems?

4. What sets of sufficient conditions ensure the existence of optimal solutions?

In this paper we answer some of these questions. The paper is organized as follows. Section 2 is concerned with the changes needed in the classical direct method so that it can be applied to this class of functionals. We will see that the coercivity issue is very different for the product and the quotient functionals; indeed, as far as coercivity is concerned, we can divide functionals into two kinds: product-type and quotient-type functionals. Section 3 deals with the problem of obtaining a general formula for the Euler-Lagrange equations for the general functional (8), which is then applied to the product (9), the quotient (10) and another, more general functional. In Section 4 we give the natural, transversality, Weierstrass-Erdmann and junction conditions for the general functional (8). Sections 5 and 6 illustrate the proposed methods using a slope stability problem and an economics problem, respectively.

Notice that many of our results can be easily generalized and are also valid for much more general situations, such as functionals involving multiple integrals or multiple unknown functions. Proofs and techniques are very similar, and formally the same, as the ones used here. In addition, the Pontryagin maximum principle can be easily generalized to these more general functionals.

2 Sufficient conditions for existence

We first give a basic, general existence theorem, and then examine with care the two cases of product and quotient of functionals. This result is an adaptation of the direct method to the sort of functionals we are considering.
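For the classical functional (1), the first of these questions has a well-known answer, equation (2), which serves as the baseline for the composite case. It can be reproduced symbolically; the sketch below is our own illustration using SymPy, applied to the arc-length integrand $f = \sqrt{1 + y'^2}$, and is not part of the paper.

```python
# Symbolic derivation of the classical Euler-Lagrange equation (2) for
# the arc-length integrand f(x, y, y') = sqrt(1 + y'^2).  An illustrative
# sketch with SymPy, not part of the paper.
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.Symbol('x')
y = sp.Function('y')

L = sp.sqrt(1 + y(x).diff(x) ** 2)   # integrand f(x, y, y')

# euler_equations builds f_y - d/dx f_{y'} = 0, i.e. equation (2)
eq, = euler_equations(L, y(x), x)

# For this integrand the equation reduces to y''(x) = 0: minimizers of
# arc length are straight lines.
```

The same mechanical differentiation applies to each component $f_i$ of a composite functional; what changes, as Section 3 shows, is how those componentwise expressions are combined through the derivatives of $H$.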
Theorem 1 (First existence theorem). Let
\[
f(x, y, \lambda) : (x_0, x_1) \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}^n, \qquad H(z) : \mathbb{R}^n \to \mathbb{R},
\]
be given satisfying the following three sets of assumptions:

• regularity and boundedness: $f$ is continuous in $(y, \lambda)$ and measurable in $x$, and $H$ is lower semicontinuous and bounded from below, by a constant $c$, over the subset of $\mathbb{R}^n$
\[
(x_1 - x_0)\, \mathrm{co}(\mathrm{im}(f)) = \left\{ z \in \mathbb{R}^n : z = \sum_i s_i z_i, \; s_i \ge 0, \; \sum_i s_i = x_1 - x_0, \; z_i \in \mathrm{im}(f) \right\};
\]

• coercivity: the level sets of $H$ enjoy the following requirement: for each $C \ge c$ ($c$ has just been determined above) and every sequence $z^{(j)}$ such that
\[
\left| z^{(j)} \right| \to \infty, \qquad c \le H(z^{(j)}) \le C,
\]
we can find a subsequence (not relabeled), a constant $M$ and an index $i \in \{1, 2, \ldots, n\}$ (all depending possibly on $C$) such that
\[
z_i^{(j)} \le M \qquad \text{and} \qquad \lim_{|\lambda| \to \infty} \frac{f_i(x, y, \lambda)}{|\lambda|} = +\infty
\]
uniformly in $(x, y)$;

• convexity-monotonicity: each component $f_i(x, y, \lambda)$ is convex in $\lambda$ for fixed $(x, y)$, and $H$ is non-decreasing in each variable $z_i$ when the other variables $z_j$ are fixed values in $(x_1 - x_0)\, \mathrm{co}(\mathrm{im}(f_j))$ for $j \ne i$.

Then there exist global optimal solutions for the variational problem consisting in minimizing the functional
\[
I(x_0, x_1, y(x)) = H\left( \int_{x_0}^{x_1} f(x, y(x), y'(x))\, dx \right)
\]
over the functions $y$ that are absolutely continuous in $(x_0, x_1)$ and comply with suitable boundary conditions and/or other types of restrictions (which should be stable under weak convergence).
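The coercivity hypothesis can be seen in miniature already at the level of $H$ alone. The sketch below is our own illustration (the rays chosen are assumptions, not examples from the paper): for the product $H(z) = z_1 z_2$ a bounded sequence of values with $|z| \to \infty$ forces one component to stay bounded, whereas for the quotient $H(z) = z_1 / z_2$ it does not.

```python
# Why coercivity differs for product- and quotient-type functionals:
# along a sequence z^(j) with |z^(j)| -> infinity and H(z^(j)) bounded,
# Theorem 1 asks that some component z_i^(j) stay bounded.  The rays
# below are our own illustrative choices, not examples from the paper.
import numpy as np

t = np.linspace(1.0, 1e6, 10001)

# Product H(z) = z1*z2 on the ray z = (t, C/t), C > 0 fixed:
# H stays equal to C while |z| -> infinity, and z2 = C/t remains bounded,
# consistent with the coercivity hypothesis.
C = 5.0
H_prod = t * (C / t)
assert np.allclose(H_prod, C)
assert (C / t).max() <= C        # one component stays bounded

# Quotient H(z) = z1/z2 on the ray z = (t, t): H is constant at 1 while
# BOTH components blow up, so no component stays bounded -- quotient-type
# functionals require the separate coercivity analysis of this section.
H_quot = t / t
```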