
Preprint 11/2005

COMPOSITION FUNCTIONALS IN THE CALCULUS OF VARIATIONS. APPLICATION TO PRODUCTS AND QUOTIENTS

Enrique Castillo, Alberto Luceño, Pablo Pedregal

E.T.S. Ingenieros Industriales, Universidad de Castilla-La Mancha, 13071 Ciudad Real
http://matematicas.uclm.es/omeva/
October 2005

Composition Functionals in the Calculus of Variations. Application to Products and Quotients.

Enrique Castillo∗, Alberto Luceño∗ and Pablo Pedregal†

∗ Department of Applied and Computational Sciences,University of Cantabria, Spain

† Department of Mathematics, University of Castilla-La Mancha, Spain

Abstract

This paper deals with the problem of the Calculus of Variations for a functional which is the composition of a certain scalar function $H$ with the integral of a vector valued field $f$, i.e., of the form
\[
H\left( \int_{x_0}^{x_1} f(x, y(x), y'(x))\,dx \right),
\]
where $H : \mathbb{R}^n \to \mathbb{R}$ and $f : \mathbb{R}^3 \to \mathbb{R}^n$. The integral of $f$ is calculated here componentwise. We examine sufficient conditions for the existence of optimal solutions, and provide rules to find the Euler-Lagrange and the natural, transversality, Weierstrass-Erdmann and junction conditions for such a functional. Particular attention is paid to the cases of the product and the quotient, as we take these as model situations. Finally, the theory is illustrated with a slope stability problem and an example coming from Economics.

Key Words: coercivity, weak lower semicontinuity, Euler-Lagrange equations, product functionals, quotient functionals, transversality conditions, natural conditions, Weierstrass-Erdmann conditions, slope stability.

1 Introduction

Many problems in science and technology can be formulated as problems of the Calculus of Variations where a certain functional is minimized. The Calculus of Variations has been traditionally concerned with functionals of the form
\[
I(x_0, x_1, y(x)) = \int_{x_0}^{x_1} f(x, y(x), y'(x))\,dx, \tag{1}
\]
where $(x_0, x_1, y(x))$ is assumed to belong to adequate spaces for the problem to be tractable. Necessary and sufficient conditions for the existence of a minimum are well known (see for example Euler (1744); Elsgolc (1962); Gelfand and Fomin (1963); Forray (1968); Bolza (1973)). In particular, first order necessary conditions such as the following appear in the existing literature:

1. The Euler-Lagrange equation, to be satisfied by any extremal that minimizes (1):
\[
f_y - \frac{d}{dx} f_{y'} = 0. \tag{2}
\]

2. The natural condition, to be satisfied at an end point x = x0 when y(x0) is free:

\[
f_{y'}\bigl(x_0, y_0(x_0), y_0'(x_0)\bigr) = 0. \tag{3}
\]

3. The transversality condition, to be satisfied when an end point x = x0 must be on a given curve y = y¯(x):

\[
f\bigl(x_0, y_0(x_0), y_0'(x_0)\bigr) + \bigl(\bar y'(x_0) - y_0'(x_0)\bigr)\, f_{y'}\bigl(x_0, y_0(x_0), y_0'(x_0)\bigr) = 0. \tag{4}
\]

4. The Weierstrass-Erdmann condition, to be satisfied at any angular point x = c:

\[
f_{y'}\bigl(x, y_0(x), y_0'(x)\bigr)\Big|_{x=c-0} - f_{y'}\bigl(x, y_0(x), y_0'(x)\bigr)\Big|_{x=c+0} = 0, \tag{5}
\]
\[
\Bigl( f\bigl(x, y_0(x), y_0'(x)\bigr) - y_0'\, f_{y'}\bigl(x, y_0(x), y_0'(x)\bigr) \Bigr)\Big|_{x=c-0} - \Bigl( f\bigl(x, y_0(x), y_0'(x)\bigr) - y_0'\, f_{y'}\bigl(x, y_0(x), y_0'(x)\bigr) \Bigr)\Big|_{x=c+0} = 0. \tag{6}
\]

5. The junction condition for unilateral constraints of the type $y \ge \phi(x)$, to be satisfied at the junction point $x = \bar x$:

\[
\Bigl[ f\bigl(x, y_0(x), y_0'(x)\bigr) - f\bigl(x, y_0(x), \phi'(x)\bigr) - \bigl(\phi'(x) - y_0'(x)\bigr)\, f_{y'}\bigl(x, y_0(x), y_0'(x)\bigr) \Bigr]\Big|_{x=\bar x} = 0. \tag{7}
\]
In addition, other second order necessary conditions, such as Legendre's or Jacobi's, and sufficient conditions, such as Legendre's or Hilbert's conditions, are well known for functionals (1). Moreover, sufficient conditions for the existence of optimal solutions involve the so-called "direct method" (Dacorogna (1989, 1992)), which amounts basically to two main ingredients: coercivity and convexity. More precisely, if the integrand

\[
f(x, y, \lambda) : (x_0, x_1) \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}
\]
is continuous in $(y, \lambda)$, measurable in $x$, convex in $\lambda$ (for every fixed pair $(x, y)$) and satisfies the coercivity condition
\[
\lim_{|\lambda| \to \infty} \frac{f(x, y, \lambda)}{|\lambda|} = +\infty
\]
uniformly in $(x, y)$, then there are global optimal solutions for the minimization problem (1). This is classical and has been extended to the much more complex vector situation (Dacorogna (1989)). There are, however, interesting problems in which the functional to be minimized is not of the form (1). In this paper we deal with general (non-classical) functionals of the form

\[
H(x_0, x_1, y(x)) = H\left( \int_{x_0}^{x_1} f(x, y(x), y'(x))\,dx \right), \tag{8}
\]
where $f$ has $n$ components, $f = (f_1, \ldots, f_n)$, and $H$ has $n$ independent variables. In particular, two interesting cases for $n = 2$ are the product functional:

\[
P(x_0, x_1, y(x)) = \left( \int_{x_0}^{x_1} f_1(x, y(x), y'(x))\,dx \right) \left( \int_{x_0}^{x_1} f_2(x, y(x), y'(x))\,dx \right), \tag{9}
\]
and the quotient functional:

\[
Q(x_0, x_1, y(x)) = \frac{\int_{x_0}^{x_1} f_1(x, y(x), y'(x))\,dx}{\int_{x_0}^{x_1} f_2(x, y(x), y'(x))\,dx}. \tag{10}
\]
Functionals of the form (8) were dealt with by Euler (1744). However, it is really surprising that the scientific community seems to be unaware of it. In fact, only very few authors, such as Petrov (1968) and Goldstine (1980), cite this important work of Euler, and they do so as if it were a secondary contribution. Functionals of the form (10) have appeared in the past in relation to Soil Mechanics problems (see for example Garber (1973), Revilla and Castillo (1977) and Luceño (1979)).

The product functional is a very particular case of a non-local cost functional, as it can trivially be written as a double integral

\[
P(x_0, x_1, y(x)) = \int_{x_0}^{x_1} \int_{x_0}^{x_1} f_1(t, y(t), y'(t))\, f_2(s, y(s), y'(s))\,dt\,ds
\]
(a small numerical illustration of this identity is given after the list of questions below). In general, some non-local integral functionals can be expressed as a double (or multiple) integral of the form
\[
\int_{x_0}^{x_1} \int_{x_0}^{x_1} F(t, s, y(t), y(s), y'(t), y'(s))\,dt\,ds.
\]
See Pedregal (1997a) for more information on these non-local examples. In this paper, we deal with the problem of minimizing this general type of functionals. In particular, one is interested in answering basic questions such as:

1. What are the corresponding Euler-Lagrange equations for these problems?

2. What are the corresponding natural, transversality, Weierstrass-Erdmann and junction conditions for these problems?

3. Can these problems be reduced to other calculus of variation problems?

4. Which sets of sufficient conditions ensure the existence of optimal solutions?
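As a quick illustration of the double-integral representation of the product functional mentioned before the list, the following sketch checks numerically that the product of the two simple integrals coincides with the corresponding double integral. The integrands f1 = y'^2, f2 = y^2, the curve y(x) = sin(pi x) and the grid are illustrative choices, not taken from the paper.

```python
import numpy as np

def trap(vals, x):
    """Composite trapezoidal rule for samples vals on the grid x."""
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(x)) / 2.0)

# Illustrative data: y(x) = sin(pi x) on (0, 1), f1 = y'^2, f2 = y^2.
x  = np.linspace(0.0, 1.0, 801)
y  = np.sin(np.pi * x)
yp = np.pi * np.cos(np.pi * x)
f1 = yp**2
f2 = y**2

# Product of the two simple integrals, P = (int f1 dx)(int f2 dx).
P_product = trap(f1, x) * trap(f2, x)

# The same value written as a double integral of f1(t, .) f2(s, .).
M = np.outer(f1, f2)                                   # integrand on the (t, s) grid
inner = np.array([trap(M[i, :], x) for i in range(len(x))])
P_double = trap(inner, x)

print(P_product, P_double)   # both close to (pi**2 / 2) * 0.5, about 2.4674
```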

In this paper we answer some of these questions. The paper is organized as follows. Section 2 is concerned with the changes needed in the classical direct method so that it can be applied to this class of functionals. We will see that the coercivity issue is very different for the product and the quotient functionals. Indeed, as far as coercivity is concerned, we can divide functionals into these two kinds: product-type and quotient-type functionals. Section 3 deals with the problem of obtaining a general formula for the Euler-Lagrange equations for the general functional (8), which is then applied to the product (9), the quotient (10) and another more general functional. In Section 4 we give the natural, transversality, Weierstrass-Erdmann and junction conditions for the general functional (8). Sections 5 and 6 illustrate the proposed methods using a slope stability problem and an economics problem, respectively. Notice that many of our results can be easily generalized and are also valid for much more general situations, such as functionals involving multiple integrals or multiple unknown functions. Proofs and techniques are very similar to, and formally the same as, the ones used here. In addition, the Pontryagin maximum principle can be easily generalized to these more general functionals.

2 Sufficient conditions for existence

We first give a basic, general existence theorem, and then examine with care the two cases of product and quotient of functionals. This result is an adaptation of the direct method to the sort of functionals we are considering.

Theorem 1 (First existence theorem.) Let

\[
f(x, y, \lambda) : (x_0, x_1) \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}^n, \qquad H(z) : \mathbb{R}^n \to \mathbb{R},
\]
be given satisfying the following three sets of assumptions:

• regularity and boundedness: $f$ is continuous in $(y, \lambda)$ and measurable in $x$, and $H$ is lower semicontinuous and bounded from below over the subset of $\mathbb{R}^n$
\[
(x_1 - x_0)\,\mathrm{co}(\mathrm{im}(f)) = \Bigl\{ z \in \mathbb{R}^n : z = \sum_i s_i z_i,\ s_i \ge 0,\ \sum_i s_i = (x_1 - x_0),\ z_i \in \mathrm{im}(f) \Bigr\}
\]
by a constant $c$;

• coercivity: the level sets of $H$ enjoy the following requirement: for each $C \ge c$ ($c$ has just been determined above) and every sequence $z^{(j)}$ such that

\[
|z^{(j)}| \to \infty, \qquad c \le H(z^{(j)}) \le C,
\]
we can find a subsequence (not relabeled), a constant $M$ and an index $i \in \{1, 2, \ldots, n\}$ (all depending possibly on $C$) such that
\[
z_i^{(j)} \le M \quad\text{and}\quad \lim_{|\lambda| \to \infty} \frac{f_i(x, y, \lambda)}{|\lambda|} = +\infty
\]
uniformly in $(x, y)$;

• convexity-monotonicity: each component $f_i(x, y, \lambda)$ is convex in $\lambda$ for fixed $(x, y)$, and $H$ is non-decreasing in each $z_i$ when the other variables $z_j$, $j \ne i$, are fixed values in $(x_1 - x_0)\,\mathrm{co}(\mathrm{im}(f_j))$.

Then there exist global optimal solutions for the variational problem consisting in minimizing the functional
\[
I(x_0, x_1, y(x)) = H\left( \int_{x_0}^{x_1} f(x, y(x), y'(x))\,dx \right)
\]
over the functions $y$ that are absolutely continuous in $(x_0, x_1)$ and comply with suitable boundary conditions and/or other types of restrictions (which should be stable by weak convergence).

Proof. Let us denote by $A$ the class of functions $y(x)$ absolutely continuous in the interval $(x_0, x_1)$ complying with the additional restrictions that need to be considered in our variational problem, and which are stable by weak convergence. This means that if $y_j \in A$ and $y_j$ converges weakly to $y$ (in $W^{1,1}(x_0, x_1)$), then $y \in A$. $A$ is the class of feasible functions for our optimization problem. Because of the boundedness of $H$ over the set $(x_1 - x_0)\,\mathrm{co}(\mathrm{im}(f))$, we can consider a decreasing minimizing sequence $\{y_j\}$ so that

\[
\lim_{j \to \infty} I(x_0, x_1, y_j(x)) = m = \inf_{y \in A} I(x_0, x_1, y(x)).
\]
Our task consists in showing that this infimum is indeed a minimum under the set of assumptions in the statement of the theorem. Put
\[
z^{(j)} = \int_{x_0}^{x_1} f(x, y_j(x), y_j'(x))\,dx.
\]
Then, it is clear that $H(z^{(j)}) \le m + 1$ for $j$ large. By the second part of our coercivity hypothesis, we can always assume that for some component $i$ we have

\[
\int_{x_0}^{x_1} f_i(x, y_j(x), y_j'(x))\,dx \le M.
\]
This upper bound, together with the coercivity assumed on $f_i$, enables us to conclude that, for a subsequence not relabeled, $y_j$ converges weakly (in $W^{1,1}(x_0, x_1)$) to some $y \in A$. This function $y$ is the candidate for minimizer. We need to check that

\[
I(x_0, x_1, y(x)) \le \liminf_{j \to \infty} I(x_0, x_1, y_j(x)) \tag{11}
\]
if we put
\[
z = \int_{x_0}^{x_1} f(x, y(x), y'(x))\,dx.
\]
This requires the weak lower semicontinuity property for the functional $I$. But this is a direct consequence of the convexity assumed on $f$ and the corresponding monotonicity assumed on $H$. This is standard, and can be easily shown by using, for instance, Young measure theory (see Pedregal (1997b)). Indeed, under the weak convergence (always in $W^{1,1}(x_0, x_1)$), we can associate with $y_j'$ a family of probability measures $\nu = \{\nu_x\}_{x \in (x_0, x_1)}$ supported in $\mathbb{R}$ (the corresponding Young measure) such that

\[
\liminf_{j \to \infty} \int_{x_0}^{x_1} f(x, y_j(x), y_j'(x))\,dx \ge \int_{x_0}^{x_1} \int_{\mathbb{R}} f(x, y(x), \lambda)\,d\nu_x(\lambda)\,dx,
\]
and
\[
y'(x) = \int_{\mathbb{R}} \lambda\,d\nu_x(\lambda) \tag{12}
\]
for a.e. $x \in (x_0, x_1)$. By the lower semicontinuity of $H$ and its monotonicity, we can also write

\[
\liminf_{j \to \infty} I(x_0, x_1, y_j(x)) \ge H\left( \int_{x_0}^{x_1} \int_{\mathbb{R}} f(x, y(x), \lambda)\,d\nu_x(\lambda)\,dx \right).
\]
Finally, by the convexity of each component of $f$ and Jensen's inequality, we can further have

\[
\liminf_{j \to \infty} I(x_0, x_1, y_j(x)) \ge H\left( \int_{x_0}^{x_1} f\Bigl( x, y(x), \int_{\mathbb{R}} \lambda\,d\nu_x(\lambda) \Bigr)\,dx \right).
\]
By (12), we immediately get (11).

Let us examine the product and quotient cases for n = 2

P (z) = z1z2, Q(z) = z1/z2.

The condition on the boundedness also depends on what $f$ is, so that we cannot say much without this information. The same applies to the convexity, since $P$ regarded as a function of each variable will be increasing or decreasing depending upon the sign of the other variable. The last requirement on coercivity does not involve $f$, and this is what we would like to explore now. Suppose then that
\[
|z^{(j)}| \to \infty, \qquad c \le z_1^{(j)} z_2^{(j)} \le C.
\]
It is then clear that both components cannot go to $\infty$ at the same time, and therefore the requirement for coercivity is true for the product case. This is not so for the quotient, as the condition

\[
|z^{(j)}| \to \infty, \qquad c \le z_1^{(j)} / z_2^{(j)} \le C
\]
is compatible with both components going to $\infty$ simultaneously. In fact, in the case of the quotient, it is very easy to give examples of non-existence. Consider the problem of minimizing the quotient of the integrals corresponding to

\[
f_1(x, y, \lambda) = \lambda^2, \qquad f_2(x, y, \lambda) = -1 - \lambda^2.
\]

All the requirements in the statement of Theorem 1 hold except for the above property for the quotient. If we impose vanishing boundary conditions on the interval (0, 1), then it is elementary to check that the infimum of the problem is −1 by considering the minimizing sequence

\[
y_j(x) = j\,x(1 - x),
\]
but it can never be achieved by a feasible function (a numerical sketch of this behavior follows the list below). To have existence of optimal solutions for the quotient case, additional information is necessary to rule out this behavior. As a matter of fact, as we will see in one of our examples, one is often led to apply to each particular situation the leading strategy in the proof above rather than the direct application of its statement. The philosophy behind the proof of Theorem 1 consists in checking three facts:

1. The infimum is finite.

2. From the boundedness of the infimum derive the boundedness of the integrals correspond- ing to one of the components of f.

3. The convexity-monotonicity requirement.
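Returning to the quotient non-existence example above, the following small numerical sketch (the grid and the values of j are arbitrary illustrative choices) shows the quotient of the two integrals approaching its infimum -1 along the minimizing sequence y_j(x) = j x(1 - x), without ever attaining it.

```python
import numpy as np

def trap(vals, x):
    """Composite trapezoidal rule for samples vals on the grid x."""
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(x)) / 2.0)

x = np.linspace(0.0, 1.0, 20001)

for j in (1, 5, 25, 125):
    y  = j * x * (1.0 - x)          # minimizing sequence, y_j(0) = y_j(1) = 0
    yp = j * (1.0 - 2.0 * x)        # y_j'(x)
    num = trap(yp**2, x)            # integral of f1 = lambda^2
    den = trap(-1.0 - yp**2, x)     # integral of f2 = -1 - lambda^2
    print(j, num / den)             # tends to -1 but never reaches it
```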

Theorem 1 can be generalized to functionals depending on higher derivatives without any additional effort. We are now considering the functional

\[
I(x_0, x_1, y(x)) = H\left( \int_{x_0}^{x_1} f\bigl(x, y(x), y'(x), \ldots, y^{(d)}(x)\bigr)\,dx \right) \tag{13}
\]
under additional restrictions (end-point conditions) which are respected by weak convergence in $W^{d,1}(x_0, x_1)$. We will use the variable $\rho$ to indicate the whole vector of derivatives $(y, y', \ldots, y^{(d-1)})$, and reserve $\lambda$ for the variable corresponding to the highest derivative.

Theorem 2 (Second existence theorem.) Let

\[
f(x, \rho, \lambda) : (x_0, x_1) \times \mathbb{R}^d \times \mathbb{R} \to \mathbb{R}^n, \qquad H(z) : \mathbb{R}^n \to \mathbb{R},
\]
be given satisfying the following three sets of assumptions:

• regularity and boundedness: f is continuous in (ρ, λ) and measurable in x, and H is lower semicontinuous and bounded from below over the subset of IRn

\[
(x_1 - x_0)\,\mathrm{co}(\mathrm{im}(f)) = \Bigl\{ z \in \mathbb{R}^n : z = \sum_i s_i z_i,\ s_i \ge 0,\ \sum_i s_i = (x_1 - x_0),\ z_i \in \mathrm{im}(f) \Bigr\}
\]
by a constant $c$;

• coercivity: the level sets of H enjoy the following requirement: for each C ≥ c (c has just been given) and every sequence z(j) such that

\[
|z^{(j)}| \to \infty, \qquad c \le H(z^{(j)}) \le C,
\]
we can find a subsequence (not relabeled), a constant $M$ and an index $i \in \{1, 2, \ldots, n\}$ (all depending possibly on $C$) such that
\[
z_i^{(j)} \le M \quad\text{and}\quad \lim_{|\lambda| \to \infty} \frac{f_i(x, \rho, \lambda)}{|\lambda|} = +\infty
\]
uniformly in $(x, \rho)$;

• convexity-monotonicity: each component $f_i(x, \rho, \lambda)$ is convex in $\lambda$ for fixed $(x, \rho)$, and $H$ is non-decreasing in each variable $z_i$ when the other variables $z_j$, $j \ne i$, are fixed values in $(x_1 - x_0)\,\mathrm{co}(\mathrm{im}(f_j))$.

Then there exist global optimal solutions for the variational problem consisting in minimizing the functional in (13) over the functions $y$ belonging to the space $W^{d,1}(x_0, x_1)$ and complying with suitable boundary conditions and/or other types of restrictions (which should be stable by weak convergence in such a space).

In practice one faces problems in which the function $H$ does not satisfy the convexity-monotonicity conditions of Theorems 1 and 2 directly; however, by changing the signs of the arguments and replacing convexity by concavity, one can transform the initial problem into one that satisfies these theorems.

3 Euler-Lagrange equation

In this section we derive the Euler-Lagrange equations associated with functionals of the general form (8). The usual way of deriving the necessary conditions for optimality in the calculus of variations consists of writing the expansion of the functional including the first partial variations with respect to $x_0$, $x_1$ and $y(x)$, and then applying the fundamental lemma of the calculus of variations to the equation obtained by making these variations vanish. However, a simpler procedure consists of using the chain rule for the derivative of a composed function, and this leads to the following theorem.

Theorem 3 (General functional.) Assume that the function H in (8) and

\[
G_i(x_0, x_1, y(x)) = \int_{x_0}^{x_1} f_i(x, y(x), y'(x))\,dx, \tag{14}
\]
can be expanded as Taylor series up to second order terms with respect to their arguments. Then, the Euler-Lagrange equation associated with functional (8) is

\[
\sum_{i=1}^{n} H_i' \left( f_{iy} - \frac{d}{dx} f_{iy'} \right) = 0, \tag{15}
\]
where $H_i'$ represents the partial derivative of $H$ with respect to its $i$-th argument, and its second variation is

\[
\delta^2 H = \frac{1}{2} \sum_{i=1}^{n} H_i' \int_{x_0}^{x_1} \Bigl[ f_{iyy}\,\delta y^2 + 2 f_{iyy'}\,\delta y\,\delta y' + f_{iy'y'}\,\delta y'\,\delta y' \Bigr]\,dx
+ \frac{1}{2} \sum_{i,j} H_{ij}'' \int_{x_0}^{x_1}\!\int_{x_0}^{x_1} \Bigl[ f_{iy} - \frac{d}{dx} f_{iy'} \Bigr](r, y(r), y'(r))\, \Bigl[ f_{jy} - \frac{d}{dx} f_{jy'} \Bigr](s, y(s), y'(s))\,\delta y(r)\,\delta y(s)\,dr\,ds. \tag{16}
\]

Proof. Including up to the second order terms of the Taylor series expansion of $G_i(x_0, x_1, y(x))$, one gets

\[
\delta G_i = G_i(x_0, x_1, y(x) + \delta y(x)) - G_i(x_0, x_1, y(x))
= \int_{x_0}^{x_1} \Bigl[ f_{iy} - \frac{d}{dx} f_{iy'} \Bigr]\,\delta y\,dx + \frac{1}{2} \int_{x_0}^{x_1} \Bigl[ f_{iyy}\,\delta y^2 + 2 f_{iyy'}\,\delta y\,\delta y' + f_{iy'y'}\,\delta y'\,\delta y' \Bigr]\,dx. \tag{17}
\]
Similarly, including up to the second order terms of the Taylor series expansion of $H$, one obtains

\[
\delta H = H\bigl(G_1(x_0, x_1, y(x) + \delta y(x)), \ldots, G_n(x_0, x_1, y(x) + \delta y(x))\bigr) - H\bigl(G_1(x_0, x_1, y(x)), \ldots, G_n(x_0, x_1, y(x))\bigr)
\]
\[
\approx \sum_{i=1}^{n} H_i' \int_{x_0}^{x_1} \Bigl[ f_{iy} - \frac{d}{dx} f_{iy'} \Bigr]\,\delta y\,dx + \frac{1}{2} \sum_{i=1}^{n} H_i' \int_{x_0}^{x_1} \Bigl[ f_{iyy}\,\delta y^2 + 2 f_{iyy'}\,\delta y\,\delta y' + f_{iy'y'}\,\delta y'\,\delta y' \Bigr]\,dx
\]
\[
\quad + \frac{1}{2} \sum_{i,j} H_{ij}'' \left( \int_{x_0}^{x_1} \Bigl[ f_{iy} - \frac{d}{dx} f_{iy'} \Bigr]\,\delta y\,dx \right) \left( \int_{x_0}^{x_1} \Bigl[ f_{jy} - \frac{d}{dx} f_{jy'} \Bigr]\,\delta y\,dx \right), \tag{18}
\]
where $\delta H$ is the variation of $H$ and $H_{ij}''$ is the second partial derivative of $H$ with respect to the arguments $i$ and $j$. Expression (18) includes first and second order terms. From the first order term, as it has to vanish for any $\delta y$, we immediately get (15). As the second of the second order terms can be written as

\[
\frac{1}{2} \sum_{i,j} H_{ij}'' \int_{x_0}^{x_1}\!\int_{x_0}^{x_1} \Bigl[ f_{iy} - \frac{d}{dx} f_{iy'} \Bigr](r, y(r), y'(r))\, \Bigl[ f_{jy} - \frac{d}{dx} f_{jy'} \Bigr](s, y(s), y'(s))\,\delta y(r)\,\delta y(s)\,dr\,ds, \tag{19}
\]
we have (16).

The following theorem shows that the minimization of functional (8) is equivalent to the minimization of a quadratic functional, and vice versa, in the sense of sharing the same minimizers. Because this quadratic functional has exactly the same Euler-Lagrange equation and second variation as functional (8), it is of interest to analyze it.

Theorem 4 (Reducing the general functional to a quadratic functional.) Assume that the function H in (8) is differentiable with respect to its n arguments. If $(x_0, x_1, y_0(x))$ is an extremal leading to a finite relative minimum (maximum) of the functional $H(x_0, x_1, y(x))$ in (8), of value $H_0 = H(F_{10}, F_{20}, \ldots, F_{n0})$, the functionals

\[
F_i(x_0, x_1, y(x)) = \int_{x_0}^{x_1} f_i(x, y(x), y'(x))\,dx; \qquad i = 1, 2, \ldots, n \tag{20}
\]
are expandable as Taylor series at $(x_0, x_1, y_0(x))$ up to second order terms, and not all

\[
H_{ij}''(F_{10}, F_{20}, \ldots, F_{n0}); \qquad i, j = 1, 2, \ldots, n,
\]
are null, then $(x_0, x_1, y_0(x))$ is also an extremal leading to a relative minimum (maximum)
\[
K_0 = \sum_{i=1}^{n} A_{i0} F_{i0} + \sum_{i,j=1}^{n} B_{ij0} F_{i0} F_{j0}
\]
of the functional
\[
K\left( \int_{x_0}^{x_1} f(x, y(x), y'(x))\,dx \right) = \sum_{i=1}^{n} A_{i0} \int_{x_0}^{x_1} f_i(x, y(x), y'(x))\,dx + \sum_{i,j=1}^{n} B_{ij0} \left( \int_{x_0}^{x_1} f_i(x, y(x), y'(x))\,dx \right) \left( \int_{x_0}^{x_1} f_j(x, y(x), y'(x))\,dx \right), \tag{21}
\]
where

\[
A_{i0} = H_{i0}' - \sum_{j=1}^{n} H_{ij0}'' F_{j0}, \tag{22}
\]
\[
B_{ij0} = \frac{1}{2} H_{ij0}'', \tag{23}
\]
\[
H_{i0}' = H_i'(F_{10}, F_{20}, \ldots, F_{n0}); \qquad i = 1, 2, \ldots, n, \tag{24}
\]
\[
H_{ij0}'' = H_{ij}''(F_{10}, F_{20}, \ldots, F_{n0}); \qquad i, j = 1, 2, \ldots, n, \tag{25}
\]
\[
F_{i0} = \int_{x_0}^{x_1} f_i(x, y_0(x), y_0'(x))\,dx; \qquad i = 1, 2, \ldots, n, \tag{26}
\]
and vice versa.

Proof. The proof is very simple using Theorem 3. From (21), (22) and (23) one gets

\[
K_{i0}' = A_{i0} + 2 \sum_{j=1}^{n} B_{ij0} F_{j0} = H_{i0}', \qquad i = 1, 2, \ldots, n, \tag{27}
\]
\[
K_{ij0}'' = 2 B_{ij0} = H_{ij0}'', \qquad i, j = 1, 2, \ldots, n, \tag{28}
\]
which proves that both functionals have the same first and second variations.

This theorem reveals the importance of the quadratic functional (21) in deriving the sufficient conditions for the existence of a local minimum (maximum) of functionals of the form (8).

Corollary 1 The functional (8) and the classical functional

\[
H^*\left( \int_{x_0}^{x_1} f(x, y(x), y'(x))\,dx \right) = \sum_{i=1}^{n} H_{i0}' \int_{x_0}^{x_1} f_i(x, y(x), y'(x))\,dx \tag{29}
\]
share the same Euler-Lagrange equations.

The proof is obvious because $H_{i0}'^{*} = H_{i0}'$, $\forall i$.

3.1 The product functional

The Euler-Lagrange equation for the product functional (9) can be easily obtained using well known techniques of taking variations, i.e., we have

\[
\delta P = \left( \int_{x_0+\alpha_0}^{x_1+\alpha_1} f_1(x, y(x) + w(x), y'(x) + w'(x))\,dx \right) \left( \int_{x_0+\alpha_0}^{x_1+\alpha_1} f_2(x, y(x) + w(x), y'(x) + w'(x))\,dx \right)
- \left( \int_{x_0}^{x_1} f_1(x, y(x), y'(x))\,dx \right) \left( \int_{x_0}^{x_1} f_2(x, y(x), y'(x))\,dx \right), \tag{30}
\]
where $(\alpha_0, \alpha_1, w(x))$ is a feasible variation such that $\|(\alpha_0, \alpha_1, w(x))\| \to 0$. Then, using the corresponding Taylor expansion, neglecting higher order terms, and making the first partial variation of the functional with respect to $y(x)$ vanish, which is:

\[
\delta P(x_0, x_1, y(x))(w(x)) = \int_{x_0}^{x_1} \left[ \left( f_{1y} - \frac{d}{dx} f_{1y'} \right) \left( \int_{x_0}^{x_1} f_2(x, y(x), y'(x))\,dx \right) + \left( f_{2y} - \frac{d}{dx} f_{2y'} \right) \left( \int_{x_0}^{x_1} f_1(x, y(x), y'(x))\,dx \right) \right] w(x)\,dx, \tag{31}
\]
the following Euler-Lagrange equation for this problem results:

\[
\left( f_{1y} - \frac{d}{dx} f_{1y'} \right) \left( \int_{x_0}^{x_1} f_2(x, y(x), y'(x))\,dx \right) + \left( f_{2y} - \frac{d}{dx} f_{2y'} \right) \left( \int_{x_0}^{x_1} f_1(x, y(x), y'(x))\,dx \right) = 0. \tag{32}
\]
Alternatively, we can derive the Euler-Lagrange equation associated with functional (9) using Theorem 3, i.e., Equation (15), which leads to the well known rule: "The derivative of a product is the derivative of the first factor times the second factor plus the derivative of the second factor times the first factor", and directly gives (32). Applying Theorem 4 we obtain the same functional (9), as expected.
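The first-variation formula (31) can also be checked numerically. In the sketch below the integrands f1 = y'^2, f2 = y^2, the base curve and the test variation are illustrative assumptions chosen for convenience (they are not the paper's examples); the directional derivative of the product functional obtained by central differences is compared with the right-hand side of (31).

```python
import numpy as np

def trap(vals, x):
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(x)) / 2.0)

x = np.linspace(0.0, 1.0, 4001)

def P(y):
    """Discrete product functional with f1 = y'^2 and f2 = y^2 (test choice)."""
    yp = np.gradient(y, x)
    return trap(yp**2, x) * trap(y**2, x)

y = np.sin(np.pi * x)                      # base curve
w = x * (1.0 - x) * np.sin(3 * np.pi * x)  # interior variation, w(0) = w(1) = 0

# Directional derivative of P in the direction w, by central differences.
eps = 1e-6
dP_num = (P(y + eps * w) - P(y - eps * w)) / (2 * eps)

# Right-hand side of (31): for f1 = y'^2, f1y - d/dx f1y' = -2 y'';
# for f2 = y^2, f2y - d/dx f2y' = 2 y.
yp  = np.gradient(y, x)
ypp = np.gradient(yp, x)
EL  = (-2.0 * ypp) * trap(y**2, x) + (2.0 * y) * trap(yp**2, x)
dP_formula = trap(EL * w, x)

print(dP_num, dP_formula)   # both close to -3/8 for these particular choices
```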

3.2 The quotient functional

Assuming that the denominator does not vanish, the Euler-Lagrange equation for the quotient problem can be easily obtained using the well known techniques of taking variations, i.e., we have
\[
\delta Q = \frac{\int_{x_0+\alpha_0}^{x_1+\alpha_1} f_1(x, y(x) + w(x), y'(x) + w'(x))\,dx}{\int_{x_0+\alpha_0}^{x_1+\alpha_1} f_2(x, y(x) + w(x), y'(x) + w'(x))\,dx} - \frac{\int_{x_0}^{x_1} f_1(x, y(x), y'(x))\,dx}{\int_{x_0}^{x_1} f_2(x, y(x), y'(x))\,dx}, \tag{33}
\]
and using the Taylor expansion, neglecting higher order terms, and making the first partial variation of the functional with respect to $y(x)$ vanish, which is:

\[
\int_{x_0}^{x_1} \frac{\left( f_{1y} - \frac{d}{dx} f_{1y'} \right) \left( \int_{x_0}^{x_1} f_2(x, y(x), y'(x))\,dx \right) - \left( f_{2y} - \frac{d}{dx} f_{2y'} \right) \left( \int_{x_0}^{x_1} f_1(x, y(x), y'(x))\,dx \right)}{\left( \int_{x_0}^{x_1} f_2(x, y(x), y'(x))\,dx \right)^2}\, w(x)\,dx, \tag{34}
\]
one gets the corresponding Euler-Lagrange equation:

\[
\frac{\left( f_{1y} - \frac{d}{dx} f_{1y'} \right) \left( \int_{x_0}^{x_1} f_2(x, y(x), y'(x))\,dx \right) - \left( f_{2y} - \frac{d}{dx} f_{2y'} \right) \left( \int_{x_0}^{x_1} f_1(x, y(x), y'(x))\,dx \right)}{\left( \int_{x_0}^{x_1} f_2(x, y(x), y'(x))\,dx \right)^2} = 0, \tag{35}
\]
which simplifies to:
\[
\bigl( f_{1y} - Q_0 f_{2y} \bigr) - \frac{d}{dx}\bigl( f_{1y'} - Q_0 f_{2y'} \bigr) = 0, \tag{36}
\]
where $Q_0 = Q(x_0, x_1, y_0(x))$ is the optimal value of the functional $Q$. Alternatively, we can derive the Euler-Lagrange equation associated with functional (10) using Theorem 3, i.e., Expression (15), which leads to the well known rule:

"The derivative of a quotient is the derivative of the numerator times the denominator minus the derivative of the denominator times the numerator",

and get (35) or its equivalent (36). In addition, applying Corollary 1 we find that the functional (10) shares its Euler-Lagrange equations with the classical functional

\[
R(x_0, x_1, y(x)) = \int_{x_0}^{x_1} \bigl[ f_1(x, y(x), y'(x)) - Q_0 f_2(x, y(x), y'(x)) \bigr]\,dx, \tag{37}
\]
where
\[
Q_0 = Q(x_0, x_1, y_0(x)) = \frac{\int_{x_0}^{x_1} f_1(x, y_0(x), y_0'(x))\,dx}{\int_{x_0}^{x_1} f_2(x, y_0(x), y_0'(x))\,dx} = -\frac{H_{20}'}{H_{10}'} \tag{38}
\]
is a constant, and $y_0(x)$ is the minimizer of (10). Using Theorem 4, one finds that the functional (10) is equivalent to the quadratic functional:

\[
S(x_0, x_1, y(x)) = \frac{2}{F_{20}} \int_{x_0}^{x_1} f_1(x, y(x), y'(x))\,dx - \frac{2 F_{10}}{F_{20}^2} \int_{x_0}^{x_1} f_2(x, y(x), y'(x))\,dx
- \frac{1}{F_{20}^2} \left( \int_{x_0}^{x_1} f_1(x, y(x), y'(x))\,dx \right) \left( \int_{x_0}^{x_1} f_2(x, y(x), y'(x))\,dx \right)
+ \frac{F_{10}}{F_{20}^3} \left( \int_{x_0}^{x_1} f_2(x, y(x), y'(x))\,dx \right)^2. \tag{39}
\]

3.3 Another example

As a final example, we derive the Euler-Lagrange equation associated with the functional:

\[
Z(x_0, x_1, y(x)) = \frac{\left( \int_{x_0}^{x_1} f_1(x, y(x), y'(x))\,dx \right) \left( \int_{x_0}^{x_1} f_2(x, y(x), y'(x))\,dx \right)}{\int_{x_0}^{x_1} f_3(x, y(x), y'(x))\,dx}. \tag{40}
\]
Applying Theorem 3, that is, Expression (15), reducing to a common denominator and removing it, one gets the Euler-Lagrange equation

\[
\left( f_{1y} - \frac{d}{dx} f_{1y'} \right) \left( \int_{x_0}^{x_1} f_2(x, y(x), y'(x))\,dx \right) \left( \int_{x_0}^{x_1} f_3(x, y(x), y'(x))\,dx \right)
+ \left( f_{2y} - \frac{d}{dx} f_{2y'} \right) \left( \int_{x_0}^{x_1} f_1(x, y(x), y'(x))\,dx \right) \left( \int_{x_0}^{x_1} f_3(x, y(x), y'(x))\,dx \right)
- \left( f_{3y} - \frac{d}{dx} f_{3y'} \right) \left( \int_{x_0}^{x_1} f_1(x, y(x), y'(x))\,dx \right) \left( \int_{x_0}^{x_1} f_2(x, y(x), y'(x))\,dx \right) = 0.
\]
The reader can apply Theorem 4 to obtain the equivalent functional.
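Returning to the quotient functional of Section 3.2, the content of Corollary 1 can also be seen numerically: at any curve y, the first variation of Q is 1/(the integral of f2) times the first variation of the frozen classical functional R in (37), so both vanish together. The sketch below verifies this proportionality by central differences; the integrands f1 = 1 + y'^2 and f2 = 1 + y^2, the curve and the variation are illustrative assumptions, not the paper's examples.

```python
import numpy as np

def trap(vals, x):
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(x)) / 2.0)

x = np.linspace(0.0, 1.0, 4001)

def F1(y):                      # integral of f1 = 1 + y'^2 (test choice)
    return trap(1.0 + np.gradient(y, x)**2, x)

def F2(y):                      # integral of f2 = 1 + y^2 (test choice)
    return trap(1.0 + y**2, x)

def Q(y):
    return F1(y) / F2(y)

y = np.sin(np.pi * x)
w = x * (1.0 - x)               # variation vanishing at the end points
eps = 1e-6

dQ = (Q(y + eps * w) - Q(y - eps * w)) / (2 * eps)

Q0 = Q(y)
def R(z):                       # classical functional (37) with Q0 frozen
    return trap((1.0 + np.gradient(z, x)**2) - Q0 * (1.0 + z**2), x)

dR = (R(y + eps * w) - R(y - eps * w)) / (2 * eps)

print(dQ, dR / F2(y))           # the two values should agree closely
```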

4 Necessary conditions

According to Theorem 4, the minimization of the general functional (8) can be reduced to the minimization of a quadratic functional, and Corollary 1 allows obtaining first order conditions based on classical functionals. This is particularly true for the product and quotient functionals. Thus the corresponding necessary and sufficient conditions and, in particular, the natural, transversality, Weierstrass-Erdmann and junction conditions for inequality constraints can be obtained with the help of the corresponding classical problems. These conditions for the general functional (8) are:

1. The natural condition, to be satisfied at an end point x = x0 when y(x0) is free becomes:

\[
\sum_{i=1}^{n} A_{i0}\, f_{iy'}\bigl(x_0, y(x_0), y'(x_0)\bigr) = 0. \tag{41}
\]

2. The transversality condition, to be satisfied when an end point x = x0 is on a given curve y = y¯(x), is:

\[
\sum_{i=1}^{n} A_{i0}\, f_i\bigl(x_0, y(x_0), y'(x_0)\bigr) + \bigl(\bar y'(x_0) - y'(x_0)\bigr) \sum_{i=1}^{n} A_{i0}\, f_{iy'}\bigl(x_0, y(x_0), y'(x_0)\bigr) = 0. \tag{42}
\]
3. The Weierstrass-Erdmann condition, to be satisfied at any angular point $x = c$, becomes:

\[
\sum_{i=1}^{n} A_{i0}\, f_{iy'}\bigl(x, y_0(x), y_0'(x)\bigr)\Big|_{x=c-0} - \sum_{i=1}^{n} A_{i0}\, f_{iy'}\bigl(x, y_0(x), y_0'(x)\bigr)\Big|_{x=c+0} = 0, \tag{43}
\]
\[
\Bigl( \sum_{i=1}^{n} A_{i0}\, f_i\bigl(x, y_0(x), y_0'(x)\bigr) - y_0' \sum_{i=1}^{n} A_{i0}\, f_{iy'}\bigl(x, y_0(x), y_0'(x)\bigr) \Bigr)\Big|_{x=c-0}
- \Bigl( \sum_{i=1}^{n} A_{i0}\, f_i\bigl(x, y_0(x), y_0'(x)\bigr) - y_0' \sum_{i=1}^{n} A_{i0}\, f_{iy'}\bigl(x, y_0(x), y_0'(x)\bigr) \Bigr)\Big|_{x=c+0} = 0. \tag{44}
\]
4. The junction condition for unilateral constraints of the type $y \ge \phi(x)$ is:

\[
\Bigl[ \sum_{i=1}^{n} A_{i0}\, f_i\bigl(x, y(x), y'(x)\bigr) - \sum_{i=1}^{n} A_{i0}\, f_i\bigl(x, y(x), \phi'(x)\bigr) - \bigl(\phi'(x) - y'(x)\bigr) \sum_{i=1}^{n} A_{i0}\, f_{iy'}\bigl(x, y(x), y'(x)\bigr) \Bigr]\Big|_{x=\bar x} = 0. \tag{45}
\]
As an example, for the quotient functional we have:

1. The natural condition, to be satisfied at a free end point x = x0, is:

\[
f_{1y'}\bigl(x_0, y(x_0), y'(x_0)\bigr) - Q_0\, f_{2y'}\bigl(x_0, y(x_0), y'(x_0)\bigr) = 0. \tag{46}
\]

2. The transversality condition, to be satisfied when an end point x = x0 is on a given curve y = y¯(x), becomes:

\[
f_1\bigl(x_0, y(x_0), y'(x_0)\bigr) - Q_0\, f_2\bigl(x_0, y(x_0), y'(x_0)\bigr)
+ \bigl(\bar y'(x_0) - y'(x_0)\bigr) \Bigl( f_{1y'}\bigl(x_0, y(x_0), y'(x_0)\bigr) - Q_0\, f_{2y'}\bigl(x_0, y(x_0), y'(x_0)\bigr) \Bigr) = 0. \tag{47}
\]
3. The Weierstrass-Erdmann condition, to be satisfied at any angular point $x = c$, is:

\[
\Bigl( f_{1y'}\bigl(x, y_0(x), y_0'(x)\bigr) - Q_0\, f_{2y'}\bigl(x, y_0(x), y_0'(x)\bigr) \Bigr)\Big|_{x=c-0}
- \Bigl( f_{1y'}\bigl(x, y_0(x), y_0'(x)\bigr) - Q_0\, f_{2y'}\bigl(x, y_0(x), y_0'(x)\bigr) \Bigr)\Big|_{x=c+0} = 0, \tag{48}
\]
\[
\Bigl( f_1\bigl(x, y_0(x), y_0'(x)\bigr) - Q_0\, f_2\bigl(x, y_0(x), y_0'(x)\bigr) - y_0'(x) \bigl( f_{1y'}(x, y_0(x), y_0'(x)) - Q_0\, f_{2y'}(x, y_0(x), y_0'(x)) \bigr) \Bigr)\Big|_{x=c-0}
- \Bigl( f_1\bigl(x, y_0(x), y_0'(x)\bigr) - Q_0\, f_2\bigl(x, y_0(x), y_0'(x)\bigr) - y_0'(x) \bigl( f_{1y'}(x, y_0(x), y_0'(x)) - Q_0\, f_{2y'}(x, y_0(x), y_0'(x)) \bigr) \Bigr)\Big|_{x=c+0} = 0. \tag{49}
\]
4. The junction condition for unilateral constraints of the type $y \ge \phi(x)$ is:
\[
\Bigl[ f_1(x, y, y') - Q_0\, f_2(x, y, y') - f_1(x, y, \phi') + Q_0\, f_2(x, y, \phi') - (\phi' - y') \bigl( f_{1y'}(x, y, y') - Q_0\, f_{2y'}(x, y, y') \bigr) \Bigr]\Big|_{x=\bar x} = 0. \tag{50}
\]
The reader can now easily obtain the corresponding conditions for the product functional.

5 Example of application

With the aim of illustrating the above theory, we present a slope stability problem in this section. Slope stability analysis consists of determining the safety factors F (the ratio of the resisting to sliding forces or moments) associated with a series of sliding lines previously defined by the engineer, and determining the one leading to a minimum safety factor F0. Since each of these forces and moments can be given as a functional, the problem can be stated as the minimization of a quotient of two functionals.

Revilla and Castillo (1977) and Castillo and Revilla (1977), based on the Janbu method (see Janbu (1957)), proposed the following implicit functional:
\[
F = \frac{\displaystyle\int_a^b \frac{\Bigl[ \dfrac{c}{\gamma} + (\bar z(u) - z(u)) \tan\phi \Bigr]\,\bigl(1 + z'^2(u)\bigr)}{1 + \dfrac{z'(u)\tan\phi}{F}}\,du}{\displaystyle\int_a^b \bigl(\bar z(u) - z(u)\bigr)\,z'(u)\,du}, \tag{51}
\]
where F is the safety factor, $\bar z(u)$ is the slope profile (ordinate at point u), $z(u)$ is the ordinate of the sliding line at point u, c is the cohesion of the soil, $\phi$ is the angle of internal friction of the soil, $\gamma$ is the unit weight of the soil, H is the slope height, and a and b are the u-coordinates of the sliding line end points.

Castillo and Luceño (1982) showed that this functional satisfies the necessary and sufficient conditions for a minimum, and consequently is valid for slope design. They have also shown that other functionals are not valid.

Note that (51), for a given slope profile $\bar z(u)$, relates five important variables $\phi$, c, $\gamma$, H and F, i.e., we are dealing with a 5-dimensional space. Of course, we can work in this space, but this complicates things unnecessarily and hides the deep structure of the slope stability problem. Dimensional analysis, by means of the Pi-theorem (see Buckingham (1915)), reveals that expression (51) can be written in terms of the three non-dimensional variables in the set
\[
\Bigl\{ F,\ N = \frac{c}{\gamma H},\ \psi = \tan\phi \Bigr\}, \tag{52}
\]

13 0.5

N*= 0.003

0 3

* = 0.075 N

* = 0.3 -0.5 N

z * = 0.675 N arctan(πu) y(u)= π * = 1.2 -1 N * N * = 1.875 N = N ψ = 2.7 -1.5 N* ψ=tan(φ)

-2 -1 0 1 2 3 u

Figure 1: Sliding lines for different values of N ∗.

so that
\[
F = \frac{\displaystyle\int_{x_0}^{x_1} \frac{\bigl[ N + (\bar y(x) - y(x))\psi \bigr]\,\bigl(1 + y'^2(x)\bigr)}{1 + \dfrac{\psi\, y'(x)}{F}}\,dx}{\displaystyle\int_{x_0}^{x_1} \bigl(\bar y(x) - y(x)\bigr)\,y'(x)\,dx}, \tag{53}
\]

where
\[
x = \frac{u}{H} \tag{54}
\]
is the non-dimensional u coordinate, and

\[
y(x) = \frac{z(xH)}{H}, \tag{55}
\]
\[
\bar y(x) = \frac{\bar z(xH)}{H} \tag{56}
\]
are the non-dimensional sliding line and slope profile, respectively.

In fact, it can be shown that the problem depends only on the two non-dimensional parameters $N^* = N/\psi$ and $F^* = F/\psi$. Figure 1 shows the critical sliding lines for some different values of $N^* = N/\psi$ and shows that the critical line for the case $\psi = 0$ is infinitely deep. For the sake of illustration, in this paper we consider the particular case of a purely cohesive soil ($\psi = 0$), and then Equation (53) becomes:

\[
Q = \frac{F}{N} = \frac{\int_{x_0}^{x_1} \bigl(1 + y'^2(x)\bigr)\,dx}{\int_{x_0}^{x_1} \bigl(\tilde y(x) - y(x)\bigr)\,y'(x)\,dx}. \tag{57}
\]
Suppose also that the slope profile is
\[
\tilde z(u) = \frac{H}{\pi} \arctan\left( \frac{\pi u}{H} \right); \qquad -\infty < u < \infty, \tag{58}
\]
which leads to the non-dimensional slope profile (see Figure 2)
\[
\tilde y(x) = \frac{1}{\pi} \arctan(\pi x); \qquad -\infty < x < \infty. \tag{59}
\]
In order for this problem to be well posed (i.e. to admit optimal profiles in the spirit of our Theorem 1), we also assume that the two end points $x_0$, $x_1$ are on the slope profile:

\[
y(x_0) = \tilde y(x_0), \qquad y(x_1) = \tilde y(x_1), \tag{60}
\]
and that there is a hard or rock stratum located at $y = k$ for some constant height $k$, so that feasible profiles must also comply with

\[
k \le y(x) \le \tilde y(x). \tag{61}
\]

Thus, the problem becomes to minimize the coefficient

\[
Q = \frac{\int_{x_0}^{x_1} \bigl(1 + y'^2(x)\bigr)\,dx}{\int_{x_0}^{x_1} \bigl(\tilde y(x) - y(x)\bigr)\,y'(x)\,dx}
\]
for slope profiles $\tilde y$ such that $\tilde y'(x) \ge 0$ and absolutely continuous admissible sliding lines $y$ subject to (60) and (61). We now apply the three-step strategy described in Section 2.

1. Q is always non-negative. The numerator is obviously always positive. Concerning the denominator, notice that
\[
\frac{1}{2} \frac{d}{dx} (\tilde y - y)^2 = (\tilde y - y)(\tilde y' - y').
\]
Therefore, as an immediate consequence,

\[
\int_{x_0}^{x_1} \bigl(\tilde y(x) - y(x)\bigr)\,y'(x)\,dx = \int_{x_0}^{x_1} \bigl(\tilde y(x) - y(x)\bigr)\,\tilde y'(x)\,dx, \tag{62}
\]
for any admissible sliding line $y$ (a numerical check of this identity is sketched after this list). But then the denominator can never be negative, because both factors in the last integral are non-negative. This equality of integrals has an interesting physical interpretation: the work due to the weight of the soil (the value of the integrals in (62)) can be calculated based on the sliding line, the slope profile, or any linear convex combination of both.

2. Upper bound for the denominator. Notice again that, because y˜0(x) ≥ 0,

\[
\int_{x_0}^{x_1} \bigl(\tilde y(x) - y(x)\bigr)\,\tilde y'(x)\,dx \le \int_{x_0}^{x_1} \bigl(\tilde y(x) - k\bigr)\,\tilde y'(x)\,dx = K,
\]
where $K \ge 0$, for any feasible sliding line $y$. If we assume here that there exists a $y(x)$ whose associated safety factor $M$ is finite, then the optimal coefficient $Q$ is finite and bounded above, and we have
\[
\frac{1}{K} \int_{x_0}^{x_1} \bigl(1 + y'(x)^2\bigr)\,dx \le Q \le M,
\]

3. Convexity-monotonicity. Because of (62), the denominator does not depend on derivatives, and then it is clear that convexity should only be checked for the numerator. Concerning the monotonicity of the quotient with respect to the numerator, this is also true because as shown above, the denominator can never be negative.

We can therefore conclude the existence of optimal sliding lines for our problem. The Euler-Lagrange equation is

0 d 0 f (x, y(x), y (x)) − f 0 (x, y(x), y (x)) 00 1y dx 1y 2y (x) Q = = 0 , (63) d 0 y˜ (x) f (x, y(x), y0(x)) − f 0 (x, y(x), y (x)) 2y dx 2y that is, Qy˜0(x) Q Q y00(x) = ⇒ y0(x) = y˜(x) + B ⇒ y(x) = y˜(x)dx + Bx + C, (64) 2 2 2 Z where B and C are arbitrary constants. Then, Equation (59) provides the set of extremals

\[
y(x) = \frac{Q}{2} \left[ \frac{x \arctan(\pi x)}{\pi} - \frac{\log(1 + \pi^2 x^2)}{2\pi^2} \right] + B x + C. \tag{65}
\]
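A short symbolic check (using sympy; purely illustrative) confirms that the family (65) satisfies the Euler-Lagrange equation (64), y'' = Q ytilde'/2, for the slope profile (59):

```python
import sympy as sp

x, Q, B, C = sp.symbols('x Q B C', real=True)

ytilde = sp.atan(sp.pi * x) / sp.pi            # non-dimensional slope profile (59)
y = Q / 2 * (x * sp.atan(sp.pi * x) / sp.pi
             - sp.log(1 + sp.pi**2 * x**2) / (2 * sp.pi**2)) + B * x + C   # family (65)

residual = sp.diff(y, x, 2) - Q * sp.diff(ytilde, x) / 2
print(sp.simplify(residual))   # prints 0
```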

The constants B and C, the end point coordinates x0 and x1 and constant Q must satisfy the end point conditions

\[
\tilde y(x) = y(x); \qquad x = x_0, x_1, \tag{66}
\]
and the transversality conditions

\[
y'^2(x) - 2\tilde y'(x)\,y'(x) - 1 = 0; \qquad x = x_0, x_1, \tag{67}
\]
where Equation (66) has been used. In this case, as shown in Figure 1, the critical sliding line is infinitely deep (a result well known to soil mechanics experts). In fact, if one tries to solve this system of equations, one can face numerical problems or get only approximate solutions (large absolute values of $x_0$ and $x_1$) because the end points go to infinity. As indicated above, to better illustrate the theory, we consider a hard or rock stratum located at $y = -1$ (see Figure 2). In this case we must minimize the functional

\[
Q = \frac{\int_{x_0}^{x_1} \bigl(1 + y_1'^2(x)\bigr)\,dx + \int_{x_1}^{x_2} \bigl(1 + y_0'^2(x)\bigr)\,dx + \int_{x_2}^{x_3} \bigl(1 + y_2'^2(x)\bigr)\,dx}{\int_{x_0}^{x_1} \bigl(\tilde y(x) - y_1(x)\bigr)\,y_1'(x)\,dx + \int_{x_1}^{x_2} \bigl(\tilde y(x) - y_0(x)\bigr)\,y_0'(x)\,dx + \int_{x_2}^{x_3} \bigl(\tilde y(x) - y_2(x)\bigr)\,y_2'(x)\,dx}, \tag{68}
\]
where $y_1$, $y_0$ and $y_2$ are the sliding lines associated with the intervals $(x_0, x_1)$, $(x_1, x_2)$ and $(x_2, x_3)$, respectively, and $Q$, $x_0$, $x_1$, $x_2$ and $x_3$ are to be determined.

Since the unknown function is defined by pieces, the functional becomes
\[
Q(x_0, x_1, y_1(x), y_2(x), \ldots, y_n(x)) = \frac{\sum_{i=1}^{n} \int_{x_0}^{x_1} f_1(x, y_i(x), y_i'(x))\,dx}{\sum_{i=1}^{n} \int_{x_0}^{x_1} f_2(x, y_i(x), y_i'(x))\,dx}, \tag{69}
\]
which is of the type (8), and the resulting system of Euler-Lagrange equations associated with the minimization of this functional is (see Revilla and Castillo (1977)):

\[
\bigl( f_{1y}(x, y_i(x), y_i'(x)) - Q_0\, f_{2y}(x, y_i(x), y_i'(x)) \bigr) - \frac{d}{dx}\bigl( f_{1y'}(x, y_i(x), y_i'(x)) - Q_0\, f_{2y'}(x, y_i(x), y_i'(x)) \bigr) = 0; \qquad i = 1, 2, \ldots, n, \tag{70}
\]
where
\[
Q_0 = \frac{\sum_{i=1}^{n} \int_{x_0}^{x_1} f_1(x, y_{i0}(x), y_{i0}'(x))\,dx}{\sum_{i=1}^{n} \int_{x_0}^{x_1} f_2(x, y_{i0}(x), y_{i0}'(x))\,dx},
\]
and $y_{i0}(x)$; $i = 1, 2, \ldots, n$, are the extremals leading to the optimal value of the functional. Then, the functions $y_1(x)$ and $y_2(x)$, according to (70), must satisfy the Euler-Lagrange equations
\[
y_j''(x) = \frac{Q\,\tilde y'(x)}{2}; \qquad j = 1, 2. \tag{71}
\]
Thus, similarly to the previous case, we get

\[
y_i(x) = \frac{Q}{2} \left[ \frac{x \arctan(\pi x)}{\pi} - \frac{\log(1 + \pi^2 x^2)}{2\pi^2} \right] + B_i x + C_i; \qquad i = 1, 2. \tag{72}
\]

Because of the hard stratum, we also know that y0(x) = −1. In addition, the following equations must be satisfied (see Figure 2): 1. The two end point conditions at A and D:

\[
y_1(x_0) = \tilde y(x_0), \qquad y_2(x_3) = \tilde y(x_3). \tag{73}
\]

2. The two end point transversality conditions at A and D:

\[
y_1'^2(x_0) - 2\tilde y'(x_0)\,y_1'(x_0) - 1 = 0; \qquad y_2'^2(x_3) - 2\tilde y'(x_3)\,y_2'(x_3) - 1 = 0, \tag{74}
\]
which can be written as

\[
y_1'(x_0) = \tilde y'(x_0) - \sqrt{1 + \tilde y'^2(x_0)}; \qquad y_2'(x_3) = \tilde y'(x_3) + \sqrt{1 + \tilde y'^2(x_3)}. \tag{75}
\]
3. The continuity conditions at B and C:

y1(x1) = −1 (76)

y2(x2) = −1. (77)

4. The first derivative continuity conditions at B and C:

\[
y_1'(x_1) = 0, \tag{78}
\]
\[
y_2'(x_2) = 0. \tag{79}
\]

Figure 2: Different pieces of the critical sliding line when it is limited by a hard or rock stratum.

The system of equations (68) and (72) to (79) allows us to obtain the 9 unknowns, which are:

B1 = 0.327013, B2 = −0.349503, C1 = −0.982243, C2 = −0.979661, Q = 6.13838,

x0 = −1.04476, x1 = −0.110713, x2 = 0.118995, x3 = 1.94323. The resulting sliding line is the one illustrated in Figure 2. The interested reader can see other applications to soil mechanics in Castillo and Luceño (1983).
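As a rough consistency check of these values (a sketch only, not part of the paper's procedure), one can plug the reported constants into the quotient (68) and verify that the resulting value approximately reproduces the reported Q:

```python
import numpy as np
from scipy.integrate import quad

# Reported solution from the text above.
B1, B2 = 0.327013, -0.349503
C1, C2 = -0.982243, -0.979661
Q      = 6.13838
x0, x1, x2, x3 = -1.04476, -0.110713, 0.118995, 1.94323

ytilde = lambda x: np.arctan(np.pi * x) / np.pi            # slope profile (59)

def y(x, B, C):      # extremal family (72)
    return Q / 2 * (x * np.arctan(np.pi * x) / np.pi
                    - np.log(1 + np.pi**2 * x**2) / (2 * np.pi**2)) + B * x + C

def dy(x, B, C):     # its derivative, y' = (Q/2) * ytilde + B
    return Q / 2 * np.arctan(np.pi * x) / np.pi + B

# Numerator and denominator of (68); the middle piece y0 = -1 contributes
# only its length to the numerator and nothing to the denominator.
num = (quad(lambda x: 1 + dy(x, B1, C1)**2, x0, x1)[0]
       + (x2 - x1)
       + quad(lambda x: 1 + dy(x, B2, C2)**2, x2, x3)[0])
den = (quad(lambda x: (ytilde(x) - y(x, B1, C1)) * dy(x, B1, C1), x0, x1)[0]
       + quad(lambda x: (ytilde(x) - y(x, B2, C2)) * dy(x, B2, C2), x2, x3)[0])

print(num / den, Q)   # the ratio should be close to the reported Q
```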

6 An example from Economics

A firm tries to program its production and investment policies to reach a given production rate k(T) and to maximize its future market competitiveness at the time horizon T. To solve this problem, a model leading to the maximization of a functional of the form described in Section 3 is presented below. The model is based on the following assumptions:

1. The firm competitiveness can be measured by the function f(k(T), a(T)), which depends not only on the accumulated capital (accumulated goods devoted to production) k(T), but also on the accumulated technology (capability given by the practical application of knowledge and experience) a(T), both at the time horizon t = T. In this paper, the function used to measure the firm market competitiveness is assumed to be

\[
f(k(T), a(T)) = k(T)^{\gamma_1}\, a(T)^{\gamma_2}, \tag{80}
\]

where $\gamma_1$ and $\gamma_2$ are given constants that measure the absolute and relative importance for competitiveness of capital and technology, respectively. Note that this implies that the firm can decide to sell its product at a small or no profit, or even at a loss, if this is compensated by the corresponding gained experience and associated technology acquisition.

Note also that the product of $k(T)^{\gamma_1}$ and $a(T)^{\gamma_2}$ has been chosen instead of another function, such as the sum, because large differences between capital and technology must be penalized appropriately. In other words, a lack of one of these components must be compensated by large amounts of the other component in order to reach the same competitiveness level.

2. The technology acquisition rate is $g(y(t), y'(t))$, where $y(t)$ is the sales rate at time $t$, which we assume equal to the production rate at the same time, and $g$ is a function giving the technology acquisition rate at time $t$. It is assumed to depend on the actual production rate $y(t)$ (to account for the required machines and other technology components, such as gained experience, etc.) and on the actual production rate change $y'(t)$ (important changes in the production rate are an incentive for technology increase). More precisely, the $y(t)$ argument accounts for machines and gained experience, and the $y'(t)$ argument accounts for technology due to sales rate changes (large positive or negative sales rate changes $y'(t)$ are a warning to the firm, which has to decide about technology increases either to face the production increase or to avoid the decrease).

3. The firm starts operating at time t = 0 and accumulates capital over time as

\[
k(T) = \int_0^T e^{-\rho(T - t)} \bigl[ y(t)\,p(t) - c(y(t), y'(t)) \bigr]\,dt, \tag{81}
\]
where $\rho$ is the discount rate, $p(t)$ is the unit product price, and $c(y(t), y'(t))$ is the cost of producing $y(t)$ units of product at time $t$ plus technology increases.

4. The accumulated technology is

\[
a(T) = \int_0^T e^{-\rho(T - t)}\, g(y(t), y'(t))\,dt, \tag{82}
\]
i.e., the discounted integral of the technology acquisition rate over time.

5. There is a price-sales (production) relationship regulating the market, given by the following function:
\[
h(y(t), p(t)) = (y(t) - y_0)(p(t) - p_0) - B = 0, \tag{83}
\]
whose hyperbolic form has been chosen to reflect not only that sales increase when the unit price decreases, but also a lower limit for the sales $y(t)$ at $y_0$ and a lower limit for the unit price $p(t)$ at $p_0$.

6. There is an upper bound $k$ on the size of the production rate change, so that $|y'| \le k$.

7. The initial sales rate and the target sales rate at t = T are given:

y(0) = 2; y(T ) = 3. (84)

They are the two boundary conditions required for the Euler-Lagrange second order differential equation below (see (88)) to have a unique solution.

Then, the firm problem can be stated as
\[
\mathop{\mathrm{Maximize}}_{y(t)} \quad \left( \int_0^T e^{-\rho(T - t)} \bigl[ y(t)\,p(t) - c(y(t), y'(t)) \bigr]\,dt \right)^{\gamma_1} \left( \int_0^T e^{-\rho(T - t)}\, g(y(t), y'(t))\,dt \right)^{\gamma_2} \tag{85}
\]
subject to conditions (80)-(84). Note that Equation (85) is of the form (8). To simplify the exposition we assume from here on that

\[
\gamma_1 = 1; \qquad \gamma_2 = 1, \tag{86}
\]
i.e., we use a product function for competitiveness. Moreover, we transform the maximization problem into a more familiar minimization process by introducing a minus sign in the first factor:

\[
\mathop{\mathrm{Minimize}}_{y(t)} \quad \left( \int_0^T e^{-\rho(T - t)} \bigl[ c(y(t), y'(t)) - y(t)\,p(t) \bigr]\,dt \right) \left( \int_0^T e^{-\rho(T - t)}\, g(y(t), y'(t))\,dt \right). \tag{87}
\]
Notice that the first factor in this minimum problem is expected to be negative while the second one will be positive. The infimum will thus be negative. This in particular implies that the structure of the functions $c$ and $g$ that we need in order to apply our main existence theorem of Section 2 is convex dependence of $c$ on $y'$ and concave dependence of $g$ on $y'$. The bound on $y'$ that is enforced immediately implies that the infimum is finite. If we put
\[
c(y, y') = c_0 + c_1 y + c_2 y'^2 \quad\text{and}\quad g(y, y') = \lambda y + \beta \sqrt{y' + k},
\]
where all constants are positive and have a precise interpretation in our model, then the main structural assumptions are correct, and a direct application of Theorem 1 furnishes optimal solutions for our problem. Note how the dependence of the production cost $c$ on $y'$ is of a much higher order than that of the technology acquisition rate $g$ on $y'$. This reflects the well known fact that technology is very difficult (costly) to incorporate.

According to (32), the Euler-Lagrange equation in this case is
\[
0 = k(T)\left( \lambda - \frac{\gamma\rho}{2\sqrt{y'(t) + k}} + \frac{\gamma\, y''(t)}{4\,(y'(t) + k)^{3/2}} \right) + a(T)\left( c_1 - p_0 + \frac{B\, y_0}{(y(t) - y_0)^2} - 2 c_2 \beta\bigl(\rho\, y'(t) + y''(t)\bigr) \right), \tag{88}
\]
which is a parametric family of second order ordinary differential equations depending on the two parameters k(T) and a(T). To obtain the particular values of k(T) and a(T) leading to the optimal solution, together with those of the two arbitrary integration constants, we have the four equations in (81), (82) and (84).

\[
\rho = 0.05;\ c_0 = 3;\ c_1 = 0.5;\ c_2 = 3;\ p_0 = 1;\ y_0 = 1;\ k = 4;\ \gamma = 1;\ \lambda = 1/2;\ \beta = 1/4;\ B = 2;\ T = 1.
\]
Solving (88) together with (81), (82) and (84), one obtains the optimal solution, which appears in Figure 3, where the resulting functions y(t), p(t), k(t) and a(t) are shown. Note that the company increases the sales rate y(t) even though this implies a large reduction of the unit price p(t) and a reduction in the earnings rate (note that k'(t) is decreasing), because this is compensated by the technology increase (note in Figure 3 that not only a(t) but also a'(t) increases).
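For readers who want to experiment, the following sketch evaluates the competitiveness objective k(T) a(T) of (85) for a simple, admittedly non-optimal candidate path y(t) = 2 + t satisfying (84); the functions c, g and p and the parameter values are the ones listed above, and the candidate path is an assumption made only for illustration.

```python
import numpy as np

def trap(vals, t):
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(t)) / 2.0)

rho, c0, c1, c2 = 0.05, 3.0, 0.5, 3.0
p0, y0, k       = 1.0, 1.0, 4.0
lam, beta, B, T = 0.5, 0.25, 2.0, 1.0

t  = np.linspace(0.0, T, 2001)
y  = 2.0 + t                        # candidate path, y(0) = 2, y(1) = 3
yp = np.ones_like(t)                # y'(t) = 1, well inside |y'| <= k

p    = p0 + B / (y - y0)                   # unit price from (83)
cost = c0 + c1 * y + c2 * yp**2            # production cost c(y, y')
g    = lam * y + beta * np.sqrt(yp + k)    # technology acquisition rate g(y, y')

disc = np.exp(-rho * (T - t))
kT = trap(disc * (y * p - cost), t)        # accumulated capital (81)
aT = trap(disc * g, t)                     # accumulated technology (82)

print(kT, aT, kT * aT)   # competitiveness of this (non-optimal) candidate
```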


Figure 3: Solution to the economic problem, showing the sales y(t), unit price p(t), accumulated capital k(t) and accumulated technology a(t) optimal functions.

7 Conclusions

The main conclusions of this paper are:

1. A general method for analyzing the existence of solutions and solving calculus of variations problems involving functionals of the general form (8) has been given. This extends the classical calculus of variations problems to much more general cases.

2. In particular, the method has been applied to the particular cases of product and quotient functionals, and two illustrative examples of applications involving these types of functionals, one related to slope stability and the other to economics, have been given.

3. It has been shown that the first order necessary conditions for the general functional (8) to have an optimum coincide with those of the classical functional (29). This allows one to obtain not only the Euler-Lagrange equations for the optimality of functional (8), but also the corresponding natural, transversality and Weierstrass-Erdmann conditions.

4. It has been shown that the problem of minimizing a general functional of the form (8) is equivalent to the minimization of a quadratic functional of the form (21), in the sense of sharing the first and second variations.

5. Sufficient conditions for the existence of global optimal solutions for the variational prob- lem associated with a functional of the form (8) have been given, and illustrated by two examples.

6. The proposed methodology is easily generalized to other calculus of variations problems involving higher order derivatives (Theorem 2) and multiple integrals. Thus, the methods developed allow a wide range of new applications to be addressed by the calculus of variations.

References

Bolza, O. (1973). Lectures on the Calculus of Variations. Chelsea Publishing Company, New York.

Buckingham, E. (1915). The principle of similitude. Nature, 96(3):396–397.

Castillo, E. and Revilla, J. (1977). One application of the calculus of variations to the stability of slopes. In Proceedings of the 9th International Conference on Soil Mechanics and Foundations Engineering, volume 2, pages 25–30, Tokyo.

Castillo, E. and Luceño, A. (1982). A critical analysis of some variational methods in slope stability analysis. International Journal for Numerical and Analytical Methods in Geomechanics, 6:195–209.

Castillo, E. and Luceño, A. (1983). Variational methods and the upper bound theorem. Journal of Engineering Mechanics (ASCE), 109(5):1157–1174.

Dacorogna, B. (1989). Direct methods in the calculus of variations, volume 78 of Applied Mathematical Sciences. Springer-Verlag, Berlin.

Dacorogna, B. (1992). Introduction au calcul des variations, volume 3 of Cahiers Mathématiques de l'École Polytechnique Fédérale de Lausanne [Mathematical Papers of the École Polytechnique Fédérale de Lausanne]. Presses Polytechniques et Universitaires Romandes, Lausanne.

Elsgolc, L. (1962). Calculus of Variations. Pergamon Press, London-Paris-Frankfurt.

Euler, L. (1744). "Metod Nakhozhdeniia Krivykh Linii, Obladiaiushchikh Svoistvami Maksimuma Libo Minimuma, Ili Reshenie Izoperimetricheskoi Zadachi Vziatoi v Samom Shirokom Smysle" (Method of Finding Curves Possessing Maximum or Minimum Properties, or The Solution of the Isoperimetric Problem Taken in Its Broadest Sense). GITTL. Translated from the 1744 ed.

Forray, M. J. (1968). Variational Calculus in Science and Engineering. McGraw-Hill Book Company, New York.

Garber, M. (1973). Variational methods for investigating the stability of slopes. Soil Mechanics and Foundations Engineering, 10(1):77–79.

Gelfand, I. M. and Fomin, S. V. (1963). Calculus of Variations. Prentice Hall, Englewood Cliffs, N.J.

Goldstine, H. H. (1980). A history of the calculus of variations from the 17th to the 19th Century. Springer-Verlag, New York, Heidelberg, Berlin.

Janbu, N. (1957). Earth pressure and bearing capacity calculations by generalized procedure of slices. In Proceedings of the 4th International Conference on Soil Mechanics and Foundations Engineering, London.

Luceño, A. (1979). Análisis de los métodos variacionales aplicados a los problemas de estabilidad en Mecánica del suelo. Utilización del teorema de la cota superior. PhD thesis, Escuela de Ingenieros de Caminos, Canales y Puertos, University of Cantabria, Santander, Spain.

Pedregal, P. (1997a). Nonlocal variational principles. Nonlinear Analysis, 29:1379–1392.

Pedregal, P. (1997b). Parametrized Measures and Variational Principles. Birkhauser-Verlag, Basel, First edition.

Petrov, I. P. (1968). Variational methods in optimum control theory. Academic Press, New York and London. Translated from the Russian by M. D. Friedman and H. J. Zeldam.

Revilla, J. and Castillo, E. (1977). The calculus of variations applied to stability of slopes. Geotechnique, 27(1):1–11.
