
Classical Problems in Calculus of Variations and Optimal Control


Stephen Lamb

Supervised by Lyle Noakes

The University of Western Australia

Vacation Research Scholarships are funded jointly by the Department of Education and Training and the Australian Mathematical Sciences Institute.

1 Introduction

Calculus of variations (COV) is a field of mathematics that deals with finding the extremals of functionals defined by Lagrangian functions, in an attempt to find optimal solutions. Although the subject has a long and rich history, current research in the field is still producing new results. Optimal control is the generalisation of the calculus of variations. My project focuses on the classical problems of COV, and how the tools of optimal control can be used to simplify results, and even produce results where COV was too blunt an instrument. My project started with a brief study of some classical problems in COV, after which I began generating the differential equations that we solve to answer the problems. These include such problems as the brachistochrone and the catenary.

The next task was to learn about optimal control and its link to COV. Once able to write out our control problem, I needed a tool, analogous to the Euler-Lagrange equation above, to solve for extremals, which in optimal control is the Pontryagin Maximum Principle. The most crucial part of the project was learning and understanding how to use the Pontryagin Maximum Principle to solve optimal control problems. We then extended it from Rn to other smooth manifolds using Riemannian geometry. We used our framework of optimal control to solve two very important classical problems: geodesics on Riemannian manifolds and elastic curves (elastica). Lastly, after generating solutions to these problems for different spaces, we needed a method of solving these boundary value problems numerically, for problems where no closed form solution exists.

The study of classical problems in calculus of variations and optimal control has provided me with a good foundation for further study into differential geometry, optimisation methods and functional analysis. It has also given good insight into the topics of topology, Riemannian geometry and analysis. Although I have acquired a deeper level of understanding through this research project, I am left with far more questions to answer.

Future work on the topic involves studying the dependence of these classical problems on their assumed conditions, e.g. the brachistochrone under different gravitational field conditions, or the catenary under an air viscosity that changes as a function of height. Further research into geodesics phrased using optimal control could involve looking into geodesics on manifolds that are not as nice and symmetric as the ones chosen here. Research into the existence of abnormal geodesics on sub-Riemannian manifolds would also yield enlightening results. For the problem of elastic curves, future work could involve the study of elastic curves in other Riemannian manifolds. Lastly, further investigation into other numerical methods for solving these problems, aimed at reducing computational cost, could prove worthwhile.

2 Calculus of Variations

To understand what calculus of variations is, and in turn what optimal control is, we require an understanding of the Lagrangian function and how to determine extremals from it.

Definition 2.1. The Lagrangian L is an energy function defined on the tangent bundle of a manifold, mapping into the real numbers,

$$ L : TM \to \mathbb{R}, $$

written as L(q(t), q̇(t)).

Calculus of variations is concerned with finding the extremals of functionals defined by:

$$ J(q) = \int_{t_1}^{t_2} L(q(t), \dot{q}(t)) \, dt, \tag{1} $$

where extremals are the critical points of the functional.

Theorem 2.2. If q is an extremal of (1), then it must satisfy the Euler-Lagrange equation:

$$ \frac{d}{dt} \frac{\partial L}{\partial \dot{q}} = \frac{\partial L}{\partial q} \tag{2} $$

Proof. Consider the system defined above by (1), and assume that q is an extremal. Without loss of generality, we will assume that q is a minimising function. Hence, for any variation η vanishing at the endpoints and any small ε, we can conclude that:

$$ L(q(t), \dot{q}(t)) < L(q(t) + \epsilon\eta(t), \dot{q}(t) + \epsilon\dot{\eta}(t)) $$

Therefore, if we take the derivative of J with respect to ε and evaluate it at ε = 0, the following result is obtained:

$$ \frac{d}{d\epsilon} J \Big|_{\epsilon=0} = \frac{d}{d\epsilon} \int_{t_1}^{t_2} L(q(t) + \epsilon\eta(t), \dot{q}(t) + \epsilon\dot{\eta}(t)) \, dt \Big|_{\epsilon=0} = 0 $$

$$ \implies \int_{t_1}^{t_2} \left( \frac{\partial L}{\partial q} \eta + \frac{\partial L}{\partial \dot{q}} \dot{\eta} \right) dt = 0 $$

Using integration by parts on the second term above (the boundary term vanishes, since η(t1) = η(t2) = 0), we get the following:

$$ \implies \int_{t_1}^{t_2} \left( \frac{\partial L}{\partial q} - \frac{d}{dt} \frac{\partial L}{\partial \dot{q}} \right) \eta \, dt = 0 $$

Since η is arbitrary, the integrand must vanish, giving the Euler-Lagrange equation:

$$ \frac{d}{dt} \frac{\partial L}{\partial \dot{q}} = \frac{\partial L}{\partial q} $$
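As a quick sanity check of equation (2), SymPy can generate the Euler-Lagrange equation for a concrete Lagrangian. The harmonic-oscillator Lagrangian below is an illustrative choice of mine, not one from the report:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
q = sp.Function('q')

# Illustrative Lagrangian: L = q_dot^2/2 - q^2/2 (the harmonic oscillator)
L = sp.Rational(1, 2) * q(t).diff(t)**2 - sp.Rational(1, 2) * q(t)**2

# euler_equations returns equation (2) in the form dL/dq - d/dt(dL/dq_dot) = 0
el = euler_equations(L, [q(t)], [t])[0]
```

For this Lagrangian the Euler-Lagrange equation reduces to the expected equation of motion q̈ + q = 0.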

2.1 Classical Problems in COV

2.1.1 Brachistochrone

The brachistochrone is a classical problem that deals with finding a curve between two points (x0, y0) and (x1, y1), with x0 < x1 and y0 < y1, such that the time taken for a bead to slide along the curve between the two points is minimal.

Time is the quantity that is going to be minimised. Let the total length of the curve be L. Using Newton's laws of motion, we can derive the equation for the total time taken to traverse the curve:

$$ T(y) = \int_0^L \frac{ds}{v(s)} \tag{3} $$

(where v(s) is the velocity and s is the arclength)

Using the conservation of energy principle, we can define this expression in terms of y(x).

$$ KE + PE = E = \text{constant} $$

$$ E = mgy(x) + \frac{1}{2} m v(x)^2 \tag{4} $$

Rearranging for v(x), we get the following:

$$ v(x) = \sqrt{\frac{2(E - mgy(x))}{m}} \tag{5} $$

Thus, using (3), we get the following result:

$$ T(y) = \int_{x_0}^{x_1} \frac{\sqrt{1 + \dot{y}^2}}{\sqrt{\frac{2(E - mgy(x))}{m}}} \, dx \tag{6} $$

with y(x0) = y0 and y(x1) = y1. Simplifying this expression with the substitution

$$ z = \frac{1}{2g} \left( \frac{2E - 2gmy(x)}{m} \right), $$

we get the following functional:

$$ J(z) = \frac{1}{\sqrt{2g}} \int_{x_0}^{x_1} \sqrt{\frac{1 + \dot{z}^2}{z}} \, dx \tag{7} $$

Using the Euler-Lagrange equation, we derive the following differential equation:

$$ z(1 + \dot{z}^2) = c \tag{8} $$

Using the substitution ż = tan(θ), we have 1 + ż² = sec²(θ). Hence the curve that satisfies the conditions of the brachistochrone problem is the parametric equation of a cycloid:

$$ y(\theta) = d_1(1 + \cos(2\theta)) \tag{9} $$

$$ x(\theta) = d_2 - d_1(2\theta + \sin(2\theta)) \tag{10} $$

Figure 1: Brachistochrone curve plotted between fixed points (−π/2, 1) and (0, 0)
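The cycloid (9)-(10) can be checked numerically against the first integral (8): along the curve, z(1 + ż²) should be constant (equal to 2d1 for this parametrisation, since the z and y variables differ only by a vertical shift). A small NumPy sketch, with arbitrary illustrative values for the constants d1 and d2:

```python
import numpy as np

d1, d2 = 1.0, 0.0                              # illustrative constants
theta = np.linspace(0.1, 1.3, 400)

z = d1 * (1 + np.cos(2 * theta))               # cycloid, the form of equation (9)
x = d2 - d1 * (2 * theta + np.sin(2 * theta))  # equation (10)

dz_dx = np.gradient(z, x)                      # z_dot by finite differences
invariant = z * (1 + dz_dx**2)                 # equation (8): constant, equal to 2*d1
```

Up to finite-difference error, `invariant` stays at 2d1 across the whole parameter range.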

This problem can be extended and studied in many ways. Some extensions include adding a forcing/damping system, or considering a non-constant gravitational field that depends on y(x). Harry H. Denman explores other solutions to the brachistochrone problem under varying fields. See [2] for more.

2.1.2 Tautochrone

The tautochrone problem asks for a curve such that, under constant gravitational acceleration, a particle placed anywhere on the curve will take the same amount of time to reach the bottom of the curve, no matter its starting position. In a constant gravitational field, it can be easily shown that the brachistochrone curve satisfies this property. [2]

2.1.3 Catenary

The catenary is sometimes referred to as the hanging chain problem. It seeks to determine the shape of the curve made by a chain or wire hanging between two fixed points in a constant gravitational field; the resulting curve is called the catenary curve, whose name originates from the problem. The problem can also be described as finding the curve that minimises gravitational potential energy, which we take advantage of to derive our answer. We start by defining the functional to minimise:

$$ PE = \int_0^L m \, g \, y(s) \, ds \tag{11} $$

(where PE is the potential energy of the whole curve, L is the length of the curve and s is the arclength along the curve)

Using the fact that ds² = dx² + dy², i.e. ds = √(1 + (dy/dx)²) dx, we can rewrite the above equation as:

$$ PE = \int_{x_0}^{x_1} m \, g \, y(x) \sqrt{1 + \left(\frac{dy}{dx}\right)^2} \, dx \tag{12} $$

Now we can employ the Euler-Lagrange equation above. Hence any extremal of the above functional, with integrand F = m g y √(1 + ẏ²), must satisfy the following:

$$ \frac{d}{dx}\left(\frac{\partial F}{\partial \dot{y}}\right) - \frac{\partial F}{\partial y} = 0 $$

Since F has no explicit x dependence, the Beltrami identity ẏ ∂F/∂ẏ − F = constant gives the following first integral:

$$ \frac{y \, \dot{y}^2}{\sqrt{1 + \dot{y}^2}} - y\sqrt{1 + \dot{y}^2} = \text{constant} $$

$$ \frac{y^2}{1 + \dot{y}^2} = D_1^2 \tag{13} $$

We ignore the trivial case and only consider D1 ≠ 0 to get:

$$ \dot{y} = \sqrt{\frac{y^2}{D_1^2} - 1} \tag{14} $$

Separating variables and integrating, we get the following equation:

$$ x = D_1 \ln\left( \frac{y + \sqrt{y^2 - D_1^2}}{D_1} \right) + D_2 \tag{15} $$

Rearranging the above equation, we get the solution of the catenary curve in terms of y as:

$$ y = D_1 \cosh\left( \frac{x - D_2}{D_1} \right) \tag{16} $$

which is graphed below. (D1 and D2 are constants.)

Figure 2: Plot of equation (16) - Catenary curve/Hanging chain
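The closed form (16) can be checked symbolically against the first integral (13); a small SymPy sketch:

```python
import sympy as sp

x, D1, D2 = sp.symbols('x D1 D2', positive=True)

y = D1 * sp.cosh((x - D2) / D1)     # the catenary, equation (16)
yp = sp.diff(y, x)                  # y_dot = sinh((x - D2)/D1)

# Equation (13): y^2 / (1 + y_dot^2) should reduce to the constant D1^2
first_integral = sp.simplify(y**2 / (1 + yp**2))
```

The simplification uses the hyperbolic identity 1 + sinh² = cosh², leaving exactly D1².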

This curve looks similar to a parabola; however, the two are very different. Catenary curves appear in nature and also lend themselves to building ideal arches.

2.1.4 Isoperimetric Problem

Isoperimetric problems are constrained variational problems, solved by first transforming them into unconstrained problems using Lagrange multipliers. This allows the solution of variational problems whose extremals are constrained to lie on curves or satisfy other conditions. These are very interesting problems; however, further research is needed to fully understand the importance of Lagrange multipliers.

2.1.5 Minimal Surfaces

Minimal surfaces are surfaces that have zero mean curvature (see [1] for a definition). They are also commonly described as surfaces that minimise surface area given a set of boundary conditions. These problems are usually posed in R3, which coincides with the common view of surfaces. In mathematical terms, if the surface is parameterised as x = (u, v, f(u, v)), then it is a minimal surface if it satisfies the following condition:

$$ (1 + f_v^2) f_{uu} - 2 f_u f_v f_{uv} + (1 + f_u^2) f_{vv} = 0 \tag{17} $$
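As an illustration of equation (17), the helicoid, written as the graph f(u, v) = arctan(v/u), is a classical minimal surface; the helicoid as an example is my own choice. A SymPy check that it satisfies the equation:

```python
import sympy as sp

u, v = sp.symbols('u v', positive=True)
f = sp.atan(v / u)                  # the helicoid written as a graph z = f(u, v)

fu, fv = sp.diff(f, u), sp.diff(f, v)
fuu, fvv = sp.diff(f, u, 2), sp.diff(f, v, 2)
fuv = sp.diff(f, u, v)

# Left-hand side of the minimal surface equation (17); zero for a minimal surface
lhs = sp.simplify((1 + fv**2) * fuu - 2 * fu * fv * fuv + (1 + fu**2) * fvv)
```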

The simplest minimal surface is the flat plane in R3. Another example of a minimal surface is the catenoid, which is the surface of revolution of a catenary curve. This is pictured below:

Figure 3: Catenoid figure

Minimal surfaces are wonderful mathematical objects that are fascinating to study. The idea of minimal surfaces can be extended to minimal submanifolds. See [12] for more.

3 Optimal Control

Optimal control is an important extension of COV to a more general framework, allowing a greater range of problems to be solved as a result. All COV problems can be posed as optimal control problems. This is a result of how optimal control problems are set out, and the similarities it shares with COV. Rather than operate with a Lagrangian, optimal control problems use a different energy function known as the Hamiltonian.

Definition 3.1. The Hamiltonian is an energy function defined on the cotangent bundle of a manifold, phrasing the problem in the phase space rather than the state space.

Definition 3.2. A Legendre transformation is a type of transformation that provides a connection between the Euler-Lagrange equation and the Hamiltonian system of equations. The transformation can be phrased as follows:

Let y : [t1, t2] → R, and let u = ẏ(t).

Now suppose there is a Lagrangian L(y, ẏ), and let λ = ∂L/∂ẏ. From this, we can construct our Hamiltonian:

$$ H(y, \lambda_0, \lambda, u) = \lambda u - \lambda_0 L(y, u) \tag{18} $$

Using a Legendre transformation, we can phrase our problem in phase space and solve 2n first-order equations instead of n second-order ones. However, do they generate the same extremals for the problem?

Theorem 3.3. The extremals of the Lagrangian function are exactly those of the Hamiltonian function under the Legendre transformation. [10]
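As a concrete sketch of the transformation (18) with λ0 = 1, SymPy can carry out the elimination of u for a mechanical Lagrangian of my choosing (the potential V is an arbitrary placeholder):

```python
import sympy as sp

y, u, lam = sp.symbols('y u lambda')
V = sp.Function('V')                 # an arbitrary potential (illustrative)

L = sp.Rational(1, 2) * u**2 - V(y)  # mechanical Lagrangian, with y_dot written as u

# Legendre transformation: lambda = dL/du, then H = lambda*u - L with u eliminated
u_star = sp.solve(sp.Eq(lam, sp.diff(L, u)), u)[0]   # here simply u = lambda
H = sp.expand((lam * u - L).subs(u, u_star))         # H = lambda^2/2 + V(y)
```

The result is the familiar kinetic-plus-potential Hamiltonian, as Theorem 3.3 suggests it should be.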

Now that we know that we can solve the same types of problems in optimal control as COV, we’ll need to know how to solve them.

3.1 Pontryagin Maximum Principle

The Pontryagin Maximum Principle (PMP) is a very elegant tool used in optimal control to solve control problems. In some sense, it does for the Hamiltonian what the Euler-Lagrange equation does for the Lagrangian. However, because it lends itself to a more general framework of problems, it can be used for much more. The PMP is phrased as follows. Imagine that we had a control problem, solving for the extremals of the following Hamiltonian:

$$ H(y, \lambda, \lambda_0, u) = \lambda_0 L(y, u, t) + \lambda(u) \tag{19} $$

and also assume that we know the optimal trajectory is (y*, u*); then the following conditions must be satisfied.

1. Non-triviality: (λ0, λ) ≠ 0

2. Costate equations: λ̇ = −∂H/∂y

3. Maximum condition: H(y*, λ, λ0, u*) = max_{u ∈ V} H(y*, λ, λ0, u)

The u's in these problems are part of a larger set U called the set of admissible controls. This is the function space that fits our conditions for a solution. For the purpose of this project, we have only considered piecewise continuous functions with a finite number of discontinuities; however, there exist much larger function spaces that are required to solve some optimal control problems, as the answer is not contained in the smaller set we consider. In his book [7], Pontryagin considers such a larger set to solve problems.

The first condition of the PMP ensures the non-existence of trivial solutions when the Hamiltonian is equal to 0. The second follows from the existence of the Hamiltonian, satisfying the Hamiltonian system of equations.

The last condition is by far the most important. Note that the set V is the set of all admissible controls u satisfying the Hamiltonian system of equations. So the optimal solution is determined by the control, and u is chosen such that the value of the Hamiltonian is maximised. This is sometimes referred to as the strong condition, as it lends itself to problems with bounded controls. This is one reason that the optimal control framing of problems is more powerful, and may generate results where COV might fail.

The weaker and more familiar condition requires that our optimal solution satisfy the following condition:

$$ \frac{\partial H}{\partial u} = 0 \tag{20} $$

which is used for unbounded problems, where the minima/maxima coincide with the local minima/maxima. This condition will be used throughout the rest of the report, as most of the problems considered are unbounded.
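A minimal worked example of conditions (19)-(20), under the sign convention λ0 = −1: the problem of minimising ∫ u² dt subject to ẏ = u, y(0) = 0, y(1) = 1 is my own illustration, not one from the report. The PMP machinery reduces it to a constant control:

```python
import sympy as sp

t, lam, u = sp.symbols('t lambda u')

# Hamiltonian (19) for L = u^2 and dynamics y_dot = u, taking lambda_0 = -1
H = -u**2 + lam * u

# Costate equation: lambda_dot = -dH/dy = 0, so lambda is constant.
# Weak condition (20): dH/du = 0 picks out the optimal control.
u_star = sp.solve(sp.diff(H, u), u)[0]       # u* = lambda/2, a constant

# With a constant control, y(t) = u* t, and y(1) = 1 fixes lambda
lam_val = sp.solve(sp.Eq(u_star, 1), lam)[0]
y_opt = u_star.subs(lam, lam_val) * t        # the extremal y(t) = t
```

The extremal is the straight line y(t) = t, matching the intuition that a quadratic cost on speed is minimised at constant speed.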

3.2 Riemannian Geometry and Differential Geometry

Optimal control can be used to solve problems in Riemannian geometry. Riemannian geometry lets us give a smooth manifold a natural structure called a Riemannian metric (defined below), which allows us to measure lengths in the space. With this metric on the smooth manifold, the PMP extends to spaces other than Rn.

Definition 3.4. A Riemannian metric on a smooth manifold M is a smooth assignment of an inner product to every pair of vectors V, W ∈ TpM, for each p ∈ M. The inner product is symmetric, positive definite and varies smoothly with p. [4] The metric is defined as:

$$ G = g_{ik} \, dx^i dx^k \quad \text{(in local coordinates)} \tag{21} $$

We can write the vectors above in these coordinates as:

$$ V = V^i \frac{\partial}{\partial x^i} \qquad W = W^i \frac{\partial}{\partial x^i} \tag{22} $$

So given two vectors V, W ∈ TpM, the inner product can be seen as follows:

   1  g11(x) ... g1n(x) W 1 n < V, W >G |p = (V , ..., V )  ......   ...  (23) n gn1(x) ... gnn(x) W

where $g_{ik} = \langle \frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^k} \rangle$.

4 Geodesics on Riemannian Manifolds

As seen earlier, geodesics are curves that we can characterise as energy minimizing curves on the manifold, that satisfy the Euler-Lagrange equation above. Taking an optimal control approach, we can derive equivalent conditions for geodesics on smooth manifolds defined in phase space.

Using a Legendre transformation on the Lagrangian function, the following Hamiltonian is generated:

$$ H(y, u, \lambda, \lambda_0) = \lambda_0 u^T G u + \lambda(u) \tag{24} $$

We are not considering bounded cases, hence the only extremals to consider are the critical points of ∂H/∂u. We also assume that we are only looking at normal geodesics, i.e. λ0 ≠ 0, so without loss of generality we normalise to λ0 = −1. This generates our Hamiltonian system of equations:

$$ \dot{y}(t) = u = \frac{\partial H}{\partial \lambda} \tag{25} $$

$$ \dot{\lambda}_i(t) = -\frac{\partial H}{\partial y^i} = u^T \frac{\partial G}{\partial y^i} u \tag{26} $$

Hence our extremals are the curves y(t) such that:

$$ \frac{\partial H}{\partial u} = \lambda - 2Gu = 0 \implies \lambda = 2Gu \tag{27} $$

We take the derivative of (27) with respect to t, equate it with (26), and get the following:

$$ \dot{\lambda}_j = \frac{d}{dt}(2 g_{ij} \dot{y}^i) = 2 \frac{\partial g_{ij}}{\partial y^k} \dot{y}^i \dot{y}^k + 2 g_{ij} \ddot{y}^i = \frac{\partial g_{ik}}{\partial y^j} \dot{y}^i \dot{y}^k \tag{28} $$

Rearranging the above and relabelling indices, we get the following equation, known as the geodesic equation:

$$ \frac{d^2 y^k}{dt^2} + \Gamma^k_{ij} \frac{dy^i}{dt} \frac{dy^j}{dt} = 0 \tag{29} $$

with the symbol Γ^k_ij, a Christoffel symbol, defined below:

$$ \Gamma^m_{ij} = \frac{1}{2} g^{mk} \left( \frac{\partial g_{ik}}{\partial x^j} + \frac{\partial g_{kj}}{\partial x^i} - \frac{\partial g_{ij}}{\partial x^k} \right) \tag{30} $$

N.B. The Einstein summation convention was used in the derivation of this equation, so we must sum over all repeated indices when we use it.
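Equation (30) is mechanical enough to compute symbolically. A sketch for the Euclidean plane in polar coordinates, an example metric of my choosing (g = diag(1, r²)), whose non-zero symbols Γ^r_θθ = −r and Γ^θ_rθ = 1/r are classical:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]

g = sp.diag(1, r**2)    # Euclidean plane in polar coordinates (illustrative metric)
ginv = g.inv()

def christoffel(m, i, j):
    """Gamma^m_ij from equation (30); k is the summed (repeated) index."""
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[m, k]
        * (sp.diff(g[i, k], coords[j]) + sp.diff(g[k, j], coords[i])
           - sp.diff(g[i, j], coords[k]))
        for k in range(len(coords))))
```

Here `christoffel(0, 1, 1)` returns −r and `christoffel(1, 0, 1)` returns 1/r, which substituted into (29) give the familiar polar geodesic equations r̈ − rθ̇² = 0 and θ̈ + (2/r)ṙθ̇ = 0.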

4.1 Geodesics on a Sphere

As the sphere is a smooth manifold, we can take coordinate charts and find the geodesics by this method. Using the usual stereographic projections, we get two charts, S² \ {N} and S² \ {S} (where N and S are the north and south poles respectively). Let x(t) ∈ S² and y(t) ∈ R²; then we can write x(t) in terms of y(t) as follows:

$$ x(t) = \frac{1}{\|y(t)\|^2 + 1} \left( 2y(t), \; \|y(t)\|^2 - 1 \right) \tag{31} $$

As the stereographic projection is a diffeomorphism onto R², the metric on the chart must measure the same lengths. Hence the following condition must be satisfied:

$$ \|\dot{y}(t)\|_G^2 = \|\dot{x}(t)\|_E^2 \quad \text{(where E stands for the Euclidean norm)} \tag{32} $$

From the above equations, G is found to be:

$$ G = \frac{4 \, \delta_{ij}}{(1 + \|y(t)\|^2)^2} \quad \text{(where } \delta_{ij} \text{ is the Kronecker delta)} \tag{33} $$

Now finding the geodesics reduces to finding the extremals of the following equation:

$$ H(y, \lambda, \lambda_0, u) = \lambda_0 \|\dot{y}(t)\|_G^2 + \lambda(u) \tag{34} $$

Solving the system of equations, we see that the geodesics on a sphere are arcs of great circles. We should also note that geodesics between two fixed points are not unique as another geodesic would be the arc joining the two fixed points along the larger arc. Hence the geodesic pictured below can be referred to as the minimal geodesic of the sphere given these two fixed points.

Figure 4: Geodesic on S2 - Great circle arcs (code adapted from [11])

4.2 Geodesics on a Cylinder

For these geodesics, we employ a quicker and more elegant solution, using the theorem below.

Theorem 4.1. If two manifolds M and N are isometric, then the geodesics of M are mapped to geodesics of N under the isometry. [6]

This means instead of solving the geodesic equation, (which may not always give closed form solutions), we can instead prove that two surfaces are isometric, and derive the desired geodesics from there. So our goal is to show that the plane and the cylinder are isometric to each other.

We construct our isometry as follows. We take the open interval (0, 2π), which is clearly isometric to the unit circle minus a point under the mapping:

$$ \pi : \theta \mapsto (\cos(\theta), \sin(\theta)), \quad \theta \in (0, 2\pi) \tag{35} $$

We take the Cartesian product of both curves with the real line, which preserves the isometry. Hence an infinite strip in R² is locally isometric to the cylinder, so locally the geodesics are the same. Below is a plot of one of the three types of geodesic on the cylinder. This geodesic, the circular helix, is the image of a diagonal line on the plane. The other two types are the images of vertical and horizontal lines on the plane.

Figure 5: Circular helix geodesic on cylinder (code adapted from [11])

4.3 Abnormal Geodesics

Definition 4.2. An abnormal geodesic is a geodesic that satisfies λ0 = 0 while not violating the non-triviality condition of the Pontryagin Maximum Principle. Hence its costate variable λ must be non-zero. Applying this condition to our geodesic problem on a Riemannian manifold, we get the following Hamiltonian:

H(u, λ) = λ(u) (36)

Since geodesics are locally minimising curves, it is a necessary condition that any abnormal geodesic on a Riemannian manifold satisfy:

$$ \frac{\partial H}{\partial u} = 0 \implies \lambda = 0 \tag{37} $$

This contradicts the non-triviality condition of the PMP; hence there exist no abnormal geodesics on Riemannian manifolds.

I was not able to investigate the existence of abnormal geodesics on sub-Riemannian manifolds, but their existence has been proven by numerous sources, and they are an interesting topic of research. See [5] for more on this.

5 Elastica

The problem of elastica goes back to the time of Euler. The problem sets out to minimise the bending energy of a thin wire. For all the problems considered here, the wire is of negligible thickness and constant length. Mathematically, we are looking for curves that are minimisers of the following functional:

$$ \int_{t_0}^{t_1} k(t)^2 \, dt \tag{38} $$

(where k is the curvature of the curve y). The other condition placed on these curves is that they must have unit speed, i.e.

$$ \langle \dot{y}, \dot{y} \rangle = 1 \tag{39} $$

5.1 Elastica in E2

We now consider elastic curves in Euclidean 2-space. We seek to find a curve y defined as:

$$ y : [t_0, t_1] \to E^2 \tag{40} $$

with y having unit speed as mentioned above. Taking the derivative of both sides of (39), we get:

$$ \langle \dot{y}, \ddot{y} \rangle = 0 \tag{41} $$

Hence we have that the velocity and acceleration of the curve are perpendicular to each other. As E² ≅ C, we can write this as:

$$ \ddot{y} = i k \dot{y} \tag{42} $$

where k is the curvature as before. Working in the complex plane, from (39) we get expressions for ẏ and ÿ:

$$ \dot{y} = e^{i\theta} \tag{43} $$

$$ \ddot{y} = i \dot{\theta} e^{i\theta} \tag{44} $$

From these equations, we can formulate our optimal control problem. Letting u(t) = θ̇, it follows from (42) that

$$ k(t) = u(t) \tag{45} $$

From the above equations, we formulate the following Hamiltonian:

$$ H(y, \theta, \lambda, \omega, u, \lambda_0) = \lambda_0 u^2 + \lambda(e^{i\theta}) + \omega(u) \tag{46} $$

(where λ and ω are our costate variables). As we have an unbounded control space, we can use the weaker condition of the PMP to generate our curves. Using the PMP, we get the following system of equations:

$$ \dot{\lambda} = -\frac{\partial H}{\partial y} = 0 \tag{47} $$

$$ \dot{\omega} = -\frac{\partial H}{\partial \theta} = -\lambda(i e^{i\theta}) \tag{48} $$

$$ \frac{\partial H}{\partial u} = 2\lambda_0 u + \omega = 0 \tag{49} $$

If λ0 = 0, then (49) gives ω = 0, so ω̇ = 0, and by (48) λ(i e^{iθ}) = 0. Hence we either have λ = 0 or θ = constant. The only acceptable solution is the latter, as the former violates the non-triviality condition of the PMP. Thus our first solution to elastica is the straight line.

Now assuming that λ0 ≠ 0, we proceed. Equation (47) implies that λ is constant. We can normalise (49) by setting λ0 = 1 without loss of generality. Hence our new equation for the extremal is:

$$ \frac{\partial H}{\partial u} = 2u + \omega = 0 \tag{50} $$

Taking the derivative of (50) and using (48), we get the following differential equation:

$$ 2\ddot{\theta} = R \cos(\theta - \alpha) \tag{51} $$

(with R ≥ 0). Solving (51), we get the following equation:

$$ k^2(t) = \kappa_0^2 \left( 1 - p^2 \operatorname{sn}^2\left( \frac{\kappa_0}{2} s, p \right) \right) \tag{52} $$

This is the special case, with w = 1, of the general equation for elastica derived by Singer [8]:

$$ k^2(t) = \kappa_0^2 \left( 1 - \frac{p^2}{w^2} \operatorname{sn}^2\left( \frac{\kappa_0}{2w} s, p \right) \right) \tag{53} $$

Here sn is the elliptic sine function, which is defined in [8].
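Equation (51) is a pendulum-type equation and carries the first integral θ̇² − R sin(θ − α) = constant (differentiate it and substitute (51) to see this). A small RK4 sketch, with arbitrary illustrative values for R, α and the initial data, checks this invariant numerically:

```python
import numpy as np

R, alpha = 1.5, 0.0                      # illustrative parameters for equation (51)

def rhs(state):
    th, om = state                       # theta and theta_dot
    return np.array([om, 0.5 * R * np.cos(th - alpha)])  # 2*theta_ddot = R*cos(theta - alpha)

def rk4(state, h, steps):
    out = [state]
    for _ in range(steps):
        k1 = rhs(state)
        k2 = rhs(state + 0.5 * h * k1)
        k3 = rhs(state + 0.5 * h * k2)
        k4 = rhs(state + h * k3)
        state = state + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(state)
    return np.array(out)

traj = rk4(np.array([0.2, 0.0]), 0.01, 2000)

# First integral of (51): theta_dot^2 - R*sin(theta - alpha) is conserved
E = traj[:, 1]**2 - R * np.sin(traj[:, 0] - alpha)
```

Integrating θ this way (and then ẏ = e^{iθ}) is also how elastica such as those in Figures 6 and 7 can be traced numerically when the elliptic-function form is inconvenient.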

5.2 Catalogue of elastica curves

Depending on the set of parameters, we can generate different curves satisfying our differential equation. The curves below were generated by David Singer in [8].

(a) Borderline Elastica [p = w = 1] (b) Orbit Elastica [w = 1]

Figure 6: Elastica figures [8]

(a) Wavelike Elastica (b) Wavelike Elastica

Figure 7: Wavelike elastica [w = p]. The curvature oscillates between +k0 and −k0 [8]

This list, however, is not exhaustive. Helices and the figure-eight elastica also satisfy the above equations. The last, trivial, elastica is the straight line, obtained by setting k0 = 0, and hence having squared curvature 0.

6 Numerical Methods of Solving

For solving differential equations in closed form, symmetry is a key requirement. For some problems in optimal control, the spaces are too complicated to give closed form solutions, so we use numerical methods and techniques to produce results. Another problem faced is that most optimal control and COV problems are boundary value problems (BVPs), hence there is no assurance of a unique solution, or even the existence of one, but we persevere. There are many numerical methods one can use to solve ODEs and PDEs, such as the following:

6.1 Shooting methods

Studied extensively by Keller [3], shooting is a numerical approach that aims to transform our BVP into an initial value problem (IVP). Its simplicity keeps computational cost low and, with the aid of some other numerical methods, it converges quickly.

Shooting works as follows. Imagine we have a second order equation:

$$ \ddot{y}(x) = f(y, \dot{y}, x) \tag{54} $$

with y(a) = A and y(b) = B (a < b). We take a guess g1 at ẏ(a). (A common first guess is the chord slope (B − A)/(b − a); however, the initial guess is not of high importance.)

Now comes the more important part of shooting: evaluating the guess, then correcting it to approach the solution of the original problem. We solve our IVP, evaluate our solution at the second boundary point, and then correct the previous guess. Let φ be defined as the difference of the solutions at x = b:

$$ \phi(g_1) = A(g_1, b) - y(b) \tag{55} $$

(where A(g1, b) is the solution to the IVP with ẏ(a) = g1, evaluated at b). If φ(g1) = 0, then we are done, as both solutions give the same result. However, let's suppose that we have undershot our guess, and that in reality φ(g1 + dg1) = 0, where dg1 is just a small variation of g1. Then we can use a Taylor series expansion to give:

$$ \phi(g_1 + dg_1) = \phi(g_1) + dg_1 \frac{d\phi}{dg_1}(g_1) + O(dg_1^2) = 0 \tag{56} $$

$$ \implies dg_1 = -\frac{\phi(g_1)}{\dot{\phi}(g_1)} \tag{57} $$

This gives Newton's method for finding roots, with which we get closer with every iteration. [9] Hence the corrected guess is:

$$ g_{n+1} = g_n - \frac{\phi(g_n)}{\dot{\phi}(g_n)} \tag{58} $$

This formulation works well and converges fast (quadratically, in fact, if sufficiently close to the true zero); however, how do we determine φ̇(gn)? To obtain an expression for this, we must first solve

another IVP, this time a second order one. It is constructed as follows: taking the derivative of equation (54) with respect to the shooting parameter t (the guessed initial slope), we get:

$$ \frac{\partial \ddot{y}(x, t)}{\partial t} = \frac{\partial f}{\partial y} \frac{\partial y}{\partial t} + \frac{\partial f}{\partial \dot{y}} \frac{\partial \dot{y}}{\partial t} \tag{59} $$

Letting z = ∂y/∂t, we get the following IVP:

$$ \ddot{z} = \frac{\partial f}{\partial y} z + \frac{\partial f}{\partial \dot{y}} \dot{z} \tag{60} $$

with z(a) = 0 and ż(a) = 1. We solve the IVPs (54) and (60) and then re-evaluate our expression (58), using the fact that φ̇(gn) = z(b). Newton's method is not the only one we can use when shooting. We can also use the secant method, which requires fewer computations per step, but has a slower rate of convergence than Newton's method. It will be used for problems where solving (60) would prove too expensive at every iteration.
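The full procedure — RK4 on the IVP (54) together with the sensitivity equation (60), and the Newton update (58) — can be sketched as below. The test problem ÿ = −y with y(0) = 0, y(π/2) = 1 is my own choice; its exact solution is sin x, so the true initial slope is 1:

```python
import numpy as np

a, b, A, B = 0.0, np.pi / 2, 0.0, 1.0   # boundary data for the test problem

def f(y, yp, x):        # equation (54): here y'' = -y, so the exact solution is sin(x)
    return -y

def df_dy(y, yp, x):    # partial derivatives of f, needed for equation (60)
    return -1.0

def df_dyp(y, yp, x):
    return 0.0

def integrate(g, n=1000):
    """RK4 on the IVP (54) together with the sensitivity IVP (60).

    Returns [y(b), y'(b), z(b), z'(b)] for the initial-slope guess g."""
    h = (b - a) / n
    state = np.array([A, g, 0.0, 1.0])   # y, y', z, z' with z(a) = 0, z'(a) = 1
    x = a

    def rhs(s, x):
        y, yp, z, zp = s
        return np.array([yp, f(y, yp, x),
                         zp, df_dy(y, yp, x) * z + df_dyp(y, yp, x) * zp])

    for _ in range(n):
        k1 = rhs(state, x)
        k2 = rhs(state + 0.5 * h * k1, x + 0.5 * h)
        k3 = rhs(state + 0.5 * h * k2, x + 0.5 * h)
        k4 = rhs(state + h * k3, x + h)
        state = state + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return state

g = 0.5                      # initial guess for y'(a); its exact value matters little
for _ in range(10):
    yb, _, zb, _ = integrate(g)
    phi = yb - B             # equation (55): mismatch at the far boundary
    if abs(phi) < 1e-12:
        break
    g -= phi / zb            # Newton update (58), using phi'(g) = z(b)
```

The secant-method variant mentioned above would simply replace z(b) by a finite-difference estimate of φ', avoiding the extra IVP (60) at the cost of slower convergence.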

Other numerical methods can be used to solve these problems; however, they may be more expensive. One such method is the finite difference method, which involves discretising our continuous system and solving on the resulting intervals. Future research would be needed to verify which problems this method would be most effective on.

Acknowledgments

I would like to thank Prof. Lyle Noakes for his guidance and effort spent supervising this project, and also AMSI for giving me the opportunity to complete this Summer Research Scholarship.

References

[1] Abbena, E., Salamon, S. and Gray, A., 2006. Modern Differential Geometry of Curves and Surfaces with Mathematica. CRC Press.

[2] Denman, H.H., 1985. Remarks on brachistochrone-tautochrone problems. American Journal of Physics, 53(3), pp. 224-227.

[3] Keller, H.B., 1976. Numerical Solution of Two Point Boundary Value Problems, 1st Edition. Blaisdell Publishing Company.

[4] Lebanon, G., 2002. Learning Riemannian metrics. In Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence, pp. 362-369. Morgan Kaufmann Publishers Inc.

[5] Montgomery, R., 2006. A Tour of Subriemannian Geometries, Their Geodesics and Applications (No. 91). American Mathematical Society.

[6] Petersen, P., 2006. Riemannian Geometry (Vol. 171). New York: Springer.

[7] Pontryagin, L.S., Boltyanski, V.G., Gamkrelidze, R.V. and Mischenko, E.F. (translated by K.N. Trirogoff), 1962. The Mathematical Theory of Optimal Processes, 1st Edition. Interscience, John Wiley.

[8] Singer, D.A., 2008. Lectures on elastic curves and rods. In O.J. Garay, E. García-Río and R. Vázquez-Lorenzo, eds., AIP Conference Proceedings (Vol. 1002, No. 1, pp. 3-32). AIP.

[9] Ha, S.N., 2001. A nonlinear shooting method for two-point boundary value problems. Computers & Mathematics with Applications, 42(10), pp. 1411-1420.

[10] Van Brunt, B., 2004. The Calculus of Variations, 1st Edition. Springer-Verlag, New York.

[11] Wang, L., 2004. geodesic. Available from: http://au.mathworks.com/matlabcentral/fileexchange/6522-geodesic [10 January 2017]

[12] Xin, Y.L., 2003. Minimal submanifolds and related topics (Vol. 8). World Scientific.
