<<

A Converse Sum of Squares Lyapunov Result with a Degree Bound

Matthew M. Peet and Antonis Papachristodoulou

Abstract—Sum of Squares programming has been used extensively over the past decade for the stability analysis of nonlinear systems, but several questions remain unanswered. In this paper, we show that exponential stability of a polynomial vector field on a bounded set implies the existence of a Lyapunov function which is a sum of squares of polynomials. In particular, the main result states that if a system is exponentially stable on a bounded nonempty set, then there exists an SOS Lyapunov function which is exponentially decreasing on that bounded set. The proof is constructive and uses the Picard iteration. A bound on the degree of this converse Lyapunov function is also given. This result implies that semidefinite programming can be used to answer the question of stability of a polynomial vector field with a bound on complexity.

I. INTRODUCTION

Computational and numerical algorithms are extensively used in control. A particular example is semidefinite programming conditions for addressing linear control problems, which are formulated as Linear Matrix Inequalities (LMIs). Using such tools, several questions on the analysis and synthesis of linear systems can be formulated and addressed effectively. In fact, ever since the 1990s [1], LMIs have had a significant impact in the control field, to the point that once the solution of a control problem has been formulated as the solution to an LMI, it is considered solved.

When it comes to nonlinear and infinite-dimensional systems, the equivalent problems can be formulated as polynomial non-negativity constraints under a Lyapunov framework, but these are not, at first glance, as easy to solve. Polynomial non-negativity is in fact NP-hard. It is for this reason that several researchers have looked at alternative tests for non-negativity that are of polynomial-time complexity and which imply non-negativity. One such relaxation is the existence of a sum of squares decomposition: the ability to optimize over the set of positive polynomials using the sum-of-squares relaxation has undoubtedly opened up new ways for addressing problems, in much the same way Linear Matrix Inequalities are used to address analysis questions for linear finite-dimensional systems. However, there remain several open questions about how these methods can be used to search for Lyapunov functions for nonlinear systems. For references on early work on optimization of polynomials, see [2], [3], and [4]. For more recent work see [5] and [6]. For a recent review paper, see [7]. Today, there exist a number of software packages for optimization over positive polynomials, e.g. SOSTOOLS [8] and GloptiPoly [9].

At the same time, there are still a number of unanswered questions regarding the use of sum of squares as a relaxation of nonnegativity and its use for the analysis of nonlinear systems. Unanswered questions include, for example, a series of questions on controller synthesis and the role of duality to convexify this problem, as well as estimating regions of attraction of equilibria. On the computation and optimization side, it is unclear whether multi-core computing could be used for computation, as well as how to take advantage of sparsity in semidefinite programming.

In this paper, we do not consider the problem of computing sum-of-squares Lyapunov functions. Such work can be found in, e.g., [4], [10]–[12]. Instead, we concentrate on the properties of the converse Lyapunov functions for systems of the form

x˙(t) = f(x(t)),

where f : R^n → R^n is polynomial. In particular, we address the question of whether an exponentially stable nonlinear system will have a sum-of-squares Lyapunov function which establishes this property. This result adds to our previous work [13], where we were able to show that exponential stability on a bounded set implies the existence of an exponentially decreasing polynomial Lyapunov function on that set.

Work that is relevant to the one presented here includes research on continuity properties, see e.g. [14], [15] and [16] and the overview in [17]. Infinitely-differentiable functions were explored in the work [18], [19]. Other innovative results are found in [20] and [21]. The books [22] and [23] treat further converse theorems of Lyapunov. Continuity of Lyapunov functions is inherited from continuity of the solution map with respect to the initial condition. An excellent treatment of this problem can be found in the text of Arnol'd [24].

Unlike the work in [13], this paper is closely tied to systems theory as opposed to approximation theory. Our method is to take a well-known form of converse Lyapunov function based on the solution map and use the Picard iteration to approximate the solution map. The advantage of this approach is that if the vector field is polynomial, the Picard iteration will also be polynomial. Furthermore, the Picard iteration inductively retains almost all the properties of the solution map. The result is a new form of iterative converse Lyapunov function, V_k. This function is discussed in Section VI.

The first practical contribution of this paper is to give a bound on the number of decision variables involved in the question of exponential stability of polynomial vector fields on bounded sets. This is because SOS functions of bounded degree can be parameterized by the set of positive matrices of fixed size. Furthermore, we note that the question of existence of a Lyapunov function with negative derivative is convex. Therefore, if the question of polynomial positivity on a bounded set is decidable, we can conclude that the problem of exponential stability of polynomial vector fields on that set is decidable. The further complexity benefit of using SOS Lyapunov functions is discussed in Section VIII.

The main result of the paper is stated and proven in Section VI. Preceding the main result is a series of lemmas that are used in the proof of the main theorem. In Section V we show that the Picard iteration is contractive on a certain interval; in Subsection V-A we propose a new way of extending the Picard iteration; and in Subsection V-B we show that the Picard iteration approximately retains the differentiability properties of the solution map, before we prove the main result. The implications of the main result are then explored in Section VIII and Section VII. A detailed example is given in Section IX. The paper is concluded in Section X.

M. M. Peet is with the Department of Mechanical, Materials, and Aerospace Engineering, Illinois Institute of Technology, 10 West 32nd Street, E1-252B, Chicago, IL 60616, U.S.A. [email protected]
A. Papachristodoulou is with the Department of Engineering Science, University of Oxford, Parks Road, Oxford, OX1 3PJ, U.K. [email protected].
This material is based upon work supported by the National Science Foundation under Grant No. CMMI 110036 and from EPSRC grants EP/H03062X/1, EP/I031944/1 and EP/J012041/1.

II. MAIN RESULT

Before we begin the technical part of the paper, we give a simplified version of the main result.

Theorem 1: Suppose that f is polynomial of degree q and that solutions of x˙ = f(x) satisfy

‖x(t)‖ ≤ K‖x(0)‖e^{−λt}

for some λ > 0, K ≥ 1 and for any x(0) ∈ M, where M is a bounded nonempty region of radius r. Then there exist α, β, γ > 0 and a sum-of-squares polynomial V(x) such that for any x ∈ M,

α‖x‖² ≤ V(x) ≤ β‖x‖²,
∇V(x)ᵀ f(x) ≤ −γ‖x‖².

Further, the degree of V will be less than 2q^{Nk−1}, where k(L, λ, K) is any integer such that

c(k) := Σ_{i=0}^{N−1} (e^{TL} + K(TL)^k)^i K²(TL)^k < K,
c(k)² + (K log(2K²)/(2λT)) (TL)^k (1 + c(k))(K + c(k)) < 1/2,
c(k)² < (λ/(KL log(2K²))) (1 − (2K²)^{−L/λ}),

and N(L, λ, K) is any integer such that NT > log(2K²)/(2λ) and T < 1/(2L) for some T, and where L is a Lipschitz bound on f on B_{4Kr}.

III. SUM-OF-SQUARES

Sum of squares (SOS) methods have been introduced over the past decade to allow for the algorithmic solution of problems that frequently arise in systems and control theory, many of which can be formulated as polynomial non-negativity constraints that are, however, difficult to solve. In these methods, non-negativity is relaxed to the existence of an SOS decomposition, which can be tested using semidefinite programming.

Consider, for example, the problem of ensuring that a polynomial p(x) ∈ R[x] satisfies p(x) ≥ 0. This problem arises naturally when trying to construct Lyapunov functions for the stability analysis of dynamical systems, which is the topic of this paper. Since ensuring non-negativity is hard [25], many researchers have investigated alternative ways to do this. In [26], the existence of a Sum of Squares decomposition was used for that purpose, which involves the presentation of other polynomials p_i(x) such that

p(x) = Σ_{i=1}^{k} p_i(x)².   (1)

Algorithms for ensuring this appeared in the 1990's [27], but it was not until the turn of the century that this was recognized as being solvable using Semidefinite Programming [28]. In particular, (1) can be shown equivalent to the existence of a Q ⪰ 0 and a vector of monomials Z(x) of degree less than or equal to half the degree of p(x), such that

p(x) = Z(x)ᵀ Q Z(x).

In the above representation, the matrix Q is not unique; in fact, it can be represented as

Q = Q_0 + Σ_i λ_i Q_i,   (2)

where the Q_i satisfy Z(x)ᵀ Q_i Z(x) = 0. The search for λ_i such that Q in (2) satisfies Q ⪰ 0 is a Linear Matrix Inequality, which can be solved using Semidefinite Programming. Moreover, if p(x) has unknown coefficients that enter affinely in the representation (1), Semidefinite Programming can be used to find values for them so that the resulting polynomial is SOS. This latter observation allows us to search for polynomials that satisfy SOS conditions: the most important example is the construction of Lyapunov functions, which is the topic of this paper. For more details, please see [10], [28]. The question that we address in this paper is whether Sum of Squares Lyapunov functions always exist for locally exponentially stable systems.

IV. NOTATION AND BACKGROUND

The core concept we use in this paper is the Picard iteration. We use this to construct an approximation to the solution map and then use the approximate solution map to construct the Lyapunov function. Construction of the Lyapunov function will be discussed in more depth later on.

Denote the Euclidean ball centered at 0 of radius r by B_r. Consider an ordinary differential equation of the form

x˙(t) = f(x(t)),  x(0) = x_0,  f(0) = 0,   (3)

where x ∈ R^n and f satisfies appropriate smoothness properties for local existence and uniqueness of solutions. The solution map is a function φ which satisfies

(∂/∂t) φ(t,x) = f(φ(t,x))  and  φ(0,x) = x.
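The Gram-matrix representation p(x) = Z(x)ᵀQZ(x) and the parameterization Q = Q_0 + Σ λ_iQ_i of Section III can be checked numerically. The sketch below uses a hypothetical example polynomial p(x) = x⁴ + 2x² + 1 (not taken from the paper) and verifies, in pure Python, that every member of a hand-built family Q(λ) represents p and that one member yields an explicit SOS decomposition. In practice the search for a positive semidefinite Q is carried out by semidefinite programming, e.g. with SOSTOOLS [8].

```python
# Sketch: checking a Gram-matrix (SOS) representation p(x) = Z(x)^T Q Z(x)
# for the hypothetical example p(x) = x^4 + 2x^2 + 1 with Z(x) = (1, x, x^2).
# Q0 is one valid Gram matrix; Q1 satisfies Z^T Q1 Z = 0, so Q0 + lam*Q1 is
# the family of Eq. (2).  An SDP would search this family for Q >= 0.

def p(x):
    return x**4 + 2*x**2 + 1

def Z(x):
    return [1.0, x, x*x]

def quad_form(Q, z):
    # z^T Q z for small dense matrices
    return sum(Q[i][j]*z[i]*z[j] for i in range(len(z)) for j in range(len(z)))

Q0 = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 1.0]]   # one valid Gram matrix
Q1 = [[0.0, 0.0, 1.0], [0.0, -2.0, 0.0], [1.0, 0.0, 0.0]]  # annihilated by Z(x)

for lam in (-0.5, 0.0, 1.0):
    Q = [[Q0[i][j] + lam*Q1[i][j] for j in range(3)] for i in range(3)]
    for x in (-2.0, -0.3, 0.0, 1.7):
        # the representation holds for every lam, as in Eq. (2)
        assert abs(quad_form(Q, Z(x)) - p(x)) < 1e-9

# For lam = 1, Q0 + Q1 is rank one and gives the SOS decomposition (x^2 + 1)^2:
for x in (-2.0, 0.5):
    assert abs((x*x + 1)**2 - p(x)) < 1e-9
print("Gram representation verified")
```

The free parameter λ is exactly the LMI variable of (2); positive semidefiniteness of Q(λ) for some λ certifies (1).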

A. Lyapunov Stability

The use of Lyapunov functions to prove stability of ordinary differential equations is well-established. The following theorem illustrates the use of Lyapunov functions.

Definition 2: We say that the system defined by the equations in (3) is exponentially stable on X if there exist γ, K > 0 such that for any x_0 ∈ X,

‖x(t)‖ ≤ K‖x_0‖e^{−γt}

for all t ≥ 0.

Theorem 3 (Lyapunov): Suppose there exist constants α, β, γ > 0 and a continuously differentiable function V such that the following conditions are satisfied for all x ∈ U ⊂ R^n:

α‖x‖² ≤ V(x) ≤ β‖x‖²,
∇V(x)ᵀ f(x) ≤ −γ‖x‖².

Then we have exponential stability of System (3) on {x : {y : V(y) ≤ V(x)} ⊂ U}.

B. Fixed-Point Theorems

Definition 4: Let X be a metric space. A mapping F : X → X is contractive with coefficient d ∈ [0,1) if

‖Fx − Fy‖ ≤ d‖x − y‖  for all x, y ∈ X.

The following is a fixed-point theorem.

Theorem 5 (Contraction Mapping Principle [29]): Let X be a complete metric space and let F : X → X be a contraction with coefficient d. Then there exists a unique y ∈ X such that Fy = y. Furthermore, for any x_0 ∈ X,

‖F^k x_0 − y‖ ≤ d^k ‖x_0 − y‖.

To apply these results to the existence of the solution map, we use the Picard iteration.

V. PICARD ITERATION

We begin by reviewing the Picard iteration. This is the basic mathematical tool we will use to define our approximation to the solution map and can be found in many texts, e.g. [30].

Definition 6: For given T and r, define the complete metric space

X_{T,r} := { q(t) : sup_{t∈[0,T]} ‖q(t)‖ ≤ r, q is continuous }   (4)

with norm

‖q‖_X = sup_{t∈[0,T]} ‖q(t)‖.

For a fixed x ∈ B_r and q ∈ X_{T,r}, the Picard iteration [31] is defined as

(Pq)(t) := x + ∫_0^t f(q(s)) ds.

In this paper, we also define the Picard iteration on functions z(t,x) as

(Pz)(x,t) := x + ∫_0^t f(z(x,s)) ds.

We begin by showing that for any radius r, there exists a T such that the Picard iteration is contractive on X_{T,2r} for any x ∈ B_r.

Lemma 7: Given r > 0, let T < min{r/Q, 1/L}, where f has Lipschitz factor L on B_{2r} and Q = sup_{x∈B_{2r}} ‖f(x)‖. Then P : X_{T,2r} → X_{T,2r} and there exists some φ ∈ X_{T,2r} such that for t ∈ [0,T] and x ∈ B_r,

(d/dt) φ(t) = f(φ(t)),  φ(0) = x,

and for any z ∈ X_{T,2r},

‖φ − P^k z‖ ≤ (TL)^k ‖φ − z‖.

Proof: We first show that for x ∈ B_r, P : X_{T,2r} → X_{T,2r}. If q ∈ X_{T,2r}, then sup_{t∈[0,T]} ‖q(t)‖ ≤ 2r and so

‖Pq‖ = sup_{t∈[0,T]} ‖x + ∫_0^t f(q(s)) ds‖ ≤ ‖x‖ + ∫_0^T ‖f(q(s))‖ ds ≤ r + ∫_0^T sup_{y∈B_{2r}} ‖f(y)‖ ds ≤ r + TQ < 2r.

Thus we conclude that Pq ∈ X_{T,2r}. Furthermore, for q_1, q_2 ∈ X_{T,2r},

‖Pq_1 − Pq_2‖_X = sup_{t∈[0,T]} ‖∫_0^t (f(q_1(s)) − f(q_2(s))) ds‖ ≤ ∫_0^T ‖f(q_1(s)) − f(q_2(s))‖ ds ≤ TL sup_{t∈[0,T]} ‖q_1(s) − q_2(s)‖ = TL‖q_1 − q_2‖_X.

Therefore, by the contraction mapping theorem, the Picard iteration converges on [0,T] with convergence rate (TL)^k.

A. Picard Extension Convergence Lemma

In this section we propose an extension to the Picard iteration approximation. We divide the interval into subintervals on which the Picard iteration is guaranteed to converge. On each interval, we apply the Picard iteration using the final value of the solution estimate from the previous interval as the initial condition, x. For a polynomial vector field, the result is a piecewise polynomial approximation which is guaranteed to converge on an arbitrary interval – see Figure 1 for an illustration.

Definition 8: Suppose that the solution map φ exists on t ∈ [0,∞) and ‖φ(t,x)‖ ≤ K‖x‖ for any x ∈ B_r. Suppose that f has Lipschitz factor L on B_{4Kr} and is bounded on B_{4Kr} with bound Q. Given T < min{2Kr/Q, 1/L}, let z = 0 and define

G_0^k(t,x) := (P^k z)(t,x),

and for i > 0, define the functions G_i^k recursively as

G_{i+1}^k(t,x) := (P^k z)(t, G_i^k(T,x)).

The G_i^k are Picard iterations P^k z(t,x) defined on each sub-interval, where we substitute the initial condition x ↦ G_{i−1}^k(T,x).
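As an illustration of Definition 6 and of the piecewise extension in Definition 8, the sketch below implements the Picard iteration and its restart on subintervals for the scalar system x˙ = −x³ used in Figure 1, representing each iterate as a polynomial in t via its coefficient list (so every iterate stays a polynomial, as the paper exploits). The values of x0, T, k and the number of subintervals are illustrative choices, not values from the paper.

```python
# Sketch: the Picard iteration P and its piecewise extension (Definition 8)
# for the scalar system xdot = -x^3.  Polynomials in t are stored as
# coefficient lists in ascending powers.

def pmul(a, b):
    out = [0.0]*(len(a)+len(b)-1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i+j] += ai*bj
    return out

def pint(a):  # integral from 0 to t
    return [0.0] + [c/(i+1) for i, c in enumerate(a)]

def peval(a, t):
    return sum(c*t**i for i, c in enumerate(a))

def picard(q, x0):
    # (Pq)(t) = x0 + int_0^t f(q(s)) ds with f(y) = -y^3
    integ = pint([-c for c in pmul(pmul(q, q), q)])
    integ[0] += x0
    return integ

def G(x0, k, T, n_intervals):
    # k Picard iterations started from z = 0 on each subinterval [iT, iT+T],
    # restarted from the previous endpoint (the extension of Definition 8)
    pieces, x = [], x0
    for _ in range(n_intervals):
        q = [x]                # P z with z = 0 is the constant x, since f(0) = 0
        for _ in range(k - 1):
            q = picard(q, x)
        pieces.append(q)
        x = peval(q, T)        # endpoint becomes the next initial condition
    return pieces

def true_sol(t, x0):           # exact solution of xdot = -x^3
    return x0 / (1 + 2*x0*x0*t) ** 0.5

x0, T, k, n = 1.0, 1.0/3.0, 8, 3
pieces = G(x0, k, T, n)
for i, q in enumerate(pieces):
    for s in (0.0, T/2, T):
        assert abs(peval(q, s) - true_sol(i*T + s, x0)) < 1e-2
print("piecewise Picard approximation tracks the true solution")
```

Each restart keeps the contraction factor TL small on its own subinterval, which is exactly why the concatenated approximation converges on an arbitrary horizon.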

Define the concatenation of these G_i^k as

G^k(t,x) := G_i^k(t − iT, x)  for all t ∈ [iT, iT+T] and i = 1, …, ∞.

If f is polynomial, then the G_i^k are polynomial for any i, k, and G^k is continuously differentiable in x for any k. The following lemma provides several properties of the functions G^k.

Lemma 9: Given δ > 0, suppose that the solution map φ(t,x) exists on t ∈ [0,δ] and on x ∈ B_r. Further suppose that ‖φ(t,x)‖ ≤ K‖x‖ for any x ∈ B_r. Suppose that f is Lipschitz on B_{4Kr} with factor L and bounded with bound Q. Choose T < min{2Kr/Q, 1/L} and integer N > δ/T. Then let G^k and G_i^k be defined as above. Define the function

c(k) = Σ_{i=0}^{N−1} (e^{TL} + K(TL)^k)^i K²(TL)^k.

Given any k sufficiently large so that c(k) < K, then for any x ∈ B_r,

sup_{s∈[0,δ]} ‖G^k(s,x) − φ(s,x)‖ ≤ c(k)‖x‖.   (5)

Proof: Suppose x ∈ B_r. By assumption, the conditions of Lemma 7 are satisfied using r′ = 2Kr. Let z(t,x) = 0 and define the convergence rate d = TL < 1. By Lemma 7,

sup_{s∈[0,T]} ‖G_0^k(s,x) − φ(s,x)‖ = sup_{s∈[0,T]} ‖(P^k z)(s,x) − φ(s,x)‖ ≤ d^k sup_{s∈[0,T]} ‖φ(s,x)‖ ≤ K d^k ‖x‖.

Thus Equation (5) is satisfied on the interval [0,T]. We proceed by induction. Define

c_i(k) = Σ_{j=0}^{i} (e^{TL} + K d^k)^j K² d^k,

so that c_{N−1}(k) = c(k), and suppose that ‖G^k(s,x) − φ(s,x)‖ ≤ c_{i−1}(k)‖x‖ on the interval [iT − T, iT]. Then

sup_{s∈[iT,iT+T]} ‖G^k(s,x) − φ(s,x)‖ = sup_{s∈[iT,iT+T]} ‖G_i^k(s − iT, x) − φ(s,x)‖
= sup_{s∈[iT,iT+T]} ‖(P^k z)(s − iT, G_{i−1}^k(T,x)) − φ(s − iT, φ(iT,x))‖
≤ sup_{s∈[iT,iT+T]} ‖(P^k z)(s − iT, G_{i−1}^k(T,x)) − φ(s − iT, G_{i−1}^k(T,x))‖
+ sup_{s∈[iT,iT+T]} ‖φ(s − iT, G_{i−1}^k(T,x)) − φ(s − iT, φ(iT,x))‖.

We treat these final two terms separately. First note that

‖G_{i−1}^k(T,x)‖ ≤ ‖φ(iT,x)‖ + ‖φ(iT,x) − G_{i−1}^k(T,x)‖ ≤ K‖x‖ + c_{i−1}(k)‖x‖ = (K + c_{i−1}(k))‖x‖.

Since c_{i−1}(k) ≤ c(k) < K and x ∈ B_r, we have ‖G_{i−1}^k(T,x)‖ ≤ 2K‖x‖ ≤ 2Kr. Hence

sup_{s∈[iT,iT+T]} ‖(P^k z)(s − iT, G_{i−1}^k(T,x)) − φ(s − iT, G_{i−1}^k(T,x))‖ ≤ sup_{s∈[iT,iT+T]} d^k ‖φ(s − iT, G_{i−1}^k(T,x))‖ ≤ K d^k ‖G_{i−1}^k(T,x)‖ ≤ K d^k (K + c_{i−1}(k))‖x‖.

Now, if x ∈ B_r, then ‖φ(s,x)‖ ≤ Kr, and since ‖G_{i−1}^k(T,x)‖ ≤ 2Kr and f is Lipschitz on B_{4Kr}, it is well known that

sup_{s∈[iT,iT+T]} ‖φ(s − iT, G_{i−1}^k(T,x)) − φ(s − iT, φ(iT,x))‖ ≤ sup_{s∈[iT,iT+T]} e^{L(s−iT)} ‖G_{i−1}^k(T,x) − φ(iT,x)‖ ≤ e^{TL} c_{i−1}(k)‖x‖.

Combining, we conclude that

sup_{s∈[iT,iT+T]} ‖G_i^k(s − iT, x) − φ(s,x)‖ ≤ e^{TL} c_{i−1}(k)‖x‖ + K d^k (K + c_{i−1}(k))‖x‖ = ((e^{TL} + K d^k) c_{i−1}(k) + K² d^k)‖x‖ = c_i(k)‖x‖.

Since c_i(k) ≤ c(k) and δ < NT, by induction we conclude that

sup_{s∈[0,δ]} ‖G^k(s,x) − φ(s,x)‖ ≤ c(k)‖x‖.

Fig. 1. The solution map and the functions G_i^k for k = 1, 2, 3, 4, 5 and i = 1, 2, 3 for the system x˙(t) = −x(t)³. The interval of convergence of the Picard iteration is T = 1/3. [Plot omitted.]

B. Derivative Inequality Lemma

In this critical lemma, we show that the Picard iteration approximately retains the differentiability properties of the solution map. The proof is based on induction, with a key step based on an approach in [32] (Proof of Thm 4.14). This lemma is then adapted to the extended Picard iteration introduced in the previous section.

Lemma 10: Suppose that the conditions of Lemma 7 are

satisfied. Then for any x ∈ B_r and any k ≥ 0,

sup_{t∈[0,T]} ‖(∂/∂x)(P^k z)(t,x)ᵀ f(x) − (∂/∂t)(P^k z)(t,x)‖ ≤ ((TL)^k / T) ‖x‖.

Proof: Begin with the identity for k ≥ 1,

(P^k z)(t,x) = x + ∫_0^t f((P^{k−1}z)(s,x)) ds = x + ∫_{−t}^{0} f((P^{k−1}z)(s+t,x)) ds.

Then, by differentiating the right-hand side, we get

(∂/∂t)(P^k z)(t,x) = f((P^{k−1}z)(0,x)) + ∫_{−t}^{0} ∇f((P^{k−1}z)(s+t,x))ᵀ (∂/∂1)(P^{k−1}z)(s+t,x) ds
= f((P^{k−1}z)(0,x)) + ∫_0^t ∇f((P^{k−1}z)(s,x))ᵀ (∂/∂s)(P^{k−1}z)(s,x) ds
= f(x) + ∫_0^t ∇f((P^{k−1}z)(s,x))ᵀ (∂/∂s)(P^{k−1}z)(s,x) ds,

where ∂/∂i denotes partial differentiation with respect to the ith variable, and

(∂/∂x)(P^k z)(t,x) = I + ∫_0^t ∇f((P^{k−1}z)(s,x))ᵀ (∂/∂x)(P^{k−1}z)(s,x) ds.

Now define, for k ≥ 1,

y_k(t,x) := (∂/∂x)(P^k z)(t,x)ᵀ f(x) − (∂/∂t)(P^k z)(t,x).

For k ≥ 2, we have

y_k(t,x) = ∫_0^t ∇f((P^{k−1}z)(s,x))ᵀ [ (∂/∂x)(P^{k−1}z)(s,x) f(x) − (∂/∂s)(P^{k−1}z)(s,x) ] ds = ∫_0^t ∇f((P^{k−1}z)(s,x))ᵀ y_{k−1}(s,x) ds.

This means that, since (P^{k−1}z)(t,x) ∈ B_{2r}, by induction

sup_{[0,T]} ‖y_k(t)‖ ≤ T sup_{t∈[0,T]} ‖∇f((P^{k−1}z)(t,x))‖ sup_{t∈[0,T]} ‖y_{k−1}(t,x)‖ ≤ TL sup_{t∈[0,T]} ‖y_{k−1}(t,x)‖ ≤ (TL)^{k−1} sup_{t∈[0,T]} ‖y_1(t,x)‖.

For k = 1, (Pz)(t,x) = x, so y_1(t) = f(x) and sup_{[0,T]} ‖y_1(t)‖ ≤ L‖x‖. Thus

sup_{t∈[0,T]} ‖y_k(t)‖ ≤ ((TL)^k / T) ‖x‖.

We now adapt this lemma to the extended Picard iteration.

Lemma 11: Suppose that the conditions of Lemma 9 are satisfied. Then for any x ∈ B_r,

sup_{t∈[0,δ]} ‖(∂/∂x)G^k(t,x)ᵀ f(x) − (∂/∂t)G^k(t,x)‖ ≤ ((TL)^k / T)(K + c(k))‖x‖.

Proof: Recall that G^k(t,x) := G_i^k(t − iT, x) for all t ∈ [iT, iT+T] and i = 1, …, ∞, and that G_{i+1}^k(t,x) = (P^k z)(t, G_i^k(T,x)) where z = 0. Then for t ∈ [iT, iT+T],

‖(∂/∂x)G^k(t,x)ᵀ f(x) − (∂/∂t)G^k(t,x)‖ = ‖(∂/∂1)(P^k z)(t − iT, G_i^k(T,x))ᵀ f(x) − (∂/∂t)(P^k z)(t − iT, G_i^k(T,x))‖ ≤ ((TL)^k / T) ‖G_i^k(T,x)‖.

As was shown in the proof of Lemma 9, ‖G_i^k(T,x)‖ ≤ (K + c_i(k))‖x‖. Thus for t ∈ [iT, iT+T],

‖(∂/∂x)G^k(t,x)ᵀ f(x) − (∂/∂t)G^k(t,x)‖ ≤ ((TL)^k / T)(K + c_i(k))‖x‖.

Since the c_i are non-decreasing,

sup_{t∈[0,δ]} ‖(∂/∂x)G^k(t,x)ᵀ f(x) − (∂/∂t)G^k(t,x)‖ ≤ ((TL)^k / T)(K + c(k))‖x‖.

VI. MAIN RESULT - A CONVERSE SOS LYAPUNOV FUNCTION

In this section, we combine the previous results to obtain a converse Lyapunov function which is also a sum-of-squares polynomial. Specifically, we use a standard form of converse Lyapunov function and substitute our extended Picard iteration for the solution map. Consider the system

x˙(t) = f(x(t)),  x(0) = x_0.   (6)

Theorem 12: Suppose that f is polynomial of degree q and that system (6) is exponentially stable on M with

‖x(t)‖ ≤ K‖x(0)‖e^{−λt},

where M is a bounded nonempty region of radius r. Then there exist α, β, γ > 0 and a sum-of-squares polynomial V(x) such that for any x ∈ M,

α‖x‖² ≤ V(x) ≤ β‖x‖²,   (7)
∇V(x)ᵀ f(x) ≤ −γ‖x‖².   (8)

Further, the degree of V will be less than 2q^{Nk−1}, where k(L, λ, K) is any integer such that c(k) < K,

c(k)² + (K log(2K²)/(2λT)) (TL)^k (1 + c(k))(K + c(k)) < 1/2,   (9)
c(k)² < (λ/(KL log(2K²))) (1 − (2K²)^{−L/λ}),   (10)

where c(k) is defined as

c(k) = Σ_{i=0}^{N−1} (e^{TL} + K(TL)^k)^i K²(TL)^k,   (11)

and N(L, λ, K) is any integer such that NT > log(2K²)/(2λ) and T < 1/(2L) for some T, and where L is a Lipschitz bound on f on B_{4Kr}.

Proof: Define δ = log(2K²)/(2λ) and d = TL. By assumption, N > δ/T. Next, we note that since stability implies f(0) = 0, f is bounded on any B_r with bound Q = Lr; thus on B_{4Kr} we have the bound Q = 4KrL. By assumption, T < 1/(2L) = 2Kr/(4KrL) = 2Kr/Q. Therefore, if k is defined as above, the conditions of Lemma 9 are satisfied. Define G^k as in Lemma 9. By Lemma 9, if k is defined as above, ‖G^k(s,x) − φ(s,x)‖ ≤ c(k)‖x‖ for s ∈ [0,δ] and x ∈ B_r.

We propose the following Lyapunov functions, indexed by k:

V_k(x) := ∫_0^δ G^k(s,x)ᵀ G^k(s,x) ds.

We will show that for any k which satisfies Inequalities (9) and (10), if we define V(x) = V_k(x), then V satisfies the Lyapunov inequalities (7) and (8) and has degree less than 2q^{Nk−1}. The proof is divided into four parts.

Upper and Lower Bounds: To prove that V_k is a valid Lyapunov function, first consider upper boundedness. If x ∈ B_r and s ∈ [0,δ], then

‖G^k(s,x)‖ ≤ ‖φ(s,x)‖ + ‖G^k(s,x) − φ(s,x)‖.

As per Lemma 9, ‖G^k(s,x) − φ(s,x)‖ ≤ c(k)‖x‖ ≤ Kc(k)‖x‖, and from stability, ‖φ(s,x)‖ ≤ K‖x‖. Hence

V_k(x) = ∫_0^δ ‖G^k(s,x)‖² ds ≤ δK²(1 + c(k))²‖x‖².

Therefore the upper boundedness condition is satisfied for any k ≥ 0 with β = δK²(1 + c(k))² > 0.

Next we consider the strict positivity condition. First note that

‖G^k(s,x)‖² ≥ ‖φ(s,x)‖² − 2‖φ(s,x)‖ ‖φ(s,x) − G^k(s,x)‖.

By Lipschitz continuity of f, ‖φ(s,x)‖² ≥ e^{−2Ls}‖x‖², while ‖φ(s,x)‖‖φ(s,x) − G^k(s,x)‖ ≤ K²c(k)‖x‖². Thus

V_k(x) = ∫_0^δ ‖G^k(s,x)‖² ds ≥ [ (1/(2L))(1 − e^{−2Lδ}) − 2δK²c(k) ]‖x‖².

Therefore, for k as defined previously, (1/(2L))(1 − e^{−2Lδ}) − 2δK²c(k) > 0, and so the positivity condition holds for some α > 0.

Negativity of the Derivative: Next, we prove the derivative condition. Recall

V_k(x) := ∫_0^δ G^k(s,x)ᵀ G^k(s,x) ds = ∫_t^{t+δ} G^k(s−t,x)ᵀ G^k(s−t,x) ds.

Then, since ∇V(x(t))ᵀ f(x(t)) = (d/dt)V(x(t)), we have by the Leibniz rule for differentiation,

(d/dt)V_k(x(t)) = G^k(δ,x(t))ᵀ G^k(δ,x(t)) − G^k(0,x(t))ᵀ G^k(0,x(t))
− ∫_t^{t+δ} 2G^k(s−t,x(t))ᵀ (∂/∂1)G^k(s−t,x(t)) ds + ∫_t^{t+δ} 2G^k(s−t,x(t))ᵀ (∂/∂2)G^k(s−t,x(t)) f(x(t)) ds
= ‖G^k(δ,x(t))‖² − ‖x(t)‖² + ∫_0^δ 2G^k(s,x(t))ᵀ [ (∂/∂2)G^k(s,x(t)) f(x(t)) − (∂/∂s)G^k(s,x(t)) ] ds,

where recall that ∂/∂i denotes partial differentiation with respect to the ith variable. As per Lemma 11, we have

‖(∂/∂2)G^k(s,x(t))ᵀ f(x(t)) − (∂/∂1)G^k(s,x(t))‖ ≤ (d^k/T)(K + c(k))‖x(t)‖,

and as previously noted, ‖G^k(δ,x(t))‖² ≤ (K²e^{−2λδ} + c(k)²)‖x(t)‖² and ‖G^k(s,x(t))‖ ≤ K(1 + c(k))‖x(t)‖. We conclude that

(d/dt)V_k(x(t)) ≤ (K²e^{−2λδ} + c(k)²)‖x(t)‖² − ‖x(t)‖² + 2δK(1 + c(k))(K + c(k))(d^k/T)‖x(t)‖²
= [ K²e^{−2λδ} + c(k)² − 1 + 2δK(1 + c(k))(K + c(k)) d^k/T ]‖x(t)‖².

Since δ = log(2K²)/(2λ) gives K²e^{−2λδ} = 1/2, condition (9) yields

K²e^{−2λδ} + c(k)² + 2δK(1 + c(k))(K + c(k)) d^k/T < 1.

Thus (d/dt)V_k(x(t)) ≤ −γ‖x(t)‖² for some γ > 0.

Sum of Squares: Since f is polynomial and z = 0 is trivially polynomial, (P^k z)(s,x) is a polynomial in x and s. Therefore, V_k(x) is a polynomial for any k > 0. To show that V_k is sum-of-squares, we first rewrite the function:

V_k(x) = Σ_{i=1}^{N} ∫_{iT−T}^{iT} G_i^k(s − iT, x)ᵀ G_i^k(s − iT, x) ds.

Since G_i^k is a polynomial in all of its arguments, G_i^k(s − iT, x)ᵀ G_i^k(s − iT, x) is sum-of-squares. It can therefore be represented as R_i(x)ᵀ Z_i(s)ᵀ Z_i(s) R_i(x) for some polynomial vector R_i and matrix of monomial bases Z_i. Then

V_k(x) = Σ_{i=1}^{N} R_i(x)ᵀ ∫_{iT−T}^{iT} Z_i(s)ᵀ Z_i(s) ds R_i(x) = Σ_{i=1}^{N} R_i(x)ᵀ M_i R_i(x),

where M_i = ∫_{iT−T}^{iT} Z_i(s)ᵀ Z_i(s) ds ⪰ 0 is a constant matrix. This proves that V_k is sum-of-squares, since it is a sum of sums-of-squares. We conclude that V = V_k satisfies the conditions of the theorem for any k which satisfies Inequalities (9) and (10).

Degree Bound: Given a k which satisfies the inequality conditions on c(k), we consider the resulting degree of G^k, and hence of V_k. If f is a polynomial of degree q and y is a polynomial of degree d in x, then Py will be a polynomial of degree max{1, dq} in x. Thus, since z = 0, the degree of P^k z will be q^{k−1}. If N > 1, then the degree of G_i^k will be q^{Nk−1}. Thus the maximum degree of the Lyapunov function is 2q^{Nk−1}.
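A minimal numeric sanity check of this construction: for the scalar system x˙ = −x³, the function V_k(x) = ∫₀^δ G^k(s,x)ᵀG^k(s,x) ds can be assembled exactly, since the integrand is a polynomial on each subinterval, and the quadratic-type bounds and decrease along the true trajectory can be verified at sample points. All parameter values below (k, T, number of subintervals) are illustrative assumptions, not values from the paper.

```python
# Sketch: probing the converse Lyapunov function of Theorem 12,
# V_k(x) = int_0^delta G^k(s,x)^T G^k(s,x) ds, for the scalar system
# xdot = -x^3.  The integral of each polynomial piece is computed exactly.

def pmul(a, b):
    out = [0.0]*(len(a)+len(b)-1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i+j] += ai*bj
    return out

def pint(a):  # integral from 0 to t
    return [0.0] + [c/(i+1) for i, c in enumerate(a)]

def peval(a, t):
    return sum(c*t**i for i, c in enumerate(a))

def picard(q, x0):  # (Pq)(t) = x0 - int_0^t q(s)^3 ds for f(y) = -y^3
    integ = pint([-c for c in pmul(pmul(q, q), q)])
    integ[0] += x0
    return integ

def V(x0, k=6, T=1.0/3.0, n=3):
    # sum over subintervals of the exact integral of G_i^k(s,x)^2
    total, x = 0.0, x0
    for _ in range(n):
        q = [x]
        for _ in range(k - 1):
            q = picard(q, x)
        total += peval(pint(pmul(q, q)), T)
        x = peval(q, T)      # restart from the subinterval endpoint
    return total

# quadratic-type bounds alpha*x^2 <= V(x) <= beta*x^2 on sample points
for x in (0.2, 0.5, 0.8, 1.0):
    assert 0.3 * x * x < V(x) < 1.2 * x * x

# V decreases along the true trajectory x(t) = x0/sqrt(1 + 2*x0^2*t)
vals = [V(1.0 / (1.0 + 2.0 * t) ** 0.5) for t in (0.0, 0.5, 1.0, 2.0)]
assert all(vals[i+1] < vals[i] for i in range(len(vals) - 1))
print("V is positive definite and decreasing on samples")
```

Because each piece is a polynomial square integrated over s, this V is, by the Sum of Squares step of the proof, itself SOS in x.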

In the proof of Theorem 12, the integration interval δ was chosen such that the conditions will always be feasible for some k > 0. However, this choice may not be optimal. Numerical experimentation has shown us that a better degree bound may be obtained by varying this parameter in the proof. However, the given value is one which we have found to work well in the vast majority of cases.

We conclude this section by commenting on the form of the converse Lyapunov function,

V_k(x) := ∫_0^δ G^k(s,x)ᵀ G^k(s,x) ds.

Our Lyapunov function is defined using an approximation of the solution map. A dual approach to solution of the Hamilton-Jacobi-Bellman equation was taken in [33] using occupation measures instead of the Picard iteration. Indeed, the dual space of the sum of squares Lyapunov functions can be understood in terms of moments of such occupation measures [34].

As a final note, the proof of Theorem 12 also holds for time-varying systems. Indeed, the original proof was for this case. However, because sum-of-squares is rarely used for time-varying systems, the result has been simplified to improve clarity of presentation.

Fig. 2. Degree bound vs. Exponential Convergence Rate for K = 1.2, r = L = 1, q = 5. Domains λ < .7 and λ > .7 are plotted separately for clarity. [Plots omitted.]

A. Numerical Illustration

To illustrate the degree bound, and hence the complexity of analyzing a system, we plot the degree bound versus the exponential convergence rate of the system. For given parameters, this bound is obtained by numerically searching for the smallest k which satisfies the conditions of Theorem 12. The convergence rate parameter can be viewed as a metric for the accuracy of the sum-of-squares approach: suppose we have a degree bound as a function of convergence rate, d(γ). If it is not possible to find a sum-of-squares Lyapunov function of degree d(γ) proving stability, then we know that the convergence rate of the system must be less than γ.

As can be seen, as the convergence rate increases, the degree bound decreases super-exponentially, so that at γ = 2.4, only a quadratic Lyapunov function is required to prove stability. For cases where high accuracy is required, the degree bound increases quickly, scaling approximately as e^{1/γ}. To reduce the complexity of the problem, in some cases less conservative bounds on the degree can be found by considering the monomial terms in the vector field. If the complexity is still unacceptably high, then one can consider the use of parallel computing: unlike single-core processing, parallel computing power continues to increase exponentially. For a discussion on using parallel computing to solve polynomial optimization problems, we refer to [35].

VII. QUADRATIC LYAPUNOV FUNCTIONS

In this section, we briefly explore the implications of our result for the existence of quadratic Lyapunov functions proving exponential stability of nonlinear systems. Specifically, we look at when the theorem predicts the existence of a degree bound of 2. We first note that when the vector field is linear, then q = 1, which implies that 2q^{Nk−1} = 2 independent of N and k. Recall N is the number of Picard iterations, k is the number of extensions and q is the degree of the polynomial vector field f. Hence an exponentially stable linear system has a quadratic Lyapunov function - which is not surprising.

Instead we consider the case when q ≠ 1. In this case, for a quadratic Lyapunov function, we require N = k = 1 - a single Picard iteration and no extensions. By examining the proof of Theorem 12, we see that if the conditions of the theorem are satisfied with N = k = 1, then V(x) = xᵀx is a Lyapunov function which establishes exponential stability of the system. Since this is perhaps the most commonly used form of Lyapunov function, it is worth considering how conservative it is when applied to nonlinear systems of the form

x˙(t) = f(x(t)).
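The numerical search mentioned above can be sketched as follows. The conditions (9)-(11) are implemented as reconstructed in this paper, with δ = log(2K²)/(2λ) and an assumed choice T = 1/(4L); the routine returns the smallest k passing all three tests and the corresponding degree bound 2q^{Nk−1}. This is a sketch under those assumptions, not the authors' code.

```python
# Sketch: searching for the smallest k in Theorem 12 and the resulting degree
# bound 2*q**(N*k - 1).  T = 1/(4L) is an illustrative choice (any T < 1/(2L)
# is admissible), and (9)-(11) are implemented as printed.

import math

def degree_bound(K, lam, L, q, k_max=200):
    T = 1.0 / (4.0 * L)
    delta = math.log(2.0 * K * K) / (2.0 * lam)
    N = int(delta / T) + 1               # smallest integer with N*T > delta
    d = T * L                            # contraction coefficient, here d = 1/4
    def c(k):                            # Eq. (11)
        return sum((math.exp(T * L) + K * d**k) ** i * K * K * d**k
                   for i in range(N))
    for k in range(1, k_max + 1):
        ck = c(k)
        cond_a = ck < K
        cond_b = ck**2 + (K * math.log(2*K*K) / (2*lam*T)) * d**k \
                 * (1 + ck) * (K + ck) < 0.5                       # Eq. (9)
        cond_c = ck**2 < lam / (K * L * math.log(2*K*K)) \
                 * (1 - (2*K*K) ** (-L/lam))                       # Eq. (10)
        if cond_a and cond_b and cond_c:
            return k, N, 2 * q ** (N * k - 1)
    return None

result = degree_bound(K=1.2, lam=1.0, L=1.0, q=3)
print(result)
```

As the plots in Figure 2 suggest, increasing λ loosens conditions (9)-(10), so smaller k (and a much smaller bound) suffices at faster decay rates.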

In the following corollary we give sufficient conditions on the vector field and decay rate for the Lyapunov function xᵀx to prove exponential stability.

Corollary 1: Suppose that system (6) is exponentially stable with

‖x(t)‖ ≤ K‖x(0)‖e^{−λt}

for some λ > 0, K ≥ 1 and for any x(0) ∈ M, where M is a bounded nonempty region of radius r. Let L be a Lipschitz bound for f on B_{4Kr}. Suppose that there exists some 1/(2L) > δ > 0 such that

K²e^{−2λδ} + c₁² + 2KδL(1 + c₁)(K + c₁) < 1

and KδL < 1, where c₁ = K²δL. Let V(x) = xᵀx. Then for any x ∈ M,

V˙(x) = ∇V(x)ᵀ f(x) ≤ −β‖x‖²

for some β > 0.

Proof: We reconsider the proof of Theorem 12. This time, we set N = k = 1 and T = δ, and determine whether there exists a δ = T < 1/(2L) which satisfies the upper-boundedness, lower-boundedness and derivative conditions. Because V(x) = δxᵀx, the upper and lower boundedness conditions are immediately satisfied. The derivative negativity condition is

K²e^{−2λδ} + c(1)² + 2KδL(1 + c(1))(K + c(1)) < 1,

where c(1) = c₁ = K²δL. This is satisfied by the statement of the theorem.

Note that neither the size of the region we consider nor the degree of the vector field plays any role in determining the degree bound. To illustrate the conditions for existence of a quadratic Lyapunov function, we plot the required decay rate vs. the Lipschitz continuity factor in Figure 3 for K = 1.2. This plot shows that as the Lipschitz constant of the vector field increases (and the field becomes less smooth), the conservatism of using the quadratic Lyapunov function xᵀx increases.

Fig. 3. Required decay rate for a quadratic Lyapunov function vs. Lipschitz bound for K = 1.2. [Plot omitted.]

VIII. IMPLICATIONS FOR SUM-OF-SQUARES PROGRAMMING

In this section we consider the implications that the above results have on Sum of Squares programming.

A. Bounding the number of decision variables

Because the set of continuously differentiable functions is an infinite-dimensional vector space, the general problem of finding a Lyapunov function is an infinite-dimensional feasibility problem. However, the set of sum-of-squares Lyapunov functions with bounded degree is finite-dimensional. The most significant implication of our theorem is a bound on the number of variables in the problem of determining stability of a nonlinear vector field. The nonlinear stability problem can now be expressed as a feasibility problem of the following form.

Theorem 13: For a given λ, let 2d be the degree bound associated with Theorem 12 and define N = (n+d)!/(n! d!). If System (6) is exponentially stable on M with decay rate λ or greater, then the following is feasible for some α, β, γ > 0:

Find P ∈ S^N such that
P ≥ 0,
α‖x‖² ≤ Z(x)ᵀ P Z(x) ≤ β‖x‖² for all x ∈ M,
∇(Z(x)ᵀ P Z(x))ᵀ f(x) ≤ −γ‖x‖² for all x ∈ M,

where Z(x) is the vector of monomials in x of degree d or less.

Proof: The proof follows immediately from the fact that a polynomial V of degree 2d is SOS if and only if there exists a P ≥ 0 such that V(x) = Z(x)ᵀ P Z(x).

Our condition bounds the number of variables in the feasibility problem associated with Theorem 13. If M is semialgebraic, then the conditions in Theorem 13 can be enforced using sum-of-squares and the Positivstellensatz [36]. The complexity of solving the optimization problem will depend on the complexity of the Positivstellensatz test. If positivity on a semialgebraic set is decidable, as indicated in [37], this implies the question of exponential stability on a bounded set is decidable.

B. Local Positivity

Another implication of our result is that it reduces the complexity of enforcing the positivity constraint. As discussed in Section III, semidefinite programming is used to optimize over the cone of sums-of-squares of polynomials. There are several different ways the stability conditions can be enforced. For example, we have the following theorem.

Theorem 14: Suppose there exist a polynomial V and sum-of-squares polynomials s₁, s₂, s₃ and s₄ such that the following conditions are satisfied for α, γ > 0:

V(x) − α‖x‖² = s₁(x) + g(x)s₂(x)
−∇V(x)ᵀ f(x) − γ‖x‖² = s₃(x) + g(x)s₄(x)

Then we have exponential stability of System (6) on {x : {y : V(y) ≤ V(x)} ⊂ U}.
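The size of the feasibility problem in Theorem 13 is easy to compute. The sketch below counts the monomials of degree at most d in n variables and the resulting number of free entries of the symmetric matrix P; the specific values of n and d are illustrative.

```python
# Sketch: the size of the semidefinite variable in Theorem 13.  The number of
# monomials of degree <= d in n variables is N = (n+d)!/(n!*d!), so P in S^N
# has N*(N+1)/2 free entries.

from math import comb

def num_monomials(n, d):
    # monomials of degree <= d in n variables
    return comb(n + d, d)

def sdp_variables(n, d):
    N = num_monomials(n, d)
    return N, N * (N + 1) // 2

# e.g. a degree-8 Lyapunov candidate (2d = 8) for a 2-state system:
N, free = sdp_variables(n=2, d=4)
assert N == 15 and free == 120
print(N, free)
```

This makes concrete why the degree bound matters: the SDP variable grows combinatorially in d, so a smaller certified degree directly shrinks the feasibility problem.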

The complexity of the conditions associated with Theorem 14 is determined by the four sum-of-squares variables, s_i. Theorem 14 uses the Positivstellensatz multipliers s₂ and s₄ to ensure that the Lyapunov function need only be positive and decreasing on the region X = {x : g(x) ≥ 0}. However, as we now know that the Lyapunov function can be assumed SOS, we can eliminate the multiplier s₂, reducing the complexity of the problem.

Theorem 15: Suppose there exist a polynomial V and sum-of-squares polynomials s₁, s₂ and s₃ such that the following conditions are satisfied for α, γ > 0:

  V(x) − α‖x‖² = s₁(x)
  −∇(V(x) + α‖x‖²)^T f(x) − γ‖x‖² = s₂(x) + g(x)s₃(x)

Then we have exponential stability of System (6) for any x(0) such that {y : V(y) ≤ V(x(0))} ⊂ X, where X := {x : g(x) ≥ 0}.

This simplification reduces the number of SOS variables by 25% (from 4 to 3). If the semialgebraic set X is defined using several polynomials (e.g. a hypercube), then the reduction in the number of variables can approach 50%. SDP solvers are typically of complexity O(n⁶), where n is the dimension of the symmetric matrix variable. In the above example we reduced n = 4N to n = 3N; thus this simplification can potentially decrease computation by a factor of (3/4)⁶ ≈ 0.18, i.e. by roughly 82%.

Fig. 4. Plot of trajectories of the Van-der-Pol Oscillator. We estimate the overshoot parameter as K ≅ 1.
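The numbers above can be reproduced directly; a minimal sketch (the helper names are ours), which also computes the basis size N = (n+d)!/(n!d!) appearing in Theorem 13:

```python
from math import factorial

def basis_size(n, d):
    """N = (n+d)!/(n! d!): the number of monomials in n variables of
    degree d or less, i.e. the length of the vector Z(x) in Theorem 13."""
    return factorial(n + d) // (factorial(n) * factorial(d))

def sdp_cost_ratio(n_before, n_after, exponent=6):
    """Relative cost of an SDP solve under the O(n^6) complexity model."""
    return (n_after / n_before) ** exponent

# Van-der-Pol example: n = 2 states, d = 6, so N = 28 and eliminating the
# multiplier s2 shrinks the matrix dimension from 4N = 112 to 3N = 84.
N = basis_size(2, 6)
print(N, 4 * N, 3 * N)                             # 28 112 84
print(f"{1 - sdp_cost_ratio(4 * N, 3 * N):.0%}")   # 82%
```

The ratio (3N/4N)⁶ = (3/4)⁶ is independent of N, so the potential saving of roughly 82% holds for any basis size.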

IX. NUMERICAL EXAMPLE

In this section, we use the Van-der-Pol oscillator to illustrate how the degree bound influences the accuracy of the stability test. The zero equilibrium point of the Van-der-Pol oscillator

is unstable. In reverse-time, however, this equilibrium is stable with a domain of attraction bounded by the well-known forward-time limit-cycle. The reverse-time dynamics are as follows.

  ẋ₁(t) = −x₂(t)

  ẋ₂(t) = −μ(1 − x₁(t)²)x₂(t) + x₁(t)

Fig. 5. Degree Bound for the Van-der-Pol Oscillator as a Function of Decay Rate.

For simplicity, we choose μ = 1. On a ball of radius r, the Lipschitz constant can be found from L = sup_{x∈B_r} ‖Df(x)‖, where ‖·‖ is the maximum singular value norm. We find a

Lipschitz constant for the Van-der-Pol oscillator on radius r = 1 to be 2.1. Numerical simulations indicate K ≅ 1, as illustrated in Figure 4. Given these parameters, the degree bound plot is illustrated in Figure 5. Note that the choice of K = 1 dramatically improves the degree bound. Numerical simulation shows the decay rate to be a relatively constant λ = .542 throughout the unit ball. This is illustrated in Figure 6. This gives us an estimate of the degree bound as d = 6.
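The Lipschitz bound L = sup_{x∈B_r} ‖Df(x)‖ can be estimated by sampling the Jacobian over the ball; a sketch (the grid-sampling approach and names are ours):

```python
import numpy as np

MU = 1.0

def jacobian(x1, x2, mu=MU):
    """Jacobian of the reverse-time Van-der-Pol field
    f(x) = (-x2, -mu*(1 - x1^2)*x2 + x1)."""
    return np.array([[0.0, -1.0],
                     [2.0 * mu * x1 * x2 + 1.0, -mu * (1.0 - x1**2)]])

def lipschitz_bound(r=1.0, n_grid=200):
    """Estimate L = sup over the ball B_r of the maximum singular value
    of Df(x), by sampling a polar grid."""
    best = 0.0
    for rho in np.linspace(0.0, r, n_grid):
        for th in np.linspace(0.0, 2.0 * np.pi, n_grid):
            x1, x2 = rho * np.cos(th), rho * np.sin(th)
            # ord=2 gives the largest singular value of the matrix.
            best = max(best, np.linalg.norm(jacobian(x1, x2), 2))
    return best

print(f"L on the unit ball ≈ {lipschitz_bound():.2f}")  # the text reports 2.1
```

The supremum is attained on the boundary of the ball, so a finer angular grid at ρ = r is the cheapest way to sharpen the estimate.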

To find the converse Lyapunov function associated with this degree bound we construct the Picard iteration.

  (Pz)(t,x) = x + ∫₀^t f(0) ds = x

  (P²z)(t,x) = x + ∫₀^t f((Pz)(s,x)) ds = x + ∫₀^t f(x) ds = x + f(x)t

Fig. 6. A semi-log plot of ‖x‖ for three trajectories. We estimate λ = .542 for the Van-der-Pol oscillator.
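These two iterates can be reproduced symbolically; a small sketch using sympy (the helper names are ours), starting from the zero function and using f(0) = 0:

```python
import sympy as sp

t, s, x1, x2 = sp.symbols('t s x1 x2')
x = sp.Matrix([x1, x2])

def f(z):
    """Reverse-time Van-der-Pol vector field with mu = 1."""
    return sp.Matrix([-z[1], -(1 - z[0]**2) * z[1] + z[0]])

def picard(z, n):
    """Apply (Pz)(t,x) = x + int_0^t f(z(s,x)) ds  n times."""
    for _ in range(n):
        z = x + f(z.subs(t, s)).integrate((s, 0, t))
    return sp.simplify(z)

z0 = sp.zeros(2, 1)
print(picard(z0, 1))                 # first iterate: x itself, since f(0) = 0
print(sp.expand(picard(z0, 2) - x))  # second iterate adds the term f(x)*t
```

Further iterates introduce higher powers of t and of the state, which is how the degree of the eventual Lyapunov function grows with the iteration count.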

The converse Lyapunov function is

  V(x) = ∫₀^δ (P²z(s,x))^T (P²z(s,x)) ds
       = ∫₀^δ (x + f(x)s)^T (x + f(x)s) ds
       = ∫₀^δ [x; f(x)]^T [I, sI; sI, s²I] [x; f(x)] ds
       = [x; f(x)]^T [δI, (δ²/2)I; (δ²/2)I, (δ³/3)I] [x; f(x)],

where [x; f(x)] denotes the stacked vector and [A, B; C, D] a block matrix.

If δ = T = 1/(2L) = 1/4, for the Van-der-Pol Oscillator we get the SOS Lyapunov function

  192·V(x) = [x; f(x)]^T [48I, 6I; 6I, I] [x; f(x)]
           = [6.93x + 2.45f(x); 2.45x + f(x)]^T [6.93x + 2.45f(x); 2.45x + f(x)]
           = (6.93x₁ − 2.45x₂)² + (2.45(x₁ + x₁²x₂) + 4.48x₂)² + (2.45x₁ − x₂)² + (x₁ + x₁²x₂ + 1.45x₂)²

As per the previous discussion, we use SOSTOOLS to verify that this Lyapunov function proves stability. Note that we must show the function is decreasing on the ball of radius r = .25, as the Lipschitz bound used in the theorem is for the ball B_{4r}. We are able to verify that the Lyapunov function is decreasing on the ball of radius r = .25. Some level sets of this Lyapunov function are illustrated in Figure 7. Through experimentation, we find that when we increase the ball to radius r = 1, the Lyapunov function is no longer decreasing. We also found that the quadratic Lyapunov function V(x) = x^T x is not decreasing on the ball of radius r = .25. Although we believe that our degree bound is somewhat conservative, these results indicate the conservatism is not excessive.

Fig. 7. Level Sets of the converse Lyapunov function, with Ball of radius r = .25.

To explore the limits of the SOS approach, for degree bounds 2, 4, 6, 8 and 10, we find the largest ball on which we are able to find a sum-of-squares Lyapunov function. We then use the largest sublevel set of this Lyapunov function on which the trajectories decrease as an estimate for the domain of attraction of the system. These level sets are illustrated in Figure 8. We see that as the degree bound increases, our estimate of the domain of attraction improves.

Fig. 8. Best Invariant Region vs. Degree Bound with Limit Cycle.
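As a quick independent check, V can be evaluated directly from its closed-form integral and tested numerically for decrease on the ball of radius r = .25. The sampling check below is ours and complements, but does not replace, the SOSTOOLS verification:

```python
import numpy as np

DELTA = 0.25  # delta = T = 1/(2L), with the Lipschitz bound rounded to L = 2

def f(x):
    """Reverse-time Van-der-Pol field, mu = 1."""
    return np.array([-x[1], -(1.0 - x[0]**2) * x[1] + x[0]])

def V(x, delta=DELTA):
    """V(x) = int_0^delta ||x + f(x)s||^2 ds, written out in closed form."""
    fx = f(x)
    return delta * (x @ x) + delta**2 * (x @ fx) + (delta**3 / 3.0) * (fx @ fx)

def Vdot(x, h=1e-6):
    """Derivative of V along the flow, grad V(x)' f(x), approximated by a
    central difference taken along the direction f(x)."""
    fx = f(x)
    return (V(x + h * fx) - V(x - h * fx)) / (2.0 * h)

# The quadratic-form coefficients reproduce 192*V = [x;f]'[48I,6I;6I,I][x;f]:
assert np.isclose(192 * DELTA, 48)
assert np.isclose(192 * DELTA**2 / 2, 6)
assert np.isclose(192 * DELTA**3 / 3, 1)

# Sample the ball of radius 0.25: V should be positive and decreasing.
rng = np.random.default_rng(0)
u = rng.normal(size=(2000, 2))
pts = (0.25 * rng.uniform(0.05, 1.0, (2000, 1))
       * u / np.linalg.norm(u, axis=1, keepdims=True))
assert all(V(p) > 0 and Vdot(p) < 0 for p in pts)
print("V > 0 and Vdot < 0 at all sampled points")
```

A sampled check of this kind can only falsify decrease, not certify it; the sum-of-squares test is what provides the actual proof on the ball.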
X. CONCLUSION

In this paper, we have used the Picard iteration to construct an approximation to the solution map on arbitrarily long intervals. We have used this approximation to prove that exponential stability of a polynomial vector field on a bounded set implies the existence of a Lyapunov function which is a sum-of-squares of polynomials with a bound on the degree. This implies that the question of exponential stability on a bounded set may be decidable. Furthermore, the converse Lyapunov function we have used in this paper is relatively easy to construct given the vector field and may find applications in other areas of control. The main result also holds for time-varying systems.

Recently, there has been interest in using semidefinite programming for the analysis of nonlinear systems using sum-of-squares. This paper clarifies several questions on the application of this method. We now know that exponential stability on a bounded set implies the existence of an SOS Lyapunov function and we know how complex this function may be. It has been recently shown that globally asymptotically stable vector fields do not always admit sum-of-squares Lyapunov functions [38]. Still unresolved is the question of the existence of polynomial Lyapunov functions for stability of globally exponentially stable vector fields.

REFERENCES

[1] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory. SIAM Studies in Applied Mathematics, 1994.
[28] P. A. Parrilo, Structured Semidefinite Programs and Semialgebraic Geometry Methods in Robustness and Optimization. PhD thesis, Caltech, Pasadena, CA, 2000. Available at http://www.mit.edu/~parrilo/pubs/index.html.
[29] J. E. Marsden and M. J. Hoffman, Elementary Classical Analysis. W. H. Freeman and Company, 2nd ed., 1993.
[30] E. A. Coddington and N. Levinson, Theory of Ordinary Differential Equations. McGraw-Hill, 1955.
[31] E. Lindelöf and M. Picard, “Sur l’application de la méthode des approximations successives aux équations différentielles ordinaires du premier ordre,” Comptes rendus hebdomadaires des séances de l’Académie des sciences, vol. 114, pp. 454–457, 1894.
[2] J. B. Lasserre, “Global optimization with polynomials and the problem of moments,” SIAM J. Optim., vol. 11, no. 3, pp. 796–817, 2001.
[3] Y. Nesterov, High Performance Optimization, vol. 33 of Applied Optimization, ch. Squared Functional Systems and Optimization Problems. Springer, 2000.
[4] P. A. Parrilo, Structured Semidefinite Programs and Semialgebraic Geometry Methods in Robustness and Optimization. PhD thesis, California Institute of Technology, 2000.
[5] D. Henrion and A. Garulli, eds., Positive Polynomials in Control, vol. 312 of Lecture Notes in Control and Information Science. Springer, 2005.
[6] G. Chesi, “On the gap between positive polynomials and SOS of polynomials,” IEEE Transactions on Automatic Control, vol. 52, pp. 1066–1072, June 2007.
[7] G. Chesi, “LMI techniques for optimization over polynomials in control: A survey,” IEEE Transactions on Automatic Control, vol. 55, no. 11, pp. 2500–2510, 2010.
[32] H. Khalil, Nonlinear Systems. Prentice Hall, third ed., 2002.
[33] J. B. Lasserre, D. Henrion, C. Prieur, and E. Trélat, “Nonlinear optimal control via occupation measures and LMI-relaxations,” SIAM J. Control Optim., vol. 47, no. 4, pp. 1643–1666, 2008.
[34] H. Peyrl and P. A. Parrilo, “A theorem of the alternative for SOS Lyapunov functions,” in Proceedings IEEE Conference on Decision and Control, pp. 1687–1692, 2007.
[35] M. M. Peet and Y. V. Peet, “A parallel-computing solution for optimization of polynomials,” in Proceedings of the American Control Conference, pp. 4851–4856, 2010.
[36] M. Putinar, “Positive polynomials on compact semi-algebraic sets,” Indiana Univ. Math. J., vol. 42, no. 3, pp. 969–984, 1993.
[37] J. Nie and M. Schweighofer, “On the complexity of Putinar’s positivstellensatz,” Journal of Complexity, vol. 23, pp. 135–150, 2007.
[38] A. A. Ahmadi, M. Krstic, and P. A. Parrilo, “A globally asymptotically stable polynomial vector field with no polynomial Lyapunov function,” in Proceedings of the IEEE Conference on Decision and Control, pp. 7579–7580, 2011.
[8] S. Prajna, A. Papachristodoulou, P. Seiler, and P. A. Parrilo, “New developments in sum of squares optimization and SOSTOOLS,” in Proceedings of the American Control Conference, pp. 5606–5611, 2004.
[9] D. Henrion and J.-B. Lasserre, “GloptiPoly: Global optimization over polynomials with MATLAB and SeDuMi,” in IEEE Conference on Decision and Control, pp. 747–752, 2001.
[10] A. Papachristodoulou and S. Prajna, “On the construction of Lyapunov functions using the sum of squares decomposition,” in Proceedings IEEE Conference on Decision and Control, pp. 3482–3487, 2002.
[11] T.-C. Wang, Polynomial Level-Set Methods for Nonlinear Dynamics and Control. PhD thesis, Stanford University, 2007.
[12] W. Tan, Nonlinear Control Analysis and Synthesis using Sum-of-Squares Programming. PhD thesis, University of California, Berkeley, 2006.
[13] M. M. Peet, “Exponentially stable nonlinear systems have polynomial Lyapunov functions on bounded regions,” IEEE Transactions on Automatic Control, vol. 52, pp. 979–987, May 2009.
[14] E. A. Barbasin, “The method of sections in the theory of dynamical systems,” Rec. Math. (Mat. Sbornik) N. S., vol. 29, pp. 233–280, 1951.
[15] I. Malkin, “On the question of the reciprocal of Lyapunov’s theorem on asymptotic stability,” Prikl. Mat. Meh., vol. 18, pp. 129–138, 1954.

Matthew M. Peet received B.S. degrees in Physics and in from the University of Texas at Austin in 1999 and the M.S. and Ph.D. in Aeronautics and Astronautics from Stanford University in 2001 and 2006, respectively. He was a Postdoctoral Fellow at the National Institute for Research in Computer Science and Control (INRIA) near Paris, France, from 2006-2008, where he worked in the SISYPHE and BANG groups. He is currently an Assistant Professor in the Mechanical, Materials, and Aerospace Engineering Department of the Illinois Institute of Technology and director of the Cybernetic Systems and Controls Laboratory. His current research interests are in the role of computation as it is applied to the understanding and control of complex and large-scale systems. Applications include fusion and immunology.
[16] J. Kurzweil, “On the inversion of Lyapunov’s second theorem on stability of motion,” Amer. Math. Soc. Transl., vol. 2, no. 24, pp. 19–77, 1963. English translation; originally appeared 1956.
[17] J. L. Massera, “Contributions to stability theory,” Annals of Mathematics, vol. 64, pp. 182–206, July 1956.
[18] F. W. Wilson Jr., “Smoothing derivatives of functions and applications,” Transactions of the American Mathematical Society, vol. 139, pp. 413–428, May 1969.
[19] Y. Lin, E. Sontag, and Y. Wang, “A smooth converse Lyapunov theorem for robust stability,” SIAM J. Control Optim., vol. 34, no. 1, pp. 124–160, 1996.
[20] V. Lakshmikantam and A. A. Martynyuk, “Lyapunov’s direct method in stability theory (review),” International Applied Mechanics, vol. 28, pp. 135–144, March 1992.
[21] A. R. Teel and L. Praly, “Results on converse Lyapunov functions from class-KL estimates,” pp. 2545–2550, 1999.
[22] W. Hahn, Stability of Motion. Springer-Verlag, 1967.
[23] N. N. Krasovskii, Stability of Motion. Stanford University Press, 1963.
[24] V. I. Arnol’d, Ordinary Differential Equations. Springer, 2nd ed., 2006. Translated by Roger Cook.

Antonis Papachristodoulou received an MA/MEng degree in Electrical and Information Sciences from the University of Cambridge in 2000, as a member of Robinson College. In 2005 he received a Ph.D. in Control and Dynamical Systems, with a minor in Aeronautics, from the California Institute of Technology. In 2005 he held a David Crighton Fellowship at the University of Cambridge and a postdoctoral research associate position at the California Institute of Technology before joining the Department of Engineering Science at the University of Oxford, Oxford, UK in January 2006, where he is now a University Lecturer in Control Engineering and tutorial fellow at Worcester College. His research interests include scalable analysis of nonlinear systems using convex optimization based on Sum of Squares programming, analysis and design of large-scale networked control systems with communication constraints, and Systems and Synthetic Biology.
[25] K. G. Murty and S. N. Kabadi, “Some NP-complete problems in quadratic and nonlinear programming,” Mathematical Programming, vol. 39, pp. 117–129, 1987.
[26] N. Z. Shor, “Class of global minimum bounds of polynomial functions,” Cybernetics, vol. 23, no. 6, pp. 731–734, 1987.
[27] V. Powers and T. Wörmann, “An algorithm for sums of squares of real polynomials,” Journal of Pure and Applied Algebra, vol. 127, pp. 99–104, 1998.