Stability Analysis of Parabolic Linear PDEs with Two Spatial Dimensions Using Lyapunov Method and SOS

Evgeny Meyer and Matthew M. Peet

Abstract— In this paper, we address stability of parabolic linear Partial Differential Equations (PDEs). We consider PDEs with two spatial variables and spatially dependent polynomial coefficients. We parameterize a class of Lyapunov functionals and their time derivatives by polynomials and express stability as optimization over polynomials. We use Sum-of-Squares and Positivstellensatz results to numerically search for a solution to the optimization over polynomials. We also show that our algorithm can be used to estimate the rate of decay of the solution to the PDE in the $L_2$ norm. Finally, we validate the technique by applying our conditions to the 2D biological KISS PDE model of population growth and an additional example.

I. INTRODUCTION

Stability analysis and controller design for Partial Differential Equations (PDEs) is an active area of research [1], [2]. One approach to stability analysis of PDEs is to approximate the PDEs with Ordinary Differential Equations (ODEs) using, e.g., Galerkin's method or finite differences, and then apply finite-dimensional optimal control methods, e.g. [3]–[6]. In this paper we consider stability analysis without discretization. Specifically, we use Sum-of-Squares (SOS) optimization to construct Lyapunov functionals for parabolic PDEs with two spatial variables and spatially dependent coefficients.

It is well-known that the existence of a Lyapunov functional for a system of ODEs or PDEs is a sufficient condition for stability. For example, [7] uses a Lyapunov approach and Linear Operator Inequalities (LOIs) to provide sufficient conditions for exponential stability of controlled heat and delayed wave equations. Another method based on Lyapunov and semigroup theories was applied in [8] for the analysis of wave and beam PDEs with constant coefficients and delayed boundary control. In [9] a Lyapunov-based analysis of semilinear diffusion equations with delays gave stability conditions in terms of Linear Matrix Inequalities (LMIs).

Extensive examples of applying the backstepping method to the boundary control of PDEs can be found in [10]–[14]. Briefly speaking, backstepping uses a Volterra operator to search for an invertible mapping from the original PDE to a chosen "target" PDE, known to be stable. In order to find such a mapping, one has to solve, analytically or numerically, a PDE for the Volterra operator's kernel. If the mapping is found, it provides the boundary control law. Applications to two-dimensional cases were discussed in [15] and [16]. However, backstepping requires us to guess the target PDE and to solve the PDE for the kernel, which may be a challenging task for PDEs with two spatial variables and spatially dependent coefficients. Moreover, backstepping cannot be used for stability analysis in the absence of a control input.

Note that SOS has been previously applied in [17] and [18] to find Lyapunov functionals for 1D parabolic PDEs. Input-output analysis of PDE systems with an SOS implementation is discussed in [19]. Examples of using SOS in controller design for one-dimensional PDEs can be found in [20]–[22].

The main contribution of this paper is to provide an algorithm for stability analysis of 2D parabolic PDEs with spatially varying coefficients. We search for a polynomial $s$ which defines a Lyapunov functional of the form
$$ V(u(t,\cdot)) = \int_\Omega s(x)\,u(t,x)^2\,dx, $$
where $u$ is the solution of the PDE, $t$ represents time, and $x$ and $\Omega$ are the spatial variable and spatial domain. We introduce stability conditions in the form of parameter-dependent LMIs, which certify the Lyapunov inequalities
$$ V(u(t,\cdot)) > 0 \quad \text{and} \quad \frac{d}{dt}\big[V(u(t,\cdot))\big] < 0 \quad \text{for all } t > 0. \quad (1) $$

There exist several algorithms for optimization over polynomials which can be applied to (1) in order to find $V$; see, e.g., [24] for a survey of algorithms for optimization over polynomials. In this paper, we apply SOS and Positivstellensatz results to find a solution to the parameter-dependent LMIs. For details on the Positivstellensatz see [23] or [25]. We use the MATLAB toolbox SOSTOOLS (see [26] for details on SOSTOOLS) and SeDuMi, a well-known semidefinite programming solver, to solve the SOS optimization problem.

The paper is organized as follows. In Section II we specify the notation and provide some background. Lyapunov stability conditions for parabolic PDEs are given in Section III. We demonstrate the proposed method in Section IV and present SOS optimization problems in Section V. Finally, we discuss numerical implementations in Section VI and conclude the paper in Section VII with ideas about future research steps.

This work was supported by the National Science Foundation under Grants No. 1301851 and 1301660.
Evgeny Meyer is a Ph.D. student with the School for Engineering of Matter, Transport and Energy, Arizona State University, Tempe, AZ, 85281 USA, [email protected].
Matthew M. Peet is an assistant professor with the School for Engineering of Matter, Transport and Energy, Arizona State University, Tempe, AZ, 85281 USA, [email protected].

II. PRELIMINARIES

A. Notation

$\mathbb{N}$ is the set of natural numbers and $\mathbb{N}_0 := \mathbb{N} \cup \{0\}$. $\mathbb{R}^n$ and $\mathbb{S}^n$ are the $n$-dimensional Euclidean space and the space of $n \times n$ real symmetric matrices. For $x \in \mathbb{R}^n$, let $x^T$ denote the transpose of $x$ and $x_i \in \mathbb{R}$ the $i$-th component of $x$. $\|\cdot\|_1$ is a norm on $\mathbb{R}^n$, defined as $\|x\|_1 := \sum_{i=1}^n |x_i|$. For $X \in \mathbb{S}^n$, $X \le 0$ means that $X$ is negative semidefinite. The symbol $*$ will denote the symmetric elements of a symmetric matrix. For $\Omega \subseteq \mathbb{R}^n$ and $f : \Omega \to \mathbb{R}$, let $f(x)$ stand for $f(x_1, \ldots, x_n)$ and let $\int_\Omega f(x)\,dx$ represent an integral of $f$ over $\Omega$ with $dx := dx_1 dx_2 \cdots dx_n$.

$\mathbb{N}_0^n := \{\alpha \in \mathbb{R}^n : \alpha_i \in \mathbb{N}_0\}$. A vector $\alpha \in \mathbb{N}_0^n$ is called a multi-index. For $l \in \mathbb{N}$ define the set
$$ Q_l^n := \{\alpha \in \mathbb{N}_0^n : \|\alpha\|_1 \le l\}. \quad (2) $$
For $\alpha \in \mathbb{N}_0^n$, $x \in \mathbb{R}^n$ and $g : \mathbb{R}^n \to \mathbb{R}$, the partial derivative is
$$ D^\alpha[g(x)] := \frac{\partial^\alpha}{\partial x^\alpha}[g(x)] = \prod_{i=1}^n \frac{\partial^{\alpha_i}}{\partial x_i^{\alpha_i}}[g(x)]. \quad (3) $$
Note that $\frac{\partial^0}{\partial x_i^0}[g(x)] = g(x)$ for any $i \in \{1, \ldots, n\}$. We will also use the classical notation $u_{x_1 x_2}(t,x) := \frac{\partial}{\partial x_2}\big[\frac{\partial}{\partial x_1}[u(t,x)]\big]$.

If for a function $f : \Omega \to \mathbb{R}$ and some $\alpha \in \mathbb{N}_0^n$ the derivative $D^\alpha[f(x)]$ exists for all $x \in \Omega$, then there exists $g : \Omega \to \mathbb{R}$ such that $g(x) = D^\alpha[f(x)]$ for all $x \in \Omega$. For brevity, $D^\alpha[f] := g$.

Further we use $W^{2,2}(\Omega)$. It is one of the Banach spaces $W^{k,p}(\Omega)$, which denote the Sobolev spaces of functions $u : \Omega \to \mathbb{R}$ with $D^\alpha[u] \in L_p(\Omega)$ for all $\alpha \in Q_k^n$, where $Q_k^n$ is defined as in (2), equipped with the norm
$$ \|u\|_{k,p} := \sum_{\|\alpha\|_1 \le k} \|D^\alpha[u]\|_{L_p}, $$
where $L_p(\Omega)$ stands for the space of Lebesgue-measurable functions $g : \Omega \to \mathbb{R}$ with norm, for $p \in \mathbb{N}$,
$$ \|g\|_{L_p} := \Big(\int_\Omega |g(s)|^p\,ds\Big)^{1/p}, $$
and $\|g\|_{L_\infty} := \sup_{s \in \Omega} |g(s)|$.

It is known that for continuous functions $u : [0,\infty) \to W^{2,2}(\Omega)$ and $V : W^{2,2}(\Omega) \to \mathbb{R}$, their composition $(V \circ u) : [0,\infty) \to \mathbb{R}$ is also continuous, and the upper right-hand derivative $D^+[V(u(t))]$ is defined by
$$ D^+[V(u(t))] := \limsup_{h \to 0^+} \frac{V(u(t+h)) - V(u(t))}{h}. $$
Note that if $v : [0,\infty) \to \mathbb{R}$ is differentiable at $t \in (0,\infty)$, then $D^+[v(t)] = \frac{d}{dt}[v(t)]$.

B. Polynomials

For a multi-index $\alpha \in \mathbb{N}_0^n$ and $x \in \mathbb{R}^n$, let $x^\alpha := \prod_{i=1}^n x_i^{\alpha_i} = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}$. Then $x^\alpha$ is a monomial of degree $\|\alpha\|_1 \in \mathbb{N}_0$. A polynomial is a finite linear combination of monomials, $p(x) := \sum_\alpha p_\alpha x^\alpha$, where the summation is applied over a given finite set of multi-indexes $\alpha$ and $p_\alpha \in \mathbb{R}$ denotes the corresponding coefficient. Let $\mathbb{R}[x]$ stand for the polynomial ring in $n$ variables $x_1, \ldots, x_n$ with coefficients in $\mathbb{R}$. The degree of a polynomial $p \in \mathbb{R}[x]$ is the largest degree among all of its monomials, and is denoted by $\deg(p) \in \mathbb{N}_0$.

A polynomial $p \in \mathbb{R}[x]$ is called Sum-of-Squares (SOS) if there is a finite number of polynomials $z_i \in \mathbb{R}[x]$ such that for all $x \in \mathbb{R}^n$, $p(x) = \sum_i z_i(x)^2$. We denote the set of sum-of-squares polynomials in variables $x$ by $\Sigma[x]$. If $p \in \Sigma[x]$, then $p(x) \ge 0$ for all $x \in \mathbb{R}^n$.

Polynomial matrices and SOS matrices are defined in a similar manner (see, e.g., [27]). We use $\Sigma[S^m(x)]$ to denote the set of real symmetric SOS polynomial matrices. If $M \in \Sigma[S^m(x)]$, then for all $x \in \mathbb{R}^n$, $M(x) \in \mathbb{S}^m$ and $M(x) \ge 0$.
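Checking membership in $\Sigma[x]$ numerically amounts to finding a positive semidefinite Gram matrix $Q$ with $p(x) = z(x)^T Q z(x)$ for a vector of monomials $z(x)$; this reduction to semidefinite programming is what tools such as SOSTOOLS exploit. The short sketch below illustrates the idea on a polynomial chosen only for illustration (it is not taken from the paper).

```python
# Sketch: verifying an SOS certificate p(x) = z(x)^T Q z(x) with Q >= 0 for the
# illustrative polynomial p(x) = x1^2 + 2*x1*x2 + 2*x2^2 = (x1 + x2)^2 + x2^2.
import numpy as np

# Gram matrix in the monomial basis z(x) = [x1, x2]^T.
Q = np.array([[1.0, 1.0],
              [1.0, 2.0]])

# Q is positive semidefinite, hence p is SOS and p(x) >= 0 for all x.
print("eigenvalues of Q:", np.linalg.eigvalsh(Q))  # all >= 0

# Spot check that z^T Q z reproduces p at random points.
rng = np.random.default_rng(0)
for _ in range(3):
    x1, x2 = rng.standard_normal(2)
    z = np.array([x1, x2])
    assert np.isclose(z @ Q @ z, x1**2 + 2*x1*x2 + 2*x2**2)
print("Gram form matches p at sampled points.")
```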

∥α∥1≤k are partial derivatives as denoted in (3). Assume that solu- where Lp(Ω) stands for the space of Lebesgue-measurable tions to (4) exist, are unique and depend continuously on ini- functions g :Ω → R with norm, for p ∈ N tial conditions. This implies that the function is continuously ∫ 2,2 ( )1/p differentiable in t and for each t ∈ [0, ∞), u(t, ·) ∈ W (Ω). ∥ ∥ | |p g Lp := g(s) ds Theorem 1: Let there exist continuous V : L2(Ω) → R, Ω l, m ∈ N and b, a > 0 such that ∥ ∥ | | and g L∞ := sups∈Ω g(s) . It is known that for continuous functions u : [0, ∞) → a∥v∥l ≤ V (v) ≤ b∥v∥m , L2 L2 (5) 2,2 2,2 → R ◦ W (Ω) and V : W (Ω) their composition (V u): ∈ ∞ → R for all v L2(Ω). Furthermore, suppose that there exists [0, ) is also continuous and the upper right-hand ≥ ≥ + c 0 such that for all t 0 the upper right-hand derivative derivative Dt V (u(t)) is defined by D+ [V (u(t, ·))] ≤ −c∥u(t, ·)∥m , (6) V (u(t + h)) − V (u(t)) L2 D+[V (u(t))] := lim sup . h→0+ h where u satisfies (4). Then √ { } b c Note that if v : [0, ∞) → R is differentiable at t ∈ (0, ∞) ∥ · ∥ ≤ l ∥ · ∥m/l − ≥ u(t, ) L2 u(0, ) L exp t for all t 0. then D+[v(t)] = d [v(t)]. a 2 lb dt Proof: Let conditions of Theorem 1 be satisfied. Since 2,2 ⊂ ≥ B. Polynomials W (Ω) L2(Ω), from (5) it follows that for each t 0 For a multi-index α ∈ Nn and x ∈ Rn, let xα := a∥u(t, ·)∥l ≤ V (u(t, ·)) ≤ b∥u(t, ·)∥m . ∏ 0 L2 L2 (7) n αi α1 α2 αn α i=1 xi = x1 x2 ... xn . Then x is a monomial of de- ∥ ∥ ∈ N Dividing both sides of the second inequality in (7) by b gree α 1 0. A polynomial∑ is a finite linear combination α results in 1 of monomials p(x) := α pαx , where the summation is V (u(t, ·)) ≤ ∥u(t, ·)∥m . L2 (8) applied over a given finite set of multi-indexes α and pα ∈ R b denotes the corresponding coefficient. Let R[x] stand for After multiplying both sides of (8) by −c, we have c polynomial ring in n variables x1, ..., xn with coefficients in −c∥u(t, ·)∥m ≤ − V (u(t, ·)). (9) R. The degree of a polynomial p ∈ R[x] is the largest degree L2 b among all monomials, and is denoted by deg(p) ∈ N0. From (6) and (9) it follows that ∈ R c A polynomial p [x] is called Sum of Squares (SOS), D+ [V (u(t, ·))] ≤ − V (u(t, ·)). (10) ∈ R b if there is a finite number∑ of polynomials zi [x] such that ∈ Rn 2 To use the comparison principle, consider the ODE for all x , p(x) = i zi(x) . We denote∑ the set of∑ sum of square polynomials in variables x by [x]. If p ∈ [x], d c ≥ ∈ Rn [ϕ(t)] = − ϕ(t), ϕ(0) = V (u(0, ·)), (11) then p(x) 0 for all x . dt b where t ∈ (0, ∞) and function ϕ : [0, ∞) → R is continuous. where ∫ The well-known solution for (11) is I (t) := 2s(x)u(t, x)a(x)u (t, x) dx, { } 1 x1x1 c ∫Ω ϕ(t) = V (u(0, ·)) exp − t b I2(t) := s(x)u(t, x)b(x)ux2x1 (t, x) dx, ∫Ω for all t ≥ 0. Applying Lemma 1 for (10) and (11) results in { } I3(t) := s(x)u(t, x)b(x)ux1x2 (t, x) dx, c V (u(t, ·)) ≤ V (u(0, ·)) exp − t (12) ∫Ω b I4(t) := 2s(x)u(t, x)c(x)ux2x2 (t, x) dx, for all t ≥ 0. Substituting t = 0 in the second inequality of ∫Ω ( (7) implies I (t) := 2s(x)u(t, x) d(x)u (t, x) m 5 x1 V (u(0, ·)) ≤ b∥u(0, ·)∥ . Ω L2 (13) ) + e(x)u (t, x) + f(x)u(t, x) dx. Combining the first inequality of (7) with (13) and (12) gives x2 { } c Note that, based on section 5.2.3 of [29], we have a∥u(t, ·)∥l ≤ V (u(t, ·)) ≤ V (u(0, ·)) exp − t L2 b { } ux1x2 (t, x) = ux2x1 (t, x) (20) ≤ ∥ · ∥m −c b u(0, ) L exp t . (14) x ∈ Ω I I 2 b for all . We used property (20) to define 2 and 3. 
IV. STABILITY TEST EXPRESSED AS OPTIMIZATION OVER POLYNOMIALS

In this paper, we focus on PDEs of the following form. For all $t > 0$ and $x \in \Omega := (0,1)^2$,
$$ u_t(t,x) = a(x)u_{x_1x_1}(t,x) + b(x)u_{x_1x_2}(t,x) + c(x)u_{x_2x_2}(t,x) + d(x)u_{x_1}(t,x) + e(x)u_{x_2}(t,x) + f(x)u(t,x), \quad (15) $$
where $a, b, c, d, e, f \in \mathbb{R}[x]$. Assume that the solution to (15) exists, is unique and depends continuously on the initial conditions. For each $t \ge 0$ suppose $u(t,\cdot) \in W^{2,2}(\Omega)$. Furthermore, let $u$ satisfy zero Dirichlet boundary conditions, i.e.
$$ u(t,1,x_2) = u(t,0,x_2) = u(t,x_1,1) = u(t,x_1,0) = 0 \quad (16) $$
for all $x_1, x_2 \in [0,1]$. Define $V : L_2(\Omega) \to \mathbb{R}$ as
$$ V(v) := \int_\Omega s(x)v(x)^2\,dx, \quad (17) $$
where $s \in \mathbb{R}[x]$. Using $u(t,\cdot)$ for $v$ in (17) and differentiating the result with respect to $t$ gives
$$ \frac{d}{dt}[V(u(t,\cdot))] = \frac{d}{dt}\Big[\int_\Omega s(x)u(t,x)^2\,dx\Big] = \int_\Omega 2s(x)u(t,x)u_t(t,x)\,dx. \quad (18) $$
Substituting for $u_t(t,x)$ from (15) into (18) implies
$$ \frac{d}{dt}[V(u(t,\cdot))] = \int_\Omega 2s(x)u(t,x)\big(a(x)u_{x_1x_1}(t,x) + b(x)u_{x_1x_2}(t,x) + c(x)u_{x_2x_2}(t,x) + d(x)u_{x_1}(t,x) + e(x)u_{x_2}(t,x) + f(x)u(t,x)\big)dx = I_1(t) + I_2(t) + I_3(t) + I_4(t) + I_5(t), \quad (19) $$
where
$$ I_1(t) := \int_\Omega 2s(x)u(t,x)a(x)u_{x_1x_1}(t,x)\,dx, \qquad I_2(t) := \int_\Omega s(x)u(t,x)b(x)u_{x_2x_1}(t,x)\,dx, $$
$$ I_3(t) := \int_\Omega s(x)u(t,x)b(x)u_{x_1x_2}(t,x)\,dx, \qquad I_4(t) := \int_\Omega 2s(x)u(t,x)c(x)u_{x_2x_2}(t,x)\,dx, $$
$$ I_5(t) := \int_\Omega 2s(x)u(t,x)\big(d(x)u_{x_1}(t,x) + e(x)u_{x_2}(t,x) + f(x)u(t,x)\big)dx. $$
Note that, based on Section 5.2.3 of [29], we have
$$ u_{x_1x_2}(t,x) = u_{x_2x_1}(t,x) \quad (20) $$
for all $x \in \Omega$. We used property (20) to define $I_2$ and $I_3$.

Alternatively, $I_5$ can be formulated as
$$ I_5(t) = \int_\Omega U^T(t,x)Z_5(x)U(t,x)\,dx, \quad (21) $$
where for all $x \in \Omega$
$$ U^T(t,x) := \begin{bmatrix} u(t,x) & u_{x_1}(t,x) & u_{x_2}(t,x) \end{bmatrix} \quad (22) $$
and
$$ Z_5(x) := \begin{bmatrix} 2s(x)f(x) & s(x)d(x) & s(x)e(x) \\ * & 0 & 0 \\ * & * & 0 \end{bmatrix}. $$
Using integration by parts and the boundary conditions (16), $I_1$ can be rewritten as follows:
$$ I_1(t) = \int_\Omega 2s(x)u(t,x)a(x)\frac{d}{dx_1}[u_{x_1}(t,x)]\,dx = 2\int_0^1\Big( s(x)u(t,x)a(x)u_{x_1}(t,x)\Big|_{x_1=0}^{x_1=1} - \int_0^1 u_{x_1}(t,x)\frac{d}{dx_1}[s(x)u(t,x)a(x)]\,dx_1\Big)dx_2 $$
$$ = \int_\Omega -2u_{x_1}(t,x)\Big(u(t,x)\frac{d}{dx_1}[s(x)a(x)] + s(x)a(x)u_{x_1}(t,x)\Big)dx = -\int_\Omega U^T(t,x)Z_1(x)U(t,x)\,dx, \quad (23) $$
where for all $x \in \Omega$
$$ Z_1(x) := \begin{bmatrix} 0 & \frac{d}{dx_1}[s(x)a(x)] & 0 \\ * & 2s(x)a(x) & 0 \\ * & * & 0 \end{bmatrix}. $$
Following the steps of (23) for $I_2$, $I_3$ and $I_4$, we get
$$ I_2(t) = \int_\Omega s(x)u(t,x)b(x)\frac{d}{dx_1}[u_{x_2}(t,x)]\,dx = \int_0^1\Big( s(x)u(t,x)b(x)u_{x_2}(t,x)\Big|_{x_1=0}^{x_1=1} - \int_0^1 u_{x_2}(t,x)\frac{d}{dx_1}[s(x)u(t,x)b(x)]\,dx_1\Big)dx_2 = -\int_\Omega U^T(t,x)Z_2(x)U(t,x)\,dx, $$
$$ I_3(t) = \int_\Omega s(x)u(t,x)b(x)\frac{d}{dx_2}[u_{x_1}(t,x)]\,dx = \int_0^1\Big( s(x)u(t,x)b(x)u_{x_1}(t,x)\Big|_{x_2=0}^{x_2=1} - \int_0^1 u_{x_1}(t,x)\frac{d}{dx_2}[s(x)u(t,x)b(x)]\,dx_2\Big)dx_1 = -\int_\Omega U^T(t,x)Z_3(x)U(t,x)\,dx, $$
$$ I_4(t) = \int_\Omega 2s(x)u(t,x)c(x)\frac{d}{dx_2}[u_{x_2}(t,x)]\,dx = -\int_\Omega U^T(t,x)Z_4(x)U(t,x)\,dx, \quad (24) $$
where $U$ is defined as in (22) and for all $x \in \Omega$
$$ Z_2(x) := \begin{bmatrix} 0 & 0 & \frac{1}{2}\frac{d}{dx_1}[s(x)b(x)] \\ * & 0 & \frac{1}{2}s(x)b(x) \\ * & * & 0 \end{bmatrix}, \qquad Z_3(x) := \begin{bmatrix} 0 & \frac{1}{2}\frac{d}{dx_2}[s(x)b(x)] & 0 \\ * & 0 & \frac{1}{2}s(x)b(x) \\ * & * & 0 \end{bmatrix}, \qquad Z_4(x) := \begin{bmatrix} 0 & 0 & \frac{d}{dx_2}[s(x)c(x)] \\ * & 0 & 0 \\ * & * & 2s(x)c(x) \end{bmatrix}. $$
By combining (19), (21), (23) and (24) it follows that
$$ \frac{d}{dt}[V(u(t,\cdot))] = \int_\Omega U^T(t,x)Q(x)U(t,x)\,dx, \quad (25) $$
where for all $x \in \Omega$
$$ Q(x) := \begin{bmatrix} 2s(x)f(x) & Q_1(x) & Q_2(x) \\ * & -2s(x)a(x) & -s(x)b(x) \\ * & * & -2s(x)c(x) \end{bmatrix} \quad (26) $$
with
$$ Q_1(x) := s(x)d(x) - \frac{d}{dx_1}[s(x)a(x)] - \frac{1}{2}\frac{d}{dx_2}[s(x)b(x)], \qquad Q_2(x) := s(x)e(x) - \frac{1}{2}\frac{d}{dx_1}[s(x)b(x)] - \frac{d}{dx_2}[s(x)c(x)]. $$
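The matrix $Q(x)$ in (26) is an explicit function of the PDE coefficients and of $s$, so it can be assembled symbolically. A minimal sketch, illustrated on the 2D heat equation ($a = c = 1$, $b = d = e = f = 0$) with the constant candidate $s = 1$; this example is for illustration only and is not taken from the paper:

```python
# Sketch: symbolic assembly of Q(x) from (26), illustrated on the 2D heat
# equation (a = c = 1, b = d = e = f = 0) with the constant candidate s = 1.
import sympy as sp

x1, x2 = sp.symbols("x1 x2")

def build_Q(a, b, c, d, e, f, s):
    # Off-diagonal entries Q1(x), Q2(x) as defined after (26).
    Q1 = s * d - sp.diff(s * a, x1) - sp.Rational(1, 2) * sp.diff(s * b, x2)
    Q2 = s * e - sp.Rational(1, 2) * sp.diff(s * b, x1) - sp.diff(s * c, x2)
    return sp.Matrix([[2 * s * f, Q1,         Q2        ],
                      [Q1,        -2 * s * a, -s * b    ],
                      [Q2,        -s * b,     -2 * s * c]])

Q = build_Q(a=sp.Integer(1), b=0, c=sp.Integer(1), d=0, e=0, f=0, s=sp.Integer(1))
sp.pprint(sp.simplify(Q))   # diag(0, -2, -2), which is negative semidefinite
```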
A. Spacing functions

If $Q(x) \le 0$ for all $x \in \Omega$, then the time derivative in (25) is clearly non-positive for all $t > 0$. However, such a condition on $Q$ is conservative. To decrease that conservatism, we introduce matrix-valued functions $\Upsilon_i$ such that
$$ \int_\Omega U^T(t,x)\Upsilon_i(x)U(t,x)\,dx = 0 $$
and which, therefore, can be added to $Q$ without altering the integral. The $\Upsilon_i$ are called spacing functions. We parameterize the $\Upsilon_i$ by polynomials $p_i$. The idea came from [27].

The following holds for any $p_1 \in \mathbb{R}[x]$, because of the boundary conditions (16):
$$ \int_\Omega \frac{d}{dx_1}[u(t,x)p_1(x)u(t,x)]\,dx = \int_0^1\Big( u(t,x)p_1(x)u(t,x)\Big|_{x_1=0}^{x_1=1}\Big)dx_2 = 0. \quad (27) $$
Using the chain rule, we have
$$ \frac{d}{dx_1}[u(t,x)p_1(x)u(t,x)] = u(t,x)\frac{d}{dx_1}[p_1(x)]u(t,x) + 2p_1(x)u(t,x)u_{x_1}(t,x) = U^T(t,x)\Upsilon_1(x)U(t,x), \quad (28) $$
where for all $x \in \Omega$
$$ \Upsilon_1(x) := \begin{bmatrix} \frac{d}{dx_1}[p_1(x)] & p_1(x) & 0 \\ * & 0 & 0 \\ * & * & 0 \end{bmatrix}. \quad (29) $$
Combining (27) and (28) results in
$$ \int_\Omega U^T(t,x)\Upsilon_1(x)U(t,x)\,dx = 0. \quad (30) $$
Likewise, because of the boundary conditions (16), the following holds for any $p_2 \in \mathbb{R}[x]$:
$$ \int_\Omega \frac{d}{dx_2}[u(t,x)p_2(x)u(t,x)]\,dx = 0. $$
Following the steps of (28) for $\frac{d}{dx_2}[u(t,x)p_2(x)u(t,x)]$ gives
$$ \Upsilon_2(x) := \begin{bmatrix} \frac{d}{dx_2}[p_2(x)] & 0 & p_2(x) \\ * & 0 & 0 \\ * & * & 0 \end{bmatrix} \quad (31) $$
such that
$$ \int_\Omega U^T(t,x)\Upsilon_2(x)U(t,x)\,dx = 0. \quad (32) $$
Similarly to (27), the following is true for any $p_3 \in \mathbb{R}[x]$:
$$ \int_\Omega \frac{d}{dx_2}[u(t,x)p_3(x)u_{x_1}(t,x)]\,dx = 0. \quad (33) $$
Note that the left-hand side of (33) can be written as follows:
$$ \int_\Omega \frac{d}{dx_2}[u(t,x)p_3(x)u_{x_1}(t,x)]\,dx = \int_\Omega\Big( u_{x_2}(t,x)p_3(x)u_{x_1}(t,x) + u(t,x)\frac{d}{dx_2}[p_3(x)]u_{x_1}(t,x)\Big)dx + \int_\Omega u(t,x)p_3(x)u_{x_2x_1}(t,x)\,dx, \quad (34) $$
where we used property (20). Applying integration by parts to the second integral on the right-hand side of the last equation in (34) results in
$$ \int_\Omega u(t,x)p_3(x)\frac{d}{dx_1}[u_{x_2}(t,x)]\,dx = \int_0^1\Big( u(t,x)p_3(x)u_{x_2}(t,x)\Big|_{x_1=0}^{x_1=1} - \int_0^1 u_{x_2}(t,x)\frac{d}{dx_1}[u(t,x)p_3(x)]\,dx_1\Big)dx_2 = -\int_\Omega u_{x_2}(t,x)\Big( u_{x_1}(t,x)p_3(x) + u(t,x)\frac{d}{dx_1}[p_3(x)]\Big)dx. \quad (35) $$
From (34) and (35) the following holds:
$$ \int_\Omega \frac{d}{dx_2}[u(t,x)p_3(x)u_{x_1}(t,x)]\,dx = \int_\Omega\Big( u(t,x)\frac{d}{dx_2}[p_3(x)]u_{x_1}(t,x) - u(t,x)\frac{d}{dx_1}[p_3(x)]u_{x_2}(t,x)\Big)dx = \int_\Omega U^T(t,x)\Upsilon_3(x)U(t,x)\,dx, \quad (36) $$
where for all $x \in \Omega$
$$ \Upsilon_3(x) := \begin{bmatrix} 0 & \frac{1}{2}\frac{d}{dx_2}[p_3(x)] & -\frac{1}{2}\frac{d}{dx_1}[p_3(x)] \\ * & 0 & 0 \\ * & * & 0 \end{bmatrix}. \quad (37) $$
Combining (33) and (36) gives
$$ \int_\Omega U^T(t,x)\Upsilon_3(x)U(t,x)\,dx = 0. \quad (38) $$
Following steps (33)–(36) for $\frac{d}{dx_1}[u(t,x)p_4(x)u_{x_2}(t,x)]$ with any $p_4 \in \mathbb{R}[x]$ leads to
$$ \int_\Omega U^T(t,x)\Upsilon_4(x)U(t,x)\,dx = 0, \quad (39) $$
where for all $x \in \Omega$
$$ \Upsilon_4(x) := \begin{bmatrix} 0 & -\frac{1}{2}\frac{d}{dx_2}[p_4(x)] & \frac{1}{2}\frac{d}{dx_1}[p_4(x)] \\ * & 0 & 0 \\ * & * & 0 \end{bmatrix}. \quad (40) $$
From (25), (30), (32), (38) and (39) the following holds:
$$ \frac{d}{dt}[V(u(t,\cdot))] = \int_\Omega U^T(t,x)\Big( Q(x) + \sum_{i=1}^4 \Upsilon_i(x)\Big)U(t,x)\,dx. \quad (41) $$
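The identities (30), (32), (38) and (39) rely only on the Dirichlet boundary conditions (16), so they can be checked numerically for any test function vanishing on the boundary. A sketch for (30), with the illustrative choices $u = \sin(\pi x_1)\sin(\pi x_2)$ and $p_1 = x_1 x_2$ (neither is from the paper):

```python
# Sketch: numerical check of the spacing identity (30),
#   integral over Omega of U^T Upsilon_1 U dx = 0,
# i.e. of u^2 * dp1/dx1 + 2*p1*u*u_x1, for a function satisfying (16).
import numpy as np

N = 401
x = np.linspace(0.0, 1.0, N)
X1, X2 = np.meshgrid(x, x, indexing="ij")

u = np.sin(np.pi * X1) * np.sin(np.pi * X2)        # vanishes on the boundary
u_x1 = np.pi * np.cos(np.pi * X1) * np.sin(np.pi * X2)

p1 = X1 * X2
dp1_dx1 = X2

integrand = u**2 * dp1_dx1 + 2.0 * p1 * u * u_x1   # = d/dx1 [p1 * u^2]
value = np.trapz(np.trapz(integrand, x, axis=1), x)
print("integral of U^T Upsilon_1 U over Omega:", value)   # approximately 0
```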
By substituting for $Q$ and $\Upsilon_i$ from (26), (29), (31), (37) and (40) into (41), we can define
$$ M = \Phi(a, b, c, d, e, f, s, p_1, p_2, p_3, p_4) \quad (42) $$
if for all $x \in \Omega$
$$ M(x) = \begin{bmatrix} M_1(x) & M_2(x) & M_3(x) \\ * & -2s(x)a(x) & -s(x)b(x) \\ * & * & -2s(x)c(x) \end{bmatrix}, \quad (43) $$
where
$$ M_1(x) := 2s(x)f(x) + \frac{d}{dx_1}[p_1(x)] + \frac{d}{dx_2}[p_2(x)], $$
$$ M_2(x) := s(x)d(x) - \frac{d}{dx_1}[s(x)a(x)] - \frac{1}{2}\frac{d}{dx_2}[s(x)b(x)] + p_1(x) + \frac{1}{2}\frac{d}{dx_2}[p_3(x) - p_4(x)], $$
$$ M_3(x) := s(x)e(x) - \frac{1}{2}\frac{d}{dx_1}[s(x)b(x)] - \frac{d}{dx_2}[s(x)c(x)] + p_2(x) + \frac{1}{2}\frac{d}{dx_1}[p_4(x) - p_3(x)], \quad (44) $$
such that
$$ \frac{d}{dt}[V(u(t,\cdot))] = \int_\Omega U^T(t,x)M(x)U(t,x)\,dx. \quad (45) $$

Theorem 2: Suppose that for (15) there exist $s, p_i \in \mathbb{R}[x]$, for $i = 1, \ldots, 4$, and $\theta > 0$, such that $s(x) \ge \theta$ and $M(x) \le 0$ for all $x \in \Omega$, where $M$ is defined as in (42)–(44). Then (15) is stable.

Proof: If the conditions of Theorem 2 are satisfied, let
$$ a = \inf_{x \in \Omega}\{s(x)\}, \qquad b = \sup_{x \in \Omega}\{s(x)\}. \quad (46) $$
Since $s(x) \ge \theta > 0$ for all $x \in \Omega$, then $a, b > 0$ and the following holds for all $v \in L_2(\Omega)$:
$$ a\|v\|_{L_2}^2 = \inf_{x\in\Omega}\{s(x)\}\int_\Omega v^2(x)\,dx \le \int_\Omega s(x)v^2(x)\,dx \le \sup_{x\in\Omega}\{s(x)\}\int_\Omega v^2(x)\,dx = b\|v\|_{L_2}^2. \quad (47) $$
Using (17) we have $a\|v\|_{L_2}^2 \le V(v) \le b\|v\|_{L_2}^2$. Since $M(x) \le 0$, from (45) it follows that $\frac{d}{dt}[V(u(t,\cdot))] \le 0$ for all $t > 0$. Theorem 1 with $a, b$ defined as in (46), $m = l = 2$ and $c = 0$ ensures stability of (15).

Theorem 3: Suppose that for (15) there exist $\theta, \gamma > 0$ and $s, p_i \in \mathbb{R}[x]$, for $i = 1, \ldots, 4$, such that $s(x) \ge \theta$ and $M(x) + \gamma S(x) \le 0$ for all $x \in \Omega$, where $M$ is defined as in (42)–(44) and
$$ S(x) := \begin{bmatrix} s(x) & 0 & 0 \\ * & 0 & 0 \\ * & * & 0 \end{bmatrix}. \quad (48) $$
Then for all $t > 0$ the solution to (15) satisfies
$$ \|u(t,\cdot)\|_{L_2} \le \sqrt{\frac{b}{a}}\,\|u(0,\cdot)\|_{L_2}\exp\Big\{-\frac{\gamma}{2}t\Big\}, \quad (49) $$
where $a, b$ are defined as in (46).

Proof: Under the assumptions of Theorem 3, (47) holds. With (48) and (22) we can write
$$ V(u(t,\cdot)) = \int_\Omega U^T(t,x)S(x)U(t,x)\,dx. \quad (50) $$
Since $M(x) + \gamma S(x) \le 0$ for all $x \in \Omega$, it holds that
$$ \int_\Omega U^T(t,x)\big(M(x) + \gamma S(x)\big)U(t,x)\,dx \le 0. \quad (51) $$
Since $\gamma$ is a scalar, (51) can be rearranged as
$$ \int_\Omega U^T(t,x)M(x)U(t,x)\,dx \le -\gamma\int_\Omega U^T(t,x)S(x)U(t,x)\,dx, $$
which with (45) and (50) provides $\frac{d}{dt}[V(u(t,\cdot))] \le -\gamma V(u(t,\cdot))$. Using the proof of Theorem 1 with $c/b = \gamma$ results in (49).
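Once candidate polynomials $s$ and $p_i$ are available (e.g. returned by the SOS programs of the next section), the pointwise hypotheses of Theorems 2 and 3 can be sanity-checked on a grid of points in $\Omega$. The sketch below uses an illustrative PDE, $u_t = (1+x_1^2)u_{x_1x_1} + (1+x_2^2)u_{x_2x_2} - u$, and the trivial candidates $s \equiv 1$, $p_i \equiv 0$; neither the PDE nor the candidates are taken from the paper, they are chosen only so that the check passes.

```python
# Sketch: grid check of the hypotheses of Theorem 2 (s >= theta, M <= 0 on Omega)
# for an illustrative PDE and trivial candidate polynomials.
import numpy as np

def M_matrix(x1, x2):
    # M(x) from (42)-(44) with a = 1 + x1^2, c = 1 + x2^2, f = -1,
    # b = d = e = 0, s = 1 and p1 = ... = p4 = 0.
    M1 = 2.0 * 1.0 * (-1.0)      # 2 s f + dp1/dx1 + dp2/dx2
    M2 = -2.0 * x1               # -d/dx1[s a]   (other terms vanish)
    M3 = -2.0 * x2               # -d/dx2[s c]
    return np.array([[M1,  M2,                     M3                    ],
                     [M2, -2.0 * (1.0 + x1**2),    0.0                   ],
                     [M3,  0.0,                   -2.0 * (1.0 + x2**2)   ]])

theta = 1.0
grid = np.linspace(0.0, 1.0, 51)
max_eig = max(np.linalg.eigvalsh(M_matrix(x1, x2)).max()
              for x1 in grid for x2 in grid)
min_s = 1.0                      # s is constant here
print("max eigenvalue of M on grid:", max_eig)
print("Theorem 2 hypotheses hold on grid:", max_eig <= 1e-9 and min_s >= theta)
```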

V. LYAPUNOV STABILITY IN TERMS OF SOS OPTIMIZATION

Solving optimization over polynomials is computationally hard. Thus, in this section we present SOS programming alternatives whose solutions yield solutions to the problems in Theorems 2 and 3 and, therefore, provide sufficient conditions for stability of System (15).

Theorem 4: Suppose that for (15) there exist $s, p_i \in \mathbb{R}[x]$ for $i = 1, \ldots, 4$, $\theta > 0$, $n_1, n_2 \in \Sigma[x]$ and $Q_1, Q_2, Q_3 \in \Sigma[S^3(x)]$ such that for all $x_1, x_2 \in (0,1)$
$$ s(x) = \theta + x_1(1-x_1)n_1(x) + x_2(1-x_2)n_2(x), $$
$$ M(x) = -Q_1(x) - x_1(1-x_1)Q_2(x) - x_2(1-x_2)Q_3(x), \quad (52) $$
where $M$ is defined as in (42)–(44). Then (15) is stable.

Proof: If (52) holds, then clearly $s(x) \ge \theta$ and $M(x) \le 0$ for all $x \in \Omega$. Using Theorem 2 provides stability of (15).

Theorem 5: Suppose that for (15) there exist $\theta, \gamma > 0$, $s, p_i \in \mathbb{R}[x]$ for $i = 1, \ldots, 4$, $n_1, n_2 \in \Sigma[x]$ and $Q_4, Q_5, Q_6 \in \Sigma[S^3(x)]$ such that for all $x_1, x_2 \in (0,1)$
$$ s(x) = \theta + x_1(1-x_1)n_1(x) + x_2(1-x_2)n_2(x), $$
$$ M(x) + \gamma S(x) = -Q_4(x) - x_1(1-x_1)Q_5(x) - x_2(1-x_2)Q_6(x), \quad (53) $$
where $M$ is defined as in (42)–(44) and $S$ as in (48). Then for all $t > 0$ the solution to (15) satisfies
$$ \|u(t,\cdot)\|_{L_2} \le \sqrt{\frac{b}{a}}\,\|u(0,\cdot)\|_{L_2}\exp\Big\{-\frac{\gamma}{2}t\Big\}, \quad (54) $$
where $a, b$ are defined as in (46).

Proof: If (53) is true, then for all $x \in \Omega$, $s(x) \ge \theta$ and $M(x) + \gamma S(x) \le 0$, which, combined with Theorem 3, gives (54).

VI. NUMERICAL VERIFICATION OF THE PROPOSED METHOD

A. KISS model

We applied the method described in Sections IV and V to the stability of the biological KISS PDE, named after Kierstead, Slobodkin and Skellam, which describes population growth on a finite area. For more details see [30]. The system is modeled by the following PDE:
$$ u_t(t,x) = h\big(u_{x_1x_1}(t,x) + u_{x_2x_2}(t,x)\big) + r\,u(t,x), \quad (55) $$
where $h, r > 0$, $x \in \Omega \subset \mathbb{R}^2$ and the scalar function $u$ satisfies zero Dirichlet boundary conditions.

It is claimed in [30] that if $\Omega$ is a square with edge of length $l$, then
$$ l_{cr} := \sqrt{2\pi^2\Big(\frac{h}{r}\Big)} \quad (56) $$
defines a critical length. That means that if $l > l_{cr}$, then (55) is unstable. Alternatively, for given $l$ and $r$, (56) defines
$$ h_{cr} := \frac{l^2 r}{2\pi^2}. \quad (57) $$
Therefore, if $h < h_{cr}$, then (55) is unstable.

For our purposes we fix $l = 1$ and arbitrarily choose $r = 4$. Thus, according to (57), $h_{cr} \approx 0.203$.

Using a bisection search over $h$, we determine the minimum $h$ for which the SOS problem in Theorem 4, with
$$ a(x) = h, \quad b(x) = 0, \quad c(x) = h, \quad d(x) = 0, \quad e(x) = 0, \quad f(x) = 4 \quad \text{for all } x \in \Omega, \quad (58) $$
may be shown to be feasible. Results for different degrees of $s$ ($\deg(s)$) are presented in Table I.

TABLE I
MINIMUM h_cr VS deg(s) FOR (55)

  deg(s):  4      6      8      10     12     analytic
  h_cr:    0.332  0.259  0.238  0.229  0.227  0.203

Now we choose $l = 1$, $h = 2$ and $r = 4$. Using a bisection search over $\gamma$, we determine the maximum $\gamma$ for which the SOS problem in Theorem 5, with (58), may be shown to be feasible. Results for different $\deg(s)$ are presented in Table II.

TABLE II
MAXIMUM gamma VS deg(s) FOR (55) WITH h = 2

  deg(s):  4      6    8    10   12
  gamma:   40.25  53   59   61   62

Using a finite difference scheme, we numerically solve (55) with $u(0,x) = 10^3 x_1 x_2 (1-x_1)(1-x_2)$. Plots of $\log_{10}(\|u(t,\cdot)\|_{L_2})$ versus $t$, obtained from the numerical solution, and bounds on $\log_{10}(\|u(t,\cdot)\|_{L_2})$, given by the proposed method for different $\deg(s)$, are presented in Fig. 1. These plots allow us to determine $\gamma$ by examining the rate of decrease of the $L_2$ norm. The plots are aligned at $t = 0$ in order to better compare our SOS estimates of $\gamma$ to the estimate of $\gamma$ derived from numerical simulation, as a function of increasing $\deg(s)$.

[Fig. 1. Semi-log plots of the $L_2$ norm of the numerical solution to (55) with $u(0,x) = 10^3 x_1 x_2(1-x_1)(1-x_2)$ and bounds, given by the proposed method with different $\deg(s)$.]
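The bisection searches over $h$ and $\gamma$ only require a feasibility oracle for the SOS programs of Theorems 4 and 5. A minimal sketch of the driver is given below; `theorem4_is_feasible` is a hypothetical stand-in for the actual SOSTOOLS/SeDuMi feasibility test, replaced here by the analytic threshold (57) so that the script is self-contained and runnable.

```python
# Sketch of the bisection search over h used in Section VI-A.
# `theorem4_is_feasible` is a hypothetical placeholder for the SOS feasibility
# test of Theorem 4 with coefficients (58); here it uses the analytic
# threshold (57), h_cr = l^2 r / (2 pi^2), purely for illustration.
import numpy as np

l, r = 1.0, 4.0
h_analytic = l**2 * r / (2.0 * np.pi**2)       # about 0.203

def theorem4_is_feasible(h):
    # Stand-in oracle: in practice, assemble (52) for (58) with this h and ask
    # the SDP solver whether the SOS program is feasible.
    return h >= h_analytic

def min_feasible_h(lo=0.0, hi=1.0, tol=1e-4):
    # Assumes feasibility is monotone in h (more diffusion aids stability).
    assert theorem4_is_feasible(hi) and not theorem4_is_feasible(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if theorem4_is_feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi

print("estimated minimum feasible h:", min_feasible_h())   # -> about 0.203
```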
B. Randomly generated system

Consider
$$ u_t(t,x) = (5x_1^2 - 15x_1x_2 + 13x_2^2)\big(u_{x_1x_1}(t,x) + u_{x_2x_2}(t,x)\big) + (10x_1 - 15x_2)u_{x_1}(t,x) + (-15x_1 + 26x_2)u_{x_2}(t,x) - (17x_1^4 - 30x_2 - 25x_1^2 - 8x_2^3 - 50x_2^4)u(t,x), $$
$$ u(0,x) = 10^3 x_1 x_2 (1-x_1)(1-x_2), \quad (59) $$
where $x \in \Omega := (0,1)^2$ and the scalar function $u$ satisfies zero Dirichlet boundary conditions.

Using a bisection search over $\gamma$, we determine the maximum $\gamma$ for which the SOS problem in Theorem 5, with
$$ a(x) = c(x) = 5x_1^2 - 15x_1x_2 + 13x_2^2, \quad b(x) = 0, \quad d(x) = 10x_1 - 15x_2, \quad e(x) = -15x_1 + 26x_2, \quad f(x) = -(17x_1^4 - 30x_2 - 25x_1^2 - 8x_2^3 - 50x_2^4), $$
may be shown to be feasible.

Using a finite difference scheme, we numerically solve (59). The estimated rate of decay, based on the numerical solution, is 13.07. The computed rate of decay, based on our SOS method, is 12.5 for $\deg(s) = 8$. Plots are given in Fig. 2 and, as for Fig. 1, are aligned at $t = 0$.

[Fig. 2. Semi-log plots of the $L_2$ norm of the numerical solution to (59) and the bound, given by the proposed method with $\deg(s) = 8$.]
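In both examples, the decay rate of the numerical solution is read off from the slope of $\log(\|u(t,\cdot)\|_{L_2})$ over time. A minimal finite-difference sketch for the constant-coefficient case (55) is given below; the grid size, time step and final time are illustrative choices, not the ones used in the paper.

```python
# Sketch: explicit finite-difference solution of (55) on the unit square with
# zero Dirichlet boundary conditions, and an estimate of the L2 decay rate.
# Grid, time step and final time are illustrative assumptions.
import numpy as np

h, r = 2.0, 4.0                        # diffusion and growth coefficients in (55)
N = 41                                 # grid points per side
dx = 1.0 / (N - 1)
dt = 0.2 * dx**2 / (4.0 * h)           # well inside the explicit stability limit
T = 0.2

x = np.linspace(0.0, 1.0, N)
X1, X2 = np.meshgrid(x, x, indexing="ij")
u = 1e3 * X1 * X2 * (1.0 - X1) * (1.0 - X2)     # u(0,x) from Section VI-A

def l2_norm(v):
    return np.sqrt(np.sum(v**2) * dx * dx)      # grid approximation of ||.||_L2

norms, times = [l2_norm(u)], [0.0]
t = 0.0
while t < T:
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
                       - 4.0 * u[1:-1, 1:-1]) / dx**2
    u = u + dt * (h * lap + r * u)
    u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0.0   # Dirichlet BC (16)
    t += dt
    norms.append(l2_norm(u))
    times.append(t)

# Decay rate of the L2 norm, estimated by a least-squares fit of log ||u||;
# it can be compared with gamma/2 from the bound (54).
rate = -np.polyfit(times, np.log(norms), 1)[0]
print("estimated decay rate of ||u(t,.)||_L2:", rate)
```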

VII. CONCLUSION AND FUTURE WORK

In this paper we have presented a method which allows us to search for Lyapunov functionals for parabolic linear PDEs with two spatial variables and spatially dependent coefficients. We demonstrated the accuracy of the method in estimating the critical diffusion for the KISS PDE and the rate of decay for the KISS PDE and for a randomly chosen PDE. In future work, we will extend the method by considering Lyapunov functionals of more complicated types, e.g. functionals with semi-separable kernels, as in [31]. In addition, we will consider alternative boundary conditions, as well as semi-linear and nonlinear 2D and 3D parabolic PDEs.

REFERENCES

[1] P. D. Christofides, "Nonlinear and robust control of PDE systems: Methods and applications to transport-reaction processes," Springer Science & Business Media, New York, 2001.
[2] R. F. Curtain and H. Zwart, "An introduction to infinite-dimensional linear systems theory," Springer Science & Business Media, New York, 1995.
[3] N. H. El-Farra, A. Armaou and P. D. Christofides, "Analysis and control of parabolic PDE systems with input constraints," Automatica, volume 39, no. 4, pages 715–725, 2003.
[4] J. Baker and P. D. Christofides, "Finite-dimensional approximation and control of non-linear parabolic PDE systems," International Journal of Control, volume 73, no. 5, pages 439–456, 2000.
[5] P. D. Christofides and P. Daoutidis, "Finite-dimensional control of parabolic PDE systems using approximate inertial manifolds," Proceedings of the 36th IEEE Conference on Decision and Control, pages 1068–1073, 1997.
[6] R. Kamyar, M. M. Peet and Y. Peet, "Solving Large-Scale Robust Stability Problems by Exploiting the Parallel Structure of Polya's Theorem," IEEE Transactions on Automatic Control, volume 58, no. 8, pages 1931–1947, 2013.
[7] E. Fridman and Y. Orlov, "Exponential stability of linear distributed parameter systems with time-varying delays," Automatica, volume 45, no. 1, pages 194–201, 2009.
[8] E. Fridman, S. Nicaise and J. Valein, "Stabilization of second order evolution equations with unbounded feedback with time-dependent delay," SIAM Journal on Control and Optimization, volume 48, no. 8, pages 5028–5052, 2010.
[9] O. Solomon and E. Fridman, "Stability and passivity analysis of semilinear diffusion PDEs with time-delays," International Journal of Control, volume 88, no. 1, pages 180–192, 2015.
[10] M. Krstic and A. Smyshlyaev, "Boundary control of PDEs: A course on backstepping designs," Society for Industrial and Applied Mathematics, Philadelphia, 2008.
[11] M. Krstic and A. Smyshlyaev, "Backstepping boundary control for first order hyperbolic PDEs and application to systems with actuator and sensor delays," Systems & Control Letters, volume 57, no. 9, pages 750–758, 2008.
[12] M. Krstic and A. Smyshlyaev, "Adaptive boundary control for unstable parabolic PDEs, Part I: Lyapunov design," IEEE Transactions on Automatic Control, volume 53, no. 7, pages 1575–1591, 2008.
[13] A. Smyshlyaev and M. Krstic, "On control design for PDEs with space-dependent diffusivity or time-dependent reactivity," Automatica, volume 41, no. 9, pages 1601–1608, 2005.
[14] A. Smyshlyaev and M. Krstic, "Adaptive control of parabolic PDEs," Princeton University Press, Princeton, 2005.
[15] R. Vazquez and M. Krstic, "Explicit integral operator feedback for local stabilization of nonlinear thermal convection loop PDEs," Systems & Control Letters, volume 55, no. 8, pages 624–632, 2006.
[16] C. Xu, E. Schuster, R. Vazquez and M. Krstic, "Stabilization of linearized 2D magnetohydrodynamic channel flow by backstepping boundary control," Systems & Control Letters, volume 57, no. 10, pages 805–812, 2008.
[17] G. Valmorbida, M. Ahmadi and A. Papachristodoulou, "Semi-definite programming and functional inequalities for Distributed Parameter Systems," Proceedings of the 53rd IEEE Conference on Decision and Control, pages 4304–4309, 2014.
[18] A. Papachristodoulou and M. M. Peet, "On the analysis of systems described by classes of partial differential equations," Proceedings of the 45th IEEE Conference on Decision and Control, pages 747–752, 2006.
[19] M. Ahmadi, G. Valmorbida and A. Papachristodoulou, "Input-Output Analysis of Distributed Parameter Systems Using Convex Optimization," Proceedings of the 53rd IEEE Conference on Decision and Control, pages 4310–4315, 2014.
[20] A. Gahlawat and M. M. Peet, "Designing observer-based controllers for PDE systems: A heat-conducting rod with point observation and boundary control," Proceedings of the 50th IEEE Conference on Decision and Control and European Control Conference, pages 6985–6990, 2011.
[21] A. Gahlawat, M. M. Peet and E. Witrant, "Control and verification of the safety-factor profile in tokamaks using sum-of-squares polynomials," Proceedings of the 18th IFAC World Congress, 2011.
[22] A. Gahlawat and M. M. Peet, "A Convex Approach to Output Feedback Control of Parabolic PDEs Using Sum-of-Squares," arXiv preprint arXiv:1408.5206, 2014.
[23] M. Laurent, "Sums of squares, moment matrices and optimization over polynomials," Emerging Applications of Algebraic Geometry, pages 157–270, Springer, New York, 2009.
[24] R. Kamyar and M. M. Peet, "Polynomial Optimization with Applications to Stability Analysis and Control - Alternatives to Sum of Squares," Discrete and Continuous Dynamical Systems Series B, volume 20, no. 8, pages 2383–2417, 2015.
[25] P. J. C. Dickinson and J. Povh, "On an extension of Polya's Positivstellensatz," Journal of Global Optimization, volume 61, pages 615–625, 2014.
[26] S. Prajna, A. Papachristodoulou, P. Seiler and P. A. Parrilo, "New developments in sum of squares optimization and SOSTOOLS," Proceedings of the American Control Conference, pages 5606–5611, 2004.
[27] M. Peet, A. Papachristodoulou and S. Lall, "Positive forms and stability of linear time-delay systems," SIAM Journal on Control and Optimization, volume 47, no. 6, pages 3237–3258, 2009.
[28] H. K. Khalil and J. W. Grizzle, "Nonlinear systems," Prentice Hall, New Jersey, 1996.
[29] L. Evans, "Partial differential equations," American Mathematical Society, 1998.
[30] E. E. Holmes, M. A. Lewis, J. E. Banks and R. R. Veit, "Partial differential equations in ecology: spatial interactions and population dynamics," Ecology, pages 17–29, 1994.
[31] M. Peet and A. Papachristodoulou, "Using polynomial semi-separable kernels to construct infinite-dimensional Lyapunov functions," Proceedings of the 47th IEEE Conference on Decision and Control, pages 847–852, 2008.