Robust Optimization — Methodology and Applications^1

Aharon Ben-Tal and Arkadi Nemirovski
[email protected], [email protected]
Faculty of Industrial Engineering and Management
Technion – Israel Institute of Technology, Technion City, Haifa 32000, Israel

Abstract. Robust Optimization (RO) is a modeling methodology, combined with computational tools, for processing optimization problems in which the data are uncertain and are only known to belong to some uncertainty set. The paper surveys the main results of RO as applied to uncertain linear, conic quadratic and semidefinite programming. For these cases, computationally tractable robust counterparts of uncertain problems are explicitly obtained, or good approximations of these counterparts are proposed, making RO a useful tool for real-world applications. We discuss some of these applications, specifically: antenna design, truss topology design and stability analysis/synthesis in uncertain dynamic systems. We also describe a case study of 90 LPs from the NETLIB collection. The study reveals that the feasibility properties of the usual solutions of real-world LPs can be severely affected by small perturbations of the data, and that the RO methodology can be successfully used to overcome this phenomenon.

AMS 1991 subject classification. Primary: 90C05, 90C25, 90C30.
OR/MS subject classification. Primary: Programming/Convex.
Key words. Data uncertainty, robustness, linear programming, quadratic programming, semidefinite programming, engineering design, Lyapunov stability synthesis.

1 Introduction

A generic mathematical programming problem is of the form

\[
\min_{x_0\in\mathbf{R},\,x\in\mathbf{R}^n}\ \left\{ x_0:\ f_0(x,\zeta)-x_0\le 0,\ f_i(x,\zeta)\le 0,\ i=1,\dots,m \right\} \tag{$P[\zeta]$}
\]

where x is the design vector, the functions f_0 (the objective function) and f_1,\dots,f_m are structural elements of the problem, and \zeta stands for the data specifying a particular problem instance. For real-world optimization problems, the "decision environment" is often characterized by the following facts:

F.1. The data are uncertain/inexact;
F.2. The optimal solution, even if computed very accurately, may be difficult to implement accurately;
F.3. The constraints must remain feasible for all meaningful realizations of the data;
F.4. Problems are large-scale (n and/or m are large);
F.5. "Bad" optimal solutions (those which become severely infeasible in the face of even relatively small changes in the nominal data) are not uncommon.

1This research was partially supported by the Israeli Ministry of Science grant # 0200-1-98 and the Israel Science Foundation grant # 683/99-10.0

F.1 (and in fact F.2 as well) imply that in many cases we deal with uncertain optimization problems – families of the usual ("certain") optimization problems

\[
\left\{ (P[\zeta]) \;\middle|\; \zeta\in\mathcal{U} \right\}, \tag{1}
\]
where \mathcal{U} is some "uncertainty set" in the space of the data.
Fact F.3 implies that a meaningful candidate solution (x_0,x) of an uncertain problem (1) is required to satisfy the semi-infinite system of constraints

\[
f_0(x,\zeta)\le x_0,\quad f_i(x,\zeta)\le 0,\ i=1,\dots,m \qquad (\forall\,\zeta\in\mathcal{U}). \tag{2}
\]
Fact F.4, on the other hand, imposes a severe requirement of being able to process efficiently the semi-infinite system of constraints (2) for large-scale problems.
Robust Optimization, originating from [3, 4, 5, 11, 12, 13], is a modeling methodology, combined with a suite of computational tools, which is aimed at accomplishing the above requirements. The urgency of having such a methodology stems from fact F.5, whose validity is well illustrated in the cited papers (and in the examples to follow).
In the Robust Optimization methodology, one associates with an uncertain problem (1) its robust counterpart, which is a usual (semi-infinite) optimization program

\[
\min_{x_0,x}\ \left\{ x_0:\ f_0(x,\zeta)\le x_0,\ f_i(x,\zeta)\le 0,\ i=1,\dots,m\ \ \forall\,\zeta\in\mathcal{U} \right\}; \tag{3}
\]
feasible/optimal solutions of the robust counterpart are called robust feasible/robust optimal solutions of the uncertain problem (1).
The major challenges associated with the Robust Optimization methodology are:
C.1. When and how can we reformulate (3) as a "computationally tractable" optimization problem, or at least approximate (3) by a tractable problem?
C.2. How to specify reasonable uncertainty sets \mathcal{U} in specific applications?
In what follows, we overview the results and the potential of the Robust Optimization methodology in the most interesting cases of uncertain Linear, Conic Quadratic and Semidefinite Programming. For these cases, the instances of our uncertain optimization problems will be in the conic form
\[
\min_{x_0\in\mathbf{R},\,x\in\mathbf{R}^n}\ \left\{ x_0:\ c^Tx\le x_0,\ A_ix+b_i\in\mathbf{K}_i,\ i=1,\dots,m \right\}, \tag{C}
\]
where for every i the cone \mathbf{K}_i is
– either a non-negative orthant \mathbf{R}^{m_i}_+ (linear constraints),
– or the Lorentz cone \mathbf{L}^{m_i}=\Bigl\{ y\in\mathbf{R}^{m_i}\ \Bigm|\ y_{m_i}\ge\sqrt{\sum_{j=1}^{m_i-1}y_j^2} \Bigr\} (conic quadratic constraints),
– or a semidefinite cone \mathbf{S}^{m_i}_+ – the cone of positive semidefinite matrices in the space \mathbf{S}^{m_i} of m_i\times m_i symmetric matrices (Linear Matrix Inequality constraints).
The class of problems which can be modeled in the form (C) is extremely wide (see, e.g., [10, 9]). It is also clear what is the structure and what are the data in (C): the former is the design dimension n, the number of "conic constraints" m and the list of the cones \mathbf{K}_1,\dots,\mathbf{K}_m, while the latter is the collection of matrices and vectors \{c, A_i, b_i\}_{i=1}^m of appropriate sizes. Thus, an uncertain problem (C) is a collection

\[
\left\{ \min_{x_0\in\mathbf{R},\,x\in\mathbf{R}^n}\Bigl\{ x_0:\ \underbrace{x_0-c^Tx}_{A_0x+b_0}\in\mathbf{K}_0\equiv\mathbf{R}_+,\ A_ix+b_i\in\mathbf{K}_i,\ i=1,\dots,m \Bigr\}\ \Bigm|\ (c,\{A_i,b_i\}_{i=1}^m)\in\mathcal{U} \right\} \tag{4}
\]
of instances (C) of a common structure (n,m,\mathbf{K}_1,\dots,\mathbf{K}_m) and data (c,\{A_i,b_i\}_{i=1}^m) varying in a given set \mathcal{U}. Note that the robust counterpart is a "constraint-wise" notion: in the robust counterpart of (4), every one of the uncertain constraints A_ix+b_i\in\mathbf{K}_i, i=0,\dots,m, of (4) is replaced with its robust counterpart

\[
A_ix+b_i\in\mathbf{K}_i \qquad \forall\,(A_i,b_i)\in\mathcal{U}_i,
\]
where \mathcal{U}_i is the projection of the uncertainty set \mathcal{U} on the space of the data of the i-th constraint. This observation allows us in the rest of the paper to focus on the "pure cases" only (all \mathbf{K}_i's are orthants, or all of them are Lorentz cones, or all are semidefinite cones), although the results can be readily applied to the "mixed cases" as well.
In the rest of this paper, we consider in turn uncertain Linear, Conic Quadratic and Semidefinite programming problems; our major interest is in the relevant version of C.1 in the case when the uncertainty set is an intersection of ellipsoids (which seems to be a general enough model from the viewpoint of C.2). We illustrate our general considerations by several examples, mainly coming from engineering (additional examples can be found, e.g., in [2, 3, 7, 11, 12, 13, 14]). We believe that these examples will convince the reader that in many applications taking data uncertainty into account from the very beginning is a necessity which cannot be ignored, and that the Robust Optimization methodology does allow one to deal with this necessity successfully.
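The three cone memberships used in (C) are cheap to test numerically. A minimal sketch (function names are ours, NumPy assumed):

```python
import numpy as np

def in_nonneg_orthant(y):
    # y belongs to R^m_+ iff all coordinates are nonnegative
    return bool(np.all(y >= 0))

def in_lorentz_cone(y):
    # y_m >= sqrt(y_1^2 + ... + y_{m-1}^2)
    return bool(y[-1] >= np.linalg.norm(y[:-1]))

def in_psd_cone(Y):
    # symmetric matrix with nonnegative eigenvalues (small tolerance)
    return bool(np.linalg.eigvalsh((Y + Y.T) / 2).min() >= -1e-12)

print(in_lorentz_cone(np.array([3.0, 4.0, 5.0])))       # True: 5 >= sqrt(9 + 16)
print(in_lorentz_cone(np.array([3.0, 4.0, 4.9])))       # False
print(in_psd_cone(np.array([[2.0, 1.0], [1.0, 2.0]])))  # True: eigenvalues 1 and 3
```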

2 Robust Linear Programming

2.1 Robust counterpart of uncertain LP

In [4] we showed that the robust counterpart

\[
\min_{t,x}\ \left\{ t:\ t\ge c^Tx,\ Ax\ge b\quad \forall\,(c,A,b)\in\mathcal{U} \right\} \tag{5}
\]

\[
\left\{ \min_x\left\{ c^Tx:\ Ax\ge b \right\} \ \Bigm|\ (c,A,b)\in\mathcal{U}\subset\mathbf{R}^n\times\mathbf{R}^{m\times n}\times\mathbf{R}^m \right\} \tag{6}
\]
is equivalent to an explicit computationally tractable problem, provided that the uncertainty set \mathcal{U} itself is "computationally tractable". To simplify the presentation, we restrict ourselves to a particular family of "computationally tractable" sets; here is the corresponding result:

Theorem 2.1 [4] Assume that the uncertainty set \mathcal{U} in (6) is given as the affine image of a bounded set \mathcal{Z}=\{\zeta\}\subset\mathbf{R}^N, and \mathcal{Z} is given

\[
P\zeta\le p,
\]

\[
\|P_i\zeta-p_i\|_2\le q_i^T\zeta-r_i,\qquad i=1,\dots,M,
\]

(iii) by a system of Linear Matrix Inequalities

\[
P_0+\sum_{i=1}^{\dim\zeta}\zeta_iP_i\succeq 0.
\]

In cases (ii), (iii) assume also that the system of constraints defining \mathcal{U} is strictly feasible. Then the robust counterpart (5) of the uncertain LP (6) is equivalent to
– a Linear Programming problem in case (i),
– a Conic Quadratic problem in case (ii),
– a Semidefinite program in case (iii).
In all cases, the data of the resulting robust counterpart problem are readily given by m, n and the data specifying the uncertainty set. Moreover, the sizes of the resulting problem are polynomial in the size of the data specifying the uncertainty set. \square

Proof. A point y=(t,x)\in\mathbf{R}^{n+1} is a feasible solution of (5) if and only if y solves a semi-infinite system of inequalities of the form

\[
(C_i):\qquad [B_i\zeta+\beta_i]^Ty+[c_i^T\zeta+d_i]\ge 0\quad\forall\,\zeta\in\mathcal{Z},\qquad i=0,\dots,m. \tag{7}
\]

The fact that y solves (Ci) means that the optimal value in the problem

\[
\min_\zeta\ \left\{ [B_i\zeta+\beta_i]^Ty+[c_i^T\zeta+d_i]:\ \zeta\in\mathcal{Z} \right\} \tag{$P_i[y]$}
\]
is nonnegative. Now, by assumption \mathcal{Z} is representable as
\[
\mathcal{Z}=\left\{\zeta\ \middle|\ P\zeta-p\in\mathbf{K}\right\},
\]
where \mathbf{K} is either a nonnegative orthant, or a direct product of second-order cones, or a semidefinite cone. Consequently, (P_i[y]) is the conic problem

\[
\min_\zeta\ \left\{ [B_i^Ty+c_i]^T\zeta+[\beta_i^Ty+d_i]:\ P\zeta-p\in\mathbf{K} \right\}.
\]
Note that this conic problem is bounded (since \mathcal{Z} is bounded). Applying either the LP Duality Theorem or the Conic Duality Theorem ([15], Chapter 4), depending on whether \mathbf{K} is or is not polyhedral, we see that the optimal value in (P_i[y]) is equal to the one in the (solvable) dual problem
\[
\max_\xi\ \left\{ p^T\xi+[\beta_i^Ty+d_i]:\ \xi\in\mathbf{K},\ P^T\xi=B_i^Ty+c_i \right\} \tag{$D_i[y]$}
\]
(note that the cone \mathbf{K} we deal with is self-dual, which is why the cone arising in (D_i[y]) is \mathbf{K}). Thus, y solves (C_i) if and only if there exists \xi\in\mathbf{K} such that P^T\xi=B_i^Ty+c_i and p^T\xi+[\beta_i^Ty+d_i]\ge 0. We conclude that (5) is equivalent to the problem

\[
\min_{y=(t,x),\,\{\xi^i\}_{i=0}^m}\ \left\{ t:\ \xi^i\in\mathbf{K},\quad P^T\xi^i=B_i^Ty+c_i,\quad p^T\xi^i+[\beta_i^Ty+d_i]\ge 0,\qquad i=0,\dots,m \right\}. \qquad\square
\]
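As a small numeric illustration of the duality argument above, one can certify robust feasibility of a candidate x for a single uncertain constraint a^Tx\ge b, with a running through the polyhedral set \mathcal{Z}=\{a : Pa\le p\} of case (i), both by solving the inner problem directly and by exhibiting a dual multiplier \xi. All data below are hypothetical, and SciPy's linprog is assumed available:

```python
import numpy as np
from scipy.optimize import linprog

# Uncertain constraint a^T x >= b, with a in Z = {a : P a <= p} (hypothetical box).
P = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
p = np.array([1.2, -0.8, 1.5, -0.5])      # encodes a_1 in [0.8, 1.2], a_2 in [0.5, 1.5]
b = 1.0
x = np.array([1.0, 0.6])                   # candidate solution to certify

# Primal check: x is robust feasible iff  min_{a in Z} a^T x >= b.
inner = linprog(c=x, A_ub=P, b_ub=p, bounds=[(None, None)] * 2)
assert inner.status == 0
print(inner.fun)                           # worst case 0.8*1.0 + 0.5*0.6 = 1.1 >= b

# Dual certificate (the argument of the proof): x is robust feasible iff there is
# xi >= 0 with P^T xi = -x and -p^T xi >= b; we find it by minimizing p^T xi.
dual = linprog(c=p, A_eq=P.T, b_eq=-x, bounds=[(0, None)] * 4)
assert dual.status == 0
print(abs(-dual.fun - inner.fun) < 1e-8)   # strong LP duality: True
```

Here robust feasibility of x is certified twice: the worst-case value 1.1 exceeds b = 1, and the dual multiplier reproduces the same value, as the proof predicts.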

2.2 An example: robust antenna design

To illustrate the practical potential of Robust LP, consider a particular application – the antenna design problem. A monochromatic electro-magnetic antenna is characterized by its diagram, which is a complex-valued function D(\delta) of a 3D direction \delta; the sensitivity of the antenna w.r.t. a flat wave incoming along a direction \delta is proportional to |D(\delta)|^2.
An important property of the diagram is that the diagram of an antenna array – a complex antenna consisting of elements with diagrams D_1,\dots,D_N – is just the sum of the diagrams of the elements. When amplification of the outputs of the elements is allowed, the diagram of the resulting complex antenna becomes \sum_{j=1}^N x_jD_j(\cdot), where the x_j are (in general, complex-valued) "weights" of the elements.
A typical antenna design problem is:

(AD) Given antenna elements with diagrams D_1,\dots,D_N, find (complex-valued) weights x_1,\dots,x_N such that the diagram D(\cdot)=\sum_{j=1}^N x_jD_j(\cdot) of the resulting antenna array satisfies given design specifications (i.e., |D(\cdot)| is as close as possible to a given target function).
As a simple example, consider the following problem:
Circular Antenna. Let the antenna array consist of rings, centered at the origin, in the XY-plane. The diagram of such a ring is real-valued and depends solely on the altitude angle \theta (the angle between a direction and the XY-plane): specifically,

\[
D_\kappa(\theta)=\frac{1}{2}\int_0^{2\pi}\cos\bigl(2\pi\kappa\cos(\theta)\cos(\phi)\bigr)\,d\phi,
\]
where \kappa is the ratio of the ring radius to the wavelength. Assume there are 40 rings in the array, with \kappa_j=j/10, j=1,\dots,40. Our goal is to choose real weights x_j such that the diagram
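For reference, the ring-diagram integral is easy to evaluate numerically; the grid size below is an arbitrary choice of ours:

```python
import numpy as np

def ring_diagram(kappa, theta, n_grid=4096):
    """D_kappa(theta) = (1/2) * integral_0^{2pi} cos(2*pi*kappa*cos(theta)*cos(phi)) dphi,
    computed by the rectangle rule on a uniform grid (highly accurate here, since the
    integrand is 2*pi-periodic in phi)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    integrand = np.cos(2.0 * np.pi * kappa * np.cos(theta) * np.cos(phi))
    return 0.5 * (2.0 * np.pi) * integrand.mean()

# At theta = 90 degrees, cos(theta) = 0, the integrand is identically 1 and
# D_kappa(pi/2) = pi for every kappa.
print(ring_diagram(0.1, np.pi / 2))
```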

\[
D(\theta)=\sum_{j=1}^{40}x_jD_{\kappa_j}(\theta)
\]
is nearly uniform in the "angle of interest" 77^\circ\le\theta\le 90^\circ, specifically,
\[
77^\circ\le\theta\le 90^\circ\ \Rightarrow\ 0.9\le\sum_{j=1}^{40}x_jD_{\kappa_j}(\theta)\le 1;
\]
under this restriction, we want to minimize the "sidelobe attenuation level" \max_{0\le\theta\le 70^\circ}|D(\theta)|. With discretization in \theta, the problem can be modeled as the simple LP

\[
\min_{\tau,x_1,\dots,x_{40}}\ \left\{ \tau:\
0.9\le\sum_{j=1}^{40}x_jD_{\kappa_j}(\theta)\le 1,\ \theta\in\Theta_{cns};\quad
-\tau\le\sum_{j=1}^{40}x_jD_{\kappa_j}(\theta)\le\tau,\ \theta\in\Theta_{obj}
\right\} \tag{8}
\]
where \Theta_{cns} and \Theta_{obj} are finite grids on the segments [77^\circ,90^\circ] and [0^\circ,70^\circ], respectively. Solving (8), one arrives at the nominal design with a nice diagram as follows:

[Figure: Dream – nominal design, no implementation errors. Sidelobe attenuation level 0.018.]

In reality, the optimal weights xj correspond to characteristics of certain physical devices and as such cannot be implemented exactly. Thus, it is important to know what happens if the actual weights are affected by “implementation errors”

\[
x_j\mapsto(1+\xi_j)x_j,\qquad \xi_j\sim\mathrm{Uniform}[-\epsilon,\epsilon]\ \text{independent}. \tag{9}
\]
It turns out that even quite small implementation errors have a disastrous effect on the designed antenna:

[Figure: Reality – nominal design with implementation errors x_j\mapsto(1+\xi_j)x_j, \xi_j\sim\mathrm{Uniform}[-0.001,0.001]. Over a 100-diagram sample, the sidelobe level lies in [97, 536].]

[Figure: Reality – nominal design with implementation errors x_j\mapsto(1+\xi_j)x_j, \xi_j\sim\mathrm{Uniform}[-0.02,0.02]. Over a 100-diagram sample, the sidelobe level lies in [2469, 11552].]
We see that from the practical viewpoint the nominal design is meaningless – its "optimality" is completely destroyed by fairly small implementation errors!

2.2.1 From nominal to a robust design: interval model of uncertainty

In order to "immunize" the design against implementation errors, one can use the Robust Counterpart methodology. Indeed, the influence of multiplicative errors

\[
x_j\mapsto(1+\xi_j)x_j,\qquad |\xi_j|\le\epsilon_j,
\]
on a solution of an LP \min_x\{c^Tx:\ Ax\ge b\} is as if there were no implementation errors, but the matrix \begin{pmatrix}c^T\\ A\end{pmatrix} were known only up to multiplication from the right by a diagonal matrix D with diagonal entries D_{jj} varying in the segments [1-\epsilon_j,1+\epsilon_j]. As far as the robust counterpart is concerned, the resulting uncertainty is (equivalent to) a particular case of interval uncertainty, where every entry in the data of an LP, independently of other entries, runs through a given interval. Note that the robust counterpart of an uncertain LP with interval uncertainty

\[
\left\{ \min_x\left\{ c^Tx:\ Ax\ge b \right\} \ \Bigm|\
\begin{array}{l}
|c_j-c_j^n|\le\delta c_j,\ j=1,\dots,n,\\
|A_{ij}-A_{ij}^n|\le\delta A_{ij},\ i=1,\dots,m,\ j=1,\dots,n,\\
|b_i-b_i^n|\le\delta b_i,\ i=1,\dots,m
\end{array}
\right\}
\]
is clearly equivalent to the LP

\[
\min_{x,y}\ \left\{ \sum_{j=1}^n\left[c_j^nx_j+\delta c_jy_j\right]:\
\sum_{j=1}^n A_{ij}^nx_j-\sum_{j=1}^n\delta A_{ij}y_j\ge b_i^n+\delta b_i,\ i=1,\dots,m;\quad
-y_j\le x_j\le y_j,\ j=1,\dots,n \right\}.
\]
It is worth mentioning that uncertain LPs with interval uncertainty and their robust counterparts were considered by A.L. Soyster as early as 1973. As applied to our Circular Antenna problem affected by implementation errors, the outlined approach leads to the Interval robust counterpart

\[
\min_{\tau,x,y}\ \left\{ \tau:\
\begin{array}{ll}
0.9\le\displaystyle\sum_{j=1}^{40}\left[x_jD_{\kappa_j}(\theta)-\epsilon|D_{\kappa_j}(\theta)|y_j\right], & \theta\in\Theta_{cns},\\[1mm]
\displaystyle\sum_{j=1}^{40}\left[x_jD_{\kappa_j}(\theta)+\epsilon|D_{\kappa_j}(\theta)|y_j\right]\le 1, & \theta\in\Theta_{cns},\\[1mm]
-\tau\le\displaystyle\sum_{j=1}^{40}\left[x_jD_{\kappa_j}(\theta)-\epsilon|D_{\kappa_j}(\theta)|y_j\right], & \theta\in\Theta_{obj},\\[1mm]
\displaystyle\sum_{j=1}^{40}\left[x_jD_{\kappa_j}(\theta)+\epsilon|D_{\kappa_j}(\theta)|y_j\right]\le\tau, & \theta\in\Theta_{obj},\\[1mm]
-y_j\le x_j\le y_j, & j=1,\dots,40
\end{array}
\right\}
\]
where \epsilon is the level of implementation errors we intend to withstand.
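The absolute-value trick behind these reformulations (y_j playing the role of |x_j|) can be checked on random data; the instance below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 5
A_nom = rng.normal(size=(m, n))                 # nominal matrix A^n (hypothetical)
dA = 0.1 * np.abs(rng.normal(size=(m, n)))      # interval half-widths deltaA_ij
x = rng.normal(size=n)
y = np.abs(x)                                   # y_j = |x_j| is the tight choice

# Worst case of (Ax)_i over A_ij in [A^n_ij - dA_ij, A^n_ij + dA_ij]:
# each entry takes the endpoint that hurts, contributing -dA_ij * |x_j|.
worst_direct = A_nom @ x - dA @ np.abs(x)

# Left-hand side produced by the robust counterpart LP with y_j >= |x_j|:
lhs_rc = A_nom @ x - dA @ y
print(np.allclose(worst_direct, lhs_rc))        # True

# Monte Carlo sanity check: no admissible A dips below the worst case.
for _ in range(1000):
    A = A_nom + dA * rng.uniform(-1.0, 1.0, size=(m, n))
    assert np.all(A @ x >= worst_direct - 1e-9)
```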

2.2.2 From nominal to a robust design: ellipsoidal model of uncertainty

The "worst-case-oriented" interval model of uncertainty looks too conservative. Indeed, when the perturbations in the coefficients of an uncertain linear inequality are of stochastic nature, it is highly improbable that they will simultaneously take the most dangerous values. This is the case, e.g., in our Antenna Design problem, where we have all reasons to assume that the implementation errors are as in (9).

7 Let us look at what happens with a linear inequality

\[
a_0+\sum_{j=1}^na_jx_j\le 0 \tag{10}
\]
at a given candidate solution x, when the coefficients of the constraint are affected by random perturbations, so that the vector a=(a_0,a_1,\dots,a_n)^T is random. In this case, the left hand side of (10) is a random variable \zeta_x with mean and standard deviation given by

\[
\begin{array}{rcl}
\mathbf{E}\{\zeta_x\}&=&(1,x^T)a^n,\qquad a^n=\mathbf{E}\{a\},\\[1mm]
\mathrm{StD}\{\zeta_x\}&\equiv&\mathbf{E}^{1/2}\left\{\left[\zeta_x-\mathbf{E}\{\zeta_x\}\right]^2\right\}=\sqrt{(1,x^T)V(1,x^T)^T},\qquad V=\mathbf{E}\left\{(a-a^n)(a-a^n)^T\right\}.
\end{array} \tag{11}
\]
Now let us choose a "safety parameter" \Omega and ignore the "rare event" \zeta_x>\mathbf{E}\{\zeta_x\}+\Omega\,\mathrm{StD}\{\zeta_x\}, but, at the same time, let us take upon ourselves full responsibility to satisfy (10) in all other events. With this approach, a "safe" version of our inequality becomes

\[
a_0^n+\sum_{j=1}^na_j^nx_j+\Omega\sqrt{(1,x^T)V(1,x^T)^T}\le 0; \tag{12}
\]
if this inequality is satisfied at a given x, then the "true" inequality (10) is not satisfied at x with probability p(x,\Omega) which approaches 0 as \Omega grows. (For example, in the case of normally distributed a, one has p(x,\Omega)\le\exp\{-\Omega^2/2\}, so that when choosing \Omega=5.24, we are sure that if x satisfies (12), then the probability that x violates (10) is less than 1.e-6; with \Omega=9.6, the probability in question is less than 1.e-20.) It remains to note that (12) is exactly the robust counterpart of the uncertain inequality (10) corresponding to choosing as the uncertainty set the ellipsoid

\[
\mathcal{U}=\left\{ a=a^n+\Omega V^{1/2}\zeta\ \middle|\ \zeta^T\zeta\le 1 \right\}.
\]
We see that an ellipsoidal uncertainty is a natural way to model, in an "\epsilon-reliable fashion", random perturbations of the data in uncertain LPs; usually this approach is significantly less conservative than the worst-case-oriented interval one. Note, however, that although "\epsilon-reliability" with \epsilon=1.e-20, and in many applications with \epsilon=1.e-6, is for all practical purposes the same as "complete reliability", one can use this approach to modeling uncertainty only when there are strong reasons to believe that the data perturbations indeed are random with a known "light tail" distribution. As applied to our Circular Antenna problem affected by implementation errors (9), the approach leads to the Ellipsoidal robust counterpart

\[
\min_{\tau,x}\ \left\{ \tau:\
\begin{array}{ll}
0.9+\Omega\epsilon\sqrt{\displaystyle\sum_{j=1}^{40}x_j^2D_{\kappa_j}^2(\theta)}\le\displaystyle\sum_{j=1}^{40}x_jD_{\kappa_j}(\theta), & \theta\in\Theta_{cns},\\[3mm]
\displaystyle\sum_{j=1}^{40}x_jD_{\kappa_j}(\theta)+\Omega\epsilon\sqrt{\displaystyle\sum_{j=1}^{40}x_j^2D_{\kappa_j}^2(\theta)}\le 1, & \theta\in\Theta_{cns},\\[3mm]
-\tau+\Omega\epsilon\sqrt{\displaystyle\sum_{j=1}^{40}x_j^2D_{\kappa_j}^2(\theta)}\le\displaystyle\sum_{j=1}^{40}x_jD_{\kappa_j}(\theta), & \theta\in\Theta_{obj},\\[3mm]
\displaystyle\sum_{j=1}^{40}x_jD_{\kappa_j}(\theta)+\Omega\epsilon\sqrt{\displaystyle\sum_{j=1}^{40}x_j^2D_{\kappa_j}^2(\theta)}\le\tau, & \theta\in\Theta_{obj}
\end{array}
\right\}.
\]
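The equivalence between the safe version (12) and the worst case over the ellipsoid \mathcal{U} can be verified by sampling; all data below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
a_nom = rng.normal(size=n + 1)           # nominal data a^n = E{a} (hypothetical)
M = rng.normal(size=(n + 1, n + 1))
V = M @ M.T                              # a covariance matrix for the data vector a
x = rng.normal(size=n)
e = np.concatenate(([1.0], x))           # the vector (1, x^T) from (11)
Omega = 5.24                             # safety parameter

# Left-hand side of (12): mean plus Omega standard deviations of zeta_x.
lhs_12 = e @ a_nom + Omega * np.sqrt(e @ V @ e)

# Worst case of (1, x^T) a over the ellipsoid a = a^n + Omega V^{1/2} zeta,
# zeta^T zeta <= 1, estimated by sampling unit directions zeta.
L = np.linalg.cholesky(V)                # a matrix square root of V
zetas = rng.normal(size=(200_000, n + 1))
zetas /= np.linalg.norm(zetas, axis=1, keepdims=True)
sampled_sup = (e @ a_nom + Omega * (zetas @ L.T) @ e).max()

# The sampled supremum never exceeds the closed form in (12).
print(sampled_sup <= lhs_12 + 1e-9)      # True
```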

2.2.3 Comparing the designs

Below we summarize a comparison of the designs corresponding to the Nominal problem and its Interval and Ellipsoidal Robust counterparts for our Circular Antenna example.

[Figure: diagrams of the Nominal design (no errors), the Interval Robust design (2% errors) and the Ellipsoidal Robust design (2% errors).]

Design   | eps = 0:            | eps = 0.001:                   | eps = 0.02:
         | Sidelobe   Constr.  | Sidelobe^{1)}        Constr.   | Sidelobe^{1)}        Constr.
         | level      viol.    | level                viol.^{2)}| level                viol.^{2)}
Nom      | 0.018      0        | 97 – 536             1338      | 2469 – 11552         23756
Ell Rob  | 0.065      0        | 0.0653 – 0.0657      0         | 0.0657 – 0.0712      0.009
Int Rob  | 0.110      0        | 0.1095 – 0.1099      0         | 0.1077 – 0.1151      0

1) minimum and maximum over a 100-diagram sample
2) maximum over a 100-diagram sample

Note that although the Interval and the Ellipsoidal Robust designs are built for the implementation error \epsilon=0.001, both designs are capable of withstanding even 20 times larger implementation errors. We see also that in terms of the objective the Ellipsoidal Robust design is 50% better than the Interval Robust one, although both designs possess the same capability to withstand random implementation errors.

2.3 NETLIB case study

Consider a real-world LP program, PILOT4, from the NETLIB library (1,000 variables, 410 constraints). Constraint # 372 is:

\[
\begin{array}{rcl}
[a^n]^Tx&\equiv&-15.79081x_{826}-8.598819x_{827}-1.88789x_{828}-1.362417x_{829}-1.526049x_{830}\\
&&-\,0.031883x_{849}-28.725555x_{850}-10.792065x_{851}-0.19004x_{852}-2.757176x_{853}\\
&&-\,12.290832x_{854}+717.562256x_{855}-0.057865x_{856}-3.785417x_{857}-78.30661x_{858}\\
&&-\,122.163055x_{859}-6.46609x_{860}-0.48371x_{861}-0.615264x_{862}-1.353783x_{863}\\
&&-\,84.644257x_{864}-122.459045x_{865}-43.15593x_{866}-1.712592x_{870}-0.401597x_{871}\\
&&+\,x_{880}-0.946049x_{898}-0.946049x_{916}\\
&\ge& b\equiv 23.387405.
\end{array} \tag{13}
\]
Most of the coefficients are "ugly reals" like -8.598819; we have all reasons to believe that these coefficients are characteristics of certain technological processes/devices and as such cannot be

known with high accuracy; thus, we can treat the ugly coefficients as uncertain, coinciding with the "true" coefficients to within an accuracy of, say, 3–4 digits, not more. An exception is the coefficient 1 of x_{880}, which perhaps reflects the structure of the problem and is therefore certain.
With these natural assumptions, we can ask what happens with the constraint at the optimal solution x^n, as reported by CPLEX, when we perturb the uncertain coefficients within, say, a 0.01% margin. The answer is as follows:
• The worst, over all 0.01%-perturbations of the uncertain coefficients, violation of the constraint at x^n is as large as 450% of the right hand side;
• With independent random multiplicative perturbations of the uncertain coefficients, distributed uniformly, the constraint at x^n is violated by as much as 150% of the right hand side with probability 0.18.
We see that the usual solution to a real-world LP can be highly unreliable – it can become heavily infeasible as a result of fairly small perturbations of the uncertain data. To realize how frequent this unpleasant phenomenon is and how to struggle with it, we have carried out a case study as follows (for full details, see [6]).
• We looked through the list of NETLIB LPs and, for every one of them, treated as certain the coefficients of the equality constraints and the "simple" (representable as fractions with denominators \le 100) coefficients of the inequality constraints; all other coefficients were treated as uncertain.
• At the "analysis" stage of our case study, we looked at what happens with the feasibility properties of the nominal solution (as reported by CPLEX 6.2) when the uncertain coefficients are subject to small (0.01%) random perturbations. It turned out that with these data perturbations, in 13 (of the total of 90) problems the nominal solution violates some of the constraints by more than 50% of the right hand side!
• At the "synthesis" stage, we used the Interval and the Ellipsoidal robust counterparts to get "uncertainty-immunized" solutions to the "bad" NETLIB LPs identified at the analysis stage. It turned out that both approaches yield fairly close results and that, in terms of optimality, "immunization" against \epsilon-uncertainty is basically costless, provided that \epsilon is not too large. For example, passing from the nominal solutions of the NETLIB problems to the robust ones, immunized against 0.1% uncertainty, we never lost more than 1% in the value of the objective.
We believe that the outlined case study demonstrates the high potential of the Robust Optimization methodology in correctly processing real-world LPs.
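The "analysis" computation is easy to reproduce in principle: for a constraint a^Tx\ge b whose uncertain coefficients are perturbed as a_j\mapsto(1+\xi_j)a_j with |\xi_j|\le\epsilon, the worst-case left-hand-side value is a^Tx-\epsilon\sum_j|a_jx_j| (the sum running over the uncertain coefficients). A toy sketch with a few PILOT4-like coefficients and a hypothetical solution vector (not the actual CPLEX solution):

```python
import numpy as np

def worst_case_violation(a, x, b, eps, uncertain):
    """Worst-case value of a^T x under coefficient-wise relative perturbations
    a_j -> a_j*(1 + xi_j), |xi_j| <= eps, applied to the 'uncertain' coefficients;
    returns the violation max(0, b - worst_value) as a fraction of |b|."""
    worst = a @ x - eps * np.abs(a * x * uncertain).sum()
    return max(0.0, b - worst) / abs(b)

a = np.array([-15.79081, -8.598819, 717.562256, 1.0])  # a few PILOT4-like coefficients
x = np.array([10.0, 5.0, 1.0, 30.0])                   # hypothetical solution values
uncertain = np.array([1.0, 1.0, 1.0, 0.0])             # the coefficient 1 is certain
b = a @ x                                              # make the constraint active at x
viol = worst_case_violation(a, x, b, 1e-4, uncertain)
print(viol > 0)                                        # True: even 0.01% errors violate
```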

3 Robust Quadratic Programming

Consider an uncertain convex quadratically constrained problem

\[
\left\{ \min_x\left\{ c^Tx:\ x^TA_i^TA_ix\le 2b_i^Tx+c_i,\ i=1,\dots,m \right\} \ \Bigm|\ \{A_i,b_i,c_i\}_{i=1}^m\in\mathcal{U} \right\} \tag{14}
\]
(w.l.o.g., we may treat the objective as certain, moving, if necessary, the original objective to the list of constraints). Here, in contrast to the case of LP, uncertainty sets even of fairly simple geometry (e.g., a box) can lead to NP-hard (and thus computationally intractable) robust counterparts [5]. In these cases, the Robust Optimization methodology recommends using an approximate robust counterpart instead of the true one.

3.1 Approximate robust counterpart of an uncertain problem

Consider an uncertain optimization problem

\[
\mathcal{P}=\left\{ \min_x\left\{ c^Tx:\ F(x,\zeta)\le 0 \right\} \ \Bigm|\ \zeta\in\mathcal{U} \right\} \tag{15}
\]
(as above, we lose nothing by assuming the objective to be linear and certain). Typically, the uncertainty set \mathcal{U} is given as
\[
\mathcal{U}=\zeta^n+\mathcal{V}, \tag{16}
\]
where \zeta^n stands for the nominal data and \mathcal{V} is a perturbation set. From now on, we postulate this structure and assume that \mathcal{V} is a convex compact set containing the origin.
In the case of (16), our uncertain problem \mathcal{P} can be treated as a member of the parametric family of uncertain problems

\[
\mathcal{P}_\rho=\left\{ \min_x\left\{ c^Tx:\ F(x,\zeta)\le 0 \right\} \ \Bigm|\ \zeta\in\mathcal{U}_\rho=\zeta^n+\rho\mathcal{V} \right\}, \tag{17}
\]
where \rho\ge 0 is the "level of uncertainty" (equal to 1 for the original problem). Let
\[
\mathcal{X}_\rho=\left\{x\ \middle|\ F(x,\zeta)\le 0\ \ \forall\,\zeta\in\mathcal{U}_\rho\right\} \tag{18}
\]
be the robust feasible set of \mathcal{P}_\rho; these sets clearly shrink as \rho grows.
We call an optimization problem

\[
\min_{x,u}\left\{ c^Tx:\ G(x,u)\le 0 \right\} \tag{19}
\]
an approximate robust counterpart of \mathcal{P} if the projection \mathcal{Y} of the feasible set of (19) on the x-plane is contained in \mathcal{X}_1, and we say that the level of conservativeness of an approximate robust counterpart (19) does not exceed \alpha if \mathcal{X}_\alpha\subset\mathcal{Y}. In other words, (19) is an approximate RC of \mathcal{P} with level of conservativeness \le\alpha if:
1. Whenever a given x can be extended to a feasible solution of (19), x is robust feasible for \mathcal{P}; an approximate RC is "more conservative" than the true RC;
2. Whenever x cannot be extended to a feasible solution of (19), it may or may not be robust feasible for \mathcal{P}, but it certainly loses robust feasibility when the level of perturbations is increased by the factor \alpha.
Taking into account that the level of perturbations is usually known only "up to a factor of order of 1", an approximate robust counterpart with an O(1) level of conservativeness can be treated as an appropriate alternative to the true RC.

3.2 Approximate robust counterparts of uncertain convex quadratic problems with ellipsoidal uncertainty

Assume that the uncertainty set \mathcal{U} in (14) is ellipsoidal:
\[
\mathcal{U}=\left\{ \left(c_i,A_i,b_i\right)_{i=1}^m=\left(c_i^n,A_i^n,b_i^n\right)_{i=1}^m+\sum_{\ell=1}^L\zeta_\ell\left(c_i^\ell,A_i^\ell,b_i^\ell\right)_{i=1}^m \ \middle|\ \zeta^TQ_j\zeta\le 1,\ j=1,\dots,k \right\}, \tag{20}
\]
where Q_j\succeq 0 and \sum_{j=1}^kQ_j\succ 0. Note that this is a fairly general model of uncertainty (including, as a particular case, the interval uncertainty).

Given an uncertain convex quadratically constrained problem (14) with the ellipsoidal uncertainty (20), let us associate with it the semidefinite program
\[
\begin{array}{ll}
\text{minimize} & c^Tx\\
\text{subject to} & \\[1mm]
&\begin{pmatrix}
2x^Tb_i^n+c_i^n-\sum\limits_{j=1}^k\lambda_{ij} & \frac{c_i^1}{2}+x^Tb_i^1 & \cdots & \frac{c_i^L}{2}+x^Tb_i^L & [A_i^nx]^T\\[2mm]
\frac{c_i^1}{2}+x^Tb_i^1 & & & & [A_i^1x]^T\\
\vdots & & \sum\limits_{j=1}^k\lambda_{ij}Q_j & & \vdots\\
\frac{c_i^L}{2}+x^Tb_i^L & & & & [A_i^Lx]^T\\[1mm]
A_i^nx & A_i^1x & \cdots & A_i^Lx & I
\end{pmatrix}\succeq 0,\quad i=1,\dots,m,\\[2mm]
&\lambda_{ij}\ge 0,\quad i=1,\dots,m,\ j=1,\dots,k,
\end{array} \tag{21}
\]
in variables x, \lambda_{ij}.

Theorem 3.1 [8] Problem (21) is an approximate robust counterpart of the uncertain convex quadratically constrained problem (14) with ellipsoidal uncertainty (20), and the level of conservativeness \Omega of this approximation can be bounded as follows:
(i) In the case of a general-type ellipsoidal uncertainty (20), one has
\[
\Omega\le\sqrt{3.6+2\ln\left(\sum_{j=1}^k\mathrm{Rank}(Q_j)\right)}. \tag{22}
\]
Note that the right hand side in (22) is < 6.1, provided that \sum_{j=1}^k\mathrm{Rank}(Q_j)\le 15{,}000{,}000.
(ii) In the case of box uncertainty (i.e., \zeta^TQ_j\zeta=\zeta_j^2, 1\le j\le k=L\equiv\dim\zeta),
\[
\Omega\le\frac{\pi}{2}.
\]
(iii) In the case of simple ellipsoidal uncertainty (k=1 in (20)), \Omega=1, i.e., problem (21) is equivalent to the robust counterpart of (14), (20). \square
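The numeric remark after (22) is a one-line computation:

```python
import math

# Right-hand side of (22) when the ranks of the Q_j sum to 15,000,000.
omega_bound = math.sqrt(3.6 + 2.0 * math.log(15_000_000))
print(round(omega_bound, 3))  # about 6.05, indeed below 6.1
```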

3.3 Approximate robust counterparts of uncertain conic quadratic problems with ellipsoidal uncertainty

Consider an uncertain conic quadratic program

\[
\left\{ \min_x\left\{ c^Tx:\ \|A_ix+b_i\|_2\le\alpha_i^Tx+\beta_i,\ i=1,\dots,m \right\} \ \Bigm|\ \{A_i,b_i,\alpha_i,\beta_i\}_{i=1}^m\in\mathcal{U} \right\} \tag{23}
\]
affected by side-wise uncertainty:

\[
\mathcal{U}=\left\{ \{A_i,b_i,\alpha_i,\beta_i\}_{i=1}^m \ \middle|\
\{A_i,b_i\}_{i=1}^m\in\mathcal{U}^{\mathrm{left}},\ \{\alpha_i,\beta_i\}_{i=1}^m\in\mathcal{U}^{\mathrm{right}} \right\} \tag{24}
\]
and assume that

1. The left-hand-side uncertainty set is ellipsoidal:
\[
\mathcal{U}^{\mathrm{left}}=\left\{ \left(A_i,b_i\right)_{i=1}^m=\left(A_i^n,b_i^n\right)_{i=1}^m+\sum_{\ell=1}^L\zeta_\ell\left(A_i^\ell,b_i^\ell\right)_{i=1}^m \ \middle|\ \zeta^TQ_j\zeta\le 1,\ j=1,\dots,k \right\}, \tag{25}
\]
where Q_j\succeq 0, j=1,\dots,k, and \sum_jQ_j\succ 0;

2. The right-hand-side uncertainty set \mathcal{U}^{\mathrm{right}} is bounded and semidefinite-representable:
\[
\begin{array}{rcl}
\mathcal{U}^{\mathrm{right}}&=&\left\{ \left(\alpha_i,\beta_i\right)_{i=1}^m=\left(\alpha_i^n,\beta_i^n\right)_{i=1}^m+\sum_{r=1}^R\eta_r\left(\alpha_i^r,\beta_i^r\right)_{i=1}^m \ \middle|\ \eta\in\mathcal{V} \right\},\\[2mm]
\mathcal{V}&=&\left\{ \eta\ \middle|\ \exists u:\ P(\eta)+Q(u)-R\succeq 0 \right\},
\end{array} \tag{26}
\]
where P(\eta):\mathbf{R}^R\to\mathbf{S}^N, Q(u):\mathbf{R}^p\to\mathbf{S}^N are linear mappings taking values in the space \mathbf{S}^N of symmetric N\times N matrices, and R\in\mathbf{S}^N.

Let us associate with (23) – (26) the semidefinite program
\[
\begin{array}{ll}
\text{minimize} & c^Tx\\
\text{subject to} & \\[1mm]
&\begin{pmatrix}
\tau_i-\sum\limits_{j=1}^k\lambda_{ij} & & [A_i^nx+b_i^n]^T\\[2mm]
 & \sum\limits_{j=1}^k\lambda_{ij}Q_j & \begin{matrix}[A_i^1x+b_i^1]^T\\ \vdots\\ [A_i^Lx+b_i^L]^T\end{matrix}\\[2mm]
A_i^nx+b_i^n & A_i^1x+b_i^1\ \cdots\ A_i^Lx+b_i^L & \tau_iI
\end{pmatrix}\succeq 0,\quad i=1,\dots,m,\\[2mm]
&\lambda_{ij}\ge 0,\quad i=1,\dots,m,\ j=1,\dots,k,\\[1mm]
&\tau_i\le x^T\alpha_i^n+\beta_i^n+\mathrm{Tr}(RV_i),\quad i=1,\dots,m,\\[1mm]
&P^*(V_i)=\begin{pmatrix}x^T\alpha_i^1+\beta_i^1\\ \vdots\\ x^T\alpha_i^R+\beta_i^R\end{pmatrix},\quad i=1,\dots,m,\\[1mm]
&Q^*(V_i)=0,\quad V_i\succeq 0,\quad i=1,\dots,m,
\end{array} \tag{27}
\]
in variables x, \lambda_{ij}, \tau_i, V_i; here, for a linear mapping S(y)=\sum_{i=1}^qy_iS_i:\mathbf{R}^q\to\mathbf{S}^N,
\[
S^*(Y)=\begin{pmatrix}\mathrm{Tr}(YS_1)\\ \vdots\\ \mathrm{Tr}(YS_q)\end{pmatrix}:\ \mathbf{S}^N\to\mathbf{R}^q
\]
is the conjugate mapping.

Theorem 3.2 [8] Assume that the semidefinite representation of \mathcal{V} in (26) is strictly feasible, i.e., there exist \bar\eta, \bar u such that P(\bar\eta)+Q(\bar u)-R\succ 0. Then problem (27) is an approximate robust counterpart of the uncertain conic quadratic problem (23) with uncertainty (24) – (26), and the level of conservativeness \Omega of this approximation can be bounded as follows:
(i) In the case of a general-type ellipsoidal uncertainty (25) in the left hand side data, one has
\[
\Omega\le\sqrt{3.6+2\ln\left(\sum_{j=1}^k\mathrm{Rank}(Q_j)\right)}. \tag{28}
\]
(ii) In the case of box uncertainty in the left hand side data (\zeta^TQ_j\zeta=\zeta_j^2, 1\le j\le k=L\equiv\dim\zeta),
\[
\Omega\le\frac{\pi}{2}.
\]
(iii) In the case of simple ellipsoidal uncertainty in the left hand side data (k=1 in (25)), \Omega=1, so that problem (27) is equivalent to the robust counterpart of (23) – (26). \square

3.4 Example: Least Squares Antenna Design

To illustrate the potential of the Robust Optimization methodology as applied to conic quadratic problems, consider the Circular Antenna problem from Section 2.2 and assume that now our goal is to minimize the (discretized) L_2-distance from the synthesized diagram \sum_{j=1}^{40}x_jD_{\kappa_j}(\cdot) to the "ideal" diagram D_*(\cdot), which is equal to 1 in the range 77^\circ\le\theta\le 90^\circ and to 0 in the range 0^\circ\le\theta\le 70^\circ. The associated problem is just the Least Squares problem

\[
\min_{\tau,x}\left\{ \tau:\ \underbrace{\sqrt{\frac{\sum\limits_{\theta\in\Theta_{obj}}D_x^2(\theta)+\sum\limits_{\theta\in\Theta_{cns}}\left(D_x(\theta)-1\right)^2}{\mathrm{card}(\Theta_{cns}\cup\Theta_{obj})}}}_{\|D_*-D_x\|_2}\le\tau \right\},\qquad D_x(\theta)=\sum_{j=1}^{40}x_jD_{\kappa_j}(\theta). \tag{29}
\]
The Nominal Least Squares design obtained from the optimal solution to this problem is completely unstable w.r.t. small implementation errors:

[Figure: Nominal Least Squares design – dream and reality (data over a 100-diagram sample):
Dream (no errors): \|D_*-D\|_2=0.014;
Reality (0.1% errors): \|D_*-D\|_2\in[0.17,0.89];
Reality (2% errors): \|D_*-D\|_2\in[2.9,19.6].]

In order to take into account implementation errors (9), we should treat (29) as an uncertain conic quadratic problem

\[
\left\{ \min_{\tau,x}\left\{ \tau:\ \|Ax-b\|_2\le\tau \right\} \ \Bigm|\ A\in\mathcal{U} \right\}
\]
with the uncertainty set of the form

\[
\mathcal{U}=\left\{ A=A^n+A^n\mathrm{Diag}(\delta)\ \middle|\ \|\delta\|_\infty\le\epsilon \right\};
\]

in the results to follow, we use \epsilon=0.02. The corresponding approximate RC (27) yields the Robust design as follows:

[Figure: Robust Least Squares design – dream and reality (data over a 100-diagram sample):
Dream (no errors): \|D_*-D\|_2=0.025;
Reality (0.1% errors): \|D_*-D\|_2\approx 0.025;
Reality (2% errors): \|D_*-D\|_2\approx 0.025.]
Other examples of applications of Robust Optimization in quadratic and conic quadratic programming can be found in [12, 13].

4 Robust Semidefinite Programming

The robust counterpart of an uncertain semidefinite program
$$
\min_x \left\{ c^T x : A_0 + \sum_{i=1}^n x_i A_i \succeq 0 \;\; \forall (A_0, \dots, A_n) \in \mathcal{U} \right\}
\tag{30}
$$
is NP-hard already in the simple case where the uncertainty set $\mathcal{U}$ is an ellipsoid. Thus, we are forced to look for "moderately conservative" approximate robust counterparts of uncertain SDPs. The strongest result in this direction deals with the case of box uncertainty.

4.1 Approximate robust counterpart of an uncertain semidefinite program affected by box uncertainty

Theorem 4.1 [7] Assume that the uncertain SDP (30) is affected by "box uncertainty", i.e.,

$$
\mathcal{U} = \left\{ (A_0, \dots, A_n) = (A_0^n, \dots, A_n^n) + \sum_{\ell=1}^L \zeta_\ell (A_0^\ell, \dots, A_n^\ell) \;\Big|\; \|\zeta\|_\infty \le 1 \right\}.
\tag{31}
$$
Then the semidefinite program
$$
\min_{x, X^\ell} \left\{ c^T x :
\begin{array}{l}
A_\ell[x] \equiv A_0^\ell + \sum_{j=1}^n x_j A_j^\ell, \quad \ell = 1, \dots, L, \\
X^\ell \succeq A_\ell[x], \;\; X^\ell \succeq -A_\ell[x], \quad \ell = 1, \dots, L, \\
\sum_{\ell=1}^L X^\ell \preceq A_0^n + \sum_{j=1}^n x_j A_j^n
\end{array}
\right\}
\tag{32}
$$
is an approximate robust counterpart of the uncertain semidefinite program (30)--(31), and the level of conservativeness $\Omega$ of this approximation can be bounded as follows. Let

$$
\mu = \max_{1 \le \ell \le L} \max_x \, {\rm Rank}(A_\ell[x])
$$
(note $\ell \ge 1$ in the max). Then
$$
\Omega \le \vartheta(\mu) = \left\{
\begin{array}{ll}
\frac{\pi}{2}, & \mu \le 2, \\[1mm]
\frac{\pi \sqrt{\mu}}{2}, & \mu \ge 3.
\end{array}
\right.
\tag{33}
$$
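The mechanism behind (32) admits a quick numerical sanity check on a toy instance with made-up data: if $X^\ell \succeq \pm A_\ell[x]$ for every $\ell$ and $\sum_\ell X^\ell \preceq A^n[x]$, then $A^n[x] + \sum_\ell \zeta_\ell A_\ell[x] \succeq 0$ for every $\|\zeta\|_\infty \le 1$, i.e., $x$ is robust feasible:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

def sym(n):
    M = rng.standard_normal((n, n))
    return (M + M.T) / 2

def mat_abs(S):
    # matrix absolute value |S| = V |Lambda| V^T; satisfies |S| >= +S and |S| >= -S
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.abs(w)) @ V.T

n, L = 4, 3
A_pert = [sym(n) for _ in range(L)]      # the A_l[x] at some fixed x (made up)
X = [mat_abs(Al) for Al in A_pert]       # a feasible choice of the X^l in (32)
A_nom = sum(X) + np.eye(n)               # chosen so that sum_l X^l <= A_nom

# lambda_min is concave in zeta, so it suffices to test the 2^L box vertices
worst_eig = min(
    np.linalg.eigvalsh(A_nom + sum(s * Al for s, Al in zip(signs, A_pert))).min()
    for signs in itertools.product([-1.0, 1.0], repeat=L)
)

def theta(mu):
    # conservativeness bound (33)
    return np.pi / 2 if mu <= 2 else np.pi * np.sqrt(mu) / 2
```

Here `worst_eig` stays bounded away from zero, confirming robust feasibility for this instance.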

We see that the level of conservativeness of the approximate robust counterpart (32) depends solely on the ranks of the "basic perturbation matrices" $A_\ell[\cdot]$ and is independent of the other sizes of the problem. This is good news, since in many applications the ranks of the perturbation matrices are $O(1)$. Consider an instructive example.

4.1.1 Example: Lyapunov Stability Synthesis

An LMI region is a set in the complex plane $\mathbf{C}$ representable as

$$
\mathcal{C} = \left\{ z \in \mathbf{C} \;\big|\; f_{\mathcal{C}}(z) \equiv L + M z + M^T \bar{z} \prec 0 \right\},
$$
where $L = L^T$ and $M$ are real $k \times k$ matrices and $\bar{z}$ is the complex conjugate of $z$. The simplest examples of LMI regions are:

1. Open left half-plane: $f_{\mathcal{C}}(z) = z + \bar{z}$;

2. Open disk $\{ z \mid |z + q| \le r \}$, $q \in \mathbf{R}$, $r > 0$: $f_{\mathcal{C}}(z) = \begin{pmatrix} -r & \bar{z} + q \\ z + q & -r \end{pmatrix}$;

3. The interior of the sector $\{ z \mid \pi - \theta \le |\arg(z)| \le \pi \}$ ($-\pi < \arg(z) \le \pi$, $0 < \theta < \frac{\pi}{2}$):
$f_{\mathcal{C}}(z) = \begin{pmatrix} (z + \bar{z}) \sin\theta & -(z - \bar{z}) \cos\theta \\ (z - \bar{z}) \cos\theta & (z + \bar{z}) \sin\theta \end{pmatrix}$;

4. The stripe $\{ z \mid h_1 < \Re(z) < h_2 \}$: $f_{\mathcal{C}}(z) = \begin{pmatrix} 2h_1 - (z + \bar{z}) & 0 \\ 0 & (z + \bar{z}) - 2h_2 \end{pmatrix}$.

It is known that

(!) The spectrum $\Sigma(A)$ of a real $n \times n$ matrix $A$ belongs to $\mathcal{C}$ if and only if there exists $Y \in \mathbf{S}^n$, $Y \succ 0$, such that the matrix
$$
\mathcal{M}[Y, A] = \Big[ L_{pq} Y + M_{pq} A Y + M_{qp} Y A^T \Big]_{1 \le p,q \le k}
$$
is negative definite. Here and in what follows $[B_{pq}]_{1 \le p,q \le k}$ denotes the block matrix with blocks $B_{pq}$.

We can treat $Y$ from (!) as a certificate of the inclusion $\Sigma(A) \subset \mathcal{C}$, and by homogeneity reasons we can normalize this certificate to satisfy the relations $Y \succeq I$, $\mathcal{M}[Y, A] \preceq -I$. From now on, we speak about normalized certificates only. In many control applications, we are interested in solving the following problem:

(LSS[ρ]) Given an LMI region $\mathcal{C}$ and an "uncertain interval matrix"
$$
\mathcal{AB}_\rho = \Big\{ (A, B) \in \mathbf{R}^{n \times n} \times \mathbf{R}^{n \times m} \;\Big|\;
|A_{ij} - A^n_{ij}| \le \rho C_{ij}, \; |B_{i\ell} - B^n_{i\ell}| \le \rho D_{i\ell}, \; 1 \le i, j \le n, \; 1 \le \ell \le m \Big\},
$$
find a linear feedback $K \in \mathbf{R}^{m \times n}$ and a matrix $Y$ which certifies that the spectra of all matrices of the form $A + BK$, $(A, B) \in \mathcal{AB}_\rho$, belong to $\mathcal{C}$.

For example, in the case when $\mathcal{C}$ is the open left half-plane, our question becomes:

find $K$ such that all matrices of the form $A + BK$, $(A, B) \in \mathcal{AB}_\rho$, share a common "stability certificate" $Y \succeq I$:
$$
(A + BK) Y + Y (A + BK)^T \preceq -I \quad \forall (A, B) \in \mathcal{AB}_\rho.
$$
The interpretation is as follows: given an uncertain time-varying controlled linear system
$$
\frac{d}{dt} x(t) = A(t) x(t) + B(t) u(t), \quad (A(t), B(t)) \in \mathcal{AB}_\rho \;\; \forall t,
$$
we are looking for a linear feedback

u(t) = Kx(t)

such that the stability of the resulting uncertain closed loop system
$$
\frac{d}{dt} x(t) = (A(t) + B(t) K) x(t), \quad (A(t), B(t)) \in \mathcal{AB}_\rho \;\; \forall t,
$$
can be certified by a quadratic Lyapunov function $x^T Y^{-1} x$:

$$
\frac{d}{dt} \left( x^T(t) Y^{-1} x(t) \right) < 0
$$
for all $t$ and all nonzero trajectories $x(\cdot)$ of the closed loop system.

Note that replacing in the preceding story the open left half-plane with the open unit disk, we come to a completely similar problem for a discrete time uncertain controlled system.

Problem (LSS[ρ]) is NP-hard. It turns out, however, that with the aid of Theorem 4.1 we can build a fairly good tractable approximation of the problem. Note that $Y \succeq I$ and $K$ form a solution of (LSS[ρ]) if and only if $(Y, K)$ is a robust feasible solution of the uncertain matrix inequality
$$
\left\{ \mathcal{M}[Y, A + BK] \preceq -I \;\big|\; (A, B) \in \mathcal{AB}_\rho \right\}
$$
with box uncertainty. Passing from the variables $Y, K$ to $Y, Z = KY$, we convert this uncertain matrix inequality into an uncertain Linear Matrix Inequality (LMI)

$$
\left\{ \Big[ L_{pq} Y + M_{pq} (AY + BZ) + M_{qp} (AY + BZ)^T \Big]_{1 \le p,q \le k} \preceq -I \right\}
\tag{I[ρ]}
$$
in variables $Y, Z$. Thus, $(Y, K = Z Y^{-1})$ solves (LSS[ρ]) if and only if $Y \succeq I$ and $(Y, Z)$ is a robust feasible solution of the uncertain LMI (I[ρ]) with a box uncertainty. In the notation from Theorem 4.1, $x = (Y, Z)$ and the perturbation matrices $A_\ell[x]$, $\ell = 1, \dots, L$, are

$$
\begin{array}{ll}
C_{ij} \Big[ M_{pq} E^{ij} Y + M_{qp} Y E^{ji} \Big]_{1 \le p,q \le k}, & i, j = 1, \dots, n, \\[1mm]
D_{i\ell} \Big[ M_{pq} F^{i\ell} Z + M_{qp} Z^T (F^{i\ell})^T \Big]_{1 \le p,q \le k}, & i = 1, \dots, n, \; \ell = 1, \dots, m,
\end{array}
\tag{34}
$$
where $E^{ij}$ are the standard $n \times n$ basic matrices ("1 in the cell $ij$, zeros in other cells") and $F^{i\ell}$ are the standard $n \times m$ basic matrices.

The approximate robust counterpart (32) of the uncertain LMI (I[ρ]) is the system of LMI's
$$
\begin{array}{l}
X^{ij} \succeq \pm C_{ij} \Big[ M_{pq} E^{ij} Y + M_{qp} Y E^{ji} \Big]_{1 \le p,q \le k}, \quad i, j = 1, \dots, n, \\[1mm]
Z^{i\ell} \succeq \pm D_{i\ell} \Big[ M_{pq} F^{i\ell} Z + M_{qp} Z^T (F^{i\ell})^T \Big]_{1 \le p,q \le k}, \quad i = 1, \dots, n, \; \ell = 1, \dots, m, \\[1mm]
\rho \Big[ \sum_{i,j} X^{ij} + \sum_{i,\ell} Z^{i\ell} \Big] \preceq -I - \Big[ L_{pq} Y + M_{pq} (A^n Y + B^n Z) + M_{qp} (A^n Y + B^n Z)^T \Big]_{1 \le p,q \le k}, \\[1mm]
Y \succeq I.
\end{array}
\tag{II[ρ]}
$$
Theorem 4.1 implies the following

Corollary 4.1 Let $\mu$ be the maximum, over $Y, Z$ and the indices $i, j, \ell$, of the ranks of the matrices (34). Then:

(i) If the system of LMI's (II[ρ]) is solvable, so is the problem (LSS[ρ]), and every solution $(Y, Z, \dots)$ of the former system yields a solution $(Y, K = Z Y^{-1})$ of the latter problem;

(ii) If the system of LMI's (II[ρ]) is unsolvable, so is the problem (LSS[ϑ(µ)ρ]), cf. (33).

The point is that the parameter $\mu$ in Corollary 4.1 normally is a small integer; specifically, we always have $\mu \le 2k$, where $k$ is the size of the matrices $L, M$ specifying the LMI region $\mathcal{C}$ in question; for the most important cases where $\mathcal{C}$ is, respectively, the open left half-plane and the open unit disk, we have $\mu = 2$, i.e., (II[ρ]) is a $\frac{\pi}{2}$-conservative approximation of (LSS[ρ]). There are many other applications of Theorem 4.1 to systems of LMI's arising in Control and affected by interval data uncertainty. Usually the structure of such a system ensures that when perturbing a single data entry, the right hand side of every LMI is perturbed by a matrix of a small rank, which is exactly the case considered in Theorem 4.1.
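For the open left half-plane we have $f_{\mathcal{C}}(z) = z + \bar{z}$, i.e. $k = 1$, $L_{11} = 0$, $M_{11} = 1$, and the matrix in (!) reduces to $\mathcal{M}[Y, A] = AY + YA^T$. A toy check of the certificate machinery, with made-up matrices:

```python
import numpy as np

A_stable = np.array([[-1.0, 2.0],
                     [0.0, -3.0]])          # eigenvalues -1, -3: stable
Y = np.eye(2)                               # candidate normalized certificate

M_good = A_stable @ Y + Y @ A_stable.T      # M[Y, A] for the stable matrix
good = np.linalg.eigvalsh(M_good).max() <= -1.0   # M[Y, A] <= -I holds

A_unstable = np.array([[1.0, 0.0],
                      [0.0, -3.0]])         # eigenvalue +1: unstable
M_bad = A_unstable @ Y + Y @ A_unstable.T
bad = np.linalg.eigvalsh(M_bad).max()       # positive: Y certifies nothing
```

As expected, $Y = I$ is a valid normalized certificate for the stable matrix and fails for the unstable one.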

4.2 Approximate robust counterpart of an uncertain semidefinite program affected by ball uncertainty

Theorem 4.2 [11] Assume that (30) is affected by ball uncertainty:

$$
\mathcal{U} = \left\{ (A_0, \dots, A_n) = (A_0^n, \dots, A_n^n) + \sum_{\ell=1}^L \zeta_\ell (A_0^\ell, \dots, A_n^\ell) \;\Big|\; \|\zeta\|_2 \le 1 \right\}.
\tag{35}
$$
Then the semidefinite program

$$
\min_{x, F, G} \left\{ c^T x :
\begin{array}{l}
\begin{pmatrix}
G & A_1[x] & A_2[x] & \dots & A_L[x] \\
A_1[x] & F & & & \\
A_2[x] & & F & & \\
\vdots & & & \ddots & \\
A_L[x] & & & & F
\end{pmatrix} \succeq 0, \\[2mm]
F + G \preceq 2 \Big( A_0^n + \sum_{j=1}^n x_j A_j^n \Big)
\end{array}
\right\}
\tag{36}
$$
(cf. (32)) is an approximate robust counterpart of the uncertain semidefinite program (30), (35), and the level of conservativeness $\Omega$ of this approximation does not exceed

$$
\min \left[ \sqrt{M}, \sqrt{L} \right],
$$
where $M$ is the row size of the matrices $A_i$ in (30).

4.3 Uncertain semidefinite programs with tractable robust counterparts

The only known "general" geometry of the uncertainty set $\mathcal{U}$ in (30) which leads to a computationally tractable robust counterpart of (30) is the trivial one, namely, a polytope given as a convex hull of a finite set:

$$
\mathcal{U} = {\rm Conv} \left\{ (A_0^1, A_1^1, \dots, A_n^1), \dots, (A_0^L, A_1^L, \dots, A_n^L) \right\};
$$

in this case, the robust counterpart of (30) merely is the semidefinite program
$$
\min_x \left\{ c^T x : A_0^\ell + \sum_{j=1}^n x_j A_j^\ell \succeq 0, \;\; \ell = 1, \dots, L \right\}.
$$
There are, however, important special cases where the robust counterpart of an uncertain semidefinite program with a "nontrivial" uncertainty set is computationally tractable. Let us look at the two most important examples.
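Incidentally, the reason checking the $L$ vertices suffices is convexity: the LMI left hand side is affine in the data $(A_0, \dots, A_n)$, so positive semidefiniteness at the vertices propagates to the whole convex hull. A toy check with two made-up PSD "vertex" matrices:

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_psd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T                        # PSD by construction

S1, S2 = rand_psd(4), rand_psd(4)         # LMI values at the two vertices
min_eig = min(
    np.linalg.eigvalsh(t * S1 + (1 - t) * S2).min()
    for t in np.linspace(0.0, 1.0, 21)    # sweep the segment between vertices
)
```

Every convex combination remains positive semidefinite, as the affinity argument predicts.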

4.3.1 Example: "Rank 2" ellipsoidal uncertainty and robust truss topology design

Consider a "nominal" (certain) semidefinite problem

$$
\min_x \left\{ c^T x : A^n[x] \equiv A_0^n + \sum_{j=1}^n x_j A_j^n \succeq 0 \right\},
\tag{37}
$$
where the $A_j^n$ are symmetric $M \times M$ matrices. Let $d$ be a fixed nonzero $M$-dimensional vector, and let us call a rank 2 perturbation of $A^n[\cdot]$ associated with $d$ a perturbation of the form
$$
A^n[x] \mapsto A^n[x] + b(x) d^T + d b^T(x),
$$
where $b(x)$ is an affine function of $x$ taking values in $\mathbf{R}^M$. Consider the uncertain semidefinite problem obtained from (37) by all possible rank 2 perturbations associated with a fixed vector $d \ne 0$ and with $b(\cdot)$ varying in a (bounded) ellipsoid:
$$
\min_x \left\{ c^T x : A^n[x] + \Big[ \sum_{\ell=1}^L u_\ell b^\ell(x) \Big] d^T + d \Big[ \sum_{\ell=1}^L u_\ell b^\ell(x) \Big]^T \succeq 0 \;\; \forall u : u^T u \le 1 \right\},
\tag{38}
$$
where the $b^\ell(x)$ are given affine functions of $x$ taking values in $\mathbf{R}^M$.

Proposition 4.1 [[5], Proposition 3.1] The robust counterpart of the uncertain semidefinite problem (38) is equivalent to the following SDP problem:

$$
\min_{x, \lambda} \left\{ c^T x :
\begin{pmatrix}
\lambda I_L & [b^1(x), \dots, b^L(x)]^T \\
[b^1(x), \dots, b^L(x)] & A^n[x] - \lambda d d^T
\end{pmatrix} \succeq 0 \right\},
\tag{39}
$$
where $[b^1(x), \dots, b^L(x)]$ denotes the $M \times L$ matrix with columns $b^\ell(x)$.

Application: Robust Truss Topology Design. The situation described in Proposition 4.1 arises, e.g., in the truss topology design problem. In this problem, we are interested in designing a truss -- a construction comprised of thin elastic bars of a given total weight linked with each other at nodes from a given finite 2D or 3D nodal set -- in such a way that the resulting construction is most rigid w.r.t. a given external load (a collection of forces distributed along the nodes). For a detailed description of the model, see, e.g., [1]. In the Truss Topology Design problem, the "nominal" program (37) can be posed as the following SDP:

$$
\min_{\tau, t} \left\{ \tau :
\begin{array}{l}
\begin{pmatrix} \tau & f^T \\ f & \sum_{i=1}^p t_i A_i \end{pmatrix} \succeq 0, \\[1mm]
{\rm Diag}(t) \succeq 0, \quad \sum_{i=1}^p t_i = 1
\end{array}
\right\};
\tag{40}
$$
here $p$ is the number of tentative bars, the design variables $t_1, \dots, t_p$ are the volumes of these bars, the $A_i \succeq 0$ are matrices readily given by the geometry of the nodal set, $f$ represents the load of interest, and $\tau$ stands for the compliance of the truss w.r.t. $f$ (the lower the compliance, the more rigid the truss when loaded by $f$). In the robust setting of the TTD problem, we treat the load $f$ as the uncertain data which runs through a given ellipsoid

$$
E = \left\{ Q u \mid u \in \mathbf{R}^k, \; u^T u \le 1 \right\}.
$$
This is nothing but a rank 2 perturbation of the nominal problem (40) with properly chosen $b^\ell(x)$ (in fact independent of $x$). The robust counterpart of the resulting uncertain semidefinite problem is the semidefinite program

$$
\min_{\tau, t} \left\{ \tau :
\begin{array}{l}
\begin{pmatrix} \tau I_k & Q^T \\ Q & \sum_{i=1}^p t_i A_i \end{pmatrix} \succeq 0, \\[1mm]
{\rm Diag}(t) \succeq 0, \quad \sum_{i=1}^p t_i = 1
\end{array}
\right\}.
\tag{41}
$$
For a discussion of the role played by the Robust Optimization methodology in the context of Truss Topology Design and for illustrative examples, see [3]; here we present a single example. We want to design a minimum-compliance 2D cantilever arm with the $9 \times 9$ nodal set, boundary conditions and a loading scenario as follows:

[Figure: $9 \times 9$ nodal grid and the load of interest $f^n$; the leftmost nodes are fixed.]

Solving the associated problem (40), we come to the Nominal design, which is completely unstable w.r.t. small occasional loads:

[Figure: Nominal Design, dream and reality. The compliance w.r.t. $f^n$ is 1.000; the compliance w.r.t. a load $f$ with $\|f\|_2 = 0.01 \|f^n\|_2$ is 17 times larger!]

In order to get a "reliable" truss, we reduce the original $9 \times 9$ nodal set to its 12-element subset formed by the nodes which are present in the nominal design, and then replace the load of interest $f^n$ with the uncertainty set $\mathcal{F}$ chosen as the smallest volume ellipsoid of loads spanned by $f^n$ and all "small occasional loads" $f$, $\|f\|_2 \le 0.1 \|f^n\|_2$, distributed along the reduced nodal set. Solving the associated robust counterpart (41), we come to the Robust design as follows:

[Figure: Robust Design, dream and reality. The compliance w.r.t. $f^n$ is 1.0024.]

Design     Compliance w.r.t. $f^n$     Maximum compliance w.r.t. $f \in \mathcal{F}$
Nominal    1.000                       > 3,360
Robust     1.002                       1.003

We see that the truss corresponding to the Robust design is nearly as good w.r.t. the load of interest as the truss obtained from the Nominal design; at the same time, the robust design is incomparably more stable w.r.t. small occasional loads distributed along the nodes of the truss.
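The first LMI in (40) is a Schur-complement restatement of the compliance bound $\tau \ge f^T A(t)^{-1} f$, where $A(t) = \sum_i t_i A_i \succ 0$ is the stiffness matrix. A toy numerical check (the matrix and load below are made up, not an actual truss):

```python
import numpy as np

K = np.diag([2.0, 5.0])                   # stands in for sum_i t_i A_i, K > 0
f = np.array([1.0, 1.0])
compliance = f @ np.linalg.solve(K, f)    # f^T K^{-1} f = 1/2 + 1/5 = 0.7

def lmi_block(tau):
    # the block matrix [[tau, f^T], [f, K]] from (40)
    return np.block([[np.array([[tau]]), f[None, :]],
                     [f[:, None], K]])

feas = np.linalg.eigvalsh(lmi_block(compliance + 0.01)).min()    # > 0: feasible
infeas = np.linalg.eigvalsh(lmi_block(compliance - 0.01)).min()  # < 0: infeasible
```

The LMI is satisfied exactly for $\tau$ at or above the true compliance, matching the Schur complement criterion.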

4.3.2 Example: norm-bounded perturbations and Lyapunov stability analysis

Consider a pair of specific uncertain LMI's

$$
\begin{array}{ll}
(a) & \left\{ R^T Z P \Delta Q + Q^T \Delta^T P^T Z^T R \preceq Y \;\big|\; \|\Delta\|_2 \le \rho \right\}, \\[1mm]
(b) & \left\{ R^T Z P \Delta Q + Q^T \Delta^T P^T Z^T R \preceq Y \;\big|\; \|\Delta\| \le \rho \right\};
\end{array}
\tag{42}
$$

here $Y \in \mathbf{S}^n$ and $Z \in \mathbf{R}^{p \times q}$ are variable matrices, $\Delta \in \mathbf{R}^{\mu \times \nu}$ is a perturbation, $Q \in \mathbf{R}^{\nu \times n}$ and $P \in \mathbf{R}^{q \times \mu}$ are constant matrices, $\|\Delta\|_2 = \sqrt{{\rm Tr}(\Delta \Delta^T)}$ is the Frobenius norm, and $\|\Delta\| = \max \{ \|\Delta x\|_2 : \|x\|_2 \le 1 \}$ is the spectral norm of a matrix.

Proposition 4.2 [10] Let $Q \ne 0$. The sets of robust feasible solutions of the uncertain LMI's (42.a), (42.b) coincide with each other and with the set

$$
\left\{ (Y, Z) \;\Big|\; \exists \lambda \in \mathbf{R} :
\begin{pmatrix}
Y - \lambda Q^T Q & \rho R^T Z P \\
\rho P^T Z^T R & \lambda I
\end{pmatrix} \succeq 0 \right\}.
\tag{43}
$$
Thus, the robust counterparts of both (42.a) and (42.b) are equivalent to the explicit LMI in (43) in the variables $Y, Z, \lambda$.
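A scalar sanity check of Proposition 4.2 (take $n = p = q = \mu = \nu = 1$, so that $Y, Z, P, Q, R, \Delta$ are numbers and the Frobenius and spectral norms coincide). Robust feasibility of (42) then reads $Y \ge 2\rho |RZPQ|$, and (43) becomes a $2 \times 2$ LMI. The numbers below are made up:

```python
import numpy as np

R, Z, P, Q, rho = 1.0, 2.0, 3.0, 0.5, 0.1

Y = 2 * rho * abs(R * Z * P * Q)          # tight robust-feasible choice of Y
lam = rho * abs(R * Z * P) / abs(Q)       # certificate multiplier lambda

M43 = np.array([[Y - lam * Q ** 2, rho * R * Z * P],
                [rho * P * Z * R, lam]])
min_eig = np.linalg.eigvalsh(M43).min()   # ~ 0: (43) holds, with equality here

# direct robust check of (42): Y - 2*R*Z*P*Q*delta >= 0 for all |delta| <= rho
robust_ok = all(Y - 2 * R * Z * P * Q * d >= -1e-12
                for d in np.linspace(-rho, rho, 101))
```

The LMI certificate is tight exactly when the robust constraint is tight, as the proposition predicts.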

As an application example, consider the Lyapunov Stability Analysis problem for an uncertain controlled linear dynamic system

$$
\begin{array}{ll}
\frac{d}{dt} x(t) = A x(t) + B u(t) & \mbox{[state equations]} \\
y(t) = C x(t) & \mbox{[observer]} \\
u(t) = K(t) y(t) & \mbox{[feedback]}
\end{array}
\tag{44}
$$
$$
\Downarrow
$$
$$
\frac{d}{dt} x(t) = [A + B K(t) C] x(t) \quad \mbox{[closed loop system]}
$$
with $K(t)$ varying in the uncertainty set

$$
\mathcal{U} = \left\{ K \mid \|K - K^n\| \le \rho \right\}.
\tag{45}
$$
Our goal is to find, if possible, a quadratic Lyapunov function $L(x) = x^T X x$ for the resulting uncertain closed loop system, i.e., to find a matrix $X \succ 0$ such that
$$
\frac{d}{dt} \left[ x^T(t) X x(t) \right] < 0
$$
for all $t$ and all nonzero trajectories of the closed loop system, whatever is a measurable matrix-valued function $K(t)$ taking values in $\mathcal{U}$.

It is easily seen that a necessary and sufficient condition for a matrix $X \succ 0$ to yield a quadratic Lyapunov function for the system (44)--(45) is for it to satisfy the semi-infinite system of strict matrix inequalities

$$
[A + BKC]^T X + X [A + BKC] \prec 0 \quad \forall K \in \mathcal{U};
\tag{46}
$$
by homogeneity reasons, we lose nothing when restricting ourselves to $X$'s normalized by the requirement $X \succeq I$ and satisfying the following normalized version of (46):
$$
[A + BKC]^T X + X [A + BKC] \preceq -I \quad \forall K \in \mathcal{U}.
\tag{47}
$$
Thus, what we are looking for is a robust feasible solution of the uncertain system of LMI's

$$
\left\{
\begin{array}{l}
X \succeq I, \\
{[A + B(K^n + \Delta)C]}^T X + X [A + B(K^n + \Delta)C] \preceq -I
\end{array}
\;\Big|\; \|\Delta\| \le \rho \right\}
\tag{48}
$$

in the matrix variable $X$. According to Proposition 4.2, the robust counterpart of the uncertain system (48) is equivalent to the explicit system of LMI's

$$
\begin{array}{l}
\begin{pmatrix}
-I - [A + B K^n C]^T X - X [A + B K^n C] - \lambda C^T C & \rho X B \\
\rho B^T X & \lambda I
\end{pmatrix} \succeq 0, \\[1mm]
X \succeq I
\end{array}
\tag{49}
$$
in the variables $X, \lambda$.

Note that what is important in the above construction is that, as a result of the uncertainty affecting (44), the possible values of the matrix $\widetilde{A} = A + BKC$ of the closed loop system form a set of the type $\{ P + Q \Delta R \mid \|\Delta\| \le \rho \}$. Note that we would get drifts of $\widetilde{A}$ of exactly this type when assuming, instead of a norm-bounded perturbation in the feedback, that among the four matrices $A, B, C, K$ participating in (44) three are fixed, and the remaining matrix, let it be called $S$, is affected by a norm-bounded uncertainty, specifically, $S$ runs through the set

$$
\left\{ S^n + U \Delta V \mid \|\Delta\| \le \rho \right\}
$$
or through the set
$$
\left\{ S^n + U \Delta V \mid \|\Delta\|_2 \le \rho \right\}.
$$

References

[1] M.P. Bendsøe, Optimization of Structural Topology, Shape and Material – Springer, Heidelberg, 1995.

[2] A. Ben-Tal, T. Margalit, A. Nemirovski, “Robust modeling of multi-stage portfolio prob- lems”, in: H. Frenk, C. Roos, T. Terlaky, S. Zhang, Eds. High Performance Optimization, Kluwer Academic Publishers, 2000, 303-328.

[3] A. Ben-Tal, A. Nemirovski, “Stable Truss Topology Design via Semidefinite Programming” – SIAM Journal on Optimization 7 (1997), 991-1016.

[4] A. Ben-Tal, A. Nemirovski, “Robust solutions to uncertain linear programs” – OR Letters 25 (1999), 1-13.

[5] A. Ben-Tal, A. Nemirovski, "Robust Convex Optimization" – Mathematics of Operations Research 23 (1998).

[6] A. Ben-Tal, A. Nemirovski, “Robust solutions of Linear Programming problems contami- nated with uncertain data” – Mathematical Programming 88 (2000), 411-424.

[7] A. Ben-Tal, A. Nemirovski, “On tractable approximations of uncertain linear matrix in- equalities affected by interval uncertainty” – SIAM J. on Optimization, 2001, to appear.

[8] A. Ben-Tal, A. Nemirovski, C. Roos, “Robust solutions of uncertain quadratic and conic quadratic problems” – Research Report #2/01, Minerva Optimization Cen- ter, Technion – Israel Institute of Technology, Technion City, Haifa 32000, Israel. http://iew3.technion.ac.il:8080/subhome.phtml?/Home/research

[9] A. Ben-Tal, A. Nemirovski, Lectures on Modern Convex Optimization, SIAM-MPS Series on Optimization, SIAM Publications, Philadelphia, 2001.

[10] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory – volume 15 of Studies in Applied Mathematics, SIAM, Philadelphia, 1994.

[11] A. Ben-Tal, L. El Ghaoui, A. Nemirovski, “Robust Semidefinite Programming” – in: H. Wolkowicz, R. Saigal, L. Vandenberghe, Eds. Handbook on Semidefinite Programming, Kluwer Academic Publishers, 2000.

[12] L. El Ghaoui, H. Lebret, “Robust solutions to least-square problems with uncertain data matrices” – SIAM J. of Matrix Anal. and Appl. 18 (1997), 1035-1064.

[13] L. El Ghaoui, F. Oustry, H. Lebret, “Robust solutions to uncertain semidefinite programs” – SIAM J. on Optimization 9 (1998), 33-52.

[14] L. El Ghaoui, "Inversion error, condition number, and approximate inverses of uncertain matrices" – Linear Algebra and its Applications 342 (2002), to appear.

[15] Nesterov, Yu., and Nemirovskii, A. Interior point polynomial methods in Convex Program- ming – SIAM Series in Applied Mathematics, SIAM: Philadelphia, 1994.

[16] Soyster, A.L., "Convex Programming with Set-Inclusive Constraints and Applications to Inexact Linear Programming" – Operations Research 21 (1973), pp. 1154-1157.
