Local-Global Double Algebras for Slow $H^\infty$ Adaptation, Part II: Optimization of Stable Plants

Le Y. Wang, Member, IEEE, and George Zames, Fellow, IEEE

Manuscript received December 4, 1989; revised June 26, 1990. Paper recommended by Associate Editor J. Hammer. G. Zames is with the Systems and Control Group, Department of Electrical Engineering, McGill University, 3480 University Street, Montreal, P.Q., Canada H3A 2A7. L. Y. Wang is with the Department of Electrical and Computer Engineering, Wayne State University, Detroit, MI 48202. IEEE Log Number 9041414.

Abstract--In this two-part paper, a common algebraic framework is introduced for the frozen-time analysis of stability and $H^\infty$ optimization in slowly time-varying systems, based on the notion of a normed algebra on which local and global products are defined. Relations between local stability, local (near) optimality, local coprime factorization, and global versions of these properties are sought. The framework is valid for time-domain disturbances in $l^\infty$. $H^\infty$ behavior is related to $l^\infty$ input-output behavior via the device of an approximate isometry between frequency- and time-domain norms.

In Part I, some of the main features of normed double algebras were introduced. Part II establishes an explicit formula linking global and local sensitivity for systems with stable plants, where local sensitivity is a Lipschitz-continuous function of data. Frequency-domain estimates of time-domain sensitivity norms, which become accurate as rates of time variation approach zero, are obtained. Notions of adaptive versus nonadaptive (robust) control are introduced, and it is shown that adaptive control can achieve better sensitivity than optimal nonadaptive control. It is demonstrated by an example that, in general, $H^\infty$-optimal interpolants do not depend Lipschitz continuously on data. However, $\delta$-suboptimal AAK interpolants of the central (maximal entropy) type are shown to satisfy a tractable Lipschitz condition.

I. INTRODUCTION

ONE of the aims of this work is to obtain a paradigm of adaptive feedback in the $H^\infty$ context and a comparison of adaptive versus nonadaptive feedback. A prerequisite for such a comparison would appear to be some means of computing optimal or nearly optimal performance under time-varying weightings, in order to determine whether the best that can be achieved with updated information is better than the best that can be achieved without. Frozen-time analysis provides a way of obtaining approximate optimization, which can be used to construct an elementary paradigm. However, as freezing is involved in several operations, including inner-outer factorization and optimal $H^\infty$ interpolation, it can involve some messy bookkeeping. Our formalism seeks to tidy up this process. The same notation will be used here as in Part I.

II. ADAPTIVE DESIGN BY LOCAL INTERPOLATION

The main objective here is to synthesize a global sensitivity from a prescribed local one, which may be locally optimal or nearly optimal, and to determine how well the global (optimum) solution approximates the local (optimum) one. The double-algebra symbolism allows an explicit description of this problem.

In this section, the double algebra will be $E_\sigma$, with local norm $\mu_\sigma(\cdot)$ and global norm $\|\cdot\|_{(\sigma)}$. Suppose that $W_1, W_2 \in E_\sigma$ represent two weightings, and $G \in E_\sigma$ represents a strictly causal plant. It is standard that feedbacks which are globally stabilizing in $E_\sigma$, i.e., maintain all closed-loop operators in $E_\sigma$, can be parametrized by a compensator $Q \in E_\sigma$, which gives a sensitivity $(I - GQ)$ and a weighted sensitivity $S \in E_\sigma$,

    S = W_2 (I - GQ) W_1.                                                      (2.1)

Denote $W_2 G$ by $G_W$, and suppose that it has a local factorization

    G_W = U \otimes G^{out}

where $U$ and $G^{out}$ are locally inner and locally outer in $E_{\sigma_1}$ for some $\sigma_1 > \sigma$, i.e., for each $t \in Z$, $\hat U_t(\sigma_1(\cdot)) \in H^\infty$ is inner and $(\hat G^{out})_t(\sigma_1(\cdot))$ is outer.

We are given a sensitivity $S' \in E_\sigma$ which locally interpolates $W_2 W_1$ at $U$, i.e., for which there exists $Q' \in E_\sigma$ such that

    S' = W_2 W_1 - U \otimes Q'                                                (2.2)

where $S'$ is smaller than $W_2 W_1$ in $\mu_\sigma(\cdot)$. $Q$ is now chosen to locally realize $S'$. The simplest case[1] will be considered here, in which $Q$ is chosen to satisfy

    S' = W_2 W_1 - U \otimes G^{out} \otimes (Q W_1).                          (2.3)

    [1] This case occurs when the global products in (2.2) are accessible to computation. Alternatively, the global products can be replaced by local ones. The ensuing inequalities then involve more terms, generated by the additional local factors, but the essentials remain similar.

This choice of $Q$ can be described as a local product of noncausal operators, provided the domains of definition of the various local functions are first extended to noncausal operators. Previously, these domains included the space $E_\sigma$ of operators with kernels $k \in l^1_\sigma(-\infty,\infty)$ satisfying the causality constraint $k(t,\tau) = 0$ for $t < \tau$.

Definition 2.1: Henceforth, the definitions of local product $K \otimes F$, transform norm $\mu_\sigma(K)$, and rate $d^{(p)}_\sigma(K)$ are extended to operators with (possibly noncausal) kernels $k(t, t-\cdot) \in l^1_\sigma(-\infty,\infty)$. (The definitions as originally stated can be extended intact.)

$Q$ satisfying (2.3) is now explicitly given by

    Q := [(G^{out})^{-1}_{\otimes} \otimes (U^{-1}_{\otimes} \otimes (W_2 W_1 - S'))]\, W_1^{-1}          (2.4)

provided the inverses in (2.4) exist, where $(G^{out})^{-1}_{\otimes}$ may be noncausal with Fourier transform in $L^\infty_\sigma$. The problem is to determine whether (2.4) is stabilizing and makes the (true global) sensitivity $S$ a good approximant to $S'$ for slowly varying $G_W$, $(G^{out})^{-1}_{\otimes}$, and $S'$.
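Assuming the local inverses appearing in (2.4) exist (cf. Assumption 2.1 below), a short local-product computation confirms that this $Q$ realizes (2.3): since $Q W_1 = (G^{out})^{-1}_{\otimes} \otimes (U^{-1}_{\otimes} \otimes (W_2W_1 - S'))$,

    U \otimes G^{out} \otimes (Q W_1)
        = U \otimes G^{out} \otimes (G^{out})^{-1}_{\otimes} \otimes U^{-1}_{\otimes} \otimes (W_2W_1 - S')
        = W_2W_1 - S'

because frozen transforms multiply under the local product, so each frozen factor cancels against its frozen inverse at every $t \in Z$.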

Assumption 2.1 (for Theorem 2.1): ($2 \le p \le \infty$, $\sigma > 1$)
a) $W_2 W_1$ and $W_1^{-1}$ are in $E_\sigma$;
b) $(G^{out})^{-1}_{\otimes}$ is in $E_\sigma$.

Sufficient conditions for the local interpolation (2.2) and assumption b) to hold are that there exists $\sigma_1 > \sigma$ for which
a') for each $t \in Z$, the zero vectors of $\hat U_t(z)$ in $|z| \le \sigma_1$ contain the zero vectors of $(\hat G_W)_t(z)$ (where $\zeta$ is a zero vector of $\hat K(z)$ at $z$ if $\hat K(z)\zeta = 0$);
b') $|\hat U_t(z)^{-1}|$ and $|\hat G^{out}_t(z)^{-1}|$ are bounded in an annulus $\sigma \le |z| \le \sigma_1$, uniformly in $t$ and $z$.

It will be recalled that the constants $\kappa_{(\sigma)}(\cdot)$ and the quantity $\delta(p,\sigma)$ appearing in the bounds below were defined in Part I.

Theorem 2.1: $Q$ defined by (2.4) stabilizes $G$ in $E_\sigma$. If $G_W$, $(G^{out})^{-1}_{\otimes}$, and $S'$ are slowly varying, then the weighted sensitivity $S \in E_\sigma$ is explicitly given by

    S = W_2 W_1 - G_W\bigl((G^{out})^{-1}_{\otimes} \otimes (U^{-1}_{\otimes} \otimes F)\bigr)

where $F := W_2 W_1 - S'$ and $G_W = U \otimes G^{out}$. Moreover, the difference $\|S\|_{(\sigma;t)} - \mu_\sigma(\hat S'_t) =: \Delta(t)$ has an upper bound (2.7) expressed in terms of the constants of Part I and the local rates of variation of $(G^{out})^{-1}_{\otimes}$ and $F$, and $\|S\|_{(\sigma;t)}$ also has a corresponding lower bound (2.8a), where

    d^{(p)}_\sigma(F) \le d^{(p)}_\sigma(W_2 W_1) + d^{(p)}_\sigma(S').          (2.8b)

Remark 2.1: It follows from (2.7) that in a variable-rate situation (see Section III-D of Part I), if the rates approach 0, the frequency-domain norm $\mu_\sigma(\hat S'_t)$ of the local sensitivity $\hat S'_t$ approaches an upper bound on the time-domain norm $\|S\|_{(\sigma)}$ of the global sensitivity $S$. If $S'$ actually satisfies the uniform radial growth condition defined in Section III-C of Part I, then it follows from (2.7) that $\Delta(t)$ is bounded by an expression (2.9) involving only the rates $d^{(p)}_\sigma(\hat S')$, $d^{(p)}_\sigma(\hat W)$, and the constant $\delta(p,\sigma)$; if, in addition, $\sigma \to 1$, the frequency- and time-domain norms mentioned at the beginning of this remark approach each other by (2.9).

Proof of Theorem 2.1: As $Q$ satisfies (2.4), it has the form

    Q = \{(G^{out})^{-1}_{\otimes} \otimes (U^{-1}_{\otimes} \otimes (W_2 W_1 - S'))\}\, W_1^{-1}.

Since $(G^{out})^{-1}_{\otimes}$, $U^{-1}_{\otimes} \otimes (W_2 W_1 - S')$, and $W_1^{-1}$ are in $E_\sigma$ by assumptions a) and b) and the local interpolation condition (2.2), and $E_\sigma$ is closed under the $\otimes$ and ordinary products, $Q$ is in $E_\sigma$, i.e., $Q$ stabilizes $G$ in $E_\sigma$.

The bounds (2.7) and (2.8) are obtained from the inequalities (3.9) and (3.14) of Part I, respectively, applied to the expressions $S' = W_2 W_1 - G_W \otimes ((G^{out})^{-1}_{\otimes} \otimes (U^{-1}_{\otimes} \otimes F))$ and $S = W_2 W_1 - G_W((G^{out})^{-1}_{\otimes} \otimes (U^{-1}_{\otimes} \otimes F))$, where the identifications $G \to G_W$, $K \to (G^{out})^{-1}_{\otimes} \otimes (U^{-1}_{\otimes} \otimes F)$, and $F \to W_2 W_1$ are made.

A. Local $H^\infty$ Adaptive Optimization

A natural idea for adaptive compensation is to make the weighted sensitivity $S_t$ at time $t \in Z$ depend on the local behavior of the weighted plant inner and outer factors $\hat U_t$, $\hat G^{out}_t$, and the weighting $\hat W_t := (\hat{W_2 W_1})_t$.[2] That triple of local operators constitutes the data available at $t$ about the plant, in the form of a nominal model and a band of uncertainty. The data may be available a priori, acquired through identification, or a combination of both. In frozen-time adaptive design the controller generates a local approximation $\hat S'_t$ to $\hat S_t$ based on that data. The adaptation law can be represented by a map $\Psi^t$,

    \hat S'_t = \Psi^t(\hat U_t, \hat W_t),   \sigma_0 > \sigma.

    [2] For simplicity, assume that $S'$ does not depend on $W_2$, $W_1$ separately.

Theorem 2.1 provides a basis for frozen-time adaptation provided $S'$ varies slowly whenever the data varies slowly. A sufficient condition for this is that $\Psi^t$ be Lipschitz in its variables, i.e., that there be constants $\lambda^{(p)}_U$, $\lambda^{(p)}_W$ for which the inequality

    \|\hat S'_t - \hat S'_{t-1}\|_{H^p_{\sigma_0}} \le \lambda^{(p)}_U\,\|\hat U_t - \hat U_{t-1}\| + \lambda^{(p)}_W\,\|\hat W_t - \hat W_{t-1}\|

holds, with $\|\cdot\|$ representing $\|\cdot\|_{L^p_{\sigma_0}}$, or, in more compact matrix notation,

    d_{\sigma_0}([\hat S'_t]) \le \Lambda^{(p)}\, d_{\sigma_0}([\hat U_t;\ \hat W_t])          (2.10)

where $d_{\sigma_0}([\hat\theta_t])$ means $[d_{\sigma_0}(\hat\theta_t)]$ and $\Lambda^{(p)}$ is the row matrix $[\lambda^{(p)}_U,\ \lambda^{(p)}_W]$ of Lipschitz constants. In the case of variable rates, it will be assumed that the Lipschitz constants hold independently of rate.

One can try to design $S'$ by local $H^\infty_{\sigma_0}$ optimization, which gives an optimal weighted sensitivity $\hat S'^{opt}_t$ satisfying

    \mu_{\sigma_0}(\hat S'^{opt}_t) = \inf_{\hat Q \in H^\infty_{\sigma_0}} \|\hat W_t - \hat U_t \hat Q\|_{H^\infty_{\sigma_0}}.

In the case in which $\hat S'^{opt}_t$ is a Lipschitz function of the data, Theorem 2.1 gives a global sensitivity $S$ which approximates the local optimum. These cases are illustrated by the following example and discussion of robust versus adaptive design.

III. ROBUST VERSUS ADAPTIVE SENSITIVITY MINIMIZATION

Metric information about uncertain perturbations or disturbances is represented by a weighting operator $W \in E_{\sigma_0}$, $\sigma_0 > 1$. At time $t$, disturbances are assumed to lie in the image under $\hat W_t$ of the unit ball of $l^\infty(-\infty, t)$ in the case of noise (or of $H^\infty_{\sigma_0}$ in the case of transfer-function uncertainty). The smaller the weighting, the more tightly the uncertainty is confined in $l^\infty$ (or $H^\infty_{\sigma_0}$) at time $t$, and therefore the greater the information pertaining to that time. (Information can be measured by $\epsilon$-entropy or $\epsilon$-dimension [24]; although quantitative information measures will not be estimated here, we note that they depend monotonely on the weighting.)

We distinguish a priori information at some starting time $t_0$, and a posteriori information at time $\tau \ge t_0$, represented by operators $W^{t_0}$ and $W^\tau$. The difference between $W^{t_0}$ and $W^\tau$ represents a reduction of uncertainty, or acquisition of information, in the interval $[t_0, \tau]$, and this reduction is reflected in a shrinkage of the weighting,

    |(\hat W^\tau)_t(z)| \le |(\hat W^{t_0})_t(z)|

with strict inequality for at least some $t \ge \tau$ and $z$ in some subset of the circle $|z| = \sigma_0$ of nonzero length. A sensitivity reduction scheme will be called robust or adaptive if based on a priori or a posteriori information, respectively. A controller which achieves a sensitivity better than an optimal robust one is necessarily adaptive, and the question arises of how much advantage the adaptation provides. For slowly time-varying systems, this can be answered independently of how the information was obtained.

Example 3.1: We will introduce a family of "narrow band" disturbance weighting functions whose center frequencies become known with increasing accuracy, and whose envelope is easy to compute.

Let $f(\cdot): [0, \pi] \to R$ be a differentiable monotone decreasing function satisfying $f(0) = 1$ and $f(\theta) = \zeta$ for $\theta \ge \alpha/2$, where $0 < \zeta \ll 1$ and $0 < \alpha \ll 1$ are constants; $f(\cdot)$ will be fixed. (See Fig. 1.) Let $\sigma_0 > 1$ be fixed. A narrowband weighting $\hat V_{(\beta,\theta_0)} \in H^\infty_{\sigma_0}$ with center $\theta_0$, $-\pi \le \theta_0 \le \pi$, is a function such that $\hat V_{(\beta,\theta_0)}(\sigma_0(\cdot))$ is outer in $H^\infty$, defined in terms of its boundary magnitude through the profile $f$, the center $\theta_0$, and the bandwidth parameter $\beta$.

Narrowband disturbances with uncertain center frequencies will be represented as elements of a family $\mathscr{V}(\beta, c)$ of such narrowband weightings. The center frequencies lie in an interval with midpoint $c$ and width $\beta$; $\beta$ is a measure of the uncertainty about the center frequencies. Let $\hat V_{(\beta,c)} \in H^\infty_{\sigma_0}$ denote the envelope weighting of the family, with $\hat V_{(\beta,c)}(\sigma_0(\cdot))$ outer in $H^\infty$ and satisfying

    |\hat V_{(\beta,c)}(\sigma_0 e^{i\theta})| = \sup_{V \in \mathscr{V}(\beta,c)} |\hat V(\sigma_0 e^{i\theta})|,   -\pi \le \theta \le \pi.

A priori information about the disturbances is that they belong to the family $\mathscr{V}(\beta_0, c_0)$. (The a priori weighting is assumed to be time invariant.) In an interval $[0, t]$, additional information is received about the disturbances and results in a shrinkage of the a posteriori uncertainty about the center frequency parameter, i.e., $\beta_t$ is monotone decreasing as $t \to \infty$.

Sensitivity is to be minimized for a SISO time-invariant plant $G \in E_{\sigma_0}$ whose inner part consists of one zero at the origin. A robust optimization of the worst-case sensitivity, based on the a priori envelope, achieves (3.1); an adaptive local optimization of the worst-case sensitivity, based on the a posteriori envelope and on Theorem 2.1, achieves

    \mu_{\sigma_0}[(\hat S'^{opt})_t] = \inf_{\hat Q_t} \|\hat W_t - \hat U_t \hat Q_t\|_{H^\infty_{\sigma_0}} = \hat W_t(0)          (3.2)

and the resulting (global) adaptive sensitivity $S_{adpt}$ is the one obtained from this local design via (2.3)-(2.4).

The constants in (3.1)-(3.2) can be expressed in terms of the logarithmic bandwidth $\bar Y(t)$ of the envelope at time $t$, defined by

    \log \bar Y(t) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \log |\hat V_{(\beta_t,c_t)}(\sigma_0 e^{i\theta})|\, d\theta.
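Equation (3.2) rests on a standard fact: when the plant's inner part is a single zero at the origin, the local optimum is attained by the constant $\hat W_t(0)$, and for an outer $\hat W_t$ this value equals the geometric mean of the boundary magnitude, which is exactly the logarithmic-bandwidth quantity just defined. The following numerical sketch evaluates that geometric mean for an illustrative narrowband magnitude profile; the profile and its parameters are assumptions made only for the illustration and are not the $f$ of Fig. 1.

    import numpy as np

    def log_bandwidth(mag, n=4096):
        # Geometric mean of a boundary magnitude |W(sigma0*e^{i*theta})|:
        #   log Ybar = (1/2pi) * integral of log|W(sigma0*e^{i*theta})| dtheta.
        # For an outer W this equals |W(0)|, i.e. the local optimum (3.2) for a
        # plant whose inner part is a single zero at the origin.
        theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
        return float(np.exp(np.mean(np.log(mag(theta)))))

    def narrowband(beta, c, zeta=1e-2):
        # Illustrative narrowband magnitude (an assumption, not the paper's f):
        # equal to 1 on a band of half-width beta around the center c, and to
        # the floor value zeta elsewhere.
        def mag(theta):
            dist = np.abs((theta - c + np.pi) % (2 * np.pi) - np.pi)
            return np.where(dist <= beta, 1.0, zeta)
        return mag

    for beta in [1.0, 0.5, 0.25, 0.1]:      # shrinking a posteriori uncertainty
        print(beta, log_bandwidth(narrowband(beta, c=0.8)))
    # Ybar shrinks with beta: the adaptive local optimum (3.5) improves, while
    # the robust value stays pinned at the a priori envelope's Ybar(0).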

From the assumption that $f(\theta) = \zeta$ for $\theta \ge \alpha/2$, and the fact that $|\hat V_{(\beta_0,c_0)}(\cdot)|$ is a widening of $|\hat V_{(\beta_t,c_t)}(\cdot)|$, the logarithmic bandwidths at times 0 and $t$ are related by

    \bar Y_{(\beta_t,c_t)}(t) = \bar Y(0)\,\zeta^{(\beta_0 - \beta_t)}.          (3.3)

Let us evaluate the recent-past norms $\|\cdot\|_{\sigma(\infty;t)}$ of the sensitivity for the robust and adaptive controllers. In the robust case,

    \mu_1(\hat S_{rbst}) \le \|S_{rbst}\|_{\sigma(\infty)} \le \mu_{\sigma_0}(\hat S_{rbst}) = \bar Y(0).

In this simple example the $\mu_{\sigma_0}(\hat S_{rbst})$ norm is independent of $\sigma$, and we get, from (3.1) and (3.3),

    \|S_{rbst}\|_{(\infty)} = \bar Y(0).                                         (3.4)

In the adaptive case, (3.2) and (3.3) give

    \mu_{\sigma_0}[(\hat S'^{opt})_t] = \bar Y(0)\,\zeta^{(\beta_0 - \beta_t)}.    (3.5)

Suppose now that $\beta_t$ and $c_t$ change slowly, $|\beta_t - \beta_{t-1}| \le \rho_\beta$, $|c_t - c_{t-1}| \le \rho_c$, and $|d\hat W_t/d\theta| \le \rho_f$. The rates of $\hat W$ and $\hat S'^{opt}$ are then bounded by

    d_{\sigma_0}(\hat S'^{opt}) \le d_{\sigma_0}(\hat W) \le \rho := \rho_f(\rho_\beta + \rho_c).

As $\hat S'^{opt}_t$ depends Lipschitz continuously ($L^\infty \to L^\infty$) on $\hat W_t$, the rate of the local optimum sensitivity becomes small as $\rho \to 0$, and we can base our solution on it. To evaluate the upper bound (2.7) of Theorem 2.1, we note that here $\hat G_W(z) = \hat U(z) = \sigma_0^{-1} z$, so that $\mu_{\sigma_0}(G_W) = 1$ and

    d_{\sigma_0}(F) \le d_{\sigma_0}(\hat W) + d_{\sigma_0}(\hat S'^{opt}) \le 2\rho

which, substituted into (2.7), gives a bound of the form

    \|S_{adpt}\|_{\sigma_0(\infty;t)} \le \mu_{\sigma_0}(\hat S'^{opt}_t) + \mathrm{const}\cdot\rho.          (3.6)

In the limit of slow time variation, as $\rho \to 0$, these bounds show that adaptive sensitivity is better than optimal robust sensitivity by a factor $\zeta^{(\beta_0 - \beta_t)}$, where $(\beta_0 - \beta_t)$ is the reduction in log bandwidth of the disturbance weighting resulting from the extra information about disturbances acquired in the intervening interval $[0, t]$.

Remark 3.1: It should be emphasized that the point of this example is not the trivial one that the smaller the weighting, the smaller the time-invariant sensitivity. Rather, it is that if a way is found of computing sensitivity optimally under time-varying weightings, then that opens up a means of determining whether a black box is adaptive or merely robust.

IV. $\delta$-SUBOPTIMAL SENSITIVITY

However, it will be shown in Section V that the optimal local sensitivity $\hat S'^{opt}_t$ is not always a Lipschitz function of the data, and is therefore not always a suitable candidate for frozen-time design. On the other hand, it will be shown that the AAK maximal-entropy (local) interpolant, selected to be $\delta$-suboptimal at each $t \in Z$, $\delta > 0$, produces a local sensitivity $\hat S'_{(\sigma_0,\delta)}$ which is $\delta$-suboptimal in the sense that

    \mu_{\sigma_0}(\hat S'_{(\sigma_0,\delta)}) \le \mu_{\sigma_0}(\hat S'^{opt}(\sigma_0)) + \delta          (4.1)

and satisfies a Lipschitz continuity condition of the form (2.10) (see Corollary 5.2).

Subject to the Lipschitz condition (2.10), the global sensitivity $S_{(\sigma_0,\delta,\rho)}$ achieved using such a locally $\delta$-suboptimal adaptation scheme (specified by (2.3), (2.1)) is $\delta'$-suboptimal in the following sense. Denote $[\hat S'^{opt}(\sigma_0)]_t$ by $\hat S'^{opt}(\sigma_0;t)$, let $\sigma_0 > \sigma \ge 1$, and suppose that Assumption 2.1 holds.

Corollary 4.1: a) Given any $\delta' > \delta$, $S_{(\sigma_0,\delta,\rho)}$ has the bound

    \|S_{(\sigma_0,\delta,\rho)}\|_{\sigma(\infty;t)} \le \mu_{\sigma_0}(\hat S'^{opt}(\sigma_0;t)) + \delta'          (4.2)

provided the variation rates of $U$ and $W$ satisfy the inequality

    C^{(p)}_{\sigma_0}\, d^{(p)}_{\sigma_0}([\hat U;\ \hat W]) \le \delta' - \delta          (4.3)

where $C^{(p)}_{\sigma_0}$ is a row matrix of constants determined by $\kappa_{(\sigma_0)}$, $\mu_{\sigma_0}(\hat U)$, $\mu_{\sigma_0}(\hat W)$, and the Lipschitz constants $\Lambda^{(p)}$ of (2.10). b) If the data operators $U$, $W$ have variable rates approaching 0, $d^{(p)}_{\sigma_0}(\cdot) \le \rho$ with $\rho \to 0$, then

    \limsup_{\rho \to 0}\ \|S_{(\sigma_0,\delta,\rho)}\|_{\sigma(\infty;t)} \le \mu_{\sigma_0}(\hat S'^{opt}(\sigma_0;t)) + \delta.          (4.4)

Proof: a) (4.2) and (4.3) are obtained from Theorem 2.1 by substituting (2.10) for $d^{(p)}_\sigma(\hat S')$ into (2.7), and noting that $\mu_\sigma(\cdot)$ and $d^{(p)}_\sigma(\cdot)$ are monotone increasing in $\sigma$ for $1 \le \sigma \le \sigma_0$. b) If $U$ and $W$ have variable rates approaching 0 then, as the Lipschitz constants do not depend on $\rho$, $\hat S'_{(\sigma_0,\delta)}$ has a variable rate approaching 0, and $\mu_{\sigma_0}(\hat S'_{(\sigma_0,\delta)}) \le \mu_{\sigma_0}(\hat S'^{opt}(\sigma_0)) + \delta$ by hypothesis; (4.4) follows by taking the indicated limits in (2.7).

Furthermore, since $\mu_1(\hat S'^{opt}(1;t)) \le \mu_1(\hat S'^{opt}(\sigma_0;t))$ by optimality, after taking the limits $\rho \to 0$ and $\delta \to 0$ in (2.8) and (4.1) we obtain a matching lower bound, and the remaining question is whether

    \lim_{\sigma_0 \to 1}\ \mu_{\sigma_0}(\hat S'^{opt}(\sigma_0;t)) = \mu_1(\hat S'^{opt}(1;t)).          (4.5)

To emphasize the dependence of the inner-outer factorization on $\sigma$ in the rest of the proof, write $U_{(\sigma;t)}$ for $\hat U_t$ and $G^{out}_{(\sigma;t)}$ for $\hat G^{out}_t$, i.e., in this notation $(\hat G_W)_t = U_{(\sigma;t)} G^{out}_{(\sigma;t)}$, where $\sigma$ is a variable. It will be shown in the following that $\mu_{\sigma_0}(\hat S'^{opt}(\sigma_0;t))$, viewed as a function of $\sigma$, satisfies the Lipschitz condition

able, and (a , * ) is an interpolation rule which assigns inter- 5 Puo( wPl(fi(uo;t)(UOC)) - 4;I)(.)) Y polants to data pairs, the interpolation rule will be called Lips- + Pl(W~O(9)- W>)* (44 chitz continuous (L" --* L2) if there are constants ~$20, It can be shown (see [17]) that for G, satisfying Assumption 7; 2 0, for which any two assignments Si = Y(Vi, q), i = 1.2, satisfy the inequalities 2.1, there exists a unitary matrix B E Q"'" such that IIS2 - = r"vIV2 - VIIIo. + rEIIU2 - UlIIm (5.2) Pl(quo;r)(~o(*>P- ql;t)(*)) or, in incremental notation which will be used in the proofs = const. ll(~w)t(uo(*)) - (~w),(*)ll(l)*(4.7) lIASII2 5 r"vlAJ'llm + ~EllAullm* Since poo(i?&(uo;t,) is not affected by the multiplication of the inner factor U(uo;t,(uo( -)) by a unitary constant matrix B, we The rule will be said to yield Lipschitz continuous norm if for obtain from (4.6) and (4.7) that constants 7; 2 0, 7; 2 0

I lls2llm - IIS1IImI r;II V2 - VlIIm + +Y;IIY - u1llm.

(5.3)\I 5 const.lI(Gw)t(uo(.)) - (GW)t(9Il(l) Now consider the optimal interpolation rule (V, U) -+ Sop, + Pl(WO(.)) - w>>* (4.8) where IISopIIm = inf I1 V- UQIlm =: PO. It is easy to show that under Assumption 2.1 w II(Gw)t(~o(.)) - (~W)k)Il(l)-+o Let I&: l:+ 1: (where 1: := 12[0, 00)) denote the Hankel operator with symbol U*V. By Nehari's theorem, po = Pl(*(UO(.)) - jw)) I1 ru*vII * Proposition 5.1: The norm of the optimal interpolant de- as U, + 0, which implies pends lipschitz continuously on the data ( V,U); indeed, (5.3) is

Puo~~&4(uo;t))+ Pl(~&t(l;tJ satisfied with by constants 7; = 1, 7; = mini=1,2(1yllm. Proof: Write ri := I'urvi. By Nehari's theorem and (4.5) follows. It remains onlv to establish (4.6). This will be done in I llS2llm - IISi llml = 1 Ilr211 - IIrill 1 5 IIr2 - rill. Proposition 5.1 of the next section. Inequality (4.9 will follow As l', is unitarily equivalent to a projection of K, r, is linear from (5.9 with the followins substitution: U, t,( uo(-)), in K and III',II 5 (1 K 11". Therefore V2 = WAS), S, = 0 IIF2 - FlII 5 IIUi?V* - ~WlII" the global time-domain 5 IIUZ - u?Ilmll V2llm + IIu?IIcelI V2 - V111m. sensitivity (under persistent disturban$es) 1) S(uo, p) 11 a(u; ,) is at most 6'4nferior to the optimum ppo(S~p(uo;t,) achieved by local Without loss of generality, we can assume V2 to be the smaller frequency domain interpolation; it is &inferior in the limit of of 5, i = 1,2,which implies (5.3) with 7; = 11 V211m = zero variation and, since 6 is arbitrary, actually approaches the ~ni=l,2llV,IIm,and~;=llU?l[rn = 1. 0 pl norm of the W optimum as U, + 1. So long as that norm However, the optimal interpolant itself does not depend Lips- depends continuously on U, near U, = 1, i.e., chitz continuously on the data, and in fact, can have an infinite modulus of continuity, as the following counterexample shows. Example 5.1: Consider the problem of optimally interpolat- ing (W,, U) in H", where U E H" is fixed, U(z) this means that in a certain sense the best that can be achieved PI-z 02-2 with slowly varying data and slowly varying compensators is ---- 0 < 0, < P2 < 1, (i = 1,2), and W,E BIZ- 1 P2z- 1' near the time-invariant best, but does not preclude the possibility H" is variable depending on a parameter o > 0, W,(eje) that quickly varying compensators might do better still. = W,(e-je). By the Nevanlinna-Pick theory, the optimal in- V. CONTINUITYPROPERTIES OF H" INTERPOLANTS terpolant of ( W,, U) has the form The continuity properties used in local interpolation will be developed in this section. As the normed spaces H," and H" are isometrically related by the change of variable z o UZ, there is no loss of generality in considering H" only. Throughout where S, satisfies the interpolation constraints S,( Pi) = W,( Pi), Section V, 1) 11 Lp will be abbreviated to 1) * 11 p, for p = 2, 00, i = 1,2. Consider any W, for which the ratio W,(P2)/ W,(&) and the unsubscripted norm symbol 1) 1) will denote the opera- =: p, approaches 1 as w + 0, and which satisfies the inequality tor norm of an operator between two Hilbert spaces. SEL" will be said to interpolate a pair of data functions (V,U), VEL", UEH", U inner, if there exists QEH" 148 WEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 36, NO. 2, FEBRUARY 1991
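Proposition 5.1 lends itself to a direct numerical check: the Hankel operator $\Gamma_V$ has matrix $[v_{j+k-1}]$ in the negative Fourier coefficients of $V$, and its largest singular value gives the optimal norm $\rho_0$. The sketch below truncates that matrix for two illustrative scalar symbols (chosen only for the demonstration, with $U = I$) and compares the change in optimal norm with $\|V_2 - V_1\|_\infty$.

    import numpy as np

    def nehari_norm(V, m=64, N=2048):
        # ||Gamma_V|| = dist(V, H^inf) (Nehari), approximated by the largest
        # singular value of the m-by-m truncated Hankel matrix [v_{j+k-1}]
        # built from the negative Fourier coefficients v_k of V.
        theta = 2 * np.pi * np.arange(N) / N
        c = np.fft.fft(V(np.exp(1j * theta))) / N       # c[k] ~ k-th Fourier coefficient
        v = np.array([c[-k] for k in range(1, 2 * m)])  # v_k = coefficient of z^{-k}
        H = np.array([[v[j + k] for k in range(m)] for j in range(m)])
        return np.linalg.svd(H, compute_uv=False)[0]

    def sup_norm(F, N=2048):
        z = np.exp(2j * np.pi * np.arange(N) / N)
        return np.max(np.abs(F(z)))

    # Illustrative symbols (assumed for the demonstration only).
    V1 = lambda z: 2.0 / z + 0.5 / z**2 + 0.3 * z
    V2 = lambda z: 2.1 / z + 0.45 / z**2 + 0.3 * z
    # Proposition 5.1 with U = I: the optimal norm is 1-Lipschitz in the data V.
    print(abs(nehari_norm(V2) - nehari_norm(V1)), "<=",
          sup_norm(lambda z: V2(z) - V1(z)))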

However, the optimal interpolant itself does not depend Lipschitz continuously on the data and, in fact, can have an infinite modulus of continuity, as the following counterexample shows.

Example 5.1: Consider the problem of optimally interpolating $(W_\omega, U)$ in $H^\infty$, where $U \in H^\infty$ is fixed,

    U(z) = \frac{\beta_1 - z}{\beta_1 z - 1}\cdot\frac{\beta_2 - z}{\beta_2 z - 1},   0 < \beta_1 < \beta_2 < 1,

and $W_\omega \in H^\infty$ is variable, depending on a parameter $\omega > 0$, with $W_\omega(e^{j\theta}) = W_\omega(e^{-j\theta})$. By the Nevanlinna-Pick theory, the optimal interpolant of $(W_\omega, U)$ has the form

    S_\omega(z) = \rho\,\frac{a - z}{a z - 1}

where $S_\omega$ satisfies the interpolation constraints $S_\omega(\beta_i) = W_\omega(\beta_i)$, $i = 1, 2$. Consider any $W_\omega$ for which the ratio $W_\omega(\beta_2)/W_\omega(\beta_1)$ approaches 1 as $\omega \to 0$; for example, $W_\omega := 1 + \omega W'$, where $W' \in H^\infty$, $\|W'\|_\infty < 1$, $W'(\beta_1) = 0$, $W'(\beta_2) > 0$, has this property. Let $\|dW_\omega\|$ denote $\|W'\,d\omega\|_\infty$. We will show that as $\omega \to 0$, $\|dS_\omega\|/\|dW_\omega\| \to \infty$, implying that the optimal interpolant has an infinite modulus of continuity as a function of $W$ and is certainly not Lipschitz.

Differentiating the optimal interpolant, we get

    \frac{dS}{\|dW\|_\infty} = \rho\,\frac{z^2 - 1}{(a z - 1)^2}\,\frac{da}{\|dW\|_\infty} + \frac{a - z}{a z - 1}\,\frac{d\rho}{\|dW\|_\infty}          (5.4)

where $S$, $W$, $a$, $\rho$ all depend on $\omega$. The term proportional to $d\rho$ has magnitude at most 1 for $|z| \le 1$ by Proposition 5.1, so it is enough to establish the unboundedness of the term proportional to $da$. Now $\omega \to 0$ implies that $W_\omega(\beta_2)/W_\omega(\beta_1) \to 1$ which, it is not hard to show, implies that $a \to -1$ from the right, with $|da/d\omega|$ tending to a positive limit proportional to $(\beta_2 - \beta_1)\,W'(\beta_2) > 0$. Therefore, for $\omega$ small enough, $da/\|dW\|$ ($= \|W'\|_\infty^{-1}\,da/d\omega$) is bounded away from zero. Contour integration now gives

    \|dS\|_2 / \|dW\|_\infty \ge C_1(1 - a^2)^{-1/2} + C_2

(where $C_1$, $C_2$ are constants), which grows without bound as $a \to -1$, and therefore as $\omega \to 0$.

This means that the optimal interpolant is not a suitable candidate for the local interpolation outlined in Section II. Instead, we turn to a $\delta$-suboptimal interpolant based on the AAK parametrization, which has the requisite Lipschitz continuity.
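The blow-up can be seen numerically. In the sketch below the specific values $\beta_1 = 0.3$, $\beta_2 = 0.6$ and the particular $W'$ are assumptions made only for the illustration; the two-point optimum is computed in the standard Nevanlinna-Pick form $\rho\,(z-a)/(1-az)$, and as $\omega \to 0$ the zero $a$ migrates to the boundary, so the incremental ratio $\|\Delta S_\omega\|_\infty/\|\Delta W_\omega\|_\infty$ grows without bound even though the optimal norm itself stays Lipschitz in the data (Proposition 5.1).

    import numpy as np
    from scipy.optimize import brentq

    b1, b2 = 0.3, 0.6                    # interpolation points (assumed values)
    Wprime = lambda z: 0.4 * (z - b1)    # W' with W'(b1) = 0, W'(b2) > 0, ||W'|| < 1
    W = lambda z, w: 1.0 + w * Wprime(z)

    def optimal_interpolant(w):
        # Two-point Nevanlinna-Pick optimum for real data 0 < W(b1) < W(b2):
        # S(z) = rho*(z - a)/(1 - a*z) with a in (-1, b1) fixed by the ratio
        # S(b1)/S(b2) = W(b1)/W(b2); rho is then the optimal norm.
        w1, w2 = W(b1, w), W(b2, w)
        g = lambda a: (b1 - a) * (1 - a * b2) / ((b2 - a) * (1 - a * b1)) - w1 / w2
        a = brentq(g, -1.0 + 1e-12, b1 - 1e-12)
        rho = w1 * (1 - a * b1) / (b1 - a)
        return (lambda z: rho * (z - a) / (1 - a * z)), a

    z = np.exp(1j * np.linspace(-np.pi, np.pi, 2001))
    for w in [0.5, 0.1, 0.02, 0.004]:
        S1, a = optimal_interpolant(w)
        S2, _ = optimal_interpolant(w / 2)
        dS = np.max(np.abs(S1(z) - S2(z)))
        dW = np.max(np.abs(W(z, w) - W(z, w / 2)))
        print(f"omega={w:<6}  a={a:+.4f}  ||dS||/||dW|| = {dS / dW:.1f}")
    # a tends to -1 and the incremental ratio grows without bound as omega -> 0,
    # while the change in the optimal norm remains bounded by ||dW||.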

A. Lipschitz Continuity of AAK Central Interpolants

For $V \in L^\infty$, consider $S \in L^\infty$ which interpolates $(V, I)$ and satisfies

    \|S\|_\infty \le \rho.          (5.6)

(Eventually, $V$ will be identified with the $U^*W$ of Section IV, and $US$ with the resulting sensitivity.) Let $\Gamma_V: l^2_+ \to l^2_+$ be the Hankel operator with symbol $V$. By Nehari's theorem, interpolants satisfying (5.6) exist if $\rho \ge \|\Gamma_V\|$, and they will be called $\delta$-suboptimal if $\rho - \|\Gamma_V\| \le \delta$, $\delta > 0$. The AAK parametrization provides two pairs of functions in $H^2$, $P_\pm$ and $Q_\pm$, in terms of which every interpolant satisfying (5.6) can be written

    S(z) = \rho\,(P_-(z) + Q_-(z)E(z))\,(P_+(z) + Q_+(z)E(z))^{-1},   |z| = 1          (5.7)

for some $E \in H^\infty$, $\|E\|_\infty \le 1$. The central (or maximal entropy) $\delta$-suboptimal interpolant is obtained when $E = 0$:

    S(z) = \rho\,Q_-(z)\,P_+^{-1}(z)          (5.8)

and is unique subject to the constraint that $\rho = \|\Gamma_V\| + \delta$, which will be assumed to hold. The main objective of Section V will be to prove Theorem 5.1, establishing the Lipschitz continuity ($L^\infty \to L^2$) of this interpolant, expressed by the inequality

    \|S_2 - S_1\|_2 \le \gamma_V\|V_2 - V_1\|_\infty          (5.9)

where $S_1$, $S_2$ are any two such interpolants of the respective data pairs $(V_1, I)$, $(V_2, I)$.

In Theorem 5.1, let $\bar\rho_i := \|\Gamma_{V_i}\|$, $\alpha_i := (\bar\rho_i/\rho_i)^2$, $\rho := \max(\rho_1, \rho_2)$, $\alpha := \max(\alpha_1, \alpha_2)$.

Theorem 5.1: The AAK central $\delta$-suboptimal interpolant satisfies the Lipschitz condition (5.9) with a constant $\gamma_V$ given by (5.10), depending only on $n$, $\rho$, $\alpha$, and $\delta$; in particular, since $1 - \sqrt{\alpha_i} = \delta(\bar\rho_i + \delta)^{-1}$, the constant can be bounded in terms of $\delta$ alone, which is the content of (5.11).

B. The AAK Construction and the Proof of Theorem 5.1

A Hankel operator $\Gamma_V: l^2_+ \to l^2_+$ with symbol $V \in L^\infty$ has the infinite matrix representation $[v_{j+k-1}]$, $j, k \ge 1$, where the $v_k \in C^{n \times n}$ are the negative matrix Fourier coefficients of $V$.

The following construction was introduced by Adamjan, Arov, and Krein in [1]. Let $T_+: l^2_+ \to l^2_+$ denote the right shift operator in $l^2_+$, $(T_+u)(t) = u(t-1)$ if $t \ge 1$, $= 0$ if $t = 0$. View $C^n$ as the subspace of $l^2_+$ consisting of sequences of the form $\{x_1, 0, 0, \ldots\}$, $x_1 \in C^n$. Denote $\Gamma_V$ by $\Gamma$. For any $\rho > \|\Gamma\|$, assumed to be fixed, introduce the following operators, with domains and codomains as shown:

    R := (\rho^2 I - \Gamma^*\Gamma)^{-1},        \tilde R := (\rho^2 I - \Gamma\Gamma^*)^{-1}   : l^2_+ \to l^2_+
    G := (\Pi_{C^n} R|_{C^n})^{-1/2},             \tilde G := (\Pi_{C^n}\tilde R|_{C^n})^{-1/2}  : C^n \to C^n
    P := \rho R G,                                \tilde P := \rho\tilde R\tilde G               : C^n \to l^2_+
    Q := T_+\Gamma R G,                           \tilde Q := T_+\Gamma^*\tilde R\tilde G        : C^n \to l^2_+.          (5.12)

All operators defined in (5.12) depend on $(\Gamma, \rho)$.

The operator $P$ is isomorphic to a multiplication operator with multiplicant $P_+(\cdot)$ in $(H^2)^{n \times n}$ satisfying

    P_+(z)h = \mathscr{Z}[Ph](z),   |z| = 1

for $h \in C^n$, where $\mathscr{Z}: l^2_+ \to H^2$ denotes the $z$-transform. $P_+(\cdot)$ is determined by $P$ uniquely up to $L^2$ equivalence. If $\{f_1, f_2, \ldots, f_n\}$ denotes the basis $f_1 = [1, 0, 0, \ldots, 0]^{tr}$, $f_2 = [0, 1, 0, \ldots, 0]^{tr}$, \ldots of $C^n$, then $P_+$ is expressed columnwise in terms of the transforms $\mathscr{Z}[Pf_i]$.

Similarly, we define functions $Q_+ \in (H^2)^{n \times n}$ and $P_-, Q_- \in (L^2 \ominus H^2)^{n \times n}$ from the operators $Q$, $\tilde P$, and $\tilde Q$ of (5.12), respectively, via the $z$-transform, for $|z| = 1$ and $h \in C^n$. $P_+$ and $Q_+$ satisfy the identity (see [1, p. 150])

    P_+^*(z)P_+(z) - Q_+^*(z)Q_+(z) = I,   |z| = 1          (5.13)

(a.e. in $z$; from here on we shall not distinguish between functions equal everywhere and almost everywhere), which implies that for any $E \in H^\infty$ with $\|E\|_\infty \le 1$, $P_+(z)$ and $[P_+(z) + Q_+(z)E(z)]$ are invertible and

    |P_+^{-1}(z)| \le 1,   |z| = 1.          (5.14)

(Recall that $\|\cdot\|$ denotes an operator norm, which of course depends on the domain and codomain.)

Lemma 5.1: If $K: C^n \to l^2_+$ and $K(z)h = \mathscr{Z}[Kh](z)$ for each $h \in C^n$, $|z| = 1$, then

    \|K\|_2 \le n^{1/2}\|K\|.          (5.15)

Proof: The hypotheses imply that $K \in H^2$, and $K$ satisfies the inequalities

    \|K\|_2^2 = \frac{1}{2\pi}\int_{-\pi}^{\pi}\mathrm{trace}\bigl(K^*(e^{i\theta})K(e^{i\theta})\bigr)\,d\theta
              = \mathrm{trace}\Bigl(\frac{1}{2\pi}\int_{-\pi}^{\pi}K^*(e^{i\theta})K(e^{i\theta})\,d\theta\Bigr)
              \le n\,\sup_{|h|=1} h^*\Bigl(\frac{1}{2\pi}\int_{-\pi}^{\pi}K^*(e^{i\theta})K(e^{i\theta})\,d\theta\Bigr)h          (5.16)

for $h \in C^n$, and the right-hand side equals $n\sup_{|h|=1}\|Kh\|^2_{l^2}$, so the lemma follows.

For any function $y = f(V)$, the notation $\Delta y$ denotes $f(V_2) - f(V_1)$. Denote $\Gamma_V$ by $\Gamma$, and recall that $\rho = \|\Gamma_V\| + \delta$.

Lemma 5.2: The following inequalities hold, where $X_i := \Gamma_i/\rho_i$: (5.17), a bound on $\|G\|$ in terms of $\rho$ and $\alpha$;

    \|\Delta(\rho^{-1}\Gamma)\| \le \rho_1^{-1}(1 + \delta)\|\Delta\Gamma\|          (5.18)

(5.19), a corresponding bound on $\|\Delta(\rho^{-1}\Gamma^*\Gamma)\|$; (5.20), a bound on $\|\Delta(\rho R)\|$; and (5.21), a bound on $\|\Delta(X(I - X^*X)^{-1})\|$; each is proportional to $\|\Delta\Gamma\|$, with constants depending only on $\rho_i$, $\alpha_i$, and $\delta$.

Proof of inequality (5.17): Write $A = \rho^{-2}\Gamma^*\Gamma$ and note that $\|A\| \le \alpha$. Since projection reduces norm,

    \Pi_{C^n}(I - A)^{-1}|_{C^n} = (1 - \alpha)^{-1}(I_{C^n} + K_0)

where $\|K_0\| \le \alpha$. (The bound on $K_0$ follows from the observation that $K_0 = (A - \alpha I)(I - A)^{-1}$ and the observation that $(I - A)$ is symmetric positive definite.) Therefore

    \rho^{-1}G = [\Pi_{C^n}(I - A)^{-1}|_{C^n}]^{-1/2} = (1 - \alpha)^{1/2}(I_{C^n} + K_0)^{-1/2}.

As $K_0$ is a contraction, $\|(I_{C^n} + K_0)^{-1/2}\| \le (1 - \alpha)^{-1/2}$ by elementary Banach algebra, and (5.17) holds.

Proof of inequalities (5.18) and (5.19): Note that

    |\rho_2 - \rho_1| \le |\bar\rho_2 - \bar\rho_1| \le \|\Delta\Gamma\|.          (5.22)

Therefore

    \|\rho_2^{-1}\Gamma_2 - \rho_1^{-1}\Gamma_1\| \le |\rho_2^{-1} - \rho_1^{-1}|\,\|\Gamma_1\| + \rho_2^{-1}\|\Gamma_2 - \Gamma_1\| \le \rho_1^{-1}(1 + \delta)\|\Delta\Gamma\|

which proves (5.18). Also

    \|\rho_2^{-1}\Gamma_2^*\Gamma_2 - \rho_1^{-1}\Gamma_1^*\Gamma_1\|
        \le \|\rho_2^{-1}\Gamma_2^*(\Gamma_2 - \Gamma_1)\| + \|(\rho_2^{-1}\Gamma_2^* - \rho_1^{-1}\Gamma_1^*)\Gamma_1\|
        \le \sqrt{\alpha}\,\|\Delta\Gamma\| + \rho_1^{-1}(1 + \sqrt{\alpha})\|\Delta\Gamma\|\,\|\Gamma_1\|

by (5.18), which implies (5.19).

Proof of inequality (5.20): Since $(\rho_i^2 I - \Gamma_i^*\Gamma_i)^{-1} = \rho_i^{-2}(I - A_i)^{-1}$ with $A_i := \rho_i^{-2}\Gamma_i^*\Gamma_i$ and $\|(I - A_i)^{-1}\| \le (1 - \alpha_i)^{-1}$,

    \|\Delta(\rho R)\| \le (\rho_1\rho_2)^{-1}(1 - \alpha_1)^{-1}(1 - \alpha_2)^{-1}\,\|(\rho_2 - \rho_1)I + \rho_1 A_1 - \rho_2 A_2\|
        \le (\rho_1\rho_2)^{-1}(1 - \alpha_1)^{-1}(1 - \alpha_2)^{-1}\,(1 + \delta + \sqrt{\alpha_1} + \sqrt{\alpha_2})\,\|\Delta\Gamma\|

by (5.19) and as $|\rho_2 - \rho_1| \le \|\Delta\Gamma\|$. Therefore (5.20) holds.

Proof of inequality (5.21): Let $X_i := \Gamma_i/\rho_i$, $i = 1, 2$ ($\|X_i\| \le \sqrt{\alpha_i}$). We have

    \|\Delta(X(I - X^*X)^{-1})\|
        \le \|X_2 - X_1\|\,\|(I - X_2^*X_2)^{-1}\| + \|X_1\|\,\|(I - X_2^*X_2)^{-1} - (I - X_1^*X_1)^{-1}\|
        \le (1 - \alpha_2)^{-1}\|X_2 - X_1\| + \sqrt{\alpha}\,(1 - \alpha_1)^{-1}(1 - \alpha_2)^{-1}\|X_2^*X_2 - X_1^*X_1\|

and, as $\|X_2^*X_2 - X_1^*X_1\| \le \|X_2^*(X_2 - X_1) + (X_2^* - X_1^*)X_1\| \le (\sqrt{\alpha_1} + \sqrt{\alpha_2})\|X_2 - X_1\|$, we obtain, using (5.18), a bound of the form

    \|\Delta(X(I - X^*X)^{-1})\| \le (1 + \sqrt{\alpha_1\alpha_2})\,((1 - \alpha_1)(1 - \alpha_2))^{-1}(1 + \delta)\,\rho_1^{-1}\|\Delta\Gamma\|

which is (5.21).

Proof of Theorem 5.1: Let $\Theta_i := (Q_-)_i\bar G_i^{-1}$ and $\bar P_{+,i} := (P_+)_i\bar G_i^{-1}$, $i = 1, 2$, where $\bar G_i$ denotes the constant matrix in $L^\infty$ such that $\bar G_i h = \mathscr{Z}[G_i h]$, $G_i$ being the operator in (5.12). Then $S_i = \rho_i(Q_-)_i(P_+)_i^{-1} = \rho_i\Theta_i\bar P_{+,i}^{-1}$, and

    \|\Delta S\|_2 = \|\rho_2\Theta_2\bar P_{+,2}^{-1} - \rho_1\Theta_1\bar P_{+,1}^{-1}\|_2
        \le \|\rho_2\Theta_2 - \rho_1\Theta_1\|_2\,\|\bar P_{+,2}^{-1}\|_\infty + \|\rho_1\Theta_1(\bar P_{+,2}^{-1} - \bar P_{+,1}^{-1})\|_2
        \le \{\|\Delta(\rho\Theta)\|_2 + \rho_1\|\Theta_1\bar P_{+,1}^{-1}\|_\infty\,\|\bar P_{+,2} - \bar P_{+,1}\|_2\}\,\|\bar P_{+,2}^{-1}\|_\infty.

The suboptimality of $\rho_i(Q_-)_i(P_+)_i^{-1}$ implies that $\|\Theta_i\bar P_{+,i}^{-1}\|_\infty \le 1$. Also, (5.13) implies that $\|(P_+)_i^{-1}\| \le 1$, from which $\|\bar P_{+,i}^{-1}\|_\infty \le \|\bar G_i\|_\infty$, which is bounded by (5.17). Passing to the operators $\bar P_i := P_iG_i^{-1} = \rho_iR_i$ and $\bar Q_i := Q_iG_i^{-1} = T_+\Gamma_iR_i$, and applying Lemma 5.1 together with the bounds (5.20) and (5.21) of Lemma 5.2 to $\|\Delta(\rho\Theta)\|_2$ and $\|\Delta\bar P_+\|_2$, yields a bound on $\|\Delta S\|_2$ proportional to $\|\Delta\Gamma\|$ with the constant (5.10); this implies (5.9), as $\|\Delta\Gamma\| \le \|\Delta V\|_\infty$. As $1 - \sqrt{\alpha_i} = \delta(\bar\rho_i + \delta)^{-1}$ and $\alpha < 1$, (5.11) follows.

$\bar S$ will be called the central $\delta$-suboptimal interpolant of $(V', U)$ if $U^*\bar S$ is such an interpolant of $(U^*V', I)$, where $U$ is inner.

Corollary 5.1: The central $\delta$-suboptimal interpolant $\bar S$ of $(V', U)$ depends Lipschitz continuously ($L^\infty \to L^2$) on the data, i.e.,

    \|\Delta S\|_2 \le \gamma_{U^*V'}\|\Delta V'\|_\infty + \{\rho_1 + \gamma_{U^*V'}\|V_1'\|_\infty\}\|\Delta U\|_\infty

where $\gamma_{U^*V'}$ is defined by (5.10) with $U^*V' = V$.

Proof: Let $S_i$ be the central $\delta$-suboptimal interpolants of $(V_i', U_i)$. The corresponding interpolants of $(U_i^*V_i', I)$ are $U_i^*S_i$. Now

    \|S_2 - S_1\|_2 = \|U_2U_2^*S_2 - U_1U_1^*S_1\|_2
        \le \|U_2\|_\infty\|U_2^*S_2 - U_1^*S_1\|_2 + \|U_2 - U_1\|_2\|U_1^*S_1\|_\infty
        = \|U_2^*S_2 - U_1^*S_1\|_2 + \rho_1\|\Delta U\|_2

as $\|U_2\|_\infty \le 1$ and $\|U_1^*S_1\|_\infty \le \rho_1$ by the AAK construction. By Theorem 5.1,

    \|U_2^*S_2 - U_1^*S_1\|_2 \le \gamma_{U^*V'}\|U_2^*V_2' - U_1^*V_1'\|_\infty
        \le \gamma_{U^*V'}\{\|U_2^*\|_\infty\|V_2' - V_1'\|_\infty + \|U_2^* - U_1^*\|_\infty\|V_1'\|_\infty\}
        \le \gamma_{U^*V'}\{\|\Delta V'\|_\infty + \|\Delta U\|_\infty\|V_1'\|_\infty\}

which proves the corollary.

C. Application to Sensitivity Design

We return to the problem posed in Section II-A of selecting a sensitivity $S'$ subject to the local $\delta$-suboptimality constraint (4.1) and the Lipschitz condition (2.10), and resume using the terminology of Section II. Recall that $(W, U)$ are in $E_\sigma$, and therefore must be in $E_{\sigma_0}$ for some $\sigma_0 > \sigma$. For each $t \in Z$, define

    \hat V_t := \hat U_t^*(\sigma_0(\cdot))\,\hat W_t(\sigma_0(\cdot))          (5.23)

in $L^\infty$. Let $\hat S^0_t$ be the $\delta$-suboptimal AAK central interpolant of $(\hat V_t, I)$ defined in (5.8), and $\hat S'_t := \hat U_t\hat S^0_t$. Then $S'$ locally interpolates $(W, U)$ in $E_{\sigma_0}$. As $U$ is locally unitary, $S'$ satisfies

    \mu_{\sigma_0}(\hat S'_t) = \mu_{\sigma_0}(\hat S^0_t) \le \mu_{\sigma_0}(\hat S'^{opt}_t(\sigma_0)) + \delta

($\hat S'^{opt}$ as defined in Section II-A), which implies (4.1). Call $S'$ the locally central $\delta$-suboptimal sensitivity in $E_{\sigma_0}$. Let $\gamma_{V_t}$ be the Lipschitz constant defined in Theorem 5.1, (5.10), with $V_t$ identified as in (5.23). Write $\Lambda_\gamma := \sup_t\gamma_{V_t}$. Corollary 5.1 now immediately gives the following.

Corollary 5.2: The locally central $\delta$-suboptimal sensitivity $S' \in E_{\sigma_0}$ that locally interpolates $(W, U)$ (and which was introduced in Section II-A) depends Lipschitz continuously ($L^\infty \to L^2$) on the data, i.e.,

    \|\Delta\hat S'_t\|_2 \le \Lambda_\gamma\|\Delta\hat W_t\|_\infty + \{\rho + \Lambda_\gamma\,\mu_{\sigma_0}(\hat W_t)\}\|\Delta\hat U_t\|_\infty          (5.24)

[cf. the Lipschitz condition (2.10)].

VI. CONCLUDING REMARKS

Double algebras provide a natural mathematical framework for the frozen-time analysis of feedback systems. They suggest a simple approach to the approximate optimization of such systems.

One of the open questions of adaptive control has been: can one provide a paradigm of adaptive feedback which does not depend on the structure or parameterization of the controller? Frozen-time optimization appears capable of providing such a paradigm, at least for the elementary case of slowly time-varying systems.

REFERENCES
[1] V. M. Adamjan, D. Z. Arov, and M. G. Krein, "Infinite Hankel block matrices and related extension problems," AMS Translations, vol. 111, pp. 133-156, 1978.
[2] W. Arveson, "Interpolation problems in nest algebras," J. Functional Anal., vol. 20, pp. 208-233, 1975.
[3] J. A. Ball, C. Foias, J. W. Helton, and A. Tannenbaum, "On a local nonlinear commutant lifting theorem," preprint.
[4] H. M. J. Cantalloube, C. E. Nahum, and P. E. Caines, "Robust adaptive control: A direct factorization approach," to be published.
[5] M. Dahleh and M. A. Dahleh, "On slowly time-varying systems," M.I.T. Tech. Rep. LIDS-P-1852, Feb. 1989.
[6] P. L. Duren, Theory of $H^p$ Spaces. New York: Academic, 1970.
[7] B. A. Francis, "A course in $H^\infty$ control theory," in Lecture Notes in Control and Information Sciences, vol. 88. New York: Springer-Verlag, 1985.
[8] A. Feintuch and B. A. Francis, "Uniformly optimal control of linear systems," Automatica, vol. 21, no. 5, pp. 563-574, 1985.
[9] B. A. Francis and M. Vidyasagar, "Algebraic and topological aspects of the servo problem for lumped systems," Dep. Eng. Appl. Sci., Yale Univ., New Haven, CT, S&IS Rep. 8003, 1980.
[10] T. T. Georgiou, A. M. Pascoal, and P. P. Khargonekar, "On the robust stabilizability of uncertain linear time-invariant plants using nonlinear time-varying controllers," Automatica, vol. 23, no. 5, pp. 617-624, Sept. 1987.
[11] P. P. Khargonekar and K. Poolla, "On polynomial matrix fraction representations for linear time-varying systems," Linear Algebra Appl., vol. 80, pp. 1-37, 1986.
[12] P. P. Khargonekar and K. Poolla, "Uniformly optimal control of linear time-invariant plants: Nonlinear time-varying controllers," Syst. Contr. Lett., vol. 6, no. 5, pp. 303-308, 1986.
[13] M. C. Smith, "Well-posedness of $H^\infty$ optimal control problems," to be published.
[14] M. S. Verma, "Robust stability of linear feedback systems under time-varying and nonlinear perturbations in the plant," preprint.
[15] M. Vidyasagar, H. Schneider, and B. A. Francis, "Algebraic and topological aspects of feedback stabilization," IEEE Trans. Automat. Contr., vol. AC-27, pp. 880-894, Aug. 1982.
[16] L. Y. Wang, "Adaptive $H^\infty$ optimization," Ph.D. dissertation, McGill Univ., Montreal, P.Q., 1989.
[17] L. Y. Wang, "Lipschitz continuity of inner-outer factorization," McGill Univ., Montreal, P.Q., McGill Rep., 1989.
[18] L. Y. Wang and G. Zames, "$H^\infty$ optimization and slowly time-varying systems," in Proc. 26th IEEE Conf. Decision Contr., Dec. 1987, pp. 81-83.
[19] L. Y. Wang and G. Zames, "Slowly time-varying systems and $H^\infty$ optimization," in Proc. 8th IFAC Symp. Identification Syst. Parameter Estimation, Beijing, 1988, vol. 1, pp. 492-495.
[20] L. Y. Wang and G. Zames, "Local-global double algebras for slow $H^\infty$ adaptation," in Proc. MTNS Conf., Amsterdam, The Netherlands, June 22-28, 1989.
[21] L. Y. Wang and G. Zames, "Local-global double algebras for slow $H^\infty$ adaptation," in Proc. IEEE 28th Conf. Decision Contr., Tampa, FL, Dec. 13-15, 1989.
[22] L. Y. Wang and G. Zames, "Local-global double algebras for slow $H^\infty$ adaptation: The case of $l^2$ disturbances," E.E. Rep., Nov. 1989; to be published.
[23] G. Zames, "Feedback and optimal sensitivity: Model reference transformations, multiplicative seminorms, and approximate inverses," IEEE Trans. Automat. Contr., vol. AC-26, pp. 301-320, 1981.
[24] G. Zames, "On the metric complexity of causal linear systems, $\epsilon$-entropy and $\epsilon$-dimension for continuous time," IEEE Trans. Automat. Contr., vol. AC-24, pp. 222-230, 1979.

Le Y. Wang (S'85-M'89), for a photograph and biography, see this issue, p. 142.

George Zames (S'57-M'61-SM'78-F'79), for a photograph and biography, see this issue, p. 142.
B. A. Francis and M. Vidyasagar, “Algebraic and topological -, “Slowly time-varying systems and H” optimization,” in aspects of the servo problem for lumped systems,” Dep. Eng. Proc. 8th IFAC Symp. Iden. Param. Est., Beijing, R.O.C., Appl. Sci., Yale Univ., New Haven, CT, S&IS Rep. 8003, 1980. 1988, vol. 1, pp. 492-495. T. T. Georgiou, A. M. Pascoal, and P.P. Khargonekar, “On the -, “Local-global double algebras for slow H” adaptation,” robust stabilizability of uncertain linear time-invariant plants us- MTNS Conf., Amsterdam, The Netherlands, June 22-28, 1989. ing nonlinear time-varying controllers,” Autornatica, vol. 23, -, “Local-global double algebras for slow H” adaptation,” no. 5, pp. 617-624, Sept. 1987. in Proc. IEEE 28th Conf. Decision Contr., Tampa, FL, Dec. P. P. Khargonekar and K. Poolla, “On polynomial matrix frac- 13-15, 1989. tion representations for linear time-varying systems,” Linear -, “Local- lobal double algebras for slow H” adaptation: Algebra Appl., vol. 80, pp. 1-37, 1986. The case of 4’I disturbances,” E.E. Rep.. Nov. 1989; to be -, “Uniformly optimal control of linear time-invariant plants: published. Nonlinear time-varying controllers,” Syst. Contr. Lett., vol. 6, G. Zames, “Feedback and optimal sensitivity: Model reference no. 5, pp. 303-308, 1986. transformations, multiplicative seminorms, and approximate in- M. C. Smith, “Well-posedness of H” optimal control verses,” IEEE Trans. Automat. Contr., vol. AC-26, pp. problems,” to be published. 301-320, 1981. M. S. Verma, “Robust stability of linear feedback systems under -, “On the mettic complexity of causal linear systems, e-ent- time-varying and nonlinear perturbations in the plant,” preprint. ropy and €-dimension for continuous time,” ZEEE Trans. nu- M. Vidyasagar, H. Schneider, and B. A. Francis, “Algebraic tomat. Contr., vol. AC-24, pp. 222-230, 1979. and topological aspects of feedback stabilization,” IEEE Trans. Automat. Contr., vol. AC-27, pp. 880-894, Aug. 1982. L. Y. Wang, “Adaptive H” optimization,” Ph.D. dissertation, Le Y. Wang (S’85-M’89), for a photograph and biography, see this 1989. issue, p. 142. -, “Lipschitz continuity of inner-outer factorization,” McGill UNv., Montreal, P.Q., McGill Rep., 1989. L. Y. Wang and G. Zames, “H” optimization and slowly time-varying systems,” in Proc. 26 Conf. Dec. Contr., Dec. George Zames (S’57-M’61 -SM’78-F’79), for a photograph and biog- 1987, pp. 81-83. raphy, see this issue, p. 142.