ECE5570: Optimization Methods for Systems and Control

Parameter Optimization: Constrained

Many of the concepts which arise in unconstrained parameter optimization are also important in the study of constrained optimization, so we will build on the material presented in Chapter 3. Unlike unconstrained optimization, however, it is more difficult to generate consistent numerical results; hence the choice of a suitable algorithm is often more challenging for the class of constrained problems.

• The general form of most constrained optimization problems can be expressed as:

$$\min_{u \in \mathbb{R}^n} L(u)$$

subject to

$$c_i(u) = 0, \quad i \in E$$
$$c_i(u) \ge 0, \quad i \in I$$

where $E$ and $I$ denote the sets of equality and inequality constraints, respectively.

• Any point $u_0$ satisfying all the constraints is said to be a feasible point; the set of all such points defines the feasible region, $R$.

• Let's consider an example for the $n = 2$ case where $L(u) = u_1 + u_2$, subject to constraints:

$$c_1(u) = u_2 - u_1^2$$
$$c_2(u) = 1 - u_1^2 - u_2^2$$

This situation is depicted in Fig. 4.1 below.


Figure 4.1: Constrained Optimization Example

Equality Constraints: Two-Parameter Problem

• Consider the following two-parameter problem:

– Find the parameter vector $u = \begin{bmatrix} u_1 & u_2 \end{bmatrix}^T$ to minimize $L(u)$ subject to the constraint $c(u) = \psi$

– Question: why must there be at least two parameters in this problem?

• From the previous chapter, we know that the change in $L$ caused by changes in the parameters $u_1$ and $u_2$ in a neighborhood of the optimal solution is given approximately by:

$$\Delta L \approx \left. \frac{\partial L}{\partial u_1} \right|_* \Delta u_1 + \left. \frac{\partial L}{\partial u_2} \right|_* \Delta u_2 + \frac{1}{2} \Delta u^T \left. \frac{\partial^2 L}{\partial u^2} \right|_* \Delta u$$

– but as we change $(u_1, u_2)$ we must ensure that the constraint remains satisfied

– to first order, this requirement means that

$$\Delta c = \left. \frac{\partial c}{\partial u_1} \right|_* \Delta u_1 + \left. \frac{\partial c}{\partial u_2} \right|_* \Delta u_2 = 0$$

– so $\Delta u_1$ (or $\Delta u_2$) is not arbitrary; it depends on $\Delta u_2$ (or $\Delta u_1$) according to the following relationship:

$$\Delta u_1 = -\left[ \frac{\partial c / \partial u_2}{\partial c / \partial u_1} \right]_* \Delta u_2$$

$$\Rightarrow \quad \Delta L \approx \left[ \frac{\partial L}{\partial u_2} - \frac{\partial L}{\partial u_1} \frac{\partial c/\partial u_2}{\partial c/\partial u_1} \right]_* \Delta u_2 + \frac{1}{2} \left\{ \frac{\partial^2 L}{\partial u_2^2} - 2 \frac{\partial^2 L}{\partial u_2 \partial u_1} \left[ \frac{\partial c/\partial u_2}{\partial c/\partial u_1} \right] + \frac{\partial^2 L}{\partial u_1^2} \left[ \frac{\partial c/\partial u_2}{\partial c/\partial u_1} \right]^2 \right\}_* \Delta u_2^2$$

– but $\Delta u_2$ is arbitrary; so how do we find the solution?

◦ the coefficient of the first-order term must be zero

◦ the coefficient of the second-order term must be greater than or equal to zero

• So, we can solve this problem by solving

$$\left[ \frac{\partial L}{\partial u_2} - \frac{\partial L}{\partial u_1} \frac{\partial c/\partial u_2}{\partial c/\partial u_1} \right]_* = 0$$

– but this is only one equation in two unknowns; where do we get the rest of the information required to solve this problem?

◦ from the constraint: $c(u_1, u_2) = \psi$

• Although the approach outlined above is straightforward for the 2-parameter, 1-constraint problem, it becomes very difficult to implement as the dimensions of the problem increase

• For this reason, it would be nice to develop an alternative approach that can be extended to more difficult problems

• Is this possible? ⇒ YES!

Lagrange Multipliers: Two-Parameter Problem

• Consider the following cost function:

$$\bar{L} = L(u_1, u_2) + \lambda \{ c(u_1, u_2) - \psi \}$$

where $\lambda$ is a constant that I am free to select

– since $c - \psi = 0$, this cost function will be minimized at precisely the same points as $L(u_1, u_2)$

– so, a necessary condition for a local minimum of $L(u_1, u_2)$ is:

$$\Delta \bar{L} = \Delta L + \lambda \, \Delta c = 0$$

$$\left( \frac{\partial L}{\partial u_1} + \lambda \frac{\partial c}{\partial u_1} \right) \Delta u_1 + \left( \frac{\partial L}{\partial u_2} + \lambda \frac{\partial c}{\partial u_2} \right) \Delta u_2 = 0$$

◦ but because of the constraint, $\Delta u_1$ and $\Delta u_2$ are not independent; so it is conceivable that this result could be true even if the two coefficients are not zero.

◦ however, $\lambda$ is free to be chosen:

$$\text{Let } \lambda = -\frac{\partial L / \partial u_1}{\partial c / \partial u_1}, \qquad \text{then } \frac{\partial L}{\partial u_2} + \lambda \frac{\partial c}{\partial u_2} = 0$$

◦ and $c(u_1, u_2) = \psi$

◦ result: ⇒ 3 equations, 3 unknowns $(u_1, u_2, \lambda)$

• Comments:

– Is this a new result? No

◦ if you substitute the expression for $\lambda$ into the second equation, you get the same result as developed previously

– So why do it?

$$\frac{\partial L}{\partial u_1} + \lambda \frac{\partial c}{\partial u_1} = 0$$

$$\frac{\partial L}{\partial u_2} + \lambda \frac{\partial c}{\partial u_2} = 0$$

◦ these two equations are precisely the ones I would have obtained if I had assumed $\Delta u_1$ and $\Delta u_2$ were independent

◦ so the use of Lagrange multipliers allows me to develop the necessary conditions for a minimum using standard unconstrained parameter optimization techniques, which is a significant simplification for complicated problems

Example 4.1

• Determine the rectangle of maximum area that can be inscribed inside a circle of radius $R$.

Figure 4.2: Example - Constrained Optimization

• Generate the objective function, $L(x, y)$:

$$A = 4xy \quad \Rightarrow \quad L = -4xy$$

• Formulate the constraint equation, $c(x, y)$:

$$c(x, y) = x^2 + y^2 = R^2$$

• Calculate the solution ⇒

$$\frac{\partial L}{\partial x} = -4y \qquad \frac{\partial c}{\partial x} = 2x$$

$$\frac{\partial L}{\partial y} = -4x \qquad \frac{\partial c}{\partial y} = 2y$$

• Method of first variations:

$$x^2 + y^2 = R^2$$
$$-4x + 4y(2y/2x) = 0$$

$$\Rightarrow \quad x^2 + y^2 = R^2, \qquad x^2 - y^2 = 0$$

$$\Rightarrow \quad x = y = \frac{R}{\sqrt{2}} \quad \Rightarrow \quad A^* = 2R^2$$

• Lagrange multiplier approach:

$$x^2 + y^2 = R^2$$
$$-4y + 2\lambda x = 0$$
$$-4x + 2\lambda y = 0$$


$$\Rightarrow \quad \lambda = \frac{2y}{x} \quad \Rightarrow \quad -4x + \frac{4y^2}{x} = 0 \quad \Rightarrow \quad x^2 - y^2 = 0$$

$$\Rightarrow \quad x = y = \frac{R}{\sqrt{2}} \quad \Rightarrow \quad A^* = 2R^2, \qquad \lambda = 2$$

• Is the fact that $\lambda = 2$ important?
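As a numerical check, the three necessary conditions (two stationarity equations plus the constraint) can be handed to a root finder. A minimal sketch using SciPy, assuming $R = 1$ and the initial guess shown:

```python
import numpy as np
from scipy.optimize import fsolve

R = 1.0  # assumed radius

def stationarity(z):
    # necessary conditions for Lbar = -4xy + lam*(x^2 + y^2 - R^2)
    x, y, lam = z
    return [-4.0 * y + 2.0 * lam * x,       # dLbar/dx = 0
            -4.0 * x + 2.0 * lam * y,       # dLbar/dy = 0
            x**2 + y**2 - R**2]             # constraint

x, y, lam = fsolve(stationarity, [0.5, 0.5, 1.0])
print(x, y, lam, 4 * x * y)  # -> 0.7071, 0.7071, 2.0, 2.0  (A* = 2R^2)
```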

Multi-Parameter Problem

• The same approach used for the two-parameter problem can also be applied to the multi-parameter problem

• But now for convenience, we'll adopt vector notation:

– Cost function: $L(x, u)$

– Constraints: $c(x, u) = \psi$, where $c$ is dimension $n \times 1$

– by convention, $x$ will denote the set of dependent variables and $u$ will denote the set of independent variables

– how many dependent variables are there? $n$ (why?)

$$\Rightarrow \quad x \equiv (n \times 1) \qquad u \equiv (m \times 1)$$

• Method of First Variations:

$$\Delta L \approx \frac{\partial L}{\partial x} \Delta x + \frac{\partial L}{\partial u} \Delta u$$

$$\Delta c \approx \frac{\partial c}{\partial x} \Delta x + \frac{\partial c}{\partial u} \Delta u$$


$$\Rightarrow \quad \Delta x = -\left( \frac{\partial c}{\partial x} \right)^{-1} \frac{\partial c}{\partial u} \Delta u$$

– NOTE: $x$ must be selected so that $\partial c / \partial x$ is nonsingular; for well-posed systems, this can always be done.

– using this expression for $\Delta x$, the necessary conditions for a minimum are:

$$\frac{\partial L}{\partial u} - \frac{\partial L}{\partial x} \left( \frac{\partial c}{\partial x} \right)^{-1} \frac{\partial c}{\partial u} = 0$$

$$c(x, u) = \psi$$

giving $(m + n)$ equations in $(m + n)$ unknowns.

• Lagrange Multipliers:

$$\bar{L} = L(x, u) + \lambda^T \{ c(x, u) - \psi \}$$

where $\lambda$ is now a vector of parameters that I am free to pick.

$$\Delta \bar{L} = \left( \frac{\partial L}{\partial x} + \lambda^T \frac{\partial c}{\partial x} \right) \Delta x + \left( \frac{\partial L}{\partial u} + \lambda^T \frac{\partial c}{\partial u} \right) \Delta u + \{ c(x, u) - \psi \}^T \Delta \lambda$$

– and because I've introduced these Lagrange multipliers, I can treat all of the parameters as if they were independent.

– so necessary conditions for a minimum are:


$$\frac{\partial \bar{L}}{\partial x} = \frac{\partial L}{\partial x} + \lambda^T \frac{\partial c}{\partial x} = 0$$

$$\frac{\partial \bar{L}}{\partial u} = \frac{\partial L}{\partial u} + \lambda^T \frac{\partial c}{\partial u} = 0$$

$$\left( \frac{\partial \bar{L}}{\partial \lambda} \right)^T = c(x, u) - \psi = 0$$

which gives $(2n + m)$ equations in $(2n + m)$ unknowns.

• NOTE: solving the first equation for $\lambda$ and substituting this result into the second equation yields the same result as that derived using the Method of First Variations.

• Using Lagrange multipliers, we transform the problem from one of minimizing $L$ subject to $c = \psi$ to one of minimizing $\bar{L}$ without constraints.

Sufficient Conditions for a Local Minimum

• Using Lagrange multipliers, we can develop sufficient conditions for a minimum by expanding $\Delta \bar{L}$ to second order:

$$\Delta \bar{L} = \bar{L}_x \Delta x + \bar{L}_u \Delta u + \frac{1}{2} \begin{bmatrix} \Delta x^T & \Delta u^T \end{bmatrix} \begin{bmatrix} \bar{L}_{xx} & \bar{L}_{xu} \\ \bar{L}_{ux} & \bar{L}_{uu} \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta u \end{bmatrix}$$

where

$$\bar{L}_{xx} = \frac{\partial^2 \bar{L}}{\partial x^2} \qquad \bar{L}_{uu} = \frac{\partial^2 \bar{L}}{\partial u^2} \qquad \bar{L}_{xu} = \frac{\partial}{\partial u} \left( \frac{\partial \bar{L}}{\partial x} \right)^T = \bar{L}_{ux}^T$$

– for a stationary point, $\bar{L}_x = \bar{L}_u = 0$ and

$$\Delta x = -c_x^{-1} c_u \Delta u + \text{h.o.t.}$$

– substituting this expression into $\Delta \bar{L}$ and neglecting terms higher than second order yields:

$$\Delta \bar{L} = \frac{1}{2} \Delta u^T \begin{bmatrix} -c_u^T c_x^{-T} & I_{m \times m} \end{bmatrix} \begin{bmatrix} \bar{L}_{xx} & \bar{L}_{xu} \\ \bar{L}_{ux} & \bar{L}_{uu} \end{bmatrix} \begin{bmatrix} -c_x^{-1} c_u \\ I_{m \times m} \end{bmatrix} \Delta u = \frac{1}{2} \Delta u^T \bar{L}_{uu}^* \Delta u$$

where

$$\bar{L}_{uu}^* = \bar{L}_{uu} - \bar{L}_{ux} c_x^{-1} c_u - c_u^T c_x^{-T} \bar{L}_{xu} + c_u^T c_x^{-T} \bar{L}_{xx} c_x^{-1} c_u$$

– so, a sufficient condition for a local minimum is that $\bar{L}_{uu}^*$ be positive definite
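The formula for $\bar{L}_{uu}^*$ translates directly into a few lines of linear algebra. A minimal sketch (the function name and NumPy dependency are my own; all blocks are evaluated at the stationary point):

```python
import numpy as np

def constrained_hessian(Lbar_xx, Lbar_ux, Lbar_uu, c_x, c_u):
    # S maps Delta u to Delta x on the constraint surface: Delta x = S Delta u
    S = -np.linalg.solve(c_x, c_u)
    # Lbar*_uu = Lbar_uu + Lbar_ux S + S' Lbar_ux' + S' Lbar_xx S
    return Lbar_uu + Lbar_ux @ S + S.T @ Lbar_ux.T + S.T @ Lbar_xx @ S

# check against Example 4.2 below (rectangle in a circle) at x = y, lambda = 2;
# c_x = 2x and c_u = 2y enter only through their ratio, so any common value works:
H = constrained_hessian(np.array([[4.0]]), np.array([[-4.0]]),
                        np.array([[4.0]]), np.array([[2.0]]), np.array([[2.0]]))
print(H)  # -> [[16.]], matching Lbar*_yy = 16 computed in Example 4.2
```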

Example 4.2

• Returning to the previous problem of a rectangle within a circle, we have

$$\bar{L}_{xx} = 2\lambda \qquad \bar{L}_{yx} = -4 \qquad c_x = 2x$$
$$\bar{L}_{xy} = -4 \qquad \bar{L}_{yy} = 2\lambda \qquad c_y = 2y$$

• Let $y$ be the independent variable:

$$\bar{L}_{yy}^* = 2\lambda + \frac{4y}{x} + \frac{4y}{x} + \frac{2\lambda y^2}{x^2}$$

• At the minimum (?), $x = y = \dfrac{R}{\sqrt{2}}, \quad \lambda = 2$

$$\Rightarrow \quad \bar{L}_{yy}^* = 16 > 0$$

Interpretation of Results

• Using first variations and/or Lagrange multipliers, we developed the following sufficient conditions for a local minimum:

$$L_u^* = L_u - L_x c_x^{-1} c_u = 0 \qquad c = \psi \qquad \bar{L}_{uu}^* > 0$$

• we know that $L_u = \dfrac{\partial L}{\partial u}$ holding $x$ constant, and $L_x = \dfrac{\partial L}{\partial x}$ holding $u$ constant

– do we have a similar interpretation for $L_u^*$?

⇒ YES: $L_u^* = \dfrac{\partial L}{\partial u}$ holding $c$ constant

• so $L_u^*$ builds in the constraint relationship between $x$ and $u$

– similarly, $L_{uu}^* = \dfrac{\partial^2 L}{\partial u^2}$ holding $c$ constant

Additional Interpretations of Results:

• By introducing Lagrange multipliers, we found that we could solve equality constrained optimization problems using unconstrained optimization techniques

• It turns out that Lagrange multipliers serve another useful purpose as "cost sensitivity parameters"

– consider the adjoined cost developed previously, $\bar{L} = L + \lambda^T (c - \psi)$

$$\Delta \bar{L} = \left. \frac{\partial \bar{L}}{\partial x} \right|_* \Delta x + \left. \frac{\partial \bar{L}}{\partial u} \right|_* \Delta u - \lambda^T \delta\psi$$

– NOTE: the term $\delta\psi$ is included because we want to determine how the optimal solution changes when the constraints are changed by a small amount

– but at a minimum,

$$\left. \frac{\partial \bar{L}}{\partial x} \right|_* = \left. \frac{\partial \bar{L}}{\partial u} \right|_* = 0$$

so any change in the cost $\Delta L$ (and hence $\Delta \bar{L}$) is due to $-\lambda^T \delta\psi$

$$\Rightarrow \quad \lambda^T = -\frac{\partial L_{\min}}{\partial \psi}$$

◦ so, the Lagrange multipliers describe the rate of change of the optimal value of $L$ with respect to the constraints

– obviously, when the constraints change, the entire solution changes; for small changes, we can perform a perturbation analysis to identify the new solution

◦ at the original minimum,

$$\bar{L}_x = 0 \;\Rightarrow\; \Delta \bar{L}_x = 0 \qquad \bar{L}_u = 0 \;\Rightarrow\; \Delta \bar{L}_u = 0 \qquad c = \psi \;\Rightarrow\; \Delta c = \delta\psi$$

◦ expanding these terms in a Taylor series expansion yields:

$$\Delta \bar{L}_x = \bar{L}_{xx} \Delta x + \bar{L}_{xu} \Delta u + c_x^T \Delta\lambda = 0$$
$$\Delta \bar{L}_u = \bar{L}_{ux} \Delta x + \bar{L}_{uu} \Delta u + c_u^T \Delta\lambda = 0$$
$$c_x \Delta x + c_u \Delta u = \delta\psi$$

$$\Rightarrow \quad \Delta x = c_x^{-1} \delta\psi - c_x^{-1} c_u \Delta u, \qquad \Delta\lambda = -c_x^{-T} \left\{ \bar{L}_{xx} \Delta x + \bar{L}_{xu} \Delta u \right\}$$

◦ and substituting these results into the expression for $\Delta \bar{L}_u = 0$ yields:


$$\Delta u = -\Gamma \, \delta\psi$$

$$\Gamma = \bar{L}_{uu}^{*-1} \left\{ \bar{L}_{ux} - c_u^T c_x^{-T} \bar{L}_{xx} \right\} c_x^{-1}$$

• NOTE: if $\bar{L}_{uu}^{*-1}$ exists, neighboring optimal solutions will exist

Example 4.3

• Continuing the previous example,

$$x = y = \frac{R}{\sqrt{2}} \qquad \bar{L}_{xx} = \bar{L}_{yy} = 2\lambda \qquad c_x = 2x \qquad \lambda = 2$$
$$\bar{L}_{xy} = \bar{L}_{yx} = -4 \qquad c_y = 2y$$

$$L = -4xy \qquad \bar{L}_{yy}^* = 16 \qquad L^* = -2R^2$$

• change $R^2$ by $\Delta R^2$: $\quad \Delta L^* = -2\,\Delta R^2 = -\lambda\,\Delta\psi$ ✓

• What about $\Delta x$, $\Delta y$, $\Delta\lambda$?

$$\Delta x = \frac{1}{2x} \Delta R^2 - \frac{1}{4x} \Delta R^2 = \frac{1}{4x} \Delta R^2$$

$$\Delta y = -\frac{1}{16} \left( -4 - 2\lambda \frac{2y}{2x} \right) \frac{\Delta R^2}{2x} = \frac{1}{4x} \Delta R^2$$

$$\Delta\lambda = -\frac{1}{2x} \left( 2\lambda \frac{\Delta R^2}{4x} - 4 \frac{\Delta R^2}{4x} \right) = 0$$

• Does this agree with reality? To first order, YES!

– $\lambda$ is constant, so $\Delta\lambda = 0$

$$x^* = y^* = \frac{R'}{\sqrt{2}} = \frac{1}{\sqrt{2}} \sqrt{R^2 + \Delta R^2} = \frac{R}{\sqrt{2}} \sqrt{1 + \frac{\Delta R^2}{R^2}}$$

$$\approx \frac{R}{\sqrt{2}} \left( 1 + \frac{1}{2} \frac{\Delta R^2}{R^2} \right)$$

where the last step makes use of the two-term Taylor series expansion of the square root.

$$\Rightarrow \quad \Delta x = \Delta y = \frac{\Delta R^2}{2\sqrt{2}\, R} = \frac{\Delta R^2}{4x} \;\checkmark$$

Example 4.4

• Consider the two-parameter objective function

$$L(u) = \frac{1}{2} u_1^2 + \frac{1}{2} u_2^2 - 2u_1 - 2u_2$$

with constraints:

$$c_1(u) = u_1 + u_2 = 1$$
$$c_2(u) = 2u_1 + 2u_2 = 6$$

• A contour plot of the objective function is given in Figure 4.3 below.

Figure 4.3: Example 4.4

• A graphical examination of the two constraint equations shows they are incompatible; i.e., the feasible region is empty.

• Consider a third constraint equation,

$$c_3(u) = 6u_1 + 3u_2 = 6$$

– the combination of constraints $c_1(u)$ and $c_3(u)$ produces a non-empty feasible region but shows the general difficulty when the number of constraints $n$ equals the number of parameters.

◦ solving the constraint equations gives

$$\begin{bmatrix} 1 & 1 \\ 6 & 3 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 6 \end{bmatrix}$$

$$u^* = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

• Let's now solve the general problem $L(u)$ with the single constraint $c_1(u)$.

– this time we have $n = 1$, giving a single dependent variable which we'll identify as $u_2$

• Forming the adjoined objective function, we write

$$\bar{L} = \frac{1}{2} (u_1^2 + u_2^2) - 2(u_1 + u_2) + \lambda (u_1 + u_2 - 1)$$

• The necessary conditions for a minimum give:

$$\frac{\partial \bar{L}}{\partial u_1} = u_1 - 2 + \lambda = 0$$
$$\frac{\partial \bar{L}}{\partial u_2} = u_2 - 2 + \lambda = 0$$
$$\frac{\partial \bar{L}}{\partial \lambda} = u_1 + u_2 - 1 = 0$$

– solving this set of 3 equations in 3 unknowns we get

$$\begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 0 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ \lambda \end{bmatrix} = \begin{bmatrix} 2 \\ 2 \\ 1 \end{bmatrix}$$

$$u_1^* = 0.5 \qquad u_2^* = 0.5 \qquad \lambda = 1.5$$

• Solving by the Method of First Variations, we use the equations

$$\frac{\partial L}{\partial u_1} - \frac{\partial L}{\partial u_2} \left( \frac{\partial c}{\partial u_2} \right)^{-1} \frac{\partial c}{\partial u_1} = 0$$

$$c(u_1, u_2) = \psi$$

to write

$$(u_1 - 2) - (u_2 - 2)(1)(1) = 0 \quad\Rightarrow\quad u_1 - u_2 = 0$$

and from the constraint equation, $u_1 + u_2 = 1$

• solving this set of 2 equations in 2 unknowns we get

$$\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

$$u_1^* = 0.5 \qquad u_2^* = 0.5$$

as before.

• Checking for sufficiency, it is easy to compute

$$\bar{L}_{uu}^* = 2 > 0$$
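Both small linear systems above are one-line solves numerically. A minimal NumPy sketch for the 3×3 stationarity system:

```python
import numpy as np

# stationarity of Lbar = 0.5(u1^2 + u2^2) - 2(u1 + u2) + lam*(u1 + u2 - 1)
M   = np.array([[1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0],
                [1.0, 1.0, 0.0]])
rhs = np.array([2.0, 2.0, 1.0])
u1, u2, lam = np.linalg.solve(M, rhs)
print(u1, u2, lam)  # -> 0.5 0.5 1.5
```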

Numerical Algorithms

• As in the case of unconstrained optimization, the necessary and sufficient conditions for a local minimum may produce a set of equations that are too difficult to solve analytically.

– indeed, the introduction of constraints complicates the problem much more quickly

• So again, we must resort to numerical algorithms which, for the equality-constrained problem, will be developed by adapting "steepest-descent" procedures as presented previously

Second-Order Gradient Method

• General theory:

– unlike the unconstrained problem, we now have more to worry about than simply finding a minimizing parameter vector $u^*$

– in addition, we must find $\lambda$ and $x$; but we do have enough equations available to solve this problem: Given $u$,

◦ Solve $c(x, u) = \psi$ for $x$

◦ Solve $\dfrac{\partial \bar{L}}{\partial x} = 0$ for $\lambda$

◦ Solve $\dfrac{\partial \bar{L}}{\partial u} = 0$ for new $u$

• Practical application of the theory:

– as in the unconstrained problem, we'll select an initial $u$ ($u^{(k)}$) that we'll assume to be correct

– solving for $x$ ⇒

$$c(x, u) \approx c(x^{(k)}, u^{(k)}) + \left. \frac{\partial c}{\partial x} \right|_k \Delta x = \psi$$

$$\Rightarrow \quad \Delta x = -\left( \left. \frac{\partial c}{\partial x} \right|_k \right)^{-1} \left\{ c(x^{(k)}, u^{(k)}) - \psi \right\}$$

$$x^{(k+1)} = x^{(k)} + \Delta x$$

◦ so, by iterating through this process, we find the value of $x$ which satisfies the constraints for the given $u$

◦ notice that we must select an initial $x$ to start the iteration process

– solving for $\lambda$ ⇒

$$\frac{\partial \bar{L}}{\partial x} = \frac{\partial L}{\partial x} + \lambda^T \frac{\partial c}{\partial x} = 0^T$$

$$\Rightarrow \quad \lambda^T = -\frac{\partial L}{\partial x} \left( \frac{\partial c}{\partial x} \right)^{-1}$$

– if $u$ and $x$ are truly the optimal solution, then $\partial \bar{L}/\partial u^T = 0$

– but, if $\partial \bar{L}/\partial u^T \ne 0$, then we can use the following development to iterate for $u$,

$$\left. \frac{\partial \bar{L}}{\partial u}^T \right|_{c=\psi,\; u+\Delta u} = \left. \frac{\partial \bar{L}}{\partial u}^T \right|_{c=\psi,\; u} + \bar{L}_{uu}^* \Delta u = 0$$

where

$$\bar{L}_{uu}^* = \bar{L}_{uu} - \bar{L}_{ux} c_x^{-1} c_u - c_u^T c_x^{-T} \bar{L}_{xu} + c_u^T c_x^{-T} \bar{L}_{xx} c_x^{-1} c_u$$


$$\Rightarrow \quad \Delta u = -\bar{L}_{uu}^{*-1} \frac{\partial \bar{L}}{\partial u}^T$$

– be careful about how you calculate $\bar{L}_{uu}$, $\bar{L}_{ux}$, $\bar{L}_{xu}$, and $\bar{L}_{xx}$; e.g.,

$$\frac{\partial \bar{L}}{\partial x} = \frac{\partial L}{\partial x} + \lambda^T \frac{\partial c}{\partial x}$$

$$\Rightarrow \quad \frac{\partial^2 \bar{L}}{\partial x^2} = \frac{\partial^2 L}{\partial x^2} + \sum_{i=1}^{n} \lambda_i \frac{\partial^2 c_i}{\partial x^2}$$

– Note that our matrix notation has broken down to some extent

First-Order Gradient Method

• The same arguments used to justify the first-order algorithm for the unconstrained problem also apply here

• So, $x$ and $\lambda$ can be identified using the procedure above; but $u$ is updated as follows:

$$\Delta u = -K \frac{\partial \bar{L}}{\partial u}^T$$

where $K$ is a positive scalar when searching for a minimum.

Prototype Algorithm

1. Guess x and u

2. Compute $c - \psi$; If $\| c - \psi \| < \epsilon_1$, go to (5.)

3. Compute $\dfrac{\partial c}{\partial x}$

4. $x = x^{(k)} - \left( \dfrac{\partial c}{\partial x} \right)^{-1} [c - \psi]$; go to (2.)

5. Compute $L$

6. Compute $\dfrac{\partial L}{\partial x}$, $\dfrac{\partial L}{\partial u}$, $\dfrac{\partial c}{\partial u}$

7. Compute $\lambda^T = -\dfrac{\partial L}{\partial x} \left( \dfrac{\partial c}{\partial x} \right)^{-1}$

8. Compute $\dfrac{\partial \bar{L}}{\partial u} = \dfrac{\partial L}{\partial u} + \lambda^T \dfrac{\partial c}{\partial u}$; If $\left\| \dfrac{\partial \bar{L}}{\partial u} \right\| < \epsilon_2$, stop!

9. $u^{(k+1)} = u^{(k)} - K \dfrac{\partial \bar{L}}{\partial u}^T$ or $u^{(k+1)} = u^{(k)} - \bar{L}_{uu}^{*-1} \dfrac{\partial \bar{L}}{\partial u}^T$; go to (2.)
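A minimal sketch of the first-order version of this algorithm, run on Example 4.4 with $u_2$ treated as the dependent variable $x$ (the gain $K = 0.5$ and tolerances are my own choices):

```python
import numpy as np

# Example 4.4 recast: L = 0.5(x^2 + u^2) - 2(x + u), c(x, u) = x + u, psi = 1
L_x = lambda x, u: np.array([[x - 2.0]])     # dL/dx
L_u = lambda x, u: np.array([[u - 2.0]])     # dL/du
c   = lambda x, u: np.array([[x + u]])       # constraint function
c_x = lambda x, u: np.array([[1.0]])
c_u = lambda x, u: np.array([[1.0]])
psi = np.array([[1.0]])

x, u = 0.0, 0.0                              # step 1: guess x and u
K, eps1, eps2 = 0.5, 1e-12, 1e-9
for _ in range(100):
    # steps 2-4: restore feasibility by Newton iteration on c(x, u) = psi
    while np.linalg.norm(c(x, u) - psi) >= eps1:
        x -= np.linalg.solve(c_x(x, u), c(x, u) - psi)[0, 0]
    lamT = -L_x(x, u) @ np.linalg.inv(c_x(x, u))   # step 7: multipliers
    dLbar_du = L_u(x, u) + lamT @ c_u(x, u)        # step 8: constrained gradient
    if np.linalg.norm(dLbar_du) < eps2:
        break
    u -= K * dLbar_du[0, 0]                        # step 9: first-order update
print(x, u, lamT[0, 0])                            # -> 0.5 0.5 1.5
```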

Inequality-Constrained Parameter Optimization

• Our goal in the equality-constrained parameter optimization problem discussed over the last section was to find a set of parameters

$$y = \begin{bmatrix} y_1 & y_2 & \cdots & y_p \end{bmatrix}^T$$

which minimizes $L(y)$ subject to a set of constraints:

$$c_i(y) = \psi_i, \quad i = 1, 2, \ldots, n$$

where $n < p$

Scalar Parameter / Scalar Constraint

GOAL: minimize $L(y)$ subject to the constraint $c(y) \le 0$

Case 1: constraint inactive (i.e., $c(y^*) < 0$)

• Constraint can be ignored

• So the problem is identical to the unconstrained optimization problem discussed in the previous section

Figure 4.4: Constrained Optimization: Scalar Function Case 1

Case 2: constraint active (i.e., $c(y^*) = 0$)

• If $y^*$ is the optimal solution, then for all admissible perturbations away from $y^*$ the following conditions must exist:

$$\Delta L \approx \left. \frac{dL}{dy} \right|_{y^*} \Delta y \ge 0 \qquad \Delta c \approx \left. \frac{dc}{dy} \right|_{y^*} \Delta y \le 0$$

$$\Rightarrow \quad \operatorname{sgn}\!\left( \frac{dL}{dy} \right) = -\operatorname{sgn}\!\left( \frac{dc}{dy} \right) \quad \text{OR} \quad \frac{dL}{dy} = 0$$

• These two conditions can be neatly summarized into a single relationship using a Lagrange multiplier:

$$\frac{dL}{dy} + \lambda \frac{dc}{dy} = 0, \qquad \lambda \ge 0$$


Figure 4.5: Constrained Optimization: Scalar Function Case 2

– if $\left. dc/dy \right|_{y^*} > 0$, then $\Delta y_{\text{feas}} < 0$, and $\Delta L \ge 0$ requires $\left. dL/dy \right|_{y^*} \le 0$

– if $\left. dc/dy \right|_{y^*} < 0$, then $\Delta y_{\text{feas}} > 0$, and $\Delta L \ge 0$ requires $\left. dL/dy \right|_{y^*} \ge 0$

• In each of these cases, $y^*$ occurs at the constraint boundary and

$$\frac{dL}{dy} + \lambda \frac{dc}{dy} = 0 \quad \text{for } \lambda > 0\,!$$

• The conditions for a minimum in both Case 1 and Case 2 can, in fact, be handled analytically using the same cost function:

$$\bar{L} = L + \lambda c$$

– the necessary conditions for a minimum become:

$$\frac{d\bar{L}}{dy} = 0 \quad \text{and} \quad c(y) \le 0$$

where $\lambda \ge 0$ for $c(y) = 0$ and $\lambda = 0$ for $c(y) < 0$

Example 1

• Consider the function $L(u) = \dfrac{u^2}{2}$

• Unconstrained minimum:

$$\frac{dL}{du} = 0 \;\Rightarrow\; u = 0$$

1. $u \le k, \; k > 0$:

$$\frac{d\bar{L}}{du} = 0 \;\Rightarrow\; u + \lambda = 0$$

$$\lambda = 0 \;\Rightarrow\; u = 0 \quad \text{satisfies constraint}$$

2. $u \le k, \; k < 0$:

$$\frac{d\bar{L}}{du} = 0$$

$$\lambda = 0 \;\Rightarrow\; u = 0 \quad \text{no good}$$

$$\lambda \ne 0 \;\Rightarrow\; u = k \;\Rightarrow\; \lambda = -k > 0\,!$$

Vector Parameter / Scalar Constraint

GOAL: Minimize $L(y)$ subject to $c(y) \le 0$

• For this problem, the results developed above must simply be reinterpreted:

– for all admissible $\Delta y$, $\quad \Delta L \approx \dfrac{\partial L}{\partial y} \Delta y \ge 0$

– if $y^*$ exists on a constraint boundary, then

$$\Delta c \approx \frac{\partial c}{\partial y} \Delta y \le 0 \quad \text{for all admissible } \Delta y$$

$$\Rightarrow \quad \frac{\partial L}{\partial y} + \lambda \frac{\partial c}{\partial y} = 0 \qquad (\lambda \ge 0)$$

– i.e., either $\partial L/\partial y = 0$, or $\partial L/\partial y$ is parallel to $\partial c/\partial y$ and in the opposite direction at $y^*$

Lecture notes prepared by M. Scott Trimboli. Copyright c 2013, M. Scott Trimboli $ ECE5570, Parameter Optimization: Constrained 4–25

Figure 4.6: (a) Inactive Constraint; (b) Active Constraint

• As for the scalar/scalar case, we can handle this problem analytically using the following cost function:

$$\bar{L} = L + \lambda c$$

– the necessary conditions for a minimum are:

$$\frac{\partial \bar{L}}{\partial y} = 0 \quad \text{and} \quad c(y) \le 0$$

where $\lambda \ge 0$ for $c(y) = 0$ and $\lambda = 0$ for $c(y) < 0$

Example: Package Constraint Problem

• Maximize the volume of a rectangular box under the dimension constraint:

$$2(x + y) + z \le D$$

Lecture notes prepared by M. Scott Trimboli. Copyright c 2013, M. Scott Trimboli $ ECE5570, Parameter Optimization: Constrained 4–26

Figure 4.7: Example: Package Constraint Problem

• Objective function:

$$L = -xyz$$

• Solving,

$$\bar{L} = -xyz + \lambda (2x + 2y + z - D)$$

$$\frac{\partial \bar{L}}{\partial u} = \begin{bmatrix} -yz + 2\lambda & -xz + 2\lambda & -xy + \lambda \end{bmatrix}$$

– $\lambda = 0$ ⇒ either $x = 0$ or $y = 0$ ⇒ volume is not maximized

– $\lambda \ne 0$ ⇒

1. $yz = 2\lambda$
2. $xz = 2\lambda$
3. $xy = \lambda$
4. $2x + 2y + z = D$

– combining (1) and (2) gives $x = y$

– combining (3) with this result gives $\lambda = x^2$

– from (2) we get $z = 2x$

– from (4) we compute $x = D/6$

$$\Rightarrow \quad x^* = y^* = D/6, \qquad z^* = D/3$$

A numerical check of this solution appears below.
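A quick numerical check of the package problem, using SciPy's SLSQP with the assumed value $D = 6$:

```python
from scipy.optimize import minimize

D = 6.0  # assumed dimension limit
res = minimize(lambda v: -v[0] * v[1] * v[2],    # minimize -volume
               x0=[1.0, 1.0, 1.0],
               constraints=[{'type': 'ineq',      # D - 2(x + y) - z >= 0
                             'fun': lambda v: D - 2.0 * (v[0] + v[1]) - v[2]}],
               method='SLSQP')
print(res.x)  # -> approx [1. 1. 2.] = [D/6, D/6, D/3]
```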

Vector Parameter / Vector Constraint

• The problem is still exactly the same as before, but the complexity of the solution continues to increase

– if $y^*$ is the optimum, then

$$\Delta L \approx \left. \frac{\partial L}{\partial y} \right|_{y^*} \Delta y \ge 0$$

for all admissible $\Delta y$

– and if $y^*$ lies on a constraint boundary, then

$$\Delta c_i \approx \left. \frac{\partial c_i}{\partial y} \right|_{y^*} \Delta y \le 0 \qquad i = \{1, 2, \ldots, q\}$$

– again, these conditions can be summarized using Lagrange multipliers (only now we need $q$ of them):

$$\frac{\partial L}{\partial y} + \sum_{i=1}^{q} \lambda_i \frac{\partial c_i}{\partial y} = 0, \qquad \lambda_i > 0$$

or in vector notation,

$$\frac{\partial L}{\partial y} + \lambda^T \frac{\partial c}{\partial y} = 0, \qquad \lambda \ge 0$$

– the last equation above is known as the Kuhn-Tucker condition

– interpretation:

1. if no constraints are active, $\lambda = 0$ and $\partial L/\partial y = 0$

2. if some constraints are active, $\partial L/\partial y$ must be a negative linear combination of the appropriate gradients $(\partial c_i/\partial y)$

(a) physically, this means that $-\partial L/\partial y$ must lie inside a cone formed by the active constraint gradients (i.e., $\partial L/\partial y$ at a minimum must be pointed so that any decrease in $L$ can only be obtained by violating the constraints)

Figure 4.8: Vector Parameter / Vector Constraint

• Summary:

– define: $\bar{L} = L + \lambda^T c$

– necessary conditions for a minimum are:

$$\frac{\partial \bar{L}}{\partial y} = 0 \qquad c_i(y) \le 0 \quad i = \{1, 2, \ldots, q\}$$

where $\lambda_i \ge 0$ for $c_i(y) = 0$ and $\lambda_i = 0$ for $c_i(y) < 0$

– question: How many constraints can be active?
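These conditions can be checked mechanically at a candidate point. A minimal sketch (a hypothetical helper, not from the notes): it identifies the active set, fits the multipliers by least squares, and tests their signs and the stationarity residual:

```python
import numpy as np

def kkt_check(grad_L, grad_c, c_vals, tol=1e-8):
    """grad_L: (p,) gradient of L;  grad_c: (q, p) rows = constraint gradients;
    c_vals: (q,) constraint values; feasibility means c_vals <= 0."""
    if np.any(c_vals > tol):
        return False                                 # infeasible point
    act = np.where(np.abs(c_vals) <= tol)[0]         # active constraints
    if act.size == 0:
        return bool(np.linalg.norm(grad_L) <= tol)   # interior: grad must vanish
    # fit grad_L + sum_i lam_i grad_c_i = 0 over the active set
    lam, *_ = np.linalg.lstsq(grad_c[act].T, -grad_L, rcond=None)
    resid = grad_L + grad_c[act].T @ lam
    return bool(np.all(lam >= -tol) and np.linalg.norm(resid) <= tol)

# e.g., the package problem at (x, y, z) = (1, 1, 2) with D = 6:
print(kkt_check(np.array([-2.0, -2.0, -1.0]),    # gradient of -xyz
                np.array([[2.0, 2.0, 1.0]]),     # gradient of 2(x+y)+z-D
                np.array([0.0])))                # active constraint -> True
```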


Linear Programming

• The simplest type of constrained optimization problem occurs when the objective function and the constraint functions are all linear functions of $u$: these are Linear Programming problems

• The standard linear programming problem may be stated as:

$$\min_x f(x) = c^T x$$

subject to $Ax = b, \quad x \ge 0$

• Here, $A$ is dimension $(m \times n)$ and $m \le n$ (usually)

• The coefficients $c$ are often referred to as costs

Example (problem set-up):

$$\text{minimize} \quad x_1 + 2x_2 + 3x_3 + 4x_4$$

$$\text{subject to} \quad x_1 + x_2 + x_3 + x_4 = 1$$
$$x_1 + x_3 - 3x_4 = \tfrac{1}{2}$$
$$x_1 \ge 0, \quad x_2 \ge 0, \quad x_3 \ge 0, \quad x_4 \ge 0$$

• Consider the case where $m = n$, i.e., the number of constraints equals the number of parameters

– in this case, the equations $Ax = b$ determine a unique solution, and the objective function plays no part

• More commonly, $m < n$, and the constraint equations can be used to eliminate $m$ of the variables

Example

• For the example above, we can rearrange the constraint equations to give:

$$x_1 = \frac{1}{2} - x_3 + 3x_4$$
$$x_2 = \frac{1}{2} - 4x_4$$

• Or, alternatively, we can write:

$$x_1 = \frac{7}{8} - \frac{3}{4} x_2 - x_3$$
$$x_4 = \frac{1}{8} - \frac{1}{4} x_2$$

• The objective function $c^T x$ is linear, so it contains no curvature which can give rise to a minimum point

– a minimum point must be created by the conditions $x_i \ge 0$ becoming active on the boundary of the feasible region

• Substituting the second form of these equations into the main problem statement allows us to write

$$f(x) = x_1 + 2x_2 + 3x_3 + 4x_4 = \frac{11}{8} + \frac{1}{4} x_2 + 2x_3$$

• Obviously this function has no minimum unless we impose the bounds $x_2 \ge 0$ and $x_3 \ge 0$; in this case $x_2 = x_3 = 0$ and the minimum is $f_{\min} = \dfrac{11}{8}$.

Example

• Consider the simple set of conditions:

$$x_1 + 2x_2 = 1, \qquad x_1 \ge 0, \qquad x_2 \ge 0$$

Figure 4.9: Linear Programming Example

• The feasible region is the line joining points $a = [0, 1/2]$ and $b = [1, 0]$

• Whenever the objective function is linear, the solution must occur at either $a$ or $b$, with $x_1 = 0$ or $x_2 = 0$

– in the case where $f(x) = x_1 + 2x_2$, any point on the line segment is a solution (non-unique solution)

• Summarizing, a solution of a linear programming problem always exists at one particular extreme point or vertex of the feasible region

– at least $n - m$ variables have value zero

– the remaining $m$ variables are determined uniquely from the equations $Ax = b$ and $x \ge 0$

• The main challenge in solving linear programming problems is finding which $n - m$ variables equal zero at the solution

• One popular method of solution is the simplex method, which tries different sets of possibilities in a systematic manner

– the method generates a sequence of feasible points $x^{(1)}, x^{(2)}, \ldots$ which terminates at a solution

– each iterate $x^{(k)}$ is an extreme point

• We define:

– the set of nonbasic variables (set $N^{(k)}$) as the $n - m$ variables having zero value at $x^{(k)}$

– the set of basic variables (set $B^{(k)}$) as the remaining $m$ variables having non-zero value

• The parameter vector $x$ is partitioned so that the basic variables are the first $m$ elements:

$$x = \begin{bmatrix} x_B \\ x_N \end{bmatrix}$$

• Likewise, we correspondingly partition $A$:

$$A = \begin{bmatrix} A_B & A_N \end{bmatrix}$$

• Then we can write the constraint equations as:

$$\begin{bmatrix} A_B & A_N \end{bmatrix} \begin{bmatrix} x_B \\ x_N \end{bmatrix} = A_B x_B + A_N x_N = b$$

• Also, since $x_N^{(k)} = 0$, we can write

$$x^{(k)} = \begin{bmatrix} x_B^{(k)} \\ x_N^{(k)} \end{bmatrix} = \begin{bmatrix} \hat{b} \\ 0 \end{bmatrix}$$

where

$$\hat{b} = A_B^{-1} b \quad \text{and} \quad \hat{b} \ge 0$$

Example

• Returning to our previous example, let us choose $B = \{1, 2\}$ and $N = \{3, 4\}$ (i.e., variables $x_1$ and $x_2$ are basic, and $x_3$ and $x_4$ are nonbasic)

• We then have,

$$A_B = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}, \qquad A_N = \begin{bmatrix} 1 & 1 \\ 1 & -3 \end{bmatrix},$$

and

$$\hat{b} = A_B^{-1} b = \begin{bmatrix} 0 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 1 \\ 1/2 \end{bmatrix} = \begin{bmatrix} 1/2 \\ 1/2 \end{bmatrix} \ge 0$$

• Since $A_B$ is nonsingular and $\hat{b} \ge 0$, this choice of $B$ and $N$ gives a basic feasible solution

• Solving for the value of the objective function,

$$\hat{f} = c^T x^{(k)} = c_B^T \hat{b}$$

– where we have partitioned $c$ as

$$c = \begin{bmatrix} c_B \\ c_N \end{bmatrix}$$

– and since

$$c_B = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$$

we compute $\hat{f} = 1.5$

• Now examine whether or not our basic feasible solution is optimal

– the essential idea is to eliminate the basic variables from the objective function and reduce it to a function of the nonbasic variables only

– this way we can determine whether an increase in any nonbasic variable will reduce the objective function further

• Reducing $f(x)$ we can write

$$f(x) = c_B^T x_B + c_N^T x_N$$
$$= c_B^T \hat{b} - c_B^T A_B^{-1} A_N x_N + c_N^T x_N$$
$$= c_B^T \hat{b} + \left( c_N^T - c_B^T A_B^{-1} A_N \right) x_N$$
$$= \hat{f} + \hat{c}_N^T x_N$$

where the reduced cost can be written,

$$\hat{c}_N = c_N - A_N^T A_B^{-T} c_B$$

• Now, with $f(x)$ in terms of $x_N$, it is straightforward to check the conditions for which $f(x_N)$ can be reduced

– note here that although $x_N = 0$ at a basic feasible solution, in general, $x_N \ge 0$

– so we define the optimality test: $\hat{c}_N \ge 0$

– if the optimality test is satisfied, our solution is optimal and we terminate the algorithm

– if it is not satisfied, we have more work to do
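For the running example with $B = \{1, 2\}$, the optimality test is a two-line computation. A minimal NumPy sketch (it anticipates the numbers derived below):

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 1.0],
              [1.0, 0.0, 1.0, -3.0]])
c = np.array([1.0, 2.0, 3.0, 4.0])
B, N = [0, 1], [2, 3]          # 0-based indices of x1, x2 (basic) and x3, x4
cN_hat = c[N] - A[:, N].T @ np.linalg.solve(A[:, B].T, c[B])
print(cN_hat)  # -> [ 2. -1.] : a negative entry, so not yet optimal
```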


• Denote the elements of $\hat{c}_N$ by $\hat{c}_N = \begin{bmatrix} \hat{c}_1 & \hat{c}_2 & \cdots \end{bmatrix}^T$

– choose a variable $x_q$ for which $\hat{c}_q < 0$, which implies $f(x_N)$ is decreased by increasing $x_q$ (usually choose the most negative $\hat{c}_q$)

– as $x_q$ increases, in order to keep $Ax = b$ satisfied, $x_B$ changes according to

$$x_B = A_B^{-1} (b - A_N x_N) = \hat{b} - A_B^{-1} A_N x_N$$

– in general, since $x_q$ is the only nonbasic variable changing,

$$x_B = \hat{b} - A_B^{-1} a_q x_q = \hat{b} + d\, x_q, \qquad d = -A_B^{-1} a_q$$

where $a_q$ is the column of matrix $A$ corresponding to $x_q$

– in this development, $d$ behaves like a derivative of $x_B$ w.r.t. $x_q$

– so, our approach is to increase $x_q$ (thereby decreasing $f(x)$) until another element of $x_B$ reaches 0

– it is clear that $x_i$ becomes 0 when

$$x_q = -\frac{\hat{b}_i}{d_i}$$

– since the amount by which $x_q$ can be increased is limited by the first basic variable to become 0, we can state the ratio test:

$$-\frac{\hat{b}_p}{d_p} = \min_{i \in B} \left( -\frac{\hat{b}_i}{d_i} \right)$$

where the minimum is taken over the components with $d_i < 0$.


• Geometrically, the increase in $x_q$ and the corresponding change to $x_B$ cause a move along an edge of the feasible region

• When a new element $x_i$ reaches 0, the new sets $N^{(k+1)}$ and $B^{(k+1)}$ are re-ordered and an iteration of the simplex method is complete

Example

• Finishing our example, we can express $f(x)$ in terms of the reduced cost and $x_N$ as follows:

$$f(x) = \hat{f} + \hat{c}_N^T x_N = 1.5 + \begin{bmatrix} 2 & -1 \end{bmatrix} x_N$$

• Since $\hat{c}_N$ does not satisfy the optimality test, our basic feasible solution is not an optimal one

• The negative value of $\hat{c}_N$ corresponds to $q = 4$, i.e., $\hat{c}_4 = -1$

• So, with

$$a_4 = \begin{bmatrix} 1 \\ -3 \end{bmatrix}$$

we can compute

$$d_4 = -A_B^{-1} a_4 = -\begin{bmatrix} 0 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 1 \\ -3 \end{bmatrix} = \begin{bmatrix} 3 \\ -4 \end{bmatrix}$$

• Therefore,

$$x_B = \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix} + \begin{bmatrix} 3 \\ -4 \end{bmatrix} x_4$$

from which we find that the max value $x_4 = 1/8$ brings the second element of $x_B$ to zero.

• Now, we re-partition:

$$x_B = \begin{bmatrix} x_1 \\ x_4 \end{bmatrix} \qquad x_N = \begin{bmatrix} x_2 \\ x_3 \end{bmatrix}$$

$$A = \begin{bmatrix} A_B & A_N \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -3 & 0 & 1 \end{bmatrix}$$

from which we compute

$$\hat{b} = \begin{bmatrix} 0.8750 \\ 0.1250 \end{bmatrix} \ge 0$$

and

$$\hat{c}_N = \begin{bmatrix} 0.25 \\ 2.0 \end{bmatrix} > 0 \quad \Rightarrow \quad \text{optimal!}$$

• Thus, the optimal solution is

$$x_1^* = 0.5 \qquad x_2^* = 0 \qquad x_3^* = 0 \qquad x_4^* = 0.125$$

giving an optimal cost $f^* = 1.0$
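The same answer drops out of an off-the-shelf LP solver; a minimal check with SciPy (whose default variable bounds are already $x \ge 0$):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0, 3.0, 4.0])
A_eq = np.array([[1.0, 1.0, 1.0, 1.0],
                 [1.0, 0.0, 1.0, -3.0]])
b_eq = np.array([1.0, 0.5])

res = linprog(c, A_eq=A_eq, b_eq=b_eq)  # default bounds: 0 <= x_i
print(res.x, res.fun)                   # -> [0.5  0.  0.  0.125]  1.0
```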


Quadratic Programming

• Quadratic programming is an approach in which the objective function is quadratic and the constraint functions $c_i(x)$ are linear

• General problem statement:

$$\min_x q(x) \equiv \frac{1}{2} x^T G x + g^T x$$

subject to

$$a_i^T x = b_i, \quad i \in E$$
$$a_i^T x \le b_i, \quad i \in I$$

– assume that a solution $x^*$ exists

– if the Hessian $G$ is positive definite, $x^*$ is a unique minimizing solution

• First, we'll develop the equality constrained case, then generalize to the inequality constrained case using an active set strategy

• Quadratic programming is different from Linear Programming in that it is possible to have meaningful problems in which there are no inequality constraints (due to the curvature of the objective function)

Equality Constrained Quadratic Programming

• Problem statement:

$$\min_x q(x) \equiv \frac{1}{2} x^T G x + g^T x$$

subject to $A^T x = b$

– assume there are $m \le n$ constraints and $A$ has rank $m$

• The solution involves using the constraints to eliminate variables

– define

$$x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \quad A = \begin{bmatrix} A_1 \\ A_2 \end{bmatrix}, \quad g = \begin{bmatrix} g_1 \\ g_2 \end{bmatrix}, \quad G = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix}$$

– then we can write the constraint equations as

$$A_1^T x_1 + A_2^T x_2 = b$$

– which, solving for $x_1$, gives

$$x_1 = A_1^{-T} \left( b - A_2^T x_2 \right)$$

– substituting into $q(x)$ yields the equivalent minimization problem:

$$\min_{x_2} \psi(x_2)$$

where

$$\psi(x_2) = \frac{1}{2} x_2^T \left( G_{22} - G_{21} A_1^{-T} A_2^T - A_2 A_1^{-1} G_{12} + A_2 A_1^{-1} G_{11} A_1^{-T} A_2^T \right) x_2$$
$$\qquad + x_2^T \left( G_{21} - A_2 A_1^{-1} G_{11} \right) A_1^{-T} b + \frac{1}{2} b^T A_1^{-1} G_{11} A_1^{-T} b$$
$$\qquad + x_2^T \left( g_2 - A_2 A_1^{-1} g_1 \right) + g_1^T A_1^{-T} b$$

• A unique minimizing solution exists if the Hessian in the quadratic term is positive definite

– then, $x_2^*$ is found by solving the linear system

$$\nabla \psi(x_2) = 0$$

– and $x_1^*$ is found by substitution

• The Lagrange multiplier vector $\lambda^*$ is defined by $g^* = -A \lambda^*$, where $g^* = \nabla q(x^*)$, giving

$$\lambda^* = -A_1^{-1} \left( g_1 + G_{11} x_1^* + G_{12} x_2^* \right)$$
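The elimination recipe can be packaged as a short routine: build the affine map $x = x^0 + Z x_2$ that satisfies the constraints identically, then minimize the reduced quadratic. A minimal sketch (the function name and index argument are mine); it reproduces the example that follows:

```python
import numpy as np

def eq_qp_by_elimination(G, g, A, b, dep):
    """min 0.5 x'Gx + g'x  s.t.  A'x = b  (A is n x m, rank m);
    'dep' lists the m dependent indices eliminated via the constraints."""
    n = G.shape[0]
    ind = [i for i in range(n) if i not in dep]
    A1T, A2T = A[dep, :].T, A[ind, :].T        # A'x = A1'x1 + A2'x2 = b
    x0 = np.zeros(n)                           # particular solution (x2 = 0)
    x0[dep] = np.linalg.solve(A1T, b)
    Z = np.zeros((n, len(ind)))                # constraint-satisfying map
    Z[dep] = -np.linalg.solve(A1T, A2T)
    Z[ind] = np.eye(len(ind))
    # reduced problem: Hessian Z'GZ, gradient Z'(G x0 + g)
    x2 = np.linalg.solve(Z.T @ G @ Z, -Z.T @ (G @ x0 + g))
    return x0 + Z @ x2

G = 2.0 * np.eye(3); g = np.zeros(3)
A = np.array([[1.0, 1.0], [2.0, -1.0], [-1.0, 1.0]])
b = np.array([4.0, -2.0])
print(eq_qp_by_elimination(G, g, A, b, dep=[0, 1]))  # -> [ 0.2857  1.4286 -0.8571]
```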

Example

• Consider the quadratic programming problem given by:

$$\min_{x_1, x_2, x_3} q(x_1, x_2, x_3) = x_1^2 + x_2^2 + x_3^2$$

subject to

$$x_1 + 2x_2 - x_3 = 4$$
$$x_1 - x_2 + x_3 = -2$$

– this corresponds to the general problem form with:

$$G = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix} \qquad g = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$

$$A^T = \begin{bmatrix} 1 & 2 & -1 \\ 1 & -1 & 1 \end{bmatrix} \qquad b = \begin{bmatrix} 4 \\ -2 \end{bmatrix}$$

• Partitioning ⇒

$$A^T = \begin{bmatrix} A_{12}^T & A_3^T \end{bmatrix}, \qquad A_{12}^T = \begin{bmatrix} 1 & 2 \\ 1 & -1 \end{bmatrix}, \qquad A_3^T = \begin{bmatrix} -1 \\ 1 \end{bmatrix}$$

and

$$x = \begin{bmatrix} x_{12} \\ x_3 \end{bmatrix}, \qquad x_{12} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$

– we can invoke the constraint equations to express $x_{12}$ in terms of the element $x_3$:

$$\begin{bmatrix} A_{12}^T & A_3^T \end{bmatrix} \begin{bmatrix} x_{12} \\ x_3 \end{bmatrix} = b \quad\Rightarrow\quad A_{12}^T x_{12} + A_3^T x_3 = b$$

$$x_{12} = A_{12}^{-T} \left( b - A_3^T x_3 \right)$$

which for the present example gives:

$$\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 2 \end{bmatrix} + \begin{bmatrix} -1/3 \\ 2/3 \end{bmatrix} x_3$$

• Substituting back into $q(x)$, we have

$$q(x) = \frac{1}{2} \begin{bmatrix} x_{12}^T & x_3 \end{bmatrix} \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix} \begin{bmatrix} x_{12} \\ x_3 \end{bmatrix} = \frac{1}{2} \left( x_{12}^T G_{11} x_{12} + 2 x_{12}^T G_{12} x_3 + x_3 G_{22} x_3 \right)$$

$$= \frac{1}{2} \left( A_{12}^{-T} b - A_{12}^{-T} A_3^T x_3 \right)^T G_{11} \left( A_{12}^{-T} b - A_{12}^{-T} A_3^T x_3 \right) + \left( A_{12}^{-T} b - A_{12}^{-T} A_3^T x_3 \right)^T G_{12} x_3 + \frac{1}{2} x_3 G_{22} x_3$$

• Substituting values for $G$, $A$, and $b$, we obtain a quadratic expression in the element $x_3$:

$$\psi(x_3) = 1.5556 x_3^2 + 2.6667 x_3 + 4$$

• Since the Hessian

$$\frac{\partial^2 \psi}{\partial x_3^2} = 3.1111 > 0$$

– we conclude the minimizer is unique and found by setting

$$\nabla \psi = \frac{\partial \psi}{\partial x_3} = 0$$

– this yields the solution $x_3^* = -0.8571$, and back substitution yields


$$x_1^* = 0.2857 \qquad x_2^* = 1.4286$$

Method of Lagrange Multipliers

• An alternative approach for generating the solution to quadratic programming problems is via the method of Lagrange multipliers as seen previously

• Consider the augmented cost function:

$$J(x) = \frac{1}{2} x^T G x + g^T x + \lambda^T \left( A^T x - b \right)$$

• The stationarity condition is given by the equations:

$$\frac{\partial J}{\partial x} = G x + g + A \lambda = 0$$

$$\frac{\partial J}{\partial \lambda} = A^T x - b = 0$$

– these equations can be rearranged in the form of a linear system to give

$$\begin{bmatrix} G & A \\ A^T & 0 \end{bmatrix} \begin{bmatrix} x \\ \lambda \end{bmatrix} = \begin{bmatrix} -g \\ b \end{bmatrix}$$

– the matrix on the left is called the Lagrangian Matrix and is symmetric but not positive definite

– several analytical methods of solution make use of the inverse form of the Lagrangian matrix

– solving directly we may write:

$$\lambda = -\left( A^T G^{-1} A \right)^{-1} \left( b + A^T G^{-1} g \right)$$

$$x = -G^{-1} (A\lambda + g)$$

◦ it is interesting to note that $x$ can be written in the form of two terms:

$$x = -G^{-1} g - G^{-1} A \lambda = x^0 - G^{-1} A \lambda$$

where the first term $x^0$ is the global minimum solution to the unconstrained problem and the second term is a correction due to the equality constraints

Example

• Returning to our previous example, the solution without equality constraints is given by:

$$x^0 = -G^{-1} g = 0$$

• Solving for the Lagrange multipliers,

$$\lambda = -\left( A^T G^{-1} A \right)^{-1} \left( b + A^T G^{-1} g \right) = \begin{bmatrix} -1.1429 \\ 0.5714 \end{bmatrix}$$

• The minimizing solution vector is

$$x = x^0 - G^{-1} A \lambda = -\begin{bmatrix} 0.5 & 0 & 0 \\ 0 & 0.5 & 0 \\ 0 & 0 & 0.5 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 2 & -1 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} -1.1429 \\ 0.5714 \end{bmatrix} = \begin{bmatrix} 0.2857 \\ 1.4286 \\ -0.8571 \end{bmatrix}$$
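The whole calculation is a single linear solve of the Lagrangian-matrix system. A minimal NumPy check:

```python
import numpy as np

G = 2.0 * np.eye(3); g = np.zeros(3)
A = np.array([[1.0, 1.0], [2.0, -1.0], [-1.0, 1.0]]); b = np.array([4.0, -2.0])

n, m = A.shape
KKT = np.block([[G, A], [A.T, np.zeros((m, m))]])  # symmetric, indefinite
sol = np.linalg.solve(KKT, np.concatenate([-g, b]))
x, lam = sol[:n], sol[n:]
print(x)    # -> [ 0.2857  1.4286 -0.8571]
print(lam)  # -> [-1.1429  0.5714]
```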

Inequality Constrained Quadratic Programming

• Inequality constrained problems have constraints from among the set $i \in I$, where the number of inequality constraints could be larger than the number of decision variables

• The constraint set $A^T x \le b$ may include both active and inactive constraints

– a constraint $a_i^T x \le b_i$ is said to be active if $a_i^T x = b_i$ and inactive if $a_i^T x < b_i$

• The necessary conditions for a minimum (the Kuhn-Tucker conditions) are:

Gx g A" 0 C C D AT x b 0 # Ä "T AT x b 0 # D " 0 + , " We can express the Kuhn-Tucker conditions in terms of the active ! constraints as:

$$G x + g + \sum_{i \in \mathcal{A}} \lambda_i a_i = 0$$
$$a_i^T x - b_i = 0 \quad i \in \mathcal{A}$$
$$a_i^T x - b_i < 0 \quad i \notin \mathcal{A}$$
$$\lambda_i \ge 0 \quad i \in \mathcal{A}$$
$$\lambda_i = 0 \quad i \notin \mathcal{A}$$

– in other words, the active constraints are equality constraints

• Assuming that $A_{\mathcal{A}}$ and $\lambda_{\mathcal{A}}$ were known (where $\mathcal{A}$ denotes the active set), the original problem can be replaced by the corresponding problem having only equality constraints:

$$\lambda_{\mathcal{A}} = -\left( A_{\mathcal{A}}^T G^{-1} A_{\mathcal{A}} \right)^{-1} \left( b_{\mathcal{A}} + A_{\mathcal{A}}^T G^{-1} g \right)$$

$$x = -G^{-1} \left( g + A_{\mathcal{A}} \lambda_{\mathcal{A}} \right)$$

Active Set Methods

• Active set methods take advantage of the solution to equality constraint problems in order to solve more general inequality constraint problems

• Basic idea: define at each algorithm step a set of constraints (the working set) that is treated as the active set

– the working set, $W$, is a subset of the active set $\mathcal{A}$ at the current

point; the vectors $a_i$, $i \in W$, are linearly independent

– the current point is feasible for the working set

– the algorithm proceeds to an improved point

– an equality constrained problem is solved at each step

• If all $\lambda_i \ge 0$, the point is a local solution

• If some $\lambda_i < 0$, then the objective function can be decreased by relaxing the corresponding constraint

Example [Wang, pg. 60]

• Here we develop a solution to the following problem:

$$\min_x q(x) = \frac{1}{2} x^T \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} x + \begin{bmatrix} -2 & -3 & -1 \end{bmatrix} x$$

subject to:

$$x_1 + x_2 + x_3 \le 1$$
$$3x_1 - 2x_2 - 3x_3 \le 1$$
$$x_1 - 3x_2 + 2x_3 \le 1$$

• The relevant matrices are:

$$G = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad g = \begin{bmatrix} -2 \\ -3 \\ -1 \end{bmatrix}, \quad A^T = \begin{bmatrix} 1 & 1 & 1 \\ 3 & -2 & -3 \\ 1 & -3 & 2 \end{bmatrix}, \quad b = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$$

• A feasible solution of the equality constraints exists since the linear equations $A^T x = b$ are well determined (i.e., $A^T$ is full rank)

• Using the three equality constraints as the first working set, we calculate

$$\lambda = -\left( A^T G^{-1} A \right)^{-1} \left( b + A^T G^{-1} g \right) = \begin{bmatrix} 1.6873 \\ 0.0309 \\ -0.4352 \end{bmatrix}$$

– since $\lambda_3 < 0$, we conclude the third constraint equation is inactive, and omit it from the active set, $\mathcal{A}$

• We then solve the reduced problem,

$$\min_x q(x) = \frac{1}{2} x^T \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} x + \begin{bmatrix} -2 & -3 & -1 \end{bmatrix} x$$

subject to:

$$x_1 + x_2 + x_3 \le 1$$
$$3x_1 - 2x_2 - 3x_3 \le 1$$

• Solving now for the remaining Lagrange multipliers,

$$\lambda = \begin{bmatrix} 1.6452 \\ -0.0323 \end{bmatrix}$$

– like before, we see that $\lambda_2 < 0$, so we conclude that the second constraint equation is inactive, and omit it from $\mathcal{A}$

• We are now left with an equality constraint problem having the single constraint,

$$x_1 + x_2 + x_3 = 1$$

• Solving this problem, we obtain

$$\lambda = 1.6667$$

and compute

$$x^* = \begin{bmatrix} 0.3333 \\ 1.3333 \\ -0.6667 \end{bmatrix}$$

A sketch of this drop-the-most-negative-multiplier loop follows.
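A minimal sketch of the strategy used above: solve the equality-constrained problem on the working set, and drop the most negative multiplier until all multipliers are nonnegative. (This naive loop skips the feasibility bookkeeping a full active-set method needs, but it reproduces the sequence of this example.)

```python
import numpy as np

def naive_active_set(G, g, A, b, work):
    work = list(work)                      # initial working set (indices)
    while True:
        Aw, bw = A[:, work], b[work]
        lam = -np.linalg.solve(Aw.T @ np.linalg.solve(G, Aw),
                               bw + Aw.T @ np.linalg.solve(G, g))
        if np.all(lam >= 0):
            return -np.linalg.solve(G, g + Aw @ lam), lam, work
        work.pop(int(np.argmin(lam)))      # relax most negative multiplier

G = np.eye(3); g = np.array([-2.0, -3.0, -1.0])
A = np.array([[1.0, 3.0, 1.0],             # columns are a1, a2, a3
              [1.0, -2.0, -3.0],
              [1.0, -3.0, 2.0]])
b = np.array([1.0, 1.0, 1.0])
x, lam, work = naive_active_set(G, g, A, b, [0, 1, 2])
print(x, lam, work)  # -> [ 0.3333  1.3333 -0.6667] [1.6667] [0]
```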

Primal-Dual Method

• Active set methods are a subset of the primal methods, wherein solutions are based on the decision (i.e., primal) variables

• Computationally, this approach can become burdensome if the number of constraints is large

• A dual method can often be used to reach the solution of a primal method while realizing a computational savings

• For our present problem, we will identify the Lagrange multipliers as the dual variables; we derive the dual problem as follows

– assuming feasibility, the primal problem is equivalent to:

$$\max_{\lambda \ge 0} \min_x \left[ \frac{1}{2} x^T G x + g^T x + \lambda^T \left( A^T x - b \right) \right]$$

– the minimum over $x$ is unconstrained and given by

$$x = -G^{-1} (g + A \lambda)$$

– substituting into the above expression, we write the dual problem as:

$$\max_{\lambda \ge 0} \left( -\frac{1}{2} \lambda^T H \lambda - \lambda^T K - \frac{1}{2} g^T G^{-1} g \right)$$

where

$$H = A^T G^{-1} A, \qquad K = b + A^T G^{-1} g$$

– this is equivalent to the quadratic programming problem:

$$\min_{\lambda \ge 0} \left( \frac{1}{2} \lambda^T H \lambda + \lambda^T K + \frac{1}{2} g^T G^{-1} g \right)$$

– note that this may be easier to solve than the primal problem since the constraints are simpler

Hildreth’s Quadratic Programming Algorithm

• Hildreth's procedure is a systematic way to solve the dual quadratic programming problem

• The basic idea is to vary the Lagrange multiplier vector one component at a time, $\lambda_i$, adjusting each component to minimize the objective function

• One iteration through the cycle may be expressed as:

$$\lambda_i^{(k+1)} = \max \left( 0, w_i^{(k+1)} \right)$$

where

$$w_i^{(k+1)} = -\frac{1}{h_{ii}} \left[ k_i + \sum_{j=1}^{i-1} h_{ij} \lambda_j^{(k+1)} + \sum_{j=i+1}^{n} h_{ij} \lambda_j^{(k)} \right]$$

– here, we define $h_{ij}$ to be the $ij$-th element of the matrix $H$, and $k_i$ is the $i$-th element of the vector $K$.

• The Hildreth algorithm implements an iterative solution of the linear equations:

$$\lambda = -\left( A^T G^{-1} A \right)^{-1} \left( b + A^T G^{-1} g \right) = -H^{-1} K$$

which can be equivalently expressed as:

$$H \lambda = -K$$

– note that this approach avoids the need to perform a matrix inverse, which leads to a robust algorithm
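A minimal sketch of the cyclic update (a fixed sweep count stands in for a convergence test; variable names are mine). It reproduces the worked example that follows:

```python
import numpy as np

def hildreth(H, K, sweeps=50):
    """Solve min 0.5 lam'H lam + lam'K  s.t.  lam >= 0 by Hildreth's method."""
    lam = np.zeros(len(K))
    for _ in range(sweeps):
        for i in range(len(K)):
            # w_i uses the newest values of the other components
            w = -(K[i] + H[i] @ lam - H[i, i] * lam[i]) / H[i, i]
            lam[i] = max(0.0, w)
    return lam

# H and K from the example below:
H = np.array([[1.0, 1.0, -5.0], [1.0, 2.0, -7.0], [-5.0, -7.0, 29.0]])
K = np.array([1.0, 1.0, -1.0])
print(hildreth(H, K))  # -> [0. 0. 0.0345]
```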

• Once the vector $\lambda$ converges to $\lambda^*$, the solution vector is found as:

$$x^* = -G^{-1} (g + A \lambda^*)$$

Example [Wang, pg. 64]

• Consider the optimization problem

$$\min_x q(x) = x_1^2 - x_1 x_2 + \frac{1}{2} x_2^2 - x_1$$

subject to

$$3x_1 + 2x_2 \le 4$$
$$x_1 \ge 0$$
$$x_2 \ge 0$$

• In standard form it can be shown that

$$G = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix}, \quad g = \begin{bmatrix} -1 \\ 0 \end{bmatrix}, \quad A^T = \begin{bmatrix} -1 & 0 \\ 0 & -1 \\ 3 & 2 \end{bmatrix}, \quad b = \begin{bmatrix} 0 \\ 0 \\ 4 \end{bmatrix}$$

• The global (unconstrained) minimum is:

$$x^0 = -G^{-1} g = -\begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix}^{-1} \begin{bmatrix} -1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$

• It is easily seen that this unconstrained optimum violates the inequality constraints

Figure 4.10: Quadratic Programming Example

• In order to implement the Hildreth algorithm, we form the matrices $H$ and $K$:

$$H = A^T G^{-1} A = \begin{bmatrix} 1 & 1 & -5 \\ 1 & 2 & -7 \\ -5 & -7 & 29 \end{bmatrix} \qquad K = b + A^T G^{-1} g = \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix}$$

• Iteration $k = 0$:

$$\lambda_1^{(0)} = \lambda_2^{(0)} = \lambda_3^{(0)} = 0$$

• Iteration $k = 1$:

$$w_1^{(1)} + 1 = 0$$
$$\lambda_1^{(1)} + 2 w_2^{(1)} + 1 = 0$$
$$-5\lambda_1^{(1)} - 7\lambda_2^{(1)} + 29 w_3^{(1)} - 1 = 0$$

– solving gives:

$$\lambda_1^{(1)} = \max(0, w_1^{(1)}) = 0$$
$$\lambda_2^{(1)} = \max(0, w_2^{(1)}) = 0$$
$$\lambda_3^{(1)} = \max(0, w_3^{(1)}) = 0.0345$$

• Iteration $k = 2$:

$$w_1^{(2)} + \lambda_2^{(1)} - 5\lambda_3^{(1)} + 1 = 0$$
$$\lambda_1^{(2)} + 2 w_2^{(2)} - 7\lambda_3^{(1)} + 1 = 0$$
$$-5\lambda_1^{(2)} - 7\lambda_2^{(2)} + 29 w_3^{(2)} - 1 = 0$$

– solving gives:

$$\lambda_1^{(2)} = \max(0, w_1^{(2)}) = 0$$
$$\lambda_2^{(2)} = \max(0, w_2^{(2)}) = 0$$

The optimal solution is given by: ! 1 x& G# .g A"&/ D# C 1 1 0 1 G# g G# A"& x G# A"& D# # D # 1 0 1 2 1 # 103 0:8276 # # 0 D 1 # 11 0 12 2 3 D 0:7586 " # " # # " # # 0:0345 " # 6 7 4 5
