
Convex Optimization — Boyd & Vandenberghe

4. Convex optimization problems

• optimization problem in standard form
• convex optimization problems
• quasiconvex optimization
• linear optimization
• quadratic optimization
• geometric programming
• generalized inequality constraints
• semidefinite programming
• vector optimization


Optimization problem in standard form

    minimize    f_0(x)
    subject to  f_i(x) ≤ 0,  i = 1, ..., m
                h_i(x) = 0,  i = 1, ..., p

• x ∈ R^n is the optimization variable
• f_0 : R^n → R is the objective or cost function
• f_i : R^n → R, i = 1, ..., m, are the inequality constraint functions
• h_i : R^n → R are the equality constraint functions

optimal value:

    p^⋆ = inf { f_0(x) | f_i(x) ≤ 0, i = 1, ..., m,  h_i(x) = 0, i = 1, ..., p }

• p^⋆ = ∞ if the problem is infeasible (no x satisfies the constraints)
• p^⋆ = −∞ if the problem is unbounded below


Optimal and locally optimal points

x is feasible if x ∈ dom f_0 and it satisfies the constraints

a feasible x is optimal if f_0(x) = p^⋆; X_opt is the set of optimal points

x is locally optimal if there is an R > 0 such that x is optimal for

    minimize (over z)  f_0(z)
    subject to         f_i(z) ≤ 0, i = 1, ..., m,   h_i(z) = 0, i = 1, ..., p
                       ‖z − x‖_2 ≤ R

examples (with n = 1, m = p = 0):

• f_0(x) = 1/x, dom f_0 = R_++:  p^⋆ = 0, no optimal point
• f_0(x) = −log x, dom f_0 = R_++:  p^⋆ = −∞
• f_0(x) = x log x, dom f_0 = R_++:  p^⋆ = −1/e, x = 1/e is optimal
• f_0(x) = x^3 − 3x:  p^⋆ = −∞, local optimum at x = 1
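As a quick numerical sanity check of the third example (not part of the original slides), a few lines of Python with scipy's bounded scalar minimizer reproduce p^⋆ = −1/e at x = 1/e; the bracket (1e−9, 10) is an arbitrary stand-in for dom f_0 = R_++.

```python
# Sanity check of the example f_0(x) = x log x on x > 0 (illustrative only):
# the slides state the optimal value is -1/e, attained at x = 1/e.
import numpy as np
from scipy.optimize import minimize_scalar

res = minimize_scalar(lambda x: x * np.log(x), bounds=(1e-9, 10.0), method="bounded")
print(res.x, res.fun)        # approximately 0.3679 and -0.3679
print(1 / np.e, -1 / np.e)   # reference values 1/e and -1/e
```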
Implicit constraints

the standard form optimization problem has an implicit constraint

    x ∈ D = ⋂_{i=0}^{m} dom f_i  ∩  ⋂_{i=1}^{p} dom h_i

• we call D the domain of the problem
• the constraints f_i(x) ≤ 0, h_i(x) = 0 are the explicit constraints
• a problem is unconstrained if it has no explicit constraints (m = p = 0)

example:

    minimize  f_0(x) = −∑_{i=1}^{k} log(b_i − a_i^T x)

is an unconstrained problem with implicit constraints a_i^T x < b_i


Feasibility problem

    find        x
    subject to  f_i(x) ≤ 0,  i = 1, ..., m
                h_i(x) = 0,  i = 1, ..., p

can be considered a special case of the general problem with f_0(x) = 0:

    minimize    0
    subject to  f_i(x) ≤ 0,  i = 1, ..., m
                h_i(x) = 0,  i = 1, ..., p

• p^⋆ = 0 if constraints are feasible; any feasible x is optimal
• p^⋆ = ∞ if constraints are infeasible


Convex optimization problem

standard form convex optimization problem

    minimize    f_0(x)
    subject to  f_i(x) ≤ 0,     i = 1, ..., m
                a_i^T x = b_i,  i = 1, ..., p

• f_0, f_1, ..., f_m are convex; equality constraints are affine
• problem is quasiconvex if f_0 is quasiconvex (and f_1, ..., f_m convex)

often written as

    minimize    f_0(x)
    subject to  f_i(x) ≤ 0,  i = 1, ..., m
                Ax = b

important property: the feasible set of a convex optimization problem is convex
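A minimal modeling sketch of this standard form, assuming the cvxpy package (not referenced by the slides); the quadratic objective and the randomly generated problem data are made up for illustration.

```python
# Standard form convex problem sketch (made-up data; assumes cvxpy): convex
# objective, affine inequality constraints, affine equality constraints.
import numpy as np
import cvxpy as cp

np.random.seed(0)
n, m, p = 5, 3, 2
x_feas = np.random.randn(n)                 # used only to generate feasible data
G, A = np.random.randn(m, n), np.random.randn(p, n)
h, b = G @ x_feas + 1.0, A @ x_feas

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.sum_squares(x)),   # f_0 convex
                  [G @ x <= h,                      # f_i(x) <= 0 with f_i affine
                   A @ x == b])                     # equality constraints affine
prob.solve()
print(prob.status, prob.value)
```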
example

    minimize    f_0(x) = x_1^2 + x_2^2
    subject to  f_1(x) = x_1/(1 + x_2^2) ≤ 0
                h_1(x) = (x_1 + x_2)^2 = 0

• f_0 is convex; feasible set {(x_1, x_2) | x_1 = −x_2 ≤ 0} is convex
• not a convex problem (according to our definition): f_1 is not convex, h_1 is not affine
• equivalent (but not identical) to the convex problem

    minimize    x_1^2 + x_2^2
    subject to  x_1 ≤ 0
                x_1 + x_2 = 0


Local and global optima

any locally optimal point of a convex problem is (globally) optimal

proof: suppose x is locally optimal, but there exists a feasible y with f_0(y) < f_0(x)

x locally optimal means there is an R > 0 such that

    z feasible, ‖z − x‖_2 ≤ R  ⟹  f_0(z) ≥ f_0(x)

consider z = θy + (1 − θ)x with θ = R/(2‖y − x‖_2)

• ‖y − x‖_2 > R, so 0 < θ < 1/2
• z is a convex combination of two feasible points, hence also feasible
• ‖z − x‖_2 = R/2 and

    f_0(z) ≤ θ f_0(y) + (1 − θ) f_0(x) < f_0(x)

which contradicts our assumption that x is locally optimal


Optimality criterion for differentiable f_0

x is optimal if and only if it is feasible and

    ∇f_0(x)^T (y − x) ≥ 0  for all feasible y

[figure: feasible set X with x on its boundary and the outward direction −∇f_0(x)]

if nonzero, ∇f_0(x) defines a supporting hyperplane to the feasible set X at x

• unconstrained problem: x is optimal if and only if

    x ∈ dom f_0,   ∇f_0(x) = 0

• equality constrained problem

    minimize f_0(x)   subject to Ax = b

  x is optimal if and only if there exists a ν such that

    x ∈ dom f_0,   Ax = b,   ∇f_0(x) + A^T ν = 0

  (a numerical sketch for the quadratic case follows after this list)

• minimization over the nonnegative orthant

    minimize f_0(x)   subject to x ⪰ 0

  x is optimal if and only if

    x ∈ dom f_0,   x ⪰ 0,   ∇f_0(x)_i ≥ 0 if x_i = 0,   ∇f_0(x)_i = 0 if x_i > 0
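To make the equality-constrained condition concrete, here is a small sketch (made-up data) for a convex quadratic f_0(x) = (1/2) x^T P x + q^T x: the condition ∇f_0(x) + A^T ν = 0 together with Ax = b is then a single linear system in (x, ν).

```python
# Equality-constrained example (made-up data): for f_0(x) = (1/2) x^T P x + q^T x
# with constraint Ax = b, the condition "Ax = b, grad f_0(x) + A^T nu = 0" is
# the linear system  [P A^T; A 0] [x; nu] = [-q; b].
import numpy as np

np.random.seed(1)
n, p = 4, 2
M = np.random.randn(n, n)
P = M @ M.T + np.eye(n)                      # positive definite, so f_0 is convex
q = np.random.randn(n)
A, b = np.random.randn(p, n), np.random.randn(p)

KKT = np.block([[P, A.T], [A, np.zeros((p, p))]])
sol = np.linalg.solve(KKT, np.concatenate([-q, b]))
x, nu = sol[:n], sol[n:]
print(np.allclose(P @ x + q + A.T @ nu, 0))  # optimality condition holds
print(np.allclose(A @ x, b))                 # feasibility holds
```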
0</sub>(x)">
Equivalent convex problems

two problems are (informally) equivalent if the solution of one is readily obtained from the solution of the other, and vice-versa

some common transformations that preserve convexity:

• eliminating equality constraints

    minimize    f_0(x)
    subject to  f_i(x) ≤ 0,  i = 1, ..., m
                Ax = b

  is equivalent to

    minimize (over z)  f_0(Fz + x_0)
    subject to         f_i(Fz + x_0) ≤ 0,  i = 1, ..., m

  where F and x_0 are such that

    Ax = b  ⟺  x = Fz + x_0 for some z

  (a sketch of one way to construct F and x_0 follows after this list)

• introducing equality constraints

    minimize    f_0(A_0 x + b_0)
    subject to  f_i(A_i x + b_i) ≤ 0,  i = 1, ..., m

  is equivalent to

    minimize (over x, y_i)  f_0(y_0)
    subject to              f_i(y_i) ≤ 0,  i = 1, ..., m
                            y_i = A_i x + b_i,  i = 0, 1, ..., m

• introducing slack variables for linear inequalities

    minimize    f_0(x)
    subject to  a_i^T x ≤ b_i,  i = 1, ..., m

  is equivalent to

    minimize (over x, s)  f_0(x)
    subject to            a_i^T x + s_i = b_i,  i = 1, ..., m
                          s_i ≥ 0,  i = 1, ..., m

• epigraph form: the standard form convex problem is equivalent to

    minimize (over x, t)  t
    subject to            f_0(x) − t ≤ 0
                          f_i(x) ≤ 0,  i = 1, ..., m
                          Ax = b

• minimizing over some variables

    minimize    f_0(x_1, x_2)
    subject to  f_i(x_1) ≤ 0,  i = 1, ..., m

  is equivalent to

    minimize    f̃_0(x_1)
    subject to  f_i(x_1) ≤ 0,  i = 1, ..., m

  where f̃_0(x_1) = inf_{x_2} f_0(x_1, x_2)
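0</sub>(x)">
One standard way to construct the F and x_0 used in eliminating equality constraints is sketched below (made-up A, b; assumes scipy): take x_0 to be any particular solution of Ax = b and let the columns of F span the null space of A.

```python
# Eliminating equality constraints (made-up data; assumes scipy): build x_0 and F
# so that Ax = b  <=>  x = F z + x_0 for some z, as used in the transformation above.
import numpy as np
from scipy.linalg import null_space

np.random.seed(2)
p, n = 2, 5
A, b = np.random.randn(p, n), np.random.randn(p)

x0 = np.linalg.lstsq(A, b, rcond=None)[0]    # a particular solution of Ax = b
F = null_space(A)                            # columns span the null space of A

z = np.random.randn(F.shape[1])
print(np.allclose(A @ (F @ z + x0), b))      # any z gives a point satisfying Ax = b
```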
Quasiconvex optimization

    minimize    f_0(x)
    subject to  f_i(x) ≤ 0,  i = 1, ..., m
                Ax = b

with f_0 : R^n → R quasiconvex, f_1, ..., f_m convex

can have locally optimal points that are not (globally) optimal

[figure: graph of a quasiconvex f_0 with a locally optimal point (x, f_0(x)) that is not globally optimal]

convex representation of sublevel sets of f_0

if f_0 is quasiconvex, there exists a family of functions φ_t such that:

• φ_t(x) is convex in x for fixed t
• the t-sublevel set of f_0 is the 0-sublevel set of φ_t, i.e.,

    f_0(x) ≤ t  ⟺  φ_t(x) ≤ 0

example

    f_0(x) = p(x)/q(x)

with p convex, q concave, and p(x) ≥ 0, q(x) > 0 on dom f_0

can take φ_t(x) = p(x) − t q(x):

• for t ≥ 0, φ_t is convex in x
• p(x)/q(x) ≤ t if and only if φ_t(x) ≤ 0

quasiconvex optimization via convex feasibility problems

    φ_t(x) ≤ 0,   f_i(x) ≤ 0, i = 1, ..., m,   Ax = b        (1)

• for fixed t, a convex feasibility problem in x
• if feasible, we can conclude that t ≥ p^⋆; if infeasible, t ≤ p^⋆

Bisection method for quasiconvex optimization

    given l ≤ p^⋆, u ≥ p^⋆, tolerance ε > 0.
    repeat
        1. t := (l + u)/2.
        2. Solve the convex feasibility problem (1).
        3. if (1) is feasible, u := t; else l := t.
    until u − l ≤ ε.

requires exactly ⌈log_2((u − l)/ε)⌉ iterations (where u, l are the initial values)
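A sketch of this bisection method for a quasiconvex linear-fractional objective, assuming cvxpy; the data, the box feasible set, and the initial bracket [l, u] are made up for illustration, and φ_t(x) = c^T x + d − t(e^T x + f) is the convex representation described above.

```python
# Bisection sketch for the quasiconvex f_0(x) = (c^T x + d)/(e^T x + f) over a box
# (made-up data; assumes cvxpy). Each step solves the convex feasibility problem
# phi_t(x) <= 0, x in box, with phi_t(x) = c^T x + d - t (e^T x + f).
import numpy as np
import cvxpy as cp

np.random.seed(3)
n = 3
c, d = np.random.randn(n), 1.0
e, f = np.abs(np.random.randn(n)), 5.0       # keeps e^T x + f > 0 on the box below

def feasible(t):
    x = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(0),
                      [c @ x + d - t * (e @ x + f) <= 0,   # phi_t(x) <= 0
                       x >= -1, x <= 1])                    # convex feasible set
    prob.solve()
    return prob.status == cp.OPTIMAL

l, u, eps = -10.0, 10.0, 1e-4                # assumes l <= p* <= u initially
while u - l > eps:
    t = (l + u) / 2
    if feasible(t):
        u = t                                 # feasible, so t >= p*
    else:
        l = t                                 # infeasible, so t <= p*
print("p* is approximately", (l + u) / 2)
```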
Linear program (LP)

    minimize    c^T x + d
    subject to  Gx ⪯ h
                Ax = b

• convex problem with affine objective and constraint functions
• feasible set is a polyhedron

[figure: polyhedron P of feasible points, with the optimal point x^⋆ at a vertex in the direction −c]

Examples

diet problem: choose quantities x_1, ..., x_n of n foods

• one unit of food j costs c_j and contains amount a_ij of nutrient i
• a healthy diet requires nutrient i in quantity at least b_i

to find the cheapest healthy diet,

    minimize    c^T x
    subject to  Ax ⪰ b,  x ⪰ 0

piecewise-linear minimization

    minimize    max_{i=1,...,m} (a_i^T x + b_i)

equivalent to an LP

    minimize    t
    subject to  a_i^T x + b_i ≤ t,  i = 1, ..., m


Chebyshev center of a polyhedron

Chebyshev center of

    P = {x | a_i^T x ≤ b_i, i = 1, ..., m}

is the center of the largest inscribed ball

    B = {x_c + u | ‖u‖_2 ≤ r}

[figure: polyhedron P with its largest inscribed ball, centered at x_cheb]

• a_i^T x ≤ b_i for all x ∈ B if and only if

    sup { a_i^T (x_c + u) | ‖u‖_2 ≤ r } = a_i^T x_c + r‖a_i‖_2 ≤ b_i

• hence, x_c and r can be determined by solving the LP

    maximize    r
    subject to  a_i^T x_c + r‖a_i‖_2 ≤ b_i,  i = 1, ..., m
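The Chebyshev-center LP can be handed directly to an off-the-shelf LP solver; the sketch below uses scipy.optimize.linprog on a made-up polyhedron (the unit square), maximizing r by minimizing −r and leaving x_c unbounded.

```python
# Chebyshev center via the LP above, using scipy.optimize.linprog on a made-up
# polyhedron (the unit square 0 <= x_1, x_2 <= 1). Variables are (x_c, r).
import numpy as np
from scipy.optimize import linprog

A = np.array([[ 1.0,  0.0],
              [-1.0,  0.0],
              [ 0.0,  1.0],
              [ 0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])          # a_i^T x <= b_i describes [0, 1]^2

norms = np.linalg.norm(A, axis=1)
A_ub = np.hstack([A, norms[:, None]])       # rows: a_i^T x_c + r ||a_i||_2 <= b_i
cost = np.array([0.0, 0.0, -1.0])           # maximize r  <=>  minimize -r

res = linprog(cost, A_ub=A_ub, b_ub=b,
              bounds=[(None, None), (None, None), (0, None)])
x_c, r = res.x[:2], res.x[2]
print(x_c, r)                               # center (0.5, 0.5), radius 0.5
```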
Linear-fractional program

    minimize    f_0(x)
    subject to  Gx ⪯ h
                Ax = b

linear-fractional program:

    f_0(x) = (c^T x + d)/(e^T x + f),   dom f_0 = {x | e^T x + f > 0}

• a quasiconvex optimization problem; can be solved by bisection
• also equivalent to the LP (variables y, z)

    minimize    c^T y + d z
    subject to  Gy ⪯ hz
                Ay = bz
                e^T y + fz = 1
                z ≥ 0
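A sketch of this LP transformation, assuming cvxpy and made-up data; the equality constraint Ax = b is omitted for brevity, and Gx ⪯ h here encodes the box −1 ⪯ x ⪯ 1. The LP is solved in (y, z) and the solution of the original problem is recovered as x = y/z.

```python
# LP transformation of a linear-fractional program (made-up data; assumes cvxpy).
# Here Gx <= h encodes the box -1 <= x <= 1 and the Ax = b constraint is omitted.
import numpy as np
import cvxpy as cp

np.random.seed(4)
n = 3
G = np.vstack([np.eye(n), -np.eye(n)])
h = np.ones(2 * n)
c, d = np.random.randn(n), 0.5
e, f = np.abs(np.random.randn(n)), 4.0       # e^T x + f > 0 on the box

y, z = cp.Variable(n), cp.Variable()
lp = cp.Problem(cp.Minimize(c @ y + d * z),
                [G @ y <= h * z,             # Gy <= hz
                 e @ y + f * z == 1,
                 z >= 0])
lp.solve()
x = y.value / z.value                        # recover the original variable
print(lp.value, (c @ x + d) / (e @ x + f))   # the two optimal values should agree
```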