Lecture 4 Closed Functions


Convex Optimization — September 4, 2007

Outline
• Log-Transformation
• Jensen's Inequality
• Level Sets
• Closed Functions
• Convexity and Continuity

Log-Transformation of Variables

Useful for transforming a nonconvex function into a convex one.

A posynomial is a function of y1 > 0, . . . , yn > 0 of the form
  g(y1, . . . , yn) = b y1^{a1} · · · yn^{an}
with scalars b > 0 and ai > 0 for all i.
• A posynomial need not be convex
• Log-transformation of variables: xi = ln yi for all i
• We obtain a convex function
  f(x) = b e^{a1 x1} · · · e^{an xn} = b e^{a′x}

Log-Transformation of Functions

Replacing f with ln f [when f(x) > 0 over dom f] is useful for transforming non-separable functions into separable ones.
• Example (Geometric Mean): f(x) = (Π_{i=1}^n xi)^{1/n} for x with xi > 0 for all i is non-separable. Using F(x) = ln f(x), we obtain a separable F:
  F(x) = (1/n) Σ_{i=1}^n ln xi
• A separable objective function is advantageous in distributed optimization

General Convex Inequality

Basic convex inequality: for a convex f, we have for x, y ∈ dom f and α ∈ (0, 1),
  f(αx + (1 − α)y) ≤ αf(x) + (1 − α)f(y)
General convex inequality: for a convex f and any convex combination of points in dom f, we have
  f(Σ_{i=1}^m αi xi) ≤ Σ_{i=1}^m αi f(xi)
(xi ∈ dom f and αi > 0 for all i, Σ_{i=1}^m αi = 1, m > 0 integer)
A convex combination Σ_{i=1}^m αi xi can be viewed as the expectation of a random vector z having outcomes z = xi with probability αi.

Jensen's Inequality

The general convex inequality can be interpreted as follows: for a convex f and a (finite) discrete random variable z with outcomes zi ∈ dom f, we have
  f(E z) ≤ E[f(z)]
General Jensen's inequality: the above relation holds for a convex f and a random variable z with outcomes in dom f and a finite expectation E z.

Level Sets

Def.
Given a scalar c ∈ R and a function f : R^n → R, a (lower) level set of f associated with c is given by
  Lc(f) = {x ∈ dom f | f(x) ≤ c}
Examples: f(x) = ‖x‖^2 for x ∈ R^n, f(x1, x2) = e^{x1}
• Every level set of a convex function is convex
• The converse is false: consider f(x) = −e^x for x ∈ R; every lower level set is an interval (hence convex), yet f is not convex
Def. A function g is concave when −g is convex.
• Every (upper) level set of a concave function is convex

Convex Extended-Value Functions

• The definition of convexity used thus far applies to functions mapping from a subset of R^n to R. It does not apply to extended-value functions mapping from a subset of R^n to the extended set R ∪ {−∞, +∞}.
• The general definition of convexity relies on the epigraph of a function.
• Let f be a function taking values in R ∪ {−∞, +∞}. The epigraph of f is the set
  epi f = {(x, w) ∈ R^n × R | f(x) ≤ w}
• Def. An extended-value function f is convex when its epigraph epi f is a convex set (in R^n × R).
• This definition coincides with the one used for a function f with values in R.

Closed Functions

Def. A function f is closed if its epigraph epi f is a closed set in R^n × R, i.e., for every sequence {(xk, wk)} ⊂ epi f converging to some (x̂, ŵ), we have (x̂, ŵ) ∈ epi f.
Examples:
• Affine functions are closed [f(x) = a′x + b]
• Quadratic functions are closed [f(x) = x′Px + a′x + b]
• Continuous functions are closed
The class of closed functions is larger than the class of continuous functions. For example,
  f(x) = 0 for x = 0, f(x) = 1 for x > 0, and f(x) = +∞ otherwise
is closed but not continuous.

Closed Function Properties

Lower-Semicontinuity
Def. A function f is lower-semicontinuous at a given vector x0 if for every sequence {xk} converging to x0, we have
  f(x0) ≤ lim inf_{k→∞} f(xk)
We say that f is lower-semicontinuous over a set X if f is lower-semicontinuous at every x ∈ X.

Th.
For a function f : R^n → R ∪ {−∞, +∞}, the following statements are equivalent:
(i) f is closed
(ii) Every level set of f is closed
(iii) f is lower-semicontinuous (l.s.c.) over R^n

Proof of Theorem

(i)⇒(ii) Let c be any scalar and consider Lc(f). If Lc(f) = ∅, then Lc(f) is closed. Suppose now that Lc(f) ≠ ∅. Pick {xk} ⊂ Lc(f) such that xk → x̄ for some x̄ ∈ R^n. We have f(xk) ≤ c for all k, implying that (xk, c) ∈ epi f for all k. Since (xk, c) → (x̄, c) and epi f is closed, it follows that (x̄, c) ∈ epi f. Consequently f(x̄) ≤ c, showing that x̄ ∈ Lc(f).

(ii)⇒(iii) Let x0 ∈ R^n be arbitrary and let {xk} be a sequence such that xk → x0. To arrive at a contradiction, assume that f is not l.s.c. at x0, i.e.,
  lim inf_{k→∞} f(xk) < f(x0)
Then, there exist a scalar γ and a subsequence {xk}_K ⊂ {xk} such that
  f(xk) ≤ γ < f(x0) for all k ∈ K,
yielding {xk}_K ⊂ Lγ(f). Since xk → x0 and the set Lγ(f) is closed, it follows that x0 ∈ Lγ(f). Hence f(x0) ≤ γ — a contradiction. Thus, we must have
  f(x0) ≤ lim inf_{k→∞} f(xk)

(iii)⇒(i) To arrive at a contradiction, assume that epi f is not closed. Then there exists a sequence {(xk, wk)} ⊂ epi f such that (xk, wk) → (x̄, w̄) and (x̄, w̄) ∉ epi f. Since (xk, wk) ∈ epi f for all k, we have f(xk) ≤ wk for all k. Taking the limit inferior as k → ∞, and using wk → w̄, we obtain
  lim inf_{k→∞} f(xk) ≤ lim_{k→∞} wk = w̄.
Since (x̄, w̄) ∉ epi f, we have f(x̄) > w̄, implying that
  lim inf_{k→∞} f(xk) ≤ w̄ < f(x̄).
On the other hand, because xk → x̄ and f is l.s.c. at x̄, we have
  f(x̄) ≤ lim inf_{k→∞} f(xk)
— a contradiction. Hence, epi f must be closed.

Operations Preserving Closedness

• Positive Scaling: For a closed function f : R^n → R ∪ {−∞, +∞} and λ > 0, the function g(x) = λf(x) is closed
• Sum: For closed functions fi : R^n → R ∪ {−∞, +∞}, i = 1, . . . 
, m, the sum g(x) = Σ_{i=1}^m fi(x) is closed
• Composition with an Affine Mapping: For an m × n matrix A, a vector b ∈ R^m, and a closed function f : R^m → R ∪ {−∞, +∞}, the function g(x) = f(Ax + b) is closed. This generalizes to composition with a continuous mapping.

Pointwise Supremum

For a collection of closed functions fi : R^n → R ∪ {−∞, +∞} over an arbitrary index set I, the function
  g(x) = sup_{i∈I} fi(x)
is closed.
• Example: the piecewise-linear (polyhedral) function
  f(x) = max{a1′x + b1, . . . , am′x + bm}
is closed, where ai ∈ R^n and bi ∈ R for all i.

Convexity and Continuity

• Let f : R^n → R with dom f = R^n. If f is convex, then f is continuous over R^n. In this case, a convex f is l.s.c.
Theorem. Let f be convex and such that int(dom f) ≠ ∅. Then f is continuous over int(dom f).
• In this case, a convex f need not be l.s.c. over R^n
• Example:
  f(x) = 1 for x = 0, f(x) = 0 for x > 0, f(x) = +∞ otherwise

Proof of the Theorem

• Using a translation if necessary, we may assume without loss of generality that the origin is in the interior of the domain of f.
• It is sufficient to show that f is continuous at the origin.
• By scaling the unit box if necessary, we may assume without loss of generality that the unit box {x ∈ R^n | ‖x‖∞ ≤ 1} is contained in dom f.
• Let vi, i ∈ I = {1, . . . , 2^n}, be the vertices of the unit box (i.e., each vi has entries 1 or −1).
The unit box is the convex hull of these vertices, i.e., every x with ‖x‖∞ ≤ 1 is a convex combination of the vertices vi, i ∈ I, or equivalently: every x with ‖x‖∞ ≤ 1 is given by
  x = Σ_{i∈I} αi vi with αi ≥ 0 and Σ_{i∈I} αi = 1
• Note that by convexity of f, we have
  f(x) ≤ max_{i∈I} f(vi) = M   (1)
• Let xk → 0 and assume that xk ≠ 0 for all k
• We introduce yk = xk/‖xk‖∞ and zk = −xk/‖xk‖∞
• Note that we can write 0 as a convex combination of xk and zk, as follows:
  0 = (1/(‖xk‖∞ + 1)) xk + (‖xk‖∞/(‖xk‖∞ + 1)) zk for all k
• By convexity of f, it follows that
  f(0) ≤ (1/(‖xk‖∞ + 1)) f(xk) + (‖xk‖∞/(‖xk‖∞ + 1)) f(zk) for all k
• By letting k → ∞ and using Eq. (1), we have
  f(0) ≤ lim inf_{k→∞} f(xk)   (2)
• Note that we can write
  xk = (1 − ‖xk‖∞) 0 + ‖xk‖∞ yk for all k
• By using convexity, we obtain
  f(xk) ≤ (1 − ‖xk‖∞) f(0) + ‖xk‖∞ f(yk)
• Taking the limsup as k → ∞ and using Eq. (1), we obtain lim sup_{k→∞} f(xk) ≤ f(0). Together with Eq. (2), this yields lim_{k→∞} f(xk) = f(0), i.e., f is continuous at 0.
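As a quick numerical companion to the log-transformation slide, the following Python sketch checks midpoint convexity at sample points. The data b = 1, a = (0.5, 0.5) and the sample points are illustrative choices, not taken from the lecture:

```python
import math

B, A = 1.0, (0.5, 0.5)        # illustrative posynomial data: b, (a1, a2)

def posynomial(y):
    # g(y1, y2) = b * y1^a1 * y2^a2, defined for y1, y2 > 0
    return B * math.prod(yi ** ai for yi, ai in zip(y, A))

def transformed(x):
    # f(x) = b * exp(a'x), obtained via the substitution x_i = ln y_i
    return B * math.exp(sum(ai * xi for ai, xi in zip(A, x)))

# The posynomial violates midpoint convexity at these sample points ...
y, z = (1.0, 0.01), (0.01, 1.0)
mid = tuple((yi + zi) / 2 for yi, zi in zip(y, z))
assert posynomial(mid) > 0.5 * (posynomial(y) + posynomial(z))

# ... while the transformed function satisfies it (exp of affine is convex).
x, w = tuple(map(math.log, y)), tuple(map(math.log, z))
xm = tuple((xi + wi) / 2 for xi, wi in zip(x, w))
assert transformed(xm) <= 0.5 * (transformed(x) + transformed(w)) + 1e-12
```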
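The level-set counterexample f(x) = −e^x can likewise be probed numerically. In this sketch, the sample grid and the level c are arbitrary illustrative choices:

```python
import math

def f(x):
    return -math.exp(x)       # concave, hence not convex

# Midpoint convexity fails (f lies above its chords):
assert f(1.0) > 0.5 * (f(0.0) + f(2.0))

# Yet every lower level set {x : f(x) <= c} is convex: for c >= 0 it is all
# of R, and for c < 0 it is the half-line [ln(-c), +inf).
xs = [x / 10 for x in range(-50, 51)]         # sample grid on [-5, 5]
c = -math.exp(1.05)                           # level chosen off the grid
members = [x for x in xs if f(x) <= c]
assert members == [x for x in xs if x >= 1.05]
```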
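The closed-but-not-continuous example can be tested on a concrete sequence; here x_k = 2^{-k} is an arbitrary choice of a sequence converging to 0 from the right:

```python
import math

def f(x):
    # f(0) = 0, f(x) = 1 for x > 0, f(x) = +inf otherwise: closed, not continuous
    if x == 0:
        return 0.0
    return 1.0 if x > 0 else math.inf

xs = [1 / 2 ** k for k in range(1, 30)]       # x_k -> 0 from the right
tail_values = [f(x) for x in xs]              # every value equals 1
assert min(tail_values) == 1.0                # so liminf_k f(x_k) = 1
assert f(0) <= min(tail_values)               # lower semicontinuity at 0 holds
assert min(tail_values) != f(0)               # but continuity at 0 fails
```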
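The piecewise-linear example from the pointwise-supremum slide admits a simple randomized convexity check; the affine pieces below are illustrative:

```python
import random

# (a_i, b_i) pairs for f(x) = max_i (a_i * x + b_i), scalar x.
# The coefficients are illustrative; any finite collection works.
pieces = [(1.0, 0.0), (-1.0, 0.0), (2.0, -1.0)]

def f(x):
    return max(a * x + b for a, b in pieces)

# A pointwise max of affine functions is convex, hence continuous on R,
# hence closed; randomized check of the convexity inequality.
random.seed(0)
for _ in range(1000):
    x, y, t = random.uniform(-5, 5), random.uniform(-5, 5), random.random()
    assert f(t * x + (1 - t) * y) <= t * f(x) + (1 - t) * f(y) + 1e-9
```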
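Finally, the unit-box step of the continuity proof can be verified numerically. The multilinear weights α_v = Π_i (1 + v_i x_i)/2 are a standard explicit choice of convex-combination weights (the lecture does not specify one), and the convex function f below is an arbitrary sample:

```python
import itertools
import math
import random

n = 2
vertices = list(itertools.product([-1.0, 1.0], repeat=n))   # 2^n box vertices

def box_weights(x):
    # Multilinear-interpolation weights: alpha_v = prod_i (1 + v_i * x_i) / 2.
    # On the unit box they are nonnegative, sum to 1, and reproduce x.
    return [math.prod((1 + v[i] * x[i]) / 2 for i in range(n)) for v in vertices]

def f(x):
    # Any convex function works here; exp(x1) + x2^2 is an illustrative choice.
    return math.exp(x[0]) + x[1] ** 2

M = max(f(v) for v in vertices)               # M = max_i f(v_i), as in Eq. (1)

random.seed(1)
for _ in range(200):
    x = tuple(random.uniform(-1, 1) for _ in range(n))
    alphas = box_weights(x)
    assert abs(sum(alphas) - 1) < 1e-12 and all(a >= -1e-12 for a in alphas)
    for i in range(n):   # x is recovered as sum_v alpha_v * v
        assert abs(sum(a * v[i] for a, v in zip(alphas, vertices)) - x[i]) < 1e-9
    assert f(x) <= M + 1e-9                   # the bound f(x) <= M of Eq. (1)
```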