JOURNAL OF INDUSTRIAL AND MANAGEMENT OPTIMIZATION
Volume 3, Number 3, August 2007, pp. 415–427
Website: http://AIMsciences.org

CONJUGATE DUALITY FOR GENERALIZED CONVEX OPTIMIZATION PROBLEMS

Anulekha Dhara and Aparna Mehra

Department of Mathematics, Indian Institute of Technology Delhi, Hauz Khas, New Delhi-110016, India
(Communicated by M. Mathirajan)

Abstract. Equivalence between a constrained scalar optimization problem and its three conjugate dual models is established for the class of generalized C-subconvex functions. Applying these equivalence relations, optimality conditions in terms of conjugate functions are obtained for the constrained multiobjective optimization problem.

1. Introduction and preliminaries. Two basic approaches have emerged in the literature for studying nonlinear programming duality, namely the Lagrangian approach and the conjugate function approach. The latter approach is based on perturbation theory, which in turn helps to study stability of the optimization problem with respect to variations in the data parameters. That is, the optimal solution of the problem can be considered as a function of the perturbed parameters, and the effect of changes in the parameters on the objective value of the problem can be investigated. In this context, conjugate duality helps to analyze the geometric and the economic aspects of the problem. It is for this reason that, since its inception, conjugate duality has been extensively studied by several authors for convex programming problems (see [1, 6] and the references therein). The class of convex functions possesses rich geometrical properties that enable us to examine the conjugation theory. However, in the nonconvex case, many of these properties fail to hold, thus making it difficult to develop conjugate duality for such classes of functions. Recently, an attempt has been made by Boţ et al. [3] to establish the equivalence between the three conjugate duals and the corresponding primal problem under nearly convexity assumptions. It is important to observe that in their study the convexity of the closure of certain sets plays a crucial role in developing the results. There are many other important classes of generalized convex functions for which this kind of duality theory is yet to be described. Proceeding in this vein, our aim in the present work is to establish the equivalence between a scalar constrained optimization problem and its three conjugate duals, namely the Lagrange dual, the Fenchel dual and the Fenchel-Lagrange dual, under generalized C-subconvexity restrictions besides a suitable constraint qualification.

2000 Mathematics Subject Classification. Primary: 90C26, 90C46; Secondary: 26B25.
Key words and phrases. Nonlinear optimization problems, conjugate duality, generalized C-subconvex function, theorem of alternative.
The first author is supported by a Senior Research Fellowship from the Council of Scientific and Industrial Research, India.

The results are subsequently extended to the multiobjective case.

We first present a general framework necessary for understanding and developing the main results. We denote by $\bar{\Re}$ the extended real line $\Re \cup \{-\infty, +\infty\}$ and let $F : \Re^n \to \bar{\Re}$. Consider the unconstrained optimization problem:

\[ \inf_{x \in \Re^n} F(x). \tag{$P_u$} \]
For $p^* \in \Re^n$, suppose $H(x) = p^{*T}x - b$ is a linear function majorized by $F$, i.e.,
\[ p^{*T}x - F(x) \leqq b, \quad \forall\, x \in \Re^n. \]
The greatest lower bound of $b$ satisfying the above inequality, as a function of $p^*$, is termed the conjugate function of $F$. Formally, the conjugate function $F^* : \Re^n \to \bar{\Re}$ of $F$ is defined as
\[ F^*(p^*) = \sup_{x \in \Re^n} \big(p^{*T}x - F(x)\big). \]
Geometrically, $-F^*(p^*)$ is the intercept with the $F$-axis of the highest linear function having coefficient vector $p^*$ and lying below the function $F$.

In order to formulate the conjugate theory we need to define the perturbed problem. For this, let $\Re^m$ be the space of the perturbation variables. Consider the perturbation function $\phi : \Re^n \times \Re^m \to \bar{\Re}$ satisfying the property that $\phi(x, 0) = F(x)$, $\forall\, x \in \Re^n$. For $q \in \Re^m$, the perturbed problem associated with $(P_u)$ is given by
\[ \inf_{x \in \Re^n} \phi(x, q). \]

We now turn our attention to the following constrained optimization problem
\[ \inf f(x) \quad \text{subject to} \quad g(x) \in -C, \ \ x \in X \tag{$P_c$} \]
where $f : \Re^n \to \bar{\Re}$, $g : \Re^n \to \Re^k$, $C \subseteq \Re^k$ is a nonempty closed convex cone and $X = \mathrm{dom}(f) \cap \{\cap_{i=1}^{k} \mathrm{dom}(g_i)\} \subseteq \Re^n$. The notation $\mathrm{dom}(f)$ stands for the effective domain of $f$, i.e., $\mathrm{dom}(f) = \{x \in \Re^n : f(x) < +\infty\}$. Denote the feasible set of $(P_c)$ by $S = \{x \in X : g(x) \in -C\}$. Then $(P_c)$ can be restated as follows
\[ \inf_{x \in S} f(x). \]
Define a function $F : \Re^n \to \bar{\Re}$ as
\[ F(x) = \begin{cases} f(x), & x \in S \\ +\infty, & \text{otherwise.} \end{cases} \]
Consequently, $(P_c)$ reduces to the unconstrained problem $(P_u)$. The associated perturbation function $\phi$ satisfies the property
\[ \phi(x, 0) = \begin{cases} f(x), & x \in S \\ +\infty, & \text{otherwise.} \end{cases} \]
The conjugate of $\phi$ is defined as $\phi^* : \Re^n \times \Re^m \to \bar{\Re}$,
\[ \phi^*(x^*, q^*) = \sup_{x \in \Re^n,\, q \in \Re^m} \big((x^*, q^*)^T(x, q) - \phi(x, q)\big) = \sup_{x \in \Re^n,\, q \in \Re^m} \big(x^{*T}x + q^{*T}q - \phi(x, q)\big). \]

The perturbation function φ and its conjugate function φ∗ will be used to investigate the conjugate duality results for (Pc).
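As a quick illustration of the conjugate function introduced above (a standard textbook computation, not taken from this paper), consider $F(x) = \tfrac{1}{2}\|x\|^2$ on $\Re^n$:
\[ F^*(p^*) = \sup_{x \in \Re^n} \big(p^{*T}x - \tfrac{1}{2}\|x\|^2\big) = \tfrac{1}{2}\|p^*\|^2, \]
the supremum being attained at $x = p^*$. In particular, $-F^*(p^*)$ is the intercept of the supporting linear function with coefficient vector $p^*$, in line with the geometric interpretation given above.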

2. Lagrange and Fenchel dualities. We first describe a few notations that will be used in the sequel. The dual cone of $C$, denoted by $C^*$, is given by $C^* = \{c^* \in \Re^k : c^{*T}c \geqq 0, \ \forall\, c \in C\}$. We shall consider the ordering $\leqq_C$ in $\Re^k$ induced by $C$ as $y \leqq_C x$ iff $x - y \in C$, $\forall\, x, y \in \Re^k$. Let us recall the three different kinds of conjugate dual models associated with $(P_c)$.

2.1. The Lagrange Dual. The Lagrangian perturbed function $\phi_L : \Re^n \times \Re^k \to \bar{\Re}$ is defined as
\[ \phi_L(x, q) = \begin{cases} f(x), & x \in X, \ g(x) \leqq_C q \\ +\infty, & \text{otherwise,} \end{cases} \]
where $q \in \Re^k$ is the perturbation variable. The conjugate of the function $\phi_L$ is
\begin{align*}
\phi_L^*(x^*, q^*) &= \sup_{x \in X,\, q \in \Re^k,\, g(x) \leqq_C q} \big(x^{*T}x + q^{*T}q - f(x)\big) \\
&= \sup_{x \in X,\, s \in C} \big(x^{*T}x + q^{*T}(s + g(x)) - f(x)\big), \qquad q - g(x) = s \in C \\
&= \begin{cases} \sup_{x \in X} \big(x^{*T}x + q^{*T}g(x) - f(x)\big), & q^* \in -C^* \\ +\infty, & \text{otherwise.} \end{cases}
\end{align*}
For $x \in S$ and $q^* \in C^*$ we have $q^{*T}g(x) \leqq 0$. Consequently
\[ \inf_{x \in X} \big(f(x) + q^{*T}g(x)\big) \leqq \inf_{x \in S} f(x), \]
which implies
\[ \sup_{q^* \in C^*} \inf_{x \in X} \big(f(x) + q^{*T}g(x)\big) \leqq \inf_{x \in S} f(x), \tag{1} \]
i.e.,
\[ \sup_{q^* \in C^*} \big(-\phi_L^*(0, -q^*)\big) \leqq \inf_{x \in S} f(x). \]
The above inequality yields the following form of the Lagrangian dual
\[ \sup_{q^* \in C^*} \inf_{x \in X} \big(f(x) + q^{*T}g(x)\big). \tag{$D_L$} \]
It may be noted that in the construction of the Lagrangian dual the perturbation parameter $q \in \Re^k$ is associated with the constraints of $(P_c)$.
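To make the construction concrete, here is a small convex instance (an illustration of our own, not from the paper), with $f(x) = x^2$, $g(x) = 1 - x$, $C = \Re_+$ and $X = \Re$, so that $S = [1, \infty)$ and $\mathrm{val}(P_c) = 1$:
\[ \inf_{x \in \Re} \big(x^2 + q^*(1 - x)\big) = q^* - \tfrac{(q^*)^2}{4}, \qquad \sup_{q^* \geqq 0} \Big(q^* - \tfrac{(q^*)^2}{4}\Big) = 1 \quad (\text{at } q^* = 2), \]
so $\mathrm{val}(D_L) = \mathrm{val}(P_c)$ and inequality (1) holds with equality in this convex case.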

2.2. The Fenchel Dual. Another dual of $(P_c)$ is the Fenchel dual, which is formulated by introducing a linear perturbation in the objective function variable while the feasible set remains unaltered, i.e.,
\[ \phi_F(x, p) = \begin{cases} f(x + p), & x \in S \\ +\infty, & \text{otherwise,} \end{cases} \]
where $p \in \Re^n$ is the perturbation variable. The conjugate of the function $\phi_F$ is
\begin{align*}
\phi_F^*(x^*, p^*) &= \sup_{x \in S,\, p \in \Re^n} \big(x^{*T}x + p^{*T}p - f(x + p)\big) \\
&= \sup_{r \in \Re^n} \big(p^{*T}r - f(r)\big) + \sup_{x \in S} \big((x^* - p^*)^T x\big), \qquad x + p = r \\
&= f^*(p^*) - \inf_{x \in S} \big((p^* - x^*)^T x\big).
\end{align*}

Using the definition of the conjugate function we have
\[ -f^*(p^*) + p^{*T}x \leqq f(x), \quad \forall\, p^* \in \Re^n, \ \forall\, x \in \Re^n. \]
Thus
\[ \sup_{p^* \in \Re^n} \Big(-f^*(p^*) + \inf_{x \in S} p^{*T}x\Big) \leqq \inf_{x \in S} f(x). \tag{2} \]
This leads to the Fenchel dual problem of the following form
\[ \sup_{p^* \in \Re^n} \Big(-f^*(p^*) + \inf_{x \in S} p^{*T}x\Big). \tag{$D_F$} \]
It is important to observe that in $(D_F)$ the infimum of the linear function $p^{*T}x$ is taken over the feasible set $S$ of $(P_c)$, which comprises the constraint functions $g$.

2.3. The Fenchel-Lagrange Dual. Combining the ideas of the Lagrange dual and the Fenchel dual, Wanka and Boţ [7] proposed the Fenchel-Lagrange dual, in which the perturbation parameters appear both in the constraints and in the objective function variables. The Fenchel-Lagrange perturbed function $\phi_{FL} : \Re^n \times \Re^n \times \Re^k \to \bar{\Re}$ is defined as
\[ \phi_{FL}(x, p, q) = \begin{cases} f(x + p), & x \in X, \ g(x) \leqq_C q \\ +\infty, & \text{otherwise,} \end{cases} \]
where $(p, q) \in \Re^n \times \Re^k$ is the perturbation vector. The conjugate of the function $\phi_{FL}$ is
\begin{align*}
\phi_{FL}^*(x^*, p^*, q^*) &= \sup_{x \in X,\, p \in \Re^n,\, q \in \Re^k,\, g(x) \leqq_C q} \big(x^{*T}x + p^{*T}p + q^{*T}q - f(x + p)\big) \\
&= \sup_{x \in X,\, r \in \Re^n,\, s \in C} \big(x^{*T}x + p^{*T}(r - x) + q^{*T}(s + g(x)) - f(r)\big), \quad q - g(x) = s, \ x + p = r \\
&= \begin{cases} f^*(p^*) + \sup_{x \in X} \big((x^* - p^*)^T x + q^{*T}g(x)\big), & q^* \in -C^* \\ +\infty, & \text{otherwise.} \end{cases}
\end{align*}

The Fenchel-Lagrange dual problem associated with $(P_c)$ is given by
\[ \sup_{p^* \in \Re^n,\, q^* \in C^*} \Big(-f^*(p^*) + \inf_{x \in X} \big(p^{*T}x + q^{*T}g(x)\big)\Big). \tag{$D_{FL}$} \]

The feasibility conditions of $(P_c)$ along with the definition of the conjugate function yield
\[ f(x) \geqq -f^*(p^*) + p^{*T}x + q^{*T}g(x), \quad \forall\, x \in S, \ \forall\, p^* \in \Re^n, \ \forall\, q^* \in C^*, \]
which implies
\[ \inf_{x \in S} f(x) \geqq -f^*(p^*) + \inf_{x \in S} \big(p^{*T}x + q^{*T}g(x)\big) \geqq -f^*(p^*) + \inf_{x \in X} \big(p^{*T}x + q^{*T}g(x)\big), \quad \forall\, p^* \in \Re^n, \ \forall\, q^* \in C^*, \tag{3} \]
thus leading to the Fenchel-Lagrange weak duality.
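Continuing the small convex instance introduced after $(D_L)$ above (again our own illustration, not part of the paper), with $f(x) = x^2$, $g(x) = 1 - x$, $C = \Re_+$, $X = \Re$ and $S = [1, \infty)$, the other two duals attain the same value: since $f^*(p^*) = (p^*)^2/4$,
\[ \mathrm{val}(D_F) = \sup_{p^* \geqq 0} \Big(-\tfrac{(p^*)^2}{4} + p^*\Big) = 1, \qquad \mathrm{val}(D_{FL}) = \sup_{q^* \geqq 0} \Big(-\tfrac{(q^*)^2}{4} + q^*\Big) = 1, \]
where in $(D_F)$ we used $\inf_{x \in S} p^{*T}x = p^*$ for $p^* \geqq 0$ (and $-\infty$ otherwise), and in $(D_{FL})$ the inner infimum over $X = \Re$ is finite only when $p^* = q^*$. Thus (2) and (3) hold with equality here, whereas the nonconvex Example 1 below exhibits a gap.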

The relationship between the optimal values of the three different duals $(D_L)$, $(D_F)$, $(D_{FL})$ and the corresponding optimal value of the primal problem $(P_c)$ has been proved by Boţ et al. [3] and is given by

\[ \mathrm{val}(P_c) \geqq \mathrm{val}(D_L) \geqq \mathrm{val}(D_{FL}) \quad \text{and} \quad \mathrm{val}(P_c) \geqq \mathrm{val}(D_F) \geqq \mathrm{val}(D_{FL}). \]

It may be noted that there is no relationship between the optimal values of $(D_L)$ and $(D_F)$. Moreover, the above inequalities can be strict, entailing that a duality gap between $(P_c)$ and the duals $(D_L)$, $(D_F)$ or $(D_{FL})$ may still persist. The following example supports this assertion.

Example 1. Let
\[ f(x_1, x_2) = \begin{cases} 1 + x_1 x_2, & 0 \leqq x_2 < 1 \\ |x_2 - 2|, & 1 \leqq x_2 \leqq 3 \end{cases} \qquad \text{and} \qquad g(x_1, x_2) = x_1, \]
defined on $X = \{(x_1, x_2) : 0 \leqq x_1 \leqq 2, \ 0 \leqq x_2 < 1 \text{ if } x_1 = 0, \ 0 \leqq x_2 \leqq 3 \text{ if } x_1 > 0\}$, with $C = \{0\}$. Here $\mathrm{val}(P_c) = 1$ while $\mathrm{val}(D_L) = 0$, and hence a duality gap exists; a direct verification is sketched below.

In order to bridge this gap one needs to impose convexity type conditions and some suitable constraint qualifications. In the next section we will focus on these two aspects.
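For concreteness, the optimal values stated in Example 1 can be checked directly (a verification sketch added here, not part of the original text). Since $C = \{0\}$, feasibility means $g(x_1, x_2) = x_1 = 0$, so $S = \{(0, x_2) : 0 \leqq x_2 < 1\}$ and $f \equiv 1$ on $S$, giving $\mathrm{val}(P_c) = 1$. On the other hand $C^* = \Re$, and for any $q^* \geqq 0$,
\[ 0 \leqq \inf_{x \in X} \big(f(x) + q^* x_1\big) \leqq \lim_{x_1 \to 0^+} \big(|2 - 2| + q^* x_1\big) = 0, \]
while for $q^* < 0$ the infimum is negative (take $x_1 = 2$, $x_2 = 2$); hence $\mathrm{val}(D_L) = 0 < \mathrm{val}(P_c)$.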

3. Generalized C-subconvex functions. In this section, we present some important classes of generalized convex functions, with particular emphasis on the class of generalized C-subconvex functions, where C is a closed convex cone. Some of its related properties are investigated. These properties will subsequently be used to prove strong duality theorems between $(P_c)$ and $(D_L)$ and between $(P_c)$ and $(D_{FL})$. For any subset $Y$ of $\Re^n$, the interior of $Y$ is defined as

\[ \mathrm{int}(Y) = \{y \in Y : \exists\, \delta > 0 \text{ such that } B_\delta(y) \subseteq Y\}, \]
where $B_\delta(y)$ is an open ball centered at $y$ with radius $\delta$. The relative interior of $Y$ is defined as

\[ \mathrm{ri}(Y) = \{y \in Y : \exists\, \delta > 0 \text{ such that } B_\delta(y) \cap \mathrm{aff}(Y) \subseteq Y\}. \]
Here $\mathrm{aff}(Y)$ is the affine hull of $Y$. The interior of a nonempty convex set may be empty while its relative interior is always nonempty (see Theorem 2.1.2 [2], Theorem 6.2 [5]). For instance, let $Y = \{(0, y) : y \in \Re\} \subset \Re^2$. Then $\mathrm{ri}(Y) = Y$ whereas $\mathrm{int}(Y) = \emptyset$. The cone generated by the set $Y$, $\mathrm{cone}(Y)$, is defined as
\[ \mathrm{cone}(Y) = \bigcup_{t \geqq 0} tY, \]
while $\mathrm{cone}_+(Y)$ is defined as

\[ \mathrm{cone}_+(Y) = \bigcup_{t > 0} tY. \]
It is obvious that $\mathrm{cone}(Y) = \mathrm{cone}_+(Y) \cup \{0\}$. Next, we recall the definitions of C-convex and nearly C-convex functions.

Definition 1. $G : Y \to \Re^k$ defined on a convex set $Y \subseteq \Re^n$ is said to be C-convex if $\forall\, y_1, y_2 \in Y$, $\forall\, \alpha \in [0, 1]$,

\[ \alpha G(y_1) + (1 - \alpha)G(y_2) \subseteq G(\alpha y_1 + (1 - \alpha)y_2) + C. \]

Definition 2. $G : Y \to \Re^k$ is said to be nearly C-convex on $Y \subseteq \Re^n$ if $\exists\, \alpha \in (0, 1)$ such that $\forall\, y_1, y_2 \in Y$,

\[ \alpha y_1 + (1 - \alpha)y_2 \in Y \]

and
\[ \alpha G(y_1) + (1 - \alpha)G(y_2) \subseteq G(\alpha y_1 + (1 - \alpha)y_2) + C. \]

It is important to note that for nearly C-convex functions the underlying set $Y$ is not necessarily a convex set. Due to this it is easy to construct functions that are nearly C-convex but not C-convex. Also, for a nearly C-convex function $G$, the sets $\overline{G(Y) + C}$ and $\overline{\mathrm{epi}\,G + Y \times C}$, where $\overline{Z}$ denotes the closure of the set $Z$, are convex in $\Re^k$ and $Y \times \Re^k$ respectively, where $\mathrm{epi}\,G = \{(y, v) \in Y \times \Re^k : G(y) \subseteq v + C\}$. Applying the latter characterization as a main tool, Boţ et al. [3] derived the Fenchel-Lagrange conjugate duality for $(P_c)$ under nearly convexity assumptions on the objective and the constraint functions. Our primary goal is to see if these results can be obtained for other classes of nonconvex functions. Here we will be concentrating on the class of generalized C-subconvex functions.

Definition 3. $G : Y \to \Re^k$ defined on a convex set $Y \subseteq \Re^n$ is said to be generalized C-subconvex on $Y$ if $\exists\, u \in \mathrm{ri}(C)$, $\forall\, y_1, y_2 \in Y$, $\forall\, \alpha \in (0, 1)$, $\forall\, \epsilon > 0$, $\exists\, \rho > 0$ such that
\[ \epsilon u + \alpha G(y_1) + (1 - \alpha)G(y_2) \subseteq \rho G(\alpha y_1 + (1 - \alpha)y_2) + C. \tag{4} \]
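As a quick sanity check (a remark of our own, anticipating Remark 3 below): if $G$ is C-convex on the convex set $Y$, then (4) holds with $\rho = 1$ and any $u \in \mathrm{ri}(C)$, since
\[ \epsilon u + \alpha G(y_1) + (1 - \alpha)G(y_2) \subseteq \epsilon u + G(\alpha y_1 + (1 - \alpha)y_2) + C \subseteq G(\alpha y_1 + (1 - \alpha)y_2) + C, \]
the last inclusion using $\epsilon u \in C$ and $C + C \subseteq C$.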

Remark 1. If $G$ is a generalized C-subconvex function on $Y$ then $\mathrm{cone}_+G(Y) + \mathrm{ri}(C)$ is a convex set in $\Re^k$, because for $z_1 = \lambda_1 G(y_1) + r_1$, $z_2 = \lambda_2 G(y_2) + r_2$, with $y_1, y_2 \in Y$ and $r_1, r_2 \in \mathrm{ri}(C)$,
\[ \epsilon u + \frac{\alpha\lambda_1}{\lambda} G(y_1) + \frac{(1 - \alpha)\lambda_2}{\lambda} G(y_2) \subseteq \rho\, G\Big(\frac{\alpha\lambda_1}{\lambda} y_1 + \frac{(1 - \alpha)\lambda_2}{\lambda} y_2\Big) + C \tag{5} \]
where $\lambda = \alpha\lambda_1 + (1 - \alpha)\lambda_2$. Choosing $\epsilon > 0$ sufficiently small such that

\[ \alpha r_1 + (1 - \alpha) r_2 - \epsilon u \lambda \in \mathrm{ri}(C). \]

This along with (5) leads to the convexity of $\mathrm{cone}_+G(Y) + \mathrm{ri}(C)$. Note that the converse characterization is not true in general, as shown in the following example.

Example 2. Consider
\[ G(y) = \begin{cases} y^3, & -1 \leqq y < 0 \\ y^3 - y, & 0 \leqq y \leqq 1. \end{cases} \]
Then $\mathrm{cone}_+G(Y) + \mathrm{ri}(\Re_+)$ is a convex set even though $G$ is not a generalized $\Re_+$-subconvex function.

An important point to observe is that (4) is supposed to hold at the points on the line segment connecting $y_1$ and $y_2$. However, if we replace this criterion by an arbitrary point in $Y$, then we get the well known concept of the generalized C-subconvexlike function defined by Yang et al. [8].

Definition 4. $G : Y \to \Re^k$ defined on $Y \subseteq \Re^n$ is said to be generalized C-subconvexlike on $Y$ if $\exists\, u \in \mathrm{ri}(C)$, $\forall\, y_1, y_2 \in Y$, $\forall\, \alpha \in (0, 1)$, $\forall\, \epsilon > 0$, $\exists\, y_3 \in Y$, $\exists\, \rho > 0$ such that

\[ \epsilon u + \alpha G(y_1) + (1 - \alpha)G(y_2) \subseteq \rho G(y_3) + C. \]

Remark 2. Unlike the case of generalized C-subconvex functions, if $\mathrm{cone}_+G(Y) + \mathrm{ri}(C)$ is a convex set in $\Re^k$, then $G$ is a generalized C-subconvexlike function on $Y$, and conversely.

Remark 3. From the above definitions and discussions one can observe that every C-convex function is generalized C-subconvex, whereas a nearly convex function need not be generalized C-subconvex. Furthermore, a generalized C-subconvex function may be neither C-convex nor nearly C-convex but is generalized C-subconvexlike. The following examples illustrate these claims.

Example 3. Consider $G(x, y) = (x^2, y^2)$, $x > 0$, $y > 0$, and $C = \{(0, y) : y \in \Re\}$. $G(x, y)$ is not a C-convex function but is a generalized C-subconvex function, with $u = (0, 1) \in \mathrm{ri}(C)$ and
\[ \rho = \frac{\alpha x_1^2 + (1 - \alpha) x_2^2}{(\alpha x_1 + (1 - \alpha) x_2)^2} > 0. \]

Example 4. Consider $G(y) = y^3 + |y|$, $-1 \leqq y \leqq 1$. It is neither $\Re_+$-convex nor nearly $\Re_+$-convex but it is generalized $\Re_+$-subconvex on $[-1, 1]$.

Example 5. Let $D = \{(s, 0) : s \in \mathbb{Q}\} \subseteq \Re^2$, where $\mathbb{Q}$ is the set of rationals, and $C = \{(0, t) : t \geqq 0\} \subseteq \Re^2_+$. Consider the identity mapping $G : D \to \Re^2$. Then $G$ is nearly C-convex with $\alpha = 1/2$ but not a generalized C-subconvex function.

Example 6. Let $G(y) = y^3$, $y \in \Re$. Observe that $G$ is generalized $\Re_+$-subconvexlike but not a generalized $\Re_+$-subconvex function.

Remark 4. If the constraint function $g$ of $(P_c)$ is a generalized C-subconvex function on $X$, then $S$ is a convex set. For this, let $x_1, x_2 \in S$, i.e., $g(x_1) \in -C$ and $g(x_2) \in -C$. By generalized C-subconvexity of $g$, $\exists\, u \in \mathrm{ri}(C)$, $\forall\, \alpha \in (0, 1)$, $\forall\, n > 0$, $\exists\, \rho > 0$ such that
\[ \frac{1}{n} u + \alpha g(x_1) + (1 - \alpha) g(x_2) \subseteq \rho g(\alpha x_1 + (1 - \alpha) x_2) + C. \]
Since $-C$ is a closed convex cone, in the limiting case when $n \to \infty$
\[ \frac{1}{n} u + \alpha g(x_1) + (1 - \alpha) g(x_2) \in -C, \quad \forall\, \alpha \in (0, 1). \]
Consequently $\alpha x_1 + (1 - \alpha) x_2 \in S$, $\forall\, \alpha \in (0, 1)$, thereby yielding the desired result.

Remark 5. If $(G_1, G_2)$ is a generalized $(C_1 \times C_2)$-subconvex function, then $G_1$ and $G_2$ are respectively generalized $C_1$-subconvex and generalized $C_2$-subconvex functions, but not conversely.

Example 7. Consider
\[ G_1(x) = \begin{cases} -1, & 0 \leqq x < 1 \\ |x - 2| - 2, & 1 \leqq x \leqq 3 \end{cases} \qquad \text{and} \qquad G_2(x) = x, \quad 0 \leqq x \leqq 3. \]
Both functions are generalized $\Re_+$-subconvex functions on $[0, 3]$ but the vector function $(G_1, G_2)$ is not generalized $\Re_+ \times \Re_+$-subconvex on $[0, 3]$.

Besides generalized C-subconvexity, we will also need the following generalized Slater constraint qualification
\[ 0 \in \mathrm{ri}(g(X) + C). \tag{GSCQ} \]
Observe that in Example 1 (GSCQ) is not satisfied, as $\mathrm{ri}(g(X) + C) = (0, 2)$.
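To contrast with Example 1 (an illustrative check of our own, not from the paper): there $g(X) + C = [0, 2]$, whose relative interior $(0, 2)$ excludes $0$. In the small convex instance used in Section 2, with $g(x) = 1 - x$, $X = \Re$ and $C = \Re_+$, we have $g(X) + C = \Re$, so $0 \in \mathrm{ri}(\Re) = \Re$ and (GSCQ) holds.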

From now onward we assume that the set $X$ in $(P_c)$ is a nonempty convex subset of $\Re^n$ and $K = \Re_+ \times C$, a closed convex cone in $\Re^{1+k}$. The main results of the paper will be obtained by means of the following theorem of alternative for generalized K-subconvex functions.

Theorem 1. Let $F(x) = (f(x) - a, g(x))$, $a \in \Re$, be a generalized K-subconvex function on $X$ and let (GSCQ) hold. If the system
\[ \begin{cases} f(x) - a < 0 \\ g(x) \in -\mathrm{ri}(C) \\ x \in X \end{cases} \]
has no solution, then $\exists\, q^* \in C^*$ such that
\[ f(x) - a + q^{*T} g(x) \geqq 0, \quad \forall\, x \in X. \]

Proof. By Remark 1, $\mathrm{cone}_+F(X) + \mathrm{ri}(K)$ is a convex set. Furthermore, the hypothesis implies that $F(X) \cap -\mathrm{ri}(K) = \emptyset$. Consequently,

\[ (0, 0) \notin \mathrm{cone}_+F(X) + \mathrm{ri}(K), \]
and hence $(0, 0)$ and $\mathrm{cone}_+F(X) + \mathrm{ri}(K)$ can be properly separated. Applying the separation theorem (see Theorem 3.1 [4]), $\exists\, (\lambda_0^*, q_0^*) \neq (0, 0)$, $(\lambda_0^*, q_0^*) \in \mathrm{aff}(\mathrm{cone}_+F(X) + \mathrm{ri}(K))$, i.e., $\lambda_0^* \in \Re$, $q_0^* \in \mathrm{aff}(\mathrm{cone}_+g(X) + \mathrm{ri}(C))$, such that
\[ \lambda_0^*(f(x) - a + r) + q_0^{*T}(g(x) + c) \geqq 0, \quad \forall\, x \in X, \ \forall\, r > 0, \ \forall\, c \in \mathrm{ri}(C). \]
It can easily be shown that $\lambda_0^* \geqq 0$ and $q_0^* \in C^*$. Next we shall prove that $\lambda_0^* \neq 0$. Suppose $\lambda_0^* = 0$. Then $q_0^* \neq 0$ and
\[ q_0^{*T}(g(x) + c) \geqq 0, \quad \forall\, x \in X, \ \forall\, c \in \mathrm{ri}(C). \tag{6} \]
By (GSCQ) and the fact that $\mathrm{ri}(g(X) + C) = \mathrm{ri}(g(X) + \mathrm{ri}(C))$ (see Theorem 3.2 [4]) we have
\[ 0 \in \mathrm{ri}(g(X) + \mathrm{ri}(C)) \subseteq g(X) + \mathrm{ri}(C). \tag{7} \]

Since $\mathrm{aff}(\mathrm{cone}_+g(X) + \mathrm{ri}(C)) \subseteq \mathrm{lin}(g(X) + \mathrm{ri}(C)) = \mathrm{aff}(g(X) + \mathrm{ri}(C))$, where the latter equality follows on account of (7) and the converse relation holds trivially, $-q_0^* \in \mathrm{aff}(g(X) + \mathrm{ri}(C))$. So $-q_0^* = g(x^*) + c^*$ for some $x^* \in X$ and $c^* \in \mathrm{ri}(C)$. This together with (6) implies $q_0^* = 0$, leading to a contradiction. Therefore $\lambda_0^* \neq 0$ and hence we obtain
\[ f(x) - a + r + q^{*T}(g(x) + c) \geqq 0, \quad \forall\, x \in X, \ \forall\, r > 0, \ \forall\, c \in \mathrm{ri}(C), \]

where $q^* = \frac{1}{\lambda_0^*} q_0^* \in C^*$. In particular, for any positive integer $n$,
\[ f(x) - a + r/n + q^{*T}(g(x) + c/n) \geqq 0. \]
Taking the limit $n \to \infty$, we get
\[ f(x) - a + q^{*T} g(x) \geqq 0, \quad \forall\, x \in X. \]

At this stage it is important to point out that convexity of $X$ and $S$ is not required in the above theorem. We are now in a position to derive the equality between the optimal values of $(D_F)$ and $(D_{FL})$ using the above theorem as a principal tool.

Theorem 2. Let $p^* \in \Re^n$ and $\hat{a} = \inf_{x \in S} p^{*T}x$. Suppose $(p^{*T}x - \hat{a}, g(x))$ is a generalized K-subconvex function on $X$ and (GSCQ) holds. Then $\mathrm{val}(D_F) = \mathrm{val}(D_{FL})$.

Proof. Take $f(x) = p^{*T}x$ and $a = \hat{a}$ in $(P_c)$. It follows that the system
\[ \begin{cases} p^{*T}x - \hat{a} < 0 \\ g(x) \in -\mathrm{ri}(C) \\ x \in X \end{cases} \]
has no solution. Since all the conditions of the theorem of alternative (Theorem 1) hold, as a consequence $\exists\, q^* \in C^*$ such that
\[ p^{*T}x - \hat{a} + q^{*T}g(x) \geqq 0, \quad \forall\, x \in X, \]
implying
\[ \sup_{q^* \in C^*} \inf_{x \in X} \big(p^{*T}x + q^{*T}g(x)\big) \geqq \hat{a}. \]
This together with (1) leads to
\[ \sup_{q^* \in C^*} \inf_{x \in X} \big(p^{*T}x + q^{*T}g(x)\big) = \inf_{x \in S} p^{*T}x. \]
Adding $-f^*(p^*)$ to both sides and taking the supremum over $p^* \in \Re^n$, we obtain that $\mathrm{val}(D_{FL}) = \mathrm{val}(D_F)$.

A similar result was also obtained by Boţ et al. [3] under nearly C-convexity conditions (see Lemma 3.5 and Theorem 3.1 [3]). We would like to emphasize here that the vector function $(p^{*T}x - \hat{a}, g(x))$ can be generalized K-subconvex without $g$ being convex on $X$. The following example justifies this assertion.

Example 8. Let $X = [-3, 3]$, $p^* = 1$ and $C = \Re_+$. Define
\[ g(x) = \begin{cases} -\tfrac{5}{4}x - \tfrac{3}{4}, & -3 \leqq x \leqq -1 \\ \tfrac{1}{2}x^2, & -1 \leqq x \leqq 0 \\ 2x^2, & 0 \leqq x \leqq 2 \\ -x^2 + 12, & 2 \leqq x \leqq 3. \end{cases} \]
Clearly $g$ is a nonconvex function on $X$. Moreover $S = \{0\}$, so $\hat{a} = 0$. It can be verified that $(x, g(x))$ is generalized K-subconvex on $X$.

4. Strong dualities. This section is devoted to obtaining the strong duality results between $(P_c)$ and its three different dual models $(D_L)$, $(D_F)$ and $(D_{FL})$, where, by strong duality, we understand that the optimal objective value of the primal problem coincides with the optimal objective value of its dual. Throughout this section we shall assume $a = \inf_{x \in S} f(x)$.

4.1. Lagrange Strong Duality.

Theorem 3. Let $(f(x) - a, g(x))$ be a generalized K-subconvex function on $X$ and let (GSCQ) hold. Then the optimal values of $(P_c)$ and $(D_L)$ coincide.

Proof. The proof follows from Theorem 1 along with inequality (1).

We shall now proceed to prove the strong duality for the other two duals.

4.2. Fenchel Strong Duality.

Theorem 4. Assume the following:

(i) $\mathrm{cone}_+(\mathrm{epi}(f(x) - a))$ is a convex set in $X \times \Re$;
(ii) $g(x)$ is generalized C-subconvex and positively homogeneous on $X$;
(iii) (GSCQ) holds;
(iv) $g(X \backslash S) \subseteq \mathrm{aff}(C)$.

Then the optimal values of $(P_c)$ and $(D_F)$ are equal.

Proof. From the definition of $a$, we obtain that

\[ \mathrm{cone}_+(\mathrm{epi}(f(x) - a)) \cap (\mathrm{ri}(S) \times -\mathrm{ri}(\Re_+)) = \emptyset, \]
because otherwise $\exists\, (x, \alpha) \in \mathrm{epi}(f(x) - a)$, $\lambda > 0$ such that

\[ g(\lambda x) \leqq_C 0 \quad \text{and} \quad \lambda(f(x) - a) \leqq \lambda\alpha < 0, \]
and in view of the positive homogeneity of $g$ this leads to a contradiction. Furthermore, from assumption (ii), $S \times -\Re_+$ is a closed convex cone. Therefore the convex sets $\mathrm{cone}_+(\mathrm{epi}(f(x) - a))$ and $S \times -\Re_+$ can be properly separated (see Theorem 11.3 [5]), i.e., there exist $(p^*, \mu^*) \neq 0$ in $\Re^n \times \Re$ and $b \in \Re$ such that
\[ p^{*T}x + \mu^*\mu \leqq b \leqq p^{*T}\tilde{x} + \mu^*\tilde{\mu}, \quad \forall\, (x, \mu) \in \mathrm{cone}_+(\mathrm{epi}(f(x) - a)), \ \forall\, (\tilde{x}, \tilde{\mu}) \in S \times -\Re_+. \tag{8} \]

By carefully examining (8) and the structure of the set $\mathrm{cone}_+(\mathrm{epi}(f(x) - a))$, we get $\mu^* \leqq 0$. Also
\[ \inf_{(x, \mu) \in \mathrm{cone}_+(\mathrm{epi}(f(x) - a))} \big(p^{*T}x + \mu^*\mu\big) < \sup_{(\tilde{x}, \tilde{\mu}) \in S \times -\Re_+} \big(p^{*T}\tilde{x} + \mu^*\tilde{\mu}\big). \tag{9} \]
We shall prove that $\mu^* \neq 0$. Suppose $\mu^* = 0$. Then (8) implies
\[ p^{*T}x \leqq b \leqq p^{*T}\tilde{x}, \quad \forall\, x \in X, \ \forall\, \tilde{x} \in S. \]
Since $S \subseteq X$, therefore $p^{*T}\tilde{x} = b$, $\forall\, \tilde{x} \in S$.

Since $S$ is a cone, $b = 0$. Furthermore, (9) implies the existence of $x_1 \in X$ such that
\[ p^{*T}x_1 < 0. \tag{10} \]
Due to (GSCQ),
\[ 0 \in \mathrm{ri}(g(X) + C) \subseteq g(X) + \mathrm{ri}(C). \]

Consequently, there exists $x_2 \in X$ such that $g(x_2) \in -\mathrm{ri}(C) \subseteq -C$. Therefore $x_2 \in S$ and hence
\[ p^{*T}x_2 = 0. \tag{11} \]
Also, there exists $\delta > 0$ such that

\[ B_\delta(g(x_2)) \cap \mathrm{aff}(-C) \subseteq -C. \tag{12} \]
By generalized C-subconvexity of $g$, $\exists\, u \in \mathrm{ri}(C)$, $\forall\, \alpha \in (0, 1)$, $\forall\, \epsilon > 0$, $\exists\, \rho > 0$ such that

\[ \epsilon u + \alpha g(x_1) + (1 - \alpha)g(x_2) \subseteq \rho g(\alpha x_1 + (1 - \alpha)x_2) + C. \]
Choosing $\hat{\epsilon} > 0$ and $\hat{\alpha} \in (0, 1)$ small enough such that

\[ \|\hat{\epsilon} u + \hat{\alpha}(g(x_1) - g(x_2))\| < \delta, \]
which implies
\[ \hat{\epsilon} u + \hat{\alpha} g(x_1) + (1 - \hat{\alpha})g(x_2) \in B_\delta(g(x_2)). \tag{13} \]

Invoking the condition $g(X \backslash S) \subseteq \mathrm{aff}(C) = \mathrm{lin}(C)$, we obtain $\hat{\alpha}(g(x_1) - g(x_2)) \in \mathrm{lin}(C)$. Therefore

\[ \hat{\epsilon} u + \hat{\alpha} g(x_1) + (1 - \hat{\alpha})g(x_2) \in \mathrm{lin}(C) = \mathrm{lin}(-C) = \mathrm{aff}(-C). \tag{14} \]

Hence, (13) and (14) along with (12) yield $\hat{\epsilon} u + \hat{\alpha} g(x_1) + (1 - \hat{\alpha})g(x_2) \in -C$. By generalized C-subconvexity of $g$, we get

\[ g(\hat{\alpha} x_1 + (1 - \hat{\alpha})x_2) \in -C \]
and thus $\hat{\alpha} x_1 + (1 - \hat{\alpha})x_2 \in S$. Hence
\[ p^{*T}(\hat{\alpha} x_1 + (1 - \hat{\alpha})x_2) = 0. \]

But on account of (10) and (11), we have
\[ p^{*T}(\hat{\alpha} x_1 + (1 - \hat{\alpha})x_2) < 0, \]
thereby contradicting the above equality. Therefore $\mu^* \neq 0$ and hence $\mu^* < 0$. Taking $p_0^* = \frac{-1}{\mu^*} p^*$ in (8) along with $\tilde{\mu} = 0$, we get
\[ p_0^{*T}x - f(x) + a \leqq p_0^{*T}\tilde{x}, \quad \forall\, x \in X, \ \forall\, \tilde{x} \in S. \]
Consequently,
\[ \inf_{x \in S} f(x) \leqq -f^*(p_0^*) + \inf_{x \in S} p_0^{*T}x, \]
leading to the Fenchel strong duality.

Remark 6. Since $C$ is a convex cone, $-C \subseteq \mathrm{aff}\,C$. Hence $g(S) \subseteq \mathrm{aff}\,C$, and so condition (iv), $g(X \backslash S) \subseteq \mathrm{aff}\,C$, is equivalent to $g(X) \subseteq \mathrm{aff}\,C$.

Example 9. Let $X = (-\infty, 2]$,
\[ f(x) = \begin{cases} -x, & -2 \leqq x \leqq 0 \\ x - x^3, & 0 \leqq x \leqq 1 \\ -2x + 2, & 1 \leqq x \leqq 2 \end{cases} \]
and $g(x) = x$, $\forall\, x \in X$, with $C = \Re_+$. Observe that the assumptions of Theorem 4 are satisfied. Here $\mathrm{val}(P_c) = 0 = \mathrm{val}(D_F)$. Therefore no duality gap exists.

Remark 7. Conditions (i)-(iv) of Theorem 4 provide only sufficient conditions for Fenchel duality.
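Returning to Example 9, its stated values can be checked by hand (a verification sketch of our own, not in the original). Feasibility requires $x \leqq 0$; on $[-2, 0]$ we have $f(x) = -x \geqq 0$ with $f(0) = 0$, so $\mathrm{val}(P_c) = 0$. For the Fenchel dual, take $p^* = -1$: then $-x - f(x)$ equals $0$ on $[-2, 0]$, $x^3 - 2x \leqq 0$ on $[0, 1]$ and $x - 2 \leqq 0$ on $[1, 2]$, so $f^*(-1) = 0$, while $\inf_{x \in S}(-x) = 0$. Hence the dual objective equals $0$ at $p^* = -1$, which by the weak duality (2) is the optimal dual value, i.e., $\mathrm{val}(D_F) = 0$.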

Example 10. In Example 9, if we take $g(x) = |x + 1| - 1$, $\forall\, x \in X$, with $C = \Re_+$, then $g$ is not a positively homogeneous function. Also $S = [-2, 0]$. Here $\mathrm{val}(P_c) = 0 = \mathrm{val}(D_F)$. Instead, if we take $g(x) = |x| - 1$, $\forall\, x \in X$, then $S = [-1, 1]$. Here $\mathrm{val}(P_c) = 0$ and $\mathrm{val}(D_F) = -1$. These examples justify Remark 7.

4.3. Fenchel-Lagrange Strong Duality. Combining Theorem 2 and Theorem 4, the Fenchel-Lagrange strong duality can be stated as follows.

Theorem 5. Assume that the conditions of Theorem 4 hold. Then $\exists\, p^* \in \Re^n$ such that $\mathrm{val}(P_c) = \mathrm{val}(D_F)$. Further, if $(p^{*T}x - \hat{a}, g(x))$ is a generalized K-subconvex function, where $\hat{a} = \inf_{x \in S} p^{*T}x$, then the optimal values of $(P_c)$ and $(D_{FL})$ coincide.

Proof. The proof follows from Theorem 2 and Theorem 4.

5. Multiobjective optimization problem. Our aim in this section is to apply the results of the previous section to derive optimality conditions for the following multiobjective optimization problem:

\[ \inf f(x) = (f_1(x), \ldots, f_m(x)) \quad \text{subject to} \quad g(x) \in -C, \ \ x \in X \tag{$MP_c$} \]
where $f_i : \Re^n \to \bar{\Re}$, $\forall\, i = 1, \ldots, m$, $g : \Re^n \to \Re^k$, $C \subseteq \Re^k$ is a closed convex cone and $X = (\cap_{i=1}^{m} \mathrm{dom}(f_i)) \cap (\cap_{j=1}^{k} \mathrm{dom}(g_j)) \subseteq \Re^n$. Let $X$ be a nonempty convex set and $S = \{x \in X : g(x) \leqq_C 0\}$ be the feasible set.

Definition 5. $\bar{x} \in S$ is said to be a weakly efficient solution of $(MP_c)$ if there does not exist $x \in S$ such that $f_i(x) < f_i(\bar{x})$, $\forall\, i = 1, \ldots, m$.
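For instance (a standard illustration added here, not from the paper), with $m = 2$, $f(x) = (x, 1 - x)$ and $S = [0, 1]$, every $\bar{x} \in S$ is weakly efficient: $f_1(x) < f_1(\bar{x})$ forces $x < \bar{x}$ while $f_2(x) < f_2(\bar{x})$ forces $x > \bar{x}$, so no feasible $x$ can improve both components simultaneously.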

One of the most important approaches to studying a vector optimization problem is to convert it into an equivalent scalar optimization problem and then apply the usual techniques of scalar optimization. The following two results summarize the scalarization technique for $(MP_c)$ under a weakened form of convexity.

Lemma 1. Let $\bar{x}$ be a weak efficient solution of $(MP_c)$. Assume that $(f(x) - f(\bar{x}), g(x))$ is generalized $\Re_+^m \times C$-subconvex on $X$ and (GSCQ) holds. Then there exists $\alpha \in \Re_+^m$, $\alpha^T e = 1$, such that $\bar{x}$ is an optimal solution of the scalar constrained optimization problem $(MP_c^s)$: $\min_{x \in S} \alpha^T f(x)$.

Proof. It follows from the weak efficiency of $\bar{x}$ for $(MP_c)$ that the system

\[ \begin{cases} f_i(x) - f_i(\bar{x}) < 0, & \forall\, i = 1, \ldots, m \\ g(x) \in -\mathrm{ri}(C) \\ x \in X \end{cases} \]
has no solution. The desired result follows by invoking Theorem 1 and thereafter using (GSCQ).

Lemma 2. Let $\bar{x}$ be an optimal solution of the scalar constrained optimization problem $(MP_c^s)$ with $\alpha \in \Re_+^m$, $\alpha^T e = 1$. Then $\bar{x}$ is a weak efficient solution of $(MP_c)$.

Applying the strong duality results of the previous section, we deduce the optimality conditions for the problem $(MP_c^s)$ in terms of the conjugate functions. The Fenchel-Lagrange dual for $(MP_c^s)$ is given by
\[ \sup_{p^* \in \Re^n,\, q^* \in C^*} \Big(-(\alpha^T f)^*(p^*) + \inf_{x \in X} \big(p^{*T}x + q^{*T}g(x)\big)\Big). \tag{$D_{FL}^s$} \]
Since $X \subseteq \mathrm{dom}(g)$, $(D_{FL}^s)$ can be rewritten as
\[ \sup_{p^* \in \Re^n,\, q^* \in C^*} \big(-(\alpha^T f)^*(p^*) - (q^{*T}g)^*(-p^*)\big). \]
Similarly one can define the Lagrange dual $(D_L^s)$ and the Fenchel dual $(D_F^s)$ for the scalarized problem $(MP_c^s)$.
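The rewriting of $(D_{FL}^s)$ above uses only the definition of the conjugate (a one-line check added for clarity; here the conjugate of $q^{*T}g$ is taken with $X$ as the underlying set, as the surrounding assumption $X \subseteq \mathrm{dom}(g)$ suggests):
\[ -(q^{*T}g)^*(-p^*) = -\sup_{x \in X} \big(-p^{*T}x - q^{*T}g(x)\big) = \inf_{x \in X} \big(p^{*T}x + q^{*T}g(x)\big). \]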

Theorem 6. (1) Let $\bar{x} \in X$ be a weak efficient solution of $(MP_c)$. Suppose $(f(x) - f(\bar{x}), g(x))$ is generalized $\Re_+^m \times C$-subconvex on $X$ and the remaining conditions of Theorem 4 are satisfied. Then there exists $(\bar{p}, \bar{q}) \in \Re^n \times C^*$ such that $\mathrm{val}(MP_c^s) = \mathrm{val}(D_L^s) = \mathrm{val}(D_F^s)$. Further, if $(\bar{p}^T x - \bar{a}, g(x))$ is a generalized $\Re_+ \times C$-subconvex function on $X$, where $\bar{a} = \inf_{x \in S} \bar{p}^T x$, then $(\bar{x}, \bar{p}, \bar{q})$ is an optimal solution of $(D_{FL}^s)$ with the optimal values of $(MP_c^s)$ and $(D_{FL}^s)$ coinciding. Moreover,
(i) $(\alpha^T f)(\bar{x}) + (\alpha^T f)^*(\bar{p}) = \bar{p}^T \bar{x}$;
(ii) $(\bar{q}^T g)(\bar{x}) + (\bar{q}^T g)^*(-\bar{p}) = -\bar{p}^T \bar{x}$;
(iii) $(\bar{q}^T g)(\bar{x}) = 0$.

(2) Let $\bar{x} \in X$ be a feasible solution of $(MP_c)$ and $(\bar{x}, \bar{p}, \bar{q})$ a feasible solution of $(D_{FL}^s)$ satisfying (i), (ii) and (iii) stated above. Then $\bar{x}$ is a weak efficient solution of $(MP_c)$, $(\bar{x}, \bar{p}, \bar{q})$ is an optimal solution of $(D_{FL}^s)$ and the strong duality holds between $(MP_c^s)$ and $(D_{FL}^s)$.

Proof. (1) It follows from Lemma 1 that there exists $\alpha \in \Re_+^m$, $\alpha^T e = 1$, such that $\bar{x}$ solves the problem $(MP_c^s)$. Applying the Fenchel-Lagrange strong duality (Theorem 5) to $(MP_c^s)$, $\exists\, (\bar{p}, \bar{q}) \in \Re^n \times C^*$ such that
\[ (\alpha^T f)(\bar{x}) + (\alpha^T f)^*(\bar{p}) + (\bar{q}^T g)^*(-\bar{p}) = 0 \tag{15} \]
which implies
\[ \big((\alpha^T f)(\bar{x}) + (\alpha^T f)^*(\bar{p}) - \bar{p}^T \bar{x}\big) + \big((\bar{q}^T g)(\bar{x}) + (\bar{q}^T g)^*(-\bar{p}) + \bar{p}^T \bar{x}\big) - (\bar{q}^T g)(\bar{x}) = 0. \]
This equality along with the definition of a conjugate function, the feasibility of $\bar{x}$ for $(MP_c)$ and $\bar{q} \in C^*$ yields the desired optimality conditions (i)-(iii).

(2) Using the optimality conditions (i)-(iii) we can deduce (15), which together with the feasibility of $\bar{x}$ for $(MP_c^s)$ and of $(\bar{x}, \bar{p}, \bar{q})$ for $(D_{FL}^s)$ gives
\begin{align*}
\inf_{x \in S} \alpha^T f(x) \leqq \alpha^T f(\bar{x}) &= -(\alpha^T f)^*(\bar{p}) - (\bar{q}^T g)^*(-\bar{p}) \\
&\leqq \sup_{p^* \in \Re^n,\, q^* \in C^*} \big(-(\alpha^T f)^*(p^*) - (q^{*T}g)^*(-p^*)\big) \\
&\leqq \sup_{p^* \in \Re^n,\, q^* \in C^*} \Big(-(\alpha^T f)^*(p^*) + \inf_{x \in X} \big(p^{*T}x + q^{*T}g(x)\big)\Big),
\end{align*}
where the last inequality follows on account of $X \subseteq \Re^n$. The above inequality along with (3) leads to the Fenchel-Lagrange strong duality between $(MP_c^s)$ and $(D_{FL}^s)$. Hence, in view of Lemma 2, we obtain that $\bar{x}$ and $(\bar{x}, \bar{p}, \bar{q})$ are a weak efficient solution of $(MP_c)$ and an optimal solution of $(D_{FL}^s)$ respectively.

It is important to observe that assertion (2) in the above theorem follows without the fulfilment of any constraint qualification or any weakened convexity assumptions. Moreover, if $m = 1$ and $\alpha = 1$ we obtain the optimality conditions for the scalar optimization problem.

6. Conclusions. In this paper we have made an attempt to obtain Fenchel-Lagrange duality results for a class of generalized convex functions. The results are obtained by imposing certain restrictions on the epigraph set of the objective function and also on the feasible set of the problem. This seems to be a limitation of the paper. However, it would be interesting to explore similar results under different sets of relaxed conditions or by using different types of conjugate functions.

REFERENCES

[1] M. Avriel, "Nonlinear Programming: Analysis and Methods," Prentice-Hall, Englewood Cliffs, 1980.
[2] C. R. Bector, S. Chandra and J. Dutta, "Principles of Optimization Theory," Narosa Publishing House, New Delhi, India, 2005.
[3] R. I. Boţ, G. Kassay and G. Wanka, Strong duality for generalized convex optimization problems, J. Optim. Theory Appl., 127 (2005), 45–70.
[4] J. B. G. Frenk and G. Kassay, On classes of generalized convex functions, Gordan-Farkas type theorems and Lagrangian duality, J. Optim. Theory Appl., 102 (1999), 315–343.
[5] R. T. Rockafellar, "Convex Analysis," Princeton University Press, Princeton, New Jersey, 1970.
[6] Y. Sawaragi, H. Nakayama and T. Tanino, "Theory of Multiobjective Optimization," Mathematics in Science and Engineering, 176, Academic Press, New York, 1985.
[7] G. Wanka and R. I. Boţ, On the relations between different dual problems in convex mathematical programming, in "Oper. Res. Proc. 2001" (eds. P. Chamoni, R. Leisten, A. Martin, J. Minnemann and H. Stadtler), Springer-Verlag, Heidelberg, Germany, 2002, 255–262.
[8] X. M. Yang, X. Q. Yang and G. Y. Chen, Theorems of alternative and optimization with set-valued maps, J. Optim. Theory Appl., 107 (2000), 627–640.

Received April 2006; revised October 2006.
E-mail address: anulekha [email protected]