Generalized Convex Alternative Theorems and Cone Constrained Optimization

Maria Caridad Natividad

A thesis submitted for the degree of Master of Science in the School of Mathematics, University of New South Wales, Australia

July 1992

Abstract

This thesis studies nonlinear alternative theorems and their applications to cone constrained optimization problems. We examine alternative theorems involving generalized cone convex functions. The different approaches to proving an alternative theorem and the relationships with other fundamental results in optimization and mathematical programming are studied. We also study the role of alternative theorems in the derivation of necessary optimality conditions and in the development of the theory.

A new class of quasiconvex mappings, called *-quasiconvex, is defined and studied, for which an alternative theorem is proved using a minimax theorem. This alternative theorem is applied to study cone constrained optimization problems involving *-quasiconvex mappings. Thus, we develop generalized Lagrangian optimality and duality results and therefore extend the fundamental Lagrangian results for convex programming problems to a new class of quasiconvex problems. Moreover, we also obtain necessary optimality conditions for a class of nondifferentiable programs using upper approximations.

Recently, Jeyakumar and Gwinner [30] established new versions of alternative theorems involving nonconvex inequality systems which use a new local closedness condition. From these theorems, approximate Lagrange multiplier results, the zero duality gap property and certain stability results were derived. Here, we examine these versions of alternative theorems for generalized cone convex systems and apply the theorems to study the results that follow from them for cone constrained optimization problems. We study the relationship between the stability of the primal problem and the zero duality gap property, and the characterization of the ε-saddlepoints using ε-subdifferentials of the value function of the primal problem at the point of no perturbation. Finally, we examine a stable alternative theorem which also gives a sufficient condition for the local closedness assumption.

Contents

Abstract

Contents

Acknowledgements

1 Introduction

2 Preliminaries

2.1 Convexity of Sets and Functions
2.2 Generalized Convexity
2.3 Continuity and Subdifferentiability of Functions
2.4 Linear Alternative Theorems and their Applications

3 Approaches to Nonlinear Alternative Theorems

3.1 Introduction
3.2 Nonlinear Alternative Theorems via Separation Theorems
3.3 An Alternative Theorem via a Minimax Theorem
3.4 An Alternative Theorem via a Lagrangian Theorem
3.5 Optimality Conditions and Duality

4 An Alternative Theorem for *-quasiconvex Mappings and Nondifferentiable Optimization

4.1 Introduction
4.2 *-quasiconvex Mappings
4.3 An Alternative Theorem for *-quasiconvex Mappings
4.4 Generalized Lagrangian Theorems for *-quasiconvex Cone Constrained Problems
4.5 Necessary Optimality Conditions for a Class of Nondifferentiable Programs
4.6 Upper Approximations and *-quasiconvexity

5 ε-Alternative Theorems and Zero Duality Gaps

5.1 Introduction
5.2 ε-Alternative Theorems
5.3 Approximate Lagrange Multipliers and Zero Duality Gaps
5.4 Nearly Stable Problems
5.5 A Stable Alternative Theorem

Bibliography

Acknowledgements

I would like to express my deepest gratitude to my supervisor Dr. Vaithilingam Jeyakumar for his untiring guidance and support throughout the course of my study and for teaching me not only mathematics but also what dedication is all about. Some of the results in Chapter 4 of this thesis were obtained in collaboration with him and Professor W. Oettli and appeared in [31].

I would also like to thank Esther Nababan and Xin Tian for helping me in the material preparations of the thesis. I am also indebted to my family for their support and patience and to my friends including Xiaojun Chen and Sadhana Subramani for making my stay in Sydney an enjoyable experience.

This thesis would not have been possible without the financial assistance of the Association for Educational Projects Ltd. I am deeply grateful for their support.

Finally, I would like to thank the School of Mathematics for providing excellent facilities and a stimulating working environment.

Ad Dominam Nostram, Sedes Sapientiae.

Chapter 1

Introduction

Alternative theorems play an important role in the derivation of necessary optimality conditions, in the development of the Lagrangian duality theory and in the scalarization of vector-valued optimization problems. These theorems have been extensively studied by various authors in the past three decades. One of the earliest linear alternative theorems was obtained by Farkas in 1902 (see [35]) and one of the first versions of the nonlinear case, which involves convex functions, was given by Fan, Glicksberg and Hoffman in 1957 (see [11]). Since then, different versions of alternative theorems (that is, in finite or infinite dimensional cases and involving convex or generalized convex functions) have been established in the literature (see [5], [7], [25], [28], [19] and the references therein). A standard way of proving an alternative theorem is via a separation theorem. Recently, it has been shown that under certain conditions, an alternative theorem can be proved using a minimax theorem (see [28], [8] and the references therein) and that under appropriate conditions, alternative, minimax and Lagrangian theorems are closely related to each other. This thesis studies nonlinear alternative theorems (also called solvability theorems or transposition theorems) involving generalized cone-convex functions, which include a new class of generalized convex functions, and their applications to cone constrained optimization problems.

Alternative theorems which hold for systems of convex functions may not be generally extended to systems of quasiconvex functions (see [14]). Therefore, results that follow from an alternative theorem, such as necessary optimality conditions and the development of the standard Lagrangian duality theory for optimization problems, may not be obtained for programming problems involving quasiconvex functions. In chapter 4 of this thesis, we study a new class of quasiconvex mappings, called *-quasiconvex, for which an alternative theorem can be proved using a version of the minimax theorem for quasiconcave-convex functions of Sion [41], and develop optimality and duality theory for optimization problems involving *-quasiconvex mappings by applying the alternative theorem. Some of these results appear in the recent report by Jeyakumar, Oettli and Natividad [31].

On the other hand, it is known that the existence of a Lagrange multiplier plays a vital role in the development of the duality theory and the stability of optimization problems. Very recently, Jeyakumar and Gwinner [30] and Jeyakumar and Wolkowicz [32] studied nonconvex optimization problems involving inequality systems for which no Lagrange multiplier exists, yet the duality gap between the primal problem and the corresponding dual problem is zero (that is, the optimal values of the primal and dual problems are equal) and the primal problem satisfies a certain approximate stability property. This was done by establishing new versions of alternative theorems using a new local closedness condition. In chapter 5, we establish these versions of alternative theorems for generalized cone-convex systems and apply the theorems to study zero duality gap results for cone constrained optimization problems. This development shows a new connection between alternative theorems and the zero duality gap property.

We now give an outline of the thesis. In the second chapter, basic results and definitions used throughout the thesis are presented. We also discuss how linear alternative theorems are used to study differentiable optimization problems. In passing, we also examine the role of linear alternative theorems in the characterization of the weakened invexity of a finite dimensional programming problem.

The third chapter examines various versions of alternative theorems and their relationship with other fundamental results in optimization, and the different approaches in proving alternative theorems. The chapter begins by presenting versions of the Gordan, Farkas and Motzkin alternative theorems for S-convex, S-convexlike and S-subconvexlike cone functions. Then we study the different ways of proving alternative theorems. We examine how a Basic (or Gordan type) Alternative Theorem is derived using a separation theorem (see [1], [4], [11], [25], [29], [19]), a minimax theorem (see [5], [7], [8], [28], [24]), and a Lagrangian theorem (see [28], [26]). We see that minimax, alternative and Lagrangian theorems are equivalent under appropriate conditions (as shown in [28]). Thus, each of these can be derived directly from each other without the use of a separation theorem. Then we shall see how an alternative theorem is used to derive necessary optimality conditions and to develop the Lagrangian duality theory. To complete this chapter we also show how alternative theorems are used in the derivation of necessary optimality conditions for nondifferentiable optimization problems.

As we have mentioned earlier, an alternative theorem does not, in general, hold for quasiconvex functions. Therefore, the results that follow from alternative theorems may not hold. However, the result that minimax theorems can be used to prove an alternative theorem motivates us to apply the minimax theorem for quasiconcave-convex functions of Sion [41] in developing an alternative theorem. We begin the fourth chapter by defining and studying the properties of a new class of quasiconvex mappings, called *-quasiconvex, and its relationship with other generalized convex functions (namely, invex and pseudoconvex functions). Then we establish an alternative theorem for *-quasiconvex mappings using the minimax theorem of Sion [41]. This alternative theorem is used to derive duality results, saddlepoint optimality conditions for a class of nondifferentiable programs, and necessary optimality conditions for a cone constrained optimization problem using upper approximations. Therefore, we are able to extend fundamental Lagrangian results for convex programming problems to a class of quasiconvex problems. Slightly extended versions of the results presented in this chapter appear in [31].

In the final chapter, we study new versions of alternative theorems for cone systems, which use a new local closedness condition, and their applications to cone constrained optimization. We also see how zero duality gap properties for cone constrained problems can be studied by way of studying new versions of alternative theorems. Moreover, we examine various relationships between the zero duality gap property and approximate stability properties of the problem. The chapter begins by establishing ε-alternative theorems involving S-convexlike functions. Then we apply the theorems to derive the existence of approximate Lagrange multipliers and of a zero duality gap between the primal problem and the corresponding dual problem. Then we obtain the equivalence of the zero duality gap between the primal and dual problems and a certain stability of the primal problem. We also study the characterization of the ε-saddlepoints using ε-subdifferentials of the primal problem at the point of no perturbation. Finally, we derive a stable alternative theorem for cone systems, which also gives a sufficient condition for the local closedness assumption.

Chapter 2

Preliminaries

In this chapter, we introduce definitions, notations and basic results that will be used throughout the thesis. The versions of the alternative theorems that are going to be examined in this chapter are the Gordan, Farkas and Motzkin alternative theorems for the linear mapping case. We also look at some applications of linear alternative theorems, particularly in the development of the Lagrangian duality theory and in the characterization of invexity of a finite dimensional programming problem.

It is worth pointing out that the theorems, corollaries, definitions, remarks and examples presented throughout are numbered according to their appearance in the section of the chapter. For example, "Theorem 2.4.3" means that it is the third theorem in section 4 of chapter 2. However, displayed equations are numbered according to their appearance in the chapter. For example, "(5.10)" means that it is the tenth displayed equation in chapter 5.

2.1 Convexity of Sets and Functions

We assume throughout that X and Y are real Banach spaces, C ⊂ X is convex, and S ⊂ Y is a closed convex cone.

A set C is convex if

(∀α ∈ (0, 1)) (∀x₁, x₂ ∈ C)  αx₁ + (1 − α)x₂ ∈ C.

A set S is a cone if

∀λ > 0, ∀s ∈ S,  λs ∈ S.

The cone S is said to be a convex cone if S is convex. Thus, S is a convex cone if for every λ > 0, λS ⊂ S and S + S ⊂ S. The interior of a set S is denoted by intS and the closure of S is denoted by S̄. The topological dual space of Y is denoted by Y′. The space Y′ is equipped with the weak* topology.

The dual cone (or polar cone) S* of S is defined by

S* = {p ∈ Y′ | ps ≥ 0, ∀s ∈ S}.

Note that ps := ⟨p, s⟩, where p ∈ Y′ and s ∈ S; that is, ps denotes the evaluation of the functional p at s ∈ S. If Y = ℝᵐ then ps = pᵀs, and if S ⊂ ℝᵐ then S* ⊂ (ℝᵐ)′ = ℝᵐ.

From the bipolar theorem (see [40]), for a closed convex cone S,

s ∈ S ⟺ s*s ≥ 0, ∀s* ∈ S*.

If intS ≠ ∅ then

s ∈ intS ⟺ s*s > 0, ∀0 ≠ s* ∈ S*.
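As a concrete illustration of the bipolar characterization above, the following Python sketch (our own hypothetical example, not from the thesis) checks s ∈ S ⟺ s*s ≥ 0 for all s* ∈ S* on a sampled polyhedral cone in ℝ².

```python
# Hypothetical illustration: the bipolar test s ∈ S ⇔ s*s ≥ 0 for all s* ∈ S*,
# for a polyhedral cone in R^2 (not an example taken from the thesis).

def in_cone(s):
    # S = cone generated by (1, 1) and (-1, 1); equivalently s2 >= |s1|.
    return s[1] >= abs(s[0])

def in_dual_cone(p):
    # S* = {p : p·s >= 0 for all s ∈ S}; it suffices to test the generators of S.
    return p[0] + p[1] >= 0 and -p[0] + p[1] >= 0

# Sample points of S* (nonnegative combinations of its extreme rays (1,1), (-1,1)).
dual_sample = [(a - b, a + b) for a in range(6) for b in range(6)]
assert all(in_dual_cone(p) for p in dual_sample)

def dominates(s):
    # Bipolar test on the sample: s*s >= 0 for every sampled s* in S*.
    return all(p[0] * s[0] + p[1] * s[1] >= 0 for p in dual_sample)

print(in_cone((0, 1)), dominates((0, 1)))    # both True
print(in_cone((1, 0)), dominates((1, 0)))    # both False
```

On this sample the two membership tests agree, as the bipolar theorem predicts for closed convex cones.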

The set B ⊂ S* is said to be a weak* compact base for S* if B is weak* compact, 0 ∉ B and S* = {βb | b ∈ B, β ≥ 0}. If s ∈ intS then

B = {s* ∈ S* | s*s = 1}

is a weak* compact convex base for S*.

The function f: C → Y is said to be S-convex if

(∀α ∈ (0, 1)) (∀x₁, x₂ ∈ C)  αf(x₁) + (1 − α)f(x₂) − f(αx₁ + (1 − α)x₂) ∈ S.

If f is differentiable then f is S-convex if and only if for any x₁, x₂ ∈ C,

f(x₁) − f(x₂) − f′(x₂)(x₁ − x₂) ∈ S.
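The defining membership of S-convexity can be tested numerically. The following sketch (a hypothetical example, not part of the thesis) checks the condition for f(x) = (x², eˣ) with S = ℝ₊², where S-convexity amounts to componentwise convexity.

```python
import math

# Hypothetical example: f(x) = (x^2, e^x) is S-convex for S = R_+^2, i.e. each
# component is a convex real function; we test the defining membership directly.

def f(x):
    return (x * x, math.exp(x))

def s_convex_defect(x1, x2, a):
    # Returns α f(x1) + (1-α) f(x2) - f(α x1 + (1-α) x2), which must lie in R_+^2.
    fx1, fx2, fmid = f(x1), f(x2), f(a * x1 + (1 - a) * x2)
    return tuple(a * u + (1 - a) * v - w for u, v, w in zip(fx1, fx2, fmid))

samples = [(-2.0, 1.5, 0.3), (0.0, 3.0, 0.7), (-1.0, -0.2, 0.5)]
for x1, x2, a in samples:
    d = s_convex_defect(x1, x2, a)
    assert all(c >= 0 for c in d), (x1, x2, a, d)
print("S-convexity check passed on", len(samples), "samples")
```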

f is positively homogeneous (of degree one) if

(∀x ∈ X) (∀λ > 0)  f(λx) = λf(x).

f is S-sublinear if it is S-convex and positively homogeneous.

The following separation theorems for convex sets are versions of the Hahn-Banach theorem and are necessary for proving alternative theorems.

Theorem 2.1.1 ([4], Theorem 2.2.3) Let K and M be convex subsets of X with K ∩ M = ∅. If K is open then

∃0 ≠ g ∈ X′ such that sup_{x∈M} g(x) < inf_{x∈K} g(x).

If K is closed, and M consists of a single point b, then

∃0 ≠ g ∈ X′ such that g(b) < inf_{x∈K} g(x).

A closed hyperplane is a set of the form H = {x ∈ X | g(x) = γ} with 0 ≠ g ∈ X′ and γ ∈ ℝ.

Theorem 2.1.2 ([21], p. 63) Let K and M be convex subsets of X with intK ≠ ∅. Then K and M can be separated by a closed hyperplane if and only if intK ∩ M = ∅.

2.2 Generalized Convexity

In this section we shall introduce the various notions of generalized convex functions and study their basic properties.

A function f is nearly S-subconvexlike [8] if

(∃θ ∈ intS) (∃α ∈ (0, 1)) (∀x₁, x₂ ∈ C) (∀ε > 0) (∃x₃ ∈ C)
εθ + αf(x₁) + (1 − α)f(x₂) − f(x₃) ∈ S.

A function f is S-subconvexlike [25] if

(∃θ ∈ intS) (∀α ∈ (0, 1)) (∀x₁, x₂ ∈ C) (∀ε > 0) (∃x₃ ∈ C)
εθ + αf(x₁) + (1 − α)f(x₂) − f(x₃) ∈ S.

A function f is S-convexlike [10] if

(∀α ∈ (0, 1)) (∀x₁, x₂ ∈ C) (∃x₃ ∈ C)

αf(x₁) + (1 − α)f(x₂) − f(x₃) ∈ S.

It is immediate from the definitions that

S-convex ⟹ S-convexlike ⟹ S-subconvexlike ⟹ nearly S-subconvexlike.

The converse implications are not necessarily valid (for numerical examples, see [8]). The function f is S-subconvexlike if and only if f(C) + intS is convex, and f is S-convexlike if and only if f(C) + S is convex (see [37]).

This property is an important tool in proving an alternative theorem using a Hahn-Banach separation theorem.
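The characterization can be illustrated numerically with a hypothetical example (not from the thesis): the circle map f(x) = (cos x, sin x) on C = [0, 2π] is far from ℝ₊²-convex, yet f(C) + ℝ₊² is the convex "dominance" region of the unit circle, so f is ℝ₊²-convexlike; a witness x₃ can be found by search.

```python
import math

# Hypothetical example: for the circle map f on [0, 2π], any convex combination
# α f(x1) + (1-α) f(x2) lies in the unit disk, hence dominates some circle point,
# so the S-convexlike condition (with S = R_+^2) has a witness x3.

def f(x):
    return (math.cos(x), math.sin(x))

def find_x3(x1, x2, a, steps=2000):
    tx = a * math.cos(x1) + (1 - a) * math.cos(x2)
    ty = a * math.sin(x1) + (1 - a) * math.sin(x2)
    for k in range(steps + 1):
        x3 = 2 * math.pi * k / steps
        cx, cy = f(x3)
        if tx - cx >= 0 and ty - cy >= 0:   # defect lies in R_+^2
            return x3
    return None

for x1, x2, a in [(0.0, math.pi, 0.5), (1.0, 5.0, 0.25), (2.0, 4.0, 0.9)]:
    assert find_x3(x1, x2, a) is not None
print("convexlike witness x3 found for all samples")
```

The point of the example is that convexity of f(C) + S, not of f itself, is what a separation argument needs.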

If g: C → Y and f: C → ℝ, then the pair (f, g) is (ℝ₊ × S)-subconvexlike if for some θ ∈ intS,

(∀α ∈ (0, 1)) (∀x₁, x₂ ∈ C) (∀ε > 0) (∃x₃ ∈ C)

ε + αf(x₁) + (1 − α)f(x₂) − f(x₃) ∈ ℝ₊

εθ + αg(x₁) + (1 − α)g(x₂) − g(x₃) ∈ S.

The pair (f, g) is (ℝ₊ × S)-convexlike if the above conditions hold for ε = 0.

The following is an extension of the definition of quasiconvexity of functions to mappings.

A mapping f: C → Y is said to be quasiconvex if for every x₁, x₂ ∈ C, y ∈ Y,

f(x₁) − y ∈ −S and f(x₂) − y ∈ −S

⟹ f(ξ) − y ∈ −S, ∀ξ ∈ [x₁, x₂].

If Y = ℝ and S = ℝ₊, then the above definition reduces to

(∀x₁, x₂ ∈ C) (∀α ∈ (0, 1))  f(αx₁ + (1 − α)x₂) ≤ max{f(x₁), f(x₂)}.

This is equivalent to the condition that the level set

{x ∈ C | f(x) ≤ μ}

is convex for every μ ∈ ℝ. If the function f is differentiable, then it is quasiconvex if for each x₁, x₂ ∈ C,

f(x₁) ≤ f(x₂) ⟹ f′(x₂)(x₁ − x₂) ≤ 0.
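A minimal numeric sketch (a hypothetical example, not from the thesis): f(x) = √|x| has interval (hence convex) level sets, so it is quasiconvex on ℝ, while it fails the convexity inequality.

```python
import math, random

# Hypothetical example: f(x) = sqrt(|x|) is quasiconvex on R (every level set
# {x : f(x) <= μ} is an interval) but not convex; we test the max-inequality.

def f(x):
    return math.sqrt(abs(x))

random.seed(0)
for _ in range(1000):
    x1, x2 = random.uniform(-5, 5), random.uniform(-5, 5)
    a = random.random()
    xm = a * x1 + (1 - a) * x2
    assert f(xm) <= max(f(x1), f(x2)) + 1e-12

# f is not convex: at x = 1/4 the graph lies above the chord from (0,0) to (1,1).
assert f(0.25) > 0.25 * f(1) + 0.75 * f(0)
print("quasiconvex but not convex: checks passed")
```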

Recall that a real-valued function g is pseudoconvex if for each x₁, x₂ ∈ C,

g′(x₂)(x₁ − x₂) ≥ 0 ⟹ g(x₁) ≥ g(x₂).

The function f is quasiconcave if −f is quasiconvex. The function f is S-invex [45] if for each x₁, x₂ ∈ C there exists η: C × C → X such that

f(x₁) − f(x₂) − f′(x₂)η(x₁, x₂) ∈ S.

If Y = ℝ and S = ℝ₊ then f is invex. If η(x₁, x₂) = x₁ − x₂, then f is S-convex.

2.3 Continuity and Subdifferentiability of Functions

In this section, aside from introducing some definitions related to continuity and differentiability, we also discuss the notion of value functions and stability of programming problems.

A function f: X → ℝ is lower semicontinuous (l.s.c.) (see [3], [22]) at a point x₀ if for every ε > 0 there corresponds a neighbourhood U(x₀) such that

x ∈ U(x₀) ⟹ f(x) > f(x₀) − ε.

A function is l.s.c. on its domain if it is l.s.c. at each point of its domain. The lower semicontinuity of f on X is equivalent to the condition that the level set {x | f(x) ≤ γ} is closed for each real γ.

The function f is upper semicontinuous (u.s.c.) at x₀ if −f is l.s.c. at x₀. A function g: X → Y is weakly S*-l.s.c. at a if s*g is l.s.c. at a, for each s* ∈ S*.

The function f: X → ℝ is directionally differentiable at x in the direction d if

f′(x, d) := lim_{α↓0} (f(x + αd) − f(x)) / α

exists.
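As a quick numeric sketch (a hypothetical example): f(x) = |x| is directionally differentiable at 0 with f′(0; d) = |d|, even though it is not differentiable there; the one-sided difference quotient stabilizes as α ↓ 0.

```python
# Hypothetical numeric check of the directional derivative: for f(x) = |x| at
# x = 0, f'(0; d) = |d|; the one-sided limit exists even though f is not
# differentiable at 0.

def dir_deriv(f, x, d, alpha=1e-8):
    # One-sided difference quotient approximating the limit as alpha -> 0+.
    return (f(x + alpha * d) - f(x)) / alpha

f = abs
assert abs(dir_deriv(f, 0.0, 1.0) - 1.0) < 1e-6
assert abs(dir_deriv(f, 0.0, -1.0) - 1.0) < 1e-6   # f'(0; -1) = |-1| = 1
assert abs(dir_deriv(f, 0.0, 2.0) - 2.0) < 1e-6    # positively homogeneous in d
print("directional derivative checks passed")
```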

Consider the primal problem (P₁):

(P₁)  inf f(x)  subject to  x ∈ C, −g(x) ∈ S,

where f: X → ℝ and g: X → Y.

The Value (Perturbation) Function V: Y → ℝ of (P₁) is defined by

V(u) := inf{f(x) | x ∈ C, u − g(x) ∈ S}.

In general, V(u) is the optimal value of the programming problem obtained by replacing −g(x) with u − g(x) (that is, u represents the "perturbations" of (P₁)). Note that V(0) is the optimal value of (P₁) and that the infimum over the empty set is +∞.

The ε-subdifferential of the value function at ū is defined by

∂_ε V(ū) = {Λ ∈ Y′ | ∀u ∈ Y, V(u) ≥ V(ū) + Λ(u − ū) − ε}.

If ε = 0 then we have the subdifferential of the value function at ū. If ū = 0 then we have

∂_ε V(0) = {Λ ∈ Y′ | ∀u ∈ Y, V(u) ≥ V(0) + Λ(u) − ε}.

For the problem (P₁), if V(0) is finite and if ∂_ε V(0) ≠ ∅, then the value function satisfies the following relation: there exists Λ ∈ Y′ with

V(u) ≥ V(0) − Λ(u) − ε,  u ∈ Y.  (2.1)

The problem (P₁) is said to be nearly stable if V(0) < ∞ and the value function satisfies (2.1) for every ε > 0. If (2.1) holds for ε = 0, then (P₁) is stable (or in an "equilibrium" situation) and Λ can be interpreted as an approximate lower bound on the marginal rate of decrease in the optimal value of (P₁) when (P₁) is perturbed (see [32]).

An economic interpretation of the notion of value functions is as follows. Let us assume that f(x) is the cost. Therefore, if for some reason we want to perturb the original problem, we must pay for this change (the price being u* per unit of perturbation u). For any perturbation u, the minimum cost we can achieve in the perturbed problem plus the cost of the perturbation u is

V(u) + u*u (2.2)

A perturbation is "worth buying" if (2.2) is less than V(0) (see [39]).
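The notions of this section can be traced on a one-dimensional instance (our own hypothetical example, not from the thesis): for f(x) = x² and g(x) = 1 − x with S = ℝ₊, the value function is V(u) = (1 − u)² for u ≤ 1 and 0 otherwise, and Λ = 2 verifies (2.1) with ε = 0, so this instance is stable.

```python
# Hypothetical instance of (P1): minimize f(x) = x^2 subject to -g(x) ∈ R_+ with
# g(x) = 1 - x, i.e. x >= 1. Feasibility of the perturbed problem, u - g(x) >= 0,
# means x >= 1 - u, so V(u) = (1-u)^2 for u <= 1 and 0 otherwise; V(0) = 1.

def V(u):
    return (1 - u) ** 2 if u <= 1 else 0.0

LAM = 2.0  # candidate multiplier Λ for the stability inequality (2.1), ε = 0
for k in range(-20, 41):
    u = k / 10.0
    assert V(u) >= V(0) - LAM * u - 1e-12
print("stability inequality V(u) >= V(0) - 2u holds on the sample grid")
```

Here Λ = 2 is exactly −V′(0), the marginal rate of decrease of the optimal value under perturbation.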

2.4 Linear Alternative Theorems and their Applications

In the following, versions of the Gordan, Farkas and Motzkin alternative theorems for the linear mapping case are discussed. We also discuss applications of linear alternative theorems to the development of the Lagrangian theory and to the characterization of the invexity of a finite dimensional programming problem.

Theorem 2.4.1 (Gordan Alternative Theorem) Let B: X → Y be a continuous linear mapping and let S ⊂ Y be a closed convex cone with intS ≠ ∅. Then exactly one of the following statements holds:

(I) ∃x ∈ X, −Bx ∈ intS;

(II) ∃0 ≠ p ∈ S*, pB = 0.

Theorem 2.4.2 ([4], Farkas Theorem) Let A: X → Y and C: Y → X be continuous linear mappings, let S ⊂ X be a closed convex cone, and let b ∈ Y, c ∈ Y′. Then

(I) [Aᵀu ∈ S* ⟹ u(b) ≥ 0] ⟺ b ∈ A(S)

(assuming A(S) is closed);

(II) [Cv ∈ S ⟹ c(v) ≥ 0] ⟺ c ∈ Cᵀ(S*)

(assuming Cᵀ(S*) is weak* closed).

Theorem 2.4.3 ([4], Motzkin Alternative Theorem) Let A: X → Z, B: X → Y be continuous linear mappings and let S ⊂ Y be a convex cone with intS ≠ ∅; let T ⊂ Z be a closed convex cone. If the convex cone Aᵀ(T*) is weak* closed, then exactly one of the following statements holds:

(I) ∃x ∈ X, −Ax ∈ T, −Bx ∈ intS;

(II) ∃0 ≠ (p, q) ∈ S* × T*, pB + qA = 0.
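A small numeric sketch of the Gordan alternative (Theorem 2.4.1) on a hypothetical 2 × 2 instance (not from the thesis): with S = ℝ₊², statement (I) reads Bx < 0 componentwise, and for the matrix below (I) fails while (II) holds.

```python
import itertools

# Hypothetical 2x2 instance of the Gordan alternative with S = R_+^2 (so that
# -Bx ∈ intS means Bx < 0 componentwise): for B with rows (1, 0) and (-2, 0),
# system (I) is unsolvable, while p = (2, 1) ∈ S*, p ≠ 0, solves pB = 0 in (II).

B = [(1.0, 0.0), (-2.0, 0.0)]

def solves_I(x):
    # (I): Bx < 0 componentwise
    return all(row[0] * x[0] + row[1] * x[1] < 0 for row in B)

# (I) needs x1 < 0 (first row) and -2*x1 < 0, i.e. x1 > 0 (second row): impossible.
grid = [k / 4.0 for k in range(-40, 41)]
assert not any(solves_I((x1, x2)) for x1, x2 in itertools.product(grid, grid))

p = (2.0, 1.0)  # p ∈ S* = R_+^2, p ≠ 0
pB = tuple(p[0] * B[0][j] + p[1] * B[1][j] for j in range(2))
assert pB == (0.0, 0.0)
print("system (I) has no solution on the grid; (II) holds with p =", p)
```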

Remark 2.4.1 The Motzkin alternative theorem follows easily from the Gordan alternative theorem and the Farkas theorem.

Linear alternative theorems play an important role in the development of the Lagrangian Theory. In Craven [4], it is shown how a Farkas Theorem (Theorem 2.4.2) is used to derive necessary and sufficient conditions for an optimal solution of linear programming problems.

Consider now the following differentiable programming problem:

(P₂)  minimize f(x)  subject to  x ∈ X, −g(x) ∈ S, −h(x) ∈ T,

where f: X → ℝ, g: X → Y, and h: X → Y are Fréchet differentiable functions and S, T ⊂ Y are closed convex cones with intS ≠ ∅. The first set of necessary conditions (called the Fritz John (FJ) conditions) for (P₂) to attain a local minimum at x = a is

(FJ)  rf′(a) + vg′(a) + wh′(a) = 0,  vg(a) = 0,  wh(a) = 0,

where r ∈ ℝ₊, v ∈ S*, w ∈ T*, and r, v, w are not all zero. The second set of necessary conditions (called the Kuhn-Tucker (KT) conditions) follows under additional regularity assumptions, with the constraint −h(x) ∈ T omitted from (P₂):

(KT)  f′(a) + vg′(a) = 0,  vg(a) = 0,  v ∈ S*.
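The Kuhn-Tucker conditions (stationarity f′(a) + vg′(a) = 0 together with complementarity vg(a) = 0) can be checked by hand on a toy convex program (a hypothetical example, not from the thesis): minimize x² + y² subject to x + y ≥ 1, with minimizer a = (1/2, 1/2) and multiplier v = 1.

```python
# Hypothetical check of Kuhn-Tucker-type conditions at the minimizer of a small
# convex program: minimize f(x, y) = x^2 + y^2 subject to -g(x, y) ∈ R_+ with
# g(x, y) = 1 - x - y (i.e. x + y >= 1). Minimizer a = (1/2, 1/2), multiplier v = 1.

a = (0.5, 0.5)
v = 1.0                                # v ∈ S* = R_+

grad_f = (2 * a[0], 2 * a[1])          # f'(a) = (1, 1)
grad_g = (-1.0, -1.0)                  # g'(a)
g_a = 1 - a[0] - a[1]                  # g(a) = 0, constraint active

# stationarity f'(a) + v g'(a) = 0 and complementarity v g(a) = 0
stationarity = tuple(grad_f[i] + v * grad_g[i] for i in range(2))
assert stationarity == (0.0, 0.0)
assert v * g_a == 0.0
print("Kuhn-Tucker conditions hold at a =", a, "with v =", v)
```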

The Motzkin alternative theorem (Theorem 2.4.3), together with a linearization theorem (Craven [4]), can be used to derive the above necessary conditions. A duality theorem also follows from this. In the next chapters, we shall study applications of general nonlinear alternative theorems to the development of the Lagrangian theory for cone constrained nonconvex problems.

To complete this section, we also wish to mention (see [6], [18] and [36]) that alternative theorems for linear mappings can also be used to show the relationship between some basic results in optimization and invexity. Recall that a real-valued function f is invex on C if for each x₁, x₂ ∈ C, there exists η: C × C → X such that

f(x₁) − f(x₂) ≥ f′(x₂)η(x₁, x₂).

Suppose we consider (P₂) where S = ℝ₊ᵐ, X = ℝⁿ and the constraint −h(x) ∈ T is omitted. Thus, (P₂) is invex if

x₁, x₂ ∈ C ⟹ f(x₁) − f(x₂) − f′(x₂)η(x₁, x₂) ≥ 0 and gᵢ(x₁) − gᵢ(x₂) − gᵢ′(x₂)η(x₁, x₂) ≥ 0, i = 1, …, m.

Martin [36] and Hanson and Mond [18] established that a suitably relaxed version of the above invexity becomes a necessary and sufficient condition for a Kuhn-Tucker point (that is, a point satisfying the (KT) conditions) to be optimal. Martin [36] called this relaxed invexity Kuhn-Tucker invexity. The main tools in deriving these results are linear versions of the Motzkin alternative theorem and the Farkas theorem. In Craven and Glover [6], a version of the Motzkin alternative theorem (specifically, Theorem 2.4.3) is used to characterise the invexity of the functions involved in a cone constrained programming problem.

Chapter 3

Approaches to Nonlinear Alternative Theorems

3.1 Introduction

Over the last three decades, various versions of alternative theorems have been derived for convex and nonconvex functions in finite and infinite dimensional cases. With appropriate conditions, an alternative theorem may be proved using a Hahn-Banach separation theorem, a minimax theorem or a Lagrangian theorem. It has been shown recently in [28], [26] that under appropriate conditions, alternative, minimax and Lagrangian theorems are equivalent.

Alternative theorems play an important role in the derivation of optimality conditions and in the development of duality theory for constrained optimization problems. The applications include results concerning the existence of Lagrange multipliers for a constrained optimization problem, Lagrangian duality, and necessary and sufficient conditions for optimality. A basic alternative theorem, also known as a Gordan type alternative theorem, states that either the system −f(x) ∈ intS has a solution x ∈ C, or the system ∀x ∈ C, pf(x) ≥ 0 has a solution p ∈ S*, but not both, where convexity is assumed on C and f. Recently, the convexity of f and C has been weakened (see [7], [8], [25], etc.) and thus better Lagrangian and duality results are obtained. Under appropriate conditions, a minimax theorem establishes the equality sup_{y∈B} inf_{x∈C} f(x, y) = inf_{x∈C} sup_{y∈B} f(x, y) for a function f(x, y) which satisfies certain convexity conditions. Over the years, various generalizations and applications of the minimax theorem have been established (see [10], [5], [7], etc.), and one of its earliest applications is in the theory of games (see [44]).

In this chapter, some nonlinear versions of the Gordan, Farkas and Motzkin alternative theorems for S-convex, S-convexlike and S-subconvexlike functions, that have been established in recent years, are reviewed. Here, we study the different approaches in proving alternative theorems and the relationships with other fundamental results (that is, Lagrangian theorems, minimax theorems, duality theorems, etc.). We begin by discussing the standard method of proving alternative theorems using a Hahn-Banach separation theorem. Then we examine a recent method which uses a minimax theorem. We also discuss how minimax theorems, alternative theorems and Lagrangian theorems are related to each other under appropriate conditions. Finally, we outline some applications of alternative theorems to derive optimality conditions and to develop the Lagrangian duality theory.

3.2 Nonlinear Alternative Theorems via Separation Theorems

In this section, we shall look at versions of the Gordan, Farkas and Motzkin alternative theorems that are proved using separation theorems. A basic nonlinear version of the Gordan alternative theorem for convex inequalities is the following theorem of Fan, Glicksberg and Hoffman [11]:

Theorem 3.2.1 If C is a convex set and fᵢ: C → ℝ, i = 1, …, m, are convex, then exactly one of the following statements holds:

(I) ∃x ∈ C, fᵢ(x) < 0, i = 1, …, m;

(II) ∃0 ≠ λ ∈ ℝ₊ᵐ, λᵀf(x) ≥ 0, ∀x ∈ C.

Craven [4] generalised the above Gordan alternative theorem to cones as follows:

Theorem 3.2.2 ([4], Theorem 2.5.1) Let S ⊂ Y be a convex cone with intS ≠ ∅; let C ⊂ X be convex and f: C → Y be S-convex. Then exactly one of the following holds:

∃x ∈ C, −f(x) ∈ intS  (3.1)

∃0 ≠ p ∈ S*, ∀x ∈ C, pf(x) ≥ 0  (3.2)

The standard way of proving an alternative theorem is via a Hahn-Banach separation theorem (see [1], [11], etc.). So convexity plays an important role in the application of a separation theorem, but it can be seen that even if the convexity property of the function is weakened, a separation theorem still applies. Indeed, in [19] a version of the Gordan alternative theorem involving S-convexlike functions is established. In the following, a version of the Gordan alternative theorem involving S-subconvexlike functions is given (see Jeyakumar [25]).

Theorem 3.2.3 ([25], Theorem 3.1) If f is S-subconvexlike then exactly one of the statements (3.1) and (3.2) holds.

Sketch of proof: Obviously, (3.1) and (3.2) cannot hold simultaneously. Let K = f(C) + intS. Since f is S-subconvexlike, K is convex. Suppose (3.1) does not hold. Then K ∩ (−intS) = ∅, since intS + S ⊂ intS. Now, by a separation theorem, (3.2) holds. □

Therefore, a separation theorem may still be used to prove a Basic (or Gordan type) alternative theorem with weakened convexity conditions, if we can show that K = f(C) + intS is convex.

In the following, versions of the Motzkin and Farkas alternative theorems are examined for convex functions. For X = ℝ and Y = ℝⁿ, Bazaraa [1] presented the following version of the theorem for convex cones and functions.

Theorem 3.2.4 ([1], Theorem 1) Let f: X → ℝ be convex and g: X → ℝⁿ be S-convex; let

A = {x ∈ C | f(x) < 0, −g(x) ∈ S};
B = {(r, u) ∈ ℝ₊ × S* | rf(x) + ug(x) ≥ 0, ∀x ∈ C};
D = {u ∈ S* | ug(x) ≥ 0, ∀x ∈ C}.

If A = ∅ then B ≠ {0}. Moreover, if D = {0} then B ≠ {0} implies A = ∅.

Jeyakumar [29] presented an infinite dimensional case of Theorem 3.2.4 where g is S-sublinear. This further restriction on g is necessary for the infinite dimensional case, as the proof requires an application of an open mapping theorem. A version of the Farkas theorem in this setting was also established, which is then used to prove a version of the Motzkin alternative theorem.

Theorem 3.2.5 ([29], Lemma 4.1) Let C ⊂ X be a convex set with intC ≠ ∅, let f: X → ℝ be a convex function, continuous at some x₀ ∈ intC, and let g: X → Y be a weakly S*-l.s.c., S-sublinear function. Suppose that the regularity condition g(C) + S = Y holds. Then the following statements are equivalent:

x ∈ C, −g(x) ∈ S ⟹ f(x) ≥ 0  (3.3)

∃0 ≠ λ ∈ S*, ∀x ∈ C, f(x) + λg(x) ≥ 0.  (3.4)

Note that a function g is weakly S*-l.s.c. at a if s*g is l.s.c. at a, for each s* ∈ S*.

Theorem 3.2.6 ([29], Theorem 4.1) Let X, Y, and Z be complete real normed spaces; let P ⊂ Z and S ⊂ Y be closed convex cones with intP ≠ ∅; let C ⊂ X be a convex set with intC ≠ ∅ and f: X → Z be a P-convex function, continuous at some x₀ ∈ intC; let g: X → Y be a weakly S*-l.s.c., S-sublinear function. If g(C) + S = Y then exactly one of the following statements holds:

∃x ∈ C, −g(x) ∈ S, −f(x) ∈ intP  (3.5)

∃0 ≠ (p, λ) ∈ P* × S*, ∀x ∈ C, pf(x) + λg(x) ≥ 0  (3.6)

The proof of Theorem 3.2.4 mainly uses a separation theorem, while the proof of Theorem 3.2.6 requires an application of a version of the Farkas lemma and a version of the Gordan alternative theorem. Unlike the other versions of the Farkas lemma, the proof of this lemma requires not only a separation theorem but also an open mapping theorem and closed convex processes (thus, the S-sublinearity of g and the completeness of the spaces are required). Therefore, this version of the Farkas lemma holds if the spaces involved are Banach spaces.

When the spaces Y and Z are finite dimensional, Jeyakumar [25] has shown that the Motzkin alternative theorem holds when the pair (f, g) is (P × S)-convexlike.

Theorem 3.2.7 ([25], Theorem 5.1) Let f: C → Z and g: C → Y be functions such that the pair (f, g) is (P × S)-convexlike. Then, if (3.5) does not hold then (3.6) holds. Moreover, if {λ ∈ S* | λg(x) ≥ 0, ∀x ∈ C} = {0} then exactly one of (3.5) and (3.6) holds with p ≠ 0.

If Z = ℝ and P = ℝ₊, then the above theorem reduces to Theorem 3.2.4.

20 3.3 An Alternative Theorem via a Minimax Theorem

As we have seen in the previous section, the main tool in proving an alternative theorem is a separation theorem. Recently it has been shown that various versions of alternative theorems can be proved using a minimax theorem (e.g., see [5], [7], [8]). In this section, we see how a minimax theorem is used to prove an alternative theorem. Moreover, we shall also see the converse.

The following theorem shows how the minimax property is related to the Gordan Alternative Theorem.

Theorem 3.3.1 ([5], Lemma 3) Let f: C → Y and let F(x, b) = bf(x) for x ∈ C, b ∈ Y′. Then S* has a convex weak* compact base B (thus, S* = {βb | b ∈ B, β ∈ ℝ₊}) with 0 ∉ B, and

(3.1) ⟺ (∃x ∈ C) (∀b ∈ B) F(x, b) < 0
⟺ inf_{x∈C} sup_{b∈B} F(x, b) < 0  (3.7)

(3.2) ⟺ (∃b ∈ B) (∀x ∈ C) F(x, b) ≥ 0
⟺ NOT {sup_{b∈B} inf_{x∈C} F(x, b) < 0}  (3.8)

Sketch of proof: Since intS ≠ ∅, S* has a convex weak* compact base. Now

(∃0 ≠ p ∈ S*) (∀x ∈ C) pf(x) ≥ 0
⟺ (∃b ∈ B) (∃β > 0) (∀x ∈ C) βbf(x) ≥ 0
⟺ (∃b ∈ B) (∀x ∈ C) F(x, b) ≥ 0,

and the first equivalence of (3.8) is satisfied. Similarly,

(∃x ∈ C) −f(x) ∈ intS
⟺ (∃x ∈ C) (∀b ∈ B) bf(x) < 0
⟺ (∃x ∈ C) (∀b ∈ B) F(x, b) < 0,

and the first equivalence of (3.7) is satisfied. Since B is compact and F(x, ·) is continuous,

(∃x ∈ C) (∀b ∈ B) F(x, b) < 0 ⟺ inf_{x∈C} sup_{b∈B} F(x, b) < 0,

and (3.7) is satisfied. Let φ(b) = inf_{x∈C} F(x, b). Then φ is u.s.c. on B since each F(x, ·) is continuous and therefore u.s.c. Also, sup_{b∈B} φ(b) is attained since φ is u.s.c. and B is compact. Then

NOT (3.2) ⟹ (∀b ∈ B) (∃x ∈ C) F(x, b) < 0
⟺ sup_{b∈B} inf_{x∈C} F(x, b) < 0,

and (3.8) is satisfied. □

Consider the following convexlike-concavelike conditions for F: C × B → ℝ:

(A) (∃α ∈ (0, 1)) (∀x₁, x₂ ∈ C) (∀ε > 0) (∃x₃ ∈ C) (∀b ∈ B)

F(x₃, b) ≤ αF(x₁, b) + (1 − α)F(x₂, b) + ε

and

(B) (∃β ∈ (0, 1)) (∀b₁, b₂ ∈ B) (∀ε > 0) (∃b₃ ∈ B) (∀x ∈ C)

F(x, b₃) + ε ≥ βF(x, b₁) + (1 − β)F(x, b₂).

Theorem 3.3.2 ([28], Minimax theorem) Let B be a compact Hausdorff space and let F: C × B → ℝ be any function. Assume that for every x ∈ C, F(x, ·) is upper semicontinuous on B and that conditions (A) and (B) are satisfied. Then

inf_{x∈C} max_{b∈B} F(x, b) = max_{b∈B} inf_{x∈C} F(x, b).

It is easy to show that exactly one of (3.1) and (3.2) holds if the following minimax property holds:

inf_{x∈C} sup_{b∈B} F(x, b) < 0 ⇔ sup_{b∈B} inf_{x∈C} F(x, b) < 0.   (3.9)

The condition (3.9) holds if, for example, the following conditions are satisfied (see [38], Theorem 2.1):

(i) B is compact

(ii) F(·, b) is convexlike on C for each b ∈ B, and F(x, ·) is concavelike on B for each x ∈ C.

We observe that F(·, b) is convexlike on C if (A) holds with ε = 0 for every α ∈ (0, 1), and F(x, ·) is concavelike on B if (B) holds with ε = 0 for every β ∈ (0, 1).
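As a small numerical sketch (not from the thesis), the minimax property (3.9) can be checked by grid search for the bilinear function F(x, b) = bf(x) of Theorem 3.3.1, using the hypothetical example f(x) = (x − 1, −x) on C = [0, 3], with B the unit simplex {(b, 1 − b) : b ∈ [0, 1]} as a base of S* = ℝ²₊:

```python
# Grid-search sketch of the minimax property (3.9) for the bilinear
# function F(x, b) = b f(x), with the hypothetical convexlike example
# f(x) = (x - 1, -x) on C = [0, 3] and B = {(b, 1 - b) : b in [0, 1]}.

def F(x, b):
    # b parametrizes (b, 1 - b) in the simplex base of R^2_+
    return b * (x - 1) + (1 - b) * (-x)

C = [i / 100 for i in range(0, 301)]   # grid on [0, 3]
B = [i / 100 for i in range(0, 101)]   # grid on [0, 1]

inf_sup = min(max(F(x, b) for b in B) for x in C)
sup_inf = max(min(F(x, b) for x in C) for b in B)

# Both sides agree (at -1/2, attained near x = 1/2, b = 1/2); since the
# common value is negative, the system (3.1) "f(x) < 0 componentwise
# for some x" is consistent, as x = 1/2 confirms directly.
assert abs(inf_sup - sup_inf) < 1e-9
```

Both sides being negative corresponds to case (3.1) of the alternative; had f had no strictly feasible point, both sides would have been nonnegative.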

The following theorem shows how an alternative theorem is proved using a minimax theorem (as shown in [8]).

Theorem 3.3.3 Theorem 3.3.2 ⇒ Theorem 3.2.3.

Proof: Clearly, (3.1) and (3.2) cannot hold simultaneously. Let B = {s* ∈ S* | s*(θ) = 1}, where θ is from the definition of S-subconvexlike functions. Then B is a weak*-compact base for S*. Let F(x, b) = bf(x) for x ∈ C, b ∈ B. Since f is S-subconvexlike,

(∃θ ∈ int S) (∀α ∈ (0, 1)) (∀x₁, x₂ ∈ C) (∀ε > 0) (∃x₃ ∈ C) (∀b ∈ B) (∃s ∈ S) such that

b(εθ) + αbf(x₁) + (1 − α)bf(x₂) − bf(x₃) = b(s) ≥ 0

⇔ ε + αF(x₁, b) + (1 − α)F(x₂, b) − F(x₃, b) ≥ 0

⇔ F(x₃, b) ≤ αF(x₁, b) + (1 − α)F(x₂, b) + ε,

and (A) is satisfied.

Now, let b₃ = βb₁ + (1 − β)b₂; then b₃ ∈ B since B is convex. Then,

(βb₁ + (1 − β)b₂)f(x) = βb₁f(x) + (1 − β)b₂f(x) = βF(x, b₁) + (1 − β)F(x, b₂),

so for all ε > 0, F(x, b₃) + ε ≥ βF(x, b₁) + (1 − β)F(x, b₂), and (B) is satisfied. Thus, by Theorem 3.3.2,

inf_{x∈C} max_{b∈B} F(x, b) = max_{b∈B} inf_{x∈C} F(x, b),

and by Theorem 3.3.1, Theorem 3.2.3 is proved. □

Therefore, any weakened convexity condition on f that satisfies (A) and (B) (or their equivalents; see [7], [8]) yields Theorem 3.2.3, as those conditions imply (3.9). In [5], [7], [8], it was shown that if f is S-convex, S-convexlike or nearly S-subconvexlike, then the statements (A) and (B) are satisfied.

Now, we look at how a minimax theorem can be derived from an alternative theorem. Here, we consider Theorem 3.3.2 (the proof is based on [28]).

Theorem 3.3.4 Theorem 3.2.3 ⇒ Theorem 3.3.2.

Proof: Since sup_{b∈B} inf_{x∈C} F(x, b) ≤ inf_{x∈C} sup_{b∈B} F(x, b) is immediate, we only need to show the reverse inequality. Let

γ = inf_{x∈C} sup_{b∈B} F(x, b).

If γ = −∞ then we are done. Suppose γ is finite. Then

inf_{x∈C} sup_{b∈B} [F(x, b) − γ] = 0.

Let F*(x, b) = F(x, b) − γ and B(x) = {b ∈ B | F*(x, b) ≥ 0}. Thus, we have to show that ∩_{x∈C} B(x) ≠ ∅. Since for every x ∈ C, B(x) is closed and B is compact, it is enough to show that ∩_{k=1}^n B(x_k) ≠ ∅ for any arbitrary x₁, …, x_n ∈ C.

Suppose ∩_{k=1}^n B(x_k) = ∅. Then the system

∃b ∈ B, F*(x_k, b) ≥ 0, k = 1, …, n

has no solution. Since each F*(x_k, ·) is u.s.c. on B, there exists δ > 0 such that

∃b ∈ B, F*(x_k, b) + δ > 0, k = 1, …, n

has no solution. Let f(b) = (−F*(x₁, b) − δ, …, −F*(x_n, b) − δ); then the system

b ∈ B, −f(b) ∈ int ℝ^n_+

is inconsistent.

Now f is ℝ^n_+-subconvexlike with θ = (1, 1, …, 1) ∈ int ℝ^n_+ in the definition of S-subconvexlike functions, and so, by Theorem 3.2.3, there exists 0 ≠ p ∈ ℝ^n_+ such that

Σ_{i=1}^n pᵢF*(xᵢ, b) ≤ −δ, ∀b ∈ B,

where Σ_{i=1}^n pᵢ = 1 may be assumed.

Since F*(·, b) satisfies (A), there exists x₀ ∈ C such that

F*(x₀, b) ≤ Σ_{i=1}^n pᵢF*(xᵢ, b) + Σ_{i=1}^n pᵢ(δ/2) ≤ −δ + δ/2 = −δ/2 < 0.

Therefore, there exists x₀ ∈ C such that inf_{x∈C} sup_{b∈B} F*(x, b) < 0, which is a contradiction. Therefore,

∩_{k=1}^n B(x_k) ≠ ∅. □

3.4 An Alternative Theorem via a Lagrangian Theorem

In the previous sections, we have seen how a minimax theorem and a separation theorem are used to prove an alternative theorem. It is well known that one of the direct applications of an alternative theorem is the derivation of a Lagrangian theorem. Recently, it has been shown by Jeyakumar (see [28] and [26]) how an alternative theorem is proved using a Lagrangian theorem under general conditions. In this section, we shall study the equivalence of these theorems as shown in [28] and [26].

We again consider the following programming problem:

(P₁) inf f(x) subject to x ∈ C, −g(x) ∈ S, where f: C → ℝ and g: C → Y.

Theorem 3.4.1 (Lagrangian Theorem) For the problem (P₁), assume that (f, g) is (ℝ₊ × S)-subconvexlike and that it has a finite infimum µ. Then

∃0 ≠ (τ, λ) ∈ (ℝ₊ × S*), ∀x ∈ C, τf(x) + λg(x) ≥ τµ.

Moreover, if µ is attained at some feasible point ξ ∈ C, then the above conditions hold with µ = f(ξ) and λg(ξ) = 0.

Proof: Suppose (P₁) has a finite infimum µ. Then the system

∃x ∈ C, −(f(x) − µ, g(x)) ∈ int(ℝ₊ × S)

has no solution. Since (f, g) is (ℝ₊ × S)-subconvexlike, so is (f − µ, g). Therefore, by Theorem 3.2.3, there exists nonzero (τ, λ) ∈ (ℝ₊ × S*) such that

∀x ∈ C, τ(f(x) − µ) + λg(x) ≥ 0.

So, ∀x ∈ C, τf(x) + λg(x) ≥ τµ.

If µ is attained at some feasible point ξ ∈ C, then

τ(f(ξ) − f(ξ)) + λg(ξ) ≥ 0,

and so λg(ξ) ≥ 0. But since ξ is feasible, for λ ∈ S*, λg(ξ) ≤ 0 as well. Therefore, λg(ξ) = 0. □
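To make Theorem 3.4.1 concrete, here is a small numerical sketch (not part of the thesis) with C = ℝ, Y = ℝ and S = ℝ₊, using the hypothetical convex problem inf{x² : 1 − x ≤ 0}, whose infimum µ = 1 is attained at ξ = 1:

```python
# Numerical sketch of Theorem 3.4.1: minimize f(x) = x^2 subject to
# g(x) = 1 - x <= 0.  The multiplier pair (tau, lam) = (1, 2) gives
# tau*f(x) + lam*g(x) = (x - 1)^2 + 1 >= tau*mu for every x, and the
# complementary slackness condition lam*g(xi) = 0 holds at xi = 1.

f = lambda x: x * x
g = lambda x: 1.0 - x
tau, lam, mu, xi = 1.0, 2.0, 1.0, 1.0

grid = [i / 100 for i in range(-300, 301)]   # sample of C = R
lagrangian_gap = min(tau * f(x) + lam * g(x) - tau * mu for x in grid)

assert lagrangian_gap >= -1e-12    # tau*f + lam*g >= tau*mu on the grid
assert lam * g(xi) == 0.0          # lam*g(xi) = 0 at the minimizer
```

The multiplier value λ = 2 was found from stationarity of x² + λ(1 − x) at x = 1; it is specific to this illustrative problem.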

We now derive Theorem 3.2.3 from Theorem 3.4.1.

Theorem 3.4.2 Theorem 3.4.1 ⇒ Theorem 3.2.3.

Proof: Fix e ∈ int S, and define

(Pe) p := inf{t | x ∈ C, t ∈ ℝ, −f(x) + te ∈ S}.

If we let ĝ(x, t) = t and f̂(x, t) = f(x) − te, then (Pe) is equivalent to

p := inf{ĝ(x, t) | (x, t) ∈ (C × ℝ), −f̂(x, t) ∈ S}.

Thus, for (Pe), (ĝ, f̂) is (ℝ₊ × S)-subconvexlike and

∃x ∈ C, −f(x) ∈ int S ⇔ p < 0.

To see this equivalence, suppose there exists x₀ ∈ C with −f(x₀) ∈ int S. Then

−f(x₀) + N ⊆ S, for some neighbourhood N of 0.

Choose t₀ > 0 such that

−f(x₀) − t₀e ∈ S.

Therefore, (x₀, −t₀) is feasible for (Pe) and p ≤ −t₀ < 0.

Conversely, suppose p < 0 and let ε = −p/2 > 0. Then there exist x ∈ C and t₀ ∈ ℝ such that

−f(x) + t₀e ∈ S and p + ε > t₀.

Thus, p/2 > t₀. So,

0 > p/2 > t₀ and −f(x) + t₀e ∈ S.

Therefore, −f(x) + t₀e = s for some s ∈ S. Thus, −f(x) = s − t₀e ∈ S + int S ⊆ int S. Hence,

∃x ∈ C, −f(x) ∈ int S ⇔ p < 0.

If (3.1) is inconsistent, then p is finite and p ≥ 0. From Theorem 3.4.1 there exists 0 ≠ (τ, λ) ∈ (ℝ₊ × S*) such that

∀(x, t) ∈ (C × ℝ), τt + λ(f(x) − te) ≥ τp ≥ 0.

Since this holds for every t ∈ ℝ, τ = λ(e). Hence, ∀x ∈ C, λf(x) ≥ 0 and λ ≠ 0 (if λ = 0, then τ = λ(e) = 0 as well), so (3.2) holds. □

Equivalence of alternative, minimax and Lagrangian theorems. Therefore, from sections 3.3 and 3.4, we have shown the equivalence of alternative, minimax and Lagrangian theorems in the infinite dimensional case, where the convexity of the functions is weakened to subconvexlikeness. Thus, the theorems can be derived directly from each other without directly using a separation theorem.

3.5 Optimality Conditions and Duality

In this section, we shall derive various necessary and sufficient conditions for optimality for constrained minimization problems. We begin by studying briefly how a saddlepoint optimality theorem is proved. Then we see how this leads to duality results. Moreover, we will also see the application of an alternative theorem in the derivation of necessary optimality conditions in terms of directional derivatives of the objective and constraint functions.

We will consider the same primal problem (P₁) used in the previous section. We also consider the following dual problem.

(D₁) maximize Φ(v) := inf_{x∈C} [f(x) + vg(x)] subject to v ∈ S*,

where C ⊆ X, f: C → ℝ, and g: C → Y. One of the main applications of alternative theorems is to establish a duality relationship between two related optimization problems (P₁) and (D₁).

Definition 3.5.1 (P₁) and (D₁) are said to satisfy the strong duality property if:

(i) weak duality holds (i.e., f(x) ≥ Φ(v) whenever x ∈ C is feasible and v ∈ S*), and

(ii) if (P₁) attains a minimum at x̄, then (D₁) attains a maximum at some point v̄ and f(x̄) = Φ(v̄); thus the optimal objective values are equal.
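As an illustration (not from the thesis) of the dual pair and of the strong duality property, the following grid sketch uses the hypothetical convex example f(x) = x², g(x) = 1 − x on C = ℝ, S = ℝ₊, for which Φ(v) = inf_x [x² + v(1 − x)] = v − v²/4 is maximized at v = 2 with Φ(2) = 1, the primal minimum:

```python
# Grid sketch of the dual function Phi(v) = inf_x [f(x) + v*g(x)] for
# the hypothetical pair f(x) = x^2, g(x) = 1 - x.  Weak duality holds
# for every v >= 0, and the duality gap is zero at v = 2.

X = [i / 100 for i in range(-300, 301)]   # inner grid for the infimum
V = [i / 100 for i in range(0, 401)]      # multipliers v >= 0

def phi(v):
    return min(x * x + v * (1.0 - x) for x in X)

primal = min(x * x for x in X if 1.0 - x <= 0.0)   # = 1, at x = 1
dual = max(phi(v) for v in V)

assert all(phi(v) <= primal + 1e-9 for v in V)     # weak duality (i)
assert abs(dual - primal) < 1e-3                   # equal values (ii)
```

The grid inner minimization only approximates the true infimum from above, hence the loose tolerance on the zero-gap assertion.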

Consider the following saddlepoint problems:

(1) Fritz-John Saddlepoint Problem

Find x̄ ∈ C, v̄ ∈ S*, p̄ ∈ ℝ₊, if they exist, such that for all x ∈ C and v ∈ S*,

Ψ(x̄, p̄, v) ≤ Ψ(x̄, p̄, v̄) ≤ Ψ(x, p̄, v̄),   (3.10)

where Ψ(x, p, v) = pf(x) + vg(x).

(2) Kuhn-Tucker Saddlepoint Problem

Find x̄ ∈ C, v̄ ∈ S*, if they exist, such that for all x ∈ C and v ∈ S*,

L(x̄, v) ≤ L(x̄, v̄) ≤ L(x, v̄),   (3.11)

where L(x, v) = f(x) + vg(x).

Note that if (x̄, p̄, v̄) is a solution to (1) and p̄ > 0, then (x̄, v̄/p̄) is a solution to (2).

We now present a corollary to Theorem 3.4.1 which establishes a saddlepoint optimality criterion. The problem (P₁) is said to satisfy a Generalized Slater Constraint Qualification (GSC) if there exists x₀ ∈ C such that −g(x₀) ∈ int S.

Corollary 3.5.1 (Saddlepoint Optimality Theorem) For the problem (P₁), assume that (f, g) is (ℝ₊ × S)-subconvexlike. If (P₁) attains a minimum at x̄ ∈ C and GSC is satisfied, then there exists v̄ ∈ S* such that (x̄, v̄) is a solution to the Kuhn-Tucker saddlepoint problem.

Proof: Following the proof of Theorem 3.4.1, we can show that

∃0 ≠ (p̄, v̄) ∈ (ℝ₊ × S*), ∀x ∈ C, p̄(f(x) − f(x̄)) + v̄g(x) ≥ 0.

If p̄ = 0, then a contradiction arises since GSC is satisfied. Thus p̄ ≠ 0, and p̄ = 1 may be assumed. Hence,

f(x) + v̄g(x) ≥ f(x̄), ∀x ∈ C.

Since x̄ ∈ C, v̄g(x̄) ≥ 0. But from the constraints of (P₁), for every v ∈ S*, vg(x̄) ≤ 0. Thus, v̄g(x̄) = 0. Therefore,

f(x̄) + vg(x̄) ≤ f(x̄) + v̄g(x̄) ≤ f(x) + v̄g(x),

and (3.11) holds. □

Theorem 3.5.1 If (x̄, v̄) satisfies (3.11), then x̄ is a minimum for (P₁).

Proof: Suppose (3.11) holds at x̄ ∈ C and v̄ ∈ S*. From the left inequality,

vg(x̄) ≤ v̄g(x̄), ∀v ∈ S*.

Thus,

∀v₁ ∈ S*, v̄g(x̄) ≥ (v̄ + v₁)g(x̄).

So for every v₁ ∈ S*, v₁g(x̄) ≤ 0, and hence −g(x̄) ∈ S and x̄ is feasible. Taking v = 0 in the left inequality gives v̄g(x̄) ≥ 0, while feasibility gives v̄g(x̄) ≤ 0; thus v̄g(x̄) = 0. From the right inequality, for every feasible x,

f(x̄) = f(x̄) + v̄g(x̄) ≤ f(x) + v̄g(x) ≤ f(x),

and x̄ is a minimum. □
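A quick numerical sketch (not from the thesis) of the Kuhn-Tucker saddlepoint condition (3.11), continuing the hypothetical convex example f(x) = x², g(x) = 1 − x, C = ℝ, S = ℝ₊, at x̄ = 1 with v̄ = 2:

```python
# Check of the saddlepoint inequalities (3.11) on grids:
#   L(xbar, v) <= L(xbar, vbar) <= L(x, vbar) for x in C, v in S*.
# Here L(x, v) = x^2 + v*(1 - x), xbar = 1, vbar = 2.

L = lambda x, v: x * x + v * (1.0 - x)
xbar, vbar = 1.0, 2.0

xs = [i / 100 for i in range(-300, 301)]   # sample of C
vs = [i / 100 for i in range(0, 501)]      # sample of S* = R_+

left_ok = all(L(xbar, v) <= L(xbar, vbar) + 1e-12 for v in vs)
right_ok = all(L(xbar, vbar) <= L(x, vbar) + 1e-12 for x in xs)

assert left_ok and right_ok   # (3.11) holds, so xbar solves (P1)
```

The left inequality is tight here because g(x̄) = 0, exactly the complementary slackness situation used in the proof above.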

Using Corollary 3.5.1, the following duality theorem may be proved.

Theorem 3.5.2 (Lagrangian Duality Theorem) Assume that (P₁) satisfies the conditions of Corollary 3.5.1. Then the dual attains its maximum and the optimal values of (P₁) and (D₁) are equal; thus, the strong duality property between (P₁) and (D₁) holds.

Another application of an alternative theorem is the derivation of necessary optimality conditions for directionally differentiable problems. The Fritz-John and Kuhn-Tucker type conditions are derived for a constrained minimization problem in terms of directional derivatives (see [19], [25], and the references therein) of the objective and constraint functions.

Here f and g are assumed to be directionally differentiable at each point in each direction, and the pair (f′(a, ·), g′(a, ·)) is (ℝ₊ × S)-subconvexlike.

Theorem 3.5.3 Suppose (P₁) attains its minimum at x̄ ∈ C. Then there exists a Lagrange multiplier (p̄, v̄) ∈ (ℝ₊ × S*), not both zero, such that

∀d ∈ Cone(C − x̄), (p̄f + v̄g)′(x̄, d) ≥ 0.   (3.12)

Proof: Suppose the system

f′(x̄, d) < 0, −(g′(x̄, d) + g(x̄)) ∈ int S   (3.13)

has a solution d ∈ Cone(C − x̄). Then there exists ᾱ > 0 such that for every 0 < α < ᾱ, x̄ + αd ∈ C, and

g(x̄ + αd) = g(x̄) + αg′(x̄, d) + o(α) = α(g(x̄) + g′(x̄, d)) + (1 − α)g(x̄) + o(α).

Therefore, −g(x̄ + αd) ∈ S, since −g(x̄) ∈ S and −[g′(x̄, d) + g(x̄)] ∈ int S. Thus, x̄ + αd is feasible for (P₁). Moreover, since f is directionally differentiable at x̄,

f(x̄ + αd) − f(x̄) = αf′(x̄, d) + o(α) < 0

for sufficiently small α > 0. Therefore, f(x̄ + αd) < f(x̄), which is a contradiction. Thus, (3.13) has no solution.

Let F(d) = [f′(x̄, d), g′(x̄, d) + g(x̄)]. Then the system

d ∈ Cone(C − x̄), −F(d) ∈ int(ℝ₊ × S)

has no solution. By hypothesis, F is (ℝ₊ × S)-subconvexlike, and thus by Theorem 3.2.3 there exists nonzero p̄ = (p̄, v̄) ∈ (ℝ₊ × S*) such that

p̄F(d) ≥ 0, ∀d ∈ Cone(C − x̄),

and (3.12) holds.

Remark 3.5.1 The condition (3.12) is the Fritz-John condition, which leads to Kuhn-Tucker conditions if a regularity hypothesis is imposed. The regularity condition ensures that p̄ > 0.

Chapter 4

An Alternative Theorem for *-quasiconvex Mappings and Nondifferentiable Optimization

4.1 Introduction

It is known that various results concerning Lagrange multipliers, duality, scalarizations and the characterization of solution sets for convex vector optimization problems can be derived from an alternative theorem. Alternative theorems are usually proved using a separation theorem. However, as we have seen in chapter 3, alternative theorems involving certain nonconvex functions can be proved using minimax theorems. Craven [5] showed that an alternative theorem involving convexlike functions can be derived using Fan's [10] minimax theorem. In [7] and [8], alternative theorems for nonconvex functions are also established using a symmetrical version of the Fuchssteiner and König [12] extended minimax theorem. However, the minimax theorem of Sion [41] for quasiconcave-convex functions does not appear to have been used. Therefore, it can be conjectured that this minimax theorem may be used to prove an alternative theorem and thus extend the Lagrangian and duality results to certain classes of quasiconvex problems.

In this chapter, optimality and duality theory for a new class of quasiconvex optimization problems, called the class of *-quasiconvex problems, that parallels the one for convex optimization problems is presented. The theory is developed by first establishing a Basic Alternative Theorem, also known as a Gordan type alternative theorem, for *-quasiconvex mappings. The necessary conditions for optimality for a class of general nondifferentiable programs using upper approximations are also discussed. It is worth mentioning that slightly extended versions of the results presented here appear in Jeyakumar, Oettli and Natividad [31].

4.2 *-quasiconvex Mappings

In this section, a class of quasiconvex mappings, called *-quasiconvex mappings, is introduced. The different properties, examples and relationships with other generalized convex functions (where Y = ℝ^m and X = ℝ^n) are discussed. It is known that in general, the sum of quasiconvex functions is not quasiconvex and that a Basic Alternative Theorem does not hold for quasiconvex functions (see [14]). Let us now introduce a restricted version of a quasiconvex mapping.

Definition 4.2.1 A mapping f: C → Y is said to be *-quasiconvex if and only if for every b ∈ S*, the mapping bf: C → ℝ is quasiconvex.

Let f₁(x) = min{x, 2} and f₂(x) = max{−x, −1}. Then f = (f₁, f₂) is quasiconvex and is *-quasiconvex. The system x ∈ ℝ, f₁(x) < 0, f₂(x) < 0 has no solution. However, there exists nonzero (p₁, p₂) ≥ 0 such that p₁f₁(x) + p₂f₂(x) ≥ 0 for all x ∈ ℝ. One can choose (p₁, p₂) = (1, 1) to see this. Thus, an alternative theorem holds in this case. In the next section, we shall discuss an alternative theorem involving *-quasiconvex functions.
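The two claims above for this example can be verified directly by a grid check (a numerical sketch, not part of the thesis):

```python
# Check of the text's example: f1(x) = min(x, 2), f2(x) = max(-x, -1).
# (a) the system f1(x) < 0, f2(x) < 0 has no solution, and
# (b) with (p1, p2) = (1, 1), p1*f1(x) + p2*f2(x) >= 0 for all x.

f1 = lambda x: min(x, 2.0)
f2 = lambda x: max(-x, -1.0)

grid = [i / 100 for i in range(-500, 501)]   # sample of R

no_solution = not any(f1(x) < 0 and f2(x) < 0 for x in grid)
min_combo = min(f1(x) + f2(x) for x in grid)

assert no_solution        # (a): f1 < 0 forces x < 0, f2 < 0 forces x > 0
assert min_combo >= 0.0   # (b): the combination is identically 0 for x <= 1
```

Note that min_combo equals exactly 0, attained on the whole interval x ≤ 1, so the multiplier inequality is tight.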

A *-quasiconvex mapping is also a quasiconvex mapping. To see this, suppose f is *-quasiconvex. Let x₁, x₂ ∈ C, y ∈ Y with f(x₁) − y ∈ −S and f(x₂) − y ∈ −S. Then for every b ∈ S*, b(f(x₁) − y) ≤ 0 and b(f(x₂) − y) ≤ 0. Since b(f(·) − y) is quasiconvex, b(f(ξ) − y) ≤ 0 for every b ∈ S* and ξ ∈ [x₁, x₂]. Hence, f(ξ) − y ∈ −S and f is quasiconvex.

The following example shows that a *-quasiconvex mapping is not necessarily convex.

Example 4.2.1 Let C = ℝ, Y = ℝ², S = ℝ²₊ and f(x) = (x³, kx), k > 0. Then, clearly, f is not convex. However, f satisfies the *-quasiconvexity condition: for each (a₁, a₂) ∈ ℝ²₊, the function a₁x³ + a₂kx is nondecreasing on ℝ and hence quasiconvex.

It is known that if a real-valued function f defined on ℝ is monotonic, then f is quasiconvex. Indeed, if f is monotonically increasing and x₁ < x_λ < x₂, where x_λ = λx₁ + (1 − λ)x₂, then f(x_λ) ≤ f(x₂) = max{f(x₁), f(x₂)}, and therefore f is quasiconvex.

Now suppose we consider F(x) = (f₁(x), …, f_m(x)), where for i = 1, …, m, fᵢ: ℝ → ℝ is monotonically increasing. Then for any λ ∈ (0, 1) and x₁, x₂ ∈ ℝ such that x₁ < x_λ < x₂, with x_λ = λx₁ + (1 − λ)x₂, we have fᵢ(x_λ) ≤ fᵢ(x₂) for each i. Then for every αᵢ ≥ 0, αᵢfᵢ(x_λ) ≤ αᵢfᵢ(x₂). Thus αᵀF(x_λ) ≤ αᵀF(x₂) ≤ max{αᵀF(x₁), αᵀF(x₂)} and F is *-quasiconvex.

Therefore, if we have an optimization problem whose objective and constraint functions are increasing functions on ℝ, then that optimization problem can be considered as a *-quasiconvex problem.

Example 4.2.2 Let C = ℝ and Y = ℝ³ and f(x) = (x³, x, x⁵). Each component of f is increasing. Therefore, f is *-quasiconvex.
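The *-quasiconvexity of Example 4.2.2 can be spot-checked numerically (a sketch, not from the thesis): for a few nonnegative weight vectors a, the combination aᵀf should satisfy the quasiconvexity inequality h(λx₁ + (1 − λ)x₂) ≤ max{h(x₁), h(x₂)}:

```python
# Grid check that f(x) = (x^3, x, x^5) is *-quasiconvex: every
# combination h = a1*x^3 + a2*x + a3*x^5 with a >= 0 is increasing,
# hence quasiconvex, so the defining inequality holds on sample points.

def h(a, x):
    return a[0] * x**3 + a[1] * x + a[2] * x**5

xs = [i / 10 for i in range(-20, 21)]                      # grid on [-2, 2]
combos = [(1.0, 0.0, 0.0), (0.5, 0.25, 0.25), (0.0, 2.0, 3.0)]
lams = [0.25, 0.5, 0.75]

qc_ok = all(
    h(a, lam * x1 + (1 - lam) * x2) <= max(h(a, x1), h(a, x2)) + 1e-9
    for a in combos
    for x1 in xs for x2 in xs
    for lam in lams
)
assert qc_ok
```

The check is of course only over sampled weights and points; the actual proof is the monotonicity argument given above.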

Now, let us look at the relationship between *-quasiconvex functions and pseudoconvex and invex functions. The following theorem is based on [42] and the functions are assumed to be differentiable.

Theorem 4.2.1 Let 0 ≠ a ∈ ℝ^m_+ and g: C → ℝ^m, where C ⊆ ℝ^n is convex. Then aᵀg is pseudoconvex if and only if aᵀg is quasiconvex and aᵀg is invex.

Proof: Let aᵀg be pseudoconvex. Suppose that aᵀg is not quasiconvex. Then there exist x₁, x₂ ∈ C such that

aᵀg(x₁) ≤ aᵀg(x₂) and aᵀg(x_λ') > aᵀg(x₂)

for some x_λ' ∈ l_x := {x | x = (1 − λ)x₁ + λx₂, 0 ≤ λ ≤ 1}. Let x̄ ∈ l_x be such that

aᵀg(x̄) = max_{x∈l_x} aᵀg(x);

then aᵀg(x̄) > aᵀg(x₂) ≥ aᵀg(x₁), and x̄ = (1 − λ̄)x₁ + λ̄x₂ for some 0 < λ̄ < 1. Since x̄ maximizes aᵀg over l_x,

(aᵀg)′(x̄)ᵀ(x₁ − x̄) ≤ 0.

Similarly, (aᵀg)′(x̄)ᵀ(x₂ − x̄) ≤ 0. Now,

(aᵀg)′(x̄)ᵀ(x₂ − x₁) = lim_{γ↓0} [aᵀg(x̄ + γ(x₂ − x₁)) − aᵀg(x̄)]/γ ≤ 0,

since x̄ + γ(x₂ − x₁) ∈ l_x for small γ > 0. On the other hand, since x₁ − x̄ = −λ̄(x₂ − x₁) with 0 < λ̄ < 1,

0 ≥ (aᵀg)′(x̄)ᵀ(x₁ − x̄) = −λ̄(aᵀg)′(x̄)ᵀ(x₂ − x₁),

so (aᵀg)′(x̄)ᵀ(x₂ − x₁) ≥ 0. Thus, (aᵀg)′(x̄)ᵀ(x₂ − x₁) = 0, and hence (aᵀg)′(x̄)ᵀ(x₂ − x̄) = (1 − λ̄)(aᵀg)′(x̄)ᵀ(x₂ − x₁) = 0. Since aᵀg is pseudoconvex, (aᵀg)′(x̄)ᵀ(x₂ − x̄) ≥ 0 implies aᵀg(x₂) ≥ aᵀg(x̄), which contradicts the hypothesis that aᵀg(x₂) < aᵀg(x̄), x̄ ∈ l_x. Therefore, aᵀg is quasiconvex. Now, for each stationary point x̄, (aᵀg)′(x̄)ᵀ(x − x̄) = 0. Since aᵀg is pseudoconvex, aᵀg(x) ≥ aᵀg(x̄), and therefore x̄ is a global minimum. Thus, from Ben-Israel and Mond [2], aᵀg is invex.

Conversely, let aᵀg be quasiconvex and let aᵀg be invex with respect to η(x₁, x₂). Then

aᵀg(x₂) − aᵀg(x₁) < 0 ⇒ (aᵀg)′(x₁)ᵀ(x₂ − x₁) ≤ 0,
aᵀg(x₂) − aᵀg(x₁) ≥ (aᵀg)′(x₁)ᵀη(x₁, x₂).

Assume that (aᵀg)′(x₁)ᵀ(x₂ − x₁) ≥ 0. We shall show that aᵀg(x₂) ≥ aᵀg(x₁). If (aᵀg)′(x₁) = 0, then

(aᵀg)′(x₁)ᵀη(x₁, x₂) = 0 ⇒ aᵀg(x₂) − aᵀg(x₁) ≥ 0,

since aᵀg is invex. Suppose (aᵀg)′(x₁) ≠ 0. If (aᵀg)′(x₁)ᵀ(x₂ − x₁) > 0, then since aᵀg is quasiconvex, aᵀg(x₂) ≥ aᵀg(x₁) by the contrapositive of the first implication above.

If (aᵀg)′(x₁)ᵀ(x₂ − x₁) = 0, then the whole space is separated into H⁺ = {x | (aᵀg)′(x₁)ᵀ(x − x₁) > 0} and H⁻ = {x | (aᵀg)′(x₁)ᵀ(x − x₁) < 0} by H = {x | (aᵀg)′(x₁)ᵀ(x − x₁) = 0}. That is, ℝ^n = H⁺ ∪ H⁻ ∪ H. Let L_f(γ) = {x | f(x) < γ}. Thus, if we let f = aᵀg, L_f(γ) = {x | aᵀg(x) < γ}. If x ∈ H⁺, then (aᵀg)′(x₁)ᵀ(x − x₁) > 0, and since aᵀg is quasiconvex, (aᵀg)′(x₁)ᵀ(x − x₁) > 0 ⇒ aᵀg(x) ≥ aᵀg(x₁). Therefore, x ∉ L_f(aᵀg(x₁)) = {x | aᵀg(x) < aᵀg(x₁)}, and H⁺ ∩ L_f(aᵀg(x₁)) = ∅.

From our assumption, x₂ ∈ H. Now, let x₀ ∈ H⁺. Then (aᵀg)′(x₁)ᵀ(x₀ − x₁) > 0. For 0 < λ < 1,

(aᵀg)′(x₁)ᵀ(x₂ + λ(x₀ − x₂) − x₁)
= (aᵀg)′(x₁)ᵀ(λx₀ + (1 − λ)x₂ − [λx₁ + (1 − λ)x₁])
= (aᵀg)′(x₁)ᵀ(λ(x₀ − x₁) + (1 − λ)(x₂ − x₁))
= λ(aᵀg)′(x₁)ᵀ(x₀ − x₁) + (1 − λ)(aᵀg)′(x₁)ᵀ(x₂ − x₁)
= λ(aᵀg)′(x₁)ᵀ(x₀ − x₁) > 0.

Then x₂ + λ(x₀ − x₂) ∈ H⁺, and therefore

x₂ + λ(x₀ − x₂) ∉ L_f(aᵀg(x₁)) and aᵀg(x₂ + λ(x₀ − x₂)) ≥ aᵀg(x₁).

Since g is a continuous function, letting λ → 0 gives aᵀg(x₂) ≥ aᵀg(x₁). □

The following corollary shows how *-quasiconvexity is related to invexity and pseudoconvexity.

Corollary 4.2.1 If g: C → ℝ^m is *-quasiconvex and g is invex with respect to the same η(x₁, x₂) in each component, then aᵀg is pseudoconvex for every nonzero a ∈ ℝ^m_+.

Proof: Since each gᵢ is invex with respect to the same η(x₁, x₂), for each i = 1, …, m,

gᵢ(x₂) − gᵢ(x₁) ≥ gᵢ′(x₁)ᵀη(x₁, x₂).

So for each nonzero a ∈ ℝ^m_+,

aᵀg(x₂) − aᵀg(x₁) ≥ (aᵀg)′(x₁)ᵀη(x₁, x₂).

Therefore, aᵀg is invex. Since g is *-quasiconvex, aᵀg is quasiconvex for each a ∈ ℝ^m_+, and hence by Theorem 4.2.1, aᵀg is pseudoconvex. □

4.3 An Alternative Theorem for *-quasiconvex Mappings

In this section, a Basic Alternative Theorem, also known as a Gordan type alternative theorem, for *-quasiconvex mappings is proved using a minimax theorem established by Sion [41]. A direct application of the theorem to the scalarization of a vector-valued optimization problem is also discussed.

Having in mind that certain minimax theorems hold for functions with weakened convexity properties, let us look at a minimax theorem established by Sion [41], which involves a function that is quasiconcave-convex and appropriately semicontinuous in each variable.

Theorem 4.3.1 Let M, N be convex spaces, one of which is compact, and let f, a function on M × N, be quasiconcave-convex and u.s.c.-l.s.c. Then sup inf f = inf sup f.

Terkelsen [43] stated and proved the above theorem for subsets of topological vector spaces.

Theorem 4.3.2 Let M and N be convex subsets of topological vector spaces with N compact, and let f: M × N → ℝ. If for every x ∈ M, f(x, ·) is u.s.c. and quasiconcave on N, and if for every b ∈ N, f(·, b) is l.s.c. and quasiconvex on M, then

inf_{x∈M} max_{b∈N} f(x, b) = max_{b∈N} inf_{x∈M} f(x, b).

Using Theorem 4.3.2, established by Terkelsen [43], we are now ready to state and prove a Basic Alternative Theorem for *-quasiconvex mappings.

Theorem 4.3.3 (Basic Alternative Theorem) Let f: C → Y be *-quasiconvex with pf l.s.c. for each p ∈ S*. Then exactly one of the following statements holds:

∃x ∈ C, −f(x) ∈ int S;   (4.1)

∃0 ≠ p ∈ S*, ∀x ∈ C, pf(x) ≥ 0.   (4.2)

Proof: Let B = {a ∈ S* | a(s) = 1} for some fixed s ∈ int S. Then B is a weak*-compact convex base for S*. Now

NOT (4.1) ⇔ (∀x ∈ C) (∃a ∈ B) af(x) ≥ 0
⇔ (∀x ∈ C) max_{a∈B} af(x) ≥ 0
⇔ inf_{x∈C} max_{a∈B} af(x) ≥ 0
⇔ max_{a∈B} inf_{x∈C} af(x) ≥ 0   (by Theorem 4.3.2)
⇔ (∃a ∈ B) (∀x ∈ C) af(x) ≥ 0
⇒ (∃0 ≠ p ∈ S*) (∀x ∈ C) pf(x) ≥ 0.

Conversely, since S* = ∪_{r≥0} rB, any nonzero p ∈ S* can be written as p = ra with r > 0 and a ∈ B, and

∀x ∈ C, raf(x) ≥ 0 ⇒ ∀x ∈ C, af(x) ≥ 0 for some a ∈ B.

Therefore,

(∃a ∈ B) (∀x ∈ C) af(x) ≥ 0 ⇔ (∃0 ≠ p ∈ S*) (∀x ∈ C) pf(x) ≥ 0 ⇔ (4.2).

Clearly, (4.1) and (4.2) cannot hold simultaneously, and so the proof is completed. □

A direct application of Theorem 4.3.3 is the scalarization of a vector optimization problem.

Consider the problem

(VP) V-min f(x) = (f₁(x), f₂(x), …, f_m(x)) subject to x ∈ C.

Here, "V-min" stands for the vector minimization problem.

Theorem 4.3.4 (Scalarization Theorem) Consider the problem (VP). Assume that f is *-quasiconvex. If x̄ is a weak minimum for (VP), then there exists nonzero λ ∈ ℝ^m_+ such that x̄ is an optimal solution to the scalar problem

(SP_λ) min λᵀf(x) subject to x ∈ C.

Proof: Since x̄ is a weak minimum of (VP), there exists no x ∈ C such that fᵢ(x) < fᵢ(x̄), i = 1, …, m. That is, the system

x ∈ C, fᵢ(x) − fᵢ(x̄) < 0, i = 1, …, m

has no solution. Since f is *-quasiconvex, f̃(x) = (f₁(x) − f₁(x̄), …, f_m(x) − f_m(x̄)) is *-quasiconvex. Applying Theorem 4.3.3, there exists nonzero λ ∈ ℝ^m_+ such that λᵀf̃(x) ≥ 0 for every x ∈ C. That is,

λᵀf(x) ≥ λᵀf(x̄), ∀x ∈ C.

Therefore, x̄ is an optimal solution for (SP_λ). □
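A brute-force sketch (not from the thesis) of the Scalarization Theorem on the hypothetical *-quasiconvex problem V-min (f₁, f₂) on C = [0, 1] with f₁(x) = x and f₂(x) = (x − 1)² (every a₁f₁ + a₂f₂ with a ≥ 0 is convex, hence the pair is *-quasiconvex):

```python
# x_bar = 0 is a weak minimum of (f1, f2) on [0, 1], and lambda = (1, 0)
# makes x_bar optimal for the scalarized problem (SP_lambda).

f1 = lambda x: x
f2 = lambda x: (x - 1.0) ** 2

C = [i / 100 for i in range(0, 101)]   # grid on [0, 1]
xbar = 0.0

# No x in C strictly improves both components over xbar.
is_weak_min = not any(f1(x) < f1(xbar) and f2(x) < f2(xbar) for x in C)

lam = (1.0, 0.0)
scalar = lambda x: lam[0] * f1(x) + lam[1] * f2(x)
is_scalar_min = all(scalar(xbar) <= scalar(x) for x in C)

assert is_weak_min and is_scalar_min
```

The weight λ = (1, 0) illustrates that the scalarizing multiplier from Theorem 4.3.4 may have zero components; x̄ = 0 is only a weak (not Pareto-scalar-unique) solution.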

Suppose we let

WP = the set of all weak minimum points of (VP), and

SP_λ = the set of all optimal solutions of (SP_λ).

Therefore, from the previous theorem, we have

WP ⊆ ∪_{0≠λ∈ℝ^m_+} SP_λ.

4.4 Generalized Lagrangian Theorems for *-quasiconvex Cone Constrained Problems

In this section, the existence of Lagrange multipliers, the saddlepoint theorems and the duality results for a class of quasiconvex problems, using the Basic Alternative Theorem for *-quasiconvex mappings established in the previous section, are discussed.

Consider a constrained minimization problem:

(P₁) inf f(x) subject to x ∈ C, −g(x) ∈ S,

where f: C → ℝ and g: C → Y. Note that the pair (f, g) is said to be *-quasiconvex if for every (α, β) ∈ (ℝ₊ × S*), αf(x) + βg(x) is quasiconvex.

Theorem 4.4.1 For the problem (P₁), assume that the pair (f, g) is *-quasiconvex. If (P₁) attains a finite minimum at x̄ ∈ C, and f is l.s.c. and λg is l.s.c. for every λ ∈ S*, then there exists nonzero (p̄, λ̄) ∈ (ℝ₊ × S*) such that

∀x ∈ C, p̄f(x) + λ̄g(x) ≥ p̄f(x̄), and λ̄g(x̄) = 0.

Proof: If (P₁) attains a finite minimum at x̄, then there exists no x feasible for (P₁) such that f(x) < f(x̄). Then the system

x ∈ C, f(x) − f(x̄) < 0, −g(x) ∈ int S

has no solution. Since (f(·) − f(x̄), g(·)) is *-quasiconvex, by Theorem 4.3.3 there exists nonzero (p̄, λ̄) ∈ (ℝ₊ × S*) such that

∀x ∈ C, p̄(f(x) − f(x̄)) + λ̄g(x) ≥ 0.

Since x̄ is feasible for (P₁), p̄(f(x̄) − f(x̄)) + λ̄g(x̄) ≥ 0. Thus, λ̄g(x̄) ≥ 0. But λ̄g(x̄) ≤ 0 as well. Therefore, λ̄g(x̄) = 0, and that completes the proof. □
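Theorem 4.4.1 applies beyond convex data. As a numerical sketch (not from the thesis), take the nonconvex objective f(x) = x³ on the hypothetical domain C = [0, 5] with g(x) = 1 − x: every αx³ + β(1 − x) with α, β ≥ 0 first decreases then increases on C, hence is quasiconvex, so the pair is *-quasiconvex on C:

```python
# Sketch of Theorem 4.4.1: minimize f(x) = x^3 on C = [0, 5] subject to
# g(x) = 1 - x <= 0.  The minimum is at xbar = 1, and (pbar, lambar)
# = (1, 3) satisfies the multiplier rule, since
# x^3 + 3*(1 - x) - 1 = (x - 1)^2 * (x + 2) >= 0 on C.

f = lambda x: x**3
g = lambda x: 1.0 - x
pbar, lambar, xbar = 1.0, 3.0, 1.0

C = [i / 100 for i in range(0, 501)]   # grid on [0, 5]
gap = min(pbar * f(x) + lambar * g(x) - pbar * f(xbar) for x in C)

assert gap >= -1e-12             # pbar*f + lambar*g >= pbar*f(xbar) on C
assert lambar * g(xbar) == 0.0   # complementary slackness
```

The factorization (x − 1)²(x + 2) shows the inequality holds exactly, not just on the grid; λ̄ = 3 comes from stationarity of x³ + λ(1 − x) at x = 1.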

Recall that (P₁) is said to satisfy a Generalized Slater Constraint Qualification (GSC) if there exists x₀ ∈ C such that −g(x₀) ∈ int S.

Theorem 4.4.2 If the conditions of the previous theorem and the GSC are satisfied, then there exist x̄ ∈ C, λ̄ ∈ S* such that the Lagrangian L(x, λ) = f(x) + λg(x) satisfies the saddlepoint condition at (x̄, λ̄):

∀x ∈ C, λ ∈ S*, L(x̄, λ) ≤ L(x̄, λ̄) ≤ L(x, λ̄).

Proof: From the previous theorem,

∀x ∈ C, p̄f(x) + λ̄g(x) ≥ p̄f(x̄), and λ̄g(x̄) = 0.

If p̄ = 0, then λ̄ ∈ S* is not equal to zero, and for every x ∈ C, λ̄g(x) ≥ 0. Since the GSC is satisfied, there exists x₀ ∈ C such that λ̄g(x₀) < 0, a contradiction. Therefore p̄ ≠ 0, and p̄ = 1 may be assumed. Therefore, ∀x ∈ C, f(x) + λ̄g(x) ≥ f(x̄). Since λ̄g(x̄) = 0,

f(x) + λ̄g(x) ≥ f(x̄) + λ̄g(x̄).

Since for every x ∈ C and λ ∈ S*, λg(x̄) ≤ 0,

f(x̄) + λ̄g(x̄) ≥ f(x̄) + λg(x̄).

Therefore, L(x̄, λ) ≤ L(x̄, λ̄) ≤ L(x, λ̄). □

Consider the dual problem:

(D₁) max Φ(λ) := inf_{x∈C} [f(x) + λg(x)] subject to λ ∈ S*.

Theorem 4.4.3 Assume that the conditions of the previous theorem hold. Then (P₁) and (D₁) satisfy the strong duality property.

Proof: If x is feasible for (P₁) and λ is feasible for (D₁), then λg(x) ≤ 0. So f(x) ≥ f(x) + λg(x) ≥ inf_{x∈C} [f(x) + λg(x)] = Φ(λ), and weak duality holds.

If (P₁) attains a finite minimum at x̄ ∈ C, then the saddlepoint condition holds:

f(x̄) = f(x̄) + λ̄g(x̄) ≤ f(x) + λ̄g(x), ∀x ∈ C.

Thus,

f(x̄) ≤ inf_{x∈C} [f(x) + λ̄g(x)] = Φ(λ̄).

This, with weak duality, completes the proof. □

4.5 Necessary Optimality Conditions for a Class of Nondifferentiable Programs

In the following, the Fritz-John and Kuhn-Tucker conditions are established for constrained optimization problems in terms of the directional derivatives of the objective and constraint functions, which are *-quasiconvex.

Consider (P₁) and assume that the functions are directionally differentiable at each point in X.

Theorem 4.5.1 Suppose that (P₁) attains a minimum at x̄ ∈ C. Assume that α₀f′(x̄, ·) + αg′(x̄, ·) is l.s.c. and quasiconvex for every (α₀, α) ∈ (ℝ₊ × S*). Then there exist Lagrange multipliers λ₀ ≥ 0, λ ∈ S*, not both zero, such that

λ₀f′(x̄, d) + λg′(x̄, d) ≥ 0, ∀d ∈ Cone(C − x̄), and λg(x̄) = 0.

Proof: Assume that the problem attains its minimum at x̄. Then for any feasible point x ∈ C, f(x) ≥ f(x̄). Suppose there exists d₀ ∈ Cone(C − x̄) such that

−[f′(x̄, d₀), g′(x̄, d₀) + g(x̄)] ∈ int(ℝ₊ × S).

Since f is directionally differentiable, for γ > 0 sufficiently small,

f(x̄ + γd₀) < f(x̄).   (4.3)

And since d₀ ∈ Cone(C − x̄) and C is convex, for sufficiently small γ > 0, x̄ + γd₀ ∈ C. Since g is directionally differentiable,

g(x̄ + γd₀) = g(x̄) + γg′(x̄, d₀) + o(γ) = γ(g′(x̄, d₀) + g(x̄)) + (1 − γ)g(x̄) + o(γ).

Since −[g′(x̄, d₀) + g(x̄)] ∈ int S and x̄ is feasible for (P₁), −g(x̄ + γd₀) ∈ S. Thus x̄ + γd₀ is feasible for (P₁). But from (4.3), this contradicts the assumption that x̄ is the minimum. Therefore, the system

d ∈ Cone(C − x̄), −[f′(x̄, d), g′(x̄, d) + g(x̄)] ∈ int(ℝ₊ × S)

has no solution. Since α₀f′(x̄, ·) + α[g′(x̄, ·) + g(x̄)] is quasiconvex, by Theorem 4.3.3 there exists (λ₀, λ) ∈ (ℝ₊ × S*), not both zero, such that

λ₀f′(x̄, d) + λ[g′(x̄, d) + g(x̄)] ≥ 0, ∀d ∈ Cone(C − x̄).

Taking d = 0 gives λg(x̄) ≥ 0, while feasibility gives λg(x̄) ≤ 0; hence λg(x̄) = 0 and

λ₀f′(x̄, d) + λg′(x̄, d) ≥ 0, ∀d ∈ Cone(C − x̄). □
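The Fritz-John condition of Theorem 4.5.1 can be verified numerically (a sketch, not from the thesis) on the same hypothetical example used earlier: minimize f(x) = x³ on C = [0, 5] subject to g(x) = 1 − x ≤ 0, at the minimum x̄ = 1:

```python
# At xbar = 1, f'(xbar, d) = 3d and g'(xbar, d) = -d, so the multipliers
# (lam0, lam) = (1, 3) give lam0*f'(xbar, d) + lam*g'(xbar, d) = 0 >= 0
# for every direction d in Cone(C - xbar), with lam*g(xbar) = 0.

fprime = lambda d: 3.0 * d    # directional derivative of x^3 at x = 1
gprime = lambda d: -d         # directional derivative of 1 - x at x = 1
lam0, lam = 1.0, 3.0

directions = [i / 10 for i in range(-10, 41)]   # sample of Cone(C - xbar)
fj_ok = all(lam0 * fprime(d) + lam * gprime(d) >= 0.0 for d in directions)

assert fj_ok
assert lam * (1.0 - 1.0) == 0.0   # lam * g(xbar) = 0
```

Here the combination is identically zero in d, the degenerate but admissible case of (3.12)-type conditions; λ₀ = 1 ≠ 0, consistent with the constraint qualification of Theorem 4.5.2 below.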

With an appropriate constraint qualification, λ₀ can be shown to be nonzero. For example, consider the following constraint qualification for directionally differentiable functions: there exists d₀ ∈ Cone(C − x̄) such that

−[g′(x̄, d₀) + g(x̄)] ∈ int S.

Theorem 4.5.2 Consider the problem (P₁), and let the result of the previous theorem and the above constraint qualification hold at a feasible point x̄. Then λ₀ ≠ 0.

Proof: Suppose that λ₀ = 0. Then there exists nonzero λ ∈ S* such that

λg′(x̄, d) ≥ 0 and λg(x̄) = 0, ∀d ∈ Cone(C − x̄).

Thus,

λ[g′(x̄, d₀) + g(x̄)] ≥ 0.

But from the constraint qualification, there exists d₀ ∈ Cone(C − x̄) such that for every nonzero λ ∈ S*,

λ[g′(x̄, d₀) + g(x̄)] < 0,

and thus a contradiction. Therefore, λ₀ ≠ 0. □

4.6 Upper Approximations and *-quasiconvexity

In the previous section, we have seen the derivation of necessary optimality conditions for nondifferentiable programs in terms of directional derivatives. In this section, necessary optimality conditions for the constrained minimization problem (P₁) are derived using upper approximations and Theorem 4.3.3. In this way, we give a characterization of the local minimum of a programming problem whose functions admit suitable upper approximations that are *-quasiconvex.

Here, we consider the problem (P₁) introduced in section 3.4, and the attainment of the infimum is assumed. The results of this section were initially proved for inequality constrained problems and were later extended to cone constrained problems in collaboration with Jeyakumar and Oettli.

Definition 4.6.1 A mapping φ₀: C → ℝ is said to be an upper approximation to f at x̄ ∈ C if φ₀(x̄) = f(x̄) and, for every x ∈ C and α ∈ (0, 1],

f(αx + (1 − α)x̄) ≤ αφ₀(x) + (1 − α)φ₀(x̄) + o(α).

Note that o(α)/α → 0 as α → 0.

A mapping φ: C → Y is said to be an S*-upper approximation to g at x̄ if for every y ∈ S*, x ∈ C and α ∈ (0, 1],

yg(αx + (1 − α)x̄) ≤ αyφ(x) + (1 − α)yg(x̄) + o(α).

Define B(x̄) = {y ∈ B | yg(x̄) = 0}, where B is a weak*-compact convex base for S*. Since B(x̄) is a closed subset of the compact set B, B(x̄) is also compact.

Theorem 4.6.1 For the problem (P₁), let x̄ ∈ C be a local minimum, let φ₀: C → ℝ be an upper approximation to f at x̄, and let φ: C → Y be an S*-upper approximation to g at x̄. Then the system

x ∈ C, φ₀(x) − φ₀(x̄) < 0, yφ(x) < 0, ∀y ∈ B(x̄)   (4.4)

has no solution.

Proof: Suppose that (4.4) has a solution ξ ∈ C; that is,

φ₀(ξ) − φ₀(x̄) < 0 and yφ(ξ) < 0, ∀y ∈ B(x̄).

Since B(x̄) is compact and y ↦ yφ(ξ) is continuous, there exists K > 0 such that yφ(ξ) ≤ −K, ∀y ∈ B(x̄).

Define U = {y ∈ B | yφ(ξ) < −K/2}. Since B \ U is compact and disjoint from B(x̄) = {y ∈ B | yg(x̄) = 0} (for y ∉ B(x̄), yg(x̄) < 0), there exists L > 0 such that

yg(x̄) ≤ −L, ∀y ∈ B \ U.

Similarly, there exists a real number M such that

yφ(ξ) ≤ M, ∀y ∈ B \ U.

Let x_α = αξ + (1 − α)x̄, α ∈ (0, 1]. Then there exists α₁ > 0 such that for all y ∈ U,

yg(x_α) ≤ αyφ(ξ) + (1 − α)yg(x̄) + o(α) ≤ α(−K/2) + o(α) < 0

whenever 0 < α < α₁. Similarly, there exists α₂ > 0 such that for all y ∈ B \ U,

yg(x_α) ≤ αyφ(ξ) + (1 − α)yg(x̄) + o(α) ≤ αM + (1 − α)(−L) + o(α) = α(M + L) − L + o(α) < 0

whenever 0 < α < α₂. Thus, for all sufficiently small α > 0,

yg(x_α) ≤ 0, ∀y ∈ B,

so x_α is feasible for (P₁). Moreover, since φ₀ is an upper approximation to f at x̄,

f(x_α) − f(x̄) ≤ αφ₀(ξ) + (1 − α)φ₀(x̄) + o(α) − f(x̄) = α(φ₀(ξ) − φ₀(x̄)) + o(α) < 0

for all sufficiently small α > 0. This contradicts the assumption that x̄ is a local minimum for (P₁). Thus (4.4) has no solution. □

Theorem 4.6.2 Assume that (P₁) attains a local minimum at x̄ ∈ C. Suppose φ₀ is l.s.c. and an upper approximation to f at x̄, and that for every y ∈ S*, yφ is l.s.c. and φ is an S*-upper approximation to g at x̄. If the pair (φ₀, φ) is *-quasiconvex, then there exist Lagrange multipliers λ₀ ≥ 0, λ ∈ S*, not both zero, such that

∀x ∈ C, λ₀φ₀(x) + λφ(x) ≥ λ₀φ₀(x̄).

Proof: From Theorem 4.6.1, the system

x ∈ C, φ₀(x) − φ₀(x̄) < 0, yφ(x) < 0, ∀y ∈ B(x̄)

has no solution. Since (φ₀, φ) is *-quasiconvex, (φ₀(·) − φ₀(x̄), φ(·)) is *-quasiconvex. Therefore, by Theorem 4.3.3,

∃0 ≠ (λ₀, λ) ∈ (ℝ₊ × S*), ∀x ∈ C, λ₀φ₀(x) − λ₀φ₀(x̄) + λφ(x) ≥ 0,

that is, λ₀φ₀(x) + λφ(x) ≥ λ₀φ₀(x̄). □

With an appropriate Slater type constraint qualification, λ₀ can be shown to be nonzero. Suppose we impose the following constraint qualification on φ:

∃x ∈ C such that −φ(x) ∈ int S.   (4.5)

Theorem 4.6.3 Suppose that the conditions of the previous theorem and the above constraint qualification are satisfied. Then

φ₀(x) + λφ(x) ≥ φ₀(x̄), ∀x ∈ C.   (4.6)

Proof: From Theorem 4.6.2, there exist λ₀ ≥ 0, λ ∈ S*, not both zero, such that

λ₀φ₀(x) + λφ(x) ≥ λ₀φ₀(x̄), ∀x ∈ C.

Suppose λ₀ = 0. Then λ ≠ 0 and

λφ(x) ≥ 0, ∀x ∈ C,

which contradicts the constraint qualification (4.5). Therefore, λ₀ ≠ 0 and may be assumed to be equal to 1. □
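As a concrete sanity check, the Lagrangian inequality (4.6) can be verified numerically on a simple convex instance. The data f(x) = x², g(x) = 1 − x, C = ℝ, S = ℝ₊, the choice φ₀ = f, φ = g, and the multiplier λ = 2 are all assumptions of this illustration (for smooth convex data the functions themselves are natural candidates for upper approximations), not part of the thesis:

```python
# Numerical check of the generalized Lagrangian inequality (4.6)
# for the hypothetical convex instance f(x) = x^2, g(x) = 1 - x,
# C = R, S = R_+, taking phi0 = f and phi = g.

def f(x):      # objective
    return x * x

def g(x):      # constraint map; feasibility means -g(x) in S, i.e. x >= 1
    return 1.0 - x

x_bar = 1.0    # minimizer of f over {x : x >= 1}
lam = 2.0      # multiplier from the stationarity condition f'(x_bar) + lam*g'(x_bar) = 0

# Slater-type condition (4.5): some point with -g(x) in int S
assert -g(2.0) > 0

# Inequality (4.6): phi0(x) + lam*phi(x) >= phi0(x_bar) for all x in C
grid = [i / 100.0 for i in range(-500, 501)]
violations = [x for x in grid if f(x) + lam * g(x) < f(x_bar) - 1e-12]
```

Here f(x) + 2(1 − x) = (x − 1)² + 1 ≥ 1 = f(x̄), so no grid point violates (4.6).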

Chapter 5

ε-Alternative Theorems and Zero Duality Gaps

5.1 Introduction

In finite dimensional linear programming, the primal linear problem is consistent and has a finite minimum if and only if the corresponding dual problem is consistent and has a finite maximum. Moreover, the values of the primal and dual problems are equal. However, these results do not always hold for infinite dimensional linear programs (see [9], [33] for examples). It is also known that for a (finite dimensional) convex programming problem, if the primal problem has a solution, then the problem is stable (see page 12) if and only if there is strong duality between the primal and dual problems (see also Craven [4]). Thus, many algorithms for stable problems work on the dual problem in order to solve the primal problem. These results, however, do not readily extend to infinite dimensional convex programming problems (see [23], [32] for examples).

Recent work (see [30], [32]) on certain infinite dimensional convex or convexlike programming problems shows that under certain conditions, the duality gap between the primal problem and the corresponding dual problem is zero; however, Lagrange multipliers may not exist. Moreover, the primal problem has a finite optimal solution and satisfies certain stability properties. It has also been shown that there is a relationship between an ε-subdifferential of the value function of the problem at zero and an ε-saddlepoint for the primal problem.

In this chapter, we study zero duality gap properties, approximate Lagrange multiplier results and stability results for cone constrained problems by first establishing new versions of alternative theorems where a local closedness condition is imposed. We shall see how alternative theorems can be used to study the zero duality gap property. The results provide a unified approach to the study of zero duality gap results and other related properties. Moreover, they extend some of the results given, for example, in [15], where the primal and dual problems are studied in the infinite dimensional case involving inequality systems. In section 5.4, we shall show the relationship between a certain stability property of the primal problem and the zero duality gap property, and the characterization of ε-saddlepoints using ε-subdifferentials of the value function of the problem at the point of no perturbation. In the final section, we obtain a stable alternative theorem which gives a sufficient condition for the new local closedness assumption.

In this chapter, we again study the programming problem

(P₁) µ := inf f(x) subject to x ∈ C, −g(x) ∈ S

and the corresponding dual problem

(D₁) ν := sup Φ(λ) subject to λ ∈ S*,

where Φ(λ) = inf_{x∈C} [f(x) + λg(x)]. Note that if the optimal value of (P₁) is µ and the optimal value of (D₁) is ν, then the value µ − ν is called the duality gap.
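To make the primal/dual pair concrete, the following sketch computes Φ(λ) in closed form for a hypothetical one-dimensional convex instance (the data f(x) = x², g(x) = 1 − x, C = ℝ and S = ℝ₊ are assumptions of this illustration) and checks that the duality gap µ − ν vanishes:

```python
# Hypothetical instance of (P1)/(D1):
#   f(x) = x^2, g(x) = 1 - x, C = R, S = R_+  (feasible iff x >= 1).
# Primal value: mu = inf{x^2 : x >= 1} = 1.
# Dual function: Phi(lam) = inf_x [x^2 + lam*(1 - x)] = lam - lam^2/4,
# maximized at lam = 2 with value 1, so the duality gap mu - nu is zero.

def phi(lam):
    # the inner infimum is attained at x = lam / 2
    x = lam / 2.0
    return x * x + lam * (1.0 - x)

mu = 1.0
nu = max(phi(i / 1000.0) for i in range(0, 10001))   # lam on a grid over [0, 10]
gap = mu - nu
```

In this well-behaved finite dimensional case the gap is exactly zero and the dual supremum is attained at λ = 2; Example 5.3.1 below shows that attainment can fail.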

5.2 ε-Alternative Theorems

In this section, new versions of alternative theorems, called ε-alternative theorems, are studied for convexlike cone functions, where a certain closedness condition is imposed on some specific sets. This ε-alternative theorem is applied to prove versions of the Farkas theorem and a closed cone alternative theorem (see [27]).

In what follows, we define the set Ω₀, which plays an important role in the study of ε-alternative theorems. Let

Ω₀ = {(u, r) ∈ (Y × ℝ) | ∃x ∈ C, f(x) ≤ r, u − g(x) ∈ S}.

We begin by establishing convexity properties of Ω₀ under appropriate conditions on the functions f and g.

Lemma 5.2.1 If (f, g) is (ℝ₊ × S)-convexlike then Ω₀ is convex.

Proof: We want to show that for every (u₁, r₁), (u₂, r₂) ∈ Ω₀ and a ∈ (0, 1), (au₁ + (1 − a)u₂, ar₁ + (1 − a)r₂) ∈ Ω₀. If (u₁, r₁) ∈ Ω₀, then there exists x₁ ∈ C such that f(x₁) ≤ r₁ and u₁ − g(x₁) ∈ S. If (u₂, r₂) ∈ Ω₀, then there exists x₂ ∈ C such that f(x₂) ≤ r₂ and u₂ − g(x₂) ∈ S. Thus, for any a ∈ (0, 1),

af(x₁) + (1 − a)f(x₂) ≤ ar₁ + (1 − a)r₂

and

au₁ + (1 − a)u₂ − ag(x₁) − (1 − a)g(x₂) ∈ S.

Since (f, g) is (ℝ₊ × S)-convexlike, there exists x₃ ∈ C such that

af(x₁) + (1 − a)f(x₂) − f(x₃) ∈ ℝ₊ and ag(x₁) + (1 − a)g(x₂) − g(x₃) ∈ S.

Thus, f(x₃) ≤ af(x₁) + (1 − a)f(x₂) ≤ ar₁ + (1 − a)r₂. Furthermore, for some s ∈ S, ag(x₁) + (1 − a)g(x₂) = g(x₃) + s. Therefore, au₁ + (1 − a)u₂ − g(x₃) − s ∈ S and au₁ + (1 − a)u₂ − g(x₃) ∈ S. Thus, (au₁ + (1 − a)u₂, ar₁ + (1 − a)r₂) ∈ Ω₀. □
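Lemma 5.2.1 can also be probed numerically: for a hypothetical (ℝ₊ × S)-convexlike instance where membership in Ω₀ has a closed form, midpoints of random pairs of points of Ω₀ should again lie in Ω₀. The data below are assumptions of this sketch, not from the thesis:

```python
import math
import random

# Convexity check of Omega_0 for the hypothetical instance
#   f(x) = x^2, g(x) = -x, C = R, S = R_+.
# Then Omega_0 = {(u, r) : exists x with x^2 <= r and u + x >= 0}
#             = {(u, r) : r >= 0 and u >= -sqrt(r)}.

def in_omega0(u, r, tol=1e-9):
    return r >= -tol and u >= -math.sqrt(max(r, 0.0)) - tol

random.seed(0)
ok = True
for _ in range(1000):
    # sample two points of Omega_0 and test their midpoint
    r1, r2 = random.uniform(0, 4), random.uniform(0, 4)
    u1 = random.uniform(-math.sqrt(r1), 3)
    u2 = random.uniform(-math.sqrt(r2), 3)
    ok = ok and in_omega0(0.5 * (u1 + u2), 0.5 * (r1 + r2))
```

Convexity here reduces to the concavity of the square root: (√r₁ + √r₂)/2 ≤ √((r₁ + r₂)/2).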

Lemma 5.2.2 If Ω₀ is convex then its closure cl Ω₀ is also convex.

Proof: Let (u₁, r₁), (u₂, r₂) ∈ cl Ω₀, and let λ ∈ (0, 1). Then there exist nets {(u_i^δ, r_i^δ)} ⊂ Ω₀, i = 1, 2, such that (u_i^δ, r_i^δ) → (u_i, r_i), i = 1, 2. Then

lim_δ (λu₁^δ + (1 − λ)u₂^δ, λr₁^δ + (1 − λ)r₂^δ) = (λu₁ + (1 − λ)u₂, λr₁ + (1 − λ)r₂) ∈ cl Ω₀,

since each term of the net lies in the convex set Ω₀. Therefore, cl Ω₀ is convex. □

Using the preceding results and a new local closedness property, we are now ready to prove an ε-alternative theorem from which a version of the Farkas alternative theorem is derived.

Theorem 5.2.1 Let f : C → ℝ, g : C → Y and let (f, g) be (ℝ₊ × S)-convexlike. Assume that for some neighbourhood U of 0 ∈ Y and a constant γ > 0, the set Ω₀ ∩ (cl U × (−∞, γ]) is a nonempty closed subset of Y × ℝ. Then exactly one of the following holds:

∃x ∈ C, f(x) < 0, −g(x) ∈ S (5.1)

∀ε > 0, ∃0 ≠ (λ, τ) ∈ (S* × ℝ₊), ∀x ∈ C, τ(f(x) + ε) + λg(x) > 0 (5.2)

Proof: Suppose (5.1) does not hold. We claim that for any ε > 0, (0, −ε) ∉ cl Ω₀. Suppose, on the contrary, that some net in Ω₀ converges to (0, −ε). Since U is a neighbourhood of 0 and γ > 0, we can choose a subnet {(u^δ, r^δ)} such that

{(u^δ, r^δ)} ⊂ Ω₀ ∩ (cl U × (−∞, γ]).

Since Ω₀ ∩ (cl U × (−∞, γ]) is closed,

(0, −ε) = lim_δ (u^δ, r^δ) ∈ Ω₀ ∩ (cl U × (−∞, γ]).

Thus, there exists x₀ ∈ C such that f(x₀) ≤ −ε < 0 and −g(x₀) ∈ S, which is a contradiction since (5.1) does not hold. Therefore, (0, −ε) ∉ cl Ω₀, which is convex by Lemmas 5.2.1 and 5.2.2. By a strong separation theorem, there exists (λ, τ) ∈ (Y' × ℝ), not both zero, such that for every (u, r) ∈ Ω₀,

λ(u) + τr > −τε.

Hence, taking u = g(x) and r = f(x) for x ∈ C,

∀x ∈ C, λg(x) + τ(f(x) + ε) > 0.

Let s ∈ S and x₀ ∈ C. Then for every β > 0 and η > 0, (g(x₀) + βs, f(x₀) + η) ∈ Ω₀, and so

λ(g(x₀) + βs) + τ(f(x₀) + η) > −τε. (5.3)

If we multiply (5.3) by β⁻¹ and take β → ∞, we get λ(s) ≥ 0, and so λ ∈ S*. Similarly, if we multiply (5.3) by η⁻¹ and take η → ∞, we get τ ≥ 0. Therefore,

(∀ε > 0)(∃0 ≠ (λ, τ) ∈ (S* × ℝ₊))(∀x ∈ C), τ(f(x) + ε) + λg(x) > 0,

and (5.2) holds. Clearly, both (5.1) and (5.2) cannot hold simultaneously, and that completes the proof. □

Remark 5.2.1 It is worth noting that the local closedness assumption holds, for instance, if C is compact and f and g are l.s.c.

Now, using Theorem 5.2.1, we can establish a version of the Farkas theorem.

Theorem 5.2.2 Suppose that the hypotheses of Theorem 5.2.1 are satisfied. If there exists x₀ ∈ C such that −g(x₀) ∈ S, then the following are equivalent:

x ∈ C, −g(x) ∈ S ⟹ f(x) ≥ 0 (5.4)

(∀θ < 0)(∃0 ≠ λ ∈ S*)(∀x ∈ C), f(x) + λg(x) > θ (5.5)

Proof:

(5.4) holds ⟺ (5.1) does not hold ⟺ (5.2) holds.

The statement (5.2) is equivalent to the condition that for any θ < 0, there exists 0 ≠ (λ, τ) ∈ (S* × ℝ₊) such that

∀x ∈ C, τ(f(x) − θ) + λg(x) > 0.

Suppose τ = 0. Then λ ≠ 0 and λg(x) > 0 for every x ∈ C, which contradicts the assumption that there exists x₀ ∈ C such that −g(x₀) ∈ S. Therefore, τ ≠ 0. Without loss of generality, assume τ = 1. Thus, (5.5) holds. Clearly, (5.5) implies (5.4), and that completes the proof. □
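The approximate form of (5.5) is genuinely needed. The following sketch, on the hypothetical instance f(x) = x, g(x) = x², C = ℝ, S = ℝ₊ (so the feasible set is {0}), shows that (5.4) holds while no single λ makes f + λg nonnegative everywhere, yet for each θ < 0 a suitable λ exists:

```python
# f(x) = x, g(x) = x^2: feasible set {0}, and f(0) = 0, so (5.4) holds.
# Exact Farkas would ask for lam >= 0 with x + lam*x^2 >= 0 for all x,
# but inf_x (x + lam*x^2) = -1/(4*lam) < 0 for every lam > 0.

def lagrangian_inf(lam):
    # inf_x (x + lam*x^2), attained at x = -1/(2*lam) for lam > 0
    return -1.0 / (4.0 * lam)

# the infimum is strictly negative for every tested multiplier
exact_fails = all(lagrangian_inf(lam) < 0 for lam in [0.5, 1.0, 10.0, 1e6])

# the approximate form (5.5): given theta < 0, any lam > 1/(4*|theta|) works
def works(theta):
    lam = 1.0 / (4.0 * abs(theta)) + 1.0
    return lagrangian_inf(lam) > theta

approx_holds = all(works(theta) for theta in [-1.0, -0.1, -1e-3])
```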

Define Ω = {w ∈ Y | ∃x ∈ C, w − g(x) ∈ S}. Now, we establish a version of a closed cone alternative theorem for cone convexlike systems (see Theorem 3.1, [27]).

Theorem 5.2.3 Let g : C → Y be S-convexlike. Assume that there exists a neighbourhood U of 0 ∈ Y such that Ω ∩ cl U is a nonempty closed subset of Y. Then exactly one of the following holds:

∃x ∈ C, −g(x) ∈ S (5.6)

(∃0 ≠ λ ∈ S*)(∀x ∈ C), λg(x) > 0 (5.7)

Proof: Clearly, (5.6) and (5.7) cannot hold simultaneously. Let ε₀ > 0. Suppose (5.6) does not hold. Then the system x ∈ C, f(x) < 0, −g(x) ∈ S has no solution, where f(x) = −ε₀ for every x ∈ C. In this case, the regularity condition in Theorem 5.2.1 holds when Ω ∩ cl U is closed, and so from Theorem 5.2.1 there exists (τ, λ) ∈ ℝ₊ × S*, not both zero, such that

∀x ∈ C, τ(f(x) + ε₀) + λg(x) > 0.

Thus, τ(−ε₀ + ε₀) + λg(x) > 0, and so 0 ≠ λ ∈ S* and λg(x) > 0. □

5.3 Approximate Lagrange Multipliers and Zero Duality Gaps

It is known that under appropriate conditions, a Lagrangian theorem can be proved using an alternative theorem. Furthermore, if the primal problem attains a finite optimal solution, then the strong duality property follows from the existence of the Lagrange multipliers. However, the following example shows that the infimum of a primal problem is attained and the duality gap between the primal problem and the corresponding dual problem is zero, yet no Lagrange multiplier exists:

Example 5.3.1 [32]

(P₁) µ := inf{−x | x² ≤ 0}

(D₁) ν := sup_{λ≥0} inf_x [−x + λx²] = lim_{λ→∞} inf_x [−x + λx²].

And µ = ν = 0; however, the supremum is not attained.
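The behaviour described in Example 5.3.1 can be traced numerically; the closed form inf_x(−x + λx²) = −1/(4λ) used below follows by elementary calculus:

```python
# Example 5.3.1: (P1) mu = inf{-x : x^2 <= 0} has the unique feasible
# point x = 0, so mu = 0. The dual value is
#   nu = sup_{lam >= 0} inf_x (-x + lam*x^2) = sup_{lam > 0} (-1/(4*lam)) = 0,
# approached as lam -> infinity but never attained.

def dual_value(lam):
    # inf_x (-x + lam*x^2) is attained at x = 1/(2*lam)
    return -1.0 / (4.0 * lam)

values = [dual_value(10.0 ** k) for k in range(0, 7)]
# the dual values increase toward 0, yet every value is strictly negative
increasing = all(a < b for a, b in zip(values, values[1:]))
never_attained = all(v < 0 for v in values)
```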

Moreover, Example 4.4 of Jeyakumar and Gwinner [30] provides a convex problem which has no multiplier that satisfies the saddlepoint conditions but has a multiplier that satisfies the ε-saddlepoint conditions (see the definition below).

In this section, we shall use the ε-alternative theorem discussed in the previous section to establish ε-saddlepoint conditions. Thus, we have approximate Lagrange multiplier results under a certain regularity hypothesis (which parallels the closedness condition imposed in the previous section). Moreover, we shall see how the ε-saddlepoint condition is used to show a zero duality gap between the primal problem and the corresponding dual problem.

Recall the programming problem

(P₁) inf f(x) subject to x ∈ C, −g(x) ∈ S

and define the Lagrangian function for the problem (P₁) by L(x, λ) = f(x) + λg(x), where λ ∈ Y' and x ∈ C. The point (x̄, λ̄) ∈ (C × S*) is called an ε-saddlepoint for (P₁) if for every λ ∈ S* and x ∈ C,

L(x̄, λ) − ε ≤ L(x̄, λ̄) ≤ L(x, λ̄) + ε.

Note that the ε-saddlepoint inequality implies the following ε-approximate complementarity condition: −ε ≤ λ̄g(x̄) ≤ 0.

The problem (P₁) is consistent if it has a feasible point, and (P₁) is regular if µ is finite and there exists a neighbourhood U of 0 ∈ Y and a constant γ > 0 such that γ > µ and the set Ω₀ ∩ (cl U × (−∞, γ]) is closed in Y × ℝ.

Theorem 5.3.1 Let ε > 0 and suppose (P₁) is regular. Then there exist a feasible point x̄ ∈ C and λ̄ ∈ S* such that (x̄, λ̄) is an ε-saddlepoint for (P₁). Moreover, f(x̄) = µ and −ε < λ̄g(x̄) ≤ 0.

Proof: Since µ < ∞ and γ > µ, for every 0 < δ ≤ γ − µ,

(0, δ + µ) ∈ Ω₀ ∩ (cl U × (−∞, γ]),

which is closed. Therefore,

(0, µ) = lim_{δ→0} (0, µ + δ) ∈ Ω₀,

and thus there exists x̄ ∈ C such that f(x̄) = µ < ∞ and −g(x̄) ∈ S. Since x̄ is an optimum for (P₁),

x ∈ C, −g(x) ∈ S ⟹ f(x) − f(x̄) ≥ 0.

Since (f, g) is (ℝ₊ × S)-convexlike, so is (f̃, g), where f̃(x) = f(x) − f(x̄). So, from Theorem 5.2.2 (with θ = −ε),

∃0 ≠ λ̄ ∈ S*, ∀x ∈ C, f(x) + λ̄g(x) > f(x̄) − ε.

Since x̄ ∈ C is feasible, taking x = x̄ gives

f(x̄) + λ̄g(x̄) > f(x̄) − ε,

and so λ̄g(x̄) > −ε; also, for every λ ∈ S*, λg(x̄) ≤ 0, since −g(x̄) ∈ S.

Now, for every x ∈ C,

f(x) + λ̄g(x) > f(x̄) − ε ≥ f(x̄) + λ̄g(x̄) − ε.

Then,

L(x̄, λ̄) ≤ L(x, λ̄) + ε.

Let λ ∈ S*. Since x̄ ∈ C is feasible with f(x̄) = µ,

f(x̄) + λg(x̄) ≤ f(x̄).

Therefore, since −ε < λ̄g(x̄),

L(x̄, λ) − ε ≤ L(x̄, λ̄) ≤ L(x, λ̄) + ε. □
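For Example 5.3.1, an ε-saddlepoint can even be written down explicitly: the pair (x̄, λ̄) = (0, 1/(2ε)) is a natural candidate (an assumption of this sketch, verified below on a grid), although no exact saddlepoint exists:

```python
# eps-saddlepoint check for the problem inf{-x : x^2 <= 0} of Example 5.3.1:
#   L(x_bar, lam) - eps <= L(x_bar, lam_bar) <= L(x, lam_bar) + eps.
# Candidate pair: (x_bar, lam_bar) = (0, 1/(2*eps)); then
# inf_x L(x, lam_bar) = -eps/2 >= L(x_bar, lam_bar) - eps = -eps.

def L(x, lam):                 # Lagrangian
    return -x + lam * x * x

def is_eps_saddle(eps):
    x_bar, lam_bar = 0.0, 1.0 / (2.0 * eps)
    xs = [i / 100.0 for i in range(-300, 301)]
    lams = [j / 10.0 for j in range(0, 101)]
    left = all(L(x_bar, lam) - eps <= L(x_bar, lam_bar) + 1e-12 for lam in lams)
    right = all(L(x_bar, lam_bar) <= L(x, lam_bar) + eps + 1e-12 for x in xs)
    return left and right

all_eps_ok = all(is_eps_saddle(eps) for eps in [1.0, 0.1, 0.01])
```

Note how the multiplier λ̄ = 1/(2ε) blows up as ε → 0, which is exactly why no exact multiplier exists for this problem.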

Note that the existence of the ε-saddlepoint does not require the generalized Slater constraint qualification; rather, it requires the regularity of (P₁). Now, we shall present a zero duality gap property as an application of Theorem 5.3.1.

Theorem 5.3.2 Assume that (P₁) is regular. Then µ is attained at some feasible point x̄ ∈ C and

µ = f(x̄) = sup_{λ∈S*} inf_{x∈C} L(x, λ).

Proof: From Theorem 5.3.1, for any ε > 0 there exist x̄ ∈ C and λ̄ ∈ S* such that

∀x ∈ C, L(x, λ̄) ≥ L(x̄, λ̄) − ε.

Hence,

inf_{x∈C} L(x, λ̄) ≥ L(x̄, λ̄) − ε.

Since −ε < λ̄g(x̄),

inf_{x∈C} [f(x) + λ̄g(x)] ≥ f(x̄) + λ̄g(x̄) − ε > f(x̄) − 2ε.

Since this holds for every ε > 0 and f(x̄) = µ, letting ε → 0 we get

sup_{λ∈S*} inf_{x∈C} [f(x) + λg(x)] ≥ f(x̄).

The equality follows by the weak duality property that for each feasible x ∈ C,

f(x) ≥ sup_{λ∈S*} inf_{x∈C} L(x, λ). □

We conclude this section by mentioning that the main results in the last two sections can be shown to be true in locally convex spaces, and so the results are applicable to a product space in the product topology. Hence, the results obtained in Jeyakumar and Gwinner [30] and Jeyakumar and Wolkowicz [32] can be derived from these results; thus our results provide a unified approach to the study of ε-alternative theorems and zero duality gap properties.

5.4 Nearly Stable Problems

For a convex programming problem in a finite dimensional space, the optimal solution of the corresponding dual problem and the strong duality between the primal and dual problems can be characterized in terms of subgradients of the value function of the problem at zero. Geoffrion [13] did an extensive study on the relationship between the stability of the primal problem and the existence of the optimal multiplier vectors. Craven [4] showed that for a convex programming problem, if the primal problem has a solution then it is stable if and only if there is strong duality between the primal and dual problems.

Recall that the problem (P₁) is nearly stable if V(0) < ∞ and the value function satisfies (2.1) for every ε > 0. In [32], it was established that if V(0) is finite then (P₁) is nearly stable if and only if there is a zero duality gap between (P₁) and (D₁), where (P₁) is an infinite dimensional programming problem involving inequality systems. As a consequence of this result, a characterization of the ε-saddlepoint for (P₁) is given using ε-subdifferentials of the value function of the problem at zero.

In this section, we shall examine the above results in infinite dimensional programming problems involving cone functions under convexlike conditions.

Theorem 5.4.1 Consider (P₁) and (D₁). Let V(0) < ∞. Then (P₁) is nearly stable if and only if there is a zero duality gap between (P₁) and (D₁).

Proof: Suppose (P₁) is nearly stable. Then for every ε > 0 there exists λ ∈ Y' such that for every u ∈ Y,

V(u) ≥ V(0) − λ(u) − ε,

that is, V(u) + λ(u) ≥ V(0) − ε. If x ∈ C, then by choosing u = g(x), we have u − g(x) ∈ S and

f(x) + λg(x) ≥ V(u) + λ(u) ≥ V(0) − ε,

and so

inf_{x∈C} [f(x) + λg(x)] ≥ V(0) − ε.

Hence,

sup_{λ∈S*} inf_{x∈C} [f(x) + λg(x)] ≥ V(0),

since ε is arbitrary. Since weak duality holds without any additional assumptions, µ = ν.

Now, we need to show that λ ∈ S*. Suppose not. Then by a separation theorem there exists z ∈ Y such that, for every λ' ∈ S*, λ(z) < λ'(z), and λ(z) < 0; since S* is a cone, λ'(z) ≥ 0 for every λ' ∈ S*, so z ∈ S. Normalize so that λ(z) = −1. Then for every ε > 0, 2εz ∈ S. Hence,

V(2εz) ≥ V(0) − λ(2εz) − ε = V(0) + 2ε − ε = V(0) + ε > V(0).

Since 2εz ∈ S, V(2εz) ≤ V(0), which is a contradiction. Thus, λ ∈ S*.

Conversely, suppose that there is a zero duality gap between (P₁) and (D₁). Then for any ε > 0 there exists λ ∈ S* such that

inf_{x∈C} [f(x) + λg(x)] ≥ V(0) − ε.

So, for every x ∈ C,

f(x) + λg(x) ≥ V(0) − ε.

If u − g(x) ∈ S, then λ(u − g(x)) ≥ 0, and so −λg(x) ≥ −λ(u). Hence,

f(x) ≥ V(0) − λg(x) − ε ⟹ f(x) ≥ V(0) − λ(u) − ε.

Taking the infimum over the x ∈ C with u − g(x) ∈ S gives V(u) ≥ V(0) − λ(u) − ε, and (P₁) is nearly stable. □
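Theorem 5.4.1 can be illustrated on Example 5.3.1, whose value function under right-hand-side perturbation u is V(u) = −√u for u ≥ 0 and +∞ otherwise (a computation assumed here): the problem is nearly stable, with λ = 1/(4ε) serving as an ε-subgradient certificate, yet exact stability fails since V'(0⁺) = −∞:

```python
import math

# Value function of inf{-x : x^2 <= u}: V(u) = -sqrt(u) for u >= 0.
# Near stability: for every eps > 0, lam = 1/(4*eps) gives
#   V(u) >= V(0) - lam*u - eps for all u.

def V(u):
    return -math.sqrt(u) if u >= 0 else math.inf

def nearly_stable_at(eps):
    lam = 1.0 / (4.0 * eps)
    us = [i / 1000.0 for i in range(0, 5001)]
    return all(V(u) >= V(0.0) - lam * u - eps - 1e-9 for u in us)

ok = all(nearly_stable_at(eps) for eps in [1.0, 0.1, 0.01])

# exact stability fails: for any fixed lam, the inequality with eps = 0
# is violated for small u > 0 because -sqrt(u) falls faster than -lam*u
lam = 100.0
violated = any(V(u) < V(0.0) - lam * u for u in [1e-8, 1e-10])
```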

In the following, we prove a corollary to Theorem 5.4.1 which characterizes the ε-saddlepoint for (P₁) using ε-subdifferentials of the value function at u = 0.

Corollary 5.4.1 Assume that the hypotheses of Theorem 5.3.1 hold. Then for every ε > 0, ∂_ε V(0) is nonempty. Moreover, if −λ̄ ∈ ∂_ε V(0) then there exists x̄ ∈ C such that (x̄, λ̄) is an ε-saddlepoint for (P₁).

Proof: From Theorem 5.3.2, there exist λ̄ ∈ S* and a feasible point x̄ ∈ C such that

f(x̄) = µ = V(0) ≤ inf_{x∈C} L(x, λ̄) + ε.

Thus, for any x ∈ C, f(x) + λ̄g(x) + ε ≥ V(0). If, for u ∈ Y, u − g(x) ∈ S, then λ̄(u − g(x)) ≥ 0. Therefore,

f(x) ≥ V(0) − λ̄g(x) − ε ≥ V(0) − λ̄(u) − ε.

Thus, for every u ∈ Y, V(u) ≥ V(0) − λ̄(u) − ε and −λ̄ ∈ ∂_ε V(0).

Now, if −λ̄ ∈ ∂_ε V(0), then, following the proof of Theorem 5.4.1,

∀x ∈ C, f(x) + λ̄g(x) ≥ f(x̄) − ε (5.8)
≥ f(x̄) + λ̄g(x̄) − ε.

Therefore, L(x̄, λ̄) ≤ L(x, λ̄) + ε. From (5.8) with x = x̄, for every λ ∈ S*,

f(x̄) + λ̄g(x̄) ≥ f(x̄) − ε ≥ f(x̄) + λg(x̄) − ε.

Thus, L(x̄, λ) ≤ L(x̄, λ̄) + ε. Hence,

L(x̄, λ) − ε ≤ L(x̄, λ̄) ≤ L(x, λ̄) + ε. □

5.5 A Stable Alternative Theorem

In this section, we obtain an alternative theorem involving cone systems with local perturbations, using the local closedness condition of the previous sections. As a result, we have a stable alternative theorem. Furthermore, the stable alternative theorem characterizes the validity of the closedness condition; thus, we have a sufficient condition for the local closedness condition.

Here, we use the same Ω defined in section 5.2; thus Ω = {w ∈ Y | ∃x ∈ C, w − g(x) ∈ S}.

Theorem 5.5.1 Let g : C → Y be S-convexlike. Assume that there exists an open neighbourhood U of 0 ∈ Y such that Ω ∩ cl U is a nonempty closed subset of Y. Then for any u ∈ U, exactly one of the following holds:

∃x ∈ C, u − g(x) ∈ S (5.9)

∃0 ≠ λ ∈ S*, inf_{x∈C} λ(g(x) − u) > 0 (5.10)

Conversely, if for every u ∈ cl U either (5.9) or (5.10) holds, but not both, then Ω ∩ cl U is closed.

Proof: Suppose (5.9) does not hold. Then u ∉ Ω. Thus u ∉ Ω ∩ cl U, which is closed. Therefore, there exists a neighbourhood V of u such that V ∩ (Ω ∩ cl U) = ∅, and thus (V ∩ U) ∩ Ω = ∅. Since V ∩ U is a neighbourhood of u, u ∉ cl Ω, which is convex. By a strong separation theorem, there exist a nonzero λ ∈ Y' and a real α such that λ(u) < α ≤ λ(v) for every v ∈ cl Ω. Thus,

∀x ∈ C, λ(g(x) − u) ≥ α − λ(u) > 0,

and

inf_{x∈C} λ(g(x) − u) > 0.

Let s ∈ S and fix x₀ ∈ C. Then for any β > 0,

λ(g(x₀) + βs) > λ(u). (5.11)

By multiplying (5.11) by β⁻¹ and letting β → ∞, we get λ(s) ≥ 0, and so λ ∈ S* and (5.10) has a solution. Clearly, (5.9) and (5.10) cannot hold simultaneously.

On the other hand, suppose that either (5.9) or (5.10) holds, but not both, for every u ∈ cl U. We shall show that Ω ∩ cl U is closed. To see this, let v̄ ∈ cl(Ω ∩ U) ⊆ cl Ω ∩ cl U; thus v̄ ∈ cl Ω, and we want to show that v̄ ∈ Ω as well. For any w ∈ Ω and λ ∈ S*, there exists x ∈ C such that λ(g(x) − w) ≤ 0; thus,

inf_{x∈C} λ(g(x) − w) ≤ 0.

This extends to cl Ω. Thus, for any λ ∈ S*,

inf_{x∈C} λ(g(x) − v̄) ≤ 0, (5.12)

since v̄ ∈ cl Ω. So (5.10) has no solution at u = v̄. Now, by the assumption that either (5.9) or (5.10) holds for u ∈ cl U, but not both, (5.9) has a solution and v̄ ∈ Ω. □
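The dichotomy of Theorem 5.5.1 can be checked over a range of perturbations u for a hypothetical S-convexlike instance with Ω closed:

```python
# Hypothetical instance: g(x) = x^2, C = R, S = R_+, so
#   Omega = {w : exists x, w - x^2 in S} = [0, infinity), which is closed.
# For each u, exactly one of (5.9)/(5.10) should hold.

def holds_5_9(u):
    # exists x with u - g(x) in S  <=>  u >= min_x x^2 = 0
    return u >= 0

def holds_5_10(u):
    # with lam = 1 in S*: inf_x lam*(g(x) - u) = -u, positive iff u < 0
    return -u > 0

us = [i / 10.0 for i in range(-50, 51)]
exactly_one = all(holds_5_9(u) != holds_5_10(u) for u in us)
```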

In what follows, we shall prove a version of the Farkas lemma which is equivalent to the version given in section 5.2 and is derived using the stable alternative theorem. The assumptions that we shall use are similar to those of Theorem 5.2.2. Recall that the set Ω₀ (see section 5.2) is given by Ω₀ = {(u, r) ∈ (Y × ℝ) | ∃x ∈ C, f(x) ≤ r, u − g(x) ∈ S}.

Corollary 5.5.1 Suppose that the conditions of Theorem 5.2.1 are satisfied. If there exists x₀ ∈ C such that −g(x₀) ∈ S, then exactly one of the following statements holds:

∃x ∈ C, f(x) < 0, −g(x) ∈ S (5.13)

sup_{λ∈S*} inf_{x∈C} [f(x) + λg(x)] ≥ 0 (5.14)

Proof: Clearly, (5.13) and (5.14) cannot hold simultaneously. Suppose that (5.13) is not satisfied. Then, for any fixed ε > 0, the system

x ∈ C, f(x) + ε ≤ 0, −g(x) ∈ S

has no solution; that is, the system

−(f(x) + ε, g(x)) ∈ ℝ₊ × S

has no solution x ∈ C. Since the conditions of Theorem 5.5.1 are satisfied (with Y replaced by ℝ × Y and S by ℝ₊ × S), there exists (τ, λ) ∈ (ℝ₊ × S*), not both zero, such that for every x ∈ C,

τ(f(x) + ε) + λg(x) > 0.

Suppose τ = 0. Then λ ≠ 0 and λg(x) > 0 for every x ∈ C, which contradicts the assumption that there exists x₀ ∈ C such that −g(x₀) ∈ S. Thus τ > 0 and τ = 1 may be assumed. Therefore, for any ε > 0, there exists λ ∈ S* such that for each x ∈ C,

f(x) + λg(x) > −ε.

Since ε > 0 is arbitrary, (5.14) follows. □

We conclude by noting that Corollary 5.5.1 provides a new version of the Farkas lemma (which is equivalent to the version given in section 5.2) and also shows how a Farkas type result can be obtained from a stable alternative theorem.

Bibliography

[1] Bazaraa, M.S., "A Theorem of the Alternative with Application to Convex Programming: Optimality, Duality and Stability," J. Math. Anal. Appl. 41 (1973), 701-715.
[2] Ben-Israel, A., and Mond, B., "What is Invexity?," J. Austral. Math. Soc. Ser. B 28 (1986), 1-9.
[3] Berge, C., Topological Spaces, Macmillan, New York, 1963.
[4] Craven, B.D., Mathematical Programming and Control Theory, Chapman and Hall, London, 1978.
[5] Craven, B.D., "On Quasidifferentiable Optimization," J. Austral. Math. Soc. Ser. A 42 (1986), 64-78.
[6] Craven, B.D., and Glover, B., "Invex Functions and Duality," J. Austral. Math. Soc. Ser. A 39 (1985), 1-20.
[7] Craven, B.D., Gwinner, J., and Jeyakumar, V., "Nonconvex Theorems of the Alternative and Minimization," Optimization 18 (1987), 151-163.
[8] Craven, B.D., and Jeyakumar, V., "Alternative Theorems with Weakened Convexity," Utilitas Mathematica 31 (1987), 149-159.
[9] Duffin, R.J., and Karlovitz, L.A., "An Infinite Linear Program with a Duality Gap," Management Science 12 No. 1 (1965), 122-134.
[10] Fan, K., "Minimax Theorems," Proc. Nat. Acad. Sci. 39 (1953), 42-47.
[11] Fan, K., Glicksberg, I., and Hoffman, A.J., "Systems of Inequalities Involving Convex Functions," Proc. Amer. Math. Soc. 8 (1957), 617-622.
[12] Fuchssteiner, B., and Konig, W., "New Versions of the Hahn-Banach Theorem," in "General Inequalities 2," ed. E.F. Beckenbach, Birkhauser Verlag, Basel, 1980, 255-266.
[13] Geoffrion, A.M., "Duality in Nonlinear Programming: A Simplified Applications-Oriented Development," SIAM Review 13 No. 1 (1971), 1-37.
[14] Greenberg, H., and Pierskalla, W., "A Review of Quasi-convex Functions," Operations Research 19 (1971), 1553-1570.
[15] Gwinner, J., and Jeyakumar, V., "A Solvability Theorem and Minimax Fractional Programming," UNSW Applied Mathematics Preprint AM91/7 (1991), to appear in Z. Operations Research.
[16] Gwinner, J., and Jeyakumar, V., "Stable Minimax Theorem on Noncompact Sets," Proceedings of the International Conference on Fixed Point Theory and Applications, CIRM, Marseille-Luminy, France, eds. M. Thera and J.B. Baillon, Pitman Research Notes in Mathematics Series 252, Longman Scientific and Technical, (1991), 215-220.
[17] Hanson, M., "On Sufficiency of the Kuhn-Tucker Conditions," J. Math. Anal. Appl. 80 (1981), 545-550.
[18] Hanson, M., and Mond, B., "Necessary and Sufficient Conditions in Constrained Optimization," Mathematical Programming 37 (1987), 51-58.
[19] Hayashi, M., and Komiya, H., "Perfect Duality for Convexlike Programs," J. Optim. Theory Appl. 38 No. 2 (1982), 179-189.
[20] Heinecke, G., and Oettli, W., "Characterizations of Weakly Efficient Points," Z. Operations Research 32 (1988), 375-393.
[21] Holmes, R., Geometric Functional Analysis and Applications, Springer-Verlag, New York.
[22] Jameson, G., Topology and Normed Spaces, Chapman and Hall, London, 1974.
[23] Jeroslow, R.G., "A Limiting Lagrangian for Infinitely Constrained Convex Optimization in ℝⁿ," J. Optim. Theory Appl. 33 No. 4 (1981), 479-495.
[24] Jeyakumar, V., "A Generalization of a Minimax Theorem of Fan via a Theorem of the Alternative," J. Optim. Theory Appl. 48 No. 3 (1986), 525-533.
[25] Jeyakumar, V., "Convexlike Alternative Theorems and Mathematical Programming," Optimization 16 No. 2 (1985), 643-652.
[26] Jeyakumar, V., Honours class lecture notes in Mathematical Programming, University of New South Wales, Session II 1990.
[27] Jeyakumar, V., "Nonconvex Infinite Games," Optimization 19 No. 2 (1988), 289-296.
[28] Jeyakumar, V., "Nonconvex Lagrangian, Minimax and Alternative Theorems: An Equivalence," in Methods of Operations Research, edited by R. Henn, P. Kall, et al., Verlag Anton Hain, Germany, 1991, 61-69.
[29] Jeyakumar, V., "Nonlinear Alternative Theorems and Nondifferentiable Programming," Z. Operations Research 28 Series A (1984), 175-187.
[30] Jeyakumar, V., and Gwinner, J., "Inequality Systems and Optimization," J. Math. Anal. Appl. 159 (1991), 51-71.
[31] Jeyakumar, V., Oettli, W., and Natividad, M., "A Solvability Theorem for a Class of Quasiconvex Mappings with Applications to Optimization," Applied Mathematics Preprint AM91/42 (submitted for publication).
[32] Jeyakumar, V., and Wolkowicz, H., "Zero Duality Gaps in Infinite-Dimensional Programming," J. Optim. Theory Appl. 67 No. 1 (1990), 87-108.
[33] Karney, D., "Duality Gaps in Semi-infinite Linear Programming - An Approximation Problem," Mathematical Programming 20 (1981), 129-143.
[34] Kelley, J., and Namioka, I., Linear Topological Spaces, Springer-Verlag, New York, 1963.
[35] Mangasarian, O., Nonlinear Programming, McGraw-Hill, New York, 1969.
[36] Martin, D., "The Essence of Invexity," J. Optim. Theory Appl. 47 No. 1 (1985), 65-76.
[37] Paeck, S., "On Convexlike and Concavelike Conditions in Alternative, Minimax and Minimization Theorems," 1990 (preprint).
[38] Pomerol, J., "Inequality Systems and Minimax Theorems," J. Math. Anal. Appl. 103 (1984), 263-292.
[39] Rockafellar, T., Convex Analysis, Princeton University Press, Princeton, New Jersey, 1970.
[40] Schaefer, H., Topological Vector Spaces, Macmillan, New York, 1966.
[41] Sion, M., "On General Minimax Theorems," Pacific J. Math. 8 (1958), 171-176.
[42] Tanaka, Y., "Note on Generalized Convex Functions," J. Optim. Theory Appl. 66 No. 2 (1990), 345-349.
[43] Terkelsen, F., "Some Minimax Theorems," Math. Scand. 31 (1972), 405-413.
[44] von Neumann, J., and Morgenstern, O., Theory of Games and Economic Behaviour, Princeton University Press, Princeton, New Jersey, 1944.
[45] Weir, T., and Jeyakumar, V., "A Class of Nonconvex Functions and Mathematical Programming," Bull. Austral. Math. Soc. 38 (1988), 177-189.
