A subgradient method for equilibrium problems involving quasiconvex bifunctions*

Le Hai Yen† and Le Dung Muu‡

November 4, 2019 (arXiv:1911.00181v1 [math.OC] 1 Nov 2019)

Abstract. In this paper we propose a subgradient algorithm for solving the equilibrium problem where the bifunction may be quasiconvex with respect to the second variable. The convergence of the algorithm is investigated. A numerical example for a generalized variational inequality problem is provided to demonstrate the behavior of the algorithm.

Keywords: Equilibria, Subgradient method, Quasiconvexity.

*This paper is supported by the NAFOSTED.
†Institute of Mathematics, VAST; Email: [email protected]
‡Thang Long University and Institute of Mathematics, VAST; Email: [email protected]

1 Introduction

In this paper we consider the equilibrium problem defined by the Nikaido-Isoda-Fan inequality, given as
$$\text{Find } x^* \in C \text{ such that } f(x^*, y) \ge 0 \quad \forall y \in C, \qquad (EP)$$
where $C$ is a closed convex set in $\mathbb{R}^n$ and $f : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is a bifunction such that $f(x,x) = 0$ for every $(x,x) \in C \times C \subset \operatorname{dom} f$. The interest of this problem lies in the fact that it unifies, in a convenient way, many important problems such as the Kakutani fixed point problem, variational inequalities, optimization, Nash equilibria and others [3, 4, 11, 14]. The inequality in (EP) was first used in [16] for a convex game model. The first solution-existence result for (EP) was obtained by Ky Fan in [6], where the bifunction $f$ may be quasiconvex with respect to the second argument. After the appearance of the paper by Blum and Oettli [4], Problem (EP) has attracted much attention, and several solution approaches have been developed for solving it; see e.g. the monographs [3, 11]. A basic method for solving some classes of Problem (EP) is the subgradient (or auxiliary subproblem) method [15, 19], where at each iteration $k$, having the iterate $x^k$, the next iterate $x^{k+1}$ is obtained by solving the subproblem
$$\min\Big\{ f(x^k, y) + \frac{1}{2\rho_k}\|y - x^k\|^2 : y \in C \Big\},$$
with $\rho_k > 0$. This subproblem is a strongly convex program whenever $f(x^k, \cdot)$ is convex. Since in the case of a variational inequality, where $f(x,y) := \langle F(x), y - x\rangle$, the iterate is
$$x^{k+1} = P_C\Big(x^k - \frac{1}{2\rho_k} F(x^k)\Big),$$
the subgradient method can be considered an extension of the projection method commonly used in nonsmooth convex optimization [24] as well as for variational inequalities [7]. Projection algorithms for paramonotone equilibrium problems with $f$ convex with respect to the second variable can be found in [1, 22, 25]. Note that when $f(x,\cdot)$ is quasiconvex the subgradient method fails to apply, because the objective function $f(x^k, y) + \frac{1}{2\rho_k}\|y - x^k\|^2$ is, in general, neither convex nor quasiconvex. To the best of our knowledge there is no algorithm for equilibrium problems where the bifunction $f$ is quasiconvex with respect to the second variable.

In this paper we propose a projection algorithm for solving Problem (EP), where the bifunction may be quasiconvex with respect to the second variable. In order to deal with quasiconvexity we employ the subdifferential for quasiconvex functions first introduced in [8]; see also [20, 21] for its properties and calculus rules. The subdifferential of a quasiconvex function has been used by several authors for nonsmooth quasiconvex optimization, see e.g. [12, 10, 24], and for quasiconvex feasibility problems [5, 17].

2 Subdifferentials of quasiconvex functions

First of all, let us recall the well-known definition of quasiconvex functions; see e.g. [13].

Definition 2.1. A function $\varphi : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is called quasiconvex on a convex subset $C$ of $\mathbb{R}^n$ if and only if for every $x, y \in C$ and $\lambda \in [0, 1]$ one has
$$\varphi((1-\lambda)x + \lambda y) \le \max\{\varphi(x), \varphi(y)\}. \qquad (1)$$

It is easy to see that $\varphi$ is quasiconvex on $C$ if and only if the level set
$$L_\varphi(\alpha) := \{x \in C : \varphi(x) < \alpha\} \qquad (2)$$
is convex for any $\alpha \in \mathbb{R}$.
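The projection step recalled above for the variational inequality case can be sketched numerically. The following is a minimal illustration, not the paper's method: the operator $F$, the choice $C$ as the Euclidean unit ball (so that $P_C$ has a closed form), and all data are hypothetical.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto C = {x : ||x|| <= radius}
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

def vi_projection_step(x, F, rho):
    # x^{k+1} = P_C(x^k - (1/(2 rho_k)) F(x^k))
    return project_ball(x - F(x) / (2.0 * rho))

# toy monotone operator F(x) = A x with A positive definite (illustrative only);
# the unique solution of this VI on the unit ball is x* = 0
A = np.array([[2.0, 1.0], [1.0, 3.0]])
F = lambda x: A @ x

x = np.array([0.9, -0.4])
for k in range(200):
    x = vi_projection_step(x, F, rho=1.0)
```

With a fixed $\rho$ and this well-conditioned $A$, the iteration contracts toward the solution; in general a step-size rule tied to the Lipschitz and monotonicity constants of $F$ is needed.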
We consider the following Greenberg-Pierskalla subdifferential introduced in [8]:
$$\partial^{GP}\varphi(x) := \{g \in \mathbb{R}^n : \langle g, y - x\rangle \ge 0 \Rightarrow \varphi(y) \ge \varphi(x)\}. \qquad (3)$$
A variation of the GP-subdifferential is the star-subdifferential, defined as
$$\partial^*\varphi(x) := \begin{cases} \{g \in \mathbb{R}^n : \langle g, y - x\rangle > 0 \Rightarrow \varphi(y) \ge \varphi(x)\} & \text{if } x \notin D_*,\\ \mathbb{R}^n & \text{if } x \in D_*, \end{cases}$$
where $D_*$ is the set of minimizers of $\varphi$ on $\mathbb{R}^n$. If $\varphi$ is continuous on $\mathbb{R}^n$, then $\partial^{GP}\varphi(x) = \partial^*\varphi(x)$ ([21]).

These subdifferentials are also called quasi-subdifferentials or normal-subdifferentials. Some calculus rules and optimality conditions for these subdifferentials have been studied in [20, 21]; among them, the following results will be used in the next section.

Lemma 2.2. ([10], [20]) Assume that $\varphi : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is upper semicontinuous and quasiconvex on $\operatorname{dom}\varphi$. Then
$$\partial^{GP}\varphi(x) \ne \emptyset \quad \forall x \in \operatorname{dom}\varphi, \qquad (4)$$
and
$$\partial^*\varphi(x) = \partial^{GP}\varphi(x) \cup \{0\}. \qquad (5)$$

Lemma 2.3. ([10], [21])
$$0 \in \partial^{GP}\varphi(x) \iff \partial^{GP}\varphi(x) = \mathbb{R}^n \iff x \in \operatorname*{argmin}_{y \in \mathbb{R}^n} \varphi(y). \qquad (6)$$

The following result follows from Lemma 4 in [10]; it can be used to find a subgradient of a fractional quasiconvex function.

Lemma 2.4. ([10]) Suppose $\varphi(x) = a(x)/b(x)$ for all $x \in \operatorname{dom}\varphi$, where $a$ is a convex function, $b$ is finite and positive on $\operatorname{dom}\varphi$, $\operatorname{dom}\varphi$ is convex, and one of the following conditions holds:
(a) $b$ is affine;
(b) $a$ is nonnegative on $\operatorname{dom}\varphi$ and $b$ is concave.
Then $\varphi$ is quasiconvex and $\partial(a - \alpha b)(x)$ is a subset of $\partial^{GP}\varphi(x)$ for $\alpha = \varphi(x)$, where $\partial$ stands for the subdifferential in the sense of convex analysis.

3 Algorithm and its convergence analysis

In this section we propose an algorithm for solving Problem (EP) by using the star-subdifferential of the bifunction $f$ with respect to the second variable. As usual, we suppose that $C$ is a nonempty closed convex subset of $\mathbb{R}^n$ and $f : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is a bifunction satisfying $f(x,x) = 0$ and $C \times C \subset \operatorname{dom} f$. The solution set of Problem (EP) is denoted by $S(EP)$.

Assumptions
(A1) For every $x \in C$, the function $f(x, \cdot)$ is quasiconvex, and $f(\cdot, \cdot)$
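Lemma 2.4 can be made concrete for a linear-fractional function $\varphi(x) = (c^\top x + d)/(e^\top x + f_0)$: here $a - \alpha b$ is affine with gradient $c - \alpha e$, so $c - \varphi(x)e$ is a GP-subgradient at $x$ by case (a). The sketch below (with hypothetical data) checks the defining implication (3) empirically on random samples.

```python
import numpy as np

# hypothetical linear-fractional function phi(x) = (c.x + d) / (e.x + f0),
# with the denominator positive on the sampled region (case (a) of Lemma 2.4)
c, d = np.array([1.0, -2.0]), 0.5
e, f0 = np.array([0.5, 0.2]), 3.0

phi = lambda x: (c @ x + d) / (e @ x + f0)

x0 = np.array([0.3, 0.1])
# gradient of the affine function a - phi(x0)*b, hence a GP-subgradient at x0
g = c - phi(x0) * e

# empirical check of the GP implication: <g, y - x0> >= 0  =>  phi(y) >= phi(x0)
rng = np.random.default_rng(0)
violations = 0
for _ in range(1000):
    y = x0 + rng.uniform(-1.0, 1.0, size=2)   # denominator stays positive here
    if g @ (y - x0) >= 0 and phi(y) < phi(x0) - 1e-12:
        violations += 1
```

A short calculation confirms why no violations occur: $\langle c - \alpha e, y - x_0\rangle \ge 0$ with $\alpha = \varphi(x_0)$ rearranges to $a(y) \ge \alpha\, b(y)$, and dividing by $b(y) > 0$ gives $\varphi(y) \ge \varphi(x_0)$.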
is upper semicontinuous on an open set containing $C \times C$;
(A2) The bifunction $f$ is pseudomonotone on $C$, that is,
$$f(x,y) \ge 0 \Rightarrow f(y,x) \le 0 \quad \forall x \in C,\ y \in C,$$
and paramonotone on $C$ with respect to $S(EP)$, that is,
$$x \in S(EP),\ y \in C \text{ and } f(x,y) = f(y,x) = 0 \ \Rightarrow\ y \in S(EP).$$
(A3) The solution set $S(EP)$ is nonempty.

Paramonotonicity of bifunctions has been used for equilibrium as well as split equilibrium problems in several papers, see e.g. [1, 22, 25, 26]. Various properties of paramonotonicity can be found, for example, in [9].

For simplicity of notation, let us denote the sublevel set of the function $f(x,\cdot)$ with value $0$ and the star-subdifferential of $f(x,\cdot)$ at $x$ as follows:
$$L_f(x) := \{y : f(x,y) < f(x,x) = 0\},$$
$$\partial_2^* f(x,x) := \{g \in \mathbb{R}^n : \langle g, y - x\rangle < 0\ \forall y \in L_f(x)\}.$$

The projection algorithm below can be considered as an extension of the one in [22] to equilibrium problems (EP) where $f$ is quasiconvex with respect to the second variable.

Algorithm 3.1. (The normal-subgradient method)
Take a real sequence $\{\alpha_k\}$ satisfying the conditions
$$\alpha_k > 0 \ \forall k \in \mathbb{N}, \qquad \sum_{k=1}^{\infty} \alpha_k = +\infty, \qquad \sum_{k=1}^{\infty} \alpha_k^2 < +\infty.$$
Step 0: Choose $x^0 \in C$, set $k = 0$.
Step k: Given $x^k \in C$, take $g^k \in \partial_2^* f(x^k, x^k)$. If $g^k = 0$, stop: $x^k$ is a solution. Otherwise normalize $g^k$ so that $\|g^k\| = 1$ and compute
$$x^{k+1} = P_C(x^k - \alpha_k g^k).$$
If $x^{k+1} = x^k$, stop. Otherwise update $k \leftarrow k + 1$.

Remark 3.2. At each iteration $k$, in order to check whether the iterate $x^k$ is a solution, one can solve the programming problem $\min_{y \in C} f(x^k, y)$. In general solving this problem is costly; however, in some special cases, for example when $C$ is a polyhedral convex set and the function $f(x^k, \cdot)$ is affine fractional, this program can be solved efficiently by linear programming methods.

The following remark ensures the validity of the algorithm.

Remark 3.3. (i) Since the star-subdifferential is a cone, one can always normalize a nonzero element of it to obtain a unit vector in the subdifferential.
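The steps of Algorithm 3.1 can be sketched in code for one tractable special case: taking $f(x,y) = \varphi(y) - \varphi(x)$ with $\varphi$ linear-fractional, so that $f(x,\cdot)$ is quasiconvex, $S(EP) = \operatorname{argmin}_C \varphi$, and a subgradient in $\partial_2^* f(x,x)$ is available in closed form via Lemma 2.4. All problem data and the box constraint set below are hypothetical illustrations, not the paper's numerical example.

```python
import numpy as np

# special case f(x, y) = phi(y) - phi(x) with phi linear-fractional:
# solving (EP) amounts to minimizing phi over C (hypothetical data)
c, d = np.array([1.0, -2.0]), 0.5
e, f0 = np.array([0.5, 0.2]), 3.0
phi = lambda x: (c @ x + d) / (e @ x + f0)

lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
project = lambda x: np.clip(x, lo, hi)       # P_C for the box C = [lo, hi]

def gp_subgradient(x):
    # Lemma 2.4(a): c - phi(x)*e is a GP-subgradient of phi at x,
    # hence an element of the star-subdifferential of f(x, .) at x
    return c - phi(x) * e

x = np.zeros(2)                              # x^0 in C
for k in range(1, 5001):
    g = gp_subgradient(x)
    norm = np.linalg.norm(g)
    if norm == 0.0:
        break                                # g^k = 0: x^k is a solution (Lemma 2.3)
    g = g / norm                             # normalize: ||g^k|| = 1
    alpha = 1.0 / k                          # sum alpha_k = inf, sum alpha_k^2 < inf
    x_next = project(x - alpha * g)
    if np.array_equal(x_next, x):
        break                                # x^{k+1} = x^k: stop (Remark 3.3(ii))
    x = x_next
```

For this data $\varphi$ attains its minimum over the box at the vertex $(-1, 1)$, and the iterates move monotonically toward that vertex and clamp onto it after a few projection steps.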
(ii) If Algorithm 3.1 generates a finite sequence, the last point is a solution of (EP). Indeed, if $0 \in \partial_2^* f(x^k, x^k)$, then by Lemma 2.3 we have
$$x^k \in \operatorname*{argmin}_{y \in \mathbb{R}^n} f(x^k, y),$$
which implies $f(x^k, y) \ge 0$ for any $y \in C$.
If $x^{k+1} = x^k$, then $x^k = P_C(x^k - \alpha_k g^k)$. Hence
$$\langle -\alpha_k g^k, y - x^k\rangle \le 0 \quad \forall y \in C,$$
or
$$\langle g^k, y - x^k\rangle \ge 0 \quad \forall y \in C.$$
Since $g^k \in \partial_2^* f(x^k, x^k) = \partial_2^{GP} f(x^k, x^k)$, we have $f(x^k, y) \ge f(x^k, x^k) = 0$ for every $y \in C$.

For the convergence of the algorithm we need the following result, which has been proved in [22] for the case where the bifunction $f$ is convex in the second variable.

Lemma 3.4. The following inequality holds true for every $k$:
$$\|x^{k+1} - x^k\| \le \alpha_k. \qquad (7)$$
Proof. Since $x^{k+1} = P_C(x^k - \alpha_k g^k)$,
$$\langle x^{k+1} - x^k + \alpha_k g^k, y - x^{k+1}\rangle \ge 0 \quad \forall y \in C.$$
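For intuition, inequality (7) also follows in one line from the nonexpansiveness of the projection operator, using $x^k \in C$ (so $P_C(x^k) = x^k$) and $\|g^k\| = 1$:

```latex
\|x^{k+1} - x^k\|
  = \|P_C(x^k - \alpha_k g^k) - P_C(x^k)\|
  \le \|(x^k - \alpha_k g^k) - x^k\|
  = \alpha_k \|g^k\|
  = \alpha_k .
```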