arXiv:1911.00181v1 [math.OC] 1 Nov 2019

A subgradient method for equilibrium problems involving quasiconvex bifunctions*

Le Hai Yen† and Le Dung Muu‡

November 4, 2019

Abstract

In this paper we propose a subgradient algorithm for solving the equilibrium problem where the bifunction may be quasiconvex with respect to the second variable. The convergence of the algorithm is investigated. A numerical example for a generalized variational inequality problem is provided to demonstrate the behavior of the algorithm.

Keywords: Equilibria, Subgradient method, Quasiconvexity.

*This paper is supported by the NAFOSTED.
†Thang Long University and Institute of Mathematics, VAST; Email: [email protected]
‡Institute of Mathematics, VAST; Email: [email protected]

1 Introduction

In this paper we consider the equilibrium problem defined by the Nikaido-Isoda-Fan inequality, which is given as

Find x* ∈ C such that f(x*, y) ≥ 0 ∀y ∈ C, (EP)

where C is a closed convex set in R^n and f : R^n × R^n → R ∪ {+∞} is a bifunction such that f(x, x) = 0 for every x ∈ C and C × C ⊂ dom f. The interest of this problem is that it unifies many important problems, such as optimization, variational inequality, fixed point, Nash equilibrium and Kakutani fixed point problems [3, 4, 11, 14], in a convenient way. The model (EP) for a convex game was used in [16]. The first result on the solution existence of (EP) was obtained by Ky Fan in [6], where the bifunction f can be quasiconvex with respect to the second argument. After the appearance of the paper by Blum and Oettli [4], Problem (EP) has attracted much attention of many authors, and some solution approaches have been developed for solving Problem (EP); see e.g. the monographs [3, 11].

A basic iteration method for (EP) is the subgradient (or auxiliary subproblem) one [15, 19]: at each iteration k, having the iterate x^k, the next iterate x^{k+1} is obtained by solving the subproblem

min { f(x^k, y) + (1/(2ρ_k)) ‖y − x^k‖² : y ∈ C },

with ρ_k > 0. This subproblem is a strongly convex program whenever f(x^k, ·) is convex. In the case of the variational inequality, where f(x, y) := ⟨F(x), y − x⟩, the iterate becomes x^{k+1} = P_C(x^k − ρ_k F(x^k)), so this method can be considered as an extension of the projection method commonly used in nonsmooth convex optimization [24] as well as in variational inequalities [7]. Projection algorithms for paramonotone equilibrium problems with f convex with respect to the second variable can be found in [1, 22, 25]. Note that when f(x, ·) is quasiconvex the subgradient method fails to apply, because the objective function f(x^k, y) + (1/(2ρ_k))‖y − x^k‖², in general, is neither convex nor quasiconvex. To the best of our knowledge, there is no algorithm for equilibrium problems where the bifunction f is quasiconvex with respect to the second variable.

In this paper we propose a projection algorithm for solving Problem (EP), where the bifunction may be quasiconvex with respect to the second variable. In order to deal with quasiconvexity, we employ the subdifferential for quasiconvex functions first introduced in [8]; see also [20, 21] for its properties and calculus rules.
The subdifferential of a quasiconvex function has been used by some authors for nonsmooth quasiconvex optimization, see e.g. [12, 10, 24], and for quasiconvex feasibility problems [5, 17].

2 Subdifferentials of quasi-convex functions

First of all, let us recall the well-known definition of quasiconvex functions; see e.g. [13].

Definition 2.1. A function ϕ : R^n → R ∪ {+∞} is called quasiconvex on a convex set C of R^n if and only if for every x, y ∈ C and λ ∈ [0, 1], one has

ϕ[(1 − λ)x + λy] ≤ max[ϕ(x), ϕ(y)]. (1)

It is easy to see that ϕ is quasiconvex on C if and only if the level set

L_ϕ(α) := {x ∈ C : ϕ(x) < α} (2)

is convex for any α ∈ R. We consider the following Greenberg-Pierskalla subdifferential introduced in [8]:

∂^{GP}ϕ(x) := {g ∈ R^n : ⟨g, y − x⟩ ≥ 0 ⇒ ϕ(y) ≥ ϕ(x)}. (3)

A variation of the GP-subdifferential is the star-subdifferential, which is defined as

∂*ϕ(x) := {g ∈ R^n : ⟨g, y − x⟩ > 0 ⇒ ϕ(y) ≥ ϕ(x)}   if x ∉ D_*,
∂*ϕ(x) := R^n                                          if x ∈ D_*,

where D_* is the set of minimizers of ϕ on R^n. If ϕ is continuous on R^n, then ∂^{GP}ϕ(x) = ∂*ϕ(x) ([21]).

These subdifferentials are also called quasi-subdifferentials or normal-subdifferentials. Some calculus rules and optimality conditions for these subdifferentials have been studied in [20, 21]; among them, the following results will be used in the next section.

Lemma 2.2. ([10], [20]) Assume that ϕ : R^n → R ∪ {+∞} is upper semicontinuous and quasiconvex on dom ϕ. Then

∂^{GP}ϕ(x) ≠ ∅ ∀x ∈ dom ϕ, (4)

and

∂*ϕ(x) = ∂^{GP}ϕ(x) ∪ {0}. (5)

Lemma 2.3. ([10], [21])

0 ∈ ∂^{GP}ϕ(x) ⇔ ∂^{GP}ϕ(x) = R^n ⇔ x ∈ argmin_{y∈R^n} ϕ(y). (6)

The following result follows from Lemma 4 in [10]; it can be used to find a subgradient of a fractional quasiconvex function.

Lemma 2.4. ([10]) Suppose ϕ(x) = a(x)/b(x) for all x ∈ dom ϕ, where a is a convex function, b is finite and positive on dom ϕ, dom ϕ is convex, and one of the following conditions holds:

(a) b is affine;

(b) a is nonnegative on dom ϕ and b is concave.

Then ϕ is quasiconvex and ∂(a − αb)(x) is a subset of ∂^{GP}ϕ(x) for α = ϕ(x), where ∂ stands for the subdifferential in the sense of convex analysis.
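As an illustration of Lemma 2.4 (not part of the original text), consider an affine-fractional function ϕ(x) = (p^T x + q)/(c^T x + d) on a set where c^T x + d > 0. Taking a(x) = p^T x + q, b(x) = c^T x + d and α = ϕ(x), case (a) of the lemma shows that the gradient of a − αb at x, namely p − αc, belongs to ∂^{GP}ϕ(x). A minimal Python sketch (the helper name is ours):

```python
import numpy as np

def gp_subgrad_affine_fractional(p, q, c, d, x):
    """Return an element of the Greenberg-Pierskalla subdifferential of
    phi(x) = (p.x + q) / (c.x + d) at x, assuming c.x + d > 0 (Lemma 2.4, case (a))."""
    denom = float(np.dot(c, x)) + d
    if denom <= 0.0:
        raise ValueError("x must satisfy c.x + d > 0")
    alpha = (float(np.dot(p, x)) + q) / denom   # alpha = phi(x)
    return p - alpha * c                        # gradient of a - alpha*b at x

# tiny usage example with made-up data
p, q = np.array([1.0, 2.0]), 0.5
c, d = np.array([0.5, 1.0]), 1.0
g = gp_subgrad_affine_fractional(p, q, c, d, np.array([1.0, 1.0]))
```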

3 Algorithm and its convergence analysis

In this section we propose an algorithm for solving Problem (EP) by using the star-subdifferential of the bifunction f with respect to the second variable. As usual, we suppose that C is a nonempty closed convex subset of R^n and that f : R^n × R^n → R ∪ {+∞} is a bifunction satisfying f(x, x) = 0 and C × C ⊂ dom f. The solution set of Problem (EP) is denoted by S(EP).

Assumptions

(A1) For every x ∈ C, the function f(x, ·) is quasiconvex, and f(·, ·) is upper semicontinuous on an open set containing C × C;

(A2) The bifunction f is pseudomonotone on C, that is,

f(x, y) ≥ 0 ⇒ f(y, x) ≤ 0 ∀x ∈ C, y ∈ C,

and paramonotone on C with respect to S(EP), that is,

x ∈ S(EP), y ∈ C and f(x, y) = f(y, x) = 0 ⇒ y ∈ S(EP).

(A3) The solution set S(EP) is nonempty.

Paramonotonicity of bifunctions has been used for equilibrium as well as split equilibrium problems in some papers, see e.g. [1, 22, 25, 26]. Various properties of paramonotonicity can be found, for example, in [9]. For simplicity of notation, let us denote the sublevel set of the function f(x, ·) with value 0 and the star-subdifferential of f(x, ·) at x as follows:

L_f(x) := {y : f(x, y) < 0},    ∂*_2 f(x, x) := ∂*(f(x, ·))(x).

Algorithm 3.1 (The normal-subgradient method). Take a real sequence {α_k} satisfying the following conditions:

α_k > 0 ∀k ∈ N,   ∑_{k=1}^{∞} α_k = +∞,   ∑_{k=1}^{∞} α_k² < +∞.

Step 0: Take x^0 ∈ C and set k = 0.

Step k: Having x^k ∈ C, take g^k ∈ ∂*_2 f(x^k, x^k).
If g^k = 0, stop: x^k is a solution.
Otherwise, normalize g^k to obtain ‖g^k‖ = 1 and compute

x^{k+1} = P_C(x^k − α_k g^k).

If x^{k+1} = x^k, then stop. Otherwise, update k ← k + 1.

Remark 3.2. At each iteration k, in order to check whether the iterate x^k is a solution or not, one can solve the optimization problem min_{y∈C} f(x^k, y). In general, solving this problem is costly; however, in some special cases, for example when C is polyhedral and the function f(x^k, ·) is affine fractional, this program can be solved efficiently by linear programming methods.

The following remark ensures the validity of the algorithm.

Remark 3.3. (i) Since the star-subdifferential is a cone, one can always normalize a nonzero element of it to obtain a unit vector in the subdifferential.

(ii) If Algorithm 3.1 generates a finite sequence, then the last point is a solution of (EP). Indeed, if 0 ∈ ∂*_2 f(x^k, x^k), then by Lemma 2.3 we have

x^k ∈ argmin_{y∈R^n} f(x^k, y),

which implies f(x^k, y) ≥ 0 for any y ∈ C. If x^{k+1} = x^k, then x^k = P_C(x^k − α_k g^k). Hence

⟨−α_k g^k, y − x^k⟩ ≤ 0 ∀y ∈ C, or equivalently ⟨g^k, y − x^k⟩ ≥ 0 ∀y ∈ C.

Since g^k ∈ ∂*_2 f(x^k, x^k) and g^k ≠ 0, we have g^k ∈ ∂^{GP}_2 f(x^k, x^k) by (5), and therefore f(x^k, y) ≥ f(x^k, x^k) = 0 for every y ∈ C.
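The steps above translate directly into code. The following is a minimal sketch of Algorithm 3.1 (ours, not from the original paper): the star-subgradient oracle star_subgrad_2, the projection proj_C and the step-size rule alpha are user-supplied callables whose names are introduced here only for illustration.

```python
import numpy as np

def normal_subgradient_method(x0, star_subgrad_2, proj_C, alpha, max_iter=2000):
    """Minimal sketch of Algorithm 3.1 (normal-subgradient method).

    star_subgrad_2(x): returns some element of the star-subdifferential of f(x, .) at x.
    proj_C(z):         Euclidean projection of z onto C.
    alpha(k):          step sizes with sum alpha(k) = +inf and sum alpha(k)**2 < +inf.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = star_subgrad_2(x)
        ng = np.linalg.norm(g)
        if ng == 0.0:
            return x                     # 0 lies in the star-subdifferential: x solves (EP)
        g = g / ng                       # normalize so that ||g^k|| = 1
        x_next = proj_C(x - alpha(k) * g)
        if np.allclose(x_next, x):       # x^{k+1} = x^k (up to a tolerance): stop
            return x_next
        x = x_next
    return x

# possible ingredients for the test problem of Section 4: C = [1, 3]^n, alpha_k = 100/(k+1)
proj_box = lambda z: np.clip(z, 1.0, 3.0)
alpha = lambda k: 100.0 / (k + 1)
```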

For the convergence of the algorithm we need the following results, which have been proved in [22] for the case where the bifunction f is convex in its second variable.

Lemma 3.4. The following inequality holds true for every k:

‖x^{k+1} − x^k‖ ≤ α_k. (7)

Proof. Since x^{k+1} = P_C(x^k − α_k g^k),

⟨x^{k+1} − x^k + α_k g^k, y − x^{k+1}⟩ ≥ 0 ∀y ∈ C.

By substituting y = x^k, we obtain

‖x^{k+1} − x^k‖² ≤ α_k⟨g^k, x^k − x^{k+1}⟩ ≤ α_k‖g^k‖ ‖x^k − x^{k+1}‖ = α_k‖x^{k+1} − x^k‖.

Therefore, ‖x^{k+1} − x^k‖ ≤ α_k.

Proposition 3.5. For every z ∈ C and k, the following inequality holds

‖x^{k+1} − z‖² ≤ ‖x^k − z‖² + 2α_k⟨g^k, z − x^k⟩ + α_k². (8)

Proof. Let z ∈ C. Then we have

‖x^{k+1} − z‖² = ‖x^k − z‖² − ‖x^{k+1} − x^k‖² + 2⟨x^k − x^{k+1}, z − x^{k+1}⟩. (9)

Since x^{k+1} = P_C(x^k − α_k g^k) and z ∈ C,

⟨x^k − x^{k+1}, z − x^{k+1}⟩ ≤ α_k⟨g^k, z − x^{k+1}⟩. (10)

From (9) and (10),

‖x^{k+1} − z‖² ≤ ‖x^k − z‖² − ‖x^{k+1} − x^k‖² + 2α_k⟨g^k, z − x^k⟩ + 2α_k⟨g^k, x^k − x^{k+1}⟩. (11)

By the Cauchy-Schwarz inequality and the fact that ‖g^k‖ = 1, we have

⟨g^k, x^k − x^{k+1}⟩ ≤ ‖x^k − x^{k+1}‖.

Since 2α_k‖x^k − x^{k+1}‖ − ‖x^{k+1} − x^k‖² ≤ α_k², the inequality (11) becomes

‖x^{k+1} − z‖² ≤ ‖x^k − z‖² + 2α_k⟨g^k, z − x^k⟩ + α_k².

The following lemma is an extension of Lemma 6 in [10] to the diagonal subdifferential of a bifunction being quasiconvex in its second variable.

Lemma 3.6. (a) If B(x, ε) ⊂ L_f(x^k) for some x ∈ R^n and ε ≥ 0, then

⟨g^k, x^k − x⟩ > ε.

(b) lim inf_{k→∞} ⟨g^k, x^k − x⟩ ≤ 0 ∀x ∈ C.

Proof. (a) If B(x, ε) ⊂ L_f(x^k), then x + εg^k ∈ L_f(x^k). Since g^k ∈ ∂*_2 f(x^k, x^k), we have

⟨g^k, x + εg^k − x^k⟩ < 0.

Since ‖g^k‖ = 1, it follows that

⟨g^k, x^k − x⟩ > ε.

(b) By contradiction, assume that there exist x ∈ C, ξ > 0 and k_0 such that for any k ≥ k_0,

⟨g^k, x^k − x⟩ ≥ ξ > 0.

In view of Proposition 3.5 (applied with z = x), we have

2α_k⟨g^k, x^k − x⟩ ≤ ‖x^k − x‖² − ‖x^{k+1} − x‖² + α_k².

By summing up from k = k_0 to K, we obtain

2ξ ∑_{k=k_0}^{K} α_k ≤ 2 ∑_{k=k_0}^{K} α_k⟨g^k, x^k − x⟩ ≤ ‖x^{k_0} − x‖² + ∑_{k=k_0}^{K} α_k².

Letting K → ∞, the right-hand side remains bounded because ∑_{k=1}^{∞} α_k² < +∞, while the left-hand side tends to +∞ because ∑_{k=1}^{∞} α_k = +∞. This is a contradiction.

Now we can state the convergence theorem.

Theorem 3.7. Suppose that Algorithm 3.1 does not terminate. Let {x^k} be the infinite sequence generated by the algorithm and let {x^{k_q}} be the subsequence of {x^k} that contains all iterates belonging to S(EP). Then, under Assumptions (A1), (A2), (A3), one has:

(a) if {x^{k_q}} is infinite, then lim_{k→∞} d(x^k, S(EP)) = 0;

(b) if {x^{k_q}} is finite, then the sequence {x^k} converges to a solution of Problem (EP).

Proof. Let x* be an arbitrary point of S(EP). Since f(x*, x^k) ≥ 0, pseudomonotonicity gives f(x^k, x*) ≤ 0. If f(x^k, x*) = 0, then again by pseudomonotonicity we obtain f(x*, x^k) ≤ 0, hence f(x*, x^k) = 0; moreover, by the paramonotonicity of f on C w.r.t. S(EP), x^k is then also a solution of (EP). We consider two cases.

(a) The sequence {x^{k_q}} is infinite. By Proposition 3.5,

‖x^{k+1} − z‖² ≤ ‖x^k − z‖² + 2α_k⟨g^k, z − x^k⟩ + α_k².

By choosing z = x^{k_q}, we obtain

‖x^{k+1} − x^{k_q}‖² ≤ ‖x^k − x^{k_q}‖² + 2α_k⟨g^k, x^{k_q} − x^k⟩ + α_k². (12)

Now let k be such that k_q ≤ k < k_{q+1}. For k = k_q, clearly ⟨g^k, x^{k_q} − x^k⟩ = 0, while for k > k_q the iterate x^k does not belong to S(EP), so f(x^k, x^{k_q}) < 0 as shown at the beginning of the proof, and the definition of the star-subdifferential then yields ⟨g^k, x^{k_q} − x^k⟩ ≤ 0. In both cases

⟨g^k, x^{k_q} − x^k⟩ ≤ 0. (13)

From (12) and (13), for k_q ≤ k < k_{q+1}, it follows that

‖x^k − x^{k_q}‖² ≤ ∑_{i=k_q}^{k-1} α_i² ≤ ∑_{i=k_q}^{∞} α_i². (14)

Moreover, since x^{k_q} ∈ S(EP), we have d²(x^k, S(EP)) ≤ ‖x^k − x^{k_q}‖². In addition, since ∑_{i=1}^{∞} α_i² < +∞, we have lim_{q→∞} ∑_{i=k_q}^{∞} α_i² = 0, and it follows from (14) that

lim_{k→∞} d(x^k, S(EP)) = 0. (15)

(b) The sequence {x^{k_q}} is finite. Then there exists k_0 such that x^k ∉ S(EP) for all k ≥ k_0, and hence f(x^k, x*) < 0 for all k ≥ k_0. In this case we divide the proof into four steps.

Step 1: {‖x^k − x*‖} is convergent and therefore {x^k} is bounded.
Indeed, f(x^k, x*) < 0 for k ≥ k_0, which means that x* ∈ L_f(x^k) for k ≥ k_0. Thus,

⟨g^k, x* − x^k⟩ < 0 ∀k ≥ k_0. (16)

Combining this with (8) in Proposition 3.5, we obtain

‖x^{k+1} − x*‖² ≤ ‖x^k − x*‖² + α_k². (17)

Since ∑_{k=1}^{∞} α_k² < +∞, we conclude that {‖x^k − x*‖} is convergent and therefore {x^k} is bounded.

Step 2: lim inf_{k→∞} ⟨g^k, x^k − x*⟩ = 0.
Indeed, thanks to Lemma 3.6(b), we have

lim inf_{k→∞} ⟨g^k, x^k − x*⟩ ≤ 0. (18)

From (16) and (18),

lim inf_{k→∞} ⟨g^k, x^k − x*⟩ = 0.

Step 3: Let {x^{k_i}} be a subsequence of {x^k} such that

lim_{i→∞} ⟨g^{k_i}, x^{k_i} − x*⟩ = lim inf_{k→∞} ⟨g^k, x^k − x*⟩ = 0. (19)

Since {x^k} is bounded, {x^{k_i}} is bounded too. Let x̄ be a limit point of {x^{k_i}}; without loss of generality, we assume that lim_{i→∞} x^{k_i} = x̄. We now prove that f(x̄, x*) = 0. First, since x̄ ∈ C and x* ∈ S(EP), we have f(x*, x̄) ≥ 0, so by the pseudomonotonicity of f on C,

f(x̄, x*) ≤ 0.

We now show that in fact f(x̄, x*) = 0. By contradiction, assume that there exists a > 0 such that

f(x̄, x*) ≤ −a.

Then, since f(·, ·) is upper semicontinuous on C × C, there exist ε_1 > 0, ε_2 > 0 such that for any x ∈ B(x̄, ε_1) and y ∈ B(x*, ε_2) we have

f(x, y) ≤ −a/2.

On the other hand, since lim_{i→∞} x^{k_i} = x̄, there exists i_0 such that x^{k_i} ∈ B(x̄, ε_1) for every i ≥ i_0. So, for i ≥ i_0 and y ∈ B(x*, ε_2), one has

f(x^{k_i}, y) ≤ −a/2 < 0,

which means that B(x*, ε_2) ⊂ L_f(x^{k_i}). Then, by Lemma 3.6(a), we have

⟨g^{k_i}, x^{k_i} − x*⟩ > ε_2 ∀i ≥ i_0.

This contradicts (19). So f(x̄, x*) = 0.

Step 4: The sequence {x^k} converges to a solution of (EP).
Indeed, by Step 3, f(x̄, x*) = 0. Since f is pseudomonotone on C, we have f(x*, x̄) ≤ 0, which together with x* ∈ S(EP) implies f(x*, x̄) = 0. Then, thanks to the paramonotonicity of f, x̄ is also a solution of (EP). By Step 1 (applied with x* replaced by x̄), {‖x^k − x̄‖} is convergent, which together with lim_{i→∞} x^{k_i} = x̄ implies that the whole sequence {x^k} converges to the solution x̄ of (EP).

Remark 3.8. (i) If at each iteration k one checks whether x^k is a solution or not (as in Remark 3.2), then none of the generated iterates belongs to the solution set S(EP) when the algorithm does not terminate. In this case the sequence {x^k} converges to a solution of (EP).

(ii) If, in addition, we assume that f(x, ·) is strictly quasiconvex for x ∈ C, in the sense that

⟨g, y − x⟩ > 0 ⇒ f(x, y) > f(x, x) = 0 whenever g ∈ ∂*_2 f(x, x),

then the sequence {x^k} converges to the unique solution of (EP). Indeed, if f(x, ·) is strictly quasiconvex in this sense, then, since f(x^k, x*) ≤ 0 = f(x^k, x^k), we obtain ⟨g^k, x* − x^k⟩ ≤ 0. Thus, in virtue of Proposition 3.5, we obtain

‖x^{k+1} − x*‖² ≤ ‖x^k − x*‖² + α_k² ∀k.

So the convergence of {x^k} can be proved as in case (b) of the convergence theorem.

4 A Generalized Variational Inequality and Computational Experience

Consider the following generalized variational inequality problem

Find x* ∈ C such that ⟨F(x*), ϕ(y) − ϕ(x*)⟩ ≥ 0 ∀y ∈ C, (GVI)

where F : C → R^n is a given operator with dom F ⊆ C, and ϕ : C → R^n is a vector function such that ⟨F(x), ϕ(y)⟩ is quasiconvex in y on C for any fixed x ∈ C. Generalized variational inequalities were considered in [18], and some solution-existence results were established there. Problem (GVI) can be written in the form of the equilibrium problem (EP) by taking f(x, y) := ⟨F(x), ϕ(y) − ϕ(x)⟩. Clearly, when ϕ(y) ≡ y, Problem (GVI) becomes a standard variational inequality. Note that, because f(x, ·) is only quasiconvex, available methods for variational inequalities as well as those for equilibrium problems with f convex in its second variable fail to apply to (GVI). Now we consider a typical example of Problem (GVI) by taking

f(x, y) = ⟨Ax + b, (A_1 y + b_1)/(c^T y + d) − (A_1 x + b_1)/(c^T x + d)⟩, (20)

with A, A_1 ∈ R^{n×n}, b, b_1, c ∈ R^n, d ∈ R and C ⊂ {x : c^T x + d > 0}.
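For the bifunction (20), f(x, ·) coincides, up to an additive constant, with the affine-fractional function y ↦ ⟨Ax + b, A_1 y + b_1⟩/(c^T y + d). Since the GP- and star-subdifferentials are unchanged by adding a constant to the function, Lemma 2.4 gives the explicit element A_1^T(Ax + b) − αc of ∂*_2 f(x, x), where α = ⟨Ax + b, A_1 x + b_1⟩/(c^T x + d). The following Python sketch of such an oracle is our illustration, not code from the paper; combined with the box projection onto C = [1, 3]^n, it can be plugged into the sketch of Algorithm 3.1 given earlier, which corresponds in spirit to version (NG1) below.

```python
import numpy as np

def star_subgrad_2_fractional(x, A, A1, b, b1, c, d):
    """An element of the star-subdifferential of f(x, .) at x for the bifunction (20),
    obtained via Lemma 2.4 (affine numerator and denominator, denominator positive on C)."""
    Fx = A @ x + b                                        # F(x) = Ax + b
    alpha = np.dot(Fx, A1 @ x + b1) / (np.dot(c, x) + d)  # value of the fraction at y = x
    return A1.T @ Fx - alpha * c
```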
Figure 1: Behavior of the error (error on the vertical axis versus iteration number on the horizontal axis).

Let Â = (dA_1 − cb_1^T)^T A. Then the bifunction f(x, y) is paramonotone if and only if Â_1 = (1/2)(Â + Â^T) is positive semidefinite and rank(Â_1) ≤ rank(Â) (see [9]).

We apply two versions of the normal-subgradient algorithm from Section 3 to solve this problem. In the first version (denoted by (NG1)), we construct the sequence without checking whether each obtained iterate is a solution. In the second version (denoted by (NG2)), we check at each iteration whether x^k is a solution of (EP) or not by solving the linear fractional program min_{y∈C} f(x^k, y). As is well known, see e.g. [23], affine fractional functions appear in many practical problems in various fields.

We take C = [1, 3]^n, and the entries of A, A_1, b, b_1, c, d are uniformly generated in the interval [0, 1]. We tested the algorithm on problems of different sizes, with one hundred instances for each size. We stop the computation of the first version (NG1) at iteration k if either g^k = 0 or err1 := ‖x^k − x^{k+1}‖ < 10^{-4}, whereas for the second version (NG2), at each iteration k, we check whether x^k is a solution by solving the problem y^k = argmin_{y∈C} f(x^k, y), and stop the computation at iteration k if −min_{y∈C} f(x^k, y) < 10^{-3}. In both versions, we also stop the computation if the number of iterations exceeds 2000. For both versions, we say that a problem is successfully solved if an iterate x^k satisfying err := −min_{y∈C} f(x^k, y) < 10^{-1} has been obtained. Figure 1 shows that the error goes to 0 as the number of iterations k grows. The computational results are shown in Table 1 for (NG1) and Table 2 for (NG2), where the number of successfully solved problems as well as the average time and error are reported.

Table 1: Algorithm (NG1) with α_k = 100/(k + 1)

 n    N. prob.   N. succ. prob.   CPU time (s)   Error
 5      100          100            0.026574     0.000006
10      100          100            0.277551     0.000308
20      100          100            0.582287     0.001506
50      100           87            9.787128     0.027892

Table 2: Algorithm (NG2) with α_k = 100/(k + 1)

 n    N. prob.   N. succ. prob.   CPU time (s)   Error
 5      100          100            0.005953     0.000004
10      100          100            0.124109     0.000066
20      100          100            0.527599     0.000625
50      100          100            6.685981     0.003728
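For completeness, the sketch below (ours, not from the paper) shows how a random instance of the test problem described above might be generated and how the paramonotonicity condition quoted before Table 1 could be checked numerically; the formula used for Â follows our reconstruction of that condition and should be treated as an assumption.

```python
import numpy as np

def random_instance(n, seed=0):
    """Random data for the bifunction (20): entries uniform in [0, 1], as in Section 4."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, n))
    A1 = rng.uniform(size=(n, n))
    b, b1, c = rng.uniform(size=n), rng.uniform(size=n), rng.uniform(size=n)
    d = rng.uniform()
    return A, A1, b, b1, c, d

def is_paramonotone(A, A1, b1, c, d, tol=1e-10):
    """Check the quoted condition (our reading, hence an assumption):
    A_hat_1 = (A_hat + A_hat^T)/2 positive semidefinite and
    rank(A_hat_1) <= rank(A_hat), where A_hat = (d*A1 - c b1^T)^T A."""
    A_hat = (d * A1 - np.outer(c, b1)).T @ A
    A_hat_1 = 0.5 * (A_hat + A_hat.T)
    psd = bool(np.all(np.linalg.eigvalsh(A_hat_1) >= -tol))
    return psd and np.linalg.matrix_rank(A_hat_1) <= np.linalg.matrix_rank(A_hat)

A, A1, b, b1, c, d = random_instance(5)
print(is_paramonotone(A, A1, b1, c, d))
```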

References

[1] P. N. Anh, L. D. Muu, A hybrid subgradient algorithm for nonexpansive mapping and equilibrium problems, Optim. Lett. 8 (2014) 727-738.
[2] G. Bigi, M. Castellani, M. Pappalardo, M. Passacantando, Existence and solution methods for equilibria, Eur. J. Oper. Res. 227 (2013) 1-11.
[3] G. Bigi, M. Castellani, M. Pappalardo, M. Passacantando, Nonlinear Programming Techniques for Equilibria, Springer, 2018.
[4] E. Blum, W. Oettli, From optimization and variational inequalities to equilibrium problems, Math. Student 62 (1994) 127-169.
[5] Y. Censor, A. Segal, Algorithms for the quasiconvex feasibility problem, J. Comput. Appl. Math. 185(1) (2006) 34-50.
[6] K. Fan, A minimax inequality and applications, in: O. Shisha (Ed.), Inequalities, Academic Press, New York, 1972, 103-113.
[7] F. Facchinei, J. S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer-Verlag, New York, 2003.
[8] H. J. Greenberg, W. P. Pierskalla, Quasi-conjugate functions and surrogate duality, Cahiers Centre Études Recherche Opér. 15 (1973) 437-448.
[9] A. N. Iusem, On some properties of paramonotone operators, J. Convex Anal. 5 (1998) 269-278.

[10] K. C. Kiwiel, Convergence and efficiency of subgradient methods for quasiconvex minimization, Math. Program., Ser. A 90 (2001) 1-25.
[11] I. V. Konnov, Combined Relaxation Methods for Variational Inequalities, Lecture Notes in Economics and Mathematical Systems 495, Springer-Verlag, 2001.
[12] I. V. Konnov, On convergence properties of a subgradient method, Optimization Methods and Software 17 (2003) 53-62.
[13] O. Mangasarian, Nonlinear Programming, McGraw-Hill, New York, 1969.
[14] L. D. Muu, W. Oettli, Convergence of an adaptive penalty scheme for finding constrained equilibria, Nonlinear Anal.: TMA 18 (1992) 1159-1166.
[15] L. D. Muu, T. D. Quoc, Regularization algorithms for solving monotone Ky Fan inequalities with application to a Nash-Cournot equilibrium model, J. Optim. Theory Appl. 142 (2009) 185-204.
[16] H. Nikaido, K. Isoda, Note on noncooperative convex games, Pacific J. Math. 5 (1955) 807-815.
[17] N. Nimana, A. P. Farajzadeh, N. Petrot, Adaptive subgradient method for the split quasiconvex feasibility problems, Optimization 65(10) (2016) 1885-1898.
[18] M. A. Noor, General nonlinear variational inequalities, J. Math. Anal. Appl. 126 (1987) 78-84.
[19] T. D. Quoc, L. D. Muu, V. H. Nguyen, Extragradient algorithms extended to equilibrium problems, Optimization 57 (2008) 749-776.
[20] J.-P. Penot, Are generalized derivatives useful for generalized convex functions? in: J.-P. Crouzeix, J. E. Martinez-Legaz, M. Volle (Eds.), Generalized Convexity, Generalized Monotonicity: Recent Results, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1998, 3-59.
[21] J.-P. Penot, C. Zalinescu, Elements of quasiconvex subdifferential calculus, J. Convex Anal. 7(2) (2000) 243-269.
[22] P. Santos, S. Scheimberg, An inexact subgradient algorithm for equilibrium problems, Comput. Appl. Math. 30 (2011) 91-107.
[23] S. Schaible, Fractional programming: applications and algorithms, Eur. J. Oper. Res. 7 (1981) 111-120.
[24] N. Z. Shor, Minimization Methods for Non-Differentiable Functions, Springer-Verlag, Berlin, Heidelberg, 1985.
[25] L. Q. Thuy, T. N. Hai, A projected subgradient algorithm for bilevel equilibrium problems and applications, J. Optim. Theory Appl. 175 (2017) 411-431.

[26] L. H. Yen, N. T. T. Huyen, L. D. Muu, A subgradient algorithm for a class of nonlinear split feasibility problems: application to jointly constrained Nash equilibrium models, J. Glob. Optim. 73 (2019) 849-858.
