
Article Approximation Results for Variational Inequalities Involving Pseudomonotone Bifunction in Real Hilbert Spaces

Kanikar Muangchoo 1 , Nasser Aedh Alreshidi 2 and Ioannis K. Argyros 3,*

1 Faculty of Science and Technology, Rajamangala University of Technology Phra Nakhon (RMUTP), 1381 Pracharat 1 Road, Wongsawang, Bang Sue, Bangkok 10800, Thailand; [email protected]
2 Department of , College of Science, Northern Border University, Arar 73222, Saudi Arabia; [email protected]
3 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Correspondence: [email protected]

Abstract: In this paper, we introduce two novel extragradient-like methods to solve variational inequalities in a real Hilbert space. The variational inequality problem is a general mathematical problem in the sense that it unifies several mathematical models, such as optimization problems, Nash equilibrium models, fixed point problems, and saddle point problems. The designed methods are analogous to the two-step extragradient method that has previously been used to solve variational inequality problems in real Hilbert spaces. The proposed iterative methods use a specific type of step size rule based on local operator information rather than on its Lipschitz constant or any other line search procedure. Under mild conditions, such as the Lipschitz continuity and monotonicity (including pseudo-monotonicity) of the bi-function, strong convergence results for the described methods are established. Finally, we provide several numerical experiments to demonstrate the performance and superiority of the designed methods.

Keywords: subgradient extragradient method; variational inequalities; strong convergence; Lipschitz continuity; pseudo-monotone mapping

1. Introduction

This paper concerns the classic variational inequality problem [1,2]. The variational inequality problem (VIP) for an operator G : H → H is defined in the following way:

\[
\text{Find } u^* \in C \ \text{such that} \ \langle G(u^*),\, y - u^* \rangle \ge 0, \quad \forall\, y \in C, \tag{VIP}
\]

where C is a non-empty, closed and convex subset of a real Hilbert space H, and ⟨·,·⟩ and ‖·‖ denote the inner product and the induced norm on H, respectively. Moreover, ℝ and ℕ are the sets of real numbers and natural numbers, respectively. It is important to note that the problem (VIP) is equivalent to solving the following fixed point problem:

\[
\text{Find } u^* \in C \ \text{such that} \ u^* = P_C[u^* - \chi G(u^*)].
\]
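For completeness, this equivalence follows from the standard characterization of the metric projection (restated below as Lemma 1 (i)); the following short chain of equivalences is written out here only as a reading aid and is not taken verbatim from the paper:

\[
u^* \ \text{solves (VIP)}
\iff \langle G(u^*),\, y - u^* \rangle \ge 0 \ \ \forall\, y \in C
\iff \langle [u^* - \chi G(u^*)] - u^*,\, y - u^* \rangle \le 0 \ \ \forall\, y \in C
\iff u^* = P_C[u^* - \chi G(u^*)],
\]

where the last step applies Lemma 1 (i) with u_1 = u* − χG(u*) and u_3 = u*, and χ > 0 is arbitrary.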

The theory of variational inequalities has long been used to study a wide range of topics in engineering, physics, optimization theory and economics. It is an important mathematical model that unifies a number of different mathematical problems, such as the network equilibrium problem, necessary optimality conditions, systems of non-linear equations and complementarity problems (for further details, see [3–9]). This problem was introduced by Stampacchia [2] in 1964, who also demonstrated that the problem (VIP) has a key position in non-linear analysis. There are many researchers who have


studied and considered many projection methods (see [10–20] for more details). Korpelevich [13] and Antipin [21] established the following extragradient method:

\[
\begin{cases}
u_0 \in C,\\
y_n = P_C[u_n - \chi G(u_n)],\\
u_{n+1} = P_C[u_n - \chi G(y_n)].
\end{cases}\tag{1}
\]
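As an illustration only (not part of the original paper), the two projections in (1) are straightforward to code. The sketch below runs the extragradient iteration on a hypothetical two-dimensional affine operator with a box constraint; the operator, the box and the number of iterations are assumptions chosen purely for demonstration:

```python
import numpy as np

# Toy data (assumed for illustration): monotone affine operator G(u) = A u + b
A = np.array([[2.0, 1.0], [-1.0, 2.0]])   # symmetric part is positive definite => G is monotone
b = np.array([1.0, -1.0])
G = lambda u: A @ u + b

proj_C = lambda u: np.clip(u, -10.0, 10.0)  # projection onto the box [-10, 10]^2

L = np.linalg.norm(A, 2)                  # Lipschitz constant of G (spectral norm of A)
chi = 0.9 / L                             # fixed step size with 0 < chi < 1/L

u = np.array([5.0, 5.0])                  # starting point u_0 in C
for _ in range(200):
    y = proj_C(u - chi * G(u))            # first (extragradient) projection
    u = proj_C(u - chi * G(y))            # second projection, as in (1)
print(u, np.linalg.norm(u - proj_C(u - chi * G(u))))  # final iterate and natural residual
```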

Recently, the subgradient extragradient method was introduced by Censor et al. [10] for solving the problem (VIP) in a real Hilbert space. It has the following form:

\[
\begin{cases}
u_0 \in C,\\
y_n = P_C[u_n - \chi G(u_n)],\\
u_{n+1} = P_{H_n}[u_n - \chi G(y_n)],
\end{cases}\tag{2}
\]

where H_n = {z ∈ H : ⟨u_n − χG(u_n) − y_n, z − y_n⟩ ≤ 0}. It is important to mention that this well-established method carries two serious shortcomings: first, the fixed step size requires knowledge or an approximation of the Lipschitz constant of the underlying mapping; second, the method only converges weakly in Hilbert spaces. From the computational point of view, the use of a fixed step size might be questionable, and hence the convergence rate and usefulness of the method could be affected. The main objective of this paper is to introduce inertial-type methods that strengthen the convergence of the iterative sequence in this context. Such methods originate from the oscillator equation with damping and a conservative restoring force. This second-order dynamical system is called the heavy ball method and was originally studied by Polyak in [22]. The main feature of an inertial-type method is that it uses the two previous iterates to construct the next iterate. Numerical results confirm that the inertial term usually improves the performance of the methods in terms of the number of iterations and the elapsed time, and inertial-type methods have been broadly studied in [23–25]. So a natural question arises:

“Is it possible to introduce a new inertial-type strongly convergent extragradient-like method with a monotone variable step size rule to solve problem (VIP)”?

In this study, we provide a positive answer to this question, i.e., the proposed extragradient-type methods still generate a strongly convergent sequence when a fixed or a variable step size rule is used to solve the problem (VIP) associated with pseudo-monotone mappings. Motivated by the works of Censor et al. [10] and Polyak [22], we introduce a new inertial extragradient-type method to solve the problem (VIP) in the setting of an infinite-dimensional real Hilbert space. In brief, the key points of this paper are set out as follows:
(i) We propose an inertial subgradient extragradient method that uses a fixed step size to solve the variational inequality problem in a real Hilbert space and confirm that the generated sequence is strongly convergent.
(ii) We also design a second inertial subgradient extragradient method that uses a variable monotone step size rule, independent of the Lipschitz constant, to solve pseudomonotone variational inequality problems.
(iii) Numerical experiments are presented for the proposed methods to verify the theoretical findings, and we compare them with Algorithm 3.4 in [23], Algorithm 3.2 in [24] and Algorithm 3.1 in [25]. Our numerical data show that the proposed methods are useful and perform better than the existing ones.
The rest of the article is arranged as follows: Section 2 includes the basic definitions and important lemmas that are used in the manuscript. Section 3 consists of inertial-type iterative schemes and convergence analysis theorems. Section 4 provides the numerical findings that illustrate the behaviour of the new methods in comparison with other methods.

2. Preliminaries

In this section, we recall a number of important identities and relevant lemmas and definitions. The metric projection P_C(u_1) of u_1 ∈ H onto C is defined by

\[
P_C(u_1) = \arg\min\{\|u_1 - u_2\| : u_2 \in C\}.
\]
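For the feasible sets used later in the numerical section, this projection has a closed form. The following sketch (ours, with assumed box and ball sets) implements these projections and numerically checks the characterization stated in Lemma 1 (i) below:

```python
import numpy as np

def proj_box(u, lo=-10.0, hi=10.0):
    """Projection onto the box {u : lo <= u_i <= hi} (the set used in Example 1)."""
    return np.clip(u, lo, hi)

def proj_ball(u, center=None, radius=1.0):
    """Projection onto a closed ball (the sets used in Examples 2 and 4)."""
    c = np.zeros_like(u) if center is None else center
    d = u - c
    nd = np.linalg.norm(d)
    return u if nd <= radius else c + radius * d / nd

# Numerical check of Lemma 1 (i): <u1 - P_C(u1), u2 - P_C(u1)> <= 0 for every u2 in C
rng = np.random.default_rng(0)
u1 = rng.normal(scale=20.0, size=5)
p = proj_box(u1)
for _ in range(100):
    u2 = rng.uniform(-10.0, 10.0, size=5)   # random point of the box
    assert np.dot(u1 - p, u2 - p) <= 1e-12
```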

Next, we list some of the important properties of the projection mapping.

Lemma 1. [26] Suppose that P_C : H → C is the metric projection. Then, we have:

(i) u_3 = P_C(u_1) if and only if

\[
\langle u_1 - u_3,\, u_2 - u_3 \rangle \le 0, \quad \forall\, u_2 \in C.
\]

(ii)
\[
\|u_1 - P_C(u_2)\|^2 + \|P_C(u_2) - u_2\|^2 \le \|u_1 - u_2\|^2, \quad u_1 \in C,\ u_2 \in H.
\]

(iii)
\[
\|u_1 - P_C(u_1)\| \le \|u_1 - u_2\|, \quad u_2 \in C,\ u_1 \in H.
\]

Lemma 2. [27] Let {an} ⊂ [0, +∞) be a sequence satisfying the following inequality

an+1 ≤ (1 − bn)an + bnrn, ∀ n ∈ N.

Furthermore, let {b_n} ⊂ (0, 1) and {r_n} ⊂ ℝ be two sequences such that

\[
\lim_{n\to+\infty} b_n = 0, \quad \sum_{n=1}^{+\infty} b_n = +\infty \quad \text{and} \quad \limsup_{n\to+\infty} r_n \le 0 .
\]

Then, limn→+∞ an = 0.

Lemma 3. [28] Assume that {a_n} ⊂ ℝ is a sequence and there exists a subsequence {n_i} of {n} such that

\[
a_{n_i} < a_{n_i+1}, \quad \forall\, i \in \mathbb{N}.
\]

Then, there exists a non-decreasing sequence {m_k} ⊂ ℕ such that m_k → +∞ as k → +∞, satisfying the following inequalities for all numbers k ∈ ℕ:

\[
a_{m_k} \le a_{m_k+1} \quad \text{and} \quad a_k \le a_{m_k+1} .
\]

Indeed, mk = max{j ≤ k : aj ≤ aj+1}.

Next, we list some of the important identities that were used to prove the convergence analysis.

Lemma 4. [26] For any u_1, u_2 ∈ H and b ∈ ℝ, the following inequalities hold:

(i)
\[
\|bu_1 + (1-b)u_2\|^2 = b\|u_1\|^2 + (1-b)\|u_2\|^2 - b(1-b)\|u_1 - u_2\|^2 .
\]

(ii)
\[
\|u_1 + u_2\|^2 \le \|u_1\|^2 + 2\langle u_2,\, u_1 + u_2 \rangle .
\]

Lemma 5. [29] Assume that G : C → H is a continuous and pseudo-monotone mapping. Then, u∗ solves the problem (VIP) iff u∗ is the solution of the following problem:

\[
\text{Find } u \in C \ \text{such that} \ \langle G(y),\, y - u \rangle \ge 0, \quad \forall\, y \in C.
\]

3. Main Results

Now, we introduce two inertial-type subgradient extragradient methods which incorporate a monotone step size rule and an inertial term, and we provide the corresponding strong convergence theorems. The two main methods are outlined as Algorithms 1 and 2:

Algorithm 1 Inertial-type strongly convergent iterative scheme.

Step 0: Choose arbitrary starting points u_{−1}, u_0 ∈ C, θ > 0 and 0 < χ < 1/L. Moreover, choose {φ_n} ⊂ (0, 1) complying with the following conditions:

\[
\lim_{n\to+\infty}\phi_n = 0 \quad \text{and} \quad \sum_{n=1}^{+\infty}\phi_n = +\infty .
\]

Step 1: Evaluate

\[
w_n = u_n + \theta_n(u_n - u_{n-1}) - \phi_n\big(u_n + \theta_n(u_n - u_{n-1})\big),
\]

where θ_n is chosen such that 0 ≤ θ_n ≤ θ̂_n with

\[
\hat{\theta}_n =
\begin{cases}
\min\Big\{\dfrac{\theta}{2},\,\dfrac{e_n}{\|u_n - u_{n-1}\|}\Big\} & \text{if } u_n \neq u_{n-1},\\[2mm]
\dfrac{\theta}{2} & \text{else},
\end{cases}\tag{3}
\]

where e_n = o(φ_n) is a positive sequence, i.e., lim_{n→+∞} e_n/φ_n = 0.

Step 2: Evaluate y_n = P_C(w_n − χG(w_n)). If w_n = y_n, then STOP. Otherwise, go to Step 3.

Step 3: Evaluate

\[
u_{n+1} = P_{H_n}(w_n - \chi G(y_n)),
\]

where H_n = {z ∈ H : ⟨w_n − χG(w_n) − y_n, z − y_n⟩ ≤ 0}. Set n := n + 1 and go back to Step 1.
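The following Python sketch outlines one possible finite-dimensional implementation of Algorithm 1. It is a minimal illustration under our own assumptions: the operator G, the projection proj_C and the starting point are supplied by the caller, φ_n = 1/(n+2) and e_n = 1/(n+1)² are the choices later used in Section 4, and the half-space projection P_{H_n} is evaluated in closed form:

```python
import numpy as np

def proj_halfspace(z, a, y):
    """Projection onto H = {z : <a, z - y> <= 0} with normal a and anchor y."""
    s = np.dot(a, z - y)
    return z if s <= 0 else z - (s / np.dot(a, a)) * a

def algorithm1(G, proj_C, u0, chi, theta=0.7, max_iter=1000, tol=1e-4):
    """Sketch of Algorithm 1 with phi_n = 1/(n+2) and e_n = 1/(n+1)^2."""
    u_prev, u = u0.copy(), u0.copy()
    for n in range(max_iter):
        phi = 1.0 / (n + 2)
        e = 1.0 / (n + 1) ** 2
        diff = np.linalg.norm(u - u_prev)
        theta_n = min(theta / 2, e / diff) if diff > 0 else theta / 2   # rule (3)
        v = u + theta_n * (u - u_prev)
        w = v - phi * v                                  # w_n = (1 - phi_n) v_n
        y = proj_C(w - chi * G(w))                       # Step 2
        if np.linalg.norm(w - y) <= tol:                 # stopping criterion D_n <= tol
            return y
        a = w - chi * G(w) - y                           # normal vector of H_n
        u_prev, u = u, proj_halfspace(w - chi * G(y), a, y)   # Step 3
    return u
```

For instance, with the HpHard data sketched after Example 1 in Section 4, a call such as `algorithm1(G, proj_C, np.ones(m), chi=0.7 / L)` mirrors the fixed-step parameter choice used there; this is only an indicative usage, not the authors' original code.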

In order to study the convergence analysis, we assume that the following conditions are satisfied:

(B1) The solution set of problem (VIP), denoted by Ω, is non-empty;
(B2) The operator G : H → H is pseudo-monotone, i.e.,

\[
\langle G(y_1),\, y_2 - y_1 \rangle \ge 0 \;\Longrightarrow\; \langle G(y_2),\, y_1 - y_2 \rangle \le 0, \quad \forall\, y_1, y_2 \in C;
\]

(B3) The operator G : H → H is Lipschitz continuous with a constant L > 0, i.e., there exists L > 0 such that

\[
\|G(y_1) - G(y_2)\| \le L\|y_1 - y_2\|, \quad \forall\, y_1, y_2 \in C;
\]

(B4) The operator G : H → H is weakly sequentially continuous, i.e., {G(u_n)} converges weakly to G(u) for every sequence {u_n} converging weakly to u.

Algorithm 2 Explicit inertial-type strongly convergent iterative scheme.

Step 0: Choose arbitrary starting points u_{−1}, u_0 ∈ C, θ > 0, μ ∈ (0, 1) and χ_0 > 0. Moreover, select {φ_n} ⊂ (0, 1) complying with the following conditions:

\[
\lim_{n\to+\infty}\phi_n = 0 \quad \text{and} \quad \sum_{n=1}^{+\infty}\phi_n = +\infty .
\]

Step 1: Evaluate

\[
w_n = u_n + \theta_n(u_n - u_{n-1}) - \phi_n\big(u_n + \theta_n(u_n - u_{n-1})\big),
\]

where θ_n is chosen such that 0 ≤ θ_n ≤ θ̂_n with

\[
\hat{\theta}_n =
\begin{cases}
\min\Big\{\dfrac{\theta}{2},\,\dfrac{e_n}{\|u_n - u_{n-1}\|}\Big\} & \text{if } u_n \neq u_{n-1},\\[2mm]
\dfrac{\theta}{2} & \text{else},
\end{cases}\tag{4}
\]

where e_n = o(φ_n) is a positive sequence, i.e., lim_{n→+∞} e_n/φ_n = 0.

Step 2: Evaluate y_n = P_C(w_n − χ_nG(w_n)). If w_n = y_n, then STOP and y_n is a solution. Otherwise, go to Step 3.

Step 3: Evaluate

\[
u_{n+1} = P_{H_n}(w_n - \chi_n G(y_n)),
\]

where H_n = {z ∈ H : ⟨w_n − χ_nG(w_n) − y_n, z − y_n⟩ ≤ 0}, and compute

\[
\chi_{n+1} =
\begin{cases}
\min\Big\{\chi_n,\,\dfrac{\mu\|w_n - y_n\|}{\|G(w_n) - G(y_n)\|}\Big\} & \text{if } G(w_n) - G(y_n) \neq 0,\\[2mm]
\chi_n & \text{else}.
\end{cases}
\]

Set n := n + 1 and go back to Step 1.
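Algorithm 2 differs from Algorithm 1 only in the step size, which is updated from local information rather than fixed in advance. A hedged sketch of that update (the function name and defaults are ours, not the paper's):

```python
import numpy as np

def update_step_size(chi_n, w, y, Gw, Gy, mu=0.3):
    """Monotone rule of Algorithm 2: chi_{n+1} = min{chi_n, mu*||w - y|| / ||G(w) - G(y)||}."""
    denom = np.linalg.norm(Gw - Gy)
    if denom == 0.0:
        return chi_n                      # keep the previous step size when G(w_n) = G(y_n)
    return min(chi_n, mu * np.linalg.norm(w - y) / denom)
```

In a loop such as the `algorithm1` sketch above, one would use the current `chi` in Steps 2 and 3 and then call `chi = update_step_size(chi, w, y, G(w), G(y), mu)` at the end of each iteration; no Lipschitz constant enters the computation.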

Lemma 6. Assume that G : H → H satisfies the conditions (B1)–(B4) in Algorithm 1. Then, for each u* ∈ Ω ≠ ∅, we have

\[
\|u_{n+1} - u^*\|^2 \le \|w_n - u^*\|^2 - (1 - \chi L)\|w_n - y_n\|^2 - (1 - \chi L)\|u_{n+1} - y_n\|^2 .
\]

Proof. First, consider the following

\[
\begin{aligned}
\|u_{n+1} - u^*\|^2 &= \|P_{H_n}[w_n - \chi G(y_n)] - u^*\|^2 \\
&= \|P_{H_n}[w_n - \chi G(y_n)] + [w_n - \chi G(y_n)] - [w_n - \chi G(y_n)] - u^*\|^2 \\
&= \|[w_n - \chi G(y_n)] - u^*\|^2 + \|P_{H_n}[w_n - \chi G(y_n)] - [w_n - \chi G(y_n)]\|^2 \\
&\quad + 2\big\langle P_{H_n}[w_n - \chi G(y_n)] - [w_n - \chi G(y_n)],\, [w_n - \chi G(y_n)] - u^*\big\rangle .
\end{aligned}\tag{5}
\]

Since u* ∈ Ω ⊂ C ⊂ H_n, Lemma 1 (i) gives

\[
\begin{aligned}
&\|P_{H_n}[w_n - \chi G(y_n)] - [w_n - \chi G(y_n)]\|^2 \\
&\quad + \big\langle P_{H_n}[w_n - \chi G(y_n)] - [w_n - \chi G(y_n)],\, [w_n - \chi G(y_n)] - u^*\big\rangle \\
&= \big\langle [w_n - \chi G(y_n)] - P_{H_n}[w_n - \chi G(y_n)],\, u^* - P_{H_n}[w_n - \chi G(y_n)]\big\rangle \le 0,
\end{aligned}\tag{6}
\]

which implies that

\[
\big\langle P_{H_n}[w_n - \chi G(y_n)] - [w_n - \chi G(y_n)],\, [w_n - \chi G(y_n)] - u^*\big\rangle
\le -\|P_{H_n}[w_n - \chi G(y_n)] - [w_n - \chi G(y_n)]\|^2 .\tag{7}
\]

By the use of expressions (5) and (7), we obtain

\[
\begin{aligned}
\|u_{n+1} - u^*\|^2 &\le \|w_n - \chi G(y_n) - u^*\|^2 - \|P_{H_n}[w_n - \chi G(y_n)] - [w_n - \chi G(y_n)]\|^2 \\
&\le \|w_n - u^*\|^2 - \|w_n - u_{n+1}\|^2 + 2\chi\langle G(y_n),\, u^* - u_{n+1}\rangle .
\end{aligned}\tag{8}
\]

Since u* ∈ Ω, we have

\[
\langle G(u^*),\, y - u^*\rangle \ge 0, \quad \text{for all } y \in C.
\]

By the pseudo-monotonicity of the mapping G on C, we obtain

\[
\langle G(y),\, y - u^*\rangle \ge 0, \quad \text{for all } y \in C.
\]

Letting y = y_n ∈ C, we obtain

\[
\langle G(y_n),\, y_n - u^*\rangle \ge 0 .
\]

Thus, we have

\[
\langle G(y_n),\, u^* - u_{n+1}\rangle = \langle G(y_n),\, u^* - y_n\rangle + \langle G(y_n),\, y_n - u_{n+1}\rangle \le \langle G(y_n),\, y_n - u_{n+1}\rangle .\tag{9}
\]

By the use of expressions (8) and (9), we get

\[
\begin{aligned}
\|u_{n+1} - u^*\|^2 &\le \|w_n - u^*\|^2 - \|w_n - u_{n+1}\|^2 + 2\chi\langle G(y_n),\, y_n - u_{n+1}\rangle \\
&\le \|w_n - u^*\|^2 - \|w_n - y_n + y_n - u_{n+1}\|^2 + 2\chi\langle G(y_n),\, y_n - u_{n+1}\rangle \\
&\le \|w_n - u^*\|^2 - \|w_n - y_n\|^2 - \|y_n - u_{n+1}\|^2 + 2\langle w_n - \chi G(y_n) - y_n,\, u_{n+1} - y_n\rangle .
\end{aligned}\tag{10}
\]

Since u_{n+1} = P_{H_n}[w_n − χG(y_n)] ∈ H_n, we have ⟨w_n − χG(w_n) − y_n, u_{n+1} − y_n⟩ ≤ 0, and therefore

\[
\begin{aligned}
2\langle w_n - \chi G(y_n) - y_n,\, u_{n+1} - y_n\rangle
&= 2\langle w_n - \chi G(w_n) - y_n,\, u_{n+1} - y_n\rangle + 2\chi\langle G(w_n) - G(y_n),\, u_{n+1} - y_n\rangle \\
&\le 2\chi L\|w_n - y_n\|\,\|u_{n+1} - y_n\| \le \chi L\|w_n - y_n\|^2 + \chi L\|u_{n+1} - y_n\|^2 .
\end{aligned}\tag{11}
\]

Combining expressions (10) and (11), we obtain

\[
\|u_{n+1} - u^*\|^2 \le \|w_n - u^*\|^2 - (1 - \chi L)\|w_n - y_n\|^2 - (1 - \chi L)\|u_{n+1} - y_n\|^2 .\tag{12}
\]

Theorem 1. Let {u_n} be the sequence generated by Algorithm 1 under the conditions (B1)–(B4). Then {u_n} converges strongly to u* ∈ Ω, where u* = P_Ω(0).

Proof. It is given in expression (3) that

\[
\lim_{n\to+\infty}\frac{\theta_n}{\phi_n}\|u_n - u_{n-1}\| \le \lim_{n\to+\infty}\frac{e_n}{\phi_n} = 0 .\tag{13}
\]

By the definition of {w_n} and inequality (13), we get

\[
\begin{aligned}
\|w_n - u^*\| &= \|u_n + \theta_n(u_n - u_{n-1}) - \phi_n\big(u_n + \theta_n(u_n - u_{n-1})\big) - u^*\| \\
&= \|(1 - \phi_n)(u_n - u^*) + (1 - \phi_n)\theta_n(u_n - u_{n-1}) - \phi_n u^*\| \\
&\le (1 - \phi_n)\|u_n - u^*\| + (1 - \phi_n)\theta_n\|u_n - u_{n-1}\| + \phi_n\|u^*\|
\end{aligned}\tag{14}
\]
\[
\le (1 - \phi_n)\|u_n - u^*\| + \phi_n M_1,\tag{15}
\]

where

\[
(1 - \phi_n)\frac{\theta_n}{\phi_n}\|u_n - u_{n-1}\| + \|u^*\| \le M_1 .
\]

By the use of Lemma 6, we obtain

\[
\|u_{n+1} - u^*\|^2 \le \|w_n - u^*\|^2, \quad \forall\, n \in \mathbb{N}.\tag{16}
\]

Combining (15) with (16), we obtain

\[
\begin{aligned}
\|u_{n+1} - u^*\| &\le (1 - \phi_n)\|u_n - u^*\| + \phi_n M_1 \\
&\le \max\big\{\|u_n - u^*\|,\, M_1\big\} \\
&\;\;\vdots \\
&\le \max\big\{\|u_0 - u^*\|,\, M_1\big\}.
\end{aligned}\tag{17}
\]

Thus, the sequence {u_n} is bounded. Indeed, by (15) we have

\[
\begin{aligned}
\|w_n - u^*\|^2 &\le (1 - \phi_n)^2\|u_n - u^*\|^2 + \phi_n^2 M_1^2 + 2M_1\phi_n(1 - \phi_n)\|u_n - u^*\| \\
&\le \|u_n - u^*\|^2 + \phi_n\big[\phi_n M_1^2 + 2M_1(1 - \phi_n)\|u_n - u^*\|\big] \\
&\le \|u_n - u^*\|^2 + \phi_n M_2,
\end{aligned}\tag{18}
\]

for some M_2 > 0. Combining expression (12) with (18), we have

\[
\|u_{n+1} - u^*\|^2 \le \|u_n - u^*\|^2 + \phi_n M_2 - (1 - \chi L)\|w_n - y_n\|^2 - (1 - \chi L)\|u_{n+1} - y_n\|^2 .\tag{19}
\]

The Lipschitz continuity and pseudo-monotonicity of G imply that Ω is a closed and convex set. Since u* = P_Ω(0), Lemma 1 (i) gives

\[
\langle 0 - u^*,\, y - u^*\rangle \le 0, \quad \forall\, y \in \Omega .\tag{20}
\]

The rest of the proof is divided into the following parts:

Case 1: Suppose that there exists a number N_1 ∈ ℕ such that

\[
\|u_{n+1} - u^*\| \le \|u_n - u^*\|, \quad \forall\, n \ge N_1 .\tag{21}
\]

Thus, lim_{n→+∞}‖u_n − u*‖ exists; say lim_{n→+∞}‖u_n − u*‖ = l for some l ≥ 0. From expression (19), we have

\[
(1 - \chi L)\|w_n - y_n\|^2 + (1 - \chi L)\|u_{n+1} - y_n\|^2 \le \|u_n - u^*\|^2 + \phi_n M_2 - \|u_{n+1} - u^*\|^2 .\tag{22}
\]

Since the limit of ‖u_n − u*‖ exists and φ_n → 0, we infer that

\[
\|w_n - y_n\| \to 0 \quad \text{and} \quad \|u_{n+1} - y_n\| \to 0 \quad \text{as } n \to +\infty .\tag{23}
\]

By the use of expression (23), we have

\[
\lim_{n\to+\infty}\|w_n - u_{n+1}\| \le \lim_{n\to+\infty}\|w_n - y_n\| + \lim_{n\to+\infty}\|y_n - u_{n+1}\| = 0 .\tag{24}
\]

Next, we evaluate

\[
\begin{aligned}
\|w_n - u_n\| &= \|u_n + \theta_n(u_n - u_{n-1}) - \phi_n\big(u_n + \theta_n(u_n - u_{n-1})\big) - u_n\| \\
&\le \theta_n\|u_n - u_{n-1}\| + \phi_n\|u_n\| + \theta_n\phi_n\|u_n - u_{n-1}\| \\
&= \phi_n\frac{\theta_n}{\phi_n}\|u_n - u_{n-1}\| + \phi_n\|u_n\| + \phi_n^2\frac{\theta_n}{\phi_n}\|u_n - u_{n-1}\| \longrightarrow 0 .
\end{aligned}\tag{25}
\]

Thus, the above implies that

\[
\lim_{n\to+\infty}\|u_n - u_{n+1}\| \le \lim_{n\to+\infty}\|u_n - w_n\| + \lim_{n\to+\infty}\|w_n - u_{n+1}\| = 0 .\tag{26}
\]

The above guarantees that the sequences {w_n} and {y_n} are also bounded. The reflexivity of H and the boundedness of {u_n} guarantee that there exists a subsequence {u_{n_k}} such that u_{n_k} ⇀ û ∈ H as k → +∞. Next, we have to prove that û ∈ Ω. Since y_{n_k} = P_C[w_{n_k} − χG(w_{n_k})], this is equivalent to

\[
\langle w_{n_k} - \chi G(w_{n_k}) - y_{n_k},\, y - y_{n_k}\rangle \le 0, \quad \forall\, y \in C .\tag{27}
\]

The inequality described above implies that

\[
\langle w_{n_k} - y_{n_k},\, y - y_{n_k}\rangle \le \chi\langle G(w_{n_k}),\, y - y_{n_k}\rangle, \quad \forall\, y \in C .\tag{28}
\]

Thus, we obtain

\[
\frac{1}{\chi}\langle w_{n_k} - y_{n_k},\, y - y_{n_k}\rangle + \langle G(w_{n_k}),\, y_{n_k} - w_{n_k}\rangle \le \langle G(w_{n_k}),\, y - w_{n_k}\rangle, \quad \forall\, y \in C .\tag{29}
\]

The boundedness of the sequence {w_{n_k}} implies that {G(w_{n_k})} is also bounded. Using lim_{k→∞}‖w_{n_k} − y_{n_k}‖ = 0 and letting k → ∞ in (29), we obtain

\[
\liminf_{k\to\infty}\langle G(w_{n_k}),\, y - w_{n_k}\rangle \ge 0, \quad \forall\, y \in C .\tag{30}
\]

Moreover, we have

\[
\langle G(y_{n_k}),\, y - y_{n_k}\rangle = \langle G(y_{n_k}) - G(w_{n_k}),\, y - w_{n_k}\rangle + \langle G(w_{n_k}),\, y - w_{n_k}\rangle + \langle G(y_{n_k}),\, w_{n_k} - y_{n_k}\rangle .\tag{31}
\]

Since lim_{k→∞}‖w_{n_k} − y_{n_k}‖ = 0 and G is L-Lipschitz continuous on H, we have

\[
\lim_{k\to\infty}\|G(w_{n_k}) - G(y_{n_k})\| = 0,\tag{32}
\]

which, together with (30) and (31), gives

\[
\liminf_{k\to\infty}\langle G(y_{n_k}),\, y - y_{n_k}\rangle \ge 0, \quad \forall\, y \in C .\tag{33}
\]

Let us consider a decreasing sequence of positive numbers {e_k} that converges to zero. For each k, we denote by m_k the smallest positive integer such that

\[
\langle G(w_{n_i}),\, y - w_{n_i}\rangle + e_k \ge 0, \quad \forall\, i \ge m_k .\tag{34}
\]

Since {e_k} is decreasing, the sequence {m_k} is increasing.

Case I: If there is a subsequence {w_{n_{m_{k_j}}}} of {w_{n_{m_k}}} such that G(w_{n_{m_{k_j}}}) = 0 for all j, then letting j → ∞ we obtain

\[
\langle G(\hat{u}),\, y - \hat{u}\rangle = \lim_{j\to\infty}\langle G(w_{n_{m_{k_j}}}),\, y - \hat{u}\rangle = 0 .\tag{35}
\]

Thus, û ∈ C and û ∈ Ω.

Case II: If there exists N_0 ∈ ℕ such that G(w_{n_{m_k}}) ≠ 0 for all n_{m_k} ≥ N_0, then consider

\[
\Xi_{n_{m_k}} = \frac{G(w_{n_{m_k}})}{\|G(w_{n_{m_k}})\|^2}, \quad \forall\, n_{m_k} \ge N_0 .\tag{36}
\]

Due to the above definition, we obtain

\[
\langle G(w_{n_{m_k}}),\, \Xi_{n_{m_k}}\rangle = 1, \quad \forall\, n_{m_k} \ge N_0 .\tag{37}
\]

Moreover, by expressions (34) and (37), for all n_{m_k} ≥ N_0 we have

\[
\langle G(w_{n_{m_k}}),\, y + e_k\Xi_{n_{m_k}} - w_{n_{m_k}}\rangle \ge 0 .\tag{38}
\]

Due to the pseudo-monotonicity of G, for all n_{m_k} ≥ N_0 we have

\[
\langle G(y + e_k\Xi_{n_{m_k}}),\, y + e_k\Xi_{n_{m_k}} - w_{n_{m_k}}\rangle \ge 0 .\tag{39}
\]

For all n_{m_k} ≥ N_0, we have

\[
\langle G(y),\, y - w_{n_{m_k}}\rangle \ge \langle G(y) - G(y + e_k\Xi_{n_{m_k}}),\, y + e_k\Xi_{n_{m_k}} - w_{n_{m_k}}\rangle - e_k\langle G(y),\, \Xi_{n_{m_k}}\rangle .\tag{40}
\]

Since {w_{n_k}} converges weakly to û ∈ C and G is sequentially weakly continuous on the set C, {G(w_{n_k})} converges weakly to G(û). Suppose that G(û) ≠ 0; then we have

\[
\|G(\hat{u})\| \le \liminf_{k\to\infty}\|G(w_{n_k})\| .\tag{41}
\]

Since {w_{n_{m_k}}} ⊂ {w_{n_k}} and lim_{k→∞} e_k = 0, we have

\[
0 \le \lim_{k\to\infty}\|e_k\Xi_{n_{m_k}}\| = \lim_{k\to\infty}\frac{e_k}{\|G(w_{n_{m_k}})\|} \le \frac{0}{\|G(\hat{u})\|} = 0 .\tag{42}
\]

Next, letting k → ∞ in (40), we obtain

\[
\langle G(y),\, y - \hat{u}\rangle \ge 0, \quad \forall\, y \in C .\tag{43}
\]

By the use of the Minty Lemma 5, we infer that û ∈ Ω. Next, we have

\[
\limsup_{n\to+\infty}\langle u^*,\, u^* - u_n\rangle = \lim_{k\to+\infty}\langle u^*,\, u^* - u_{n_k}\rangle = \langle u^*,\, u^* - \hat{u}\rangle \le 0 .\tag{44}
\]

Using lim_{n→+∞}‖u_{n+1} − u_n‖ = 0 together with (44), we obtain

\[
\limsup_{n\to+\infty}\langle u^*,\, u^* - u_{n+1}\rangle
\le \limsup_{n\to+\infty}\langle u^*,\, u^* - u_n\rangle + \limsup_{n\to+\infty}\langle u^*,\, u_n - u_{n+1}\rangle \le 0 .\tag{45}
\]

Using expression (14), we have

\[
\begin{aligned}
\|w_n - u^*\|^2
&= \|u_n + \theta_n(u_n - u_{n-1}) - \phi_n\big(u_n + \theta_n(u_n - u_{n-1})\big) - u^*\|^2 \\
&= \|(1 - \phi_n)(u_n - u^*) + (1 - \phi_n)\theta_n(u_n - u_{n-1}) - \phi_n u^*\|^2 \\
&\le \|(1 - \phi_n)(u_n - u^*) + (1 - \phi_n)\theta_n(u_n - u_{n-1})\|^2 + 2\phi_n\langle -u^*,\, w_n - u^*\rangle \\
&= (1 - \phi_n)^2\|u_n - u^*\|^2 + (1 - \phi_n)^2\theta_n^2\|u_n - u_{n-1}\|^2
+ 2\theta_n(1 - \phi_n)^2\|u_n - u^*\|\,\|u_n - u_{n-1}\| \\
&\quad + 2\phi_n\langle -u^*,\, w_n - u_{n+1}\rangle + 2\phi_n\langle -u^*,\, u_{n+1} - u^*\rangle \\
&\le (1 - \phi_n)\|u_n - u^*\|^2 + \theta_n^2\|u_n - u_{n-1}\|^2 + 2\theta_n(1 - \phi_n)\|u_n - u^*\|\,\|u_n - u_{n-1}\| \\
&\quad + 2\phi_n\|u^*\|\,\|w_n - u_{n+1}\| + 2\phi_n\langle -u^*,\, u_{n+1} - u^*\rangle \\
&= (1 - \phi_n)\|u_n - u^*\|^2 + \phi_n\Big[\theta_n\|u_n - u_{n-1}\|\frac{\theta_n}{\phi_n}\|u_n - u_{n-1}\|
+ 2(1 - \phi_n)\|u_n - u^*\|\frac{\theta_n}{\phi_n}\|u_n - u_{n-1}\| \\
&\quad + 2\|u^*\|\,\|w_n - u_{n+1}\| + 2\langle u^*,\, u^* - u_{n+1}\rangle\Big].
\end{aligned}\tag{46}
\]

From expressions (16) and (46), we obtain

\[
\begin{aligned}
\|u_{n+1} - u^*\|^2
&\le (1 - \phi_n)\|u_n - u^*\|^2 + \phi_n\Big[\theta_n\|u_n - u_{n-1}\|\frac{\theta_n}{\phi_n}\|u_n - u_{n-1}\|
+ 2(1 - \phi_n)\|u_n - u^*\|\frac{\theta_n}{\phi_n}\|u_n - u_{n-1}\| \\
&\quad + 2\|u^*\|\,\|w_n - u_{n+1}\| + 2\langle u^*,\, u^* - u_{n+1}\rangle\Big].
\end{aligned}\tag{47}
\]

By the use of (13), (24), (45) and (47) and applying Lemma 2, we conclude that lim_{n→+∞}‖u_n − u*‖ = 0.

Case 2: Suppose that there exists a subsequence {n_i} of {n} such that

\[
\|u_{n_i} - u^*\| \le \|u_{n_i+1} - u^*\|, \quad \forall\, i \in \mathbb{N}.
\]

By using Lemma 3, there exists a non-decreasing sequence {m_k} ⊂ ℕ with m_k → +∞ such that

\[
\|u_{m_k} - u^*\| \le \|u_{m_k+1} - u^*\| \quad \text{and} \quad \|u_k - u^*\| \le \|u_{m_k+1} - u^*\|, \quad \text{for all } k \in \mathbb{N}.\tag{48}
\]

As in Case 1, the relation (22) gives

\[
(1 - \chi L)\|w_{m_k} - y_{m_k}\|^2 + (1 - \chi L)\|u_{m_k+1} - y_{m_k}\|^2
\le \|u_{m_k} - u^*\|^2 + \phi_{m_k}M_2 - \|u_{m_k+1} - u^*\|^2 .\tag{49}
\]

Since φ_{m_k} → 0, we deduce the following:

\[
\lim_{k\to+\infty}\|w_{m_k} - y_{m_k}\| = \lim_{k\to+\infty}\|u_{m_k+1} - y_{m_k}\| = 0 .\tag{50}
\]

It follows that

\[
\lim_{k\to+\infty}\|u_{m_k+1} - w_{m_k}\| \le \lim_{k\to+\infty}\|u_{m_k+1} - y_{m_k}\| + \lim_{k\to+\infty}\|y_{m_k} - w_{m_k}\| = 0 .\tag{51}
\]

Next, we evaluate

\[
\begin{aligned}
\|w_{m_k} - u_{m_k}\| &= \|u_{m_k} + \theta_{m_k}(u_{m_k} - u_{m_k-1}) - \phi_{m_k}\big(u_{m_k} + \theta_{m_k}(u_{m_k} - u_{m_k-1})\big) - u_{m_k}\| \\
&\le \theta_{m_k}\|u_{m_k} - u_{m_k-1}\| + \phi_{m_k}\|u_{m_k}\| + \theta_{m_k}\phi_{m_k}\|u_{m_k} - u_{m_k-1}\| \\
&= \phi_{m_k}\frac{\theta_{m_k}}{\phi_{m_k}}\|u_{m_k} - u_{m_k-1}\| + \phi_{m_k}\|u_{m_k}\| + \phi_{m_k}^2\frac{\theta_{m_k}}{\phi_{m_k}}\|u_{m_k} - u_{m_k-1}\| \longrightarrow 0 .
\end{aligned}\tag{52}
\]

It follows that

\[
\lim_{k\to+\infty}\|u_{m_k} - u_{m_k+1}\| \le \lim_{k\to+\infty}\|u_{m_k} - w_{m_k}\| + \lim_{k\to+\infty}\|w_{m_k} - u_{m_k+1}\| = 0 .\tag{53}
\]

By using the same argument as in Case 1, we obtain

\[
\limsup_{k\to+\infty}\langle u^*,\, u^* - u_{m_k+1}\rangle \le 0 .\tag{54}
\]

By using the expressions (47) and (48) we obtain

\[
\begin{aligned}
\|u_{m_k+1} - u^*\|^2
&\le (1 - \phi_{m_k})\|u_{m_k} - u^*\|^2 + \phi_{m_k}\Big[\theta_{m_k}\|u_{m_k} - u_{m_k-1}\|\frac{\theta_{m_k}}{\phi_{m_k}}\|u_{m_k} - u_{m_k-1}\| \\
&\quad + 2(1 - \phi_{m_k})\|u_{m_k} - u^*\|\frac{\theta_{m_k}}{\phi_{m_k}}\|u_{m_k} - u_{m_k-1}\|
+ 2\|u^*\|\,\|w_{m_k} - u_{m_k+1}\| + 2\langle u^*,\, u^* - u_{m_k+1}\rangle\Big] \\
&\le (1 - \phi_{m_k})\|u_{m_k+1} - u^*\|^2 + \phi_{m_k}\Big[\theta_{m_k}\|u_{m_k} - u_{m_k-1}\|\frac{\theta_{m_k}}{\phi_{m_k}}\|u_{m_k} - u_{m_k-1}\| \\
&\quad + 2(1 - \phi_{m_k})\|u_{m_k} - u^*\|\frac{\theta_{m_k}}{\phi_{m_k}}\|u_{m_k} - u_{m_k-1}\|
+ 2\|u^*\|\,\|w_{m_k} - u_{m_k+1}\| + 2\langle u^*,\, u^* - u_{m_k+1}\rangle\Big].
\end{aligned}\tag{55}
\]

Thus, the above implies that

\[
\begin{aligned}
\|u_{m_k+1} - u^*\|^2
&\le \theta_{m_k}\|u_{m_k} - u_{m_k-1}\|\frac{\theta_{m_k}}{\phi_{m_k}}\|u_{m_k} - u_{m_k-1}\|
+ 2(1 - \phi_{m_k})\|u_{m_k} - u^*\|\frac{\theta_{m_k}}{\phi_{m_k}}\|u_{m_k} - u_{m_k-1}\| \\
&\quad + 2\|u^*\|\,\|w_{m_k} - u_{m_k+1}\| + 2\langle u^*,\, u^* - u_{m_k+1}\rangle .
\end{aligned}\tag{56}
\]

Since φ_{m_k} → 0 and the sequence {‖u_{m_k} − u*‖} is bounded, expressions (54) and (56) imply that

\[
\|u_{m_k+1} - u^*\|^2 \to 0 \quad \text{as } k \to +\infty .\tag{57}
\]

It implies that

\[
\lim_{k\to+\infty}\|u_k - u^*\|^2 \le \lim_{k\to+\infty}\|u_{m_k+1} - u^*\|^2 \le 0 .\tag{58}
\]

As a consequence, u_n → u*. This completes the proof of the theorem.

Lemma 7. Assume that G : H → H satisfies the conditions (B1)–(B4) in Algorithm 2. Then, for each u* ∈ Ω ≠ ∅, we have

\[
\|u_{n+1} - u^*\|^2 \le \|w_n - u^*\|^2 - \Big(1 - \frac{\mu\chi_n}{\chi_{n+1}}\Big)\|w_n - y_n\|^2 - \Big(1 - \frac{\mu\chi_n}{\chi_{n+1}}\Big)\|u_{n+1} - y_n\|^2 .
\]

Proof. Consider that

\[
\begin{aligned}
\|u_{n+1} - u^*\|^2 &= \|P_{H_n}[w_n - \chi_n G(y_n)] - u^*\|^2 \\
&= \|P_{H_n}[w_n - \chi_n G(y_n)] + [w_n - \chi_n G(y_n)] - [w_n - \chi_n G(y_n)] - u^*\|^2 \\
&= \|[w_n - \chi_n G(y_n)] - u^*\|^2 + \|P_{H_n}[w_n - \chi_n G(y_n)] - [w_n - \chi_n G(y_n)]\|^2 \\
&\quad + 2\big\langle P_{H_n}[w_n - \chi_n G(y_n)] - [w_n - \chi_n G(y_n)],\, [w_n - \chi_n G(y_n)] - u^*\big\rangle .
\end{aligned}\tag{59}
\]

Since u* ∈ Ω ⊂ C ⊂ H_n, we obtain

\[
\begin{aligned}
&\|P_{H_n}[w_n - \chi_n G(y_n)] - [w_n - \chi_n G(y_n)]\|^2 \\
&\quad + \big\langle P_{H_n}[w_n - \chi_n G(y_n)] - [w_n - \chi_n G(y_n)],\, [w_n - \chi_n G(y_n)] - u^*\big\rangle \\
&= \big\langle [w_n - \chi_n G(y_n)] - P_{H_n}[w_n - \chi_n G(y_n)],\, u^* - P_{H_n}[w_n - \chi_n G(y_n)]\big\rangle \le 0,
\end{aligned}\tag{60}
\]

which implies that

\[
\big\langle P_{H_n}[w_n - \chi_n G(y_n)] - [w_n - \chi_n G(y_n)],\, [w_n - \chi_n G(y_n)] - u^*\big\rangle
\le -\|P_{H_n}[w_n - \chi_n G(y_n)] - [w_n - \chi_n G(y_n)]\|^2 .\tag{61}
\]

By using expressions (59) and (61), we obtain

\[
\begin{aligned}
\|u_{n+1} - u^*\|^2 &\le \|w_n - \chi_n G(y_n) - u^*\|^2 - \|P_{H_n}[w_n - \chi_n G(y_n)] - [w_n - \chi_n G(y_n)]\|^2 \\
&\le \|w_n - u^*\|^2 - \|w_n - u_{n+1}\|^2 + 2\chi_n\langle G(y_n),\, u^* - u_{n+1}\rangle .
\end{aligned}\tag{62}
\]

Since u* ∈ Ω, we have ⟨G(u*), y − u*⟩ ≥ 0 for all y ∈ C. By the use of condition (B2), we have

\[
\langle G(y),\, y - u^*\rangle \ge 0, \quad \text{for all } y \in C .
\]

Taking y = y_n ∈ C, we obtain

\[
\langle G(y_n),\, y_n - u^*\rangle \ge 0 .
\]

Thus, we have

\[
\langle G(y_n),\, u^* - u_{n+1}\rangle = \langle G(y_n),\, u^* - y_n\rangle + \langle G(y_n),\, y_n - u_{n+1}\rangle \le \langle G(y_n),\, y_n - u_{n+1}\rangle .\tag{63}
\]

Combining expressions (62) and (63), we obtain

\[
\begin{aligned}
\|u_{n+1} - u^*\|^2 &\le \|w_n - u^*\|^2 - \|w_n - u_{n+1}\|^2 + 2\chi_n\langle G(y_n),\, y_n - u_{n+1}\rangle \\
&\le \|w_n - u^*\|^2 - \|w_n - y_n + y_n - u_{n+1}\|^2 + 2\chi_n\langle G(y_n),\, y_n - u_{n+1}\rangle \\
&\le \|w_n - u^*\|^2 - \|w_n - y_n\|^2 - \|y_n - u_{n+1}\|^2 + 2\langle w_n - \chi_n G(y_n) - y_n,\, u_{n+1} - y_n\rangle .
\end{aligned}\tag{64}
\]

Note that u_{n+1} = P_{H_n}[w_n − χ_nG(y_n)] and, by the definition of χ_{n+1}, we have

\[
\begin{aligned}
2\langle w_n - \chi_n G(y_n) - y_n,\, u_{n+1} - y_n\rangle
&= 2\langle w_n - \chi_n G(w_n) - y_n,\, u_{n+1} - y_n\rangle + 2\chi_n\langle G(w_n) - G(y_n),\, u_{n+1} - y_n\rangle \\
&\le 2\chi_n\|G(w_n) - G(y_n)\|\,\|u_{n+1} - y_n\| \le \frac{2\mu\chi_n}{\chi_{n+1}}\|w_n - y_n\|\,\|u_{n+1} - y_n\| \\
&\le \frac{\mu\chi_n}{\chi_{n+1}}\|w_n - y_n\|^2 + \frac{\mu\chi_n}{\chi_{n+1}}\|u_{n+1} - y_n\|^2 .
\end{aligned}\tag{65}
\]

Combining expressions (64) and (65), we obtain

\[
\begin{aligned}
\|u_{n+1} - u^*\|^2
&\le \|w_n - u^*\|^2 - \|w_n - y_n\|^2 - \|y_n - u_{n+1}\|^2 + \frac{\chi_n}{\chi_{n+1}}\big(\mu\|w_n - y_n\|^2 + \mu\|u_{n+1} - y_n\|^2\big) \\
&\le \|w_n - u^*\|^2 - \Big(1 - \frac{\mu\chi_n}{\chi_{n+1}}\Big)\|w_n - y_n\|^2 - \Big(1 - \frac{\mu\chi_n}{\chi_{n+1}}\Big)\|u_{n+1} - y_n\|^2 .
\end{aligned}\tag{66}
\]

Theorem 2. Let {u_n} be the sequence generated by Algorithm 2 under the conditions (B1)–(B4). Then {u_n} converges strongly to u* ∈ Ω, where u* = P_Ω(0).

Proof. From Lemma 7, we have

\[
\|u_{n+1} - u^*\|^2 \le \|w_n - u^*\|^2 - \Big(1 - \frac{\mu\chi_n}{\chi_{n+1}}\Big)\|w_n - y_n\|^2 - \Big(1 - \frac{\mu\chi_n}{\chi_{n+1}}\Big)\|u_{n+1} - y_n\|^2 .\tag{67}
\]

Since χ_n → χ > 0, for any ε ∈ (0, 1 − μ) we have

\[
\lim_{n\to\infty}\Big(1 - \frac{\mu\chi_n}{\chi_{n+1}}\Big) = 1 - \mu > \varepsilon > 0 .
\]

Therefore, there exists N_1^* ∈ ℕ such that

\[
1 - \frac{\mu\chi_n}{\chi_{n+1}} > \varepsilon > 0, \quad \forall\, n \ge N_1^* .\tag{68}
\]

This implies that

\[
\|u_{n+1} - u^*\|^2 \le \|w_n - u^*\|^2, \quad \forall\, n \ge N_1^* .\tag{69}
\]

The rest of the proof follows the same steps as in the proof of Theorem 1.

4. Numerical Illustrations

This section examines four numerical experiments to show the efficacy of the proposed algorithms. These numerical experiments provide a detailed understanding of how better control parameters can be chosen. Some of them show the advantages of the proposed methods compared to existing ones in the literature.

Example 1. Firstly, consider the HpHard problem taken from [30]; this example has been studied by many authors for numerical experiments (see [31–33] for details). Let G : ℝ^m → ℝ^m be the mapping defined by G(u) = Mu + q, where q ∈ ℝ^m and M = NN^T + B + D, where B is an m × m skew-symmetric matrix, N is an m × m matrix and D is an m × m positive definite diagonal matrix. The set C is taken in the following way:

\[
C = \{u \in \mathbb{R}^m : -10 \le u_i \le 10\}.
\]

It is clear that G is monotone and Lipschitz continuous with L = ‖M‖. For q = 0, the solution set of the corresponding variational inequality problem is Ω = {0}. During this experiment, the initial point is u_0 = u_1 = (1, 1, ··· , 1) and the stopping criterion is D_n = ‖w_n − y_n‖ ≤ 10^{−4}. The numerical findings of these methods are shown in Figures 1–6 and Table 1; a small sketch of one possible construction of this test problem is given after the parameter list below. The control conditions are taken as follows:

(i) Algorithm 3.4 in [23] (shortly, MT-EgM):

\[
\chi_0 = 0.20, \quad \theta = 0.70, \quad \mu = 0.30, \quad \phi_n = \frac{1}{n+2}, \quad \tau_n = \frac{1}{(n+1)^2}, \quad \theta_n = \frac{5}{10}(1 - \phi_n).
\]

(ii) Algorithm 3.2 in [24] (shortly, VT1-EgM):

\[
\tau_0 = 0.20, \quad \theta = 0.50, \quad \mu = 0.50, \quad \phi_n = \frac{1}{n+1}, \quad e_n = \frac{1}{(n+1)^2}, \quad f(u) = \frac{u}{3}.
\]

(iii) Algorithm 3.1 in [25] (shortly, VT2-EgM):

\[
\chi = \frac{0.7}{L}, \quad \theta = 0.70, \quad \phi_n = \frac{1}{n+2}, \quad \tau_n = \frac{1}{(n+1)^2}, \quad f(u) = \frac{u}{3}.
\]

(iv) Algorithm 1 (shortly, I1-EgA):

\[
\chi = \frac{0.7}{L}, \quad \theta = 0.70, \quad \phi_n = \frac{1}{n+2}, \quad e_n = \frac{1}{(n+1)^2}.
\]

(v) Algorithm 2 (shortly, I2-EgA):

\[
\chi_0 = 0.20, \quad \mu = 0.30, \quad \theta = 0.70, \quad \phi_n = \frac{1}{n+2}, \quad e_n = \frac{1}{(n+1)^2}.
\]
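The sketch announced above shows one way to generate the HpHard data of Example 1 and to estimate L = ‖M‖. The random distributions for N, B and D are our own assumptions, since the text only specifies the structure of M:

```python
import numpy as np

def hphard_problem(m, seed=0):
    """Assemble G(u) = M u + q with M = N N^T + B + D as in Example 1 (here q = 0)."""
    rng = np.random.default_rng(seed)
    N = rng.uniform(-1.0, 1.0, size=(m, m))
    S = rng.uniform(-1.0, 1.0, size=(m, m))
    B = S - S.T                                   # skew-symmetric matrix
    D = np.diag(rng.uniform(0.1, 1.0, size=m))    # positive definite diagonal matrix
    M = N @ N.T + B + D
    G = lambda u: M @ u                           # q = 0, so the solution set is {0}
    L = np.linalg.norm(M, 2)                      # Lipschitz constant L = ||M||
    proj_C = lambda u: np.clip(u, -10.0, 10.0)    # box constraint of Example 1
    return G, proj_C, L
```

With `G, proj_C, L = hphard_problem(20)`, a call such as `algorithm1(G, proj_C, np.ones(20), chi=0.7 / L)` reproduces the set-up of item (iv) above, up to the unspecified random distributions.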

Example 2. Assume that H = L²([0, 1]) is a Hilbert space with the inner product

\[
\langle u,\, y\rangle = \int_0^1 u(t)y(t)\,dt, \quad \forall\, u, y \in H,
\]

and the norm defined by

\[
\|u\| = \sqrt{\int_0^1 |u(t)|^2\,dt}.
\]

Consider the unit ball C := {u ∈ L²([0, 1]) : ‖u‖ ≤ 1} and let G : C → H be defined by

\[
G(u)(t) = u(t) - \int_0^1 H(t, s)f(u(s))\,ds + g(t),
\]

where

\[
H(t, s) = \frac{2tse^{t+s}}{e\sqrt{e^2 - 1}}, \quad f(u) = \cos u, \quad g(t) = \frac{2te^{t}}{e\sqrt{e^2 - 1}}.
\]
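Since Example 2 is posed in L²([0, 1]), any implementation has to discretize. The sketch below is our own illustration, using a uniform grid and the trapezoidal rule (both our choices) to approximate G(u)(t) and the unit-ball projection:

```python
import numpy as np

m = 200                                   # number of grid points (our choice)
t = np.linspace(0.0, 1.0, m)
w_quad = np.full(m, 1.0 / (m - 1))        # trapezoidal quadrature weights
w_quad[0] = w_quad[-1] = 0.5 / (m - 1)

c = 2.0 / (np.e * np.sqrt(np.e ** 2 - 1))
H = c * np.outer(t, t) * np.exp(t[:, None] + t[None, :])   # kernel H(t, s) on the grid
g = c * t * np.exp(t)                                       # g(t) on the grid

def G(u):
    """Discretized G(u)(t) = u(t) - int_0^1 H(t, s) cos(u(s)) ds + g(t)."""
    return u - H @ (w_quad * np.cos(u)) + g

def proj_C(u):
    """Projection onto the unit ball of L^2([0, 1]) in the discretized norm."""
    nrm = np.sqrt(np.sum(w_quad * u ** 2))
    return u if nrm <= 1.0 else u / nrm
```

With these in place, `algorithm1(G, proj_C, u0, chi=0.75 / 2)` mimics the fixed-step choice listed for Algorithm 1 below (χ = 0.75/L with L = 2), for a starting function u0 evaluated on the grid; again, this is an indicative sketch rather than the authors' code.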

It can be seen that G is Lipschitz continuous with Lipschitz constant L = 2 and monotone. Figures 7–9 and Table 2 show the numerical results for different choices of u_0. The control conditions are taken as follows:

(i) Algorithm 3.4 in [23] (shortly, MT-EgM):

\[
\chi_0 = 0.25, \quad \theta = 0.75, \quad \mu = 0.35, \quad \tau_n = \frac{1}{(n+1)^2}, \quad \phi_n = \frac{1}{2(n+2)}, \quad \theta_n = \frac{6}{10}(1 - \phi_n).
\]

(ii) Algorithm 3.2 in [24] (shortly, VT1-EgM):

\[
\tau_0 = 0.25, \quad \theta = 0.75, \quad \mu = 0.35, \quad e_n = \frac{1}{(n+1)^2}, \quad \phi_n = \frac{1}{2(n+2)}, \quad f(u) = \frac{u}{4}.
\]

(iii) Algorithm 3.1 in [25] (shortly, VT2-EgM):

\[
\chi = \frac{0.75}{L}, \quad \theta = 0.75, \quad \tau_n = \frac{1}{(n+1)^2}, \quad \phi_n = \frac{1}{2(n+2)}, \quad f(u) = \frac{u}{4}.
\]

(iv) Algorithm 1 (shortly, I1-EgA):

\[
\chi = \frac{0.75}{L}, \quad \theta = 0.75, \quad e_n = \frac{1}{(n+1)^2}, \quad \phi_n = \frac{1}{2(n+2)}.
\]

(v) Algorithm 2 (shortly, I2-EgA):

\[
\chi_0 = 0.25, \quad \mu = 0.35, \quad \theta = 0.75, \quad e_n = \frac{1}{(n+1)^2}, \quad \phi_n = \frac{1}{2(n+2)}.
\]

Table 1. Numerical data for Figures 1–6. (Iter. = number of iterations; Time = elapsed time in seconds.)

Dimension                 m = 5              m = 20             m = 50             m = 100
Algorithm Name            Iter.  Time        Iter.  Time        Iter.  Time        Iter.  Time
Algorithm 3.4 in [23]     48     0.203351    264    1.440695    294    1.514870    313    1.817463
Algorithm 3.2 in [24]     30     0.150336    321    1.766826    357    1.966747    306    2.782575
Algorithm 3.1 in [25]     27     0.131781    42     0.182407    41     0.286670    40     0.234242
Algorithm 1               10     0.074348    10     0.041680    10     0.055002    9      0.064665
Algorithm 2               14     0.064953    142    0.672565    89     0.437610    65     0.380447

Figure 1. Numerical illustration of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24] and Algorithm 3.1 in [25] when m = 5.

Figure 2. Numerical illustration of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24] and Algorithm 3.1 in [25] when m = 20.

Figure 3. Numerical illustration of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24] and Algorithm 3.1 in [25] when m = 50.

Figure 4. Numerical illustration of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24] and Algorithm 3.1 in [25] when m = 50 (plotted against elapsed time in seconds).

Figure 5. Numerical illustration of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24] and Algorithm 3.1 in [25] when m = 100.

Figure 6. Numerical illustration of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24] and Algorithm 3.1 in [25] when m = 100 (plotted against elapsed time in seconds).

Table 2. Numerical data for Figures 7–9.

u_0 = u_1                 t^2 + 1            3t^2 + 2 sin(t)    5t^2 + e^t
Algorithm Name            Iter.  Time        Iter.  Time        Iter.  Time
Algorithm 3.4 in [23]     57     0.037861    71     0.168874    81     0.207324
Algorithm 3.2 in [24]     27     0.021260    32     0.086875    34     0.109731
Algorithm 3.1 in [25]     19     0.012435    23     0.042838    26     0.049123
Algorithm 1               14     0.014493    14     0.032441    12     0.031265
Algorithm 2               11     0.017906    15     0.042816    21     0.076447

Figure 7. Numerical illustration of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24] and Algorithm 3.1 in [25] when u_0 = u_1 = t^2 + 1.

Figure 8. Numerical illustration of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24] and Algorithm 3.1 in [25] when u_0 = u_1 = 3t^2 + 2 sin(t).

Figure 9. Numerical illustration of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24] and Algorithm 3.1 in [25] when u_0 = u_1 = 5t^2 + e^t.

Example 3. Consider the Kojima–Shindo problem, where the constraint set C is

\[
C = \{u \in \mathbb{R}^4 : 1 \le u_i \le 5,\ i = 1, 2, 3, 4\},
\]

and the mapping G : ℝ⁴ → ℝ⁴ is defined by

\[
G(u) =
\begin{pmatrix}
u_1 + u_2 + u_3 + u_4 - 4u_2u_3u_4\\
u_1 + u_2 + u_3 + u_4 - 4u_1u_3u_4\\
u_1 + u_2 + u_3 + u_4 - 4u_1u_2u_4\\
u_1 + u_2 + u_3 + u_4 - 4u_1u_2u_3
\end{pmatrix}.
\]

It is easy to see that G is not monotone on the set C. By using the Monte Carlo approach [34], it can be shown that G is pseudo-monotone on C. This problem has a unique solution u* = (5, 5, 5, 5)^T. In general, it is a very difficult task to check the pseudomonotonicity of a mapping G in practice. We here employ the Monte Carlo approach according to the definition of pseudo-monotonicity: generate a large number of pairs of points u and y uniformly in C satisfying G(u)^T(y − u) ≥ 0 and then check whether G(y)^T(y − u) ≥ 0; a short sketch of this check is given after the parameter list below. Tables 3–8 show the numerical results for different values of u_0. The control conditions are taken as follows:

(i) Algorithm 3.4 in [23] (shortly, MT-EgM):

\[
\chi_0 = 0.05, \quad \theta = 0.70, \quad \mu = 0.33, \quad \tau_n = \frac{1}{(n+1)^2}, \quad \phi_n = \frac{1}{50(n+2)}, \quad \theta_n = \frac{6}{10}(1 - \phi_n).
\]

(ii) Algorithm 3.2 in [24] (shortly, VT1-EgM):

\[
\tau_0 = 0.05, \quad \theta = 0.70, \quad \mu = 0.33, \quad e_n = \frac{1}{(n+1)^2}, \quad \phi_n = \frac{1}{50(n+2)}, \quad f(u) = \frac{u}{3}.
\]

(iii) Algorithm 3.1 in [25] (shortly, VT2-EgM):

\[
\chi = \frac{0.7}{L}, \quad \theta = 0.70, \quad \tau_n = \frac{1}{(n+1)^2}, \quad \phi_n = \frac{1}{50(n+2)}, \quad f(u) = \frac{u}{3}.
\]

(iv) Algorithm 1 (shortly, I1-EgA):

\[
\chi = \frac{0.7}{L}, \quad \theta = 0.70, \quad e_n = \frac{1}{(n+1)^2}, \quad \phi_n = \frac{1}{50(n+2)}.
\]

(v) Algorithm 2 (shortly, I2-EgA):

\[
\chi_0 = 0.05, \quad \mu = 0.33, \quad \theta = 0.70, \quad e_n = \frac{1}{(n+1)^2}, \quad \phi_n = \frac{1}{50(n+2)}.
\]
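The Monte Carlo check of pseudo-monotonicity mentioned in Example 3 can be sketched as follows; the sampling scheme, sample size and tolerance are our own choices, and the outcome is numerical evidence, not a proof:

```python
import numpy as np

def G(u):
    """Mapping of Example 3 (Kojima-Shindo)."""
    s = np.sum(u)
    u1, u2, u3, u4 = u
    return np.array([s - 4 * u2 * u3 * u4,
                     s - 4 * u1 * u3 * u4,
                     s - 4 * u1 * u2 * u4,
                     s - 4 * u1 * u2 * u3])

def monte_carlo_pseudomonotone(samples=100000, seed=0, tol=1e-12):
    """Sample pairs (u, y) in C = [1, 5]^4; whenever <G(u), y - u> >= 0, require <G(y), y - u> >= 0."""
    rng = np.random.default_rng(seed)
    for _ in range(samples):
        u = rng.uniform(1.0, 5.0, size=4)
        y = rng.uniform(1.0, 5.0, size=4)
        if np.dot(G(u), y - u) >= 0 and np.dot(G(y), y - u) < -tol:
            return False          # a violation was found on the sample
    return True                   # no violation found among the sampled pairs

print(monte_carlo_pseudomonotone())
```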

Table 3. Example 3: Numerical findings of Algorithm 3.4 in [23] and u_0 = u_1 = (1, 2, 3, 4)^T.

Iter (n)   u1                   u2                   u3                   u4
1          7.88110105259549     11.1335921052147     11.6608026315329     11.4916973684078
2          2.72517069971347     5.52196307009909     5.91062617486984     5.78231681677249
3          2.74650358169779     5.23892462205654     5.43540972965547     5.37058363048288
4          2.79589652551700     5.10560097798145     5.20481731038818     5.17209179402342
5          2.84815300477526     5.04321208318383     5.09330195427214     5.07678246381620
6          2.90242325540275     5.01444727730850     5.03972755805987     5.03139070422338
7          2.95876979857588     5.00154113107122     5.01429541422714     5.01008946137975
8          3.01614532087206     4.99604650580771     5.00248193413128     5.00035976548024
...        ...                  ...                  ...                  ...
194        4.99949159554096     4.99949159554096     4.99949159554096     4.99949159554096
195        4.99949420145518     4.99949420145518     4.99949420145518     4.99949420145518
196        4.99949678078979     4.99949678078979     4.99949678078979     4.99949678078979
197        4.99949933394941     4.99949933394941     4.99949933394941     4.99949933394941
198        4.99950186133047     4.99950186133047     4.99950186133047     4.99950186133047
CPU time (seconds): 1.011633

Table 4. Example 3: Numerical findings of Algorithm 3.2 in [24] and u_0 = u_1 = (1, 2, 3, 4)^T.

Iter (n)   u1                   u2                   u3                   u4
1          4.94814814707507     20.1659074073104     20.2289444443514     18.9075370362809
2          −8.04670389554580    12.4597307883609     12.5054021946827     11.3787988702116
3          1.03972935440247     4.90132095088790     4.89452802770933     4.98533018397129
4          1.11973537635742     4.89218078676053     4.88537889965809     4.95475431398367
5          1.20128523014071     4.89169908830941     4.88493057181313     4.94635475329384
6          1.27774539110779     4.89625929375273     4.88953484996449     4.94790178020963
7          1.34983811312425     4.90378394656115     4.89710336213719     4.95376528318611
8          1.41896397454886     4.91334870977828     4.90670981629961     4.96199173340817
...        ...                  ...                  ...                  ...
144        4.99948911782781     4.99948911782781     4.99948911782781     4.99948911782781
145        4.99949261719944     4.99949261719944     4.99949261719944     4.99949261719944
146        4.99949606895811     4.99949606895811     4.99949606895811     4.99949606895811
147        4.99949947406902     4.99949947406902     4.99949947406902     4.99949947406902
148        4.99950283347142     4.99950283347142     4.99950283347142     4.99950283347142
CPU time (seconds): 0.7115419

Table 5. Example 3: Numerical findings of Algorithm 2 and u_0 = u_1 = (1, 2, 3, 4)^T.

Iter (n)   u1                   u2                   u3                   u4
1          4.99999999934819     20.0658752056891     20.0955845156778     18.7728355231621
2          −7.83144812920198    11.9265613339038     11.9437095513756     10.8524606746456
3          1.01490159056644     4.96724347328928     4.96512000980275     5.10532223071200
4          1.10642915068494     4.93819144878954     4.93599467724991     4.99999999995773
5          1.18463430195965     4.93374030970576     4.93153342235750     4.97372522603788
6          1.26412238398338     4.93785658351241     4.93565678779788     4.96968839262467
7          1.33805311564430     4.94627563468593     4.94408827533979     4.97519539293950
8          1.40748289276225     4.95692798081047     4.95475304777657     4.98440717807182
...        ...                  ...                  ...                  ...
96         4.99999999962290     4.99999999962290     4.99999999962290     4.99999999962290
97         4.99999999962318     4.99999999962318     4.99999999962318     4.99999999962318
98         4.99999999962346     4.99999999962346     4.99999999962346     4.99999999962346
99         4.99999999962373     4.99999999962373     4.99999999962373     4.99999999962373
100        4.99999999962399     4.99999999962399     4.99999999962399     4.99999999962399
CPU time (seconds): 0.503420

Table 6. Example 3: Numerical findings of Algorithm 3.4 in [23] and u_0 = u_1 = (−1, 0, 1, 2)^T.

Iter (n)   u1                   u2                   u3                   u4
1          0.116881581449987    0.813197370923916    1.11161842825326     1.96709210640194
2          0.544096187738739    0.963936802772123    1.15066880224072     1.94902465158251
3          0.767743928389032    1.01063135569287     1.17422202753691     1.93884243796376
4          0.883925792457061    1.04508825238764     1.19706742465652     1.93285090332439
5          0.943732024663154    1.07700290591513     1.21999679143733     1.92972670723471
6          0.985076884925896    1.10717191339312     1.24263089418537     1.92912929557214
7          1.02327142457484     1.13769463060810     1.26640373232346     1.93117221434202
8          1.06041942145622     1.16848656060979     1.29115424126752     1.93571856968895
...        ...                  ...                  ...                  ...
178        4.99949304114715     4.99949304114715     4.99949304114715     4.99949304114715
179        4.99949586970068     4.99949586970068     4.99949586970068     4.99949586970068
180        4.99949866686358     4.99949866686358     4.99949866686358     4.99949866686358
181        4.99950143315555     4.99950143315555     4.99950143315555     4.99950143315555
182        4.99950416908485     4.99950416908485     4.99950416908485     4.99950416908485
CPU time (seconds): 0.957781

Table 7. Example 3: Numerical findings of Algorithm 3.2 in [24] and u_0 = u_1 = (−1, 0, 1, 2)^T.

Iter (n)   u1                   u2                   u3                   u4
1          0.760053582775300    1.29003298382748     1.20319480217964     1.94068518744861
2          1.21804298615978     1.59414241359208     1.42464988939697     2.01820655416591
3          1.42659480361034     1.73943108707035     1.57161284553192     2.09415611599989
4          1.57886167340713     1.85447528443790     1.69967500052918     2.17426416753376
5          1.71808153256788     1.96701984456950     1.82539532502208     2.26236423411214
6          1.85632702927712     2.08392415770031     1.95375834077497     2.35895786218053
7          1.99842296843380     2.20786382779888     2.08766201331954     2.46482556588447
8          2.14689895343515     2.34039691864025     2.22903433081793     2.58085673886984
...        ...                  ...                  ...                  ...
144        4.99948910579703     4.99948910579703     4.99948910579703     4.99948910579703
145        4.99949260517052     4.99949260517052     4.99949260517052     4.99949260517052
146        4.99949605693103     4.99949605693103     4.99949605693103     4.99949605693103
147        4.99949946204375     4.99949946204375     4.99949946204375     4.99949946204375
148        4.99950282144794     4.99950282144794     4.99950282144794     4.99950282144794
CPU time (seconds): 0.8748252

Table 8. Example 3: Numerical findings of Algorithm 2 and u_0 = u_1 = (−1, 0, 1, 2)^T.

Iter (n)   u1                   u2                   u3                   u4
1          0.775171247844478    1.30058646543549     1.21022901009941     1.94547500162994
2          1.21446821311615     1.59315820520707     1.41802847051195     2.01854575677085
3          1.40930154934255     1.72830536082867     1.55238769919704     2.08704132135424
4          1.55005042100928     1.83358202294312     1.66945202714427     2.15901099064876
5          1.67652335037437     1.93461531040966     1.78314662252036     2.23718181897121
6          1.80020428317844     2.03794015805606     1.89767214585452     2.32166725738566
7          1.92573577123647     2.14610904279505     2.01564105118587     2.41306525933486
8          2.05546783590764     2.26053026824905     2.13879627759795     2.51211478239818
...        ...                  ...                  ...                  ...
96         4.99999999998368     4.99999999998368     4.99999999998368     4.99999999998368
97         4.99999999998368     4.99999999998368     4.99999999998368     4.99999999998368
98         4.99999999998368     4.99999999998368     4.99999999998368     4.99999999998368
99         4.99999999998369     4.99999999998369     4.99999999998369     4.99999999998369
100        4.99999999998369     4.99999999998369     4.99999999998369     4.99999999998369
CPU time (seconds): 0.544268

Example 4. The last example is taken from [35], where G : ℝ² → ℝ² is defined by

\[
G(u) =
\begin{pmatrix}
0.5u_1u_2 - 2u_2 - 10^7\\
-4u_1 - 0.1u_2^2 - 10^7
\end{pmatrix},
\]

where C = {u ∈ ℝ² : (u_1 − 2)² + (u_2 − 2)² ≤ 1}. It can easily be seen that G is Lipschitz continuous with L = 5 and that G is not monotone on C but pseudomonotone. This problem has the unique solution u* = (2.707, 2.707)^T. Figures 10–13 and Table 9 show the numerical findings for different values of u_0; a small set-up sketch for this example is given after the parameter list below. The control conditions are taken as follows:

(i) Algorithm 3.4 in [23] (shortly, MT-EgM):

\[
\chi_0 = 0.35, \quad \theta = 0.80, \quad \mu = 0.55, \quad \tau_n = \frac{1}{(n+1)^2}, \quad \phi_n = \frac{1}{100(n+2)}, \quad \theta_n = \frac{6}{10}(1 - \phi_n).
\]

(ii) Algorithm 3.2 in [24] (shortly, VT1-EgM):

\[
\tau_0 = 0.35, \quad \theta = 0.80, \quad \mu = 0.55, \quad e_n = \frac{1}{(n+1)^2}, \quad \phi_n = \frac{1}{100(n+1)}, \quad f(u) = \frac{u}{5}.
\]

(iii) Algorithm 3.1 in [25] (shortly, VT2-EgM):

\[
\chi = \frac{0.8}{L}, \quad \theta = 0.80, \quad \tau_n = \frac{1}{(n+1)^2}, \quad \phi_n = \frac{1}{100(n+2)}, \quad f(u) = \frac{u}{5}.
\]

(iv) Algorithm 1 (shortly, I1-EgA):

\[
\chi = \frac{0.8}{L}, \quad \theta = 0.80, \quad e_n = \frac{1}{(n+1)^2}, \quad \phi_n = \frac{1}{100(n+2)}.
\]

(v) Algorithm 2 (shortly, I2-EgA):

\[
\chi_0 = 0.35, \quad \mu = 0.55, \quad \theta = 0.80, \quad e_n = \frac{1}{(n+1)^2}, \quad \phi_n = \frac{1}{100(n+2)}.
\]
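For Example 4 the feasible set is a shifted ball, whose projection is again explicit. The small sketch below (ours, not the authors' code) sets up the operator and projection so that the method sketches given earlier can be applied:

```python
import numpy as np

def G(u):
    """Operator of Example 4 (stated in the text to be pseudo-monotone with L = 5 on C)."""
    u1, u2 = u
    return np.array([0.5 * u1 * u2 - 2.0 * u2 - 1e7,
                     -4.0 * u1 - 0.1 * u2 ** 2 - 1e7])

def proj_C(u, center=np.array([2.0, 2.0]), radius=1.0):
    """Projection onto the ball {u : (u1 - 2)^2 + (u2 - 2)^2 <= 1}."""
    d = u - center
    nd = np.linalg.norm(d)
    return u if nd <= radius else center + radius * d / nd
```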

Table 9. Numerical data for Figures 10–13.

u_0 = u_1                 (1.5, 1.7)^T       (2.0, 3.0)^T       (1.0, 2.0)^T       (2.7, 2.6)^T
Algorithm Name            Iter.  Time        Iter.  Time        Iter.  Time        Iter.  Time
Algorithm 3.4 in [23]     61     3.083492    59     4.127714    60     2.882394    59     3.111729
Algorithm 3.2 in [24]     48     2.189625    49     2.674055    49     2.448063    49     2.306584
Algorithm 3.1 in [25]     38     1.440188    38     1.684040    38     1.784568    38     1.645227
Algorithm 1               23     0.933457    23     1.021092    24     1.139583    23     0.922199
Algorithm 2               19     0.899018    19     0.969045    19     0.907344    19     0.953694

Figure 10. Numerical illustration of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24] and Algorithm 3.1 in [25] when u_0 = u_1 = (1.5, 1.7)^T.

Figure 11. Numerical illustration of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24] and Algorithm 3.1 in [25] when u_0 = u_1 = (2.0, 3.0)^T.

Figure 12. Numerical illustration of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24] and Algorithm 3.1 in [25] when u_0 = u_1 = (1.0, 2.0)^T.

Figure 13. Numerical illustration of Algorithms 1 and 2 with Algorithm 3.4 in [23], Algorithm 3.2 in [24] and Algorithm 3.1 in [25] when u_0 = u_1 = (2.7, 2.6)^T.

5. Conclusions

In this study, we have introduced two new methods for finding a solution of the variational inequality problem in a Hilbert space. The results have been established on the basis of two previous methods: the subgradient extragradient method and the inertial method. Some new approaches to the inertial framework and the step size rule have been set up. The strong convergence of the proposed methods is established under the pseudo-monotonicity and Lipschitz continuity of the mapping. Some numerical results are presented to illustrate the performance of the methods in comparison with others. The results in this paper can be used as tools for solving variational inequality problems in Hilbert spaces. Finally, the numerical experiments indicate that the inertial approach normally enhances the performance of the proposed methods.

Author Contributions: Conceptualization, K.M., N.A.A. and I.K.A.; methodology, K.M. and N.A.A.; software, K.M., N.A.A. and I.K.A.; validation, N.A.A. and I.K.A.; formal analysis, K.M. and N.A.A.; investigation, K.M., N.A.A. and I.K.A.; writing—original draft preparation, K.M., N.A.A. and I.K.A.; writing—review and editing, K.M., N.A.A. and I.K.A.; visualization, K.M., N.A.A. and I.K.A.; supervision and funding, K.M. and I.K.A. All authors have read and agreed to the published version of the manuscript. Funding: This research received no external funding. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: Not applicable. Acknowledgments: The first author was supported by Rajamangala University of Technology Phra Nakhon (RMUTP). Conflicts of Interest: The authors declare no competing interests.

References
1. Konnov, I.V. On systems of variational inequalities. Russ. Math. (Izv. Vyss. Uchebnye Zaved. Mat.) 1997, 41, 77–86.
2. Stampacchia, G. Formes bilinéaires coercitives sur les ensembles convexes. Comptes Rendus Hebd. Seances Acad. Sci. 1964, 258, 4413.
3. Elliott, C.M. Variational and quasivariational inequalities: Applications to free-boundary problems (Claudio Baiocchi and António Capelo). SIAM Rev. 1987, 29, 314–315. [CrossRef]
4. Kassay, G.; Kolumbán, J.; Páles, Z. On Nash stationary points. Publ. Math. 1999, 54, 267–279.
5. Kassay, G.; Kolumbán, J.; Páles, Z. Factorization of Minty and Stampacchia variational inequality systems. Eur. J. Oper. Res. 2002, 143, 377–389. [CrossRef]
6. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2000.
7. Konnov, I. Equilibrium Models and Variational Inequalities; Elsevier: The Netherlands, 2007; Volume 210.
8. Nagurney, A. Network Economics: A Variational Inequality Approach; Springer: Boston, MA, USA, 1999.
9. Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009.
10. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2010, 148, 318–335. [CrossRef] [PubMed]
11. Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61, 1119–1132. [CrossRef]
12. Iusem, A.N.; Svaiter, B.F. A variant of Korpelevich's method for variational inequalities with a new search strategy. Optimization 1997, 42, 309–321. [CrossRef]
13. Korpelevich, G. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756.
14. Malitsky, Y.V.; Semenov, V.V. An extragradient algorithm for monotone variational inequalities. Cybern. Syst. Anal. 2014, 50, 271–277. [CrossRef]
15. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55. [CrossRef]
16. Noor, M.A. Some iterative methods for nonconvex variational inequalities. Comput. Math. Model. 2010, 21, 97–108. [CrossRef]
17. Thong, D.V.; Hieu, D.V. Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 2017, 79, 597–610. [CrossRef]
18. Thong, D.V.; Hieu, D.V. Weak and strong convergence theorems for variational inequality problems. Numer. Algorithms 2017, 78, 1045–1060. [CrossRef]
19. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446. [CrossRef]
20. Zhang, L.; Fang, C.; Chen, S. An inertial subgradient-type method for solving single-valued variational inequalities and fixed point problems. Numer. Algorithms 2018, 79, 941–956. [CrossRef]
21. Antipin, A.S. On a method for convex programs using a symmetrical modification of the Lagrange function. Ekon. Mat. Metod. 1976, 12, 1164–1173.
22. Polyak, B. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17. [CrossRef]
23. Anh, P.K.; Thong, D.V.; Vinh, N.T. Improved inertial extragradient methods for solving pseudo-monotone variational inequalities. Optimization 2020, 1–24. [CrossRef]
24. Thong, D.V.; Hieu, D.V.; Rassias, T.M. Self adaptive inertial subgradient extragradient algorithms for solving pseudomonotone variational inequality problems. Optim. Lett. 2019, 14, 115–144. [CrossRef]
25. Thong, D.V.; Vinh, N.T.; Cho, Y.J. A strong convergence theorem for Tseng's extragradient method for solving variational inequality problems. Optim. Lett. 2019, 14, 1157–1175. [CrossRef]
26. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: Germany, 2011; Volume 408.
27. Xu, H.-K. Another control condition in an iterative method for nonexpansive mappings. Bull. Aust. Math. Soc. 2002, 65, 109–113. [CrossRef]
28. Maingé, P.-E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912. [CrossRef]
29. Takahashi, W. Nonlinear Analysis; Yokohama Publishers: Yokohama, Japan, 2000.
30. Harker, P.T.; Pang, J.-S. For the linear complementarity problem. Comput. Solut. Nonlinear Syst. Equ. 1990, 26, 265.
31. Dong, Q.L.; Cho, Y.J.; Zhong, L.L.; Rassias, T.M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2017, 70, 687–704. [CrossRef]
32. Hieu, D.V.; Anh, P.K.; Muu, L.D. Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput. Optim. Appl. 2016, 66, 75–96. [CrossRef]
33. Solodov, M.V.; Svaiter, B.F. A new projection method for variational inequality problems. SIAM J. Control Optim. 1999, 37, 765–776. [CrossRef]
34. Hu, X.; Wang, J. Solving pseudomonotone variational inequalities and pseudoconvex optimization problems using the projection neural network. IEEE Trans. Neural Netw. 2006, 17, 1487–1499.
35. Shehu, Y.; Dong, Q.-L.; Jiang, D. Single projection method for pseudo-monotone variational inequality in Hilbert spaces. Optimization 2018, 68, 385–409. [CrossRef]