
JOURNAL OF INDUSTRIAL AND MANAGEMENT OPTIMIZATION
Volume 1, Number 2, May 2005, pp. 219-233
Website: http://AIMsciences.org

A DISCRETIZATION BASED SMOOTHING METHOD FOR SOLVING SEMI-INFINITE VARIATIONAL INEQUALITIES

Burcu Özçam
Department of Industrial Engineering, North Carolina State University, Raleigh, NC 27695-7906, USA

Hao Cheng
SAS Institute Inc., Cary, NC 27513, USA

Abstract. We propose a new smoothing technique based on discretization to solve semi-infinite variational inequalities. The proposed algorithm is tested on both linear and nonlinear problems and shown to be efficient.

1. Introduction. Let R^n be the n-dimensional Euclidean space. Given a mapping F from R^n into itself and a nonempty subset X of R^n, the finite dimensional variational inequality problem VI(X, F) is defined as follows.

VI(X, F): Find a vector x* ∈ X such that F(x*)^T (x − x*) ≥ 0 for all x ∈ X.

Since originating in [9], the theory, algorithms and applications of finite dimensional variational inequalities have been well studied for over four decades. Interested readers can refer to [5] for a comprehensive study. However, most of the algorithms for solving VI(X, F) in the literature work only for special cases, with X possessing a certain geometric structure (such as a compact polyhedral set) or being defined by a finite number of equality or inequality constraints.

In this paper, we focus on the semi-infinite variational inequality problem SIVI(X, F) as defined by Fang et al. [8], in which the set X is given by

X = {x ∈ R^n | g(x, t) ≥ 0 for all t ∈ T},   (1)

where T is a nonempty compact subset of R^l. Note that T may contain infinitely many elements. For any t ∈ T, we assume that g(x, t) : R^n × R^l → R^p is a continuously differentiable and concave function of x. Therefore, X is a convex set in R^n. We further assume that X is nonempty and bounded, which implies compactness.

The solvability of finite dimensional variational inequalities, and consequently of semi-infinite variational inequalities, is guaranteed by the following existence result of Hartman and Stampacchia [9].

2000 Mathematics Subject Classification. 90C34, 57R12, 49M25.
Key words and phrases. Variational inequalities, semi-infinite, smoothing.


Theorem 1.1. Let X be a compact convex set in R^n and F(x) a continuous map of X into R^n. Then there exists a solution to the variational inequality problem VI(X, F).

Proof. See Hartman and Stampacchia [9].

When the set X is defined by infinitely many equalities and inequalities, finding the solution of a finite dimensional variational inequality becomes more challenging, because general convex programming methods cannot handle infinitely many constraints at the same time. The work of Fang et al. [8] proposes an analytic center cutting plane method to solve SIVI(X, F); an ε-optimal solution is obtained under certain conditions. In a more recent work, Fang et al. [7] consider the linear semi-infinite variational inequality problem and propose a discretization method followed by an analytic center based inexact cutting plane method. In both studies, the proposed algorithms rely on a nonempty relative interior assumption on X.

In this paper, we consider SIVI(X, F) without the nonempty relative interior assumption on X. We propose to solve a sequence of variational inequality problems VI(X_i, F), with each X_i defined by a finite number of constraints, to approximate the solution of SIVI(X, F). For each VI(X_i, F), its equivalent KKT formulation, which is nondifferentiable, is smoothed and solved by Newton's method. A new smoothing function approximating max{0, x} is designed to serve this purpose.

This paper is organized as follows. The discretization approach is discussed in Section 2. Section 3 presents the smoothing method, followed by the algorithm and its convergence proof in Section 4. Finally, numerical examples and computational results are presented in Section 5.

2. Discretization. In order to solve SIVI(X, F), we use a discretization approach to approximate the feasible set X = {x ∈ R^n | g(x, t) ≥ 0 for all t ∈ T} of our semi-infinite variational inequality problem. The cardinality of the set T is denoted by |T|, and we assume |T| = ∞.

We can construct a nested sequence {T_i} of finite subsets of T with the property that T_i ⊂ T_{i+1} for each i. Since T is a compact subset of R^l, the sets T_i can be chosen to satisfy the following assumption.

Assumption 2.1. Let ∆ : N → R be a positive-valued, strictly monotone decreasing function such that lim_{n→∞} ∆(n) = 0. Then for each n ∈ N and each t ∈ T, there exist n_0 ∈ N and t′ ∈ T_{n_0} such that ‖t − t′‖ ≤ ∆(n).
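To make Assumption 2.1 concrete, the following sketch (ours, not from the paper; the helper names are illustrative) builds nested dyadic grids on T = [0, 1] together with a mesh function ∆ that is positive, strictly decreasing, and tends to zero:

```python
import numpy as np

def dyadic_grid(i):
    """T_i = {k / 2**i : k = 0, ..., 2**i}; dyadic grids of [0, 1] are nested."""
    return np.linspace(0.0, 1.0, 2**i + 1)

def mesh(i):
    """Delta(i): every t in [0, 1] lies within this distance of some point of T_i."""
    return 0.5 / 2**i  # half of the grid spacing 1 / 2**i

for i in range(1, 5):
    Ti, Tnext = dyadic_grid(i), dyadic_grid(i + 1)
    # T_i is a subset of T_{i+1} (dyadic points are exact in floating point),
    # and mesh(i) strictly decreases to zero, as Assumption 2.1 requires.
    assert set(Ti).issubset(set(Tnext)) and mesh(i + 1) < mesh(i)
    print(i, len(Ti), mesh(i))
```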

We define the finitely constrained set X_i as

X_i = {x ∈ R^n | g(x, t) ≥ 0 for all t ∈ T_i}.   (2)

We further make the assumption that X_1 is bounded. Then each X_i is a compact set. Since X ⊂ X_i for any i ∈ N, the nonemptiness assumption on X implies that each X_i is a nonempty set defined by |T_i| = N constraints.

We consider the following variational inequality problem VI(X_i, F) defined on the set X_i:

VI(X_i, F): find x^i ∈ X_i such that F(x^i)^T (x − x^i) ≥ 0 for all x ∈ X_i.   (3)

Since T_i ⊂ T_{i+1}, we have X ⊂ X_{i+1} ⊂ X_i. Given that F is continuous and X_i is compact and convex, the solution to (3) is guaranteed to exist.

Let x^i be a solution of VI(X_i, F). The following result guarantees the existence of a subsequence of {x^i} converging to the solution of SIVI(X, F).

Theorem 2.1. The sequence {x^i} has at least one accumulation point that solves SIVI(X, F).

Proof. Since {x^i} ⊂ X_1, a compact set, there exists a subsequence {x^{k_i}} such that

lim_{i→∞} x^{k_i} = x̂.

We prove in the following that x̂ solves SIVI(X, F), i.e., F(x̂)^T (x − x̂) ≥ 0 for all x ∈ X. In fact, if x̂ is not a solution of SIVI(X, F), then there exists at least one x̄ ∈ X such that

F(x̂)^T (x̄ − x̂) < 0.

Since F is continuous, there exists k̄ ∈ {k_i} such that

F(x^{k̄})^T (x̄ − x^{k̄}) < 0.

However, since x̄ ∈ X ⊂ X_{k̄} and x^{k̄} is a solution of VI(X_{k̄}, F), we have

F(x^{k̄})^T (x̄ − x^{k̄}) ≥ 0,

which is a contradiction. Thus, F(x̂)^T (x − x̂) ≥ 0 for all x ∈ X.

The two basic approaches for solving finite dimensional variational inequality problems are solving the KKT conditions of VI(X, F) and direct methods. The basic idea of the direct methods is to find a sequence {x^k} in X such that each x^{k+1} solves VI(X, F^k), where F^k(x) is some approximation of F(x). Newton-based iterative algorithms built on this approximation can be found in [12]. Another direct method is the use of merit functions. Although the use of merit functions is theoretically sound, severe drawbacks arise in their evaluation [5]. As long as the function F is defined everywhere, methods based on the KKT conditions offer a convenient approach, mainly because the KKT conditions can easily be reformulated as a mixed complementarity problem. This allows the implementation of equation-based, interior point and smoothing-based solution algorithms, which usually possess sharp convergence results. Among the several algorithms for solving the KKT equations are the quasi-Newton method of Qi and Jiang [14], the B-differentiable Newton method of Pang [13] and the trust region method of Yang et al. [17]. The smoothing methods and their fast convergence results can be found in the work of Chen, Qi and Sun [2] and in [5]. In this paper, our solution approach to VI(X, F) is based on the KKT conditions with the min function. By smoothing the min function, we reformulate VI(X, F) as a smooth system of equations. Thus, the reformulated smoothed problem possesses good potential to avoid the failure of constraint qualifications, allowing the direct

application of existing nonlinear programming algorithms. The smoothing approach has been shown to have superior performance for solving mixed complementarity problems by the numerical results of Billups, Dirkse and Ferris [1].

3. Smoothing Approach. In this section, we first introduce an equivalent mixed nonlinear complementarity formulation of the approximation problems VI(X_i, F). We then present a procedure for implementing a new smoothing approach. The following theorem represents the KKT system of VI(X_i, F) as in [5].

Theorem 3.1. If g is continuously differentiable and satisfies the linear independence constraint qualification, then solving the approximation problem VI(X_i, F) is equivalent to solving its Lagrangian (KKT) system, in the sense of the following two statements.

(i) If x* solves VI(X_i, F) and the constraint qualification holds for the set X_i at the point x*, then there exists a vector λ* ∈ R^m, m = p|T_i|, such that

F(x*) − Σ_{j=1}^m λ_j* ∇g_j(x*, t) = 0,
λ_j* g_j(x*, t) = 0,               j = 1, . . . , m,     (4)
λ_j* ≥ 0,  g_j(x*, t) ≥ 0,         j = 1, . . . , m.

(ii) Conversely, if g(x, t) is concave and (x*, λ*) satisfies (4), then x* solves VI(X_i, F).

Proof. See [5], p. 19.

Thus, under our assumptions and by Theorem 3.1, the KKT conditions in (4) and the problem VI(X_i, F) are equivalent.

The Lagrangian function L : R^n × R^m → R^n of VI(X_i, F) in vector format can be written as

L(x, λ) ≡ F(x) − ∇g(x, t)^T λ.

Note that Equation (4) is a mixed nonlinear complementarity problem (MiCP) and can be reformulated as a system of nonsmooth equations. For the nonsmooth reformulations of complementarity problems, there are two main solution techniques. The first and most common one is to use the Fischer-Burmeister function ψ(a, b) ≡ √(a² + b²) − (a + b), as studied in [14, 4]. The other is to use smoothing functions for the min-reformulation. The min-reformulation of complementarity problems allows using a locally fast algorithm that requires solving a system of linear equations of reduced dimension at each iteration. Moreover, the convergence of such algorithms can be established under weaker assumptions than the ones used in the Fischer-Burmeister reformulation. Also, in the case of linear complementarity, finite convergence is attainable with the min-reformulation but not with the Fischer-Burmeister reformulation [5].

Hence, mainly for the reduced dimensionality and the better convergence results, we implement the min-reformulation of the MiCP, first rewriting the KKT conditions as follows:

L(x, λ) = 0,
g(x, t) − y = 0,     (5)
λ ≥ 0,  y ≥ 0,  λ_i y_i = 0.

The nonnegativity and orthogonality conditions in the KKT reformulation (5) are equivalent to requiring the componentwise minimum of y and λ to equal zero. Therefore, we obtain the following nonsmooth, nonlinear operator Φ_min(x, λ, y) representing the KKT conditions of VI(X_i, F):

Φ_min(x, λ, y) ≡ [ L(x, λ) ; g(x, t) − y ; min(λ, y) ],   (x, λ, y) ∈ R^{n+m+m}.   (6)
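As a small illustration (our own sketch, not the authors' code), the residual (6) can be evaluated once F, the stacked constraint function g and its Jacobian are supplied; a root of this residual is a KKT point of VI(X_i, F):

```python
import numpy as np

def phi_min(x, lam, y, F, g, jac_g):
    """Nonsmooth KKT residual (6) for VI(X_i, F), X_i = {x : g(x) >= 0}.

    F     : R^n -> R^n, the VI mapping
    g     : R^n -> R^m, the m = p|T_i| constraints stacked over t in T_i
    jac_g : R^n -> R^{m x n}, the Jacobian of g
    """
    L = F(x) - jac_g(x).T @ lam     # Lagrangian condition L(x, lam) = 0
    feas = g(x) - y                 # slack definition g(x, t) - y = 0
    comp = np.minimum(lam, y)       # complementarity min(lam, y) = 0
    return np.concatenate([L, feas, comp])
```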

Note that Φ_min is semismooth and its limiting Jacobian has an exact expression [5]. A semismooth Newton method might be suitable for solving the above system of equations; however, the calculation and the choice of generalized Jacobians bring some drawbacks. In general, the difficulty due to the non-differentiability of the min operator in (6) can be overcome by smoothing methods.

A class of smoothing functions is available for the scalar plus function b_+ ≡ max(b, 0), b ∈ R. Since min(a, b) = b − (b − a)_+, any smoothing function p for the plus function defines a smoothing function for the min function. Several smoothing functions, including the Fischer-Burmeister, Chen-Harker-Kanzow-Smale (CHKS), log-exponential, Pinar-Zenios, and Zang functions, have been developed and used for the max operator. Consider the Pinar-Zenios function with uniform kernel on the interval [0, 1],

p_ε(x) = 0             if x < 0,
p_ε(x) = x² / (2ε)     if 0 ≤ x ≤ ε,
p_ε(x) = x − ε/2       if x > ε,

and the Zang function with uniform kernel on the interval [−1/2, 1/2],

p_ε(x) = 0                       if x < −ε/2,
p_ε(x) = (1/(2ε)) (x + ε/2)²     if |x| ≤ ε/2,
p_ε(x) = x                       if x > ε/2.

Smoothing functions like Pinar-Zenios and Zang, whose kernel functions have finite support, result in a singular Jacobian that makes the implementation of Newton-based solution algorithms unsuitable. On the contrary, smoothing functions with infinite kernel support result in a nonsingular Jacobian and thus allow the implementation of Newton-based algorithms with fast convergence rates. For this reason, the Fischer-Burmeister, CHKS and log-exponential smoothing functions with infinite support are the most commonly used ones. The Chen-Harker-Kanzow-Smale (CHKS) function is defined as

p_ε(x) = (√(4ε² + x²) + x) / 2,

and the log-exponential function is given as

p_ε(x) = x + ε log(1 + e^{−x/ε}).


Figure 1. The plus function max(0, x) and its smooth approximation p_ε(x) with ε = 10^{−18}.
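For comparison (a sketch based on the formulas above, not code from the paper), the CHKS and log-exponential smoothings can be evaluated directly, and both approach max(0, x) as ε decreases:

```python
import numpy as np

def chks(x, eps):
    """Chen-Harker-Kanzow-Smale smoothing of the plus function."""
    return (np.sqrt(4.0 * eps**2 + x**2) + x) / 2.0

def log_exp(x, eps):
    """Log-exponential smoothing, rewritten as max(x, 0) + eps*log(1 + exp(-|x|/eps))
    to avoid overflow for large negative x / eps."""
    return np.maximum(x, 0.0) + eps * np.log1p(np.exp(-np.abs(x) / eps))

x = np.linspace(-1.0, 1.0, 401)
for eps in (1e-1, 1e-2, 1e-3):
    print(eps,
          np.max(np.abs(chks(x, eps) - np.maximum(x, 0.0))),
          np.max(np.abs(log_exp(x, eps) - np.maximum(x, 0.0))))
```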

Another finite support smoothing function is the Wu-Sun smoothing function [16], which utilizes an additional smoothing parameter q given as a function of the Lagrange multipliers of the corresponding optimization problem:

p_{ε,q}(x) = 0                          if x < −ε/q,
p_{ε,q}(x) = qx²/(2ε) + x + ε/(2q)      if −ε/q ≤ x ≤ 0,     (7)
p_{ε,q}(x) = x + ε/(2q)                 if x > 0.

In order to guarantee the nonsingularity property, we perturb the finite support smoothing function in (7) and use the following function as the smoothing function throughout this paper:

p_ε(x) = εx + ε²                              if x ≤ −ε,
p_ε(x) = (1/(2ε) − 1)x² + (1 − ε)x + ε/2      if −ε < x < 0,     (8)
p_ε(x) = (1 − ε)x + ε/2                       if x ≥ 0.
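A direct transcription of (8) (our own verification sketch) confirms that the three pieces match in value and slope at x = −ε and x = 0, and that p_ε(x) → max(0, x) as ε → 0:

```python
import numpy as np

def p(x, eps):
    """The perturbed smoothing function (8) for the plus function max(0, x)."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= -eps,
                    eps * x + eps**2,
                    np.where(x < 0.0,
                             (0.5 / eps - 1.0) * x**2 + (1.0 - eps) * x + 0.5 * eps,
                             (1.0 - eps) * x + 0.5 * eps))

for eps in (1e-1, 1e-2, 1e-3):
    x = np.linspace(-1.0, 1.0, 2001)
    # the uniform gap to the plus function is O(eps)
    print(eps, np.max(np.abs(p(x, eps) - np.maximum(x, 0.0))))
```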

With p_ε(x) being the smoothing function of the plus function, as illustrated in Figure 1, we can easily obtain a smooth approximation of the min formulation in (6). Specifically, for given ε > 0, define

Φ_ε(z) ≡ Φ_ε(x, λ, y) ≡ [ L(x, λ) ; g(x, t) − y ; λ − p_ε(λ − y) ],   ∀z = (x, λ, y) ∈ R^{n+m+m}.   (9)

Theorem 3.2. If F : R^n → R^n is a continuously differentiable mapping with ∇F(x) positive definite, then the Jacobian of Φ_ε(z), denoted JΦ_ε(z), is nonsingular for any ε > 0 and z = (x, λ, y).

Proof. Since the function p_ε is continuously differentiable, the operator Φ_ε is also continuously differentiable and its Jacobian is given by

JΦ_ε(z) = [ ∇L         −∇g(x, t)^T    0
            ∇g(x, t)    0            −I
            0           D_λ           D_y ],

where

∇L = ∇F(x) − Σ_{i=1}^m λ_i ∇²g_i(x),
D_λ := diag( ∂/∂λ_1 (λ_1 − p_ε(λ_1 − y_1)), . . . , ∂/∂λ_m (λ_m − p_ε(λ_m − y_m)) ),
D_y := diag( ∂/∂y_1 (λ_1 − p_ε(λ_1 − y_1)), . . . , ∂/∂y_m (λ_m − p_ε(λ_m − y_m)) ).

For a given ε > 0, the square matrix D_λ is nonsingular and also positive definite, since its diagonal elements are given by

∂/∂λ_i (λ_i − p_ε(λ_i − y_i)) = 1 − ε                          if λ_i − y_i ≤ −ε,
∂/∂λ_i (λ_i − p_ε(λ_i − y_i)) = −(1/ε − 2)(λ_i − y_i) + ε      if −ε < λ_i − y_i < 0,
∂/∂λ_i (λ_i − p_ε(λ_i − y_i)) = ε                              if λ_i − y_i ≥ 0.

Similarly, the square matrix D_y is nonsingular since all of its diagonal elements are nonzero (strictly positive for ε > 0) by the following formula:

∂/∂y_i (λ_i − p_ε(λ_i − y_i)) = ε                              if λ_i − y_i ≤ −ε,
∂/∂y_i (λ_i − p_ε(λ_i − y_i)) = (1/ε − 2)(λ_i − y_i) + 1 − ε   if −ε < λ_i − y_i < 0,
∂/∂y_i (λ_i − p_ε(λ_i − y_i)) = 1 − ε                          if λ_i − y_i ≥ 0.
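In code (our sketch, mirroring the two piecewise formulas above), the diagonals of D_λ and D_y follow from the derivative of (8); for 0 < ε < 1/2 every diagonal entry lies in [ε, 1 − ε], so both matrices are positive definite:

```python
import numpy as np

def dp(x, eps):
    """Derivative of the smoothing function (8)."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= -eps, eps,
                    np.where(x < 0.0, (1.0 / eps - 2.0) * x + (1.0 - eps),
                             1.0 - eps))

def D_lambda(lam, y, eps):
    # d/d(lam_i) of lam_i - p_eps(lam_i - y_i) = 1 - p'_eps(lam_i - y_i)
    return np.diag(1.0 - dp(lam - y, eps))

def D_y(lam, y, eps):
    # d/d(y_i) of lam_i - p_eps(lam_i - y_i) = p'_eps(lam_i - y_i)
    return np.diag(dp(lam - y, eps))
```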

The nonsingularity of the matrix JΦ_ε(z) will be shown by proving that the only solution to the system JΦ_ε(z)v = 0 is the trivial solution. Let v = (v^(1), v^(2), v^(3)) be a vector in R^n × R^m × R^m; then we have

∇L v^(1) − ∇g(x, t)^T v^(2) = 0,     (10)
∇g(x, t) v^(1) − v^(3) = 0,          (11)
v^(2) + D_λ^{−1} D_y v^(3) = 0.      (12)

By substituting v^(3) = ∇g(x, t) v^(1) and v^(2) = −D_λ^{−1} D_y v^(3) into (10), we obtain

[ ∇L + ∇g(x, t)^T D_λ^{−1} D_y ∇g(x, t) ] v^(1) = 0.

By the concavity of g(x, t) and the positive definiteness of ∇F(x), ∇L is positive definite. Since D_λ and D_y are positive definite, the matrix ∇L + ∇g(x, t)^T D_λ^{−1} D_y ∇g(x, t) is positive definite, which forces v^(1) = 0; then (11) and (12) yield v^(3) = 0 and v^(2) = 0. Thus the matrix JΦ_ε(z) is nonsingular.

Theorem 3.3. The Jacobian JΦ_ε(z) is Lipschitz continuous near z*.

Proof. By the definition of Lipschitz continuity, we need to show that

‖JΦ_ε(z_1) − JΦ_ε(z_2)‖ ≤ γ ‖z_1 − z_2‖

holds for a constant γ and all z_1, z_2 sufficiently close to z*. This is true when all elements of the Jacobian are locally Lipschitz continuous.

4. Algorithm. In this section, we propose an iterative algorithm to solve the SIVI(X,F ).

Step 0. Initialize integer discretization parameters c, n_0 > 0 and stopping parameter δ_0 > 0. Set i = 1.

Step 1. Set the discretization parameter n_i = n_{i−1} + c. For T_i = {t_1, . . . , t_{n_i}} ⊂ T, define X_i = {x ∈ R^n | g(x, t) ≥ 0 for all t ∈ T_i} and VI(X_i, F).

Step 2. Solve VI(X_i, F), i.e., find x^i ∈ X_i such that F(x^i)^T (x − x^i) ≥ 0 for all x ∈ X_i.

Step 2.0. Initialize the smoothing parameter ε_0 > 0. Set k = 0.

Step 2.1. Define the equivalent smoothed KKT conditions for VI(X_i, F) as

Φ_{n_i,ε_k}(z) ≡ Φ_{n_i,ε_k}(x, λ, y) ≡ [ L(x, λ) ; g(x, t) − y ; λ − p_{ε_k}(λ − y) ].

Step 2.2. Solve Φ_{n_i,ε_k}(z) = 0. Let z_k^i = (x_k^i, λ_k^i, y_k^i) be the solution to this system.

Step 2.3. If F(x_k^i)^T (x − x_k^i) ≥ 0 for all x ∈ X_i, then set x^i = x_k^i and go to Step 3. Otherwise, update the smoothing parameter (i.e., decrease ε_k), set k = k + 1 and go to Step 2.1.

Step 3. If ∆(n_i) = max_{t∈T} min_{t′∈T_i} ‖t − t′‖ ≤ δ_0, then stop. Otherwise, set i = i + 1 and go to Step 1.
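The sketch below (a self-contained toy illustration of Steps 2.0-2.3, not the authors' Matlab implementation) applies Newton's method to the smoothed system Φ_{n_i,ε_k}(z) = 0 for the simple instance g(x, t) = x − t, F(x) = x − 0.5 on T = [0, 1], whose solution is x* = 1. A finite-difference Jacobian is used for brevity, where the analytic Jacobian of Section 3 would normally be preferred:

```python
import numpy as np

def p(x, eps):  # smoothing function (8)
    x = np.asarray(x, dtype=float)
    return np.where(x <= -eps, eps * x + eps**2,
                    np.where(x < 0.0,
                             (0.5 / eps - 1.0) * x**2 + (1.0 - eps) * x + 0.5 * eps,
                             (1.0 - eps) * x + 0.5 * eps))

def solve_smoothed_kkt(F, grid, eps_list, x0):
    """Newton's method on Phi_eps(z) = 0 for the discretized VI with
    X_i = {x in R : x - t >= 0 for all t in grid} (a toy instance)."""
    N = len(grid)
    z = np.concatenate([[x0], np.ones(N), np.ones(N)])   # z = (x, lam, y)

    def Phi(z, eps):
        x, lam, y = z[0], z[1:1 + N], z[1 + N:]
        L = F(x) - lam.sum()              # grad_x g(x, t) = 1 for every t
        feas = (x - grid) - y             # g(x, t) - y = 0
        comp = lam - p(lam - y, eps)      # smoothed complementarity
        return np.concatenate([[L], feas, comp])

    for eps in eps_list:                  # Step 2.3: drive eps_k downward
        for _ in range(50):               # Step 2.2: Newton iterations
            r = Phi(z, eps)
            if np.linalg.norm(r) < 1e-12:
                break
            J, h = np.empty((z.size, z.size)), 1e-7
            for j in range(z.size):       # forward-difference Jacobian
                e = np.zeros(z.size)
                e[j] = h
                J[:, j] = (Phi(z + e, eps) - r) / h
            z -= np.linalg.solve(J, r)
    return z[0]

# F(x) = x - 0.5 on X = {x : x >= t for all t in [0, 1]}: the solution is x* = 1.
print(solve_smoothed_kkt(lambda x: x - 0.5, np.linspace(0.0, 1.0, 21),
                         [1e-2, 1e-4, 1e-6], x0=2.0))
```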

The overall convergence rate of this finite termination algorithm depends on the discretization process and on the convergence rate of the solution algorithm used in Step 2.2. In Step 2.2, the standard Newton method is used, since the Jacobian of the given nonlinear system is nonsingular. Given the nonsingularity and Lipschitz continuity of the Jacobian, we have the following classic convergence theorem [10].

Theorem 4.1. Consider a function H : R^n → R^n and the system H(z*) = 0. Assume JH(z*) is Lipschitz continuous and nonsingular. If z_0 is sufficiently close to z*, then the Newton sequence exists (i.e., JH(z_n) is nonsingular for all n ≥ 0), converges to z*, and there is K > 0 such that

‖z_{n+1} − z*‖ ≤ K ‖z_n − z*‖²

for n sufficiently large.

In the next two theorems, we show that the sequence of solutions to the smoothed problems, obtained in Step 2, converges to the solution of VI(X_i, F).

Theorem 4.2. The sequence of points z_i^k = (x_i^k, λ_i^k, y_i^k) generated by the algorithm has a convergent subsequence.

Proof. To simplify notation, we assume that the convergent subsequence is the sequence in consideration itself. In order for the sequence {z_i^k} to be convergent, all its components should be convergent. Since x_i^k is an element of the compact set X_i, its convergence x_i^k → x follows. Similarly, since y_i^k = g(x_i^k, t) and g is a continuous function defined on a compact set, y_i^k is convergent. To conclude the proof, we only need to show that the sequence {λ_i^k} is convergent.

If {λ_i^k} is bounded, then the theorem holds. Next assume {λ_i^k} is unbounded. Since z_i^k solves Φ_{n_i,ε_k}(z) = 0, for every k and t ∈ T_i we have

F(x_i^k) − ∇g(x_i^k, t)^T λ_i^k = 0.   (13)

For the unbounded sequence {λ_i^k}, dividing (13) by ‖λ_i^k‖ gives

lim_{k→∞} [ F(x_i^k)/‖λ_i^k‖ − ∇g(x_i^k, t)^T λ_i^k/‖λ_i^k‖ ] = 0.   (14)

Since the normalized vector λ_i^k/‖λ_i^k‖ lies in the unit ball, which is closed and bounded, there exists a convergent subsequence, i.e.,

λ_i^k/‖λ_i^k‖ → λ̄ ≠ 0,  λ̄ ∈ R^m.   (15)

Thus, by (14) and (15) we obtain

∇g(x, t)^T λ̄ = 0.   (16)

By complementarity, λ̄_j = 0 for the inactive constraints; since λ̄ ≠ 0, some components corresponding to active constraints are nonzero, so (16) contradicts the linear independence assumption on the gradients of the active constraints g_j(x, t), j ∈ I(x, t) := {j | g_j(x, t) = 0}. Thus, the sequence {λ_i^k} cannot be unbounded. This completes the proof.

Theorem 4.3. For a given i, the sequence of solutions {z_i^k} generated by the algorithm for the smoothed KKT system Φ_{n_i,ε_k}(z_i^k) = 0 converges to the solution z of the equivalent KKT system Φ(z) = 0 of VI(X_i, F).

Proof. We need to show that as the point z_i^k converges to the point z, the function value Φ_{n_i,ε_k}(z_i^k) converges to Φ(z), i.e.,

lim_{k→∞} ‖Φ_{n_i,ε_k}(z_i^k) − Φ(z)‖ = 0.   (17)

Notice that

‖Φ_{n_i,ε_k}(z_i^k) − Φ(z)‖ = ‖Φ_{n_i,ε_k}(z_i^k) − Φ_{n_i,ε_k}(z) + Φ_{n_i,ε_k}(z) − Φ(z)‖
                            ≤ ‖Φ_{n_i,ε_k}(z_i^k) − Φ_{n_i,ε_k}(z)‖ + ‖Φ_{n_i,ε_k}(z) − Φ(z)‖.

Since Φ_{n_i,ε_k} is a continuous function and z_i^k → z, as proved in the previous theorem, we have

lim_{k→∞} ‖Φ_{n_i,ε_k}(z_i^k) − Φ_{n_i,ε_k}(z)‖ = 0.

For a given point z, the convergence of Φ_{n_i,ε_k}(z) to Φ(z) as k → ∞, i.e.,

lim_{k→∞} ‖Φ_{n_i,ε_k}(z) − Φ(z)‖ = 0,

follows because

lim_{ε_k→0} p_{ε_k}(λ − y) = max{0, λ − y}.

This establishes the desired result in equation (17).

Theorem 4.4. As n_i → ∞ and ε_k → 0, the sequence of solutions {x_i^k} to the system Φ_{n_i,ε_k}(z_i^k) = 0 converges to x*, which is a solution of SIVI(X, F).

Proof. The result can easily be established by following Theorem 4.3, Theorem 3.1, and Theorem 2.1.

5. Numerical Results. In this section, we apply our solution approach to problems commonly encountered in the literature, including those in [7, 6, 11].

Problem 1. n = 7, T = [0, 1], and

X = {x ∈ R^n | Σ_{j=1}^n t^{j−1} x_j ≤ Σ_{l=1}^4 t^{2l} + 1, t ∈ T, and 0 ≤ x_j ≤ 1, j = 1, . . . , n},
F = (F_1, . . . , F_7) with F_j = x_j − 1/√(x_j), j = 1, . . . , n.

Problem 2. n = 7, T = [0, 1], and

X = {x ∈ R^n | Σ_{j=1}^n t^{j−1} x_j ≤ 4t^5 + 1, t ∈ T, and 0 ≤ x_j ≤ 1, j = 1, . . . , n},
F = (F_1, . . . , F_7) with F_j = 1 + 3x_j − 1/x_j², j = 1, . . . , n.

Problem 3. n = 7, T = [0, 1], and

X = {x ∈ R^n | Σ_{j=1}^n t^{j−1} x_j ≤ 3t^5 + 2t² + 1, t ∈ T, and 0 ≤ x_j ≤ 1, j = 1, . . . , n},
F = (F_1, . . . , F_7) with F_j = √(x_j) − 1/x_j², j = 1, . . . , n.
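For concreteness (our own transcription of the data above, not code from the paper), Problem 1's mapping F and its stacked discretized constraints can be written as follows; Problems 2 and 3 differ only in F and the right-hand side polynomial:

```python
import numpy as np

def F(x):
    """Problem 1: F_j(x) = x_j - 1 / sqrt(x_j)."""
    return x - 1.0 / np.sqrt(x)

def g(x, grid):
    """Constraints of Problem 1 stacked over a finite grid T_i in [0, 1]:
    sum_j t^(j-1) x_j <= t^2 + t^4 + t^6 + t^8 + 1, and 0 <= x_j <= 1."""
    V = np.vander(grid, x.size, increasing=True)        # V[k, j] = t_k ** j
    rhs = sum(grid ** (2 * l) for l in range(1, 5)) + 1.0
    return np.concatenate([rhs - V @ x, x, 1.0 - x])    # feasibility: all >= 0

grid = np.linspace(0.0, 1.0, 10)
x = np.full(7, 0.5)
print((g(x, grid) >= 0).all())   # x = (0.5, ..., 0.5) is feasible
```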

Problem 4. n = 50, T = [0, 1], and

X = {x ∈ R^n | Σ_{j=1}^n t^{j−1} x_j ≥ sin(t), t ∈ T, and x_j ≥ 0, j = 1, . . . , n},
F = (F_1, . . . , F_n) with F_j = 1/j, j = 1, . . . , n.

Problem 5. n = 50, T = [0, 1], and

X = {x ∈ R^n | Σ_{j=1}^n t^{j−1} x_j ≥ e^t, t ∈ T, and x_j ≥ 0, j = 1, . . . , n},
F = (F_1, . . . , F_n) with F_j = 1/j, j = 1, . . . , n.

Problem 6. n = 50, T = [0, 1], and

X = {x ∈ R^n | Σ_{j=1}^n t^{j−1} x_j ≥ 1/(2 − t), t ∈ T, and x_j ≥ 0, j = 1, . . . , n},
F = (F_1, . . . , F_n) with F_j = 1/j, j = 1, . . . , n.

Problem 7. VI(∇f, X) formulation of the following nonlinear optimization problem [15]:

min f = (x_1 − 2x_2 + 5x_2² − x_2³ − 13)² + (x_1 − 14x_2 + x_2² + x_2³ − 29)²
s.t. −x_1² − 2x_2 t² − e^{x_1+x_2} + e^t ≥ 0,  t ∈ [0, 1].

Problem 8. VI(∇f, X) formulation of the following nonlinear optimization problem [3]:

min f = c² e^{x_1} + e^{x_2}
s.t. e^{x_1+x_2} − t ≥ 0,  t ∈ [0, 1].

Problem 9. VI(∇f, X) formulation of the following nonlinear optimization problem [3]:

min f = (1/3)x_1² + x_2² + (1/2)x_1
s.t. (1 − x_1² t²)² − x_1 t² − x_2² + x_2 ≤ 0,  t ∈ [0, 1].

By implementing the proposed algorithm in Matlab 6.5, we obtained the sets of numerical results reported in the following tables. Note that in each table, N corresponds to the number of discretization points, x* is the solution obtained by our smoothing methodology, x̂ is the reported solution of the corresponding problem in the test library, f* is the objective function value if the original problem is an optimization problem, and f̂ is the best reported objective function value. In each table, we also report the required number of Newton iterations for our algorithm and our starting point x_0.

Also note that, to solve the system of equations in Step 2.2 of the algorithm, we used the Newton method as it is provided by Kelley [10]. Numerical results corresponding to Problems 1 through 9 are presented in the following tables.

Problem 1. x_0 = (0.2, 0.4, 0.5, 0.9, 0.8, 0.9, 0.9), ε* = 1.0E−8,
x̂ = (0.4987, 0.5663, 0.6303, 0.6856, 0.7340, 0.7781, 0.8128).

N    x*                                                                                        Newton iter.   ‖x* − x̂‖
10   (0.49208692, 0.56903430, 0.63712849, 0.69579181, 0.74546035, 0.78705650, 0.82166529)       8            0.02217797
20   (0.51062811, 0.57280822, 0.62980205, 0.68101656, 0.72637533, 0.76611237, 0.80063657)      79            0.02357289
40   (0.49202283, 0.56469563, 0.63066649, 0.68899934, 0.73961394, 0.78293770, 0.81965483)     107            0.01267899
80   (0.49885431, 0.56752794, 0.63006102, 0.68568466, 0.73432506, 0.77633042, 0.81227225)     586            0.00226087
100  (0.49838561, 0.56725780, 0.62999654, 0.68580674, 0.73460208, 0.77672756, 0.81275632)     560            0.00184365

Problem 2. x_0 = (1, 0.4, 0.9, 0.9, 0.8, 1, 1), ε* = 1.0E−10,
x̂ = (0.5082, 0.5360, 0.5560, 0.5700, 0.5797, 0.5859, 0.5901).

N    x*                                                                                        Newton iter.   ‖x* − x̂‖
10   (0.51088017, 0.53642979, 0.55528904, 0.56877417, 0.57820274, 0.58469377, 0.58911555)      21            0.00374727
20   (0.50742190, 0.53680129, 0.55762495, 0.57180717, 0.58121077, 0.58733747, 0.59128402)     134            0.00359228
40   (0.50770281, 0.53620147, 0.55664611, 0.57076761, 0.58027311, 0.58656269, 0.59067783)      76            0.00154790
80   (0.50800968, 0.53604204, 0.55626314, 0.57032151, 0.57985138, 0.58620308, 0.59038960)      51            0.00063972
100  (0.50808312, 0.53602056, 0.55619460, 0.57023799, 0.57977076, 0.58613346, 0.59033325)      62            0.00047171

Problem 3. x_0 = (0.4, 0.5, 0.9, 0.9, 0.8, 0.9, 1), ε* = 1.0E−10,
x̂ = (0.2760, 0.4802, 0.7246, 0.8930, 0.9664, 0.9902, 0.9970).

N    x*                                                                                        Newton iter.   ‖x* − x̂‖
10   (0.27967947, 0.47840552, 0.71094465, 0.87854291, 0.95651950, 0.98534271, 0.99514082)      11            0.02317129
20   (0.27680264, 0.48023127, 0.72205977, 0.89099390, 0.96408602, 0.98888057, 0.99659672)     563            0.00428724
40   (0.27586477, 0.48106977, 0.72613247, 0.89536560, 0.96666893, 0.99006609, 0.99709701)     185            0.00296968
80   (0.27656076, 0.47968773, 0.72284821, 0.89280043, 0.96548942, 0.98962981, 0.99694755)     649            0.00220056
100  (0.27632317, 0.48009722, 0.72393830, 0.89372217, 0.96594783, 0.98981400, 0.99701657)     431            0.00119504

Problem 4. x_0 = (1, 1, 1, . . . , 1), ε* = 1.0E−8, f̂ = 0.47943000.

N    x*                                                                                Newton iter.   f*           |f* − f̂|
10   (0.04012066, 0.87713001, 0.00000042, 0.00000038, 0.00000037, . . . , 0.00000048)    129          0.47868693   0.00074307
20   (0.04051929, 0.87748011, 0.00000042, 0.00000038, 0.00000036, . . . , 0.00000048)    471          0.4792606    0.00016940
40   (0.04060726, 0.87755736, 0.00000042, 0.00000038, 0.00000036, . . . , 0.00000048)   1557          0.4793872    0.00004280
80   (0.08478749, 0.77968629, 0.00000000, 0.00000000, 0.00000000, . . . , 0.00000000)   1831          0.47939789   0.00003211
100  (0.04063040, 0.87757765, 0.00000042, 0.00000038, 0.00000036, . . . , 0.00000048)   2687          0.47942049   0.00000951

Problem 5. x_0 = (1, 1, 1, . . . , 1), ε* = 1.0E−8, f̂ = 1.71832607.

N    x*                                                                                          Newton iter.   f*            |f* − f̂|
10   (1.003394043, 0.952491432, 0.671701177, 0.000000005, 0.000000005, . . . , 0.009675015)       39            1.718282483   0.00004359
20   (1.000000004, 1.000135305, 0.498430137, 0.172096549, 0.035273337, . . . , 0.071628743)      485            1.718282246   0.00004382
40   (1.000002746, 0.999890929, 0.501386519, 0.160106683, 0.054467694, . . . , 0.000000086)       31            1.718281833   0.00004424
80   (1.000000005, 0.999999996, 0.500000084, 0.166665828, 0.041671236, . . . , 0.000000068)       37            1.718281833   0.00004424
100  (1.000000005, 0.999999989, 0.500000254, 0.166664305, 0.041677220, . . . , 0.000000202)       56            1.718281833   0.00004424

Problem 6. x_0 = (1, 1, 1, . . . , 1), ε* = 1.0E−8, f̂ = 0.6931477.

N    x*                                                                                      Newton iter.   f*            |f* − f̂|
10   (0.50241134, 0.21881863, 0.21766635, 0.000000005, 0.00000000, . . . , 0.009780431)      136            0.693143294   0.000004406
20   (0.50000003, 0.24999868, 0.12502939, 0.062196701, 0.03285327, . . . , 0.000122779)      385            0.693147612   0.000000088
40   (0.50000049, 0.24999836, 0.12503788, 0.062128601, 0.03311006, . . . , 0.000103221)       12            0.693147654   0.000000046
80   (0.50000000, 0.25000000, 0.12500013, 0.062498419, 0.03126174, . . . , 0.000000183)       19            0.693147668   0.000000032
100  (0.50000000, 0.25000000, 0.12500010, 0.062498395, 0.03126168, . . . , 0.000000097)       21            0.693147671   0.000000029

Problem 7. x_0 = (1, −1), ε* = 1.0E−10, x̂ = (0.719962, −1.450488), f̂ = 97.158900.

N    x*                           f*          Newton iter.   ‖x* − x̂‖     |f* − f̂|
5    (0.71996141, −1.45048733)    97.158852        9          0.00000089   0.00004778
10   (0.71996141, −1.45048733)    97.158852       10          0.00000089   0.00004778
20   (0.71996141, −1.45048733)    97.158852       13          0.00000089   0.00004778
40   (0.71996141, −1.45048733)    97.158852       16          0.00000089   0.00004778
80   (0.71996141, −1.45048733)    97.158852       21          0.00000089   0.00004778
100  (0.71996141, −1.45048733)    97.158852       23          0.00000089   0.00004778

Problem 8. x_0 = (1, −1), ε* = 1.0E−8, x̂ = (−0.09531018, 0.09531018), f̂ = 2.20000.

N    x*                              f*    Newton iter.   ‖x* − x̂‖      |f* − f̂|
5    (−0.095310183, 0.095310177)     2.2       10          0.000000003   0.000000007
10   (−0.095310183, 0.095310177)     2.2       12          0.000000003   0.000000007
20   (−0.095310183, 0.095310177)     2.2       15          0.000000003   0.000000007
40   (−0.095310183, 0.095310177)     2.2       20          0.000000003   0.000000007
80   (−0.095310183, 0.095310177)     2.2       24          0.000000003   0.000000007
100  (−0.095310183, 0.095310177)     2.2       27          0.000000003   0.000000007

Problem 9. x_0 = (1, 1), ε* = 1.0E−10, x̂ = (−0.750000, −0.618034), f̂ = 0.194466.

N    x*                            f*           Newton iter.   ‖x* − x̂‖     |f* − f̂|
5    (−0.74999983, −0.61803397)    0.19446598       10          0.00000040   0.000000018
10   (−0.74999961, −0.61803397)    0.19446598       10          0.00000040   0.000000018
20   (−0.74999918, −0.61803397)    0.19446598       13          0.00000083   0.000000018
40   (−0.74999832, −0.61803397)    0.19446598       16          0.00000168   0.000000018
80   (−0.74999661, −0.61803397)    0.19446598       87          0.00000339   0.000000018
100  (−0.74999575, −0.61803397)    0.19446598      121          0.00000425   0.000000018

6. Conclusion. In this paper, we propose an alternative approach for solving semi-infinite variational inequality problems without a nonempty relative interior assumption on X. A convergence proof for the algorithm is included. The numerical experiments show that the proposed algorithm is quite efficient in finding solutions of the same quality for both linear and nonlinear problems with only a few discretization points. We also provide a new smoothing function for the max{0, x} operator. Since this approximation function results in a nonsingular Jacobian, its use for various complementarity types of constraints can be further investigated.

REFERENCES

[1] S. C. Billups, S. P. Dirkse and M. C. Ferris, A Comparison of Algorithms for Large Scale Mixed Complementarity Problems, Computational Optimization and Applications, 7 (1997), 3-26.
[2] X. Chen, L. Qi and D. Sun, Global and Superlinear Convergence of the Smoothing Newton Method and its Application to General Box-constrained Variational Inequalities, Mathematics of Computation, 67 (1998), 519-540.
[3] I. D. Coope and G. A. Watson, A Projected Lagrangian Algorithm for Semi-Infinite Programming, Mathematical Programming, 32 (1985), 337-356.
[4] F. Facchinei, H. Jiang and L. Qi, Regularity Properties of a Semismooth Reformulation of Variational Inequalities, SIAM Journal on Optimization, 8 (1998), 850-869.
[5] F. Facchinei and J.-S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer Verlag, New York, (2003).
[6] S.-C. Fang and S.-Y. Wu, An Inexact Approach to Solving Linear Semi-Infinite Programming Problems, Optimization, 28 (1994), 291-299.
[7] S.-C. Fang, S.-Y. Wu and S. I. Birbil, Solving Variational Inequalities Defined on a Domain with Infinitely Many Linear Constraints, Erasmus University ERIM Report Series ERS-2002-70-LIS, (2002).
[8] S.-C. Fang, S.-Y. Wu and J. Sun, An Analytic Center Cutting Plane Method for Solving Semi-Infinite Variational Inequality Problems, Journal of Global Optimization, 28 (2004), 141-152.
[9] P. Hartman and G. Stampacchia, On Some Nonlinear Elliptic Differential Functional Equations, Acta Mathematica, 115 (1966), 271-310.
[10] C. T. Kelley, Solving Nonlinear Equations with Newton's Method, SIAM, (2002).
[11] C.-J. Lin, E. K. Yang and S.-C. Fang, Implementation of an Inexact Approach to Solving Linear Semi-infinite Programming Problems, Journal of Computational and Applied Mathematics, 61 (1995), 87-103.
[12] J.-S. Pang and D. Chan, Iterative Methods for Variational and Complementarity Problems, Mathematical Programming, 24 (1982), 284-313.
[13] J.-S. Pang, Newton's Method for B-differentiable Equations, Mathematics of Operations Research, 15 (1990), 311-341.
[14] L. Qi and H. Y. Jiang, Semismooth Karush-Kuhn-Tucker Equations and Convergence Analysis of Newton and Quasi-Newton Methods for Solving these Equations, Mathematics of Operations Research, 22 (1997), 301-325.
[15] G. A. Watson, Numerical Experiments with Globally Convergent Methods for Semi-Infinite Programming Problems, in Lecture Notes in Economics and Mathematical Systems (eds. A. V. Fiacco and K. O. Kortanek), Springer Verlag, Berlin, 215 (1983), 193-205.
[16] Z. Y. Wu and C. R. Sun, An Unconstrained Convex Programming Approach to Semi-infinite Convex Programming, personal communication, (2004).
[17] Y. F. Yang, D. H. Li and S. Z. Zhou, A Trust Region Method for a Semismooth Reformulation to Variational Inequality Problems, Optimization Methods and Software, 14 (2000), 139-157.

Received August 2004; revised January 2005.

E-mail address: [email protected], [email protected]