arXiv:2010.15768v1 [math.OC] 29 Oct 2020 og hnhn hn.Eal [email protected] Email: China. Shenzhen, Kong, paig i-a rbe si h form: the rei in is and problem 9], [8, min-max ad a minimization as speaking, risk such empirical applications the [4–7], in from mization appear interest They considerable communities. drawn neering have problems Min-max Introduction 1 mi:[email protected]. Email: mi:[email protected]. Email: og hnhn hn.Eal [email protected]. Email: China. Shenzhen, Kong, where ieaue h ovxcnaemnmxpolm where problem, min-max convex-concave the literature, tde 1–9.Hwvr aypatclapiain inv applications practical many However, [12–19]. studied loih o ocne-ocv i-a Problems Min-Max Nonconvex-Concave for Algorithm ∗ † ‡ § hnhnRsac nttt fBgDt,Sho fSinea Science of School Data, Big of Institute Research Shenzhen oriae cec aoaoy eateto S,Univer ISE, of Department Laboratory, Science Coordinated oriae cec aoaoy eateto S,Univer ISE, of Department Laboratory, Science Coordinated hnhnRsac nttt fBgDt,Sho fSinea Science of School Data, Big of Institute Research Shenzhen igeLo mohdGain Descent-Ascent Gradient Smoothed Single-Loop A silto n nuecnegnet ttoayslto.W pro We wit solution. an combined stationary achieve be a can n can to algorithm the of convergence which is case ensure scheme and problem in “smoothing” oscillation this oscillation a introduce solve exhibit we can to paper, approach unfortunately which popular algorithm rob A and (GDA) functions networks. nonconvex of neural a of of maximum pointwise a minimizing fafiiecleto fnnovxfntos oevr h smoot the Moreover, functions. nonconvex of an collection finite a of tblzdGAagrtmt ut-lc ae r rsne.T h b achieve the to To algorithm presented. first are the cases is multi-block to algorithm GDA stabilized h rcia ffiinyo h tblzdGAagrtmo outtra robust on algorithm GDA stabilized the of efficiency practical the X O ocne-ocv i-a rbe rssi aymcielearning machine many in arises problem min-max Nonconvex-concave ⊆ (1 iwiZhang Jiawei /ǫ R 4 n trto opeiyfrgnrlnnovxcnaeproblems. nonconvex-concave general for complexity iteration ) and Y ∈ ∗ R m O (1 r ovxadcoe esand sets closed and convex are ejnXiao Peijun /ǫ O 2 trto opeiyfrmnmzn h onws maximum pointwise the minimizing for complexity iteration ) (1 /ǫ 2 coe 0 2020 30, October o ls fnnovxcnaepolm eillustrate We problem. nonconvex-concave of class a for ) min x ∈ X Abstract max † y ∈ Y 1 f n. ( uy Sun Ruoyu ,y x, iyo liosa raaCapin raa IL. Urbana, Urbana-Champaign, at Illinois of sity iyo liosa raaCapin raa IL. Urbana, Urbana-Champaign, at Illinois of sity ) dEgneig h hns nvriyo Hong of University Chinese The Engineering, nd dEgneig h hns nvriyo Hong of University Chinese The Engineering, nd f , lennovxt,adti stefcsof focus the is this and nonconvexity, olve freetlann 1,1] Concretely 11]. [10, learning nforcement scne in convex is esra erig[–] outopti- robust [1–3], learning versarial ahn erigadohrengi- other and learning machine ‡ f e D loih achieves algorithm GDA hed eta h tblzdGDA stabilized the that ve sasot ucin nthe In . smooth a is ining. s forkolde this knowledge, our of est x rdetdescent-ascent gradient D osaiiethe stabilize to GDA h h-unLuo Zhi-Quan s desra training adversarial ust plctosincluding applications n ocv in concave and novxt.I this In onconvexity. xesoso this of Extensions § y swell- is , (1.1) Algorithm Complexity Simplicity Multi-block [22] (1/ǫ2.5) Triple-loop ✗ [23] O(1/ǫ2.5) Triple-loop ✗ [20] O(1/ǫ3.5) Double-loop ✗ [24] O (1/ǫ4) Single-loop ✔ This paper O(1/ǫ2) Single-loop ✔ O

Table 1: Comparison of the algorithm in this paper with other works in solving problem (1.2). Our algorithm has a better convergence rate among others, and the single-loop and multi-block design make the algorithm suitable for solving large-scale problems efficiently. the current paper. Unlike the convex-concave setting where we can compute the global stationary solution efficiently, to obtain a global optimal solution for the setting where f is nonconvex with respect to x is difficult. In this paper, we consider the nonconvex-concave min-max problem (1.1) where f is nonconvex in x but concave of y, as well as a special case in the following form:

min max F (x)T y, (1.2) x y∈Y where Y = (y , ,y )T m y = 1,y 0 is a probability simplex and { 1 · · · m | i=1 i i ≥ } P F (x) = (f (x),f (x), ,f (x))T 1 2 · · · m is a smooth map from Rn to Rm. Note that (1.2) is equivalent to the problem of minimizing the point-wise maximum of a finite collection of functions:

min max fi(x). (1.3) x 1≤i≤m

If fi(x)= g(x,ξi) is a loss function or a negative utility function at a data point ξi, then problem (1.3) is to find the best parameter of the worst data points. This formulation is frequently used in machine learning and other fields. For example, adversarial training [3,20], fairness training [20] and distribution-agnostic meta-learning [21] can be formulated as (1.3). We will discuss the formulations for these applications in details in Section 2. Recently, various algorithms have been proposed for nonconvex-concave min-max problems [20,22–28]. These algorithms can be classified into three types based on the structure: single-loop, double-loop and triple loop. Here a single-loop algorithm is an iterative algorithm where each iteration step has a closed form update, while a double-loop algorithm uses an iterative algorithm to approximately solve the sub-problem at each iteration. A triple-loop algorithm uses a double- loop algorithm to approximately solve a sub-problem at every iteration. To find an ǫ-stationary solution, double-loop and tripe-loop algorithms have two main drawbacks. First, these existing multi-loop algorithms require at least (1/ǫ2) outer iterations, while the iteration numbers of the O other inner loop(s) also depend on ǫ. Thus, the iteration complexity of the existing multi-loop algorithms is more than (1/ǫ2) for (1.2). Among all the existing algorithms, the best known O iteration complexity is (1/ǫ2.5) from two triple-loop algorithms [22, 23]. Since the best-known O lower bound for solving (1.2) using first-order algorithms is (1/ǫ2), so there is a gap between the O existing upper bounds and the lower bound. Another drawback of multi-loop algorithms is their

2 difficulty in solving problems with multi-block structure, since the acceleration steps used in their inner loops cannot be easily extended to multi-block cases, and a standard double-loop algorithm without acceleration can be very slow. This is unfortunate because the min-max problems with block structure is important for distributed training [24] in machine learning and signal processing. Due to the aforementioned two drawbacks of double-loop and triple-loops algorithms, we focus in this paper on single-loop algorithms in hope to achieve the optimal iteration complexity (1/ǫ2) O for the nonconvex-concave problem (1.2). Notice that the nonconvex-concave applications in the aforementioned studies [20,22–28] can all be formulated as (1.2), although the iteration complexity results derived in these papers are only for general nonconvex-concave problems. In other words, the structure of (1.2) is not used in the theoretical analysis. One natural question to ask is: can we design a single loop algorithm with an iteration complexity lower than (1/ǫ2.5) for O the min-max problem (1.2)? Existing Single-loop algorithms. A simple single-loop algorithm is the so-called Gradient Descent Ascent (GDA) which alternatively performs gradient descent to the minimization problem and gradient ascent to the maximization problem. GDA can generate an ǫ-stationary solution for a nonconvex-strongly-concave problem with iteration complexity (1/ǫ2) [28]. However, GDA will O oscillate with constant stepsizes around the solution if the maximization problem is not strongly concave [19]. So the stepsize should be proportional to ǫ if we want an ǫ-solution. These limita- tions slow down GDA which has an (1/ǫ5) iteration complexity for nonconvex-concave problems. O Another single-loop algorithm [24] requires diminishing step-sizes to guarantee convergence and its complexity is (1/ǫ4). [29] also proposes a single-loop algorithm for min-max problems by perform- O ing GDA to a regularized version of the original min-max problem and the regularization term is diminishing. The iteration complexity bounds given in the references [24,28,29] are worse than the ones from multi-loop algorithms using acceleration in the subproblems. In this paper, we propose a single-loop “smoothed gradient descent-ascent” algorithm with optimal iteration complexity for the nonconvex-concave problem (1.2). Inspired by [30], to fix the oscillation issue of GDA discussed above, we introduce an exponentially weighted sequence zt of the primal iteration sequence xt and include a quadratic proximal term centered at zt to objective function. Then we perform a GDA step to the proximal function instead of the original objective. With this smoothing technique, an (1/ǫ2) iteration complexity can be achieved for problem (1.2) O under mild assumptions. Our contributions are three fold. • Optimal order in convergence rate. We propose a single-loop algorithm Smoothed- GDA for nonconvex-concave problems which finds an ǫ-stationary solution within (1/ǫ2) O iterations for problem (1.2) under mild assumptions. • General convergence results. The Smoothed-GDA algorithm can also be applied to solve general nonconvex-concave problems with an (1/ǫ4) iteration complexity. This com- O plexity is the same as in [24]. However, the current algorithm does not require the compactness of the domain X, which significantly extends the applicability of the algorithm. • Multi-block settings. 
We extend the Smoothed-GDA algorithm to the multi-block set- ting and give the same convergence guarantee as the one-block case. The paper is organized as follows. In Section 2, we describe some applications of nonconvex- concave problem (1.2) or (1.3). The details of the Smoothed-GDA algorithm as well as the main theoretical results are given in Section 3. The proof sketch is given in Section 4. The proofs and the details of the numerical experiments are in the appendix.

3 2 Representative Applications

We give three application examples which are in the min-max form (1.2). 1. Robust learning from multiple distributions. Suppose the data set is from n distribu- tions: D , , D . Each D is a different perturbed version of the underlying true distribution D . 1 · · · n i 0 Robust training is formulated as minimizing the maximum of expected loss over the n distributions as m E min max a∼Di [F (x; a)] = min max yifi(x), (2.1) x∈X i x∈X y∈Y Xi=1 where Y is a probability simplex, F (x; a) represents the loss with model parameter x on a data E sample a. Notice that fi(x) = a∼Di [F (x; a)] is the expected loss under distribution Di. In adversarial learning [3,31,32], Di corresponds to the distribution that is used to generate adversarial examples. In Section 5, we will provide a detailed formulation of adversarial learning on the data set MNIST and apply the Smoothed GDA algorithm to this application. 2. Fair models. In machine learning, it is common that the models may be unfair, i.e. the models might discriminate against individuals based on their membership in some group [33, 34]. For example, an algorithm for predicting a person’s salary might use that person’s protected attributes, such as gender, race, and color. Another example is training a logistic regression model for classification which can be biased against certain categories. To promote fairness, [35] proposes a framework to minimize the maximum loss incurred by the different categories:

min max fi(x), (2.2) x∈X i where x represents the model parameters and fi is the corresponding loss for category i. 3. Distribution-agnostic meta-learning. Meta-learning is a field about learning to learn, i.e. to learn the optimal model properties so that the model performance can be improved. One popular choice of meta-learning problem is called gradient-based Model-Agnostic Meta-Learning (MAML) [36]. The goal of MAML is to learn a good global initialization such that for any new tasks, the model still performs well after one gradient update from the initialization. One limitation of MAML is that it implicitly assumes the tasks come from a particular distri- bution, and optimizes the expected or sample average loss over tasks drawn from this distribution. This limitation might lead to arbitrarily bad worst-case performance and unfairness. To mitigate these difficulties, [21] proposed a distribution-agnostic formulation of MAML:

min max fi(x α fi(x)). (2.3) x∈X i − ∇

Here, fi is the loss function associated with the i-th task, x is the parameter taken from the feasible set X, and α is the stepsize used in the MAML for the gradient update. Notice that each fi is still a function over x, even though we take one gradient step before evaluating the function. This formulation (2.3) finds the initial point that minimizes the objective function after one step of gradient over all possible loss functions. It is shown that solving the distribution-agnostic meta- learning problem improves the worst-case performance over that of the original MAML [21] across the tasks.

4 3 Smoothed GDA Algorithm and Its Convergence

Before we introduce the Smoothed-GDA algorithm, we first define the stationary solution and the ǫ-stationary solution of problem (1.1).

Definition 3.1 Let 1X (x), 1Y (y) be the indicator functions of the sets X and Y respectively. A pair (x,y) is an ǫ-solution set of problem (1.1) if there exists a pair (u, v) such that

u f(x,y)+ ∂1 (x), v f(x,y)+ ∂1 (y), and u , v ǫ, (3.1) ∈∇x X ∈ −∇y Y k k k k≤ where ∂g( ) denotes the sub-gradient of a function g. A pair (x,y) is a stationary solution if · u = 0, v = 0.

Definition 3.2 The projection of a point y onto a set X is defined as P (y) = argmin 1 x X x∈X 2 k − y 2. k 3.1 Smoothed Gradient Descent-Ascent (Smoothed-GDA) A simple algorithm for solving min-max problems is the Gradient Descent Ascent (GDA) algorithm (Algorithm 1), which performs a gradient descent to the min problem and a gradient ascent to the max problem alternatively. It is well-known that with constant step size, GDA can oscillate between T iterates and fail to converge even for a simple bilinear min-max problem: minx∈Rn maxy∈Rn x y. To fix the oscillation issue, we introduce a “smoothing” technique to the primal updates. Note that smoothing is a common technique in traditional optimization such as Moreau-Yosida smoothing [37] and Nesterov’s smoothing [38]. More concretely, we introduce an auxiliary sequence zt and { } define a function K(x, z; y) as p K(x, z; y)= f(x,y)+ x z 2, (3.2) 2k − k where p > 0 is a constant, and we perform gradient descent and gradient ascent alternatively on this function instead of the original function f(x,y). After performing one-step of GDA to the function K(xt, zt; yt), zt is updated by an averaging step. The “Smoothed GDA” algorithm is formally presented in Algorithm 2. Note that our algorithm is different from the one in [29], as [29] uses an regularization term α ( x 2 y 2) and requires this term to diminishing. t k k − k k Algorithm 1 GDA Algorithm 2 Smoothed-GDA 1: Initialize x0,y0; 1: Initialize x0, z0,y0 and 0 < β 1. ≤ 2: Choose c,α > 0; 2: for t = 0, 1, 2,..., do 3: for t = 0, 1, 2,..., do 3: xt+1 = P (xt c K(xt, zt; yt)); X − ∇x 4: xt+1 = P (xt c f(xt,yt)); 4: yt+1 = P (yt + α K(xt+1, zt; yt)); X − ∇x Y ∇y 5: yt+1 = P (yt + α f(xt+1,yt)); 5: zt+1 = zt + β(xt+1 zt), Y ∇y − 6: end for 6: end for Notice that when β = 1, Smoothed-GDA is just the standard GDA. Furthermore, if the variable x has a block structure, i.e., x can be decomposed into N blocks as

x = (xT , ,xT )T , 1 · · · N

5 then Algorithm 2 can be extended to a multi-block version which we call the Smoothed Block Gradient Descent Ascent (Smoothed-BGDA) Algorithm (see Algorithm 3). In the multi-block version, we update the primal variable blocks alternatingly and use the same strategy to update the dual variable and the auxiliary variable as in the single-block version.

Algorithm 3 Smoothed Block Gradient Descent Ascent (Smoothed-BGDA) 1: Initialize x0, z0,y0; 2: for t = 0, 1, 2,..., do 3: for i = 1, 2,...,N do 4: xt+1 = P (xt c K(xt+1,xt+1, ,xt+1,xt, ,xt , zt; yt)); i X i − ∇xi 1 2 · · · i−1 i · · · N 5: end for 6: yt+1 = P (yt + α K(xt+1, zt; yt)); Y ∇y 7: zt+1 = zt + β(xt+1 zt), where 0 < β 1; − ≤ 8: end for

3.2 Iteration Complexity for Nonconvex-Concave Problems In this subsection, we present the iteration complexities of Algorithm 2 and Algorithm 3 for general nonconvex-concave problems (1.1). We first state some basic assumptions. Assumption 3.3 We assume the following. 1. f(x,y) is smooth and the gradients f(x,y), f(x,y) are L-Lipschitz continuous. ∇x ∇y 2. Y is a closed, convex and compact set of Rm. X is a closed and convex set. 3. The function ψ(x) = max f(x,y) is bounded from below by some finite constant f > . y∈Y −∞ Theorem 3.4 Consider solving problem (1.1) by Algorithm 2 (or Algorithm 3). Suppose Assump- tion 3.3 holds, and we choose the algorithm parameters to satisfy p> 3L, c< 1/(p + L) and 1 c2(p L)2 1 (p L)2 α< min , − , β min , − . (3.3) 11L 4L(1 + c(p L))2 ≤ 36 384p(p + L)2  −    Then, the following holds: • (One-block case) For any integer T > 0, if we further let β < 1/√T , then there exists a t 1, 2, , T such that (xt+1,yt+1) is a (T −1/4)-stationary solution. This means we can ∈ { · · · } O obtain an ǫ-stationary solution within (ǫ−4) iterations. O • (Multi-block case) If we replace the condition of α in Algorithm 3 by 1 c2(p L)2 α min , − (3.4) ≤ 11L 4L(1 + c(p + L)N 3/2 + c(p L))2  −  and further require β ǫ2, then we can obtain an ǫ-stationary solution within (ǫ−4) itera- ≤ O tions of Algorithm 3. Remark. The reference [24] derived the same iteration complexity of (ǫ−4) under the additional O compactness assumption on X. This assumption may not be satisfied for some applications where X can the entire space.

6 3.3 Convergence Results for Minimizing the Point-Wise Maximum of Finite Functions Now we state the improved iteration complexity results for the special min-max problem (1.2). We claim that our algorithms (Algorithm 2 and Algorithm 3) can achieve the optimal order of iteration complexity of (ǫ−2) in this case. O For any stationary solution of (1.2) denoted as (x∗,y∗), the following KKT conditions hold:

F (x∗)y∗ = 0, (3.5) ∇m ∗ yi = 1, (3.6) Xi=1 y∗ 0, i [m] (3.7) i ≥ ∀ ∈ µ ν = f (x∗), i [m], (3.8) − i i ∀ ∈ ν 0,ν y∗ = 0, i [m], (3.9) i ≥ i i ∀ ∈ where F (x) denotes the Jacobian matrix of F at x, while µ, ν are the multipliers for the equality ∇ m constraint i=1 yi = 1 and the inequality constraint yi 0 respectively. ∗ ∗ ≥ ∗ At any stationary solution (x ,y ), only the functions fi(x ) for any index i with yi > 0 P N ∗ ∗ contribute to the objective function i=1 yi fi(x ) and they correspond to the worst cases in the robust learning task. In other words, any function f ( ) with y > 0 at (x∗,y∗) contains important P i · i information of the solution. We denote a set (y∗) to represent the set of indices for which y∗ > 0. I+ i We will make a mild assumption on this set.

Assumption 3.5 For any (x∗,y∗) satisfying (3.5), we have ν > 0, i / (y∗). i ∀ ∈I+ Remark. The assumption is called “strict complementarity”, a common assumption in the field of variation inequality [39, 40] which is closely related to the study of min-max problems. This assumption is used in many other optimization papers [6,41–44]. Strict complementarity is gener- ically true (i.e. holds with probability 1) if there is a linear term in the objective function and the data is from a continuous distribution (similar to [30, 44]). Moreover, we will show that we can prove Theorem 3.8 using a weaker regularity assumption rather than the strict complementarity assumption:

Assumption 3.6 For any (x∗,y∗) W ∗, the matrix M(x∗) is of full column rank, where ∈ ∗ M(x )= JT (x∗) 1 . We say that Assumption E.1 is weaker since the strict complem entarity assumption (Assumption 3.5) can imply Assumption E.1 according to Lemma D.7 in the appendix. In the appendix, we will see that Assumption E.1 holds with probability 1 for a robust regression problem with a square loss (see Proposition E.7). We also make the following common “bounded ” assumption.

Assumption 3.7 The set x ψ(x) R is bounded for any R> 0. Here ψ(x) = max f(x,y). { | ≤ } y∈Y Remark. This bounded-level-set assumption is to ensure the iterates would stay bounded. Ac- tually, assuming the iterates zt are bounded will be enough for our proof. The bounded level { }

7 set assumption, a.k.a. coerciveness assumption, is widely used in many papers [45–47]. Bounded- iterates-assumption itself is also common in optimization [42, 48, 49]. In practice, people usually add a regularizer to the objective function to make the level set and the iterates bounded (see [50] for a neural network example).

Theorem 3.8 Consider solving problem 1.2 by Algorithm 2 or Algorithm 3. Suppose that Assump- tion 3.3, 3.5 holds and either Assumption 3.7 holds or assume zt is bounded. Then there exist { } constants β′ and β′′ (independent of ǫ and T ) such that the following holds: 1. (One-block case) If we choose the parameters in Algorithm 2 as in (3.3) and we further let β < β′ , then

(a) Every limit point of (xt,yt) is a solution of (1.2). (b) The iteration complexity of Algorithm 2 to obtain an ǫ-stationary solution is (1/ǫ2). O 2. (Multi-block case) Consider using Algorithm 3 to solve Problem 1.2. If we replace the condition for α in (3.3) by (3.4) and require β satisfying β<ǫ2 and β < β′′, then we have the same results as in the one-block case.

4 Proof Sketch

In this section, we give a proof sketch of the main theorem on the one-block cases; the proof details will be given in the appendix.

4.1 The Potential Function and Basic Estimates To analyze the convergence of the algorithms, we construct a potential function and study its behavior along the iterations. We first give the intuition why our algorithm works. We define the dual function d( ) and the proximal function P ( ) as · · d(y, z) = min K(x, z; y), P (z) = min max K(x, z; y) . x∈X x∈X{ y∈Y } We also let

x(y, z) = argmin K(x, z; y), x∈X x∗(z) = argmin max K(x, z; y), x∈X y∈Y yt (zt) = P (yt + α K(x(yt, zt), zt; yt)). + Y ∇y Notice that by Danskin’s Theorem, we have d(y, z) = K(x(y, z), z; y) and P (z) = p(z ∇y ∇y ∇z − x∗(z)). Recall in Algorithm 2, the update for xt,yt and zt can be respectively viewed as a primal descent for the function K(xt, zt; yt), approximating dual ascent to the dual function d(yt, zt) and approximating proximal descent to the proximal function P (zt). We define a potential function as follows: φt = φ(xt,yt, zt)= K(xt, zt; yt) 2d(yt, zt) + 2P (zt), (4.1) − which is a linear combination of the primal function K( ), the dual function d( ) and the proximal · · function P ( ). We hope the potential function decreases after each iteration and is bounded from ·

8 below. In fact, it is easy to prove that φt f for any t (see appendix), but it is harder to prove ≥ the decrease of φt. Since the ascent for dual and the descent for proximal is approximate, an error term occurs when estimating the decrease of the potential function. Hence, certain error bounds are needed. Using some primal error bounds, we have the following basic descent estimate.

Proposition 4.1 Suppose the parameters of Algorithm 2 satisfy (3.3), then

1 1 pβ φt φt+1 xt xt+1 2 + yt yt (zt) 2 + zt xt+1 2 (4.2) − ≥ 8ck − k 8αk − + k 8 k − k 24pβ x∗(zt) x(yt (zt), zt) 2. (4.3) − k − + k We would like the potential function φt to decrease sufficiently after each iteration. Concretely speaking, we want to eliminate the negative term (4.3) and show that the following “sufficient- decrease” holds for each iteration t: 1 1 pβ φt φt+1 xt xt+1 2 + yt yt (zt) 2 + zt xt+1 2. (4.4) − ≥ 16ck − k 16αk − + k 16 k − k It is not hard to prove that if (4.4) holds for t 0, 1, , T 1 , then there exists a t 1, 2, , T ∈ { · · · − } ∈ { · · · } such that (xt,yt) is a C/√T β-solution for some constant C > 0. Moreover, if (4.4) holds for any t, then the iteration complexity is (1/ǫ2) and we can also prove that every limit point of the iterates O is a min-max solution. Therefore by the above analysis, the most important thing is to bound the term x∗(zt) x(yt (zt), zt) 2, which is related to the so-call “dual error bound”. k − + k If yt yt (zt) = 0, then yt (zt) is the maximizer of d(y, zt) over y, and thus x∗(zt) is the k − + k + same as x(yt (zt), zt). A natural question is whether we can use the term yt yt (zt) to bound + k − + k x∗(zt) x(yt (zt), zt) 2? The answer is yes, and we have the following “dual error bound”. k − + k Lemma 4.2 If Assumptions 3.3 and 3.5 hold for (1.2) and there is an R> 0 with zt R, then k k≤ there exists δ > 0 such that if

max xt xt+1 , yt yt (zt) , xt+1 zt δ, {k − k k − + k k − k} ≤ then x(yt (zt), zt) x∗(zt) σ yt yt (zt) . k + − k≤ 5k − + k holds for some constant σ5 > 0.

Using this lemma, we can prove Theorem 3.8. We choose β sufficiently small, then when the residuals appear in (4.2) are large, we can prove that φt decreases sufficiently using the compactness of Y . When the residuals are small, the error bound Lemma 4.2 can be used to guarantee the sufficient decrease of φt. Therefore, (4.4) always holds, which yields Theorem 3.8. However, for the general nonconvex-concave problem 1.1, we can only have a “weaker” bound.

Lemma 4.3 Suppose Assumption 3.3 holds for problem 1.1. Define D(Y ) to be the diameter of Y . If Assumption 3.3 holds, we have

(p L) x∗(zt) x(yt (zt), zt) 2 (1 + αL) yt yt (zt) D(Y ) − k − + k ≤ k − + k ·

9 Note that this is a nonhomogeneous error bound, which can help us bound the term x∗(zt) k − x(yt (zt), zt) only when yt yt (zt) is not too small. Therefore, we say it is “weaker” dual error + k k − + k bound. To obtain an ǫ-stationary solution, we need to choose β sufficiently small and proportional to ǫ2. In this case, we can prove that if φt stops to decrease, we have already obtained an ǫ- stationary solution by Lemma 4.3. By the remark after (4.4), we need ((1/(ǫ√β))2) = (1/ǫ4) O O iterations to obtain an ǫ-stationary solution.

Remark. For the general nonconvex-concave problem (1.1), we need to choose β proportional to ǫ2 and hence the iteration complexity is higher than the previous case. However, it is expected that for a concrete problem with some special structure, the “weaker” error bound Lemma 4.3 can be improved, as is the iteration complexity bound. This is left as a future work.

The proof sketch can be summarized in the following steps: • In Step 1, we introduce the potential function φt which is shown to be bounded below. To obtain the convergence rate of the algorithms, we want to prove the potential function can make sufficient decrease at every iterate t, i.e., we want to show φt φt+1 > 0. − • In Step 2, we study this difference φt φt+1 and provide a lower bound of it in Proposition 4.2. − Notice that a negative term (4.3) will show up in the lower bound, and we have to carefully analyze the magnitude of this term to obtain φt φt+1 > 0. − • Analyzing the negative term is the main difficulty of the proof. In Step 3, we discuss how to deal with this difficulty for solving Problem 1.1 and Problem 1.2 separately.

• Finally, we show the potential function makes a sufficient decrease at every iterate as stated in (4.4), and conclude our proof by computing the number of iterations to achieve an ǫ-solution (as shown in Lemma B.12 the appendix).

5 Numerical Results on Robust Neural Network Training

In this section, we apply the Smoothed-GDA algorithm to train a robust neural network on MNIST data set against adversarial attacks [3,31, 32]. The optimization formulation is

N min max ℓ(f(xi + δi; w),yi), (5.1) w δi, s.t. |δi|∞≤ε Xi=1 where w is the parameter of the neural network, the pair (xi,yi) denotes the i-th data point, and δi is the perturbation added to data point i. As (5.1) is difficult to solve directly, researchers [20] have proposed an approximation of (5.1) as the following nonconvex-concave problem, which is in the form of (1.2) we discussed before.

N 9 m K min max tjℓ f xij ; w ,yi , = (t1, ,tm) ti = 1,ti 0 , (5.2) w t∈T T · · · | ≥ i=1 j=0 ( i=1 ) X X   X K where K is a parameter in the approximation, and xij is an approximated attack on sample xi by changing the output of the network to label j. The details of this formulation and the structure of the network in experiments are provided in the appendix.

10 00 -Step Projected Gradient Step

0

0

0 0 0 0 0 00

Figure 1: Convergence speed of Smoothed-GDA and the algorithm in [20].

40 Natural FGSM L∞ [32] PGD L∞ [31] ε = 0.2 ε = 0.3 ε = 0.4 ε = 0.2 ε = 0.3 ε = 0.4 [3] with ε = 0.35 98.58% 96.09% 94.82% 89.84% 94.64% 91.41% 78.67% [51] with ε = 0.35 97.37% 95.47% 94.86% 79.04% 94.41% 92.69% 85.74% [51] with ε = 0.40 97.21% 96.19% 96.17% 96.14% 95.01% 94.36% 94.11% [20] with ε = 0.40 98.20% 97.04% 96.66% 96.23% 96.00% 95.17 % 94.22% Smoothed-GDA with ε = 0.40 98.89% 97.87% 97.23% 95.81% 96.71% 95.62% 94.51%

Table 2: Test accuracies under FGSM and PGD attacks.

Results: We compare our results with three algorithms from [3, 20, 51]. The references [3, 51] are two classical algorithms in adversarial training, while the recent reference [20] considers the same problem formulation as (1.2) and has an algorithm with (1/ǫ3.5) iteration complexity. The O accuracy of our formulation are summarized in Table 2 which shows that the formulation (1.2) leads to a comparable or slightly better performance to the other algorithms. We also compare the convergence on the loss function when using the Smoothed-GDA algorithm and the one in [20]. In Figure 1, Smoothed-GDA algorithm takes only 5 epochs to get the loss values below 0.2 while the algorithm proposed in [20] takes more than 14 epochs. In addition, the loss obtained from the Smoothed-GDA algorithm has a smaller variance.

6 Conclusion

In this paper, we propose a simple single-loop algorithm for nonconvex min-max problems (1.1). For an important family of problems (1.2), the algorithm is even more efficient due to the dual error bound, and it is well-suited for problems in large-size dimensions and distributed setting. The algorithmic framework is flexible, and hence in the future work, we can extend the algorithm to more practical problems and derive stronger error bounds to attain lower iteration complexity.

11 Broader Impact

In this paper, we propose a single-loop algorithm for min-max problem. This algorithm is easy to implemented and proved to be efficient in a family of nonconvex minimax problems and have good numerical behavior in robust training. This paper focuses on theoretical study of the algorithms. In industrial applications, several aspects of impact can be expected:

1. Save energy by improving efficiency. The trick developed in this paper has the poten- tial to accelerate the training for machine learning problems involving a minimax problem such robust training for uncertain data, generative adversarial net(GAN) and AI for games. This means that the actual training time will decrease dramatically by using our algorithm. Training neural network is very energy-consuming, and reducing the training time can help the industries or companies to save energy.

2. Promote fairness. We consider min-max problems in this paper. A model that is trained under this framework will not allow poor performance on some objectives in order to boost performance on the others. Therefore, even if the training data itself is biased, the model will not allow some objectives to contribute heavily to minimizing the average loss due to the min-max framework. In other words, this framework promotes fairness, and model that is trained under this framework will provide fair solutions to the problems.

3. Provide flexible framework. Our algorithmic framework is flexible. Though in the paper, we only discuss some general formulation, our algorithm can be easily extended to many practical settings. For example, based on our general framework for multi-block problems, we can design algorithms efficiently solving problems with distributedly stored data, decentralized control or privacy concern. Therefore, our algorithm may have an impact on some popular big data applications such as distributed training, federated learning and so on.

Funding Disclosure

This research is supported by the leading talents of Guangdong Province program [Grant 00201501]; the National Science Foundation of China [Grant 61731018]; the Air Force Office of Scientific Re- search [Grant FA9550-12-1-0396]; the National Science Foundation [Grant CCF 1755847]; Shenzhen Peacock Plan [Grant KQTD2015033114415450]; the Development and Reform Commission of Shen- zhen Municipality; and Shenzhen Research Institute of Big Data.

12 References

[1] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in neural information processing systems, pp. 2672–2680, 2014.

[2] M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein gan,” arXiv preprint arXiv:1701.07875, 2017.

[3] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” arXiv preprint arXiv:1706.06083, 2017.

[4] A. Ben-Tal, L. El Ghaoui, and A. Nemirovski, Robust optimization, vol. 28. Princeton Univer- sity Press, 2009.

[5] E. Delage and Y. Ye, “Distributionally robust optimization under moment uncertainty with application to data-driven problems,” Operations research, vol. 58, no. 3, pp. 595–612, 2010.

[6] H. Namkoong and J. C. Duchi, “Stochastic gradient methods for distributionally robust opti- mization with f-divergences,” in Advances in neural information processing systems, pp. 2208– 2216, 2016.

[7] H. Namkoong and J. C. Duchi, “Variance-based regularization with convex objectives,” in Advances in neural information processing systems, pp. 2971–2980, 2017.

[8] Y. Zhang and L. Xiao, “Stochastic primal-dual coordinate method for regularized empirical risk minimization,” The Journal of Machine Learning Research, vol. 18, no. 1, pp. 2939–2980, 2017.

[9] C. Tan, T. Zhang, S. Ma, and J. Liu, “Stochastic primal-dual method for empirical risk mini- mization with o (1) per-iteration complexity,” in Advances in Neural Information Processing Systems, pp. 8366–8375, 2018.

[10] S. S. Du, J. Chen, L. Li, L. Xiao, and D. Zhou, “Stochastic variance reduction methods for policy evaluation,” in Proceedings of the 34th International Conference on Machine Learning- Volume 70, pp. 1049–1058, JMLR. org, 2017.

[11] B. Dai, A. Shaw, L. Li, L. Xiao, N. He, Z. Liu, J. Chen, and L. Song, “Sbeed: Convergent rein- forcement learning with nonlinear function approximation,” arXiv preprint arXiv:1712.10285, 2017.

[12] A. Nemirovski, “Prox-method with rate of convergence o (1/t) for variational inequalities with lipschitz continuous monotone operators and smooth convex-concave saddle point problems,” SIAM Journal on Optimization, vol. 15, no. 1, pp. 229–251, 2004.

[13] Y. Nesterov, “Dual extrapolation and its applications to solving variational inequalities and related problems,” Mathematical Programming, vol. 109, no. 2-3, pp. 319–344, 2007.

[14] R. D. Monteiro and B. F. Svaiter, “On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean,” SIAM Journal on Optimization, vol. 20, no. 6, pp. 2755–2787, 2010.

13 [15] B. Palaniappan and F. Bach, “Stochastic variance reduction methods for saddle-point prob- lems,” in Advances in Neural Information Processing Systems, pp. 1416–1424, 2016.

[16] G. Gidel, T. Jebara, and S. Lacoste-Julien, “Frank-wolfe algorithms for saddle point problems,” arXiv preprint arXiv:1610.07797, 2016.

[17] P. Mertikopoulos, B. Lecouat, H. Zenati, C.-S. Foo, V. Chandrasekhar, and G. Piliouras, “Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile,” arXiv preprint arXiv:1807.02629, 2018.

[18] E. Y. Hamedani, A. Jalilzadeh, N. Aybat, and U. Shanbhag, “Iteration complexity of ran- domized primal-dual methods for convex-concave saddle point problems,” arXiv preprint arXiv:1806.04118, 2018.

[19] A. Mokhtari, A. Ozdaglar, and S. Pattathil, “A unified analysis of extra-gradient and opti- mistic gradient methods for saddle point problems: Proximal point approach,” arXiv preprint arXiv:1901.08511, 2019.

[20] M. Nouiehed, M. Sanjabi, T. Huang, J. D. Lee, and M. Razaviyayn, “Solving a class of non- convex min-max games using iterative first order methods,” in Advances in Neural Information Processing Systems, pp. 14905–14916, 2019.

[21] L. Collins, A. Mokhtari, and S. Shakkottai, “Distribution-agnostic model-agnostic meta- learning,” arXiv preprint arXiv:2002.04766, 2020.

[22] D. M. Ostrovskii, A. Lowy, and M. Razaviyayn, “Efficient search of first-order nash equilibria in nonconvex-concave smooth min-max problems,” arXiv preprint arXiv:2002.07919, 2020.

[23] T. Lin, C. Jin, M. Jordan, et al., “Near-optimal algorithms for minimax optimization,” arXiv preprint arXiv:2002.02417, 2020.

[24] S. Lu, I. Tsaknakis, M. Hong, and Y. Chen, “Hybrid block successive approximation for one-sided non-convex min-max problems: algorithms and applications,” arXiv preprint arXiv:1902.08294, 2019.

[25] K. K. Thekumparampil, P. Jain, P. Netrapalli, and S. Oh, “Efficient algorithms for smooth minimax optimization,” in Advances in Neural Information Processing Systems, pp. 12659– 12670, 2019.

[26] C. Jin, P. Netrapalli, and M. I. Jordan, “Minmax optimization: Stable limit points of gradient descent ascent are locally optimal,” arXiv preprint arXiv:1902.00618, 2019.

[27] H. Rafique, M. Liu, Q. Lin, and T. Yang, “Non-convex min-max optimization: Provable algorithms and applications in machine learning,” arXiv preprint arXiv:1810.02060, 2018.

[28] T. Lin, C. Jin, and M. I. Jordan, “On gradient descent ascent for nonconvex-concave minimax problems,” arXiv preprint arXiv:1906.00331, 2019.

[29] Z. Xu, H. Zhang, Y. Xu, and G. Lan, “A unified single-loop alternating gradient projection algorithm for nonconvex-concave and convex-nonconcave minimax problems,” arXiv preprint arXiv:2006.02032, 2020.

14 [30] J. Zhang and Z.-Q. Luo, “A proximal alternating direction method of multiplier for linearly constrained nonconvex minimization,” SIAM Journal on Optimization, vol. 30, no. 3, pp. 2272– 2302, 2020.

[31] A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial machine learning at scale,” arXiv preprint arXiv:1611.01236, 2016.

[32] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014.

[33] M. Hardt, E. Price, and N. Srebro, “Equality of opportunity in supervised learning,” in Ad- vances in neural information processing systems, pp. 3315–3323, 2016.

[34] C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel, “Fairness through awareness,” in Proceedings of the 3rd innovations in theoretical computer science conference, pp. 214–226, 2012.

[35] M. Mohri, G. Sivek, and A. T. Suresh, “Agnostic federated learning,” arXiv preprint arXiv:1902.00146, 2019.

[36] C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” in Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1126–1135, JMLR. org, 2017.

[37] N. Parikh and S. Boyd, “Proximal algorithms,” Foundations and Trends in optimization, vol. 1, no. 3, pp. 127–239, 2014.

[38] Y. Nesterov, “Smooth minimization of non-smooth functions,” Mathematical programming, vol. 103, no. 1, pp. 127–152, 2005.

[39] P. T. Harker and J.-S. Pang, “Finite-dimensional variational inequality and nonlinear comple- mentarity problems: a survey of theory, algorithms and applications,” Mathematical program- ming, vol. 48, no. 1-3, pp. 161–220, 1990.

[40] F. Facchinei and J.-S. Pang, Finite-dimensional variational inequalities and complementarity problems. Springer Science & Business Media, 2007.

[41] A. Forsgren, P. E. Gill, and M. H. Wright, “Interior methods for nonlinear optimization,” SIAM review, vol. 44, no. 4, pp. 525–597, 2002.

[42] P. Carbonetto, M. Schmidt, and N. D. Freitas, “An interior-point stochastic approximation method and an l1-regularized delta rule,” in Advances in neural information processing sys- tems, pp. 233–240, 2009.

[43] J. Liang, J. Fadili, and G. Peyr´e, “Local linear convergence of forward–backward under partial smoothness,” in Advances in Neural Information Processing Systems, pp. 1970–1978, 2014.

[44] S. Lu, M. Razaviyayn, B. Yang, K. Huang, and M. Hong, “Snap: Finding approximate second-order stationary solutions efficiently for non-convex linearly constrained problems,” arXiv preprint arXiv:1907.04450, 2019.

15 [45] J. Zeng and W. Yin, “On nonconvex decentralized gradient descent,” IEEE Transactions on signal processing, vol. 66, no. 11, pp. 2834–2848, 2018.

[46] L. Cannelli, F. Facchinei, V. Kungurtsev, and G. Scutari, “Asynchronous parallel algorithms for nonconvex optimization,” Mathematical Programming, pp. 1–34, 2019.

[47] Y. Wang, W. Yin, and J. Zeng, “Global convergence of admm in nonconvex nonsmooth opti- mization,” Journal of Scientific Computing, vol. 78, no. 1, pp. 29–63, 2019.

[48] Y. Xu and W. Yin, “Block stochastic gradient iteration for convex and nonconvex optimiza- tion,” SIAM Journal on Optimization, vol. 25, no. 3, pp. 1686–1716, 2015.

[49] A. D´efossez, L. Bottou, F. Bach, and N. Usunier, “On the convergence of adam and adagrad,” arXiv preprint arXiv:2003.02395, 2020.

[50] X. Xie, H. Kong, J. Wu, W. Zhang, G. Liu, and Z. Lin, “Maximum-and-concatenation net- works,” arXiv preprint arXiv:2007.04630, 2020.

[51] H. Zhang, Y. Yu, J. Jiao, E. P. Xing, L. E. Ghaoui, and M. I. Jordan, “Theoretically principled trade-off between robustness and accuracy,” arXiv preprint arXiv:1901.08573, 2019.

[52] D. P. Bertsekas, Convex optimization theory.

[53] M. Berger and B. Gostiaux, Differential Geometry: Manifolds, Curves, and Surfaces: Mani- folds, Curves, and Surfaces, vol. 115. Springer Science & Business Media, 2012.

[54] W. Wang and M. A. Carreira-Perpin´an, “Projection onto the probability simplex: An efficient algorithm with a simple proof, and an application,” arXiv preprint arXiv:1309.1541, 2013.

[55] A. Krizhevsky, G. Hinton, et al., “Learning multiple layers of features from tiny images,” 2009.

[56] E. Wong, F. Schmidt, J. H. Metzen, and J. Z. Kolter, “Scaling provable adversarial defenses,” in Advances in Neural Information Processing Systems, pp. 8400–8409, 2018.

16 In the appendix, we will give the proof of the main theorems. The appendix is organized as follows: 1. In section A, we list some notations used in the appendix.

2. In section B, we prove the two main theorems in one-block case.

3. In section C, we briefly state the proof of the two main theorems in multi-block setting.

4. In section D, we prove the two main error bound lemmas (Lemma 4.3 and Lemma 4.2) under the strict complementarity assumption.

5. In section E, we see that the strict complementarity assumption can be relaxed to a weaker regularity assumption. We also prove that this weaker regularity assumption is generic for robust regression problems with square loss, i.e., we prove that our regularity assumption holds with probability 1 if the data points are joint from a continuous distribution.

6. In the last section F, we give some more details about the experiment.

A Notations

We first list some notations which will be used in the appendix. 1. [m]= 1, 2, ,m . { · · · } 2. W ∗ is the solution set of (1.1) or (1.2). X∗ is the set of all solutions x∗, i.e., x∗ X∗ if there ∈ exists a y∗ such that (x∗,y∗) W ∗. ∈ 3. (r) is a Euclidian ball of radius r for proper dimension. B 4. dist(v,S) means the Euclidian distance from a point v to a set S.

5. For a vector v, v means the i-th component of v. For a set , v R|S| is the vector i S S ∈ containing all components v ’s with i . i ∈ S 6. Let A Rn×m be a matrix and [m] be an index set. Then A represents the row ∈ S ⊆ S sub-matrix of A corresponding to the rows with index in . S 7. For a matrix M, γ(M) is the smallest singular value of M.

8. The projection of a point y, onto a set X is defined as P (y) = argmin 1 x y 2. X x∈X 2 k − k B Proof of the two main theorems: one-block case

In this section, we prove the two main theorems in one-block case. The proof of the multi-block case is similar and will be given in the next section. Proof Sketch.

• In Step 1, we will introduce the potential function φt which is shown to be bounded below. To obtain the convergence rate of the algorithms, we want to prove the potential function can make sufficient decrease at every iterate t, i.e., we want to show φt φt+1 > 0. −

17 • In Step 2, we will study this difference φt φt+1 and provide a lower bound of it in Proposition − 4.2. Notice that a negative term (4.3) will show up in the lower bound, and we have to carefully analyze the magnitude of this term to obtain φt φt+1 > 0. − • Analyzing the negative term will be the main difficulty of the proof. In Step 3, we will discuss how to deal with this difficulty for solving Problem 1.1 and Problem 1.2 separately. • Finally, we will show the potential function makes a sufficient decrease at every iterate as stated in (4.4), and will conclude our proof by computing the number of iterations to achieve an ǫ-solution in Lemma B.12.

B.1 The potential function and basic estimate Recall that the potential function is: φt = Φ(xt, zt; yt)= K(xt, zt; yt) 2d(yt, zt) + 2P (zt), − where p K(x, z; y) = f(x,y)+ x z 2, 2k − k d(y, z) = min K(x, z; y), x∈X P (z) = min max K(x, z; y). x∈X y∈Y Also note that if p>L, K(x, z; y) is strongly convex of x with modular p L and K(x, z; y) is − ∇x Lipschitz-continuous of x with a constant L + p. We also use the following notations:

1. h(x, z) = maxy∈Y K(x, z; y). ∗ 2. x(y, z) = argminx∈X K(x, z; y), x (z) = argminx∈X h(x, z).

3. The set Y (z) = argmaxy∈Y d(y, z). 4. y (z)= P (y + α K(x(y, z), z; y)). + Y ∇y 5. x (y, z)= P (x c K(x, z; y)). + X − ∇x First of all, we can prove that φt is bounded from below: Lemma B.1 We have φ(x,y,z) f. ≥ Proof By the definition of d( ) and P ( ), we have · · K(x, z; y) d(y, z), P (z) d(y, z), , P (z) f. (B.1) ≥ ≥ ≥ Hence, we have φ(x,y,z) = P (z) + (K(x, z; y) d(y, z)) + (P (z) d(y, z)) − − P (z) ≥ f. ≥

Next, we state some “error bounds”.

18 Lemma B.2 There exist constants σ1,σ2,σ3 independent of y such that

x(y, z) x(y, z′) σ z z′ , (B.2) k − k≤ 1k − k x∗(z) x∗(z′) σ z z′ , (B.3) k − k≤ 1k − k x(y, z) x(y′, z) σ y y′ , (B.4) k − k≤ 2k − k xt+1 x(yt, zt) σ xt xt+1 , (B.5) k − k≤ 3k − k for any y,y′ Y and z, z′ X, where σ = p , σ = 2(p+L) , σ = 1+(c(−L+p)) .. ∈ ∈ 1 −L+p 2 p−L 3 c(−L+p) Proof The proofs of (B.2), (B.3) and (B.5) are the same as those in Lemma 3.6 in [30] and hence omitted. We only need to prove (B.4). Using the strong convexity of K( , z; y) of x, we have · L + p K(x(y, z), z; y) K(x(y′, z), z; y) − x(y, z) x(y′, z) 2, (B.6) − ≤− 2 k − k L + p K(x(y, z), z; y′) K(x(y′, z), z; y′) − x(y, z) x(y′, z) 2. (B.7) − ≥ 2 k − k Moreover, using the concavity of K(x, z; ) of y, we have · K(x(y, z), z; y′) K(x(y, z), z; y) − K(x(y, z), z; y),y′ y . (B.8) ≤ h∇y − i Using the Lipschitz-continuity of K(x, z; ), we have ∇y · K(x(y′, z), z; y) K(x(y′, z), z; y′) − L K(x(y′, z), z; y),y′ y + y y′ 2. (B.9) ≤ h∇y − i 2 k − k Combining (B.6) and (B.9), we have

( L + p) x(y, z) x(y′, z) 2 (B.10) − k − k L K(x(y, z), z; y) K(x(y′, z), z; y),y′ y + y y′ 2 (B.11) ≤ h∇y −∇y − i 2 k − k L (p + L) x(y′, z) x(y, z) y y′ + y y′ 2, (B.12) ≤ k − kk − k 2 k − k where the last inequality uses the Cauchy-schwarz inequality and the Lipschitz-continuity of K( , z; y) ∇x · of x. Let ζ = x(y, z) x(y′, z) / y y′ . Then (B.10) becomes k − k k − k p + L L ζ2 ζ + . ≤ p L 2(p L) − −

19 Hence, we only need to solve the above quadratic inequality. We have

1 1 p + L 2 L ζ2 ζ2 + + ≤ 2 2 p L 2(p L)  −  − 1 1 p + L 2 p + L ζ2 + + ≤ 2 2 p L 2(p L)  −  − 1 1 p + L 2 1 p + L 2 ζ2 + + ≤ 2 2 p L 2 p L  −   −  1 p + L 2 = ζ2 + , 2 p L  −  where the first inequality is due to the AM-GM inequality and the third inequality is because (p + L)/(p L) > 1. Therefore − p + L p + L ζ √2 < 2 . ≤ p L p L − − Hence, we can take σ = 2(p + L)/(p L) and finish the proof. 2 − The following lemma is a direct corollary of the above lemma:

Lemma B.3 The dual function d( , z) is a differentiable function of ywith Lipschitz continuous · gradient d(y, z)= K(x(y, z), z; y)= f(x(y, z),y) ∇y ∇y ∇y and d(y′, z) d(y′′, z) L y′ y′′ , y′,y′′ Y. k∇y −∇y k≤ dk − k ∀ ∈ with Ld = L + Lσ2. Remark.Note that if p 3L, then we have σ = 2(p + L)/(p L) 4L and hence ≥ 2 − ≥ L 5L. (B.13) d ≥ Proof Using Danskin’s theorem in [52], we know that d is a differentiable function with d(y, z)= K(x(y, z), z; y)= f(x(y, z),y). ∇y ∇y ∇y To prove the Lipschitz-continuity, we have

d(y′, z) d(y′′, z) = K(x(y′, z), z; y′) K(x(y′′, z), z; y′′) k∇y −∇y k k∇y −∇y k K(x(y′, z), z; y′) K(x(y′, z), z; y′′) ≤ k∇y −∇y k + K(x(y′, z), z; y′′) K(x(y′′, z), z; y′′) k∇y −∇y k L y′ y′′ + L x(y′, z) x(y′′, z) ≤ k − k k − k L y′ y′′ + Lσ y′ y′′ = L y′ y′′ , ≤ k − k 2k − k dk − k where the last inequality is due to Lemma B.2. We then prove the following basic estimate.

20 Proposition B.4 We let

1 1 1 1 c2(p L)2 1 (p L)2 p> 3L,c < ,α< min , = min , − ,β < min , − . p + L {11L 4Lσ2 } {11L 4L(1 + c(p L))2 } {36 384p(p + L)2 } 3 − (B.14)

Then we have

φt φt+1 (B.15) − 1 xt xt+1 2 ≥ 8ck − k 1 p + yt yt (zt) 2 + zt zt+1 2 (B.16) 8αk − + k 8β k − k 24pβ x∗(zt) x(yt (zt), zt) 2 (B.17) − k − + k To prove this basic estimate, we need a series of lemmas.

Lemma B.5 (Primal Descent) For any t, we have 1 K(xt, zt; yt) K(xt+1, zt+1; yt+1) xt xt+1 2 + K(xt+1, zt; yt),yt yt+1 − ≥ 2ck − k h∇y − i L p yt yt+1 2 + zt zt+1 2. (B.18) − 2 k − k 2β k − k Proof Notice that the step of updating x is a standard gradient projection, hence we have 1 K(xt, zt; yt) K(xt+1, zt; yt) xt xt+1 2. (B.19) − ≥ 2ck − k Next, because K(x, z; y) is L-Lipschitz-continuous of y, we have ∇y L K(xt+1, zt; yt) K(xt+1, zt; yt+1) K(xt+1, zt; yt),yt yt+1 yt yt+1 2. (B.20) − ≥ h∇y − i− 2 k − k Based on the update of variable zt+1, i.e. zt+1 = zt + β(xt+1 zt), it is easy to show that − p K(xt+1, zt; yt+1) K(xt+1, zt+1; yt+1) zt zt+1 2. − ≥ 2β k − k

Combining (B.19)-(B.21), we finish the proof.

Lemma B.6 (Dual Ascent) For any t, we have

L d(yt+1, zt+1) d(yt, zt) d(yt, zt),yt+1 yt d yt yt+1 2 − ≥ h∇y − i− 2 k − k p + (zt+1 zt)T (zt+1 + zt 2x(yt+1, zt+1)) (B.21) 2 − − L = K(x(yt, zt), zt; yt),yt+1 yt d yt yt+1 2 (B.22) h∇y − i− 2 k − k p + (zt+1 zt)T (zt+1 + zt 2x(yt+1, zt+1)) (B.23) 2 − −

21 Proof Using Lemma B.3, we have L d(yt+1, zt) ( d(yt, zt)) d(yt, zt),yt+1 yt + d yt yt+1 2 − − − ≤ −h∇y − i 2 k − k L = K(xt+1, zt; yt),yt+1 yt + d yt yt+1 2. (B.24) −h∇y − i 2 k − k Next, d(yt+1, zt+1) d(yt+1, zt) − = K(x(yt+1, zt+1), zt+1; yt+1) K(x(yt+1, zt), zt; yt+1) − K(x(yt+1, zt+1), zt+1; yt+1) K(x(yt+1, zt+1), zt; yt+1) ≥ p −p = x(yt+1, zt+1) zt+1 2 x(yt+1, zt+1) zt 2 2k − k − 2k − k p = (zt+1 zt)T (zt+1 + zt 2x(yt+1, zt+1)). (B.25) 2 − − Finally, using (B.24)-(B.25) we finish the proof. Recall that Y (z)= y Y arg max d(y, z) { ∈ | y∈Y } Note that P (z)= d(y(z), z), for any y(z) Y (z). ∈ Lemma B.7 (Proximal Descent) For any t 0, there holds ≥ p P (zt+1) P (zt) (zt+1 zt)T (zt + zt+1 2x(y(zt+1, zt)), (B.26) − ≤ 2 − − where y(zt+1) is arbitrary y belongs to the set Y (zt+1). Proof In the rest of the Appendix, y(zt+1) denote any y belongs to the set Y (zt+1). Using Kakutoni’s Theorem, we have min max K(x, z; y) = max min K(x, z; y), x∈X y∈Y y∈Y x∈X which implies max d(y, z) = min h(x, z)= P (z). y∈Y x∈X Hence we have P (zt+1) P (zt) − (i) P (zt+1) d(y(zt+1), zt) ≤ − (ii) = d(y(zt+1), zt+1) d(y(zt+1), zt) − (iii) = K(x(y(zt+1), zt), zt+1; y(zt+1)) K(x(y(zt+1), zt), zt; y(zt+1)) − (iv) p = (zt+1 zt)T (zt+1 + zt 2x(y(zt+1), zt)), 2 − − where (i) and (iii) are because of (B.1), (ii) is because of the definition of y(zt+1) and (iv) is from direct calculation.

22 Lemma B.8 x∗(z)= x(y, z) for any y Y (z) and x∗(z) is continuous in z. Moreover, we have ∈ z = x∗(z) if and only if z X∗. ∈ We define y (z)= P (y + α K(x(y, z), z; y)). + Y ∇y Then we have

Lemma B.9 We have yt+1 yt (zt) κ xt xt+1 , k − + k≤ k − k where κ = αLσ3. Proof By the nonexpansiveness of the projection operator, we have

yt+1 yt (zt) = P (yt + α K(x(yt, zt), zt; yt)) P (yt + α K(xt+1, zt; yt)) k − + k k Y ∇y − Y ∇y k (yt + α K(x(yt, zt), zt; yt)) (yt + α K(xt+1, zt; yt)) ≤ k ∇y − ∇y k αL xt+1 x(yt, zt) ≤ k − k αLσ xt xt+1 , ≤ 3k − k where the first inequality is due to the nonexpansiveness of the projection operator, the second is because of the Lipschitz-continuity of K and the last inequality is because of (B.5). ∇y Now we can prove Proposition B.4. Proof Using the three descent lemma, we have

Φt Φt+1 1 − L p xt xt+1 2 + K(xt+1, zt; yt),yt yt+1 yt yt+1 2 + zt zt+1 2 ≥ 2ck − k h∇y − i− 2 k − k 2β k − k 2L +2 K(x(yt, zt), zt; yt),yt+1 yt d yt yt+1 2 + p(zt+1 zt)T (zt+1 + zt 2x(yt+1, zt+1)) h∇y − i− 2 k − k − − p(zt+1 zt)T (zt + zt+1 2x(y(zt+1), zt)) − − − 1 L + 2L p = xt xt+1 2 d yt yt+1 2 + zt zt+1 2 + K(xt+1, zt; yt),yt+1 yt 2ck − k − 2 k − k 2β k − k h∇y − i +2 K(x(yt, zt), zt; yt) K(xt+1, zt; yt),yt+1 yt (B.27) h∇y −∇y − i +2p(zt+1 zt)T (x(y(zt+1), zt) x(yt+1, zt+1)). (B.28) − − Using the property of the projection operator and the update of the dual variable yt, we have 1 K(xt+1, zt; yt),yt+1 yt yt yt+1 2. h∇y − i≥ αk − k

23 Substituting the above inequality into (B.27), we get Φt Φt+1 1 − 1 L + 2L p xt xt+1 2 + ( d ) yt yt+1 2 + zt zt+1 2 ≥ 2ck − k α − 2 k − k 2β k − k +2 K(x(yt, zt), zt; yt) K(xt+1, zt; yt),yt+1 yt h∇y −∇y − i +2p(zt+1 zt)T (x(y(zt+1), zt) x(yt+1, zt+1)) − − 1 1 L + 10L p xt xt+1 2 + ( ) yt yt+1 2 + zt zt+1 2 ≥ 2ck − k α − 2 k − k 2β k − k +2 K(x(yt, zt), zt; yt) K(xt+1, zt; yt),yt+1 yt h∇y −∇y − i +2p(zt+1 zt)T (x(y(zt+1), zt) x(yt+1, zt+1)) − − 1 1 p xt xt+1 2 + yt yt+1 2 + zt zt+1 2 ≥ 2ck − k 2αk − k 2β k − k +2 K(x(yt, zt), zt; yt) K(xt+1, zt; yt),yt+1 yt h∇y −∇y − i +2p(zt+1 zt)T (x(y(zt+1), zt) x(yt+1, zt+1)) − − where the second inequality is because of (B.13) and the last inequality is because α 1 . ≤ 11L Notice that 2p(zt+1 zt)T (x(y(zt+1), zt) x(yt+1, zt+1)) − − = 2p(zt+1 zt)T ((x(y(zt+1), zt) x(y(zt+1), zt+1)) + (x(y(zt+1)), zt+1) x(yt+1, zt+1))) − − − = 2p(zt+1 zt)T (x(y(zt+1), zt) x(y(zt+1), zt+1)) − − +2p(zt+1 zt)T (x(y(zt+1)), zt+1) x(yt+1, zt+1)) − − (i) 2pσ zt+1 zt 2 + 2p(zt+1 zt)T (x(y(zt+1)), zt+1) x(yt+1, zt+1)) ≥ − 1k − k − − (ii) p 2pσ zt+1 zt 2 zt+1 zt 2 6pβ x(y(zt+1), zt+1) x(yt+1, zt+1) 2, ≥ − 1k − k − 6β k − k − k − k where (i) is because of the Cauchy-Schwarz inequality and Lemma B.2 and (ii) is due to the AM-GM inequality. Also we have 2 K(x(yt, zt), zt; yt) K(xt+1, zt; yt),yt+1 yt h∇y −∇y − i 2 K(x(yt, zt), zt; yt) K(xt+1, zt; yt) yt+1 yt ≥ − k∇y −∇y k · k − k 2L xt+1 x(yt, zt) yt yt+1 ≥ − k − k · k − k Lσ2 yt yt+1 2 Lσ−2 xt+1 x(yt, zt) 2 ≥ − 3k − k − 3 k − k Lσ2 yt yt+1 2 L xt+1 xt 2, ≥ − 3k − k − k − k where the first inequality is because of the Cauchy-Schwarz in equality, the second inequality is because K = f is L-Lipschitz-continuous, the third inequality is due to the AM-GM inequality ∇y ∇y and the last is because of (B.5). Hence we have Φt Φt+1 1− 1 p p ( L) xt xt+1 2 + ( Lσ2) yt yt+1 2 + ( 2pσ ) zt zt+1 2 ≥ 2c − k − k 2α − 3 k − k 2β − 1 − 6β k − k 6pβ x(y(zt+1), zt+1) x(yt+1, zt+1) 2 − k − k

24 By the conditions of p, c, we have 1/2c L 1/4c. − ≥ By the condition for α, we have 2 α< 1/(4Lσ3 ), which yields 1/(2α) Lσ2 1/(4α) − 3 ≥ And by the conditions that β < 1 and p 3L together with the definition of σ , 36 ≥ 1 p p p 2pσ . 2β − 1 − 6β ≥ 4β Then we have Φt Φt+1 1 − 1 p xt xt+1 2 + yt yt+1 2 + zt zt+1 2 ≥ 4ck − k 4αk − k 4β k − k 6pβ x(y(zt+1), zt+1) x(yt+1, zt+1) 2 (B.29) − k − k 1 1 p = xt xt+1 2 + yt yt+1 2 + zt zt+1 2 4ck − k 4αk − k 4β k − k 6pβ x∗(zt+1) x(yt+1, zt+1) 2, (B.30) − k − k where the last equality is because of Lemma B.8. By Lemma B.9 and the convexity of the norm square function, we have yt+1 yt 2 = (yt+1 yt (zt)) + (yt (zt) yt) 2 (B.31) k − k k − + + − k yt yt (zt) 2/2 yt+1 yt (zt) 2 (B.32) ≥ k − + k − k − + k yt yt (zt) 2/2 κ2 xt xt+1 2. (B.33) ≥ k − + k − k − k Similarly, by Lemma B.9, (B.4) and the convexity of norm square function, we have x∗(zt+1) x(yt+1, zt+1) 2 (B.34) k − k = (x∗(zt+1) x∗(zt)) + (x∗(zt) x(yt (zt), zt)) (B.35) k − − + +(x(yt (zt), zt) x(yt+1, zt)) + (x(yt+1, zt) x(yt+1, zt+1)) 2 (B.36) + − − k 4 x∗(zt+1) x∗(zt) 2 + 4 x∗(zt) x(yt (zt), zt) 2 (B.37) ≤ k − k k − + k +4 x(yt (zt), zt) x(yt+1, zt) 2 + 4 x(yt+1, zt) x(yt+1, zt+1) 2 (B.38) k + − k k − k 4σ2 zt zt+1 2 + 4 x∗(zt) x(yt (zt), zt) 2 (B.39) ≤ 1k − k k − + k +4σ2κ2 xt xt+1 2 + 4σ2 zt zt+1 2 (B.40) 2 k − k 1 k − k = 8σ2 zt zt+1 2 + 4 x∗(zt) x(yt (zt), zt) 2 (B.41) 1k − k k − + k +4σ2κ2 xt xt+1 2. (B.42) 2 k − k Substituting (B.33) and (B.42) to (B.29) yields φt φt+1 1− ( 24pβσ2κ2 κ2/(4α)) xt xt+1 2 ≥ 4c − 2 − k − k 1 p + yt yt (zt) 2 + ( 48pβσ2) zt zt+1 2 8αk − + k 4β − 1 k − k 24pβ x∗(zt) x(yt (zt), zt) 2. − k − + k

Notice that
$$
\alpha < \frac{1}{4L\sigma_3^2} < \frac{1}{4cL^2\sigma_3^2},
$$
and hence $\kappa^2/(4\alpha) = \alpha^2 L^2\sigma_3^2/(4\alpha) = \alpha L^2\sigma_3^2/4 < \frac{1}{16c}$. Also we have

$$
\beta < \frac{1}{96p\alpha\sigma_2^2},
$$
thus $24p\beta\sigma_2^2\kappa^2 < \kappa^2/(4\alpha) \le \frac{1}{16c}$. Consequently, we have
$$
\frac{1}{4c} - 24p\beta\sigma_2^2\kappa^2 - \frac{\kappa^2}{4\alpha} > \frac{1}{8c}.
$$
By the definition of $\sigma_1$ and the conditions $p \ge 3L$ and $\beta < \frac{1}{36}$, we have
$$
\frac{p}{4\beta} - 48p\beta\sigma_1^2 \ge \frac{p}{8\beta}.
$$
Combining the above, we have

$$
\phi^t - \phi^{t+1} \ge \frac{1}{8c}\|x^t-x^{t+1}\|^2 + \frac{1}{8\alpha}\|y^t - y_+^t(z^t)\|^2 + \frac{p}{8\beta}\|z^t-z^{t+1}\|^2 - 24p\beta\,\|x^*(z^t) - x(y_+^t(z^t),z^t)\|^2,
$$
which finishes the proof.

B.2 General nonconvex-concave case

We have the following error bound:

Lemma B.10 We have

$$
(p-L)\,\|x^*(z^t) - x(y_+^t(z^t),z^t)\|^2 < (1+\alpha L)\,\|y^t - y_+^t(z^t)\| \cdot \mathrm{dist}\big(y_+^t(z^t), Y(z^t)\big) \le (1+\alpha L)\,\|y^t - y_+^t(z^t)\| \cdot D(Y),
$$
where $D(Y)$ is the diameter of $Y$.

The proof will be given in the next section.

Lemma B.11 If
$$
\max\big\{\|x^t - x^{t+1}\|,\ \|y^t - y_+^t(z^t)\|,\ \|z^t - x^{t+1}\|\big\} \le \epsilon, \quad\text{(B.43)}
$$
then $(x^{t+1}, y^{t+1})$ is an $\eta\epsilon$-solution for some constant $\eta > 0$.

Proof By the update of $x^{t+1}$, we have

$$
x^{t+1} = \arg\min_{x}\Big\{\big\langle \nabla_x f(x^t,y^t) + p(x^t - z^t),\, x - x^t\big\rangle + \frac{1}{2c}\|x-x^t\|^2 + \iota(x)\Big\},
$$
where $\iota$ is the indicator function of $X$. Therefore, we have
$$
0 \in \nabla_x f(x^{t+1},y^t) + p(x^{t+1}-z^t) + \frac{1}{2c}(x^{t+1}-x^t) + \partial\iota(x^{t+1}). \quad\text{(B.44)}
$$
Similarly, we have

$$
0 \in -\nabla_y f(x^{t+1},y^t) + \frac{1}{\alpha}(y^{t+1}-y^t) + \partial\iota_Y(y^{t+1}), \quad\text{(B.45)}
$$
where $\iota_Y$ is the indicator function of $Y$. We let
$$
u = \big(\nabla_x f(x^{t+1},y^t) - \nabla_x f(x^t,y^t)\big) + \big(\nabla_x f(x^{t+1},y^{t+1}) - \nabla_x f(x^{t+1},y^t)\big) - p(x^{t+1}-z^t) - \frac{1}{2c}(x^{t+1}-x^t)
$$
and
$$
v = \nabla_y f(x^{t+1},y^t) - \nabla_y f(x^{t+1},y^{t+1}) - \frac{1}{\alpha}(y^{t+1}-y^t).
$$
By the Lipschitz continuity of $\nabla_x f(x,y)$, Lemma B.9 and (B.43), we have
$$
\begin{aligned}
\|u\| &\le L\|x^t-x^{t+1}\| + L\|y^t-y^{t+1}\| + p\epsilon + \frac{1}{2c}\epsilon \\
&\le (1 + p + 1/(2c))\,\epsilon + L\big(\|y^t - y_+^t(z^t)\| + \|y_+^t(z^t) - y^{t+1}\|\big) \\
&\le (1 + p + 1/(2c))\,\epsilon + \epsilon + \kappa\epsilon,
\end{aligned}
$$
where the first and the second inequalities are both due to (B.43), the triangle inequality and the Lipschitz continuity of $\nabla_x f(\cdot)$, and the last inequality is because of Lemma B.9. Similarly, we can prove that
$$
\|v\| \le (L + 1 + \kappa + 1/\alpha)\,\epsilon.
$$
Hence, we finish the proof with $\eta = 2 + L + p + \kappa + \max\{1/(2c),\ 1/\alpha\}$.
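For orientation, the updates referenced in this proof are the three steps of the single-loop Smoothed-GDA iteration: a projected gradient step in $x$ on $K(\cdot,z^t;y^t)$, a projected gradient ascent step in $y$, and the averaging step $z^{t+1} = z^t + \beta(x^{t+1}-z^t)$. The following is a minimal sketch, not the authors' implementation; the projection oracles `proj_X`, `proj_Y`, the gradient callables and the default step sizes are illustrative placeholders.

```python
import numpy as np

def smoothed_gda_step(x, y, z, grad_x_f, grad_y_f, proj_X, proj_Y,
                      p=1.0, c=0.1, alpha=0.1, beta=0.5):
    """One iteration on K(x, z; y) = f(x, y) + (p/2) * ||x - z||^2."""
    # Projected gradient descent step in x on K(., z; y).
    x_new = proj_X(x - c * (grad_x_f(x, y) + p * (x - z)))
    # Projected gradient ascent step in y, evaluated at the new x.
    y_new = proj_Y(y + alpha * grad_y_f(x_new, y))
    # Exponential averaging of the smoothing sequence z; note that
    # ||z_new - z|| = beta * ||x_new - z||, the identity used repeatedly above.
    z_new = z + beta * (x_new - z)
    return x_new, y_new, z_new
```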

We say that $\phi^t$ decreases sufficiently if
$$
\phi^t - \phi^{t+1} \ge \frac{1}{16c}\|x^t-x^{t+1}\|^2 + \frac{1}{16\alpha}\|y^t - y_+^t(z^t)\|^2 + \frac{p\beta}{16}\|z^t - x^{t+1}\|^2. \quad\text{(B.46)}
$$

Lemma B.12 Let $T > 0$. If (B.46) holds for every $t \in \{0, 1, \dots, T-1\}$, then there must exist a $t \in \{1, 2, \dots, T\}$ such that $(x^t, y^t)$ is a $C/\sqrt{T\beta}$-solution for some constant $C > 0$. Moreover, if (B.46) holds for every $t \ge 0$, then any limit point of $(x^t, y^t)$ is a solution of (1.2), and the iteration complexity of attaining an $\epsilon$-solution is $O(1/\epsilon^2)$.

Proof We have

$$
\begin{aligned}
\phi^0 - \underline{f} &\ge \sum_{t=0}^{T-1}\big(\phi^t - \phi^{t+1}\big) \quad\text{(B.47)} \\
&\ge \sum_{t=0}^{T-1}\Big(\frac{1}{16c} + \frac{1}{16\alpha} + \frac{p}{16}\Big)\max\big\{\|x^t-x^{t+1}\|^2,\ \|y^t - y_+^t(z^t)\|^2,\ \beta\|x^{t+1}-z^t\|^2\big\}, \quad\text{(B.48)-(B.49)}
\end{aligned}
$$
where the last inequality is due to (B.46). Therefore, there exists a $t \in \{0,1,\dots,T-1\}$ such that
$$
\Big(\frac{1}{16c} + \frac{1}{16\alpha} + \frac{p}{16}\Big)\max\big\{\|x^t-x^{t+1}\|^2,\ \|y^t - y_+^t(z^t)\|^2,\ \beta\|x^{t+1}-z^t\|^2\big\} \le (\phi^0 - \underline{f})/T.
$$
Since $\beta < 1$, we further get

$$
\Big(\frac{1}{16c} + \frac{1}{16\alpha} + \frac{p}{16}\Big)\max\big\{\|x^t-x^{t+1}\|^2,\ \|y^t - y_+^t(z^t)\|^2,\ \|x^{t+1}-z^t\|^2\big\} \le (\phi^0 - \underline{f})/(T\beta).
$$
Hence, by Lemma B.11, $(x^{t+1},y^{t+1})$ is a $\sqrt{(\phi^0 - \underline{f})/\big((1/(16c) + 1/(16\alpha) + p/16)\,T\beta\big)}$-solution. According to the above analysis, if (B.46) holds for every $t$, we can attain an $\epsilon$-solution within $(\phi^0 - \underline{f})/\big(\beta(1/(16c) + 1/(16\alpha) + p/16)\,\epsilon^2\big)$ iterations. Moreover, if (B.46) holds for every $t$, by (B.47) we have

$$
\max\big\{\|x^t - x^{t+1}\|,\ \|y^t - y_+^t(z^t)\|,\ \|z^t - x^{t+1}\|\big\} \to 0. \quad\text{(B.50)}
$$
Consequently, for any limit point $(\bar{x},\bar{y})$ of $(x^t,y^t)$, there exists a $\bar{z}$ such that

$$
\max\big\{\|\bar{x} - \bar{x}_+(\bar{y},\bar{z})\|,\ \|\bar{y} - \bar{y}_+(\bar{z})\|,\ \|\bar{x}_+(\bar{y},\bar{z}) - \bar{z}\|\big\} = 0,
$$
which yields that $(\bar{x},\bar{y})$ is a stationary solution. Here

$$
x_+(y,z) = P_X\big(x - \nabla_x K(x,z;y)\big).
$$
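The map $x_+$ (together with its companion $y_+$) defines the residuals that the analysis drives to zero, and in an implementation one would monitor exactly the quantity in (B.50). A minimal sketch, assuming a projection oracle `proj_X` and treating $y_+(z)$ as supplied by an oracle, since it involves the exact minimizer $x(y,z)$:

```python
import numpy as np

def x_plus(x, y, z, grad_x_f, proj_X, p=1.0):
    """x_+(y, z) = P_X(x - grad_x K(x, z; y)) with
    K(x, z; y) = f(x, y) + (p/2) * ||x - z||^2."""
    return proj_X(x - (grad_x_f(x, y) + p * (x - z)))

def stationarity_residual(x, y, z, xp, yp):
    """Max residual of (B.50); xp = x_+(y, z), yp = y_+(z)."""
    return max(np.linalg.norm(x - xp),
               np.linalg.norm(y - yp),
               np.linalg.norm(xp - z))
```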

Now we are ready to prove Theorem 3.4.

Proof [Proof of Theorem 3.4] There are two cases, (B.51) and (B.52), as discussed in the proof for the general nonconvex-concave problems in the last subsection.

1. For some $t \in \{0, 1, \dots, T-1\}$, we have
$$
\frac{1}{2}\max\Big\{\frac{1}{8c}\|x^t-x^{t+1}\|^2,\ \frac{1}{8\alpha}\|y^t - y_+^t(z^t)\|^2,\ \frac{p}{8\beta}\|z^t-z^{t+1}\|^2\Big\} \le 24p\beta\,\|x^*(z^t) - x(y_+^t(z^t),z^t)\|^2. \quad\text{(B.51)}
$$

2. For any $t \in \{0, 1, \dots, T-1\}$,
$$
\frac{1}{2}\max\Big\{\frac{1}{8c}\|x^t-x^{t+1}\|^2,\ \frac{1}{8\alpha}\|y^t - y_+^t(z^t)\|^2,\ \frac{p}{8\beta}\|z^t-z^{t+1}\|^2\Big\} \ge 24p\beta\,\|x^*(z^t) - x(y_+^t(z^t),z^t)\|^2. \quad\text{(B.52)}
$$

In the first case (B.51), we have

$$
\begin{aligned}
\|y^t - y_+^t(z^t)\|^2 &\le 384p\beta\alpha\,\|x(y_+^t(z^t),z^t) - x^*(z^t)\|^2 \\
&\le 384p\beta\alpha\,\frac{1+\alpha L}{p-L}\,\|y^t - y_+^t(z^t)\|\,D(Y).
\end{aligned}
$$
Hence, letting $\lambda_1 = 384p\alpha\,\frac{1+\alpha L}{p-L}\,D(Y)$, we have
$$
\|y^t - y_+^t(z^t)\| \le \lambda_1\beta. \quad\text{(B.53)}
$$
Moreover,

$$
\begin{aligned}
\|x^{t+1} - z^t\|^2 &= \|(z^{t+1}-z^t)/\beta\|^2 \quad\text{(B.54)} \\
&\overset{(i)}{\le} 384p\,\|x(y_+^t(z^t),z^t) - x^*(z^t)\|^2 \quad\text{(B.55)} \\
&\overset{(ii)}{\le} 384p\,\frac{1+\alpha L}{p-L}\,D(Y)\,\|y^t - y_+^t(z^t)\| \quad\text{(B.56)} \\
&\overset{(iii)}{\le} 384p\,\frac{1+\alpha L}{p-L}\,D(Y)\,\lambda_1\beta, \quad\text{(B.57)}
\end{aligned}
$$
where (i) is due to (B.51), (ii) is because of Lemma B.10, and (iii) is due to (B.53). We also have

$$
\begin{aligned}
\|x^t - x^{t+1}\|^2 &\overset{(i)}{\le} 384cp\beta\,\|x^*(z^t) - x(y_+^t(z^t),z^t)\|^2 \quad\text{(B.58)} \\
&\overset{(ii)}{\le} 384pc\beta\,\frac{1+\alpha L}{p-L}\,D(Y)\,\|y^t - y_+^t(z^t)\| \quad\text{(B.59)} \\
&\overset{(iii)}{\le} 384pc\,\frac{1+\alpha L}{p-L}\,\lambda_1 D(Y)\,\beta^2, \quad\text{(B.60)}
\end{aligned}
$$
where (i) is due to (B.51), (ii) is due to Lemma B.10, and (iii) is because of (B.53). Combining the above, in the first case we have

$$
\begin{aligned}
&\max\big\{\|x^t-x^{t+1}\|^2,\ \|y^t - y_+^t(z^t)\|^2,\ \|z^t - x^{t+1}\|^2\big\} \quad\text{(B.61)} \\
&\le \max\{\lambda_2\beta^2,\ \lambda_1^2\beta^2,\ \lambda_3\beta\}, \quad\text{(B.62)}
\end{aligned}
$$
where $\lambda_2 = 384p\,\frac{1+\alpha L}{p-L}\,D(Y)\lambda_1$ and $\lambda_3 = 192pc\,\frac{1+\alpha L}{p-L}\,\lambda_1 D(Y)$. According to Lemma B.11, there exists a $\lambda > 0$ such that $(x^{t+1},y^{t+1})$ is a $\lambda\max\{\beta, \sqrt{\beta}\}$-solution.

In the second case, we have
$$
\phi^t - \phi^{t+1} \ge \frac{1}{16c}\|x^t-x^{t+1}\|^2 + \frac{1}{16\alpha}\|y^t - y_+^t(z^t)\|^2 + \frac{1}{16\beta}\|z^t-z^{t+1}\|^2
$$
for any $t \in \{0,1,\dots,T-1\}$. By Lemma B.12, there exists a $t \in \{0,1,\dots,T-1\}$ such that $(x^{t+1},y^{t+1})$ is a $\sqrt{(\phi^0-\underline{f})/\big((1/(16c)+1/(16\alpha)+p/16)\,T\beta\big)}$-solution. Finally, taking $\beta = 1/\sqrt{T}$ and combining the two cases with Lemma B.12 yield the desired results.

B.3 The max problem is over a discrete set

In this subsection, we prove Theorem 3.8. We will show that, under the strict complementarity assumption, the potential function $\phi^t$ decreases sufficiently after every iteration; Theorem 3.8 then follows from the simple lemma below. By the bounded level set assumption (Assumption 3.7) and the fact that $\psi(z) \le P(z)$, for any $(x^0,y^0,z^0) \in \mathbb{R}^{n+m+n}$ there exists a constant $R(x^0,y^0,z^0) > 0$ such that
$$
\{z \mid P(z) \le \phi(x^0,y^0,z^0)\} \subseteq \mathcal{B}\big(R(x^0,y^0,z^0)\big).
$$
Then we have the following "dual error bound". Note that this error bound is homogeneous, in contrast to Lemma B.10.

Lemma B.13 Let $x_+(y,z) = P_X\big(x - \nabla_x K(x,z;y)\big)$. If the strict complementarity assumption and the bounded level set assumption hold for (1.2), there exists $\delta > 0$ such that if
$$
\|z\| \le R(x^0,y^0,z^0)
$$
and
$$
\max\big\{\|x - x_+(y,z)\|,\ \|y - y_+(z)\|,\ \|x_+(y,z) - z\|\big\} < \delta,
$$
we have
$$
\|x(y_+(z),z) - x^*(z)\| < \sigma_5\,\|y - y_+(z)\|
$$
for some constant $\sigma_5 > 0$.

Equipped with the dual error bound, we can prove that the potential function decreases after every iteration:

Proposition B.14 Suppose the conditions in Theorem 3.8 hold. Then we have
$$
\phi^t - \phi^{t+1} \ge \frac{1}{16c}\|x^t-x^{t+1}\|^2 + \frac{1}{16\alpha}\|y^t - y_+^t(z^t)\|^2 + \frac{p}{16\beta}\|z^t-z^{t+1}\|^2. \quad\text{(B.63)}
$$

Proof We set $\beta < \min\big\{\delta/\sqrt{\lambda_2},\ \delta/\lambda_1,\ \delta^2/\lambda_3,\ 1/(384p\alpha\sigma_5^2)\big\}$. First, we prove that
$$
\phi^t - \phi^{t+1} \ge \frac{1}{16c}\|x^t-x^{t+1}\|^2 + \frac{1}{16\alpha}\|y^t - y_+^t(z^t)\|^2 + \frac{p}{16\beta}\|z^t-z^{t+1}\|^2 \quad\text{(B.64)}
$$
and $\|z^t\| < R(x^0,y^0,z^0)$ for any $t \ge 0$. We prove it by induction. We will prove that:

1. If $\|z^t\| \le R(x^0,y^0,z^0)$, then
$$
\phi^t - \phi^{t+1} \ge \frac{1}{16c}\|x^t-x^{t+1}\|^2 + \frac{1}{16\alpha}\|y^t - y_+^t(z^t)\|^2 + \frac{p}{16\beta}\|z^t-z^{t+1}\|^2. \quad\text{(B.65)}
$$

2. If $\phi^{t+1} \le \phi^t$, then $\|z^{t+1}\| \le R(x^0,y^0,z^0)$.

For $t = 0$, it is trivial that $\|z^t\| \le R(x^0,y^0,z^0)$. For the first step, assume that $\|z^t\| \le R(x^0,y^0,z^0)$. There are two cases:

1. For some $t$, we have
$$
\frac{1}{2}\max\Big\{\frac{1}{8c}\|x^t-x^{t+1}\|^2,\ \frac{1}{8\alpha}\|y^t - y_+^t(z^t)\|^2,\ \frac{p}{8\beta}\|z^t-z^{t+1}\|^2\Big\} \le 24p\beta\,\|x^*(z^t) - x(y_+^t(z^t),z^t)\|^2. \quad\text{(B.66)}
$$

2. For any $t$,
$$
\frac{1}{2}\max\Big\{\frac{1}{8c}\|x^t-x^{t+1}\|^2,\ \frac{1}{8\alpha}\|y^t - y_+^t(z^t)\|^2,\ \frac{p}{8\beta}\|z^t-z^{t+1}\|^2\Big\} \ge 24p\beta\,\|x^*(z^t) - x(y_+^t(z^t),z^t)\|^2. \quad\text{(B.67)}
$$

For the first case, as in the last subsection, we have

$$
\max\big\{\|x^t-x^{t+1}\|^2,\ \|y^t - y_+^t(z^t)\|^2,\ \|x^{t+1}-z^t\|^2\big\} \le \max\{\lambda_2\beta^2,\ \lambda_1^2\beta^2,\ \lambda_3\beta\} \le \delta^2.
$$
Hence, we can make use of Lemma B.13. In fact, we have

$$
24p\beta\,\|x(y_+^t(z^t),z^t) - x^*(z^t)\|^2 \le 24p\beta\sigma_5^2\,\|y^t - y_+^t(z^t)\|^2 \le \frac{1}{16\alpha}\|y^t - y_+^t(z^t)\|^2,
$$
which yields (B.46) together with (B.15). For the second case, (B.46) holds as in the last subsection. Hence, if $\|z^t\| \le R(x^0,y^0,z^0)$, we have (B.46). For the second step, if (B.46) holds for $0, 1, \dots, t-1$, we have

$$
P(z^{t+1}) \le \phi^{t+1} \le \phi^0.
$$
Hence, $z^{t+1} \in \mathcal{B}\big(R(x^0,y^0,z^0)\big)$. Combining these, for any $t \ge 0$, $\|z^t\| \le R(x^0,y^0,z^0)$ and (B.46) holds. The theorem then follows from Lemma B.12.

C The multi-block cases

The proof for the multi-block case is similar to that for the one-block case, so in this section we only outline it. Note that the only differences in proving the theorems are Lemma B.5, (B.5) and Proposition 4.1. Instead, we have the following:

Lemma C.1 (Primal Descent) For any $t$, we have
$$
K(x^t,z^t;y^t) - K(x^{t+1},z^{t+1};y^{t+1}) \ge \frac{1}{2c}\|x^t-x^{t+1}\|^2 + \big\langle \nabla_y K(x^{t+1},z^t;y^t),\, y^t - y^{t+1}\big\rangle - \frac{L}{2}\|y^t-y^{t+1}\|^2 + \frac{p}{2\beta}\|z^t-z^{t+1}\|^2. \quad\text{(C.1)}
$$

The proof is the same as that of Lemma 5.3 in [30]. The error bound (B.5) becomes:

Lemma C.2 We have
$$
\|x^{t+1} - x(y^t,z^t)\| \le \sigma_3'\,\|x^t-x^{t+1}\|,
$$
where $\sigma_3' = \big(c(p-L) + 1 + c(L+p)N^{3/2}\big)/\big(c(p-L)\big)$.

The proof of Lemma C.2 is similar to that of Lemma 5.2 in [30] and hence omitted here. Because of the above two differences, we have a replacement of Proposition B.4:

Proposition C.3 We let

$$
p > 3L, \quad c < \frac{1}{p+L}, \quad \alpha < \min\Big\{\frac{1}{11L},\ \frac{c^2(p-L)^2}{4L\big(1 + c(p+L)N^{3/2} + c(p-L)\big)^2}\Big\}, \quad \beta < \min\Big\{\frac{1}{36},\ \frac{(p-L)^2}{384p(p+L)^2}\Big\}. \quad\text{(C.2)}
$$

Then we have

$$
\phi^t - \phi^{t+1} \ge \frac{1}{4c}\|x^t-x^{t+1}\|^2 + \frac{1}{4\alpha}\|y^t - y_+^t(z^t)\|^2 + \frac{p}{8\beta}\|z^t-z^{t+1}\|^2 - 24p\beta\,\|x^*(z^t) - x(y_+^t(z^t),z^t)\|^2. \quad\text{(C.3)-(C.5)}
$$
The proof of Proposition C.3 is similar to that of Proposition B.4 and hence omitted.

D Proof of the error bound lemmas

D.1 Proof of Lemma B.10

Let
$$
x_+(y,z) = P_X\big(x - c\nabla_x K(x,z;y)\big) \quad\text{and}\quad y_+(z) = P_Y\big(y + \alpha\nabla_y K(x(y,z),z;y)\big).
$$
Then Lemma B.10 can be written as:

Lemma D.1 We have

$$
(p-L)\,\|x^*(z) - x(y_+(z),z)\|^2 < (1+\alpha L)\,\|y - y_+(z)\| \cdot \mathrm{dist}\big(y_+(z), Y(z)\big) \le (1+\alpha L)\,\|y - y_+(z)\| \cdot D(Y),
$$
where $D(Y)$ is the diameter of $Y$.

Proof By the strong convexity of $K(\cdot,z;y)$, we have
$$
\begin{aligned}
K(x^*(z),z;y_+(z)) - K(x(y_+(z),z),z;y_+(z)) &\ge \frac{p-L}{2}\,\|x(y_+(z),z) - x^*(z)\|^2, \quad\text{(D.1)} \\
K(x(y_+(z),z),z;y(z)) - K(x^*(z),z;y(z)) &\ge \frac{p-L}{2}\,\|x(y_+(z),z) - x^*(z)\|^2, \quad\text{(D.2)}
\end{aligned}
$$

where $y(z)$ is an arbitrary vector in $Y(z)$. Notice that $y_+(z)$ is the maximizer of the following problem:
$$
\max_{\bar{y}\in Y}\ \big\{K(x(y_+(z),z),z;\bar{y}) - \delta^T(y,y_+(z);z)\,\bar{y}\big\},
$$
where

$$
\delta(y,y_+(z);z) = \big(y_+(z) + \alpha\nabla_{\bar{y}}K(x(y_+(z),z),z;y_+(z))\big) - \big(y + \alpha\nabla_{\bar{y}}K(x(y_+(z),z),z;y)\big)
$$
satisfies
$$
\|\delta(y,y_+(z);z)\| < (1+\alpha L)\,\|y - y_+(z)\|,
$$
by the Lipschitz continuity of $\nabla_y K = \nabla_y f$. Hence, we have
$$
K(x(y_+(z),z),z;y(z)) - \delta^T(y,y_+(z);z)\,y(z) \le K(x(y_+(z),z),z;y_+(z)) - \delta^T(y,y_+(z);z)\,y_+(z).
$$
Then, we have the following estimates:

$$
\begin{aligned}
&K(x(y_+(z),z),z;y(z)) - K(x(y_+(z),z),z;y_+(z)) \quad\text{(D.3)} \\
&\le (y(z) - y_+(z))^T\,\delta(y,y_+(z);z) \quad\text{(D.4)} \\
&\le \|y_+(z) - y(z)\| \cdot (1+\alpha L)\,\|y - y_+(z)\|. \quad\text{(D.5)}
\end{aligned}
$$
Also, because $y(z)$ maximizes $\max_{\bar{y}\in Y} K(x^*(z),z;\bar{y})$, we have
$$
K(x^*(z),z;y(z)) \ge K(x^*(z),z;y_+(z)). \quad\text{(D.6)}
$$
Since $y(z)$ is an arbitrary vector in $Y(z)$, combining (D.1), (D.3) and (D.6), we have

$$
(p-L)\,\|x^*(z) - x(y_+(z),z)\|^2 < (1+\alpha L)\,\|y - y_+(z)\| \cdot \mathrm{dist}\big(y_+(z), Y(z)\big),
$$
which is the desired result.

D.2 Proof of Lemma B.13

For a pair of min-max solutions of (1.2), the following KKT conditions hold:

$$
\begin{aligned}
J F(x^*)^T y &= 0, \quad\text{(D.7)} \\
\sum_{i=1}^m y_i &= 1, \quad\text{(D.8)} \\
y_i &\ge 0, \quad \forall i \in [m], \quad\text{(D.9)} \\
\mu - \nu_i &= f_i(x), \quad \forall i \in [m], \quad\text{(D.10)} \\
\nu_i \ge 0,\quad \nu_i y_i &= 0, \quad \forall i \in [m], \quad\text{(D.11)}
\end{aligned}
$$
where $\mu$ is the multiplier of the equality constraint $\sum_{i=1}^m y_i = 1$ and $\nu_i$ is the multiplier of the inequality constraint $y_i \ge 0$.

Definition D.2 For $y \in Y$, we define the active set
$$
\mathcal{A}[y] = \{i \in [m] \mid y_i = 0\}.
$$
We also define the inactive set of $y$ as follows:

$$
\mathcal{I}[y] = \{i \in [m] \mid y_i > 0\}.
$$

Definition D.3 For an $x \in \mathbb{R}^n$, we define the top coordinate set $\mathcal{T}(x)$ as the collection of all indices of the top coordinates of $F(x)$, i.e., $f_i(x) > f_j(x)$ if $i \in \mathcal{T}(x)$, $j \notin \mathcal{T}(x)$, and $f_i(x) = f_j(x)$ if $i, j \in \mathcal{T}(x)$. For instance, if $F(x) = (3, 3, 1)^T$, then $\mathcal{T}(x) = \{1, 2\}$.

According to the KKT conditions, it is easy to see that $\mathcal{I}[y] \subseteq \mathcal{T}(x)$ for $(x,y) \in W^*$. Recall that we have the following strict complementarity condition:

Assumption D.4 For any (x,y) satisfying (D.7), we have

$$
\nu_i > 0, \quad \forall i \in \mathcal{A}[y].
$$

It is easy to see that if the strict complementarity assumption holds, then $\mathcal{I}[y] = \mathcal{T}(x)$ for $(x,y) \in W^*$. Then we can prove the following "dual error bound".

Lemma D.5 If the strict complementarity assumption holds for (1.2), there exists $\delta > 0$ such that if
$$
\|z\| \le R(x^0,y^0,z^0)
$$
and
$$
\max\big\{\|x - x_+(y,z)\|,\ \|y - y_+(z)\|,\ \|x_+(y,z) - z\|\big\} < \delta,
$$
we have
$$
\|x(y_+(z),z) - x^*(z)\| < \sigma_5\,\|y - y_+(z)\|
$$
for some constant $\sigma_5 > 0$.

To prove this, we need the following lemmas. First, we prove that if the residuals go to zero, the iteration points converge to a solution.

Lemma D.6 If $\{z^k\}$ is a sequence with $\|z^k\| \le R(x^0,y^0,z^0)$ and
$$
\max\big\{\|x^k - x_+^k(y^k,z^k)\|,\ \|y^k - y_+^k(z^k)\|,\ \|x_+^k(y^k,z^k) - z^k\|\big\} \to 0,
$$
then there exists a subsequence of $\{z^k\}$ converging to some $\bar{z} \in X^*$.

Proof This is a direct corollary of Lemma B.11.

Lemma D.7 Let $M(x) = \big[J_{\mathcal{T}(x)}F(x),\ \mathbf{1}\big]$, where $J_{\mathcal{T}(x)}F(x)$ is the Jacobian of $F$ restricted to the rows in $\mathcal{T}(x)$ and $\mathbf{1}$ is the all-ones column. Then if $(x,y) \in W^*$, the matrix $M(x)$ is of full row rank.

Proof We prove it by contradiction. Suppose that for some $(x^*,y^*,\mu^*,\nu^*)$ satisfying (D.7), $M(x^*)$ is not of full row rank. Without loss of generality, we assume that $\mathcal{T}(x^*) = \{1, 2, \dots, |\mathcal{T}(x^*)|\}$. Then there exists a nonzero vector $v \in \mathbb{R}^{|\mathcal{T}(x^*)|}$ such that
$$
M^T(x^*)\,v = 0.
$$

Let $d = \min_{i\in\mathcal{I}[y^*]}\{y_i/|v_i|\}$. Then we define a vector $y' \in \mathbb{R}^m$ as:
$$
y_i' = \begin{cases} y_i^* - d v_i, & i \in \mathcal{I}[y^*]; \\ 0, & \text{otherwise}. \end{cases}
$$

Notice that $y_i^* = 0$ for any $i \notin \mathcal{T}(x^*)$. Then $y'$ satisfies
$$
J F(x^*)^T y' = 0, \qquad \sum_{i=1}^m y_i' = 1, \qquad y_i' \ge 0 \ \ \text{for } i \in \mathcal{T}(x^*), \qquad y_i' = 0 \ \ \text{for } i \notin \mathcal{T}(x^*).
$$
Therefore, $(x^*, y', \mu, \nu)$ still satisfies (D.7). Moreover, let $i_0 \in \mathcal{I}[y^*]$ attain $d = y_{i_0}/|v_{i_0}|$. Then $y_{i_0}' = \nu_{i_0} = 0$, which contradicts the strict complementarity assumption.

We then have the following corollary of the above lemma and (D.7):

Corollary D.8 For any $x^* \in X^*$, there exists only one $y \in Y$ such that $(x^*,y) \in W^*$, and there exists only one $(\mu,\nu)$ such that $(x^*,y,\mu,\nu)$ satisfies (D.7).

Proof First, this $(y,\mu,\nu)$ must exist due to the existence of a solution. Next, the solution $y$ must satisfy
$$
M^T(x^*)\,y = (0, 0, \dots, 0, 1)^T.
$$
By Lemma D.7, $M^T(x^*)$ is of full column rank, hence the solution $y$ is unique. Furthermore, since $\sum_{i=1}^m y_i = 1$, there is at least one $i$ such that $y_i > 0$, $\nu_i = 0$. Without loss of generality, we assume that $y_1 > 0$, $\nu_1 = 0$. Then $\mu = f_1(x^*)$ by (3.5). Further, by (3.5), $\nu_i = \mu - f_i(x^*)$ for $i = 2, 3, \dots, m$. Hence, $\mu$ and the $\nu_i$ are uniquely defined.

Lemma D.9 If the strict complementarity assumption holds for (1.2), there exist $\delta > 0$, $\gamma > 0$, such that if
$$
\|z\| \le R(x^0,y^0,z^0)
$$
and
$$
\max\big\{\|x - x_+(y,z)\|,\ \|y - y_+(z)\|,\ \|x_+(y,z) - z\|\big\} < \delta,
$$
then $\gamma(M(x^*(z))) \ge \gamma$ and $\gamma(M(x(y,z))) \ge \gamma$, where $\gamma(\cdot)$ denotes the least singular value.

Proof We prove it by contradiction. Suppose it is not true; then there exists $\{z^k\} \subseteq \mathcal{B}\big(R(x^0,y^0,z^0)\big)$ such that $\gamma(M(x^*(z^k))) \to 0$ and
$$
\max\big\{\|x^k - x_+^k(y^k,z^k)\|,\ \|y^k - y_+^k(z^k)\|,\ \|x_+^k(y^k,z^k) - z^k\|\big\} \to 0.
$$
Since $\mathcal{T}(x)$ has only finitely many possible values, without loss of generality we assume that $\mathcal{T}(x^*(z^k)) = \mathcal{T}$ for all $k$ (passing to a subsequence if necessary). By Lemma D.6, there exists a $\bar{z} \in X^*$ such that $z^k \to \bar{z}$. We let
$$
\tilde{M}(\bar{z}) = \lim_{k\to\infty} M(x^*(z^k)) = \big[J_{\mathcal{T}}F(x^*(\bar{z})),\ \mathbf{1}\big].
$$
By the continuity of $x^*(\cdot)$ ((B.3) of Lemma B.2) and the continuity of the least singular value, we know that $\gamma(\tilde{M}(\bar{z})) = 0$, where we also use the fact that $x^*(\bar{z}) = \bar{z}$ by Lemma B.8. Moreover, according to the definition of $\mathcal{T}$, we have $f_i(x^*(z^k)) > f_j(x^*(z^k))$ for any $k$ with $i \in \mathcal{T}$, $j \notin \mathcal{T}$. Therefore, $f_i(\bar{z}) \ge f_j(\bar{z})$ for $i \in \mathcal{T}$, $j \notin \mathcal{T}$. Consequently, we have $\mathcal{T} \subseteq \mathcal{T}[\bar{z}]$. Therefore $\tilde{M}(\bar{z})$ is a row submatrix of $M(\bar{z})$, so $M(\bar{z})$ is not of full row rank, a contradiction. For $x(y,z)$, the proof of the desired result is similar, hence the details are omitted.

The following lemma shows that if the residuals are small, the active sets of $y_+(z)$ and of $y(z) \in Y(z)$ are the same.

Lemma D.10 If the strict complementarity assumption holds for (1.2), there exists $\delta > 0$ such that if
$$
\|z\| \le R(x^0,y^0,z^0)
$$
and
$$
\max\big\{\|x - x_+(y,z)\|,\ \|y - y_+(z)\|,\ \|x_+(y,z) - z\|\big\} < \delta,
$$
we have $\mathcal{A}[y_+(z)] = \mathcal{A}[y(z)]$ for some $y(z) \in Y(z)$.

Proof We prove it by contradiction. Suppose that there exists a sequence $\{(x^k,y^k,z^k)\}$ such that
$$
\max\big\{\|x^k - x_+^k(y^k,z^k)\|,\ \|y^k - y_+^k(z^k)\|,\ \|x_+^k(y^k,z^k) - z^k\|\big\} \to 0
$$
and $\mathcal{A}[y_+^k(z^k)] \ne \mathcal{A}[y(z^k)]$. Since $\{y_+^k(z^k)\}$ and $\{z^k\}$ are bounded, we assume that $y_+^k(z^k) \to \bar{y}$, $z^k \to \bar{z}$. We write down the KKT conditions for $\big(x(y_+^k(z^k),z^k),\ y_+^k(z^k)\big)$ as follows:
$$
\begin{aligned}
J F(x(y_+^k(z^k),z^k))^T y_+^k(z^k) + p\big(x(y_+^k(z^k),z^k) - z^k\big) &= 0, \quad\text{(D.12)} \\
\sum_{i=1}^m (y_i^k)_+(z^k) &= 1, \quad\text{(D.13)} \\
(y_i^k)_+(z^k) &\ge 0, \quad \forall i \in [m], \quad\text{(D.14)} \\
\frac{1}{\alpha}(y_i^k)_+(z^k) - \frac{1}{\alpha}y_i^k + f_i(x(y^k,z^k)) + \mu^k - \nu_i^k &= f_i(x(y_+^k(z^k),z^k)), \quad \forall i \in [m], \quad\text{(D.15)} \\
\nu_i^k \ge 0,\quad \nu_i^k\,(y_i^k)_+(z^k) &= 0, \quad \forall i \in [m]. \quad\text{(D.16)}
\end{aligned}
$$

It is not hard to check that $\mu^k, \nu^k$ are bounded. Hence, we assume that $\mu^k \to \bar{\mu}$ and $\nu^k \to \bar{\nu}$. We take the limit in (D.12) and make use of the fact that
$$
\|y^k - y_+^k(z^k)\| \to 0
$$
together with Lemma B.2. We then obtain that $(x(\bar{y},\bar{z}), \bar{y})$ is a min-max solution of (1.2), i.e., $(x(\bar{y},\bar{z}), \bar{y}, \bar{\mu}, \bar{\nu})$ satisfies (D.7). By the strict complementarity assumption, $\bar{\nu}_i > 0$ for $i \in \mathcal{A}[\bar{y}]$ and $\bar{y}_i > 0$ for $i \notin \mathcal{A}[\bar{y}]$. Hence, for $k$ sufficiently large, we have $\mathcal{A}[y_+^k(z^k)] = \mathcal{A}[\bar{y}]$. Similarly, when $k$ is sufficiently large, we have $\mathcal{A}[y(z^k)] = \mathcal{A}[\bar{y}]$. This contradicts $\mathcal{A}[y_+^k(z^k)] \ne \mathcal{A}[y(z^k)]$ and finishes the proof.

We also write down the KKT conditions for $x^*(z)$ for some $z$:

$$
\begin{aligned}
J F(x^*(z))^T y + p(x^*(z) - z) &= 0, \quad\text{(D.17)} \\
\sum_{i=1}^m y_i &= 1, \quad\text{(D.18)} \\
y_i &\ge 0, \quad \forall i \in [m], \quad\text{(D.19)} \\
\mu - \nu_i &= f_i(x^*(z)), \quad \forall i \in [m], \quad\text{(D.20)} \\
\nu_i \ge 0,\quad \nu_i y_i &= 0, \quad \forall i \in [m]. \quad\text{(D.21)}
\end{aligned}
$$

Lemma D.11 If the strict complementarity assumption holds for (1.2), there exists $\delta > 0$ such that if
$$
\|z\| \le R(x^0,y^0,z^0)
$$
and
$$
\max\big\{\|x - x_+(y,z)\|,\ \|y - y_+(z)\|,\ \|x_+(y,z) - z\|\big\} < \delta,
$$
we have
$$
\mathrm{dist}\big(y_+(z), y(z)\big) < \lambda\,\|x^*(z) - x(y_+(z),z)\|
$$
for some constant $\lambda > 0$.

Proof By Lemma D.10, if the strict complementarity assumption holds for (1.2), there exists $\delta > 0$ such that if $\|z\| \le R(x^0,y^0,z^0)$ and
$$
\max\big\{\|x - x_+(y,z)\|,\ \|y - y_+(z)\|,\ \|x_+(y,z) - z\|\big\} < \delta,
$$
we have $\mathcal{A}[y_+(z)] = \mathcal{A}[y(z)]$ for some $y(z) \in Y(z)$. Hence, we have
$$
\mathcal{T}(x^*(z)) = \mathcal{T}(x(y_+(z),z)).
$$

Let $\mathcal{T} = \mathcal{T}(x^*(z))$. Then for $i \notin \mathcal{T}$, $y_i(z) = (y_+(z))_i = 0$ and $\|y(z) - y_+(z)\| = \|(y(z))_{\mathcal{T}} - (y_+(z))_{\mathcal{T}}\|$. Using the optimality conditions for $x(y_+(z),z)$ (D.12) and $x^*(z)$ (D.17), we have
$$
M^T(x(y_+(z),z))\,(y_+(z))_{\mathcal{T}} + \begin{pmatrix} p(x(y_+(z),z) - z) \\ 0 \end{pmatrix} = (0, 0, \dots, 0, 1)^T, \quad\text{(D.22)}
$$
and
$$
M^T(x^*(z))\,(y(z))_{\mathcal{T}} + \begin{pmatrix} p(x^*(z) - z) \\ 0 \end{pmatrix} = (0, 0, \dots, 0, 1)^T. \quad\text{(D.23)}
$$
Note that (D.22) can be written as
$$
M^T(x^*(z))\,(y_+(z))_{\mathcal{T}} = \big(M^T(x^*(z)) - M^T(x(y_+(z),z))\big)(y_+(z))_{\mathcal{T}} + (0,\dots,0,1)^T - \begin{pmatrix} p(x(y_+(z),z) - z) \\ 0 \end{pmatrix}. \quad\text{(D.24)}
$$
By (D.23) and (D.24), we have
$$
M^T(x^*(z))\big((y(z))_{\mathcal{T}} - (y_+(z))_{\mathcal{T}}\big) = \big(M^T(x(y_+(z),z)) - M^T(x^*(z))\big)(y_+(z))_{\mathcal{T}} + \begin{pmatrix} p(x(y_+(z),z) - x^*(z)) \\ 0 \end{pmatrix}.
$$
Therefore, taking norms of the above and using Lemma D.9, we have
$$
\begin{aligned}
\gamma\,\|(y_+(z))_{\mathcal{T}} - (y(z))_{\mathcal{T}}\| &\le \sqrt{m}L\,\|x(y_+(z),z) - x^*(z)\|\,\|(y_+(z))_{\mathcal{T}}\| + p\,\|x(y_+(z),z) - x^*(z)\| \\
&\le (\sqrt{m}L + p)\,\|x^*(z) - x(y_+(z),z)\|,
\end{aligned}
$$
where the first inequality uses the Lipschitz continuity of $\nabla_x f$ and the second is because $\|(y_+(z))_{\mathcal{T}}\| \le 1$. Hence, we finish the proof with $\lambda = (p + \sqrt{m}L)/\gamma$.

Proof [Proof of Lemma B.13] By Lemma B.10 and Lemma D.11, we have
$$
\|x(y_+(z),z) - x^*(z)\| \le \frac{(1+\alpha L)\,\lambda}{p-L}\,\|y - y_+(z)\|,
$$
which finishes the proof with $\sigma_5 = \frac{(1+\alpha L)\lambda}{p-L}$.

E Discussion of the strict complementarity condition

In this section, we discuss some issues concerning the strict complementarity assumption. First, notice that the min-max problems (1.1) and (1.2) are both variational inequalities. As mentioned in the main text of the paper, the strict complementarity assumption is common in the field of variational inequalities [39, 40]. While this assumption is popular, it is still interesting to weaken it. Inspired by Lemma D.7, we prove Theorem 3.8 and Lemma 4.2 using a weaker regularity assumption instead of the strict complementarity assumption:

Assumption E.1 For any $(x^*, y^*) \in W^*$, the matrix $M(x^*)$ is of full row rank.

Here recall that
$$
M(x^*) = \big[J_{\mathcal{T}(x^*)}F(x^*),\ \mathbf{1}\big].
$$
We say that Assumption E.1 is weaker because the strict complementarity assumption (Assumption D.4) implies Assumption E.1 according to Lemma D.7. For this assumption, we have the following two claims:

1. If we replace Assumption D.4 by Assumption E.1 in Theorem 3.8, we attain the same result;
2. In a robust regression problem (defined in Section E.2), if the data is drawn from a continuous distribution, this regularity assumption holds with probability 1.

E.1 Replacing Assumption D.4 by Assumption E.1 in Theorem 3.8

In this section, we show that the dual error bound (Lemma 4.2) can be proved using Assumption E.1 instead of Assumption D.4.

Lemma E.2 Let $x_+(y,z) = P_X\big(x - \nabla_x K(x,z;y)\big)$. If Assumption E.1 and the bounded level set assumption hold for (1.2), there exists $\delta > 0$ such that if
$$
\|z\| \le R(x^0,y^0,z^0)
$$
and
$$
\max\big\{\|x - x_+(y,z)\|,\ \|y - y_+(z)\|,\ \|x_+(y,z) - z\|\big\} < \delta,
$$
we have
$$
\|x(y_+(z),z) - x^*(z)\| < \sigma_5\,\|y - y_+(z)\|
$$
for some constant $\sigma_5 > 0$.

Using this lemma, we can prove Theorem 3.8 under Assumption E.1:

Theorem E.3 Consider solving problem (1.2) by Algorithm 2 or Algorithm 3. Suppose that Assumption E.1 holds and that either Assumption 3.7 holds or $\{z^t\}$ is bounded. Then there exist constants $\beta'$ and $\beta''$ (independent of $\epsilon$ and $T$) such that the following holds:

1. (One-block case) If we choose the parameters in Algorithm 2 as in (3.3) and further let $\beta < \beta'$, then
(a) every limit point of $(x^t,y^t)$ is a solution of (1.2);
(b) the iteration complexity of Algorithm 2 to obtain an $\epsilon$-stationary solution is $O(1/\epsilon^2)$.

2. (Multi-block case) Consider using Algorithm 3 to solve problem (1.2). If we replace the condition for $\alpha$ in (3.3) by (3.4) and require $\beta$ to satisfy $\beta < \epsilon^2$ and $\beta < \beta''$, then we have the same results as in the one-block case.

E.2 The rationality of Assumption E.1

Intuitively, Assumption E.1 holds for "generic" problems. We rigorously justify this intuition for a simple problem. More specifically, we prove that this regularity assumption is generic for a robust regression problem with the square loss, i.e., the regularity condition holds with probability 1 if the outputs of the data points are drawn from a continuous distribution. Consider the following problem:
$$
\min_{x\in\mathbb{R}^n}\ \max_{y\in Y}\ \sum_{i=1}^m \frac{1}{2}\,y_i\,\big(\ell_i - \Psi(x,\xi_i)\big)^2, \quad\text{(E.1)}
$$
where $Y$ is the probability simplex, $\Psi(\cdot)$ is a smooth function used to fit the data (for example, a neural network), and $\xi_i, \ell_i$ are the input and the output of the $i$-th data point. We define $\Psi_i(x) = \Psi(x,\xi_i)$ for convenience. We further make the following mild assumptions:

Assumption E.4 Each $\ell_i$ is drawn independently from a continuous distribution over a positive measure set $\mathcal{L}_i \subseteq \mathbb{R}$.

Here, a continuous distribution over $\mathcal{L}_i$ means that for any zero measure set $\mathcal{S} \subseteq \mathbb{R}$, $\Pr(x \in \mathcal{S}\cap\mathcal{L}_i) = 0$. With this assumption, for any zero measure set $\mathcal{S} \subseteq \mathbb{R}^m$, $\Pr\big((\ell_1,\dots,\ell_m)^T \in \mathcal{S}\cap\prod_{i=1}^m\mathcal{L}_i\big) = 0$.

Assumption E.5 Let $\Psi(x) = (\Psi_1(x),\dots,\Psi_m(x))^T$. Then $\Psi(\mathbb{R}^n)\cap\prod_{i=1}^m\mathcal{L}_i = \Omega$, where $\Omega$ is a zero measure set in $\prod_{i=1}^m\mathcal{L}_i$.

This assumption means that $\min_x\max_{i\in[m]} f_i(x) > 0$ with probability 1, which is reasonable: if there exists an $x^*$ such that $\max_i f_i(x^*) = 0$, then because $f_i(x) \ge 0$, we have $f_i(x^*) = 0$ for all $i$. In this case, we would not need the min-max formulation at all; we could simply solve the finite-sum problem $\min_x\sum_{i=1}^m f_i(x)$. However, in many cases the uncertainty is large and we do need the robust optimization formulation, so in these cases Assumption E.5 is reasonable. Moreover, we have the following lemma:

Lemma E.6 Suppose that Assumption E.4 holds. If $m > n$, then Assumption E.5 holds with probability 1.

Proof This follows directly from the claim that a smooth map sends a zero measure set to a zero measure set. Specializing to this lemma, the map $\Psi$ maps $\mathbb{R}^n$ into $\mathbb{R}^m$; since $m > n$, the domain $\mathbb{R}^n$ can be viewed as a zero measure subset of $\mathbb{R}^m$, so $\Psi(\mathbb{R}^n)$ is of zero measure. Therefore, $\Psi(\mathbb{R}^n)\cap\prod_{i=1}^m\mathcal{L}_i$ is of zero measure in $\mathbb{R}^m$.

Then we have the following result:

Proposition E.7 Suppose that Assumptions E.4 and E.5 hold. Then, with probability 1, every solution of (E.1) satisfies Assumption E.1.
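To make the setting of (E.1) concrete: since the inner maximization is linear in $y$ over the probability simplex, its value is simply the largest $f_i(x)$. A minimal sketch, where `Psi` is a hypothetical callable returning $(\Psi_1(x),\dots,\Psi_m(x))$ and `ell` collects the outputs $(\ell_1,\dots,\ell_m)$:

```python
import numpy as np

def worst_case_loss(x, ell, Psi):
    """Inner maximum of (E.1): the max over the simplex of
    sum_i y_i * f_i(x) equals max_i f_i(x)."""
    f = 0.5 * (ell - Psi(x)) ** 2   # f_i(x) = (l_i - Psi_i(x))^2 / 2
    return f.max()
```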

E.3 Proof of Lemma E.2 and Theorem E.3

For a set $\mathcal{S} \subseteq [m]$, we define
$$
M_{\mathcal{S}}(x) = \big[J_{\mathcal{S}}F(x;\ell_{\mathcal{S}}),\ \mathbf{1}\big], \quad\text{where}\quad J_{\mathcal{S}}F(x;\ell_{\mathcal{S}}) = \big((\Psi_i(x) - \ell_i)\nabla_x\Psi_i(x) \mid i \in \mathcal{S}\big).
$$
Similar to the proof of Theorem 3.8, to prove Theorem E.3 we only need to prove Lemma E.2; hence, in this section, we only prove Lemma E.2. The proof is similar to the proof of Lemma 4.2, so we only give the main steps. First, similar to Lemma D.9, we have the following:

Lemma E.8 If Assumption E.1 holds for problem (E.1), there exist $\delta > 0$, $\gamma > 0$, such that if
$$
\|z\| \le R(x^0,y^0,z^0)
$$
and
$$
\max\big\{\|x - x_+(y,z)\|,\ \|y - y_+(z)\|,\ \|x_+(y,z) - z\|\big\} < \delta,
$$
then $\gamma(M_{\mathcal{T}(y,z)}(x^*(z))) \ge \gamma$ and $\gamma(M_{\mathcal{T}(y,z)}(x(y_+(z),z))) \ge \gamma$, where
$$
\mathcal{T}(y,z) = \mathcal{T}(x^*(z)) \cup \mathcal{T}(x(y_+(z),z)).
$$

Proof We prove it by contradiction. Suppose it is not true; then there exist $\{x^k\}$, $\{y^k\} \subseteq Y$ and $\{z^k\} \subseteq \mathcal{B}\big(R(x^0,y^0,z^0)\big)$ such that $\gamma(M_{\mathcal{T}^k}(x^*(z^k))) \to 0$, $\gamma(M_{\mathcal{T}^k}(x(y_+^k(z^k),z^k))) \to 0$ and
$$
\max\big\{\|x^k - x_+^k(y^k,z^k)\|,\ \|y^k - y_+^k(z^k)\|,\ \|x_+^k(y^k,z^k) - z^k\|\big\} \to 0,
$$
where $\mathcal{T}^k = \mathcal{T}(x^*(z^k)) \cup \mathcal{T}(x(y_+^k(z^k),z^k))$. Since $\mathcal{T}^k$ has only finitely many possible values, without loss of generality we assume that $\mathcal{T}^k = \mathcal{T}$ for all $k$ (passing to a subsequence if necessary). By Lemma D.6, there exists a $\bar{z} \in X^*$ such that $z^k \to \bar{z}$. Hence, by Lemma B.2 and Lemma B.8, we have
$$
x^*(z^k) \to x^*(\bar{z}) = \bar{z}.
$$
Therefore, by the definition of $\mathcal{T}(x)$, when $k$ is sufficiently large, $\mathcal{T}(x^*(z^k)) \subseteq \mathcal{T}(x^*(\bar{z})) = \mathcal{T}(\bar{z})$. Moreover, since $\|y^k - y_+^k(z^k)\| \to 0$, by Lemma B.10 we have
$$
\|x(y_+^k(z^k),z^k) - x^*(z^k)\| \to 0,
$$
and hence $\mathcal{T}(x(y_+^k(z^k),z^k)) \subseteq \mathcal{T}(\bar{z})$. Then $\mathcal{T} \subseteq \mathcal{T}(\bar{z})$, so $M_{\mathcal{T}}$ is a row submatrix of $M(\bar{z})$ and $\gamma(M(\bar{z})) = 0$, which contradicts Assumption E.1.

We can then obtain a result similar to Lemma D.11:

Lemma E.9 If Assumption E.1 holds for (1.2), there exists $\delta > 0$ such that if

$$
\|z\| \le R(x^0,y^0,z^0)
$$
and
$$
\max\big\{\|x - x_+(y,z)\|,\ \|y - y_+(z)\|,\ \|x_+(y,z) - z\|\big\} < \delta,
$$
we have
$$
\mathrm{dist}\big(y_+(z), y(z)\big) < \lambda\,\|x^*(z) - x(y_+(z),z)\|
$$
for some constant $\lambda > 0$.

Proof By Lemma E.8, we can find $\delta > 0$ and $\gamma > 0$ such that if $\|z\| \le R(x^0,y^0,z^0)$ and
$$
\max\big\{\|x - x_+(y,z)\|,\ \|y - y_+(z)\|,\ \|x_+(y,z) - z\|\big\} < \delta,
$$
then $\gamma(M_{\mathcal{T}(y,z)}(x^*(z))) \ge \gamma$ and $\gamma(M_{\mathcal{T}(y,z)}(x(y_+(z),z))) \ge \gamma$, where $\mathcal{T}(y,z) = \mathcal{T}(x^*(z)) \cup \mathcal{T}(x(y_+(z),z))$.

Let $\mathcal{T} = \mathcal{T}(y,z)$. Then for $i \notin \mathcal{T}$, $y_i(z) = (y_+(z))_i = 0$ and $\|y(z) - y_+(z)\| = \|(y(z))_{\mathcal{T}} - (y_+(z))_{\mathcal{T}}\|$. Using the optimality conditions for $x(y_+(z),z)$ (D.12) and $x^*(z)$ (D.17), we have
$$
M_{\mathcal{T}}^T(x(y_+(z),z))\,(y_+(z))_{\mathcal{T}} + \begin{pmatrix} p(x(y_+(z),z) - z) \\ 0 \end{pmatrix} = (0, 0, \dots, 0, 1)^T, \quad\text{(E.2)}
$$
and
$$
M_{\mathcal{T}}^T(x^*(z))\,(y(z))_{\mathcal{T}} + \begin{pmatrix} p(x^*(z) - z) \\ 0 \end{pmatrix} = (0, 0, \dots, 0, 1)^T. \quad\text{(E.3)}
$$
Note that (E.2) can be written as
$$
M_{\mathcal{T}}^T(x^*(z))\,(y_+(z))_{\mathcal{T}} = \big(M_{\mathcal{T}}^T(x^*(z)) - M_{\mathcal{T}}^T(x(y_+(z),z))\big)(y_+(z))_{\mathcal{T}} + (0,\dots,0,1)^T - \begin{pmatrix} p(x(y_+(z),z) - z) \\ 0 \end{pmatrix}. \quad\text{(E.4)}
$$
By (E.3) and (E.4), we have
$$
M_{\mathcal{T}}^T(x^*(z))\big((y(z))_{\mathcal{T}} - (y_+(z))_{\mathcal{T}}\big) = \big(M_{\mathcal{T}}^T(x(y_+(z),z)) - M_{\mathcal{T}}^T(x^*(z))\big)(y_+(z))_{\mathcal{T}} + \begin{pmatrix} p(x(y_+(z),z) - x^*(z)) \\ 0 \end{pmatrix}.
$$
Therefore, taking norms of the above and using Lemma E.8, we have
$$
\begin{aligned}
\gamma\,\|(y_+(z))_{\mathcal{T}} - (y(z))_{\mathcal{T}}\| &\le \sqrt{m}L\,\|x(y_+(z),z) - x^*(z)\|\,\|(y_+(z))_{\mathcal{T}}\| + p\,\|x(y_+(z),z) - x^*(z)\| \\
&\le (\sqrt{m}L + p)\,\|x^*(z) - x(y_+(z),z)\|,
\end{aligned}
$$
where the first inequality uses the Lipschitz continuity of $\nabla_x f$ and the second is because $\|(y_+(z))_{\mathcal{T}}\| \le 1$. Hence, we finish the proof with $\lambda = (p + \sqrt{m}L)/\gamma$.

Then Lemma E.9 and Lemma 4.3 yield Theorem E.3.

E.4 Proof of Proposition E.7

For a set $\mathcal{S} \subseteq [m]$, we define
$$
M_{\mathcal{S}}(x;\ell_{\mathcal{S}}) = \big[J_{\mathcal{S}}F(x;\ell_{\mathcal{S}}),\ \mathbf{1}\big], \quad\text{where}\quad J_{\mathcal{S}}F(x;\ell_{\mathcal{S}}) = \big((\Psi_i(x) - \ell_i)\nabla_x\Psi_i(x) \mid i \in \mathcal{S}\big).
$$

Proof Define the event $\mathcal{E}_{\mathcal{T},\mathcal{P}}$ to be: there exists a solution $(x^*,y^*) \in W^*$ such that $M(x^*)$ is not of full row rank, $\mathcal{T}(x^*) = \mathcal{T}$, and $\Psi_i(x^*) - \ell_i \ge 0$ for $i \in \mathcal{P}$ and $\Psi_i(x^*) - \ell_i \le 0$ for $i \in \mathcal{T}\setminus\mathcal{P}$. Then Proposition E.7 is equivalent to the claim
$$
\Pr\Big(\bigcup_{\mathcal{T}\subseteq[m],\,\mathcal{P}\subseteq\mathcal{T}} \mathcal{E}_{\mathcal{T},\mathcal{P}}\Big) = 0.
$$
Since there are only finitely many choices of the sets $\mathcal{T}$ and $\mathcal{P}$, we only need to prove that for any $\mathcal{T} \subseteq [m]$ and $\mathcal{P} \subseteq \mathcal{T}$, $\mathcal{E}_{\mathcal{T},\mathcal{P}}$ holds with probability 0. Without loss of generality, we let $\mathcal{T} = \{1,2,\dots,k\}$ and $\mathcal{P} = \{1,2,\dots,p\}$ with $p \le k$. We define $\delta_i$ for $i \in [k]$ by $\delta_i = 1$ for $i \in \mathcal{P}$ and $\delta_i = -1$ otherwise. Then, if $\mathcal{E}_{\mathcal{T},\mathcal{P}}$ holds, there exist $x^* = (x_1^*,\dots,x_n^*)^T \in X^*$ and $x_{n+1}^* \in \mathbb{R}$ such that:

1. $(x_1^*,\dots,x_n^*)^T \in X^*$;
2. $x_{n+1}^* \ge 0$;
3. $\mathcal{T}(x^*) = \mathcal{T}$;
4. $\Psi_i(x^*) - \ell_i = x_{n+1}^* \ge 0$ for $i \in \mathcal{P}$ and $\Psi_i(x^*) - \ell_i = -x_{n+1}^* \le 0$ for $i \in \mathcal{T}\setminus\mathcal{P}$;
5. $M_{\mathcal{T}}(x_1^*,\dots,x_n^*;\ \ell_1,\dots,\ell_k)$ is row rank deficient.

Define $\bar{X}^*_{\mathcal{T},\mathcal{P}}(\ell_1,\dots,\ell_k)$ to be the set of all $x^* \in X^*$ satisfying the above conditions. Consider the map $G:\mathbb{R}^{n+1} \to \mathbb{R}^k$ defined as
$$
G(x_1,\dots,x_{n+1}) = \big(\Psi_1(x_1,\dots,x_n) - \delta_1 x_{n+1},\ \dots,\ \Psi_k(x_1,\dots,x_n) - \delta_k x_{n+1}\big)^T.
$$
Then $G(x_1^*,\dots,x_{n+1}^*) = (\ell_1,\dots,\ell_k)$ for any $(x_1^*,\dots,x_{n+1}^*)^T \in \bar{X}^*_{\mathcal{T},\mathcal{P}}(\ell_1,\dots,\ell_k)$. Define the set $\bar{X}_{\mathcal{T},\mathcal{P}} \subseteq \mathbb{R}^{n+1}$ to be the collection of all $(x_1,\dots,x_{n+1})$ satisfying:

1. $x_{n+1} > 0$;
2. there exist $\bar{\ell}_1,\dots,\bar{\ell}_k$ with $\Psi_i(x_1,\dots,x_n) - \bar{\ell}_i = x_{n+1}$ for $i \in \mathcal{P}$ and $\Psi_i(x_1,\dots,x_n) - \bar{\ell}_i = -x_{n+1}$ for $i \in \mathcal{T}\setminus\mathcal{P}$;
3. $M_{\mathcal{T}}(x_1,\dots,x_n;\ \bar{\ell}_1,\dots,\bar{\ell}_k)$ is rank deficient;
4. $(\bar{\ell}_1,\dots,\bar{\ell}_k)^T \in \prod_{i=1}^k\mathcal{L}_i$.

Therefore, if $\mathcal{E}_{\mathcal{T},\mathcal{P}}$ holds, we have
$$
(\ell_1,\dots,\ell_m)^T \in \Big(G(\bar{X}_{\mathcal{T},\mathcal{P}})\cap\prod_{i=1}^k\mathcal{L}_i\Big)\times\prod_{i=k+1}^m\mathcal{L}_i\ \cup\ \Omega.
$$
For $(x_1,\dots,x_{n+1})^T \in \bar{X}_{\mathcal{T},\mathcal{P}}$, notice that $JG(x_1,\dots,x_{n+1})$ is obtained from $M_{\mathcal{T}}(x_1,\dots,x_n;\bar{\ell}_1,\dots,\bar{\ell}_k)$ by elementary matrix transformations: multiplying the $i$-th row by $\delta_i/x_{n+1}$ for $i \in [k]$ and multiplying the last column by $-x_{n+1}$. Since these scalings are nonzero, $JG(x_1,\dots,x_{n+1})$ is also rank deficient. Consequently, $G(x_1,\dots,x_{n+1})$ with $(x_1,\dots,x_{n+1})^T \in \bar{X}_{\mathcal{T},\mathcal{P}}$ is a critical value of $G$ (see [53]). Then, by Sard's theorem [53], $G(\bar{X}_{\mathcal{T},\mathcal{P}})$ is a zero measure set in $\mathbb{R}^k$; hence $G(\bar{X}_{\mathcal{T},\mathcal{P}})\cap\prod_{i=1}^k\mathcal{L}_i$ is a zero measure set in $\prod_{i=1}^k\mathcal{L}_i$. Recall that if $\mathcal{E}_{\mathcal{T},\mathcal{P}}$ holds, we have
$$
(\ell_1,\dots,\ell_m)^T \in \mathcal{Z} := \Big(G(\bar{X}_{\mathcal{T},\mathcal{P}})\cap\prod_{i=1}^k\mathcal{L}_i\Big)\times\prod_{i=k+1}^m\mathcal{L}_i\ \cup\ \Omega.
$$
By the above analysis, $\big(G(\bar{X}_{\mathcal{T},\mathcal{P}})\cap\prod_{i=1}^k\mathcal{L}_i\big)\times\prod_{i=k+1}^m\mathcal{L}_i$ is a zero measure set in $\prod_{i=1}^m\mathcal{L}_i$. Also, by Assumption E.5, $\Omega$ is a zero measure set in $\prod_{i=1}^m\mathcal{L}_i$. Consequently, $\mathcal{Z}$ is a zero measure set in $\prod_{i=1}^m\mathcal{L}_i$. Then, by the continuity of the distribution of $\ell$, we finish the proof.

F Details in Experiments

Recall that the procedure of training a robust neural network against adversarial attacks can be formulated as a min-max problem:

$$
\min_w \sum_{i=1}^N \max_{\delta_i:\ |\delta_i|_\infty \le \varepsilon} \ell\big(f(x_i + \delta_i; w),\ y_i\big), \quad\text{(F.1)}
$$
where $w$ is the parameter of the neural network, the pair $(x_i, y_i)$ denotes the $i$-th data point, and $\delta_i$ is the perturbation added to data point $i$. As (F.1) is nonconvex-nonconcave and thus difficult to solve directly, an approximation of (F.1) whose inner problem is concave was introduced in [20]. The approximation first replaces the inner maximization problem in (F.1) with a finite max problem:

$$
\min_w \sum_{i=1}^N \max\big\{\ell(f(\hat{x}_{i0}(w); w), y_i),\ \dots,\ \ell(f(\hat{x}_{i9}(w); w), y_i)\big\}, \quad\text{(F.2)}
$$
where each $\hat{x}_{ij}(w)$ is the result of a targeted attack on sample $x_i$ that tries to change the output of the network to label $j$. Obtaining the targeted attack $\hat{x}_{ij}(w)$ requires an additional procedure. Recall that the images in MNIST belong to 10 classes, so the last layer of the classification network has 10 neurons. To obtain the targeted attack $\hat{x}_{ij}(w)$, we perform gradient ascent $K$ times:

$$
x_{ij}^{k+1} = \mathrm{Proj}_{\mathcal{B}(x,\varepsilon)}\Big[x_{ij}^k + \alpha\,\nabla_x\big(Z_j(x_{ij}^k, w) - Z_{y_i}(x_{ij}^k, w)\big)\Big], \quad k = 0, \dots, K-1,
$$
and let $\hat{x}_{ij}(w) = x_{ij}^K$. Here, $Z_j$ is the network logit before softmax corresponding to label $j$; $\alpha > 0$ is the step size; and $\mathrm{Proj}_{\mathcal{B}(x,\varepsilon)}[\cdot]$ is the projection onto the infinity-norm ball with radius $\varepsilon$ centered at $x$. Using the same setting as in [20], we set the iteration number $K = 40$, the step size $\alpha = 0.01$, and the perturbation level $\varepsilon$ chosen from $\{0.0, 0.1, 0.2, 0.3, 0.4\}$.

Now we can replace the finite max problem (F.2) with a concave problem over a probability simplex, where the entire problem is nonconvex in $w$ but concave in $t$:
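The targeted attack above is projected gradient ascent on the logit margin. The following PyTorch sketch is our illustration, not the code of [20]; it assumes `model` returns the pre-softmax logits, `x` is a single-image batch, and `y_true`, `j` are integer labels:

```python
import torch

def targeted_attack(model, x, y_true, j, eps=0.1, alpha=0.01, K=40):
    """K steps of projected gradient ascent on Z_j - Z_{y_true}
    over the l_inf ball B(x, eps), yielding x_hat_ij(w)."""
    x_adv = x.clone().detach()
    for _ in range(K):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        margin = (logits[:, j] - logits[:, y_true]).sum()
        grad = torch.autograd.grad(margin, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad  # ascent step on the margin
            # Projection onto the l_inf ball of radius eps around x.
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
    return x_adv.detach()
```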

$$
\min_w \sum_{i=1}^N \max_{t\in\mathcal{T}}\ \sum_{j=0}^9 t_j\,\ell\big(f(\hat{x}_{ij}; w),\ y_i\big), \qquad \mathcal{T} = \Big\{(t_1,\dots,t_m)\ \Big|\ \sum_{i=1}^m t_i = 1,\ t_i \ge 0\Big\}. \quad\text{(F.3)}
$$
We use a convolutional neural network (CNN) with the architecture detailed in Table 3 in the experiments; this setting is the same as in [20]. The results are listed in Table 2. The first three lines are the results obtained from [20], and the fourth line is obtained by using the code provided in [20] to train their algorithm. For comparison, we run our Algorithm 2 for the same number of iterations (100 iterations) with parameters $p = 0.2$, $\beta = 0.8$ and $\alpha = 0.5$. In the experiments, to compute the projection of a $d$-dimensional vector onto the probability simplex, we use the algorithm from [54], which has complexity $O(d\log d)$; a sketch is given below.

We also perform robust training on CIFAR10 [55], comparing with the algorithm in [20] after 30 epochs. As shown in Figure 2, our algorithm still outperforms [20] in convergence speed. To obtain the targeted attack at each epoch, we set the iteration number $K = 10$, the step size to 0.007, and the perturbation level to 0.031, the same settings as in [51]. We achieve a robust accuracy of 38.5% and a test accuracy of 82.6%, which are comparable to the results in the robust training literature [56].
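The cited simplex projection is, in all likelihood, the standard sort-based method, which matches the $O(d\log d)$ complexity; the following is our reconstruction, not the code of [54]:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {t : t >= 0, sum(t) = 1}.
    The sort dominates the cost, giving O(d log d)."""
    d = v.size
    u = np.sort(v)[::-1]                 # sort in decreasing order
    css = np.cumsum(u)
    # Largest index rho (0-based) with u_rho > (css_rho - 1) / (rho + 1).
    rho = np.nonzero(u * np.arange(1, d + 1) > css - 1)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)
```

For example, `project_simplex(np.array([0.4, 0.3, 0.9]))` returns `[0.2, 0.1, 0.7]`, a feasible point of the simplex; applying this map to the $t$-iterate keeps the inner variable of (F.3) feasible.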

Layer Type                 Shape
Convolution + ReLU         5 × 5 × 20
Max Pooling                2 × 2
Convolution + ReLU         5 × 5 × 50
Max Pooling                2 × 2
Fully Connected + ReLU     800
Fully Connected + ReLU     500
Softmax                    10

Table 3: Model Architecture for the MNIST dataset.
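One natural reading of Table 3, sketched below under the assumption of $28\times 28$ MNIST inputs: the second pooling output flattens to $4\cdot 4\cdot 50 = 800$, which we take to be the "800" row of the table, followed by a fully connected layer of width 500 and a 10-way softmax output.

```python
import torch
import torch.nn as nn

class MnistCNN(nn.Module):
    """Sketch of the Table 3 architecture."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, kernel_size=5)   # 28x28 -> 24x24, 20 maps
        self.conv2 = nn.Conv2d(20, 50, kernel_size=5)  # 12x12 -> 8x8, 50 maps
        self.pool = nn.MaxPool2d(2)                    # halves each side
        self.fc1 = nn.Linear(800, 500)
        self.fc2 = nn.Linear(500, 10)

    def forward(self, x):
        x = self.pool(torch.relu(self.conv1(x)))       # -> 20 x 12 x 12
        x = self.pool(torch.relu(self.conv2(x)))       # -> 50 x 4 x 4
        x = x.flatten(1)                               # -> 800
        x = torch.relu(self.fc1(x))
        return self.fc2(x)  # logits; the softmax is folded into the loss
```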

[Figure 2 here: training loss versus epoch (0 to 30) on CIFAR10, comparing Smoothed-GDA with the multi-step projected gradient method of [20]; the loss axis ranges from 0.00 to 2.00.]

Figure 2: Convergence speed of Smoothed-GDA and the algorithm in [20] on CIFAR10.
