The Complexity of Constrained Min-Max Optimization

Constantinos Daskalakis        Stratis Skoulakis        Manolis Zampetakis
          MIT                        SUTD                       MIT
[email protected]    [email protected]    [email protected]

September 22, 2020

Abstract

Despite its important applications in Machine Learning, min-max optimization of objective functions that are nonconvex-nonconcave remains elusive. Not only are there no known first-order methods converging even to approximate local min-max points, but the computational complexity of identifying them is also poorly understood. In this paper, we characterize the computational complexity of the problem, as well as the limitations of first-order methods, in constrained min-max optimization problems with nonconvex-nonconcave objectives and linear constraints.

As a warm-up, we show that, even when the objective is a Lipschitz and smooth differentiable function, deciding whether a min-max point exists, and in fact even whether an approximate min-max point exists, is NP-hard. More importantly, we show that an approximate local min-max point of large enough approximation is guaranteed to exist, but finding one such point is PPAD-complete. The same is true of computing an approximate fixed point of the (Projected) Gradient Descent/Ascent update dynamics.

An important byproduct of our proof is an unconditional hardness result in the Nemirovsky-Yudin [NY83] oracle optimization model: given oracle access to some function f : P → [−1, 1] and its gradient ∇f, where P ⊆ [0, 1]^d is a known convex polytope, every algorithm that finds an ε-approximate local min-max point needs to make a number of queries that is exponential in at least one of 1/ε, L, G, or d, where L and G are, respectively, the smoothness and Lipschitzness of f and d is the dimension.
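For concreteness, one standard way to write the (Projected) Gradient Descent/Ascent update dynamics referenced above is the following sketch; the step size η, the projections Π onto the feasible sets, and the exact form of the fixed-point tolerance are illustrative notation, not definitions taken from the abstract:

\[
x_{t+1} = \Pi_{\mathcal{X}}\big(x_t - \eta\,\nabla_x f(x_t, y_t)\big),
\qquad
y_{t+1} = \Pi_{\mathcal{Y}}\big(y_t + \eta\,\nabla_y f(x_t, y_t)\big),
\]

where a pair $(x_t, y_t)$ is an $\varepsilon$-approximate fixed point of the dynamics when one update step moves it by at most $\varepsilon$, i.e. $\|(x_{t+1}, y_{t+1}) - (x_t, y_t)\| \le \varepsilon$.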
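A minimal numerical sketch of this update, assuming (for illustration only) that the feasible set is the box [0, 1]^d so that projection reduces to coordinate-wise clipping; the helper f_grad, the step size eta, and the tolerance eps are hypothetical names introduced here, not part of the paper:

import numpy as np

def projected_gda_step(f_grad, x, y, eta=0.1):
    # One (Projected) Gradient Descent/Ascent step. f_grad(x, y) is assumed
    # to return (grad_x, grad_y); the box [0, 1]^d and clipping projection
    # are illustrative stand-ins for projection onto the polytope P.
    gx, gy = f_grad(x, y)
    x_new = np.clip(x - eta * gx, 0.0, 1.0)  # descent step for the min player
    y_new = np.clip(y + eta * gy, 0.0, 1.0)  # ascent step for the max player
    return x_new, y_new

def is_approx_fixed_point(f_grad, x, y, eta=0.1, eps=1e-3):
    # (x, y) is an eps-approximate fixed point if one update step
    # moves it by at most eps in Euclidean norm.
    x_new, y_new = projected_gda_step(f_grad, x, y, eta)
    move = np.linalg.norm(np.concatenate([x_new - x, y_new - y]))
    return move <= eps

# Example usage on the bilinear objective f(x, y) = x * y over [0, 1]^2,
# where grad_x f = y and grad_y f = x.
bilinear_grad = lambda x, y: (y, x)
x, y = np.array([0.5]), np.array([0.5])
x, y = projected_gda_step(bilinear_grad, x, y)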