Low-Rank Inducing Norms with Optimality Interpretations

Christian Grussler∗, Pontus Giselsson†

June 12, 2018

Abstract

Optimization problems with rank constraints appear in many diverse fields such as control, machine learning, and image analysis. Since the rank constraint is non-convex, these problems are often solved approximately via convex relaxations. Nuclear norm regularization is the prevailing convexifying technique for dealing with these types of problems. This paper introduces a family of low-rank inducing norms and regularizers which includes the nuclear norm as a special case. A posteriori guarantees on solving an underlying rank-constrained optimization problem with these convex relaxations are provided. We evaluate the performance of the low-rank inducing norms on three matrix completion problems. In all examples, the nuclear norm heuristic is outperformed by convex relaxations based on other low-rank inducing norms. For two of the problems there exist low-rank inducing norms that succeed in recovering the partially unknown matrix, while the nuclear norm fails. These low-rank inducing norms are shown to be representable as semi-definite programs. Moreover, these norms have cheaply computable proximal mappings, which makes it possible to also solve large-scale problems using first-order methods.

1 Introduction

Many problems in machine learning, image analysis, model order reduction, multivariate linear regression, etc. (see [2, 4, 5, 7, 22, 26, 27, 32, 33, 37]) can be posed as low-rank estimation problems based on measurements and prior information about a data matrix. These estimation problems often take the form

\begin{equation}
\begin{aligned}
& \underset{M}{\text{minimize}} && f_0(M) \\
& \text{subject to} && \operatorname{rank}(M) \leq r,
\end{aligned}
\end{equation}
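As a concrete illustration of the nuclear norm heuristic discussed above, the following is a minimal Python sketch of a proximal gradient method for a matrix completion instance of (1), relaxed with nuclear norm regularization. It uses the well-known fact that the proximal mapping of the nuclear norm is singular value soft-thresholding; the function names, step size, and regularization weight are illustrative choices, not taken from the paper.

```python
import numpy as np

def prox_nuclear(Y, lam):
    """Proximal mapping of lam * ||.||_* : soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

def complete_matrix(X_obs, mask, lam=0.1, step=1.0, iters=500):
    """Proximal gradient on 0.5*||mask*(M - X_obs)||_F^2 + lam*||M||_*.

    mask is a boolean array marking the observed entries; step=1 is valid
    since the gradient of the data-fit term is 1-Lipschitz.
    """
    M = np.zeros_like(X_obs)
    for _ in range(iters):
        grad = mask * (M - X_obs)                  # gradient of the data-fit term
        M = prox_nuclear(M - step * grad, step * lam)
    return M

# Tiny usage example: recover a rank-1 matrix from ~60% of its entries.
rng = np.random.default_rng(0)
X = np.outer(rng.standard_normal(8), rng.standard_normal(8))  # rank-1 ground truth
mask = rng.random(X.shape) < 0.6
M_hat = complete_matrix(mask * X, mask, lam=0.1)
print("relative error:", np.linalg.norm(M_hat - X) / np.linalg.norm(X))
```

The same proximal gradient template applies to the other low-rank inducing norms introduced in the paper, provided their proximal mappings are substituted for `prox_nuclear`.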
[email protected].