Counterexamples to the Low-Degree Conjecture

Justin Holmgren, NTT Research, Palo Alto, CA, USA, [email protected]
Alexander S. Wein, Courant Institute of Mathematical Sciences, New York University, NY, USA, [email protected]

Abstract
A conjecture of Hopkins (2018) posits that for certain high-dimensional hypothesis testing problems, no polynomial-time algorithm can outperform so-called “simple statistics”, which are low-degree polynomials in the data. This conjecture formalizes the beliefs surrounding a line of recent work that seeks to understand statistical-versus-computational tradeoffs via the low-degree likelihood ratio. In this work, we refute the conjecture of Hopkins. However, our counterexample crucially exploits the specifics of the noise operator used in the conjecture, and we point out a simple way to modify the conjecture to rule out our counterexample. We also give an example illustrating that (even after the above modification), the symmetry assumption in the conjecture is necessary. These results do not undermine the low-degree framework for computational lower bounds, but rather aim to better understand what class of problems it is applicable to.

2012 ACM Subject Classification Theory of computation → Computational complexity and cryptography
Keywords and phrases Low-degree likelihood ratio, error-correcting codes
Digital Object Identifier 10.4230/LIPIcs.ITCS.2021.75
Funding Justin Holmgren: Most of this work was done while with the Simons Institute for the Theory of Computing. Alexander S. Wein: Partially supported by NSF grant DMS-1712730 and by the Simons Collaboration on Algorithms and Geometry.
Acknowledgements We thank Sam Hopkins and Tim Kunisky for comments on an earlier draft.

1 Introduction

A primary goal of computer science is to understand which problems can be solved by efficient algorithms. Given the formidable difficulty of proving unconditional computational hardness, state-of-the-art results typically rely on unproven conjectures. While many such results rely only upon the widely-believed conjecture P ≠ NP, other results have only been proven under stronger assumptions such as the unique games conjecture [19, 20], the exponential time hypothesis [16], the learning with errors assumption [25], or the planted clique hypothesis [17, 4]. It has also been fruitful to conjecture that a specific algorithm (or limited class of algorithms) is optimal for a suitable class of problems. This viewpoint has been particularly prominent in the study of average-case noisy statistical inference problems, where it appears that optimal performance over a large class of problems can be achieved by methods such as the sum-of-squares hierarchy (see [24]), statistical query algorithms [18, 5], the approximate message passing framework [9, 22], and low-degree polynomials [15, 14, 13]. It is helpful to have such a conjectured-optimal meta-algorithm because this often admits a systematic analysis of hardness. However, the exact class of problems for which we believe these methods are optimal has typically not been precisely formulated.
In this work, we explore this issue for the class of low-degree polynomial algorithms, which admits a systematic analysis via the low-degree likelihood ratio.

The low-degree likelihood ratio [15, 14, 13] has recently emerged as a framework for studying computational hardness in high-dimensional statistical inference problems. It has been shown that for many “natural statistical problems,” all known polynomial-time algorithms only succeed in the parameter regime where certain “simple” (low-degree) statistics succeed. The power of low-degree statistics can often be understood via a relatively simple explicit calculation, yielding a tractable way to precisely predict the statistical-versus-computational tradeoffs in a given problem. These “predictions” can rigorously imply lower bounds against a broad class of spectral methods [21, Theorem 4.4] and are intimately connected to the sum-of-squares hierarchy (see [14, 13, 24]). Recent work has (either explicitly or implicitly) carried out this type of low-degree analysis for a variety of statistical tasks [3, 15, 14, 13, 2, 1, 21, 8, 6, 23, 7]. For more on these methods, we refer the reader to the PhD thesis of Hopkins [13] or the survey article [21].

Underlying the above ideas is the belief that for certain “natural” problems, low-degree statistics are as powerful as all polynomial-time algorithms – we refer broadly to this belief as the “low-degree conjecture”. However, formalizing the notion of “natural” problems is not a straightforward task. Perhaps the easiest way to illustrate the meaning of “natural” is by example: prototypical examples (studied in the previously mentioned works) include planted clique, sparse PCA, random constraint satisfaction problems, community detection in the stochastic block model, spiked matrix models, tensor PCA, and various problems of a similar flavor. All of these can be stated as simple hypothesis testing problems between a “null” distribution (consisting of random noise) and a “planted” distribution (which contains a “signal” hidden in noise). They are all high-dimensional problems but with sufficient symmetry that they can be specified by a small number of parameters (such as a “signal-to-noise ratio”). For all of the above problems, the best known polynomial-time algorithms succeed precisely in the parameter regime where simple statistics succeed, i.e., where there exists an O(log n)-degree polynomial of the data whose value behaves noticeably differently under the null and planted distributions (in a precise sense). Thus, barring the discovery of a drastically new algorithmic approach, the low-degree conjecture seems to hold for all the above problems. In fact, a more general version of the conjecture seems to hold for runtimes that are not necessarily polynomial: degree-D statistics are as powerful as all n^{Θ̃(D)}-time algorithms, where Θ̃ hides factors of log n [13, Hypothesis 2.1.5] (see also [21, 8]).

A precise version of the low-degree conjecture was formulated in the PhD thesis of Hopkins [13]. This includes precise conditions on the null distribution ν and planted distribution µ which capture most of the problems mentioned above. The key conditions are that there should be sufficient symmetry, and that µ should be injected with at least a small amount of noise. Most of the problems above satisfy this symmetry condition (a notable exception being the spiked Wishart model¹, which satisfies a mild generalization of it), but it remained unclear whether this assumption was needed in the conjecture.
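Concretely, one standard way to quantify when low-degree statistics succeed (see the survey [21]; the notation Adv is used here only for illustration) is via the advantage of the best degree-D polynomial at separating the planted distribution µ from the null distribution ν,

Adv_{≤D}(µ, ν) = max_{f : deg(f) ≤ D} E_{x∼µ}[f(x)] / (E_{x∼ν}[f(x)²])^{1/2},

which coincides with the L²(ν)-norm of the projection of the likelihood ratio dµ/dν onto polynomials of degree at most D. The heuristic reading is that simple statistics succeed when this quantity is ω(1) for some D = O(log n), while the problem is predicted to be hard for the corresponding class of algorithms when it remains O(1).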
On the other hand, the noise assumption is certainly necessary, as illustrated by the example of solving a system of linear equations over a finite field: if the equations have an exact solution then it can be obtained via Gaussian elimination even though low-degree statistics suggest that the problem should be hard; however, if a small amount of noise is added (so that only a 1 − ε fraction of the equations can be satisfied) then Gaussian elimination is no longer helpful, and the low-degree conjecture seems to hold.

¹ Here we mean the formulation of the spiked Wishart model used in [1], where we directly observe Gaussian samples instead of only their covariance matrix.

In this work we investigate more precisely what kinds of noise and symmetry conditions are needed in the conjecture of Hopkins [13]. Our first result (Theorem 4) actually refutes the conjecture in the case where the underlying random variables are real-valued. Our counterexample exploits the specifics of the noise operator used in the conjecture, along with the fact that a single real number can be used to encode a large (but polynomially bounded) amount of data. In other words, we show that a stronger noise assumption than the one in [13] is needed; Remark 6 explains a modification of the conjecture that we do not know how to refute. Our second result (Theorem 7) shows that the symmetry assumption in [13] cannot be dropped, i.e., we give a counterexample to the variant of the conjecture that does not require symmetry. Both of our counterexamples are based on efficiently decodable error-correcting codes.

Notation

Asymptotic notation such as o(1) and Ω(1) pertains to the limit n → ∞. We say that an event occurs with high probability if it occurs with probability 1 − o(1), and we use the abbreviation w.h.p. (“with high probability”). We use [n] to denote the set {1, 2, . . . , n}. The Hamming distance between vectors x, y ∈ F^n (for some field F) is ∆(x, y) = |{i ∈ [n] : x_i ≠ y_i}| and the Hamming weight of x is ∆(x, 0).

2 The Low-Degree Conjecture

We now state the formal variant of the low-degree conjecture proposed in the PhD thesis of Hopkins [13, Conjecture 2.2.4]. The terminology used in the statement will be explained below.

▶ Conjecture 1. Let X be a finite set or ℝ, and let k ≥ 1 be a fixed integer. Let N = (n choose k). Let ν be a product distribution on X^N. Let µ be another distribution on X^N. Suppose that µ is S_n-invariant and (log n)^{1+Ω(1)}-wise almost independent with respect to ν. Then no polynomial-time computable test distinguishes T_δ µ and ν with probability 1 − o(1), for any δ > 0. Formally, for all δ > 0 and every polynomial-time computable t : X^N → {0, 1} there exists δ′ > 0 such that for every large enough n,

(1/2) P_{x∼T_δ µ}(t(x) = 0) + (1/2) P_{x∼ν}(t(x) = 1) ≤ 1 − δ′.
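As an illustration of how the template is meant to be instantiated (using planted clique, one of the prototypical examples from the introduction): take X = {0, 1} and k = 2, so that the N = (n choose 2) coordinates index the potential edges of an n-vertex graph; ν is the Erdős–Rényi distribution G(n, 1/2), a product of Bernoulli(1/2) coordinates, and µ additionally plants a clique on a random subset of vertices. Relabeling the vertices merely permutes the N coordinates without changing µ, which is the S_n-invariance required above; the almost-independence condition and the noise operator T_δ (defined below) are the parts of the statement that depend on the clique size and on how noise is injected.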