On the Work of Madhu Sudan
by Avi Wigderson

Madhu Sudan is the recipient of the 2002 Nevanlinna Prize. Sudan has made fundamental contributions to two major areas of research, the connections between them, and their applications. The first area is coding theory. Established by Shannon and Hamming over fifty years ago, it is the mathematical study of the possibility of, and the limits on, reliable communication over noisy media. The second area is probabilistically checkable proofs (PCPs). By contrast, it is only ten years old. It studies the minimal resources required for probabilistic verification of standard mathematical proofs.

My plan is to briefly introduce these areas, their motivation, and foundational questions, and then to explain Sudan’s main contributions to each. Before we get to the specific works of Madhu Sudan, let us start with a couple of comments that will set up the context of his work.

Madhu Sudan works in computational complexity theory. This research discipline attempts to rigorously define and study efficient versions of objects and notions arising in computational settings. This focus on efficiency is of course natural when studying computation itself, but it has also proved extremely fruitful in studying other fundamental notions such as proof, randomness, knowledge, and more. Here I will try to explain how the efficiency “eyeglasses” were key in this study of the notions of proof (again) and error correction.

Theoretical computer science is an extremely interactive and collaborative community. Sudan’s work was not done in a vacuum, and much of the background to it, conceptual and technical, was developed by other people. The space I have does not allow me to give proper credit to all these people. A much better job has been done by Sudan himself; his homepage (http://theory.lcs.mit.edu/~madhu/) contains several surveys of these areas which give proper historical accounts and references. In particular, see [13] for a survey on PCPs and [15] for a survey on the work on error correction.

Probabilistic Checking of Proofs

One informal variant of the celebrated P versus NP question asks, “Can mathematicians, past and future, be replaced by an efficient computer program?” We first define these notions and then explain the PCP theorem and its impact on this foundational question.

Efficient Computation

Throughout, by an efficient algorithm (or program, machine, or procedure) we mean an algorithm which runs in time at most some fixed polynomial¹ in the length of its input. The input is always a finite string of symbols from a fixed, finite alphabet. Note that an algorithm is an object of fixed size which is supposed to solve a problem on all inputs of all (finite) lengths. A problem is efficiently computable if it can be solved by an efficient algorithm.

¹ Time refers to the number of elementary steps taken by the algorithm. The choice of “polynomial” to represent efficiency is both small enough to often imply practicality and large enough to make the definition independent of particular aspects of the model, e.g., the choice of allowed “elementary operations”.

Definition 1. The class P is the class of all problems solvable by efficient algorithms.

For example, the problems Integer Multiplication, Determinant, Linear Programming, Univariate Polynomial Factorization, and (recently established²) Testing Primality are in P.

² [See Mitteilungen 4–2002, pp. 14–21. Editor’s note.]

Let us restrict attention (for a while) to algorithms whose output is always “accept” or “reject”. Such an algorithm A solves a decision problem. The set L of inputs which are accepted by A is called the language recognized by A. Statements of the form “x ∈ L” are correctly classified as “true” or “false” by the efficient algorithm, deterministically (and without any “outside help”).
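To make the notions “decision problem” and “language recognized by A” concrete, here is a minimal sketch of my own (not taken from the article; the function name and the example language are mine): a deterministic decider for the language of balanced parenthesis strings, which runs in time linear, hence certainly polynomial, in the input length.

```python
def A(x: str) -> bool:
    """Toy efficient decider: accept exactly the balanced strings over the
    fixed, finite alphabet {'(', ')'}. Runs in time linear in |x|, so the
    language it recognizes is in P."""
    depth = 0
    for symbol in x:
        if symbol == "(":
            depth += 1
        elif symbol == ")":
            depth -= 1
            if depth < 0:      # a closing parenthesis with no earlier match: reject
                return False
        else:                  # a symbol outside the alphabet: reject
            return False
    return depth == 0          # accept iff every parenthesis is matched

# The language recognized by A is L = {x : A accepts x}; for instance
assert A("(()())") and not A("(()")
```

Every statement “x ∈ L” is thus decided directly by A, with no outside help; the next subsection relaxes exactly this point.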
Efficient Verification

In contrast, allowing an efficient algorithm to use “outside help” (a guess or an alleged proof) naturally defines a proof system. We say that a language L is efficiently verifiable if there is an efficient algorithm V (for “Verifier”) and a fixed polynomial p for which the following completeness and soundness conditions hold:
– For every x ∈ L there exists a string π of length |π| ≤ p(|x|) such that V accepts the joint input (x, π).
– For every x ∉ L and every string π of length |π| ≤ p(|x|), V rejects the joint input (x, π).

Naturally, we can view all strings x in L as theorems of the proof system V. Those strings π which cause V to accept x are legitimate proofs of the theorem “x ∈ L” in this system.

Definition 2. The class NP is the class of all languages that are efficiently verifiable.

It is clear that P ⊆ NP. Are they equal? This is the “P versus NP” question [5], one of the most important open scientific problems today. Not only mathematicians but scientists and engineers as well daily attempt to perform tasks (create theories and designs) whose success can hopefully be efficiently verified. Reflect on the practical and philosophical impact of a positive answer to the question: If P = NP, then much of their (creative!) work can be performed efficiently by one computer program.

Many important computational problems, like the Travelling Salesman, Integer Programming, Map Coloring, Systems of Quadratic Equations, and Integer Factorization, are (when properly coded as languages) in NP.

We stress two aspects of efficient verification. The purported “proof” π for the statement “x ∈ L” must be short, and the verification procedure must be efficient. It is important to note that all standard (logical) proof systems used in mathematics conform to the second restriction: Since only “local inferences” are made from “easily recognizable” axioms, verification is always efficient in the total length of statement and proof. The first restriction, on the length of the proof, is natural, since we want the verification to be efficient in terms of the length of the statement.

An excellent, albeit informal, example is the language math of all mathematical statements, whose proof verification is defined by the well-known efficient (anonymous) algorithm referee.³ As humans we are simply not interested in theorems whose proofs take, say, longer than our lifetime (or the three-month deadline given by the editor) to read and verify.

³ This system can, of course, be formalized. However, it is better to have the social process of mathematics in mind before we plunge into the notions of the next subsection.

But is this notion of efficient verification – reading through the statement and proof, and checking that every new lemma indeed follows from previous ones (and known results) – the best we can hope for? Certainly as referees we would love some shortcuts, as long as they do not change our notion of mathematical truth too much. Are there such shortcuts?

Efficient Probabilistic Verification

A major paradigm in computational complexity is allowing algorithms to flip coins. We postulate access to a supply of independent unbiased random variables which the probabilistic (or randomized) algorithm can use in its computation on a given input. We comment that the very rich theories (which we have no room to discuss) of pseudorandomness and of weak random sources attempt to bridge the gap between this postulate and “real-life” generation of random bits in computer programs.

The notion of efficiency remains the same: Probabilistic algorithms can make only a polynomial number of steps in the input length. However, the output becomes a random variable. We demand that the probability of error, on every input, never exceed a given small bound ε. (Note that ε can be taken to be, e.g., 1/3, since repeating the algorithm with fresh independent randomness and taking a majority vote of the answers can decrease the error exponentially in the number of repetitions.)
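The parenthetical remark above, that an error bound of, say, 1/3 can be driven down exponentially by independent repetition and majority vote, is easy to make concrete. The following is a small sketch of my own (the decider below is a hypothetical stand-in, not an algorithm from the article):

```python
import random

def noisy_decider(x: str) -> bool:
    """Hypothetical probabilistic algorithm with error probability at most 1/3
    on every input: here it 'knows' the right answer and flips it with probability 1/4."""
    truth = len(x) % 2 == 0          # the decision it is trying to compute
    return truth if random.random() < 0.75 else not truth

def amplified(x: str, repetitions: int = 51) -> bool:
    """Run the decider `repetitions` times with fresh independent randomness and
    return the majority vote; by a Chernoff bound, the error probability of the
    majority decays exponentially in the number of repetitions."""
    votes = sum(noisy_decider(x) for _ in range(repetitions))
    return votes > repetitions // 2

print(amplified("abcd"))   # the correct answer (True) with overwhelming probability
```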
Returning to proof systems, we now allow the verifier to be a probabilistic algorithm. As above, we allow it to err (namely, accept false “proofs”) with extremely small probability. The gain would be extreme efficiency: The verifier will access only a constant number of symbols in the alleged proof. Naturally, the positions of the viewed symbols can be randomly chosen. What kind of theorems can be proved in the resulting proof system? First, let us formalize it.

We say that a language L has a probabilistically checkable proof if there is an efficient probabilistic algorithm V, a fixed polynomial p, and a fixed constant c for which the following completeness and probabilistic soundness conditions hold.
– For every x ∈ L there exists a string π, of length |π| ≤ p(|x|), such that V accepts the joint input (x, π) with probability 1.
– For every x ∉ L and every string π of length |π| ≤ p(|x|), V rejects the joint input (x, π) with probability ≥ 1/2.
– On every input (x, π), V can access at most c bits of π.

Note again that executing the verifier independently a constant number of times will reduce the “soundness” error to an arbitrarily small constant without changing the definition. Also note that randomness in the verifier is essential if it probes only a constant (or even logarithmic) number of symbols in the proof.

[…] of proof systems to replace the referee for journal papers). In this sense the PCP theorem is just a statement of philosophical importance. However, the PCP theorem does have significant implications of immediate relevance to the theory of computation, and we explain this next.

Hardness of Approximation

The first and foremost contribution thus far to understanding the mystery of the P versus NP question
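To round off, here is a schematic sketch, in my own notation and not a construction from the article, of the access pattern of the probabilistically checkable proof verifier defined above: it flips coins, probes at most a constant number c of positions of the alleged proof π, and applies a local test. Designing the proof format and the local test so that completeness and soundness actually hold is precisely what the PCP theorem achieves; the toy instantiation below does not.

```python
import random
from typing import Callable, Sequence

def pcp_style_verifier(x: str,
                       proof: Sequence[int],
                       query_positions: Callable[[str, random.Random], list],
                       local_test: Callable[[str, list, list], bool],
                       c: int = 3) -> bool:
    """Schematic PCP-style verifier: toss coins, choose at most c positions of the
    alleged proof, read only those bits, and accept or reject by a local test."""
    coins = random.Random()                    # fresh randomness for this run
    positions = query_positions(x, coins)[:c]  # at most c probes, chosen at random
    answers = [proof[i] for i in positions]    # the only access to the proof
    return local_test(x, positions, answers)

# Toy instantiation (illustration of the interface only; it has none of the
# soundness of a real PCP): the proof is supposed to repeat a single bit, and
# the local test just checks that the c sampled positions agree with each other.
toy_proof = [1] * 64
print(pcp_style_verifier(
    "some input",
    toy_proof,
    query_positions=lambda x, rnd: [rnd.randrange(len(toy_proof)) for _ in range(3)],
    local_test=lambda x, pos, ans: len(set(ans)) == 1,
))
```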