Average Case Complexity∗


Yuri Gurevich,† University of Michigan

Abstract. We attempt to motivate, justify and survey the average case reduction theory.

∗ Springer LNCS 510, 1991, 615–628.
† Partially supported by NSF grant CCR89-04728. Address: EECS Dept., University of Michigan, 1301 Beal Ave., Ann Arbor, MI 48109-2122. Email: [email protected]

1. Introduction

An NP decision problem may be specified by a set D of strings (instances) in some alphabet, another set W of strings in some alphabet and a binary relation R ⊆ W × D that is polynomial time computable and polynomially bounded (which means that the size |w| of w is bounded by a fixed polynomial of the size |x| of x whenever wRx holds). If wRx holds, w is called a witness for x. The decision problem specified by D, W and R, call it DP(D, W, R), may be stated thus: Given an element x of D, determine if there exists w ∈ W such that wRx holds. The corresponding search problem, call it SP(D, W, R), may be stated thus: Given an element x of D, determine if there exists a witness w for x and if so then exhibit such a witness.

Problems of the form SP(D, W, R) may be called NP search problems even though NP is supposed to be a class of decision problems. It will be convenient for us to use the term "an NP problem" to mean an NP decision problem (a genuine NP problem) or its search counterpart. In this talk, we deal only with NP decision and search problems even though methods of the average complexity theory may be applied to wider classes. NP and the class of NP search problems are sufficiently important.

The restriction to the case when instances and witnesses are strings is standard [GJ], even though it means that one may be forced to deal with string encodings of real objects of interest. The reason is that we need a clear computational model. If instances and witnesses are strings, the usual Turing machine model can be used. The size of a string is often taken to be its length, though this requirement can easily be relaxed.

A solution of an NP problem is a feasible decision or search algorithm. The question arises what is feasible. It is common to adopt the thesis

(1.0) Feasible algorithms are exactly polynomial time ones.

which can be split into two parts:

(1.1) Every polynomial time algorithm is feasible.

(1.2) Every feasible algorithm is polynomial time.

We do not believe in a unique feasibility concept; feasibility depends on application. In real time applications or in the case of huge databases (like that of the US Internal Revenue Service), even linear time may be prohibitively expensive. However, one important feasibility concept fits (1.1) well. It is often reasonable to assume that if your customer had time to produce an input object of some size n, then you can afford to spend time n^2 or n^3 etc. on the object. Of course, this "etc." should be treated with caution. Probably, you will be unable to spend time n^64. Fortunately, in practically important cases, these polynomials tend to be reasonable.

P-time (polynomial time) is a very robust and machine-independent concept closed under numerous operations. If you believe that (i) linear time algorithms are feasible and (ii) every algorithm with feasible procedures running in feasible time (counting a procedure call as one step) is feasible, then you are forced to accept all P-time algorithms as feasible. Finding a polynomial-time algorithm for an NP problem often requires ingenuity, whereas an exponential-time algorithm is there for free. Proving that a given NP problem can be solved in polynomial time, you feel that you made a mathematical advance. It seems that this feeling of mathematician's satisfaction contributed to the popularity of the P-time concept. (Notice however that a superpolynomial bound may require ingenuity and be a source of satisfaction as well.)

We do not know very strong arguments in favor of (1.2). Moreover, there are strong arguments against it. The most important one for our purpose in this talk is that sometimes there are satisfying practical ways to cope with hard NP problems in the absence of P-time algorithms. In the case of an optimization problem, one may have a fast approximating algorithm. In the case of a decision or search problem, one may have a decision or search algorithm that is usually fast or almost always fast or fast on average. Leonid Levin proposed one average case approach in [Le1]. His approach is the topic of this talk.

We hope this account is entertaining. It certainly isn't complete. The references include three student papers: [Gr], [Kn] and [Sc].

2. Polynomial on average

In the average case approach, a decision or search problem is supposed to be given together with a probability distribution on instances. A problem with a probability distribution on instances is called randomized (or distributional [BCGL]). Determining an appropriate distribution is a part of the formalization of the problem in question and it isn't necessarily easy. (The robustness of the average case approach with respect to probability distributions is of some help.) In this talk, we address only the task of solving a given randomized problem. The task is to devise an algorithm that solves the problem quickly on average with respect to the given distribution. No pretense is made that the algorithm is also good for the worst case.

One advantage of the average case approach is that it often works. It was possible a priori that, whenever you have a fast on the average algorithm, you also have an algorithm that is fast even in the worst case. This is not at all the case. Sometimes a natural probability distribution makes the randomized problem ridiculously easy. Consider for example the 3-coloring search problem when all graphs of the same size have the same probability. The usual backtracking solves this randomized search problem in (surprise!) at most 197 steps on average, never mind the size of the given instance [Wi]. The reason is that there are very simple and probable witnesses to non-colorability, like a clique of 4. The distribution is greatly biased toward the negative answer. The average time can be further cut down if the algorithm starts with a direct search for such witnesses. There are numerous examples of success in the cases of less biased distributions. This is, however, a topic of a different talk.

One may argue that, in many applications, the average case may be more important than the worst case. Imagine a factory that produces complete graphs. Each edge of a graph is produced separately using the same technology and there is a fixed probability p for an edge to be faulty. Suppose that, for whatever reason, it happens to be important to know whether a given graph has a hamiltonian circuit composed entirely of faulty edges. There is an algorithm A that solves the hamiltonian search problem with any fixed edge-probability distribution and that has an expected running time linear in the number of vertices [GS]. You may want to use that algorithm and open a hamiltonian shop. There may even be several such factories in your area. A customer will bring you a graph G with some number n of vertices. If it is customary that the charge depends only on the number of vertices, you may charge a fee proportional to the expected running time of A on n-vertex graphs, where the proportionality coefficient is chosen to ensure you a fair profit and competitiveness.

For the sake of fairness, we should mention situations where the failure to solve quickly one instance means the failure of the whole enterprise. Even then the worst case approach may be too conservative. In any case, it seems to us that, in many applications where polynomial time is feasible, polynomial on average time is feasible as well.

[…] necessarily its length. Third, the probability distribution is polynomial time computable (P-time). The notion of P-time distributions was introduced in [Le1] and analyzed to some extent in [Gu1]. The requirement that the distribution is P-time will be discussed and relaxed in Section 6. Meantime, view it as some technical restriction that is often satisfied in practice.

Consider a function T from a domain D to the interval [0..∞] of the real line extended with ∞. Let E_n(T) be the expectation of T with respect to the conditional probability P_n(x) = P{x : |x| = n}. The definition of polynomiality on average seems obvious: T is polynomial on average (relative to D) if

(2.1) E_n(T) is bounded by a polynomial of n.

Unfortunately, this obvious answer is not satisfying. It is easy to construct D and T such that T is polynomial on average but T^2 is not. It seems reasonable to accept that (i) T is polynomial on average if E_n(T) = O(n), and (ii) if T is bounded by a polynomial of a function which is polynomial on average then T is polynomial on average. These two assumptions imply that T is polynomial on average if

(2.2) there exists ε > 0 such that E_n(T^ε) is O(n).
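The triple (D, W, R) can be made concrete with a small sketch. Below, 3-coloring plays the role of the NP problem: an instance is a graph, a witness is a coloring, and R is the polynomial-time witness check. The encoding and the names (`R`, `decision`, `search`) are ours, chosen for illustration; only the exhaustive enumeration of W is expensive, the check itself is fast.

```python
from itertools import product

# Instance x: a graph given as (n, edges).  Witness w: a tuple assigning one
# of 3 colors to each vertex.  R is polynomial time computable, and witnesses
# are polynomially bounded in the instance size, as the definition requires.

def R(w, x):
    """wRx: w is a proper 3-coloring of the graph x = (n, edges)."""
    n, edges = x
    return len(w) == n and all(w[u] != w[v] for (u, v) in edges)

def decision(x):
    """DP(D, W, R): does a witness for x exist?  (Brute-force enumeration.)"""
    n, _ = x
    return any(R(w, x) for w in product(range(3), repeat=n))

def search(x):
    """SP(D, W, R): exhibit a witness for x if one exists, else None."""
    n, _ = x
    return next((w for w in product(range(3), repeat=n) if R(w, x)), None)

triangle = (3, [(0, 1), (1, 2), (0, 2)])                     # 3-colorable
k4 = (4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)])   # not 3-colorable
```

Note that the decision and search variants share the same relation R; the search version merely keeps the witness instead of discarding it.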
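The 3-coloring example can be made tangible: under the uniform distribution (each potential edge present independently with probability 1/2), a clique of 4 is a short witness to non-3-colorability, and such a clique is overwhelmingly likely once the graph has a few dozen vertices. The following sketch, with function names of our own choosing, searches for that witness directly.

```python
import random
from itertools import combinations

def has_k4(n, edges):
    """Look for a clique on 4 vertices -- a witness that no 3-coloring exists."""
    adj = {(u, v) for (u, v) in edges} | {(v, u) for (u, v) in edges}
    return any(all((a, b) in adj for a, b in combinations(quad, 2))
               for quad in combinations(range(n), 4))

def uniform_graph(n, rng):
    """Sample a graph with every edge present independently with probability 1/2."""
    return [(u, v) for u, v in combinations(range(n), 2) if rng.random() < 0.5]

rng = random.Random(0)
g = uniform_graph(30, rng)
print(has_k4(30, g))   # almost surely True at this size
```

This direct search is the "further cut down" mentioned above: it costs O(n^4) pair checks in the worst case, but on a uniform random graph it succeeds almost immediately, which is why the expected backtracking time stays bounded.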
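The failure of definition (2.1) to be closed under squaring can be checked numerically. The construction below is ours (the text only asserts such D and T exist): instances of size n are the binary strings of length n under the uniform distribution, T(x) = 2^n when x = 0^n and T(x) = n otherwise. Then E_n(T) ≤ n + 1, yet E_n(T^2) ≥ 2^n.

```python
from fractions import Fraction

def expectation(n, power):
    """E_n(T^power), computed exactly: one string of length n has cost 2^n,
    the remaining 2^n - 1 strings have cost n."""
    total = Fraction(2) ** n                     # number of strings of length n
    heavy = Fraction(2 ** n) ** power            # contribution of x = 0^n
    light = Fraction(n) ** power * (total - 1)   # contribution of all others
    return (heavy + light) / total

for n in (10, 20, 30):
    e1 = expectation(n, 1)
    e2 = expectation(n, 2)
    print(n, float(e1), float(e2))   # e1 stays near n + 1, e2 grows like 2^n

# Condition (2.2) repairs this: E_n(T^eps) = O(n) for some eps > 0.  For T^2,
# taking eps = 1/2 gives back E_n(T) = O(n), so T^2 also qualifies.
```

So under (2.1) the class of polynomial-on-average functions is not closed under polynomial composition, while under (2.2) this particular counterexample disappears.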