ARBOR Ciencia, Pensamiento y Cultura Vol. 189-764, noviembre-diciembre 2013, a080 | ISSN-L: 0210-1963 doi: http://dx.doi.org/10.3989/arbor.2013.764n6003

EL LEGADO DE ALAN TURING / THE LEGACY OF ALAN TURING

Turing’s algorithmic lens: From computability to complexity theory

Josep Díaz* and Carme Torras**
*Llenguatges i Sistemes Informàtics, UPC. [email protected]
**Institut de Robòtica i Informàtica Industrial, CSIC-UPC. [email protected]

Citation/Cómo citar este artículo: Díaz, J. and Torras, C. (2013). “Turing’s algorithmic lens: From computability to complexity theory”. Arbor, 189 (764): a080. doi: http://dx.doi.org/10.3989/arbor.2013.764n6003

Copyright: © 2013 CSIC. This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial (by-nc) Spain 3.0 License.

Received: 10 July 2013. Accepted: 15 September 2013.

ABSTRACT: The decidability question, i.e., whether any mathematical statement could be computationally proven true or false, was raised by Hilbert and remained open until Turing answered it in the negative. Then, most efforts in theoretical computer science turned to complexity theory and the need to classify decidable problems according to their difficulty. Among others, the classes P (problems solvable in polynomial time) and NP (problems solvable in non-deterministic polynomial time) were defined, and one of the most challenging scientific quests of our days arose: whether P = NP. This still open question has implications not only in computer science, mathematics and physics, but also in biology, sociology and economics, and it can be seen as a direct consequence of Turing's way of looking through the algorithmic lens at different disciplines to discover how pervasive computation is. The paper ends with a brief overview of some current research topics: phase transitions in NP problems, and holographic proofs, by which one tries to convince an adversary that a proof is correct without revealing any idea of the proof.

Keywords: Computational complexity; the permanent problem; P and NP problems; interactive proofs and holographic proofs.

1. INTRODUCTION

The English mathematician Alan Mathison Turing (1912-1954) is well-known for his key role in the development of computer science, but he also made important contributions to the foundations of mathematics, numerical analysis, cryptography, quantum computing and biology. In the present paper we focus exclusively on the influence of Turing on computability and complexity theory, leaving aside other aspects of his work even within computer science, most notably his contribution to the birth of the digital computer and artificial intelligence.

Turing was an important player in the settlement of the decidability issue, which led naturally to the study of the complexity of decidable problems, i.e., once it was proved that there are problems unsolvable by a computer, research turned to the question of "how long would it take to solve a decidable problem?" This is an exciting research field with lots of open questions. In this paper, we try to give a gentle introduction to the historical perspective of the search for efficient computing and its limitations.

2. BEFORE TURING: THE DREAM OF MECHANICAL REASONING AND DECIDABILITY

Gottfried W. Leibniz (1646-1716) was an important mathematician, philosopher, jurist and inventor, whose dream was to devise an automatic reasoning machine, based on an alphabet of unambiguous symbols manipulated by mechanical rules, which could formalize any consistent linguistic or mathematical system. He produced a calculus ratiocinator, an algebra to specify the rules for manipulating logical concepts. More than a century later, George Boole (1815-1864) and Augustus De Morgan (1806-1871) converted the loose ideas of Leibniz into a formal system, Boolean algebra, from which Gottlob Frege (1848-1925) finally produced the fully developed system of axiomatic predicate logic.

Frege went on to prove that all the laws of arithmetic could be logically deduced from a set of axioms. He wrote a two-volume book with the formal development of the foundations of arithmetic, and when the second volume was already in print, Frege received the celebrated letter from Bertrand Russell (1872-1970) showing that his theory was inconsistent. The counterexample was the paradox of extraordinary sets. A set is defined to be extraordinary if it is a member of itself; otherwise it is called ordinary. Is the set of all ordinary sets also ordinary? (See Doxiadis, Papadimitriou, Papadatos and di Donna, 2009, pp. 169 to 171 for a particularly charming account of the effect of Russell's letter to Frege.)

Frege's work was the spark for 30 years of research on the foundations of mathematics, and the end of the 19th c. and beginning of the 20th c. witnessed strong mathematical, philosophical and personal battles between members of the intuitionist and formalist European schools (Davis, 2000; Doxiadis, Papadimitriou, Papadatos and di Donna, 2009).

To better frame further developments, let us recall some notions. A formal system is a language, a finite set of axioms, and a set of inference rules used to derive expressions (or statements) from the set of axioms; an example is mathematics with the first-order logic developed by Frege. A system is said to be complete if every statement can be proved or disproved; otherwise the system is said to be incomplete. A system is said to be consistent if there is no step-by-step proof within the system that yields a false statement. A system is said to be decidable if, for every statement, there exists an algorithm to determine whether the statement is true.

In 1928, at the International Congress of Mathematicians in Bologna, David Hilbert (1862-1943), a leading figure in the formalist school, presented to his fellow mathematicians three problems, the second of which would have a big impact on the development of computer science: the Entscheidungsproblem. In English it is named the decision problem, and it can be formulated as follows: "provide a method that, given a first-order logic statement, would determine in a finite number of steps whether the statement is true". In other words, the Entscheidungsproblem asks for the existence of a finite procedure that could determine whether a statement in mathematics is true or false. Hilbert was convinced that the answer would be yes, as mathematics should be complete, consistent and decidable.

3. TURING ESTABLISHES THE LIMITS OF COMPUTABILITY

In 1936 Turing completed the draft of his paper "On Computable Numbers, with an Application to the Entscheidungsproblem", where he proposed a formal model of computation, the a-machine, today known as the Turing machine (TM), and he proved that any function that can be effectively computed (in Hilbert's sense) by a human being could also be mechanically calculated in a finite number of steps by a mechanical procedure, namely the TM. Turing used his formalism to define the set of computable numbers, those reals for which their ith decimal can be computed in a finite number of steps. The number of computable numbers is countable but the number of reals is uncountable, so most of the reals are not computable. Some well-known computable numbers are 1/7, π and e.

In his model, Turing introduced two essential assumptions: the discretization of time and the discretization of the state of mind of a human calculator. A TM consists of an infinite tape divided into squares, a finite input on a finite alphabet, and a read-write head that can be in a finite number of states representing those of the human calculator's brain (Davis, 2000; Moore and Mertens, 2011). The TM was a theoretical model of a stored-program machine.
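To make the idea of a step-by-step mechanical procedure concrete, the following toy Python simulator is a sketch of ours (not Turing's formalism verbatim; the transition table inc and all other names are our own choices): a finite transition table, a tape of symbols and a read-write head are all that is needed, and the example machine adds one to a binary number.

    def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
        # transitions: dict (state, symbol) -> (new_state, symbol_to_write, move)
        #   with move in {"L", "R"}; the machine halts when no rule applies.
        tape = dict(enumerate(tape))          # sparse tape, indexed by integers
        head = 0
        for _ in range(max_steps):
            symbol = tape.get(head, blank)
            if (state, symbol) not in transitions:
                break                         # no rule applies: halt
            state, write, move = transitions[(state, symbol)]
            tape[head] = write
            head += 1 if move == "R" else -1
        cells = range(min(tape), max(tape) + 1)
        return "".join(tape.get(i, blank) for i in cells).strip(blank)

    # A machine that adds 1 to a binary number, head starting on the leftmost
    # bit: move right to the end of the input, then propagate the carry left.
    inc = {
        ("start", "0"): ("start", "0", "R"),
        ("start", "1"): ("start", "1", "R"),
        ("start", "_"): ("carry", "_", "L"),
        ("carry", "1"): ("carry", "0", "L"),
        ("carry", "0"): ("done", "1", "L"),
        ("carry", "_"): ("done", "1", "L"),
    }
    print(run_turing_machine(inc, "1011"))    # prints 1100, i.e. 1011 + 1 in binary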

An outstanding achievement of Turing was the concept of the Universal Turing Machine (UTM). Given the codified description of a TM as input, the UTM is able to simulate the computation of that machine and write the result. Hence, the UTM can simulate any other Turing machine, thus opening the way to digital computers.

Relying on his UTM, Turing made a decisive contribution to computability, namely he produced a problem that is undecidable: the halting problem. It can be stated as follows: "Given a codified description of a TM and an input to it, decide if that TM will halt on that input". The proof mimics Gödel's incompleteness proof, where a diagonalization argument produces a paradox (see Davis, 2000, chapter 7 for a nice informal explanation). Note that the halting problem is a counterexample, a negative solution to the Entscheidungsproblem¹.

Turing's PhD thesis extended Gödel's incompleteness theorem by proving that when new axioms are added to an incomplete formal system, the system remains incomplete (Turing, 1939). A very important contribution of that work was the oracle Turing-machine model. In the words of Turing: "Let us suppose we are supplied with some unspecified means of solving number-theoretical problems, a kind of oracle as it were. We shall not go any further into the nature of this oracle apart from saying that it can't be a machine".

Emil Post (1897-1954) realized that Turing's oracle machine had important implications for computability, as it could be used to compare the relative difficulty of problems; thus he defined the concept of Turing-reducibility (T-reducibility). Given problems P1 and P2, P1 is said to be Turing-reducible to P2 (P1 ≤T P2) if P2 can be used to construct, in a finite number of steps, a program to solve P1. Note that if P1 ≤T P2 and P2 is decidable then P1 is also decidable, and if P1 ≤T P2 and P1 is undecidable, then P2 is also undecidable. Therefore, T-reducibility can be used to enlarge the class of known undecidable problems.

4. AFTER TURING: PROBLEM COMPLEXITY CLASSES

We have seen that the work of Turing and others lets us determine, for any given problem, whether there is an algorithm to solve it (i.e., the problem is decidable) or not. Let us now turn to the realm of solvable problems, and ask how long it takes to solve a given decidable problem. It may seem that this is a purely technological issue; however, we will see that there are problems for which today's computers would take more than the age of the universe to find a solution, when the input to the problem is only moderately large.

The time complexity of a problem with input x is a measure of the number of steps that an algorithm will take to solve it, as a function of the size of the input |x| = n. Let us consider the worst-case complexity, where among all the possible inputs of size n we take the one that behaves worst in terms of the number of steps performed by the algorithm. As an example, consider the school algorithm for multiplying two integers a and b of sizes na and nb, respectively, size measured as the number of digits in the binary expression (i.e., na = log₂ a), and let n = max{na, nb}; then the worst-case complexity of the algorithm is T(n) = O(n²).²

For decidable problems, the time complexity expression has a tremendous impact on the running time of an algorithm. Figure 1 (taken from Moore and Mertens, 2011) gives an indication of the running times of different worst-case complexity figures, assuming a processor can solve an instance of size n = 1 in 10⁻³ seconds, which is consistent with today's technology. Note that an algorithm that solves a problem with complexity O(2ⁿ), if given an input of size n = 90, may yield the output after the universe has ended (again, assuming today's technological level).
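To give a concrete feel for these figures, the short Python sketch below (ours, not part of the original article) tabulates the hypothetical running times of algorithms with the worst-case complexities discussed in the text, under the same assumption as Figure 1 that one elementary step takes 10⁻³ seconds.

    import math

    STEP_TIME = 1e-3            # seconds per elementary step, as assumed in Figure 1
    SECONDS_PER_YEAR = 3.15e7

    # Worst-case complexity functions discussed in the text.
    complexities = {
        "n^2": lambda n: n ** 2,
        "n^3": lambda n: n ** 3,
        "2^n": lambda n: 2 ** n,
        "n!":  lambda n: math.factorial(n),
    }

    for n in (10, 30, 50, 90):
        for name, f in complexities.items():
            seconds = f(n) * STEP_TIME
            years = seconds / SECONDS_PER_YEAR
            print(f"n={n:3d}  {name:3s}  {seconds:.3e} s  (~{years:.3e} years)")

Already for n = 90, the exponential and factorial rows dwarf any astronomical time scale, which is the point of Figure 1.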

Figure 1: Running times as a function of input size n.

Let us try to get a feeling for the time complexity of some problems. Given an n × n matrix M = (mij), its determinant is defined by the Leibniz formula

    det(M) = Σ_{σ ∈ Sn} sgn(σ) m_{1σ(1)} m_{2σ(2)} ··· m_{nσ(n)},

where σ is one permutation from the symmetric group of permutations Sn, and sgn(σ) is the sign of the permutation. A similar associated value of a square matrix is the permanent, which is defined as

    perm(M) = Σ_{σ ∈ Sn} m_{1σ(1)} m_{2σ(2)} ··· m_{nσ(n)}.

For example, if M is a 2 × 2 matrix, then det(M) = m11·m22 − m12·m21, while perm(M) = m11·m22 + m12·m21.

A direct implementation of the definitions yields an O(n!) algorithm for both problems. However, by using the LU decomposition of Gauss and Turing, the determinant of an n × n matrix can be computed in O(n³) steps, see for example Cormen, Leiserson, Rivest and Stein (2001), while computing the permanent seems to be much more difficult (Valiant, 1979). Recently D. Glynn has obtained a deterministic bound of O(n·2ⁿ) on the complexity of the permanent (Glynn, 2010). Notice that if M1 and M2 are square matrices then det(M1·M2) = det(M1) × det(M2), while in general perm(M1·M2) ≠ perm(M1) × perm(M2).
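The following Python sketch (ours; the example matrix is arbitrary) makes the contrast tangible for small matrices: it evaluates both the determinant and the permanent directly from the definitions above, which takes O(n!) time, and also computes the determinant by Gaussian elimination in O(n³) arithmetic operations; no comparably fast method is known for the permanent.

    from itertools import permutations
    from fractions import Fraction

    def sign(perm):
        # Sign of a permutation, computed by counting inversions.
        inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
                  if perm[i] > perm[j])
        return -1 if inv % 2 else 1

    def det_and_perm_bruteforce(M):
        # Leibniz-style formulas: sum over all n! permutations (O(n!) work).
        n = len(M)
        det = perm = 0
        for p in permutations(range(n)):
            prod = 1
            for i in range(n):
                prod *= M[i][p[i]]
            det += sign(p) * prod
            perm += prod
        return det, perm

    def det_gaussian(M):
        # O(n^3) determinant by Gaussian elimination (exact, using fractions).
        A = [[Fraction(x) for x in row] for row in M]
        n, det = len(A), Fraction(1)
        for col in range(n):
            pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
            if pivot is None:
                return Fraction(0)
            if pivot != col:
                A[col], A[pivot] = A[pivot], A[col]
                det = -det
            det *= A[col][col]
            for r in range(col + 1, n):
                factor = A[r][col] / A[col][col]
                for c in range(col, n):
                    A[r][c] -= factor * A[col][c]
        return det

    M = [[1, 2, 0], [3, 1, 4], [0, 2, 1]]
    print(det_and_perm_bruteforce(M))   # (determinant, permanent) from the definitions
    print(det_gaussian(M))              # same determinant, obtained in polynomial time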

A problem is said to be feasible if there exists an algorithm that solves it with polynomial time complexity. All other decidable problems are said to be unfeasible, which does not preclude that in the near or distant future some of them may turn out to be feasible.

A particularly interesting kind of problems are the combinatorial optimization problems, where for any instance x and a solution sx for that instance there is a cost function k(x, sx) ∈ ℝ⁺, and the objective is to find the solution sx* that optimizes (maximizes or minimizes) k(x, sx) over all possible solutions³. Let Opt(x) denote the cost of an optimal solution.

For example, consider the chromatic number problem of a graph: "Given as input a graph G = (V, E), find the minimum number of different colors needed to have a valid coloring of the vertices in V", where a valid coloring of G is an assignment of labels χ : V → {1, 2, …, k}, each integer representing a color, such that for any (u, v) ∈ E, χ(u) ≠ χ(v). Notice that, for any input G, an optimal solution is a valid coloring using the minimum number of colors, and Opt(G) is that number of colors.

4.1 The classes P, NP and NP-complete

In the early 1950's, some mathematicians started to realize that some decidable problems took too long to be solvable for large inputs. In a series of letters to the US National Security Agency, John Nash proposed a secure cryptographic encryption system based on the computational difficulty of problems (Nissan, 2004).

These letters foresaw the classification of decidable problems according to their computational complexity, and could have given birth to the field of complexity theory had the letters not remained a state secret until 2012.

The second important historical event for the study of problem complexity was the letter Kurt Gödel sent to John von Neumann in March 1956. At the time, von Neumann was dying and he did not read the letter, which was lost until the 1980's. An English translation of the letter can be found in the appendix of Lipton's book (2010), which is a compilation of selected posts from his useful blog. As it is beautifully told in Lipton's poetic version of the events: "Kurt Gödel is walking along through the falling snow, he is thinking. Gödel is the greatest logician of his time, perhaps of all times, yet he is deeply troubled. He has found a problem he cannot solve. The problem concerns a simple but fundamental question. Suddenly he smiles, he has an idea. If he cannot solve the problem, then he will write a letter explaining it to John von Neumann; perhaps John will be able to solve the problem. After all John von Neumann has one of the fastest minds in the world. Gödel pulls his scarf tighter, gets his bearings in the heavy falling snow, and heads to his office to write his letter".

In his letter, Gödel considered the truncated Entscheidungsproblem: "Given any statement in first-order logic (for example, ∃ x, y, z, n ∈ ℕ − {0} : (xⁿ + yⁿ = zⁿ) ⋀ (n ≥ 3)), decide if there is a proof of the statement of finite length, of at most m lines", where m could be any large constant (say 10¹⁰). As any mathematical proof can be represented using a small constant number c of symbols, the problem is decidable: just construct all the exponentially many cᵐ possible proofs of length at most m over the finite alphabet, and check whether any of them works. Notice that most of the proofs will be gibberish. Moreover, for large m, there is a chance that the process could take longer than the existence of the Earth! (see Fig. 1). Gödel in his letter asked if it was possible to do it in O(m²) steps, making explicit the possible partition between decidable problems solvable in polynomial time and decidable problems that cannot be solved in polynomial time.

During the 1960's, the existence of easy and hard problems started to become more obvious to researchers. As computers were solving problems with increasing input size, researchers began to consider the effect of input size on the computation time (number of steps) and space (number of cells visited), using the Turing machine as the computational model. An important contribution appeared in 1965 by Hartmanis and Stearns (1965), which defined the multi-tape Turing machine and laid the foundations of complexity theory. The same year, Edmonds (1965) gave an informal description of non-deterministic polynomial-time problems, i.e., those that permit verifying in polynomial time whether a guessed hypothetical solution is indeed a correct solution to the problem. Nondeterminism would play a key role in the development of computational complexity theory. A nondeterministic algorithm can be seen as a game where a computationally omnipotent prover P provides a witness to the solution of the problem, and a verifier V has to check deterministically that the witness is indeed a correct solution.

Consider the problem of the satisfiability of a conjunctive normal form (SAT): "Given a set of Boolean variables X and a Boolean formula Φ on X in conjunctive normal form, decide if there is an assignment A : X → {T, F} (true, false) such that it satisfies Φ" (i.e., makes Φ evaluate to T). A formula Φ in conjunctive normal form consists of the conjunction of clauses, where each clause is a disjunction of literals (Boolean variables or their negations). For example, take X = {x1, x2, x3, x4} and a formula Φ given by a conjunction of such clauses over these four variables. A backtracking deterministic algorithm would search for an assignment satisfying Φ. In a nondeterministic algorithm, P would supply as witness an assignment A (for example, A(x1) = A(x4) = T, A(x2) = A(x3) = F) and V would have to verify that substituting such an assignment satisfies Φ. Note that if Φ has input size n (the length of Φ), V can verify whether the assignment satisfies Φ in O(n) steps.
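Such a linear-time verifier is easy to write down. In the Python sketch below (ours; the encoding of literals as signed integers and the example formula phi are our own choices, not taken from the article), a CNF formula is a list of clauses and an assignment is checked in a single pass over the formula.

    def verify_sat(formula, assignment):
        # formula: list of clauses; each clause is a list of non-zero integers,
        #   where literal k means x_k and literal -k means the negation of x_k.
        # assignment: dict mapping variable index -> True/False.
        # Runs in time linear in the length of the formula.
        for clause in formula:
            if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
                return False          # this clause is not satisfied
        return True

    # A small CNF over x1..x4 and the witness A(x1)=A(x4)=T, A(x2)=A(x3)=F.
    phi = [[1, -2, 3], [-3, 4], [-1, -2, 4]]
    A = {1: True, 2: False, 3: False, 4: True}
    print(verify_sat(phi, A))         # True: the witness satisfies every clause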
In 1971 Stephen Cook showed that the SAT problem is a paradigmatic unfeasible problem, in the sense that any problem that can be solved⁴ in polynomial time by a nondeterministic Turing machine can be reduced to the SAT problem, where the reducibility is the same concept as Turing-reducibility but with the added condition that the construction must be carried out in polynomial time (Cook, 1971). The paper was presented at the STOC-71 conference, and again using the Lipton (2010) description: "A tall figure walks slowly to the front of the conference room. Steve Cook is a young scientist, who is about to change the world. He has independently discovered the problem that troubled Gödel that snowy day. He gives his talk, and after there is a polite applause, as there is for every talk".

One attendee who understood perfectly the meaning of Cook's result was Richard Karp, who the following year published a seminal paper formally defining the classes P, NP and NP-complete, and provided the proof that nine well-known problems are in the class NP-complete (Karp, 1972). His starting seed was that SAT ∈ NP-complete, which he coined as Cook's Theorem. At the same time, but independently from Cook and Karp, a Russian mathematician, Leonid Levin, was essentially defining the class NP-complete (Levin, 1973); the article appeared translated into English the same year, 1973, but it took a while for the community working in complexity theory to realize the significance of Levin's result (see Trakhtenbrot, 1984 for a history of the early complexity theory developments in Russia). Today the result that SAT is NP-complete is known as the Cook-Levin Theorem.

We next give an intuitive description of the complexity classes. For a more technical description the reader is referred to any of the complexity or algorithmic books listed in the bibliography of this paper.

P is the class of problems for which there is a deterministic algorithm finding a solution in polynomial time, for any input.

NP is the class of problems such that, if an omniscient prover P provides a polynomial-length witness to the solution of the problem, a verifier V can check in polynomial time whether the witness is indeed correct. The term NP stands for non-deterministic polynomial time.

From the previous remarks on the SAT problem, it is clear that SAT ∈ NP: P supplies a valid assignment and V checks in linear time that the assignment satisfies the formula. However, if we consider a combinatorial optimization problem, such as the chromatic number of a graph, it is not so easy. Given an input G = (V, E), the chromatic number problem consists of finding a minimum valid coloring χ* for G, i.e., assigning the minimum number of colors to V such that for every edge (u, v) ∈ E, χ*(u) ≠ χ*(v).
To see why this problem does not seem to be in NP, assume that P provides a witness χ* to the colorability of V; V can test that χ* is indeed a valid coloring by looking at every edge of G, which has a cost of O(n²). However, V must also verify that there is no other valid coloring with fewer colors than χ*, i.e., he would have to compare χ* with all other possible valid colorings for G, which could be exponentially many. Therefore, the optimization problem does not appear to be in NP.

To circumvent this difficulty, we consider the search version of the combinatorial optimization problem, where as a part of the input we are also given a value b that the cost function can take. If the optimization problem asks to maximize, the search version requires that k(x) ≥ b, and if it is a minimization problem the search version requires that k(x) ≤ b. Continuing with the example, the search version of the chromatic number problem can be stated as follows: "Given as input G = (V, E) and a b > 0, find a valid coloring χ of G such that |χ(V)| ≤ b".

The search versions of optimization problems are obviously in NP. Furthermore, by using binary search it is a standard exercise to prove that if the search version of an optimization problem can be solved in polynomial time, then the optimization version can also be solved in polynomial time (see Exercise 8.1 in Dasgupta, Papadimitriou and Vazirani, 2008); a sketch of this reduction for the chromatic number is given at the end of this list of examples.

Let us see some further examples of problems in P and NP:

The primality problem: "Given an integer a with length n = log a bits, decide whether a is prime". There is a non-trivial proof that primality ∈ NP due to V. Pratt (1975), and for a long time it was open whether the problem was in P. In 2002, Agrawal, Kayal and Saxena provided a deterministic polynomial-time algorithm to solve the primality problem, so the problem is in P (Agrawal, Kayal, and Saxena, 2002).

The factorization problem: "Given an integer a of n bits, find its prime decomposition". This is an important problem as, among other things, it is the basis of RSA, one of the most widely used public-key cryptographic systems (see Chapter 8 in Singh, 2000). Factorization is in NP, since if P provides a witness p1, …, pm, V can test that each pi is prime (using the algorithm in Agrawal, Kayal, and Saxena, 2002) and check that the product is the given number. Nevertheless, it is an important open question whether there is a polynomial-time algorithm to solve factorization⁵.

The problem of 3-satisfiability (3-SAT) is the particular case of SAT where every clause in the input has exactly 3 literals. The problem is also in NP. However, an easy greedy algorithm puts 2-satisfiability in the class P.

Finally, the truncated Entscheidungsproblem is also in NP, since if we get a proof of less than 10¹⁰ pages as a certificate, any mathematician can verify in a reasonable time (polynomial in 10¹⁰) whether it is correct (independently of the fact that other valid proofs may exist among the exponential number of generated ones).
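Here is the promised sketch of the binary-search reduction for the chromatic number (ours; search_coloring stands for any procedure solving the search version, and the brute-force stand-in below exists only to make the example runnable on tiny graphs). If each call to the search-version solver ran in polynomial time, the whole optimization would too, since only O(log n) calls are made.

    from itertools import product

    def brute_force_search_coloring(G, b):
        # Exponential-time stand-in for the search-version solver, only for
        # tiny graphs: it tries every assignment of b colors to the vertices.
        vertices = sorted(G)
        for colors in product(range(b), repeat=len(vertices)):
            chi = dict(zip(vertices, colors))
            if all(chi[u] != chi[v] for u in G for v in G[u]):
                return chi
        return None                     # no valid coloring with at most b colors

    def chromatic_number(G, search_coloring):
        # Binary search over b: O(log n) calls to the search-version solver.
        n = max(len(G), 1)
        lo, hi, best = 1, n, None       # the chromatic number lies in [1, n]
        while lo <= hi:
            b = (lo + hi) // 2
            coloring = search_coloring(G, b)
            if coloring is not None:    # b colors suffice, try fewer
                best, hi = coloring, b - 1
            else:                       # b colors are not enough
                lo = b + 1
        return best                     # an optimal coloring

    # A 5-cycle needs 3 colors.
    C5 = {1: {2, 5}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4, 1}}
    opt = chromatic_number(C5, brute_force_search_coloring)
    print(len(set(opt.values())))       # prints 3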

Notice that P ⊆ NP: if V can obtain a deterministic solution in polynomial time, he can also verify a witness in polynomial time.

The problem of deciding whether P = NP was considered one of the seven millennium problems posed by the Clay Mathematics Institute in the year 2000. A 10⁶ US$ prize is awarded for each solution to one of the problems.

The answer P ≠ NP would mean that, for many important search problems, finding is more difficult than verifying. If, on the contrary, the answer is P = NP, then there would be a feasible algorithm for the truncated Entscheidungsproblem, i.e., for proofs with a large but finite number of lines, which in practice would suffice for most mathematical theorems.

Figure 2: Complexity classes inside NP.

4.2 Karp reducibility and NP-completeness

To further study the relationship between search problems and the P = NP question, Karp defined the following simplified variation of Turing-reducibility: Given search problems A and B, with sets of instances I_A and I_B and sets of solutions S_A and S_B, a Karp reduction from A to B (A ≤P B) is a polynomial-time computable function f : I_A → I_B such that, for any input x ∈ I_A, f(x) has a solution in S_B iff x has a solution in S_A.

In other words, if A ≤P B and we have a polynomial-time algorithm A_B to solve B, then we have a polynomial-time algorithm to solve A: for any x ∈ I_A, compute f(x) in polynomial time and run A_B on it. Notice that a polynomial-time algorithm to solve A does not imply anything about the existence of a polynomial-time algorithm to solve B.

A search problem B is NP-complete if B ∈ NP and, for any NP problem A, we have A ≤P B.

A useful property of reductions is that they are transitive. This property, together with the previous remark about reductions as problem solvers, tells us that NP-complete is the most difficult class of NP problems, in the sense that if one NP-complete problem is shown to be in P then P = NP.

It is known that if P ≠ NP, there would be problems in NP that are neither in P nor NP-complete. These are in the class NP-intermediate (see Figure 2).

It is beyond the scope of this paper to show detailed reductions between NP-complete problems, which can be found in any standard algorithmic book, see for example Garey and Johnson, 1979, or Moore and Mertens, 2011. But we would like to enumerate a few more examples of NP-complete and NP-intermediate problems. For a full taxonomy of NP-complete problems see Crescenzi and Kann (2012).

As already mentioned, SAT was the first problem known to be NP-complete; 3-SAT, the truncated Entscheidungsproblem, and the chromatic number of a graph are also NP-complete problems. Factoring an integer as a product of primes is believed to be NP-intermediate. Another significant problem believed to be in the class NP-intermediate is graph isomorphism: Given two graphs G1 = (V1, E1) and G2 = (V2, E2), find if there is a permutation π : V1 → V2 such that (u, v) ∈ E1 iff (π(u), π(v)) ∈ E2.

Consider the 3-coloring of a graph problem: "Given an input graph G = (V, E), decide if there is a valid coloring of V with exactly 3 colors" (see Figure 3). The problem is in NP: if P provides a 3-color witness χ, then V can verify in O(|V|²) steps whether χ is a valid coloring. It is known that this problem is NP-complete. Deciding whether a graph is 2-colorable and, if so, finding the coloring is in P (Dasgupta, Papadimitriou and Vazirani, 2008).
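Two-colorability, by contrast, is simply a test of bipartiteness, and the standard breadth-first procedure below (a sketch of ours) decides it, and produces a valid 2-coloring when one exists, in time linear in the size of the graph.

    from collections import deque

    def two_coloring(G):
        # G: dict {vertex: set of neighbours}.  Returns a valid 2-coloring as a
        # dict vertex -> 0/1, or None if the graph is not 2-colorable.
        color = {}
        for start in G:
            if start in color:
                continue
            color[start] = 0
            queue = deque([start])
            while queue:
                u = queue.popleft()
                for v in G[u]:
                    if v not in color:
                        color[v] = 1 - color[u]   # give v the opposite color
                        queue.append(v)
                    elif color[v] == color[u]:
                        return None               # odd cycle found
        return color

    square = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {3, 1}}   # 4-cycle: 2-colorable
    triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}            # needs 3 colors
    print(two_coloring(square))     # e.g. {1: 0, 2: 1, 4: 1, 3: 0}
    print(two_coloring(triangle))   # None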
These are in rive from the Traveling Salesman Problem (TSP). The the class NP-intermediate (see Figure 2). search version of TSP has as input a complete graph G = (V, E), with a weight w(v , v ) ≥ 0 on each edge (v , v ), It is beyond the scope of this paper to show de- i j i j together with a value c, the goal is to find a tour visit- tailed reductions between NP-complete problems, which can be found in any standard algorithmic book, ing all vertices exactly once, such that see for example Garey and Johnson, 1979; Moore and Mertens, 2011. But we would like to enumerate a few more examples of problems NP-complete and

Figure 3: A 3-colorable graph G with a valid coloring.


This problem is NP-complete. Another important set of practical problems derive from the job scheduling problem. In its most general setting, the problem can be formulated as follows: "Given n jobs J1, ..., Jn, where each Jk has a processing time tk, and given m identical machines M1, ..., Mm, each job Jk must run on some machine Mi for tk consecutive units of time, and during that time no other job can run on the same machine. The problem is to find an assignment of jobs to machines that minimizes the makespan of the schedule, max_{1 ≤ i ≤ m} Ti, where Ti denotes the time at which machine Mi completes its jobs". The search version of the problem is NP-complete for m > 2. For more information on the problem and its variations, see Brucker (2006).

As a final example, we would like to present evidence of the influence of the P versus NP problem in other disciplines, for example, economics. The Efficient Market Hypothesis (EMH) states that future prices cannot be predicted by analyzing prices from the past (Fama, 1965). In an enjoyable recent paper, the economist P. Maymin (2011) proves that the EMH is false iff P = NP. For people with a little background in complexity theory the paper is quite straightforward; still, it presents a clear example of the spread of complexity theory to other fields.

4.3 Coping with NP-completeness

What can be done when having to deal with an NP-complete problem? For inputs of very small size n, a complexity of O(2ⁿ) or O(n!) can be computed in reasonable computer time, but as we showed in Figure 1, the problem becomes intractable as n grows.

Nevertheless, it turns out that for some NP-complete problems there are only a few "bad" inputs. The worst-case complexity assumes that there is a devilish adversary that chooses the worst possible instance of the problem. In the early 1990's there was empirical evidence that for some problems, such as graph coloring, satisfiability and integer partition, there was a sharp jump from instances that were easily solved to instances that were easily shown not to have a solution, and just a few instances were difficult to solve. This phenomenon is called a phase transition, as it is similar to the phenomena studied in physics of sudden changes of state between solid, liquid and gas. For instance, consider the 3-SAT problem. There is a well-known clever exhaustive search algorithm for SAT, the Davis-Putnam-Logemann-Loveland (DPLL) algorithm, which was shown to solve many random instances of 3-SAT in polynomial time (Mitchell, Selman, and Levesque, 1992). To produce a random instance of 3-SAT with n Boolean variables and m = rn clauses, one chooses each clause uniformly at random among the possible triples of variables (each with probability 1 over n choose 3), and then goes over the variables in each clause and negates each one with probability 1/2. The density of one of these formulae is defined as r = m/n. For example, a random 3-SAT formula with n = 10 variables and m = 4 clauses has density r = 0.4.
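This random model is easy to reproduce. The Python sketch below (ours; function and variable names are our own) draws a random 3-SAT formula with n variables and m = r·n clauses, each clause built from 3 distinct variables chosen uniformly at random and negated independently with probability 1/2.

    import random

    def random_3sat(n, r, seed=None):
        # n: number of Boolean variables x_1 .. x_n
        # r: density, so the formula has m = round(r * n) clauses
        # A literal k stands for x_k, and -k for its negation.
        rng = random.Random(seed)
        m = round(r * n)
        formula = []
        for _ in range(m):
            variables = rng.sample(range(1, n + 1), 3)   # 3 distinct variables
            clause = [v if rng.random() < 0.5 else -v for v in variables]
            formula.append(clause)
        return formula

    phi = random_3sat(n=10, r=0.4, seed=1)          # 4 clauses: density 0.4
    print(phi)
    below_threshold = random_3sat(n=200, r=2.0)     # well below the 4.27 threshold
    near_threshold = random_3sat(n=200, r=4.27)     # near the conjectured threshold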

Mitchell, Selman, and Levesque (1992) first experimentally showed that the number of random 3-SAT formulae for which the DPLL algorithm took exponential time was small (see Figure 4a). Moreover, their experiments also showed that, with high probability, for densities r < 4.2 random 3-SAT formulae are satisfiable, and for r > 4.2 random 3-SAT formulae are not satisfiable (see Figure 4b).

In 2002, using non-rigorous techniques from statistical physics (the replica method) on very large instances of 3-SAT, Mézard, Parisi and Zecchina (2002) and Mézard and Zecchina (2002) showed that the threshold for 3-SAT occurs at formulae with density rc = 4.27, i.e., random 3-SAT formulae with density below 4.27 are satisfiable with high probability, and random 3-SAT formulae with density above 4.27 are not satisfiable with high probability. Since then, there has been an effort to establish this sharp phase transition by rigorous analytical methods. So far for 3-SAT the best lower bound is 3.52 (Hajiaghayi and Sorkin, 2003; Kaporis, Kirousis and Lalas, 2006) and the corresponding upper bound is 4.4907 (Díaz, Kirousis, Mitsche and Pérez, 2009). Closing the gap remains an open problem. See Chapter 14 in Moore and Mertens (2011).

The fact that some NP-complete problems have a not too large number of bad instances indicates that sometimes heuristics can be used to achieve good results. Heuristics are procedures with no guarantees either on the running time or on the accuracy of the obtained solution, but for many hard problems, like for example the layout of VLSI circuits, heuristics are the best practical solution (Wolf, 2011). Some modern textbooks on algorithms include a chapter on heuristics: clever backtracking techniques, like the DPLL algorithm we mentioned for solving SAT, simulated annealing, or different versions of local search. For a general textbook on heuristics see Michalewicz and Fogel (1998).

Another practical alternative is using approximation algorithms. An algorithm is said to r-approximate an optimization problem if, on every input x, it finds a solution whose cost is within a factor r of the optimal cost Opt(x): at most r·Opt(x) if the problem asks for the minimum value, or at least Opt(x)/r if the problem asks for the maximum value.

Almost since the beginning of the development of complexity theory there was a parallel effort to develop approximation algorithms for NP-complete problems. Although the first such algorithm is due to Graham, who developed it in 1966 to approximate a version of scheduling, the seminal paper for approximation theory, by Johnson (1974), came very little after Cook, Levin and Karp's papers.
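To give a flavour of such guarantees, here is a sketch (ours) of the greedy list-scheduling rule in the spirit of Graham's algorithm mentioned above: each job is assigned, in the given order, to the machine that is currently least loaded. This simple rule is known to produce a makespan within a factor 2 − 1/m of the optimum.

    import heapq

    def list_scheduling(processing_times, m):
        # Greedy rule: assign each job, in the given order, to the machine
        # that currently finishes earliest.  Returns (makespan, assignment).
        machines = [(0.0, i) for i in range(m)]     # (current load, machine id)
        heapq.heapify(machines)
        assignment = {}
        for job, t in enumerate(processing_times):
            load, i = heapq.heappop(machines)       # least loaded machine
            assignment[job] = i
            heapq.heappush(machines, (load + t, i))
        makespan = max(load for load, _ in machines)
        return makespan, assignment

    jobs = [7, 5, 6, 4, 3, 7, 2]
    print(list_scheduling(jobs, m=3))   # greedy makespan 14.0; the optimum here is 12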

Figure 4: Experiments with random 3-SAT formulae, from Mitchell, Selman, and Levesque (1992). In (a) each dot represents the time DPLL takes on a random instance of 3-SAT. Light dots represent satisfiable instances and dark dots represent non-satisfiable instances. (b) The phase transition satisfiable to non-satisfiable for a random 3-SAT formula occurs at density 4.27.

Since then, the theory of approximation and of inapproximability has become a very fruitful field in theoretical computer science, with good books covering the topic, for example Williamson and Shmoys (2010).

Parallelism is another powerful tool to speed up computation, by a constant factor. Notice that anything that can be done with 10¹⁰ processors in T steps can be done with one processor in 10¹⁰·T steps. But unless P = NP, the difference between P and NP-complete problems is an exponential cost of computation; therefore parallelism is not the tool to solve NP-complete problems in polynomial time.

In the same way, a result by Impagliazzo and Wigderson (1997) implies that, unless P = NP, randomization would not help to solve NP-complete problems. In fact, there is a stronger conjecture: randomization mainly helps to improve the time complexity of problems already in P.

At the present time, quantum computers are not an existing reality, although D-Wave Systems, Inc. has managed to build a 128-qubit processor (not general purpose). At the theory level, we know quantum computation could solve NP-intermediate problems, for example factorization of an integer (Shor, 1997), but it is generally agreed that, unless P = NP, quantum computation would not help with NP-complete problems (Bernstein and Vazirani, 1997). See Aaronson (2008) for an extensive discussion of the limitations of quantum computers.

5. Beyond Turing: Interactive Proofs and Zero-Knowledge Proofs

A fascinating line of research in theoretical computer science started in the mid 1980's, which has led to fruitful results, namely Interactive Proofs. In this section we aim to give a brief intuitive introduction to the topic, in particular to Zero-Knowledge Proofs (ZKP) and Probabilistically Checkable Proofs (PCP). For the interested reader, we recommend Chapter 11 in Moore and Mertens (2011) and Chapters 8, 9 and 11 in Arora and Barak (2009). The topic has spanned 26 years, with numerous papers and fruitful applications in various fields, such as cryptography and approximation algorithms, among others.

The basic idea is how one researcher (the prover P) can convince another researcher (the verifier V)⁶ that he has a correct proof of a difficult theorem by showing V only a few random bits of the proof, in such a way that at the end V is convinced that the proof is correct without having any insight into how it works.

To grasp the concept of zero-knowledge proof, let us start with the very simple illustrative example from Quisquater, Quisquater, Quisquater, Quisquater, Guillou, Guillou, Guillou, Guillou, Guillou, Guillou, and Berson (1989)⁷. There is a cave with two paths A and B, which are connected at their ends by a secret passage (see Figure 5). To cross that passage one must know the magic words. In our case, P wants to convince V that he knows the magic words without telling them to V. They agree on the following protocol: V will remain outside the cave, so he cannot see which path P takes. When P arrives at the end of the cave, V tells P which path to return along, and V enters the cave to make sure P is returning by the path he indicated. The probability that P happened to take the path V asks him to return by is 1/2; therefore, if the experiment is repeated a sufficiently large number of times and each time P returns by the correct path, then with probability tending to 1 P must know the magic words to cross the cave, and V is convinced that P knows them.

Figure 5: Ali Baba's magic cave.
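The soundness of the cave protocol rests only on the halving of the cheating probability at each round. The small Monte Carlo sketch below (ours) estimates the chance that a prover who does not know the magic words survives k rounds; it decreases as (1/2)^k, which is why repeating the experiment convinces V.

    import random

    def cheating_prover_survives(rounds, rng):
        # A prover without the magic words enters by a random path and cannot
        # switch sides; he passes a round only if V happens to ask for that path.
        for _ in range(rounds):
            path_taken = rng.choice("AB")
            path_requested = rng.choice("AB")
            if path_taken != path_requested:
                return False
        return True

    rng = random.Random(0)
    trials = 100_000
    for k in (1, 5, 10, 20):
        survived = sum(cheating_prover_survives(k, rng) for _ in range(trials))
        print(f"{k:2d} rounds: empirical {survived / trials:.6f}  vs  2^-{k} = {2 ** -k:.6f}")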

ZKP are a particular case of the more general Interactive Proof Systems, introduced concurrently in Babai (1985) and Goldwasser, Micali and Rackoff (1989)⁸. Technically, the word proof refers here to a randomized interactive protocol between P and V, where P has unlimited computational capabilities and tries to convince V of the truth of a certain statement. Loosely speaking, the two characteristics that an interactive protocol must have to be an interactive proof are that an honest V should always be convinced by an honest P, while a cheating P should have only a very small probability of convincing an honest V that a false statement is true. The interactive proof system is zero-knowledge if V is not going to learn anything from the interaction with P. In the previous example, V becomes convinced that P knows the magic words to cross between the two paths, without learning anything about the words themselves.

Let us describe a more interesting example. Consider a given graph G = (V, E), with |V| = n and |E| = m. As we saw before, deciding whether G has chromatic number 3 is an NP-complete problem, therefore in general it is not feasible to solve it for large values of n. In this setting, P wants to convince V that he has a valid 3-coloring of G, without revealing the coloring. This example is from Goldreich, Micali and Wigderson (1991). At each iteration of the protocol, V only has access to the colors at the two ends of a single edge that he chooses at that iteration. The protocol is the following: First, P selects a valid 3-coloring (for every (u, v) ∈ E, u and v must have different colors) and generates all 3! = 6 permutations of the colors, which yield 6 valid 3-colorings of G; let C be the set of these 6 valid colorings. For n³ iterations, at each iteration i, P chooses with probability 1/6 a new coloring ci ∈ C, V selects an edge, and verifies that the edge is correctly colored. Assume that G has a valid 3-coloring (see for example Figure 3). Notice that if V chooses the same edge (u, v) at two different iterations, the colors assigned to each vertex may be different, as at each iteration P chooses a new random coloring; therefore V will not be able to learn a valid coloring of the whole G. On the other hand, if G is not 3-colorable, at each round at least one edge will have the same color at both endpoints, therefore the probability that V discovers a wrongly colored edge is at least 1/m per round. As m ≤ n², after n³ iterations V will discover, with probability tending to 1, that G is not 3-colorable.
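A toy simulation of this protocol is sketched below (ours; the cryptographic commitment of the colors, discussed next, is omitted, so the code only illustrates the probabilities involved, and the example graph is a 5-cycle rather than the graph of Figure 3). At each round the prover relabels the colors with a random permutation and the verifier inspects one random edge.

    import random
    from itertools import permutations

    def zk_3coloring_rounds(G, coloring, rounds, rng):
        # G: dict {vertex: set of neighbours}; coloring: vertex -> {0, 1, 2},
        # possibly an invalid attempt.  Returns True if V accepts every round.
        edges = [(u, v) for u in G for v in G[u] if u < v]
        relabelings = list(permutations(range(3)))   # the 3! = 6 color permutations
        for _ in range(rounds):
            sigma = rng.choice(relabelings)          # P picks a fresh relabeling
            u, v = rng.choice(edges)                 # V picks one edge to inspect
            if sigma[coloring[u]] == sigma[coloring[v]]:
                return False                         # V catches an invalid edge
        return True

    C5 = {1: {2, 5}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4, 1}}
    good = {1: 0, 2: 1, 3: 0, 4: 1, 5: 2}            # a valid 3-coloring of C5
    bad = {1: 0, 2: 1, 3: 0, 4: 1, 5: 0}             # edge (1, 5) is monochromatic
    rng = random.Random(0)
    n = len(C5)
    print(zk_3coloring_rounds(C5, good, n ** 3, rng))   # True: honest prover passes
    print(zk_3coloring_rounds(C5, bad, n ** 3, rng))    # almost surely False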

Technically, the way P shows the colors to V is a bit more involved, using a one-way function. One-way functions are functions that can be easily computed but are hard (exponential time) to invert. A trivial example of a one-way function is integer multiplication: it is easy to multiply m = a1 × a2 × ... × an; however, as we already mentioned, there is no known polynomial-time algorithm for factorizing m. Another more interesting example of a one-way function is the discrete logarithm: "Given n-bit integers x, y, z, find whether there exists an integer w such that y = xʷ mod z". Given x, w and z, it is easy to compute y = f(x, w, z) = xʷ mod z, but it is conjectured that finding f⁻¹(y) takes exponential time. One-way functions are in standard use in cryptography.
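In Python, the easy direction of this function is a single call to the built-in modular exponentiation, while the only inversion we sketch here is brute force over the exponents (ours, with toy parameters; real uses of the discrete logarithm involve moduli of hundreds or thousands of bits, for which such a loop is hopeless).

    def discrete_log_bruteforce(x, y, z):
        # Recover some exponent w with x**w % z == y by exhaustive search:
        # time exponential in the bit length of z.  No polynomial-time
        # algorithm is known for this inverse direction in general.
        for w in range(z):
            if pow(x, w, z) == y:
                return w
        return None

    x, z, w = 5, 1_000_003, 12_345             # small toy parameters
    y = pow(x, w, z)                           # easy direction: fast modular power
    recovered = discrete_log_bruteforce(x, y, z)
    assert pow(x, recovered, z) == y           # some valid exponent is recovered
    print(y, recovered)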

By applying reductions to the 3-colorability problem, it was shown in Goldreich, Micali and Wigderson (1991) that, under the assumption that one-way functions exist, every problem in the class NP has a zero-knowledge proof. In fact, if we denote by IP the class of problems having an interactive protocol, it is known that IP, as a complexity class, contains far more difficult problems than NP (under the hypothesis P ≠ NP) (Shamir, 1992). Computing the permanent of a matrix is a problem in the class IP, which means that even under the hypothesis P = NP it would remain a hard problem.

The culmination of interactive proof systems research was one of the most beautiful and deep theorems in computer science, the PCP-theorem. Although the 2001 Gödel prize was shared by Arora and Safra (1998), Arora, Lund, Motwani, Sudan and Szegedy (1998), and Feige, Goldwasser, Lovász, Safra, and Szegedy (1996) for their contributions to Probabilistically Checkable Proofs and the PCP-theorem, many of the techniques and ideas are due to a much larger number of researchers (see for example Johnson (1992) for an extensive historical account, and Chapter 16 in Williamson and Shmoys (2010) for further recent work using the PCP-theorem to obtain inapproximability results).

The rough idea of probabilistically checkable proof systems is: "Given a conventional mathematical proof, in which a mistake could be hidden in any equation, transform the proof in such a way that the mistake is spread almost everywhere". This kind of proof is called a holographic proof. A PCP system for an NP problem encodes the witness to the problem in such a way that V can verify the witness probabilistically by looking only at a few of its bits, so that if the statement is true V accepts with probability 1, and if it is false V accepts with probability less than 1/2.

The PCP-theorem states that holographic proofs exist for problems in NP, i.e., that any problem in NP has a polynomial-length probabilistically checkable proof, where V flips O(log n) random coins and needs to look at only O(1) bits of the proof.

6. CONCLUSIONS

Contrary to Hilbert's Entscheidungsproblem, it remains an open problem to decide whether the truncated Entscheidungsproblem is feasible, i.e., it remains open to decide whether P = NP.
It follows from the arguments in the present paper that a positive answer to that question might settle the remaining open millennium problems as well.

There are at least 54 bogus proofs of the P = NP question in circulation: 26 "proving" the equality, 24 "proving" the strict inclusion, and 3 "proving" that the P = NP question is itself undecidable. For further details see Woeginger's P versus NP page. Most scientists working in complexity theory believe that P ≠ NP, but there are some top scientists in the field who disagree with the majority, see for instance Lipton's blog.

The aim of this manuscript was not to review the complexity field or the status and future of the P = NP question. As we pointed out, there are some outstanding textbooks dealing with the past, present and future attempts and results in complexity. Our only incursion into the areas of modern complexity has been the topic of interactive proofs, and this is because we think that it is a natural continuation of the truncated Entscheidungsproblem, in the sense that PCP basically tells us how to convince our colleagues that we have a correct proof without giving away any real details. Moreover, it turns out that PCP is a strong characterization of NP problems. Could we one day even have a practical holographic way to check proofs?

Complexity theory has studied different models of computation (Turing machines with a bounded number of steps, Boolean circuits, quantum algorithms, randomized algorithms), the complexity of different parameters (space, number of iterations, depth of circuits), and several measures of complexity (worst-case, average, smoothed). All this work has created a whole cosmos of about 500 complexity classes (Aaronson, Kuperberg and Granade). For most of them, strict relations are not known and, at the end, the main issue boils down to the basic intuition in Kurt Gödel's letter of 1956.

We tried to convey the idea that this is one of the deepest questions of today's science, affecting not only computer science, mathematics and physics, but also biology, sociology, economics and most human activities. We see this broad reach of the question as a direct consequence of Turing's way of looking through the algorithmic lens at problems in different disciplines, from cryptography and physics to biology. The spread of modern technologies is accelerating the need for an algorithmic view of today's social, economical, cultural, and political interactions, leading directly to the question P = NP? For an excellent survey of the role of the algorithmic view of the activities within the modern world, we recommend the book by Easley and Kleinberg (2010).

NOTES

1. In parallel to Turing, A. Church also gave a negative answer to the Entscheidungsproblem, using a different logic formalism, the λ-calculus.

2. The complexity is expressed in asymptotic notation, i.e., for very large values of the input. The notation T(n) = O(f(n)) means that lim_{n→∞} T(n)/f(n) = c, where c is a constant.

3. It may happen that there is no feasible solution for input x; in that case k(x, s) does not exist.

4. The correct word is recognized, as the problems are posed as recognition problems, i.e., determining whether a word belongs to a language over a finite alphabet.

5. A positive answer would render all transactions done using RSA insecure.

6. In a large part of the scientific bibliography, the prover and the verifier are respectively named Merlin and Arthur.

7. The authors frame their explanation in the Arabic tale "Ali Baba and the forty thieves" from the classic "One thousand and one nights".

8. The conference version of Goldwasser, Micali and Rackoff (1989) appeared at STOC-85.

REFERENCES

Aaronson, S. (2008). "The limits of quantum computers". Scientific American, 289, pp. 62-69.

Aaronson, S.; Kuperberg, G. and Granade, C. Complexity Zoo. http://qwiki.stanford.edu/index.php/Complexity_Zoo.

Agrawal, M.; Kayal, N. and Saxena, N. (2002). "Primes is in P". Annals of Mathematics, 2, pp. 781-793.

Arora, S. and Barak, B. (2009). Computational Complexity: A Modern Approach. Cambridge University Press.

Arora, S.; Lund, C.; Motwani, R.; Sudan, M. and Szegedy, M. (1998). "Proof verification and the hardness of approximation problems". Journal of the ACM, 45 (3), pp. 501-555.

Arora, S. and Safra, S. (1998). "Probabilistic checking of proofs: A new characterization of NP". Journal of the ACM, 45 (1), pp. 70-122.

Babai, L. (1985). "Trading group theory for randomness". In Proc. 17th ACM Symposium on the Theory of Computing, pp. 421-429.

Bernstein, E. and Vazirani, U. V. (1997). "Quantum complexity theory". SIAM Journal on Computing, 26 (5), pp. 1411-1473.

Brucker, P. (2006). Scheduling Algorithms. Springer, fifth edition.

Cook, S. (1971). "The complexity of theorem-proving procedures". In 3rd ACM Symposium on the Theory of Computing, pp. 151-158.

Cormen, T. H.; Leiserson, C.; Rivest, R. and Stein, C. (2001). Introduction to Algorithms. The MIT Press, 3rd edition.

Crescenzi, P. and Kann, V. (2012). A compendium of NP optimization problems.

Dasgupta, S.; Papadimitriou, C. and Vazirani, U. (2008). Algorithms. McGraw-Hill.

Davis, M. (2000). The Universal Computer: The Road from Leibniz to Turing. Norton.

Díaz, J.; Kirousis, L.; Mitsche, D. and Pérez, X. (2009). "On the satisfiability threshold of formulae with three literals per clause". Theoretical Computer Science, 410, pp. 2920-2934.

Doxiadis, A.; Papadimitriou, C.; Papadatos, A. and di Donna, A. (2009). LOGICOMIX: An Epic Search for Truth. Bloomsbury.

Easley, D. and Kleinberg, J. (2010). Networks, Crowds and Markets: Reasoning about a Highly Connected World. Cambridge University Press.

Edmonds, J. (1965). "Paths, trees, and flowers". Canadian Journal of Mathematics, 17, pp. 449-467.

Fama, E. (1965). "The behavior of stock-market prices". The Journal of Business, 38 (1), pp. 34-105.

Feige, U.; Goldwasser, S.; Lovász, L.; Safra, S. and Szegedy, M. (1996). "Interactive proofs and the hardness of approximating cliques". Journal of the ACM, 43 (2), pp. 268-292.

Garey, M. R. and Johnson, D. S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman.

Glynn, D. G. (2010). "The permanent of a square matrix". European Journal of Combinatorics, 31 (7), pp. 1887-1891.

Goldreich, O.; Micali, S. and Wigderson, A. (1991). "Proofs that yield nothing but their validity or all languages in NP have zero-knowledge proof systems". Journal of the ACM, 38 (1), pp. 691-729.

Goldwasser, S.; Micali, S. and Rackoff, C. (1989). "The knowledge complexity of interactive proof systems". SIAM Journal on Computing, 18 (1), pp. 186-208.

Graham, R. (1966). "Bounds for certain multiprocessing anomalies". Bell System Technical Journal, 45, pp. 1563-1581.

Hajiaghayi, M. T. and Sorkin, G. (2003). The satisfiability threshold of random 3-SAT is at least 3.52. Technical report, IBM Research Report.

Hartmanis, J. and Stearns, R. (1965). "On the computational complexity of algorithms". Transactions of the American Mathematical Society, 117, pp. 285-306.

Impagliazzo, R. and Wigderson, A. (1997). "P = BPP if E requires exponential circuits: Derandomizing the XOR lemma". In Proceedings of the Twenty-Ninth Annual ACM Symposium on the Theory of Computing, pp. 220-229.

Johnson, D. S. (1974). "Approximation algorithms for combinatorial problems". Journal of Computer and System Sciences, 9, pp. 256-278.

Johnson, D. S. (1992). "The NP-completeness column: The tale of the second prover". Journal of Algorithms, 13 (3), pp. 502-524.

Kaporis, A. C.; Kirousis, L. and Lalas, E. G. (2006). "The probabilistic analysis of a greedy satisfiability algorithm". Random Structures and Algorithms, 28 (4), pp. 444-480.

Karp, R. M. (1972). "Reducibility among combinatorial problems". In R. E. Miller and J. W. Thatcher (eds.), Complexity of Computer Computations, pp. 85-104. NY: Plenum Press.

Levin, L. (1973). "Universal sequential search problems". Problemy Peredachi Informatsii, 9, pp. 115-116.

Lipton, R. Gödel's lost letter and P = NP. http://rjlipton.wordpress.com.

Lipton, R. (2010). The P=NP Question and Gödel's Lost Letter. Springer.

Maymin, P. (2011). "Markets are efficient if and only if P = NP". Algorithmic Finance, 1, pp. 1-11.

Mézard, M.; Parisi, G. and Zecchina, R. (2002). "Analytic and algorithmic solution of random satisfiability problems". Science, 297, p. 812.

Mézard, M. and Zecchina, R. (2002). "The random k-satisfiability problem: from an analytic solution to an efficient algorithm". Physical Review E, 66, 056126.

Michalewicz, Z. and Fogel, D. (1998). How to Solve It: Modern Heuristics. Springer.

Mitchell, D.; Selman, B. and Levesque, H. (1992). "Hard and easy distributions of SAT problems". In Proceedings of the 10th National Conference on Artificial Intelligence (AAAI), pp. 459-465.

Moore, C. and Mertens, S. (2011). The Nature of Computation. Oxford University Press.

Nissan, N. (2004). John Nash's letter to the NSA. http://agtb.wordpress.com/2012/02/17/john-nashs-letter-to-the-nsa/, February 17.

Pratt, V. R. (1975). "Every prime has a succinct certificate". SIAM Journal on Computing, 4 (3), pp. 214-220.

Quisquater, J.-J.; Quisquater, M.; Quisquater, M.; Quisquater, M.; Guillou, L. C.; Guillou, M. A.; Guillou, G.; Guillou, A.; Guillou, G.; Guillou, S. and Berson, T. A. (1989). "How to explain zero-knowledge protocols to your children". In G. Brassard (ed.), CRYPTO-89, volume 435 of Lecture Notes in Computer Science, pp. 628-631. Springer.

Shamir, A. (1992). "IP = PSPACE". Journal of the ACM, 39 (4), pp. 869-877.

Shor, P. W. (1997). "Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer". SIAM Journal on Computing, 26 (5), pp. 1484-1509.

Singh, S. (2000). The Code Book. Anchor Books.

Trakhtenbrot, B. (1984). "A survey of Russian approaches to perebor (brute-force searches) algorithms". Annals of the History of Computing, 6 (4), pp. 384-400.

Turing, A. M. (1939). "Systems of logic based on ordinals". Proceedings of the London Mathematical Society, series 2, 45, pp. 161-228.

Valiant, L. G. (1979). "The complexity of enumeration and reliability problems". SIAM Journal on Computing, 8, pp. 410-421.

Williamson, D. and Shmoys, D. (2010). The Design of Approximation Algorithms. Cambridge University Press.

Woeginger, G. The P vs. NP page. http://www.win.tue.nl/~gwoegi/P-versus-NP.htm.

Wolf, W. (2011). Modern VLSI Design. Prentice-Hall, fourth edition.
