NP-Completeness


NP-Completeness: Introduction to the Analysis of Complexity (Chapter 27)

Complexity Theory
Are all decidable languages equal?
● (ab)*
● WWR = {ww^R : w ∈ {a, b}*}
● WW = {ww : w ∈ {a, b}*}
● SAT = {w : w is a wff (well-formed formula) in Boolean logic and w is satisfiable}
Computability classifies problems into solvable and unsolvable.
Complexity classifies solvable problems into easy ones and hard ones.
• Easy: tractable
• Hard: intractable
• The complexity zoo: a hierarchy of classes

The Traveling Salesman Problem
[figure: a weighted graph of cities with a distance on each edge]
Given n cities and the distances between each pair of them, find the shortest tour that returns to its starting point and visits each other city exactly once along the way.

The Traveling Salesman Problem (continued)
Given n cities:
  Choose a first city: n choices
  Choose a second: n-1
  Choose a third: n-2
  ...
  Total orderings: n!

The Growth Rate of n!
  n    n!
  2    2
  3    6
  4    24
  5    120
  6    720
  7    5040
  8    40320
  9    362880
  10   3628800
  11   39916800
  12   479001600
  13   6227020800
  14   87178291200
  15   1307674368000
  16   20922789888000
  17   355687428096000
  18   6402373705728000
  19   121645100408832000
  20   2432902008176640000
  36   ≈ 3.6 × 10^41

Tackling Hard Problems
1. Use a technique that is guaranteed to find an optimal solution.
2. Use a technique that is guaranteed to run quickly and find a "good" solution.
   – e.g., the World Tour Problem
Does it make sense to insist on true optimality if the description of the original problem was approximate?

Modern TSP
From: http://xkcd.com/399/

The Complexity Zoo
The attempt to characterize the decidable languages by their complexity: http://qwiki.stanford.edu/wiki/Complexity_Zoo
Notable classes:
P: solvable (decidable) by a deterministic TM in polynomial time
• tractable
• context-free (including regular) languages are in P
NP: solvable (decidable) by a nondeterministic TM in polynomial time
• a given solution can be verified by a deterministic TM in polynomial time
NP-complete: in NP and as hard as anything in NP (the hardest problems in NP)
• no efficient algorithm is known
• appears to require non-trivial search, as in TSP
NP-hard: as hard as anything in NP (but not necessarily in NP)
• every problem in NP is reducible to it in polynomial time
• L is NP-complete iff it is in NP and it is NP-hard
• Intractable = not in P. But since it is believed that P ≠ NP, "intractable" is often used loosely to mean NP-hard.

Note
• NP is a set of decision problems. Computational complexity primarily deals with decision problems.
• There is a link between the "decision" and "optimization" versions of a problem: if there exists a polynomial-time algorithm that solves the decision problem, then one can find the maximum value for the optimization problem in polynomial time by applying this algorithm iteratively while increasing the value of k (see the sketch after this list).
• On the other hand, if an algorithm finds the optimal value of the optimization problem in polynomial time, then the decision problem can be solved in polynomial time by comparing the value of the solution output by this algorithm with the value of k.
• Thus, both versions of the problem are of similar difficulty.
• Note that NP-hard is not restricted to decision problems; it also includes optimization problems.
• When the decision problem is NP-complete, the optimization problem is NP-hard.
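To make the first direction of this decision/optimization link concrete, here is a small sketch in Python; it is not from the chapter. It assumes a hypothetical polynomial-time decision procedure decides(instance, k) that answers "does a solution of value at least k exist?" for a maximization problem whose optimal value is known to lie in the integer range [lo, hi]. The slides describe stepping k upward; the sketch uses binary search over the same range instead, which also keeps the number of oracle calls small.

```python
# Sketch: turning a decision oracle into an optimization procedure.
# `decides(instance, k)` is a hypothetical polynomial-time routine answering
# "is there a solution of value >= k?"; it is assumed to be monotone in k
# (if a value >= k is achievable, so is every value <= k).

def maximize_with_decision_oracle(instance, lo, hi, decides):
    """Return the largest k in [lo, hi] with decides(instance, k) True,
    or None if there is no such k."""
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if decides(instance, mid):
            best = mid      # value >= mid is achievable; look for something larger
            lo = mid + 1
        else:
            hi = mid - 1    # value >= mid is not achievable; look lower
    return best
```

Binary search matters when the candidate values are written in binary and can therefore be exponential in the input length: it makes only O(log(hi - lo)) oracle calls, so the whole procedure stays polynomial whenever the oracle is.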
Methodology
• Characterizing problems as languages makes them comparable.
  – SAT = {w : w is a wff in Boolean logic and w is satisfiable}
  – This only applies to decidable languages; it has nothing to say about H.
  – It includes optimization problems:
• TSP-DECIDE = {<G, cost> : <G> encodes an undirected graph with a positive distance attached to each of its edges, and G contains a Hamiltonian circuit whose total cost is less than <cost>}
  – minimizing → "at most k"
  – maximizing → "at least k"
• All problems are decision problems.
  – Producing an explicit solution requires at least enough time to write the solution down.
  – By restricting our attention to decision problems, the length of the answer is not a factor.
• Encoding matters, since complexity is measured with respect to problem size.
  – L may be regular for one encoding and nonregular for another.
  – E.g., for integers we should use any base other than 1:
    111111111111 vs. 1100 (12 in unary vs. binary)
    111111111111111111111111111111 vs. 11110 (30 in unary vs. binary)

Measuring Time and Space Complexity
• Model of computation: the Turing machine.
• timereq(M) is a function of n:
  – If M is deterministic, timereq(M) = f(n) = the maximum number of steps that M executes on any input of length n.
  – If M is nondeterministic, timereq(M) = f(n) = the number of steps on the longest path that M executes on any input of length n.
• spacereq(M) is a function of n:
  – If M is deterministic, spacereq(M) = f(n) = the maximum number of tape squares that M reads on any input of length n.
  – If M is nondeterministic, spacereq(M) = f(n) = the maximum number of tape squares that M reads on any path that it executes on any input of length n.
• Focus on worst-case performance:
  – We are interested in an upper bound.
  – It is easy to determine.
  – Beware: it can be quite different from the average case; for most real problems the worst case is rare.

Growth Rates of Functions
[figure: plot comparing the growth rates of common functions]

Asymptotic upper bound - O
f(n) ∈ O(g(n)) iff there exist a positive integer k and a positive constant c such that:
  ∀n ≥ k, f(n) ≤ c·g(n).
In other words, ignoring some number of small cases (all those of size less than k), and ignoring some constant factor c, f(n) is bounded from above by g(n).
Alternatively, if the limit exists:
  lim (n→∞) f(n)/g(n) < ∞.
In this case, we'll say that f is "big-oh" of g, or that g asymptotically dominates f, or that g grows at least as fast as f does.

Examples:
• n^3 ∈ O(n^3)
• n^3 ∈ O(n^4)
• 3n^3 ∈ O(n^3)
• n^3 ∈ O(3^n)
• n^3 ∈ O(n!)
• log n ∈ O(n)

Summarizing O:
  O(c) ⊆ O(log_a n) ⊆ O(n^b) ⊆ O(d^n) ⊆ O(n!)

Asymptotic strong upper bound - o
f(n) ∈ o(g(n)) iff, for every positive c, there exists a positive integer k such that:
  ∀n ≥ k, f(n) < c·g(n).
Alternatively, if the limit exists:
  lim (n→∞) f(n)/g(n) = 0.
In this case, we'll say that f is "little-oh" of g, or that g grows strictly faster than f does.

Asymptotic lower bound - Ω
f(n) ∈ Ω(g(n)) iff there exist a positive integer k and a positive constant c such that:
  ∀n ≥ k, f(n) ≥ c·g(n).
In other words, ignoring some number of small cases (all those of size less than k), and ignoring some constant factor c, f(n) is bounded from below by g(n).
Alternatively, if the limit exists:
  lim (n→∞) f(n)/g(n) > 0.
In this case, we'll say that f is "big-Omega" of g, or that g grows no faster than f.

Asymptotic strong lower bound - ω
f(n) ∈ ω(g(n)) iff, for every positive c, there exists a positive integer k such that:
  ∀n ≥ k, f(n) > c·g(n).
Alternatively, if the required limit exists:
  lim (n→∞) f(n)/g(n) = ∞.
In this case, we'll say that f is "little-omega" of g, or that g grows strictly slower than f does.

Asymptotic tight bound - Θ
f(n) ∈ Θ(g(n)) iff there exist a positive integer k and positive constants c1 and c2 such that:
  ∀n ≥ k, c1·g(n) ≤ f(n) ≤ c2·g(n).
Or: f(n) ∈ Θ(g(n)) iff f(n) ∈ O(g(n)) and g(n) ∈ O(f(n)).
Or: f(n) ∈ Θ(g(n)) iff f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n)).
Examples: Is n^3 ∈ Θ(n^3)? Is n^3 ∈ Θ(n^4)? Is n^3 ∈ Θ(n^5)?
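The limit-based characterizations of O, o, Ω, and ω above can be explored numerically. The sketch below is only illustrative (the function pairs are taken from the examples on these slides); a finite table can suggest, but never prove, an asymptotic relationship.

```python
import math

# Tabulate f(n)/g(n) for growing n. A ratio heading to 0 suggests f in o(g),
# a bounded ratio suggests f in O(g), a ratio bounded away from 0 suggests
# f in Omega(g), and an unbounded ratio suggests f in omega(g).

pairs = [
    ("n^3  vs n^3", lambda n: n**3,        lambda n: n**3),
    ("n^3  vs n^4", lambda n: n**3,        lambda n: n**4),
    ("3n^3 vs n^3", lambda n: 3 * n**3,    lambda n: n**3),
    ("n^3  vs 3^n", lambda n: n**3,        lambda n: 3**n),
    ("logn vs n  ", lambda n: math.log(n), lambda n: n),
]

for name, f, g in pairs:
    ratios = ["%.3g" % (f(n) / g(n)) for n in (10, 100, 1000)]
    print(name, ratios)
```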
Asymptotic Dominance
[figure: chart of dominance relations among common functions]

Algorithmic Gaps
We'd like to show:
1. Upper bound: there exists an algorithm that decides L and that has complexity C1.
2. Lower bound: any algorithm that decides L must have complexity at least C2.
3. C1 = C2.
If C1 = C2, we are done. Often, we're not done.

Time-Space Tradeoffs - Search
Depth-first search: small space, big time
• potential problem: it can get stuck in one path
Breadth-first search: big space, small time
Iterative-deepening search: a compromise
• run depth-first on length-1 paths, then length-2 paths, and so on
• space is the same as depth-first search; time is slightly worse than breadth-first search

Time Complexity Classes (Chapter 28)

The Language Class P
L ∈ P iff
● there exists some deterministic Turing machine M that decides L, and
● timereq(M) ∈ O(n^k) for some k.
We'll say that L is tractable iff it is in P.
To show that a language is in P, either:
• describe a one-tape, deterministic Turing machine, or
• state an algorithm that runs on an ordinary random-access computer (easier).

Languages That Are in P
• Every regular language.
• Every context-free language, since there exist context-free parsing algorithms that run in O(n^3) time.
• Others, e.g. A^nB^nC^n = {a^n b^n c^n : n ≥ 0}.

The Language Class NP
Nondeterministic deciding: L ∈ NP iff
● there is some NDTM M that decides L, and
● timereq(M) ∈ O(n^k) for some k.
NDTM deciders: the longest path is polynomial.
[figure: tree of NDTM configurations explored on input abab]

Deterministic Verifying
A Turing machine V is a verifier for a language L iff:
  w ∈ L iff ∃c (<w, c> ∈ L(V)).
We'll call c a certificate.
An alternative definition for the class NP:
L ∈ NP iff there exists a deterministic TM V such that:
• V is a verifier for L, and
• timereq(V) ∈ O(n^k) for some k.
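The verifier view of NP can be illustrated with TSP-DECIDE from the Methodology section. The sketch below is not from the chapter: it assumes a concrete (hypothetical) encoding in which the graph is given as a dictionary mapping each edge {u, v} to its distance, bound plays the role of <cost>, and the certificate c is a proposed tour listed as a sequence of cities.

```python
# Sketch of a polynomial-time verifier for TSP-DECIDE under an assumed encoding:
# `cities` is the vertex set, `edges` maps frozenset({u, v}) to a positive
# distance, `bound` is <cost>, and `tour` is the certificate.

def verify_tsp_decide(cities, edges, bound, tour):
    """Return True iff `tour` starts and ends at the same city, visits every
    other city exactly once, uses only edges of G, and has total cost < bound.
    One pass over the certificate, so polynomial time."""
    if len(tour) != len(cities) + 1 or tour[0] != tour[-1]:
        return False
    if set(tour) != set(cities) or len(set(tour[:-1])) != len(cities):
        return False                         # a city is missing or repeated
    total = 0
    for u, v in zip(tour, tour[1:]):
        if frozenset((u, v)) not in edges:
            return False                     # not an edge of G
        total += edges[frozenset((u, v))]
    return total < bound

# A hypothetical instance and certificate:
cities = {"a", "b", "c", "d"}
edges = {frozenset(p): w for p, w in
         [(("a", "b"), 10), (("b", "c"), 4), (("c", "d"), 3),
          (("d", "a"), 7), (("a", "c"), 8), (("b", "d"), 9)]}
print(verify_tsp_decide(cities, edges, 30, ["a", "b", "c", "d", "a"]))  # True
```

Checking a proposed tour is a single scan over it, which is exactly the sense in which a certificate for a language in NP can be verified by a deterministic TM in polynomial time.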
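For contrast, deciding the same language without a certificate appears to require the n! search described on the TSP slides. A brute-force sketch over the same hypothetical encoding (reusing verify_tsp_decide and the example instance above):

```python
from itertools import permutations

# Brute-force decider for TSP-DECIDE: try every ordering of the cities -- the
# n! search from the TSP slides -- and accept iff some ordering is a qualifying tour.

def decide_tsp_bruteforce(cities, edges, bound):
    city_list = list(cities)
    start, rest = city_list[0], city_list[1:]   # tours are cycles; fix the start
    for perm in permutations(rest):
        tour = [start, *perm, start]
        if verify_tsp_decide(cities, edges, bound, tour):
            return True
    return False

print(decide_tsp_bruteforce(cities, edges, 30))  # True for the example above
```

This decider examines up to (n-1)! candidate tours in the worst case, which is why TSP-style problems are the standard examples of languages that are in NP but not known to be in P.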