Time Complexity (1)


CSCI 2670, Spring 2014
Original slides by Dr. Frederick W. Maier

Time Complexity

• So far we have dealt with determining whether or not a problem is decidable.
• But even if a problem is decidable, it might be "too difficult" in practice to decide.
• For a given string w and language L, it might require too much time or too much memory to determine whether or not w ∈ L.
• The time required to solve a problem is called its time complexity.
• The memory required to solve a problem is called its space complexity.
• Computational complexity theory is the study of the time and space complexity of problems.
• Chapter 7 deals with time complexity.

• Suppose we have a TM to decide A = {0^k 1^k | k ≥ 0}.
• This language is context free and therefore decidable.
• Informally, the time complexity is the number of steps required by the TM, as a function of the input size.
• We want to know the number of steps needed to determine whether w ∈ A.
• We usually express time complexity as a function of the length of the input string w.
• Note that if different input strings u and v both have length n, it might take more time to process u than v.
• In worst case analysis, we are interested in the maximum number of steps required for an input string of length n.
• In average case analysis, we are interested in the average number of steps required for an input string of length n.

Definition. If M is a deterministic TM that halts on all inputs, then the time complexity (running time) of M is the function f : ℕ → ℕ, where f(n) is the maximum number of steps M uses on an input of length n.

• We say that M runs in time f(n) and that M is an f(n) Turing machine.
• It is often more convenient to use estimates of f(n) rather than f(n) itself to describe the running time of a TM.
• In asymptotic analysis, we estimate the running time of the algorithm when it is run on large inputs.
• Not all terms of the function contribute much to the running time on large inputs, so they can be ignored.
• The most common estimates are big-O and small-o (little-o) estimates.

Big O Notation

Definition. If f and g are functions f, g : ℕ → ℝ⁺, then f(n) is O(g(n)) iff there exist positive integers c and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀.

g(n) is an asymptotic upper bound of f(n) (more precisely, c·g(n) is an upper bound of f(n)). "f(n) is O(g(n))" means that if we ignore constant factors, f(n) ≤ g(n).

Examples:
• 5n³ + 2n² + 22n + 6 is O(n³): let c = 6 and n₀ = 10.
• n is O(n²): let c = 1 and n₀ = 1.
• n² is not O(n): no matter which c and n₀ are chosen, n² > c·n for some n ≥ n₀.

Big O Notation and Logarithms

• The base of a logarithm doesn't matter when using big-O notation.
• Note that for any bases a and b, log_b(n) = log_a(n) / log_a(b).
• So, if f(n) ≤ c·log_b(n), then f(n) ≤ c·log_a(n) / log_a(b).
• Letting c₁ = ⌈c / log_a(b)⌉, it follows that f(n) ≤ c₁·log_a(n).
• So, if f(n) is O(log_b(n)), then f(n) is O(log_a(n)).
• Hence we don't even bother with the base: f(n) is O(log n).

Example. If f(n) = 3n·log₂(n) + 5n·log₂(log₂(n)) + 2, then f(n) is O(n log n).

Arithmetic and Big O Notation

• If f₁(n) is O(g₁(n)) and f₂(n) is O(g₂(n)), then
  • f₁(n) + f₂(n) is O(g₁(n)) + O(g₂(n)), and
  • f₁(n) + f₂(n) is max(O(g₁(n)), O(g₂(n))).
• If f(n) appears in an exponent, we can use the big-O estimate there: 2^(3n³+2n²+n+6) is 2^(O(n³)).
• Frequently we derive bounds of the form n^c for c > 0. Such bounds are called polynomial bounds.
• Bounds of the form 2^(n^δ), where δ is a real number greater than 0, are called exponential bounds.
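The witnesses c and n₀ in the big-O examples above can be checked mechanically over any finite range of n. Below is a minimal Python sketch of such a check (our own illustration; the function names are not from the slides). A finite check like this only illustrates the definition; it does not prove the bound for all n.

```python
# Check f(n) = 5n^3 + 2n^2 + 22n + 6 <= c * g(n) for g(n) = n^3,
# using the witnesses c = 6 and n0 = 10 from the slides.

def f(n):
    return 5 * n**3 + 2 * n**2 + 22 * n + 6

c, n0 = 6, 10
assert all(f(n) <= c * n**3 for n in range(n0, 10_000))

# n^2 is not O(n): for any fixed c, n^2 > c*n as soon as n > c.
for c in (1, 10, 1_000):
    assert (c + 1)**2 > c * (c + 1)
```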
Small-O Notation

• In a way, big-O notation says that f(n) is less than or equal to g(n).
• Small-o notation says that f(n) is strictly less than g(n).

Definition. If f and g are functions f, g : ℕ → ℝ⁺, then f(n) is o(g(n)) iff lim_{n→∞} f(n)/g(n) = 0. Alternatively, f(n) is o(g(n)) iff for every real constant c > 0 there is an n₀ such that f(n) < c·g(n) for all n ≥ n₀.

Examples:
• √n is o(n).
• n is o(n·log(log n)).
• n·log n is o(n²).
• n² is o(n³).

Analyzing Algorithms

• Consider TM M₁, which decides A = {0^k 1^k | k ≥ 0}. It works in four phases. On input w:
  1. Scan the tape, rejecting if a 0 is found to the right of a 1.
  2. While both 0s and 1s are still on the tape:
  3.   Scan the tape, marking off a single 0 and a single 1.
  4. Reject if a 0 remains but all 1s are marked, or vice versa. If not, accept.
• What is the running time of M₁ as a function of n?
• Phase 1 scans once through the tape, taking O(n) steps, where |w| = n. The tape head then returns to the left end, another n steps. So phase 1 takes O(n) steps.
• In phases 2 and 3, the tape is scanned once to check that both 0s and 1s still appear, and scanned again to mark off a single 0 and a single 1.
• Each cycle marks two symbols, so there are at most n/2 cycles, each taking O(n) steps. Phases 2 and 3 together take O(n²) steps.
• In phase 4, we check that all 0s and 1s are marked off. This takes only a single scan of the tape: O(n) steps.
• So the running time of M₁ is O(n) + O(n²) + O(n) = O(n²); the sketch below checks this growth rate empirically.
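To see the quadratic behavior concretely, here is a small Python simulation of M₁'s marking strategy on inputs of the form 0^k 1^k (our own illustration, under a simplified cost model that charges one step per cell visited per scan, not an exact Turing machine):

```python
# Simulate M1 on 0^k 1^k, counting cells visited; steps/n^2 settles near 1.

def m1_steps(k):
    tape = list("0" * k + "1" * k)
    steps = len(tape)                      # phase 1: scan for a 0 after a 1
    while "0" in tape and "1" in tape:     # phase 2: scan to see what remains
        steps += len(tape)
        tape[tape.index("0")] = "x"        # phase 3: mark one 0 and one 1
        tape[tape.index("1")] = "x"
        steps += len(tape)                 # ... in one more scan
    steps += len(tape)                     # phase 4: final check scan
    return steps

for k in (10, 40, 160):
    n = 2 * k
    print(n, m1_steps(k), m1_steps(k) / n**2)
```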
Complexity Classes: TIME(t(n))

• Observe that M₁ runs in time O(n²), and it decides A.
• We can classify languages by the algorithms that decide them.

Definition. Let t : ℕ → ℝ⁺ be a function. The time complexity class TIME(t(n)) is the set of all languages that can be decided in time O(t(n)).

• Observe, e.g., that if L ∈ TIME(n²), then L ∈ TIME(n³).
• If a decider M for L runs in time O(t(n)), then L ∈ TIME(t(n)).
• So A = {0^k 1^k | k ≥ 0} is in TIME(n²).
• Failing to find an O(t(n))-time decider does not imply that L ∉ TIME(t(n)).

Analyzing Algorithms

• Consider TM M₂, which also decides A = {0^k 1^k | k ≥ 0} but works in five phases. On input w:
  1. Scan the tape, rejecting if a 0 is found to the right of a 1.
  2. While some 0s and some 1s remain on the tape:
  3.   Scan the tape. Reject if the number of unmarked symbols is odd.
  4.   Scan the tape, crossing off every other 0 and every other 1.
  5. Scan the tape. If all symbols are marked, accept. Otherwise reject.
• What's the running time of M₂?
• Phase 1 again takes O(n) steps, as does phase 5.
• Checking that some 0s and some 1s remain (phase 2) takes O(n) steps.
• Each execution of phase 3 and of phase 4 takes O(n) steps.
• Each execution of phase 4 cuts the number of unmarked 0s and 1s in half, so the loop in phases 2 to 4 runs only O(log n) times.
• Hence the running time of M₂ is O(n) + O(log n)·O(n) + O(n) = O(n log n), so in fact A ∈ TIME(n log n).
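Under the same simplified cost model as before, here is a companion Python sketch (again our own illustration, approximating "cross off every other symbol" by integer halving) showing that M₂'s step count grows like n·log n rather than n²:

```python
# Simulate M2's halving strategy on 0^k 1^k; steps/(n log2 n) levels off
# near a constant, while steps/n^2 keeps shrinking as n grows.
import math

def m2_steps(k):
    zeros, ones = k, k                 # unmarked 0s and 1s on the tape
    n = 2 * k
    steps = n                          # phase 1
    while zeros > 0 and ones > 0:      # phase 2: one scan per check
        steps += n
        steps += n                     # phase 3: parity-check scan
        zeros //= 2                    # phase 4: cross off every other 0 and 1
        ones //= 2
        steps += n
    steps += n                         # phase 5
    return steps

for k in (8, 128, 2048):
    n = 2 * k
    print(n, m2_steps(k), m2_steps(k) / (n * math.log2(n)))
```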