Computability


Part II: Computability

6 Models of Computation

What is a computable function? Our modern intuition is that a function f : N → N is computable if there is a program in a computer language like C++, PASCAL or LISP such that, if we had an idealized computer with unlimited memory and we ran the program on input n, it would eventually halt and output f(n). While this definition could be made precise, it is rather unwieldy to try to analyze a complex modern programming language and an idealized computer.

We begin this section by presenting two possible candidates for the class of computable functions. The first will be based on register machines, a very primitive version of a computer. The second class will be defined more mathematically. We will then prove that these two classes of functions are the same. Church's Thesis will assert that this class of functions is exactly the class of all computable functions. Church's Thesis should be thought of as a statement of philosophy or physics rather than a mathematical conjecture. It asserts that the definitions we have given completely capture our intuition of what can be computed. There is a great deal of evidence for Church's Thesis. In particular, there is no known notion of deterministic computation, including C++-computability or the modern notion of quantum computability, that gives rise to computable functions that are not included in our simple classes. (1)

(1) I do not include notions of random computation or analog computation, but our model could easily be extended to take these into account. On the other hand, issues like time and space complexity or feasibility may vary as we change the model.

Register Machines

We will take register machines as our basic model of computation. The programming language for register machines will be a simple type of assembly language. This choice is something of a compromise. Register machines are not as simple as Turing machines, but they are much easier to program. It is not as easy to write complicated programs as it would be in a modern programming language like C++ or PASCAL, but it will be much easier to analyze the basic steps of a computation.

In our model of computation we have infinitely many registers R1, R2, .... At any stage of the computation register Ri stores a nonnegative integer ri.

Definition 6.1 A register machine program is a finite sequence I1, ..., In where each Ij is one of the following:
i) Z(n): set Rn to zero; rn ← 0;
ii) S(n): increment Rn by one; rn ← rn + 1;
iii) T(n,m): transfer the contents of Rn to Rm; rm ← rn;
iv) J(n,m,s), where 1 ≤ s ≤ n: if rn = rm, then go to Is, otherwise go to the next instruction;
v) HALT.
We require that the last instruction In is HALT.

A register machine must be provided with both a program and an initial configuration of the registers. A computation proceeds by sequentially following the instructions. Note that for any program P there is a number N such that, no matter what the initial configuration of the registers is, any computation with P will use at most the registers R1, ..., RN.

Example 6.2 We give a program which, if we start with n in R1, ends with R1 containing n − 1 if n > 0 and 0 if n = 0.

1) Z(2)
2) J(1,2,10)
3) Z(3)
4) S(2)
5) J(1,2,9)
6) S(2)
7) S(3)
8) J(1,1,5)
9) T(3,1)
10) HALT

We first test to see if R1 contains 0. If it does, we halt. If not, we make r2 = 1 and r3 = 0 and test to see if R1 and R2 have the same contents. If they do, we move r3 to R1 and halt. Otherwise, we increment r2 and r3 until r1 = r2. Since r3 will always be one less than r2, this produces the desired result.
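Definition 6.1 is concrete enough to execute directly. As a rough illustration (not part of the original notes), here is a minimal Python sketch of a register machine interpreter; the encoding of instructions as tuples and the names run and decrement are choices made here for the example. It runs the decrement program of Example 6.2.

```python
# A minimal register machine interpreter, following Definition 6.1.
# Instructions are tuples: ("Z", n), ("S", n), ("T", n, m), ("J", n, m, s), ("HALT",).
# Registers and instruction labels are 1-based, as in the notes.

def run(program, inputs, max_steps=10**6):
    """Run `program` with registers r1, r2, ... initialized to `inputs` and all
    other registers 0.  Returns the register dictionary on HALT, or None if the
    machine has not halted within max_steps (a stand-in for "does not halt")."""
    regs = {i + 1: v for i, v in enumerate(inputs)}
    pc = 1                                   # index of the current instruction I_pc
    for _ in range(max_steps):
        instr = program[pc - 1]
        op = instr[0]
        if op == "Z":                        # Z(n): r_n <- 0
            regs[instr[1]] = 0
            pc += 1
        elif op == "S":                      # S(n): r_n <- r_n + 1
            regs[instr[1]] = regs.get(instr[1], 0) + 1
            pc += 1
        elif op == "T":                      # T(n,m): r_m <- r_n
            regs[instr[2]] = regs.get(instr[1], 0)
            pc += 1
        elif op == "J":                      # J(n,m,s): jump to I_s if r_n = r_m
            n, m, s = instr[1], instr[2], instr[3]
            pc = s if regs.get(n, 0) == regs.get(m, 0) else pc + 1
        elif op == "HALT":
            return regs
    return None

# Example 6.2: decrement R1 (leave 0 if R1 = 0).
decrement = [("Z", 2), ("J", 1, 2, 10), ("Z", 3), ("S", 2), ("J", 1, 2, 9),
             ("S", 2), ("S", 3), ("J", 1, 1, 5), ("T", 3, 1), ("HALT",)]

for n in range(5):
    print(n, "->", run(decrement, [n])[1])   # prints n -> max(n - 1, 0)
```

The invariant r3 = r2 − 1 each time line 5 is reached is exactly the argument given above for why the program leaves n − 1 in R1.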
Example 6.3 We give a program which adds the contents of R1 and R2 and leaves the sum in R1. We set r3 ← 0 and then increment R3 and R1 until r3 = r2.

1) Z(3)
2) J(2,3,6)
3) S(1)
4) S(3)
5) J(1,1,2)
6) HALT

Example 6.4 We give a program to multiply the contents of R1 and R2 and leave the product in R1. The main idea is that we will add r1 to itself r2 times, using R3 to store the intermediate results. R4 will be a counter to tell us how many times we have already added r1 to itself. We add r1 to r3 by incrementing R3 r1 times, and we use R5 to count how many times we have incremented R3.

1) Z(3)
2) Z(4)
3) J(2,4,11)
4) Z(5)
5) J(1,5,9)
6) S(3)
7) S(5)
8) J(1,1,5)
9) S(4)
10) J(1,1,3)
11) T(3,1)
12) HALT

Note that lines 4)-8) are just a slightly modified version of Example 6.3: we add r1 to r3, storing the result in R3. We can think of this as a "subroutine". It is easy to see that we could add a command A(n,m,s) to our language that does rs ← rn + rm. Any program written with this additional command could be rewritten in our original language. Similarly, using Example 6.2, we could add a command D(n) that decrements Rn if rn > 0 and leaves Rn unchanged if rn = 0.

We give one more example, of a program that on some initial configurations runs forever.

Example 6.5 We give a program that, if r1 is even, halts with r1/2 in R1, and otherwise never halts.

1) Z(2)
2) Z(3)
3) J(1,2,8)
4) S(2)
5) S(2)
6) S(3)
7) J(1,1,3)
8) T(3,1)
9) HALT

We next define what it means for a function f : N^k → N to be computable by a register machine. The last example shows that we need to take partial functions into account. Suppose P is a register machine program. If x = (x1, ..., xk), we consider the computation where we begin with the initial configuration r1 = x1, ..., rk = xk and rn = 0 for n > k. If this computation halts, we say that P halts on input x.

Definition 6.6 Suppose A ⊆ N^k. We say f : A → N is an RM-computable partial function if there is a register machine program P such that:
i) if x ∉ A, then P does not halt on input x;
ii) if x ∈ A, then P halts on input x with f(x) in register R1.

We could keep showing that more and more functions are RM-computable by writing more complicated programs. Instead we will give a mathematical definition of an interesting class of functions and prove that it is exactly the class of RM-computable functions.

Primitive Recursive Functions

Definition 6.7 The class of primitive recursive functions is the smallest class C of functions such that:
i) the zero function z(x) = 0 is in C;
ii) the successor function s(x) = x + 1 is in C;
iii) for all n and all i ≤ n, the projection function π^n_i(x1, ..., xn) = xi is in C (in particular, the identity function on N is in C);
iv) (Composition Rule) if g1, ..., gm, h ∈ C, where gi : N^n → N and h : N^m → N, then f(x) = h(g1(x), ..., gm(x)) is in C;
v) (Primitive Recursion) if g, h ∈ C, where g : N^(n-1) → N and h : N^(n+1) → N, then f ∈ C, where
   f(x, 0) = g(x)
   f(x, y + 1) = h(x, y, f(x, y)).
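The closure rules iv) and v) can be mirrored by higher-order functions. The following Python sketch is only an illustration of Definition 6.7, not part of the notes; the names proj, compose and prim_rec are invented here. It rebuilds addition from the basic functions in exactly the way derivation 3) below does, with g = π^1_1 and h = s ∘ π^3_3.

```python
# A sketch of the basic functions and closure rules of Definition 6.7.
# (Illustrative only; names and representation are not from the notes.)

def z(x):                     # the zero function
    return 0

def s(x):                     # the successor function
    return x + 1

def proj(n, i):               # the projection pi^n_i (1-based i)
    return lambda *xs: xs[i - 1]

def compose(h, *gs):          # Composition Rule: f(x) = h(g1(x), ..., gm(x))
    return lambda *xs: h(*(g(*xs) for g in gs))

def prim_rec(g, h):
    """Primitive Recursion: f(x, 0) = g(x), f(x, y+1) = h(x, y, f(x, y))."""
    def f(*args):
        *x, y = args
        val = g(*x)               # f(x, 0)
        for i in range(y):
            val = h(*x, i, val)   # f(x, i+1) = h(x, i, f(x, i))
        return val
    return f

# Addition, as in derivation 3) below:
#   g = pi^1_1 (so x + 0 = x) and h = s o pi^3_3 (so x + (y+1) = s(x + y)).
add = prim_rec(proj(1, 1), compose(s, proj(3, 3)))

assert add(4, 3) == 7
```

Evaluating the recursion with a loop is just an implementation convenience; the defining equations are those of clause v).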
We now give a large number of examples of primitive recursive functions, with derivations showing that they are primitive recursive. A derivation is a sequence of functions f1, ..., fm such that each fi is either z, s or π^n_i, or is obtained from earlier functions by composition or primitive recursion. We will use Church's lambda notation: for example, λx,y,z[xy + z] is the function f where f(x, y, z) = xy + z.

1) The n-ary zero function λx1,...,xn[0]:
f1 = π^n_i, f2 = z, f3 = f2 ∘ f1.

2) The constant function λx[2]:
f1 = s, f2 = z, f3 = s ∘ z, f4 = s ∘ f3.

3) λx,y[x + y]:
f1 = π^1_1 = λx[x], f2 = π^3_3, f3 = s, f4 = f3 ∘ f2 = λx,y,z[z + 1], f5 = λx,y[x + y] (by primitive recursion, using g = f1 and h = f4).

The formal derivations are not very enlightening, so we give an informal primitive recursive definition of addition (and henceforth only give informal definitions):
x + 0 = x
x + (y + 1) = s(x + y).

4) Multiplication:
x · 0 = 0
x · (y + 1) = x·y + x.

5) Exponentiation:
x^0 = 1
x^(y+1) = x^y · x.

6) Predecessor: pr(x) = 0 if x = 0, and x − 1 otherwise.
pr(0) = 0
pr(y + 1) = y.

7) Sign: sgn(x) = 0 if x = 0, and 1 otherwise.
sgn(0) = 0
sgn(y + 1) = 1.

8) Truncated subtraction: x ∸ y = 0 if x ≤ y, and x − y otherwise.
x ∸ 0 = x
x ∸ (y + 1) = pr(x ∸ y).

9) Factorials:
0! = 1
(n + 1)! = n! · (n + 1).

If f(x, y) is primitive recursive, then so is the bounded sum λx,n[ Σ_{y≤n} f(x, y) ].
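The bounded sum in the last remark is obtained by a primitive recursion: take g(x) = f(x, 0) and h(x, n, w) = w + f(x, n + 1); since addition and f are primitive recursive, so is the sum. As a quick sanity check (a Python sketch with names chosen here, not part of the notes), the equations for the predecessor, truncated subtraction and the bounded sum can be evaluated directly:

```python
# Some of the functions above, computed directly from their primitive
# recursive equations (illustrative sketch; not from the notes).

def pred(x):                       # 6) predecessor: pr(0) = 0, pr(y+1) = y
    return 0 if x == 0 else x - 1

def monus(x, y):                   # 8) x - y truncated at 0: applies pred y times,
    for _ in range(y):             #    unrolling x ∸ (y+1) = pr(x ∸ y)
        x = pred(x)
    return x

def bounded_sum(f, x, n):          # sum of f(x, y) for y = 0, ..., n
    total = f(x, 0)                # g(x) = f(x, 0)
    for i in range(n):
        total = total + f(x, i + 1)    # h(x, i, w) = w + f(x, i+1)
    return total

assert monus(7, 3) == 4 and monus(3, 7) == 0
assert bounded_sum(lambda x, y: x * y, 2, 4) == 2 * (0 + 1 + 2 + 3 + 4)
```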