The O(10) Commandments of TCS

Total Pages: 16

File Type: pdf, Size: 1020 KB

The O(10) Commandments of TCS
Avi Wigderson, IAS, Princeton

Theoretical Computer Science sits at the intersection of Mathematics, Computer Science, and Statistics, and reaches out to the social sciences, biology, economics, and physics. Its core areas include machine learning, pseudorandomness, cryptography, coding theory, quantum computing, sublinear algorithms, and interactive computation.

To the sciences (physics, biology, economics, ...) we bring:
- Computational modeling
- Algorithmic techniques
- Lower bounds
- Colorful beads

"The unreasonable effectiveness of mathematics in the sciences ... is a wonderful gift which we neither understand nor deserve." (Wigner)
"The reasonable effectiveness of TCS in the sciences ... is something we do understand" (at least for information processes).

Core TCS exports randomness, learning, cryptography, optimization, quantum computing, lower bounds, and communication complexity, and with them structure, interconnectivity, and insights.

Computation is everywhere:
- Unsolvable: the Halting problem.
- PSPACE: Chess / Go strategies.
- NP: TSP, Integer Programming, Sudoku, Theorem Proving, Map Coloring (NP-complete); Factoring.
- P: Linear Programming, Error Correction, Shortest Path, 2-Coloring, FFT, Multiplication; L: Addition.
1000s of problems, few complexity classes: algorithms, analysis, reductions, completeness.

Computation is everywhere in cryptography too. Cryptographic problems (privacy, resilience): poker over the telephone, oblivious computation, joint coin flipping, secret communication, private-key encryption, key exchange, one-way functions ("Minicrypt"), public-key encryption, zero-knowledge proofs, digital signatures, commitment schemes, trap-door functions, pseudorandom generation ("Cryptomania"). Again: 1000s of problems, few complexity classes; algorithms, analysis, reductions, completeness.

High-level structure in:
- Optimization problems
- Approximation problems
- Cryptographic problems
- Probabilistic algorithms
- Lower bounds

Origins: the birth of TCS. [Turing 1936], "On computable numbers, with an application to the Entscheidungsproblem", gives the formal definition of computer & algorithm: the Turing machine. The amazing power of a good theory:
- Seed of the computer revolution
- The power of computing: the Church-Turing Thesis
- The limits of algorithms

Technology (the computer revolution): a Turing machine is easy to implement from simple, local gates (∧, ∨).

Science (the Church-Turing Thesis): a Turing machine can emulate any computation! Computation: every process which is a sequence of simple, local steps, on bits in computers, neurons in the brain, atoms in matter, cells in living tissue, individuals in populations, ...

Math (theory): a Turing machine is a formal model, with a basic step and a basic memory unit, so we can prove theorems: analyze algorithms, prove limits.

Limits of computation: unsolvable vs. solvable.
- CS [Turing]: Given a computer program, does it always halt?
- Logic [Turing]: Given a statement, is it provable?
- Math [Matiyasevich]: Given an equation, does it have integer solutions?
- Biology [Conway]: Given a rule for an epidemic, will it spread or die?
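A Turing machine really is just a handful of simple, local rules. The following is a minimal illustrative sketch (the simulator interface and the example machine are mine, not from the talk): a one-tape machine whose transition table complements a binary string.

```python
# Minimal one-tape Turing machine simulator (illustrative sketch).
# delta maps (state, symbol) -> (new_state, write_symbol, move).

def run_tm(delta, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in delta:      # no applicable rule: halt
            break
        state, write, move = delta[(state, symbol)]
        cells[head] = write                   # local write...
        head += {"R": 1, "L": -1, "S": 0}[move]  # ...and local move
    return "".join(cells[i] for i in sorted(cells) if cells[i] != blank)

# Example machine: complement a binary string with simple, local steps.
FLIP = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}

print(run_tm(FLIP, "10110"))  # -> 01001
```

Each step reads one cell, writes one cell, and moves one position; that this suffices to emulate any computation is exactly the content of the Church-Turing Thesis.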
Structure in decision, search, and optimization problems (+ a crash course on complexity & classical reductions)

Easy and hard problems: asymptotic complexity of functions. Asymptotic, worst-case analysis is forward-looking, reveals structure, and is robust to model variants.

Easy (poly(n)-step algorithms known):
- 2-COL, Shortest Path
- 2-SAT: (x2∨x4)(x5∨xn)...
- Multiplication: 23 × 67 = ?
EASY: P – polynomial time.

Hard? (best known algorithms take exp(n) steps):
- 3-COL, Hamilton Path
- 3-SAT: (x1∨x2∨x4)(x5∨x3∨xn)...
- Factoring: 1541 = ? × ?
HARD? We don't know!
P, Polynomial – Possible, Practical, Pheasable. E, Exponential – Extremely hard, Eempossible.

Thm: If 3-COL is easy, then Factoring & 3-SAT are easy.

Something unites all the problems we have seen so far: a computational notion of proof! NP is the class of problems whose solutions can be easily checked; P is the class whose solutions can be easily found; EXP is everything we can hope to solve. Multiplication, 2-SAT, Shortest Path, and 2-COL are in P; Long Path, 3-SAT, 3-COL, and Integer Factoring are in NP. P = NP?

Miracle! One problem captures the whole class: NP-complete problems.
- [Cook, Levin '71]: 3-SAT easy ⟺ P=NP.
- [Karp '72]: 3-COL easy ⟹ 3-SAT easy; 21 problems from networks, logic, scheduling...
- ['00s]: thousands across math & the sciences.
If one is easy, all are. If one is hard, all are. A universal phenomenon. Why prove NP-completeness results?
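The "easily checked vs. easily found" distinction is concrete enough to run: a checker for a proposed 3-coloring takes time linear in the number of edges, while the obvious finder enumerates all 3^n colorings. A small illustrative sketch (the graph and all names are mine):

```python
from itertools import product

def is_proper(edges, coloring):
    """Check a coloring in time linear in the number of edges (easy)."""
    return all(coloring[u] != coloring[v] for u, v in edges)

def find_3_coloring(nodes, edges):
    """Brute-force search over all 3**n candidate colorings (exponential)."""
    for colors in product(range(3), repeat=len(nodes)):
        coloring = dict(zip(nodes, colors))
        if is_proper(edges, coloring):
            return coloring
    return None

# A 5-cycle: 3-colorable, though not 2-colorable.
nodes = "abcde"
cycle = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("e", "a")]
c = find_3_coloring(nodes, cycle)
print(is_proper(cycle, c))  # -> True
```

The checker is the NP "proof verifier"; whether the exponential search can always be replaced by something like the checker's speed is precisely the P vs. NP question.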
- Programmers/CS: a hardness certificate.
- Mathematicians: structural nastiness.
- Scientists: model validation / a sanity check.

NP-complete problems that "nature solves":
- Biology: minimum-energy protein folding.
- Physics: minimum-surface-area foam.
Possibilities: the model is wrong, or the inputs are special, or P=NP. Use P≠NP as a law of nature! Or add nature to your laptop: randomness, quantum, ...

Why are there so many NP-complete problems? [Cook, Levin '71]: 3-SAT easy ⟺ P=NP. [Karp '72]: if 3-COL is easy then 3-SAT is easy. The reason is the locality of efficient computation: a formula is mapped to a graph so that satisfying assignments correspond to legal colorings. The key piece is an OR-gadget on inputs x, y with output z. Claim: in every legal 3-coloring, z = x ∨ y. Many structures encode computation.

The map of NP: the hardest problems are the NP-complete ones (3-SAT, 3-COL, Long Path, Protein Folding); the easiest form P (2-SAT, Shortest Path, 2-COL, Multiplication), glued together by polynomial-time algorithms; in between sits the "dark matter" of NP, such as Integer Factoring. Is there a dichotomy?

Structure in approximation problems (+ sophisticated reductions)

The mystery of approximation. 1970s: essentially all optimization problems are either in P or NP-complete. Hard problems don't go away... How well can we approximate the optimum? Best known approximation ratios:
- 3-SAT: ≤ 8/7
- Set Cover: ≤ log n
- TSP: ≤ 3/2
- 3-COL: ≤ n^0.4
- Vertex Cover: ≤ 2
- Clique: ≤ n/log²n
Are these good? Can we do better? Is there a theory?
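The OR-gadget can be checked by brute force. The sketch below uses the standard triangle gadget in my own encoding (vertex names are mine): inputs x and y, an internal triangle i1-i2-o whose third corner o is the output, and attachment edges x-i1 and y-i2. With color palette {T, F, B}, exhaustive enumeration confirms that when both inputs are colored F the output is forced to F, and when either input is T the output can legally be T (the full reduction adds further constraints pinning it there).

```python
from itertools import product

COLORS = ("T", "F", "B")
# OR-gadget: internal triangle i1-i2-o, plus input attachments x-i1, y-i2.
EDGES = [("i1", "i2"), ("i1", "o"), ("i2", "o"), ("x", "i1"), ("y", "i2")]

def output_colors(x, y):
    """All colors the output o can take in a proper 3-coloring with x, y fixed."""
    outs = set()
    for i1, i2, o in product(COLORS, repeat=3):
        col = {"x": x, "y": y, "i1": i1, "i2": i2, "o": o}
        if all(col[u] != col[v] for u, v in EDGES):
            outs.add(o)
    return outs

assert output_colors("F", "F") == {"F"}   # x=F and y=F force o=F
assert "T" in output_colors("T", "F")     # one true input: o=T is legal
assert "T" in output_colors("F", "T")
assert "T" in output_colors("T", "T")
```

Chaining such gadgets along the gates of a circuit is what lets a coloring problem "encode computation".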
[Håstad '01]: approximating 3-SAT to within 8/7−ε is NP-hard. The timeline, from 1970 to 2010: "3-SAT is hard", then a decade to "3-SAT is 1.01-hard", then two more decades to "3-SAT is 8/7−ε hard".

Interconnectivity of core TCS. The proofs draw on Fourier analysis, concentration inequalities, influences in Boolean functions, game theory, and distributed computing. And on proof systems:
- IP = Interactive Proofs; IP = PSPACE [LFKN, S].
- 2IP = 2-prover Interactive Proofs; 2IP = NEXP [BFL].
- PCP = Probabilistically Checkable Proofs; PCP = NP [AS, ALMSS].
Connected to arithmetization, program checking, cryptography, zero-knowledge, probabilistic optimization, and approximation algorithms.

The last decade. [Khot]: the Unique Games Conjecture (linear equations are hard to approximate). By the 2000s, exact approximation ratios for many problems are hard: 3-SAT = 8/7, 3-XOR = 2, Set Cover = log n, ... Where do these numbers come from? [Raghavendra '08]: assuming the conjecture, a single, universal algorithm, based on convex programming, achieves the optimal approximation ratio for all constraint satisfaction problems. Analysis and geometry meet NP-completeness.

Structure in cryptographic problems

Complexity-based cryptography predated the Internet & e-commerce, and then enabled the Internet & e-commerce. Assumptions: parties can only solve easy problems, and Factoring is hard. The asymptotic view allows setting parameters: p, q → p·q. Information theory says secret communication is impossible; complexity theory disagrees.
- Secret communication: Diffie-Hellman, Merkle '76.
- Public-key encryption: Rivest-Shamir-Adleman '77.
- E-commerce security: Goldwasser-Micali '81.
"I want to purchase 'War and Peace'. My credit card number is E_B(1111 2222 3333 4444)."
New reductions, new standards: [DH, M, RSA]: efficient recovery of x from E_B(x) implies Factoring is easy. [GM]: efficiently distinguishing E_B(x) from random implies Factoring is easy. Thm: Factoring hard ⟹ secret communication. Ask the impossible: what else can be done?
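The Diffie-Hellman key-exchange idea fits in a few lines. A toy sketch with illustrative parameters of my choosing (2**127 - 1 is a Mersenne prime, far too small and unstructured a choice for real use; deployed systems use standardized groups):

```python
import secrets

# Toy Diffie-Hellman key exchange (demo parameters, not secure practice).
p = 2**127 - 1   # a Mersenne prime, used here only for illustration
g = 3

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)                   # Alice publishes g^a mod p
B = pow(g, b, p)                   # Bob publishes g^b mod p

k_alice = pow(B, a, p)             # Alice computes (g^b)^a
k_bob = pow(A, b, p)               # Bob computes (g^a)^b

assert k_alice == k_bob            # shared secret, never transmitted
```

Both parties only exponentiate (easy), while an eavesdropper who sees g, p, A, B must solve a problem believed hard; that asymmetry between easy and hard problems is exactly the "asymptotic view" the slides describe.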
Different settings & privacy constraints, and different reductions to Factoring, give:
- Digital signatures
- Secret exchange
- Oblivious Transfer
- Commitments
- Digital cash
- Coin flipping
- [GMR '85] Zero-knowledge proofs
- ... Everything!!

A unified reduction, in two steps.
1. Eliminating bad guys: malicious behavior and fault tolerance are handled by the zero-knowledge compiler [GMW '86], a new complete primitive that reduces the malicious case to honest-but-curious players, for whom only privacy/secrecy remains.
2. Dealing with good guys: elections for honest players. Each voter i holds a secret bit Si (0 = Democrat, 1 = Republican), and the winner is g(S1, S2, ..., Sn) with g = Majority. All players learn g(S1, ..., Sn); no subset learns anything more. Oblivious computation with secret inputs [Yao '86; GMW '87] evaluates g gate by gate (e.g. an AND gate between Alice and Bob) while keeping every intermediate value hidden.
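The election guarantee, that everyone learns the tally while no one learns an individual vote, can be illustrated with additive secret sharing, one classic ingredient of such protocols. A minimal honest-but-curious sketch (the framing and all names are mine, and it omits the zero-knowledge layer the talk describes for malicious players):

```python
import secrets

def share(vote, n, modulus):
    """Split a vote into n random shares that sum to it mod `modulus`."""
    shares = [secrets.randbelow(modulus) for _ in range(n - 1)]
    shares.append((vote - sum(shares)) % modulus)
    return shares

# Five honest voters; modulus larger than the voter count so the tally fits.
votes = [1, 0, 1, 1, 0]
n, modulus = len(votes), 101

# Voter i sends share j of their vote to player j (each share alone is
# uniformly random, so it reveals nothing about the vote).
all_shares = [share(v, n, modulus) for v in votes]

# Each player publishes only the sum of the shares it received...
column_sums = [sum(col) % modulus for col in zip(*all_shares)]

# ...and the tally, but no individual vote, is recovered.
tally = sum(column_sums) % modulus
print(tally)  # -> 3
```

Summation is a linear function, so it can be computed directly on shares; general functions like Majority need the gate-by-gate machinery of Yao and GMW.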
Recommended publications
  • Invention, Intension and the Limits of Computation
    Minds & Machines - under review (final version pre-review, copyright Minds & Machines) Invention, Intension and the Limits of Computation Hajo Greif 30 June, 2020 Abstract This is a critical exploration of the relation between two common assumptions in anti-computationalist critiques of Artificial Intelligence: The first assumption is that at least some cognitive abilities are specifically human and non-computational in nature, whereas the second assumption is that there are principled limitations to what machine-based computation can accomplish with respect to simulating or replicating these abilities. Against the view that these putative differences between computation in humans and machines are closely related, this essay argues that the boundaries of the domains of human cognition and machine computation might be independently defined, distinct in extension and variable in relation. The argument rests on the conceptual distinction between intensional and extensional equivalence in the philosophy of computing and on an inquiry into the scope and nature of human invention in mathematics, and their respective bearing on theories of computation. Keywords Artificial Intelligence · Intensionality · Extensionality · Invention in mathematics · Turing computability · Mechanistic theories of computation This paper originated as a rather spontaneous and quickly drafted Computability in Europe conference submission. It grew into something more substantial with the help of the CiE reviewers and the author's colleagues in the Philosophy of Computing Group at Warsaw University of Technology, Paula Quinon and Paweł Stacewicz, mediated by an entry to the "Café Aleph" blog co-curated by Paweł. Philosophy of Computing Group, International Center for Formal Ontology (ICFO), Warsaw University of Technology E-mail: [email protected] https://hajo-greif.net; https://tinyurl.com/philosophy-of-computing
  • Limits on Efficient Computation in the Physical World
    Limits on Efficient Computation in the Physical World by Scott Joel Aaronson Bachelor of Science (Cornell University) 2000 A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley Committee in charge: Professor Umesh Vazirani, Chair; Professor Luca Trevisan; Professor K. Birgitta Whaley Fall 2004 Copyright 2004 by Scott Joel Aaronson Abstract More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem—that of deciding whether a sequence of n integers is one-to-one or two-to-one—must query the sequence Ω(n^{1/5}) times.
  • The Limits of Knowledge: Philosophical and Practical Aspects of Building a Quantum Computer
    The Limits of Knowledge: Philosophical and practical aspects of building a quantum computer. Simons Center, November 19, 2010. Michael H. Freedman, Station Q, Microsoft Research. • To explain quantum computing, I will offer some parallels between philosophical concepts, specifically from Catholicism on the one hand, and fundamental issues in math and physics on the other. • Why in the world would I do that? I gave the first draft of this talk at the Physics Department of Notre Dame. There was a strange disconnect: the audience was completely secular. They couldn't figure out why some guy named Freedman was talking to them about Catholicism. The comedic potential was palpable. I want to try again, with a rethought talk and a broader audience. • Mathematics and Physics have been in their modern rebirth for 600 years, and in a sense both sprang from the Church (e.g. Roger Bacon). So let's compare these two long-term enterprises: methods, and scope of ideas. • Common points of Catholicism and Math/Physics: care about difficult ideas; agonize over systems and foundations; think on long time scales; safeguard, revisit, recycle fruitful ideas and methods. Some of my favorites: Aquinas, Dante, Lully, Descartes, Dali, Tolkien. Lully may have been the first person to try to build a computer: he sought an automated way to distinguish truth from falsehood, doctrine from heresy. • Philosophy and religion deal with large questions. In Math/Physics we also have great overarching questions which might be considered the social equals of: Omniscience, Free Will, Original Sin, and Redemption.
  • Thermodynamics of Computation and Linear Stability Limits of Superfluid Refrigeration of a Model Computing Array
    Thermodynamics of computation and linear stability limits of superfluid refrigeration of a model computing array Michele Sciacca (1,6), Antonio Sellitto (2), Luca Galantucci (3), David Jou (4,5) 1 Dipartimento di Scienze Agrarie e Forestali, Università di Palermo, Viale delle Scienze, 90128 Palermo, Italy 2 Dipartimento di Ingegneria Industriale, Università di Salerno, Campus di Fisciano, 84084, Fisciano, Italy 3 Joint Quantum Centre (JQC) Durham-Newcastle, and School of Mathematics and Statistics, Newcastle University, Newcastle upon Tyne, NE1 7RU, United Kingdom 4 Departament de Física, Universitat Autònoma de Barcelona, 08193 Bellaterra, Catalonia, Spain 5 Institut d'Estudis Catalans, Carme 47, Barcelona 08001, Catalonia, Spain 6 Istituto Nazionale di Alta Matematica, Roma 00185, Italy Abstract We analyze the stability of the temperature profile of an array of computing nanodevices refrigerated by flowing superfluid helium, under variations of temperature, computing rate, and barycentric velocity of helium. It turns out that if the variation of dissipated energy per bit with respect to temperature variations is higher than some critical values, proportional to the effective thermal conductivity of the array, then the steady-state temperature profiles become unstable and refrigeration efficiency is lost. Furthermore, a restriction on the maximum rate of variation of the local computation rate is found. Keywords: Stability analysis; Computer refrigeration; Superfluid helium; Thermodynamics of computation. AMS Subject Classifications: 76A25 (Superfluids, classical aspects); 80A99 (Thermodynamics and heat transfer) 1 Introduction Thermodynamics of computation [1], [2], [3], [4] is a major field of current research. On practical grounds, a relevant part of energy consumption in computers is related to refrigeration costs; indeed, heat dissipation in very miniaturized chips is one of the limiting factors in computation.
  • The Limits of Computation & the Computation of Limits: Towards New Computational Architectures
    The Limits of Computation & The Computation of Limits: Towards New Computational Architectures An Intensive Program Proposal for Spring/Summer of 2012 Assoc. Prof. Ahmet KOLTUKSUZ, Ph.D. Yaşar University School of Engineering, Dept. of Computer Engineering Izmir, Turkey The Aim & Rationale To disseminate the accumulated information and knowledge on the limits of the science of computation in such a compact and organized way that no single discipline and/or department offers in their standard undergraduate or graduate curriculums. While the department of Physics evaluates the limits of computation in terms of the relative speed of electrons, the resistance coefficient of the propagation IC, and whether such an activity takes place in a Newtonian domain or in a Quantum domain with the Planck measures, it is the assumptions of Leibnitz and Gödel's incompleteness theorem, or even Kolmogorov complexity along with Chaitin, for the department of Mathematics. On the other hand, while the department of Computer Science would like to find out whether it is possible to find an algorithm to compute the shortest path between two nodes of a lattice before the end of the universe, the departments of both Software Engineering and Computer Engineering are on the job of actually creating a Turing machine with an appropriate operating system, which will be executing that long without rolling off to the pitfalls of indefinite loops and/or of indefinite postponements in its process & memory management routines, while having multicore architectures doing parallel execution. The limits notwithstanding, the new three- and possibly more-dimensional topological paradigms for the architectures of both hardware and software sound very promising and are therefore worth closely examining.
  • The Intractable and the Undecidable - Computation and Anticipatory Processes
    The Intractable and the Undecidable - Computation and Anticipatory Processes. International Journal of Applied Research on Information Technology and Computing, DOI: Mihai Nadin, Ashbel Smith University Professor, University of Texas at Dallas; Director, Institute for Research in Anticipatory Systems. Email id: [email protected]; www.nadin.ws Abstract Representation is the pre-requisite for any form of computation. To understand the consequences of this premise, we have to reconsider computation itself. As the sciences and the humanities become computational, to understand what is processed and, moreover, the meaning of the outcome, becomes critical. Focus on big data makes this understanding even more urgent. A fast growing variety of data processing methods is making processing more effective. Nevertheless these methods do not shed light on the relation between the nature of the process and its outcome. This study in foundational aspects of computation addresses the intractable and the undecidable from the perspective of anticipation. Representation, as a specific semiotic activity, turns out to be significant for understanding the meaning of computation. Keywords: Anticipation, complexity, information, representation 1. INTRODUCTION As intellectual constructs, concepts are compressed answers to questions posed in the process of acquiring and disseminating knowledge about the world. Ill-defined concepts are more a trap into vacuous discourse: they undermine the gnoseological effort, i.e., our attempt to gain knowledge. This is why Occam (to whom the principle of parsimony is attributed) advised keeping them to the minimum necessary. Aristotle, Maimonides, Duns Scotus, Ptolemy, and, later, Newton argued for the same.
  • Ultimate Intelligence Part II: Physical Complexity and Limits of Inductive Inference Systems (arXiv:1504.03303v2 [cs.AI], 11 May 2016)
    Ultimate Intelligence Part II: Physical Complexity and Limits of Inductive Inference Systems Eray Özkural Gök Us Sibernetik Ar&Ge Ltd. Şti. Abstract. We continue our analysis of volume and energy measures that are appropriate for quantifying inductive inference systems. We extend logical depth and conceptual jump size measures in AIT to stochastic problems, and physical measures that involve volume and energy. We introduce a graphical model of computational complexity that we believe to be appropriate for intelligent machines. We show several asymptotic relations between energy, logical depth and volume of computation for inductive inference. In particular, we arrive at a "black-hole equation" of inductive inference, which relates energy, volume, space, and algorithmic information for an optimal inductive inference solution. We introduce energy-bounded algorithmic entropy. We briefly apply our ideas to the physical limits of intelligent computation in our universe. "Everything must be made as simple as possible. But not simpler." - Albert Einstein 1 Introduction We initiated the ultimate intelligence research program in 2014, inspired by Seth Lloyd's similarly titled article on the ultimate physical limits to computation [6], intended as a book-length treatment of the theory of general-purpose AI. In similar spirit to Lloyd's research, we investigate the ultimate physical limits and conditions of intelligence. A main motivation is to extend the theory of intelligence using physical units, emphasizing the physicalism inherent in computer science.
    This is the second installation of the paper series; the first part [13] proposed that universal induction theory is physically complete, arguing that the algorithmic entropy of a physical stochastic source is always finite, and argued that if we choose the laws of physics as the reference machine, the loophole in algorithmic information theory (AIT) of choosing a reference machine is closed.
  • A Decade of Lattice Cryptography
    Full text available at: http://dx.doi.org/10.1561/0400000074 A Decade of Lattice Cryptography Chris Peikert Computer Science and Engineering University of Michigan, United States Boston — Delft Foundations and Trends® in Theoretical Computer Science Published, sold and distributed by: now Publishers Inc. PO Box 1024 Hanover, MA 02339 United States Tel. +1-781-985-4510 www.nowpublishers.com [email protected] Outside North America: now Publishers Inc. PO Box 179 2600 AD Delft The Netherlands Tel. +31-6-51115274 The preferred citation for this publication is C. Peikert. A Decade of Lattice Cryptography. Foundations and Trends® in Theoretical Computer Science, vol. 10, no. 4, pp. 283–424, 2014. This Foundations and Trends® issue was typeset in LaTeX using a class file designed by Neal Parikh. Printed on acid-free paper. ISBN: 978-1-68083-113-9 © 2016 C. Peikert All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, mechanical, photocopying, recording or otherwise, without prior written permission of the publishers. Photocopying. In the USA: This journal is registered at the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923. Authorization to photocopy items for internal or personal use, or the internal or personal use of specific clients, is granted by now Publishers Inc for users registered with the Copyright Clearance Center (CCC). The 'services' for users can be found on the internet at: www.copyright.com For those organizations that have been granted a photocopy license, a separate system of payment has been arranged.
  • The Best Nurturers in Computer Science Research
    The Best Nurturers in Computer Science Research Bharath Kumar M. Y. N. Srikant IISc-CSA-TR-2004-10 http://archive.csa.iisc.ernet.in/TR/2004/10/ Computer Science and Automation Indian Institute of Science, India October 2004 The Best Nurturers in Computer Science Research Bharath Kumar M.∗ Y. N. Srikant† Abstract The paper presents a heuristic for mining nurturers in temporally organized collaboration networks: people who facilitate the growth and success of the young ones. Specifically, this heuristic is applied to the computer science bibliographic data to find the best nurturers in computer science research. The measure of success is parameterized, and the paper demonstrates experiments and results with publication count and citations as success metrics. Rather than just the nurturer's success, the heuristic captures the influence he has had in the independent success of the relatively young in the network. These results can hence be a useful resource to graduate students and post-doctoral candidates. The heuristic is extended to accurately yield ranked nurturers inside a particular time period. Interestingly, there is a recognizable deviation between the rankings of the most successful researchers and the best nurturers, which although is obvious from a social perspective has not been statistically demonstrated. Keywords: Social Network Analysis, Bibliometrics, Temporal Data Mining. 1 Introduction Consider a student Arjun, who has finished his under-graduate degree in Computer Science, and is seeking a PhD degree followed by a successful career in Computer Science research. How does he choose his research advisor? He has the following options with him: 1. Look up the rankings of various universities [1], and apply to any "reasonably good" professor in any of the top universities.
  • Constraint Based Dimension Correlation and Distance
    Preface The papers in this volume were presented at the Fourteenth Annual IEEE Conference on Computational Complexity held from May 4-6, 1999 in Atlanta, Georgia, in conjunction with the Federated Computing Research Conference. This conference was sponsored by the IEEE Computer Society Technical Committee on Mathematical Foundations of Computing, in cooperation with the ACM SIGACT (The special interest group on Algorithms and Complexity Theory) and EATCS (The European Association for Theoretical Computer Science). The call for papers sought original research papers in all areas of computational complexity. A total of 70 papers were submitted for consideration of which 28 papers were accepted for the conference and for inclusion in these proceedings. Six of these papers were accepted to a joint STOC/Complexity session. For these papers the full conference paper appears in the STOC proceedings and a one-page summary appears in these proceedings. The program committee invited two distinguished researchers in computational complexity - Avi Wigderson and Jin-Yi Cai - to present invited talks. These proceedings contain survey articles based on their talks. The program committee thanks Pradyut Shah and Marcus Schaefer for their organizational and computer help, Steve Tate and the SIGACT Electronic Publishing Board for the use and help of the electronic submissions server, Peter Shor and Mike Saks for the electronic conference meeting software and Danielle Martin of the IEEE for editing this volume. The committee would also like to thank the following people for their help in reviewing the papers: E. Allender, V. Arvind, M. Ajtai, A. Ambainis, G. Barequet, S. Baumer, A. Berthiaume, S.
  • Interactions of Computational Complexity Theory and Mathematics
    Interactions of Computational Complexity Theory and Mathematics Avi Wigderson October 22, 2017 Abstract [This paper is a (self-contained) chapter in a new book on computational complexity theory, called Mathematics and Computation, whose draft is available at https://www.math.ias.edu/avi/book]. We survey some concrete interaction areas between computational complexity theory and different fields of mathematics. We hope to demonstrate here that hardly any area of modern mathematics is untouched by the computational connection (which in some cases is completely natural and in others may seem quite surprising). In my view, the breadth, depth, beauty and novelty of these connections is inspiring, and speaks to a great potential of future interactions (which indeed, are quickly expanding). We aim for variety. We give short, simple descriptions (without proofs or much technical detail) of ideas, motivations, results and connections; this will hopefully entice the reader to dig deeper. Each vignette focuses only on a single topic within a large mathematical field. We cover the following: • Number Theory: Primality testing • Combinatorial Geometry: Point-line incidences • Operator Theory: The Kadison-Singer problem • Metric Geometry: Distortion of embeddings • Group Theory: Generation and random generation • Statistical Physics: Monte-Carlo Markov chains • Analysis and Probability: Noise stability • Lattice Theory: Short vectors • Invariant Theory: Actions on matrix tuples 1 Introduction The Theory of Computation (ToC) lays out the mathematical foundations of computer science. I am often asked if ToC is a branch of Mathematics, or of Computer Science. The answer is easy: it is clearly both (and in fact, much more). Ever since Turing's 1936 definition of the Turing machine, we have had a formal mathematical model of computation that enables the rigorous mathematical study of computational tasks, algorithms to solve them, and the resources these require.
  • Advances in Three Hypercomputation Models
    EJTP 13, No. 36 (2016) 169-182 Electronic Journal of Theoretical Physics Advances in Three Hypercomputation Models Mario Antoine Aoun∗ Department of the PhD Program in Cognitive Informatics, Faculty of Science, Université du Québec à Montréal (UQAM), QC, Canada Received 1 March 2016, Accepted 20 September 2016, Published 10 November 2016 Abstract: In this review article we discuss three hypercomputing models: the Accelerated Turing Machine, the Relativistic Computer and the Quantum Computer, based on three new discoveries: superluminal particles, slowly rotating black holes and adiabatic quantum computation, respectively. © Electronic Journal of Theoretical Physics. All rights reserved. Keywords: Hypercomputing Models; Turing Machine; Quantum Computer; Relativistic Computer; Superluminal Particles; Slowly Rotating Black Holes; Adiabatic Quantum Computation PACS (2010): 03.67.Lx; 03.67.Ac; 03.67.Bg; 03.67.Mn 1 Introduction We discuss three examples of hypercomputation models and their advancements. The first example is based on a new theory that advocates the use of superluminal particles in order to bypass the energy constraint of a physically realizable Accelerated Turing Machine (ATM) [1, 2]. The second example refers to contemporary developments in relativity theory that are based on recent astronomical observations and the discovery of the existence of huge slowly rotating black holes in our universe, which in return can provide a suitable space-time structure to operate a relativistic computer [3, 4 and 5]. The third example is based on latest advancements and research in quantum computing modeling (e.g. the Adiabatic Quantum Computer and Quantum Morphogenetic Computing) [6, 7, 8, 9, 10, 11, 12 and 13]. Copeland defines the terms Hypercomputation and Hypercomputer as follows: "Hypercomputation is the computation of functions or numbers that [.