Timeline of CS103 Results
Recommended publications
Efficient Algorithms with Asymmetric Read and Write Costs
Efficient Algorithms with Asymmetric Read and Write Costs. Guy E. Blelloch (Carnegie Mellon University), Jeremy T. Fineman (Georgetown University), Phillip B. Gibbons (Carnegie Mellon University), Yan Gu (Carnegie Mellon University), and Julian Shun (University of California, Berkeley). Abstract: In several emerging technologies for computer memory (main memory), the cost of reading is significantly cheaper than the cost of writing. Such asymmetry in memory costs poses a fundamentally different model from the RAM for algorithm design. In this paper we study lower and upper bounds for various problems under such asymmetric read and write costs. We consider both the case in which all but O(1) memory has asymmetric cost, and the case of a small cache of symmetric memory. We model both cases using the (M, ω)-ARAM, in which there is a small (symmetric) memory of size M and a large unbounded (asymmetric) memory, both random access, and where reading from the large memory has unit cost, but writing has cost ω ≫ 1. For FFT and sorting networks we show a lower bound cost of Ω(ωn log_{ωM} n), which indicates that it is not possible to achieve asymptotic improvements with cheaper reads when ω is bounded by a polynomial in M. Moreover, there is an asymptotic gap (of min(ω, log n)/log(ωM)) between the cost of sorting networks and comparison sorting in the model. This contrasts with the RAM, and most other models, in which the asymptotic costs are the same. We also show a lower bound for computations on an n × n diamond DAG of Ω(ωn²/M) cost, which indicates no asymptotic improvement is achievable with fast reads.
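As a concrete illustration of the cost model in this abstract, here is a minimal Python sketch of (M, ω)-ARAM-style cost accounting: each read from the large asymmetric memory is charged 1 and each write is charged ω. The class and function names are invented for illustration and are not code from the paper.

```python
# Hypothetical cost accounting for the (M, omega)-ARAM described above:
# reads from the large asymmetric memory cost 1, writes cost omega.

class AsymmetricRAM:
    def __init__(self, data, omega):
        self.mem = list(data)    # the large, unbounded asymmetric memory
        self.omega = omega       # write cost omega >> 1
        self.cost = 0            # total charged cost

    def read(self, i):
        self.cost += 1           # unit cost per read
        return self.mem[i]

    def write(self, i, value):
        self.cost += self.omega  # each write costs omega reads
        self.mem[i] = value

def prefix_sums(ram, n):
    """In-place prefix sums: 2(n-1) reads and (n-1) writes."""
    for i in range(1, n):
        ram.write(i, ram.read(i - 1) + ram.read(i))

ram = AsymmetricRAM(range(8), omega=10)
prefix_sums(ram, 8)
print(ram.cost)  # 14 reads + 7 writes * 10 = 84
```

With ω = 10 the writes dominate the total, which is the regime where read-efficient algorithm design pays off.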
Great Ideas in Computing
Great Ideas in Computing. University of Toronto, CSC196, Winter/Spring 2019. Week 6: October 19-23 (2020). Announcements: I added one final question to Assignment 2. It concerns search engines. The question may need a little clarification. I have also clarified question 1, where I have defined (in a non-standard way) the meaning of a strict binary search tree, which is what I had in mind. Please answer the question for a strict binary search tree. If you answered the quiz question for a non-strict binary search tree with a proper explanation, you will get full credit. Quick poll: how many students feel that Q1 was a fair quiz? A1 has now been graded by Marta. I will scan over the assignments and hope to release the grades later today. If you plan to make a regrading request, you have up to one week to submit your request. You must specify clearly why you feel that a question may not have been graded fairly. In general, students did well, which is what I expected. Agenda for the week: We will continue to discuss search engines. We ended on what is slide 10 (in Week 5) on Friday and we will continue from where we left off. I was surprised that in our poll most students felt that the people advocating the "AI view" of search "won the debate", whereas today I will try to argue that the people (e.g., Salton and others) advocating the "combinatorial, algebraic, statistical view" won the debate as to current search engines.
P Versus NP Richard M. Karp
P Versus NP. Richard M. Karp. Computational complexity theory is the branch of theoretical computer science concerned with the fundamental limits on the efficiency of automatic computation. It focuses on problems that appear to require a very large number of computation steps for their solution. The inputs and outputs to a problem are sequences of symbols drawn from a finite alphabet; there is no limit on the length of the input, and the fundamental question about a problem is the rate of growth of the number of required computation steps as a function of the length of the input. Some problems seem to require a very rapidly growing number of steps. One such problem is the independent set problem: given a graph, consisting of points called vertices and lines called edges connecting pairs of vertices, a set of vertices is called independent if no two vertices in the set are connected by a line. Given a graph and a positive integer n, the problem is to decide whether the graph contains an independent set of size n. Every known algorithm to solve the independent set problem encounters a combinatorial explosion, in which the number of required computation steps grows exponentially as a function of the size of the graph. On the other hand, the problem of deciding whether a given set of vertices is an independent set in a given graph is solvable by inspection. There are many such dichotomies, in which it is hard to decide whether a given type of structure exists within an input object (the existence problem), but it is easy to decide whether a given structure is of the required type (the verification problem).
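The dichotomy Karp describes can be made concrete in a few lines of Python: verifying a candidate independent set is a quick pairwise check, while deciding existence by brute force must examine exponentially many candidate sets. The function names are ours; this is an illustrative sketch, not code from the article.

```python
from itertools import combinations

def is_independent(vertices, edges):
    """Verification: no two chosen vertices are joined by an edge.
    This is 'solvable by inspection': O(k^2) pair tests."""
    return all((u, v) not in edges and (v, u) not in edges
               for u, v in combinations(vertices, 2))

def has_independent_set(n, edges, k):
    """Existence: brute force over all C(n, k) candidate sets, the
    combinatorial explosion the text describes."""
    return any(is_independent(c, edges)
               for c in combinations(range(n), k))

# A 5-cycle: its largest independent set has size 2.
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)}
print(is_independent((0, 2), edges))      # True: quick pairwise check
print(has_independent_set(5, edges, 3))   # False: exhaustive search
```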
A Short History of Computational Complexity
The Computational Complexity Column, by Lance Fortnow, NEC Laboratories America, 4 Independence Way, Princeton, NJ 08540, USA. [email protected], http://www.neci.nj.nec.com/homepages/fortnow/beatcs. Every third year the Conference on Computational Complexity is held in Europe, and this summer the University of Aarhus (Denmark) will host the meeting July 7-10. More details at the conference web page http://www.computationalcomplexity.org. This month we present a historical view of computational complexity written by Steve Homer and myself. This is a preliminary version of a chapter to be included in an upcoming North-Holland Handbook of the History of Mathematical Logic edited by Dirk van Dalen, John Dawson and Aki Kanamori. A Short History of Computational Complexity. Lance Fortnow (NEC Research Institute, 4 Independence Way, Princeton, NJ 08540) and Steve Homer (Computer Science Department, Boston University, 111 Cummington Street, Boston, MA 02215). 1 Introduction. It all started with a machine. In 1936, Turing developed his theoretical computational model. He based his model on how he perceived mathematicians think. As digital computers were developed in the 40's and 50's, the Turing machine proved itself as the right theoretical model for computation. Quickly though we discovered that the basic Turing machine model fails to account for the amount of time or memory needed by a computer, a critical issue today but even more so in those early days of computing. The key idea to measure time and space as a function of the length of the input came in the early 1960's by Hartmanis and Stearns.
Toward a Mathematical Semantics for Computer Languages
Toward a Mathematical Semantics for Computer Languages, by Dana Scott (Princeton University) and Christopher Strachey (Oxford University). Technical Monograph PRG-6, August 1971. Oxford University Computing Laboratory, Programming Research Group, 45 Banbury Road, Oxford. © 1971 Dana Scott and Christopher Strachey. Dana Scott: Department of Philosophy, 1879 Hall, Princeton University, Princeton, New Jersey 08540. Christopher Strachey: Oxford University Computing Laboratory, Programming Research Group, 45 Banbury Road, Oxford OX2 6PE. This paper is also to appear in Proceedings of the Symposium on Computers and Automata, Microwave Research Institute Symposia Series Volume 21, Polytechnic Institute of Brooklyn, and appears as a Technical Monograph by special arrangement with the publishers. References in the literature should be made to the Proceedings, as the texts are identical and the Symposia Series is generally available in libraries. Abstract: Compilers for high-level languages are generally constructed to give the complete translation of the programs into machine language. As machines merely juggle bit patterns, the concepts of the original language may be lost or at least obscured during this passage. The purpose of a mathematical semantics is to give a correct and meaningful correspondence between programs and mathematical entities in a way that is entirely independent of an implementation.
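To convey the flavor of the idea, independent of Scott and Strachey's actual formalism, here is a hypothetical Python sketch in which each expression denotes a mathematical entity, a function from environments to numbers, with no machine or bit pattern in sight. The combinator names are assumptions for illustration.

```python
# Each phrase denotes a mathematical entity: a function from environments
# (variable assignments) to numbers. No instruction set or implementation
# detail appears anywhere in the semantics.

def num(n):      return lambda env: n
def var(x):      return lambda env: env[x]
def add(a, b):   return lambda env: a(env) + b(env)
def mul(a, b):   return lambda env: a(env) * b(env)

# The denotation of "x * (y + 2)" is itself a function:
expr = mul(var('x'), add(var('y'), num(2)))
print(expr({'x': 3, 'y': 4}))  # 18
```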
Why Mathematical Proof?
Why Mathematical Proof? Dana S. Scott, FBA, FNAS. University Professor Emeritus, Carnegie Mellon University; Visiting Scholar, University of California, Berkeley. NOTICE! The author has plagiarized text and graphics from innumerable publications and sites, and he has failed to record attributions! But, as this lecture is intended as an entertainment and is not intended for publication, he regards such copying, therefore, as "fair use". Keep this quiet, and do please forgive him. A Timeline for Geometry. Some Greek Geometers: Thales of Miletus (ca. 624 – 548 BC), Pythagoras of Samos (ca. 580 – 500 BC), Plato (428 – 347 BC), Archytas (428 – 347 BC), Theaetetus (ca. 417 – 369 BC), Eudoxus of Cnidus (ca. 408 – 347 BC), Aristotle (384 – 322 BC), Euclid (ca. 325 – ca. 265 BC), Archimedes of Syracuse (ca. 287 – ca. 212 BC), Apollonius of Perga (ca. 262 – ca. 190 BC), Claudius Ptolemaeus (Ptolemy) (ca. 90 AD – ca. 168 AD), Diophantus of Alexandria (ca. 200 – 298 AD), Pappus of Alexandria (ca. 290 – ca. 350 AD), Proclus Lycaeus (412 – 485 AD). "There is no Royal Road to Geometry." Euclid of Alexandria (ca. 325 – ca. 265 BC) taught at Alexandria in the time of Ptolemy I Soter, who reigned over Egypt from 323 to 285 BC. He authored the most successful textbook ever produced — and put his sources into obscurity! Moreover, he made us struggle with proofs ever since. Why has Euclidean geometry been so successful? Our naive feeling for space is Euclidean; its methods have been very useful; and Euclid also shows us a mysterious connection between (visual) intuition and proof. The Pythagorean Theorem: Euclid's Elements, Proposition 47 of Book 1. The Pythagorean Theorem Generalized (three similar figures): if it holds for one triple, it holds for all.
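A brief rendering of the generalized statement the slide's caption alludes to, in LaTeX. This is our reconstruction of the Elements VI.31 form, not text from the slides: similar figures erected on the three sides have areas proportional to the squares of those sides, so the equality for one triple of similar figures implies it for all.

```latex
% Classical statement (Elements I.47):
\[ a^2 + b^2 = c^2 \]
% Generalization (Elements VI.31): erect similar figures on the three
% sides; each area scales with the square of its side, A_x = \lambda x^2
% for a fixed shape constant \lambda, hence
\[ A_a + A_b = \lambda(a^2 + b^2) = \lambda c^2 = A_c . \]
```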
The Origins of Structural Operational Semantics
The Origins of Structural Operational Semantics. Gordon D. Plotkin, Laboratory for Foundations of Computer Science, School of Informatics, University of Edinburgh, King's Buildings, Edinburgh EH9 3JZ, Scotland. I am delighted to see my Aarhus notes [59] on SOS, Structural Operational Semantics, published as part of this special issue. The notes already contain some historical remarks, but the reader may be interested to know more of the personal intellectual context in which they arose. I must straightaway admit that at this distance in time I do not claim total accuracy or completeness: what I write should rather be considered as a reconstruction, based on (possibly faulty) memory, papers, old notes and consultations with colleagues. As a postgraduate I learnt the untyped λ-calculus from Rod Burstall. I was further deeply impressed by the work of Peter Landin on the semantics of programming languages [34–37], which includes his abstract SECD machine. One should also single out John McCarthy's contributions [45–48], which include his 1962 introduction of abstract syntax, an essential tool, then and since, for all approaches to the semantics of programming languages. The IBM Vienna school [42, 41] were interested in specifying real programming languages and, in particular, worked on an abstract interpreting machine for PL/I using VDL, their Vienna Definition Language; they were influenced by the ideas of McCarthy, Landin and Elgot [18]. I remember attending a seminar at Edinburgh where the intricacies of their PL/I abstract machine were explained. The states of these machines are tuples of various kinds of complex trees and there is also a stack of environments; the transition rules involve much tree traversal to access syntactical control points, handle jumps, and to manage concurrency.
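As a toy illustration of the structural, syntax-directed style of transition rules this excerpt discusses, the following Python sketch (our own encoding, not Plotkin's notation) defines a one-step reduction relation by cases on the structure of the term and iterates it to a value.

```python
# A one-step transition relation for a tiny arithmetic language, defined
# by cases on term structure. The tuple encoding and the names here are
# assumptions for illustration only.

def step(term):
    """One transition: term -> term'."""
    op, left, right = term
    if isinstance(left, tuple):               # rule: reduce the left subterm
        return (op, step(left), right)
    if isinstance(right, tuple):              # rule: then the right subterm
        return (op, left, step(right))
    return left + right if op == '+' else left * right  # both are values

def evaluate(term):
    """Iterate -> until a plain value remains."""
    while isinstance(term, tuple):
        term = step(term)
    return term

# ('+', ('*', 2, 3), 4) -> ('+', 6, 4) -> 10
print(evaluate(('+', ('*', 2, 3), 4)))
```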
News from New Zealand
News from New Zealand, by C. S. Calude, Department of Computer Science, University of Auckland, Auckland, New Zealand. [email protected]. 1 Scientific and Community News. The latest CDMTCS research reports are (http://www.cs.auckland.ac.nz/staff-cgi-bin/mjd/secondcgi.pl): 403. U. Speidel. A Forward-Parsing Randomness Test Based on the Expected Codeword Length of T-codes, 05/2011. 404. M.J. Dinneen, Y.-B. Kim and R. Nicolescu. An Adaptive Algorithm for P System Synchronization, 05/2011. 405. A.A. Abbott, M. Bechmann, C.S. Calude, and A. Sebald. A Nuclear Magnetic Resonance Implementation of a Classical Deutsch-Jozsa Algorithm, 05/2011. 406. K. Tadaki. A Computational Complexity-Theoretic Elaboration of Weak Truth-Table Reducibility, 07/2011. 2 A Dialogue with Juris Hartmanis about Complexity. Professor Juris Hartmanis (http://en.wikipedia.org/wiki/Juris_Hartmanis), a Turing Award winner, is the Walter R. Read Professor of Engineering at Cornell University. He is a pioneer, founder and a major contributor to the area of computational complexity. Professor Hartmanis's eminent career also includes a strong service component: he served on numerous important committees (Turing Award Committee, Gödel Prize Committee, Waterman Award Committee); he was director of NSF's Directorate for Computer and Information Science and Engineering. Professor Hartmanis has been honoured with many awards and prizes. He was elected a member of the National Academy of Engineering and the Latvian Academy of Sciences, and a fellow of the American Academy of Arts and Sciences, the Association for Computing Machinery, the New York State Academy of Sciences, and the American Association for the Advancement of Science.
Formal Methods
Lecture Notes in Computer Science 1709, edited by G. Goos, J. Hartmanis and J. van Leeuwen. Springer: Berlin, Heidelberg, New York, Barcelona, Hong Kong, London, Milan, Paris, Singapore, Tokyo. Jeannette M. Wing, Jim Woodcock, Jim Davies (Eds.). FM’99 – Formal Methods: World Congress on Formal Methods in the Development of Computing Systems, Toulouse, France, September 20-24, 1999. Proceedings, Volume II. Series Editors: Gerhard Goos (Karlsruhe University, Germany), Juris Hartmanis (Cornell University, NY, USA), Jan van Leeuwen (Utrecht University, The Netherlands). Volume Editors: Jeannette M. Wing, Carnegie Mellon University, Computer Science Department, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA, e-mail: [email protected]; Jim Woodcock and Jim Davies, Oxford University Computing Laboratory, Software Engineering Programme, Wolfson Building, Parks Road, Oxford OX1 3QD, UK, e-mail: {jim.woodcock,jim.davies}@comlab.ox.ac.uk. CR Subject Classification (1998): F.3, D.2, F.4.1, D.3, D.1, C.2, C.3, I.2.3, B, J.2. ISSN 0302-9743. ISBN 3-540-66588-9, Springer-Verlag Berlin Heidelberg New York.
The P Versus NP Problem
THE P VERSUS NP PROBLEM. STEPHEN COOK. 1. Statement of the Problem. The P versus NP problem is to determine whether every language accepted by some nondeterministic algorithm in polynomial time is also accepted by some (deterministic) algorithm in polynomial time. To define the problem precisely it is necessary to give a formal model of a computer. The standard computer model in computability theory is the Turing machine, introduced by Alan Turing in 1936 [37]. Although the model was introduced before physical computers were built, it nevertheless continues to be accepted as the proper computer model for the purpose of defining the notion of computable function. Informally, the class P is the class of decision problems solvable by some algorithm within a number of steps bounded by some fixed polynomial in the length of the input. Turing was not concerned with the efficiency of his machines; rather, his concern was whether they can simulate arbitrary algorithms given sufficient time. It turns out, however, that Turing machines can generally simulate more efficient computer models (for example, machines equipped with many tapes or an unbounded random access memory) by at most squaring or cubing the computation time. Thus P is a robust class and has equivalent definitions over a large class of computer models. Here we follow standard practice and define the class P in terms of Turing machines. Formally, the elements of the class P are languages. Let Σ be a finite alphabet (that is, a finite nonempty set) with at least two elements, and let Σ* be the set of finite strings over Σ.
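Cook's informal definition can be rendered compactly in standard notation. This is a paraphrase in LaTeX, not a quotation from the paper:

```latex
% A language L over \Sigma is in P iff some Turing machine M decides
% membership within polynomially many steps in the input length.
\[ \mathbf{P} = \{\, L \subseteq \Sigma^{*} \mid \exists\, M, k \;
   \forall x \in \Sigma^{*} :\; M \text{ decides whether } x \in L
   \text{ in at most } |x|^{k} + k \text{ steps} \,\} \]
```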
Actor Model of Computation
Actor Model of Computation. Carl Hewitt, http://carlhewitt.info. Published in ArXiv: http://arxiv.org/abs/1008.1459. This paper is dedicated to Alonzo Church and Dana Scott. The Actor model is a mathematical theory that treats "Actors" as the universal primitives of concurrent digital computation. The model has been used both as a framework for a theoretical understanding of concurrency, and as the theoretical basis for several practical implementations of concurrent systems. Unlike previous models of computation, the Actor model was inspired by physical laws. It was also influenced by the programming languages Lisp, Simula 67 and Smalltalk-72, as well as ideas for Petri Nets, capability-based systems and packet switching. The advent of massive concurrency through client-cloud computing and many-core computer architectures has galvanized interest in the Actor model. An Actor is a computational entity that, in response to a message it receives, can concurrently: send a finite number of messages to other Actors; create a finite number of new Actors; designate the behavior to be used for the next message it receives. There is no assumed order to the above actions and they could be carried out concurrently. In addition, two messages sent concurrently can arrive in either order. Decoupling the sender from communications sent was a fundamental advance of the Actor model, enabling asynchronous communication and control structures as patterns of passing messages. Contents: Introduction (3); Fundamental concepts (3); Illustrations (3); Modularity thru Direct communication and asynchrony (3); Indeterminacy and Quasi-commutativity (4); Locality and Security (...)
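A minimal sketch of the three capabilities just listed (send messages, create actors, designate the next behavior), written in Python with one thread and one mailbox per actor. The Actor class and its API are our invention for illustration, not Hewitt's formalism.

```python
import queue
import threading
import time

# A behavior is a function that handles one message and may return a
# replacement behavior for the next message, as in the model above.

class Actor:
    def __init__(self, behavior):
        self.mailbox = queue.Queue()
        self.behavior = behavior
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        """Asynchronous send: the sender is decoupled from delivery."""
        self.mailbox.put(msg)

    def _run(self):
        while True:
            msg = self.mailbox.get()
            # The handler may designate the behavior for the next message.
            self.behavior = self.behavior(self, msg) or self.behavior

def counter(n):
    def behavior(self, msg):
        if msg == 'inc':
            return counter(n + 1)  # new behavior for the next message
        print('count =', n)        # any other message reports the count
    return behavior

a = Actor(counter(0))
a.send('inc'); a.send('inc'); a.send('report')  # prints: count = 2
time.sleep(0.1)  # let the daemon thread drain the mailbox before exit
```

Note how state change is expressed not by mutation but by designating a new behavior, which is the mechanism the abstract emphasizes.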