Annual Report of the ACM Awards Committee for the Period July 1, 2005 - June 30, 2006
Recommended publications
- Oral History Interview with Brian Randell, January 7, 2021, via Zoom
Conducted by William Aspray, Charles Babbage Institute.

Abstract: Brian Randell tells about his upbringing and his work at English Electric, IBM, and Newcastle University. The primary topic of the interview is his work in the history of computing. He discusses his discovery of the Irish computer pioneer Percy Ludgate, the preparation of his edited volume The Origins of Digital Computers, various lectures he has given on the history of computing, his PhD supervision of Martin Campbell-Kelly, the Computer History Museum, his contribution to the second edition of A Computer Perspective, and his involvement in making public the World War 2 Bletchley Park Colossus code-breaking machines, among other topics. This interview is part of a series of interviews on the early history of the history of computing.

Keywords: English Electric, IBM, Newcastle University, Bletchley Park, Martin Campbell-Kelly, Computer History Museum, Jim Horning, Gwen Bell, Gordon Bell, Enigma machine, Curta (calculating device), Charles and Ray Eames, I. Bernard Cohen, Charles Babbage, Percy Ludgate.

Aspray: This is an interview on the 7th of January 2021 with Brian Randell. The interviewer is William Aspray. We're doing this interview via Zoom. Brian, could you briefly talk about when and where you were born, a little bit about your growing up and your interests during that time, all the way through your formal education?

Randell: Ok. I was born in 1936 in Cardiff, Wales. Went to school, high school, there. In retrospect, one of the things I missed out then was learning or being taught Welsh.
- Skip B-Trees

Ittai Abraham (The Institute of Computer Science, The Hebrew University of Jerusalem, [email protected]), James Aspnes (Department of Computer Science, Yale University, [email protected]), and Jian Yuan (Google, [email protected])

Abstract: We describe a new data structure, the Skip B-Tree, that combines the advantages of skip graphs with features of traditional B-trees. A skip B-tree provides efficient search, insertion, and deletion operations. The data structure is highly fault tolerant, even to adversarial failures, and allows for particularly simple repair mechanisms. Related resource keys are kept in blocks near each other, enabling efficient range queries. Using this data structure, we describe a new distributed peer-to-peer network, the Distributed Skip B-Tree. Given m data items stored in a system with n nodes, the network can perform a range search for r consecutive keys at a cost of only O(log_b m + r/b), where b = Θ(m/n). In addition, our distributed skip B-tree search network has provable polylogarithmic costs for all its other basic operations, such as insert, delete, and node join. To the best of our knowledge, all previous distributed search networks either provide a range search operation whose cost is worse than ours or may require a linear cost for some basic operation like insert, delete, or node join. (Supported in part by NSF grants CCR-0098078, CNS-0305258, and CNS-0435201.)

Introduction: Peer-to-peer systems provide a decentralized way to share resources among machines. An ideal peer-to-peer network should have such properties as decentralization, scalability, fault-tolerance, self-stabilization, load-balancing, dynamic addition and deletion of nodes, efficient query searching, and exploiting spatial as well as temporal locality in searches.
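The paper's data structure is distributed and built on skip graphs; as a single-machine toy (our own sketch with invented names, not the authors' construction), the snippet below shows only where the r/b term in the range-search cost comes from: keys sit in sorted blocks of size roughly b, so one lookup finds the first relevant block and the answer then spans about r/b consecutive blocks.

```python
import bisect

class BlockedIndex:
    """Toy single-machine stand-in for the Skip B-Tree's blocking idea:
    keys are kept sorted in blocks of size ~b, so a range query touches
    one search path plus about r/b consecutive blocks."""

    def __init__(self, keys, b):
        keys = sorted(keys)
        self.b = b
        # Split the sorted keys into consecutive blocks of size b.
        self.blocks = [keys[i:i + b] for i in range(0, len(keys), b)]
        # One separator (the max key) per block, used to route searches.
        self.maxes = [blk[-1] for blk in self.blocks]

    def range_search(self, lo, hi):
        """Return all keys in [lo, hi]: one block lookup, then a scan
        of the O(r/b) blocks that overlap the range."""
        out = []
        i = bisect.bisect_left(self.maxes, lo)   # first candidate block
        while i < len(self.blocks) and self.blocks[i][0] <= hi:
            out.extend(k for k in self.blocks[i] if lo <= k <= hi)
            i += 1
        return out

idx = BlockedIndex(range(0, 1000, 3), b=16)
print(idx.range_search(100, 160))
```

In the distributed setting each consecutive-block hop becomes a message to a nearby node, and, roughly speaking, the skip-graph routing contributes the O(log_b m) search term.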
- Deterministic and Efficient Minimal Perfect Hashing Schemes

Leandro M. Zatesko and Jair Donadelli

Abstract: This paper presents deterministic versions of the hashing schemes of Botelho, Kohayakawa and Ziviani (2005) and Botelho, Pagh and Ziviani (2007), and also proves a statement left as an open problem in the former work, related to the correctness proof and to the complexity analysis of their scheme. Our deterministic variants have been implemented and executed over datasets with up to 25,000,000 keys, and the results are equivalent between the deterministic and the original randomized algorithms.

Introduction: A minimal perfect hashing scheme, as defined in [1, 2, 3], is an algorithm that, given a set S with n keys from a universe U, constructs a hash function h: U → {0, ..., n−1} which maps S without collision to {0, ..., n−1}. We are interested only in hashing schemes whose outputs are hash functions with O(1) lookup time. For our purposes, every key x is assumed to be a chain of at most L symbols taken from a finite alphabet Σ, for a fixed constant L.
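As a definition-level toy, not the hypergraph-based construction of Botelho et al. that the paper derandomizes: for a tiny key set one can deterministically enumerate seeds until a seeded hash maps the keys bijectively onto {0, ..., n−1}, which mirrors, in a crude way, the move from random choices to deterministic ones. All names below are ours.

```python
import hashlib

def h(key, seed):
    """Seeded hash: the first 8 bytes of SHA-256 of 'seed:key'."""
    data = f"{seed}:{key}".encode()
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

def find_mphf_seed(keys, max_seed=1_000_000):
    """Try seeds 0, 1, 2, ... until h(., seed) mod n is collision-free,
    i.e. maps the n keys bijectively onto {0, ..., n-1}."""
    n = len(keys)
    for seed in range(max_seed):
        if len({h(k, seed) % n for k in keys}) == n:
            return seed
    raise RuntimeError("no perfect seed found in range")

keys = ["ann", "bob", "eve", "dan"]
s = find_mphf_seed(keys)
print(s, {k: h(k, s) % len(keys) for k in keys})  # a bijection onto {0,1,2,3}
```

The expected number of seeds tried grows like n^n/n! ≈ e^n, which is exactly why practical schemes decompose the problem via random hypergraphs rather than searching for a single collision-free function.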
- The Roots of Software Engineering

Michael S. Mahoney, Princeton University (CWI Quarterly 3,4 (1990), 325-334)

At the International Conference on the History of Computing held in Los Alamos in 1976, R.W. Hamming placed his proposed agenda in the title of his paper: "We Would Know What They Thought When They Did It." He pleaded for a history of computing that pursued the contextual development of ideas, rather than merely listing names, dates, and places of "firsts". Moreover, he exhorted historians to go beyond the documents to "informed speculation" about the results of undocumented practice. What people actually did and what they thought they were doing may well not be accurately reflected in what they wrote and what they said they were thinking. His own experience had taught him that. Historians of science recognize in Hamming's point what they learned from Thomas Kuhn's Structure of Scientific Revolutions some time ago, namely that the practice of science and the literature of science do not necessarily coincide. Paradigms (or, if you prefer with Kuhn, disciplinary matrices) direct not so much what scientists say as what they do. Hence, to determine the paradigms of past science, historians must watch scientists at work practicing their science. We have to reconstruct what they thought from the evidence of what they did, and that work of reconstruction in the history of science has often involved a certain amount of speculation informed by historians' own experience of science. That is all the more the case in the history of technology, where up to the present century the inventor and engineer have, as Derek Price once put it, "thought with their fingertips", leaving the record of their thinking in the artefacts they have designed rather than in texts they have written.
- Grad Cohort 2015

2015 Grad Cohort Workshop, San Francisco, April 10-11, 2015, Hyatt Regency San Francisco (www.cra-w.org, @CRAWomen)

Dear Grad Cohort Participant,

We welcome you to the 2015 CRA-Women Graduate Cohort Workshop! The next two days are filled with sessions where 25 senior computing researchers and professionals will be sharing their strategies and experiences to help increase your graduate school and career success. There are also plenty of opportunities to meet and network with these successful senior women as well as graduate students from other universities. We hope that you will take the utmost advantage of this unique experience by actively participating in discussions, developing peer networks, and building mentoring relationships.

The 2015 CRA-Women Graduate Cohort Workshop is made possible through generous contributions by Microsoft Research, Computing Research Association, National Science Foundation, Google, a private foundation, Alfred P. Sloan Foundation, Two Sigma, Intel Corporation, ACM SIGARCH, ACM SIGMICRO, ACM SIGGRAPH, ACM SIGOPS, ACM SIGPLAN, ACM SIGSOFT, Yahoo! Labs, Facebook, IBM, and in some cases department funds from participating universities/institutions. Please join us in thanking them for their kind support.

We hope that you take home many nuggets and connections from this workshop to help you in your journey to make an impact in the world through computing. Be ready to be inspired, learn, and meet many interesting technical women.

Sincerely,
Lori Clarke, Sandhya Dwarkadas, and Ayanna Howard, Co-Chairs
- The CNN Problem and Other k-Server Variants

Elias Koutsoupias (Department of Informatics, University of Athens, Athens, Greece, and Computer Science Department, University of California, Los Angeles, USA) and David Scot Taylor (Department of Computer Science, San José State University, San José, California, USA)

Abstract: We study several interesting variants of the k-server problem. In the CNN problem, one server services requests in the Euclidean plane. The difference from the k-server problem is that the server does not have to move to a request; it has only to move to a point that lies on the same horizontal or vertical line as the request. This, for example, models the problem faced by a crew of a certain news network trying to shoot scenes on the streets of Manhattan from a distance; for any event at an intersection, the crew has only to be on a matching street or avenue. The CNN problem contains as special cases two important problems: the bridge problem, also known as the cow-path problem, and the weighted 2-server problem, in which the 2 servers may have different speeds. We show that any deterministic online algorithm has competitive ratio at least 6 + √17. We also show that some successful algorithms for the k-server problem fail to be competitive. In particular, no memoryless randomized algorithm can be competitive. We also consider another variant of the k-server problem, in which servers can move simultaneously, and we wish to minimize the time spent waiting for service. This is equivalent to the regular k-server problem under the L∞ norm for movement costs.
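The bridge (cow-path) problem named in the abstract has a classic deterministic doubling strategy that is easy to sketch and 9-competitive; the code below is our illustration of that folklore strategy, not an algorithm from this paper.

```python
def cow_path(target):
    """Doubling strategy for the cow-path (bridge) problem: from the
    origin, walk distance 2^i away on alternating sides and return,
    for i = 0, 1, 2, ...  Total distance walked is at most 9*|target|
    for targets at distance >= 1."""
    traveled, i = 0.0, 0
    while True:
        reach = 2 ** i if i % 2 == 0 else -(2 ** i)   # alternate sides
        found = (0 < target <= reach) if reach > 0 else (reach <= target < 0)
        if found:
            return traveled + abs(target)   # final one-way leg
        traveled += 2 * abs(reach)          # out and back to the origin
        i += 1

for d in (1, 3, 100, -7):
    print(d, cow_path(d) / abs(d))          # competitive ratio, at most 9
```

The worst case, where the ratio approaches 9, is a target sitting just beyond the last explored distance on the side the searcher has just left.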
- A Purdue University Course in the History of Computing

Saul Rosen, Department of Computer Sciences, Purdue University, CSD-TR-91-023, March 1991. Rosen, Saul, "A Purdue University Course in the History of Computing" (1991). Department of Computer Science Technical Reports, Paper 872. https://docs.lib.purdue.edu/cstech/872

Abstract: University courses in the history of computing are still quite rare. This paper is a discussion of a one-semester course that I have taught during the past few years in the Department of Computer Sciences at Purdue University. The amount of material that could be included in such a course is overwhelming, and the literature in the subject has increased at a great rate in the past decade. A syllabus and list of major references are presented here as they are presented to the students. The course develops a few major themes, several of which are discussed briefly in the paper. The course has been an interesting and stimulating experience for me, and for some of the students who took the course. I do not foresee a rapid expansion in the number of universities that will offer courses in the history of computing.
- The Price of Privacy for Low-Rank Factorization

Jalaj Upadhyay, Johns Hopkins University, Baltimore, MD 21201, USA. [email protected]

Abstract: In this paper, we study what price one has to pay to release a differentially private low-rank factorization of a matrix. We consider various settings that are close to the real-world applications of low-rank factorization: (i) the manner in which matrices are updated (row by row or in an arbitrary manner), (ii) whether matrices are distributed or not, and (iii) how the output is produced (once at the end of all updates, also known as one-shot algorithms, or continually). Even though these settings are well studied without privacy, surprisingly, there are no private algorithms for them (except when a matrix is updated row by row). We present the first set of differentially private algorithms for all these settings. When the private matrix is updated in an arbitrary manner, our algorithms promise differential privacy with respect to two stronger privacy guarantees than previously studied, use space and time comparable to the non-private algorithms, and achieve optimal accuracy. To complement our positive results, we also prove that the space required by our algorithms is optimal up to logarithmic factors. When data matrices are distributed over multiple servers, we give a non-interactive differentially private algorithm with communication cost independent of the dimension. In short, we give algorithms that incur optimal cost across all parameters of interest. We also perform experiments to verify that all our algorithms perform well in practice and outperform the best known algorithms for a large range of parameters.
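The abstract stays at this level of generality, so the snippet below shows only the simplest baseline (our sketch, not Upadhyay's algorithms): perturb the input with Gaussian noise calibrated to unit L2 sensitivity, then factor the noisy matrix. It is a one-shot algorithm for the centralized setting, and its accuracy is far from the paper's optimal bounds.

```python
import numpy as np

def dp_low_rank(A, k, epsilon, delta, rng):
    """Gaussian input perturbation: if neighboring matrices differ by at
    most 1 in a single entry (L2 sensitivity 1), adding N(0, sigma^2)
    noise per entry gives (epsilon, delta)-differential privacy."""
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noisy = A + rng.normal(0.0, sigma, size=A.shape)
    U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k, :]      # rank-k factors L, R

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50)) @ rng.standard_normal((50, 100))
L, R = dp_low_rank(A, k=5, epsilon=1.0, delta=1e-6, rng=rng)
print(np.linalg.norm(A - L @ R) / np.linalg.norm(A))  # relative error
```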
- TCM Report, Spring

[The Computer Museum report: scanned front matter with several columns interleaved. Recoverable contents entries: The Director's Letter, 1; Howard Hathaway Aiken: The Life of a Computer Pioneer, 3; A Conversation with the Hackers, 13; Robert Everett, 16. The remaining columns list the Board of Directors (John William Poduska, Sr., Chairman, Apollo Computer, Inc.; C. Gordon Bell, Encore Computer Corporation; Erich Bloch, National Science Foundation; Harvey D. Cragon, University of Texas, Austin; David Donaldson, Ropes and Gray; and others), the contributing members, and the corporate members (benefactor: $10,000 or more).]
- Cuckoo Hashing

Rasmus Pagh (BRICS, Department of Computer Science, Aarhus University, Ny Munkegade Bldg. 540, DK-8000 Århus C, Denmark, [email protected]) and Flemming Friche Rodler (ON-AIR A/S, Digtervejen 9, 9200 Aalborg SV, Denmark, [email protected])

Abstract: We present a simple dictionary with worst-case constant lookup time, equaling the theoretical performance of the classic dynamic perfect hashing scheme of Dietzfelbinger et al. (Dynamic perfect hashing: Upper and lower bounds. SIAM J. Comput., 23(4):738-761, 1994). The space usage is similar to that of binary search trees, i.e., three words per key on average. Besides being conceptually much simpler than previous dynamic dictionaries with worst-case constant lookup time, our data structure is interesting in that it does not use perfect hashing, but rather a variant of open addressing where keys can be moved back in their probe sequences. An implementation inspired by our algorithm, but using weaker hash functions, is found to be quite practical. It is competitive with the best known dictionaries having an average-case (but no nontrivial worst-case) guarantee.

Key words: data structures, dictionaries, information retrieval, searching, hashing, experiments. (Pagh was partially supported by the Future and Emerging Technologies programme of the EU under contract number IST-1999-14186 (ALCOM-FT); this work was initiated while visiting Stanford University. BRICS: Basic Research in Computer Science (www.brics.dk), funded by the Danish National Research Foundation. Rodler's work was done while at Aarhus University.)

Introduction: The dictionary data structure is ubiquitous in computer science.
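The scheme the abstract describes fits in a few lines: two tables, one hash function each, and insertions that displace whatever occupies the target slot. The sketch below is a minimal illustration (our simplification, seeding Python's built-in hash rather than using the paper's hash families):

```python
import random

class CuckooHash:
    """Minimal cuckoo hash table: two arrays, two hash functions, and
    inserts that evict ('kick out') the occupant to its alternate slot,
    rehashing everything if an eviction chain runs too long."""

    def __init__(self, capacity=11):
        self.cap = capacity
        self.tables = [[None] * capacity, [None] * capacity]
        self.seeds = [random.randrange(2**30), random.randrange(2**30)]

    def _slot(self, key, i):
        return hash((self.seeds[i], key)) % self.cap

    def lookup(self, key):
        """Worst-case constant time: exactly two probes."""
        return any(self.tables[i][self._slot(key, i)] == key for i in (0, 1))

    def insert(self, key):
        if self.lookup(key):
            return
        for _ in range(32):                  # bound the eviction chain
            for i in (0, 1):
                s = self._slot(key, i)
                key, self.tables[i][s] = self.tables[i][s], key  # swap in
                if key is None:              # the slot was free: done
                    return
        self._rehash(key)                    # chain too long: rebuild

    def _rehash(self, pending):
        # Grow, pick fresh seeds, and reinsert everything (terminates
        # with high probability; fine for a toy).
        keys = [k for t in self.tables for k in t if k is not None]
        self.__init__(self.cap * 2 + 1)
        for k in keys + [pending]:
            self.insert(k)

t = CuckooHash()
for k in range(8):
    t.insert(k)
print(all(t.lookup(k) for k in range(8)), t.lookup(99))
```

Lookups probe exactly two slots, which is the worst-case constant lookup time the abstract claims; the price is an occasional rebuild when an eviction chain fails to terminate.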
- The 1968 NATO Software Engineering Conference

Software Engineering: Report on a conference sponsored by the NATO Science Committee, Garmisch, Germany, 7th to 11th October 1968. Chairman: Professor Dr. F. L. Bauer. Co-chairmen: Professor L. Bolliet, Dr. H. J. Helms. Editors: Peter Naur and Brian Randell. January 1969.

The present report is available from: Scientific Affairs Division, NATO, Brussels 39, Belgium.

Note for the current edition: The version of this report that you are reading was prepared by scanning the original edition, conversion to text through OCR, and then reformatting. Every effort has been made to do this as accurately as possible. However, it is almost certain that some errors have crept in despite best efforts. One of the problems was that the OCR software used kept trying to convert the original British spellings of words like 'realise' to the American spelling 'realize' and made other stupid mistakes. Whenever the OCR program was unsure of a reading, it called it to the attention of the operator, but there were a number of occasions in which it was sure, but wrong. Not all of these instances are guaranteed to have been caught. Although the editor tried to conform to the original presentation, certain changes were necessary, such as pagination. In order that the original Table of Contents and Indices would not have to be recalculated, an artifice was used. That is, the original page breaks are indicated in the text thusly: "49" indicates that this is the point at which page 49 began in the original edition. If two such indicators appear together, this shows that there was a blank page in the original.