
Total pages: 16

File type: PDF, size: 1020 KB

Tensors, sparse problems and conditional hardness

by Elena-Madalina Persu

A.B., Harvard University (2013)
S.M., Massachusetts Institute of Technology (2015)

Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology, September 2018.

© Massachusetts Institute of Technology 2018. All rights reserved.

Author: [signature redacted]
Department of Electrical Engineering and Computer Science, August 24, 2018

Certified by: [signature redacted]
Ankur Moitra, Rockwell International Associate Professor of Mathematics, Thesis Supervisor

Accepted by: [signature redacted]
Leslie A. Kolodziejski, Professor of Electrical Engineering and Computer Science, Chair, Department Committee on Graduate Students

Tensors, sparse problems and conditional hardness
by Elena-Madalina Persu

Submitted to the Department of Electrical Engineering and Computer Science on August 24, 2018, in partial fulfillment of the requirements for the degree of Doctor of Philosophy.

Abstract

In this thesis we study the interplay between theoretical computer science and machine learning in three different directions.

First, we make a connection between two ubiquitous sparse problems: Sparse Principal Component Analysis (SPCA) and Sparse Linear Regression (SLR). We show how to efficiently transform a blackbox solver for SLR into an algorithm for SPCA. Assuming the SLR solver satisfies prediction error guarantees achieved by existing efficient algorithms such as those based on the Lasso, we show that the SPCA algorithm derived from it achieves state-of-the-art performance, matching the guarantees for testing and for support recovery under the single-spiked covariance model obtained by the current best polynomial-time algorithms.

Second, we push forward the study of linear-algebraic properties of tensors by giving a tensor rank detection gadget for tensors in the smoothed model. Tensors have had a tremendous impact and have been applied extensively over the past few years to a wide range of machine learning problems, for example in developing estimators for latent variable models, in independent component analysis, and in blind source separation. Unfortunately, their theoretical properties are still not well understood; we take a step in that direction.

Third, we show that many recent conditional lower bounds for a wide variety of problems in combinatorial pattern matching, graph algorithms, data structures and machine learning, including gradient computation in average-case neural networks, hold under significantly weaker assumptions. This highlights that intuition from theoretical computer science can not only help us develop faster practical algorithms but also give us a better understanding of why faster algorithms may not exist.

Thesis Supervisor: Ankur Moitra
Title: Rockwell International Associate Professor of Mathematics
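As a rough illustration of the first contribution, the sketch below shows how a blackbox SLR solver can drive an SPCA test under the single-spiked covariance model. It is a minimal sketch, not the thesis's actual algorithm or its guarantees: it assumes scikit-learn's Lasso as the SLR blackbox, a simple sum of the k largest per-coordinate statistics Q_j as the aggregate test statistic, and illustrative parameter values; the precise statistic Q, its thresholds, and the analysis under H_0 and H_1 are developed in Chapter 2.

```python
# Illustrative sketch only: an SPCA test built from a blackbox SLR solver.
# Assumptions (not from the thesis): scikit-learn Lasso as the SLR blackbox,
# sum-of-top-k aggregation of the per-coordinate statistics Q_j.
import numpy as np
from sklearn.linear_model import Lasso


def spca_test_statistic(X, k, lasso_alpha=0.1):
    """Aggregate per-coordinate statistics Q_j computed via an SLR blackbox."""
    n, d = X.shape
    Q = np.zeros(d)
    for j in range(d):
        y = X[:, j]
        Z = np.delete(X, j, axis=1)                 # remaining d-1 coordinates
        slr = Lasso(alpha=lasso_alpha).fit(Z, y)    # blackbox sparse linear regression
        residual = y - slr.predict(Z)
        # Variance of x_j minus its prediction error: near zero under H_0,
        # inflated on the spike's support under H_1.
        Q[j] = np.var(y) - np.mean(residual ** 2)
    return np.sort(Q)[-k:].sum()                    # sum of the k largest Q_j


# Usage sketch: single-spiked covariance model x = sqrt(theta) * u * v + noise,
# with v a k-sparse unit vector and u standard normal.
rng = np.random.default_rng(0)
n, d, k, theta = 200, 100, 5, 3.0
v = np.zeros(d)
v[:k] = 1.0 / np.sqrt(k)
X_null = rng.standard_normal((n, d))                                   # H_0: isotropic noise
X_spiked = X_null + np.sqrt(theta) * rng.standard_normal((n, 1)) * v   # H_1: spiked covariance
print(spca_test_statistic(X_null, k), spca_test_statistic(X_spiked, k))
```

In this toy run the statistic comes out markedly larger on the spiked data than on the null data, which is the kind of separation Chapter 2 quantifies.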
Acknowledgements

I will never be able to thank my parents, Camelia and Ion, enough for all their love and dedication throughout the years. Growing up, they always believed in me, taught me how to persevere and placed a great emphasis on education. I am very grateful for their constant love, support, and encouragement. I dedicate this thesis to them.

I have been extremely fortunate to have Ankur Moitra as my advisor. As a researcher, he inspires me with his deep intellectual curiosity and a level of intuition that lets him go to the very core of difficult research problems. He savors and enjoys the process of doing research, both the challenges and the small discoveries along the way. It is Ankur's qualities as a person that I am most grateful for: his patience, compassion, and generosity.

I would like to thank Guy Bresler and Costis Daskalakis for serving on my dissertation committee. Special thanks to Madhu Sudan and Prateek Jain for expanding my research horizons and hosting me at Microsoft Research over two summers. There are certain moments that completely change one's life path; the first class of Computational Learning Theory was one of those for me. Hence, I would like to thank my undergraduate advisor, Leslie Valiant, for introducing me to the world of theoretical computer science.

I am also very grateful for all my collaborators; I had a lot to learn from them: Arturs Backurs, Sam Park, Ankur Moitra, Piotr Indyk, Guy Bresler, Ryan Williams, Virginia Williams, Cristopher Musco, Cameron Musco, Sam Elder and Michael Cohen. I was very lucky to have the chance to work with Michael; one can only wonder what his mind could have achieved. I also want to thank the wonderful lab assistants, Debbie and Patrice, for always having a smile on and letting me borrow their keys the many times I got locked out of my office.

Finally, thank you to all my friends during graduate school - from the MIT theory group: especially Katerina, Manolis and the Greeks, Maryam, Ilya, Quanquan, Arturs, Sam, Prashant, Akshay, Itay, Nicole, Aloni, Daniel, Luke, Adam, Rio, Sepideh, Pritish, Ludwig, Jerry, Gautam, Henry; and outside: Horia, Sergio, Andreea, Julia, Patricia, Ioana. Your friendship has meant a lot to me and my MIT experience would not have been the same without you. Thank you also to my Harvard friends who made my undergraduate years some of the best of my life. Thank you especially to Min and Currierism, Fiona, Lily, Robert, Katrina, Andrei, Miriam, Shiya, Gye-Hyun and Jao-ke.

Contents

List of Symbols
1 Introduction
2 Sparse PCA from Sparse Linear Regression
  2.1 Introduction
    2.1.1 Our contributions
    2.1.2 Previous algorithms
  2.2 Preliminaries
    2.2.1 Problem formulation for SPCA
    2.2.2 Problem formulation for SLR
    2.2.3 The linear model
  2.3 Algorithms and main results
    2.3.1 Intuition of test statistic
    2.3.2 Algorithms
  2.4 Analysis
    2.4.1 Analysis of Q_j under H_1
    2.4.2 Analysis of Q_j under H_0
    2.4.3 Proof of Theorem 5
    2.4.4 Proof of Theorem 6
    2.4.5 Discussion
  2.5 Experiments
    2.5.1 Support recovery
    2.5.2 Hypothesis testing
  2.6 Conclusion
3 Tensor rank under the smoothed model
  3.1 Introduction
    3.1.1 Our results
    3.1.2 Our approach
  3.2 Preliminaries and notations
  3.3 Young flattenings
    3.3.1 Young flattenings in the smoothed model
  3.4 Proof of Theorem 14
  3.5 Future directions
  3.6 Linear algebra lemmas
4 Stronger Fine Grained Hardness via #SETH
  4.1 Preliminaries
  4.2 Pattern matching under edit distance
    4.2.1 Preliminaries
    4.2.2 Reduction
  4.3 Machine learning problems
    4.3.1 Gradient computation in average case neural networks
    4.3.2 Reduction
    4.3.3 Hardness results
  4.4 Wiener index
  4.5 Dynamic graph problems
    4.5.1 Reductions framework
  4.6 Counting Matching Triangles
  4.7 Average case hardness for the Counting Orthogonal Vectors Problem
    4.7.1 Reduction
A Vector Gadgets
  A.1 Vector Gadgets
B Useful lemmas
  B.1 Linear minimum mean-square-error estimation
  B.2 Calculations for linear model from Section 2.2.3
  B.3 Properties of design matrix X
  B.4 Tail inequalities - Chi-squared
Bibliography

List of Figures

2-1 Performance of diagonal thresholding (DT), covariance thresholding (CT), and Q for support recovery at n = d = 625, 1250, varying values of k, and θ = 4.
2-2 Performance of diagonal thresholding (DT), MDP, and Q for hypothesis testing at n = 200, d = 500, k = 30, θ = 4 (left and center). T_0 denotes the statistic T under H_0, and similarly T_1 under H_1. The effect of rescaling the covariance matrix to make the variances indistinguishable is demonstrated (right).
3-1 CANDECOMP/PARAFAC tensor decomposition of a third-order tensor.

List of Symbols

Σ: covariance matrix
Σ̂: sample covariance matrix
E[·]: expectation over the appropriate sample space
N(μ, Σ): Gaussian distribution with mean vector μ and covariance matrix Σ
I_n: n × n identity matrix
diag{d_1, ..., d_n}: diagonal matrix with diagonal entries d_i
≲: inequality up to an absolute constant
S^n: n-dimensional unit sphere in R^{n+1}
B_0(k): the set of k-sparse vectors in R^d
[n]: {1, ..., n}
⊗: tensor product
∧: exterior product
w.p.: "with probability"
w.h.p.: "with high probability"

Chapter 1

Introduction

In the past couple of decades machine