Randomized Algorithms for Scientific Computing (RASC)


Report from the DOE ASCR Workshop held December 2–3, 2020, and January 6–7, 2021. Prepared for the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research Program. (arXiv:2104.11079v1 [cs.AI], 19 Apr 2021)

RASC Report Authors

Aydın Buluç (co-chair), Lawrence Berkeley National Laboratory
Tamara G. Kolda (co-chair), Sandia National Laboratories
Stefan M. Wild (co-chair), Argonne National Laboratory
Mihai Anitescu, Argonne National Laboratory
Anthony DeGennaro, Brookhaven National Laboratory
John Jakeman, Sandia National Laboratories
Chandrika Kamath, Lawrence Livermore National Laboratory
Ramakrishnan (Ramki) Kannan, Oak Ridge National Laboratory
Miles E. Lopes, University of California, Davis
Per-Gunnar Martinsson, University of Texas, Austin
Kary Myers, Los Alamos National Laboratory
Jelani Nelson, University of California, Berkeley
Juan M. Restrepo, Oak Ridge National Laboratory
C. Seshadhri, University of California, Santa Cruz
Draguna Vrabie, Pacific Northwest National Laboratory
Brendt Wohlberg, Los Alamos National Laboratory
Stephen J. Wright, University of Wisconsin, Madison
Chao Yang, Lawrence Berkeley National Laboratory
Peter Zwart, Lawrence Berkeley National Laboratory

DOE/ASCR Point of Contact: Steven Lee

Additional Report Authors

This report was a community effort, collecting inputs in a variety of formats. We have taken many of the ideas in this report from the thesis statements collected from registrants of "Part 2" of the workshop. We further solicited input from community members with specific expertise. Much of the input came from discussions at the workshop, so we must credit the entire set of participants listed in full in Appendix B. Although we cannot do justice to the origination of every idea contained herein, we highlight those whose contributions were explicitly solicited or whose submissions were used nearly verbatim.
Solicited and thesis statement inputs from:

Rick Archibald, Oak Ridge National Laboratory
David Barajas-Solano, Pacific Northwest National Laboratory
Andrew Barker, Lawrence Livermore National Laboratory
Charles Bouman, Purdue University
Moses Charikar, Stanford University
Jong Choi, Oak Ridge National Laboratory
Aurora Clark, Washington State University
Victor DeCaria, Oak Ridge National Laboratory
Zichao Wendy Di, Argonne National Laboratory
Jack Dongarra, University of Tennessee, Knoxville
Jed Duersch, Sandia National Laboratories
Ethan Epperly, California Institute of Technology
Benjamin Erichson, University of California, Berkeley
Maryam Fazel, University of Washington, Seattle
Andrew Glaws, National Renewable Energy Laboratory
Carlo Graziani, Argonne National Laboratory
Cory Hauck, Oak Ridge National Laboratory
Paul Hovland, Argonne National Laboratory
William Kay, Oak Ridge National Laboratory
Nathan Lemons, Los Alamos National Laboratory
Ying Wai Li, Los Alamos National Laboratory
Dmitriy Morozov, Lawrence Berkeley National Laboratory
Cynthia Phillips, Sandia National Laboratories
Eric Phipps, Sandia National Laboratories
Benjamin Priest, Lawrence Livermore National Laboratory
Ruslan Shaydulin, Argonne National Laboratory
Katarzyna Swirydowicz, Pacific Northwest National Laboratory
Ray Tuminaro, Sandia National Laboratories

Contents

Executive Summary
1 Introduction
2 Application Needs and Drivers
  2.1 Massive Data from Experiments, Observations, and Simulations
  2.2 Forward Models
  2.3 Inverse Problems
  2.4 Applications with Discrete Structure
  2.5 Experimental Designs
  2.6 Software and Libraries for Scientific Computing
  2.7 Emerging Hardware
  2.8 Scientific Machine Learning
3 Current and Future Research Directions
  3.1 Opportunities for Random Sampling Schemes
  3.2 Sketching for Linear and Nonlinear Problems
  3.3 Discrete Algorithms
  3.4 Streaming
  3.5 Complexity
  3.6 Verification and Validation
  3.7 Software Abstractions
4 Themes and Recommendations
  4.1 Themes
  4.2 Recommendations
References
Appendices
  A Workshop Information
  B Workshop Participants
  C Acknowledgments

Executive Summary

Randomized algorithms have propelled advances in artificial intelligence and represent a foundational research area in advancing AI for Science. Future advancements in DOE Office of Science priority areas such as climate science, astrophysics, fusion, advanced materials, combustion, and quantum computing all require randomized algorithms for surmounting challenges of complexity, robustness, and scalability.

Advances in data collection and numerical simulation have changed the dynamics of scientific research and motivate the need for randomized algorithms. For instance, advances in imaging technologies such as X-ray ptychography, electron microscopy, electron energy loss spectroscopy, and adaptive optics lattice light-sheet microscopy collect terabytes of hyperspectral imaging and scattering data at breakneck speed, enabled by state-of-the-art detectors. The data collection is exceptionally fast compared with its analysis. Likewise, advances in high-performance architectures have made exascale computing a reality and changed the economics of scientific computing in the process. Floating-point operations that create data are essentially free in comparison with data movement.
Thus far, most approaches have focused on creating faster hardware. Ironically, this faster hardware has exacerbated the problem by making data still easier to create. Under such an onslaught, scientists often resort to heuristic deterministic sampling schemes (e.g., low-precision arithmetic, sampling every nth element), sacrificing potentially valuable accuracy. Dramatically better results can be achieved via randomized algorithms, which reduce the data size as much as or more than naive deterministic subsampling while retaining the high accuracy of computing on the full data set. By randomized algorithms we mean algorithms that employ some form of randomness in internal algorithmic decisions to accelerate time to solution, increase scalability, or improve reliability. Examples include matrix sketching for solving large-scale least-squares problems and stochastic gradient descent for training machine learning models. We are not recommending heuristic methods but rather randomized algorithms that have certificates of correctness and probabilistic guarantees of optimality and near-optimality. Such approaches can be useful beyond acceleration, for example, in understanding how to avoid measure-zero worst-case scenarios that plague methods such as QR matrix factorization.

Figure 1: Randomized sketching can dramatically reduce the size of massive data sets.

Randomized algorithms have a long history. The Markov chain Monte Carlo method was central to the earliest computing efforts of DOE's precursor (the Atomic Energy Commission). By the 1990s, randomized algorithms were deployed in settings such as randomized routing in Internet protocols, the well-known quicksort algorithm, and polynomial factoring for cryptography. Starting in the mid-1990s, random forests and other ensemble classifiers showed how randomization in machine learning improves the bias-variance tradeoff. In the early 2000s, compressed sensing, based on random matrices and sketching of signals, dramatically changed signal processing. The 2010s saw a flurry of novel randomization results in linear algebra and optimization, accelerated by the pursuit of problems of growing scale in AI and bringing non-asymptotic guarantees to many problems.

The accelerating evolution of randomized algorithms and the unrelenting tsunami of data from experiments, observations, and simulations have combined to motivate research in randomized algorithms focused on problems specific to DOE. The new results in artificial intelligence and elsewhere are just the tip of the iceberg in foundational research, not to mention specialization of the methods for distinctive applications. Deploying randomized algorithms to advance AI for Science within DOE requires new skill sets, new analysis, and new software, expanding with each new application. To that end, the DOE convened a workshop to discuss such issues.

This report summarizes the outcomes of that workshop, "Randomized Algorithms for Scientific Computing (RASC)," held virtually across four days in December 2020 and January 2021. Participants were invited to provide input, which formed the basis of the workshop discussions as well as this report, compiled by the workshop writing committee. The report contains a summary of drivers and motivation for pursuing this line of inquiry, a discussion of possible research directions (summarized in Table 1), and themes and recommendations summarized in the next two pages.
Table 1: A sampling of possible research directions in randomized algorithms for scientific computing

• Analysis of random algorithms for production conditions
• Randomized algorithms cost and error models for emerging hardware
• Incorporation of sketching for solving subproblems
• Specialized randomization for structured problems
• Overcoming of parallel computational bottlenecks with probabilistic estimates
• Randomized optimization for DOE applications
• Computationally efficient sampling
• Stratified and topologically aware sampling
• Scientifically informed sampling
• Reproducibility
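To make the sketching idea from Figure 1 and the executive summary concrete, the following is a minimal, hypothetical sketch-and-solve example for an overdetermined least-squares problem, written in Python with NumPy. It is not drawn from the report itself; the problem sizes, the Gaussian sketching matrix S, and all variable names are assumptions chosen purely for illustration.

```python
# Hypothetical illustration (not from the RASC report): sketch-and-solve for
# the overdetermined least-squares problem min_x ||A x - b||_2.
# All sizes and names (m, n, k, S) are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(0)

m, n = 20_000, 50                    # tall, skinny system: many rows, few unknowns
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

# Gaussian sketching matrix; a modest multiple of n rows typically suffices.
k = 8 * n
S = rng.standard_normal((k, m)) / np.sqrt(k)

# Solve the much smaller k-by-n sketched problem min_x ||S A x - S b||_2.
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

# Compare with the solution computed from the full data set.
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
rel_err = np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full)
print(f"relative difference from full-data solution: {rel_err:.2e}")
```

At scale, a dense Gaussian sketch is itself costly to apply; structured sketches (e.g., subsampled randomized trigonometric transforms or count sketches) are typically used instead so that forming the sketched system costs little more than a pass over the data. The dense sketch above is chosen only for brevity.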