Know Your Limits


Reviewed in this issue: Limits to Parallel Computation: P-Completeness Theory, by Raymond Greenlaw, H. James Hoover, Walter L. Ruzzo. (Oxford University Press, 1995, ISBN-10: 0195085914, ISBN-13: 978-0195085914.)

Igor L. Markov
University of Michigan

Digital Object Identifier 10.1109/MDAT.2012.2237133
Date of current version: 11 April 2013.

IT IS UNUSUAL to review a book published 18 years ago. However, some books are ahead of their time, and some prospective readers may have gotten behind the curve. To this end, the development of commercial parallel software is clearly lagging behind initial hopes and promises, perhaps because known limits to parallel computation have been overlooked.

The history of humankind includes several striking technological scenarios that seemed feasible and admitted promising demonstrations, but could not be applied in practice. One example was perpetual motion, defined as "motion that continues indefinitely without any external source of energy". The hope was to build a machine doing useful work without being resupplied with fuel. Records of perpetual motion trials date back to the seventeenth century. It took two centuries to formulate the laws of thermodynamics to show why perpetual motion in an isolated system is not possible. A second example is the mythical philosopher's stone that transmuted base metals into gold through chemical processes (in fact, published accounts with experimental validation were as respected as modern-day research publications). However, by the late nineteenth century we understood that chemical reactions do not alter chemical elements listed in periodic tables. Both stories show that fundamental limits were discovered, prohibiting initial scenarios. However, this is not how these stories end. Perpetual motion can be successfully emulated by tapping an abundant energy source while the system remains isolated for practical purposes, e.g., GPS navigation satellites use solar energy to power their continual transmissions. Another example is nuclear propulsion in ballistic missile submarines that remain submerged and isolated for years. Even the transmutation of cheap metals into gold has been demonstrated in particle accelerators, and platinum-group metals can be commercially extracted from spent nuclear fuel. Once scientists develop an understanding of fundamental limits, engineers circumvent these limits by reformulating the challenge or by other clever workarounds.

Today, the business value in many industries is fueled by computation, just like it was driven by steam engines during the industrial revolution and backed up by precious metals during the tumultuous Middle Ages. The need for faster computation leads to significant investments into computing hardware and software. Just like Chemistry and Physics were developed to study chemical reactions and energy conversion, Computer Science was developed in the last 60 years to study algorithms and computation. In particular, Complexity theory studies the limits of computation, as illustrated by the notion of NP-complete problems (a standard textbook is Michael Sipser's "Introduction to the Theory of Computation").

Current consensus is that these problems cannot be solved in worst-case polynomial time without major theoretical breakthroughs, and the knowledge accumulated in the field allows one to quickly evaluate and diagnose purported breakthroughs. Even the least-informed funding agencies would now recognize naïve attempts at solving NP-complete problems in polynomial time. On the other hand, the understanding of such limits guided applied algorithm development to identify and exploit useful features of problem instances. An example end-to-end discussion can be found in the DAC 1999 paper "Why is ATPG Easy?" by Prasad, Chong, and Keutzer. Moreover, for optimization problems, the notion of NP-hardness can sometimes be circumvented by approximating optimal solutions (typical for geometric tasks, such as the Travelling Salesman Problem). As a result, the software and hardware industries have been quite successful in circumventing computational complexity limits in applications ranging from formal verification to large-scale interconnect routing. And chess-playing computers go far beyond NP.

History does not repeat itself, but it often rhymes, as Mark Twain noted. The latest craze in software, parallel computing, has given us hope to turn silicon (predesigned processor cores) into computation without increasing clock speed and power dissipation per core. As top-of-the-line integrated circuits cost more than their weight in gold, the philosopher's stone pales in comparison to the value proposition of turning not base metals, but sand into something more expensive than gold. And we now see academics, instigated by U.S. funding agencies left unnamed (to protect the guilty!), claim fantastic parallel speed-ups that do not survive scrutiny.

Those who attended the panel on parallel Electronic Design Automation at ICCAD 2011 may recall that I questioned claims of algorithmic "superlinear" speed-up (more than k times when using k processors, for large k). If using k parallel threads of execution consistently improves single-thread runtime by more than a factor of k, then we could just simulate k threads by time-slicing a single thread, with a factor-of-k slowdown. This yields a better sequential algorithm. Thus, the original comparison was to suboptimal sequential algorithms (using k CPU caches can boost memory performance, but only by a constant factor, and not entirely due to parallel algorithms).
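The time-slicing argument can be checked mechanically. Below is a minimal Python sketch under idealized assumptions (zero scheduling overhead, threads that stay busy until they finish); the names worker and run_time_sliced and the sample numbers are illustrative choices of this sketch, not taken from the panel or the review.

    # Sketch: if k threads finish in less than 1/k of the sequential time,
    # round-robin emulation of those threads on ONE core beats the
    # sequential baseline, so the baseline was suboptimal.
    _DONE = object()

    def worker(units):
        """A logical thread that performs `units` units of work."""
        for _ in range(units):
            yield  # hand back control after each unit of work

    def run_time_sliced(threads):
        """Emulate k threads on one core; return total sequential steps."""
        live, steps = list(threads), 0
        while live:
            survivors = []
            for t in live:
                if next(t, _DONE) is not _DONE:  # one time slice
                    steps += 1
                    survivors.append(t)
            live = survivors
        return steps

    # Suppose a sequential baseline takes T_seq = 1000 work units, while a
    # claimed k = 8-thread run finishes in T_par = 100 units per thread:
    # a 10x, hence "superlinear", speed-up. Emulating the same 8 threads
    # on one core costs k * T_par = 800 < 1000 units.
    k, T_par, T_seq = 8, 100, 1000
    steps = run_time_sliced([worker(T_par) for _ in range(k)])
    assert steps == k * T_par and steps < T_seq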
Other signs that a claimed speed-up is bogus can be subtle and ad hoc. For fundamental problems, like Boolean SAT and circuit simulation, that have consistently defied parallelization efforts by sophisticated researchers, a spectacular speed-up (e.g., 220 times claimed at ICCAD 2011 for SAT) better have a convincing and unexpected explanation. Patrick Madden's ASPDAC 2011 paper illustrates how academics often oversimplify the challenge they are studying and ignore best known techniques in their empirical comparisons. David Bailey's SC 1992 paper "Misleading Performance Claims in the Supercomputing Field" and its DAC 2009 reprise suggest that this phenomenon is not new.

The article "Parallel Logic Simulation: Myth or Reality?" in the April 2012 issue of IEEE Computer offers a great exposition of the promise and the failure of parallel functional logic simulation (e.g., evaluating new circuit designs before silicon production). Many people find it obvious that Boolean circuit simulation should be easy to parallelize, and academic papers claim such results. But implementing this idea in successful commercial software has been a losing proposition for many years (leaving the market open to expensive hardware emulators developed by IBM, EVE, Cadence, Synplicity/Synopsys, and others). The authors of the IEEE Computer article dissect many failed attempts and the obstacles encountered. This is where careful observers may suspect fundamental limits.

Enter the book Limits to Parallel Computation: P-Completeness Theory by Greenlaw, Hoover, and Ruzzo. Just like NP-complete problems defy worst-case polynomial-time algorithms, P-complete problems defy significant speed-ups through parallel computation. The Preface says:

  This book is an introduction to the rapidly growing theory of P-completeness, the branch of complexity theory that focuses on identifying the "hardest" problems in the class P of problems solvable in polynomial time. P-complete problems are of interest because they all appear to lack highly parallel solutions. That is, algorithm designers have failed to find NC algorithms, feasible highly parallel solutions that take time polynomial in the logarithm of the problem size while using only a polynomial number of processors, for them. Consequently, the promise of parallel computation, namely that applying more processors to a problem can greatly speed its solution, appears to be broken by the entire class of P-complete problems.

Just like the well-known book "Computers and Intractability: A Guide to the Theory of NP-Completeness" by Garey and Johnson, this book consists of two parts: an introduction to the P-completeness theory, and a catalog of P-complete problems. It starts with an anecdote about a company that was forced by its competitors to look into parallel platforms and thus developed parallel sorting of n elements using n² processors in O(log n) time. This example is used to motivate key concepts, such as the simulation of a parallel processor by unrolled combinational circuits.
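To make the anecdote concrete: assuming the algorithm in question is the classic comparison-count sort, the usual textbook way to sort with n² processors in O(log n) time, the following single-threaded Python sketch simulates what the idealized parallel machine would do at each step. It is an illustration under that assumption, not material from the book.

    # Simulation of comparison-count ("rank") sort on an idealized CREW
    # PRAM: n*n processors compute all pairwise comparisons in one step,
    # each row of n bits is summed by a balanced tree of depth ~log2(n),
    # and one final parallel write places every element.

    def pram_rank_sort(a):
        n = len(a)
        # Step 1, O(1) parallel time on n^2 processors: processor (i, j)
        # computes one comparison bit; ties are broken by index so that
        # the ranks form a permutation of 0..n-1.
        less = [[(a[j], j) < (a[i], i) for j in range(n)] for i in range(n)]
        # Step 2, O(log n) parallel time: each row is summed by a binary
        # tree of additions (simulated here by Python's sum).
        ranks = [sum(row) for row in less]
        # Step 3, O(1) parallel time: element i writes itself to slot rank_i.
        out = [None] * n
        for i in range(n):
            out[ranks[i]] = a[i]
        return out

    assert pram_rank_sort([3, 1, 4, 1, 5, 9, 2, 6]) == [1, 1, 2, 3, 4, 5, 6, 9]

Each of the three steps takes constant or logarithmic parallel depth with polynomially many processors, which is exactly what places this algorithm in NC.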
Further analysis is based on formal notions of a computational problem, reducibility, and completeness. These notions lead to complexity classes, such as P (problems solvable in polynomial time) and NC (problems solvable by poly-sized circuits of polylogarithmic depth/delay, named "Nick's class" after Nicholas Pippenger). Clearly, NC is contained in P, but is believed to be smaller than P (just like P is believed to be smaller than NP). Because any problem in P can be efficiently reduced (NC-reduced) to any P-complete problem, finding a P-complete problem inside NC would contradict P ≠ NC (Theorem 3.5.4). So, if you are comfortable interpreting NP-complete as "likely not solvable in polynomial time," you should be comfortable interpreting P-complete as "likely not solvable in polylogarithmic time with a polynomial number of processors."
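For readers who want the contradiction spelled out step by step, here is a short formal sketch; the notation and the chain of implications are mine, keyed to the definitions above, with Theorem 3.5.4 cited by the book's numbering.

    % Sketch: a P-complete problem inside NC would collapse P to NC.
    \begin{align*}
      &\mathrm{NC} \subseteq \mathrm{P}, \text{ since a polynomial-size
        circuit can be evaluated in polynomial time.}\\
      &\text{Suppose } L \text{ is P-complete under NC reductions and }
        L \in \mathrm{NC}.\\
      &\text{For every } A \in \mathrm{P} \text{ we have }
        A \le_{\mathrm{NC}} L; \text{ composing the reduction with an}\\
      &\text{NC algorithm for } L \text{ keeps size polynomial and depth
        polylogarithmic, so } A \in \mathrm{NC}.\\
      &\text{Hence } \mathrm{P} \subseteq \mathrm{NC}, \text{ i.e., }
        \mathrm{P} = \mathrm{NC}, \text{ contradicting the belief that }
        \mathrm{P} \neq \mathrm{NC}.
    \end{align*}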