The Work of Peter W. Shor


Doc. Math. J. DMV

The Work of Peter W. Shor

Ronald Graham

Much of the work of Peter Shor has a strong geometrical flavor, typically coupled with deep ideas from probability, complexity theory or combinatorics, and always woven together with brilliance and insight of the first magnitude. Due to the space limitations of this note, I will restrict myself to brief descriptions of just four of his remarkable achievements, unfortunately omitting discussions of his seminal work on randomized incremental algorithms (of fundamental importance in computational geometry) and his provocative results in computational biology on self-assembling virus shells.

Two-dimensional discrepancy, minimax grid matchings and online bin packing.

The minimax grid matching problem is a fundamental combinatorial problem arising in the average-case analysis of algorithms. To state it, we consider a square S of area N in the plane, and a regularly spaced sqrt(N) by sqrt(N) array G (a grid) of points in S. Let P be a set of N points selected independently and uniformly in S. By a perfect matching of P to G we mean a map lambda: P -> G. For each selection P, define

    L(P) = min_lambda max_{p in P} d(p, lambda(p)),

where lambda ranges over all perfect matchings of P to G, and d denotes Euclidean distance.

Theorem (Shor, Leighton-Shor). With very high probability,

    L(P) = Theta((log N)^{3/4}).

The proof is very intricate and ingenious, and contains a wealth of new ideas which have spawned a variety of extensions and generalizations, notably in the work of M. Talagrand on majorizing measures and discrepancy.
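For small instances, the quantity L(P) can be computed exactly: binary-search the bottleneck distance, and at each candidate threshold test whether a perfect matching exists using only edges of at most that length. A minimal sketch (function names are illustrative, and the grid is taken as the cell centers of S, an assumption not fixed by the text):

```python
import math
import random

def max_matching(adj, n_left, n_right):
    """Kuhn's augmenting-path algorithm; adj[u] lists right-side neighbors of u."""
    match_r = [-1] * n_right

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    return sum(try_augment(u, [False] * n_right) for u in range(n_left))

def minimax_grid_matching(points, grid):
    """L(P): the smallest r such that every point of P can be matched to a
    distinct grid point within distance r (a bottleneck matching)."""
    n = len(points)
    dist = [[math.dist(p, g) for g in grid] for p in points]
    cands = sorted({d for row in dist for d in row})
    lo, hi = 0, len(cands) - 1
    while lo < hi:  # binary search on the bottleneck threshold
        mid = (lo + hi) // 2
        adj = [[j for j in range(n) if dist[i][j] <= cands[mid]]
               for i in range(n)]
        if max_matching(adj, n, n) == n:
            hi = mid
        else:
            lo = mid + 1
    return cands[lo]

# Square of area N = 16 with a 4 x 4 grid, and 16 uniform random points.
side = 4
grid = [(i + 0.5, j + 0.5) for i in range(side) for j in range(side)]
random.seed(1)
P = [(random.uniform(0, side), random.uniform(0, side))
     for _ in range(side * side)]
print(minimax_grid_matching(P, grid))
```

The binary search is valid because feasibility is monotone in the threshold: enlarging the allowed distance can only add edges.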
A classical paradigm in the analysis of algorithms is the so-called bin packing problem, in which a list W = (w_1, w_2, ..., w_n) of weights is given, and we are required to pack all the w_i into bins with the constraint that no bin can contain a weight total of more than 1. Since it is NP-hard to determine the minimum number of bins which W requires for a successful packing (or even to decide whether this minimum number is 2), extensive efforts have been made toward finding good approximation algorithms for producing near-optimal packings. In the Best Fit algorithm, after the first i weights are packed, the next weight w_{i+1} is placed into the bin in which it fits best, i.e., so that the unused space in that bin is less than it would be in any other bin. This is actually an online algorithm. In his thesis, Shor proved the very surprising and deep result that when the w_i are chosen uniformly at random from [0, 1], then with very high probability the amount of wasted space has size Theta(n^{1/2} (log n)^{3/4}).

An upright region R = R_f of the square S is defined as the region in S lying above some continuous, monotonically nonincreasing function f (e.g., S is itself upright). If P is a set of N points chosen uniformly and independently at random in S, we can define the discrepancy

    Delta(R) = | |R ∩ P| - area(R) |.

An old problem in mathematical statistics from the 1950s was the estimation of sup_R Delta(R) over all upright regions R of S. This was finally answered by Leighton and Shor, and it is now known that with very high probability

    sup_R Delta(R) = Theta(N^{1/3} (log N)^{2/3}).

The preceding results give just a hint of the numerous applications these fertile techniques have found to such diverse areas as pseudorandom number generation, dynamic storage allocation, wafer-scale integration and two-dimensional bin packing.

Davenport-Schinzel sequences.

A Davenport-Schinzel sequence DS(n, s) is a sequence U = (u_1, u_2, ..., u_m) composed of n distinct symbols such that u_i != u_{i+1} for all i, and such that U contains no alternating subsequence of length s + 2, i.e., there do not exist indices i_1 < i_2 < ... < i_{s+2} such that u_{i_1} = u_{i_3} = ... = a and u_{i_2} = u_{i_4} = ... = b, with a != b. We define

    lambda_s(n) = max{ m : (u_1, ..., u_m) is a DS(n, s) sequence }.
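The defining condition can be tested mechanically, and for very small n and s the extremal function can be found by exhaustive search. A brute-force sketch (illustrative names; feasible only for tiny parameters) that recovers lambda_2(3) = 2*3 - 1 = 5:

```python
from itertools import product

def longest_alternation(seq, a, b):
    """Length of the longest alternating subsequence a, b, a, b, ... (starting
    with either symbol): the number of maximal runs in seq restricted to {a, b}."""
    runs, prev = 0, None
    for x in seq:
        if x in (a, b) and x != prev:
            runs, prev = runs + 1, x
    return runs

def is_ds(seq, s):
    """Davenport-Schinzel conditions: adjacent symbols differ, and no two
    symbols alternate s + 2 times (i.e., every alternation has length <= s + 1)."""
    if any(seq[i] == seq[i + 1] for i in range(len(seq) - 1)):
        return False
    syms = sorted(set(seq))
    return all(longest_alternation(seq, a, b) <= s + 1
               for a in syms for b in syms if a < b)

def lambda_s(n, s, max_len):
    """Brute-force lambda_s(n): longest DS(n, s) sequence over n symbols,
    searching all sequences up to length max_len."""
    best = 0
    for m in range(1, max_len + 1):
        if any(is_ds(seq, s) for seq in product(range(n), repeat=m)):
            best = m
    return best

print(lambda_s(3, 2, max_len=7))  # -> 5 = 2n - 1
```

For example, the sequence 0, 1, 0, 2, 0 attains the bound for n = 3, s = 2: no pair of symbols alternates four times in it.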
Davenport-Schinzel sequences have turned out to be of central importance in computational and combinatorial geometry, and have found many applications in such areas as motion planning, visibility, Voronoi diagrams and shortest path algorithms. It is known that DS(n, s) sequences provide a combinatorial characterization of the lower envelope of n continuous univariate functions, each pair of which intersect in at most s points. Hence lambda_s(n) is just the maximum number of connected components of the graphs of such functions, and accurate estimates of lambda_s(n) can often be translated into sharp bounds for algorithms which depend on function minimization. It is trivial to show that lambda_1(n) = n and lambda_2(n) = 2n - 1. The first surprise came when it was shown that

    lambda_3(n) = Theta(n alpha(n)),

where alpha(n) is defined to be the functional inverse of the Ackermann function A(t), i.e., alpha(n) = min{ t : A(t) >= n }. Note that alpha(n) is an extremely slowly growing function of n, since A is defined as follows: A_1(t) = 2t, A_k(1) = 2, and A_k(t) = A_{k-1}(A_k(t - 1)) for k, t >= 2. Thus A_2(t) = 2^t, A_3(t) is an exponential tower of t 2's, and so on. Then A(t) is defined to be A_t(t). The best bounds previously known for lambda_s(n), s >= 4, were rather weak. This was remedied when Shor and his coauthors managed to show, by extremely delicate and clever techniques, that

    lambda_4(n) = Theta(n 2^{alpha(n)}).

Thus, DS(n, 4) sequences can be much longer than DS(n, 3) sequences, but are still only slightly nonlinear. In addition, they also obtained almost tight bounds on all other lambda_s(n), s >= 5.

Tiling R^n with cubes.

In 1907, Minkowski made the conjecture, in connection with his work on extremal lattices, that in any lattice tiling of R^n with unit n-cubes there must be two cubes having a complete facet ((n-1)-face) in common. This was generalized by O. Keller in 1930 to the conjecture that any tiling of R^n by unit n-cubes must have this property. This was confirmed by Perron in 1940 for n <= 6, and shortly thereafter Hajos proved Minkowski's original conjecture for all n. However, in spite of repeated efforts, no further progress was made in proving Keller's conjecture for the next 50 years. Then, in 1992, Shor struck. He showed, with his colleague J. Lagarias, that in fact Keller's conjecture is false for all dimensions n >= 10. They managed to do this with a very ingenious argument showing that certain special graphs of size 4^n suggested by Corradi and Szabo must always have cliques of size 2^n, contrary to the prevailing opinion then, from which it followed that Keller's conjecture must fail for R^n.
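The Corradi-Szabo reduction can be made concrete: Keller's conjecture in dimension n fails precisely when the graph on vertex set {0, 1, 2, 3}^n described below has a clique of size 2^n. A small sketch (hypothetical helper names; brute force is feasible only in very low dimension) verifying that no such clique exists in the plane:

```python
from itertools import combinations, product

def keller_adjacent(u, v):
    """u, v in {0,1,2,3}^n are adjacent iff they differ in at least two
    coordinates, at least one of which differs by exactly 2 (mod 4)."""
    diff = [abs(a - b) for a, b in zip(u, v)]
    return sum(d != 0 for d in diff) >= 2 and any(d == 2 for d in diff)

def keller_clique_number(n):
    """Brute-force clique number of the dimension-n Keller graph
    (4^n vertices; practical only for n <= 2)."""
    verts = list(product(range(4), repeat=n))
    best = 1
    for size in range(2, 2 ** n + 1):
        if any(all(keller_adjacent(u, v) for u, v in combinations(c, 2))
               for c in combinations(verts, size)):
            best = size
        else:
            break
    return best

# Keller's conjecture holds in dimension 2: no clique of size 2^2 = 4 exists.
print(keller_clique_number(2))  # -> 2
```

Lagarias and Shor's counterexamples live in dimension 10, where the graph has 4^10 vertices and exhaustive search of this kind is hopeless; their construction is what made the clique explicit.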
The reader is referred to their paper for the details of this combinatorial gem, and to the surrounding literature for a fascinating history of this problem. I might point out that this is another example of an old conjecture in geometry being shattered by a subtle combinatorial construction, an earlier one being the recent disproof of the Borsuk conjecture by Kahn and Kalai. It is still not known what the truth for Keller's conjecture is when n = 7, 8 or 9.

Quantum computation.

It has been generally believed that a digital computer (or, more abstractly, a Turing machine) can simulate any physically realizable computational device; this, in fact, is the thrust of the celebrated Church-Turing thesis. Moreover, it was also assumed that this could always be done in an efficient way, i.e., involving at most a polynomial expansion in the time required. However, it was first pointed out by Feynman that certain quantum mechanical systems seemed to be extremely difficult (in fact, impossible) to simulate efficiently on a standard von Neumann computer. This led him to suggest that it might be possible to take advantage of the quantum mechanical behavior of nature itself in designing a computer which overcame these difficulties. In fact, in doing so, such a quantum computer might be able to solve some of the classically difficult problems much more efficiently as well. These ideas were pursued by Benioff, Deutsch, Bennett and others, and slowly a model of quantum computation began to evolve. However, the first bombshell in this embryonic field occurred when Peter Shor, in 1994, announced the first significant algorithm for such a hypothetical quantum computer: namely, a method for factoring an arbitrary composite integer N in

    O((log N)^2 (log log N)(log log log N))

steps.
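The number-theoretic core of Shor's factoring algorithm is classical; only the order-finding step requires a quantum computer. A sketch (illustrative names) in which a brute-force, exponential-time search stands in for the quantum period-finding subroutine, showing how an even order r of a modulo N yields a factor via gcd(a^{r/2} - 1, N):

```python
from math import gcd
from random import randrange, seed

def multiplicative_order(a, N):
    """Brute-force stand-in for the quantum period-finding subroutine:
    the smallest r > 0 with a^r = 1 (mod N). Requires gcd(a, N) = 1."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_core(N, tries=50):
    """Factor a composite odd N via the order-finding reduction."""
    seed(0)  # deterministic for illustration
    for _ in range(tries):
        a = randrange(2, N)
        g = gcd(a, N)
        if g > 1:
            return g                # lucky: a already shares a factor with N
        r = multiplicative_order(a, N)
        if r % 2 == 0:
            y = pow(a, r // 2, N)
            if y != N - 1:          # need a^(r/2) != -1 (mod N)
                f = gcd(y - 1, N)
                if 1 < f < N:
                    return f
    return None

print(shor_classical_core(15))  # a nontrivial factor: 3 or 5
```

For N = 15 and a = 7, for instance, the order is r = 4, and gcd(7^2 - 1, 15) = 3 while gcd(7^2 + 1, 15) = 5. The quantum algorithm replaces the while-loop above with period finding via the quantum Fourier transform, which is where the exponential speedup lives.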
This should be contrasted with the best current algorithms on classical digital computers, whose best running time estimates grow like exp(c (log N)^{1/3} (log log N)^{2/3}). Of course, no one has yet ruled out the possibility that a polynomial-time factoring algorithm exists for classical computers (cf. the infamous P vs. NP problem), but it is felt by most knowledgeable people that this is extremely unlikely. In the same paper, Shor also gives a polynomial-time algorithm for a quantum computer for computing discrete logarithms, another apparently intractable problem for classical computers.

There is not space here to describe these algorithms in any detail, but a few remarks may be in order. In a classical computer, information is represented by binary symbols 0 and 1 (bits). An n-bit memory can exist in any of 2^n logical states. Such computers also manipulate this binary data using functions like the Boolean AND and NOT. By contrast, a quantum bit, or qubit, is typically a microscopic system such as an electron (with its spin) or a polarized photon. The Boolean states 0 and 1 are represented by reliably distinguishable states of the qubit, e.g., spin up for |0> and spin down for |1>. However, according to the laws of quantum mechanics, the qubit can also exist in a continuum of intermediate states, or superpositions,

    alpha |0> + beta |1>,

where alpha and beta are complex numbers satisfying |alpha|^2 + |beta|^2 = 1. More generally, a string of n qubits can exist in any state of the form

    sum_x alpha_x |x>,

where x ranges over all 2^n binary strings of length n, and the alpha_x are complex numbers such that sum_x |alpha_x|^2 = 1. In other words, a quantum state of n qubits is represented by a unit vector in a 2^n-dimensional complex Hilbert space, defined as the tensor product of n copies of the 2-dimensional Hilbert space representing the state of a single qubit. It is the exponentially large dimensionality of this space which
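The state-space description above can be illustrated by simulating a few qubits directly: a toy sketch (illustrative names) storing an n-qubit state as a plain list of 2^n amplitudes built by tensor products.

```python
import math

def kron(v, w):
    """Tensor product of two state vectors (lists of amplitudes)."""
    return [a * b for a in v for b in w]

def n_qubit_state(single_qubit_states):
    """Tensor together single-qubit states (alpha, beta) into an n-qubit
    state: a unit vector of 2^n complex amplitudes."""
    state = [1.0]
    for q in single_qubit_states:
        state = kron(state, q)
    return state

# |+> = (|0> + |1>) / sqrt(2): an equal superposition of 0 and 1.
plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]

# Three qubits, each in |+>: an equal superposition of all 2^3 = 8 bit strings.
psi = n_qubit_state([plus, plus, plus])
print(len(psi))                              # -> 8 amplitudes
print(round(sum(abs(a) ** 2 for a in psi), 10))  # -> 1.0 (unit vector)
```

Even this toy makes the point of the passage: each added qubit doubles the length of the amplitude list, so a classical simulation of n qubits manipulates a vector of 2^n numbers.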