Quantum Complexity Theory


Ronald de Wolf

April 4, 2011

1  Most functions need exponentially many gates

As we have seen, quantum computers seem to provide enormous speed-ups for problems like factoring, and square-root speed-ups for various search-related problems. Could they be used to speed up almost all problems, at least by some amount? Here we will show that this is not the case: as it turns out, quantum computers are not significantly better than classical computers for most computational problems.

Consider the problem of computing a Boolean function f : {0,1}^n → {0,1} by means of a quantum circuit. Ideally, most such functions would be computable by efficient quantum circuits (i.e., using at most poly(n) elementary gates). Instead, we will show by means of a simple counting argument that almost all such functions f have circuit complexity nearly 2^n. This is a variant of a well-known counting argument for classical Boolean circuits due to Shannon.

Let us fix some finite set of elementary gates, for instance the Shor basis {H, P_{π/8}, CNOT} or {H, Toffoli}. Suppose this set has k gates, of maximal fanout 3. Let us count the number of distinct circuits that have at most C elementary gates. For simplicity we include the initial qubits (the n input bits as well as the workspace qubits, which are initially |0⟩) among those C gates. First we need to choose which type of elementary gate each of the C gates is; this can be done in k^C ways. Now every gate has at most 3 ingoing and 3 outgoing wires. For each of its 3 outgoing wires we can choose an ingoing wire of one of the gates in the following level; this can be done in at most C^3 ways. Hence the total number of circuits with up to C elementary gates is at most k^C · C^{3C} = C^{O(C)}. We are clearly overcounting here, but that's OK because we want an upper bound on the number of circuits.

We'll say that a specific circuit computes a Boolean function f : {0,1}^n → {0,1} if for every input x ∈ {0,1}^n, a measurement of the first qubit of the final state (obtained by applying the circuit to initial state |x, 0⟩) gives value f(x) with probability at least 2/3. Each of our C^{O(C)} circuits can compute at most one f (in fact some of those circuits don't compute any function at all). Accordingly, with C gates we can compute at most C^{O(C)} distinct Boolean functions f : {0,1}^n → {0,1}. Hence even if we just want to be able to compute 1% of all 2^{2^n} Boolean functions, then we already need

    C^{O(C)} ≥ (1/100) · 2^{2^n},   which implies   C ≥ Ω(2^n / n).

Accordingly, very few computational problems will be efficiently solvable on a quantum computer. Below we will try to classify those using the tools of complexity theory.
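To get a rough feel for how fast the required circuit size grows, here is a minimal Python sketch (not from the notes). It makes the hidden constant in C^{O(C)} explicit as k^C · C^{3C} and, for small n, finds the smallest C for which that count reaches 1% of all 2^{2^n} Boolean functions; the exact constants are an assumption chosen purely for illustration.

```python
import math

def min_gates(n, k=3, fanout=3):
    """Smallest C with k^C * C^(fanout*C) >= 2^(2^n) / 100, i.e. enough circuits
    (by the overcounting bound above) to cover 1% of all Boolean functions on
    n bits. The constants k and fanout are illustrative assumptions."""
    target = 2**n - math.log2(100)   # log2 of 2^(2^n) / 100
    C = 1
    while C * (math.log2(k) + fanout * math.log2(C)) < target:
        C += 1
    return C

if __name__ == "__main__":
    for n in range(1, 9):
        print(f"n={n}: need C >= {min_gates(n)}   (2^n/n = {2**n / n:.1f})")
```

Already for n around 8 the required C is in the dozens, and the target 2^{2^n} grows doubly exponentially, which is what forces C to grow like 2^n/n.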
2  Classical and quantum complexity classes

A "complexity class" is a set of decision problems (a.k.a. "languages") that all have similar complexity in some sense, for instance the ones that can be solved with polynomial time or polynomial space. Let us first mention some of the main classical complexity classes:

• P. The class of problems that can be solved by classical deterministic computers using polynomial time.

• BPP. The problems that can be solved by classical randomized computers using polynomial time (and with error probability ≤ 1/3 on every input).

• NP. The problems where the 'yes'-instances can be verified in polynomial time if some prover gives us a polynomial-length "witness". Some problems in this class are NP-complete, meaning that any other problem in NP can be reduced to them in polynomial time. Hence the NP-complete problems are the hardest problems in NP. An example is the problem of satisfiability: we can verify that a given n-variable Boolean formula is satisfiable if a prover gives us a satisfying assignment, so it is in NP, but one can even show that it is NP-complete. Other examples are integer linear programming, travelling salesman, graph-colorability, etc.

• PSPACE. The problems that can be solved by classical deterministic computers using polynomial space.

We can consider quantum analogues of all such classes, an enterprise that was started by Bernstein and Vazirani [1]:

• EQP. The class of problems that can be solved exactly by quantum computers using polynomial time. This class depends on the set of elementary gates one allows, and is not so interesting.

• BQP. The problems that can be solved by quantum computers using polynomial time (and with error probability ≤ 1/3 on every input). This class is the accepted formalization of "efficiently solvable by quantum computers."

• QNP. The problems where the 'yes'-instances can be verified efficiently if some prover gives us a polynomial-length "quantum witness." This is again dependent on the elementary gates one allows, and not so interesting. Allowing error probability ≤ 1/3 on every input, we get a class called QMA ("quantum Merlin-Arthur"). This is a more robust and more interesting quantum version of NP; unfortunately we don't have time to study it in this course.

• QPSPACE. The problems that can be solved by quantum computers using polynomial space. This turns out to be the same as classical PSPACE.

We should be a bit careful about what we mean by a "polynomial time [or space] quantum algorithm." Our model of computation has been quantum circuits, and we need a separate quantum circuit for each new input length. So a quantum algorithm of time p(n) would correspond to a family of quantum circuits {C_n}, where C_n is the circuit that is used for inputs of length n; it should have at most p(n) elementary gates. (To avoid smuggling loads of hard-to-compute information into this definition (e.g., C_n could contain information about whether the nth Turing machine halts or not), we will require this family to be efficiently describable: there should be a classical Turing machine which, on input n and j, outputs (in time polynomial in n) the jth elementary gate of C_n, with information about where its incoming and outgoing wires go.)

In the next section we will prove that BQP ⊆ PSPACE. We have BPP ⊆ BQP, because a BPP-machine on a fixed input length n can be written as a polynomial-size reversible circuit (i.e., consisting of Toffoli gates) that starts from a state that involves some coin flips. A quantum computer can generate those coin flips using Hadamard transforms, then run the reversible circuit, and measure the final answer bit. It is believed that BQP contains problems that aren't in BPP, for example factoring large integers: this problem (or rather a decision version thereof) is in BQP because of Shor's algorithm, and is generally believed not to be in BPP. Thus we have the following sequence of inclusions:

    P ⊆ BPP ⊆ BQP ⊆ PSPACE.

It is generally believed that P = BPP, while the other inclusions are believed to be strict. Note that a proof that BQP is strictly greater than BPP (for instance, a proof that factoring cannot be solved efficiently by classical computers) would imply that P ≠ PSPACE, solving what has been one of the main open problems in computer science since the 1960s. Hence such a proof, if it exists at all, will probably be very hard.
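The BPP ⊆ BQP argument above can be made concrete with a tiny statevector simulation. The sketch below (not from the notes; qubit ordering and gate choice are my own illustrative assumptions) uses a Hadamard on a |0⟩ "coin" qubit to produce the random bit, and a CNOT stands in for the Toffoli-based reversible circuit, copying the coin into the answer qubit.

```python
import numpy as np

# Qubit 0 is the "coin", qubit 1 is the answer bit of the reversible circuit.
# Basis-state index convention: index = 2*q0 + q1 (qubit 0 is the most significant bit).

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # control = qubit 0, target = qubit 1
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                                 # start in |00>

state = np.kron(H, I) @ state                  # Hadamard on the coin qubit: a fair coin flip
state = CNOT @ state                           # reversible "computation" copying coin -> answer

# Probability of measuring the answer qubit (qubit 1) as 1:
prob_one = sum(abs(state[i])**2 for i in (1, 3))
print(f"P(answer = 1) = {prob_one:.2f}")       # 0.50, exactly as for the classical coin flip
```

A real BPP computation would replace the single CNOT by the polynomial-size reversible circuit for the randomized algorithm, but the mechanism (Hadamards for coins, reversible gates for the deterministic part, one final measurement) is the same.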
What about the relation between BQP and NP? It is generally believed that NP-complete problems are not in BQP. The main evidence for this is the lower bound for Grover search: a quantum brute-force search over all 2^n possible assignments to an n-variable formula gives a square-root speed-up, but not more. This is of course not a proof, since there might be clever, non-brute-force methods to solve satisfiability. There could also be problems in BQP that are not in NP, so it may well be that BQP and NP are incomparable. Much more can be said about quantum complexity classes; see for instance Watrous's survey [3].

3  Classically simulating quantum computers in polynomial space

When Richard Feynman first came up with quantum computers [2], he motivated them by:

"the full description of quantum mechanics for a large system with R particles is given by a function q(x_1, x_2, ..., x_R, t) which we call the amplitude to find the particles x_1, ..., x_R [RdW: think of x_i as one qubit], and therefore, because it has too many variables, it cannot be simulated with a normal computer with a number of elements proportional to R or proportional to N." [...] "Can a quantum system be probabilistically simulated by a classical (probabilistic, I'd assume) universal computer? In other words, a computer which will give the same probabilities as the quantum system does. If you take the computer to be the classical kind I've described so far, (not the quantum kind described in the last section) and there are no changes in any laws, and there's no hocus-pocus, the answer is certainly, No!"

The suggestion to devise a quantum computer to simulate quantum physics is of course a brilliant one, but the main motivation is not quite accurate. As it turns out, it is not necessary to keep track of all (exponentially many) amplitudes in the state to classically simulate a quantum system. Here we will show that it can actually be simulated efficiently in terms of space [1], though not necessarily in terms of time.
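As a preview of the idea, here is a minimal Python sketch (my own illustration, not the proof from the notes) of the Feynman-path way of computing a single output amplitude ⟨y|U_T···U_1|x⟩ by recursing over intermediate basis states. The recursion depth equals the number of gates, so the space used is polynomial, while the running time is exponential in the number of gates; function names and the bit-ordering convention are assumptions made for this sketch.

```python
import numpy as np

def amplitude(gates, x_bits, y_bits):
    """<y| U_T ... U_1 |x>, with gates given as (unitary, qubit-tuple) pairs.
    Each unitary is a 2^k x 2^k matrix acting on the listed k qubits
    (first listed qubit = most significant bit of the matrix index).
    Space: proportional to (number of gates) * n.  Time: exponential."""
    def amp(t, out_bits):
        # amplitude of basis state out_bits after the first t gates, starting from x
        if t == 0:
            return 1.0 if out_bits == x_bits else 0.0
        U, qubits = gates[t - 1]
        k = len(qubits)
        row = sum(out_bits[q] << (k - 1 - i) for i, q in enumerate(qubits))
        total = 0.0
        for col in range(2 ** k):          # only states differing on `qubits` contribute
            entry = U[row, col]
            if entry == 0:
                continue
            prev = list(out_bits)
            for i, q in enumerate(qubits):
                prev[q] = (col >> (k - 1 - i)) & 1
            total += entry * amp(t - 1, tuple(prev))
        return total

    return amp(len(gates), y_bits)

if __name__ == "__main__":
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    gates = [(H, (0,)), (H, (0,))]              # two Hadamards = identity
    print(amplitude(gates, (0,), (0,)))         # ~1.0
    print(amplitude(gates, (0,), (1,)))         # ~0.0
```

At no point does the sketch store the full 2^n-dimensional state vector, which is exactly the point of the space-efficient simulation discussed in this section.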