Min- and Max-Relative Entropies


Nilanjana Datta, University of Cambridge, U.K.

Focus: an important quantity which arises in Quantum Mechanics and Quantum Information Theory, the quantum entropy (von Neumann entropy), its parent quantity, the quantum relative entropy, and two new quantum relative entropies: the min- and max-relative entropies.

Quantum Entropy or von Neumann Entropy

In 1927 von Neumann introduced the notion of a mixed state, represented by a density matrix

    ρ ≥ 0,  Tr ρ = 1,   ρ = Σ_{i=1}^k p_i |ψ_i⟩⟨ψ_i|,   ρ ↔ {p_i, |ψ_i⟩},

and defined its entropy

    S(ρ) := −Tr(ρ log ρ).

Why? To extend the classical theory of Statistical Mechanics developed by Gibbs, Boltzmann et al. to the quantum domain, not for the development of Quantum Information Theory. In fact, this was well before Shannon laid the foundations of Classical Information Theory (1948).

In the ensemble decomposition ρ = Σ_{i=1}^k p_i |ψ_i⟩⟨ψ_i| the states need not be orthogonal: ⟨ψ_i|ψ_j⟩ ≠ δ_ij in general. S(ρ) = 0 if and only if ρ is a pure state, ρ = |Ψ⟩⟨Ψ|. Hence S(ρ) is a measure of the "mixedness" of the state ρ.

Relation with Statistical Mechanics

The finite-volume Gibbs state is given by the density matrix

    ρ_Λ^Gibbs = e^{−βH_Λ} / Tr e^{−βH_Λ}.

It maximizes the von Neumann entropy given the expected value of the energy: the functional S(ρ_Λ) − β Tr(ρ_Λ H_Λ) is maximized if and only if ρ_Λ = ρ_Λ^Gibbs, and

    log Z_Λ = max_{ρ_Λ} [ S(ρ_Λ) − β Tr(ρ_Λ H_Λ) ].

Shannon Entropy

In 1948 Shannon defined the entropy of a random variable. Let X ∼ p(x), x ∈ J, with J a finite alphabet. The Shannon entropy of X is

    H(X) := −Σ_{x∈J} p(x) log p(x),   H(X) ≡ H({p(x)}).

Supposedly von Neumann asked Shannon to call this quantity entropy, saying: "You should call it 'entropy' for two reasons; first, the function is already in use in thermodynamics; second, and most importantly, most people don't know what entropy really is, and if you use 'entropy' in an argument, you will win every time."

Relation between Shannon entropy and thermodynamic entropy

Thermodynamic entropy: S = k log Ω, where Ω is the total number of microstates. Suppose the r-th microstate occurs with probability p_r, and consider ν replicas of the system. Then on average ν_r ≈ ν p_r replicas are in the r-th state, and the total number of microstates of the compound system is

    Ω_ν = ν! / (ν_1! ν_2! ⋯ ν_r! ⋯).

By Stirling's approximation, the thermodynamic entropy of the compound system of ν replicas is S_ν = −kν Σ_r p_r log p_r, so that

    S = S_ν / ν = −k Σ_r p_r log p_r = k H({p_r}),

i.e., the Shannon entropy (up to the factor k).

Relation between Shannon entropy and von Neumann entropy

S(ρ) := −Tr(ρ log ρ), with ρ = Σ_{i=1}^k p_i |ψ_i⟩⟨ψ_i|, ρ ↔ {p_i, |ψ_i⟩}, and ⟨ψ_i|ψ_j⟩ ≠ δ_ij in general. But ρ = ρ†, so it has a spectral decomposition

    ρ = Σ_{i=1}^n λ_i Π_i,

with eigenvalues λ_i. Since ρ ≥ 0 and Tr ρ = 1, we have λ_i ≥ 0 and Σ_{i=1}^n λ_i = 1, so the eigenvalues {λ_i} form a probability distribution, and

    S(ρ) = −Σ_{i=1}^n λ_i log λ_i = H({λ_i}).
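As a quick numerical illustration of this last identity (not part of the original slides), here is a minimal NumPy sketch with logarithms taken base 2; the ensemble of the non-orthogonal states |0⟩ and |+⟩ is an illustrative choice:

```python
import numpy as np

def shannon_entropy(p):
    """H({p_i}) = -sum_i p_i log2 p_i, with 0 log 0 := 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho) = H({lambda_i}), the Shannon entropy of the spectrum of rho."""
    return shannon_entropy(np.linalg.eigvalsh(rho))

# Ensemble of two NON-orthogonal pure qubit states, |0> and |+>, each with probability 1/2
ket0 = np.array([1.0, 0.0])
ket_plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ket_plus, ket_plus)

print(von_neumann_entropy(rho))      # ~0.60 bits: Shannon entropy of the eigenvalues of rho
print(shannon_entropy([0.5, 0.5]))   # 1.0 bit:    Shannon entropy of the ensemble probabilities {p_i}
# S(rho) < H({p_i}) here because the ensemble states are not orthogonal;
# the two entropies coincide when <psi_i|psi_j> = delta_ij.
```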
Classical and Quantum Information Theory

Classical Information Theory [Shannon 1948] is the theory of information-processing tasks, e.g. the storage and transmission of information. Quantum Information Theory is the study of how such tasks can be accomplished using quantum-mechanical systems. It is not just a generalization of Classical Information Theory to the quantum realm: the underlying quantum mechanics gives rise to novel features which have no classical analogue. In Quantum Information Theory, information is carried by physical states of quantum-mechanical systems, e.g. polarization states of photons, spin states of electrons, etc.

Fundamental units. In Classical Information Theory the fundamental unit is a bit, which takes the values 0 and 1. In Quantum Information Theory it is a qubit, the state of a two-level quantum-mechanical system. Physically a qubit can be any two-level system, e.g. the spin states of an electron or the polarization states of a photon, or a multi-level system with two states E_0, E_1 which can be effectively decoupled from the rest.

Operational Significance of the Shannon Entropy

The Shannon entropy is the optimal rate of data compression for a classical i.i.d. (memoryless) information source: successive signals emitted by the source are independent of each other. The source is modelled by a sequence of i.i.d. random variables U_1, U_2, ..., U_n ∼ p(u), u ∈ J, and the signals emitted by the source are u = (u_1, u_2, ..., u_n). The Shannon entropy of the source is

    H(U) := −Σ_{u∈J} p(u) log p(u).

(Q) What is the optimal rate of data compression for such a source, i.e. the minimum number of bits needed to store the signals emitted per use of the source (for reliable data compression)? The optimal rate is evaluated in the asymptotic limit n → ∞, where n is the number of uses of the source, and one requires p_error^(n) → 0 as n → ∞.

(A) The optimal rate of data compression is H(U), the Shannon entropy of the source.

Operational Significance of the von Neumann Entropy

The von Neumann entropy is the optimal rate of data compression for a memoryless (i.i.d.) quantum information source. A quantum information source emits signals (pure states) |ψ_1⟩, |ψ_2⟩, ..., |ψ_k⟩ ∈ H (a Hilbert space) with probabilities p_1, p_2, ..., p_k. The source is then characterized by {ρ, H}, with density matrix

    ρ = Σ_{i=1}^k p_i |ψ_i⟩⟨ψ_i|,   ⟨ψ_i|ψ_j⟩ ≠ δ_ij in general.

To evaluate the data compression limit, consider a sequence {ρ_n, H_n}. If the quantum information source is memoryless (i.i.d.), then H_n = H^{⊗n} and ρ_n = ρ^{⊗n}, with ρ a state on H. The optimal rate of data compression is S(ρ). NOTE: it is evaluated in the asymptotic limit n → ∞, where n is the number of uses of the source, and one requires p_err^(n) → 0 as n → ∞.

Example: a memoryless quantum information source emitting qubits is characterized by {ρ_n, H_n}, with ρ_n = ρ^{⊗n} and H_n = H^{⊗n}, or simply by {ρ, H}. Consider n successive uses of the source: n qubits are emitted and stored in m_n qubits (data compression), so the rate of data compression is m_n / n. The optimal rate of data compression is

    R_∞ := lim_{n→∞} m_n / n = S(ρ),

under the requirement that p_error^(n) → 0 as n → ∞.

Quantum Relative Entropy

A fundamental quantity in Quantum Mechanics and Quantum Information Theory is the quantum relative entropy of ρ with respect to σ, where ρ ≥ 0, Tr ρ = 1 and σ ≥ 0:

    S(ρ||σ) := Tr(ρ log ρ) − Tr(ρ log σ),

well-defined if supp ρ ⊆ supp σ. It acts as a parent quantity for the von Neumann entropy:

    S(ρ) := −Tr(ρ log ρ) = −S(ρ||I)   (i.e., σ = I).

It also acts as a parent quantity for other entropies. For a bipartite state ρ_AB:

    Conditional entropy:  S(A|B) := S(ρ_AB) − S(ρ_B) = −S(ρ_AB || I_A ⊗ ρ_B),   with ρ_B = Tr_A ρ_AB;
    Mutual information:   I(A:B) := S(ρ_A) + S(ρ_B) − S(ρ_AB) = S(ρ_AB || ρ_A ⊗ ρ_B).

Some Properties of S(ρ||σ)

Klein's inequality: S(ρ||σ) ≥ 0 (a "distance"), with equality if and only if ρ = σ.

Joint convexity: S(Σ_k p_k ρ_k || Σ_k p_k σ_k) ≤ Σ_k p_k S(ρ_k || σ_k).

Monotonicity under completely positive trace-preserving (CPTP) maps Λ: S(Λ(ρ) || Λ(σ)) ≤ S(ρ||σ).
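The following minimal sketch (again NumPy/SciPy with base-2 logarithms, not from the slides; the Bell state is an illustrative choice) computes S(ρ||σ) directly from its definition and recovers the conditional entropy and mutual information of a maximally entangled state; the small regularizer added before taking matrix logarithms is only a numerical convenience:

```python
import numpy as np
from scipy.linalg import logm

def relative_entropy(rho, sigma, eps=1e-12):
    """S(rho||sigma) = Tr(rho log rho) - Tr(rho log sigma), in bits.
    Assumes supp(rho) is contained in supp(sigma); eps regularizes logm of singular matrices."""
    log2m = lambda M: logm(M + eps * np.eye(M.shape[0])) / np.log(2)
    return float(np.real(np.trace(rho @ (log2m(rho) - log2m(sigma)))))

# Bell state rho_AB = |Phi+><Phi+| with |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_AB = np.outer(phi, phi)
rho_A = rho_B = 0.5 * np.eye(2)   # both reduced states are maximally mixed
I2 = np.eye(2)

# Mutual information  I(A:B) = S(rho_AB || rho_A (x) rho_B)
print(relative_entropy(rho_AB, np.kron(rho_A, rho_B)))   # ~ 2 bits
# Conditional entropy S(A|B) = -S(rho_AB || I_A (x) rho_B)
print(-relative_entropy(rho_AB, np.kron(I2, rho_B)))     # ~ -1 bit (negative for an entangled state)
# Klein's inequality: S(rho||sigma) >= 0, with equality iff rho = sigma
print(relative_entropy(rho_A, rho_A))                    # ~ 0
```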
What is a CPTP map?

For an isolated quantum system, time evolution is unitary: the dynamics are governed by the Schrödinger equation. In Quantum Information Theory one deals with open systems, with unavoidable interactions between the system (A) and its environment (B). Time evolution is then not unitary in general. The most general description of the time evolution of an open system is given by a completely positive trace-preserving (CPTP) map Λ: ρ → σ, a superoperator acting on density operators, which describes discrete state changes resulting from any allowed physical process.

Properties satisfied by a superoperator Λ: ρ → σ:

    Linearity: Λ(Σ_k p_k ρ_k) = Σ_k p_k Λ(ρ_k).
    Positivity: σ = Λ(ρ) ≥ 0.
    Trace preservation (TP): Tr σ = Tr Λ(ρ) = Tr ρ = 1.
    Complete positivity (CP): (Λ_A ⊗ id_B)(ρ_AB) ≥ 0 for every ρ_AB on H_A ⊗ H_B.

Kraus Representation Theorem: Λ(ρ) = Σ_k A_k ρ A_k†, with Σ_k A_k† A_k = I.

Monotonicity of the quantum relative entropy under a CPTP map Λ:

    S(Λ(ρ) || Λ(σ)) ≤ S(ρ||σ).   ……….(1)

This is very powerful: many properties of other entropic quantities can be proved using (1), e.g. strong subadditivity of the von Neumann entropy (conjectured by Lanford and Robinson, proved by Lieb and Ruskai in 1973):

    S(ρ_ABC) + S(ρ_B) ≤ S(ρ_AB) + S(ρ_BC).

Outline of the rest: define two new relative entropy quantities; discuss their properties and operational significance; give a motivation for defining them; define two entanglement monotones (I will explain what that means); discuss their operational significance.

Two New Relative Entropies

Definition 1: The max-relative entropy of a density matrix ρ and a positive operator σ is

    S_max(ρ||σ) := log min{ λ : ρ ≤ λσ }  =  log min{ λ : λσ − ρ ≥ 0 }.

Definition 2: The min-relative entropy of a state ρ and a positive operator σ is

    S_min(ρ||σ) := −log Tr(π_ρ σ),

where π_ρ denotes the projector onto the support of ρ (supp ρ).

Remark: the min-relative entropy is expressible in terms of the quantum relative Rényi entropy of order α (α ≠ 1),

    S_α(ρ||σ) := (1/(α−1)) log Tr(ρ^α σ^{1−α}),

as follows: for supp ρ ⊆ supp σ,

    S_min(ρ||σ) = lim_{α→0+} S_α(ρ||σ).

Properties of the Min- and Max-Relative Entropies

Like S(ρ||σ), we have for S_* with * = min, max:

    S_*(ρ||σ) ≥ 0;
    S_*(Λ(ρ) || Λ(σ)) ≤ S_*(ρ||σ) for any CPTP map Λ;
    S_*(UρU† || UσU†) = S_*(ρ||σ) for any unitary operator U.

Most interestingly,

    S_min(ρ||σ) ≤ S(ρ||σ) ≤ S_max(ρ||σ).

The min-relative entropy is jointly convex in its arguments: for two mixtures ρ = Σ_{i=1}^n p_i ρ_i and σ = Σ_{i=1}^n p_i σ_i,

    S_min(ρ||σ) ≤ Σ_{i=1}^n p_i S_min(ρ_i || σ_i),

as for S(ρ||σ). The max-relative entropy is quasiconvex:

    S_max(ρ||σ) ≤ max_{1≤i≤n} S_max(ρ_i || σ_i).

Min- and Max-Entropies

    H_max(ρ) := −S_min(ρ||I) = log rank(ρ),
    H_min(ρ) := −S_max(ρ||I) = −log ||ρ||_∞,

analogous to S(ρ) = −S(ρ||I). For a bipartite state ρ_AB:

    H_min(A|B)_ρ := −S_max(ρ_AB || I_A ⊗ ρ_B), etc.,   analogous to S(A|B) = −S(ρ_AB || I_A ⊗ ρ_B);
    I_min(A:B)_ρ := S_min(ρ_AB || ρ_A ⊗ ρ_B), etc.,    analogous to I(A:B) = S(ρ_AB || ρ_A ⊗ ρ_B).

The min- and max-entropies satisfy:

(1) the strong subadditivity property: just as S(ρ_ABC) + S(ρ_B) ≤ S(ρ_AB) + S(ρ_BC), we have

    H_min(ρ_ABC) + H_min(ρ_B) ≤ H_min(ρ_AB) + H_min(ρ_BC);

(2) the Araki-Lieb inequality: just as S(ρ_AB) ≥ S(ρ_A) − S(ρ_B), we have

    H_min(ρ_AB) ≥ H_min(ρ_A) − H_min(ρ_B).

(Q) What is the operational significance of the min- and max-relative entropies? In Quantum Information Theory, one initially evaluated optimal rates of info-processing tasks: e.g.
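A minimal sketch of the two new quantities (NumPy/SciPy, base-2 logarithms; the qubit state diag(0.9, 0.1) and σ = I/2 are illustrative choices, not taken from the slides), checking the ordering S_min ≤ S ≤ S_max and the min-/max-entropy identities above:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def s_rel(rho, sigma):
    """Umegaki relative entropy S(rho||sigma) in bits (full-rank arguments assumed here)."""
    log2m = lambda M: logm(M) / np.log(2)
    return float(np.real(np.trace(rho @ (log2m(rho) - log2m(sigma)))))

def s_max(rho, sigma):
    """S_max(rho||sigma) = log2 min{lam : rho <= lam*sigma}
    = log2 of the largest eigenvalue of sigma^{-1/2} rho sigma^{-1/2}, for full-rank sigma."""
    s = fractional_matrix_power(sigma, -0.5)
    return float(np.log2(np.linalg.eigvalsh(s @ rho @ s).max()))

def s_min(rho, sigma, tol=1e-10):
    """S_min(rho||sigma) = -log2 Tr(pi_rho sigma), with pi_rho the projector onto supp(rho)."""
    evals, evecs = np.linalg.eigh(rho)
    v = evecs[:, evals > tol]            # eigenvectors spanning the support of rho
    pi_rho = v @ v.conj().T
    return float(-np.log2(np.real(np.trace(pi_rho @ sigma))))

rho = np.diag([0.9, 0.1])    # an illustrative qubit state
sigma = np.eye(2) / 2        # the maximally mixed state

print(s_min(rho, sigma), s_rel(rho, sigma), s_max(rho, sigma))
# ~0.000 <= ~0.531 <= ~0.848 : the ordering S_min <= S <= S_max

# Min- and max-entropies obtained by setting sigma = I:
I2 = np.eye(2)
print(-s_max(rho, I2), -np.log2(0.9))   # H_min(rho) = -log2 ||rho||_inf
print(-s_min(rho, I2), np.log2(2))      # H_max(rho) = log2 rank(rho)
```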