On the Quantum Rényi Relative Entropies and Their Use


On the quantum Rényi relative entropies and their use

Mark M. Wilde
Hearne Institute for Theoretical Physics, Department of Physics and Astronomy, Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana, USA
[email protected]
On sabbatical with the Stanford Institute for Theoretical Physics, Stanford University, Stanford, California 94305

Mark M. Wilde (LSU) 1 / 68

Motivation

Entropy and relative entropy are at the core of information theory. The Rényi entropy and the Rényi relative entropy are fruitful generalizations of these concepts, with many applications. When generalizing to the quantum case, there is no unique way to generalize the Rényi relative entropy. In this talk, I'll review three quantum generalizations of the Rényi relative entropy and their information-processing properties, and I'll also discuss their applications in quantum information theory.

Entropy

One of the most famous formulas in all of science [Sha48]:

H(p_X) := -\sum_{x \in \mathcal{X}} p_X(x) \log_2 p_X(x)

[Figure: screen capture from the movie Arrival (2016)]

Equal to the optimal limit of classical data compression [Sha48].

Relative entropy

Relative entropy [KL51] generalizes entropy. Let p_X \in \mathcal{P}(\mathcal{X}) be a probability distribution, and let q_X be a measure. Then

D(p_X \| q_X) := \sum_{x \in \mathcal{X}} p_X(x) \log_2 \frac{p_X(x)}{q_X(x)}

It is a distinguishability measure with an interpretation in hypothesis testing.

Special cases of relative entropy

Entropy is a special case of relative entropy:

H(p_X) = -D(p_X \| 1),

where 1 is the vector of all ones. Mutual information is a special case of relative entropy:

I(X;Y) := D(p_{XY} \| p_X \otimes p_Y) = \inf_{q_Y \in \mathcal{P}(\mathcal{Y})} D(p_{XY} \| p_X \otimes q_Y)
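The two formulas above are easy to evaluate numerically for finite distributions. Below is a minimal numpy sketch (the helper names are mine, not from the talk) that computes the Shannon entropy and the relative entropy, and checks the special case H(p) = -D(p || 1):

```python
import numpy as np

def shannon_entropy(p):
    """H(p) = -sum_x p(x) log2 p(x), with the convention 0 log 0 = 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

def relative_entropy(p, q):
    """D(p||q) = sum_x p(x) log2(p(x)/q(x)); +inf if supp(p) not in supp(q)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    if np.any((p > 0) & (q == 0)):
        return np.inf
    nz = p > 0
    return np.sum(p[nz] * np.log2(p[nz] / q[nz]))

p = np.array([0.5, 0.5])
q = np.array([0.9, 0.1])
print(shannon_entropy(p))                # 1 bit for a fair coin
print(relative_entropy(p, q))            # positive, since p != q
# Entropy as a special case: H(p) = -D(p || 1), with 1 the all-ones vector
print(-relative_entropy(p, np.ones(2)))  # equals shannon_entropy(p)
```

The support check mirrors the usual convention that D(p||q) is infinite when p puts weight where q does not.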
Data-processing inequality

Let N_{Y|X} be a classical channel, and define q_Y := N_{Y|X}(q_X), where

q_Y(y) = \sum_{x \in \mathcal{X}} N_{Y|X}(y|x) \, q_X(x).

Then the data-processing inequality for relative entropy is

D(p_X \| q_X) \geq D(N_{Y|X}(p_X) \| N_{Y|X}(q_X))

Rényi relative entropy

The Rényi relative entropy is a generalization of relative entropy with parameter \alpha \in (0,1) \cup (1,\infty) [Rén61]:

D_\alpha(p_X \| q_X) := \frac{1}{\alpha - 1} \log_2 \sum_{x \in \mathcal{X}} p_X(x)^\alpha \, q_X(x)^{1-\alpha}

Special cases of Rényi relative entropy

The Rényi entropy [Rén61] is a special case:

H_\alpha(p_X) := \frac{1}{1 - \alpha} \log_2 \sum_{x \in \mathcal{X}} p_X(x)^\alpha = -D_\alpha(p_X \| 1)

The Rényi mutual information [Csi95] is defined as

I_\alpha(X;Y) := \inf_{q_Y \in \mathcal{P}(\mathcal{Y})} D_\alpha(p_{XY} \| p_X \otimes q_Y)

Properties of Rényi relative entropy

Converges to relative entropy:

D(p_X \| q_X) = \lim_{\alpha \to 1} D_\alpha(p_X \| q_X)

Data-processing inequality for all \alpha \in (0,1) \cup (1,\infty):

D_\alpha(p_X \| q_X) \geq D_\alpha(N_{Y|X}(p_X) \| N_{Y|X}(q_X))

Additivity: for probability distributions p_{X_1} and p_{X_2}, measures q_{X_1} and q_{X_2}, and all \alpha \in (0,1) \cup (1,\infty):

D_\alpha(p_{X_1} \otimes p_{X_2} \| q_{X_1} \otimes q_{X_2}) = D_\alpha(p_{X_1} \| q_{X_1}) + D_\alpha(p_{X_2} \| q_{X_2})

Ordering: for \alpha > \beta > 0,

D_\alpha(p_X \| q_X) \geq D_\beta(p_X \| q_X)

Quantum states

A quantum state \rho_A is a positive semi-definite, unit-trace operator (i.e., a Hermitian matrix with all eigenvalues non-negative and summing to one). The subscript indicates that \rho_A is a state of a quantum system A. It is also called a density operator or density matrix. \mathcal{D}(\mathcal{H}_A) denotes the set of density operators acting on a Hilbert space \mathcal{H}_A. Classical probability distributions are the special case in which the density operator is diagonal.
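The classical properties listed above (ordering in alpha, data processing, and the alpha -> 1 limit) can all be probed numerically. A short sketch, assuming full-support distributions so every term is finite (helper names are my own):

```python
import numpy as np

def renyi_rel_ent(p, q, alpha):
    """D_alpha(p||q) = log2(sum_x p(x)^alpha q(x)^(1-alpha)) / (alpha - 1),
    for full-support p, q and alpha in (0,1) u (1,oo)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log2(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)

rng = np.random.default_rng(0)
p = rng.random(4); p /= p.sum()
q = rng.random(4); q /= q.sum()

# Ordering: D_alpha >= D_beta for alpha > beta > 0
assert renyi_rel_ent(p, q, 2.0) >= renyi_rel_ent(p, q, 0.5)

# Data processing: a column-stochastic matrix acts as a classical channel
N = rng.random((4, 4)); N /= N.sum(axis=0, keepdims=True)
assert renyi_rel_ent(p, q, 0.7) >= renyi_rel_ent(N @ p, N @ q, 0.7) - 1e-12

# Convergence to relative entropy as alpha -> 1
kl = np.sum(p * np.log2(p / q))
assert abs(renyi_rel_ent(p, q, 1.0 + 1e-6) - kl) < 1e-3
print("classical Renyi checks passed")
```

A numerical check like this is no proof, but it is a quick sanity test of each property for random instances.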
Quantum channels

A quantum channel \mathcal{N}_{A \to B} is a completely positive, trace-preserving map, which evolves system A to system B. Complete positivity and trace preservation are equivalent to the Choi state

\Phi^{\mathcal{N}}_{RB} := (\mathrm{id}_R \otimes \mathcal{N}_{A \to B})(\Phi_{RA})

being positive semi-definite and satisfying \mathrm{Tr}_B[\Phi^{\mathcal{N}}_{RB}] = \frac{1}{d_R} I_R, where R \simeq A and the maximally entangled state \Phi_{RA} is defined as

\Phi_{RA} := \frac{1}{d_R} \sum_{i,j} |i\rangle\langle j|_R \otimes |i\rangle\langle j|_A.

A classical channel is the special case in which the Choi state is a diagonal density operator.

Quantum relative entropy

The quantum relative entropy of a state \rho and a positive semi-definite operator \sigma is defined as [Ume62]

D(\rho \| \sigma) := \mathrm{Tr}[\rho (\log_2 \rho - \log_2 \sigma)]

This is the standard definition, with an operational meaning [HP91, ON00].

Special cases of quantum relative entropy

Quantum entropy [vN27] is a special case of relative entropy:

H(\rho) := -\mathrm{Tr}[\rho \log_2 \rho] = -D(\rho \| I)

Quantum mutual information [Str65] is a special case of relative entropy:

I(A;B)_\rho := D(\rho_{AB} \| \rho_A \otimes \rho_B) = \inf_{\sigma_B \in \mathcal{D}(\mathcal{H}_B)} D(\rho_{AB} \| \rho_A \otimes \sigma_B)

Data-processing inequality for quantum relative entropy

Let \rho be a state, \sigma a positive semi-definite operator, and \mathcal{N} a quantum channel. Then [Lin75]

D(\rho \| \sigma) \geq D(\mathcal{N}(\rho) \| \mathcal{N}(\sigma))

Petz–Rényi relative entropy

There are at least two meaningful ways to generalize the classical Rényi relative entropy to the quantum case. Let us begin with the Petz–Rényi relative entropy [Pet86], defined for \alpha \in (0,1) \cup (1,\infty):

D_\alpha(\rho \| \sigma) := \frac{1}{\alpha - 1} \log_2 \mathrm{Tr}[\rho^\alpha \sigma^{1-\alpha}]

Properties of Petz–Rényi relative entropy

Converges to the quantum relative entropy in the limit \alpha \to 1:
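For small density matrices, both the Umegaki relative entropy and the Petz–Rényi quantity can be evaluated directly from eigendecompositions. A minimal sketch (helper names are mine; full-rank states are assumed so all matrix logarithms and powers are well defined):

```python
import numpy as np

def mat_fun(A, f):
    """Apply a scalar function f to the eigenvalues of a Hermitian matrix A."""
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.conj().T

def quantum_rel_ent(rho, sigma):
    """Umegaki relative entropy D(rho||sigma) = Tr[rho(log2 rho - log2 sigma)]."""
    diff = mat_fun(rho, np.log2) - mat_fun(sigma, np.log2)
    return np.trace(rho @ diff).real

def petz_renyi(rho, sigma, alpha):
    """Petz-Renyi D_alpha(rho||sigma) = log2 Tr[rho^a sigma^(1-a)] / (a-1)."""
    t = np.trace(mat_fun(rho, lambda w: w**alpha)
                 @ mat_fun(sigma, lambda w: w**(1.0 - alpha))).real
    return np.log2(t) / (alpha - 1.0)

def rand_state(d, rng):
    """A random full-rank density matrix."""
    G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(1)
rho, sigma = rand_state(3, rng), rand_state(3, rng)
assert abs(quantum_rel_ent(rho, rho)) < 1e-10    # D(rho||rho) = 0
assert quantum_rel_ent(rho, sigma) > 0           # Klein's inequality
assert abs(petz_renyi(rho, sigma, 1 + 1e-6)
           - quantum_rel_ent(rho, sigma)) < 1e-3  # alpha -> 1 limit
```

Diagonalizing and applying the scalar function to the eigenvalues is the standard way to compute functions of Hermitian matrices; it reduces to the classical formulas when rho and sigma are diagonal.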
D(\rho \| \sigma) = \lim_{\alpha \to 1} D_\alpha(\rho \| \sigma)

Data-processing inequality [Pet86] for all \alpha \in (0,1) \cup (1,2]:

D_\alpha(\rho \| \sigma) \geq D_\alpha(\mathcal{N}(\rho) \| \mathcal{N}(\sigma))

Additivity: for states \rho_1 and \rho_2, and positive semi-definite operators \sigma_1 and \sigma_2:

D_\alpha(\rho_1 \otimes \rho_2 \| \sigma_1 \otimes \sigma_2) = D_\alpha(\rho_1 \| \sigma_1) + D_\alpha(\rho_2 \| \sigma_2)

Ordering [MH11]: for \alpha > \beta > 0,

D_\alpha(\rho \| \sigma) \geq D_\beta(\rho \| \sigma)

Sandwiched Rényi relative entropy

A different quantum generalization of the classical Rényi relative entropy is the sandwiched Rényi relative entropy [MLDS+13, WWY14], defined for all \alpha \in (0,1) \cup (1,\infty):

\widetilde{D}_\alpha(\rho \| \sigma) := \frac{1}{\alpha - 1} \log_2 \mathrm{Tr}[(\sigma^{(1-\alpha)/2\alpha} \rho \, \sigma^{(1-\alpha)/2\alpha})^\alpha]

Properties of sandwiched Rényi relative entropy

Converges to the quantum relative entropy in the limit \alpha \to 1 [MLDS+13, WWY14]:

D(\rho \| \sigma) = \lim_{\alpha \to 1} \widetilde{D}_\alpha(\rho \| \sigma)

Data-processing inequality for all \alpha \in [1/2, 1) \cup (1,\infty) [FL13]:

\widetilde{D}_\alpha(\rho \| \sigma) \geq \widetilde{D}_\alpha(\mathcal{N}(\rho) \| \mathcal{N}(\sigma))

Additivity: for states \rho_1 and \rho_2, and positive semi-definite operators \sigma_1 and \sigma_2:

\widetilde{D}_\alpha(\rho_1 \otimes \rho_2 \| \sigma_1 \otimes \sigma_2) = \widetilde{D}_\alpha(\rho_1 \| \sigma_1) + \widetilde{D}_\alpha(\rho_2 \| \sigma_2)

Ordering [MLDS+13]: for \alpha > \beta > 0,

\widetilde{D}_\alpha(\rho \| \sigma) \geq \widetilde{D}_\beta(\rho \| \sigma)

Quantum hypothesis testing [Hel67, Hel69, Hol72, Hol73]

A quantum system is prepared in the state \rho or \sigma, and the objective is to figure out which one was prepared. Make a quantum measurement \{\Lambda, I - \Lambda\}: assign the outcome \Lambda to "guess \rho" and the outcome I - \Lambda to "guess \sigma".

Probability of committing a Type I error ("false alarm"): \mathrm{Tr}[(I - \Lambda)\rho]

Probability of committing a Type II error ("missed detection"): \mathrm{Tr}[\Lambda \sigma]

Hypothesis testing relative entropy [BD10, BD11, WR12]

Minimize the Type II error probability subject to a constraint on the Type I error probability:

D_H^\varepsilon(\rho \| \sigma) := -\log_2 \inf_{\Lambda} \{ \mathrm{Tr}[\Lambda \sigma] : \mathrm{Tr}[(I - \Lambda)\rho] \leq \varepsilon, \ 0 \leq \Lambda \leq I \}

More generally, this quantity can be defined when \sigma is positive semi-definite. It obeys the data-processing inequality (by an operational argument):

D_H^\varepsilon(\rho \| \sigma) \geq D_H^\varepsilon(\mathcal{N}(\rho) \| \mathcal{N}(\sigma))
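The sandwiched Rényi relative entropy defined above needs only matrix fractional powers, so it is just as easy to compute as the Petz quantity. The sketch below (my own helper names; full-rank states assumed) also checks the known relation that the sandwiched quantity never exceeds the Petz one, a consequence of the Araki–Lieb–Thirring inequality:

```python
import numpy as np

def mat_pow(A, t):
    """Fractional power of a positive semi-definite Hermitian matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.clip(w, 0.0, None)**t) @ V.conj().T

def petz_renyi(rho, sigma, alpha):
    t = np.trace(mat_pow(rho, alpha) @ mat_pow(sigma, 1.0 - alpha)).real
    return np.log2(t) / (alpha - 1.0)

def sandwiched_renyi(rho, sigma, alpha):
    """D~_alpha(rho||sigma) = log2 Tr[(sigma^e rho sigma^e)^alpha] / (alpha-1),
    with e = (1-alpha)/(2 alpha)."""
    e = (1.0 - alpha) / (2.0 * alpha)
    X = mat_pow(sigma, e) @ rho @ mat_pow(sigma, e)  # Hermitian and PSD
    return np.log2(np.trace(mat_pow(X, alpha)).real) / (alpha - 1.0)

def rand_state(d, rng):
    G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(2)
rho, sigma = rand_state(3, rng), rand_state(3, rng)

for alpha in (0.6, 2.0):
    # Sandwiched is never larger than Petz: D~_alpha <= D_alpha
    assert sandwiched_renyi(rho, sigma, alpha) <= petz_renyi(rho, sigma, alpha) + 1e-10
# Ordering in alpha for the sandwiched quantity
assert sandwiched_renyi(rho, sigma, 2.0) >= sandwiched_renyi(rho, sigma, 0.6)
```

For commuting rho and sigma, both quantities collapse to the classical Rényi relative entropy of the eigenvalue distributions.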
Quantum Stein's lemma [HP91, ON00]

Optimal achievable rate for hypothesis testing:

E(\rho, \sigma) := \inf_{\varepsilon \in (0,1)} \liminf_{n \to \infty} \frac{1}{n} D_H^\varepsilon(\rho^{\otimes n} \| \sigma^{\otimes n})

Optimal strong converse rate for hypothesis testing:

\widetilde{E}(\rho, \sigma) := \sup_{\varepsilon \in (0,1)} \limsup_{n \to \infty} \frac{1}{n} D_H^\varepsilon(\rho^{\otimes n} \| \sigma^{\otimes n})

We always have E(\rho, \sigma) \leq \widetilde{E}(\rho, \sigma). Quantum Stein's lemma states that

E(\rho, \sigma) = \widetilde{E}(\rho, \sigma) = D(\rho \| \sigma)

Relating quantum hypothesis testing and relative entropy

Let \rho be a state and \sigma a positive semi-definite operator.

Lower bound [Hay07, AMV12, QWW18]: for \varepsilon \in (0,1] and \alpha \in (0,1):

D_H^\varepsilon(\rho \| \sigma) \geq D_\alpha(\rho \| \sigma) + \frac{\alpha}{\alpha - 1} \log_2 \frac{1}{\varepsilon}

Upper bound [CMW16]: for \varepsilon \in [0,1) and \alpha \in (1,\infty):

D_H^\varepsilon(\rho \| \sigma) \leq \widetilde{D}_\alpha(\rho \| \sigma) + \frac{\alpha}{\alpha - 1} \log_2 \frac{1}{1 - \varepsilon}

Application of the lower bound

Apply the lower bound and additivity to find, for all \varepsilon \in (0,1] and \alpha \in (0,1), that

\frac{1}{n} D_H^\varepsilon(\rho^{\otimes n} \| \sigma^{\otimes n}) \geq \frac{1}{n} D_\alpha(\rho^{\otimes n} \| \sigma^{\otimes n}) + \frac{\alpha}{n(\alpha - 1)} \log_2 \frac{1}{\varepsilon} = D_\alpha(\rho \| \sigma) + \frac{\alpha}{n(\alpha - 1)} \log_2 \frac{1}{\varepsilon}

Take the n \to \infty limit to find, for all \alpha \in (0,1), that

\liminf_{n \to \infty} \frac{1}{n} D_H^\varepsilon(\rho^{\otimes n} \| \sigma^{\otimes n}) \geq D_\alpha(\rho \| \sigma)

Since the lower bound holds for all \alpha \in (0,1), conclude that

\liminf_{n \to \infty} \frac{1}{n} D_H^\varepsilon(\rho^{\otimes n} \| \sigma^{\otimes n}) \geq \sup_{\alpha \in (0,1)} D_\alpha(\rho \| \sigma) = D(\rho \| \sigma)
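For commuting (diagonal) rho and sigma, the optimization defining D_H^eps reduces to a classical Neyman–Pearson problem, solved exactly by a likelihood-ratio test with a possibly fractional test on the boundary outcome. The sketch below (illustrative only; the helper names are mine) computes this classical value and numerically confirms the lower bound D_H^eps >= D_alpha + (alpha/(alpha-1)) log2(1/eps) for alpha in (0,1):

```python
import numpy as np

def classical_dh(p, q, eps):
    """D_H^eps for commuting states: minimize sum(lam*q) subject to
    sum(lam*p) >= 1 - eps and 0 <= lam <= 1 (Neyman-Pearson test)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    order = np.argsort(-(p / q))      # take largest likelihood ratios first
    need, beta = 1.0 - eps, 0.0
    for pi, qi in zip(p[order], q[order]):
        if need <= 1e-12:
            break
        if pi <= 0.0:
            continue                  # zero-probability outcomes never help
        lam = min(1.0, need / pi)     # fractional lam on the boundary outcome
        beta += lam * qi
        need -= lam * pi
    return -np.log2(beta)

def renyi_rel_ent(p, q, alpha):
    return np.log2(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)

rng = np.random.default_rng(3)
p = rng.random(8); p /= p.sum()
q = rng.random(8); q /= q.sum()

eps = 0.1
for alpha in (0.3, 0.5, 0.9):
    lower = renyi_rel_ent(p, q, alpha) + (alpha / (alpha - 1.0)) * np.log2(1.0 / eps)
    assert classical_dh(p, q, eps) >= lower - 1e-9
print("lower bound verified on a random classical instance")
```

In the general (non-commuting) quantum case, D_H^eps is a semidefinite program rather than a sorting problem, so this sketch covers only the classical special case.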