Thermodynamics from Relative Entropy



Thermodynamics from relative entropy
Stefan Floerchinger (Heidelberg U.), work with Tobias Haas, Natalia Sanchez-Kuntz and Neil Dowling
STRUCTURES Jour fixe, Heidelberg, 03 July 2020, 1:30 PM, by ZOOM video webinar system.
Speaker: PD Dr. Stefan Flörchinger, ITP (Uni Heidelberg). Website: https://zoom.us/join, Meeting-ID: 994 4577 2932, Password: 744991, Contact: [email protected]

Entropy and information [Claude Shannon (1948); also Ludwig Boltzmann, Willard Gibbs (~1875)]
- consider a random variable x with probability distribution p(x)
- the information content or "surprise" associated with outcome x is i(x) = −ln p(x)
  [figure: i(x) = −ln p(x) as a function of p(x)]
- entropy is the expectation value of the information content: S(p) = ⟨i(x)⟩ = −Σ_x p(x) ln p(x)
  [figure: three example distributions p(x) with S = 0, S = ln 2 and S = 2 ln 2]

Thermodynamics [..., Antoine Laurent de Lavoisier, Nicolas Léonard Sadi Carnot, Hermann von Helmholtz, Rudolf Clausius, Ludwig Boltzmann, James Clerk Maxwell, Max Planck, Walter Nernst, Willard Gibbs, ...]
- microcanonical ensemble: maximum entropy S for given conserved quantities E, N in a given volume V
- starting point for the development of thermodynamics:
  S(E, N, V),  dS = (1/T) dE − (μ/T) dN + (p/T) dV
- grand canonical ensemble with density operator ρ = (1/Z) exp(−(H − μN)/T)
- Einstein's probability for classical thermal fluctuations: dW ∼ e^{S(ξ)} dξ

Fluid dynamics
- uses thermodynamics locally: T(x), μ(x), u^μ(x), ...
- evolution from conservation laws: ∇_μ T^{μν}(x) = 0, ∇_μ N^μ(x) = 0
- local dissipation = local entropy production: ∇_μ s^μ(x) = ∂_t s(x) + ∇⃗·s⃗(x) ≥ 0
- in the Navier–Stokes approximation with shear viscosity η and bulk viscosity ζ:
  ∇_μ s^μ = (1/T) [2η σ_{μν} σ^{μν} + ζ (∇_ρ u^ρ)²]
- how to understand this in quantum field theory?

Entropy in quantum theory [John von Neumann (1932)]
- S = −Tr{ρ ln ρ}, based on the quantum density operator ρ
- for pure states ρ = |ψ⟩⟨ψ| one has S = 0
- for diagonal mixed states ρ = Σ_j p_j |j⟩⟨j| one has S = −Σ_j p_j ln p_j > 0
- unitary time evolution conserves entropy: −Tr{(UρU†) ln(UρU†)} = −Tr{ρ ln ρ}, so S = const.
- quantum information is globally conserved

Quantum entanglement
- "Can quantum-mechanical description of physical reality be considered complete?" [Albert Einstein, Boris Podolsky, Nathan Rosen (1935); David Bohm (1951)]
  |ψ⟩ = (1/√2)(|↑⟩_A |↓⟩_B − |↓⟩_A |↑⟩_B) = (1/√2)(|→⟩_A |←⟩_B − |←⟩_A |→⟩_B)
- "Bertlmann's socks and the nature of reality" [John Stewart Bell (1980)]

Entropy and entanglement
- consider a split of a quantum system into two parts, A + B
- reduced density operator for system A: ρ_A = Tr_B{ρ}
- entropy associated with subsystem A: S_A = −Tr_A{ρ_A ln ρ_A}
- a pure product state ρ = ρ_A ⊗ ρ_B leads to S_A = 0
- a pure entangled state ρ ≠ ρ_A ⊗ ρ_B leads to S_A > 0
- S_A is called the entanglement entropy

Entanglement entropy in relativistic quantum field theory
- the entanglement entropy of a region A is a local notion of entropy: S_A = −Tr_A{ρ_A ln ρ_A}, ρ_A = Tr_B{ρ}
- for relativistic quantum field theories it is infinite:
  S_A = (const/ε^{d−2}) ∫_{∂A} d^{d−2}σ √h + subleading divergences + finite terms (ε a UV cutoff)
- the UV divergence is proportional to the entangling surface
- relativistic quantum fields are very strongly entangled already in the vacuum
- Theorem [Helmut Reeh & Siegfried Schlieder (1961)]: local operators in region A can create all (non-local) particle states

Entanglement entropy in non-relativistic quantum field theory [Natalia Sanchez-Kuntz & Stefan Floerchinger, in preparation]
- non-relativistic quantum field theory for a Bose gas:
  S = ∫ dt d^{d−1}x { φ* [i∂_t + ∇⃗²/(2M) + μ] φ − (λ/2) φ*² φ² }
- Bogoliubov dispersion relation:
  ω = √( (p⃗²/2M)(p⃗²/2M + 2λρ) ) ≈ c_s|p⃗| for p ≪ √(2Mλρ) (phonons), ≈ p⃗²/2M for p ≫ √(2Mλρ) (particles)
  [figure: dispersion relation ω(p)]
- the entanglement entropy S_A vanishes for ρ = 0 and ω = p⃗²/2M
- for a large region A it behaves like in the relativistic theory
- the inverse healing length √(2Mλρ) acts as a UV regulator

Relative entropy
- classical relative entropy or Kullback–Leibler divergence: S(p‖q) = Σ_j p_j ln(p_j/q_j)
- not a symmetric distance measure, but a divergence: S(p‖q) ≥ 0, and S(p‖q) = 0 ⇔ p = q
- quantum relative entropy of two density matrices (also a divergence): S(ρ‖σ) = Tr{ρ (ln ρ − ln σ)}
- signals how well the state ρ can be distinguished from a model σ

Significance of the Kullback–Leibler divergence
- Uncertainty deficit: for a true distribution p_j and a model distribution q_j, the uncertainty deficit is the expected surprise ⟨−ln q_j⟩ = −Σ_j p_j ln q_j minus the real information content −Σ_j p_j ln p_j:
  S(p‖q) = (−Σ_j p_j ln q_j) − (−Σ_j p_j ln p_j)
- Asymptotic frequencies: for a true distribution q_j and the frequencies p_j = N(x_j)/N after N drawings, the probability to find the frequencies p_j for large N goes like e^{−N S(p‖q)}; the probability for a fluctuation around the expectation value ⟨p_j⟩ = q_j tends to zero for large N and when the divergence S(p‖q) is large

Advantages of relative entropy
- Continuum limit: p_j → f(x) dx, q_j → g(x) dx
- the continuum limit is not well defined for entropy: S = −Σ_j p_j ln p_j → −∫ dx f(x) [ln f(x) + ln dx]
- relative entropy remains well defined: S(p‖q) → S(f‖g) = ∫ dx f(x) ln(f(x)/g(x))
- Local quantum field theory: the entanglement entropy S(ρ_A) of a spatial region is divergent in relativistic QFT, but the relative entanglement entropy S(ρ_A‖σ_A) is well defined; a rigorous definition exists in terms of the Tomita–Takesaki theory of modular automorphisms on von Neumann algebras [Huzihiro Araki (1976)]

Monotonicity of relative entropy [Göran Lindblad (1975)]
- monotonicity of relative entropy: S(N(ρ)‖N(σ)) ≤ S(ρ‖σ), with N a completely positive, trace-preserving map
- for unitary time evolution: S(N(ρ)‖N(σ)) = S(ρ‖σ)
- for open-system evolution with generation of entanglement to the environment: S(N(ρ)‖N(σ)) < S(ρ‖σ)
- basis for many proofs in quantum information theory
- leads naturally to second-law type relations

Thermodynamics from relative entropy [Stefan Floerchinger & Tobias Haas, arXiv:2004.13533 (2020)]
- relative entropy has very nice properties, but can thermodynamics be derived from it?
- can entropy be replaced by relative entropy?

Principle of maximum entropy [Edwin Thompson Jaynes (1963)]
- take macroscopic state characteristics as fixed, e.g. energy E, particle number N, momentum p⃗
- principle of maximum entropy: among all possible microstates σ (or distributions q), the one with maximum entropy S is preferred: S(σ_thermal) = max
- why? if S(σ) < max, then σ would contain additional information not determined by the macroscopic variables, which is not available
- maximum entropy = minimal information

Principle of minimum expected relative entropy [Stefan Floerchinger & Tobias Haas, arXiv:2004.13533 (2020)]
- take macroscopic state characteristics as fixed, e.g. energy E, particle number N, momentum p⃗
- principle of minimum expected relative entropy: preferred is the model σ from which the allowed states ρ are least distinguishable on average:
  ⟨S(ρ‖σ_thermal)⟩ = ∫ Dρ S(ρ‖σ_thermal) = min
- similarly for classical probability distributions: ⟨S(p‖q)⟩ = ∫ Dp S(p‖q) = min
- this requires measures Dp and Dρ on the spaces of probability distributions p and density matrices ρ, respectively

Measure on the space of probability distributions
- consider the set of normalized probability distributions p in agreement with the macroscopic constraints
- this is a manifold with local coordinates ξ = {ξ¹, ..., ξ^m}
- integration in terms of coordinates: ∫ Dp = ∫ dξ¹ ··· dξ^m μ(ξ¹, ..., ξ^m)
- we want this to be invariant under coordinate changes ξ → ξ′(ξ)
- a possible choice is the Jeffreys prior as integral measure [Harold Jeffreys (1946)]: μ(ξ) = const × √(det g_{αβ}(ξ))
- it uses a Riemannian metric g_{αβ}(ξ) on the space of probability distributions, the Fisher information metric [Ronald Aylmer Fisher (1925)]:
  g_{αβ}(ξ) = Σ_j (∂p_j(ξ)/∂ξ^α) (∂ ln p_j(ξ)/∂ξ^β)

Permutation invariance
- we can now integrate functions of p: ∫ Dp f(p) = ∫ d^mξ μ(ξ) f(p(ξ))
- consider maps {p_1, ..., p_N} → {p_{Π(1)}, ..., p_{Π(N)}} where j → Π(j) is a permutation, abbreviated p → Π(p)
- we want to show Dp = DΠ(p), such that ∫ Dp f(p) = ∫ Dp f(Π(p))
- it is convenient to choose coordinates p_j = (ξ^j)² for j = 1, ..., N−1 and p_N = 1 − (ξ¹)² − ... − (ξ^{N−1})²
- which allows one to write
  ∫ Dp = (1/Ω_{N−1}) ∫ dξ¹ ··· dξ^N δ(1 − √(Σ_{α=1}^N (ξ^α)²)) = ∫ DΠ(p)

Minimizing expected relative entropy
- consider now the functional B(q, λ) = ∫ Dp [ S(p‖q) + λ (Σ_i q_i − 1) ]
- variation with respect to q_j: 0 = δB = ∫ Dp Σ_j (−p_j/q_j + λ) δq_j
- by permutation invariance this leads to the uniform distribution q_j = ⟨p_j⟩ = 1/N
- the microcanonical distribution has minimum expected relative entropy: it is least distinguishable within the set of allowed distributions!

Measure on the space of density matrices
- a measure Dρ on the space of density matrices can be defined similarly in terms of coordinates ξ, but now using the quantum Fisher information metric
  g_{αβ}(ξ) = Tr{ (∂ρ(ξ)/∂ξ^α) (∂ ln ρ(ξ)/∂ξ^β) }
- the definition uses the symmetric logarithmic derivative, such that (1/2) ρ (d ln ρ) + (1/2) (d ln ρ) ρ = dρ
- it also appears as the limit of the relative entropy for states that approach each other:
  S(ρ(ξ + dξ)‖ρ(ξ)) = (1/2) g_{αβ}(ξ) dξ^α dξ^β + ...

Unitary transformations as isometries
- consider a unitary map ρ(ξ) → ρ′(ξ) = U ρ(ξ) U† = ρ(ξ′), again a normalized density matrix, but at the coordinate point ξ′
- the induced map on coordinates ξ → ξ′(ξ) is an isometry: g_{αβ}(ξ) dξ^α dξ^β = g_{αβ}(ξ′) dξ′^α dξ′^β
- this can be used to show the invariance of the measure, such that ∫ Dρ f(ρ) = ∫ Dρ f(U ρ U†)

Minimizing expected relative entropy
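The classical minimization result above (the uniform, i.e. microcanonical, distribution has minimum expected relative entropy) can be spot-checked numerically. A minimal Python/NumPy sketch, assuming the standard identification of the Jeffreys prior on the probability simplex with a symmetric Dirichlet(1/2) distribution; the variable names and the comparison distribution are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 50000

# Jeffreys prior on the probability simplex = symmetric Dirichlet(1/2)
# (a standard identification, assumed here)
p = rng.dirichlet(np.full(N, 0.5), size=M)

def kl(p, q):
    """Kullback-Leibler divergence S(p||q) = sum_j p_j ln(p_j / q_j), row-wise."""
    return np.sum(p * np.log(p / q), axis=-1)

q_uniform = np.full(N, 1.0 / N)
q_skewed = np.array([0.4, 0.3, 0.2, 0.1])  # any non-uniform model for comparison

mean_uniform = kl(p, q_uniform).mean()
mean_skewed = kl(p, q_skewed).mean()
assert mean_uniform < mean_skewed  # the uniform model is least distinguishable
```

Since ⟨S(p‖q)⟩ = const − Σ_j ⟨p_j⟩ ln q_j, any model other than q_j = ⟨p_j⟩ = 1/N increases the average, which the sample means reflect.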
Recommended publications
  • Quantum Information, Entanglement and Entropy
    QUANTUM INFORMATION, ENTANGLEMENT AND ENTROPY Siddharth Sharma Abstract In this review I give an introduction to axiomatic Quantum Mechanics, Quantum Information and its measure, entropy. I also give an introduction to entanglement and its measure, defined as the minimum relative quantum entropy of a density matrix of a Hilbert space with respect to all separable density matrices of the same Hilbert space. I also discuss geodesics in the space of density matrices, their orthogonality, and the Pythagorean theorem for density matrices. Postulates of Quantum Mechanics The basic postulate of quantum mechanics concerns the Hilbert space formalism: • Postulate 0: To each quantum mechanical system is associated a complex Hilbert space ℍ. The set of all pure states is ℍ/∼ := {⟦f⟧ | ⟦f⟧ := {h : h ∼ f}}, where h ∼ f ⇔ ∃ z ∈ ℂ such that h = z f, and z is called a phase. • Postulate 1: The physical states of a quantum mechanical system are described by statistical operators acting on the Hilbert space. A density matrix, or statistical operator, ρ is a positive operator of trace 1 on the Hilbert space; ρ is called positive if ⟨x, ρx⟩ ≥ 0 for all x ∈ ℍ. • Postulate 2: The observables of a quantum mechanical system are described by self-adjoint operators acting on the Hilbert space. A self-adjoint operator A on a Hilbert space ℍ is a linear operator A: ℍ → ℍ which satisfies ⟨Ax, y⟩ = ⟨x, Ay⟩ for x, y ∈ ℍ. Lemma: The density matrices acting on a Hilbert space form a convex set whose extreme points are the pure states. Proof. Denote by Σ the set of density matrices.
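The lemma quoted above can be illustrated concretely: density matrices form a convex set, and the purity Tr ρ² separates the extreme points (pure states) from proper mixtures. A minimal qubit sketch in Python/NumPy (names illustrative, not from the reviewed text):

```python
import numpy as np

# a pure state and an equal mixture of two orthogonal pure states (qubit)
psi = np.array([1.0, 0.0])
phi = np.array([0.0, 1.0])
rho_pure = np.outer(psi, psi)
rho_mixed = 0.5 * np.outer(psi, psi) + 0.5 * np.outer(phi, phi)

for rho in (rho_pure, rho_mixed):
    assert abs(np.trace(rho) - 1.0) < 1e-12          # unit trace
    assert np.all(np.linalg.eigvalsh(rho) > -1e-12)  # positive semidefinite

# purity Tr(rho^2): 1 exactly for the extreme (pure) point, < 1 for the mixture
print(np.trace(rho_pure @ rho_pure), np.trace(rho_mixed @ rho_mixed))  # 1.0 0.5
```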
  • A Mini-Introduction to Information Theory
    A Mini-Introduction To Information Theory Edward Witten School of Natural Sciences, Institute for Advanced Study, Einstein Drive, Princeton, NJ 08540 USA Abstract This article consists of a very short introduction to classical and quantum information theory. Basic properties of the classical Shannon entropy and the quantum von Neumann entropy are described, along with related concepts such as classical and quantum relative entropy, conditional entropy, and mutual information. A few more detailed topics are considered in the quantum case. arXiv:1805.11965v5 [hep-th] 4 Oct 2019 Contents: 1 Introduction; 2 Classical Information Theory (2.1 Shannon Entropy; 2.2 Conditional Entropy; 2.3 Relative Entropy; 2.4 Monotonicity of Relative Entropy); 3 Quantum Information Theory: Basic Ingredients (3.1 Density Matrices; 3.2 Quantum Entropy; 3.3 Concavity; 3.4 Conditional and Relative Quantum Entropy; 3.5 Monotonicity of Relative Entropy; 3.6 Generalized Measurements; 3.7 Quantum Channels; 3.8 Thermodynamics And Quantum Channels); 4 More On Quantum Information Theory (4.1 Quantum Teleportation and Conditional Entropy; 4.2 Quantum Relative Entropy And Hypothesis Testing; 4.3 Encoding
  • Some Trace Inequalities for Exponential and Logarithmic Functions
    Bull. Math. Sci. https://doi.org/10.1007/s13373-018-0123-3 Some trace inequalities for exponential and logarithmic functions Eric A. Carlen · Elliott H. Lieb Received: 8 October 2017 / Revised: 17 April 2018 / Accepted: 24 April 2018 © The Author(s) 2018 Abstract Consider a function F(X, Y) of pairs of positive matrices with values in the positive matrices such that whenever X and Y commute, F(X, Y) = X^p Y^q. Our first main result gives conditions on F such that Tr[X log(F(Z, Y))] ≤ Tr[X(p log X + q log Y)] for all X, Y, Z such that Tr Z = Tr X. (Note that Z is absent from the right side of the inequality.) We give several examples of functions F to which the theorem applies. Our theorem allows us to give simple proofs of the well known logarithmic inequalities of Hiai and Petz and several new generalizations of them which involve three variables X, Y, Z instead of just X, Y alone. The investigation of these logarithmic inequalities is closely connected with three quantum relative entropy functionals: the standard Umegaki quantum relative entropy D(X‖Y) = Tr[X(log X − log Y)], and two others, the Donald relative entropy D_D(X‖Y) and the Belavkin–Staszewski relative entropy D_BS(X‖Y). They are known to satisfy D_D(X‖Y) ≤ D(X‖Y) ≤ D_BS(X‖Y). We prove that the Donald relative entropy provides the sharp upper bound, independent of Z, on Tr[X log(F(Z, Y))] in a number of cases in which F(Z, Y) is homogeneous of degree 1 in Z and −1 in Y.
  • Lecture 18 — October 26, 2015
    PHYS 7895: Quantum Information Theory Fall 2015 Lecture 18 | October 26, 2015 Prof. Mark M. Wilde Scribe: Mark M. Wilde This document is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. 1 Overview In the previous lecture, we discussed classical entropy and entropy inequalities. In this lecture, we discuss several information measures that are important for quantifying the amount of information and correlations that are present in quantum systems. The first fundamental measure that we introduce is the von Neumann entropy. It is the quantum generalization of the Shannon entropy, but it captures both classical and quantum uncertainty in a quantum state. The von Neumann entropy gives meaning to a notion of the information qubit. This notion is different from that of the physical qubit, which is the description of a quantum state of an electron or a photon. The information qubit is the fundamental quantum informational unit of measure, determining how much quantum information is present in a quantum system. The initial definitions here are analogous to the classical definitions of entropy, but we soon discover a radical departure from the intuitive classical notions from the previous chapter: the conditional quantum entropy can be negative for certain quantum states. In the classical world, this negativity simply does not occur, but it takes a special meaning in quantum information theory. Pure quantum states that are entangled have stronger-than-classical correlations and are examples of states that have negative conditional entropy. The negative of the conditional quantum entropy is so important in quantum information theory that we even have a special name for it: the coherent information.
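The negative conditional entropy described here is easy to exhibit numerically. A short Python/NumPy sketch (helper names are illustrative) computing H(AB) − H(B) for a Bell state, which comes out to −1 bit:

```python
import numpy as np

def von_neumann_entropy(rho, base=2):
    """H(rho) = -Tr(rho log rho) via eigenvalues; zero eigenvalues contribute 0."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals) / np.log(base)))

# Bell state |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_ab = np.outer(phi, phi)

# reduced state of B: partial trace over A (indices ordered a, b, a', b')
rho_b = rho_ab.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

h_ab = von_neumann_entropy(rho_ab)  # 0: the global state is pure
h_b = von_neumann_entropy(rho_b)    # 1 bit: the reduced state is maximally mixed
print(h_ab - h_b)                   # conditional entropy H(A|B) is negative
```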
  • From Joint Convexity of Quantum Relative Entropy to a Concavity Theorem of Lieb
    PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 140, Number 5, May 2012, Pages 1757–1760 S 0002-9939(2011)11141-9 Article electronically published on August 4, 2011 FROM JOINT CONVEXITY OF QUANTUM RELATIVE ENTROPY TO A CONCAVITY THEOREM OF LIEB JOEL A. TROPP (Communicated by Marius Junge) Abstract. This paper provides a succinct proof of a 1973 theorem of Lieb that establishes the concavity of a certain trace function. The development relies on a deep result from quantum information theory, the joint convexity of quantum relative entropy, as well as a recent argument due to Carlen and Lieb. 1. Introduction In his 1973 paper on trace functions, Lieb establishes an important concavity theorem [Lie73, Thm. 6] concerning the trace exponential. Theorem 1 (Lieb). Let H be a fixed self-adjoint matrix. The map A ↦ tr exp(H + log A) (1) is concave on the positive-definite cone. The most direct proof of Theorem 1 is due to Epstein [Eps73]; see Ruskai's papers [Rus02, Rus05] for a condensed version of this argument. Lieb's original proof develops the concavity of the function (1) as a corollary of another deep concavity theorem [Lie73, Thm. 1]. In fact, many convexity and concavity theorems for trace functions are equivalent with each other, in the sense that the mutual implications follow from relatively easy arguments. See [Lie73, §5] and [CL08, §5] for discussions of this point. The goal of this paper is to demonstrate that a modicum of convex analysis allows us to derive Theorem 1 directly from another major theorem, the joint convexity of the quantum relative entropy.
  • Classical, Quantum and Total Correlations
    Classical, quantum and total correlations L. Henderson∗ and V. Vedral∗∗ ∗Department of Mathematics, University of Bristol, University Walk, Bristol BS8 1TW ∗∗Optics Section, Blackett Laboratory, Imperial College, Prince Consort Road, London SW7 2BZ Abstract We discuss the problem of separating consistently the total correlations in a bipartite quantum state into a quantum and a purely classical part. A measure of classical correlations is proposed and its properties are explored. In quantum information theory it is common to distinguish between purely classical information, measured in bits, and quantum information, which is measured in qubits. These differ in the channel resources required to communicate them. Qubits may not be sent by a classical channel alone, but must be sent either via a quantum channel which preserves coherence or by teleportation through an entangled channel with two classical bits of communication [?]. In this context, one qubit is equivalent to one unit of shared entanglement, or 'e-bit', together with two classical bits. Any bipartite quantum state may be used as a communication channel with some degree of success, and so it is of interest to determine how to separate the correlations it contains into a classical and an entangled part. A number of measures of entanglement and of total correlations have been proposed in recent years [?, ?, ?, ?, ?]. However, it is still not clear how to quantify the purely classical part of the total bipartite correlations. In this paper we propose a possible measure of classical correlations and investigate its properties. We first review the existing measures of entangled and total correlations.
  • Quantum Information Chapter 10. Quantum Shannon Theory
    Quantum Information Chapter 10. Quantum Shannon Theory John Preskill Institute for Quantum Information and Matter California Institute of Technology Updated June 2016 For further updates and additional chapters, see: http://www.theory.caltech.edu/people/preskill/ph219/ Please send corrections to [email protected] Contents: 10 Quantum Shannon Theory; 10.1 Shannon for Dummies (10.1.1 Shannon entropy and data compression; 10.1.2 Joint typicality, conditional entropy, and mutual information; 10.1.3 Distributed source coding; 10.1.4 The noisy channel coding theorem); 10.2 Von Neumann Entropy (10.2.1 Mathematical properties of H(ρ); 10.2.2 Mixing, measurement, and entropy; 10.2.3 Strong subadditivity; 10.2.4 Monotonicity of mutual information; 10.2.5 Entropy and thermodynamics; 10.2.6 Bekenstein's entropy bound; 10.2.7 Entropic uncertainty relations); 10.3 Quantum Source Coding (10.3.1 Quantum compression: an example; 10.3.2 Schumacher compression in general); 10.4 Entanglement Concentration and Dilution; 10.5 Quantifying Mixed-State Entanglement (10.5.1 Asymptotic irreversibility under LOCC; 10.5.2 Squashed entanglement; 10.5.3 Entanglement monogamy); 10.6 Accessible Information (10.6.1 How much can we learn from a measurement?; 10.6.2 Holevo bound; 10.6.3 Monotonicity of Holevo χ; 10.6.4 Improved distinguishability through coding: an example; 10.6.5 Classical capacity of a quantum channel; 10.6.6 Entanglement-breaking channels); 10.7 Quantum Channel Capacities and Decoupling
  • Geometry of Quantum States
    Geometry of Quantum States Ingemar Bengtsson and Karol Życzkowski An Introduction to Quantum Entanglement 12 Density matrices and entropies "A given object of study cannot always be assigned a unique value, its 'entropy'. It may have many different entropies, each one worthwhile." – Harold Grad In quantum mechanics, the von Neumann entropy S(ρ) = −Tr ρ ln ρ (12.1) plays a role analogous to that played by the Shannon entropy in classical probability theory. They are both functionals of the state, they are both monotone under a relevant kind of mapping, and they can be singled out uniquely by natural requirements. In section 2.2 we recounted the well known anecdote according to which von Neumann helped to christen Shannon's entropy. Indeed von Neumann's entropy is older than Shannon's, and it reduces to the Shannon entropy for diagonal density matrices. But in general the von Neumann entropy is a subtler object than its classical counterpart. So is the quantum relative entropy, which depends on two density matrices that perhaps cannot be diagonalized at the same time. Quantum theory is a non-commutative probability theory. Nevertheless, as a rule of thumb we can pass between the classical discrete, classical continuous, and quantum cases by choosing between sums, integrals, and traces. While this rule of thumb has to be used cautiously, it will give us quantum counterparts of most of the concepts introduced in chapter 2, and conversely we can recover chapter 2 by restricting the matrices of this chapter to be diagonal. 12.1 Ordering operators The study of quantum entropy is to a large extent a study in inequalities, and this is where we begin.
  • Monotonicity of the Quantum Relative Entropy Under Positive Maps (arXiv:1512.06117)
    Monotonicity of the Quantum Relative Entropy Under Positive Maps Alexander Müller-Hermes¹ and David Reeb² ¹QMATH, Department of Mathematical Sciences, University of Copenhagen, 2100 Copenhagen, Denmark ²Institute for Theoretical Physics, Leibniz Universität Hannover, 30167 Hannover, Germany We prove that the quantum relative entropy decreases monotonically under the application of any positive trace-preserving linear map, for underlying separable Hilbert spaces. This answers in the affirmative a natural question that has been open for a long time, as monotonicity had previously only been shown to hold under additional assumptions, such as complete positivity or Schwarz-positivity of the adjoint map. The first step in our proof is to show monotonicity of the sandwiched Rényi divergences under positive trace-preserving maps, extending a proof of the data processing inequality by Beigi [J. Math. Phys. 54, 122202 (2013)] that is based on complex interpolation techniques. Our result calls into question several measures of non-Markovianity that have been proposed, as these would assess all positive trace-preserving time evolutions as Markovian. I. INTRODUCTION For any pair of quantum states ρ, σ acting on the same Hilbert space H, the quantum relative entropy is defined by D(ρ‖σ) := tr[ρ(log ρ − log σ)] if supp[ρ] ⊆ supp[σ], and +∞ otherwise. (1) It was first introduced by Umegaki [43] as a quantum generalization of the classical Kullback-Leibler divergence between probability distributions [18]. Both are distance-like measures, also called divergences, and have found a multitude of applications and operational interpretations in diverse fields like information theory, statistics, and thermodynamics. We refer to the review [45] and the monograph [30] for more details on the quantum relative entropy.
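The data-processing inequality discussed in this abstract can be spot-checked for a standard completely positive trace-preserving map. A Python/NumPy sketch, using a depolarizing channel as the test map and an eigendecomposition for the matrix logarithm (valid here because random states are generically full rank); all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_density_matrix(d):
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T                      # positive semidefinite
    return rho / np.trace(rho).real           # unit trace

def matrix_log(rho):
    w, v = np.linalg.eigh(rho)                # valid for full-rank states
    return v @ np.diag(np.log(w)) @ v.conj().T

def relative_entropy(rho, sigma):
    """Umegaki relative entropy D(rho||sigma) = tr[rho(log rho - log sigma)]."""
    return float(np.trace(rho @ (matrix_log(rho) - matrix_log(sigma))).real)

def depolarize(rho, p):
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d  # completely positive, trace preserving

rho, sigma = random_density_matrix(3), random_density_matrix(3)
d_before = relative_entropy(rho, sigma)
d_after = relative_entropy(depolarize(rho, 0.3), depolarize(sigma, 0.3))
assert d_after <= d_before + 1e-9             # monotonicity under the channel
```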
  • Lecture Notes Live Here
    Physics 239/139: Quantum information is physical Spring 2018 Lecturer: McGreevy These lecture notes live here. Please email corrections to mcgreevy at physics dot ucsd dot edu. THESE NOTES ARE SUPERSEDED BY THE NOTES HERE Schrödinger's cat and Maxwell's demon, together at last. Last updated: 2021/05/09, 15:30:29 Contents: 0.1 Introductory remarks; 0.2 Conventions; 0.3 Lightning quantum mechanics reminder; 1 Hilbert space is a myth (1.1 Mean field theory is product states; 1.2 The local density matrix is our friend; 1.3 Complexity and the convenient illusion of Hilbert space); 2 Quantifying information (2.1 Relative entropy; 2.2 Data compression; 2.3 Noisy channels; 2.4 Error-correcting codes); 3 Information is physical (3.1 Cost of erasure; 3.2 Second Laws of Thermodynamics); 4 Quantifying quantum information and quantum ignorance (4.1 von Neumann entropy; 4.2 Quantum relative entropy; 4.3 Purification, part 1; 4.4 Schumacher compression; 4.5 Quantum channels; 4.6 Channel duality; 4.7 Purification, part 2; 4.8 Deep
  • Quantum Entropy and Its Applications to Quantum Communication and Statistical Physics
    Entropy 2010, 12, 1194-1245; doi:10.3390/e12051194 OPEN ACCESS entropy ISSN 1099-4300 www.mdpi.com/journal/entropy Review Quantum Entropy and Its Applications to Quantum Communication and Statistical Physics Masanori Ohya∗ and Noboru Watanabe∗ Department of Information Sciences, Tokyo University of Science, Noda City, Chiba 278-8510, Japan ∗ Authors to whom correspondence should be addressed; E-Mails: [email protected] (M.O.); [email protected] (N.W.). Received: 10 February 2010 / Accepted: 30 April 2010 / Published: 7 May 2010 Abstract: Quantum entropy is a fundamental concept for quantum information, recently developed in various directions. We review the mathematical aspects of quantum entropy (entropies) and discuss some applications to quantum communication and statistical physics. All topics taken up here are somehow related to the quantum entropy that the present authors have studied. Many other fields recently developed in quantum information theory, such as quantum algorithms, quantum teleportation, quantum cryptography, etc., are fully discussed in the book (reference number 60). Keywords: quantum entropy; quantum information 1. Introduction The theoretical foundation supporting today's information-oriented society is Information Theory, founded by Shannon [1] about 60 years ago. Generally, this theory can treat the efficiency of information transmission by using measures of complexity, that is, the entropy, in the commutative system of signal space. Information theory is based on entropy theory, which is formulated mathematically. Before Shannon's work, entropy was first introduced in thermodynamics by Clausius and in statistical mechanics by Boltzmann. These entropies are criteria to characterize a property of physical systems.
  • Quasi-Factorization and Multiplicative Comparison of Subalgebra-Relative Entropy (arXiv:1912.00983)
    QUASI-FACTORIZATION AND MULTIPLICATIVE COMPARISON OF SUBALGEBRA-RELATIVE ENTROPY NICHOLAS LARACUENTE Abstract. Purely multiplicative comparisons of quantum relative entropy are desirable but challenging to prove. We show such comparisons for relative entropies between comparable densities, including subalgebra-relative entropy and its perturbations. These inequalities are asymptotically tight in approaching known, tight inequalities as perturbation size approaches zero. Following, we obtain a quasi-factorization inequality, which compares the relative entropies to each of several subalgebraic restrictions with that to their intersection. We apply quasi-factorization to uncertainty-like relations and to conditional expectations arising from graphs. Quasi-factorization yields decay estimates of optimal asymptotic order on mixing processes described by finite, connected, undirected graphs. 1. Introduction The strong subadditivity (SSA) of von Neumann entropy states that for a tripartite density ρ_ABC on systems ABC, H(AC)_ρ + H(BC)_ρ ≥ H(ABC)_ρ + H(C)_ρ, (1) where the von Neumann entropy H(ρ) := −tr(ρ log ρ) and H(A)_ρ := H(tr_BC(ρ)). Lieb and Ruskai proved SSA in 1973 [LR73]. SSA's impacts range from quantum Shannon theory [Wil13] to holographic spacetime [CKPT14]. A later form by Petz in 1991 generalizes from subsystems to subalgebras. Let M be a von Neumann algebra and N be a subalgebra. Associated with N is a unique conditional expectation E_N that projects an operator in M onto N. We will denote by E_N the trace-preserving, unital, completely positive conditional expectation from M to N. If S, T ⊆ M as subalgebras, we call them a commuting square if E_S E_T = E_T E_S = E_{S∩T}, the conditional expectation onto their intersection.
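Strong subadditivity as stated in (1) can be verified numerically for a random three-qubit state. A Python/NumPy sketch with an assumed partial-trace helper (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def entropy(rho):
    """von Neumann entropy -tr(rho log rho) via eigenvalues."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

def partial_trace(rho, dims, keep):
    """Trace out every tensor factor whose index is not listed in `keep`."""
    n = len(dims)
    t = rho.reshape(dims + dims)  # axes: row indices, then column indices
    removed = 0
    for i in range(n):
        if i not in keep:
            t = np.trace(t, axis1=i - removed, axis2=i - removed + n - removed)
            removed += 1
    d = int(np.prod([dims[i] for i in keep]))
    return t.reshape(d, d)

# random full-rank density matrix on three qubits A, B, C
g = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
rho = g @ g.conj().T
rho /= np.trace(rho).real

dims = (2, 2, 2)
h_abc = entropy(rho)
h_ac = entropy(partial_trace(rho, dims, {0, 2}))
h_bc = entropy(partial_trace(rho, dims, {1, 2}))
h_c = entropy(partial_trace(rho, dims, {2}))

# strong subadditivity: H(AC) + H(BC) >= H(ABC) + H(C)
assert h_ac + h_bc >= h_abc + h_c - 1e-9
```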