Thermodynamics from Relative Entropy
Stefan Floerchinger (Heidelberg U.)
work with Tobias Haas, Natalia Sanchez-Kuntz and Neil Dowling
STRUCTURES Jour fixe, Heidelberg, 03 July 2020

STRUCTURES JOUR FIXE
PD Dr. Stefan Flörchinger, ITP (Uni Heidelberg)
"Thermodynamics from relative entropy"
03 July 2020, 1:30 PM, by Zoom video webinar system
Website: https://zoom.us/join
Meeting-ID: 994 4577 2932, Password: 744991
Contact: [email protected]

Entropy and information
[Claude Shannon (1948); also Ludwig Boltzmann, Willard Gibbs (~1875)]
consider a random variable x with probability distribution p(x)
the information content or "surprise" associated with an outcome x is
    i(x) = -ln p(x)
[figure: i(x) = -ln p(x) as a function of p(x)]
entropy is the expectation value of the information content,
    S(p) = ⟨i(x)⟩ = - Σ_x p(x) ln p(x)
[figure: three example distributions p(x) with S = 0, S = ln 2 and S = 2 ln 2]

Thermodynamics
[..., Antoine Laurent de Lavoisier, Nicolas Léonard Sadi Carnot, Hermann von Helmholtz, Rudolf Clausius, Ludwig Boltzmann, James Clerk Maxwell, Max Planck, Walther Nernst, Willard Gibbs, ...]
microcanonical ensemble: maximum entropy S for given conserved quantities E, N in a given volume V
starting point for the development of thermodynamics:
    S(E, N, V),   dS = (1/T) dE - (μ/T) dN + (p/T) dV
grand canonical ensemble with density operator
    ρ = (1/Z) e^{-(H - μN)/T}
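The entropy values shown in the plotted examples can be checked directly from the definition S(p) = -Σ_x p(x) ln p(x). A minimal sketch, not part of the original slides; the four-state distributions below are an assumed reading of the figure panels:

```python
import numpy as np

def entropy(p):
    """Shannon entropy S(p) = -sum_x p(x) ln p(x) in nats, with 0 ln 0 := 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # zero-probability outcomes contribute nothing
    return float(-np.sum(p * np.log(p)))

# three four-state distributions as in the figure panels
S_pure    = entropy([1.0, 0.0, 0.0, 0.0])      # a certain outcome: S = 0
S_half    = entropy([0.5, 0.5, 0.0, 0.0])      # two equally likely: S = ln 2
S_uniform = entropy([0.25, 0.25, 0.25, 0.25])  # fully uniform: S = 2 ln 2
```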
Einstein's probability for classical thermal fluctuations:
    dW ∼ e^{S(ξ)} dξ

Fluid dynamics
fluid dynamics uses thermodynamics locally: T(x), μ(x), u^μ(x), ...
evolution follows from the conservation laws
    ∇_μ T^{μν}(x) = 0,   ∇_μ N^μ(x) = 0
local dissipation = local entropy production,
    ∇_μ s^μ(x) = ∂_t s(x) + ∇⃗ · s⃗(x) ≥ 0
in the Navier-Stokes approximation with shear viscosity η and bulk viscosity ζ:
    ∇_μ s^μ = (1/T) [ 2η σ_{μν} σ^{μν} + ζ (∇_ρ u^ρ)² ]
how to understand this in quantum field theory?

Entropy in quantum theory
[John von Neumann (1932)]
    S = -Tr{ρ ln ρ}
based on the quantum density operator ρ
for pure states ρ = |ψ⟩⟨ψ| one has S = 0
for diagonal mixed states ρ = Σ_j p_j |j⟩⟨j|,
    S = - Σ_j p_j ln p_j ≥ 0
unitary time evolution conserves entropy:
    -Tr{(UρU†) ln(UρU†)} = -Tr{ρ ln ρ}   →   S = const.
quantum information is globally conserved

Quantum entanglement
Can quantum-mechanical description of physical reality be considered complete?
[Albert Einstein, Boris Podolsky, Nathan Rosen (1935); David Bohm (1951)]
    |ψ⟩ = (1/√2) ( |↑⟩_A |↓⟩_B - |↓⟩_A |↑⟩_B )
        = (1/√2) ( |→⟩_A |←⟩_B - |←⟩_A |→⟩_B )
Bertlmann's socks and the nature of reality [John Stewart Bell (1980)]

Entropy and entanglement
consider a split of a quantum system into two parts A + B
reduced density operator for system A:
    ρ_A = Tr_B{ρ}
entropy associated with subsystem A:
    S_A = -Tr_A{ρ_A ln ρ_A}
a pure product state ρ = ρ_A ⊗ ρ_B leads to S_A = 0
a pure entangled state ρ ≠ ρ_A ⊗ ρ_B leads to S_A > 0
S_A is called the entanglement entropy

Entanglement entropy in relativistic quantum field theory
the entanglement entropy of a region A is a local notion of entropy,
    S_A = -Tr_A{ρ_A ln ρ_A},   ρ_A = Tr_B{ρ}
for relativistic quantum field theories it is infinite,
    S_A = (const/ε^{d-2}) ∫_{∂A} d^{d-2}σ √h + subleading divergences + finite
with a UV divergence proportional to the area of the entangling surface ∂A (ε is a UV cutoff)
relativistic quantum fields are very strongly entangled already in the vacuum
theorem [Helmut Reeh & Siegfried Schlieder (1961)]: local operators in region A can create all (non-local) particle states
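The statement that a pure product state gives S_A = 0 while a pure entangled state gives S_A > 0 can be illustrated for two qubits; a minimal numpy sketch (function names are chosen here for illustration):

```python
import numpy as np

def reduced_density_matrix(psi, dim_a, dim_b):
    """rho_A = Tr_B |psi><psi| for a pure state on H_A tensor H_B."""
    m = np.asarray(psi).reshape(dim_a, dim_b)
    return m @ m.conj().T

def entanglement_entropy(rho):
    """S_A = -Tr{rho ln rho} via the eigenvalues of rho, with 0 ln 0 := 0."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

# pure product state |up>_A |down>_B: no entanglement
product = np.kron([1.0, 0.0], [0.0, 1.0])
# Bell state (|up down> - |down up>)/sqrt(2): maximally entangled
bell = (np.kron([1.0, 0.0], [0.0, 1.0]) - np.kron([0.0, 1.0], [1.0, 0.0])) / np.sqrt(2)

S_product = entanglement_entropy(reduced_density_matrix(product, 2, 2))  # 0
S_bell    = entanglement_entropy(reduced_density_matrix(bell, 2, 2))     # ln 2
```

For the Bell state the reduced density matrix is maximally mixed, ρ_A = 1/2, which is why S_A comes out as ln 2.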
Entanglement entropy in non-relativistic quantum field theory
[Natalia Sanchez-Kuntz & Stefan Floerchinger, in preparation]
non-relativistic quantum field theory for a Bose gas:
    S = ∫ dt d^{d-1}x { φ* [ i∂_t + ∇⃗²/(2M) + μ ] φ - (λ/2) φ*² φ² }
Bogoliubov dispersion relation:
    ω = √( (p⃗²/2M) (p⃗²/2M + 2λρ) ) ≈ c_s |p⃗|     for p ≪ √(2Mλρ)   (phonons)
                                     ≈ p⃗²/(2M)   for p ≫ √(2Mλρ)   (particles)
[figure: Bogoliubov dispersion ω(p) interpolating between the phonon and particle regimes]
the entanglement entropy S_A vanishes for ρ = 0 and ω = p⃗²/(2M), even for a large region A, in contrast to the relativistic theory
the inverse healing length √(2Mλρ) acts as a UV regulator

Relative entropy
classical relative entropy or Kullback-Leibler divergence:
    S(p‖q) = Σ_j p_j ln(p_j/q_j)
not a symmetric distance measure, but a divergence:
    S(p‖q) ≥ 0,   and   S(p‖q) = 0 ⇔ p = q
quantum relative entropy of two density matrices (also a divergence):
    S(ρ‖σ) = Tr{ρ (ln ρ - ln σ)}
it signals how well the state ρ can be distinguished from a model σ

Significance of the Kullback-Leibler divergence
uncertainty deficit: take a true distribution p_j and a model distribution q_j
the uncertainty deficit is the expected surprise ⟨-ln q_j⟩ = -Σ_j p_j ln q_j minus the real information content -Σ_j p_j ln p_j:
    S(p‖q) = ( -Σ_j p_j ln q_j ) - ( -Σ_j p_j ln p_j )
asymptotic frequencies: take a true distribution q_j and the frequencies p_j = N(x_j)/N after N drawings
the probability to find frequencies p_j for large N goes like e^{-N S(p‖q)}
the probability for a fluctuation around the expectation value ⟨p_j⟩ = q_j tends to zero for large N and when the divergence S(p‖q) is large

Advantages of relative entropy
continuum limit: p_j → f(x) dx, q_j → g(x) dx
this limit is not well defined for the entropy,
    S = - Σ_j p_j ln p_j → - ∫_A^B dx f(x) [ ln f(x) + ln dx ]
but the relative entropy remains well defined in this limit.
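The defining properties of the discrete Kullback-Leibler divergence quoted above, non-negativity, vanishing only for p = q, and asymmetry, can be verified numerically; a small sketch with example distributions chosen here:

```python
import numpy as np

def relative_entropy(p, q):
    """Kullback-Leibler divergence S(p||q) = sum_j p_j ln(p_j/q_j), in nats."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0  # 0 ln 0 := 0; requires q_j > 0 wherever p_j > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = np.array([0.7, 0.2, 0.1])   # "true" distribution (example values)
q = np.array([1/3, 1/3, 1/3])   # model distribution
# a divergence, not a distance: non-negative, zero only for p = q, asymmetric
d_pq = relative_entropy(p, q)
d_qp = relative_entropy(q, p)
```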
in the continuum limit,
    S(p‖q) → S(f‖g) = ∫ dx f(x) ln(f(x)/g(x))
local quantum field theory: the entanglement entropy S(ρ_A) of a spatial region is divergent in relativistic QFT, but the relative entanglement entropy S(ρ_A‖σ_A) is well defined
it has a rigorous definition in terms of the Tomita-Takesaki theory of modular automorphisms on von Neumann algebras [Huzihiro Araki (1976)]

Monotonicity of relative entropy
[Göran Lindblad (1975)]
monotonicity of relative entropy:
    S(N(ρ)‖N(σ)) ≤ S(ρ‖σ)
with N a completely positive, trace-preserving map
for unitary time evolution: S(N(ρ)‖N(σ)) = S(ρ‖σ)
for open-system evolution with generation of entanglement to the environment: S(N(ρ)‖N(σ)) < S(ρ‖σ)
this is the basis for many proofs in quantum information theory
it leads naturally to second-law type relations

Thermodynamics from relative entropy
[Stefan Floerchinger & Tobias Haas, arXiv:2004.13533 (2020)]
relative entropy has very nice properties, but can thermodynamics be derived from it?
can entropy be replaced by relative entropy?

Principle of maximum entropy
[Edwin Thompson Jaynes (1963)]
take the macroscopic state characteristics as fixed, e.g. energy E, particle number N, momentum p⃗
principle of maximum entropy: among all possible microstates σ (or distributions q), the one with maximum entropy S is preferred,
    S(σ_thermal) = max
why? if S(σ) < max, then σ would contain additional information not determined by the macroscopic variables, and this information is not available
maximum entropy = minimal information
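The Lindblad monotonicity stated above has a classical counterpart: relative entropy is non-increasing under any stochastic map. A small numerical illustration; the channel matrix and distributions are arbitrary examples chosen here:

```python
import numpy as np

def relative_entropy(p, q):
    """S(p||q) = sum_j p_j ln(p_j/q_j), with 0 ln 0 := 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

# a classical channel: column-stochastic matrix (each column sums to 1)
N = np.array([[0.90, 0.20, 0.10],
              [0.05, 0.60, 0.30],
              [0.05, 0.20, 0.60]])
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.2, 0.6])
# data processing cannot improve distinguishability: S(Np||Nq) <= S(p||q)
coarse = relative_entropy(N @ p, N @ q)
fine   = relative_entropy(p, q)
```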
Principle of minimum expected relative entropy
[Stefan Floerchinger & Tobias Haas, arXiv:2004.13533 (2020)]
take the macroscopic state characteristics as fixed, e.g. energy E, particle number N, momentum p⃗
principle of minimum expected relative entropy: preferred is the model σ from which the allowed states ρ are least distinguishable on average,
    ⟨S(ρ‖σ_thermal)⟩ = ∫ Dρ S(ρ‖σ_thermal) = min
similarly for classical probability distributions:
    ⟨S(p‖q)⟩ = ∫ Dp S(p‖q) = min
this needs measures Dp and Dρ on the spaces of probability distributions p and density matrices ρ, respectively

Measure on the space of probability distributions
consider the set of normalized probability distributions p in agreement with the macroscopic constraints
it forms a manifold with local coordinates ξ = {ξ¹, ..., ξ^m}
integration in terms of coordinates:
    ∫ Dp = ∫ dξ¹ ··· dξ^m μ(ξ¹, ..., ξ^m)
we want this to be invariant under coordinate changes ξ → ξ′(ξ)
a possible choice is the Jeffreys prior as integration measure [Harold Jeffreys (1946)],
    μ(ξ) = const × √(det g_{αβ}(ξ))
which uses a Riemannian metric g_{αβ}(ξ) on the space of probability distributions, the Fisher information metric [Ronald Aylmer Fisher (1925)]:
    g_{αβ}(ξ) = Σ_j (∂p_j(ξ)/∂ξ^α) (∂ ln p_j(ξ)/∂ξ^β)

Permutation invariance
one can now integrate functions of p:
    ∫ Dp f(p) = ∫ d^m ξ μ(ξ) f(p(ξ))
consider maps {p_1, ..., p_N} → {p_Π(1), ..., p_Π(N)}, where j → Π(j) is a permutation, abbreviated p → Π(p)
we want to show Dp = DΠ(p), such that
    ∫ Dp f(p) = ∫ Dp f(Π(p))
it is convenient to choose the coordinates
    p_j = (ξ^j)²                        for j = 1, ..., N-1,
    p_N = 1 - (ξ¹)² - ... - (ξ^{N-1})²
which allow one to write
    ∫ Dp = (1/Ω_{N-1}) ∫ dξ¹ ··· dξ^N δ( 1 - √( Σ_{α=1}^N (ξ^α)² ) ) = ∫ DΠ(p)

Minimizing expected relative entropy
consider now the functional
    B(q, λ) = ∫ Dp S(p‖q) + λ ( Σ_i q_i - 1 )
variation with respect to q_j gives
    0 = δB = Σ_j [ - ∫ Dp (p_j/q_j) + λ ] δq_j
by permutation invariance this leads to the uniform distribution,
    q_j = ⟨p_j⟩ = 1/N
the microcanonical distribution has minimum expected relative entropy!
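This variational result can be probed numerically: for a multinomial family the Jeffreys measure is the Dirichlet(1/2, ..., 1/2) distribution, so one can sample allowed distributions p and compare the expected divergence from the uniform model with that from a competing model. A Monte Carlo sketch; the sample size and the skewed comparison model are arbitrary choices made here:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Jeffreys prior on the n-outcome probability simplex is Dirichlet(1/2, ..., 1/2)
ps = rng.dirichlet([0.5] * n, size=50_000)

def expected_relative_entropy(q):
    """Monte Carlo estimate of <S(p||q)> under the Jeffreys measure."""
    safe = np.where(ps > 0, ps, 1.0)                 # guard: 0 ln 0 := 0
    terms = np.where(ps > 0, ps * np.log(safe / q), 0.0)
    return float(terms.sum(axis=1).mean())

uniform = np.full(n, 1 / n)                  # microcanonical model
skewed  = np.array([0.4, 0.3, 0.2, 0.1])     # an arbitrary competing model
# the uniform model should give the smaller expected divergence
```

Since ⟨S(p‖q)⟩ depends on q only through -Σ_j ⟨p_j⟩ ln q_j, and the Jeffreys measure gives ⟨p_j⟩ = 1/n, the uniform model wins by construction; the sampling merely confirms it.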
it is the least distinguishable model within the set of allowed distributions

Measure on the space of density matrices
a measure Dρ on the space of density matrices can be defined similarly in terms of coordinates ξ, but now using the quantum Fisher information metric
    g_{αβ}(ξ) = Tr{ (∂ρ(ξ)/∂ξ^α) (∂ ln ρ(ξ)/∂ξ^β) }
the definition uses the symmetric logarithmic derivative, defined such that
    (1/2) ρ (d ln ρ) + (1/2) (d ln ρ) ρ = dρ
the metric appears also as the limit of the relative entropy for states that approach each other,
    S(ρ(ξ + dξ)‖ρ(ξ)) = (1/2) g_{αβ}(ξ) dξ^α dξ^β + ...

Unitary transformations as isometries
consider a unitary map ρ(ξ) → ρ′(ξ) = U ρ(ξ) U† = ρ(ξ′)
this is again a normalized density matrix, but at the coordinate point ξ′
the induced map on coordinates ξ → ξ′(ξ) is an isometry,
    g_{αβ}(ξ) dξ^α dξ^β = g_{αβ}(ξ′) dξ′^α dξ′^β
which can be used to show the invariance of the measure, such that
    ∫ Dρ f(ρ) = ∫ Dρ f(U ρ U†)

Minimizing expected relative entropy