Quantum Information, Entanglement and Entropy


Siddharth Sharma

Abstract

In this review I give an introduction to axiomatic quantum mechanics, to quantum information and its measure, entropy, and to entanglement and its measure, defined as the minimum of the quantum relative entropy of a density matrix with respect to all separable density matrices on the same Hilbert space. I also discuss geodesics in the space of density matrices, their orthogonality, and a Pythagorean theorem for density matrices.

Postulates of Quantum Mechanics

The basic postulates of quantum mechanics concern the Hilbert space formalism.

• Postulate 0: To each quantum mechanical system there is associated a complex Hilbert space $\mathbb{H}$. The set of all pure states is the quotient $\mathbb{H}/\!\sim$, that is, the set of equivalence classes
$$[f] := \{\, h \in \mathbb{H} : h \sim f \,\}, \qquad h \sim f \iff \exists\, z \in \mathbb{C} \text{ such that } h = z f,$$
where $z$ is called a phase.

• Postulate 1: The physical states of a quantum mechanical system are described by statistical operators acting on the Hilbert space. A density matrix, or statistical operator, $\rho$ is a positive operator of trace $1$ on the Hilbert space; here $\rho$ is called positive if $\langle x, \rho x\rangle \ge 0$ for all $x \in \mathbb{H}$, where $\langle\cdot,\cdot\rangle$ denotes the inner product.

• Postulate 2: The observables of a quantum mechanical system are described by self-adjoint operators acting on the Hilbert space. A self-adjoint operator $A$ on a Hilbert space $\mathbb{H}$ is a linear operator $A: \mathbb{H} \to \mathbb{H}$ which satisfies $\langle Ax, y\rangle = \langle x, Ay\rangle$ for all $x, y \in \mathbb{H}$.

Lemma: The density matrices acting on a Hilbert space form a convex set whose extreme points are the pure states.

Proof. Denote by $\Sigma$ the set of density matrices. A convex combination of density matrices is again positive and of trace one; therefore $\Sigma$ is a convex set.
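As an illustration of Postulate 1 and of the convexity lemma, here is a minimal numerical sketch, assuming finite dimension and plain numpy; the helper names are illustrative only. It checks that a matrix is a valid density matrix and that a convex combination of two density matrices is again one.

```python
import numpy as np

def is_density_matrix(rho, tol=1e-10):
    """Check self-adjointness, positivity (non-negative eigenvalues) and unit trace."""
    hermitian = np.allclose(rho, rho.conj().T, atol=tol)
    positive = np.all(np.linalg.eigvalsh(rho) >= -tol)
    unit_trace = abs(np.trace(rho) - 1.0) < tol
    return hermitian and positive and unit_trace

def random_density_matrix(d, rng):
    """A random d-dimensional density matrix G G* / tr(G G*)."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

rng = np.random.default_rng(0)
rho1, rho2 = random_density_matrix(3, rng), random_density_matrix(3, rng)
lam = 0.3
mix = lam * rho1 + (1 - lam) * rho2          # convex combination
print(is_density_matrix(rho1), is_density_matrix(rho2), is_density_matrix(mix))
```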
Lemma: Let
$$\rho = \sum_{e \in \mathbb{B}} |x_e\rangle\langle x_e| = \sum_{h \in \mathbb{B}} |y_h\rangle\langle y_h|$$
be two decompositions of a density matrix, where $\mathbb{B}$ indexes a basis of the Hilbert space $\mathbb{H}$. Then there exists a unitary matrix $(U_{eh})_{e,h \in \mathbb{B}}$ such that
$$\sum_{e \in \mathbb{B}} U_{eh}\, |x_e\rangle = |y_h\rangle, \qquad U_{eh} := \frac{\langle x_e \mid y_h\rangle}{\sqrt{\langle x_e \mid x_h\rangle}}.$$

• Postulate 3: Let $\mathcal{X}$ be a finite set and, for each $x \in \mathcal{X}$, let an operator $O_x \in \mathcal{B}(\mathbb{H})$ be given such that
$$\sum_{x \in \mathcal{X}} O_x^{*} O_x = \mathrm{id}_{\mathbb{H}},$$
where $O_x^{*}$ is the adjoint of $O_x$. Such an indexed family of operators is a model of a measurement with values in $\mathcal{X}$. If the measurement is performed in a state $\rho$, then the outcome $x \in \mathcal{X}$ appears with probability $\mathrm{tr}(O_x \rho O_x^{*})$, and after the measurement the state of the system is
$$\frac{O_x \rho O_x^{*}}{\mathrm{tr}(O_x \rho O_x^{*})}.$$
(A numerical sketch of this postulate is given below.)

If $\varphi : \mathcal{B}(\mathbb{H}) \to \mathbb{C}$ is a linear functional such that $\varphi(A) \ge 0$ whenever $A$ is positive and $\varphi(\mathrm{id}) = 1$, then there exists a density matrix $\rho_\varphi$ such that
$$\varphi(A) = \mathrm{tr}(\rho_\varphi A).$$
The functional $\varphi$ assigns its expectation value to each observable $A$.

The density matrices $\rho$ and $\rho'$ are called orthogonal if every eigenvector of $\rho$ is orthogonal to every eigenvector of $\rho'$.

Let $\mathbb{B}$ be an orthonormal basis of a Hilbert space $\mathbb{H}$. A unit vector $\xi \in \mathbb{H}$ is complementary to the given basis if
$$|\langle e, \xi\rangle| = \frac{1}{\sqrt{\dim \mathbb{H}}}$$
for all $e \in \mathbb{B}$, where $\dim \mathbb{H}$ is the dimension of $\mathbb{H}$. Two orthonormal bases are called complementary if every vector of the first basis is complementary to the second basis.
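To make Postulate 3 concrete, here is a minimal sketch of a projective qubit measurement, the special case in which the operators $O_x$ are orthogonal projections; the choice of state and the names are illustrative only.

```python
import numpy as np

# Projective measurement on a qubit: O_0 = |0><0|, O_1 = |1><1|,
# a family satisfying sum_x O_x^* O_x = id.
O = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]
assert np.allclose(sum(Ox.conj().T @ Ox for Ox in O), np.eye(2))

# The state |+><+| as a density matrix.
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

for x, Ox in enumerate(O):
    prob = np.trace(Ox @ rho @ Ox.conj().T).real      # tr(O_x rho O_x^*)
    post = Ox @ rho @ Ox.conj().T / prob              # post-measurement state
    print(f"outcome {x}: probability {prob:.3f}")
    print(post)
```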
• Postulate 4: A composite system is described by the tensor product Hilbert space
$$\bigotimes_{\vartheta} \mathbb{H}_\vartheta.$$
Given a density matrix $\rho$ on $\bigotimes_{\vartheta} \mathbb{H}_\vartheta$, there are density matrices $\rho_\vartheta \in \mathcal{B}(\mathbb{H}_\vartheta)$ such that
$$\mathrm{tr}\Big(\rho \bigotimes_{\gamma} \mathcal{A}_\gamma(\vartheta)\Big) = \mathrm{tr}(\rho_\vartheta A_\vartheta), \qquad \mathcal{A}_\gamma(\vartheta) := \begin{cases} A_\vartheta, & \gamma = \vartheta, \\ \mathrm{id}_{\mathbb{H}_\gamma}, & \gamma \neq \vartheta, \end{cases}$$
for every $A_\vartheta \in \mathcal{B}(\mathbb{H}_\vartheta)$; here $\rho_\vartheta$ is called a reduced density matrix.

Let
$$\rho = \bigotimes_{\vartheta} \rho_\vartheta.$$
Then we define $\mathrm{tr}_\gamma(\rho)$, the partial trace over the $\gamma$-th factor, as
$$\mathrm{tr}_\gamma(\rho) := \mathrm{tr}(\rho_\gamma) \bigotimes_{\vartheta \neq \gamma} \rho_\vartheta.$$
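For a bipartite system the reduced density matrix is obtained by tracing out one tensor factor. A minimal sketch, assuming finite dimensions and plain numpy, for the Bell state $(|00\rangle + |11\rangle)/\sqrt{2}$; it also checks the defining property $\mathrm{tr}\big(\rho_{AB}(A \otimes \mathrm{id})\big) = \mathrm{tr}(\rho_A A)$.

```python
import numpy as np

def reduced_density_matrix(rho_ab, dA, dB):
    """Partial trace over the second factor: rho_A[i,k] = sum_j rho_AB[ij, kj]."""
    r = rho_ab.reshape(dA, dB, dA, dB)
    return np.einsum('ijkj->ik', r)

# Bell state (|00> + |11>)/sqrt(2) on a 2 x 2 system.
psi = np.zeros(4, dtype=complex)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho_ab = np.outer(psi, psi.conj())

rho_a = reduced_density_matrix(rho_ab, 2, 2)
print(rho_a)                                     # maximally mixed: id / 2

# Defining property: tr(rho_AB (A (x) id)) == tr(rho_A A) for an observable A.
A = np.array([[1.0, 0.5], [0.5, -0.3]], dtype=complex)
lhs = np.trace(rho_ab @ np.kron(A, np.eye(2)))
rhs = np.trace(rho_a @ A)
print(np.isclose(lhs, rhs))
```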
Information and its Measures

• Shannon entropy: In his revolutionary paper Shannon proposed a statistical approach and posed the problem in the following way: "Suppose we have a set of possible events whose probabilities of occurrence are $p_1, p_2, \ldots, p_n$. These probabilities are known but that is all we know concerning which event will occur. Can we find a measure of how much 'choice' is involved in the selection of the event or how uncertain we are of the outcome?" Denoting such a measure by $H(p_1, p_2, \ldots, p_n)$, he listed three very reasonable requirements it should satisfy:

a. Continuity: $H(p, 1-p)$ is a continuous function of $p$.
b. Symmetry: $H(p_1, p_2, \ldots, p_n)$ is a symmetric function of its variables.
c. Recursion: for every $0 \le \lambda < 1$ the recursion
$$H\big(p_1, \ldots, \lambda p_j, (1-\lambda) p_j, \ldots, p_n\big) = H(p_1, \ldots, p_j, \ldots, p_n) + p_j\, H(\lambda, 1-\lambda)$$
holds.

Shannon showed that, up to the choice of a positive constant $\kappa$, the only function satisfying these requirements is
$$H(p_1, p_2, \ldots, p_n) = -\kappa \sum_{i=1}^{n} p_i \ln p_i.$$

Let $(X_i)_{i=1}^{n}$ be random variables with values in the set $\mathcal{X} := \prod_{i=1}^{n} \mathcal{X}_i$. The following notation will be used:

a. $p\big((x_i)_{i=1}^{n}\big)$ is the joint probability that $(X_i)_{i=1}^{n}$ takes the value $(x_i)_{i=1}^{n}$;
b. $p(x_i \mid x_j)$ is the conditional probability that $X_i$ takes the value $x_i$ given that $X_j$ takes the value $x_j$.

Then the joint entropy is
$$H\big((X_i)_{i=1}^{n}\big) := -\sum_{(x_i)_{i=1}^{n} \in \mathcal{X}} p\big((x_i)_{i=1}^{n}\big) \ln p\big((x_i)_{i=1}^{n}\big),$$
so that for $n = 2$
$$H(X_i, X_j) := -\sum_{x_i \in \mathcal{X}_i} \sum_{x_j \in \mathcal{X}_j} p(x_i, x_j) \ln p(x_i, x_j).$$
The joint probability factorizes by the chain rule,
$$p\big((x_i)_{i=1}^{n}\big) = p(x_1 \mid x_2, \ldots, x_n)\, p(x_2 \mid x_3, \ldots, x_n) \cdots p(x_n) = \prod_{i=1}^{n} p(x_i \mid x_{i+1}, \ldots, x_n).$$
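A minimal sketch of the Shannon entropy and of the recursion requirement (c), assuming plain numpy and the convention $0 \ln 0 = 0$; the distribution and the splitting parameter are arbitrary.

```python
import numpy as np

def shannon_entropy(p, kappa=1.0):
    """H(p) = -kappa * sum_i p_i ln p_i, with the convention 0 ln 0 = 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -kappa * np.sum(p * np.log(p))

# Recursion requirement (c): split p_j = 0.3 into (lam * p_j, (1 - lam) * p_j).
p = np.array([0.5, 0.3, 0.2])
lam = 0.25
split = np.array([0.5, lam * 0.3, (1 - lam) * 0.3, 0.2])
lhs = shannon_entropy(split)
rhs = shannon_entropy(p) + 0.3 * shannon_entropy([lam, 1 - lam])
print(np.isclose(lhs, rhs))   # True
```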
Theorem (subadditivity): If $(X_i)_{i=1}^{n}$ are random variables of finite range, then
$$H\big((X_i)_{i=1}^{n}\big) \le \sum_{i=1}^{n} H(X_i).$$

Proof. By the chain rule above, the joint entropy decomposes as
$$H\big((X_i)_{i=1}^{n}\big) = \sum_{i=1}^{n} H(X_i \mid X_{i+1}, \ldots, X_n),$$
and conditioning does not increase entropy (a standard consequence of Jensen's inequality for the concave function $\ln$), so that $H(X_i \mid X_{i+1}, \ldots, X_n) \le H(X_i)$ for every $i$. Summing over $i$ gives
$$H\big((X_i)_{i=1}^{n}\big) \le \sum_{i=1}^{n} H(X_i).$$

Fano's inequality: Let $X$ and $Y$ be random variables whose range lies in a set of cardinality $d$, and let $p := \mathrm{Prob}(X \neq Y)$. Then
$$H(X \mid Y) \le p \log(d-1) + H(p, 1-p).$$
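A quick numerical check of subadditivity for a randomly chosen joint distribution; this is a sketch, not a proof, and the marginals are simply obtained by summing the joint table.

```python
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# A random joint distribution p(x, y) on a 3 x 4 alphabet.
rng = np.random.default_rng(1)
p_xy = rng.random((3, 4))
p_xy /= p_xy.sum()

H_joint = shannon_entropy(p_xy)
H_x = shannon_entropy(p_xy.sum(axis=1))   # marginal of X
H_y = shannon_entropy(p_xy.sum(axis=0))   # marginal of Y
print(H_joint <= H_x + H_y)               # subadditivity: True
```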
• von Neumann entropy: Let $\rho$ be the density matrix of a quantum system. The von Neumann entropy of the system is
$$S(\rho) := -\kappa\, \mathrm{tr}(\rho \ln \rho).$$

Theorem (continuity): Let $\rho$ and $\sigma$ be densities on a $d$-dimensional Hilbert space and let
$$p := \frac{\|\rho - \sigma\|_1}{2},$$
where $\|\cdot\|_1 : \mathcal{B}(\mathbb{H}) \to \mathbb{R}^{+}_{0}$ is the trace norm on the operator space $\mathcal{B}(\mathbb{H})$. Then
$$|S(\rho) - S(\sigma)| \le p \log(d-1) + H(p, 1-p)$$
holds.

• Quantum relative entropy: The relative entropy, or I-divergence, of the probability distributions $p(x)$ and $q(x)$ is defined as
$$D(p \,\|\, q) := \int_{-\infty}^{\infty} p(x) \ln \frac{p(x)}{q(x)}\, dx.$$
Assume that $\rho$ and $\sigma$ are density matrices on a Hilbert space $\mathbb{H}$; then
$$S(\rho \,\|\, \sigma) := \begin{cases} \mathrm{tr}\, \rho (\ln \rho - \ln \sigma), & \text{if } \mathrm{supp}\, \rho \le \mathrm{supp}\, \sigma, \\ +\infty, & \text{otherwise.} \end{cases}$$

The relative entropy can be expressed through the Hilbert–Schmidt inner product $\langle A, B\rangle := \mathrm{tr}(A^{*}B)$, where $A^{*}$ is the adjoint of $A$. Introduce the relative modular operator
$$\Delta a := \sigma a \rho^{-1} \qquad \text{for all } a \in \mathcal{B}(\mathbb{H}).$$
Then
$$\mathrm{tr}\, \rho (\ln \rho - \ln \sigma) = -\big\langle \rho^{1/2}, (\ln \Delta)\, \rho^{1/2} \big\rangle = -\mathrm{tr}\big( (\rho^{1/2})^{*}\, (\ln \Delta)(\rho^{1/2}) \big).$$
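A minimal sketch of the von Neumann entropy and of the quantum relative entropy for invertible densities, using eigenvalues for $S(\rho)$ and matrix logarithms (scipy.linalg.logm) for $S(\rho\|\sigma)$; the example matrices are arbitrary.

```python
import numpy as np
from scipy.linalg import logm

def von_neumann_entropy(rho, kappa=1.0):
    """S(rho) = -kappa * tr(rho ln rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return -kappa * float(np.sum(evals * np.log(evals)))

def relative_entropy(rho, sigma):
    """S(rho||sigma) = tr rho (ln rho - ln sigma); assumes sigma is invertible."""
    return float(np.trace(rho @ (logm(rho) - logm(sigma))).real)

rho = np.diag([0.7, 0.3]).astype(complex)
sigma = np.array([[0.5, 0.25], [0.25, 0.5]], dtype=complex)
print(von_neumann_entropy(rho))
print(relative_entropy(rho, sigma))   # non-negative, zero iff rho == sigma
```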
Let $\rho \equiv e^{H}$, $\omega$ and $\sigma$ be three invertible densities. The $e$-geodesic connecting $\rho$ and $\sigma$ is the curve
$$\gamma_e(t) := \frac{e^{H + tA}}{\mathrm{tr}\big(e^{H + tA}\big)}, \qquad t \in [0,1],$$
where $A := \ln \sigma - \ln \rho$. Then $\gamma_e(0) = \rho$ and $\gamma_e(1) = \sigma$. The $m$-geodesic connecting $\rho$ and $\omega$ is the curve
$$\gamma_m(t) := \rho + tB, \qquad t \in [0,1],$$
where $B := \omega - \rho$. Then $\gamma_m(0) = \rho$ and $\gamma_m(1) = \omega$.

Assume that the $e$-geodesic connecting $\rho$ and $\sigma$ is orthogonal to the $m$-geodesic connecting $\rho$ and $\omega$ with respect to the inner product
$$\langle E, F\rangle_\rho := \int_{0}^{\infty} \mathrm{tr}\big( (s\mathbb{I} + \rho)^{-1} E^{*} (s\mathbb{I} + \rho)^{-1} F \big)\, ds.$$
A plain computation yields
$$S(\omega \| \rho) + S(\rho \| \sigma) - S(\omega \| \sigma) = \mathrm{tr}(AB) = \langle A, B\rangle_{HS}.$$
Define
$$T : X \mapsto \int_{0}^{\infty} (s\mathbb{I} + \rho)^{-1} X\, (s\mathbb{I} + \rho)^{-1}\, ds.$$
Then, according to Dénes Petz,
$$\langle X, Y\rangle_{HS} = \langle T(X), Y\rangle_\rho \qquad \text{and} \qquad T(A) = \dot{\gamma}_e(0).$$
It is obvious that $B = \dot{\gamma}_m(0)$. Therefore
$$\langle A, B\rangle_{HS} = \langle T(A), B\rangle_\rho = \langle \dot{\gamma}_e(0), \dot{\gamma}_m(0)\rangle_\rho,$$
and we can conclude that if the orthogonality assumption holds, then
$$\langle \dot{\gamma}_e(0), \dot{\gamma}_m(0)\rangle_\rho = 0 \;\Longrightarrow\; S(\omega \| \rho) + S(\rho \| \sigma) = S(\omega \| \sigma).$$
This is sometimes called the Pythagorean theorem for density matrices.
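The identity $S(\omega\|\rho) + S(\rho\|\sigma) - S(\omega\|\sigma) = \mathrm{tr}(AB)$ behind this theorem can be checked numerically; below is a sketch for random invertible densities, assuming scipy.linalg.logm and full-rank samples.

```python
import numpy as np
from scipy.linalg import logm

def rand_density(d, rng):
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def S(rho, sigma):
    """Quantum relative entropy tr rho (ln rho - ln sigma) for invertible densities."""
    return np.trace(rho @ (logm(rho) - logm(sigma))).real

rng = np.random.default_rng(2)
rho, sigma, omega = (rand_density(3, rng) for _ in range(3))

A = logm(sigma) - logm(rho)       # tangent of the e-geodesic at rho
B = omega - rho                   # tangent of the m-geodesic at rho

lhs = S(omega, rho) + S(rho, sigma) - S(omega, sigma)
rhs = np.trace(A @ B).real        # <A, B>_HS, since A is self-adjoint
print(np.isclose(lhs, rhs))       # True: tr(AB) = 0 then gives the Pythagorean relation
```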
style="display: flex;"><li style="flex:1">〈 (&nbsp;) (&nbsp;)〉 |&nbsp;) </li><li style="flex:1">ꢄ<sub style="top: 0.2em;">푒 </sub>0 ,&nbsp;ꢄ<sub style="top: 0.2em;">ꢏ </sub>= 0 ⇒ 푆(휔||휌)&nbsp;+ 푆(휌| 휎&nbsp;= 푆(휔||휎) </li><li style="flex:1">̇</li><li style="flex:1">̇</li><li style="flex:1">0</li></ul><p></p><p>ꢐ</p><p>This is some time called Pythagorean theorem of density matrix. </p><p>• <strong>Renyi Entropy, Quantum Renyi Entropy and Quantum Relative Renyi Entropy: </strong></p><p>The Renyi entropy of order 훼 ≠ 1&nbsp;of the probability distribution (푝<sub style="top: 0.2em;">1</sub>, 푝<sub style="top: 0.2em;">2</sub>, . . . , 푝<sub style="top: 0.2em;">푛</sub>) is defined by </p><p>푛</p><p>1</p><ul style="display: flex;"><li style="flex:1">(</li><li style="flex:1">)</li><li style="flex:1">퐻<sub style="top: 0.2em;">ꢒ </sub>푝<sub style="top: 0.2em;">1</sub>, 푝<sub style="top: 0.2em;">2</sub>, . . . , 푝<sub style="top: 0.2em;">푛 </sub></li><li style="flex:1">=</li><li style="flex:1">ln ∑ 푝<sub style="top: 0.23em;">푘</sub><sup style="top: -0.4em;">ꢒ </sup></li></ul><p>1 − 훼 </p><p>푘=1 </p><p>The Quantum Renyi entropy of order 훼 ≠ 1 of the density matrix 휌 is defined by </p><p>1</p><p>ꢒ</p><p></p><ul style="display: flex;"><li style="flex:1">( ) </li><li style="flex:1">푆<sub style="top: 0.2em;">ꢒ </sub>휌 = </li><li style="flex:1">(</li><li style="flex:1">)</li><li style="flex:1">ln tr&nbsp;휌 </li></ul><p>1 − 훼 </p><p>The Quantum Renyi entropy of order 훼 ≠ 1 of the density matrix 휌 with respect 휎 to is defined by </p><p>1</p><p>ꢒ</p><p>1−ꢒ </p><p></p><ul style="display: flex;"><li style="flex:1">(</li><li style="flex:1">)</li><li style="flex:1">(</li><li style="flex:1">)</li><li style="flex:1">푆<sub style="top: 0.2em;">ꢒ </sub>휌||휎 = </li><li style="flex:1">ln tr&nbsp;휌 휎 </li></ul><p>훼 − 1 </p><p>Entanglement </p><p>Let ℬ(ℍ<sub style="top: 0.2em;">ꢎ</sub>) and ℬ(ℍ<sub style="top: 0.2em;">ꢓ</sub>) be the algebras of bounded operators acting on the Hilbert spaces ℍ<sub style="top: 0.2em;">ꢎ </sub>and ℍ<sub style="top: 0.2em;">ꢓ</sub>. The Hilbert space of the composite system is ℍ<sub style="top: 0.2em;">ꢎꢓ </sub>≔ ℍ<sub style="top: 0.2em;">ꢎ </sub>⊗ ℍ<sub style="top: 0.2em;">ꢓ</sub>. The </p><p></p><ul style="display: flex;"><li style="flex:1">(</li><li style="flex:1">)</li></ul><p></p><p>algebra of the operators acting on ℍ<sub style="top: 0.2em;">ꢎꢓ </sub>is ℬ ℍ<sub style="top: 0.2em;">ꢎꢓ </sub>≔ ℬ(ℍ<sub style="top: 0.2em;">ꢎ</sub>) ⊗ ℬ(ℍ<sub style="top: 0.2em;">ꢓ</sub>). </p>
