QUANTUM INFORMATION, ENTANGLEMENT AND ENTROPY

Siddharth Sharma

Abstract: In this review I give an introduction to axiomatic Quantum Mechanics, Quantum Information and its measure, entropy. I also give an introduction to Entanglement and its measure, defined as the minimum of the quantum relative entropy of a density matrix of a Hilbert space with respect to all separable density matrices of the same Hilbert space. I also discuss geodesics in the space of density matrices, their orthogonality, and the Pythagorean theorem for density matrices.

Postulates of Quantum Mechanics

The basic postulates of quantum mechanics concern the Hilbert space formalism:

• Postulate 0: To each quantum mechanical system there is a complex Hilbert space $\mathbb{H}$ associated:

The set of all pure states is $\mathbb{H}/\!\sim\; := \{\, [f] \mid [f] := \{ h : h \sim f \}\,\}$, where $h \sim f \Leftrightarrow \exists\, z \in \mathbb{C}$ such that $h = z f$; the factor $z$ is called a phase.

• Postulate 1: The physical states of a quantum mechanical system are described by statistical operators acting on the Hilbert space.

A density matrix or statistical operator $\rho$ is a positive operator of trace 1 on the Hilbert space, where $\rho$ is called positive if $\langle x, \rho x\rangle \ge 0$ for all $x \in \mathbb{H}$, and $\langle \cdot, \cdot \rangle$ denotes the inner product on $\mathbb{H}$.
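As a minimal numerical sketch of this definition (assuming a finite-dimensional matrix representation; the helper name `is_density_matrix` and the tolerance are illustrative choices, not standard library code), the two defining properties can be checked directly:

```python
import numpy as np

def is_density_matrix(rho: np.ndarray, tol: float = 1e-10) -> bool:
    """Check that rho is Hermitian, positive semidefinite and of trace one."""
    hermitian = np.allclose(rho, rho.conj().T, atol=tol)
    eigenvalues = np.linalg.eigvalsh(rho)            # real for Hermitian matrices
    positive = np.all(eigenvalues >= -tol)
    unit_trace = np.isclose(np.trace(rho).real, 1.0, atol=tol)
    return bool(hermitian and positive and unit_trace)

# A pure state |0><0| and the maximally mixed state on C^2
pure = np.array([[1.0, 0.0], [0.0, 0.0]])
mixed = np.eye(2) / 2
print(is_density_matrix(pure), is_density_matrix(mixed))   # True True
```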

• Postulate 2: The observables of a quantum mechanical system are described by self-adjoint operators acting on the Hilbert space.

A self-adjoint operator A on a Hilbert space ℍ is a linear operator 퐴: ℍ → ℍ which satisfies

〈퐴푥, 푦〉 = 〈푥, 퐴푦〉

for 푥, 푦 ∈ ℍ.

Lemma: The density matrices acting on a Hilbert space form a convex set whose extreme points are the pure states. Proof. Denote by Σ the set of density matrices. It is obvious that a convex combination of density matrices is positive and of trace one. Therefore, Σ is a convex set.

Lemma: Let

$$\rho = \sum_{e\in\mathbb{B}} |x_e\rangle\langle x_e| = \sum_{h\in\mathbb{B}} |y_h\rangle\langle y_h|$$

be two decompositions of a density matrix, where $\mathbb{B}$ is the basis of the Hilbert space $\mathbb{H}$. Then there exists a unitary matrix $(U_{eh})_{e,h\in\mathbb{B}}$ such that

$$\sum_{e\in\mathbb{B}} U_{eh}\,|x_e\rangle = |y_h\rangle, \qquad \text{where} \qquad U_{eh} := \frac{\langle x_e|y_h\rangle}{\sqrt{\langle x_e|x_h\rangle}}.$$

• Postulate 3: Let $\mathcal{X}$ be a finite set and, for each $x \in \mathcal{X}$, let an operator $O_x \in \mathcal{B}(\mathbb{H})$ be given such that

$$\sum_{x\in\mathcal{X}} O_x^{*} O_x = id_{\mathbb{H}},$$

where $O_x^{*}$ is the adjoint of $O_x$. Such an indexed family of operators is a model of a measurement with values in $\mathcal{X}$. If the measurement is performed in a state $\rho$, then the outcome $x \in \mathcal{X}$ appears with probability $\mathrm{tr}(O_x \rho O_x^{*})$ and after the measurement the state of the system is

$$\frac{O_x \rho O_x^{*}}{\mathrm{tr}(O_x \rho O_x^{*})}.$$

If $\varphi : \mathcal{B}(\mathbb{H}) \to \mathbb{C}$ is a linear functional such that $\varphi(A) \ge 0$ whenever $A$ is positive and $\varphi(id) = 1$, then there exists a density matrix $\rho_\varphi$ such that

$$\varphi(A) = \mathrm{tr}(\rho_\varphi A).$$

The functional $\varphi$ associates the expectation value with the observable $A$.
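A minimal numerical sketch of the measurement postulate (the projective measurement in the computational basis below is just one convenient family satisfying the completeness relation; the helper `measure` is an illustrative name):

```python
import numpy as np

def measure(rho, operators):
    """Return outcome probabilities and post-measurement states for a family {O_x}."""
    # Completeness check: sum_x O_x^* O_x = id
    completeness = sum(O.conj().T @ O for O in operators)
    assert np.allclose(completeness, np.eye(rho.shape[0]))
    results = []
    for O in operators:
        prob = np.trace(O @ rho @ O.conj().T).real
        post = (O @ rho @ O.conj().T) / prob if prob > 0 else None
        results.append((prob, post))
    return results

# Projective measurement in the computational basis on a qubit state
rho = np.array([[0.75, 0.25], [0.25, 0.25]])
P0 = np.array([[1.0, 0.0], [0.0, 0.0]])
P1 = np.array([[0.0, 0.0], [0.0, 1.0]])
for prob, post in measure(rho, [P0, P1]):
    print(prob)            # 0.75 and 0.25
```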

The density matrices 𝜌 and 𝜌′ are called orthogonal if any eigenvector of 𝜌 is orthogonal to any eigenvector of 𝜌′.

Let 픹 be an orthonormal basis in a Hilbert space ℍ . The unit vector 휉 ∈ ℍ is complementary to the given basis if

$$|\langle e, \xi\rangle| = \frac{1}{\sqrt{\dim \mathbb{H}}}$$

for all $e \in \mathbb{B}$, where $\dim \mathbb{H}$ is the dimension of $\mathbb{H}$.

Two orthonormal bases are called complementary if all vectors in the first basis are complementary to the other basis.

• Postulate 4: The composite system is described by the tensor product Hilbert space

$$\bigotimes_{\vartheta} \mathbb{H}_{\vartheta}.$$

Given a density matrix $\rho$ on $\bigotimes_{\vartheta} \mathbb{H}_{\vartheta}$ there are density matrices $\rho_{\vartheta} \in \mathcal{B}(\mathbb{H}_{\vartheta})$ such that

$$\mathrm{tr}\Big(\rho \bigotimes_{\gamma} \mathcal{A}_{\gamma}(\vartheta)\Big) = \mathrm{tr}(\rho_{\vartheta} A_{\vartheta}),$$

where

$$\mathcal{A}_{\gamma}(\vartheta) := \begin{cases} A_{\vartheta}, & \gamma = \vartheta, \\ id_{\mathbb{H}_{\gamma}}, & \gamma \neq \vartheta, \end{cases}$$

and $A_{\vartheta} \in \mathcal{B}(\mathbb{H}_{\vartheta})$; here $\rho_{\vartheta}$ is called the reduced density matrix. Let

$$\rho = \bigotimes_{\vartheta} \rho_{\vartheta}.$$

Then we define $\mathrm{tr}_{\gamma}(\rho)$ as follows:

$$\mathrm{tr}_{\gamma}(\rho) := \mathrm{tr}(\rho_{\gamma}) \bigotimes_{\vartheta \neq \gamma} \rho_{\vartheta}.$$
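A minimal numerical sketch of the reduced density matrix for the special case of two factors $\mathbb{H}_A \otimes \mathbb{H}_B$ (the helper `partial_trace` is an illustrative name and assumes the matrix is given in the product basis):

```python
import numpy as np

def partial_trace(rho, dim_a, dim_b, keep="A"):
    """Reduced density matrix of a state on H_A (x) H_B, tracing out the other factor."""
    rho = rho.reshape(dim_a, dim_b, dim_a, dim_b)    # indices: a, b, a', b'
    if keep == "A":
        return np.einsum("ibjb->ij", rho)            # trace over B
    return np.einsum("aiaj->ij", rho)                # trace over A

# Product state rho_A (x) rho_B: the reduced densities recover the factors
rho_a = np.array([[0.7, 0.1], [0.1, 0.3]])
rho_b = np.eye(2) / 2
rho = np.kron(rho_a, rho_b)
print(np.allclose(partial_trace(rho, 2, 2, "A"), rho_a))   # True
print(np.allclose(partial_trace(rho, 2, 2, "B"), rho_b))   # True
```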

Information and its Measures

• Shannon entropy: In his revolutionary paper Shannon proposed a statistical approach and he posed the problem in the following way: “Suppose we have a set of possible

events whose probabilities of occurrence are 푝1, 푝2, . . . , 푝푛. These probabilities are known but that is all we know concerning which event will occur. Can we find a measure of how much “choice” is involved in the selection of the event or how uncertain we are

of the outcome?” Denoting such a measure by $H(p_1, p_2, \ldots, p_n)$, he listed three very reasonable requirements which it should satisfy; those three postulates are as follows:

a. Continuity: 퐻(푝, 1 − 푝) is a continuous function of 푝.

b. Symmetry: $H(p_1, p_2, \ldots, p_n)$ is a symmetric function of its variables.

c. Recursion: For every $0 \le \lambda < 1$ the recursion
$$H(p_1, \ldots, \lambda p_r, (1-\lambda)p_r, \ldots, p_n) = H(p_1, \ldots, p_r, \ldots, p_n) + p_r H(\lambda, 1-\lambda)$$
holds.

According to him, there is only one function satisfying all these postulates:

$$H(p_1, p_2, \ldots, p_n) = -\kappa \sum_{i=1}^{n} p_i \ln(p_i).$$
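A minimal numerical sketch of this formula, taking $\kappa = 1$ so that the entropy is measured in nats (the helper name `shannon_entropy` is illustrative):

```python
import numpy as np

def shannon_entropy(p, kappa=1.0):
    """H(p) = -kappa * sum_i p_i ln p_i, with the convention 0 ln 0 = 0."""
    p = np.asarray(p, dtype=float)
    nonzero = p[p > 0]
    return -kappa * np.sum(nonzero * np.log(nonzero))

print(shannon_entropy([0.5, 0.5]))      # ln 2 ≈ 0.6931
print(shannon_entropy([0.25] * 4))      # ln 4 ≈ 1.3863
```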

Let $(X_i)_{i=1}^{n}$ be random variables with values in the set $\mathcal{X} := \prod_{i=1}^{n} \mathcal{X}_i$. The following notation will be used:

a. $p(x_1, \ldots, x_n)$ is the probability of $(X_i)_{i=1}^{n}$ at $(x_i)_{i=1}^{n}$;

b. $p(x_i \mid x_j)$ is the probability of $X_i$ at $x_i$ given $X_j$ at $x_j$.

Then

$$H(X_1, \ldots, X_n) := -\sum_{(x_i)_{i=1}^{n} \in \mathcal{X}} p(x_1, \ldots, x_n) \ln p(x_1, \ldots, x_n),$$

which for $n = 2$ reads

$$H(X_i, X_j) := -\sum_{x_i \in \mathcal{X}_i} \sum_{x_j \in \mathcal{X}_j} p(x_i, x_j) \ln p(x_i, x_j),$$

and

$$p(x_1, \ldots, x_n) = p(x_1 \mid x_2, x_3, \ldots, x_n)\, p(x_2, \ldots, x_n),$$

which, iterated, leads to the chain rule

$$p(x_1, \ldots, x_n) = \prod_{i=1}^{n} p(x_i \mid x_{i+1}, \ldots, x_n).$$

Theorem: If $(X_i)_{i=1}^{n}$ are random variables of finite range, then

$$H(X_1, \ldots, X_n) \le \sum_{i=1}^{n} H(X_i).$$

Proof: By definition,

$$H(X_1, \ldots, X_n) = -\sum_{(x_i)_{i=1}^{n} \in \mathcal{X}} p(x_1, \ldots, x_n) \ln p(x_1, \ldots, x_n)$$

and

$$\sum_{i=1}^{n} H(X_i) = -\sum_{(x_i)_{i=1}^{n} \in \mathcal{X}} p(x_1, \ldots, x_n) \ln \prod_{i=1}^{n} p(x_i).$$

Their difference is the relative entropy of the joint distribution with respect to the product of the marginals,

$$\sum_{(x_i)_{i=1}^{n} \in \mathcal{X}} p(x_1, \ldots, x_n) \ln \frac{p(x_1, \ldots, x_n)}{\prod_{i=1}^{n} p(x_i)} \ge 0,$$

hence

$$H(X_1, \ldots, X_n) \le \sum_{i=1}^{n} H(X_i).$$

Fano's inequality: Let $X$ and $Y$ be random variables whose range lies in a set of cardinality $d$, and let $p := \mathrm{Prob}(X \neq Y)$. Then

$$H(X \mid Y) \le p \log(d-1) + H(p, 1-p),$$

where $H(X \mid Y) := H(X, Y) - H(Y)$ is the conditional entropy.
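A small numerical sketch of Fano's inequality on an assumed joint distribution of two variables with range of cardinality $d = 3$, taking all logarithms as natural, as in the entropy definitions above:

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Joint distribution p(x, y) over {0, 1, 2} x {0, 1, 2} (an assumed example)
joint = np.array([[0.20, 0.05, 0.05],
                  [0.02, 0.30, 0.03],
                  [0.05, 0.05, 0.25]])

H_XY = entropy(joint.ravel())
H_Y = entropy(joint.sum(axis=0))
H_X_given_Y = H_XY - H_Y                        # conditional entropy H(X|Y)

p_err = 1.0 - np.trace(joint)                   # Prob(X != Y)
d = joint.shape[0]
fano_bound = p_err * np.log(d - 1) + entropy([p_err, 1 - p_err])
print(H_X_given_Y <= fano_bound + 1e-12)        # True
```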

• Von Neumann Entropy: Let $\rho$ be the density matrix of a quantum system; then the von Neumann entropy of that quantum system is

$$S(\rho) := -\kappa\, \mathrm{tr}(\rho \ln \rho).$$
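A minimal numerical sketch of this definition with $\kappa = 1$, computed from the eigenvalues of $\rho$ (the helper name is illustrative):

```python
import numpy as np

def von_neumann_entropy(rho, kappa=1.0):
    """S(rho) = -kappa * tr(rho ln rho), computed from the eigenvalues of rho."""
    eigenvalues = np.linalg.eigvalsh(rho)
    eigenvalues = eigenvalues[eigenvalues > 1e-12]   # convention 0 ln 0 = 0
    return -kappa * np.sum(eigenvalues * np.log(eigenvalues))

print(von_neumann_entropy(np.diag([0.9, 0.1])))   # ≈ 0.3251
print(von_neumann_entropy(np.eye(2) / 2))         # ln 2 ≈ 0.6931 (maximally mixed)
```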

Theorem: Let $\rho$ and $\sigma$ be densities on a $d$-dimensional Hilbert space and let

$$p := \frac{\|\rho - \sigma\|_1}{2},$$

where $\|\cdot\|_1 : \mathcal{B}(\mathbb{H}) \to \mathbb{R}_0^{+}$ is the trace norm on the operator space $\mathcal{B}(\mathbb{H})$ of the Hilbert space $\mathbb{H}$. Then

$$|S(\rho) - S(\sigma)| \le p \log(d-1) + H(p, 1-p)$$

holds.
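A short numerical check of this continuity bound on an assumed pair of diagonal densities, again taking natural logarithms throughout:

```python
import numpy as np

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return -np.sum(w * np.log(w))

def binary_entropy(p):
    return 0.0 if p in (0.0, 1.0) else -p * np.log(p) - (1 - p) * np.log(1 - p)

d = 3
rho = np.diag([0.6, 0.3, 0.1])
sigma = np.diag([0.5, 0.25, 0.25])
p = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))    # half the trace norm
bound = p * np.log(d - 1) + binary_entropy(p)
print(abs(entropy(rho) - entropy(sigma)) <= bound + 1e-12)   # True
```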

• Quantum Relative Entropy:

The relative entropy, or I-divergence of the probability distributions 푝(푥) and 푞(푥), is defined as

$$D(p\,\|\,q) := \int_{-\infty}^{\infty} p(x) \ln \frac{p(x)}{q(x)}\, dx.$$

Assume that $\rho$ and $\sigma$ are density matrices on a Hilbert space $\mathbb{H}$; then

$$S(\rho\,\|\,\sigma) := \begin{cases} \mathrm{tr}\,\rho(\ln\rho - \ln\sigma) & \text{if } \mathrm{supp}(\rho) \le \mathrm{supp}(\sigma), \\ +\infty & \text{otherwise.} \end{cases}$$
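A minimal numerical sketch of the finite case, assuming $\sigma$ (and here also $\rho$) is of full rank so that the matrix logarithms exist:

```python
import numpy as np
from scipy.linalg import logm

def relative_entropy(rho, sigma):
    """S(rho || sigma) = tr rho (ln rho - ln sigma), for full-rank densities."""
    return np.trace(rho @ (logm(rho) - logm(sigma))).real

rho = np.array([[0.8, 0.1], [0.1, 0.2]])
sigma = np.eye(2) / 2
print(relative_entropy(rho, sigma))     # >= 0, and 0 iff rho == sigma
```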

In terms of the Hilbert–Schmidt inner product, define the relative modular operator $\Delta$ by

$$\Delta a = \rho a \sigma^{-1}$$

for all $a \in \mathcal{B}(\mathbb{H})$. Then

$$\mathrm{tr}\,\rho(\ln\rho - \ln\sigma) = -\big\langle \rho^{1/2}, (\ln\Delta)\rho^{1/2} \big\rangle,$$

where $\langle A, B\rangle := \mathrm{tr}(A^{*}B)$ and $A^{*}$ is the adjoint of $A$; equivalently,

$$\mathrm{tr}\,\rho(\ln\rho - \ln\sigma) = -\mathrm{tr}\big((\rho^{1/2})^{*}\,(\ln\Delta)(\rho^{1/2})\big).$$

Let $\rho \equiv e^{H}$, $\omega$ and $\sigma$ be three invertible densities. The $e$-geodesic connecting $\rho$ and $\sigma$ is the curve

$$\gamma_e(t) := \frac{e^{H+tA}}{\mathrm{tr}(e^{H+tA})}$$

for all $t \in [0,1]$, where $A := \ln\sigma - \ln\rho$. Then $\gamma_e(0) = \rho$ and $\gamma_e(1) = \sigma$. The $m$-geodesic connecting $\rho$ and $\omega$ is the curve

$$\gamma_m(t) := \rho + tB$$

for all $t \in [0,1]$, where $B := \omega - \rho$. Then $\gamma_m(0) = \rho$ and $\gamma_m(1) = \omega$.

Assume that the $e$-geodesic connecting $\rho$ and $\sigma$ is orthogonal to the $m$-geodesic connecting $\rho$ and $\omega$ with respect to the inner product

$$\langle E, F\rangle_{\rho} := \int_{0}^{\infty} \mathrm{tr}\big((s\mathbb{I} + \rho)^{-1} E^{*} (s\mathbb{I} + \rho)^{-1} F\big)\, ds.$$

A plain computation yields

$$S(\omega\|\rho) + S(\rho\|\sigma) - S(\omega\|\sigma) = \mathrm{tr}(AB) = \langle A, B\rangle_{HS}.$$

Define

$$T_{\rho} : X \mapsto \int_{0}^{\infty} (s\mathbb{I} + \rho)^{-1} X (s\mathbb{I} + \rho)^{-1}\, ds.$$

Then, according to Dénes Petz,

$$\langle X, Y\rangle_{HS} = \langle T_{\rho}(X), Y\rangle_{\rho}$$

and

$$T_{\rho}(A) = \dot{\gamma}_e(0).$$

It is obvious that

$$B = \dot{\gamma}_m(0).$$

Therefore

$$\langle A, B\rangle_{HS} = \langle T_{\rho}(A), B\rangle_{\rho} = \langle \dot{\gamma}_e(0), \dot{\gamma}_m(0)\rangle_{\rho},$$

and we can conclude that if our orthogonality assumption holds, then

$$\langle \dot{\gamma}_e(0), \dot{\gamma}_m(0)\rangle_{\rho} = 0 \;\Rightarrow\; S(\omega\|\rho) + S(\rho\|\sigma) = S(\omega\|\sigma).$$

This is sometimes called the Pythagorean theorem for density matrices.
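The identity $S(\omega\|\rho) + S(\rho\|\sigma) - S(\omega\|\sigma) = \mathrm{tr}(AB)$ can be checked numerically on randomly generated invertible densities; the sketch below is my own check of that algebraic step, with illustrative helper names:

```python
import numpy as np
from scipy.linalg import logm

def rel_entropy(a, b):
    return np.trace(a @ (logm(a) - logm(b))).real

def random_density(d, seed):
    rng = np.random.default_rng(seed)
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = m @ m.conj().T                    # positive definite
    return rho / np.trace(rho).real

rho, sigma, omega = (random_density(3, s) for s in (0, 1, 2))
A = logm(sigma) - logm(rho)                 # tangent of the e-geodesic at rho
B = omega - rho                             # tangent of the m-geodesic at rho

lhs = rel_entropy(omega, rho) + rel_entropy(rho, sigma) - rel_entropy(omega, sigma)
print(np.isclose(lhs, np.trace(A @ B).real))   # True
```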

• Renyi Entropy, Quantum Renyi Entropy and Quantum Relative Renyi Entropy:

The Renyi entropy of order 훼 ≠ 1 of the probability distribution (푝1, 푝2, . . . , 푝푛) is defined by

$$H_{\alpha}(p_1, p_2, \ldots, p_n) = \frac{1}{1-\alpha} \ln \sum_{k=1}^{n} p_k^{\alpha}.$$

The quantum Renyi entropy of order $\alpha \neq 1$ of the density matrix $\rho$ is defined by

$$S_{\alpha}(\rho) = \frac{1}{1-\alpha} \ln \mathrm{tr}(\rho^{\alpha}),$$

and the quantum relative Renyi entropy of order $\alpha \neq 1$ of $\rho$ with respect to $\sigma$ is defined by

$$S_{\alpha}(\rho\,\|\,\sigma) = \frac{1}{\alpha-1} \ln \mathrm{tr}(\rho^{\alpha}\sigma^{1-\alpha}).$$
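A minimal numerical sketch of these three definitions (the helper names are illustrative; for the quantum quantities full-rank densities are assumed so the matrix powers exist):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as mpow

def renyi_entropy(p, alpha):
    """Classical Renyi entropy H_alpha of a probability vector p (alpha != 1)."""
    p = np.asarray(p, dtype=float)
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def quantum_renyi_entropy(rho, alpha):
    """S_alpha(rho) = ln tr(rho^alpha) / (1 - alpha)."""
    return np.log(np.trace(mpow(rho, alpha)).real) / (1.0 - alpha)

def quantum_relative_renyi(rho, sigma, alpha):
    """S_alpha(rho || sigma) = ln tr(rho^alpha sigma^(1-alpha)) / (alpha - 1)."""
    return np.log(np.trace(mpow(rho, alpha) @ mpow(sigma, 1.0 - alpha)).real) / (alpha - 1.0)

rho = np.diag([0.7, 0.3])
sigma = np.eye(2) / 2
print(renyi_entropy([0.7, 0.3], 2.0))        # equals the quantum value for diagonal rho
print(quantum_renyi_entropy(rho, 2.0))
print(quantum_relative_renyi(rho, sigma, 2.0))
```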

Entanglement

Let $\mathcal{B}(\mathbb{H}_A)$ and $\mathcal{B}(\mathbb{H}_B)$ be the algebras of bounded operators acting on the Hilbert spaces $\mathbb{H}_A$ and $\mathbb{H}_B$. The Hilbert space of the composite system is $\mathbb{H}_{AB} := \mathbb{H}_A \otimes \mathbb{H}_B$. The algebra of the operators acting on $\mathbb{H}_{AB}$ is $\mathcal{B}(\mathbb{H}_{AB}) := \mathcal{B}(\mathbb{H}_A) \otimes \mathcal{B}(\mathbb{H}_B)$.

In the vector space ℬ(ℍ) the standard positive cone is the set of all positive matrices. This cone induces the partial ordering.

$$A \le B \;\Leftrightarrow\; \langle \eta, A\eta\rangle \le \langle \eta, B\eta\rangle \quad \text{for every vector } \eta.$$

In the product space $\mathcal{B}(\mathbb{H}_{AB}) := \mathcal{B}(\mathbb{H}_A) \otimes \mathcal{B}(\mathbb{H}_B)$ we have two natural positive cones: $\mathcal{B}(\mathbb{H}_{AB})^{+}$ consists of the positive matrices acting on $\mathbb{H}_{AB} := \mathbb{H}_A \otimes \mathbb{H}_B$, and the cone $\mathcal{S}$ consists of all operators of the form

$$\sum_{i} A_i \otimes B_i,$$

where $A_i \in \mathcal{B}(\mathbb{H}_A)^{+}$ and $B_i \in \mathcal{B}(\mathbb{H}_B)^{+}$. It is obvious that $\mathcal{S} \subset \mathcal{B}(\mathbb{H}_{AB})^{+}$. A state is called separable (or unentangled) if its density belongs to $\mathcal{S}$; otherwise it is called entangled.

Pure states are extreme points in the state space. If a pure state is a convex combination $\sum_i p_i P_i \otimes Q_i$ of product pure states, then this convex combination must be trivial, that is, of the form $P \otimes Q$.

Lemma: Any unit vector Ψ ∈ ℍ퐴퐵 can be written in the form

$$\Psi = \sum_{k} \sqrt{p_k}\, g_k \otimes h_k,$$

where the vectors $g_k \in \mathbb{H}_A$ and $h_k \in \mathbb{H}_B$ are pairwise orthogonal and normalized; moreover, $(p_k)$ is a probability distribution. This expansion of $\Psi$ is called the Schmidt decomposition.
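A small numerical sketch: reshaping the coefficient vector of $\Psi$ into a matrix, the Schmidt coefficients $\sqrt{p_k}$ are its singular values, so the decomposition can be computed with an SVD (the helper name is illustrative):

```python
import numpy as np

def schmidt_decomposition(psi, dim_a, dim_b):
    """Schmidt coefficients and vectors of a unit vector psi in H_A (x) H_B."""
    u, s, vh = np.linalg.svd(psi.reshape(dim_a, dim_b))
    g = u.T           # g[k] is the k-th Schmidt vector in H_A
    h = vh            # h[k] is the k-th Schmidt vector in H_B
    return s, g, h

# Example: the Bell-type vector (|00> + |11>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
coeffs, g, h = schmidt_decomposition(psi, 2, 2)
print(coeffs ** 2)                                       # p_k = [0.5, 0.5]
reconstructed = sum(c * np.kron(gk, hk) for c, gk, hk in zip(coeffs, g, h))
print(np.allclose(reconstructed, psi))                   # True
```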

Let 푑𝑖푚 ℍ퐴 = 푑𝑖푚 ℍ퐵 = 푛. A pure state |Φ⟩⟨Φ| on the Hilbert space ℍ퐴 ⊗ ℍ퐵 is called maximally entangled if the following equivalent conditions hold:

a. The reduced densities are maximally mixed states.

b. When the vector $|\Phi\rangle$ is written in the Schmidt form above, $p_k = n^{-1}$ for every $1 \le k \le n$.

c. There is a product basis such that $|\Phi\rangle$ is complementary to it.

The density matrix of a maximally entangled state on $\mathbb{C}^n \otimes \mathbb{C}^n$ is of the form

$$\rho = \frac{1}{n} \sum_{\mu\nu} e_{\mu\nu} \otimes e_{\mu\nu},$$

where the $e_{\mu\nu}$ are the matrix units of the product basis.
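A small numerical sketch constructing a maximally entangled state and checking conditions (a) and the matrix-unit form above, for an assumed dimension $n = 3$:

```python
import numpy as np

n = 3
# Maximally entangled vector |Phi> = (1/sqrt(n)) sum_k |k> (x) |k>
phi = np.zeros(n * n)
for k in range(n):
    phi[k * n + k] = 1.0 / np.sqrt(n)
rho = np.outer(phi, phi)                          # |Phi><Phi|

# (a) The reduced density on H_A is maximally mixed
rho_a = np.einsum("ibjb->ij", rho.reshape(n, n, n, n))
print(np.allclose(rho_a, np.eye(n) / n))          # True

# The same density matrix in the form (1/n) sum_{mu,nu} e_{mu nu} (x) e_{mu nu}
units = [np.outer(np.eye(n)[mu], np.eye(n)[nu]) for mu in range(n) for nu in range(n)]
rho_from_units = sum(np.kron(e, e) for e in units) / n
print(np.allclose(rho, rho_from_units))           # True
```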

• Entanglement Measures:

The degree of entanglement of a state $\rho_{AB}$ on the bipartite Hilbert space $\mathbb{H}_{AB} := \mathbb{H}_A \otimes \mathbb{H}_B$ is its distance from the convex set of separable states. One possibility is to use the relative entropy as a distance function. In this way, we arrive at the concept of the relative entropy of entanglement:

$$E_{RE}(\rho_{AB}) := \inf\{\, S(\rho_{AB}\,\|\,D) : D \in \mathcal{S} \,\},$$

where $D$ runs over the set of separable states on $\mathbb{H}_{AB} := \mathbb{H}_A \otimes \mathbb{H}_B$. If $\rho_{AB}$ is faithful, then the minimizer is unique and can be called "the best separable approximation of $\rho_{AB}$".
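The infimum over all separable states is hard to compute in general. As a hedged numerical sketch (my assumed example, not the optimal minimizer): the product of the reduced densities $\rho_A \otimes \rho_B$ is itself separable, so $S(\rho_{AB}\,\|\,\rho_A \otimes \rho_B)$ gives an easily computable upper bound on $E_{RE}(\rho_{AB})$.

```python
import numpy as np
from scipy.linalg import logm

def rel_entropy(a, b):
    return np.trace(a @ (logm(a) - logm(b))).real

def reduced(rho, n, keep):
    r = rho.reshape(n, n, n, n)
    return np.einsum("ibjb->ij", r) if keep == "A" else np.einsum("aiaj->ij", r)

# Noisy maximally entangled two-qubit state (an assumed example): Bell state mixed with white noise
n, lam = 2, 0.8
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_ab = lam * np.outer(phi, phi) + (1 - lam) * np.eye(4) / 4

separable_candidate = np.kron(reduced(rho_ab, n, "A"), reduced(rho_ab, n, "B"))
upper_bound = rel_entropy(rho_ab, separable_candidate)
print(upper_bound)      # an upper bound on E_RE(rho_ab), not the infimum itself
```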

References

• S. AMARI AND H. NAGAOKA, Methods of information geometry, Transl. Math. Monographs 191, AMS.

• H. ARAKI, Relative entropy for states of von Neumann algebras, Publ. RIMS Kyoto Univ. 11(1976), 809–833

• J. ACZÉL AND Z. DARÓCZY, On measures of information and their characterizations, Academic Press, New York, San Francisco, London.

• O. BRATTELI AND D. W. ROBINSON, Operator Algebras and Quantum Statistical Mechanics II, Springer-Verlag, New York-Heidelberg-Berlin

• M. B. PLENIO, S. VIRMANI AND P. PAPADOPOULOS, Operator monotones, the reduction criterion and relative entropy, J. Physics A 33(2000), L193–197.

• DÉNES PETZ, Quantum Information Theory and Quantum Statistics, Theoretical and Mathematical Physics Series, Springer

• A. RÉNYI, On measures of entropy and information, in Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, ed. J. Neyman, pp. 547–561, University of California Press, Berkeley.

• J. ŘEHÁČEK AND Z. HRADIL, Quantification of entanglement by means of convergent iterations, Phys. Rev. Lett. 90 (2003), 127904.
