Entropy in Classical and Quantum Information Theory

William Fedus
Physics Department, University of California, San Diego

Entropy is a central concept in both classical and quantum information theory, measuring the uncertainty and the information content in the state of a physical system. This paper reviews classical information theory and then proceeds to generalizations into quantum information theory. Both Shannon and Von Neumann entropy are discussed, making the connection to the compressibility of a message stream and the generalization of compressibility in a quantum system. Finally, the paper considers the application of Von Neumann entropy in the entanglement of formation for both pure and mixed bipartite quantum states.

CLASSICAL INFORMATION THEORY

In statistical mechanics, entropy is a measure of the number of arrangements in which a system can be configured while remaining consistent with the thermodynamic observables. From this original formulation, entropy has grown to become an important element in many diverse fields of study. One of the first examples was in 1948, when Claude Shannon adopted entropy as a measure of the uncertainty in a message, or equivalently, the expected value of the information content within a message. Classical information theory, as established by Claude Shannon, sought to resolve two central issues in the processing of information:

1. The compression achievable for a message while preserving the fidelity of the original information

2. The rate at which a message can be communicated reliably over a noisy channel

Both of these address the degree of redundancy embedded within a message and, as we will see, entropy is found to be a rigorous and useful approach in this context.

Shannon Entropy

As alluded to previously, classical information theory is framed in terms of the information content and the transmission of messages. A message may be defined as a string of letters chosen from an alphabet of k letters {a1, a2, ..., ak}. Assuming that each letter ax occurs with probability p(ax) and is independently transmitted, this may be denoted as an ensemble A = {ax, p(ax)}; the Shannon entropy of an ensemble is then defined as

    H(A) ≡ H(p(a1), ..., p(ak)) ≡ −Σ_x p(ax) log p(ax)    (1)

where log is taken to be base-2 since we are transmitting messages with binary bits. This formula is important as it can be used to quantify the resources necessary to store information. For instance, if we use a block code which assigns integers to typical sequences, the information in a string of n letters can be compressed to H(A) bits per letter.

With this framework and definition, we may now consider the maximum compression of a length n message without loss of information. The number of bits necessary to transmit the message is given by

    H(A^n) = nH(A)    (2)

which simply states that one needs (n × the entropy of the ensemble A) bits. However, a more interesting case may be considered if we allow a small error δ > 0 in the encoding of the message. Specifically, we will only encode messages in a set B ⊂ A^n such that P(B) ≥ 1 − δ; the information size of this restricted set is denoted Hδ(A^n), and it recovers the lossless result in the limit δ → 0. Shannon's noiseless coding theorem then states that

    lim_{n→∞} (1/n) Hδ(A^n) = H(A)    (3)

that is, the optimal compression rate is simply the Shannon entropy.

As a basic example of this theorem, consider an information source which produces messages composed of the four-element alphabet

    A = {a1, a2, a3, a4}.    (4)

Clearly, without any compression, two bits of storage are necessary for each letter transmitted (bit strings 00, 01, 10, 11), as depicted in the left side of Figure 1. However, suppose now that symbol a1 is produced with probability 1/2, symbol a2 with probability 1/4, and symbols a3 and a4 each with probability 1/8. By applying Eq. 1, we can calculate the entropy to be

    H(A) = −Σ_x p(ax) log p(ax)    (5a)
         = −(1/2) log(1/2) − (1/4) log(1/4) − 2 × (1/8) log(1/8)    (5b)
         = 7/4    (5c)

FIG. 1. Tree representation for the encoding of the above example.

Therefore, a coding scheme exists in which the messages may be transmitted with only 7/4 bits per letter on average, if we assume that the message being transmitted is of length n with n ≫ 1. To realize this compression, we choose to encode a1 as bit string 0, a2 as bit string 10, a3 as bit string 110, and a4 as bit string 111; we find that the average length of the compressed string is

    (1/2) × 1 + (1/4) × 2 + (1/8) × 3 + (1/8) × 3 = 7/4    (6)

indicating that we have achieved the best possible compression for an encoding. It should be noted that this encoding will not compress all messages, since an unlucky and improbable message of repeated a4's would result in a higher average bit content per letter than the uncompressed encoding. Also, note that Figure 1 explicitly demonstrates that our encoding must not have duplicate prefixes, otherwise the encoding is not valid: if letters of the alphabet share a common prefix, message strings may not be uniquely decodable.
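
As a quick numerical check of this example (not part of the original paper), the short Python snippet below computes the Shannon entropy of Eq. 1 for the assumed probabilities {1/2, 1/4, 1/8, 1/8} and the average length of the prefix code {0, 10, 110, 111}, confirming that both come out to 7/4 bits per letter.

```python
import math

# Probabilities of the four-letter alphabet from the example (Eqs. 4, 5)
p = {"a1": 0.5, "a2": 0.25, "a3": 0.125, "a4": 0.125}

# Prefix-free code used to realize the compression (Eq. 6)
code = {"a1": "0", "a2": "10", "a3": "110", "a4": "111"}

# Shannon entropy, Eq. 1, with log base 2 (bits per letter)
H = -sum(px * math.log2(px) for px in p.values())

# Expected code length of the prefix code (bits per letter)
L = sum(p[x] * len(code[x]) for x in code)

print(H, L)  # both print 1.75, i.e. 7/4 bits per letter
```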

QUANTUM INFORMATION THEORY

In classical information theory, we have considered a message of n letters, where n ≫ 1 and each letter is drawn independently from an ensemble A = {ax, p(ax)}. The information content of the ensemble is equal to the Shannon information, H(A), in the limit that n → ∞. To generalize this formulation to the quantum regime, the message may still be regarded as composed of n letters, but now the letters are drawn from an ensemble of quantum states, X = {px, ρx}, where ρx is a density matrix and px is the a priori probability of selecting that quantum state. This quantum ensemble may be completely characterized via the density matrix

    ρ = Σ_x px ρx    (7)

The Von Neumann entropy of a quantum state ρ is then defined as

    S(ρ) ≡ −Tr(ρ log ρ)    (8)

where log is taken to be base d, d being the dimension of the Hilbert space containing ρ.

Von Neumann entropy enters quantum information theory in three important ways. First, it is a measure of the quantum information content of letters in the ensemble, specifically, how many qubits are needed to encode the message without loss of information. Second, it is also a measure of the classical information content per letter, where we find the maximum amount of information in bits that can be recovered from the message. And finally, as we will see later, Von Neumann entropy is used to quantify the entanglement of pure and mixed bipartite quantum states [1].

Now, returning to Eq. 8, if the λx are the eigenvalues of ρ, then we may rewrite it as

    S(ρ) = −Σ_x λx log λx.    (9)

One now observes that in this orthonormal basis, the Von Neumann entropy is equivalent to the Shannon entropy, Eq. 1,

    S(ρ) = H(A)    (10)

for the ensemble A = {a, λa}. This indicates that if a quantum system is a pure separable system, it reduces to the classical system. For a separable quantum system, the Von Neumann entropy is another quantification of the incompressibility of the information content, analogous to the Shannon entropy of a classical system.
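
To make Eqs. 8–10 concrete, here is a small illustrative Python sketch (the two-state qubit ensemble is an assumed example, not one from the paper) that builds ρ = Σ_x px ρx for two non-orthogonal pure qubit states and evaluates S(ρ) from its eigenvalues; in the eigenbasis this is exactly the Shannon entropy of the eigenvalue distribution.

```python
import numpy as np

def von_neumann_entropy(rho, base=2):
    """S(rho) = -Tr(rho log rho), computed from the eigenvalues (Eq. 9)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # convention: 0 log 0 = 0
    return float(-np.sum(evals * np.log(evals) / np.log(base)))

# Assumed example ensemble: |0> with probability 3/4, |+> with probability 1/4
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.75 * np.outer(ket0, ket0) + 0.25 * np.outer(ketp, ketp)    # Eq. 7

print(von_neumann_entropy(rho))                   # mixed state: 0 < S(rho) < 1
print(von_neumann_entropy(np.outer(ket0, ket0)))  # pure state: S = 0
```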

Connection to Thermodynamics

Entropy in statistical thermodynamics and in information theory are not disjoint concepts. For instance, if we take an open quantum system (in contact with its environment), we may use the mathematical properties of subadditivity and entropic invariance under unitary evolution for Von Neumann entropy to imply the second law of thermodynamics.

To see this, subadditivity may be succinctly expressed as

    S(ρAB) ≤ S(ρA) + S(ρB)    (11)

where ρA is the partial trace over B (ρA ≡ TrB(ρAB)) and ρB is defined similarly. If we consider the system (A) and the environment (E) to be initially uncorrelated, we can decompose the density matrix as the tensor product ρAE = ρA ⊗ ρE, which by Eq. 11 implies that the entropy is simply the sum of the entropies of the two states A and E,

    S(ρAE) = S(ρA) + S(ρE).    (12)

If we now allow the system to evolve under a unitary operator U which acts on the full system, the density matrix becomes

    ρ′AE = U ρAE U⁻¹    (13)

but we know that the entropy of a system is invariant under unitary evolution, S(ρ) = S(UρU⁻¹), so therefore

    S(ρ′AE) = S(ρAE).    (14)

Finally, by invoking Eq. 11 again on the state ρ′AE and using Eqs. 12 and 14, we find that

    S(ρ′A) + S(ρ′E) ≥ S(ρAE)    (15a)
    S(ρ′A) + S(ρ′E) ≥ S(ρA) + S(ρE)    (15b)

so the basic mathematical properties of Von Neumann entropy imply that the entropy of the universe, that is, the sum of the entropies of A and E, may not decrease, analogous to the second law of thermodynamics.
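
The sketch below is an illustrative numerical check of this argument (the initial state and the joint unitary are arbitrary assumptions): for two qubits it verifies additivity of the entropy for an uncorrelated ρA ⊗ ρE (Eq. 12), invariance of the total entropy under a joint unitary (Eq. 14), and the resulting non-decrease of the sum of the marginal entropies (Eq. 15b).

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits, from eigenvalues (Eq. 9)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def partial_trace(rho_ab, keep):
    """Reduced state of one qubit from a two-qubit density matrix."""
    r = rho_ab.reshape(2, 2, 2, 2)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

# Assumed initially uncorrelated system (A) and environment (E): rho_AE = rho_A ⊗ rho_E
rho_A = np.diag([0.7, 0.3])
rho_E = np.diag([1.0, 0.0])
rho_AE = np.kron(rho_A, rho_E)
assert np.isclose(entropy(rho_AE), entropy(rho_A) + entropy(rho_E))   # Eq. 12

# Evolve under a (randomly chosen) joint unitary U acting on A and E
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)
rho_AE_p = U @ rho_AE @ U.conj().T                                    # Eq. 13

print(np.isclose(entropy(rho_AE_p), entropy(rho_AE)))                 # Eq. 14: True
S_before = entropy(rho_A) + entropy(rho_E)
S_after = entropy(partial_trace(rho_AE_p, 0)) + entropy(partial_trace(rho_AE_p, 1))
print(S_after >= S_before - 1e-9)                                     # Eq. 15b: True
```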

Quantum Data Compression

Now, as in the classical case, Von Neumann entropy provides us a tool to address the issue of redundancy within a message. However, now in the quantum regime, we instead are seeking to compress the message into a smaller Hilbert space, H, without compromising the fidelity of the message.

As before, consider a message of length n with each letter drawn from an ensemble of pure states {px, |ψx⟩}, where the |ψx⟩ are not necessarily orthonormal. We may characterize each letter via the density matrix

    ρ = Σ_x px |ψx⟩⟨ψx|    (16)

and the entire message of length n can then be characterized as the n-fold tensor product

    ρ^n = ρ ⊗ · · · ⊗ ρ    (17)

The optimal compression of this quantum message into a Hilbert space H as n → ∞ is then given by

    log(dim H) = S(ρ^n) = nS(ρ)    (18)

where dim H is the dimension of the Hilbert space. This implies that the Von Neumann entropy is equal to the number of qubits of quantum information carried per letter of the message. Note that compression is always possible provided that the density matrix is not the maximally mixed state, ρ = 𝟙/2, since we cannot compress random qubits, directly analogous to the classical incompressibility of random bits.
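
As a rough numerical illustration of Eq. 18 (a toy example assumed here, not taken from the paper), the sketch below uses the qubit ensemble {|0⟩, |+⟩} with equal probabilities, builds ρ ⊗ · · · ⊗ ρ for a modest n, and counts how many eigenvectors are needed to capture probability 1 − δ of the weight; the logarithm of that count per letter is already well below one qubit and tends toward S(ρ) as n grows.

```python
import numpy as np
from functools import reduce

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

# Assumed ensemble: |0> and |+> with equal a priori probabilities (Eq. 16)
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ketp, ketp)

n, delta = 10, 0.05
rho_n = reduce(np.kron, [rho] * n)               # Eq. 17: rho ⊗ ... ⊗ rho
w = np.sort(np.linalg.eigvalsh(rho_n))[::-1]     # eigenvalues, largest first
m = int(np.searchsorted(np.cumsum(w), 1 - delta)) + 1   # size of the retained subspace

print(entropy(rho))       # S(rho) ≈ 0.60 qubits per letter
print(np.log2(m) / n)     # qubits per letter actually needed: < 1, tending to S(rho)
```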

ENTANGLEMENT MEASURES

Entanglement is the feature of quantum states whereby they exhibit correlations that cannot be accounted for classically. Entanglement theory is underpinned by the paradigm of Local Operations and Classical Communication (LOCC) formulated by Bennett et al. [2]. Under this approach, a quantum state is shared between different parties who can perform arbitrary local unitary operations and may communicate with each other over a classical channel. A state is separable if it can be created via only local operations and classical communication; otherwise, the state is considered entangled.

Mathematically, a pure bipartite state ψAB is defined to be entangled if the following tensor decomposition is not possible

    |ψAB⟩ = |ψA⟩ ⊗ |ψB⟩    (19)

and similarly, a mixed bipartite state ρAB is defined to be entangled if the density matrix may not be written as a mixture of product states,

    ρAB = Σ_i pi ρA,i ⊗ ρB,i    (20)

Since entangled states cannot be generated locally, entanglement may be considered a resource of the system. This resource is important in different tasks such as quantum computation, quantum communication and quantum cryptography [3].

Many important bipartite entanglement measures exist (Schmidt number, k-concurrences, I-concurrence, Negativity, Convex roof extended negativity (CREN), etc.); however, in this paper, we shall consider an entropy-based entanglement measure, the entanglement of formation [3]. We will first consider a pure quantum state and then the more complicated example of a mixed quantum state.
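
For pure states, Eq. 19 can be tested directly: a bipartite pure state is a product state exactly when its coefficient matrix has a single nonzero singular value (Schmidt rank one). The sketch below (an illustrative aside with assumed example states) applies this test to a product state and to a Bell state.

```python
import numpy as np

def is_product(psi, dA=2, dB=2, tol=1e-10):
    """A pure bipartite state is a product state iff its coefficient
    matrix has a single nonzero singular value (Schmidt rank 1)."""
    C = psi.reshape(dA, dB)
    s = np.linalg.svd(C, compute_uv=False)
    return int(np.sum(s > tol)) == 1

# Assumed examples in the basis {|00>, |01>, |10>, |11>}
product = np.kron([1, 0], [1, 1]) / np.sqrt(2)     # |0> ⊗ |+>
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)         # (|00> + |11>)/√2

print(is_product(product))  # True: the decomposition of Eq. 19 exists
print(is_product(bell))     # False: entangled
```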

Entanglement of Formation of Pure States

For any bipartite system in a pure state, Bennett et al. [2] demonstrated that it is reasonable to define the entanglement of the system as the Von Neumann entropy of either of its two parts. Given a density matrix ρAB of a pair of quantum systems A and B, consider all possible pure-state decompositions according to Eq. 7. For each pure state, the entanglement, E, may then be defined as the entropy of either quantum subsystem, A or B,

    E(ψ) = −Tr(ρA log ρA) = −Tr(ρB log ρB).    (21)

This definition, utilizing the Von Neumann entropy, indicates the information content of the reduced state. Note that this entanglement measure, E, ranges from 0 for a separable pure state to log N for a maximally entangled state.
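
A minimal sketch of Eq. 21 (assuming simple example states that are not in the paper): form the reduced density matrix of subsystem A and take its Von Neumann entropy, which gives 0 for a product state and 1 (using base-2 logarithms, i.e. one ebit) for a Bell state.

```python
import numpy as np

def entanglement_pure(psi, dA=2, dB=2):
    """E(psi) = S(rho_A), the entropy of the reduced state (Eq. 21)."""
    C = psi.reshape(dA, dB)
    rho_A = C @ C.conj().T              # partial trace over B
    w = np.linalg.eigvalsh(rho_A)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)         # (|00> + |11>)/√2
product = np.kron([1, 0], [1, 1]) / np.sqrt(2)     # |0> ⊗ |+>

print(entanglement_pure(bell))     # 1.0: maximally entangled two-qubit state
print(entanglement_pure(product))  # 0.0: separable
```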

Entanglement of Formation of Mixed States

To extend this definition to mixed states, the entanglement of formation is defined as the minimum of the average entanglement over all possible pure-state decompositions,

    E(ρ) = min Σ_i pi E(ψi)    (22)

The probability-weighted sum of the individual entanglements, E(ψi), is not unique since an infinite number of decompositions exist; therefore, we take the minimum. We choose the minimum (as opposed to another property) since a state which has any decomposition as a sum of separable pure states should have zero entanglement, and therefore the density matrix should be found to have zero entanglement.

For a pair of qubits, Eq. 22 can be written as an explicit function of ρ. To rewrite this equation as such, one must first introduce the spin flip transformation. For a pure state of a single qubit, the spin flip transformation, denoted by a tilde, is defined as

    |ψ̃⟩ = σy |ψ*⟩    (23)

where |ψ*⟩ is the complex conjugate of |ψ⟩ in a fixed basis such as {|↑⟩, |↓⟩} and σy in the same basis is the standard Pauli matrix. For n qubits, the spin flip operation may be applied to each individual qubit, and for a density matrix, each σy is applied to both the right and left sides. For instance,

    ρ̃ = (σy ⊗ σy) ρ* (σy ⊗ σy)    (24)

where the complex conjugate is taken in a fixed basis {|↑↑⟩, |↑↓⟩, |↓↑⟩, |↓↓⟩}.

With this definition, Hill and Wootters [4] demonstrated that Eq. 22 can be written as

    E(ψ) = E(C(ψ))    (25)

where the concurrence C is defined as

    C(ψ) ≡ |⟨ψ|ψ̃⟩|    (26)

and E is defined as

    E(C) ≡ −((1 + √(1 − C²))/2) log((1 + √(1 − C²))/2) − ((1 − √(1 − C²))/2) log((1 − √(1 − C²))/2)    (27)

This form of entanglement can be motivated by the fact that it allows us to transfer our analysis to the determination of the concurrence. We observe that E monotonically increases from 0 → 1 as the concurrence increases over that same range; therefore, concurrence may also be regarded as a valid measure of entanglement. This proves useful as we may extend the definition to that of arbitrary bipartite mixed states.

With the spin flip and the function E(C), we can now write the entanglement of formation of a mixed state ρ as

    E(ρ) = E(C(ρ))    (28)

where

    C(ρ) = max{0, λ1 − λ2 − λ3 − λ4}    (29)

where the λi are the eigenvalues, in decreasing order, of the Hermitian matrix R ≡ √(√ρ ρ̃ √ρ), and each λi is a non-negative real number. Eq. 28 was proved by Wootters [5] to hold for arbitrary states of a two-qubit system. For brevity, the proof will be omitted, but the usefulness of entropy in entanglement measures is well demonstrated.
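
The sketch below evaluates Eqs. 24–29 for an assumed family of two-qubit mixed states (a Bell state mixed with white noise). Rather than constructing R = √(√ρ ρ̃ √ρ) explicitly, it uses the equivalent recipe from Wootters [5] of taking the λi to be the square roots of the eigenvalues of ρρ̃, and it uses base-2 logarithms in Eq. 27.

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])    # Pauli sigma_y

def concurrence(rho):
    """C(rho) = max{0, l1 - l2 - l3 - l4} (Eq. 29); the l_i are computed here
    as square roots of the eigenvalues of rho * rho_tilde, which coincide with
    the eigenvalues of R = sqrt(sqrt(rho) rho_tilde sqrt(rho))."""
    rho_tilde = np.kron(sy, sy) @ rho.conj() @ np.kron(sy, sy)   # Eq. 24
    w = np.linalg.eigvals(rho @ rho_tilde)
    l = np.sort(np.sqrt(np.clip(w.real, 0, None)))[::-1]
    return float(max(0.0, l[0] - l[1] - l[2] - l[3]))

def formation(C):
    """E(C) of Eq. 27: the binary entropy of (1 + sqrt(1 - C^2))/2, in bits."""
    x = (1 + np.sqrt(max(0.0, 1.0 - C * C))) / 2
    if x >= 1.0:
        return 0.0
    return float(-x * np.log2(x) - (1 - x) * np.log2(1 - x))

# Assumed example: a Bell state mixed with white noise
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
for p in (1.0, 0.75, 0.5):
    rho = p * np.outer(bell, bell) + (1 - p) * np.eye(4) / 4
    C = concurrence(rho)
    print(p, round(C, 3), round(formation(C), 3))   # Eq. 28: E(rho) = E(C(rho))
```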

[1] J. Preskill, "Lecture notes on quantum computation" (1998).
[2] C. Bennett, D. DiVincenzo, J. Smolin, and W. Wootters, Physical Review A 54, 3824 (1996).
[3] C. Eltschka and J. Siewert, arXiv:1402.6710 (2014).
[4] S. Hill and W. Wootters, Physical Review Letters 78, 5022 (1997).
[5] W. Wootters, Physical Review Letters 80, 2245 (1998).