The Quantum Reverse Shannon Theorem and Resource Tradeoffs for Simulating Quantum Channels
Citation: Bennett, Charles H., Igor Devetak, Aram W. Harrow, Peter W. Shor, and Andreas Winter. "The Quantum Reverse Shannon Theorem and Resource Tradeoffs for Simulating Quantum Channels." IEEE Trans. Inform. Theory 60, no. 5 (May 2014): 2926–2959.
As Published: http://dx.doi.org/10.1109/tit.2014.2309968
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Version: Author's final manuscript
Citable link: http://hdl.handle.net/1721.1/93188
Terms of Use: Creative Commons Attribution-Noncommercial-Share Alike
Detailed Terms: http://creativecommons.org/licenses/by-nc-sa/4.0/

The quantum reverse Shannon theorem and resource tradeoffs for simulating quantum channels

Charles H. Bennett, Igor Devetak, Aram W. Harrow, Peter W. Shor and Andreas Winter
Abstract—Dual to the usual noisy channel coding problem, where a noisy (classical or quantum) channel is used to simulate a noiseless one, reverse Shannon theorems concern the use of noiseless channels to simulate noisy ones, and more generally the use of one noisy channel to simulate another. For channels of nonzero capacity, this simulation is always possible, but for it to be efficient, auxiliary resources of the proper kind and amount are generally required. In the classical case, shared randomness between sender and receiver is a sufficient auxiliary resource, regardless of the nature of the source, but in the quantum case the requisite auxiliary resources for efficient simulation depend on both the channel being simulated and the source from which the channel inputs are coming. For tensor power sources (the quantum generalization of classical IID sources), entanglement in the form of standard ebits (maximally entangled pairs of qubits) is sufficient, but for general sources, which may be arbitrarily correlated or entangled across channel inputs, additional resources, such as entanglement-embezzling states or backward communication, are generally needed. Combining existing and new results, we establish the amounts of communication and auxiliary resources needed in both the classical and quantum cases, the tradeoffs among them, and the loss of simulation efficiency when auxiliary resources are absent or insufficient. In particular we find a new single-letter expression for the excess forward communication cost of coherent feedback simulations of quantum channels (i.e. simulations in which the sender retains what would escape into the environment in an ordinary simulation), on non-tensor-power sources in the presence of unlimited ebits but no other auxiliary resource. Our results on tensor power sources establish a strong converse to the entanglement-assisted capacity theorem.

CONTENTS

I Introduction
   I-A Motivation
   I-B Terminology
   I-C Overview of results
II Statement of results
   II-A Classical Reverse Shannon Theorem
   II-B Quantum Reverse Shannon Theorem (QRST)
   II-C Entanglement spread
   II-D Relation to other communication protocols
III Simulation of classical channels
   III-A Overview
   III-B Proof of unweighted classical reverse Shannon theorem
   III-C Classical types
   III-D Converses
IV Simulation of quantum channels on arbitrary inputs
   IV-A The case of flat spectra
   IV-B Tensor power inputs
   IV-C A quantum theory of types
      IV-C1 Schur duality and quantum states
      IV-C2 Decomposition of memoryless quantum channels
      IV-C3 Jointly typical projectors in the Schur basis
   IV-D Reduction to the flat spectrum case
   IV-E Converses and strong converses
      IV-E1 Strong converse for forward communication
      IV-E2 Converses for the use of entanglement and back communication, based on spread
      IV-E3 The clueless Eve channel
V Conclusion
VI Acknowledgments
References
Appendix

Charles H. Bennett is with the IBM T.J. Watson Research Center, Yorktown Heights, NY 10598 (USA). This work was funded in part by ARDA contract DAAD19-01-0056 and DARPA QUEST contract HR0011-09-C0047. Email: [email protected]

Igor Devetak is with IMC Financial Markets, Poststrasse 20, 6300 Zug, Switzerland. This work was performed while he was at the IBM T.J. Watson Research Center and the Department of Electrical Engineering at USC. Email: [email protected]

Aram W. Harrow is with the Department of Physics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA. This work was also performed while he was at the University of Bristol and the University of Washington. He was funded by NSF grants CCF-0916400 and CCF-1111382 and ARO contract W911NF-12-1-0486. Email: [email protected]

Peter W. Shor is with the Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA, and was supported in part by NSF grants CCF-0431787 ("Quantum Channel Capacities and Quantum Complexity") and CCF-0829421 ("Physics Based Approaches to Quantum Algorithms"), as well as the NSF STC on Science of Information. Email: [email protected]

Andreas Winter is with ICREA and Física Teòrica: Informació i Fenomens Quàntics, Universitat Autònoma de Barcelona, ES-08193 Bellaterra (Barcelona), Spain. During preparation of this paper he was also affiliated with the Department of Mathematics, University of Bristol and the Centre for Quantum Technologies, National University of Singapore. He acknowledges support by the U.K. EPSRC grant "QIP IRC", the Royal Society, the Philip Leverhulme Trust, EC integrated project QAP (contract IST-2005-15848), as well as STREPs QICS and QCS, and finally the ERC Advanced Grant "IRQUAT". Email: [email protected]

arXiv:0912.5537v5 [quant-ph] 4 Mar 2014
I. INTRODUCTION

A. Motivation

In classical information theory, Shannon's celebrated noisy channel coding theorem [71] establishes the ability of any noisy memoryless channel N to simulate an ideal noiseless binary channel, and shows that its asymptotic efficiency or capacity for doing so is given by a simple expression

   C(N) = max_p I(X;Y) = max_p [ H(X) + H(Y) − H(XY) ],   (1)

where H is the entropy, X the input random variable and Y = N(X) the induced output variable. The capacity, in other words, is equal to the maximum, over input distributions p, of the input-output mutual information for a single use of the channel. Somewhat more recently, a dual theorem, the classical "reverse Shannon theorem", was proved [14], which states that for any channel N of capacity C, if the sender and receiver share an unlimited supply of random bits, an expected Cn + o(n) uses of a noiseless binary channel are sufficient to exactly simulate n uses of the channel. In [84] a version of this construction is given which achieves asymptotically perfect simulation, works on a uniform blocksize Cn + o(n), and uses an amount of shared randomness increasing linearly with n, in contrast to the exponential amount used in [14]. These simulations do not depend on the nature of the source, and work for arbitrarily varying as well as IID sources. Together with the original Shannon theorem, these theorems show that in the presence of shared randomness, the asymptotic properties of a classical channel can be characterized by a single parameter, its capacity, with all channels of equal capacity being able to simulate one another with unit asymptotic efficiency in the presence of shared randomness.

In [14] a quantum analog of the reverse Shannon theorem was conjectured, according to which quantum channels should be characterizable by a single parameter in the presence of unlimited shared entanglement between sender and receiver. A (discrete memoryless) quantum channel can be viewed physically as a process wherein a quantum system, originating with a sender Alice, is split into a component for a receiver Bob and another for an inaccessible environment (commonly referred to as Eve). Mathematically it can be viewed as an isometric embedding N^{A→BE} of Alice's Hilbert space (A) into the joint Hilbert space of Bob (B) and Eve (E). Tracing out Eve yields a completely positive, trace-preserving linear map on density operators from A to B, which we denote N^{A→B}. Operationally, the two pictures are equivalent, but we will sometimes find it convenient mathematically to work with one or the other.

The theory of quantum channels is richer and less well understood than that of classical channels. Unlike classical channels, quantum channels have multiple inequivalent capacities, depending on what one is trying to use them for, and what additional resources are brought into play. These include

• The ordinary classical capacity C, defined as the maximum asymptotic rate at which classical bits can be transmitted reliably through the channel, with the help of a quantum encoder and decoder.
• The ordinary quantum capacity Q, which is the maximum asymptotic rate at which qubits can be transmitted under similar circumstances.
• The private classical capacity P, which is the maximum rate at which classical bits can be transmitted to Bob while remaining private from Eve, who is assumed to hold the channel's environment E.
• The classically assisted quantum capacity Q_2, which is the maximum asymptotic rate of reliable qubit transmission with the help of unlimited use of a 2-way classical side channel between sender and receiver.
• The entanglement-assisted classical capacity C_E [13], [14], which is the maximum asymptotic rate of reliable bit transmission with the help of unlimited pure state entanglement shared between the sender and receiver.
• Similarly, one can define the entanglement-assisted quantum capacity Q_E [13], [14], which is simply (1/2)C_E, by teleportation [9] and super-dense coding [15].¹

Somewhat unexpectedly, the entanglement-assisted capacities are the simplest to calculate, being given by an expression analogous to Eq. (1). In [14] (see also [54]) it was shown that

   C_E(N) = max_ρ [ H(ρ) + H(N(ρ)) − H((I ⊗ N)(Φ_ρ)) ],   (2)

where the optimization is over all density matrices ρ on A and Φ_ρ^{RA} is a purification of ρ by a reference system R (meaning that Φ_ρ^{RA} is a pure state and Tr_R Φ_ρ^{RA} = ρ^A). The entanglement-assisted capacity formula Eq. (2) is formally identical to Eq. (1), but with Shannon entropies replaced by von Neumann entropies. It shares with Eq. (1) the desirable property of being a concave function of ρ, making it easy to compute [2]. We can alternately write the RHS of Eq. (2) as

   max_ρ I(R;B)_ρ,   (3)

using the definitions

   |Ψ⟩ = (I^R ⊗ N^{A→BE}) Φ_ρ^{RA},
   I(R;B)_ρ = I(R;B)_Ψ = H(R)_Ψ + H(B)_Ψ − H(RB)_Ψ = H(Ψ^R) + H(Ψ^B) − H(Ψ^{RB}).

We will use I(R;B)_ρ and I(R;B)_Ψ interchangeably, since the mutual information and other entropic properties of Ψ are uniquely determined by ρ.

Aside from the constraints Q ≤ P ≤ C ≤ C_E and Q ≤ Q_2, which are obvious consequences of the definitions, and Q_2 ≤ Q_E = (1/2)C_E, which follows from [74], the five capacities appear to vary rather independently (see for example [10] and [72]). Except in special cases, it is not possible, without knowing the parameters of a channel, to infer any one of these capacities from the other four.

¹ Another powerful assistive resource, unlimited noiseless quantum back-communication from receiver to sender, turns out to be equivalent to unlimited shared entanglement [18]. Thus the capacity of a channel assisted by such back-communication is C_E for classical messages and Q_E for quantum messages.
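Eq. (1) is a concave maximization over input distributions p, so it can be evaluated numerically by alternating optimization. As an illustrative sketch (not part of the paper; the function name is ours), the standard Blahut–Arimoto iteration computes C(N) in bits:

```python
from math import log2

def channel_capacity(W, tol=1e-12, max_iter=20000):
    """Blahut-Arimoto iteration for C(N) = max_p I(X;Y) of Eq. (1), in bits.

    W[x][y] = N(y|x) is a row-stochastic channel transition matrix.
    """
    m, n = len(W), len(W[0])
    p = [1.0 / m] * m                       # start from the uniform input
    for _ in range(max_iter):
        # Output distribution q(y) = sum_x p(x) W(y|x)
        q = [sum(p[x] * W[x][y] for x in range(m)) for y in range(n)]
        # d(x) = D(W(.|x) || q), relative entropy of row x to the output
        d = [sum(W[x][y] * log2(W[x][y] / q[y]) for y in range(n) if W[x][y] > 0)
             for x in range(m)]
        z = [p[x] * 2.0 ** d[x] for x in range(m)]
        s = sum(z)
        new_p = [v / s for v in z]          # multiplicative update of p
        if max(abs(a - b) for a, b in zip(new_p, p)) < tol:
            p = new_p
            break
        p = new_p
    q = [sum(p[x] * W[x][y] for x in range(m)) for y in range(n)]
    return sum(p[x] * W[x][y] * log2(W[x][y] / q[y])
               for x in range(m) for y in range(n) if W[x][y] > 0)

# Binary symmetric channel with flip probability 0.1:
bsc = [[0.9, 0.1], [0.1, 0.9]]
```

For the binary symmetric channel the optimal input is uniform, giving C = 1 − H_2(0.1) ≈ 0.531 bits; by the concavity noted after Eq. (2), the entanglement-assisted capacity C_E is likewise amenable to such convex-optimization methods [2].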
This complex situation naturally raises the question of how many independent parameters are needed to characterize the important asymptotic, capacity-like properties of a general quantum channel. A full understanding of quantum channels would enable us to calculate not only their capacities, but more generally, for any two channels M and N, the asymptotic efficiency (possibly zero) with which M can simulate N, both alone and in the presence of auxiliary resources such as classical communication or shared entanglement.

One motivation for studying communication in the presence of auxiliary resources is that it can simplify the classification of channels' capacities to simulate one another. This is so because if a simulation is possible without the auxiliary resource, then the simulation remains possible with it, though not necessarily vice versa. For example, Q and C represent a channel's asymptotic efficiencies of simulating, respectively, a noiseless qubit channel and a noiseless classical bit channel. In the absence of auxiliary resources these two capacities can vary independently, subject to the constraint Q ≤ C, but in the presence of unlimited prior entanglement, the relation between them becomes fixed: C_E = 2Q_E, because entanglement allows a noiseless 2-bit classical channel to simulate a noiseless 1-qubit channel and vice versa (via teleportation [9] and superdense coding [15]). Similarly the auxiliary resource of shared randomness simplifies the theory of classical channels by allowing channels to simulate one another efficiently according to the classical reverse Shannon theorem.

B. Terminology

The various capacities of a quantum channel N may be defined within a framework where asymptotic communication resources and conversions between them are treated abstractly [33]. Many independent uses of a noisy channel N, i.e. N^{⊗n}, correspond to an asymptotic resource ⟨N⟩, while standard resources such as ebits (maximally-entangled pairs of qubits, also known as EPR pairs) or instances of a noiseless qubit channel from Alice to Bob are denoted [qq] and [q → q] respectively. Their classical analogues are [cc] and [c → c], which stand for bits of shared randomness (rbits) and uses of noiseless classical bit channels (cbits). Communication from Bob to Alice is denoted by [q ← q] and [c ← c]. Within this framework, coding theorems can be thought of as transformations from one communication resource to another, analogous to reductions in complexity theory, but involving resources that are quantitative rather than qualitative, the rate (if other than 1) being indicated by a coefficient preceding the resource expression. We consider two kinds of asymptotic resource reducibility or resource inequality [33]: viz. asymptotic reducibility via local operations ≤_L, usually abbreviated ≤, and asymptotic reducibility via clean local operations ≤_CL. A resource β is said to be locally asymptotically reducible to α if there is an asymptotically faithful transformation from α to β via local operations: that is, for any ε, δ > 0 and for all sufficiently large n, n(1 + δ) copies of α can be transformed into n copies of β with overall error ε = o(1). Here, and throughout the paper, we use o(1) to mean a quantity that approaches zero as n → ∞. We use "error" to refer to the trace distance in the context of states, which is defined as

   (1/2) ‖ρ − σ‖_1 = (1/2) Tr |ρ − σ|.

For channels, "error" refers to the diamond norm [60] (see also [69], [63]). The example most studied in this paper is when the target resource is β = ⟨N⟩ with a channel N. The initial resource α is transformed, via a protocol involving local operations, into a channel N'^(n), with diamond-norm error

   ‖N^{⊗n} − N'^(n)‖_⋄ = max_ρ ‖ (id_R ⊗ (N^{⊗n} − N'^(n)))(Φ_ρ) ‖_1,

where the maximization is over states ρ on A^n and Φ_ρ is an arbitrary purification of it.

The clean version of this reducibility, ≤_CL, which is important when we wish to coherently superpose protocols, adds the restriction that any quantum subsystem discarded during the transformation be in the |0⟩ state up to an error that vanishes in the limit of large n. When α ≤ β and β ≤ α we have a resource equivalence, designated =_L, or =, or for the clean version =_CL. Resource reducibilities and equivalences will often be referred to as resource relations or RRs.

For example, the coding theorem for entanglement-assisted classical communication can be stated as

   ⟨N⟩ + ∞[qq] ≥ C_E(N)[c → c],   (4)

where C_E(N) is defined as in Eq. (2).

In this language, to simulate (resp. cleanly simulate) a channel N is to find standard resources α (made up of qubits, ebits, cbits and so on) such that ⟨N⟩ ≤ α (resp. ⟨N⟩ ≤_CL α). For example, the simplest form of the classical reverse Shannon theorem can be stated as: for all N, ⟨N⟩ ≤ C(N)[c → c] + ∞[cc], with C(N) defined in Eq. (1).

We will also introduce notation for two refinements of the problem. First, we (still following [33]) define the relative resource ⟨N : ρ⟩ as many uses of a channel N whose asymptotic accuracy is guaranteed or required only when n uses of N are fed an input of the form ρ^{⊗n}. This means that the error is evaluated with respect to Φ_ρ^{⊗n} rather than the worst-case entangled input state:

   ‖N^{⊗n} − N'^(n)‖_{ρ^{⊗n}} = ‖ (id_R ⊗ (N^{⊗n} − N'^(n)))(Φ_ρ^{⊗n}) ‖_1.

Most coding theorems still apply to relative resources, once we drop the maximization over input distributions. So for a classical channel ⟨N : p⟩ ≥ I(X;Y)_p [c → c], and for a quantum channel ⟨N : ρ⟩ + ∞[qq] ≥ I(R;B)_ρ [c → c] (notation following Eq. (2)).

Second, we will consider simulating channels with passive feedback. The classical version of a passive feedback channel has Alice obtain a copy of Bob's output Y = N(X). We denote this form of channel by N_F if the original channel is N. For a quantum channel, we cannot give Alice a copy of Bob's output because of the no-cloning theorem [88], but instead define a coherent feedback version of the channel as an isometry in which the part of the output that does not go to Bob is retained by Alice, rather than escaping to the environment [87]. We denote this N_F^{A→BE}, where the subscript F indicates that E is retained by Alice. When it is clear from the context, we will henceforth use "feedback" to mean conventional passive feedback for a classical channel and coherent feedback for a quantum channel.²

² The term "feedback" has been used in multiple ways. Bowen [19] compares several kinds of feedback, both quantum and classical. In his terminology, both the classical and coherent feedbacks we consider here are passive, meaning that they do not grant the sender and receiver any additional resource but require them to perform an additional task (e.g. giving the sender a copy of the output) beyond what would have been required in an ordinary execution or simulation of the channel. For this reason passive feedback capacities are never greater than the corresponding plain capacities. Active feedback, by contrast, involves granting the sender and receiver an additional resource (e.g. unlimited quantum back-communication, as in [18]) to perform the same task as in a plain execution or simulation of the channel. Accordingly, active feedback capacities are never less than the corresponding plain capacities. We do not discuss active feedback further in this paper.

Coherent feedback is an example of quantum state redistribution [57], [35], [90] in which the same global pure state Ψ is redistributed among a set of parties. The redistribution corresponding to a feedback channel N_F^{A→BE} involves Alice, Bob, and a purifying reference system R. Alice's share A of the initial state Ψ^{A:R} is split into two parts, E and B, with E remaining with her party, while B passes to Bob, who initially held nothing, leading to a final state Ψ^{E:B:R}.

Classical and coherent feedback are thus rather different notions, indeed one might say opposite notions, since in coherent feedback Alice gets to keep everything but what Bob receives, and as a result coherent feedback is sometimes a stronger resource than free classical back-communication. Despite these differences, there are close parallels in how feedback affects the tradeoff between static resources (rbits, ebits) and dynamic resources (cbits, qubits) required for channel simulation. In both cases, when the static resource is restricted, simulating a non-feedback version of the channel requires less of the dynamic resource than simulating a feedback version, because the non-feedback simulation can be economically split into two sequential stages. For a feedback simulation, no such splitting is possible.

Other notational conventions we adopt are as follows. If |ψ⟩ is a pure state then ψ := |ψ⟩⟨ψ| and ψ^X refers to the state of the X subsystem of ψ. For a subsystem X, we define |X| to be the cardinality of X if X is classical or dim X when X is quantum. We take log and exp to be base 2. The fidelity [77] between ρ and σ is ‖√ρ √σ‖_1 and the trace distance is (1/2)‖ρ − σ‖_1. For a channel N^{A→B} we observe that N = Tr_E ∘ N_F, and we define the complementary channel N̂^{A→E} := Tr_B ∘ N_F. Since isometric extensions of channels are unique only up to an overall isometry on E, the same is true for the complementary channel [56], and our results will not be affected by this ambiguity.

Additional definitions related to entanglement spread will be introduced in Sec. II-C.

C. Overview of results

In this paper we consider what resources are required to simulate a quantum channel. In particular, one might hope to show, by analogy with the classical reverse Shannon theorem, that Q_E(N) qubits of forward quantum communication, together with a supply of shared ebits, suffice to efficiently simulate any quantum channel N on any input. This turns out not to be true in general (see below), but it is true in some important special cases:

• When the input is of tensor power form ρ^{⊗n}, for some ρ. In this case, we are simulating the relative resource ⟨N : ρ⟩.
• When the channel N has the property that its output entropy H(N(ρ)) is uniquely determined by the state of the environment. Such channels include those with classical inputs or outputs.

However, for general channels on general (i.e. non-tensor-power) inputs, we show that efficient simulation requires additional resources beyond ordinary entanglement. Any of the following resources will suffice:

• more general forms of entanglement, such as an entanglement-embezzling state [78], in place of the supply of ordinary ebits, or
• additional communication from Alice to Bob, or
• backward classical or quantum communication, from Bob to Alice.

The quantum reverse Shannon theorem is thus more fastidious than its classical counterpart. While classical shared random bits (rbits) suffice to make all classical channels equivalent and cross-simulable, standard ebits cannot do so for quantum channels. The reason is that quantum channels may require different numbers of ebits to simulate on different inputs. Therefore, to maintain coherence of the simulation across a superposition of inputs, the simulation protocol must avoid leaking to the environment these differences in numbers of ebits used. Fortunately, if the input is of tensor power form ρ^{⊗n}, the entanglement "spread" required is rather small (O(√n)), so it can be obtained at negligible additional cost by having Alice initially share with Bob a slightly generous number of ebits, then at the end of the protocol return the unused portion for him to destroy. On non-tensor-power inputs the spread may be O(n), so other approaches are needed if one is to avoid bloating the forward communication cost. If the channel itself already leaks complete information about the output entropy to the environment, there is nothing more for the simulation to leak, so the problem becomes moot. Otherwise, there are several ways of coping with a large entanglement spread without excessive forward communication, including: 1) using a more powerful entanglement resource in place of standard ebits, namely a so-called entanglement-embezzling state [78],

   |ϕ_N⟩ = (Σ_{j=1}^{N} 1/j)^{−1/2} Σ_{j=1}^{N} (1/√j) |j⟩|j⟩,   (5)

from which (in the limit of large N) a variable amount of entanglement can be siphoned off without leaving evidence of how much was taken, or 2) using a generous supply of standard ebits but supplementing the protocol by additional backward classical communication to coherently "burn off" the unused ebits. We discuss the role of entanglement spread in the quantum reverse Shannon theorem in Sec. II-C. There we will precisely define the resource [¤¤], which informally can be thought of as an embezzling state |ϕ_N⟩ with N allowed to be arbitrarily large.
can be thought of as an embezzling state |ϕN i with N allowed and decoder (bottom right). Shared random bits (rbits) and to be arbitrarily large. forward classical communication (cbits) are used to simulate When simulating quantum feedback channels, we are some- the classical channel; shared entanglement (ebits) and forward times able to establish resource equivalences rather than quantum communication (qubits) are used to simulate the reducibilities, for example (as we will see in part (a) of quantum channel. As usual in Shannon theory, the encoder and Theorem 3) decoder typically must operate in parallel on multiple inputs 1 1 in order to simulate multiple channel uses with high efficiency hN : ρi= I(R; B)[q → q] + I(E; B)[qq]. (6) F 2 2 and fidelity. Where it is clear from context we will often use upper This both indicates the numbers of qubits and ebits asymp- case letters X, B, etc. to denote not only a classical ran- totically necessary and sufficient to perform the redistribution dom variable (or quantum subsystem) but also its marginal ΨA:R → ΨE:B:R on tensor powers of a source with density probability distribution (or density matrix) at the relevant matrix ρA, and expresses the fact that any combination of stage of a protocol, for example writing H(B) instead of resources asymptotically able to perform the feedback simula- H(ρB). Similarly we write I(E; B) for the quantum mutual tion of N on ρ can be converted into the indicated quantities information between outputs E and B in the upper right side of qubits and ebits. These results reflect the fact that the state of Figure 1. However, it is not meaningful to write I(A; B), redistribution performed by a quantum feedback channel is because subsystems A and B do not exist at the same time. asymptotically reversible. One interesting special case is when Thus the conventional classical notation I(X; Y ) for the input- N is a noiseless classical channel, in which case Eq. 
(6) output mutual information may be considered to refer, in the reduces to the “cobit” resource equality [40]. This observation, quantum way of thinking, to the mutual information between and our derivation of Eq. (6), are due to [32]. Y and a copy of X, which could always have been made in Applications: Our results also have implications for prov- the classical setting. ing rate-distortion theorems and strong converses for the entanglement-assisted capacities. The rate-distortion problem is a variant of the reverse Shannon theorem which differs in that instead of simulating a specific channel with high blockwise fidelity the goal is to minimize an average distortion X X R R condition. This is a less stringent condition than demanded by Y X Ψ E the reverse Shannon theorem, so our simulations imply rate- X A N Y N B distortion theorems at the rate one would expect: the least capacity of any channel satisfying the distortion bound. This connection was observed for classical channels in [84] (see n Xn R Rn also [73]) and for quantum channels in [31]. The second n X n n n application of our result is to derive a strong converse theorem, n Y Ψ E X An meaning that attempting to send classical bits through a Enc E cbits qubits n quantum channel at rates above CE results in an exponentially rbits Y ebits Bn small success probability. We discuss this application further Dec D in Sec. IV-E. Coordination capacity: Another interpretation of reverse Shannon theorems is in terms of “coordination capacities”, defined as the minimum rate of communication required to Fig. 1. Parties and subsystems associated with classical and quantum achieve certain correlated probability distributions subject to channels (top left and right, resp.) and with their simulation using standard resources (bottom left and right respectively). The dashed lines represent constraints on some of the variables [29]. For example, the systems that are sent to Alice only in the case of feedback simulations. 
classical reverse Shannon theorem corresponds to the goal of reproducing the input-output distribution of a channel given Figure 2 shows some of the known results on communi- one party’s knowledge of the input. However, the framework of cations resources required to simulate classical and quantum coordination capacity also encompasses many network-coding channels under various conditions. generalizations of this task.
II.STATEMENT OF RESULTS A. Classical Reverse Shannon Theorem Figure 1 shows the parties, and corresponding random Most of these results are not new; we collect them here variables or quantum subsystems, involved in the operation for completeness, and give alternate proofs that will help of a discrete memoryless classical channel (top left) and a prepare for the analogous quantum results. The high-shared- discrete memoryless quantum channel (top right). Dashed randomness and feedback cases below (a,b,e) were proved in arrows indicate additional data flows characterizing a feed- [13], [14], [84]. The low- and zero-shared-randomness cases back channel. The bottom of the figure gives flow diagrams (c,d,f) were demonstrated by Cuff [28] building on Wyner’s for simulating such channels using, respectively, a classical classic common randomness formula [89]. The connection to encoder and decoder (bottom left) or a quantum encoder rate distortion was first developed in the 1996 Steinberg-Verdú 6
Kind of Channel Classical Quantum Classical Coherent Kind of Simulation Non-feedback Non-feedback Feedback Feedback Tensor- Excess c = I(X; Y ) q = I(R; B)/2 power shared when r ≥ H(Y |X) when e ≥ I(E; B)/2 source ebits or General q = QE(N ) = maxρ I(R; B)/2 rbits c = C(N) = maxp I(X; Y ) source Ordinary ebits insufficient
c(r) = min{max( Tensor- c(r) = I(X; W ), q(e) = q(e) = lim max power n→∞ Limited max{I(X; Y ), I(XY ; W ) − r) max{ 1 I(R; B), { 1 I(R; E Bn), or IID 2 2n B shared H(Y ) − r} : W s.t. I(X; Y |W ) = H(B) − e} 1 H(E Bn) − e} source n B ebits 0}. or c(r) = max min rbits c(r) = max X W General X {max(I(X; W ), Various tradeoffs possible (see text) max{I(X; Y ), source I(XY ; W ) − r) : Ordinary ebits insufficient H(Y ) − r} I(X; Y |W ) = 0}.
No shared randomness or entanglement:
- Tensor-power (IID) source, classical channel: feedback, c = H(Y); non-feedback, c = min{I(XY; W) : W s.t. I(X; Y|W) = 0}.
- Tensor-power (IID) source, quantum channel: feedback, q = H(B) = H(N(ρ)); non-feedback, q = lim_{n→∞} min{(1/n)H(ω) : ∃ ω, N_1, N_2 s.t. N_1(ρ^⊗n) = ω & N_2(ω) = N(ρ)^⊗n}.
- General source, classical channel: feedback, c = max_X H(Y); non-feedback, c = max_X min_W {I(XY; W) : I(X; Y|W) = 0}.
- General source, quantum channel: feedback, q = max_ρ H(B) = max_ρ H(N(ρ)); non-feedback, q = max_ρ lim_{n→∞} min{(1/n)H(ω) : ∃ ω, N_1, N_2 s.t. N_1(ρ^⊗n) = ω & N_2(ω) = N(ρ)^⊗n}.

Fig. 2. Resource costs of simulating classical and quantum channels: Some known results on the forward communication cost (c = cbits or q = qubits) for simulating classical and quantum channels are tabulated as a function of the kind of source (tensor power or arbitrary), the kind of simulation (feedback or non-feedback), and the quantity of shared random bits (r) or ebits (e) available to assist simulation. For non-tensor-power quantum sources (green shaded cells), efficient entanglement-assisted simulation is not possible in general using ordinary ebits, because of the problem of entanglement spread. To obtain an efficient simulation in such cases requires additional communication (wlog backward classical communication), or a stronger form of entanglement resource than ordinary ebits, such as an entanglement-embezzling state.

…paper [73], which also proved a variant of the high-randomness case.

Theorem 1 (Classical Reverse Shannon Theorem (CRST)). Let N be a discrete memoryless classical channel with input X (a random variable) and induced output Y = N(X). We will use I(X; Y) to indicate the mutual information between input and output. Let N_F denote the feedback version of N, which gives Alice a copy of Bob's output Y = N(X). Trivially N ≤ N_F and ⟨N : p⟩ ≤ ⟨N_F : p⟩ for all input distributions p.

(a) Feedback simulation on known sources with sufficient shared randomness to minimize communication cost:

  ⟨N_F : p⟩ ≤ I(X; Y)[c → c] + H(Y|X)[cc].  (7)

In fact this is tight up to the trivial reduction [cc] ≤ [c → c]. In other words, for c and r nonnegative,

  ⟨N_F : p⟩ ≤ c[c → c] + r[cc]  (8)

iff c ≥ I(X; Y) and c + r ≥ H(Y).

(b) Feedback simulation on general sources with sufficient shared randomness to minimize communication cost:

  ⟨N_F⟩ ≤ C(N)[c → c] + (max_p H(Y) − C(N))[cc].  (9)

(c) Non-feedback simulation on known sources, with limited shared randomness: When shared randomness is present in abundance, feedback simulation requires no more communication than ordinary non-feedback simulation, but when only limited shared randomness is available, the communication cost of non-feedback simulation can be less. Specifically,

  ⟨N : X⟩ ≤ c[c → c] + r[cc]  (10)

if and only if there exists a random variable W with I(X; Y|W) = 0, such that c ≥ I(X; W) and c + r ≥ I(XY; W).

(d) Non-feedback simulation on known sources with no shared randomness: A special case of case (c) is the fact that

  ⟨N : p⟩ ≤ c[c → c]  (11)

if and only if there exists W such that I(X; Y|W) = 0 and c ≥ I(XY; W).
(e) Feedback simulation on arbitrary sources, with arbitrary shared randomness: For non-negative r and c,

  ⟨N_F⟩ ≤ c[c → c] + r[cc]  (12)

iff c ≥ C(N) = max_p I(X; Y) and c + r ≥ max_p H(Y). Because the two maxima may be achieved for different p, the last condition is not simply r ≥ H(Y|X).

(f) Without feedback we have, for non-negative r and c,

  ⟨N⟩ ≤ c[c → c] + r[cc]  (13)

if and only if for all X there exists W with I(X; Y|W) = 0, such that c ≥ I(X; W) and c + r ≥ I(XY; W).

[Fig. 3 here.] Fig. 3. Classical communication c versus shared randomness r tradeoff for feedback and non-feedback simulations of a classical channel on a specified source p (Theorem 1). The feedback curve runs from c_F(0) = H(Y) at r = 0 down to c = I(X; Y) at r = H(Y|X); the non-feedback curve is c(r) = min{max{I(XY; W) − r, I(X; W)} : W s.t. I(X; Y|W) = 0}, with endpoint c(0) = min{I(XY; W) : W s.t. I(X; Y|W) = 0}.

Parts (b,e,f) of the theorem reflect the fact that the cost of a channel simulation depends only on the empirical distribution or type class of the input^3, which can be communicated at asymptotically negligible cost (O(log n) bits), and that an i.i.d. source p is very likely to output a type p′ with ||p − p′||_1 ∼ 1/√n. Also note that in general the resource reducibility Eq. (12) is not a resource equivalence because H(Y) and I(X; Y) may achieve their maxima on different X.

Part (c), and the low-randomness simulations in general, are based on the possibility of splitting the simulation into two stages with the second performed by Bob, and part of the first stage's randomness being recycled or derandomized^4. Since Alice does not get to see the output of the second stage, this is a non-feedback simulation. Indeed, part (a) implies that non-trivial cbit-rbit tradeoffs are only possible for non-feedback simulations.

Fig. 3 and Fig. 4 schematically illustrate the form of the cbit-rbit tradeoffs. For feedback simulation on a fixed source, the tradeoff between communication and shared randomness is trivial: Beginning at the point c = I(X; Y), r = H(Y|X) on the right, r can only be decreased by the same amount as c is increased, so that c = H(Y) when r = 0.
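The endpoints of the trivial feedback tradeoff can be checked numerically for any concrete channel. The sketch below (our own illustration; the binary symmetric channel and its flip probability are arbitrary choices, not taken from the paper) computes the right endpoint c = I(X; Y), r = H(Y|X) of Theorem 1(a) and confirms that trading every rbit for a cbit lands at c = H(Y) when r = 0.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Binary symmetric channel with flip probability f, uniform input X.
f = 0.11
H_Y = 1.0                  # output is uniform for uniform input
H_Y_given_X = h2(f)        # noise entropy
I_XY = H_Y - H_Y_given_X   # mutual information

# Right endpoint of the feedback tradeoff (Theorem 1(a)): c = I(X;Y), r = H(Y|X).
# Sliding left, each removed rbit costs one extra cbit, so at r = 0: c = H(Y).
c_right, r_right = I_XY, H_Y_given_X
c_left = c_right + r_right
print(c_right, r_right, c_left)  # c_left equals H(Y) = 1
```

The same bookkeeping applies to any discrete memoryless channel once I(X; Y) and H(Y|X) are computed for the given source.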
[Fig. 5 here.] Fig. 5. Classical communication c vs shared randomness r tradeoff for non-feedback simulation of binary erasure channels with erasure probabilities t = 0.2, 0.4, 0.5, 0.6, 0.8 and 0.9 (colored graphs). One can show that in Theorem 1 part (f) it is enough to consider W such that both legs X → W and W → Y are erasure channels. The two black curves mark the boundaries of the region where the tradeoff has slope −1, viz. r ≤ H_2(c/2) − c, and where it is horizontal, r ≥ H_2(c). Note that for t ≤ 1/2, Wyner's quantity c(0) = 1, and that for these channels the tradeoff graphs have no section of slope −1. These tradeoff curves were first given in [28].

By contrast, if the simulation is not required to provide feedback to the sender, a generally nontrivial tradeoff results, for which the amount of communication at r = 0 is given by Wyner's common information expression min{I(XY; W) : I(X; Y|W) = 0}. This is evident in Fig. 5, showing the tradeoff for non-feedback simulation of the classical binary erasure channel for several values of the erasure probability t. This figure also shows that for some channels (in particular for erasure channels with t > 0.5), even the non-feedback tradeoff begins with a −45 degree straight-line section at low r values.

The converse to (a) follows from Shannon's original noisy channel coding theorem, which states that ⟨N : p⟩ ≥ I(X; Y)_p[c → c]. A slight refinement [3], [4] implies that ⟨N_F : p⟩ ≥ I(X; Y)_p[c → c] + H(Y|X)_p[cc]. Thus we have the following resource equivalences.

Corollary 2.

  ⟨N_F : p⟩ = I(X; Y)[c → c] + H(Y|X)[cc]  (14)

  ⟨N : p⟩ + ∞[cc] = ⟨N_F : p⟩ + ∞[cc] = I(X; Y)[c → c] + ∞[cc]  (15)

  ⟨N_F⟩ + ∞[cc] = ⟨N⟩ + ∞[cc] = (max_p I(X; Y))[c → c] + ∞[cc]  (16)

Remark: The task considered in case (d) above, of simulating a channel on a known source by forward communication alone without shared randomness, is a variant of a problem originally considered by Wyner [89].

^3 Types are defined and reviewed in Sec. III-C.
^4 Here, "recycled" means that using a sublinear amount of additional randomness, privacy amplification can be used to make the shared randomness approximately independent of the output of the first stage. Our proof (in Sec. III) will instead use the somewhat simpler "derandomization" approach in which we argue that some of the random bits in the X–W stage can be set in a way that works for all input strings x^n simultaneously.
[Fig. 4 here.] Fig. 4. Two-stage non-feedback simulation of a classical channel, via a Markov chain X → W → Y, allows a nontrivial tradeoff between forward communication c and shared randomness r. A typical point on the optimal tradeoff curve is shown with c = I(X : W) and r = I(Y : W|X) = H(W|X) − H(W|XY), and with a segment of the optimal tradeoff curve depicted. The second term, H(W|XY), in the expression for r represents the portion of the shared randomness in the first-stage simulation of X → W that can be recycled or derandomized. On the right side is also depicted the "full randomness" solution consisting of a one-stage feedback simulation that uses communication I(X : Y) and randomness H(Y|X). Since cbits can always be traded for rbits, this yields the upper bound depicted by the 45-degree dashed line coming out of this point.

Wyner sought the minimum rate of a source allowing two correlated random variables X and Y to be generated from it by separate decoders. He called this the common information between X and Y, and showed it was given by min{I(XY; W) : I(X; Y|W) = 0}.
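The Wyner endpoint c(0) can be evaluated numerically. As a sketch (our own check, not the paper's derivation), we restrict to the erasure-erasure decompositions that the Fig. 5 caption says suffice for the binary erasure channel: X → W erases with probability a, W → Y with probability b, and (1−a)(1−b) = 1−t. A short calculation then gives I(XY; W) = h2(a) + (1−a) − t·h2(a/t), which we minimize over a grid.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p <= 0 or p >= 1 else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Binary erasure channel, erasure probability t, uniform input X.
# Decompose X -> W -> Y with both legs erasure channels (erasure probs a, b,
# with (1-a)(1-b) = 1-t).  For such W one can check
#   I(XY;W) = H(W) - H(W|XY) = [h2(a) + (1-a)] - t*h2(a/t).
t = 0.4
best = min(h2(a) + (1 - a) - t * h2(a / t) for a in [i * t / 1000 for i in range(1001)])
print(best)  # 1.0: Wyner's common information c(0) = 1 for t <= 1/2
```

The minimum is attained at a = 0 (i.e., W = X), reproducing the caption's claim that c(0) = 1 whenever t ≤ 1/2.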
B. Quantum Reverse Shannon Theorem (QRST)

Theorem 3 (Quantum Reverse Shannon Theorem). Let N be a quantum channel from A → B, or equivalently an isometry from A → BE, and N_F the feedback channel that results from giving system E to Alice. If we are given an input density matrix ρ^A, then entropic quantities such as I(R; B) or I(R; B)_ρ refer to the state Ψ^{RBE} = (I^R ⊗ N^{A→BE})(Φ_ρ^{RA}), where Φ_ρ is any state satisfying Φ_ρ^A = ρ.

(a) Feedback simulation on known tensor power input, with sufficient ebits of entanglement to minimize the forward qubit communication cost:

  ⟨N_F : ρ⟩ = (1/2)I(R; B)_ρ[q → q] + (1/2)I(E; B)_ρ[qq].  (17)

In view of the trivial tradeoff between ebits and qubits for simulating a feedback channel, this implies that the qubit communication rate necessary and sufficient for feedback simulation of a channel on a tensor power source using ordinary entanglement at the rate e ebits per channel use is

  q_F(e) = max{(1/2)I(R; B), H(B) − e}.  (18)

(b) Known tensor power input, non-feedback simulation, entanglement possibly insufficient to minimize the forward communication cost:

  ⟨N : ρ⟩ ≤ q[q → q] + e[qq],  (19)

if and only if for all δ > 0 there exists an n > 0 and an isometry V : E^n → E_A E_B such that

  q ≥ (1/2n)I(R^n; B^n E_B)_Ψ − δ  and  (20)

  q + e ≥ (1/n)H(B^n E_B)_Ψ − δ,  where  (21)

  |Ψ⟩^{R^n B^n E_A E_B} := V^{E^n → E_A E_B} N_F^⊗n |Φ_ρ⟩^⊗n.  (22)

Thus the communication cost for non-feedback simulation on a tensor power source, as a function of e, is given by

  q(e) = lim inf_{n→∞, ∃V : E^n → E_A E_B} max{(1/2n)I(R^n; B^n E_B), (1/n)H(B^n E_B) − e}.  (23)

(c) Known tensor power input, non-feedback, no entanglement: This is obtained from setting e = 0 in case (b) above. In this case, Eq. (20) is always dominated by Eq. (21) and we have that

  ⟨N : ρ⟩ ≤ q[q → q],  (24)

iff q ≥ lim_{n→∞} min_V (1/n)H(B^n E_B), where the minimum is over isometries V : E^n → E_A E_B. The latter is a well-known quantity: it is the regularized entanglement of purification (EoP) [75] E_P^∞(Ψ^{RB}) = lim_{n→∞} (1/n)E_P((Ψ^{RB})^⊗n) of the channel's Choi-Jamiołkowski state Ψ.

(d) Arbitrary input, feedback simulation: For a communication resource α in the sense of [33] comprising any combination of ebits, embezzling states [¤¤], backward cbits [c ← c], and/or forward or backward quantum communication,

  α ≥ ⟨N_F⟩  (25)

iff there exists a resource β such that for all ρ,

  α ≥_CL ⟨N_F : ρ⟩ + β.  (26)

Specifically, using embezzling states we have

  ⟨N_F⟩ ≤ Q_E(N)[q → q] + [¤¤]  (27)

and when considering back communication

  ⟨N_F⟩ ≤ Q_E(N)[q → q] + C[c ← c] + (max_ρ H(B)_ρ − Q_E(N))[qq]  (28)

iff C ≥ max_ρ H(B)_ρ − min_ρ H(B|R)_ρ − C_E(N). Other examples are discussed in Sec. II-C.

(e) Arbitrary input, no feedback: This case combines elements of cases (b) and (d), although we now consider only fully coherent input resources. If α is a combination of ebits, embezzling states [¤¤] and forward and/or backward qubits, then α ≥ ⟨N⟩ iff for all δ > 0 there exists an n > 0, a resource β_n and an isometry V_n : E^n → E_A E_B such that

  α ≥_CL (1/n)⟨V_n ◦ N_F^⊗n⟩ + β_n.  (29)

Part (a) of Theorem 3 can equivalently be stated as

  ⟨N_F : ρ⟩ = I(R; B)_ρ[q → qq] + H(B|R)_ρ[qq],  (30)

where [q → qq] denotes a co-bit [40], [33], which is equivalent to ([q → q] + [qq])/2. The formulation in Eq. (30) is parallel to the classical version in Eq. (14) if we replace quantum feedback with classical feedback, co-bits with cbits, and ebits with rbits.

A weaker version of (a) was proven in a long unpublished and now obsolete version of the present paper. The idea there was to simulate the channel using a noisy form of teleportation, and then to use measurement compression [85]^5. The full statement of (a) has since been proved by Devetak [32] using his triangle of dualities among protocols in the "family tree" – see also [33]; by Horodecki et al. [57] as the inverse of the "mother" protocol, a coherent version of state merging; and by Abeyesinghe et al. [1] in the context of a direct derivation of the "mother" protocol. We will present another proof of (a) in Sec. IV, partly in order to prepare for the proof of the rest of Theorem 3.

To prove (b), we argue that any protocol using only qubits and ebits for a non-feedback simulation of N^⊗n is equivalent to one that performs a feedback simulation of V^{E^n → E_A E_B} ◦ N_F^⊗n. The argument is that the resources used (qubits and ebits) leak nothing to the environment, so the only non-unitary elements are those that are deliberately introduced by Alice and Bob. Thus, we can replace any non-unitary operation by an isometry that instead sends the system to be discarded to a local "environment", labeled E_A for Alice and E_B for Bob. By Uhlmann's theorem and the fact that any two purifications are related by an isometry, it follows that if our original simulation had fidelity 1 − ε with the action of N^⊗n, then this modified simulation has fidelity 1 − ε with V^{E^n → E_A E_B} ◦ N_F^⊗n for some isometry V. This is an equivalence, since this procedure turns any simulation of N^⊗n into a method of simulating V^{E^n → E_A E_B} ◦ N_F^⊗n for an isometry V, and the reverse direction is achieved simply by discarding the E_A, E_B systems.

Part (c) is simply a special case of (b), and was proven in the case when N is a CQ channel (that is, has classical inputs) by Hayashi [46]. It corresponds to the regularized entanglement of purification [75] of |Ψ⟩. In both cases, the additivity problem (i.e. the question of whether regularization is necessary) is open, although recent evidence suggests strongly that the entanglement of purification is not additive [20] and thus that it is not a single-letter formula for the simulation cost.

Proving, and indeed understanding, parts (d) and (e) will require the concept of entanglement spread, which we will introduce in Sec. II-C. At first glance, the statements of the theorem may appear unsatisfying in that they reduce the question of whether ⟨N⟩ ≤ α or ⟨N_F⟩ ≤ α to the question of whether certain other clean resource reductions hold. However, according to part (a) of Theorem 3, the corresponding clean resource reductions involve the standard resources of qubits and ebits. As we will explain further in Sec. II-C, this will allow us to quickly derive statements such as Eq. (27) and Eq. (28).

^5 More concretely, suppose that Alice uses the "Homer Simpson protocol," which means applying N to her input and then teleporting the output to Bob, using a classical message of size 2 log d_A. Alice's entire part of the protocol can be viewed as a measurement that she performs on her input state and on half of a maximally entangled state. The mutual information between her classical message and Bob's residual quantum state is given by I(R; B). Therefore [85] can be used to simulate n applications of this measurement by a block measurement with ≈ exp(nI(R; B)) outcomes. Finally, it is necessary to observe that the error analysis in [85] shows that the simulated measurement not only has the correct output statistics, but essentially has the correct Kraus operators. Thus the compressed measurement gives a high-fidelity simulation of the Homer Simpson protocol, and thus of the original channel. However, the measurement compression step relies on knowledge of the input density matrix ρ, and so new ideas are necessary for the non-tensor-power case.
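The rates (1/2)I(R; B) and (1/2)I(E; B) in Eq. (17) are straightforward to evaluate from the Stinespring dilation. The sketch below (our own illustration; the amplitude damping channel, the parameter γ = 0.3 and the maximally mixed input are arbitrary choices) uses that for the pure state Ψ^{RBE} one has S(RB) = S(E) and S(EB) = S(R), so both rates follow from the three marginal entropies.

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

gamma, p = 0.3, 0.5                      # damping parameter, input Pr(|1>)
rho = np.diag([1 - p, p])                # a sample input density matrix
K = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),   # Kraus operators of the
     np.array([[0, np.sqrt(gamma)], [0, 0]])]       # amplitude damping channel

# Channel output N(rho) and complementary-channel output (the E marginal).
rho_B = sum(Ki @ rho @ Ki.conj().T for Ki in K)
rho_E = np.array([[np.trace(Ki.conj().T @ Kj @ rho) for Ki in K] for Kj in K])

S_R, S_B, S_E = vn_entropy(rho), vn_entropy(rho_B), vn_entropy(rho_E)
# Purity of Psi^{RBE} gives S(RB) = S(E) and S(EB) = S(R), hence:
half_I_RB = (S_R + S_B - S_E) / 2        # qubit rate in Eq. (17)
half_I_EB = (S_E + S_B - S_R) / 2        # ebit rate in Eq. (17)
print(half_I_RB, half_I_EB)
```

Note the identity (1/2)I(R; B) + (1/2)I(E; B) = H(B), which is the r = 0 endpoint of Eq. (18).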
An alternate proof^6 of the QRST for general sources using embezzling states as the entanglement resource, Eq. (27), was given by Berta, Christandl, and Renner [16].

The situation in part (d) when embezzling states are not present (i.e. general input, unlimited ebits, and some combination of forward quantum communication and backwards quantum and classical communication) is somewhat surprising in that the simulation requires an asymptotically greater rate of communication than the communication capacity of the channel. To capture this gap, we introduce the following definition.

Definition 4. The spread deficit of a channel N is defined as

  ∆_sim(N) := max_ρ H(B)_ρ − min_σ H(B|R)_σ − C_E(N).  (31)

Thus, we could equivalently say that the resource inequality in Eq. (28) holds iff C ≥ ∆_sim(N).

It is important to note that the maximization of H(B) and the minimization of H(B|R) on the RHS of Eq. (31) are taken separately. Indeed, C_E(N) is simply the maximization of H(B)_ρ − H(B|R)_ρ over all ρ, so Eq. (31) expresses how much larger this expression can be by breaking up the optimization of those two terms.

Fortunately, each term in the RHS of Eq. (31) is additive, so there is no need to take the limit over many channel uses. The additivity of H(B)_ρ follows immediately from the subadditivity of the von Neumann entropy, or equivalently the nonnegativity of the quantum mutual information. The other two terms have already been proven to be additive in previous work: [34] showed that min H(B|R) = − max H(B|E) is additive and [2] showed that C_E is additive. Thus, we again obtain a single-letter formula in the case of unlimited ebits.

The fact that ∆_sim(N) provides a single-letter characterization involving convex optimizations makes it possible to explicitly and efficiently evaluate it. For the important class of so-called covariant channels, such as the depolarizing and erasure channels, entropic quantities are invariant under unitary rotation of the inputs. In this case, H(B), H(R) − H(E) and I(R; B) are all simultaneously maximized for the maximally-mixed input and ∆_sim(N) = 0. However, channels that lack this symmetry will generally have nonzero ∆_sim. As an example, we plot the entanglement-assisted capacity C_E(N) against the ebit-assisted simulation cost C_E(N) + ∆_sim(N) for the amplitude-damping channel in Fig. 6.

[Fig. 6 here.] Fig. 6. The amplitude damping channel with parameter γ has Kraus operators |0⟩⟨0| + √(1−γ)|1⟩⟨1| and √γ|0⟩⟨1|. The lower, solid, curve is the entanglement-assisted classical capacity of the amplitude-damping channel, or equivalently the (w.l.o.g. feedback) simulation cost in cbits when back communication or embezzling states are given, or when the source is a tensor power. The upper, dashed, curve is the feedback simulation cost in cbits (calculated using Eq. (31)) when instead unlimited ebits are given. The gap between the two curves is the spread deficit from Definition 4, and illustrates the extra communication cost of producing entanglement spread.

For the amplitude damping channel, the spread deficit is comparable to the other costs of the simulation. But there exist other channels for which the spread deficit can dominate the cost of the channel simulation. Here is an example: for any d, define the "variable-entropy" channel M_d mapping 2 dimensions to d + 1 dimensions as follows: it measures the input, and upon outcome 0, outputs |0⟩⟨0|, and upon outcome 1, outputs (1/d)Σ_{i=1}^d |i⟩⟨i|. For any d, C_E(M_d) = 1, but ∆_sim(M_d) = log(d+1) − 1, which is asymptotically larger as d grows^7. Thus, when performing a feedback simulation of M_d using cbits and ebits, nearly all of the communication cost comes from the need to create entanglement spread (discussed further in Sec. II-C).

What about non-feedback simulations? In this case, it turns out that the variable-entropy and amplitude-damping channels can both be simulated at the communication rate given by C_E. We will discuss this in more detail in Sec. II-C, but the intuitive reason for this is that non-feedback simulations allow us to damage the environment of the channel and in particular to measure it. This can result in collapsing superpositions between different amounts of entanglement, thus reducing the contribution of entanglement spread. However, there remain channels whose simulation cost with ebits is higher than with embezzling states or back communication even for non-feedback simulation; we describe an example (the "Clueless Eve" channel) in Sec. IV-E3.

The proofs of parts (d) and (e) will be given in Sec. IV. To prove them, we restrict attention to the case when α is a combination of entanglement-embezzling states and/or the "standard" resources (qubits, cbits and ebits). However, for part (e), we need to further restrict our claim to exclude cbits, for reasons related to the fact that we do not know the tradeoff curve between quantum and classical communication when simulating classical channels.

^6 This proof was developed in parallel with ours (cf. discussion in [22]) and differs primarily by describing merging in terms of one-shot entropies (compared with our applying merging only to "flat" spectra) and by reducing to the tensor-power case using the post-selection principle of [22] (compared with our use of Schur duality to reduce to the flat case). Note that the post-selection principle can also be thought of in terms of the Schur basis as the statement that tensor-power states are "almost flat" in a certain sense (cf. [47]).
^7 Proof: M_d can be perfectly simulated with a single bit of forward classical communication, which proves that C_E(M_d) ≤ 1, while the obvious protocol for sending one classical bit through the channel proves that C_E(M_d) ≥ 1. To evaluate ∆_sim(M_d), observe that H(B) achieves its maximal value of log(d+1) upon input (1/(d+1))|0⟩⟨0| + (d/(d+1))|1⟩⟨1|, while H(B|R) can be zero if the input |0⟩⟨0| is given.
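The claim ∆_sim(M_d) = log(d+1) − 1 for the variable-entropy channel is easy to verify numerically. The sketch below (our own check; the choice d = 64 and the grid search over diagonal inputs are ours) maximizes H(B) over inputs diag(1−p, p), whose output spectrum is {1−p} together with p/d repeated d times, and subtracts min H(B|R) = 0 and C_E(M_d) = 1 (logarithms base 2).

```python
import math

# Variable-entropy channel M_d: measure a qubit; outcome 0 -> |0><0|,
# outcome 1 -> maximally mixed state on d dimensions (spanned by |1>..|d>).
# For input diag(1-p, p) the output spectrum is {1-p} plus {p/d} repeated d
# times, so H(B) = -(1-p)log(1-p) - p log(p/d).  Grid-search p for max H(B).
d = 64

def H_B(p):
    terms = []
    if p < 1: terms.append(-(1 - p) * math.log2(1 - p))
    if p > 0: terms.append(-p * math.log2(p / d))
    return sum(terms)

max_HB = max(H_B(i / 10**5) for i in range(10**5 + 1))
# max_rho H(B) = log(d+1); min H(B|R) = 0 (input |0><0|); C_E(M_d) = 1.
spread_deficit = max_HB - 0 - 1
print(spread_deficit, math.log2(d + 1) - 1)
```

The maximum is attained at p = d/(d+1), matching the optimizing input stated in the text, and the deficit grows like log d.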
Remark: Analogously to the low-shared randomness regime in classical channel simulation (Figure 4 and cases (c) and (d) of the CRST), simulating a non-feedback channel permits a nontrivial tradeoff between ebits and qubits, in contrast to the trivial tradeoff for feedback simulation. While the cbit-rbit tradeoff curve for simulating classical channels is additive and given by a single-letter formula [89], [28], no such formula or additivity result is known for the qubit cost in the zero- and low-entanglement regime.

Remark: Interestingly, quantum communication or entanglement can sometimes improve simulations of even classical channels. In [86] an example of a classical channel is given with d-dimensional inputs which requires Ω(log d) classical bits to simulate, but can be simulated quantumly using O(d^{−1/3}) qubits of communication, asymptotically. Curiously, the classical reverse Shannon theorem (Theorem 1) is only a special case of the quantum reverse Shannon theorem (Theorem 3) when in the unlimited shared entanglement regime; one of the problems left open by this work is to understand how entanglement can be more efficient than shared randomness in creating correlated classical probability distributions. More generally, which values of c, q, r, e are consistent with the reducibility ⟨N⟩ ≤ c[c → c] + q[q → q] + r[cc] + e[qq]? We know how to convert this problem to the equivalent relative resource problem with ⟨N⟩ replaced with ⟨N : ρ⟩, but this in turn we do not have an answer for.

Remark: Our results imply unbounded gaps (for growing dimension) between the costs of simulating channels when (a) no entanglement is given, (b) a linear or unlimited rate of ebits are given, and (c) stronger forms of entanglement, such as embezzling states, are given. An example of a large gap between (a) and (b) is given by the Werner-Holevo channel [79], defined on d-dimensional inputs to be N(ρ) = ((Tr ρ)I − ρ^T)/(d − 1). This channel has C_E(N) ≲ 1, but when acting on half of a maximally entangled state produces a state with entanglement of purification equal to log d [24]. Thus, the gap between the ebit-assisted simulation cost and the unassisted simulation cost grows with dimension. For an asymptotically growing gap between (b) and (c), we give an example in Sec. IV-E3.

C. Entanglement spread

To understand parts (d) and (e) of Theorem 3, we need to introduce the idea of entanglement spread. This concept is further explored in [42], [52], but we review some of the key ideas here.

If Alice's input is known to be of i.i.d. form ρ^⊗n, then we know that the channel simulation can be done using (1/2)I(R; B)[q → q] + (1/2)I(B; E)[qq]. To see the complications that arise from a general input, it suffices to consider the case when Alice's input is of the form (ρ_1^⊗n + ρ_2^⊗n)/2. We omit explicitly describing the reference system, but assume that Alice's input is always purified by some reference and that the fidelity of any simulation is with respect to this purification.

Assume that ρ_1^⊗n and ρ_2^⊗n are nearly perfectly distinguishable and that the channel simulation should not break the coherence between these two states. Naively, we might imagine that Alice could first determine whether she holds ρ_1^⊗n or ρ_2^⊗n and coherently store this in a register i ∈ {1, 2}. Next she could conditionally perform the protocol for i.i.d. inputs that uses (1/2)I(R; B)_{ρ_i}[q → q] + (1/2)I(B; E)_{ρ_i}[qq]. To use a variable amount of communication, it suffices to be given the resource max_i (1/2)I(A; B)_{ρ_i}[q → q], and to send |0⟩ states when we have excess channel uses. But unwanted entanglement cannot in general be thrown away so easily. Suppose that I(B; E)_{ρ_1} > I(B; E)_{ρ_2}, so that simulating the channel on ρ_1^⊗n requires a higher rate of entanglement consumption than ρ_2^⊗n. Then it is not possible to start with (1/2)nI(B; E)_{ρ_1} (or indeed any number) of ebits and perform local operations to obtain a superposition of (1/2)nI(B; E)_{ρ_1} ebits and (1/2)nI(B; E)_{ρ_2} ebits.

The general task we need to accomplish is to coherently create a superposition of different amounts of entanglement. Often it is convenient to think about such superpositions as containing a small "control" register that describes how many ebits are in the rest of the state. For example, consider the state

  |ψ⟩ = Σ_{i=1}^m √p_i |i⟩^A |i⟩^B |Φ⟩^{⊗n_i} |00⟩^{⊗(N−n_i)},  (32)

where 0 ≤ n_i ≤ N for each i. Crudely speaking^8, we say that max_i n_i − min_i n_i is the amount of entanglement spread in the state |ψ⟩, where the max and min are taken over values of i for which p_i is nonnegligible.

A more precise and general way to define entanglement spread for any bipartite state |ψ⟩ is (following [52]) as ∆(ψ^A) = H_0(ψ^A) − H_∞(ψ^A), where H_0(ρ) = log rank ρ and H_∞(ρ) = − log ||ρ||_∞. (The quantities H_0 and H_∞ are also known as H_max and H_min respectively. Alternatively, they can be interpreted as Rényi entropies.) Ref. [52] also defined an ε-smoothed version of entanglement spread by

  ∆_ε(ρ) = min{∆(σ) : 0 ≤ σ ≤ ρ, Tr σ ≥ 1 − ε}

that reflects the communication cost of approximately preparing |ψ⟩. More precisely, we have

Theorem 5 (Theorem 8 of [52]). If |ψ⟩ can be created from ebits using C cbits of communication and error ≤ ε = δ^8/4, then

  C ≥ ∆_δ(ψ^A) + 3 log(1 − δ).  (33)

The factor of 3 in Eq. (33) is because the definition of ∆_ε we have used is actually the alternate version used in Remark 4 of [52]. We can similarly define H_{0,ε}(ρ) := log min{rank σ : 0 ≤ σ ≤ ρ, Tr σ ≥ 1 − ε} and H_{∞,ε}(ρ) := − log min{||σ||_∞ : 0 ≤ σ ≤ ρ, Tr σ ≥ 1 − ε}. Our definition of H_{0,ε} is the same as the one used in [52], but our definition of H_{∞,ε} may be as much as − log(1 − ε) smaller.

^8 This neglects the entanglement in the |ii⟩ register. However, in typical applications, this will be logarithmic in the total amount of entanglement.
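For a state of the form in Eq. (32), the reduced spectrum on Alice's side (neglecting the control-register entanglement, as in footnote 8) is p_i/2^{n_i} repeated 2^{n_i} times, so ∆ = H_0 − H_∞ is a one-line computation. A sketch (our own numerical illustration; the values p, n_1, n_2 are arbitrary):

```python
import math

# Superposition (in the style of Eq. (32)) of n1 and n2 ebits with weights
# p and 1-p: Alice's reduced spectrum is p/2^n1 (2^n1 times) and
# (1-p)/2^n2 (2^n2 times), neglecting the control register.
p, n1, n2 = 0.5, 2, 5
spectrum = [p / 2**n1] * 2**n1 + [(1 - p) / 2**n2] * 2**n2

H0 = math.log2(len([x for x in spectrum if x > 0]))   # log rank
Hinf = -math.log2(max(spectrum))                      # -log largest eigenvalue
spread = H0 - Hinf
print(H0, Hinf, spread)
```

Here ∆ = log2(36) − 3 ≈ 2.17, close to (but slightly below) the crude spread n_2 − n_1 = 3.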
[Fig. 7 here: forward qubits q vs. shared ebits e. The two-stage non-feedback simulation R^n Φ_ρ^n → V_1 → W → V_2 → B^n achieves q = I(R^n; W)/2n and e = I(E_A; W)/2n, with a Wyner-like point q(0) = min{H(W)/n} over decompositions A^n → W → B^n; the one-stage feedback simulation sits at q = (1/2)I(R; B), e = (1/2)I(E; B).]
Fig. 7. Two-stage non-feedback simulation of a quantum channel (solid red curve) on a specified input ρ, via an intermediate state W , makes possible a nontrivial tradeoff between forward communication q and shared entanglement e. By contrast, for a feedback simulation (right, dashed blue curve) only a 1 trivial tradeoff is possible, where any deficit in ebits below the 2 I(E; B) needed for optimal simulation must be compensated by an equal increase in the number of qubits used. much as − log(1 − ) smaller. As with the definitions in [52], The lemma is proved in the appendix. It improves upon our quantities trivially satisfy Lemma 5 of [52], and could thus be used to tighten Theorem 5, although we do not carry out that exercise here. ∆ (ρ) ≥ H (ρ) − H (ρ). (34) 0, ∞, There are a few different ways of producing entanglement Our quantities can also be expressed as spread, which are summarized in [42]. For example, one cbit √ √ can be used to coherently eliminate one ebit, or to do nothing; H0,(ρ) = min H0( Mρ M) (35a) and since both of these tasks can be run in superposition, M √ √ this can also be used to create entanglement spread. Likewise H∞,(ρ) = max H∞( Mρ M), (35b) one qubit can coherently either create or disentangle one M ebit. To put this on a formal footing, we use the clean where in each case M must satisfy 0 ≤ M ≤ I and Tr Mρ ≥ clean resource reducibility ≤ (called ≤ in [42]). A resource 1−. In fact, we can WLOG assume that M commutes with ρ CL β is said to be “cleanly LO-reducible” to α iff there is an and Tr Mρ = 1−. Similarly, in the definitions that optimized asymptotically faithful clean transformation from α to β via over σ ≤ ρ, we can assume that ρ and σ commute. 
local operations: that is, for any , δ > 0 and for all sufficiently One advantage of this version of ∆ (ρ) is that it has the large n, n(1 + δ) copies of α can be transformed by local following natural interpretation as a minimization over nearby operations into n copies of β with overall diamond-norm normalized states. error ≤ , and moreover, any quantum subsystem discarded Lemma 6. during the transformation is in a standard |0i state, up to an error vanishing in the limit of large n. In particular, entangled
max(0, ∆(ρ)) states cannot be discarded. This restriction on discarding states 1 means that clean protocols can be safely run in superposition. = min{∆ (σ): kρ − σk ≤ , 0 ≤ σ, Tr σ = 1} (36) 0 2 1 Finally, we can define the clean entanglement capacity of a 13
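Since the optimizing σ can be taken to commute with ρ, the smoothed quantities H_{0,ε} and H_{∞,ε} reduce to simple operations on the spectrum: keep the largest eigenvalues until mass 1 − ε is reached (for the rank), and cap the eigenvalues at the smallest level λ with Σ_i min(p_i, λ) ≥ 1 − ε (for the ∞-norm). A sketch under that commuting reduction (the sample spectrum and ε are our own choices):

```python
import math

def H0_eps(spec, eps):
    """Smoothed H_0: log of the smallest rank of sigma <= rho with
    Tr sigma >= 1-eps (commuting case: keep largest eigenvalues)."""
    total, k = 0.0, 0
    for p in sorted(spec, reverse=True):
        total += p
        k += 1
        if total >= 1 - eps:
            return math.log2(k)

def Hinf_eps(spec, eps, iters=60):
    """Smoothed H_inf: -log of the smallest cap lambda with
    sum_i min(p_i, lambda) >= 1-eps (bisection on lambda)."""
    lo, hi = 0.0, max(spec)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if sum(min(p, mid) for p in spec) >= 1 - eps:
            hi = mid
        else:
            lo = mid
    return -math.log2(hi)

spec = [0.5, 0.25, 0.125, 0.0625, 0.0625]   # sample spectrum of psi^A
eps = 0.1
print(H0_eps(spec, eps), Hinf_eps(spec, eps))  # 2.0, about 1.32
```

For this spectrum the unsmoothed gap is H_0 − H_∞ = log2(5) − 1 ≈ 1.32, while the ε-smoothed gap shrinks to 2.0 − 1.32 ≈ 0.68, illustrating how smoothing reduces the spread a protocol must pay for.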
resource α to be the set Eclean(α) = {E : α≥CL E[qq]} ⊆ R. • If Alice’s control qubit is zero, she sends half of one of Negative values of E correspond to the ability to coherently her ebits through the qubit channel and locally creates a eliminate entanglement. By time-sharing, we see that Eclean(α) |0i state. If her control qubit is one, she locally creates a is a convex set. However, it will typically be bounded both |Φ2i and sends half through the channel. from above and below, reflecting the fact that coherently un- • If Bob’s control qubit is zero, he now holds both halves doing entanglement is a nonlocal task. The clean entanglement of one of the |Φ2i states. He rotates this to a |00i state capacities of the basic resources are and discards one of the |0i qubits. If his control qubit is one, he keeps the transmitted qubit, but also creates and Eclean([q → q]) = Eclean([q ← q]) = [−1, 1] (37a) discards a |0i qubit. Alice and Bob are now left with the Eclean([c → c]) = Eclean([c ← c]) = [−1, 0] (37b) state in Eq. (38).
E_clean([qq]) = {1} (37c)

To understand Eq. (37a), observe that transmitting one qubit can map |Φ_2⟩^{AB} to or from |00⟩^{AB}, in each case without changing anything in the environment. The reasoning behind Eq. (37b) is less obvious, since sending any classical message leaks the message to the environment by definition. However, the protocol can be made clean by always sending a uniformly random bit through the channel. If Alice generates this bit locally (or more simply sends a |+⟩ state) and Bob discards it, then this does not change their amount of shared entanglement. Alternatively, if Alice sends her half of an ebit through the classical channel and Bob performs a CNOT with the transmitted bit as control and his half of the ebit as target, then this will eliminate the entanglement while presenting the same information to the environment.

These resources can be combined in various ways to create entanglement spread. For example, to create a superposition of 5 and 9 ebits, we might start with 8 ebits, use two cbits to create a superposition of 6 and 8 ebits and then use one qubit to create the desired superposition of 5 and 9 ebits. Implicit in these sorts of protocols is a pair of control registers A_C, B_C that specify how many ebits Alice and Bob would like to end up with. In this example, they would like to map the state (c_0|00⟩ + c_1|11⟩)^{A_C B_C} ⊗ |Φ_2⟩^{⊗8} (for arbitrary coefficients c_0, c_1) to

c_0|00⟩^{A_C B_C}|Φ_2⟩^{⊗5}|00⟩^{⊗4} + c_1|11⟩^{A_C B_C}|Φ_2⟩^{⊗9}. (38)

To achieve this transformation, Alice and Bob perform the following sequence of actions conditioned on their control qubits:
• If Alice's control qubit is zero, she sends two of her entangled qubits through the classical channel. If it is one, she sends two |+⟩ states through the channel. Either way, the environment observes two random bits sent through the channel.
• If Bob's control qubit is zero, he uses the two bits he has received as controls for CNOTs that are applied to his halves of the ebits that Alice sent, thus mapping them to |00⟩. Then he discards the bits he received. If Bob's control bit is one, he simply discards the bits he received. Either way the environment sees another copy of the same random bits being discarded by Bob. Moreover, these bits are now independent of the residual quantum state held by Alice and Bob. Alice and Bob now share

c_0|00⟩|Φ_2⟩^{⊗6}|00⟩^{⊗2} + c_1|11⟩ ⊗ |Φ_2⟩^{⊗8}.

Observe that in this example the classical and quantum communication could have been sent in either direction. Thus, while some parts of the simulation protocol can only use forward communication, the spread requirements can be met with communication in either direction.

While this framework gives us a fairly clear understanding of the communication resources required to create entanglement spread, it also shows how unlimited ebits are not a good model of unlimited entanglement. Instead of maximally entangled states, we will use the so-called entanglement-embezzling states |ϕ_N⟩^{AB} [78], which are parameterized by their Schmidt rank N, and can be used catalytically to produce or destroy any Schmidt rank k state up to an error of log k / log N in the trace norm. See [78] for a definition of |ϕ_N⟩ and a proof of their entanglement-embezzling abilities. We let the resource [¤¤] denote access to an embezzling state of arbitrary size: formally, [¤¤] = ∪_{N≥1} ⟨ϕ_N⟩, and so we have E_clean([¤¤]) = (−∞, ∞). By the above discussion, this is strictly stronger than the resource ∞[qq].

We remark that these sorts of entanglement transformations were studied by Nielsen [67], who gave conditions for when an entangled state could be prepared using unlimited classical communication. In this context, the term "maximally entangled" makes sense for ebits, since together with unlimited classical communication they can be used to prepare any other state with the same or smaller Schmidt rank. The low-communication case was also considered by Daftuar and Hayden [30].

We now return to parts (d) and (e) of Theorem 3. In (d), we need to run the simulation protocol for ⟨N_F : ρ⟩ for all possible ρ in superposition.⁹ We can discard a resource β at the end of the protocol, but β must be either independent of ρ for a feedback simulation, or can depend only on N̂(ρ) for a non-feedback simulation. By the equality in Eq. (17), this reduces to producing coherent superpositions of varying amounts of qubits and ebits.

⁹ For technical reasons, our coding theorem will adopt a slightly different approach. But for the converse and for the present discussion, we can consider general inputs to be mixtures of tensor power states.

The simplest case is when α = Q(N)[q→q] + [¤¤]. In this case, α ≥_CL Q(N)[q→q] + E[qq] + [¤¤] for any E. Thus we can take β = [¤¤] and so we have α ≥_CL ⟨N : ρ⟩ + β for all ρ. This establishes Eq. (27).

The most general case without embezzling states is when

α = Q_1[q→q] + Q_2[q←q] + C_2[c←c] + E[qq]. (39)
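The CNOT cleaning trick described above can be checked directly on state vectors. The following is a small numerical sketch (our own illustration, not part of the paper): it verifies that after Alice's half of an ebit passes through the dephasing classical channel, Bob's conditional state is |b⟩, that applying X^b (the CNOT with the received classical bit as control) returns his qubit to |0⟩ deterministically, and that the bit seen by the environment is uniform, indistinguishable from sending a fresh random bit.

```python
import numpy as np

ket = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# |Phi_2> shared between Alice (first factor) and Bob (second factor).
ebit = (np.kron(ket[0], ket[0]) + np.kron(ket[1], ket[1])) / np.sqrt(2)

probs = []
for b in (0, 1):
    # Sending Alice's half through the classical channel dephases it:
    # the environment (and Bob) learn the bit b.
    post = ebit.reshape(2, 2)[b]     # Bob's unnormalized state given outcome b
    p = float(post @ post)
    probs.append(p)
    bob = post / np.sqrt(p)          # Bob's conditional state is |b>
    # Bob's CNOT with the received classical bit as control acts as X^b
    # on his half of the ebit.
    out = np.linalg.matrix_power(X, b) @ bob
    assert np.allclose(out, ket[0])  # entanglement erased: Bob always holds |0>

# The environment sees a uniformly random bit, exactly as if Alice had
# sent a locally generated random bit (or a |+> state) instead.
assert np.allclose(probs, [0.5, 0.5])
```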
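For intuition about the embezzling states, recall from [78] that |ϕ_N⟩ has Schmidt coefficients proportional to 1/√j for j = 1, ..., N. The sketch below (our own illustration, not part of the paper's proofs) computes the best overlap achievable by local unitaries when embezzling one ebit out of |ϕ_N⟩; for bipartite pure states this optimum is the inner product of the two sorted Schmidt-coefficient vectors. It then checks that the error shrinks as N grows, consistent with the log k / log N error scaling quoted above.

```python
import numpy as np

def embezzle_overlap(N, k=2):
    """Overlap |<target|U_A x U_B|source>| for embezzling a Schmidt-rank-k
    maximally entangled state out of the van Dam-Hayden state |phi_N>.
    The optimal local unitaries amount to matching sorted Schmidt vectors."""
    j = np.arange(1, N + 1)
    # Source |phi_N>|00>, padded to Schmidt rank N*k.
    src = np.zeros(N * k)
    src[:N] = 1.0 / np.sqrt(j)
    src /= np.linalg.norm(src)
    # Target |phi_N>|Phi_k>: each coefficient 1/sqrt(j) appears k times,
    # scaled by 1/sqrt(k).
    tgt = np.repeat(1.0 / np.sqrt(j), k) / np.sqrt(k)
    tgt /= np.linalg.norm(tgt)
    return float(np.dot(np.sort(src)[::-1], np.sort(tgt)[::-1]))

f_small, f_large = embezzle_overlap(2 ** 8), embezzle_overlap(2 ** 16)
assert f_large > f_small > 0.9   # embezzling error shrinks like 1/log N
```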
In this case, we always have the constraint

Q_1 ≥ Q(N) = max_ρ (1/2) I(R;B)_ρ, (40)

since Q_1[q→q] is the only source of forward communication. Suppose that β = (E−e)[qq], for some 0 ≤ e ≤ E, i.e. we will use all of the communication, but discard E−e ebits of entanglement. Now, for each ρ, being able to simulate the channel on input ρ^{⊗n} requires creating at least I(R;B)_ρ mutual information and (1/2)I(B;E)_ρ + E − e ebits of entanglement, which is only possible if

α ≥_CL (1/2)I(R;B)_ρ [q→q] + ((1/2)I(E;B)_ρ + E − e)[qq].

Equivalently

(Q_1 − (1/2)I(R;B)_ρ)[q→q] + Q_2[q←q] + C_2[c←c] ≥_CL ((1/2)I(E;B)_ρ − e)[qq]. (41)

We can calculate when Eq. (41) holds by using the spread capacity expressions in Eq. (37). First, if (1/2)I(E;B)_ρ − e ≥ 0 then the C_2[c←c] is not helpful and we simply have Q_1 − (1/2)I(R;B)_ρ + Q_2 ≥ (1/2)I(E;B)_ρ − e, or equivalently Q_1 + Q_2 ≥ H(B)_ρ − e. Alternatively, if (1/2)I(E;B)_ρ − e ≤ 0 then we have the inequality Q_1 − (1/2)I(R;B)_ρ + Q_2 + C_2 ≥ e − (1/2)I(E;B)_ρ, which is equivalent to

Q_1 + Q_2 + C_2 ≥ e − H(B|R)_ρ.

We will consider the case when E is sufficiently large so that it does not impose any constraints on the other parameters. This results in the bound

2(Q_1 + Q_2) + C_2 ≥ max_ρ H(B)_ρ − min_ρ H(B|R)_ρ − C_E(N), (42)

whose RHS is precisely ∆_sim(N) from Definition 4.

The role of communication can be thought of as both creating mutual information between R and B and creating entanglement spread. Both are necessary for channel simulation, but only forward communication can create mutual information, while backwards or forward communication (or even other resources, such as embezzling states) can be used to create spread.

The non-feedback case (e) of Theorem 3 adds one additional subtlety: since the simulation gives part of the input to Eve, it does not have to preserve superpositions between as many different input density matrices. In particular, if the input density matrix is ρ^{⊗n}, then Eve learns N̂(ρ). Thus, we need to run our protocol in an incoherent superposition over different values of N̂(ρ) and then in a coherent superposition within each N̂^{−1}(ω). Intuitively we can think of the input as a superposition over purifications of different tensor powers ρ^{⊗n}. This picture can be made rigorous by the post-selection principle [22] and gentle tomography [50], [11], but we will not explore this approach in detail. In this picture Eve learns ω = N̂(ρ) up to accuracy O(1/√n), and this collapses the superposition to inputs ρ^{⊗n} with ρ ∈ N̂^{−1}(ω). Thus we need only consider entanglement spread over the sets N̂^{−1}(ω).

Unfortunately, even in the case of a fixed input ρ, the additivity question is open. Until it is resolved, we cannot avoid regularized formulas. However, conceptually part (e) adds to part (d) only the issues of regularization and optimization over ways of splitting E^n into parts for Alice and Bob.

At this point it is natural to ask whether spread is only helpful for feedback simulations. The amplitude damping channel of Fig. 6 and the variable-entropy channel both have efficient non-feedback simulations on general inputs, using C_E bits of forward communication, even when entanglement is supplied only as ordinary ebits. One way to see why is to observe that in each case H(N(ρ)) is uniquely determined by N̂(ρ), so that measuring the average density matrix of Eve will leave no room for spread. An optimal simulation can gently measure the average density matrix of Eve, transmit this information classically to Bob, and then use the appropriate number of ebits.

A second way to see that spread is not needed to simulate these two channels is to give explicit choices of the isometry in Eq. (29). This is easier to do for the variable-entropy channel, for which

M_{d,F} = |00⟩^{BE}⟨0|^A + (1/√d) Σ_{i=1}^d |ii⟩^{BE}⟨1|^A.

Define an isometry V_1 : E → E_A E_B by

V_1 = (1/√d) Σ_{i=1}^d |0⟩^{E_A}|i⟩^{E_B}⟨0|^E + Σ_{i=1}^d |1⟩^{E_A}|i⟩^{E_B}⟨i|^E.

Then V_1 ∘ M_{d,F} is equivalent to transmitting a classical bit from Alice to Bob and creating a d-dimensional maximally entangled state between Bob and Eve. This can be simulated using one cbit by having Bob locally create a d-dimensional maximally mixed state.

However, the above reasoning does not extend to more complicated situations. In Sec. IV-E3 we exhibit a channel whose efficient simulation requires spread-generating resources such as embezzling states or back communication even in the non-feedback setting.

D. Relation to other communication protocols

Special cases of Theorem 3 include remote state preparation [12] (and the qubit-using variant, super-dense coding of quantum states [43]) for CQ channels N(ρ) = Σ_j ⟨j|ρ|j⟩ σ_j; the co-bit equality [q→qq] = ([q→q] + [qq])/2 [40]; measurement compression [85] (building on [65], [66]) for QC channels N(ρ) = Σ_j Tr(ρ M_j)|j⟩⟨j| where (M_j) is a POVM; entanglement dilution [8] for a constant channel N(ρ) = σ_0; and entanglement of purification (EoP) [75] – it was shown by Hayashi [46] that optimal visible compression of mixed state sources is given by the regularized EoP.

The Wyner protocol for producing a classical correlated distribution [89] is a static analogue of the cbit–rbit tradeoff. Similarly, the entanglement of purification is a static version of the qubits-but-no-ebits version of the QRST.

For feedback channels, [32] showed that the QRST can be combined with the so-called "feedback father" to obtain the resource equivalence Eq. (17). On the other hand, [32] also showed that running the QRST backwards on a fixed i.i.d. source yields state merging [57], a.k.a. fully-quantum Slepian–Wolf. This implies that merging can be used to provide an alternate construction of the QRST on a known i.i.d. source [1]. More recently, [90] has introduced state redistribution, which simultaneously generalizes state merging and splitting, by determining the optimal rate at which a system can be sent from one party to another when both parties hold ancilla systems that are in some way entangled with the system being sent.

As remarked earlier, the version of the classical reverse Shannon theorem proved here, Theorem 1, differs from the version originally proved in [14] (which also first conjectured Theorem 3). In the earlier version, the simulation was exactly faithful even for finite block size, and asymptotically efficient in the amount of communication used, but exponentially inefficient in the amount of shared randomness. The version proved here is only asymptotically faithful, but importantly stronger in being asymptotically efficient in its use both of classical communication and shared randomness. None of our simulations, nor other results in this area (apart from [27]), achieve the zero-error performance of [14]. We believe that zero-error simulation of classical channels using optimal rates of communication requires exponential amounts of shared randomness, and that for quantum channels, zero-error simulations do not exist in general. Apart from some easy special cases (e.g. quantum feedback channels), we do not know how to prove these conjectures.

III. SIMULATION OF CLASSICAL CHANNELS

A. Overview

This section is devoted to the proof of Theorem 1 (the classical reverse Shannon theorem). Previously the high-randomness cases of Theorem 1 were proved in [14], [84] and its converse was proved in [84]. Here we will review their proof and show how it can be extended to cover the low-randomness case (parts (c,d,e) of Theorem 1). Similar results have been obtained independently in [28].

The intuition behind the reverse Shannon theorem can be seen by considering a toy version of the problem in which all probabilities are uniform. Consider a regular bipartite graph with vertices divided into (X, Y) and with edges E ⊂ X × Y. Since the graph is regular, every vertex in X has degree |E|/|X| and every vertex in Y has degree |E|/|Y|. For x ∈ X, let Γ(x) ⊂ Y be its set of neighbors. We can use this to define a channel from X to Y: define N(y|x) to be 1/|Γ(x)| = |X|/|E| if y ∈ Γ(x) and 0 if not. In other words, N maps x to a random one of its neighbors. We call these channels "unweighted" since their transition probabilities correspond to an unweighted graph.

In this case, it is possible to simulate the channel N using a message of size ≈ log(|X| · |Y|/|E|) and using ≈ log(|E|/|X|) bits of shared randomness. This can be thought of as a special case of part (a) of Theorem 1 in which N is an unweighted channel and we are only simulating a single use of N. This is achieved by approximately decomposing N into a probabilistic mixture of channels and using the shared randomness to select which one to use. We will choose these channels such that their ranges are disjoint subsets of Y, and in fact will construct them by starting with a partition of Y and working backwards. The resulting protocol is analyzed in the following lemma.

Lemma 7. Consider a channel N : X → Y with N(y|x) = 1_{Γ(x)}(y)/|Γ(x)|, where 1_S denotes the indicator function for a set S. Choose positive integers r, m such that rm = |Y| and let γ = m|E| / (|X| |Y|). Choose a random partition of Y into subsets Y_1, ..., Y_r, each of size m, and for y ∈ Y_i define i(y) to be the index of the block containing y. Define

Ñ(y|x) = 1_{Γ(x)}(y) / (r · |Γ(x) ∩ Y_{i(y)}|)

to be the channel that results from the following protocol:
1) Let i ∈ [r] be a uniformly chosen random number shared by Alice and Bob.
2) Given input x, Alice chooses a random element of Γ(x) ∩ Y_i (assuming that one exists) and transmits its index j ∈ [m] to Bob.
3) Bob outputs the jth element of Y_i.
Then it holds with probability ≥ 1 − 2r e^{−γε²/2} that ‖N(·|x) − Ñ(·|x)‖_1 ≤ ε for all x.

If we choose γ = 2(ln 4|E|)/ε² then there is a nonzero probability of a good partition existing. In this case we can derandomize the construction and simply say that a partition of Y exists such that the above protocol achieves low error on all inputs.

The idea behind Lemma 7 is that for each x and i, the random variable |Γ(x) ∩ Y_i| has expectation close to

|Γ(x)| · |Y_i| / |Y| = (|E|/|X|) · (m/|Y|) = γ,

with typical fluctuations on the order of √γ. If γ is large then these fluctuations are relatively small, and the channel simulation is faithful. Similar "covering lemmas" appeared in Refs. [84], [28], and were anticipated by Ref. [39] and Thm 6.3 of [89]. The details of the proof are described in Sec. III-B.

The difference between Lemma 7 and the classical reverse Shannon theorem (i.e. part (a) of Theorem 1) is that in the latter we are interested in an asymptotically growing number of channel uses n and in simulating general channels N, instead of unweighted channels. It turns out that when n is large, N^n looks mostly like an unweighted channel, in a sense that we will make precise in Sec. III-C. We will see that Alice need communicate only O(log(n)) bits to reduce the problem of simulating N^n to the problem of simulating an unweighted channel. This will complete the proof of the direct part of part (a) of Theorem 1.

One feature of the protocol in Lemma 7 is that Bob uses only shared randomness (i) and the message from Alice (j) in order to produce his output y. As a result, the protocol effectively simulates the feedback channel N_F in which Alice also gets a copy of y. Conversely, in order to simulate a feedback channel, Bob cannot use local randomness in any significant way.
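Lemma 7's protocol is easy to exercise numerically. The sketch below is our own toy instance (a cyclic-neighborhood channel, with sizes and random seed chosen purely for illustration): it builds a random partition of Y and checks that Ñ(y|x) = 1/(r·|Γ(x)∩Y_{i(y)}|) is close to N(y|x) = 1/|Γ(x)| in total variation, with error at the 1/√γ scale predicted by the concentration argument.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regular unweighted channel: X = Y = Z_n, Gamma(x) = {x, ..., x+d-1} mod n,
# so N(y|x) = 1/d on Gamma(x). Here |E| = n*d and gamma = m*|E|/(|X||Y|) = m*d/n.
n, d, r = 4096, 512, 8
m = n // r                      # so that r*m = |Y|
gamma = m * d / n               # = 64 here; fluctuations are ~ sqrt(gamma)

perm = rng.permutation(n)
block = np.empty(n, dtype=int)  # block[y] = i(y): random partition into r blocks of m
for i in range(r):
    block[perm[i * m:(i + 1) * m]] = i

worst_tv = 0.0
for x in range(0, n, 64):       # spot-check a subset of inputs
    nbrs = (x + np.arange(d)) % n
    counts = np.bincount(block[nbrs], minlength=r)   # |Gamma(x) ∩ Y_i| for each i
    assert counts.min() > 0     # every block is hit, since gamma is large
    # Protocol: shared uniform i, Alice sends the index of a random element
    # of Gamma(x) ∩ Y_i, hence Ntilde(y|x) = 1/(r * |Gamma(x) ∩ Y_{i(y)}|).
    ntilde = 1.0 / (r * counts[block[nbrs]])
    tv = np.abs(ntilde - 1.0 / d).sum()              # ||N(.|x) - Ntilde(.|x)||_1
    worst_tv = max(worst_tv, tv)

assert worst_tv < 4.0 / np.sqrt(gamma)  # fluctuation scale predicted by Lemma 9
```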
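The proofs in Sec. III-B below rest on a Hoeffding-type tail bound for the hypergeometric distribution (Lemma 9). As a sanity check, the following computes the exact upper tail for one parameter setting (our own choice) and confirms it sits below the bound of Eq. (45).

```python
from math import comb, exp

def upper_tail(n, a, b, eps):
    """Exact Pr[|A ∩ B| >= (1+eps)*mu] for uniformly random subsets
    A, B of [n] with |A| = a and |B| = b (hypergeometric overlap)."""
    mu = a * b / n
    lo, hi = max(0, a + b - n), min(a, b)
    total = comb(n, b)
    return sum(comb(a, k) * comb(n - a, b - k)
               for k in range(lo, hi + 1) if k >= (1 + eps) * mu) / total

n, a, b, eps = 200, 40, 50, 0.5
mu = a * b / n                    # expected overlap: 10
tail = upper_tail(n, a, b, eps)
bound = exp(-eps ** 2 * mu / 2)   # Lemma 9, Eq. (45)
assert tail <= bound
```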
On the other hand, if Alice does not need to learn y, then we can consider protocols in which some of the random bits used are shared and some are local to Alice or Bob. This will allow us to reduce the use of shared randomness at the cost of some extra communication. The resulting trade-off between the resources is given in part (c) of Theorem 1. In order to prove it, we will again first consider the unweighted case.

The idea will be to decompose the channel N(y|x) as the composition of channels N_1(w|x) and N_2(y|w); i.e. N(y|x) = (N_2 ∘ N_1)(y|x) = Σ_{w∈W} N_1(w|x) N_2(y|w). In this case Alice can simulate the channel N on input x by simulating N_1 to produce intermediate output w, on which Bob locally applies N_2 to produce y. Since w is generally more correlated with x than y is, this will require more communication than simply simulating N directly as in Lemma 7. However, since Bob simulates N_2 using local randomness, the protocol may require less shared randomness, and more importantly, the total amount of communication plus shared randomness may be lower.

We will assume that the channels N, N_1 and N_2 are all unweighted channels. Let the corresponding bipartite graphs for N, N_1, N_2 have edges E_XY ⊂ X×Y, E_XW ⊂ X×W and E_WY ⊂ W×Y, respectively. We use Γ_XY(x) to denote the neighbors of x in Y; that is, Γ_XY(x) = {y : (x,y) ∈ E_XY}. Similarly, we can define Γ_YX(y) to be the neighbors of y in X, Γ_XW(x) to be the neighbors of x in W and so on. We assume that the graphs are regular, so that |Γ_XW(x)| = |E_XW|/|X| for all x, |Γ_WY(w)| = |E_WY|/|W| for all w, and so on. Combined with the fact that N = N_2 ∘ N_1, we find that

1_{Γ_XY(x)}(y) / (|E_XY|/|X|) = N(y|x) = Σ_{w∈W} N_2(y|w) N_1(w|x)
  = Σ_{w∈W} [1_{Γ_XW(x)}(w) / (|E_XW|/|X|)] · [1_{Γ_WY(w)}(y) / (|E_WY|/|W|)]
  = |Γ_XW(x) ∩ Γ_YW(y)| / (|E_XW| · |E_WY| / (|X| |W|)). (43)

Rearranging terms yields the identity

|Γ_XW(x) ∩ Γ_YW(y)| = 1_{Γ_XY(x)}(y) · |E_XW| |E_WY| / (|E_XY| |W|). (44)

The protocol is now defined in a way similar to the one in Lemma 7.

Lemma 8. Choose positive integers r, m such that m = γ|X||W|/|E_XW| and r = |E_XY||W| / (|E_WY||X|). Choose disjoint sets W_1, ..., W_r ⊂ W at random, each of size m. Let W_i = {w_{i,1}, ..., w_{i,m}}. Let Ñ(y|x) be the channel resulting from the following protocol:
1) Let i ∈ [r] be a uniformly chosen random number shared by Alice and Bob.
2) Given input x, Alice chooses a random w_{i,j} ∈ Γ(x) ∩ W_i (assuming that one exists) and transmits its index j ∈ [m] to Bob.
3) Bob outputs y with probability N_2(y|w_{i,j}).
Then it holds with probability ≥ 1 − 2|E_XY| e^{−γε²/32} that ‖N(·|x) − Ñ(·|x)‖_1 ≤ ε for all x.

We can take γ = 32(ln 2|E_XY|)/ε² and derandomize the statement of the lemma.

Note that in general we will have rm < |W|, so that this protocol does not use all of W. This should not be surprising, since faithfully simulating the channel N_1 should in general be more expensive than simulating N. The trick is to modify the simulation of N_1 that would be implied by Lemma 7 to use less randomness, since we can rely on Bob's application of N_2 to add in randomness at the next stage.

B. Proof of unweighted classical reverse Shannon theorem

In this section we prove Lemma 7 and Lemma 8. The main tool in both proofs is the Hoeffding bound for the hypergeometric distribution [53]. The version we will need is

Lemma 9 (Hoeffding [53]). For integers 0 < a ≤ b < n, choose A and B to be random subsets of [n] satisfying |A| = a and |B| = b. Then μ := E[|A ∩ B|] = ab/n and

Pr[|A ∩ B| ≥ (1+ε)μ] ≤ e^{−ε²μ/2} (45)
Pr[|A ∩ B| ≤ (1−ε)μ] ≤ e^{−ε²μ/2} (46)
Pr[| |A ∩ B| − μ | ≥ εμ] ≤ 2e^{−ε²μ/2} (47)

Now we turn to Lemma 7. We can calculate

‖N(·|x) − Ñ(·|x)‖_1 = Σ_{y∈Γ(x)} |N(y|x) − Ñ(y|x)|
  = Σ_{y∈Γ(x)} | 1/|Γ(x)| − 1/(r · |Γ(x) ∩ Y_{i(y)}|) |
  = Σ_{i=1}^r Σ_{y∈Γ(x)∩Y_i} | 1/|Γ(x)| − 1/(r · |Γ(x) ∩ Y_i|) |
  = Σ_{i=1}^r | |Γ(x) ∩ Y_i| / |Γ(x)| − 1/r |. (48)

To apply Lemma 9, take A = Γ(x) and B = Y_i, so that a = |E|/|X| = rγ, b = m = |Y|/r, n = |Y| and μ = γ. Then each term in the sum in Eq. (48) is ≤ ε/r with probability ≥ 1 − 2e^{−γε²/2}. Taking the union bound over all x and i completes the proof of Lemma 7.

The proof of Lemma 8 is similar. This time

Ñ(y|x) = (1/r) Σ_{i=1}^r Σ_{w∈W_i} Pr[Alice sends w | x, i] N_2(y|w)
  = (1/r) Σ_{i=1}^r Σ_{w∈W_i} [1_{Γ_XW(x)}(w) / |Γ_XW(x) ∩ W_i|] · [1_{Γ_WY(w)}(y) / |Γ_WY(w)|]
  = (|W| / (r|E_WY|)) Σ_{i=1}^r |Γ_XW(x) ∩ Γ_YW(y) ∩ W_i| / |Γ_XW(x) ∩ W_i|.

We will use Lemma 9 twice. First, consider |Γ_XW(x) ∩ W_i|. This has expectation equal to γ and therefore

Pr[|Γ_XW(x) ∩ W_i| ≥ (1 + ε/4)γ] ≤ e^{−γε²/32}.

(We will see that the one-sided bound simplifies some of the later calculations.) Taking the union bound over all |X| r ≤
|E_XY| values of x, i, we find that |Γ_XW(x) ∩ W_i| ≤ (1+ε/4)γ for all x, i with probability ≥ 1 − |E_XY| e^{−γε²/32}. Assuming that this is true, we obtain

Ñ(y|x) ≥ (|W| / (r|E_WY|)) Σ_{i=1}^r |Γ_XW(x) ∩ Γ_YW(y) ∩ W_i| / ((1+ε/4)γ)
  = (|W| / (r|E_WY|)) · |Γ_XW(x) ∩ Γ_YW(y) ∩ W̃| / ((1+ε/4)γ), (49)

where we define W̃ = W_1 ∪ ... ∪ W_r. Note that W̃ is a random subset of W of size rm. Using Eq. (44) we find that E[|Γ_XW(x) ∩ Γ_YW(y) ∩ W̃|] is equal to γ when (x,y) ∈ E_XY and 0 otherwise. Again we use Lemma 9 to bound

Pr[|Γ_XW(x) ∩ Γ_YW(y) ∩ W̃| ≤ (1 − ε/4)γ] ≤ e^{−γε²/32}

for all (x,y) ∈ E_XY. Now we take the union bound over all pairs (x,y) ∈ E_XY to find that

|Γ_XW(x) ∩ Γ_YW(y) ∩ W̃| ≥ (1 − ε/4)γ (50)

with probability ≥ 1 − |E_XY| e^{−γε²/32}. When both Eq. (49) and Eq. (50) hold and (x,y) ∈ E_XY it follows that

Ñ(y|x) ≥ (|W| / (r|E_WY|)) · (1−ε/4)/(1+ε/4) > (1 − ε/2) |W| / (r|E_WY|)
  = (1 − ε/2) |X| / |E_XY| = (1 − ε/2) N(y|x). (51)

Finally we compare with Eq. (43) to obtain

‖N(·|x) − Ñ(·|x)‖_1 = Σ_{y∈Y} 2 max(0, N(y|x) − Ñ(y|x)) < ε Σ_{y∈Y} N(y|x) = ε. (52)

This concludes the proof of Lemma 8.

C. Classical types

In this section we show how the classical method of types can be used to extend Lemmas 7 and 8 to prove the coding parts of Theorem 1. We begin with a summary of the arguments aimed at readers already familiar with the method of types (a more pedagogical presentation is in [26]). The idea is for Alice to draw a joint type according to the appropriate distribution and to send this to Bob. This requires O(log(n)) bits of communication, and conditioned on this joint type they are left with an unweighted channel and can apply Lemma 7. It is then a counting exercise to show that the communication and randomness costs are as claimed. For the low-randomness case, the protocol is based on a decomposition of N^{X→Y} into N_2^{W→Y} ∘ N_1^{X→W}. Alice draws an appropriate joint type for all three variables (X, W, Y) and transmits this to Bob. Again this involves O(log(n)) bits of communication and leaves them with an unweighted channel, this time of the form that can be simulated with Lemma 8.

To prove these claims, we begin by reviewing the method of types, following [26]. We will use X, Y, W to denote single-letter alphabets, while reserving X, Y, W for block variables. Consider a string x^n = (x_1, ..., x_n) ∈ X^n. Define the type of x^n to be the |X|-tuple of integers t(x^n) := Σ_{j=1}^n e_{x_j}, where e_j ∈ Z^X is the unit vector with a one in the jth position. Thus t(x^n) counts the frequency of each symbol x ∈ X in x^n. Let T^n_X denote the set of all possible types of strings in X^n. Since an element of T^n_X can be written as |X| numbers ranging from 0, ..., n, we obtain the simple bound |T^n_X| = binom(n+|X|−1, |X|−1) ≤ (n+1)^{|X|}. For a type t, let the normalized probability distribution t̄ := t/n denote its empirical distribution.

For a particular type t ∈ T^n_X, denote the set of all strings in X^n with type t by T_t = {x^n ∈ X^n : t(x^n) = t}. From [26], we have

(n+1)^{−|X|} exp(nH(t̄)) ≤ |T_t| = binom(n, t) ≤ exp(nH(t̄)), (53)

where binom(n, t) is defined to be n! / Π_{x∈X} t_x!. Next, let p be a probability distribution on X and p^{⊗n} the probability distribution on X^n given by n i.i.d. copies of p, i.e. p^{⊗n}(x^n) := p(x_1) ··· p(x_n). Then for any x^n ∈ T_t we have p^{⊗n}(x^n) = Π_{x∈X} p(x)^{t_x} = exp(−n(H(t̄) + D(t̄‖p))). Combining this with Eq. (53), we find that

(n+1)^{−|X|} exp(−nD(t̄‖p)) ≤ p^{⊗n}(T_t) ≤ exp(−nD(t̄‖p)). (54)

Thus, as n grows large, we are likely to observe an empirical distribution t̄ that is close to the actual distribution p. To formalize this, define the set of typical sequences T^n_{p,δ} by

T^n_{p,δ} := ∪_{t ∈ T^n_X : ‖t̄−p‖_1 ≤ δ} T_t. (55)

To bound p^{⊗n}(T^n_{p,δ}), we apply Pinsker's inequality [70]:

D(q‖p) ≥ (1/(2 ln 2)) ‖p − q‖_1² (56)

to show that

p^{⊗n}(T^n_{p,δ}) ≥ 1 − (n+1)^{|X|} exp(−nδ²/(2 ln 2)). (57)

We will also need the Fannes–Audenaert inequality [36], [7], which establishes the continuity of the entropy function. Let η(x) = −x log x − (1−x) log(1−x). Then if p, q are probability distributions on d letters,

|H(p) − H(q)| ≤ (1/2)‖p−q‖_1 log(d−1) + η((1/2)‖p−q‖_1). (58)

If we have a pair of strings x^n ∈ X^n, y^n ∈ Y^n, then we can define their joint type t(x^n y^n) simply to be the type of the string (x_1y_1, ..., x_ny_n) ∈ (X×Y)^n. Naturally the bounds in Eq. (53) and Eq. (54) apply equally well to joint types, with X replaced by X×Y. If t is a joint type then we can define its marginals t^X ∈ Z^X and t^Y ∈ Z^Y by t^X_x = Σ_{y∈Y} t_{x,y} and t^Y_y = Σ_{x∈X} t_{x,y}. Let N(y|x) denote a noisy channel from X → Y with N^n(y^n|x^n) := N(y_1|x_1) ··· N(y_n|x_n). Then N^n(y^n|x^n) depends only on the type t = t(x^n y^n), according to N^n(y^n|x^n) = Π_{x,y} N(y|x)^{t_{x,y}}.

We now have all the tools we need to reduce Theorem 1 to Lemmas 7 and 8. First, consider parts (a,b) of Theorem 1, where we have an ample supply of shared randomness. In either case, the protocol is as follows:
1) Alice receives input x^n. This may be expressed in a type-index representation as (t_A, p_A). Here t_A = t(x^n) is the input type, and p_A ∈ [|T_{t_A}|] is defined by assigning the integers 1, ..., |T_{t_A}| arbitrarily to the elements of T_{t_A}.
2) Alice simulates the channel N^n locally to generate a provisional output ỹ^n. Let t_B = t(ỹ^n) be the output type and t_AB ∈ Z^{X×Y} the joint type between x^n and ỹ^n. Having determined these types, Alice discards ỹ^n, as it is no longer needed.
3) Alice sends t_B to Bob using |X×Y| log(n+1) bits.
4) Alice and Bob use n(H(Y) − C) + o(n) bits of shared randomness to pick a subset S_i from a preagreed partitioning of outputs of type t_B into approximately equal disjoint subsets, each of cardinality approximately 2^{nC}, where C is the Shannon capacity of channel N.
5) Alice finds a string y^n ∈ S_i having the same joint type with x^n as ỹ^n had. But because y^n lies in the chosen subset S_i, which Bob already knows, Alice can transmit y^n to Bob more efficiently, by a message of size only nC + o(n) bits, using the method of Lemma 7. (Let X = T_{t_A}, Y = T_{t_B} and E = T_{t_AB} ⊂ X × Y define a regular bipartite graph. To simulate the action of N^n on x^n ∈ X, conditioned on (x^n, y^n) ∈ E, we need only to choose a random neighbor y^n of x^n in this graph.)

This protocol is depicted in Fig. 8.

It remains only to analyze the cost of this last step. The communication cost is (taking notation from the statement of Lemma 7)

log(m) = log(|X| |Y| γ / |E|)
  = log( |T_{t_A}| |T_{t_B}| (2 ln(2|T_{t_AB}|)/ε²) / |T_{t_AB}| )
  ≤ n(H(t̄_A) + H(t̄_B) − H(t̄_AB)) + |X×Y| log(n+1) + log(2 ln(2) n H(t̄_AB)/ε²)
  = n I(X;Y)_{t̄_AB} + O(log(n)) + log(1/ε²).

Since C(N) ≥ I(X;Y)_{t̄} for all t, this establishes part (b) of Theorem 1. Continuing, we estimate the randomness cost to be

log(r) = log(|E| / (|X| γ)) ≤ log(|E| / |X|)
  ≤ n(H(t̄_AB) − H(t̄_A)) + O(log(n))
  = n H(Y|X)_{t̄_AB} + O(log(n)).

To prove part (a), we need to relate entropic quantities defined for t̄ to the corresponding quantities for p. This will be done with typical sets (Eq. (57)) and the Fannes–Audenaert inequality (Eq. (58)). If p is a distribution on X then let q = N_F(p) be the joint distribution on X and Y that results from sending X through N and obtaining output Y. Then Eq. (57) implies that following the above protocol results in values of t̄ that are very likely to be close to q. In particular, q^{⊗n}(T^n_{q,δ}) ≥ 1 − (n+1)^d e^{−nδ²/2}, where d = |X×Y|. Next, the Fannes–Audenaert inequality says that if t̄ ∈ T^n_{q,δ} then |H(t̄) − H(q)| ≤ δ log(d/δ). Applying this to each term in I(X;Y) = H(X) + H(Y) − H(XY), we obtain that |I(X;Y)_{t̄} − I(X;Y)_q| ≤ 3δ log(d/δ) and |H(Y|X)_{t̄} − H(Y|X)_q| ≤ 2δ log(d/δ). Taking δ to be n^{−1/4}, we obtain a sequence of protocols where both error and inefficiency simultaneously vanish as n → ∞.

Similarly, for part (c), we need to consider the joint distribution q of XWY that results from drawing X according to p, sending it through N_1 to obtain W and then sending W through N_2 to obtain Y. The protocol is as follows:
1) Suppose Alice's input is x^n.
2) Alice simulates N_1^n(x^n) to obtain w̃^n and then simulates N_2^n(w̃^n) to obtain ỹ^n.
3) Alice sets t_AWB = t(x^n w̃^n ỹ^n). She will not make any further use of w̃^n or ỹ^n.
4) Alice sends t_AWB to Bob using |X×W×Y| log(n+1) bits.
5) Define X = T_{t_A}, W = T_{t_W}, Y = T_{t_B}, E_XY = T_{t_AB}, E_XW = T_{t_AW} and E_WY = T_{t_WB}. To simulate the action of N^n on x^n ∈ X, conditioned on (x^n, w^n, y^n) ∈ T_t, we need only to choose a random element of Γ_XY(x^n) in this graph. This is achieved with Lemma 8.

The analysis of this last step is similar to that of the previous protocol. The communication cost is log(m) = log(|X| |W| γ / |E_XW|) = n I(X;W)_{t̄} + O(log n), and the randomness cost is

log(r) = log(|E_XY| · |W| / (|E_WY| |X|))
  = n(H(XY)_{t̄} + H(W)_{t̄} − H(WY)_{t̄} − H(X)_{t̄}) + O(log n)    (using Eq. (53))
  = n(H(XY)_{t̄} + H(XW)_{t̄} − H(XWY)_{t̄} − H(X)_{t̄}) + O(log n)    (using the Markov condition I(X;Y|W)_{t̄} = 0)
  = n I(Y;W|X)_{t̄} + O(log n)
  = n(I(XY;W)_{t̄} − I(X;W)_{t̄}) + O(log n). (59)

This concludes the proofs of the existence of channel simulations claimed in Theorem 1.

D. Converses

In this section we discuss why the communication rates for the above protocols cannot be improved. The lower bound for simulating feedback channels was proven in [84] and for non-feedback channels in [28]. We will not repeat the proofs here, but only sketch the intuition behind them.

First, the communication cost must always be at least C(N), or I(X;Y)_p if the input is restricted to be from the distribution p. Otherwise we could combine the simulation with Shannon's [forward] noisy channel coding theorem to turn a small number of noiseless channel uses into a larger number of uses. This is impossible even when shared randomness is allowed.

Next, if N_F (i.e., the channel including noiseless feedback) is to be simulated, then Bob's output (with entropy H(Y)) must be entirely determined by the C bits of classical communication sent and the R bits of shared randomness used. Therefore we must have C + R ≥ H(Y).

The situation is more delicate when the simulation does not need to provide feedback to Alice. Suppose we have a protocol
[Figure: typewise compressed simulation giving the classical reverse Shannon theorem. In Alice's lab, the input x^n is expressed in type/index representation (t_A, p_A); the channel is simulated locally to get the joint type t_AB; the encoder chooses an output index p_B lying in the chosen set S_i and having the correct joint type with p_A. A short O(log n)-bit message conveys the output type t_B, and p_B is transmitted efficiently using ≈ nC bits; ≈ n(H(Y)−C) shared random bits are used to choose a set S_i of ≈ 2^{nC} type-t_B outputs from a preagreed partitioning of the outputs of type t_B into disjoint subsets S_1, S_2, .... In Bob's lab, the decoder reconstructs p_B using the shared randomness and outputs y^n; the output type and index can optionally be fed back to Alice.]
Fig. 8. The protocol for the classical reverse Shannon theorem (Theorem 1).
that uses C cbits and R rbits. Then let W = (W_1, W_2) comprise both the message sent (W_1) and the shared random string (W_2). We immediately obtain I(XY;W) ≤ H(W) ≤ C + R. Additionally, the shared randomness W_2 and the input X are independent even given the message W_1; in other words I(X;W_2|W_1) = 0. Thus I(X;W) = I(X;W_1) ≤ H(W_1) ≤ C. Finally we observe that X − W − Y satisfies the Markov chain condition, since Bob produces Y only by observing W. This argument is discussed in more detail in [28], where it is also proven that it suffices to consider single-letter optimizations.

These converses are also meaningful, and essentially unchanged, when we consider negative R, corresponding to protocols that output shared randomness.

We observe that some of these converses are obtained from coding theorems and others are obtained from more traditional entropic bounds. In the cases where the converses are obtained from coding theorems, we in fact generally obtain strong converses, meaning that fidelity decreases exponentially when we try to use less communication or randomness than necessary. This is discussed in [84] and we will discuss a quantum analogue of this point in Sec. IV-E.

IV. SIMULATION OF QUANTUM CHANNELS ON ARBITRARY INPUTS

This section is devoted to proving parts (d) and (e) of Theorem 3.

A. The case of flat spectra

By analogy with Sec. III-B, we will first state an unweighted or "flat" version of the quantum reverse Shannon theorem. We will then use a quantum version of type theory (based on Schur–Weyl duality) to extend this to prove the QRST for general inputs.

Definition 10. An isometry V^{A→BE} is called flat if, when applied to half of a maximally entangled state |Φ⟩^{RA}, it produces a state |ψ⟩^{RBE} with ψ^R, ψ^B and ψ^E each maximally mixed.

We note two features of the definition. First, the requirement that ψ^R be maximally mixed is satisfied automatically, but we include it to emphasize that each marginal of ψ should be maximally mixed. Second, the definition of a flat isometry does not depend on the choice of maximally entangled input |Φ⟩.

An important special case of flat channels occurs when A, B, E are irreps of some group G and V is a G-invariant map. We will return to this point in Sec. IV-C2.
Lemma 11 ([57], [1]). Let A, B, E have dimensions D_A, D_B, D_E respectively. Consider furthermore quantum systems K_A, K_B, M with dimensions D_K, D_K, D_M, respectively, such that D_B = D_K D_M and D_M ≥ (256/δ^4)·√(D_A D_B / D_E). If V^{A→BE} is a flat isometry, it can be simulated up to error δ with respect to the maximally mixed input state, by consuming log(D_K) ebits and sending log(D_M) qubits from Alice to Bob. More precisely, there exist isometries S_H^{K_B M→B}, S_V^{A K_A→M E} such that

  V^{A→BE} |Φ_{D_R}⟩^{RA} ≈ S_H^{K_B M→B} S_V^{A K_A→M E} |Φ_{D_K}⟩^{K_A K_B} |Φ_{D_R}⟩^{RA}.   (60)

Furthermore, S_H can be taken to be a Haar random unitary of dimension D_B, with S_V chosen deterministically based on S_H and V. Eq. (60) holds with probability ≥ 1 − δ over the random choice of S_H.

The protocol in Eq. (60) is depicted in Fig. 9.

[Figure 9 appears here.] Fig. 9. The simulation of flat channels described in Lemma 11. The entangled state Φ_{D_K} is consumed in order to simulate the action of V^{A→BE} on the A part of |ψ⟩^{RA}. S_H is chosen from the Haar measure on U_{D_B}, while S_V is chosen to depend on S_H and V, as described by [1]. While any |ψ⟩^{RA} can be input into the channel, the fidelity guarantee of Eq. (60) only holds when ψ^A is maximally mixed.

In an earlier unpublished version of this work, we proved a version of Lemma 11 using the measurement compression theorem of [85]. That version used classical instead of quantum communication (with correspondingly different rates), but by making the communication coherent in the sense of [40], [33] it is possible to recover Lemma 11.

However, a conceptually simpler proof of Lemma 11 was later given by [32], [57], [1]. This proof is based on reversing "state merging," which we can think of as the task of Bob sending a subsystem B to Alice in a way that preserves its correlation with a subsystem E which Alice already has, as well as with a purifying reference system R. In other words, merging is a state redistribution of the form

  Ψ^{R:E:B} → Ψ^{R:EB}.   (61)

The simplest proof of state merging is given in [1], where it is shown that if Bob splits B randomly into systems M and K_B of the appropriate sizes (i.e. by applying S_H^†), and sends M to Alice, then Alice will be able to locally transform E, M into two subsystems A and K_A such that A is completely entangled with the reference system R (and thus can be locally transformed by Alice into E, B, the desired goal of the merging). On the other hand, K_A is nearly completely entangled with the remaining K_B system that Bob kept, so that it represents a byproduct of entanglement between Alice and Bob that has been generated by the protocol. When executed in reverse, the merging becomes splitting, and the K_A K_B entanglement becomes a resource that is consumed, along with the quantum transmission of system M from Alice to Bob, in order to implement the state-splitting redistribution

  Ψ^{R:EB} → Ψ^{R:E:B}.   (62)

B. Tensor power inputs

We next need to reduce the general channel simulation problem to the problem of simulating flat channels. To get the idea of how this works, consider first the problem of simulating N^{⊗n} on a tensor power input ρ^{⊗n}. While several solutions to this problem have been previously described in [57], [32], [1], and this section is not strictly necessary for the proof of Theorem 3, we will present a protocol for tensor power inputs here in a way that will help us understand the general case.

Let |σ⟩^{ABE} = (I ⊗ N)|Φ_ρ⟩^{AA'} and |ψ⟩ = |σ⟩^{⊗n}. Unfortunately, none of ψ^A, ψ^B nor ψ^E are in general maximally mixed. Even restricting to typical subspaces still leaves these states with eigenvalues that vary over a range of 2^{±O(√n)}. On the other hand, these eigenvalues have a large amount of degeneracy. Let {|a_1⟩, …, |a_{d_A}⟩} be the eigenbasis of ρ^A = σ^A. Then the eigenvectors of ψ^A can be taken to be of the form |a_{i_1}⟩ ⊗ ⋯ ⊗ |a_{i_n}⟩, for i = (i_1, …, i_n) ∈ [d_A]^n. Moreover, the corresponding eigenvalue is determined entirely by the type t_A of i, just as in the classical case. There are (n + d_A − 1 choose n) such types. For fixed d_A, this number is polynomial in n, and thus the "which type" information can be transmitted using O(log n) qubits. Conditioned on this information, we are left with a flat spectrum over a space whose dimension depends on the type.

The same decomposition into types can be performed for the B and E systems, and for constant d_B and d_E we will still have at most poly(n) types t_B and t_E. Furthermore, we can decompose the action of U_N^{⊗n} into a map from t_A to a superposition of t_B and t_E, followed by a flat map within the type classes, which we call V_{t_A t_B t_E}. Thus, letting ≅ denote a global change of basis, we have

  U_N^{⊗n} ≅ Σ_{t_A, t_B, t_E} |t_B, t_E⟩⟨t_A| ⊗ V_{t_A, t_B, t_E}.   (63)

The only remaining question is to determine the communication rate. Here we can use the classical theory of types from Sec. III-C to argue that almost all of the weight of ψ is concentrated in strings with t̄_A, t̄_B, t̄_E close to the spectra of σ^A, σ^B and σ^E respectively. If "close" is defined to be distance δ, then ignoring the atypical types incurs error at most exp(−nδ') and we are left with subspaces of dimensions D_A = exp(n(H(A)_σ ± δ'')), D_B = exp(n(H(B)_σ ± δ'')) and D_E = exp(n(H(E)_σ ± δ'')), where δ', δ'' are constants depending on δ.^{10} Applying Lemma 11, we then obtain the claimed communication rates.

^{10} These claims are based on standard methods of information theory. By "distance" δ we refer to the trace distance ‖σ^X − t̄_X‖_1. The error bound on ignoring atypical types is obtained from Eq. (57) and the bound on entropy is from the Fannes-Audenaert inequality (Eq. (58)).
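The type-counting claims above are easy to check numerically. The following sketch is our illustration (not part of the original analysis); the small parameters d = 3, n = 8 are arbitrary. It enumerates the types of strings in [d]^n, verifies that there are (n + d − 1 choose d − 1) of them, and checks the bound |T_t| ≤ exp(n H(t̄)) that underlies the dimension estimates of the form exp(n(H ± δ'')).

```python
from itertools import product
from math import comb, factorial, log

d, n = 3, 8  # illustrative sizes; the paper's statements are asymptotic in n

def type_of(x, d):
    """Letter-frequency vector (the 'type') of a string x over alphabet [d]."""
    return tuple(x.count(a) for a in range(d))

types = {type_of(x, d) for x in product(range(d), repeat=n)}

# Number of types is (n + d - 1 choose d - 1): polynomial in n for fixed d.
assert len(types) == comb(n + d - 1, d - 1)

def class_size(t):
    """|T_t| = n! / (t_1! ... t_d!), the number of strings of type t."""
    size = factorial(sum(t))
    for c in t:
        size //= factorial(c)
    return size

def entropy(p):
    """Shannon entropy (in nats) of a probability vector."""
    return -sum(q * log(q) for q in p if q > 0)

# The type classes partition [d]^n, and each class has size at most
# exp(n H(t-bar)), where t-bar = t/n is the empirical distribution.
assert sum(class_size(t) for t in types) == d ** n
for t in types:
    assert log(class_size(t)) <= n * entropy([c / n for c in t]) + 1e-9
```

Conditioning on the type thus splits the d^n strings into poly(n) flat blocks: the "which type" register costs only O(log n) (qu)bits, while each flat block has dimension at most exp(nH).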
Concretely, the rates are (H(A)+H(B)−H(E))/2 = (1/2) I(A;B) qubits and (H(B)+H(E)−H(A))/2 = (1/2) I(B;E) ebits per use of N.

Two subtleties arise from combining communication protocols involving different input and output types. The first problem is that we have to be careful about who knows what when: unlike in the classical channel simulation protocol, Alice would like to communicate to Bob only t_B, and not t_A or t_E. Indeed, she would like to forget t_A and retain only knowledge of t_E for herself. This is addressed by using the fact that Bob's decoding unitary S_H in Lemma 11 can be chosen to depend only on t_B, since we can choose a single S_H for each t_B, and w.h.p. Eq. (60) holds simultaneously for all t_A, t_E. Denote the resulting decoding map S_{H,t_B} and call Alice's encoding S_{V_{t_A,t_B,t_E}}. Then from Eqs. (60) and (63), we have that U_N^{⊗n} can be approximately simulated by starting with an appropriate entangled state (more on this below) and applying

  (Σ_{t_B} |t_B⟩⟨t_B| ⊗ S_{H,t_B}) × (Σ_{t_A,t_B,t_E} |t_B, t_E⟩⟨t_A| ⊗ S_{V_{t_A,t_B,t_E}}),   (64)

where the first factor is applied by Bob and the second factor is applied by Alice.

The second problem is that when we apply Lemma 11 to the V_{t_A,t_B,t_E}, the dimensions D_A, D_B, D_E (and thus D_K, D_M as well) vary by as much as exp(±nδ), and yet our protocol needs to act on a single entangled state and send a single message. For the M register we can address this by simply taking D_M to equal max ⌈(256/δ^4)·√(|T_{t_A}| |T_{t_B}| / |T_{t_E}|)⌉, where the maximum is taken over all typical triples of t_A, t_B, t_E. Thus, D_M is independent of any of the registers communicated during the protocol.

However, since D_K D_M must equal |T_{t_B}|, we cannot avoid having D_K vary with t_B. (There is a minor technical point related to D_K needing to be an integer, but this can be ignored at the cost of an exponentially small error.) As a result, we need to run the protocol of Lemma 11 in superposition, using different numbers of ebits in different branches of the superposition. This cannot be accomplished simply by discarding the unnecessary ebits in the branches of the superposition that need less entanglement; instead we need to use one of the techniques from Sec. II-C. Fortunately, since the number of ebits varies by only O(nδ) across different values of t_B, we only need to generate O(nδ) bits of entanglement spread. This can be done with O(nδ) extra qubits of communication, leading to an asymptotically vanishing increase in the communication rate. And since the amount of entanglement generated depends only on t_B, this does not require leaking any information to Bob that he will not already have. Alice first creates her half of the entanglement at the same time as she is transforming |t_A⟩ into |t_B, t_E⟩. Then she sends her half of the entanglement to Bob (after it has been mixed with the input by S_{V_{t_A,t_B,t_E}}), along with her copy of |t_B⟩. This ensures that Alice keeps no record of the amount of entanglement she has created, while Bob is able to perform his part of the entanglement-generation protocol.

Earlier versions of the quantum reverse Shannon theorem did not need to mention this sublinear amount of entanglement spread because the extra sublinear communication cost could be handled automatically by the protocols used. However, when we consider non-tensor-power inputs in Sec. IV-D we will need to make a more explicit accounting of the costs of entanglement spread. Thus, the reason our "warm-up" is more complicated than the previous proofs of the i.i.d.-source QRST is that it already contains much of the complexity of the full proof.

C. A quantum theory of types

There is one further difficulty which arises when considering non-tensor-power inputs. This problem can already be seen in the case when the input to the channel is of the form (1/2)(ρ_1^{⊗n} + ρ_2^{⊗n}). If ρ_1 and ρ_2 do not commute, then we cannot run the protocol of the previous section without first estimating the eigenbases of ρ_1 and ρ_2. Moreover, we need to perform this estimation in a non-destructive way, and then be able to uncompute our estimates of the eigenbasis, as well as any intermediate calculations used. Such techniques have been used to perform quantum data compression of ρ^{⊗n} when ρ is unknown [58], [11]. However, even for that much simpler problem they require delicate analysis. We believe that it is possible to prove the quantum reverse Shannon theorem by carefully using state estimation in this manner, but instead we will present a somewhat simpler proof that makes use of representation theory.

The area of representation theory we will use is known as Schur duality (or Schur-Weyl duality). It has also been used for data compression of unknown tensor power states [49], [50], [45] and entanglement concentration from tensor powers of unknown pure entangled states [48], [51]. Some reviews of the role of Schur duality in quantum information can be found in Chapters 5 and 6 of [41] and Chapters 1 and 2 of [21]. A detailed explanation of the mathematics behind Schur duality can also be found in [37]. Our treatment will follow [41]. In Sec. IV-C1, we will explain how Schur duality can serve as a quantum analogue of the classical method of types that we described in Sec. III-C. Then in Sec. IV-C2 we will show how this can be applied to channels, allowing us to decompose N^{⊗n} into a superposition of flat channels. Finally, in Sec. IV-C3 we will use this to describe quantum analogues of conditional types. We will use this to show that the atypical flat sub-channels involve only an exponentially small amount of amplitude. In Sec. IV-D, we will use these tools to prove Theorem 3.

1) Schur duality and quantum states: This section will review the basics of Schur duality and will explain how it can serve as a quantum analogue of the classical method of types. Let S_n denote the permutation group on n objects and let U_d denote the d-dimensional unitary group. Both groups have a natural action on (C^d)^{⊗n}. For u ∈ U_d define Q(u) = u^{⊗n}, and for s ∈ S_n define P(s) to permute the n systems according to s.
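The key structural fact about these two actions, that they commute, can be checked directly in a small case. The sketch below is our illustration (not from the paper): it builds Q(u) = u^{⊗n} and a permutation operator P(s) for d = 2, n = 3, using one particular indexing convention for P, and verifies [Q(u), P(s)] = 0.

```python
import numpy as np

d, n = 2, 3  # small illustrative case
rng = np.random.default_rng(0)

# Random unitary u from the QR decomposition of a complex Gaussian matrix.
g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
u, _ = np.linalg.qr(g)

# Q(u) = u (x) u (x) u acts identically on every tensor factor.
Q = np.kron(np.kron(u, u), u)

def P(perm):
    """Operator permuting the n tensor factors of (C^d)^{(x)n} (one convention)."""
    dim = d ** n
    M = np.zeros((dim, dim))
    for idx in range(dim):
        digits = np.base_repr(idx, base=d).zfill(n)       # basis state |i_1 ... i_n>
        out = ''.join(digits[perm[k]] for k in range(n))  # reindexed tensor factors
        M[int(out, base=d), idx] = 1.0
    return M

Pc = P([2, 0, 1])  # a cyclic shift of the three factors

# The permutation action commutes with the collective U_d action.
assert np.allclose(Q @ Pc, Pc @ Q)
```

Note that any reindexing of the factors commutes with u^{⊗n}, whichever convention is used; the choice of convention only matters for whether s ↦ P(s) is a homomorphism.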
Explicitly,

  P(s) = Σ_{i_1,…,i_n ∈ [d]} |i_1, …, i_n⟩⟨i_{s(1)}, …, i_{s(n)}|.   (65)

This convention is chosen so that P(s) is a representation.^{11} These two representations commute, and can be simultaneously decomposed into irreducible representations (a.k.a. irreps). We can also think of Q(u)P(s) as a reducible representation of U_d × S_n.

Define I_{d,n} to be the set of partitions of n into d parts: that is, I_{d,n} = {λ = (λ_1, λ_2, …, λ_d) ∈ Z^d : λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_d ≥ 0 and Σ_{i=1}^d λ_i = n}. Note that |I_{d,n}| ≤ |T_{d,n}| ≤ (n+1)^d = poly(n). It turns out that I_{d,n} labels the irreps of both U_d and S_n that appear in the decompositions of Q and P. Classically, the information in a type t can be divided into the sorted type t^↓, with t_1^↓ ≥ ⋯ ≥ t_d^↓ (analogous to λ), and the S_d permutation that maps t into t^↓ (analogous to the Q_λ register). Quantumly, we will see that for states of the form ρ^{⊗n}, the λ register carries information about the eigenvalues of ρ and the Q_λ register is determined by the eigenbasis of ρ.

The main thing we will need to know about Q_λ and P_λ is their dimension. Roughly speaking, if d is constant then |I_{d,n}| ≤ (n+d−1 choose n) ≤ poly(n), dim Q_λ ≤ poly(n) and dim P_λ ≈ exp(n H(λ̄)). For completeness, we also state exact formulas for the dimensions of Q_λ and P_λ, although we will not need to use them. For λ ∈ I_{d,n}, define λ̃ := λ + (d−1, d−2, …, 1, 0). Then the dimensions of Q_λ^d and P_λ are given by [37]

  dim Q_λ^d = Π_{1≤i<j≤d} (λ̃_i − λ̃_j) / Π_{1≤i<j≤d} (j − i),   (66)

  dim P_λ = n! / (λ̃_1! ⋯ λ̃_d!) · Π_{1≤i<j≤d} (λ̃_i − λ̃_j).   (67)

^{11} The equation may not hold when P(s) is replaced with an alternate definition such as P'(s)|i_1, …, i_n⟩ := |i_{s(1)}, …, i_{s(n)}⟩.
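The dimension formulas (66) and (67) can be evaluated directly, and Schur duality supplies a consistency check: since (C^d)^{⊗n} decomposes into blocks Q_λ ⊗ P_λ for λ ∈ I_{d,n}, the dimensions must satisfy Σ_λ dim Q_λ · dim P_λ = d^n. The sketch below is our illustration; the helper names `partitions` and `dims` are ours, not the paper's.

```python
from itertools import combinations
from math import factorial, prod

def partitions(n, d):
    """All lambda = (lambda_1 >= ... >= lambda_d >= 0) with sum n, i.e. I_{d,n}."""
    if d == 1:
        yield (n,)
        return
    for first in range(n, -1, -1):
        for rest in partitions(n - first, d - 1):
            if first >= rest[0]:
                yield (first,) + rest

def dims(lam, d):
    """(dim Q_lambda, dim P_lambda) via Eqs. (66)-(67)."""
    n = sum(lam)
    tl = [lam[i] + (d - 1 - i) for i in range(d)]  # lambda-tilde = lambda + (d-1,...,1,0)
    vand = prod(tl[i] - tl[j] for i, j in combinations(range(d), 2))
    dim_Q = vand // prod(j - i for i, j in combinations(range(d), 2))
    dim_P = factorial(n) * vand // prod(factorial(t) for t in tl)
    return dim_Q, dim_P

d, n = 3, 5  # illustrative sizes
# Schur duality check: the block dimensions must add up to d^n.
total = sum(q * p for q, p in (dims(lam, d) for lam in partitions(n, d)))
assert total == d ** n
```

For example, λ = (n, 0, …, 0) yields dim Q_λ = (n + d − 1 choose d − 1), the symmetric subspace, paired with the trivial (one-dimensional) S_n irrep, consistent with dim Q_λ ≤ poly(n) for fixed d.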