Geometry of Quantum States


Geometry of Quantum States: An Introduction to Quantum Entanglement
Ingemar Bengtsson and Karol Życzkowski

12 Density matrices and entropies

A given object of study cannot always be assigned a unique value, its "entropy". It may have many different entropies, each one worthwhile.
-- Harold Grad

In quantum mechanics, the von Neumann entropy

S(ρ) = −Tr ρ ln ρ    (12.1)

plays a role analogous to that played by the Shannon entropy in classical probability theory. They are both functionals of the state, they are both monotone under a relevant kind of mapping, and they can be singled out uniquely by natural requirements. In section 2.2 we recounted the well known anecdote according to which von Neumann helped to christen Shannon's entropy. Indeed von Neumann's entropy is older than Shannon's, and it reduces to the Shannon entropy for diagonal density matrices. But in general the von Neumann entropy is a subtler object than its classical counterpart. So is the quantum relative entropy, which depends on two density matrices that perhaps cannot be diagonalized at the same time. Quantum theory is a non-commutative probability theory. Nevertheless, as a rule of thumb we can pass between the classical discrete, classical continuous, and quantum cases by choosing between sums, integrals, and traces. While this rule of thumb has to be used cautiously, it will give us quantum counterparts of most of the concepts introduced in chapter 2, and conversely we can recover chapter 2 by restricting the matrices of this chapter to be diagonal.

12.1 Ordering operators

The study of quantum entropy is to a large extent a study in inequalities, and this is where we begin. We will be interested in extending inequalities that are valid for functions defined on R to functions of operators. This is a large morsel. But it is at least straightforward to define operator functions, that is functions of matrices, as long as our matrices can be diagonalized by unitary transformations: then, if A = U diag(λ_i) U†, we set f(A) ≡ U diag(f(λ_i)) U†, where f is any function on R. Our matrices will be Hermitian and therefore they admit a partial order; B ≥ A if and only if B − A is a positive operator. It is a difficult ordering relation to work with though, ultimately because it does not define a lattice: the set {X : X ≥ A and X ≥ B} has no minimum point in general.

With these observations in hand we can define an operator monotone function as a function such that

A ≤ B  ⇒  f(A) ≤ f(B).    (12.2)

Also, an operator convex function is a function such that

f(pA + (1 − p)B) ≤ p f(A) + (1 − p) f(B),   p ∈ [0, 1].    (12.3)

Finally, an operator concave function f is a function such that −f is operator convex. In all three cases it is assumed that the inequalities hold for all matrix sizes (so that an operator monotone function is always monotone in the ordinary sense, but the converse may fail).¹

The definitions are simple, but we have entered deep waters, and we will be submerged by difficulties as soon as we evaluate a function at two operators that do not commute with each other. Quite innocent looking monotone functions fail to be operator monotone. An example is f(t) = t². Moreover the function f(t) = e^t is neither operator monotone nor operator convex. To get serious results in this subject some advanced mathematics, including frequent excursions into the complex domain, are needed. We will confine ourselves to stating a few facts.
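Before stating those facts, here is a small numerical check of the claim about f(t) = t² (not from the book; the 2×2 matrices A and B are an illustrative choice, and NumPy is assumed). It exhibits positive matrices with A ≤ B for which B² − A² has a negative eigenvalue, so squaring is not operator monotone, while the spectrally defined square root, which does turn out to be operator monotone, respects the order for this pair.

```python
import numpy as np

def is_psd(M, tol=1e-10):
    """Check positive semidefiniteness via the smallest eigenvalue."""
    return np.linalg.eigvalsh(M).min() >= -tol

# Two positive semidefinite matrices with A <= B, i.e. B - A is positive.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
B = np.array([[2.0, 1.0],
              [1.0, 1.0]])
print(is_psd(A), is_psd(B), is_psd(B - A))     # True True True

# f(t) = t^2 is monotone on the reals, yet B^2 - A^2 is not positive:
print(np.linalg.eigvalsh(B @ B - A @ A))       # one eigenvalue is negative

def mat_sqrt(M):
    """Square root of a positive semidefinite matrix, taken in its eigenbasis."""
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.conj().T

# The square root, by contrast, preserves the order for this pair:
print(is_psd(mat_sqrt(B) - mat_sqrt(A)))       # True
```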
Operator monotone functions are characterized by Löwner's theorem. A function f(t) on an open interval is operator monotone if and only if it can be extended analytically to the upper half plane and transforms the upper half plane into itself. Therefore the following functions are operator monotone:

f(t) = t^γ,  t ≥ 0,  if and only if γ ∈ [0, 1]
f(t) = (at + b)/(ct + d),  t ≠ −d/c,  ad − bc > 0    (12.4)
f(t) = ln t,  t > 0.

This small supply can be enlarged by the observation that the composition of two operator monotone functions is again operator monotone; so is f(t) = −1/g(t) if g(t) is operator monotone. The set of all operator monotone functions is convex, as a consequence of the fact that the set of positive operators is a convex cone. A continuous function f mapping [0, ∞) into itself is operator concave if and only if f is operator monotone. Operator convex functions include f(t) = −ln t, and f(t) = t ln t when t > 0; we will use the latter function to construct entropies. More generally f(t) = t g(t) is operator convex if g(t) is operator monotone.

¹ The theory of operator monotone functions was founded by Löwner (1934) [?]. An interesting early paper is by Bendat and Sherman (1955) [?]. For a survey see Bhatia's book [?], and (for matrix means) Ando (1994) [?].

Finally we define the mean AσB of two operators. We require that AσA = A, as well as homogeneity, α(AσB) = (αA)σ(αB), and monotonicity, AσB ≥ CσD if A ≥ C and B ≥ D. Moreover we require that (TAT†)σ(TBT†) ≥ T(AσB)T†, as well as a suitable continuity property. It turns out [?] that every mean obeying these demands takes the form

A σ B = √A f( (1/√A) B (1/√A) ) √A,    (12.5)

where A > 0 and f is an operator monotone function on [0, ∞) with f(1) = 1. (It is not known how to define a mean of more than two operators.) The mean will be symmetric in A and B if and only if f is self inversive, that is if and only if

f(1/t) = f(t)/t.    (12.6)

Special cases of symmetric means include the arithmetic mean for f(t) = (1 + t)/2, the geometric mean for f(t) = √t, and the harmonic mean for f(t) = 2t/(1 + t). It can be shown that the arithmetic mean is maximal among symmetric means, while the harmonic mean is minimal [?].

We will find use for these results throughout the next three chapters. But to begin with we will get by with inequalities that apply, not to functions of operators directly, but to their traces. The subject of convex trace functions is somewhat more manageable than that of operator convex functions. A key result is that the inequality (1.11) for convex functions can be carried over in this way:²

Klein's inequality. If f is a convex function and A and B are Hermitian operators, then

Tr[f(A) − f(B)] ≥ Tr[(A − B)f′(B)].    (12.7)

As a special case

Tr(A ln A − A ln B) ≥ Tr(A − B),    (12.8)

with equality if and only if A = B.

To prove this, use the eigenbases:

A|e_i⟩ = a_i|e_i⟩,   B|f_i⟩ = b_i|f_i⟩,   ⟨e_i|f_j⟩ = c_{ij}.    (12.9)

A calculation then shows that

⟨e_i| f(A) − f(B) − (A − B)f′(B) |e_i⟩
    = f(a_i) − Σ_j |c_{ij}|² [ f(b_j) + (a_i − b_j) f′(b_j) ]
    = Σ_j |c_{ij}|² [ f(a_i) − f(b_j) − (a_i − b_j) f′(b_j) ].    (12.10)

² The original statement here is due to Oskar Klein (1931) [?].

This is positive by eq. (1.11). The special case follows if we specialize to f(t) = t ln t. The condition for equality requires some extra attention, but it is true.
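The special case (12.8) of Klein's inequality is easy to test numerically. The sketch below is my own illustration (NumPy assumed, helper names hypothetical): it draws two random positive matrices of unit trace, evaluates Tr(A ln A − A ln B) and Tr(A − B) using the spectral definition of the matrix logarithm, and checks the inequality together with the equality case A = B. For unit-trace matrices the right-hand side vanishes, so the check amounts to the positivity of a quantum relative entropy.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(n):
    """A random positive matrix of unit trace."""
    G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def mat_log(M):
    """Logarithm of a positive definite matrix, taken in its eigenbasis."""
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.log(w)) @ U.conj().T

A = random_density_matrix(4)
B = random_density_matrix(4)

lhs = np.trace(A @ mat_log(A) - A @ mat_log(B)).real   # Tr(A ln A - A ln B)
rhs = np.trace(A - B).real                             # Tr(A - B), zero here
print(lhs >= rhs - 1e-12)    # True, as demanded by (12.8)

# Equality if and only if A = B:
print(abs(np.trace(A @ mat_log(A) - A @ mat_log(A)).real) < 1e-12)   # True
```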
Another useful result is Peierls' inequality. If f is a strictly convex function and A is a Hermitian operator, then

Tr f(A) ≥ Σ_i f( ⟨f_i|A|f_i⟩ ),    (12.11)

where {|f_i⟩} is any complete set of orthonormal vectors, or more generally a resolution of the identity. Equality holds if and only if |f_i⟩ = |e_i⟩ for all i, where A|e_i⟩ = a_i|e_i⟩.

To prove this, observe that for any vector |f_i⟩ we have

⟨f_i| f(A) |f_i⟩ = Σ_j |⟨f_i|e_j⟩|² f(a_j) ≥ f( Σ_j |⟨f_i|e_j⟩|² a_j ) = f( ⟨f_i|A|f_i⟩ ).    (12.12)

Summing over all i gives the result.

We quote without proofs two further trace inequalities, the Golden-Thompson inequality

Tr e^A e^B ≥ Tr e^{A+B},    (12.13)

with equality if and only if the Hermitian matrices A and B commute, and its more advanced cousin, the Lieb inequality

Tr e^{ln A − ln C + ln B} ≤ ∫_0^∞ Tr[ A (C + u)^{-1} B (C + u)^{-1} ] du,    (12.14)

where A, B, C are all positive.³

12.2 Von Neumann entropy

Now we can begin. First we establish some notation. In chapter 2 we used S to denote the Shannon entropy S(p⃗) of a probability distribution. Now we use S to denote the von Neumann entropy S(ρ) of a density matrix, but we may want to mention the Shannon entropy too. When there is any risk of confusing these entropies, they are distinguished by their arguments. We will also use S_i ≡ S(ρ_i) to denote the von Neumann entropy of a density matrix ρ_i acting on the Hilbert space H_i.
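As a small illustration of this notation, the following sketch (again my own, NumPy assumed) computes S(ρ) = −Tr ρ ln ρ from the spectrum of ρ. It checks the claim made in the chapter introduction, that for a diagonal density matrix the von Neumann entropy reduces to the Shannon entropy of the diagonal, and evaluates two standard reference points: a pure state, with zero entropy, and the maximally mixed state on an N-dimensional Hilbert space, with entropy ln N.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr rho ln rho, computed from the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]              # use the convention 0 ln 0 = 0
    return float(-np.sum(w * np.log(w)))

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

# Diagonal density matrix: von Neumann entropy = Shannon entropy of the diagonal.
p = np.array([0.5, 0.3, 0.2])
print(np.isclose(von_neumann_entropy(np.diag(p)), shannon_entropy(p)))   # True

# A pure state has zero entropy; the maximally mixed state has S = ln N.
psi = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
rho_pure = np.outer(psi, psi.conj())
print(von_neumann_entropy(rho_pure))                                     # ~0
print(np.isclose(von_neumann_entropy(np.eye(3) / 3), np.log(3)))         # True
```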
Recommended publications
  • Quantum Information, Entanglement and Entropy
    QUANTUM INFORMATION, ENTANGLEMENT AND ENTROPY Siddharth Sharma Abstract In this review I give an introduction to axiomatic Quantum Mechanics, Quantum Information and its measure, entropy. I also introduce Entanglement and its measure, defined as the minimum of the relative quantum entropy of a density matrix on a Hilbert space with respect to all separable density matrices on the same Hilbert space. I also discuss geodesics in the space of density matrices, their orthogonality, and a Pythagorean theorem for density matrices. Postulates of Quantum Mechanics The basic postulates of quantum mechanics concern the Hilbert space formalism: • Postulate 0: To each quantum mechanical system there is associated a complex Hilbert space ℍ. The set of all pure states is ℍ/∼ ≔ {⟦f⟧ | ⟦f⟧ ≔ {h : h ∼ f}}, where h ∼ f ⇔ ∃ z ∈ ℂ such that h = z f; z is called the phase. • Postulate 1: The physical states of a quantum mechanical system are described by statistical operators acting on the Hilbert space. A density matrix, or statistical operator, is a positive operator ρ of trace 1 on the Hilbert space; ρ is called positive if ⟨x, ρx⟩ ≥ 0 for all x ∈ ℍ, where ⟨·,·⟩ is the inner product. • Postulate 2: The observables of a quantum mechanical system are described by self-adjoint operators acting on the Hilbert space. A self-adjoint operator A on a Hilbert space ℍ is a linear operator A: ℍ → ℍ which satisfies ⟨Ax, y⟩ = ⟨x, Ay⟩ for x, y ∈ ℍ. Lemma: The density matrices acting on a Hilbert space form a convex set whose extreme points are the pure states. Proof. Denote by Σ the set of density matrices.
  • A Mini-Introduction to Information Theory
    A Mini-Introduction To Information Theory Edward Witten School of Natural Sciences, Institute for Advanced Study, Einstein Drive, Princeton, NJ 08540 USA Abstract This article consists of a very short introduction to classical and quantum information theory. Basic properties of the classical Shannon entropy and the quantum von Neumann entropy are described, along with related concepts such as classical and quantum relative entropy, conditional entropy, and mutual information. A few more detailed topics are considered in the quantum case. arXiv:1805.11965v5 [hep-th] 4 Oct 2019. Contents: 1 Introduction; 2 Classical Information Theory; 2.1 Shannon Entropy; 2.2 Conditional Entropy; 2.3 Relative Entropy; 2.4 Monotonicity of Relative Entropy; 3 Quantum Information Theory: Basic Ingredients; 3.1 Density Matrices; 3.2 Quantum Entropy; 3.3 Concavity; 3.4 Conditional and Relative Quantum Entropy; 3.5 Monotonicity of Relative Entropy; 3.6 Generalized Measurements; 3.7 Quantum Channels; 3.8 Thermodynamics And Quantum Channels; 4 More On Quantum Information Theory; 4.1 Quantum Teleportation and Conditional Entropy; 4.2 Quantum Relative Entropy And Hypothesis Testing; 4.3 Encoding
  • Quasi-Probability Husimi-Distribution Information and Squeezing in a Qubit System Interacting with a Two-Mode Parametric Amplifier Cavity
    mathematics Article Quasi-Probability Husimi-Distribution Information and Squeezing in a Qubit System Interacting with a Two-Mode Parametric Amplifier Cavity Eied. M. Khalil 1,2, Abdel-Baset. A. Mohamed 3,4,*, Abdel-Shafy F. Obada 2 and Hichem Eleuch 5,6,7 1 Department of Mathematics, College of Science, P.O. Box 11099, Taif University, Taif 21944, Saudi Arabia; [email protected] 2 Faculty of Science, Al-Azhar University, Nasr City 11884, Cairo, Egypt; [email protected] 3 Department of Mathematics, College of Science and Humanities in Al-Aflaj, Prince Sattam bin Abdulaziz University, Al-Aflaj 11942, Saudi Arabia 4 Faculty of Science, Assiut University, Assiut 71515, Egypt 5 Department of Applied Physics and Astronomy, University of Sharjah, Sharjah 27272, UAE; [email protected] 6 Department of Applied Sciences and Mathematics, College of Arts and Sciences, Abu Dhabi University, Abu Dhabi 59911, UAE 7 Institute for Quantum Science and Engineering, Texas A&M University, College Station, TX 77843, USA * Correspondence: [email protected] Received: 22 September 2020; Accepted: 12 October 2020; Published: 19 October 2020 Abstract: Squeezing and phase space coherence are investigated for a bimodal cavity accommodating a two-level atom. The two modes of the cavity are initially in the Barut–Girardello coherent states. This system is studied with the SU(1,1)-algebraic model. Quantum effects are analyzed with the Husimi function under the effect of the intrinsic decoherence. Squeezing, quantum mixedness, and the phase information, which are affected by the system parameters, reveal a richer dynamical structure in the presence of the intrinsic decoherence.
  • Quantum Information Theory
    Lecture Notes Quantum Information Theory Renato Renner, with contributions by Matthias Christandl, February 6, 2013. Contents: 1 Introduction; 2 Probability Theory; 2.1 What is probability?; 2.2 Definition of probability spaces and random variables; 2.2.1 Probability space; 2.2.2 Random variables; 2.2.3 Notation for events; 2.2.4 Conditioning on events; 2.3 Probability theory with discrete random variables; 2.3.1 Discrete random variables; 2.3.2 Marginals and conditional distributions; 2.3.3 Special distributions; 2.3.4 Independence and Markov chains; 2.3.5 Functions of random variables, expectation values, and Jensen's inequality; 2.3.6 Trace distance; 2.3.7 I.i.d. distributions and the law of large numbers; 2.3.8 Channels; 3 Information Theory; 3.1 Quantifying information; 3.1.1 Approaches to define information and entropy; 3.1.2 Entropy of events; 3.1.3 Entropy of random variables; 3.1.4 Conditional entropy; 3.1.5 Mutual information; 3.1.6 Smooth min- and max-entropies; 3.1.7 Shannon entropy as a special case of min- and max-entropy; 3.2 An example application: channel coding; 3.2.1 Definition of the problem; 3.2.2 The general channel coding theorem; 3.2.3 Channel coding for i.i.d. channels; 3.2.4 The converse; 4 Quantum States and Operations; 4.1 Preliminaries
  • Some Inequalities About Trace of Matrix
    Available online www.jsaer.com Journal of Scientific and Engineering Research, 2019, 6(7):89-93 ISSN: 2394-2630 Research Article CODEN(USA): JSERBR Some Inequalities about Trace of Matrix TU Yuanyuan*1, SU Runqing2 1 Department of Mathematics, Taizhou College, Nanjing Normal University, Taizhou 225300, China, Email: [email protected] 2 College of Mathematics and Statistics, Nanjing University of Information Science and Technology, Nanjing 210044, China Abstract In this paper, we study trace inequalities by using properties of block matrices and of the singular values and eigenvalues of a matrix. As a result, some new inequalities for the trace of a matrix under certain conditions are given, and at the same time we extend the corresponding known results. Keywords eigenvalue, singular value, trace, inequality 1. Introduction For a complex number x = a + ib, where a, b are real, we write Re x = a, Im x = b. As usual, let I_n be the n × n unit matrix, M_{m×n} the set of m × n complex matrices, and diag(d_1, …, d_n) the diagonal matrix with diagonal elements d_1, …, d_n. For a matrix A, denote the eigenvalues of A by λ_i(A), the singular values of A by σ_i(A), the trace of A by tr A, and the conjugate transpose of A by A^H; A is a semi-positive definite matrix if A is a Hermitian matrix and λ_i(A) ≥ 0. In 1937, von Neumann gave the famous inequality [1] |tr(AB)| ≤ Σ_{i=1}^n σ_i(A) σ_i(B), (1.1) where A, B are n × n complex matrices. Scholars have been very active in the study of this inequality, and have obtained many results.
  • Some Trace Inequalities for Exponential and Logarithmic Functions
    Bull. Math. Sci. https://doi.org/10.1007/s13373-018-0123-3 Some trace inequalities for exponential and logarithmic functions Eric A. Carlen · Elliott H. Lieb Received: 8 October 2017 / Revised: 17 April 2018 / Accepted: 24 April 2018 © The Author(s) 2018 Abstract Consider a function F(X, Y) of pairs of positive matrices with values in the positive matrices such that whenever X and Y commute, F(X, Y) = X^p Y^q. Our first main result gives conditions on F such that Tr[X log(F(Z, Y))] ≤ Tr[X(p log X + q log Y)] for all X, Y, Z such that Tr Z = Tr X. (Note that Z is absent from the right side of the inequality.) We give several examples of functions F to which the theorem applies. Our theorem allows us to give simple proofs of the well known logarithmic inequalities of Hiai and Petz and several new generalizations of them which involve three variables X, Y, Z instead of just X, Y alone. The investigation of these logarithmic inequalities is closely connected with three quantum relative entropy functionals: the standard Umegaki quantum relative entropy D(X||Y) = Tr[X(log X − log Y)], and two others, the Donald relative entropy D_D(X||Y) and the Belavkin–Staszewski relative entropy D_BS(X||Y). They are known to satisfy D_D(X||Y) ≤ D(X||Y) ≤ D_BS(X||Y). We prove that the Donald relative entropy provides the sharp upper bound, independent of Z, on Tr[X log(F(Z, Y))] in a number of cases in which F(Z, Y) is homogeneous of degree 1 in Z and −1 in Y.
  • arXiv:1904.05239v2 [math.FA]
    ON MATRIX REARRANGEMENT INEQUALITIES RIMA ALAIFARI, XIUYUAN CHENG, LILLIAN B. PIERCE AND STEFAN STEINERBERGER Abstract. Given two symmetric and positive semidefinite square matrices A, B, is it true that any matrix given as the product of m copies of A and n copies of B in a particular sequence must be dominated in the spectral norm by the ordered matrix product A^m B^n? For example, is ‖AABAABABB‖ ≤ ‖AAAAABBBB‖? Drury [10] has characterized precisely which disordered words have the property that an inequality of this type holds for all matrices A, B. However, the 1-parameter family of counterexamples Drury constructs for these characterizations is comprised of 3 × 3 matrices, and thus as stated the characterization applies only for N × N matrices with N ≥ 3. In contrast, we prove that for 2 × 2 matrices, the general rearrangement inequality holds for all disordered words. We also show that for larger N × N matrices, the general rearrangement inequality holds for all disordered words, for most A, B (in a sense of full measure) that are sufficiently small perturbations of the identity. 1. Introduction 1.1. Introduction. Rearrangement inequalities for functions have a long history; we refer to Lieb and Loss [20] for an introduction and an example of their ubiquity in Analysis, Mathematical Physics, and Partial Differential Equations. A natural question that one could ask is whether there is an operator-theoretic variant of such rearrangement inequalities. For example, given two operators A : X → X and B : X → X, is there an inequality ‖ABABA‖ ≤ ‖AAABB‖, where ‖·‖ is a norm on operators? In this paper, we will study the question for A, B being symmetric and positive semidefinite square matrices and ‖·‖ denoting the classical operator norm ‖M‖ = sup_{‖x‖₂=1} ‖Mx‖₂.
  • Lecture 18 — October 26, 2015: Overview; Quantum Entropy
    PHYS 7895: Quantum Information Theory Fall 2015 Lecture 18, October 26, 2015 Prof. Mark M. Wilde Scribe: Mark M. Wilde This document is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. 1 Overview In the previous lecture, we discussed classical entropy and entropy inequalities. In this lecture, we discuss several information measures that are important for quantifying the amount of information and correlations that are present in quantum systems. The first fundamental measure that we introduce is the von Neumann entropy. It is the quantum generalization of the Shannon entropy, but it captures both classical and quantum uncertainty in a quantum state. The von Neumann entropy gives meaning to a notion of the information qubit. This notion is different from that of the physical qubit, which is the description of a quantum state of an electron or a photon. The information qubit is the fundamental quantum informational unit of measure, determining how much quantum information is present in a quantum system. The initial definitions here are analogous to the classical definitions of entropy, but we soon discover a radical departure from the intuitive classical notions from the previous chapter: the conditional quantum entropy can be negative for certain quantum states. In the classical world, this negativity simply does not occur, but it takes a special meaning in quantum information theory. Pure quantum states that are entangled have stronger-than-classical correlations and are examples of states that have negative conditional entropy. The negative of the conditional quantum entropy is so important in quantum information theory that we even have a special name for it: the coherent information.
  • Wehrl Entropy, Lieb Conjecture and Entanglement Monotones
    Wehrl entropy, Lieb conjecture and entanglement monotones Florian Mintert 1,2 and Karol Życzkowski 2,3 1 Max Planck Institute for the Physics of Complex Systems, Nöthnitzerstr. 38, 01187 Dresden; 2 Uniwersytet Jagielloński, Instytut Fizyki im. M. Smoluchowskiego, ul. Reymonta 4, 30-059 Kraków, Poland; and 3 Centrum Fizyki Teoretycznej, Polska Akademia Nauk, Al. Lotników 32/44, 02-668 Warszawa, Poland (Dated: October 23, 2003) We propose to quantify the entanglement of pure states of N × N bipartite quantum systems by defining its Husimi distribution with respect to SU(N) × SU(N) coherent states. The Wehrl entropy is minimal if and only if the analyzed pure state is separable. The excess of the Wehrl entropy is shown to be equal to the subentropy of the mixed state obtained by partial trace of the bipartite pure state. This quantity, as well as the generalized (Rényi) subentropies, are proved to be Schur-concave, so they are entanglement monotones and may be used as alternative measures of entanglement. PACS numbers: 03.67 Mn, 89.70.+c. I. INTRODUCTION Investigating properties of a quantum state it is useful to analyze its phase space representation. The Husimi distribution is often very convenient to work with: it exists for any quantum state, is non-negative for any point α of the classical phase space Ω, and may be normalized as a probability distribution [1]. [...] original statement and its proof was given in [11, 13]. It is also straightforward to formulate the Lieb conjecture for Wehrl entropy computed with respect to the SU(K)-coherent states [14, 15], but this conjecture seems not to be simpler than the original one. In this work we consider the Husimi function of an arbitrary mixed quantum state ρ of size N with respect to
  • Spectral Inequalities for Schrödinger Operators with Surface Potentials
    SPECTRAL INEQUALITIES FOR SCHRÖDINGER OPERATORS WITH SURFACE POTENTIALS RUPERT L. FRANK AND ARI LAPTEV Dedicated to M. Sh. Birman on the occasion of his 80th birthday Abstract. We prove sharp Lieb-Thirring inequalities for Schrödinger operators with potentials supported on a hyperplane and we show how these estimates are related to Lieb-Thirring inequalities for relativistic Schrödinger operators. 1. Introduction The Cwikel-Lieb-Rozenblum and the Lieb-Thirring inequalities estimate the number and moments of eigenvalues of Schrödinger operators −Δ − V in L²(ℝ^N) in terms of an integral of the potential V. They state that the bound tr(−Δ − V)_−^γ ≤ L_{γ,N} ∫_{ℝ^N} V(x)_+^{γ+N/2} dx (1.1) holds with a constant L_{γ,N} independent of V iff γ ≥ 1/2 for N = 1, γ > 0 for N = 2 and γ ≥ 0 for N ≥ 3. Here and below t_± := max{0, ±t} denotes the positive and negative part of a real number, a real-valued function or a self-adjoint operator t. In particular, the problem of finding the optimal value of the constant L_{γ,N} has attracted a lot of attention recently. We refer to the review articles [H2, LW2] for background information, references and applications of (1.1). The purpose of the present paper is twofold. First, we would like to find an analog of inequality (1.1) for Schrödinger operators with singular potentials V(x) = v(x_1, …, x_d)δ(x_N), d := N − 1, supported on a hyperplane. It turns out that such an inequality is indeed valid, provided the integral on the right hand side of (1.1) is replaced by ∫_{ℝ^d} v(x_1, …, x_d)_+^{2γ+d} dx_1 … dx_d. We determine the complete range of γ's for which the resulting inequality holds.
  • Quantum Channels and Operations
    Quantum information theory (20110401) Lecturer: Jens Eisert. Chapter 4: Quantum channels. Contents: 4 Quantum channels; 4.1 General quantum state transformations; 4.2 Complete positivity; 4.2.1 Choi-Jamiolkowski isomorphism; 4.2.2 Kraus' theorem and Stinespring dilations; 4.2.3 Disturbance versus information gain; 4.3 Local operations and classical communication; 4.3.1 One-local operations; 4.3.2 General local quantum operations. Chapter 4 Quantum channels 4.1 General quantum state transformations What is the most general transformation of a quantum state that one can perform? One might wonder what this question is supposed to mean: We already know of unitary Schrödinger evolution generated by some Hamiltonian H. We also know of the measurement postulate that alters a state upon measurement. So what is the question supposed to mean? In fact, this change in mindset we have already encountered above, when we thought of unitary operations. Of course, one can interpret this a-posteriori as being generated by some Hamiltonian, but this is not so much the point. The emphasis here is on what can be done, what unitary state transformations are possible. It is the purpose of this chapter to bring this mindset to a completion, and to ask what kind of state transformations are generally possible in quantum mechanics. There is an abstract, mathematically minded approach to the question, introducing notions of complete positivity. Contrasting this, one can think of putting ingredients of unitary evolution and measurement together.
  • Quantum Mechanics Quick Reference
    Quantum Mechanics Quick Reference Ivan Damgård This note presents a few basic concepts and notation from quantum mechanics in very condensed form. It is not the intention that you should read all of it immediately. The text is meant as a source from where you can quickly reconstruct the meaning of some of the basic concepts, without having to trace through too much material. 1 The Postulates Quantum mechanics is a mathematical model of the physical world, which is built on a small number of postulates, each of which makes a particular claim on how the model reflects physical reality. All the rest of the theory can be derived from these postulates, but the postulates themselves cannot be proved in the mathematical sense. One has to make experiments and see whether the predictions of the theory are confirmed. To understand what the postulates say about computation, it is useful to think of how we describe classical computation. One way to do this is to say that a computer can be in one of a number of possible states, represented by all internal registers etc. We then do a number of operations, each of which takes the computer from one state to another, where the sequence of operations is determined by the input. And finally we get a result by observing which state we have arrived in. In this context, the postulates say that a classical computer is a special case of something more general, namely a quantum computer. More concretely they say how one describes the states a quantum computer can be in, which operations we can do, and finally how we can read out the result.