
CSE 599d Quantum Computing Problem Set 1 Solutions
Author: Dave Bacon (Department of Computer Science & Engineering, University of Washington)
Due: January 20, 2006

Exercise 1: Majorization and Random Permutations

Let $x = (x_1, x_2, \ldots, x_n)$ denote a vector of $n$ real numbers, $x \in \mathbb{R}^n$. Define $x^\downarrow$ as the vector $x$ sorted so that its components are in decreasing order, $x^\downarrow = (x_1^\downarrow, x_2^\downarrow, \ldots, x_n^\downarrow)$ where $x_1^\downarrow \geq x_2^\downarrow \geq \cdots \geq x_n^\downarrow$. Thus, for example, $x_1^\downarrow$ is the largest component of $x$. We say that the vector $x$ is majorized by the vector $y$ if $\sum_{i=1}^k x_i^\downarrow \leq \sum_{i=1}^k y_i^\downarrow$ for all $k < n$ (i.e. $k = 1, 2, \ldots, n-1$) and $\sum_{i=1}^n x_i^\downarrow = \sum_{i=1}^n y_i^\downarrow$. When $x$ is majorized by $y$ we write $x \prec y$.

(a) Suppose that $p \in \mathbb{R}^n$ is such that $p_i \geq 0$ and $\sum_{i=1}^n p_i = 1$ (we say that $p$ is a vector of probabilities). There is a single vector of probabilities which is majorized by all other vectors of probabilities. What is this vector? Prove that it is the only vector which has this property.

The vector $p$ which is majorized by all other probability vectors is the uniform vector, with $p_i = \frac{1}{n}$ for all $i$. Let's first show that this vector is indeed majorized by all other probability vectors (note that I didn't ask for this, but it is nice to prove it). Suppose, for contradiction, that some probability vector $x$ does not majorize $p$. Then for some $k$ it must be that $\sum_{i=1}^k x_i^\downarrow < \sum_{i=1}^k \frac{1}{n} = \frac{k}{n}$, which implies that at least one of the $x_l^\downarrow$, $1 \leq l \leq k$, is less than $\frac{1}{n}$. But since probability vectors sum to $1$, we also have $\sum_{i=k+1}^n x_i^\downarrow > \frac{n-k}{n}$, which implies that one of the $x_l^\downarrow$, $k+1 \leq l \leq n$, is greater than $\frac{1}{n}$. But then a later component of $x^\downarrow$ exceeds an earlier one, contradicting the fact that the $x_i^\downarrow$ are sorted in decreasing order.

Why is the uniform vector $p$ the only probability vector which has the property of being majorized by all probability vectors (this is what I asked you to show)?
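As an aside, the first claim (that the uniform vector is majorized by every probability vector) is easy to spot-check numerically before proving uniqueness. A minimal sketch in Python with NumPy; the helper name `majorizes` is my own, not from the problem set:

```python
import numpy as np

def majorizes(y, x, tol=1e-9):
    """Return True if x ≺ y: every top-k partial sum of y↓ dominates
    that of x↓, and the two vectors have the same total."""
    xs = np.sort(x)[::-1]            # x sorted in decreasing order, i.e. x↓
    ys = np.sort(y)[::-1]
    cx, cy = np.cumsum(xs), np.cumsum(ys)
    return bool(np.all(cx[:-1] <= cy[:-1] + tol) and abs(cx[-1] - cy[-1]) < tol)

rng = np.random.default_rng(0)
n = 6
p = np.full(n, 1.0 / n)              # the uniform probability vector
for _ in range(1000):
    x = rng.random(n)
    x /= x.sum()                     # a random probability vector
    assert majorizes(x, p)           # p ≺ x for every probability vector x
```

Every random trial passes, consistent with the proof above; of course a finite sample is only a sanity check, not a substitute for the argument.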
Suppose that there were another such probability vector $x$, not uniform, which is majorized by all probability vectors. We will show that it is not majorized by the uniform vector $p$, a contradiction. A probability vector which is not uniform must have at least one element $x_i > \frac{1}{n}$. But this implies that the first element of the sorted vector satisfies $x_1^\downarrow > \frac{1}{n} = p_1^\downarrow$, so the $k = 1$ majorization inequality fails: there is a vector, namely the uniform vector, which does not majorize $x$. This is a contradiction.

(b) An $n \times n$ matrix $A = (a_{ij})$ is called doubly stochastic if $a_{ij} \geq 0$ for all $i$ and $j$, $\sum_{i=1}^n a_{ij} = 1$ for all $j$, and $\sum_{j=1}^n a_{ij} = 1$ for all $i$. Show that every convex combination of doubly stochastic matrices is a doubly stochastic matrix (recall that a convex combination of matrices $A_1, A_2, \ldots, A_m$ is a sum of these matrices, $\sum_{j=1}^m q_j A_j$, with $q_j \geq 0$ and $\sum_{j=1}^m q_j = 1$).

Denote the matrix elements of $A_k$ by $(a_k)_{ij}$. A convex combination of these matrices is a matrix $C$ with entries $c_{ij} = \sum_{k=1}^m q_k (a_k)_{ij}$. Since $q_k \geq 0$ (from the definition of a convex combination) and $(a_k)_{ij} \geq 0$ (from the definition of doubly stochastic), we have $c_{ij} \geq 0$. Next we take the column sums: $\sum_{i=1}^n c_{ij} = \sum_{i=1}^n \sum_{k=1}^m q_k (a_k)_{ij} = \sum_{k=1}^m q_k \sum_{i=1}^n (a_k)_{ij} = \sum_{k=1}^m q_k = 1$, where we have used the fact that each $A_k$ is doubly stochastic in the second-to-last equality and convexity in the last equality. This holds for all $j$. A similar computation holds for each row: $\sum_{j=1}^n c_{ij} = \sum_{j=1}^n \sum_{k=1}^m q_k (a_k)_{ij} = \sum_{k=1}^m q_k \sum_{j=1}^n (a_k)_{ij} = \sum_{k=1}^m q_k = 1$ for all $i$. Thus $C$ satisfies all of the properties of a doubly stochastic matrix.

(c) Prove that if $Ax \prec x$ for all $x$ then $A$ must be doubly stochastic (hint: consider the vector from part (a) as well as vectors like $(0, 0, 1, 0, \ldots, 0)$).

If $Ax \prec x$ holds for all vectors $x$, then in particular it holds for all probability vectors $x$. Consider the case where $x$ has components $x_i = \delta_{ik}$, i.e. only one component, the $k$th, is nonzero.
Now the components of the vector $Ax$ are $(Ax)_i = \sum_{j=1}^n A_{ij} x_j = A_{ik}$. The requirement that the totals agree, $\sum_{i=1}^n (Ax)_i = \sum_{i=1}^n x_i = 1$, then reads $\sum_{i=1}^n A_{ik} = 1$. This must hold for all $k$, so every column of $A$ sums to $1$.

$Ax \prec x$ must also hold for the uniform probability vector $x$ with components $x_i = \frac{1}{n}$. The $k = 1$ majorization inequality says that every component of $Ax$ is at most $\frac{1}{n}$, while the equality of totals says the components of $Ax$ sum to $1$; together these force $(Ax)_i = \sum_{j=1}^n A_{ij} \frac{1}{n} = \frac{1}{n}$ for every $i$. Thus $\sum_{j=1}^n A_{ij} = 1$ for all $i$: every row of $A$ sums to $1$.

Finally we need to show that each $A_{ij} \geq 0$. Consider again the vector $x$ with components $x_i = \delta_{ik}$, so that $(Ax)_i = A_{ik}$. Sorting $Ax$ in decreasing order, the majorization inequalities against $x$ (whose sorted partial sums all equal $1$) are $\sum_{i=1}^m (Ax)_i^\downarrow \leq 1$ for $m = 1, \ldots, n-1$, together with the equality $\sum_{i=1}^n (Ax)_i^\downarrow = 1$. The equality implies $\sum_{i=1}^{n-1} (Ax)_i^\downarrow = 1 - (Ax)_n^\downarrow$, so the last inequality becomes $1 - (Ax)_n^\downarrow \leq 1$, i.e. $(Ax)_n^\downarrow \geq 0$. But $(Ax)_n^\downarrow$ is the smallest component of $Ax$, so $A_{ik} = (Ax)_i \geq 0$ for every $i$; and since $k$ was arbitrary, every entry of $A$ is nonnegative. Thus we have shown that if $Ax \prec x$ for all $x$, then $A$ satisfies all of the properties of a doubly stochastic matrix.

(d) Prove that if $A$ is doubly stochastic then $Ax \prec x$ for all vectors $x$.

For any vector $x$ we may, without loss of generality, reorder the columns of $A$ so that $x$ is a decreasing vector and reorder the rows of $A$ so that $Ax$ is a decreasing vector; these reorderings preserve double stochasticity. We will prove the $k$th inequality. Define $g_j = \sum_{i=1}^k A_{ij}$.
Then because $A$ is doubly stochastic, $0 \leq g_j \leq 1$. Notice also that $\sum_{j=1}^n g_j = k$ (sum the row sums of the first $k$ rows). Define $y_i = \sum_{j=1}^n A_{ij} x_j$. We will consider $\sum_{i=1}^k (y_i - x_i)$. By the definitions this is equal to

$$\sum_{i=1}^k (y_i - x_i) = \sum_{i=1}^k \sum_{j=1}^n A_{ij} x_j - \sum_{i=1}^k x_i. \qquad (1)$$

Next we add a term which is zero:

$$\sum_{i=1}^k (y_i - x_i) = \sum_{i=1}^k \sum_{j=1}^n A_{ij} x_j - \sum_{i=1}^k x_i + \Big(k - \sum_{j=1}^n g_j\Big) x_k. \qquad (2)$$

Next exchange the order of summation in the first sum,

$$\sum_{i=1}^k (y_i - x_i) = \sum_{j=1}^n \sum_{i=1}^k A_{ij} x_j - \sum_{i=1}^k x_i + \Big(k - \sum_{j=1}^n g_j\Big) x_k, \qquad (3)$$

which we can rewrite as

$$\sum_{i=1}^k (y_i - x_i) = \sum_{j=1}^n g_j x_j - \sum_{i=1}^k x_i + \Big(k - \sum_{j=1}^n g_j\Big) x_k, \qquad (4)$$

and split up the sums:

$$\sum_{i=1}^k (y_i - x_i) = \sum_{j=1}^k g_j x_j + \sum_{j=k+1}^n g_j x_j - \sum_{i=1}^k x_i + k x_k - \sum_{j=1}^k g_j x_k - \sum_{j=k+1}^n g_j x_k. \qquad (5)$$

Using $\sum_{i=1}^k 1 = k$ we can write this as

$$\sum_{i=1}^k (y_i - x_i) = \sum_{j=1}^k g_j x_j + \sum_{j=k+1}^n g_j x_j - \sum_{i=1}^k x_i + \sum_{i=1}^k x_k - \sum_{j=1}^k g_j x_k - \sum_{j=k+1}^n g_j x_k, \qquad (6)$$

which we then reexpress as

$$\sum_{i=1}^k (y_i - x_i) = \sum_{i=1}^k (g_i - 1)(x_i - x_k) + \sum_{j=k+1}^n g_j (x_j - x_k). \qquad (7)$$

But notice that $x_i - x_k \geq 0$ for $i \leq k$ and $x_j - x_k \leq 0$ for $j \geq k$. Further, since $0 \leq g_i \leq 1$, we have $g_i - 1 \leq 0$ and $g_j \geq 0$. Thus both of these terms are nonpositive. Hence we have shown that

$$\sum_{i=1}^k (y_i - x_i) \leq 0, \qquad (8)$$

which is the required inequality for $1 \leq k \leq n-1$. What about the equality? It follows easily from the properties of the doubly stochastic matrix: $\sum_{i=1}^n (Ax)_i = \sum_{i,j=1}^n A_{ij} x_j = \sum_{j=1}^n x_j$.

(e) Suppose that we have a machine with $N$ configurations. One operation we can perform on such a system is to permute (map in a one-to-one manner) these configurations.
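Parts (b) and (d) can be checked together numerically, using exactly the kind of operation part (e) considers: a random convex mixture of permutation matrices. A sketch in Python with NumPy (the construction and variable names here are illustrative, not from the solutions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Build a random convex combination of permutation matrices.
m = 4
q = rng.random(m)
q /= q.sum()                                   # convex weights: q_k >= 0, sum to 1
A = np.zeros((n, n))
for w in q:
    A += w * np.eye(n)[rng.permutation(n)]     # w * (a random permutation matrix)

# Part (b): the mixture A is doubly stochastic.
assert np.all(A >= 0)
assert np.allclose(A.sum(axis=0), 1) and np.allclose(A.sum(axis=1), 1)

# Part (d): Ax ≺ x for any real vector x — compare sorted partial sums.
x = rng.standard_normal(n)
y = A @ x
xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]    # x↓ and (Ax)↓
assert np.all(np.cumsum(ys)[:-1] <= np.cumsum(xs)[:-1] + 1e-9)
assert np.isclose(ys.sum(), xs.sum())          # totals agree
```

Permutation matrices are themselves doubly stochastic, so part (b) guarantees the mixture is too, and part (d) then guarantees the majorization check passes for any input vector.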