ON ALGEBRA-VALUED R-DIAGONAL ELEMENTS
MARCH BOEDIHARDJO∗ AND KEN DYKEMA†
Abstract. For an element in an algebra-valued ∗-noncommutative probability space, equivalent conditions for algebra-valued R-diagonality (a notion introduced by Sniady´ and Speicher) are proved. Formal power series relations involving the moments and cumulants of such R-diagonal elements are proved. Decompositions of algebra-valued R-diagonal elements into products of the form unitary times self-adjoint are investigated; sufficient conditions, in terms of cumulants, for ∗-freeness of the unitary and the self-adjoint part are proved, and a tracial example is given where ∗-freeness fails. The particular case of algebra-valued circular elements is considered; as an application, the polar decompostion of the quasinilpotent DT-operator is described.
1. Introduction Let B be a unital algebra over the complex numbers. A B-valued noncommutative probability space is a pair (A, E), where A is a unital algebra containing a unitally embedded copy of B and E is a conditional expectation, namely, an idempotent linear mapping E : A → B that restricts to the identity on B and satisfies E(b1ab2) = b1E(a)b2 for every a ∈ A and b1, b2 ∈ B. Elements of A are called random variables or B-valued random variables. When B and A are ∗-algebras and E is ∗-preserving, then the pair (A, E) is called a B-valued ∗-noncommutative probability space. The B-valued ∗-distribution of an element a ∈ A is, (1) (2) (n) loosely speaking, the collection of ∗-moments of the form E(a b1a ··· bn−1a ) for n ≥ 1, (1), . . . , (n) ∈ {1, ∗} and b1, . . . , bn−1 ∈ B. See Definition 2.5 for a formal version. We study B-valued R-diagonal random variables in B-valued ∗-noncommutative proba- bility spaces. Our motivation is the results of [1], where certain random matrices are proved to be asymptotically B-valued R-diagonal. In the scalar-valued case (B = C), R-diagonal random variables were introduced by Nica and Speicher [9] and these include many nat- ural and important examples in free probability theory. In a subsequent paper [7], Nica, Shlyakhtenko and Speicher found several equivalent characterizations of scalar-valued R- diagonal elements. In [12], Sniady´ and Speicher introduced B-valued R-diagonal elements and proved some equivalent characterizations of them. In this paper we firstly prove some arXiv:1512.06321v2 [math.OA] 12 Jan 2016 further characterizations of B-valued R-diagonal elements, which are analogous to those of [7]. Secondly, we prove a result involving power series for B-valued R-diagonal elements that is similar to the power series relation between the R-transform and the the moment series of a single random variable. Thirdly, we examine polar decompositions of B-valued R-diagonal elements, which is a more delicate topic than in the scalar-valued case. We also study B-valued circular elements (a special case of B-valued R-diagonal elements) and we prove a new result about the polar decomposition of a quasinilpotent DT-operator.
Date: January 12, 2016. 2000 Mathematics Subject Classification. 46L54. Key words and phrases. R-diagonal element, circular element, noncrossing cumulant. ∗Research supported in part by NSF grant DMS-1301604. †Research supported in part by NSF grant DMS-1202660. 1 2 BOEDIHARDJO AND DYKEMA
Definition 1.1. Given n ∈ N and = ((1), . . . , (n)) ∈ {1, ∗}n, we define the maximal alternating interval partition σ() to be the interval partition of {1, . . . , n} whose blocks B are the maximal interval subsets of {1, . . . , n} such that if j ∈ B and j + 1 ∈ B, then (j) 6= (j + 1). The following definition is a reformulation of one of the characterizations of R-diagonality found in [12] and (in the scalar-valued case) in [7]. Definition 1.2. We say that an element a in a B-valued ∗-noncommutative probability space (A, E) is B-valued R-diagonal if for every integer k ≥ 0 and every b1, . . . , b2k ∈ B we have ∗ ∗ ∗ E(ab1a b2ab3a ··· b2k−2ab2k−1a b2ka) = 0, (namely, odd alternating moments vanish) and, for every integer n ≥ 1, every ∈ {1, ∗}n and every choice of b1, b2, . . . bn ∈ B, we have Y Y (j) Y (j) E a bj − E a bj = 0, (1) B∈σ() j∈B j∈B where in each of the three products above, the terms are taken in order of increasing indices. Remark 1.3. Clearly a is B-valued R-diagonal if and only if a∗ is B-valued R-diagonal. Remark 1.4. It is not difficult to see, by expansion of the left-hand-side of (1) and induc- tion on n, that the B-valued ∗-distribution of a B-valued R-diagonal element is completely determined by the collection of even, alternating moments, namely, those of of the form ∗ ∗ ∗ ∗ ∗ ∗ E(a b1ab2a b3a ··· b2k−2a b2k−1a) and E(ab1a b2ab3a ··· b2k−2ab2k−1a ). In Theorem 3.1, we prove equivalence of seven conditions for a B-valued random variable a, including the condition for R-diagonalality in Definition 1.2. (For some of these equiva- lences we refer to [12].) The six others are, in fact, B-valued analogues of those found in [7] for the case B = C. One of these, condition (g), is in terms of Speicher’s noncrossing cumulants for the pair (a, a∗), namely, that only those associated with even alternating sequences in a and a∗ may be nonvanishing. Another, condition (f) of Theorem 3.1, is that 0 a the matrix a∗ 0 is free from M2(B) with amalgamation over the diagonal matrices with entries in B, while still another, condition (b), is an easy reformulation of Definition 1.2. The proofs of the various equivalences are similar to those found in [7]. A unitary element of a B-valued ∗-noncommutative probability space (A, E) is, of course, an element u ∈ A such that u∗u = uu∗ = 1. A Haar unitary element is a unitary element satisfying E(uk) = 0 for all integers k ≥ 1. In the tracial, scalar-valued case, it is well known (see Proposition 2.6 of [7]) that being R-diagonal is equivalent to having the same ∗-distribution as an element of the form up, where u is Haar unitary and p is self-adjoint and such that {u, u∗} and {p} are free. (In the case of a C∗-noncommutative probability space, one can also take p ≥ 0 — this is well known, or see, for example, Corollary 5.9 for a proof.) The analogous statement is not true in the tracial, algebra-valued case. Example 6.9 provides a counter example. However, in Proposition 5.5 and Theorem 5.8, we do characterize, in terms of cumulants, when an algebra-valued R-diagonal element has the same B-valued ∗-distribution as a product up with p self-adjoint, with u a B-normalizing Haar unitary element and with {u, u∗} and {p} free over B. As an application, Corollary 6.8 shows that the polar decomposition of the quasinilpotent DT-operator has this form. The contents of the rest of the paper are as follows: Section 2 briefly recalls the for- mulation from [8] of Speicher’s B-valued cumlants, introduces notation and proves some R-DIAGONAL 3 straightforward results about traces and about self-adjointness of B-valued distributions and cumulants. Section 3 deals with the equivalence of the seven conditions that charac- terize algebra-valued R-diagonal elements. Section 4 establishes formal power series rela- tions involving B-valued alternating moments and B-valued alternating cumulants of an R-diagonal element. Section 5 proves conditions for traciality of ∗–distributions of algebra- valued R-diagonal elements and examines polar decompositions and the like for R-diagonal elements. Section 6 examines algebra-valued circular elements, which comprise a special case of algebra-valued R-diagonal elements; the notion of these has appeared before, no- tably in work of Sniady´ [11]. This section also contains Example 6.9 concerning the polar decomposition, mentioned above. Finally, in an appendix, we investigate the distribution (with respect to a trace) of the positive part of the operator in this example.
2. Cumulants, traces, and ∗-distributions In this section, we briefly recall a formulation (from [8]) of Speicher’s theory [13] of B- valued cumulants and describe the notation we will use. We also prove some straightforward results about traciality and self-adjointness. Given a family (ai)i∈I of random variables in a B-valued noncommutative probability S n space (A, E), and given j = (j(1), . . . , j(n)) ∈ n≥1 I , the corresponding cumulant map n−1 is a C-multilinear map αj : B → B. These are defined, recursively, by the moment- cumulant formula X E(aj(1)b1aj(2) ··· bn−1aj(n)) = αˆj(π)[b1, . . . , bn−1], (2) π∈NC(n) where NC(n) is the set of all noncrossing partitions of {1, . . . , n} and where for π ∈ NC(n), 0 αˆj(π) is a multilinear map defined in terms of the cumulant maps αj0 for the j obtained by restricting j to the blocks of π. In detail,α ˆj(π) can be specified (recursively) as follows. If π = 1n, then αˆj(π)[b1, . . . , bn−1] = αj(b1, . . . , bn−1), while if π 6= 1n then, selecting an interval block {p, p + 1, . . . , p + q − 1} ∈ π with p ≥ 1 and q ≥ 1, letting π0 ∈ NC(n − q) be obtained by restricting π to {1, . . . , p − 1} ∪ {p + q, . . . , n} and renumbering to preserve order, and letting j0 ∈ In−q and j00 ∈ Iq be j0 = (j(1), j(2), . . . , j(p − 1), j(p + q), j(p + q + 1), . . . , j(n)), (3) j00 = (j(p), j(p + 1), . . . , j(p + q − 1)), (4) we have
αˆj(π)[b1, . . . , bn−1] = 0 αˆ 0 (π )[b , . . . ,b , j 1 p−2 bp−1αj00 (bp, . . . , bp+q−2)bp+q−1, = bp+q, . . . , bn−1], p ≥ 2, p + q − 1 < n (5) 0 αˆ 0 (π )[b , . . . , b ]b α 00 (b , . . . , b ), p ≥ 2, p + q − 1 = n, j 1 p−2 p−1 j p n−1 0 αj00 (b1, . . . , bq−1)bqαˆj0 (π )[bq+1, . . . , bn−1], p = 1, q < n.
We will use the notation ψj for the multilinear moment map
ψj(b1, . . . , bn−1) = E(aj(1)b1aj(2) ··· bn−1aj(n)). (6) Given a tracial linear functional on B and a B-valued noncommutative probability space (A, E), we will now characterize, in terms of B-valued cumulants, when the composition τ ◦ E is tracial. 4 BOEDIHARDJO AND DYKEMA
In the following lemma and its proof, we will use the notation c for cyclic left permuta- tions. In particular, if j = (j(1), . . . , j(n)) ∈ In then c(j) = (j(2), j(3), . . . , j(n), j(1)), and if π ∈ NC(n) for some n, then c(π) ∈ NC(n) is the permutation obtained from π by applying the mapping ( n, j = 1 j 7→ j − 1, j > 1 to the underlying set {1, . . . , n}; so, for example, if π = {{1, 2}, {3, 4, 5}} ∈ NC(5), then c(π) = {{1, 5}, {2, 3, 4}}.
Lemma 2.1. Let Θ be the B-valued distribution of a family (ai)i∈I with corresponding moment maps ψj and cumulant maps αj. Suppose τ is a tracial linear functional on B. 0 m Fix n ≥ 2 and suppose that for all m ∈ {1, 2, . . . , n − 1}, all j ∈ I and all b1, . . . , bm ∈ B, we have
τ(αj0 (b1, . . . , bm−1)bm) = τ(b1αc(j0)(b2, . . . , bm)). (7) n Then for every π ∈ NC(n)\{1n}, j ∈ I and b1, . . . , bn ∈ B, we have
τ(ˆαj(π)[b1, . . . , bn−1]bn) = τ(ˆαc(j)(c(π))[b2, . . . , bn]b1). (8) Proof. We proceed by induction on n. To begin, if n = 2, then π = {{1}, {2}} and using (5) we have
τ(ˆαj(π)[b1]b2) = τ(α(j(1))b1α(j(2))b2) = τ(b1α(j(2))b2α(j(1))) = τ(b1αˆc(j)(π)[b2]), as required. Assume n ≥ 3. By the induction hypothesis, for every m ∈ {1, . . . , n − 1}, 0 m 0 0 j ∈ I and π ∈ NC(m), including, by the original hypothesis (7), the case π = 1m, we have 0 0 τ(ˆαj0 (π )[b1, . . . , bm−1]bm) = τ(ˆαc(j0)(c(π ))[b2, . . . , bm]b1). (9)
Since π 6= 1n, there is an interval block {p, p + 1, . . . , p + q − 1} ∈ π with p ≥ 2 and q ≥ 1. Let j0 and j00 be as in (3)–(4) and let π0 ∈ NC(n−q) be as described above those equations. If p + q − 1 < n and p ≥ 3, then
τ(ˆαj(π)[b1, . . . , bn−1]bn) 0 = τ(ˆαj0 (π )[b1, . . . , bp−2, bp−1αj00 (bp, . . . , bp+q−2)bp+q−1, bp+q, . . . , bn−1]bn) 0 = τ(ˆαc(j0)(c(π ))[b2, . . . , bp−2, bp−1αj00 (bp, . . . , bp+q−2)bp+q−1, bp+q, . . . , bn]b1)
= τ(ˆαc(j)(c(π))[b2, . . . , bn]b1), where we have used, respectively, the first case on the right-hand-side of (5), (9) and again the first case on the right-hand-side of (5). If p + q − 1 < n and p = 2, then 0 τ(ˆαj(π)[b1, . . . , bn−1]bn) = τ(ˆαj0 (π )[b1αj00 (b2, . . . , bq)bq+1, bq+2, . . . , bn−1]bn) 0 = τ(ˆαc(j0)(c(π ))[bq+2, . . . , bn]b1αj00 (b2, . . . , bq)bq+1) 0 = τ(αj00 (b2, . . . , bq)bq+1αˆc(j0)(c(π ))[bq+2, . . . , bn]b1)
= τ(ˆαc(j)(c(π))[b2, . . . , bn]b1), where we have used, respectively, the first case on the right-hand-side of (5), (9), the trace property of τ and the third case on the right-hand-side of (5). The case of p ≥ 3 and R-DIAGONAL 5 p + q − 1 = n is done similarly, while if p = 2 and q = n − 1, then
τ(ˆαj(π)[b1, . . . , bn−1]bn) = τ(αj(1)b1αj00 (b2, . . . , bn−1)bn)
= τ(αj00 (b2, . . . , bn−1)bnαj(1)b1)
= τ(ˆαc(j)(c(π))[b2, . . . , bn]b1), where we have used, respecively, the second case on the right-hand-side of (5), the trace property of τ and the third case on the right-hand-side of (5). Thus, (8) holds and the lemma is proved.
Proposition 2.2. Let Θ be the B-valued distribution of a family (ai)i∈I with corresponding moment maps ψj and cumulant maps αj. Suppose τ is a tracial linear functional on B. Then the following are equivalent:
(i) the linear functional τ ◦ Θ on BhXi | i ∈ Ii is tracial, n (ii) ∀n ≥ 2 ∀j ∈ I ∀b1, . . . , bn ∈ B, we have
τ(ψj(b1, . . . , bn−1)bn) = τ(b1ψc(j)(b2, . . . , bn)), (10) n (iii) ∀n ≥ 1 ∀j ∈ I ∀b1, . . . , bn ∈ B, we have
τ(αj(b1, . . . , bn−1)bn) = τ(b1αc(j)(b2, . . . , bn)). (11)
Proof. The equivalence of (i) and (ii) is easily seen from the definition (6) of ψj. The proof of (iii) =⇒ (ii) follows from the moment-cumulant formula (2) and Lemma 2.1. We will prove (ii) =⇒ (iii) using the moment-cumulant formula (2) and Lemma 2.1. Suppose (ii) holds and let us show (11) holds by induction on n ≥ 1. The case n = 1 is from the tracial property of τ. Fix n0 ≥ 2 and suppose (11) holds for all n < n0. Then, by n0 Lemma 2.1, for all π ∈ NC(n0)\{1n0 }, all j ∈ I and all b1, . . . , bn0 ∈ B, we have
τ(ˆαj(π)[b1, . . . , bn0−1]bn0 ) = τ(ˆαc(j)(c(π))[b2, . . . , bn0 ]b1). Combining this with the moment-cumulant formula (2) and using (10), we get
τ(αj(b1, . . ., bn0−1)bn0 ) X = τ(ψj(b1, . . . , bn0−1)bn0 ) − τ(ˆαj(π)[b1, . . . , bn0−1]bn0 )
π∈NC(n0)\{1n0 } X = τ(ψc(j)(b2, . . . , bn0 )b1) − τ(ˆαc(j)(c(π))[b2, . . . , bn0 ]b1)
π∈NC(n0)\{1n0 }
= τ(αc(j)(b2, . . . , bn0 )b1), as required. We now turn to questions of self-adjointness. Suppose B is a ∗-algebra and consider a family (ai)i∈I of B-valued random variables in a B-valued noncommutative probability space (A, E). Consider an involution s : I → I. Let BhXi | i ∈ Ii be endowed with the ∗ ∗-algebra structure coming from the ∗-operation on B and by setting Xi = Xs(i) for all i ∈ I. For each n ≥ 1, lets ˜ : In → In be defined by s˜((j(1), j(2), . . . , j(n)) = (s(j(n)), . . . , s(j(2)), s(j(1))).
Lemma 2.3. Let αj be the cumulant maps of the family (ai)i∈I . Fix n ≥ 2 and suppose 0 m that for all m ∈ {1, 2, . . . , n − 1}, all j ∈ I and all b1, . . . , bm−1 ∈ B, we have ∗ ∗ ∗ αj0 (b1, . . . , bm−1) = αs˜(j0)(bm−1, . . . , b1). 6 BOEDIHARDJO AND DYKEMA
n Then for all π ∈ NC(n)\{1n}, all j ∈ I and all b1, . . . , bn−1 ∈ B, we have ∗ ∗ ∗ αˆj(π)[b1, . . . , bn−1] = αj(r(π))[bn−1, . . . , b1], where r(π) is the noncrossing partition obtained from π by applying the reflection j 7→ n−j to all elements in the underlying set {1, . . . , n − 1}. Proof. This follows in a straightforward manner by induction on the number of blocks in π, from the recursion formula (5) for cumulants. The following proposition gives a criterion for self-adjointness of the distribution of a family of B-valued random variables in terms of properties of the B-valued cumulant maps. Proposition 2.4. The following are equivalent:
(i) the linear mapping Θ: BhXi | i ∈ Ii → B is self-adjoint. n (ii) ∀n ≥ 1 ∀j ∈ I ∀b1, . . . , bn−1 ∈ B, we have ∗ ∗ ∗ ψj(b1, . . . , bn−1) = ψs˜(j)(bn−1, . . . , b1), n (iii) ∀n ≥ 1 ∀j ∈ I ∀b1, . . . , bn ∈ B, we have ∗ ∗ ∗ αj(b1, . . . , bn−1) = αs˜(j)(bn−1, . . . , b1). ∗ Moreover, if (A, E) is a ∗-noncommutative probability space and if ai = as(i) for all i ∈ I, then the above conditions are satisfied. Definition 2.5. If the conditions in the last sentence of Proposition 2.4 hold, then we say Θ is the B-valued ∗-distribution of the family (ai)i∈I . Proof of Proposition 2.4. The equivalence of (i) and (ii) follows directly from the definitions. The equivalence of (ii) and (iii) is a straightforward application of the moment-cumulant formula and Lemma 2.3. We now suppose that B is a C∗-algebra and Θ is a self-adjoint B-valued distribution as ∗ in (i) of Proposition 2.4. We say that Θ is positive if Θ(p p) ≥ 0 for every p ∈ BhXi | i ∈ Ii. It can be difficult to verify positivity of a ∗-distribution Θ only from knowing the cumulants, though there are special cases that are exceptions to this statement (for example, the B- valued semicircular and B-valued circular elements, discussed in Section 6).
3. Algebra-valued R-diagonal elements Let B be a unital ∗-algebra and let (A, E) be a B-valued ∗-noncommutative probability space. It is convenient to introduce here some notation we will use below. By an enlargement of (A, E), we will mean a B-valued ∗-noncommutative probability space (A,e Ee) with an embedding A,→ Ae so that the diagram
A,→ Ae ∪ ∪ B = B commutes and EeA = E. R-DIAGONAL 7
For an element a ∈ A, consider the following sets of words and their centerings, formed from alternating a and a∗, with elements of B between: ∗ ∗ ∗ P11 = {b0ab1a b2ab3a b4 ··· ab2k−1a b2kab2k+1 | k ≥ 0, b0, . . . , b2k+1 ∈ B} (12) ∗ ∗ ∗ ∗ P22 = {b0a b1ab2a b3ab4 ··· a b2k−1ab2ka b2k+1 | k ≥ 0, b0, . . . , b2k+1 ∈ B} (13) ∗ ∗ ∗ P12 = b0ab1a b2ab3a b4 ··· ab2k−1a b2k (14) ∗ ∗ ∗ − E(b0ab1a b2ab3a b4 ··· ab2k−1a b2k) | k ≥ 1, b0, . . . , b2k ∈ B ∗ ∗ ∗ P21 = b0a b1ab2a b3ab4 ··· a b2k−1ab2k (15) ∗ ∗ ∗ − E(b0a b1ab2a b3ab4 ··· a b2k−1ab2k) | k ≥ 1, b0, . . . , b2k ∈ B . ∗ Note that P1x means “starting with a” and P2x means “starting with a ”, while Px1 means ∗ “ending with a” and Px2 means “ending with a .” When we write that a unitary u normalizes B in item (c) below, we mean that ubu∗ ∈ B and u∗bu ∈ B for all b ∈ B. The next result is essentially, a B-valued version of Theorem 1.2 of [7]. The equivalence of (b), (d) and (g) was proved in [12]; here we will prove the equivalence of (a)–(f). Theorem 3.1. Let a ∈ A. Then the following are equivalent: (a) a is B-valued R-diagonal. (b) We have E(x1x2 ··· xn) = 0 (16)
whenever n ∈ N, i0, i1, . . . , in ∈ {1, 2} and xj ∈ Pij−1,ij . (c) There is a B-valued ∗-noncommuative probability space (A,e Ee), an element p ∈ Ae and a unitary u ∈ Ae such that (i) u normalizes B, (ii) Ee(p) = 0 and, for all k ≥ 1 and all b1, . . . , b2k ∈ B, we have ∗ ∗ ∗ Ee(pb1p b2pb3p b4 ··· pb2k−1p b2kp) = 0, (iii) {u, u∗} is free from {p, p∗} with respect to Ee, (iv) Ee(u) = 0, (v) a and up have the same B-valued ∗-distribution. (d) There is an enlargement (A,e Ee) of (A, E) and a unitary u ∈ Ae such that (i) u commutes with every element of B, (ii) {u, u∗} is free from {a, a∗} with respect to Ee, (iii) Ee(uk) = 0 for all k ∈ N (namely, u is Haar unitary), (iv) a and ua have the same B-valued ∗-distribution. (e) If (A,e Ee) is an enlargement of (A, E) and u ∈ Ae is a unitary such that (i) u commutes with every element of B, (ii) {u, u∗} is free from {a, a∗} with respect to Ee, then a and ua have the same B-valued ∗-distribution. (f) Consider the subalgebra (1) (2) b 0 (1) (2) B := b , b ∈ B ⊆ M2(A), 0 b(2) (2) (2) the conditional expectation E : M2(A) → B given by a a E(a ) 0 E(2) 11 12 = 11 , a21 a22 0 E(a22) 8 BOEDIHARDJO AND DYKEMA
the subalgebra M2(B) ⊆ M2(A) and the operator 0 a z = . a∗ 0 (2) (2) Then {z} and M2(B) are free with respect to E (i.e., with amalgamation over B ). ∗ n (g) Letting a1 = a and a2 = a , for every n ∈ N and j = (j(1), . . . , j(n)) ∈ {1, 2} , the B-valued cumulant map αj for the pair (a1, a2) is equal to zero if j is either not alternating or is not of even length, i.e., if j is not of the form (1, 2, 1, 2,..., 1, 2) or (2, 1, 2, 1,..., 2, 1). The rest of the section is devoted to the proof of (a)–(f). Proof of (a) ⇐⇒ (b). This is clear, because every expectation on the left-hand-side of (1) in Definition 1.2 is one of the expectations on the left-hand-side of (16), and vice-versa and because the vanishing of odd, alternating moments of a is equivalent to P11 ⊆ ker E. Next is a straighforward computation that we will use twice, so we state it as a separate lemma. Lemma 3.2. Let (A, E) be a B-valued ∗-noncommutative probability space. Assume u, p ∈ A are such that (i) u a unitary that normalizes B (ii) {u, u∗} and {p, p∗} are free with respect to E. ∗ o Let a = up and let Pij be as in (12)–(15). Let A = alg(B ∪ {p, p }) and A = A ∩ ker E. Then o ∗ P12 ⊆ uA u , (17) o P21 ⊆ A , (18)
P11 ⊆ uA, (19) ∗ P22 ⊆ Au . (20) 0 ∗ Proof. For b ∈ B, we use the notational convention b = u bu ∈ B. We examine P12. Consider, for b0, . . . , b2k ∈ B, ∗ ∗ ∗ ∗ 0 ∗ 0 ∗ 0 ∗ y := b0ab1a b2ab3a ··· b2k−2ab2k−1a b2k = b0upb1p b2pb3p ··· b2k−2pb2k−1p b2ku . ∗ 0 ∗ 0 ∗ 0 Lettingy ˜ = pb1p b2pb3p ··· b2k−2pb2k−1p b2k, using freeness and condition (i), we have ∗ ∗ E(y) = b0E(uyu˜ ) = b0uE(˜y)u . so an arbitrary element of P12 can be written in the form ∗ 0 ∗ x = y − E(y) = b0u(˜y − E(˜y))u = u(b0(˜y − E(˜y)))u . We have shown (17). Proving (18) is even easier. Indeed, we have ∗ ∗ ∗ ∗ 0 ∗ 0 ∗ 0 z := b0a b1ab2a b3a ··· b2k−2a b2k−1ab2k = b0p b1pb2p b3p ··· b2k−2p b2k−1pb2k ∈ A, o so an arbitrary element of P21 can be written x = z − E(z) ∈ A . Similarly, an arbitrary element of P11 is of the form ∗ ∗ 0 ∗ 0 ∗ 0 b0ab1a b2ab3 ··· a b2kab2k+1 = ub0pb1p b2pb3 ··· p b2kpb2k+1 ∈ uA, (21) proving (19). Taking conjugates proves (20). The next lemma is an analogue of Proposition 2.3 of [7]. R-DIAGONAL 9
Lemma 3.3. Let (A, E) be a B-valued ∗-noncommutative probability space. Assume u, p ∈ A satisfy (i) u is a unitary that normalizes B, (ii) {u, u∗} is free from {p, p∗} with respect to E, (iii) E(u) = 0, (iv) E(p) = 0 and, for all k ≥ 1 and all b1, . . . , b2k ∈ B, we have ∗ ∗ ∗ E(pb1p b2pb3p b4 ··· pb2k−1p b2kp) = 0. Then the element a = up satisfies the condition (b) of Theorem 3.1. Proof. From Lemma 3.2 we have (17) and (18). Examining (21) in the proof of Lemma 3.2 and invoking condition (iv), (then also taking conjugates), we find o P11 ⊆ uA , (22) o ∗ P22 ⊆ A u . (23)
Using (17)–(18) and (22)–(23), we see that any product of the form x1x2 ··· xn with xj ∈
Pij ,ij+1 for some i1, . . . , in+1 ∈ {1, 2} can be rewritten as a word with letters belonging to the sets Ao and {u, u∗}, in alternating fashion. By freeness and the hypothesis E(u) = 0, each such word evaluates to 0 under E. Next is an analogue of Lemma 2.5 of [7]. Lemma 3.4. Suppose u ∈ A is a B-normalizing Haar unitary element in a B-valued ∗- noncommutative probabiltiy space (A, E). Suppose D ⊆ A is a ∗-subalgebra that is free from ∗ {u, u } with respect to E. Let n ≥ 1, x1, . . . , xn ∈ D and h0, h1, . . . , hn ∈ Z be such that (i) hk 6= 0 if 1 ≤ k ≤ n − 1 (ii) hk−1hk ≥ 0 and at least one of hk−1 and hk is nonzero whenever 1 ≤ k ≤ n and E(xk) 6= 0. Then h0 h1 hn−1 hn E(u x1u x2 ··· u xnu ) = 0. (24) Proof. The proof is very similar to the proof found in [7], only slightly different to take B into account. We use induction on the cardinality m, of {k | E(xk) 6= 0}. If m = 0, then by freeness, (24) holds. If m > 0, then letting k be least such that E(kk) 6= 0, we write
h0 h1 hn−1 hn E(u x1u x2 ··· u xnu )
h0 h1 hk−1 hk hk+1 hn = E(u x1u x2 ··· xk−1u E(xk)u xk+1u ··· xnu )
h0 h1 hk−1 hk hk+1 hn + E(u x1u x2 ··· xk−1u (xk − E(xk))u xk+1u ··· xnu ). By the induction hypothesis, the second term on the right-hand-side equals 0. Letting
hk−1 −hk−1 b = u E(xk)u , we have b ∈ B and the first term equals
h0 h1 hk−2 hk−1+hk hk+1 hn E(u x1u ··· xk−2u xk−1bu xk+1u ··· xnu ). (25) We will show that, by induction hypothesis, the above quantity equals 0. Indeed, we have hk−1hk ≥ 0 and at most one of hk−1 and hk can be zero, so hk−1 + hk 6= 0. If k < n and E(xk+1) 6= 0, then hkhk+1 ≥ 0 and we conclude (hk−1 + hk)hk+1 ≥ 0. If k > 1 and E(xk−1b) 6= 0, then E(xk−1)b 6= 0 so E(xk−1) 6= 0. Consequently, hk−2hk−1 ≥ 0. Thus, hk−2(hk−1 + hk) ≥ 0. If k > 1 then we see that the word to which E is applied in (25) 10 BOEDIHARDJO AND DYKEMA satisfies the requirements (i)-(ii) and, applying the induction hypothesis, we conclude that this moment is 0. If k = 1, then the moment (25) becomes
h0+h1 h2 hn bE(u x2u ··· xnu ) and again, by the induction hypothesis, this is zero. The next result is an analogue of Proposition 2.4 of [7]. Lemma 3.5. Suppose that (A, E) is a B-valued ∗-noncommutative probability space and u, p ∈ A are such that (i) u is a B-normalizing Haar unitary element, (ii) {u, u∗} is free from {p, p∗} with respect to E. Let a = up. Then a satisfies condition (b) of Theorem 3.1. Proof. Lemma 3.2 applies and we may use the inclusions (17)–(20), where A is the algebra ∗ generated by B ∪ {p, p }. Thus, for arbitrary n ∈ N, i1, . . . , in+1 ∈ {1, 2} and xj ∈ Pij ij+1 , r the product x1x2 ··· xn can be re-written in the form c0u 1 c1u 2 ··· cr−1u cr for some r ≥ 0, some c0, . . . , cr ∈ A and some 1, . . . , r ∈ {−1, 1}, where whenever E(cj) 6= 0 for some 1 ≤ j ≤ r − 1, we have j = j+1 and in the case r = 0, we have E(c0) = 0. Now Lemma 3.4 applies and we conclude E(x1x2 ··· xn) = 0. Proof of (b) ⇐⇒ (c) ⇐⇒ (d) ⇐⇒ (e) of Theorem 3.1. We first show (e) =⇒ (d). There is a B-valued ∗-noncommutative probability space (B,e F) with a unitary v ∈ Be such that v commutes with every element of B and F(vn) = 0 for all n ∈ N. For example, we could take Be to be the algebra B ⊗C[C∞], where C[C∞] is the group ∗-algebra of the cyclic group of infinite order and where F(b ⊗ x) = b xe, where, for x ∈ C[Z], xe equals the coefficient in x of the identity element e ∈ C∞; let c be a generator of C∞ and denote also by c ∈ C[C∞] the corresponding element of the group algebra; then v = 1⊗c is a unitary with the desired properties. Then we let (A,e Ee) = (A, E) ∗B (B,e F) (26) be the algebraic (amalgamated) free product of B-valued ∗-noncommutative probability spaces and let u ∈ Ae be the copy of v ∈ Be arising from the free product construction. Then u commutes with every element of B, {u, u∗} and {a, a∗} are free and Ee(u) = 0. By (e), the elements a and ua have the same B-valued ∗-distribution. Therefore, the requirements of (d) are fulfilled. We now show (d) =⇒ (c), taking p = a. We need only show that condition (ii) of (c) holds, namely, that E(a) = 0 and ∗ ∗ ∗ E(ab1a b2ab3a b4 ··· ab2k−1a b2ka) = 0. Since, by hypothesis, u commutes with every element B and a has the same B-valued ∗-distribution as ua, we have E(a) = Ee(ua) = Ee(u)E(a) = 0, where the second equality is due to freeness namely, condition (ii) of (d), and the last equality is because Ee(u) = 0, namely, condition (iii) of (d). Similarly, we have ∗ ∗ ∗ ∗ ∗ ∗ E(ab1a b2ab3a b4 ··· ab2k−1a b2ka) = Ee(uab1a b2ab3a b4 ··· ab2k−1a b2ka) ∗ ∗ ∗ = Ee(u)E(ab1a b2ab3a b4 ··· ab2k−1a b2ka) = 0, with the penultimate equality due to freeness. Thus, (c) holds. The implication (c) =⇒ (b) follows immediately by appeal to Lemma 3.3. R-DIAGONAL 11
We now show (b) =⇒ (e). We suppose (A,e Ee) is an enlargement of (A, E) and u ∈ A is a unitary satisfying (i) and (ii) of (e). We must show that a and ua have the same B-valued ∗-distribution. Taking (B,e F) as in the proof of (e) =⇒ (d) above and replacing (A,e Ee) by the algebraic free product with amalgamation (A,e Ee)∗B (B,e F), we may assume there exists a Haar unitary v ∈ Ae such that v commutes with every element of B and {v, v∗} is free from {a, u, a∗, u∗}. Thus, the triple {v, v∗}, {u, u∗}, {a, a∗} forms a free family of sets. We make the following claims, which we will prove one after the other: (A) a and va have the same B-valued ∗-distribution, (B) ua and uva have the same B-valued ∗-distribution, (C) each of uva and ua satisfies condition (b) of Theorem 3.1, (D) a and ua have the same B-valued ∗-distribution. To prove (A), we note that both a and va satisfy condition (b) of Theorem 3.1; the operator a does so by hypothesis, and the operator va does so by Lemma 3.5. Moreover, for each k ≥ 1 and b1, . . . , b2k−1 ∈ B, we have