UNIVERSITY OF CALIFORNIA Los Angeles
On bi-free probability and free entropy.
A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy in Mathematics
by
Ian Lorne Charlesworth
2017

© Copyright by
Ian Lorne Charlesworth
2017

ABSTRACT OF THE DISSERTATION
On bi-free probability and free entropy.
by
Ian Lorne Charlesworth
Doctor of Philosophy in Mathematics
University of California, Los Angeles, 2017
Professor Dimitri Y. Shlyakhtenko, Chair
Free probability is a non-commutative analogue of probability theory. Recently, Voiculescu has introduced bi-free probability, a theory which aims to study simultaneously "left" and "right" non-commutative random variables, such as those arising from the left and right regular representations of a countable group. We introduce combinatorial techniques to characterise bi-free independence, generalising results of Nica and Speicher from the free setting to the bi-free setting. In particular, we develop the lattice of bi-non-crossing partitions, which is deeply tied to the action of bi-freely independent random variables on a free product space. We use these techniques to show that a conjecture of Mastnak and Nica holds: bi-free independence is equivalent to the vanishing of mixed bi-free cumulants. Moreover, we extend the theory to the operator-valued setting, introducing operator-valued cumulants which correspond to bi-freeness with amalgamation in the same way. Finally, we investigate regularity problems in algebras of non-commuting random variables. Using operator-theoretic techniques, we show that in an algebra generated by non-commutative random variables which admit a dual system, any self-adjoint element whose spectral measure is singular with respect to Lebesgue measure is a multiple of 1. We are also able to slightly improve on prior results in the literature and show that any non-constant self-adjoint polynomial evaluated at a set of non-commutative random variables which are free, algebraic, and of finite free entropy must produce a variable with finite free entropy.
The dissertation of Ian Lorne Charlesworth is approved.
Milos D. Ercegovac
Sorin Popa
Dimitri Y. Shlyakhtenko, Committee Chair
University of California, Los Angeles
2017
For Sue, Dave, Laurel, and Kate.
TABLE OF CONTENTS
1 Introduction
1.1 Preliminaries
1.1.1 Probability
1.1.2 Free probability
1.1.3 Free cumulants
1.1.4 The incidence algebra of non-crossing partitions
1.1.5 Bi-free probability
1.1.6 Free convolutions
1.1.7 Fock space
1.1.8 Operator-valued free probability
1.1.9 Free entropy and regularity
2 Bi-free independence
2.1 Some combinatorial structures
2.1.1 Bi-non-crossing partitions
2.1.2 Shaded LR-diagrams
2.2 Computing joint moments of bi-free variables
2.3 The incidence algebra on bi-non-crossing partitions
2.3.1 Multiplicative functions
2.4 Unifying bi-free independence
2.4.1 Summation considerations
2.4.2 Bi-free independence is equivalent to combinatorial bi-free independence
2.4.3 Voiculescu's universal bi-free polynomials
2.5 An alternating moment condition for bi-free independence
2.5.1 The equivalence
2.5.2 Conditional bi-freeness
2.6 Bi-free multiplicative convolution
2.6.1 The Kreweras complement
2.6.2 Cumulants of products
2.6.3 Multiplicative convolution
2.7 An operator model for pairs of faces
2.7.1 Nica's operator model
2.7.2 Skeletons corresponding to bi-non-crossing partitions
2.7.3 A construction
2.7.4 The operator model for pairs of faces
2.8 Additional cases of bi-free independence
2.8.1 Conjugation by a Haar pair of unitaries
2.8.2 Bi-free independence for bipartite pairs of faces
3 Amalgamated bi-free probability
3.1 Bi-free families with amalgamation
3.1.1 Concrete structures for bi-free probability with amalgamation
3.1.2 Abstract structures for bi-free probability with amalgamation
3.1.3 Bi-free families of pairs of B-faces
3.2 Operator-valued bi-multiplicative functions
3.2.1 Definition of bi-multiplicative functions
3.3 Bi-free operator-valued moment function is bi-multiplicative
3.3.1 Definition of the bi-free operator-valued moment function
3.3.2 Verification of Property (iii) from Definition 3.2.1 for E
3.3.3 Verification of Property (iv) from Definition 3.2.1 for E
3.4 Operator-valued bi-free cumulants
3.4.1 Cumulants via convolution
3.4.2 Convolution preserves bi-multiplicativity
3.4.3 Bi-moment and bi-cumulant functions
3.4.4 Vanishing of operator-valued bi-free cumulants
3.5 Universal moment polynomials for bi-free families with amalgamation
3.5.1 Bi-freeness with amalgamation through universal moment polynomials
3.6 Additivity of operator-valued bi-free cumulants
3.6.1 Equivalence of bi-freeness with vanishing of mixed cumulants
3.6.2 Moment and cumulant series
3.7 Amalgamated versions of results from Chapter 2
3.7.1 Amalgamated vaccine
3.7.2 Amalgamated multiplicative convolution
4 An investigation into regularity and free entropy
4.1 Regularity results
4.1.1 Useful results from the literature
4.2 Algebraicity and finite entropy
4.2.1 Properties of spectral measures
4.3 Bi-free unitary Brownian motion
4.3.1 Free Brownian motion
4.3.2 The free liberation derivation
4.3.3 A bi-free analogue to the liberation derivation
4.3.4 Bi-free unitary Brownian motion
ACKNOWLEDGMENTS
I would like to thank my advisor, Dima Shlyakhtenko, for years of invaluable advice and support. Without his guidance and encouragement, I would never have attained the expertise and will necessary to complete this dissertation. I am in his debt for all the help he has given me, with respect to mathematics and to my future career. I could not have asked for a better advisor.
I am grateful to Ben Hayes, Brent Nelson, and Paul Skoufranis, in whose footsteps I have followed. They have served as great role models for me in graduate school, providing me with friendship, mentoring, and collaboration. I wish to extend thanks as well to the many people who have provided me variously with advice, opportunities, suggestions, insight, and camaraderie: Andreas Aaserud, Michael Anshelevich, Ken Dykema, Todd Kemp, Brian Leary, Dave Penneys, Jesse Peterson, Sorin Popa, and Dan Voiculescu.
Last but by no means least, I am thankful to the many people who have provided me love and support over the years which has helped me stay motivated and focused: my parents, Sue and Dave; my sisters Laurel and Kate; and Nadia and Rami, Sean and Becky, Fred and Steph, Brittany, Jason, Alec, and Danny.
The content of [9] is included in Chapter 2; this paper was joint work with Brent Nelson and Paul Skoufranis and arose from useful discussions we shared while at the University of California, Los Angeles. Thus some parts of Chapter 2 were published in the Canadian Journal of Mathematics, http://dx.doi.org/10.4153/CJM-2015-002-6. [8] has been distributed between Chapters 2 and 3, and arose from continuations of those same discussions. The final publication is available at Springer via http://dx.doi.org/10.1007/s00220-015-2326-8. Some of the arguments from [10] appear in Chapter 4: the main result of that paper was discovered by Dima Shlyakhtenko and has not been included; the included pieces were arguments I discovered with his advice which were also included in the paper. The full publication appears at http://dx.doi.org/10.1016/j.jfa.2016.05.001. Finally, the preprint [7] contains content from Chapters 2 and 4.
I have received support from the Natural Sciences and Engineering Research Council of Canada in the form of scholarships, and the National Science Foundation through grants made to my advisor. I would also like to thank the Hausdorff Research Institute for Mathematics at the Universität Bonn for providing an excellent research environment during the von Neumann Algebras trimester program in the summer of 2016.
VITA
2007-2012 B.Math. Honours Pure Mathematics & Honours Computer Science, University of Waterloo, Waterloo, Ontario, Canada.
2012-2016 Natural Sciences and Engineering Research Council of Canada Postgraduate Scholarship.
2016-2017 U.C.L.A. Dissertation Year Fellowship.
PUBLICATIONS
A. Nica, I. Charlesworth, and M. Panju. Analyzing Query Optimization Process: Portraits of Join Enumeration Algorithms. IEEE 28th International Conference on Data Engineering (2012).
I. Charlesworth, B. Nelson, and P. Skoufranis. On Two-faced Families of Non-commutative Random Variables, Canadian Journal of Mathematics 67 (2015), no. 6, 1290-1325.
I. Charlesworth, B. Nelson, and P. Skoufranis. Combinatorics of Bi-Freeness with Amalgamation, Communications in Mathematical Physics 338 (2015), 801-847.
I. Charlesworth and D. Shlyakhtenko. Free Entropy Dimension and Regularity of Non-commutative Polynomials, Journal of Functional Analysis 271 (2016), no. 8, 2274-2292.
I. Charlesworth. An alternating moment condition for bi-freeness. arXiv preprint 1611.01262 (2016).
I. Charlesworth, K. Dykema, F. Sukochev, and D. Zanin. Joint spectral distributions and invariant subspaces for commuting operators in a finite von Neumann algebra. arXiv preprint 1703.05695 (2017).
CHAPTER 1
Introduction.
Free probability is a non-commutative probability theory, introduced by Voiculescu in the 1980s. While probability theory concerns itself with studying random variables – measurable functions on a probability space – non-commutative probability attempts to describe phenomena exhibited in algebras of non-commutative random variables, which are ∗-algebras equipped with positive states. The original motivations for developing free probability related to the study of von Neumann algebras, but it has since been connected to other fields, such as random matrix theory and combinatorics. The theory of free probability has been developed through both analytic and combinatorial techniques, and the interplay between these two has led to many beautiful results.
Conventions. We will adopt the following conventions and notation:
• We take inner products to be linear in the second entry and conjugate linear in the first.
• Given n ∈ N, we will denote by [n] the n-element set {1, 2, . . . , n}.
• Unless we explicitly state otherwise, all algebras are unital ∗-algebras and all subalgebras include the unit of the larger algebra.
1.1 Preliminaries.
We will provide a brief review of several ideas essential to free probability, which will be of use to us later on. First, though, we will provide a lens through which probability theory may be viewed, which clarifies several analogies to free probability.
1.1.1 Probability.
To begin, we will recall some basics of probability. Our presentation will differ from the usual, in order to make the generalisation to the free setting more natural. Suppose that (Ω, F, P) is a probability space (so Ω is a set, F is a σ-algebra on Ω, and P is a positive measure of total mass 1), and let A = L^∞(Ω, P) be the algebra of essentially bounded functions on Ω (i.e., the algebra of bounded random variables on Ω). Then A can be embedded in the bounded operators on the Hilbert space L²(Ω, P) via pointwise multiplication. Notice that the expectation function E : A → C can be extended to B(L²(Ω, P)) as the vector state corresponding to 1 ∈ L²(Ω, P): for any X ∈ A,

E[X] = ⟨1, X(1)⟩.
We will often forget the underlying probability space (Ω, P) and consider normed commutative unital ∗-algebras equipped with states (i.e., positive linear functionals which map 1_A to 1); overloading terminology, we will also refer to such a pair as a probability space.
Recall that subalgebras (A^{(ι)})_{ι∈I} are independent if for any X_1, . . . , X_n ∈ A with X_k ∈ A^{(i_k)} such that i_1, . . . , i_n are distinct, we have the identity

E[X_1 ··· X_n] = E[X_1] ··· E[X_n].

Notice that it is equivalent to ask that this identity hold only for X̊_1, . . . , X̊_n with E[X̊_k] = 0 for each k. Indeed, given arbitrary X_1, . . . , X_n, we can set X̊_j = X_j − E[X_j] and obtain the relation by expanding the multiplication in

E[X̊_1 ··· X̊_n] = 0.

Random variables are then said to be independent if they generate independent subalgebras of A. Note that the value of E on the algebra generated by the (A^{(ι)})_{ι∈I} is completely prescribed by its values on the individual A^{(ι)} and the condition that they be independent. In fact, our condition for independence still makes sense and still prescribes joint moments in terms of pure ones if we take A^{(ι)} ⊂ A which commute with one another, but may not themselves be commutative!
Algebras of random variables may always be embedded into a larger algebra in a manner which makes them independent. The usual approach is to take products on the level of probability spaces (Ω, F, P); however, since we have forgotten the underlying probability space, this is not possible in our context. We will perform the construction in a more complicated manner, which will be easily duplicated in the non-commutative case.
A vector space with specified state vector is a triple (V, V̊, ξ) where V is a vector space such that V = V̊ ⊕ Cξ. We define a state ϕ_ξ on L(V), the space of linear operators on V, by taking ϕ_ξ(T) to be the unique element of C so that T(ξ) ∈ V̊ + ϕ_ξ(T)ξ. This makes (L(V), ϕ_ξ) into a probability space (and any probability space (A, E) can be realised in this form by letting V = A, ξ = 1_A, and V̊ = ker E, then having A act on V by multiplication).

Now suppose that for each ι ∈ I we have A^{(ι)} ⊂ L(V^{(ι)}) with state vectors ξ^{(ι)}; for simplicity, assume I = {1, . . . , n} is finite. Consider the space V = V^{(1)} ⊗ · · · ⊗ V^{(n)} and let ξ = ξ^{(1)} ⊗ · · · ⊗ ξ^{(n)}, with

V̊ = span_{j∈I} V̊^{(j)} ⊗ ⊗_{ι≠j} V^{(ι)};

note that ϕ_ξ = ϕ_{ξ^{(1)}} ⊗ · · · ⊗ ϕ_{ξ^{(n)}}. Then A^{(ι)} embeds in L(V) as 1 ⊗ · · · ⊗ 1 ⊗ A^{(ι)} ⊗ 1 ⊗ · · · ⊗ 1 in a state preserving way, and these embeddings are independent from one another under ϕ_ξ.

This gives us another way of recognising independence: subalgebras (A^{(ι)})_{ι∈I} of (A, ϕ) are independent if there exist vector spaces with specified state vectors (V^{(ι)}, V̊^{(ι)}, ξ^{(ι)}) and embeddings ρ^{(ι)} : A^{(ι)} → L(V^{(ι)}) such that the following square commutes:

ϕ ∘ ev = ϕ_ξ ∘ (⊗_{ι∈I} ρ^{(ι)}) as maps ⊗_{ι∈I} A^{(ι)} → C.
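In a small finite-dimensional case, the tensor-product picture above can be checked directly. The following sketch (plain Python, no external libraries; all function and variable names are our own, not from the text) realises two 2×2 matrix "algebras" acting on V = V^{(1)} ⊗ V^{(2)} via Kronecker products, and verifies that the vector state at ξ = ξ^{(1)} ⊗ ξ^{(2)} factorises the mixed moment.

```python
# Sketch: independence via tensor products (illustrative names, not from the text).
# Two operators act on V = V1 ⊗ V2 as X ⊗ 1 and 1 ⊗ Y; the vector state at
# xi = xi1 ⊗ xi2 then satisfies E[XY] = E[X] E[Y].

def kron(a, b):
    """Kronecker product of two square matrices given as nested lists."""
    n, m = len(a), len(b)
    return [[a[i // m][j // m] * b[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def matvec(a, v):
    return [sum(a[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def vector_state(T, xi):
    """phi_xi(T) = <xi, T xi> (real entries, so no conjugation needed)."""
    return sum(x * y for x, y in zip(xi, matvec(T, xi)))

I2 = [[1.0, 0.0], [0.0, 1.0]]
X = [[2.0, 1.0], [1.0, 3.0]]   # a random variable in the first algebra
Y = [[0.0, 1.0], [1.0, 1.0]]   # a random variable in the second algebra
xi1, xi2 = [1.0, 0.0], [0.0, 1.0]        # state vectors xi^(1), xi^(2)
xi = [a * b for a in xi1 for b in xi2]   # xi = xi1 ⊗ xi2

Xbig, Ybig = kron(X, I2), kron(I2, Y)    # X ⊗ 1 and 1 ⊗ Y acting on V

lhs = vector_state(matmul(Xbig, Ybig), xi)
rhs = vector_state(Xbig, xi) * vector_state(Ybig, xi)
print(lhs, rhs)   # the mixed moment factorises: E[XY] = E[X] E[Y]
```

Here E[X] = ⟨ξ^{(1)}, Xξ^{(1)}⟩ = 2 and E[Y] = ⟨ξ^{(2)}, Yξ^{(2)}⟩ = 1, so both sides equal 2.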
1.1.2 Free probability.
We will be working in the context of a non-commutative algebra, so it will be useful to have some operator algebraic concepts. A (concrete unital) C∗-algebra is a ∗-subalgebra of B(H) containing the identity operator which is closed in the operator norm topology. A (concrete) von Neumann algebra is a ∗-subalgebra of B(H) which is closed under the weak operator topology, or equivalently is closed under the strong operator topology, or equivalently is equal to its own bicommutant: that is, if A′ ⊂ B(H) is defined as {x ∈ B(H) : xa = ax ∀a ∈ A}, a von Neumann algebra satisfies A′′ = A. A von Neumann algebra is called a factor if its centre is C1. A state on a ∗-algebra A is a linear map ϕ : A → C such that ϕ(1) = 1 and ϕ is positive in the sense that ϕ(a∗a) ≥ 0 for a ∈ A. A state is

• faithful if ϕ(a∗a) = 0 implies a = 0;
• normal if for every bounded increasing net aα → a, ϕ(a) = limα ϕ(aα); and
• tracial if ϕ(ab) = ϕ(ba).
When we deal with von Neumann algebras, we will usually assume that they are finite (in the sense that no projection is equivalent to a proper subprojection) and equipped with faithful normal tracial states. A finite factor which is not isomorphic to B(H) and admits a faithful
normal tracial state is said to be of type II_1.
A non-commutative probability space is a pair (A, ϕ) where A is a unital ∗-algebra and
ϕ : A → C is a state. Elements of A can be thought of as non-commutative random variables. Note that we make no assumption that A is non-commutative, so L∞(Ω, P) is a non-commutative probability space; the “non-commutative” merely references the potential of A to contain elements which do not commute, in much the same way that “densely-defined unbounded operators” may be defined everywhere and bounded.
Suppose that A is a C∗-algebra, and x ∈ A is normal. Then by Gelfand duality, the C∗-algebra generated by x is isometrically isomorphic to the continuous functions on the (compact) spectrum of x, which when coupled with the spectral measure of x becomes a probability space. Thus normal elements of C∗ non-commutative probability spaces are random variables in a very concrete sense. Given a collection of elements x_1, . . . , x_n ∈ A, their law is the map

µ_{(x_1,...,x_n)} : C⟨X_1, . . . , X_n⟩ → C

given by µ_{(x_1,...,x_n)}(P(X_1, . . . , X_n)) = ϕ(P(x_1, . . . , x_n)). When x_1, . . . , x_n are commuting normal elements of a C∗-algebra, this corresponds to their joint law in the usual probability sense.
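As a toy illustration of this concreteness (a sketch with our own choice of example, not from the text): for a self-adjoint matrix with the normalised trace as state, the moments µ_x(X^k) agree with integrating t^k against the eigenvalue-counting measure.

```python
# Sketch: for a self-adjoint matrix x with the normalised trace as state,
# the law mu_x(X^k) = tr(x^k) agrees with integrating t^k against the
# spectral (eigenvalue-counting) measure. Illustrative, not from the text.

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tr(a):
    return sum(a[i][i] for i in range(len(a))) / len(a)  # normalised trace

x = [[0.0, 1.0], [1.0, 0.0]]   # self-adjoint, with eigenvalues +1 and -1
eigs = [1.0, -1.0]

p = [[1.0 if i == j else 0.0 for j in range(2)] for i in range(2)]
for k in range(1, 7):
    p = matmul(p, x)                        # p = x^k
    moment = tr(p)                          # mu_x(X^k)
    classical = sum(t**k for t in eigs) / 2 # moment of the spectral measure
    print(k, moment, classical)
# Even moments are 1 and odd moments are 0, matching the measure (delta_1 + delta_-1)/2.
```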
We are now ready to introduce free independence as a non-commutative analogue of independence, using the final picture of independence as our starting point. For this we introduce the free product of vector spaces with specified state vectors, an analogue to the tensor product in much the same way that the free product of groups is a non-commutative analogue of their direct product. Given a collection (V^{(ι)}, V̊^{(ι)}, ξ^{(ι)})_{ι∈I} of vector spaces with specified state vectors (so once again V^{(ι)} = V̊^{(ι)} ⊕ Cξ^{(ι)}) we define their free product to be the triple (V, V̊, ξ) where V = V̊ ⊕ Cξ and

V̊ := ⊕_{n≥1} ⊕_{i_1≠i_2≠···≠i_n} V̊^{(i_1)} ⊗ · · · ⊗ V̊^{(i_n)}.

Here the condition i_1 ≠ i_2 ≠ · · · ≠ i_n requires only that adjacent indices be unequal. We denote ∗_{ι∈I} V^{(ι)} := V and ∗_{ι∈I} ξ^{(ι)} := ξ.

Suppose V = ∗_{ι∈I} V^{(ι)}; then we can embed L(V^{(j)}) in L(V) as follows. First, we define

V(ℓ, j) := Cξ ⊕ ⊕_{n≥1} ⊕_{j≠i_1≠···≠i_n} V̊^{(i_1)} ⊗ · · · ⊗ V̊^{(i_n)}.

The space V(ℓ, j) can be thought of as words in V which do not begin with a letter from V̊^{(j)}. Then we have V^{(j)} ⊗ V(ℓ, j) ≅ V via the map W_j which makes the obvious identifications:

W_j(ξ^{(j)} ⊗ ξ) = ξ
W_j(ξ^{(j)} ⊗ (V̊^{(i_1)} ⊗ · · · ⊗ V̊^{(i_n)})) = V̊^{(i_1)} ⊗ · · · ⊗ V̊^{(i_n)}
W_j(V̊^{(j)} ⊗ ξ) = V̊^{(j)}
W_j(V̊^{(j)} ⊗ (V̊^{(i_1)} ⊗ · · · ⊗ V̊^{(i_n)})) = V̊^{(j)} ⊗ V̊^{(i_1)} ⊗ · · · ⊗ V̊^{(i_n)}.

This gives us a left representation λ^{(j)} : L(V^{(j)}) → L(V) via λ^{(j)}(T) = W_j ∘ (T ⊗ 1) ∘ W_j^{−1}. Note that we may perform a similar decomposition by factoring V^{(j)} out of V on the right, say U_j : V(r, j) ⊗ V^{(j)} → V (where V(r, j) is defined analogously to be words which do not end with a letter from V̊^{(j)}), and produce a right representation on the free product via

ρ^{(j)}(T) = U_j ∘ (1 ⊗ T) ∘ U_j^{−1}.
Now, given ∗-subalgebras (A^{(ι)})_{ι∈I} of a non-commutative probability space (A, ϕ), we say they are freely independent (or free) if there exist vector spaces with specified state vectors (V^{(ι)}, V̊^{(ι)}, ξ^{(ι)}) and embeddings π^{(ι)} : A^{(ι)} → L(V^{(ι)}) such that the following square commutes:

ϕ ∘ ev = ϕ_ξ ∘ (∗_{ι∈I} λ^{(ι)} ∘ π^{(ι)}) as maps ∗_{ι∈I} A^{(ι)} → C.

Here by ∗_{ι∈I} A^{(ι)} we mean the formal algebraic free product consisting of reduced words with letters coming from the A^{(ι)}, and by ∗_{ι∈I} λ^{(ι)} ∘ π^{(ι)} we mean the unique algebra homomorphism which is λ^{(ι)} ∘ π^{(ι)} when restricted to A^{(ι)}. Notice that given a family of algebras (A^{(ι)})_{ι∈I} we can use this construction to produce a state on their algebraic free product with respect to which they are free.
Example 1.1.1. Suppose that Γ_1, . . . , Γ_n are countable groups and Γ = Γ_1 ∗ · · · ∗ Γ_n. Let τ : C[Γ] → C be the linear functional given by τ(e) = 1 and τ(g) = 0 for other g ∈ Γ. Then (C[Γ], τ) is a non-commutative probability space, and the family of algebras (C[Γ_i])_{i=1}^{n} is freely independent. Notice that C[Γ] can be represented on ℓ²(Γ) via left translation, and τ(x) = ⟨δ_e, x · δ_e⟩.
Unfortunately, this is not a very practical condition for checking free independence. Fortunately, it can be reformulated in terms of conditions on mixed moments which are far more approachable.
Proposition 1.1.2. Suppose (A^{(ι)})_{ι∈I} is a family of unital ∗-subalgebras within a non-commutative probability space (A, ϕ). Then the family is freely independent if and only if whenever a_1, . . . , a_n ∈ A are such that a_j ∈ A^{(i_j)} with i_1 ≠ i_2 ≠ · · · ≠ i_n and ϕ(a_j) = 0 for each j, we have ϕ(a_1 ··· a_n) = 0. That is, if and only if alternating products of centred variables are centred.

Proof. Suppose (A^{(ι)})_{ι∈I} are free, with their freeness realised via ∗_{ι∈I} V^{(ι)}, and let a_1, . . . , a_n be as in the statement of the proposition. Notice that v_n := λ^{(i_n)} ∘ π^{(i_n)}(a_n)ξ ∈ V̊^{(i_n)} as ϕ(a_n) = 0. Similarly, v_{n−1} := λ^{(i_{n−1})} ∘ π^{(i_{n−1})}(a_{n−1})v_n ∈ V̊^{(i_{n−1})} ⊗ V̊^{(i_n)} since ϕ(a_{n−1}) = 0. Continuing this in the obvious manner, we find v_1 ∈ V̊^{(i_1)} ⊗ · · · ⊗ V̊^{(i_n)} and hence

ϕ_ξ((∗_{ι∈I} λ^{(ι)} ∘ π^{(ι)})(a_1 ··· a_n)) = 0.

Now, on the other hand, suppose that all alternating products of centred variables are themselves centred. Notice that this determines all mixed moments in terms of pure moments from the families: given arbitrary b_1, . . . , b_n with b_j ∈ A^{(i_j)} and i_1 ≠ i_2 ≠ · · · ≠ i_n, we find

ϕ((b_1 − ϕ(b_1)) ··· (b_n − ϕ(b_n))) = 0;

expanding this gives us an equation for ϕ(b_1 ··· b_n) in terms of products of mixed moments with fewer terms, which may themselves be recursively reduced to pure moments. Thus there is a unique state on ∗_{ι∈I} A^{(ι)} which satisfies this condition. However, if we let µ be the state on ∗_{ι∈I} A^{(ι)} which makes the individual algebras free, we find µ satisfies the alternating moment condition by the first part of this argument. Then by uniqueness, µ is equal to ϕ and so the (A^{(ι)})_{ι∈I} are free with respect to ϕ. □
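The uniqueness argument above is effectively an algorithm: centring an alternating word and expanding expresses any mixed moment of free variables through strictly shorter words and pure moments. A minimal sketch (all names are our own; the moment dictionaries are an illustrative setup with one generator per algebra):

```python
# Sketch of the recursion implicit in the proof: mixed moments of free
# variables computed by centring an alternating word and expanding.
# Words are lists of (algebra_label, power); moments[a][p] is phi of the
# p-th power of the single generator of algebra a (illustrative setup).
from itertools import combinations

def normalise(word):
    """Merge adjacent letters from the same algebra."""
    out = []
    for alg, p in word:
        if out and out[-1][0] == alg:
            out[-1] = (alg, out[-1][1] + p)
        else:
            out.append((alg, p))
    return out

def phi(word, moments):
    word = normalise(word)
    if not word:
        return 1.0
    if len(word) == 1:
        alg, p = word[0]
        return moments[alg][p]
    n = len(word)
    m = [moments[alg][p] for alg, p in word]   # phi of each single letter
    total = 0.0
    for size in range(n):                      # proper subsets S of [n]
        for S in combinations(range(n), size):
            coeff = (-1) ** (n - size)         # signs from expanding (b_j - phi(b_j))
            for j in range(n):
                if j not in S:
                    coeff *= m[j]
            total += coeff * phi([word[j] for j in S], moments)
    return -total   # phi(centred alternating product) = 0 forces this value

# Two free variables a (algebra 1) and b (algebra 2):
moments = {1: {1: 1.0, 2: 2.0}, 2: {1: 1.0, 2: 3.0}}
abab = [(1, 1), (2, 1), (1, 1), (2, 1)]
print(phi(abab, moments))
# Agrees with the classical consequence of freeness:
# phi(abab) = phi(a^2) phi(b)^2 + phi(a)^2 phi(b^2) - phi(a)^2 phi(b)^2.
```

With the moments above this gives 2·1 + 1·3 − 1·1 = 4, and for centred variables every alternating moment vanishes, as the proposition states.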
1.1.3 Free cumulants.
The free cumulants were introduced by Speicher to better understand the combinatorics of free independence and of free additive convolution [20, 21].
Definition 1.1.3. A partition of a finite set S is a set π = {B_1, . . . , B_n} of non-empty subsets of S which are pairwise disjoint with S = B_1 ∪ · · · ∪ B_n. The sets B_i are referred to as the blocks of π. For x, y ∈ S we write x ∼_π y if there is some B ∈ π with x ∈ B, y ∈ B. We define P(n) := {π : π is a partition of [n]}.

A partition π ∈ P(n) is said to be non-crossing if whenever 1 ≤ i < j < k < ℓ ≤ n are such that i ∼_π k and j ∼_π ℓ, we have j ∼_π k. The set of non-crossing partitions of n elements is denoted NC(n) := {π ∈ P(n) : π is non-crossing}.
Non-crossing partitions are so named because they are exactly those which can be dia- grammatically represented above the number line without needing to draw crossing blocks.
Example 1.1.4. There are 15 elements in P(4), of which 14 are non-crossing.
[Diagrams: the partitions {{1, 4}, {2, 3}}, {{1, 2, 4}, {3}}, and {{1, 3}, {2, 4}} drawn above the points 1, 2, 3, 4.] The partitions {{1, 4}, {2, 3}} and {{1, 2, 4}, {3}} are non-crossing, while {{1, 3}, {2, 4}} is the only element of P(4) which is not non-crossing.
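The counts in Example 1.1.4 can be verified by brute force. The following sketch (our own helper names) enumerates all partitions of [4] and tests the crossing condition:

```python
# Sketch: enumerate set partitions of [n] and count the non-crossing ones.
# For n = 4 there are 15 partitions, of which 14 are non-crossing (the
# Catalan number C_4); {{1,3},{2,4}} is the unique crossing partition.

def partitions(s):
    """All partitions of the list s, as lists of blocks."""
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for p in partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield [[first]] + p

def is_noncrossing(p):
    blocks = [set(b) for b in p]
    for b1 in blocks:
        for b2 in blocks:
            if b1 is b2:
                continue
            # b1 and b2 cross if i < j < k < l with i, k in b1 and j, l in b2
            for i in b1:
                for k in b1:
                    for j in b2:
                        for l in b2:
                            if i < j < k < l:
                                return False
    return True

all_p = list(partitions([1, 2, 3, 4]))
nc = [p for p in all_p if is_noncrossing(p)]
crossing = [sorted(map(sorted, p)) for p in all_p if not is_noncrossing(p)]
print(len(all_p), len(nc))   # 15 14
print(crossing)              # [[[1, 3], [2, 4]]]
```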
The set P(n) is partially ordered by refinement: given π, σ ∈ P(n) we say σ refines π, and write σ ≺ π, if every block of π is a union of blocks of σ. This ordering makes P(n) into a lattice with minimum element {{1}, . . . , {n}} and maximum element {[n]}, denoted respectively 0_{P(n)} and 1_{P(n)} (although when context makes it clear we may instead write 0_n and 1_n).
We are now ready to define the free cumulants.
Definition 1.1.5. Suppose (A, ϕ) is a non-commutative probability space. The free cumulants are a family of linear functionals (κ_n)_{n≥1} with κ_n : A^n → C defined recursively by the moment-cumulant formula: for any a_1, . . . , a_n ∈ A,

ϕ(a_1 ··· a_n) = Σ_{π∈NC(n)} Π_{{i_1<···<i_k}∈π} κ_k(a_{i_1}, . . . , a_{i_k}).

We may denote the product appearing in this relation as κ_π(a_1, . . . , a_n) when it is convenient to do so, and given B = {i_1 < ··· < i_k} ⊂ [n] we may abbreviate κ_k(a_{i_1}, . . . , a_{i_k}) =: κ_k(a_B). Likewise, it will often be convenient to write

ϕ(a_B) := ϕ(a_{i_1} ··· a_{i_k})  and  ϕ_π(a_1, . . . , a_n) := Π_{B∈π} ϕ(a_B).

Theorem 1.1.6 ([20]). Let (A^{(ι)})_{ι∈I} be subalgebras of a non-commutative probability space (A, ϕ). Then (A^{(ι)})_{ι∈I} are free if and only if all mixed cumulants vanish, i.e., whenever a_1, . . . , a_n ∈ A with a_j ∈ A^{(i_j)} and i_j ≠ i_k for some j, k, we have κ_n(a_1, . . . , a_n) = 0.

1.1.4 The incidence algebra of non-crossing partitions.

To better understand the cumulant functionals, Speicher in [20] made a study of the incidence algebra of non-crossing partitions. We provide an overview of some of those arguments, as they will provide a starting point for some of our arguments later on.

Proposition 1.1.7 ([20, Proposition 1]). Suppose σ ≺ π ∈ NC(n). Then there is a canonical decomposition of [σ, π] := {ρ ∈ NC(n) : σ ⪯ ρ ⪯ π} as a product of full lattices:

Π_{j=1}^{k} NC(a_j).

Sketch of proof. First, we notice that we may decompose the interval into pieces corresponding to the blocks of π:

[σ, π] ≅ Π_{B∈π} [{C ∈ σ : C ⊂ B}, {B}] ⊂ Π_{B∈π} NC(#B).

(Here we have abused notation slightly: {B} ∉ NC(#B) in general since the labels are wrong; this can be fixed in the obvious way, by relabelling with the numbers 1, . . . , #B while maintaining ordering.)

Now, given an interval [σ, 1_n], if possible we select a block {i_1 < ··· < i_k} ∈ σ and 1 ≤ j < k so that i_j + 1 < i_{j+1}. We then take

σ_0 = {C ∈ σ : C ⊂ [n] \ {i_j + 1, . . . , i_{j+1} − 1}},
σ_1 = {C ∈ σ : C ⊂ {i_j + 1, . . . , i_{j+1} − 1}}.

Note that the non-crossing condition ensures that every block of σ is accounted for above. We now make the identification

[σ, 1_n] ≅ [σ_0, 1_{n−(i_{j+1}−i_j−1)}] × [σ_1 ∪ {{0}}, 1_{i_{j+1}−i_j}].
We have added a dummy node to the right hand product to represent the block which is dividing the partition. The left hand interval here should be ignored if σ_0 = 1_{n−(i_{j+1}−i_j−1)}. This may be continued until we are left with a product of terms of the form [σ, 1_n], with σ consisting of intervals. But in this case, [σ, 1_n] ≅ NC(#σ). There is some potential ambiguity due to the fact that as lattices, [σ, π] ≅ NC(1) × [σ, π]; we resolve this by insisting that we only take copies of NC(1) when the last step in the decomposition requires us to do so. □

Example 1.1.8. [Diagrams: an interval [σ, 1_7], with σ drawn on the points 1, . . . , 7, decomposes into intervals drawn on the points 1, 5, 6, 7 and 0, 2, 3, 4.] Thus [σ, 1_7] ≅ NC(2) × NC(3).

The incidence algebra on the lattice of non-crossing partitions consists of the following set of functions:

IA(NC) := { f : ⊔_{n≥1} NC(n) × NC(n) → C | f(σ, π) = 0 unless σ ⪯ π }.

We equip IA(NC) with the convolution product given as follows:

(f ⋆ g)(σ, π) := Σ_{σ≤ρ≤π} f(σ, ρ) g(ρ, π).

Definition 1.1.9. A function f ∈ IA(NC) is said to be multiplicative if whenever [σ, π] ≅ Π_{j=1}^{k} NC(a_j) as in Proposition 1.1.7, one has

f(σ, π) = Π_{j=1}^{k} f(0_{a_j}, 1_{a_j}).

Notice that for a given sequence of numbers (x_n), there is a unique multiplicative function f so that f(0_n, 1_n) = x_n for every n.

Proposition 1.1.10 ([20, Proposition 2]). The convolution of two multiplicative functions is multiplicative.

Sketch of proof. Suppose [σ, π] ≅ Π_{j=1}^{k} NC(a_j). Then

(f ⋆ g)(σ, π) = Σ_{σ≤ρ≤π} f(σ, ρ) g(ρ, π) = Π_{j=1}^{k} Σ_{ρ∈NC(a_j)} f(0_{a_j}, ρ) g(ρ, 1_{a_j}) = Π_{j=1}^{k} (f ⋆ g)(0_{a_j}, 1_{a_j}). □

We next label some particular elements of IA(NC):

δ_NC(σ, π) := 1 if σ = π and 0 otherwise,  ζ_NC(σ, π) := 1 if σ ⪯ π and 0 otherwise.

One can check that ζ_NC is invertible and compute its inverse recursively; we define µ_NC, the Möbius function, to be the inverse of ζ_NC under convolution, so µ_NC ⋆ ζ_NC = δ_NC = ζ_NC ⋆ µ_NC.

We can now relate this to cumulants in the following natural way: suppose a_1, . . . , a_n ∈ A; then the moment-cumulant formula may be "convolved" with ζ_NC to give the following equivalent formulations:

κ_n(a_1, . . . , a_n) = Σ_{π∈NC(n)} ϕ_π(a_1, . . . , a_n) µ_NC(π, 1_n),
κ_π(a_1, . . . , a_n) = Σ_{σ∈NC(n), σ≤π} ϕ_σ(a_1, . . . , a_n) µ_NC(σ, π).

Moreover, given a ∈ A, if m and k are the multiplicative functions corresponding to the sequences (ϕ(a^n))_n and (κ_n(a, . . . , a))_n in the sense that m(0_n, 1_n) = ϕ(a^n) and k(0_n, 1_n) = κ_n(a, . . . , a), we find that m = k ⋆ ζ_NC and k = m ⋆ µ_NC.

1.1.5 Bi-free probability.

In Subsection 1.1.2, we remarked that given a collection of algebras represented on vector spaces with specified state vectors, we could as easily represent them on the right of the free product of those vector spaces as on the left. Pondering this situation, Voiculescu introduced in [30] the concept of bi-free independence as an independence relation on pairs of sub-algebras in a non-commutative probability space. Bi-free probability aims to study simultaneously the behaviour of "left" and "right" variables.

Definition 1.1.11. Suppose (A, ϕ) is a non-commutative probability space. A pair of faces in A is a pair (A_ℓ, A_r) of unital ∗-algebras together with a pair of unital embeddings α_ℓ : A_ℓ → A, α_r : A_r → A, though we will often take A_ℓ, A_r ⊂ A with the inclusion map for simplicity.

A family of pairs of faces (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} is said to be bi-free if there exist vector spaces with specified state vectors (V^{(ι)}, V̊^{(ι)}, ξ^{(ι)}) and embeddings π_ℓ^{(ι)} : A_ℓ^{(ι)} → L(V^{(ι)}) and π_r^{(ι)} : A_r^{(ι)} → L(V^{(ι)}) such that the following square commutes:

ϕ ∘ (∗_{ι∈I} (α_ℓ^{(ι)} ∗ α_r^{(ι)})) = ϕ_ξ ∘ (∗_{ι∈I} (λ^{(ι)} ∘ π_ℓ^{(ι)}) ∗ (ρ^{(ι)} ∘ π_r^{(ι)})) as maps ∗_{ι∈I} (A_ℓ^{(ι)} ∗ A_r^{(ι)}) → C.

Notice that once again, given a family of pairs of faces (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} we can use this construction to produce a state on their algebraic free product with respect to which they are bi-free. Given maps ϕ^{(ι)} : A_ℓ^{(ι)} ∗ A_r^{(ι)} → C we will denote this bi-free state by ∗∗_{ι∈I} ϕ^{(ι)}.
Example 1.1.12. As in Example 1.1.1, let Γ_1, . . . , Γ_n be countable groups, and Γ = Γ_1 ∗ · · · ∗ Γ_n. Let α_ℓ, α_r : C[Γ] → B(ℓ²(Γ)) be the left and right regular representations of Γ, respectively, and set α_ℓ^{(i)} = α_ℓ|_{C[Γ_i]}, α_r^{(i)} = α_r|_{C[Γ_i]}. Equip B(ℓ²(Γ)) with the state τ(x) = ⟨δ_e, x · δ_e⟩. Then the family of pairs of faces (α_ℓ^{(i)}(C[Γ_i]), α_r^{(i)}(C[Γ_i]))_{i=1}^{n} is bi-free in B(ℓ²(Γ)).

Proposition 2.9 of [30] demonstrates that checking for bi-freeness in a single free product representation suffices, and shows that there is a unique joint law on (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} which makes them bi-free. In particular, it suffices to check bi-free independence in the following universal representation: suppose (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} are a family of pairs of faces in a non-commutative probability space (A, ϕ). Take V^{(ι)} = A_ℓ^{(ι)} ∗ A_r^{(ι)}, a free product of algebras viewed as a vector space, and set ξ^{(ι)} = 1 ∈ V^{(ι)}, V̊^{(ι)} = ker ϕ^{(ι)}, where ϕ^{(ι)} is the restriction of ϕ to the algebra generated by A_ℓ^{(ι)} and A_r^{(ι)}, viewed as a map on V^{(ι)} by evaluation in A. Then let π_ℓ^{(ι)}, π_r^{(ι)} be the left actions of A_ℓ^{(ι)} and A_r^{(ι)} on V^{(ι)}. (We do want the left action of A_r^{(ι)} here; the fact that it is a right face will be reflected in its action on ∗_{ι∈I} V^{(ι)}.) Let q_i = λ^{(ι(i))} if χ(i) = ℓ, and q_i = ρ^{(ι(i))} if χ(i) = r, where λ^{(ι)}, ρ^{(ι)} : L(V^{(ι)}) → L(∗_{ι∈I} V^{(ι)}) are as above. Then (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} are bi-free if and only if for every z_1, . . . , z_n with z_i ∈ A_{χ(i)}^{(ι(i))}, we have

ϕ(z_1 ··· z_n) = ϕ_ξ((q_1 ∘ π_{χ(1)}^{(ι(1))}(z_1)) ··· (q_n ∘ π_{χ(n)}^{(ι(n))}(z_n))).

Further, some relations between bi-free independence and both free and classical independence were established in [30]. In particular, if (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} are bi-freely independent, we have that (A_ℓ^{(ι)})_{ι∈I} are free, (A_r^{(ι)})_{ι∈I} are free, and if J, K ⊂ I are disjoint, the algebras generated by (A_ℓ^{(ι)})_{ι∈J} and (A_r^{(ι)})_{ι∈K} are classically independent.
Moreover, if all the right faces are C then bi-freeness of (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} is equivalent to freeness of the left faces; if all the left faces are C then bi-freeness of (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} is equivalent to freeness of the right faces; and (A_ℓ^{(1)}, C) is bi-free from (C, A_r^{(2)}) if and only if A_ℓ^{(1)} is classically independent from A_r^{(2)}.

In [14], Mastnak and Nica introduced a set of linear functionals which they conjectured to play the role of bi-free cumulants, which we will now introduce. Their cumulants are indexed not just by a number of arguments, but also by a choice of left or right for each, to account for the extra dynamics present in the bi-free setting.

Let n ∈ N, and take χ : [n] → {ℓ, r}. We then define the permutation s_χ as follows: if χ^{−1}(ℓ) = {i_1 < ··· < i_k} and χ^{−1}(r) = {i_{k+1} > ··· > i_n}, then s_χ(j) := i_j.

Definition 1.1.13. Let (A, ϕ) be a non-commutative probability space. The (ℓ, r)-cumulant functionals (or bi-free cumulants, or when context makes it clear, cumulants) are the maps

(κ_χ : A^n → C)_{n≥1, χ:[n]→{ℓ,r}},

which are defined recursively by the moment-cumulant relation:

ϕ(z_1 ··· z_n) = Σ_{π∈P(n), s_χ^{−1}·π∈NC(n)} Π_{B∈π} κ_{χ|_B}(z_B).

As in the free case, we may denote the product on the right hand side by κ_π(z_1, . . . , z_n).

Taking inspiration from Theorem 1.1.6, Mastnak and Nica defined a family of pairs of faces (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} in a non-commutative probability space (A, ϕ) to be combinatorially bi-free if whenever z_1, . . . , z_n ∈ A, i_1, . . . , i_n ∈ I are not all equal, and χ : [n] → {ℓ, r} with z_j ∈ A_{χ(j)}^{(i_j)}, it follows that κ_χ(z_1, . . . , z_n) = 0. Moreover, they conjectured that combinatorial bi-free independence was equivalent to bi-free independence. Notice that the bi-free cumulants corresponding to χ ≡ ℓ are just the free cumulants.

1.1.6 Free convolutions.

Free probability provides us with operations defined on the set of laws of random variables.
Indeed, suppose that a, b ∈ A are free in some non-commutative probability space. We then define the free additive convolution of their laws to be the law µ_a ⊞ µ_b := µ_{a+b} : C⟨X⟩ → C; since all moments of a + b may be computed from the laws of a and b, this is well-defined and does not depend on the realization of a and b. Moreover, if a and b are self-adjoint elements of a C*-algebra, then since a + b remains self-adjoint, free additive convolution gives a map from the space of pairs of laws of real-valued bounded random variables to the space of laws of real-valued bounded random variables.

The situation of multiplicative convolution is slightly more intricate. Once again, given a, b ∈ A which are free, we define the free multiplicative convolution of their laws to be the law µ_a ⊠ µ_b := µ_{ab}. Since the joint law of a and b is tracial (which can be checked using the cyclic symmetry of the non-crossing partitions) we find that ⊠ is commutative. If a and b are unitaries in a C*-algebra, then so is ab, and we find that ⊠ maps pairs of distributions on T to distributions on T. Similarly, if a and b are positive, then a^{1/2} b a^{1/2} is positive and has the same distribution as ab (though not the same ∗-distribution, as the former is self-adjoint while the latter is not); thus ⊠ takes pairs of bounded distributions on R_+ to bounded distributions on R_+. These operations can be extended to the setting of distributions without compact support, but we do not need these technicalities here; see, e.g., [12].

The effect of additive convolution may be described by the free cumulants. Indeed, if a, b are free, using the multi-linearity of cumulants and the vanishing of mixed cumulants, we find that

κ_k(a + b, …, a + b) = κ_k(a, …, a) + κ_k(b, …, b).

In order to understand the combinatorics of the multiplicative convolution, we need to introduce the Kreweras complement; this description is due to Nica and Speicher in [17].
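Before turning to the Kreweras complement, we note that the cumulant description of ⊞ is directly computable: one converts each law's moments to cumulants, adds the cumulant sequences, and converts back. The sketch below (in Python; the helper names are ours) implements the cumulant-to-moment direction via the standard recursion over the block of a non-crossing partition containing the element 1.

```python
from math import prod

def compositions(total, parts):
    """All tuples of `parts` non-negative integers summing to `total`."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def moments_from_cumulants(kappa, n):
    """kappa maps k to the k-th free cumulant; returns [m_1, ..., m_n].
    Uses m_size = sum_k kappa_k * (sum over the gaps left between the
    legs of the block containing the element 1)."""
    m = [1]  # m[0] = m_0 = 1
    for size in range(1, n + 1):
        m.append(sum(
            kappa.get(k, 0) * sum(
                prod(m[g] for g in gaps)
                for gaps in compositions(size - k, k))
            for k in range(1, size + 1)))
    return m[1:]

# Two standard semicircular laws each have kappa_2 = 1 and no other
# cumulants; under boxplus the cumulants add, so the convolution has
# kappa_2 = 2 and even moments 2^k * Catalan_k.
print(moments_from_cumulants({2: 2}, 4))  # -> [0, 2, 0, 8]
```

In particular µ_a ⊞ µ_b is computed without ever realizing a and b on a common space, exactly as the well-definedness discussion above promises.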
The Kreweras complement K_NC : NC(n) → NC(n) is a lattice anti-isomorphism defined as follows. Take π ∈ NC(n). Then K̄_NC(π) is the maximum partition of {1̄, …, n̄} such that π ∪ K̄_NC(π) is non-crossing as a partition of the ordered set {1 < 1̄ < 2 < ⋯ < n < n̄}. Finally, we define K_NC(π) := { B ⊂ [n] : {j̄ : j ∈ B} ∈ K̄_NC(π) }; that is, K_NC(π) is the non-crossing partition obtained by removing the bars from K̄_NC(π). Note that K_NC²(π) corresponds to performing a cyclic permutation on the labels of π.

Example 1.1.14. Suppose π = {{1}, {2, 4, 8}, {3}, {5}, {6, 7}}. Then we see that K_NC(π) = {{1, 8}, {2, 3}, {4, 5, 7}, {6}}. On the other hand, we find that K_NC²(π) = {{8}, {1, 3, 7}, {2}, {4}, {5, 6}}.

The Kreweras complement is useful to us because it describes multiplicative convolution. Indeed, if a and b are free, we have

κ_n(ab, …, ab) = Σ_{π ∈ NC(n)} κ_π(a, …, a) κ_{K_NC(π)}(b, …, b).

1.1.7 Fock space.

We mention here the notion of a Fock space, which will be useful for many future arguments and examples. Given a Hilbert space H, we define the Fock space F(H) as follows:

F(H) := CΩ ⊕ ⨁_{n≥1} H^{⊗n}.

The vector Ω ∈ F(H) is referred to as the vacuum vector. We refer to the vector state corresponding to the vacuum vector, ω : T ↦ ⟨Ω, TΩ⟩, as the vacuum state. Given ξ ∈ H, we define its left creation operator ℓ(ξ) by

ℓ(ξ)Ω = ξ and ℓ(ξ)ξ_2 ⊗ ⋯ ⊗ ξ_n = ξ ⊗ ξ_2 ⊗ ⋯ ⊗ ξ_n.

The left annihilation operator corresponding to ξ is ℓ*(ξ), defined as the adjoint of ℓ(ξ); it satisfies

ℓ*(ξ)Ω = 0 and ℓ*(ξ)ξ_1 ⊗ ⋯ ⊗ ξ_n = ⟨ξ, ξ_1⟩ ξ_2 ⊗ ⋯ ⊗ ξ_n,

where we interpret an empty tensor product as the vector Ω. We also define analogously the right creation and annihilation operators r(ξ) and r*(ξ). To simplify notation, we will allow ourselves to factor tensor products over sums, and assume that tensoring with Ω corresponds to the identity map. Thus, for example, (ξ_1 + ξ_2 ⊗ ξ_3) ⊗ ξ_4 = ξ_1 ⊗ ξ_4 + ξ_2 ⊗ ξ_3 ⊗ ξ_4, and ξ ⊗ Ω = ξ = Ω ⊗ ξ.
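These operators are concrete enough to experiment with numerically. For a unit vector ξ spanning a one-dimensional H, the Fock space is spanned by Ω, ξ, ξ ⊗ ξ, …, on which ℓ(ξ) acts as a shift; truncating at tensor rank N computes the vacuum moments of s(ξ) = ℓ(ξ) + ℓ*(ξ) exactly up to order 2N. A small sketch (the function names are ours):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def vacuum_moment(power, depth):
    """<Omega, s^power Omega> for s = l(xi) + l*(xi) on the Fock space
    of a one-dimensional H, truncated at tensor rank `depth`.  Exact
    whenever power <= 2 * depth."""
    n = depth + 1
    # l(xi) shifts the basis vector of rank j to rank j + 1.
    shift = [[1 if i == j + 1 else 0 for j in range(n)] for i in range(n)]
    s = [[shift[i][j] + shift[j][i] for j in range(n)] for i in range(n)]
    out = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(power):
        out = matmul(out, s)
    return out[0][0]  # the (Omega, Omega) entry of s^power

print([vacuum_moment(2 * k, 6) for k in range(1, 5)])  # -> [1, 2, 5, 14]
```

The values 1, 2, 5, 14 are the Catalan numbers, the even moments of the standard semicircular distribution discussed below; the odd moments vanish.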
We state the following useful proposition without proof:

Proposition 1.1.15. Suppose that (H_ι)_{ι∈I} are pairwise orthogonal subspaces of H. Then the pairs of faces (C⟨ℓ(ξ), ℓ*(ξ) : ξ ∈ H_ι⟩, C⟨r(ξ), r*(ξ) : ξ ∈ H_ι⟩)_{ι∈I} are bi-freely independent. A fortiori, the families (C⟨ℓ(ξ), ℓ*(ξ) : ξ ∈ H_ι⟩)_{ι∈I} are freely independent.

Suppose that K ⊂ H is an R-linear subspace with the property that ⟨k_1, k_2⟩ ∈ R for every k_1, k_2 ∈ K. Then the von Neumann algebra generated by the elements of the form s(ξ) := ℓ(ξ) + ℓ*(ξ) for ξ ∈ K is a II_1 factor (provided dim(K) > 1, as otherwise it is abelian) with tracial state given by the vacuum state.

1.1.8 Operator-valued free probability.

We introduce briefly the concept of operator-valued free probability, also called free probability with amalgamation. The reader is encouraged to consult any of [16, 18, 22] for a more in-depth treatment. The concepts here are meant to serve as an analogue of conditional probability in classical probability theory; this discussion will necessitate returning briefly to the context of measure spaces, probability measures, and σ-algebras.

Suppose that (Ω, F_0, P) is a probability space, and F ⊂ F_0 is a sub-σ-algebra. A conditional expectation is a map

E[·|F] : L^∞(Ω, P) → L^∞(Ω, P)

with the properties that for any X ∈ L^∞(Ω, P), E[X|F] is F-measurable and for all A ∈ F,

∫_A X dP = ∫_A E[X|F] dP.

We notice that the range of E[·|F] is the subalgebra of L^∞(Ω, P) consisting of those functions which are F-measurable. It also follows that if Y ∈ L^∞(Ω, P) is F-measurable, then E[XY|F] = E[X|F] Y for any X ∈ L^∞(Ω, P). With these properties in hand, we are able to define a conditional expectation in the non-commutative setting.

Definition 1.1.16. Suppose that A is a ∗-algebra, and 1_A ∈ B ⊂ A is a ∗-subalgebra. A conditional expectation is a linear map E_B : A → B such that:

• for any b, b′ ∈ B and a ∈ A, E_B(bab′) = b E_B(a) b′; and
• for any b ∈ B, E_B(b) = b.
If such a map E_B exists, the pair (A, E_B) is said to be a B-valued probability space, and its elements are referred to as B-valued random variables. Once again, one can perform a free product construction of B-valued probability spaces. This leads to the following definition of amalgamated free independence.

Definition 1.1.17. Suppose (A, E) is a B-valued probability space, and for ι ∈ I, let A^{(ι)} ⊂ A be subalgebras with B ⊂ A^{(ι)}. Then (A^{(ι)})_{ι∈I} are said to be freely independent with amalgamation over B if whenever a_1, …, a_n ∈ A are such that a_j ∈ A^{(i_j)} with i_1 ≠ i_2 ≠ ⋯ ≠ i_n and E(a_j) = 0 for each j, we have E(a_1 ⋯ a_n) = 0.

Example 1.1.18. Suppose that A^{(1)}, A^{(2)} ⊂ A are freely independent in the tracial non-commutative probability space (A, τ). Let B = M_k(C) ⊂ M_k(C) ⊗ A ≅ M_k(A), and E := 1 ⊗ τ. Then M_k(A^{(1)}) and M_k(A^{(2)}) are free with amalgamation over B. Indeed, suppose A_1, …, A_n ∈ M_k(A) come from alternating subalgebras and satisfy E(A_i) = 0. Then if A_i = (a_{i;j,k})_{j,k}, we have τ(a_{i;j,k}) = 0. But each entry of A_1 ⋯ A_n is a sum of products of the form a_{1;j_1,k_1} a_{2;j_2,k_2} ⋯ a_{n;j_n,k_n}, and since these come from alternating A^{(1)}, A^{(2)}, we find τ(a_{1;j_1,k_1} ⋯ a_{n;j_n,k_n}) = 0. Hence E(A_1 ⋯ A_n) = 0.

1.1.9 Free entropy and regularity.

Free entropy was originally introduced by Voiculescu in [24] as an analogue of Shannon's entropy from information theory. Roughly speaking, it may be thought of as giving a measure of how smoothly a tuple of non-commutative random variables is distributed. If we are to speak of smoothly distributed variables, it would be helpful to know the free analogue of the Gaussian distribution.

Definition 1.1.19. Suppose that (A, ϕ) is a non-commutative probability space. A semicircular family in A is a tuple (S_1, …, S_n) of self-adjoint elements of A such that for any k > 0 and i_1, …, i_k, we have κ_k(S_{i_1}, …, S_{i_k}) = 0 unless k = 2.
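The cumulant condition above forces the even moments of each S_i with κ_2 = 1 to be the Catalan numbers 1, 2, 5, 14, …. This can be cross-checked against the semicircle density described in the next paragraph: we quote that density here (a standard fact, with radius R = 2 when κ_2 = 1) and integrate numerically by the midpoint rule. The code is only an illustrative sketch.

```python
import math

def semicircle_moment(k, R=2.0, steps=200000):
    """Midpoint-rule approximation of the (2k)-th moment of the
    semicircle law of radius R, density (2/(pi R^2)) sqrt(R^2 - x^2)."""
    h = 2 * R / steps
    total = 0.0
    for i in range(steps):
        x = -R + (i + 0.5) * h
        total += x ** (2 * k) * (2 / (math.pi * R * R)) * math.sqrt(R * R - x * x)
    return total * h

# With kappa_2 = 1 the radius is R = 2, and the even moments are the
# Catalan numbers.
print([round(semicircle_moment(k), 3) for k in range(1, 5)])
# -> [1.0, 2.0, 5.0, 14.0]
```

The agreement with the Catalan numbers is the moment-level statement that the law determined by Definition 1.1.19 is the semicircle distribution.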
The reason for the term "semicircular" is that the density of the random variable S_i is given by

(2 / (πR²)) √(R² − x²) χ_{[−R,R]}(x),

where R = 2√(κ_2(S_i, S_i)). A family of free semicircular variables may be realized on a Fock space: taking e_1, …, e_n ∈ C^n to be the standard orthonormal basis, if S_i = ℓ(e_i) + ℓ*(e_i) then (S_1, …, S_n) are freely independent semicirculars with κ_2(S_i, S_j) = δ_{i=j}. Semicircular families take on in free probability the role that Gaussian variables play in classical probability: for example, central limit-type sums converge in distribution to semicircular families.

There are two approaches to free entropy theory; so far it is unknown whether or not they produce different theories. The first historically is the "microstates" version of free entropy, introduced and developed by Voiculescu in [24–27]. The entropy of a tuple of operators, χ(x_1, …, x_n), is related to how many microstates the tuple has among the N × N matrices in the limit as N tends to ∞, where a microstate is a tuple of matrices whose joint distribution under the normalized trace is approximately that of the operators x_1, …, x_n. We will not give a definition of microstates free entropy here, as it is complex and we will be more concerned with the "non-microstates" approach detailed below. However, we will state the following theorems from [25] which will be of great use to us, as they allow us to compute entropy directly in some cases.

Theorem 1.1.20. Suppose that X ∈ L^∞(Ω, P) is a self-adjoint random variable, and µ_X its law. Then

χ(X) = ∬_{R²} log|s − t| dµ_X(s) dµ_X(t) + 3/4 + (1/2) log 2π.

An immediate consequence of the above is that if X has any atoms in its spectral measure, then χ(X) = −∞.

Theorem 1.1.21. Suppose that X_1, …, X_n are self-adjoint and freely independent in the non-commutative probability space (M, τ). Then

χ(X_1, …, X_n) = Σ_{i=1}^n χ(X_i).

1.1.9.1 Non-microstates free entropy.
Non-microstates free entropy was introduced by Voiculescu in [28] in an attempt to circumvent the computational difficulties of the microstates version. The approach is to begin by defining a free analogue of Fisher information and, using an analogy with classical probability, to take it to be the derivative of the entropy function.

Definition 1.1.22. Suppose that X_1, …, X_n ∈ M are self-adjoint elements in a finite tracial von Neumann algebra (M, τ) which are algebraically free (i.e., satisfy no polynomial identity). Let A := C⟨X_1, …, X_n⟩ ⊂ M be the algebra of polynomials in X_1, …, X_n. Finally, suppose that A generates M. Then for 1 ≤ i ≤ n, ∂_i is the densely-defined derivation from L²(M) to L²(M) ⊗ L²(M) with domain A, determined by linearity, the Leibniz rule (i.e., ∂_i(fg) = f · ∂_i(g) + ∂_i(f) · g), and the condition ∂_i(X_j) = δ_{i=j} 1 ⊗ 1.

We pause here to introduce some notation: if X is an N-N-bimodule, given ξ ∈ X and a, b ∈ N we write (a ⊗ b)#ξ := a · ξ · b, and extend linearly to N ⊗ N.

Now, if n = 1, and one interprets C⟨X⟩ ⊗ C⟨X⟩ as polynomial functions from C² to C, one finds that

∂_1(f)(s, t) = (f(s) − f(t)) / (s − t).

More generally, for arbitrary n, we have

Σ_{i=1}^n ∂_i(f)#(X_i ⊗ 1 − 1 ⊗ X_i) = f ⊗ 1 − 1 ⊗ f.

For this reason, ∂_i is often referred to as the free difference quotient. Also, notice that if m = X_{i_1} ⋯ X_{i_n} is a monomial, then

∂_i(m) = Σ_{j : i_j = i} X_{i_1} ⋯ X_{i_{j−1}} ⊗ X_{i_{j+1}} ⋯ X_{i_n}.

Definition 1.1.23. Suppose that X_1, …, X_n are self-adjoint and generate a finite tracial von Neumann algebra (M, τ). Suppose further that 1 ⊗ 1 ∈ D(∂_i*). Then the conjugate variable of X_i is defined to be ∂_i*(1 ⊗ 1) ∈ L²(M).

Suppose that ξ_1, …, ξ_n are the conjugate variables for X_1, …, X_n. Then for any f ∈ C⟨X_1, …, X_n⟩, we have

τ(ξ_i f) = τ ⊗ τ(∂_i f).

Definition 1.1.24. Suppose that X_1, …, X_n are self-adjoint variables which generate a finite tracial von Neumann algebra (M, τ). If the conjugate variables to X_1, …, X_n exist and are given by ξ_1, …
, ξ_n, then the (non-microstates) free Fisher information is defined as

Φ*(X_1, …, X_n) := Σ_{i=1}^n ‖ξ_i‖₂².

If any of the conjugate variables fails to exist, we set Φ*(X_1, …, X_n) := ∞. We further define the (non-microstates) free entropy to be

χ*(X_1, …, X_n) := (1/2) ∫₀^∞ ( n/(1+t) − Φ*(X_1 + t^{1/2} S_1, …, X_n + t^{1/2} S_n) ) dt + (n/2) log 2πe,

where (S_1, …, S_n) is a free semicircular system with variance 1, free from (X_1, …, X_n).

Theorem 1.1.25. Suppose that X_1, …, X_n are self-adjoint random variables in a finite tracial von Neumann algebra (M, τ). Then we have the following:

• χ*(X_1) = χ(X_1) (due to Voiculescu in [28]);
• χ(X_1, …, X_n) ≤ χ*(X_1, …, X_n) (due to Biane, Capitaine, and Guionnet in [5]).

Example 1.1.26. Suppose that S_1, …, S_n are a free semicircular family with variance 1. Then ξ_i = S_i are the conjugate variables. This statement is established by showing that the following identity holds for arbitrary f ∈ C⟨S_1, …, S_n⟩:

τ(S_i f) = τ ⊗ τ(∂_i f).

The above identity can be thought of as the free version of the following identity, which holds for independent standard Gaussian variables X_1, …, X_n:

E[X_i f(X_1, …, X_n)] = E[(∂f/∂x_i)(X_1, …, X_n)].

Moreover, it follows that Φ*(S_1, …, S_n) = n.

A related concept is that of a dual system.

Definition 1.1.27. Suppose that X_1, …, X_n are self-adjoint and generate a finite tracial von Neumann algebra (M, τ). A dual system to the operators is a tuple (Y_1, …, Y_n) of operators in B(L²(M)) such that

[Y_i, X_j] = δ_{i=j} P_1,

where P_1 is the orthogonal projection onto C1 ⊂ L²(M). More generally, if M ⊂ B(H), we may replace P_1 by any fixed rank-one projection.

Notice that with the identification C⟨X_1, …, X_n⟩ ⊗ C⟨X_1, …, X_n⟩ ≅ FR(L²(M)) given by a ⊗ b ↦ a τ(b ·), one has [Y_i, T] = (∂_i(T))#(1 ⊗ 1). One of the results in [28] is that the existence of a dual system for a tuple of operators is stronger than the assumption of finite free Fisher information.

Example 1.1.28.
Suppose that S_i := ℓ(e_i) + ℓ*(e_i) ∈ B(F(C^n)), so that (S_1, …, S_n) is a free semicircular family. Then the family admits a dual system given by (r*(e_1), …, r*(e_n)). Indeed, [r*(e_i), ℓ*(e_j)] = 0 while [r*(e_i), ℓ(e_j)] = δ_{i=j} P_Ω.

1.1.9.2 Algebraic variables.

Another useful sense of regularity of the distribution of a random variable is that of algebraicity.

Definition 1.1.29. Suppose that X ∈ L^∞(Ω, P) is self-adjoint. Then X is said to be algebraic if the Cauchy transform of its distribution,

S_{µ_X}(z) := ∫_R 1/(z − ζ) dµ_X(ζ),

is algebraic as a formal power series in z.

CHAPTER 2

Bi-free independence.

The goal of this chapter is to examine bi-free independence as introduced by Voiculescu in [30], and as presented in Subsection 1.1.5. We will show that the combinatorial bi-free independence of Mastnak and Nica is equivalent to bi-free independence, and also develop a vanishing moment condition inspired by the one mentioned in Proposition 1.1.2. To do this, we will need to introduce some combinatorial machinery.

2.1 Some combinatorial structures.

We need two main combinatorial structures for our arguments: bi-non-crossing partitions and shaded LR-diagrams.

2.1.1 Bi-non-crossing partitions.

Definition 2.1.1. Let χ : [n] → {ℓ, r}, and let s_χ ∈ S_n be as above: if χ^{-1}(ℓ) = {i_1 < ⋯ < i_k} and χ^{-1}(r) = {i_{k+1} > ⋯ > i_n}, then s_χ(j) := i_j. Then χ induces an ordering on [n] via i ≺_χ j if and only if s_χ^{-1}(i) < s_χ^{-1}(j). We will also denote intervals in this ordering by [i, j]_χ := {k ∈ [n] : i ⪯_χ k ⪯_χ j} (and use similar notation for open and half-open intervals), and refer to such sets as χ-intervals. The set of bi-non-crossing partitions with respect to χ (or χ-non-crossing partitions) is equal to the set of non-crossing partitions on the ordered set {(i, χ(i)) : i ∈ [n]}, with the ordering given by ≺_χ on the first coordinate. We denote the set of such partitions by BNC(χ).
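The definition is easy to mechanize: π lies in BNC(χ) precisely when relabelling its entries by s_χ^{-1} produces an ordinary non-crossing partition of [n]. A sketch (all function names are ours):

```python
def s_chi(chi):
    """s_chi as the list [i_1, ..., i_n], for chi a string over {'l','r'}
    indexed by 1..n: left indices increasing, then right indices
    decreasing."""
    n = len(chi)
    lefts = [i for i in range(1, n + 1) if chi[i - 1] == 'l']
    rights = [i for i in range(1, n + 1) if chi[i - 1] == 'r']
    return lefts + rights[::-1]

def is_noncrossing(blocks):
    """No a < b < c < d with a, c in one block and b, d in another."""
    label = {x: i for i, block in enumerate(blocks) for x in block}
    elems = sorted(label)
    for a in elems:
        for b in elems:
            for c in elems:
                for d in elems:
                    if (a < b < c < d and label[a] == label[c]
                            and label[b] == label[d] != label[a]):
                        return False
    return True

def in_bnc(blocks, chi):
    """pi is in BNC(chi) iff s_chi^{-1} . pi is non-crossing."""
    inv = {v: j + 1 for j, v in enumerate(s_chi(chi))}
    return is_noncrossing([[inv[x] for x in block] for block in blocks])

# chi^{-1}(l) = {1,3,4,6,8}, chi^{-1}(r) = {2,5,7}, as in the example
# discussed next in the text.
chi = 'lrllrlrl'
pi = [[1, 3], [2, 4], [5, 6, 7, 8]]
print(is_noncrossing(pi), in_bnc(pi, chi))  # -> False True
```

On this data the check confirms that {{1,3}, {2,4}, {5,6,7,8}} is crossing in the usual order on [8] yet bi-non-crossing with respect to χ.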
It will be convenient for us to think of elements of BNC(χ) as partitions of [n], and this is accomplished by projecting onto the first coordinate; from here on we will always make this identification implicitly. The reason we have formally made π include the information from χ is two-fold: we wish to be able to recover χ from π, and we wish BNC(χ) to be formally disjoint for different χ. However, this technicality is almost never worth calling attention to.

We also remark that χ-non-crossing partitions correspond precisely to those which may be drawn without crossings between a pair of vertical lines, with labelled nodes added to the left or right line according to χ. In this picture, the ordering ≺_χ corresponds to reading the labels down the left line and then up the right line. If we have such a diagram drawn using horizontal and vertical lines so that every block of π contains at most one vertical line segment, we will refer to that segment as the spine of the block it corresponds to, and to the horizontal pieces as ribs.

Example 2.1.2. Suppose χ : [8] → {ℓ, r} is such that χ^{-1}(ℓ) = {1, 3, 4, 6, 8} and χ^{-1}(r) = {2, 5, 7}. Then {{1, 3}, {2, 4}, {5, 6, 7, 8}} ∈ BNC(χ) even though it is not a non-crossing partition. We also have the ordering 1 ≺_χ 3 ≺_χ 4 ≺_χ 6 ≺_χ 8 ≺_χ 7 ≺_χ 5 ≺_χ 2.

Finally, observe that BNC(χ) = {π ∈ P(n) : s_χ^{-1} · π ∈ NC(n)}; thus we have the following expression for the bi-free cumulants:

ϕ(z_1 ⋯ z_n) = Σ_{π ∈ BNC(χ)} κ_π(z_1, …, z_n).

2.1.2 Shaded LR-diagrams.

We will now introduce a set of diagrams, called shaded LR-diagrams, which will be of use when computing joint moments of bi-free variables. In this context, a diagram consists of the following data:

• n ∈ N_0;
• a map χ : [n] → {ℓ, r};
• a map ι : [n] → I;
• a χ-non-crossing partition π which, in P(n), satisfies π ≤ {ι^{-1}(j) : j ∈ I}; and
• a subset S ⊂ π of blocks which are outer in the sense that if B ∈ S and C ∈ π, and there are i, k ∈ C and j ∈ B with i ≺_χ j ≺_χ k, then B = C.
The diagrammatic representation will be given by drawing the partition π, colouring the blocks of π according to ι, and extending the spines of blocks in S to the top of the diagram. Note that the outerness assumption on the blocks in S ensures that this can be done in a non-crossing fashion; moreover, it allows us to order the blocks in S by ≺χ. We will often abbreviate the condition π ≺ {ι−1(j): j ∈ I} simply as π ≺ ι. Given B ∈ π, we will often write ι(B) as shorthand for the value ι(b) for b ∈ B. Example 2.1.3. Let χ : [8] → {`, r} be as in Example 2.1.2, define ι by the sequence ( , , , , , , , ) , and let S = {{1, 3} , {2, 4}}. Then we have the following drawing of this diagram: 1 2 3 4 5 6 7 8 We now construct the set of shaded LR-diagrams recursively. First, we insist that the unique diagram with n = 0 (hence no nodes) is a shaded LR-diagram. Now, suppose D = (n, χ, ι, π, S) is a shaded LR-diagram, and letχ ˆ :[n + 1] → {`, r}, ˆι :[n + 1] → I be such that for i > 1,χ ˆ(i + 1) = χ(i) and ˆι(i + 1) = ι(i). We prescribeπ ˆ in the following way: 25 • (i + 1) ∼πˆ (j + 1) if and only if i ∼π j; • if S = ∅, then 1 is a singleton inπ ˆ; • if B ∈ S is the block with spine closest to theχ ˆ(1) side of the diagram and the colour of B matches ˆι(1), then 1 ∼πˆ (j + 1) for every j ∈ B; • if neither of the above conditions occurs, 1 is a singleton inπ ˆ. Notice thatπ ˆ ∈ BN C(ˆχ) as π ∈ BN C(χ) and all blocks of S are outer; notice also that the block containing 1 inπ ˆ is also outer. Finally, let Sˆ be the set of blocks in S which, after relabelling, are still blocks ofπ ˆ; that is, if 1 was added to a pre-existing block of π, ˆ ˆ we exclude it from S. Then if B1 ∈ πˆ is the block containing 1, both (n + 1, χ,ˆ ˆι, π,ˆ S) and ˆ (n+1, χ,ˆ ˆι, π,ˆ S ∪{B1}) are shaded LR-diagrams. The set of shaded LR-diagrams, LR, is the smallest set which is closed under the above extension. 
Notice that each shaded LR-diagram on n+1 nodes arises through this construction from a unique LR-diagram on n nodes, which can be recovered simply by deleting the top part of the diagram. We define some particular subsets of LR: LRk := {(n, χ, ι, π, S) ∈ LR : |S| = k} , LR(χ, ι) := {(n, χ0, ι0, π, S) ∈ LR : χ0 = χ, ι0 = ι} , LRk(χ, ι) := LRk ∩ LR(χ, ι). Example 2.1.4. Let D = (8, χ, ι, π, S) be as in Example 2.1.3. Suppose we wish to extend D with ˆι(1) = . Then we have the following four diagrams as options, depending on the choice ofχ ˆ(1). 1 1 1 1 2 2 2 2 3 3 3 3 4 4 4 4 5 5 5 5 6 6 6 6 7 7 7 7 8 8 8 8 9 9 9 9 26 The first and last diagrams lie in LR2, the second lies in LR3, and the third lies in LR1. Notice that we can only increase the number of spines reaching the top of the diagram when we add a node whose colour does not match that of the extended spine nearest it; thus the spines reaching the top of the diagram will always be of alternating colours. Definition 2.1.5. Fix n ∈ N, and let χ :[n] → {`, r} and ι :[n] → I. We define the set BNC(χ, ι) as BNC(χ, ι) := {π ∈ BN C(χ):(n, χ, ι, π, ∅) ∈ LR} . Note that even if ι is constant on the blocks of π ∈ BN C(χ), it is not necessarily true that π ∈ BN C(χ, ι)! This is because we have asked that π can be constructed through a certain process which connects blocks of the same colour when both are on the same level and such a connection maintains the bi-non-crossing property. Example 2.1.6. If n = 3, ι ≡ , and χ ≡ `, π = {{1, 3} , {2}} ∈/ BN C(χ, ι). 1 2 3 If it were being constructed as an LR-diagram, it would be necessary to connect 2 to the spine extending upwards from 3 when it is added. In order to better discuss these conditions, we introduce some terminology: Definition 2.1.7. Let π ∈ BN C(χ), and B,C ∈ π. Then B and C are said to be piled if max(min(B), min(C)) ≤ min(max(B), max(C)). 
Diagrammatically, this means that there is necessarily some horizontal level on which both the spines of B and C are present. A third block D ∈ π separates B from C if it is piled with both, equal to neither, and its spine lies between the spines of B and C. Equivalently, D is piled with B and C, and there are j, k ∈ D so that B ⊂ (j, k)_χ while C ∩ [j, k]_χ = ∅, or vice versa. Finally, B and C are tangled if they are piled and no block separates them. We extend these notions to LR in the obvious way.

We can now succinctly state the condition for a partition to lie in BNC(χ, ι): there must be no two tangled blocks of the same colour. We also remark that any element π ∈ BNC(χ) lies in BNC(χ, ι) for some ι : [n] → I provided |I| ≥ 2: one may first colour a block of π, then colour all blocks it is tangled with in the opposite colour, and continue in this fashion until π is coloured or there are no more restrictions, at which point another uncoloured block may be coloured. Since the tangling relation does not allow for any cycles (any given non-singleton block splits the diagram into two or more regions, and only it may be tangled with blocks in more than one of these regions), this will always produce a consistent colouring. We conclude

BNC(χ) = ⋃_{ι ∈ I^n} BNC(χ, ι).

Definition 2.1.8. Suppose π, σ ∈ BNC(χ). We say σ is a lateral refinement of π, and write σ ≤_lat π, if σ ≤ π and σ may be obtained from π by laterally cutting the spines of blocks of π; that is, each block B = {k_1 < ⋯ < k_b} ∈ π is divided by σ into blocks consisting of consecutive elements of B.

Example 2.1.9. Suppose χ = (ℓ, r, ℓ, r) and π = 1_χ. Then {{1, 2}, {3, 4}} ≤_lat π while {{1, 3}, {2, 4}} ≰_lat π.

2.2 Computing joint moments of bi-free variables.

We will now make explicit the connection between shaded LR-diagrams and joint moments of bi-free variables. Let (V^{(ι)}, V̊^{(ι)}, ξ^{(ι)})_{ι∈I} be a family of vector spaces with specified state vectors, and let π_ℓ^{(ι)}, π_r^{(ι)} be the left and right representations of L(V^{(ι)}) on (V, V̊, ξ) := ∗_{ι∈I}(V^{(ι)}, V̊^{(ι)}, ξ^{(ι)}).
Also, define ψ : V → C as the functional with v ∈ V + ψ ξ , and let p(ι)(x) = ψ(ι)(x)ξ(ι), and ψ, p in a similar fashion for V . Notice that, for T ∈ L(V (ι)), (ι) (ι) (ι) p T p = ϕξ(ι) (T )p . Let n ∈ N. We define a map Ξ which takes as arguments B ⊂ [n] and T1,...,Tn ∈ 28 S (ι) ι∈I L(V ) with the requirement that all operators corresponding to elements of B come ( (θ) (θ) from a single L V ), and produces a vector in V as follows: with B = k1 < ··· < k|B| , (θ) (θ) (θ) (θ) Ξ(B; T1,...,Tn) := Tk1 (1 − p )Tk2 (1 − p )Tk3 ··· (1 − p )Tk|B| ξ . S (ι) Given a map ι :[n] → I, we say a sequence of operators T1,...,Tn ∈ ι∈I L(V ) (ι) is consistent with ι if Ti ∈ L(V ) for each i ∈ [n]. We will now define a map Ψ which takes as arguments a shaded LR-diagram D = (n, χ, ι, π, S) and a sequence of operators T1,...,Tn consistent with ι, and produces an element of V . Let S = {B1,...,Bk} with B1 ≺χ · · · ≺χ Bk. Then k ! Y (ι) O (ι(Bi)) Ψ(D; T1,...,Tn) := ψ (Ξ(B; T1,...,Tn)) (1 − p )Ξ(Bi; T1,...,Tn) . B∈π\S i=1 Here the tensor product should be interpreted as ordered from least i to largest, and if k = 0, the empty tensor product factor should be read as the vector ξ. Example 2.2.1. Let D = (8, χ, ι, π, S) be as in Example 2.1.3, and take T1,...,T8 consistent with ι. Then ( ) ( ) ( ) ( ) ( ) Ψ(D; T1,...,T8) = ψ T5(1 − p )T6(1 − p )T7(1 − p )T8ξ ( ) ( ) ( ) ( ) ( ) ( ) · (1 − p )T1(1 − p )T3ξ ⊗ (1 − p )T2(1 − p )T4ξ . Proposition 2.2.2. Fix χ :[n] → {`, r} and ι :[n] → I. Let T1,...,Tn be consistent with ι. Then (ι(1)) (ι(n)) X πχ(1) (T1) ··· πχ(n) (Tn)ξ = Ψ(D; T1,...,Tn). D∈LR(χ,ι) Moreover, (ι(1)) (ι(n)) X X |π|−|σ| ϕ(π (T1) ··· π (Tn)) = (−1) ϕσ(T1,...,Tn). χ(1) χ(n) σ∈BN C(χ) π∈BN C(χ,ι) σ≤latπ Proof. We will establish the first identity by induction on n. The case n = 0 is immediate since if D is the empty diagram, Ξ(D; ) = ξ. 
Therefore suppose n > 0; our induction 29 hypothesis applied to T2,...,Tn tells us that, withχ ˆ(j) = χ(j + 1) and ˆι(j) = ι(j + 1) for 1 ≤ j < n, (ι(2)) (ι(n)) X πχ(2) (T2) ··· πχ(n) ξ = Ψ(D; T2,...,Tn). D∈LR(ˆχ,ˆι) Fix D = (n − 1, χ,ˆ ˆι, π, S) ∈ LR(ˆχ, ˆι), and suppose for the moment that χ(1) = `. Let D0,D1 ∈ BN C(χ, ι) be the two diagrams constructed from D, where D1 has a spine extending from 1 to the top and D0 does not. (j) Recall the bijections Wj : V ⊗ V (`, j) → V from Subsection 1.1.2. If S is empty or its ≺χ-least element (i.e., the left-most spine reaching the top) is B with ι(B) 6= ι(1), then w := Ψ(D; T2,...,Tn) ∈ V (`, ι(1)); hence (ι(1)) (ι(1)) π` (T1)w = Wj (T1 ⊗ 1)(ξ ⊗ w) (ι(1)) (ι(1)) (ι(1)) = ψ (T1)w + (1 − p )T1ξ ⊗ w (ι(1)) (ι(1)) = ψ (Ξ({1} ; T1,...,Tn)) w + (1 − p )Ξ({1} ; T1,...,Tn) ⊗ w = Ψ(D0; T1,...,Tn) + Ψ(D1; T1,...,Tn). ˆ ˆ On the other hand, suppose the ≺χ-least element of S is B with ι(B) = ι(1); let B = n o ˆ ˆ ˚(ι(1)) {1} ∪ j + 1 : j ∈ B . Let v := Ξ(B; T2,...,Tn) ∈ V , and let w ∈ V (`, ι(1)) be so that ˚ (ι(1)) W(ι(1))(v ⊗ w) = Ψ(D; T2,...,Tn). Since v ∈ V (ι(1)), we have (1 − p )v = v. Then we have (ι(1)) π` (T1)Ψ(D; T2,...,Tn) = Wj ((T1 ⊗ 1)(v ⊗ w)) (ι(1)) (ι(1)) (ι(1)) (ι(1)) = ψ (T1(1 − p )v)w + (1 − p )T1(1 − p )v ⊗ w (ι(1)) (ι(1)) = ψ (Ξ(B; T1,...,Tn)) w + (1 − p )Ξ(B; T1,...,Tn) ⊗ w = Ψ(D0; T1,...,Tn) + Ψ(D1; T1,...,Tn). As each D0 ∈ LR(χ, ι) arises from exactly one diagram in LR(ˆχ, ˆι), we have (ι(1)) (ι(n)) X π` (T1) ··· πχ(n) ξ = Ψ(D; T1,...,Tn). D∈LR(χ,ι) The argument for χ(1) = r is the same, except we use the decomposition V ∼= V (r, ι(1)) ⊗ (ι(1)) V and work with the ≺χ-greatest element of S (i.e., the furthest right spine which reaches the top). 30 We now must establish the second claimed identity, which will follow from the first. Indeed, notice that (ι(1)) (ι(n)) (ι(1)) (ι(n)) ϕ(πχ(1) (T1) ··· πχ(n) (Tn)) = ψ(πχ(1) (T1) ··· πχ(n) (Tn)ξ). 
Applying ψ to the first equation, we see that the only terms on the right hand side which survive correspond to diagrams in LR0(χ, ι). Let D = (n, χ, ι, π, ∅) ∈ LR0(χ, ι) be such a diagram, and take B = {k1 < ··· < kb} ∈ π. Then (ι(B)) ψ (Ξ(B; T1,...,Tn)) (ι(B)) (θ) (θ) (θ) (θ) = ψ Tk1 (1 − p )Tk2 (1 − p )Tk3 ··· (1 − p )Tk|B| ξ X X m = (−1) ϕ (ι(B)) T ··· T ··· ϕ (ι(B)) T ··· T ξ ξ k1 kq1 ξ kqm+1 kb m≥0 1≤q1<··· Each term in the sum corresponds to a lateral refinement of π with cuts only in the block B, with sign determined by the parity of the number of cuts. Then there is a correspondence between lateral refinements of π and terms in ! Y X X m (−1) ϕ (ι(B)) T ··· T ··· ϕ (ι(B)) T ··· T ξ , ξ k1 kq1 ξ kqm+1 kb B∈π m≥0 1≤q1<··· ψ (Ψ(D; T1,...,Tn)) ! Y X X m = (−1) ϕ (ι(B)) T ··· T ··· ϕ (ι(B)) T ··· T ξ ξ k1 kq1 ξ kqm+1 kb B∈π m≥0 1≤q1<··· X |π|−|σ| = (−1) ϕσ(T1,...,Tn). σ∈BN C(χ) σ≤latπ Summing over D ∈ LR0(χ, ι) (or equivalently, π ∈ BN C(χ, ι)) and reversing the order of the summations yields the desired identity. (ι) (ι) Corollary 2.2.3. Let (A` , Ar ) be a family of pairs of faces in a non-commutative ι∈I (ι) (ι) probability space (A, ϕ). Then (A` , Ar ) are bi-freely independent if and only if for ι∈I 31 (ι(i)) every n ∈ N, χ :[n] → {`, r}, ι :[n] → I, and z1, . . . , zn ∈ A with zi ∈ Aχ(i) , we have X X |π|−|σ| ϕ(z1 ··· zn) = (−1) ϕσ(z1, . . . , zn). σ∈BN C(χ) π∈BN C(χ,ι) σ≤latπ (ι) (ι) Proof. If (A` , Ar ) are bi-free, this follows from applying the previous proposition to ι∈I the representation guaranteed by the definition of bi-freeness. Conversely, suppose that the above equation holds whenever appropriate. Let (ι) (ι) (ι) µ = ∗∗ ϕ : ∗ A` ∗ Ar → ι∈I ι∈I C (ι) (ι) (ι) (ι) be the state with respect to which (A` , Ar ) are bi-free in ∗ι∈I A` ∗ Ar such that ι∈I (ι) µ| (ι) (ι) = ϕ . Then we have A` ∗ Ar X X |π|−|σ| ϕ(z1 ··· zn) = (−1) ϕσ(z1, . . . , zn) σ∈BN C(χ) π∈BN C(χ,ι) σ≤latπ X X |π|−|σ| = (−1) µσ(z1, . . . 
, z_n) (with the sums running over σ ∈ BNC(χ) and π ∈ BNC(χ, ι) with σ ≤_lat π, as before)

= µ(z_1 ⋯ z_n).

But then ((A_ℓ^{(ι)}, A_r^{(ι)}))_{ι∈I} are bi-free with respect to ϕ, since they are bi-free with respect to µ.

2.3 The incidence algebra on bi-non-crossing partitions.

The main goal of this section is to develop the appropriate Möbius function for bi-non-crossing partitions, with an eye towards applying it to the bi-free cumulant functionals.

Definition 2.3.1. The incidence algebra on the lattice of bi-non-crossing partitions consists of the following set of functions:

IA(BNC) := { f : ∐_{n≥1} ∐_{χ:[n]→{ℓ,r}} BNC(χ) × BNC(χ) → C : f(σ, π) = 0 unless σ ≤ π }.

Once again, we define the convolution product:

(f ⋆ g)(σ, π) := Σ_{σ ≤ ρ ≤ π} f(σ, ρ) g(ρ, π).

It is immediately verified that the convolution is associative, essentially because

Σ_{σ≤ρ≤π} Σ_{ρ≤ρ′≤π} = Σ_{σ≤ρ≤ρ′≤π} = Σ_{σ≤ρ′≤π} Σ_{σ≤ρ≤ρ′}.

2.3.1 Multiplicative functions.

We wish to demonstrate that the same kind of decomposition into products of full intervals holds in the bi-non-crossing case as in the non-crossing case, so that we can make sense of multiplicative functions. However, we want this decomposition to be aware of more than just the lattice structure; it should be consistent with the choice of sides. We adapt the decomposition of Proposition 1.1.7 from the free setting as follows.

Suppose that we have σ ≤ π ∈ BNC(χ). We first split both partitions according to the blocks of π:

[σ, π] → Π_{B∈π} [σ|_B, {B}],

where σ|_B and {B} are thought of as elements of BNC(χ|_B); thus we will from here on assume π = 1_χ.

Now, if possible, let B = {i_1 ≺_χ ⋯ ≺_χ i_k} be a block of σ so that there are some a ∈ [n] and 1 ≤ j < k with i_j ≺_χ a ≺_χ i_{j+1}. In this case, let I_1 = (i_j, i_{j+1})_{≺_χ}, I_0 = [n] \ I_1, and I_1′ = I_1 ∪ {max(i_j, i_{j+1})}; then take σ_0 and σ_1′ to be the sub-partitions of σ corresponding to the sets I_0 and I_1, and set σ_1 = σ_1′ ∪ {{max(i_j, i_{j+1})}}. We then have

[σ, 1_χ] → [σ_0, 1_{χ|_{I_0}}] × [σ_1, 1_{χ|_{I_1′}}].

If σ_0 = 1_{χ|_{I_0}}, we exclude the left-hand term above.
Lastly, if σ consists entirely of χ-intervals, we retract each interval to its lowest node. That is, if σ = {B_1, …, B_m}, i_j = max(B_j), and i_1 < ⋯ < i_m, then taking χ′(j) = χ(i_j) we have

[σ, 1_χ] → [0_{χ′}, 1_{χ′}].

Example 2.3.2. Letting a diagram stand for the interval between it and the corresponding maximum partition, the decomposition pictured proceeds in two steps to [0_{(r,r)}, 1_{(r,r)}] × [0_{(ℓ,r,r)}, 1_{(ℓ,r,r)}].

Proposition 2.3.3. The decomposition described above provides an isomorphism of lattices.

Proof. We note that as lattices, [σ, π] ≅ [s_χ · σ, s_χ · π]. After performing this identification, every step of the decomposition above corresponds to one in the decomposition of [s_χ · σ, s_χ · π] in the lattice of non-crossing partitions, as in [20, Proposition 1] (restated in Proposition 1.1.7). All we have done is make choices of sides for the new nodes which are added, and for nodes obtained by contracting blocks, but we have done so in such a way as to make them consistent with the original lattice.

Definition 2.3.4. A function f ∈ IA(BNC) is said to be multiplicative if whenever σ, π ∈ BNC(χ) are such that [σ, π] decomposes as

Π_{j=1}^m BNC(χ_j),

it follows that

f(σ, π) = Π_{j=1}^m f(0_{χ_j}, 1_{χ_j}).

Lemma 2.3.5. The convolution of multiplicative functions is multiplicative.

The proof is identical to that of Proposition 1.1.10.

Definition 2.3.6. Mirroring the free case, we define some particular elements of IA(BNC): δ_BNC(σ, π) := 1 if σ = π and 0 otherwise, and ζ_BNC(σ, π) := 1 if σ ≤ π and 0 otherwise. We further define the Möbius function µ_BNC by the relation µ_BNC ⋆ ζ_BNC = δ_BNC.

To check that ζ_BNC is actually invertible, one may appeal to the fact that ζ_NC is. Indeed, if σ, π ∈ BNC(χ), then ζ_BNC(σ, π) = ζ_NC(s_χ^{-1} · σ, s_χ^{-1} · π) (as the lattice BNC(χ) is isomorphic to NC(n), with isomorphism given by s_χ^{-1}). Hence if one defines µ_BNC(σ, π) := µ_NC(s_χ^{-1} · σ, s_χ^{-1} · π) it follows that µ_BNC ⋆ ζ_BNC = δ_BNC = ζ_BNC ⋆ µ_BNC.
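In the one-variable free case, Möbius inversion of m = k ⋆ ζ is the familiar computation of free cumulants from a law's moments, and it can be carried out directly by inverting the moment-cumulant recursion. A sketch (the function names are ours):

```python
from math import prod

def compositions(total, parts):
    """All tuples of `parts` non-negative integers summing to `total`."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def cumulants_from_moments(m):
    """Invert the free moment-cumulant relation: given [m_1, ..., m_n],
    return the free cumulants [k_1, ..., k_n].  The recursion runs over
    the block of a non-crossing partition containing the element 1."""
    n = len(m)
    mom = [1] + list(m)  # mom[j] = m_j, with m_0 = 1
    kappa = {}
    for size in range(1, n + 1):
        rest = sum(
            kappa[k] * sum(prod(mom[g] for g in gaps)
                           for gaps in compositions(size - k, k))
            for k in range(1, size))
        kappa[size] = mom[size] - rest
    return [kappa[k] for k in range(1, n + 1)]

# Standard semicircular: moments 0, 1, 0, 2, 0, 5 give kappa_2 = 1 only.
print(cumulants_from_moments([0, 1, 0, 2, 0, 5]))  # -> [0, 1, 0, 0, 0, 0]
```

The same inversion applied to the moments 1, 2, 5, 14 of a free Poisson law of rate 1 returns the constant cumulant sequence 1, 1, 1, 1, a second standard sanity check.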
Further, notice that δ_BNC is the convolutive identity, and δ_BNC and ζ_BNC are clearly multiplicative. We also have that µ_BNC is multiplicative, and this may be verified by once again appealing to its relation to µ_NC.

Remark 2.3.7. Suppose that a_ℓ, a_r ∈ A are elements in a non-commutative probability space. Let m, k ∈ IA(BNC) be the unique multiplicative functions with m(0_χ, 1_χ) = ϕ(a_{χ(1)} ⋯ a_{χ(n)}) and k(0_χ, 1_χ) = κ_χ(a_{χ(1)}, …, a_{χ(n)}). Then we once again have the relations m = k ⋆ ζ_BNC and k = m ⋆ µ_BNC. Likewise, we once again have

κ_π(a_1, …, a_n) = Σ_{σ∈BNC(χ), σ≤π} ϕ_σ(a_1, …, a_n) µ_BNC(σ, π).

2.4 Unifying bi-free independence.

We are now ready to start the main thrust of our argument to relate the concepts of bi-free independence and combinatorial bi-free independence.

2.4.1 Summation considerations.

Our first goal is to put the condition from Corollary 2.2.3 into a more convenient form. Having introduced the Möbius function for the lattice BNC, we are now able to do so.

Proposition 2.4.1. Let χ : [n] → {ℓ, r} and ι : [n] → I. Then for every σ ∈ BNC(χ) such that σ ≤ ι,

Σ_{π∈BNC(χ,ι), σ≤_lat π} (−1)^{|π|−|σ|} = Σ_{π∈BNC(χ), σ≤π≤ι} µ_BNC(σ, π).

Our strategy is to first establish a simple case by appealing to free probability, and then reduce all other cases to it.

Lemma 2.4.2. The formula in Proposition 2.4.1 holds when χ ≡ ℓ.

Proof. Notice that if χ ≡ ℓ, we have BNC(χ) = NC(n) and

µ_BNC|_{BNC(χ)×BNC(χ)} = µ_NC|_{NC(n)×NC(n)}.

Now if (A_ℓ^{(ι)})_{ι∈I} are taken to be free, we have for any z_1, …, z_n with z_i ∈ A_ℓ^{(ι(i))} that

ϕ(z_1 ⋯ z_n) = Σ_{π∈NC(n)} κ_π(z_1, …, z_n)
 = Σ_{π∈NC(n), π≤ι} κ_π(z_1, …, z_n)
 = Σ_{π∈NC(n), π≤ι} Σ_{σ∈NC(n), σ≤π} µ_NC(σ, π) ϕ_σ(z_1, …, z_n)
 = Σ_{σ∈NC(n), σ≤ι} ( Σ_{π∈NC(n), σ≤π≤ι} µ_NC(σ, π) ) ϕ_σ(z_1, …, z_n).

On the other hand, (A_ℓ^{(ι)}, ℂ)_{ι∈I} are bi-free, so we find

ϕ(z_1 ⋯ z_n) = Σ_{σ∈BNC(χ)} Σ_{π∈BNC(χ,ι), σ≤_lat π} (−1)^{|π|−|σ|} ϕ_σ(z_1, …, z_n)
 = Σ_{σ∈NC(n)} ( Σ_{π∈BNC(χ,ι), σ≤_lat π} (−1)^{|π|−|σ|} ) ϕ_σ(z_1, …, z_n).
Now, fix π ∈ NC(n) with π ≤ ι, and construct variables y_1, …, y_n so that the appropriate collections are free and the only pure moments of these variables which do not vanish are those precisely corresponding to the blocks of π. Then only the term corresponding to π will survive in each of the above sums, and we conclude

Σ_{π∈BNC(χ), σ≤π≤ι} µ_BNC(σ, π) = Σ_{π∈NC(n), σ≤π≤ι} µ_NC(σ, π) = Σ_{π∈BNC(χ,ι), σ≤_lat π} (−1)^{|π|−|σ|}.

Lemma 2.4.3. Let χ, χ⃗ : [n] → {ℓ, r} be so that χ(j) = χ⃗(j) for 1 ≤ j < n, χ(n) = ℓ, and χ⃗(n) = r. Take ι : [n] → I and let σ ∈ BNC(χ) be such that σ ≤ ι; let σ⃗ ∈ BNC(χ⃗) be the χ⃗-non-crossing partition with the same blocks as σ. Then

Σ_{π∈BNC(χ,ι), σ≤_lat π} (−1)^{|π|−|σ|} = Σ_{π⃗∈BNC(χ⃗,ι), σ⃗≤_lat π⃗} (−1)^{|π⃗|−|σ⃗|},

and

Σ_{π∈BNC(χ), σ≤π≤ι} µ_BNC(σ, π) = Σ_{π⃗∈BNC(χ⃗), σ⃗≤π⃗≤ι} µ_BNC(σ⃗, π⃗).

Proof. Notice that there is a correspondence between BNC(χ) and BNC(χ⃗) given by identifying elements which have the same block structure, and moreover this also holds for BNC(χ, ι) and BNC(χ⃗, ι). Moreover, this identification preserves the partial orderings ≤ and ≤_lat, as well as µ_BNC. Thus both identities hold.

Lemma 2.4.4. Let χ : [n] → {ℓ, r} and k ∈ [n−1] be so that χ(k) = ℓ and χ(k+1) = r. Take ι : [n] → I and let σ ∈ BNC(χ) be such that σ ≤ ι. Now, define χ′ : [n] → {ℓ, r} and ι′ : [n] → I as follows:

χ′(j) = χ(k+1) if j = k, χ(k) if j = k+1, and χ(j) otherwise; ι′(j) = ι(k+1) if j = k, ι(k) if j = k+1, and ι(j) otherwise.

Let σ′ ∈ BNC(χ′) be the bi-non-crossing partition obtained by interchanging k and k+1 in σ (note that σ′ ≤ ι′). Then

Σ_{π∈BNC(χ,ι), σ≤_lat π} (−1)^{|π|−|σ|} = Σ_{π′∈BNC(χ′,ι′), σ′≤_lat π′} (−1)^{|π′|−|σ′|},

and

Σ_{π∈BNC(χ), σ≤π≤ι} µ_BNC(σ, π) = Σ_{π′∈BNC(χ′), σ′≤π′≤ι′} µ_BNC(σ′, π′).

Proof.
Once again, observe that the correspondence between BNC(χ) and BNC(χ′) obtained by exchanging k and k+1 is a bijection which preserves the partial orders ≤ and ≤_lat, and leaves µ_BNC invariant; this follows because s_{χ′} agrees with s_χ composed with the transposition of k and k+1. Moreover, π ≤ ι if and only if π′ ≤ ι′, so the second claimed equality holds.

However, it is not necessarily the case that the bijection takes BNC(χ, ι) to BNC(χ′, ι′), as it may take a partition π ∈ BNC(χ) which has no tangled blocks of the same colour to one which does, or vice versa. Note that this correspondence only fails when either exactly one of σ ≤_lat π and σ′ ≤_lat π′ holds (which requires that k ≁_σ k+1 and k ∼_π k+1, which ensures ι(k) = ι(k+1)), or exactly one of π ∈ BNC(χ, ι) and π′ ∈ BNC(χ′, ι′) holds (which requires that blocks of the same colour are entangled in exactly one of π and π′, so k ≁_π k+1 but ι(k) = ι(k+1)). Thus the two sums agree unless k ≁_σ k+1 and ι(k) = ι(k+1). Therefore let us assume we are in this case.

We will split the sum on the left hand side of the first equation into three sums: one over partitions where k and k+1 are in separated blocks of π; one over terms with k ∼_π k+1; and one over the rest:

Σ_{π∈BNC(χ,ι), σ≤_lat π} (−1)^{|π|−|σ|} = ( Σ_{k, k+1 separated in π} + Σ_{k ∼_π k+1} + Σ_{k ≁_π k+1, unseparated in π} ) (−1)^{|π|−|σ|},

each sum on the right running over π ∈ BNC(χ, ι) with σ ≤_lat π satisfying the stated condition. We now claim that the second and third sums cancel. Indeed, if k and k+1 are in blocks which are distinct and not separated in π, then it must be the case that π ≤ {[1, k], [k+1, n]}: k may not be connected to a lower node, nor k+1 to a higher, or else they would be tangled and of the same colour; meanwhile, no other block may have nodes both less than and greater than k, or else it would separate the two. In particular, the partition ρ obtained by joining the blocks containing k and k+1 is still an element of BNC(χ, ι) and still has σ ≤_lat ρ, as the only additional cut required is a lateral one between k and k+1.
Further, it is clear that |ρ| = |π| − 1, so their contributions to the sums cancel. As this correspondence is easily inverted, the contributions of the two sums vanish, and we are left with

Σ_{π∈BNC(χ,ι), σ≤_lat π} (−1)^{|π|−|σ|} = Σ_{π∈BNC(χ,ι), σ≤_lat π, k and k+1 separated in π} (−1)^{|π|−|σ|}.

But now if π ∈ BNC(χ, ι) is such that k and k+1 are in distinct separated blocks of π, then k and k+1 are in distinct separated blocks of π′, and in particular none of the conditions we are concerned with can fail: σ′ ≤_lat π′ as no cuts are necessary between k and k+1, which is the only region where changes have been made; and no blocks have become entangled which were not before, as the only blocks which have changed are separated. Likewise every π′ ∈ BNC(χ′, ι′) with k and k+1 in separated blocks comes from such a π ∈ BNC(χ, ι). We conclude

Σ_{π∈BNC(χ,ι), σ≤_lat π} (−1)^{|π|−|σ|} = Σ_{π∈BNC(χ,ι), σ≤_lat π, k and k+1 separated in π} (−1)^{|π|−|σ|}
 = Σ_{π′∈BNC(χ′,ι′), σ′≤_lat π′, k and k+1 separated in π′} (−1)^{|π′|−|σ′|} = Σ_{π′∈BNC(χ′,ι′), σ′≤_lat π′} (−1)^{|π′|−|σ′|}.

Example 2.4.5. We take a moment here to show that the correspondence in the proof above can indeed fail to carry BNC(χ, ι) onto BNC(χ′, ι′), and so our proof is not more complicated than necessary. Let χ be given by the sequence (ℓ, ℓ, r) and ι ≡ 1. Then χ′ corresponds to the sequence (ℓ, r, ℓ). Yet if σ = {{1, 2}, {3}} then σ′ = {{1, 3}, {2}}, and we have σ ∈ BNC(χ, ι) while σ′ ∉ BNC(χ′, ι′): the issue is that in σ′, {2} is tangled with {1, 3}, even though {1, 2} and {3} were not tangled in σ. [Diagrams of σ and σ′ on the nodes 1, 2, 3 omitted.]

It is likewise easy to obtain examples of σ ∈ BNC(χ) \ BNC(χ, ι) with σ′ ∈ BNC(χ′, ι′).

Proof of Proposition 2.4.1. By Lemma 2.4.2, we have that the equation holds whenever χ ≡ ℓ. By repeatedly applying Lemmata 2.4.3 and 2.4.4, we conclude that it holds for all other choices of χ as well.

Corollary 2.4.6. Let (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} be a family of pairs of faces in a non-commutative probability space (A, ϕ).
Then (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} are bi-freely independent if and only if for every n ∈ ℕ, χ : [n] → {ℓ, r}, ι : [n] → I, and z_1, …, z_n ∈ A with z_i ∈ A_{χ(i)}^{(ι(i))}, we have

ϕ(z_1 ⋯ z_n) = Σ_{σ∈BNC(χ)} ( Σ_{π∈BNC(χ), σ≤π≤ι} µ_BNC(σ, π) ) ϕ_σ(z_1, …, z_n).

Proof. This follows immediately from Proposition 2.4.1 and Corollary 2.2.3.

2.4.2 Bi-free independence is equivalent to combinatorial bi-free independence.

Theorem 2.4.7. Let (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} be a family of pairs of faces in a non-commutative probability space (A, ϕ). Then (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} are bi-freely independent if and only if they are combinatorially bi-freely independent.

Proof. Suppose (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} are bi-free. We will show that all mixed bi-free cumulants vanish by induction on n, with the case n = 1 being vacuously satisfied. Therefore fix n ∈ ℕ and χ : [n] → {ℓ, r}, and let ι : [n] → I be non-constant; we assume that all mixed cumulants with fewer than n arguments vanish. Take z_1, …, z_n ∈ A so that z_i ∈ A_{χ(i)}^{(ι(i))}. By Corollary 2.4.6, we have

ϕ(z_1 ⋯ z_n) = Σ_{σ∈BNC(χ)} Σ_{π∈BNC(χ), σ≤π≤ι} µ_BNC(σ, π) ϕ_σ(z_1, …, z_n)
 = Σ_{π∈BNC(χ), π≤ι} Σ_{σ∈BNC(χ), σ≤π} µ_BNC(σ, π) ϕ_σ(z_1, …, z_n)
 = Σ_{π∈BNC(χ), π≤ι} κ_π(z_1, …, z_n).

On the other hand, using the moment-cumulant formula we have

ϕ(z_1 ⋯ z_n) = Σ_{π∈BNC(χ)} κ_π(z_1, …, z_n).

But now any π ∈ BNC(χ) with π ≰ ι and |π| > 1 contributes a product of the cumulants corresponding to its blocks, at least one of which must be mixed, and so κ_π(z_1, …, z_n) = 0. Hence

ϕ(z_1 ⋯ z_n) = κ_{1_χ}(z_1, …, z_n) + Σ_{π∈BNC(χ), π≤ι} κ_π(z_1, …, z_n) = κ_{1_χ}(z_1, …, z_n) + ϕ(z_1 ⋯ z_n).

We conclude that κ_{1_χ}(z_1, …, z_n) = 0.

Now, for the other direction, suppose (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} are combinatorially bi-free. Once again, let n ∈ ℕ, χ : [n] → {ℓ, r}, and ι : [n] → I. Using successively the moment-cumulant formula, combinatorial bi-free independence, and the formula for computing cumulants from Remark 2.3.7, we find:

ϕ(z_1 ⋯ z_n) = Σ_{π∈BNC(χ)} κ_π(z_1, …, z_n) = Σ_{π∈BNC(χ), π≤ι} κ_π(z_1, …, z_n)
 = Σ_{π∈BNC(χ), π≤ι} Σ_{σ∈BNC(χ), σ≤π} µ_BNC(σ, π) ϕ_σ(z_1, …, z_n)
 = Σ_{σ∈BNC(χ)} ( Σ_{π∈BNC(χ), σ≤π≤ι} µ_BNC(σ, π) ) ϕ_σ(z_1, …, z_n).

Then by Corollary 2.4.6, (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} are bi-freely independent.

2.4.3 Voiculescu's universal bi-free polynomials.

In [30, Section 5], Voiculescu provided a non-constructive proof of the existence of universal polynomials characterising bi-free independence; using Theorem 2.4.7 we are able to produce them concretely.

We will first introduce some convenient notation. Suppose ((z_i^{(ι)})_{i∈I}, (z_j^{(ι)})_{j∈J})_{ι∈I} are pairs of two-faced families of non-commutative random variables, and suppose α : [n] → I ⊔ J, ι : [n] → I. Then we write

ϕ_α(z^{(1)}) := ϕ(z_{α(1)}^{(1)} ⋯ z_{α(n)}^{(1)}) and ϕ_α^ι(z) := ϕ(z_{α(1)}^{(ι(1))} ⋯ z_{α(n)}^{(ι(n))}).

We also denote by χ_α : [n] → {ℓ, r} the map such that χ_α(k) = ℓ if and only if α(k) ∈ I.

Proposition 2.4.8. For each ι : [n] → I and α : [n] → I ⊔ J we define the polynomial P_{α,ι} in indeterminates X_K^{(ι)} indexed by non-empty subsets K ⊂ [n] by the formula

P_{α,ι} := Σ_{σ∈BNC(χ_α), σ≤ι} ( Σ_{π∈BNC(χ_α), σ≤π≤ι} µ_BNC(σ, π) ) ∏_{V∈σ} X_V^{(ι(V))},

where ι(V) denotes the common value of ι on the block V. Then for ((z_i^{(κ)})_{i∈I}, (z_j^{(κ)})_{j∈J})_{κ∈I} a bi-free collection of two-faced families in (A, ϕ) we have

ϕ_α^ι(z) = P_{α,ι}((z^{(κ)})_{κ∈I}),

where P_{α,ι}((z^{(κ)})_{κ∈I}) is given by evaluating P_{α,ι} at X_{{k_1<⋯<k_j}}^{(κ)} = ϕ(z_{α(k_1)}^{(κ)} ⋯ z_{α(k_j)}^{(κ)}) for κ ∈ I. Moreover, if we set

Q_α := Σ_{ι:[n]→I} P_{α,ι},

then

Q_α = X_{[n]}^{(1)} + X_{[n]}^{(2)} + Σ_{ι:[n]→I, ι non-constant} P_{α,ι},

and

ϕ_α(z) = Q_α(z^{(1)}, z^{(2)}),

where Q_α(z^{(1)}, z^{(2)}) is Q_α evaluated at the same point as the P_{α,ι} above.

Proof. The claim that ϕ_α^ι(z) may be computed by evaluating P_{α,ι} at the correct point is the content of Corollary 2.4.6. The claim that ϕ_α(z) = Q_α(z^{(1)}, z^{(2)}) is obtained by expanding

ϕ_α(z) = ϕ( (z_{α(1)}^{(1)} + z_{α(1)}^{(2)}) ⋯ (z_{α(n)}^{(1)} + z_{α(n)}^{(2)}) ) = Σ_{ι:[n]→I} ϕ_α^ι(z),

and then applying the first claim.

To establish the final piece of the proposition, that Q_α has the presentation claimed, it suffices to show that P_{α,ι} = X_{[n]}^{(ι)} when ι is constant.
But in this case,

Σ_{π∈BNC(χ_α), σ≤π≤ι} µ_BNC(σ, π) = Σ_{π∈BNC(χ_α), σ≤π} µ_BNC(σ, π) = δ_BNC(σ, 1_{χ_α}),

and we have that the only term surviving in P_{α,ι} is X_{[n]}^{(ι)}.

Proposition 2.4.9. For α : [n] → I ⊔ J, recursively define polynomials R_α in indeterminates X_K indexed by non-empty subsets K ⊆ [n] by the formula

R_α = Σ_{π∈BNC(χ_α)} µ_BNC(π, 1_{χ_α}) ∏_{V∈π} X_V.

If X_K is given degree |K|, then R_α is homogeneous of degree n. Given a two-faced family ((z_i)_{i∈I}, (z_j)_{j∈J}) in a non-commutative probability space and setting R_α(z) to be R_α evaluated with X_{{k_1<⋯<k_j}} = ϕ(z_{α(k_1)} ⋯ z_{α(k_j)}), we have R_α(z) = κ_{χ_α}(z_{α(1)}, …, z_{α(n)}).

Proof. The identity for R_α follows directly from the formula in Remark 2.3.7. The cumulant property then follows since κ possesses it.

The polynomials P_{α,ι}, Q_α, and R_α are precisely the universal polynomials from Propositions 2.18, 5.2, 5.6, and 5.7 of [30].

2.5 An alternating moment condition for bi-free independence.

So far we have still not presented a characterisation of bi-free independence as straightforward as the characterisation of free independence mentioned in Proposition 1.1.2: that families are free if and only if alternating products of centred variables are centred. In this section we will develop an analogous condition for bi-free independence.

Definition 2.5.1. Let (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} be a family of pairs of faces in a non-commutative probability space (A, ϕ). We say the family has the vanishing alternating centred χ-interval Eigenschaft (which we will abbreviate as vaccine) if whenever:

• n ≥ 1, χ : [n] → {ℓ, r}, ι : [n] → I, and
• z_1, …, z_n ∈ A are such that:
  – z_i ∈ A_{χ(i)}^{(ι(i))}; and
  – whenever {i_1 < ⋯ < i_k} is a maximal ι-monochromatic χ-interval, ϕ(z_{i_1} ⋯ z_{i_k}) = 0,

it follows that ϕ(z_1 ⋯ z_n) = 0.

Example 2.5.2. Suppose χ is such that χ^{-1}(ℓ) = {2, 5, 6, 7, 8}, χ^{-1}(r) = {1, 3, 4, 9, 10}, and ι corresponds to the colouring below (i.e., ι^{-1} of one colour is {1, 2, 3, 5, 8, 9, 10} and of the other is {4, 6, 7}).
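The maximal ι-monochromatic χ-intervals can be read off mechanically: list [n] in the ≺_χ order (left indices increasing, then right indices decreasing, which is my reading of the convention used for the diagrams in this chapter) and take the maximal runs on which ι is constant. A minimal sketch, with the two colours labelled 'A' and 'B':

```python
def chi_order(chi):
    """The order <_chi: left indices increasing, then right indices decreasing."""
    left = [i for i in sorted(chi) if chi[i] == 'l']
    right = [i for i in sorted(chi) if chi[i] == 'r']
    return left + right[::-1]

def monochromatic_intervals(chi, iota):
    """Maximal runs of constant iota along the chi-order, each reported sorted."""
    order = chi_order(chi)
    runs, current = [], [order[0]]
    for i in order[1:]:
        if iota[i] == iota[current[-1]]:
            current.append(i)
        else:
            runs.append(sorted(current))
            current = [i]
    runs.append(sorted(current))
    return runs

# The data of Example 2.5.2, on indices 1..10:
chi = {i: ('l' if i in {2, 5, 6, 7, 8} else 'r') for i in range(1, 11)}
iota = {i: ('A' if i in {1, 2, 3, 5, 8, 9, 10} else 'B') for i in range(1, 11)}
```

Running `monochromatic_intervals(chi, iota)` recovers, in χ-order, exactly the intervals {2, 5}, {6, 7}, {8, 9, 10}, {4}, {1, 3} listed in the example.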
[Diagram omitted: the nodes 1, …, 10 arranged according to ≺_χ and coloured according to ι.]

The maximal ι-monochromatic χ-intervals are {2, 5}, {6, 7}, {8, 9, 10}, {4}, and {1, 3}. Vaccine would imply ϕ(z_1 ⋯ z_{10}) = 0 whenever z_1, …, z_{10} are chosen corresponding to χ and ι with 0 = ϕ(z_2 z_5) = ϕ(z_6 z_7) = ϕ(z_8 z_9 z_{10}) = ϕ(z_4) = ϕ(z_1 z_3).

The reason this condition becomes more complicated than in the free case amounts to the fact that we cannot replace z_1 z_3 or z_8 z_9 z_{10} with single elements of either the left or right faces; in the latter case, because neither face may contain an appropriate operator, and in the former because z_2 may not commute with z_3 or z_1 and so the two may not be moved next to each other.

2.5.1 The equivalence.

Lemma 2.5.3. Let (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} be a family of pairs of faces in a non-commutative probability space (A, ϕ). Then the family has vaccine if the pairs of faces are bi-free.

Proof. Let n ≥ 1, χ : [n] → {ℓ, r}, and ι : [n] → I, and denote by J the set of maximal ι-monochromatic χ-intervals in [n]. Note that J ∈ BNC(χ) may be thought of as a χ-non-crossing partition in its own right, which will sometimes be of use notationally. For P ⊂ [n], let m(P) denote the minimum element of BNC(χ) containing P as a block, so all blocks of m(P) except P are singletons.

Let b : {π ∈ BNC(χ) : π ≤ ι} → J be a function with the following properties:

• if π ∈ BNC(χ), j ∈ b(π), and j ∼_π k, then k ∈ b(π) (i.e., the interval b(π) is isolated in π: π ≤ {b(π), b(π)^c}); and
• if π, σ ∈ BNC(χ) satisfy π ∨ m(b(π)) = σ ∨ m(b(π)) then b(π) = b(σ) (i.e., any partition obtained from π by only modifying the part of π in b(π) is mapped to the same χ-interval by b).

For example, one could take b(π) to be the χ-minimal element of J which is isolated in π.
Any partition π ∈ BNC(χ) with π ≤ ι must leave one element of J isolated; indeed, if one takes π ∨ J ≤ ι (the element of BNC(χ) obtained from π by joining all points lying in the same ι-monochromatic χ-intervals), it must contain a χ-interval (as any non-crossing partition, in particular s_χ^{-1} · (π ∨ J), must contain an interval) and since J ≤ π ∨ J ≤ ι, any interval it contains must be isolated and maximal ι-monochromatic. Then this same interval is isolated in π.

Denote by S(B) = {π ∈ BNC(χ) : B ∈ π} the set of χ-non-crossing partitions in which B is a block. Note that if σ ∈ S(B) and ρ ∈ S(B^c), then σ ∧ ρ is a partition with blocks under B corresponding to ρ and blocks outside of B corresponding to σ. Further, any partition π ∈ BNC(χ) with π ≤ {B, B^c} may be expressed in this form: π ∨ m(B) ∈ S(B), π ∨ m(B^c) ∈ S(B^c), and (π ∨ m(B)) ∧ (π ∨ m(B^c)) = π.

Now, let n ∈ ℕ, χ : [n] → {ℓ, r}, and ι : [n] → I, and take z_1, …, z_n as in the definition of vaccine. Using the moment-cumulant formula and the vanishing of mixed cumulants from bi-freeness, we have

ϕ(z_1 ⋯ z_n) = Σ_{π∈BNC(χ), π≤ι} κ_π(z_1, …, z_n)
 = Σ_{B∈J} Σ_{π∈b^{-1}(B)} κ_π(z_1, …, z_n)
 = Σ_{B∈J} Σ_{σ∈S(B), b(σ)=B} Σ_{ρ∈S(B^c)} κ_{σ∧ρ}(z_1, …, z_n)
 = Σ_{B∈J} Σ_{σ∈S(B), b(σ)=B} Σ_{ρ∈BNC(χ|_B)} κ_ρ(z_1, …, z_n) κ_{σ\{B}}(z_1, …, z_n)
 = Σ_{B∈J} Σ_{σ∈S(B), b(σ)=B} ϕ(z_B) κ_{σ\{B}}(z_1, …, z_n)
 = 0.

Essentially, in the sum of cumulants representing ϕ(z_1 ⋯ z_n), we have grouped terms together by isolated intervals, and used the fact that when we sum over the entire lattice of bi-non-crossing partitions over one of these intervals, we recover the moment corresponding to that interval, which is zero by assumption.

We remark that a bi-free family enjoys an even stronger property: one may drop the requirement that the first and last χ-intervals be centred, provided that there are at least three maximal ι-monochromatic χ-intervals.
Our proof above hinged on being able to find an isolated maximal χ-interval in any χ-non-crossing partition; but notice that not only does such an interval always exist, there is always one which is neither the first nor last interval.

Lemma 2.5.4. Let (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} be a family of pairs of faces in a non-commutative probability space (A, ϕ). Then the pairs of faces are bi-free if the family has vaccine.

Proof. We will show that vaccine uniquely specifies mixed moments in terms of pure ones. Let n ≥ 1, χ : [n] → {ℓ, r}, ι : [n] → I, and suppose z_1, …, z_n ∈ A with z_i ∈ A_{χ(i)}^{(ι(i))}. Denote by J the set of maximal ι-monochromatic χ-intervals in [n]. For each I = {a_1 < ⋯ < a_j} ∈ J, let λ_I be a (complex) root of the polynomial

w ↦ ϕ( (z_{a_1} − w) ⋯ (z_{a_j} − w) ).

Then if f : [n] → J is the unique map so that i ∈ f(i) for every i, we have

ϕ( (z_1 − λ_{f(1)}) ⋯ (z_n − λ_{f(n)}) ) = 0,

as the λ's were chosen precisely to make the vaccine property apply. Expanding this equation gives us an expression for ϕ(z_1 ⋯ z_n) in terms of mixed moments with at most n − 1 terms; by recursively applying the same procedure we find an expression for ϕ(z_1 ⋯ z_n) in terms of pure moments.

Now, for ι ∈ I let ϕ^{(ι)} be the restriction of ϕ to ⟨A_ℓ^{(ι)}, A_r^{(ι)}⟩, and µ = ∗∗_{ι∈I} ϕ^{(ι)} their bi-free product distribution, which by Lemma 2.5.3 also has vaccine. We then find that the same expressions for joint moments in terms of pure ones hold under µ as under ϕ, which is to say that (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} are bi-free.

Theorem 2.5.5. Let (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} be a family of pairs of faces in a non-commutative probability space (A, ϕ). Then the family has vaccine if and only if the pairs of faces are bi-free.

2.5.2 Conditional bi-freeness.

We remark that this condition may be extended to other settings in bi-free probability. Conditional bi-freeness was studied by Gu and Skoufranis in [11], building off of conditional free independence, which was introduced by Bożejko, Leinert, and Speicher [6].
We will show that conditional bi-free independence also admits a characterisation in terms of χ-intervals. First, though, we take the time to introduce some notation.

Suppose that π ∈ BNC(χ). A block B ∈ π is said to be inner if there is another block C ∈ π and j, k ∈ C so that for every i ∈ B, j ≺_χ i ≺_χ k; a block which is not inner is said to be outer. This corresponds with our earlier use of the term "outer" in Subsection 2.1.2.

Let (A, ϕ) be a non-commutative probability space, and θ a state on A. The conditional cumulants with respect to (θ, ϕ) are multi-linear functionals K_χ : A^n → ℂ defined by the requirement that for any z_1, …, z_n ∈ A,

θ(z_1 ⋯ z_n) = Σ_{π∈BNC(χ)} ∏_{V∈π, V inner} κ_{χ|_V}((z_1, …, z_n)|_V) ∏_{V∈π, V outer} K_{χ|_V}((z_1, …, z_n)|_V).

Here κ represents the usual bi-free cumulants taken with respect to ϕ. For π ∈ BNC(χ), we will denote by K_π(z_1, …, z_n) the term in the above sum corresponding to π, a product of κ_{χ|_V} terms and K_{χ|_V} terms. We will say a family (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} is conditionally bi-free in (A, θ, ϕ) if it is bi-free with respect to ϕ and all mixed conditional cumulants vanish; it was shown in [11] that this is equivalent to their definition in terms of free product representations, and moreover, that being conditionally bi-free uniquely specifies the mixed θ-moments in terms of the pure θ-moments and ϕ-moments.

Theorem 2.5.6. Let (A, ϕ) be a non-commutative probability space and θ : A → ℂ a state on A. Suppose (A_ℓ^{(ι)}, A_r^{(ι)})_{ι∈I} is a family of pairs of faces in A. Then the family is conditionally bi-free if and only if whenever:

• n ≥ 1, χ : [n] → {ℓ, r}, ι : [n] → I,
• J is the set of maximal ι-monochromatic χ-intervals, and
• z_1, …, z_n ∈ A are such that:
  – z_i ∈ A_{χ(i)}^{(ι(i))}; and
  – ϕ(z_J) = 0 for each J ∈ J,

it follows that

ϕ(z_1 ⋯ z_n) = 0 and θ(z_1 ⋯ z_n) = ∏_{J∈J} θ(z_J).

Proof.
By the same argument as in the proof of Lemma 2.5.4 it follows that the conditions assumed above suffice to uniquely specify all mixed ϕ- and θ-moments in terms of pure ϕ- and θ-moments; hence if we can show that conditionally bi-free families satisfy this condition the proof will be complete. The condition on ϕ is precisely vaccine, so we need only show that our expression for mixed θ-moments is correct.

We take an approach similar to that of Lemma 2.5.3 for deducing the value of θ. We claim that the only terms which contribute to the value of θ in the cumulant expansion are those corresponding to partitions π ≤ J, where once again J is the set of maximal ι-monochromatic χ-intervals. Towards this end, let b : {π ∈ BNC(χ) : π ≤ ι} → J be as in Lemma 2.5.3, with the additional constraint that b picks inner intervals whenever π ≰ J. That is, b should have the following properties:

• if π ∈ BNC(χ), j ∈ b(π), and j ∼_π k, then k ∈ b(π) (i.e., the interval b(π) is isolated in π: π ≤ {b(π), b(π)^c});
• if π, σ ∈ BNC(χ) satisfy π ∨ m(b(π)) = σ ∨ m(b(π)) then b(π) = b(σ) (i.e., any partition obtained from π by only modifying the part of π in b(π) is mapped to the same χ-interval by b); and
• if π ∈ BNC(χ) and π ≰ J, then b(π) is an inner block in π ∨ J.

Such functions exist: for example, one could take b(π) to be the χ-minimal element of J which is inner and isolated in π, if such exists, and the χ-minimal element of J otherwise. Note that if π ≰ J, π must connect two intervals in J and so there must be an inner block in J ∨ π between these two intervals.

As before, let S(B) = {π ∈ BNC(χ) : B ∈ π}, and set S_i(B) = {π ∈ S(B) : B inner in π ∨ J}. We now compute much as in the proof of Lemma 2.5.3. Supposing n ∈ ℕ, χ : [n] → {ℓ, r}, and ι : [n] → I, and with z_1, …, z_n meeting the hypotheses of the theorem:

θ(z_1 ⋯ z_n) = Σ_{π∈BNC(χ), π≤ι} K_π(z_1, …, z_n)
 = Σ_{B∈J} ( Σ_{π∈b^{-1}(B), B inner in π∨J} K_π(z_1, …, z_n) + Σ_{π∈b^{-1}(B), B outer in π∨J} K_π(z_1, …, z_n) )
 = Σ_{B∈J} ( Σ_{π∈S_i(B)∩b^{-1}(B)} ϕ(z_B) K_{π\{B}}(z_1, …, z_n) + Σ_{π∈b^{-1}(B), B outer in π∨J} K_π(z_1, …, z_n) )
 = Σ_{B∈J} Σ_{π∈b^{-1}(B), B outer in π∨J} K_π(z_1, …, z_n)
 = Σ_{π∈BNC(χ), π≤J} K_π(z_1, …, z_n)
 = ∏_{J∈J} Σ_{π_J∈BNC(χ|_J)} K_{π_J}((z_1, …, z_n)|_J)
 = ∏_{J∈J} θ(z_J).

Here in the last few lines we have noted that summing over all partitions sitting under J is the same as summing over partitions sitting under each interval individually, and then taking the product; this is valid since every term in K_π is a product of terms corresponding to blocks, and each block must be contained in a single interval in J.

2.6 Bi-free multiplicative convolution.

Much as in the free setting, we can use bi-free probability to describe a multiplicative convolution on laws.

Definition 2.6.1. If µ_ι : ℂ⟨X_ℓ^{(ι)}, X_r^{(ι)}⟩ → ℂ are states for ι ∈ {1, 2}, there is a unique state µ : ℂ⟨X_ℓ^{(1)}, X_ℓ^{(2)}, X_r^{(1)}, X_r^{(2)}⟩ → ℂ so that µ restricted to the algebra generated by X_ℓ^{(ι)}, X_r^{(ι)} is equal to µ_ι and the pairs (X_ℓ^{(ι)}, X_r^{(ι)}) are bi-free. This allows us to define µ_1 K µ_2 : ℂ⟨X_ℓ, X_r⟩ → ℂ via

(µ_1 K µ_2)(f(X_ℓ, X_r)) = µ( f(X_ℓ^{(1)} X_ℓ^{(2)}, X_r^{(2)} X_r^{(1)}) ).

There is of course a second option for defining the multiplicative convolution, which would be to use X_r^{(1)} X_r^{(2)} in the second coordinate; the corresponding operation is denoted by a different symbol and has been studied by Voiculescu in [32] and Skoufranis in [19]. As we will see, though, the operation K fits more naturally with the combinatorics of bi-free independence.

2.6.1 The Kreweras complement.

To describe the law µ_1 K µ_2, we will adapt the Kreweras complement approach of Nica and Speicher in [17] to our setting.
, n, 1¯,..., n¯} with choice of side given by χ0(j) = χ(j) = 0 χ (¯j) and ordering such that j < ¯j on the left and ¯j < j on the right, then KBNC(π) := ¯ B ⊂ [n]: {¯j : j ∈ B} ∈ KBNC(π) . ¯ Remark 2.6.3. We point out that KBNC(π) is the unique partition on {1¯,..., n¯} such that ¯ 0 0 π ∪ KBNC(π) is χ -non-crossing and, if β = {{j, ¯j} : j ∈ [n]} ∈ BN C(χ ), then we have ¯ ¯ π ∪ KBNC(π) ∨ β = 1χ0 . Indeed, the fact that KBNC(π) is maximum ensures that any larger partition is not χ0-non-crossing, while any smaller partition must fail to connect two ¯ nodes which are connected by KBNC(π), which will then remain disconnected in ρ ∪ π ∨ β. Example 2.6.4. Suppose that χ is given by the sequence (`, r, `, `, r, `, r, r) and π = {{1} , {2, 4, 8} , {3} , {5, 7} , {6}} ∈ BN C(χ). 51 Then KBNC(π) = {{1, 2, 3} , {4, 6} , {5, 8} , {7}} ∈ BN C(χ), as illustrated below. 1 1¯ ¯ 22 3 3¯ 4 4¯ ¯ 55 6 6¯ ¯ 77 ¯ 88 We notice that since KNC is order reversing while sχ is order-preserving on P(n), KBNC ∼ is order-reversing. Then for π ∈ BN C(χ), we have [π, 1χ] = [KBNC(1χ),KBNC(π)] = [0χ,KBNC(π)]. Then if f, g ∈ IA(BNC) are multiplicative, X (f ? g)(0χ, 1χ) = f(0χ, π)g(0χ,KBNC(π)) = (g ? f)(0χ, 1χ); π∈BN C(χ) hence f ? g = g ? f. 2.6.2 Cumulants of products. Let n ∈ N, and suppose 0 = k0 < k1 < ··· < km = n. For χ :[m] → {`, r}, define χˆ :[n] → {`, r} to be constant on intervals (kj−1, kj] withχ ˆ(kj) = χ(j). There is an embedding of BNC(χ) ,→ BN C(ˆχ) via π 7→ πˆ, the unique partition with kj ∼πˆ ki if and only if j ∼π i and {(kj−1, kj] : 1 ≤ j ≤ m} ≤ πˆ; note that this partition of intervals is precisely 0ˆχ, while 1ˆχ = 1χˆ. Further, {πˆ : π ∈ BN C(χ)} = 0ˆχ, 1ˆχ = 0ˆχ, 1χˆ ⊆ BN C(ˆχ) is an interval of BNC(ˆχ) and π 7→ πˆ is a lattice morphism, so µBNC(σ, π) = µBNC(ˆσ, πˆ) for σ, π ∈ BN C(χ). 
We will now exploit a useful combinatorial fact (see, e.g., [18, Proposition 10.11]): if functions f, g : BNC(χ̂) → ℂ (or functions on any finite lattice) are related via

f(π) = Σ_{σ∈BNC(χ̂), σ≤π} g(σ),

then

Σ_{τ∈BNC(χ̂), σ≤τ≤π} f(τ) µ_BNC(τ, π) = Σ_{ω∈BNC(χ̂), ω∨σ=π} g(ω).

Proposition 2.6.5. Let (A, ϕ) be a non-commutative probability space, n ∈ ℕ, 0 = k_0 < k_1 < ⋯ < k_m = n, and χ : [m] → {ℓ, r}. If π ∈ BNC(χ) and T_j ∈ A_{χ̂(j)} for j ∈ [n], then

κ_π(T_1 ⋯ T_{k_1}, T_{k_1+1} ⋯ T_{k_2}, …, T_{k_{m−1}+1} ⋯ T_{k_m}) = Σ_{σ∈BNC(χ̂), σ∨0̂_χ=π̂} κ_σ(T_1, …, T_n).

Proof. Expressing the cumulant above in terms of moments via the moment-cumulant relation mentioned in Remark 2.3.7, we find