Free probability and combinatorics
Preliminary version
Michael Anshelevich © 2012
July 17, 2018

Preface
These notes were used in a topics course Free probability and combinatorics taught at Texas A&M University in the fall of 2012. The course was based on Nica and Speicher's textbook [NS06]; however, there were sufficient differences to warrant separate notes. I tried to minimize the list of references, but many more are included in the body of the text. Thanks to the students in the course, and to March Boedihardjo, for numerous comments and corrections; further corrections or comments are welcome. Eventually I will probably replace hand-drawn figures by typeset ones.

Contents
1 Noncommutative probability spaces and distributions.
  1.1 Non-commutative probability spaces.
  1.2 Distributions. Weak law of large numbers.
2 Functional analysis background.
  2.1 C∗- and von Neumann algebras. Spectral theorem.
  2.2 Gelfand–Naimark–Segal construction.
  2.3 Other topics.
3 Free independence.
  3.1 Independence. Free independence.
  3.2 Free products.
4 Set partition lattices.
  4.1 All, non-crossing, interval partitions. Enumeration.
  4.2 Incidence algebra. Multiplicative functions.
5 Free cumulant machinery.
  5.1 Free cumulants.
  5.2 Free independence and free cumulants.
  5.3 Free convolution, distributions, and limit theorems.
6 Lattice paths and Fock space models.
  6.1 Dyck paths and the full Fock space.
  6.2 Łukasiewicz paths and Voiculescu's operator model.
  6.3 Motzkin paths and Schürmann's operator model. Freely infinitely divisible distributions.
  6.4 Motzkin paths and orthogonal polynomials.
  6.5 q-deformed free probability.
7 Free Lévy processes.
8 Free multiplicative convolution and the R-transform.
9 Belinschi–Nica evolution.
10 ∗-distribution of a non-self-adjoint element.
11 Combinatorics of random matrices.
  11.1 Gaussian random matrices.
  11.2 Map enumeration.
  11.3 Partitions and permutations.
  11.4 Asymptotic free independence for Gaussian random matrices.
  11.5 Asymptotic free independence for unitary random matrices.
12 Operator-valued free probability.
13 A very brief survey of analytic results.
  13.1 Complex analytic methods.
  13.2 Free entropy.
  13.3 Operator algebras.
Chapter 1
Noncommutative probability spaces and distributions.
See Lectures 1, 4, 8 of [NS06].
1.1 Non-commutative probability spaces.
Definition 1.1. An (algebraic) (non-commutative) probability space is a pair (A, ϕ), where A is a unital ∗-algebra and ϕ is a state on A. That is:

• A is an algebra over C, with operations za + wb, ab for z, w ∈ C, a, b ∈ A. Unital: 1 ∈ A.

• ∗ is an anti-linear involution: (za)∗ = z̄a∗, (ab)∗ = b∗a∗.

• ϕ : A → C is a linear functional. Self-adjoint: ϕ [a∗] = \overline{ϕ [a]}. Unital: ϕ [1] = 1. Positive: ϕ [a∗a] ≥ 0.
Definition 1.2. a ∈ A is symmetric if a = a∗. A symmetric a ∈ (A, ϕ) is a (n.c.) random variable.
Examples of commutative probability spaces.
Example 1.3. Let (X, Σ, P) be a measure space, with P a probability measure. Take

A = L∞(X, P),  f∗ = f̄,  E(f) = ∫_X f dP.
Then E (usually called the expectation) is a state, in particular positive and unital. Thus (A, E) is a (commutative) probability space. Note: f = f ∗ means f real-valued.
A related construction is

A = L∞−(X, P) = ⋂_{p≥1} L^p(X, P),

the space of complex-valued random variables all of whose moments are finite.

Example 1.4. Let X be a compact topological space (e.g. X = [0, 1]), and µ a Borel probability measure on X. Then for

A = C(X),  ϕ [f] = ∫_X f dµ,

(A, ϕ) is again a commutative probability space.
Example 1.5. Let A = C[x] (polynomials in x with complex coefficients), x∗ = x. Then any state ϕ on C[x] gives a commutative probability space. Does such a state always come from a measure?
Examples of non-commutative probability spaces.
Example 1.6. Let x1, x2, . . . , xd be non-commuting indeterminates. Let
A = C⟨x1, x2, . . . , xd⟩ = C⟨x⟩ be polynomials in d non-commuting variables, with the involution

xi∗ = xi,  (xu(1)xu(2) . . . xu(n))∗ = xu(n) . . . xu(2)xu(1).

Then any state ϕ on C⟨x⟩ gives a non-commutative probability space. These do not come from measures. One example: for z = (z1, z2, . . . , zd) ∈ R^d,
δz(f(x1, x2, . . . , xd)) = f(z1, z2, . . . , zd) is a state (check!). Other examples?
Example 1.7. A = Mn(C), the n × n matrices over C, with the involution (A∗)ij = \overline{Aji} and

ϕ [A] = (1/n) ∑_{i=1}^n Aii = (1/n) Tr(A) = tr(A),

the normalized trace of A. Note that indeed, tr(A∗) = \overline{tr(A)} and tr(A∗A) ≥ 0.

Definition 1.8. Let ϕ be a state on A.
a. ϕ is tracial, or a trace, if for any a, b ∈ A,
ϕ [ab] = ϕ [ba] .
Note that A is, in general, not commutative.
b. ϕ is faithful if ϕ [a∗a] = 0 only if a = 0.
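As a quick numerical sanity check of Example 1.7 and Definition 1.8 (an illustration added here, not part of the notes; the matrix size n = 4 and random entries are arbitrary), one can verify that the normalized trace is a unital, self-adjoint, positive, tracial state:

```python
import numpy as np

# tr = Tr/n on M_n(C): check the state axioms and the trace property.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
tr = lambda M: np.trace(M) / n

assert np.isclose(tr(np.eye(n)), 1)                  # unital
assert np.isclose(tr(A.conj().T), np.conj(tr(A)))    # self-adjoint
assert tr(A.conj().T @ A).real >= 0                  # positive
assert np.isclose(tr(A @ B), tr(B @ A))              # tracial
```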
Example 1.9. For a probability space (X, Σ,P ), let
A = Mn(C) ⊗ L∞−(X, P) ≃ Mn(L∞−(X, P)).

These are random matrices = matrix-valued random variables = matrices with random entries. Take

ϕ [A] = (tr ⊗ E)(A) = ∫_X tr(A) dP.

Example 1.10. Let H be a Hilbert space, and A a ∗-subalgebra of B(H), the algebra of bounded linear operators on H. a∗ is the adjoint operator to a. If ξ ∈ H is a unit vector, then

ϕ [a] = ⟨aξ, ξ⟩

is a state. Why unital: ϕ [1] = ⟨ξ, ξ⟩ = ‖ξ‖² = 1. Why self-adjoint: ϕ [a∗] = ⟨a∗ξ, ξ⟩ = ⟨ξ, aξ⟩ = \overline{⟨aξ, ξ⟩}. Why positive: ϕ [a∗a] = ⟨a∗aξ, ξ⟩ = ⟨aξ, aξ⟩ = ‖aξ‖² ≥ 0. Typically not tracial or faithful.
Example 1.11 (Group algebra). Let Γ be a discrete group (finite, Zⁿ, Fn, etc.).

C[Γ] = functions Γ → C of finite support = finite linear combinations of elements of Γ with C coefficients:

(f : x ↦ f(x)) ↔ ∑_{x∈Γ} f(x) x.

This is a vector space. It is an algebra with multiplication

(∑_{x∈Γ} f(x) x)(∑_{y∈Γ} g(y) y) = ∑_{z∈Γ} (∑_{z=xy} f(x) g(y)) z,

in other words

(fg)(z) = ∑_{z=xy} f(x) g(y) = ∑_{x∈Γ} f(x) g(x⁻¹z),

the convolution multiplication. The involution is

f∗(x) = \overline{f(x⁻¹)}.
Check that indeed, (fg)∗ = g∗f∗.

So C[Γ] is a unital ∗-algebra, with the unit δe, where for x ∈ Γ,

δx(y) = 1 if y = x, and δx(y) = 0 if y ≠ x,

and e is the unit of Γ. Moreover, define τ [f] = f(e).
Exercise 1.12. Prove that τ is a faithful, tracial state, called the von Neumann trace.
Later: other versions of group algebras.
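The group algebra structure can be tested numerically in a small case. The sketch below (an illustration, using the cyclic group Z/m in place of a general discrete group Γ; the size m = 5 is arbitrary) encodes elements of C[Z/m] as coefficient vectors, multiplies by convolution, and checks (fg)∗ = g∗f∗, the trace property of τ, and the identity τ [f∗f] = ∑ |f(x)|² underlying faithfulness:

```python
import numpy as np

# C[Z/m]: coefficient vectors indexed by the group, convolution multiplication,
# involution f*(x) = conj(f(x^{-1})), von Neumann trace tau(f) = f(e).
m = 5
rng = np.random.default_rng(1)
f = rng.standard_normal(m) + 1j * rng.standard_normal(m)
g = rng.standard_normal(m) + 1j * rng.standard_normal(m)

def mult(p, q):
    h = np.zeros(m, dtype=complex)
    for x in range(m):
        for y in range(m):
            h[(x + y) % m] += p[x] * q[y]   # convolution over z = xy
    return h

star = lambda p: np.conj(p[(-np.arange(m)) % m])
tau = lambda p: p[0]

assert np.allclose(star(mult(f, g)), mult(star(g), star(f)))      # (fg)* = g* f*
assert np.isclose(tau(mult(f, g)), tau(mult(g, f)))               # tracial
assert np.isclose(tau(mult(star(f), f)), np.sum(np.abs(f) ** 2))  # tau[f* f] = sum |f|^2
```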
1.2 Distributions. Weak law of large numbers.
Definition 1.13. Let (A, ϕ) be an n.c. probability space.
a. Let a ∈ (A, ϕ) be symmetric. The distribution of a is the linear functional
ϕa : C[x] → C, ϕa [p(x)] = ϕ [p(a)] .
Note that ϕa is a state on C[x]. The sequence of numbers
{mn[ϕa] = ϕa [xⁿ] = ϕ [aⁿ] , n = 0, 1, 2, . . .}
are the moments of a. In particular m0[ϕa] = 1 and m1[ϕa] = ϕ [a] is the mean of a.
b. More generally, let a1, a2, . . . , ad ∈ (A, ϕ) be symmetric. Their joint distribution is the state
ϕa1,...,ad : C⟨x1, . . . , xd⟩ → C,  ϕa1,...,ad [p(x1, . . . , xd)] = ϕ [p(a1, . . . , ad)].

The numbers

{ϕ [au(1)au(2) . . . au(n)] : n ≥ 0, 1 ≤ u(i) ≤ d}

are the joint moments of a1, . . . , ad. Denote by D(d) the space of all joint distributions of d-tuples of symmetric random variables, which is the space of all states on C⟨x1, . . . , xd⟩.
c. We say that

(a1^{(N)}, . . . , ad^{(N)}) → (a1, . . . , ad)

in moments (or, for d > 1, in distribution) if for each p,

ϕ_{a1^{(N)},...,ad^{(N)}} [p] → ϕ_{a1,...,ad} [p]

as N → ∞.

Remark 1.14. Each µ ∈ D(d) can be realized as a distribution of a d-tuple of random variables, namely (x1, x2, . . . , xd) ⊂ (C⟨x⟩, µ).

Definition 1.15. a1, a2, . . . , ad ∈ (A, ϕ) are singleton independent if ϕ [au(1)au(2) . . . au(n)] = 0 whenever all ai are centered (that is, ϕ [ai] = 0) and some index in ~u appears only once.

Proposition 1.16 (Weak law of large numbers). Suppose {an : n ∈ N} ⊂ (A, ϕ) are singleton independent, identically distributed (that is, all ϕai are the same), and uniformly bounded, in the sense that for a fixed C and all ~u,

|ϕ [au(1)au(2) . . . au(n)]| ≤ Cⁿ.

Denote

sn = (1/n)(a1 + a2 + . . . + an).
Then sn → ϕ [a1] in moments (sn converges to the mean, a scalar).
Proof. Note first that

(1/n)[(a1 − ϕ [a1]) + . . . + (an − ϕ [an])] = sn − ϕ [a1].
So without loss of generality, may assume ϕ [a1] = 0, and we need to show that sn → 0 in moments.
ϕ [sn^k] = (1/n^k) ϕ [(a1 + . . . + an)^k] = (1/n^k) ∑_{u(1)=1}^n · · · ∑_{u(k)=1}^n ϕ [au(1)au(2) . . . au(k)].

How many non-zero terms are there? Denote by B(k) the number of partitions of k points into disjoint non-empty subsets. We don't need the exact value, but see Chapter 4. For each partition

π = (B1, B2, . . . , Br),  B1 ∪ B2 ∪ . . . ∪ Br = {1, . . . , k},  Bi ∩ Bj = ∅ for i ≠ j,
How many k-tuples (u(1), u(2), . . . , u(k)) such that
u(i) = u(j) ⇔ i ∼π j ( i.e. i, j in the same block of π)?
The answer is

n(n − 1)(n − 2) · · · (n − r + 1) ≤ n^r,

and for the non-zero terms n^r ≤ n^{k/2}, because of the singleton condition: each block of π must contain at least two elements, so r ≤ k/2. Each term in the sum is bounded by

|ϕ [au(1)au(2) . . . au(k)]| ≤ C^k.

Thus

|ϕ [sn^k]| ≤ (1/n^k) B(k) n^{k/2} C^k → 0

as n → ∞ for any k > 0.
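For intuition, the conclusion of Proposition 1.16 can be illustrated with a Monte Carlo experiment using classical i.i.d. random variables (which are in particular singleton independent). The distribution, mean 0.7, sample sizes, and tolerances below are arbitrary choices for the illustration:

```python
import numpy as np

# Second moment of s_n = (a_1 + ... + a_n)/n for i.i.d. uniform variables
# with mean 0.7; it should approach 0.7^2, the second moment of the constant.
rng = np.random.default_rng(2)
mean = 0.7

def second_moment_of_sn(n, reps=2000):
    s = rng.uniform(mean - 1, mean + 1, size=(reps, n)).mean(axis=1)
    return float(np.mean(s ** 2))

m10, m2000 = second_moment_of_sn(10), second_moment_of_sn(2000)
assert abs(m2000 - mean ** 2) < abs(m10 - mean ** 2)   # moments concentrate as n grows
assert abs(m2000 - mean ** 2) < 5e-3
```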
Remark 1.17. For centered independent random variables, we showed

sn = (a1 + . . . + an)/n → 0.

Expect a non-trivial limit for

(a1 + . . . + an)/√n.

What the limit is depends on the notion of independence used. See Lecture 8 of [NS06] for more.
Chapter 2
Functional analysis background.
See Lectures 3, 7 of [NS06].
2.1 C∗- and von Neumann algebras. Spectral theorem.
Definition 2.1. An (abstract) C∗-algebra is a Banach ∗-algebra with an extra axiom
‖a∗a‖ = ‖a‖².
A C∗-probability space is a pair (A, ϕ), where A is a C∗-algebra and ϕ is a state on it continuous in the norm topology. (It follows from Corollary 2.13(c) that continuity is actually automatic).
Example 2.2. If X is a compact Hausdorff space, the algebra of continuous functions on it C(X) is a C∗- algebra (with the uniform norm). States on C(X) come from integration with respect to Borel probability measures.
Theorem 2.3 (Gelfand-Naimark theorem). Any unital, commutative C∗-algebra is of the form in the preceding example.
Example 2.4. If A is a ∗-subalgebra of B(H) closed in the norm topology, then A is a C∗-algebra. If ξ ∈ H is a unit vector, then ϕ = h·ξ, ξi is a state on A.
Theorem 2.5. Any abstract C∗-algebra is a concrete C∗-algebra, that is, it has a representation as in the preceding example.
Definition 2.6. A ∗-subalgebra A ⊂ B(H) is a von Neumann algebra (or a W ∗-algebra) if A is closed in the weak operator topology. A W ∗-probability space is a pair (A, ϕ), where A is a W ∗-algebra and ϕ is a normal state (continuous in the ultraweak operator topology).
Example 2.7. Let X be a compact Hausdorff space, and µ be a Borel probability measure on X. Then (C(X), µ) is a C∗-probability space. But C(X) ⊂ B(L²(X, µ)) is not WOT-closed. However, L∞(X, µ) is so closed, and therefore is a von Neumann algebra. In fact, the WOT on L∞(X, µ) is the weak-∗ topology on L∞ = (L¹)∗. Normal states on L∞(X, µ) come from integration with respect to f dµ, f ∈ L¹(X, µ).
Definition 2.8. For a ∈ B(H), its spectrum is the set
σ(a) = {z ∈ C :(z − a) is not invertible} .
The spectrum is always a compact, non-empty subset of C. The spectral radius of a is

r(a) = sup {|z| : z ∈ σ(a)}.

We always have r(a) ≤ ‖a‖. An operator a ∈ B(H) is positive (written a ≥ 0) if a = a∗ and σ(a) ⊂ [0, ∞).
Proposition 2.9. Let A be a C∗-algebra and a ∈ A.
a. ‖a‖² = r(a∗a). Thus the norm (topology) is determined by the spectrum (algebra).

b. a ≥ 0 if and only if a = b∗b for some b ∈ A. (In fact, one may take b ≥ 0 and a = b².)
Definition 2.10. In an (algebraic) n.c. probability space A, say a ≥ 0 if ∃b1, . . . , bk such that
a = ∑_{i=1}^k bi∗bi.

Remark 2.11. Note that in the algebra C[x1, x2, . . . , xd] of multivariate polynomials in commuting variables, we can have p(x) ≥ 0 for all x without being able to write p as a sum of squares.
Theorem 2.12 (Spectral theorem and continuous functional calculus, bounded symmetric case). Let a = a∗ ∈ B(H). Then the C∗-algebra generated by a
C∗(a) ≃ C(σ(a))
(isometric C∗-isomorphism). Moreover, this defines a map f ↦ f(a) ∈ B(H), so that f(a) is a well-defined operator for any continuous f.
Corollary 2.13. Let (A, ϕ) be a C∗-probability space.
a. For any a = a∗ ∈ A, the operator ‖a‖ − a is positive.

b. For any a = a∗, b ∈ A, ϕ [b∗ab] ≤ ‖a‖ ϕ [b∗b] (why real?).

c. For any a ∈ A, |ϕ [a]| ≤ ‖a‖.

d. Assume ϕ is faithful. Then for any a ∈ A, ‖a‖ = lim_{n→∞} (ϕ [(a∗a)ⁿ])^{1/2n}. Thus the norm can be computed by using moments.
Proof. For (a), using the identification in the spectral theorem, in C(σ(a)) the element a corresponds to the identity function f(x) = x, with ‖a‖ = ‖f‖u, and ‖a‖ − a corresponds to ‖f‖u − f, which is positive. For (b), using part (a) we can write ‖a‖ − a = c∗c, from which

ϕ [b∗(‖a‖ − a)b] = ϕ [b∗c∗cb] ≥ 0.

If a is symmetric, (c) follows from (b) (with b = 1). In general, the Cauchy–Schwarz inequality below implies

|ϕ [a]| ≤ √(ϕ [a∗a] ϕ [1∗1]) ≤ √(‖a∗a‖) = ‖a‖.

Finally, applying the spectral theorem to the symmetric element a∗a, the last statement follows from the fact that for a finite measure µ, lim_{n→∞} ‖f‖_{2n,µ} = ‖f‖_{∞,µ} and, if µ has full support, ‖f‖_{∞,µ} = ‖f‖u.
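Part (d) is easy to test numerically in the C∗-probability space (Mn(C), tr) of Example 1.7, where tr is faithful. A sketch (matrix size and exponents are arbitrary choices):

```python
import numpy as np

# ||a|| = lim_n (tr((a* a)^n))^(1/2n) for tr faithful; here A = M_5(C).
rng = np.random.default_rng(3)
d = 5
a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
tr = lambda M: np.trace(M).real / d

P = a.conj().T @ a
opnorm = np.linalg.norm(a, 2)      # largest singular value = operator norm
approx = [tr(np.linalg.matrix_power(P, n)) ** (1 / (2 * n)) for n in (1, 5, 50)]
assert abs(approx[-1] - opnorm) < 0.05 * opnorm   # within a few percent by n = 50
```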
2.2 Gelfand-Naimark-Segal construction.
Remark 2.14 (GNS construction I). Let (A, ϕ) be an n.c. probability space. Let V = A as a vector space. For a, b ∈ V, define

⟨a, b⟩ϕ = ϕ [b∗a].

This is a possibly degenerate inner product, i.e. we may have ‖a‖ϕ = 0 for a ≠ 0 in V. In fact, the inner product is non-degenerate if and only if ϕ is faithful. For a ∈ A, define λ(a) ∈ L(V) (linear, not necessarily bounded operators) by
λ(a)b = ab.
Note that

⟨λ(a∗)b, c⟩ϕ = ϕ [c∗a∗b] = ϕ [(ac)∗b] = ⟨b, λ(a)c⟩ϕ,

so with respect to this inner product, the adjoint of the operator λ(a) is λ(a∗). We conclude that {λ(a) : a ∈ A} is a ∗-representation of A on V. Moreover,

ϕ [a] = ⟨λ(a)1, 1⟩ϕ.

Even if ⟨·, ·⟩ϕ is degenerate, we still have the Cauchy–Schwarz inequality

|⟨a, b⟩ϕ|² ≤ ⟨a, a⟩ϕ ⟨b, b⟩ϕ,

in other words |ϕ [b∗a]|² ≤ ϕ [a∗a] ϕ [b∗b]. Let

N = {a ∈ A : ϕ [a∗a] = ‖a‖ϕ² = 0}.

If a, b ∈ N, so are their linear combinations. So N is a subspace of V; on V/N the inner product is non-degenerate, and induces a norm ‖·‖ϕ. Let

H = the completion of V/N with respect to ‖·‖ϕ,

which is a Hilbert space. Denote H (or perhaps V) by L²(A, ϕ). We have a natural map a ↦ â = a + N from A to Â ⊂ L²(A, ϕ) with dense range. Moreover, for any a ∈ A and b ∈ N, by the Cauchy–Schwarz inequality

‖ab‖ϕ² = ϕ [b∗a∗ab] ≤ ‖a∗ab‖ϕ ‖b‖ϕ = 0.

So N is a left ideal in A. For a ∈ A,

λ(a)b̂ = ab + aN = (ab)ˆ.

Thus we have a (not necessarily faithful) ∗-representation of A on the Hilbert space H, by a priori unbounded operators with a common dense domain Â = V/N.

Corollary 2.15. Each µ ∈ D(d) can be realized as the joint distribution of (possibly unbounded) operators on a Hilbert space with respect to a vector state.
Proof. µ is the joint distribution of the operators λ(x1), . . . , λ(xd) in the GNS representation of
(C⟨x1, . . . , xd⟩, µ) with respect to the GNS state.
We would like to say that (A, ϕ) ≃ (A ⊂ B(H), ϕ = ⟨·ξ, ξ⟩). But this implies A is a C∗-algebra! So we have to assume this.

Remark 2.16 (GNS construction II). Let (A, ϕ) be a C∗-probability space. Use the notation from the preceding remark. To show: each ‖λ(a)‖ < ∞, in fact ‖λ(a)‖ ≤ ‖a‖. Indeed,

‖λ(a)b̂‖ϕ² = ‖ab + N‖ϕ² ≤ ‖ab‖ϕ² = ϕ [b∗a∗ab] ≤ ‖a∗a‖ ϕ [b∗b] = ‖a‖² ‖b̂‖ϕ².

Finally,

ϕ [a] = ⟨λ(a)1̂, 1̂⟩ϕ.
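For a finite-dimensional illustration of the GNS construction (added here; not from the notes), take (A, ϕ) = (M2(C), tr). Since tr is faithful, N = 0, so H is M2(C) with ⟨a, b⟩ = tr(b∗a), identified with C⁴; left multiplication λ(a) becomes the 4 × 4 matrix a ⊗ I2:

```python
import numpy as np

# GNS for (M_2(C), tr): row-major vectorization identifies M_2(C) with C^4,
# <a,b> = tr(b* a) becomes a rescaled standard inner product, and
# lambda(a) b = a b becomes the Kronecker product a (x) I_2 acting on vec(b).
rng = np.random.default_rng(4)
a = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

phi = lambda m: np.trace(m) / 2                 # normalized trace
vec = lambda m: m.reshape(-1)                   # row-major vectorization
inner = lambda u, v: np.vdot(v, u) / 2          # <u, v> in the GNS inner product
lam = lambda m: np.kron(m, np.eye(2))           # lambda(m): b -> m b

one_hat = vec(np.eye(2))
assert np.isclose(phi(a), inner(lam(a) @ one_hat, one_hat))   # phi[a] = <lambda(a)1^, 1^>
assert np.allclose(lam(a).conj().T, lam(a.conj().T))          # lambda(a)* = lambda(a*)
```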
Corollary 2.17. If (A, ϕ) is a C∗-probability space and ϕ is faithful, we may assume A ⊂ B(H), ϕ = ⟨·ξ, ξ⟩. In addition, ξ is cyclic, that is, Aξ is norm-dense in H. Moreover, in this case the weak operator closure of A, with the state ⟨·ξ, ξ⟩, is a W∗-probability space.
How is the preceding statement related to Theorem 2.5? Does it give a complete proof for it? The following result is inspired by Proposition 1.2 in [PV10].
Proposition 2.18. Let µ be a state on C⟨x1, . . . , xd⟩ such that for a fixed C and all ~u,

|µ [xu(1)xu(2) . . . xu(n)]| ≤ Cⁿ.

Then µ can be realized as a joint distribution of a d-tuple of bounded operators on a Hilbert space.

Proof. The construction of the Hilbert space H is as in the GNS construction I. We need to show that the representation is by bounded operators. It suffices to show that each λ(xi) is bounded. Define the "non-commutative ball of radius C" to be the space of formal power series

BC⟨x⟩ = {∑_~u α~u x~u : α~u ∈ C, ∑_~u |α~u| C^{|~u|} < ∞}.

It is easily seen to be a ∗-algebra, to which µ extends via

µ̃ [∑_~u α~u x~u] = ∑_~u α~u µ [x~u].

Define gi to be the power series expansion of (1 − xi²/(4C²))^{1/2}. This series has radius of convergence 2C, and so lies in BC⟨x⟩. Since

gi² = 1 − xi²/(4C²),

for f ∈ Â,

0 ≤ µ̃ [f∗gi∗gif] = µ [f∗(1 − xi²/(4C²))f] = µ [f∗f] − (1/(4C²)) µ [(xif)∗(xif)].

It follows that

‖xif‖µ ≤ 2C ‖f‖µ,

so each operator λ(xi) is bounded.

Remark 2.19. Any µ ∈ D(d) as in the previous proposition produces, via the GNS construction, a C∗-algebra and a von Neumann algebra. Thus, at least in principle, one can study von Neumann algebras by studying such joint distributions. See also Theorem 4.11 of [NS06].
2.3 Other topics.
Remark 2.20 (Group algebras).

L²(C[Γ], τ) = L²(Γ) = {f : Γ → C : ∑_{x∈Γ} |f(x)|² < ∞}.

Γ acts on L²(Γ) on the left by

(λ(y)g)(x) = (δy g)(x) = g(y⁻¹x).

C[Γ] acts on L²(Γ) on the left by

(λ(f)g)(x) = (fg)(x) = ∑_{y∈Γ} f(y) g(y⁻¹x).
The reduced group C∗-algebra is
Cr∗(Γ) = the norm closure of C[Γ] in B(L²(Γ))
(there is also a full C∗-algebra C∗(Γ)). The group von Neumann algebra is
L(Γ) = W∗(Γ) = the weak closure of C[Γ] in B(L²(Γ)).

The vector state τ = ⟨·δe, δe⟩ is the extension of the von Neumann trace, which is still faithful and tracial on Cr∗(Γ) and L(Γ).

Remark 2.21 (The isomorphism problem). It is easy to show that

F2 ≇ F3,

and not too hard that C[F2] ≇ C[F3]. Using K-theory, Pimsner and Voiculescu showed that

Cr∗(F2) ≇ Cr∗(F3).

The question of whether

L(F2) ≃ L(F3)

is open.
Remark 2.22 (Distributions and moments). Let a be a symmetric, bounded operator. C∗(a) ≃ C(σ(a)), σ(a) compact. So (continuous) states on C∗(a) correspond to Borel probability measures on σ(a). In particular, if a is a symmetric, bounded operator in (A, ϕ), there exists a Borel probability measure µa on R such that for any f ∈ C(σ(a)),

ϕ [f(a)] = ∫ f dµa.

µa is the distribution of a (with respect to ϕ). Note that µa is supported on σ(a), so in particular (for a bounded operator) compactly supported. Note also that for a polynomial p,

ϕa [p(x)] = ∫ p(x) dµa(x).

By the Weierstrass theorem and continuity of ϕ, ϕa determines µa. On the other hand, if ϕ is faithful on C∗(a), for a = a∗,

‖a‖ = sup {|z| : z ∈ supp(µa)}.
Exercise 2.23. Let A ∈ (Mn(C), tr) be a Hermitian matrix, with real eigenvalues λ1, . . . , λn. Compute the algebraic distribution ϕA and the analytic distribution µA of A with respect to tr.

Remark 2.24 (Generating functions).

mn = ϕ [aⁿ] = ∫ xⁿ dµa(x)

are the moments of a. Always take m0 = 1. For a formal indeterminate z,

M(z) = ∑_{n=0}^∞ mn zⁿ

is the (formal) moment generating function of a (or of ϕa, or of µa). If a is bounded, then in fact

M(z) = ∫_R 1/(1 − xz) dµa(x)

for z ∈ C, |z| ≤ ‖a‖⁻¹. The (formal) Cauchy transform is

Gµ(z) = ∑_{n=0}^∞ mn z^{−n−1} = (1/z) M(1/z).

But in fact, the Cauchy transform is defined by

Gµ(z) = ∫_R 1/(z − x) dµ(x)

for any finite measure µ on R, as an analytic map Gµ : C⁺ → C⁻. The measure can be recovered from its Cauchy transform via the Stieltjes inversion formula

dµ(x) = −(1/π) lim_{y→0⁺} ℑGµ(x + iy) dx.
Example 2.25. Let Γ = F1 = Z, with a single generator x. In (C[Z], τ), we have the generating element u = δx. Note that

u∗ = δ_{x⁻¹} = u⁻¹,

so u is a unitary. Moreover,

τ [uⁿ] = δ_{n,0},  n ∈ Z,

which by definition says that u is a Haar unitary. What is the distribution of the symmetric operator u + u∗? The moment τ [(u + u⁻¹)ⁿ] is the number of walks with n steps, starting and ending at zero, with steps of length 1 to the right or to the left. So

τ [(u + u⁻¹)^{2n+1}] = 0,  τ [(u + u⁻¹)^{2n}] = \binom{2n}{n}.

The Cauchy transform of the distribution is

G(z) = ∑_{n=0}^∞ \binom{2n}{n} z^{−2n−1} = ∑_{n=0}^∞ ((2n)!/(n! n!)) z^{−2n−1} = ∑_{n=0}^∞ (2ⁿ · 1 · 3 · · · (2n − 1)/n!) z^{−2n−1}

= ∑_{n=0}^∞ (4ⁿ/n!) (1/2)(3/2) · · · ((2n − 1)/2) z^{−2n−1}

= ∑_{n=0}^∞ ((−4)ⁿ/n!) (−1/2)(−3/2) · · · (−(2n − 1)/2) z^{−2n−1} = z⁻¹ (1 − 4z⁻²)^{−1/2} = 1/√(z² − 4).

Therefore the distribution is

dµ(x) = −(1/π) lim_{y→0⁺} ℑ (1/√((x + iy)² − 4)) dx = (1/(π√(4 − x²))) dx on [−2, 2].

This is the arcsine distribution. We will see this example again.

Exercise 2.26. Verify Corollary 2.13(d) for the operator u + u∗ in the preceding example. You might want to use Remark 2.22.
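The moment computation can be double-checked numerically: substituting x = 2 sin θ turns the arcsine moments into elementary integrals, which should match the central binomial coefficients. A quick sketch (trapezoidal quadrature; grid size arbitrary):

```python
import numpy as np
from math import comb

# 2n-th moment of dmu = dx/(pi sqrt(4 - x^2)) on [-2, 2]; x = 2 sin(theta)
# gives dmu = dtheta/pi, so the moment is (1/pi) * integral of (2 sin theta)^(2n).
m = 400001
theta = np.linspace(-np.pi / 2, np.pi / 2, m)
h = np.pi / (m - 1)
for n in range(5):
    f = (2 * np.sin(theta)) ** (2 * n)
    moment = (f.sum() - 0.5 * (f[0] + f[-1])) * h / np.pi   # trapezoidal rule
    assert abs(moment - comb(2 * n, n)) < 1e-5               # 1, 2, 6, 20, 70
```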
Unbounded operators.
Remark 2.27 (Commutative setting). Let (X, Σ, P) be a probability space, and A = L∞(X, P). Then Ã consists of all the measurable functions, which form an algebra. Moreover,

f ∈ Ã ⇔ ∀g ∈ Cb(C, C), g ◦ f ∈ A.

Remark 2.28. On a Hilbert space H, an unbounded operator a is defined only on a (dense) subspace D(a). Since we may have D(a) ∩ D(b) = {0}, we cannot in general define a + b. For a von Neumann algebra A, an unbounded, self-adjoint operator a is affiliated to A if f(a) ∈ A for all bounded continuous f. A general operator T is affiliated to A, T ∈ Ã, if in its polar decomposition T = ua, the partial isometry u ∈ A and the positive operator a is affiliated to A. If (A, ϕ) is a W∗-probability space with ϕ a faithful, tracial state (so that A is a finite von Neumann algebra), then Ã forms an algebra.
Moment problem.
If a is an unbounded, self-adjoint operator, there exists a probability measure µa on R, not necessarily compactly supported, such that

ϕ [f(a)] = ∫_R f dµa

for f ∈ Cb(R). The moments ϕ [aⁿ] = ∫ xⁿ dµa(x) need not be finite, so ϕa may be undefined. Also, we can have non-uniqueness in the moment problem: µ ≠ ν with

∫ p(x) dµ = ∫ p(x) dν  ∀p ∈ C[x].

Definition 2.29. ak → a in distribution if µak → µa weakly, that is,

∫ f dµak → ∫ f dµa  ∀f ∈ Cb(R).

Proposition 2.30. If all µak and µa are compactly supported (in particular, if ak, a are bounded), then
ak → a in moments ⇔ ak → a in distribution.
More generally, if ak → a in moments and µa is determined by its moments, then ak → a in distribution.
Tensor products.
Remark 2.31 (Vector spaces). Let V1,...,Vn be vector spaces. Their algebraic tensor product is
V1 ⊗ V2 ⊗ . . . ⊗ Vn = {∑_{i=1}^k a1^{(i)} ⊗ . . . ⊗ an^{(i)} : k ≥ 0, aj^{(i)} ∈ Vj} / linearity relations:

a1 ⊗ . . . ⊗ (za + wb) ⊗ . . . ⊗ an = z a1 ⊗ . . . ⊗ a ⊗ . . . ⊗ an + w a1 ⊗ . . . ⊗ b ⊗ . . . ⊗ an.

Thus for example a ⊗ b + a ⊗ c = a ⊗ (b + c), but a ⊗ b + c ⊗ d cannot in general be simplified. Note also that a ⊗ b ≠ b ⊗ a.
Remark 2.32 (Algebras). Let A1, . . . , An be algebras. Their algebraic tensor product A1 ⊗ . . . ⊗ An has the vector space structure from the preceding remark and the algebra structure
(a1 ⊗ ... ⊗ an)(b1 ⊗ ... ⊗ bn) = a1b1 ⊗ a2b2 ⊗ ... ⊗ anbn.
If the algebras are unital, we have natural embeddings
Ai ↪ ⊗_{j=1}^n Aj

via ai ↦ 1 ⊗ . . . ⊗ ai ⊗ . . . ⊗ 1.
Note that the images of Ai, Aj commute:
(a ⊗ 1)(1 ⊗ b) = a ⊗ b = (1 ⊗ b)(a ⊗ 1).
This is a universal object for this property.
We also have tensor products of Hilbert spaces, C∗-algebras, and von Neumann algebras. These require taking closures, which in some cases are not unique.

Remark 2.33 (Probability spaces). Let (A1, ϕ1), . . . , (An, ϕn) be n.c. probability spaces. The tensor product state

ϕ = ⊗_{i=1}^n ϕi

on ⊗_{i=1}^n Ai is defined via the linear extension of
ϕ [a1 ⊗ ... ⊗ an] = ϕ1[a1]ϕ2[a2] . . . ϕn[an].
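For matrix algebras the tensor product is concretely the Kronecker product, so both the commutation of the embeddings and the factorization of the product state can be checked directly. A small numerical sketch (an illustration with the normalized traces as the ϕi):

```python
import numpy as np

# M_2(C) (x) M_3(C) via the Kronecker product: the embeddings a -> a (x) 1 and
# b -> 1 (x) b commute, and the state tr (x) tr factorizes on elementary tensors.
rng = np.random.default_rng(5)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((3, 3))
I2, I3 = np.eye(2), np.eye(3)
ntr = lambda M: np.trace(M) / M.shape[0]   # normalized trace

assert np.allclose(np.kron(A, I3) @ np.kron(I2, B), np.kron(A, B))
assert np.allclose(np.kron(I2, B) @ np.kron(A, I3), np.kron(A, B))
assert np.isclose(ntr(np.kron(A, B)), ntr(A) * ntr(B))
```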
Chapter 3
Free independence.
See Lectures 5, 6 of [NS06].
3.1 Independence. Free independence.
Definition 3.1. Subalgebras A1, A2, . . . , An ⊂ (A, ϕ) are independent (with respect to ϕ) if they commute and for any ai ∈ Ai, 1 ≤ i ≤ n,
ϕ [a1a2 . . . an] = ϕ [a1] ϕ [a2] . . . ϕ [an] .
Elements a1, a2, . . . , an are independent if the ∗-subalgebras they generate are independent.
Note that we do not assume that each Ai is itself commutative.

Example 3.2. If

(A, ϕ) = ⊗_{i=1}^n (Ai, ϕi),

then A1, . . . , An considered as subalgebras of A are independent. For this reason, independence is sometimes called tensor independence.
Remark 3.3. If a1, . . . , an are independent (and in particular commute), their individual distributions
ϕa1 , . . . , ϕan completely determine their joint distribution ϕa1,...,an . Indeed, commutativity implies that all the joint moments can be brought into the form
ϕ [a1^{u(1)} a2^{u(2)} . . . an^{u(n)}] = ϕ [a1^{u(1)}] ϕ [a2^{u(2)}] . . . ϕ [an^{u(n)}].
This is not true for singleton independence: if ϕ [a1] = ϕ [a2] = 0, then
ϕ [a1a2a1] = 0,
but ϕ [a1a2a1a2] is not determined. We want an independence-type rule for joint distributions of non-commuting variables.

Definition 3.4 (Voiculescu). Let (A, ϕ) be an n.c. probability space.

a. Subalgebras A1, A2, . . . , Ak ⊂ (A, ϕ) are freely independent (or free) with respect to ϕ if whenever u(1) ≠ u(2), u(2) ≠ u(3), u(3) ≠ u(4), . . . ,
ai ∈ Au(i), ϕ [ai] = 0, then ϕ [a1a2 . . . an] = 0.
b. Elements a1, a2, . . . , ak are freely independent if the ∗-subalgebras they generate are freely inde- pendent.
Proposition 3.5. Let Fn be the free group with generators x1, x2, . . . , xn, and consider the n.c. probability space (C[Fn], τ). Then with respect to τ, the Haar unitaries λ(x1), λ(x2), . . . , λ(xn) are freely independent.

Proof. We want to show that

τ [∏_{j=1}^n Pj(λ(xu(j)), λ(xu(j))⁻¹)] = 0

whenever

τ [Pj(λ(x), λ(x⁻¹))] = 0   (3.1)

and

u(1) ≠ u(2) ≠ u(3) ≠ . . . ≠ u(n).   (3.2)

Note first that

Pj(λ(x), λ(x⁻¹)) = ∑_{k∈Z} α_k^{(j)} λ(x^k),

and so equation (3.1) implies that each α_0^{(j)} = 0. Moreover,

∏_{j=1}^n Pj(λ(xu(j)), λ(xu(j))⁻¹) = ∑_{~k} α_{k(1)}^{(1)} · · · α_{k(n)}^{(n)} λ(xu(1)^{k(1)} · · · xu(n)^{k(n)}),

where all k(i) ≠ 0. But then by (3.2), the word

xu(1)^{k(1)} · · · xu(n)^{k(n)}

is reduced, has no cancellations, and in particular never equals e. It follows that τ applied to any of the terms in the sum is zero.
Remark 3.6. Pairwise free does not imply free. For example, in (C[F2], τ), let
a = λ(x1), b = λ(x2), c = λ(x1x2) = ab.
Then the elements in each pair {a, b}, {a, c}, {b, c} are free, but the triple {a, b, c} is not free.
Remark 3.7. If A1, . . . , An ⊂ (A, ϕ) ⊂ B(H) are ∗-subalgebras which are free, then C∗(A1), . . . , C∗(An) are free, and W∗(A1), . . . , W∗(An) are free.

Claim 3.8. Suppose we know ϕa1, . . . , ϕan, and a1, . . . , an are free. Then ϕa1,...,an is determined.

Example 3.9. Let a, b ∈ (A, ϕ) be free. How do we compute ϕ [abab]? Write a◦ = a − ϕ [a]. Note

ϕ [(a◦)²] = ϕ [a² − 2aϕ [a] + ϕ [a]²] = ϕ [a²] − ϕ [a]².
Then ϕ [abab] = ϕ [(a◦ + ϕ [a])(b◦ + ϕ [b])(a◦ + ϕ [a])(b◦ + ϕ [b])] . Using freeness and linearity, this reduces to
ϕ [abab] = ϕ [a] ϕ [b] ϕ [a] ϕ [b] + ϕ [a]² ϕ [(b◦)²] + ϕ [b]² ϕ [(a◦)²]

= ϕ [a] ϕ [b] ϕ [a] ϕ [b] + ϕ [a]² (ϕ [b²] − ϕ [b]²) + ϕ [b]² (ϕ [a²] − ϕ [a]²)

= ϕ [a²] ϕ [b]² + ϕ [a]² ϕ [b²] − ϕ [a]² ϕ [b]².
Moral: not a good way to compute.
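Still, the answer can be tested concretely in (C[F2], τ). With u = λ(x1), v = λ(x2) the free Haar unitaries, take a = 1 + u + u⁻¹ and b = 1 + v + v⁻¹, which are free by Proposition 3.5. The sketch below (an illustration added here; the dictionary-over-reduced-words encoding is ad hoc) compares the direct value of τ [abab] with the formula from Example 3.9:

```python
# C[F_2] with reduced words as tuples of letters; 'U', 'V' encode u^{-1}, v^{-1}.
def reduce_word(w):
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()                       # cancel a letter against its inverse
        else:
            out.append(c)
    return tuple(out)

def mult(f, g):
    # multiply finitely supported functions on F_2 (dicts: word -> coefficient)
    h = {}
    for w1, c1 in f.items():
        for w2, c2 in g.items():
            w = reduce_word(w1 + w2)
            h[w] = h.get(w, 0) + c1 * c2
    return h

tau = lambda f: f.get((), 0)                # von Neumann trace: coefficient of e

a = {(): 1, ('u',): 1, ('U',): 1}           # a = 1 + u + u^{-1}
b = {(): 1, ('v',): 1, ('V',): 1}           # b = 1 + v + v^{-1}

lhs = tau(mult(mult(a, b), mult(a, b)))     # tau[abab] computed directly
rhs = (tau(mult(a, a)) * tau(b) ** 2 + tau(a) ** 2 * tau(mult(b, b))
       - tau(a) ** 2 * tau(b) ** 2)         # formula from Example 3.9
assert lhs == rhs == 5
```

Here τ [a] = τ [b] = 1 and τ [a²] = τ [b²] = 3, so the formula gives 3 + 3 − 1 = 5, matching the count of words that reduce to e.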
Proof of Claim. Inside A, the ∗-algebra generated by a1, . . . , an is
Alg∗(a1, a2, . . . , an)

= C ⊕ Span(⋃_{k=1}^∞ ⋃_{u(1)≠u(2)≠...≠u(k)} {b1b2 . . . bk : bi ∈ Alg∗(au(i)), ϕ [bi] = 0}).
ϕ is zero on the second component, and is determined by ϕ [1] = 1 on the first.
Exercise 3.10. Exercise 5.25 in [NS06]. In this exercise we prove that free independence behaves well under successive decompositions and thus is associative. Let {Ai : i ∈ I} be unital ∗-subalgebras of (A, ϕ), and {Bi^{(j)} : j ∈ J(i)} be unital ∗-subalgebras of Ai. Then we have the following.

a. If {Ai : i ∈ I} are freely independent in (A, ϕ) and for each i ∈ I, {Bi^{(j)} : j ∈ J(i)} are freely independent in (Ai, ϕ|Ai), then all {Bi^{(j)} : i ∈ I, j ∈ J(i)} are freely independent in (A, ϕ).

b. If all {Bi^{(j)} : i ∈ I, j ∈ J(i)} are freely independent in (A, ϕ) and if, for each i ∈ I, Ai is the ∗-algebra generated by {Bi^{(j)} : j ∈ J(i)}, then {Ai : i ∈ I} are freely independent in (A, ϕ).
Exercise 3.11. Let (A, τ) be a tracial n.c. probability space, a1, a2, . . . , an free, and u a unitary. Then {u∗aiu : 1 ≤ i ≤ n} are free. Is the conclusion still true if τ is not assumed to be a trace?

Exercise 3.12. Exercise 5.24 in [NS06]. Let (A, ϕ) be an n.c. probability space. Consider a unital ∗-subalgebra B ⊂ A and a Haar unitary u ∈ A freely independent from B. Show that then also B and u∗Bu are free.
3.2 Free products.
Groups.
Let Γ1, Γ2, . . . , Γn be groups with units ei ∈ Γi. Any word x1x2 . . . xk with all xi ∈ ⋃_{i=1}^n Γi can be reduced by identifying

x1x2 . . . xi−1 ej xi . . . xk = x1x2 . . . xi−1 xi . . . xk

and

x1x2 . . . xi−1 y z xi . . . xk = x1x2 . . . xi−1 (yz) xi . . . xk

if y, z ∈ Γi for the same i. In this way any word can be reduced to a unique (!) reduced word: either the unit e, or

x1x2 . . . xk,  xi ∈ Γu(i) \ {eu(i)},  u(1) ≠ u(2) ≠ . . . ≠ u(k).

Define the free product of groups

Γ = Γ1 ∗ Γ2 ∗ . . . ∗ Γn = ∗_{i=1}^n Γi = {reduced words in elements of ⋃_{i=1}^n Γi}

with unit e, inverse

(x1x2 . . . xk)⁻¹ = xk⁻¹ . . . x2⁻¹ x1⁻¹,

and product

(x1x2 . . . xk)(y1y2 . . . yl) = the reduced version of (x1 . . . xk y1 . . . yl).
Reduced free product of n.c. probability spaces.
Let (A1, ϕ1),..., (An, ϕn) be n.c. probability spaces. Denote
Ai◦ = {a ∈ Ai : ϕi [a] = 0},

W∅ = C = C1, and for u(1) ≠ u(2) ≠ . . . ≠ u(k), define the vector space

W~u = A◦u(1) ⊗ A◦u(2) ⊗ . . . ⊗ A◦u(k).

Let

A = ⊕_{k=0}^∞ ⊕_{|~u|=k, u(1)≠u(2)≠...≠u(k)} W~u = C1 ⊕ ⊕_{k=1}^∞ ⊕_{u(1)≠u(2)≠...≠u(k)} W~u.
By abuse of notation we will write a1 ⊗ a2 ⊗ ... ⊗ ak as a1a2 . . . ak. Define
(a1a2 . . . ak)∗ = ak∗ . . . a2∗ a1∗.

On A, we define multiplication as follows. First,
z(a1a2 . . . ak) = (za1)a2 . . . ak.
Next, let ai ∈ A◦u(i), bi ∈ A◦v(i), where ~u and ~v are alternating. If u(1) ≠ v(1),
(ak . . . a2a1)(b1b2 . . . bl) = ak . . . a2a1b1b2 . . . bl.
In general,
(ak . . . a2a1)(b1b2 . . . bl) = ak . . . a2 (a1b1)◦ b2 . . . bl + ϕu(1) [a1b1] ak . . . a3 (a2b2)◦ b3 . . . bl + . . .

+ ϕu(1) [a1b1] ϕu(2) [a2b2] . . . ϕu(j−1) [aj−1bj−1] ak . . . aj bj . . . bl,

where j ≤ min(k, l) is the first index (if it exists) such that u(j) ≠ v(j). On A, define the free product state

ϕ = ∗_{i=1}^n ϕi

by: ϕ [1] = 1, and on an alternating word in centered elements, ϕ [a1a2 . . . ak] = 0. We will prove later that ϕ is positive; the other properties are clear.
Remark 3.13. This is a reduced free product, since we identify the units of all Ai with the unit 1 ∈ A. Thus this is a product with amalgamation over C. There is also a full free product construction.

Remark 3.14. If we identify

Ai ↔ C ⊕ Ai◦ ⊂ A,  a ↔ ϕ [a] ⊕ (a − ϕ [a]),
then ϕi = ϕ|Ai and A1,..., An are freely independent with respect to ϕ.
Remark 3.15.

∗_{i=1}^n C[Γi] = C[∗_{i=1}^n Γi].
Proposition 3.16. If each ϕi is tracial, so is their free product ϕ.

Corollary 3.17. A free product of commutative algebras is tracial. In particular, a free product of one-dimensional distributions is tracial.

Proof of the Proposition. Let ai ∈ A◦u(i), bi ∈ A◦v(i), u(1) ≠ u(2) ≠ . . . ≠ u(k), v(1) ≠ v(2) ≠ . . . ≠ v(l). Suppose u(1) = v(1), u(2) = v(2), . . . , u(j) = v(j), u(j + 1) ≠ v(j + 1). Then

ϕ [ak . . . a1b1 . . . bl] = ϕ [ak . . . a2 (a1b1)◦ b2 . . . bl] + ϕu(1) [a1b1] ϕ [ak . . . a3 (a2b2)◦ b3 . . . bl] + . . .

+ ϕu(1) [a1b1] ϕu(2) [a2b2] . . . ϕu(j−1) [aj−1bj−1] ϕ [ak . . . aj bj . . . bl].
This is zero unless j = k = l, in which case, since each ϕi is tracial,
ϕ [ak . . . a1b1 . . . bk] = ϕu(1) [a1b1] ϕu(2) [a2b2] . . . ϕu(j) [ajbj]
= ϕu(1) [b1a1] ϕu(2) [b2a2] . . . ϕu(j) [bjaj] = ϕ [b1 . . . bkak . . . a1] .
This implies that ϕ has the trace property for general ai, bi (why?).

Remark 3.18 (Reduced free product of Hilbert spaces with distinguished vectors). Given Hilbert spaces with distinguished vectors (Hi, ξi), i = 1, . . . , n, define their reduced free product

(H, ξ) = ∗_{i=1}^n (Hi, ξi)

as follows. Denote Hi◦ = Hi ⊖ Cξi. Then

Halg = Cξ ⊕ ⊕_{k=1}^∞ ⊕_{u(1)≠u(2)≠...≠u(k)} H~u,  H~u = H◦u(1) ⊗ H◦u(2) ⊗ . . . ⊗ H◦u(k).

H is the completion of Halg with respect to the inner product for which H~u ⊥ H~v for ~u ≠ ~v, and on each H~u we use the usual tensor inner product

⟨f1 ⊗ f2 ⊗ . . . ⊗ fk, g1 ⊗ g2 ⊗ . . . ⊗ gk⟩ = ∏_{i=1}^k ⟨fi, gi⟩u(i)

for fi, gi ∈ H◦u(i). If (Hi, ξi) = L²(Ai, ϕi), then, at least in the faithful case,

(H, ξ) = L²(∗_{i=1}^n (Ai, ϕi)).

More generally, if each Ai is represented on Hi with ϕi = ⟨·ξi, ξi⟩, then one can represent ∗_{i=1}^n (Ai, ϕi) on H so that ϕ = ⟨·ξ, ξ⟩. This implies in particular that ϕ is positive. By using this representation and taking appropriate closures, one can define reduced free products of C∗- and W∗-probability spaces.
Positivity.
How to prove positivity of a joint distribution? Represent it as a joint distribution of symmetric operators on a Hilbert space. How to prove positivity of the free product state? One way is to use the preceding remark. We will use a different, more algebraic proof.
Remark 3.19. A matrix $A \in M_n(\mathbb{C})$ is positive if and only if one of the following equivalent conditions holds:

a. $A = A^*$ and $\sigma(A) \subset [0, \infty)$ (non-negative eigenvalues).

b. $A = B^* B$ for some $B$ (may take $B = B^*$).

c. For all $z = (z_1, z_2, \ldots, z_n)^t \in \mathbb{C}^n$,
\[
\langle A z, z \rangle = \sum_{i,j=1}^n \overline{z_i}\, A_{ij}\, z_j \geq 0.
\]

Proposition 3.20. A linear functional $\varphi$ on $A$ is positive if and only if for all $n$ and all $a_1, a_2, \ldots, a_n \in A$, the (numerical) matrix $[\varphi(a_i^* a_j)]_{i,j=1}^n$ is positive.
Proof. $\Leftarrow$ is clear by taking $n = 1$. For the converse, let $a_1, \ldots, a_n \in A$ and $z_1, \ldots, z_n \in \mathbb{C}$. Then since $\varphi$ is positive,
\[
0 \leq \varphi\left[ \left( \sum_{i=1}^n z_i a_i \right)^* \left( \sum_{j=1}^n z_j a_j \right) \right] = \sum_{i,j=1}^n \overline{z_i}\, \varphi[a_i^* a_j]\, z_j.
\]
By the preceding remark, the matrix $[\varphi(a_i^* a_j)]_{i,j=1}^n$ is positive.
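As a concrete sanity check (not part of the notes), one can verify Proposition 3.20 numerically for the positive functional $\varphi(a) = \mathrm{tr}(a)/d$ on $M_d(\mathbb{C})$; the dimensions and random elements below are arbitrary choices:

```python
# Numerical illustration of Proposition 3.20: for the positive functional
# phi(a) = tr(a)/d on M_d(C), the matrix [phi(a_i^* a_j)] is positive.
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 4
# random (non-self-adjoint) elements a_1, ..., a_n of M_d(C)
a = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
     for _ in range(n)]

phi = lambda x: np.trace(x) / d          # normalized trace, a positive functional
G = np.array([[phi(a[i].conj().T @ a[j]) for j in range(n)] for i in range(n)])

assert np.allclose(G, G.conj().T)             # self-adjoint
assert np.linalg.eigvalsh(G).min() > -1e-10   # non-negative eigenvalues
```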
Definition 3.21. Let $A$ be a $*$-algebra. Then $M_n(A) = M_n(\mathbb{C}) \otimes A$ is also a $*$-algebra, and so has a notion of positivity. If $T : A \to B$, define
\[
T_n : M_n(A) \to M_n(B), \qquad T_n([a_{ij}]_{i,j=1}^n) = [T(a_{ij})]_{i,j=1}^n.
\]
We say that $T$ is completely positive if each $T_n$ is positive.

Remark 3.22. This notion is usually defined only for $C^*$-algebras. Even in that case, positive does not imply completely positive.
Compare with Section 3.5 of [Spe98].
Proposition 3.23. If (A, ϕ) is an n.c. probability space, so that ϕ is positive, then each ϕn : Mn(A) → Mn(C) is positive, so that ϕ is completely positive.
Proof. Let $A \in M_n(A)$ be positive. By definition, that means $A = \sum_{i=1}^N B_i^* B_i$. It suffices to show that each $\varphi_n[B_i^* B_i]$ is positive. So without loss of generality, assume that $A = B^* B$. That is,
\[
A_{ij} = \sum_{k=1}^n b_{ki}^* b_{kj}, \qquad b_{uv} \in A.
\]
Then
\[
[\varphi_n(A)]_{i,j=1}^n = \sum_{k=1}^n [\varphi(b_{ki}^* b_{kj})]_{i,j=1}^n
\]
and for each $k$, this matrix is positive.
Definition 3.24. For $A, B, C \in M_n(\mathbb{C})$, $C$ is the Schur product of $A$ and $B$ if
\[
C_{ij} = A_{ij} B_{ij}.
\]
Proposition 3.25. If A, B are positive, so is their Schur product.
Proof. Let $A = D^* D$, so that $A_{ij} = \sum_{k=1}^n \overline{D_{ki}}\, D_{kj}$. Then
\[
\sum_{i,j=1}^n \overline{z_i}\, C_{ij}\, z_j
= \sum_{i,j,k=1}^n \overline{z_i}\, \overline{D_{ki}}\, D_{kj}\, B_{ij}\, z_j
= \sum_{k=1}^n \left( \sum_{i,j=1}^n \overline{(D_{ki} z_i)}\, B_{ij}\, (D_{kj} z_j) \right) \geq 0,
\]
since $B$ is positive. Therefore $C$ is positive.
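Proposition 3.25 is also easy to test numerically; the following sketch (an illustration with arbitrary random matrices, not code from the notes) checks that the Schur product of two positive matrices has non-negative eigenvalues:

```python
# Numerical illustration of Proposition 3.25: the entrywise (Schur) product
# of two positive semidefinite matrices is again positive semidefinite.
import numpy as np

rng = np.random.default_rng(1)
n = 5
D = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
E = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = D.conj().T @ D        # A = D*D, positive
B = E.conj().T @ E        # B = E*E, positive
C = A * B                 # Schur (entrywise) product

assert np.allclose(C, C.conj().T)             # C is self-adjoint
assert np.linalg.eigvalsh(C).min() > -1e-10   # and positive semidefinite
```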
Theorem 3.26. Let $(A_i, \varphi_i)_{i=1}^n$ be n.c. probability spaces and $(A, \varphi) = *_{i=1}^n (A_i, \varphi_i)$. Then $\varphi$ is positive. If each $\varphi_i$ is faithful, so is $\varphi$.
Proof. Recall the representation
\[
A = \bigoplus_{k=1}^\infty \bigoplus_{\substack{|\vec{u}| = k \\ u(1) \neq u(2) \neq \ldots \neq u(k)}} W_{\vec{u}},
\]
and $\varphi[\xi \eta] = 0$ unless $\xi, \eta \in W_{\vec{u}}$ for the same $\vec{u}$. Thus for $\xi = \sum_{\vec{u}} \xi_{\vec{u}}$,
\[
\varphi[\xi^* \xi] = \sum_{\vec{u}} \varphi[\xi_{\vec{u}}^* \xi_{\vec{u}}].
\]
So it suffices to show that $\varphi[\xi^* \xi] \geq 0$ for $\xi \in W_{\vec{u}}$. In its turn,
\[
\xi = \sum_{j=1}^r a_1^{(j)} a_2^{(j)} \ldots a_k^{(j)}, \qquad a_i^{(j)} \in A_{u(i)}^\circ.
\]
Then
\[
\varphi[\xi^* \xi]
= \varphi\left[ \sum_{i=1}^r (a_k^{(i)})^* \ldots (a_1^{(i)})^* \sum_{j=1}^r a_1^{(j)} \ldots a_k^{(j)} \right]
= \sum_{i,j=1}^r \varphi_{u(1)}\!\left[ (a_1^{(i)})^* a_1^{(j)} \right] \cdots \varphi_{u(k)}\!\left[ (a_k^{(i)})^* a_k^{(j)} \right].
\]
Each matrix $\left[ \varphi_{u(s)}\!\left[ (a_s^{(i)})^* a_s^{(j)} \right] \right]_{i,j=1}^r$ is positive, therefore so is their Schur product. The proof of faithfulness is similar; see Proposition 6.14 in [NS06].
Remark 3.27 (Free convolutions). Let $\mu, \nu$ be two distributions (probability measures on $\mathbb{R}$). Free products allow us to construct two symmetric, freely independent variables $a, b$ in some tracial $C^*$-probability space $(A, \varphi)$ such that $\mu = \mu_a$, $\nu = \mu_b$ (why?). We know the distribution of $a + b$ is a probability measure on $\mathbb{R}$, and depends only on $\mu, \nu$ and not on $a, b$. Thus we may define the additive free convolution $\mu \boxplus \nu$ by
\[
\mu \boxplus \nu = \mu_{a+b}.
\]
How to compute it?
Similarly, define the multiplicative free convolution $\mu \boxtimes \nu = \varphi_{ab}$. Note that $ab$ is not symmetric, so in general $\mu \boxtimes \nu$ is not positive and does not correspond to a measure on $\mathbb{R}$. Also note that since $\varphi$ is tracial, $\boxtimes$ is commutative. Suppose that $\mu, \nu$ are supported in $[0, \infty)$. Then we may choose $a, b$ to be positive. In that case, $a^{1/2} b a^{1/2}$ is also positive, and since $\varphi$ is a trace,
\[
\varphi_{ab} = \varphi_{a^{1/2} b a^{1/2}}.
\]
In this case $\mu \boxtimes \nu$ may be identified with a probability measure on $[0, \infty)$. Now suppose instead that $\mu, \nu$ are supported on the unit circle $\mathbb{T} = \{z \in \mathbb{C} : |z| = 1\}$. Then we may choose $a, b$ to be unitary. Then $ab$ is also unitary, so in this case $\mu \boxtimes \nu$ may be identified with a probability measure on $\mathbb{T}$.
Proposition 3.28 (Compare with Exercise 3.11). Let $(A, \tau)$ be an n.c. probability space, $a_1, a_2, \ldots, a_n$ free, and $u$ a unitary free from them. Then $\{u^* a_i u : 1 \leq i \leq n\}$ are free.

Proof. The key point is that even if $\tau$ is not tracial, if $a$ and $u$ are freely independent then
\[
\tau[u^* a u] = \tau[u^* \tau[a] u] + \tau[u^* a^\circ u]
= \tau[a]\, \tau[u^* u] + \tau[(u^*)^\circ a^\circ u^\circ] + \tau[u^*]\, \tau[a^\circ u^\circ] + \tau[(u^*)^\circ a^\circ]\, \tau[u] + \tau[u^*]\, \tau[a^\circ]\, \tau[u]
= \tau[a].
\]
In particular, $\tau[a] = 0$ if and only if $\tau[u^* a u] = 0$. The rest of the argument proceeds as in the solution to the exercise.
By a similar argument, we can also get (how?) the following useful weakening of the hypothesis in the definition of free independence:
Proposition 3.29. Let $(A, \varphi)$ be an n.c. probability space. Subalgebras $A_1, A_2, \ldots, A_k \subset (A, \varphi)$ are freely independent with respect to $\varphi$ if and only if whenever
\[
u(1) \neq u(2) \neq u(3) \neq \ldots \neq u(n), \qquad a_i \in A_{u(i)} \text{ for } 1 \leq i \leq n, \qquad \varphi[a_i] = 0 \text{ for } 2 \leq i \leq n-1,
\]
then
\[
\varphi[a_1 a_2 \ldots a_n] = 0.
\]
Chapter 4
Set partition lattices.
See Lectures 9, 10 of [NS06]. For general combinatorial background, see [Sta97]. The general theory of incidence algebras, Möbius inversion, etc. goes back to the series On the foundations of combinatorial theory I–X by Gian-Carlo Rota and various co-authors.
4.1 All, non-crossing, interval partitions. Enumeration.
Let $S$ be a finite ordered set. $\pi = \{B_1, B_2, \ldots, B_k\}$ is a (set) partition of $S$, $\pi \in \mathcal{P}(S)$, if
\[
B_1 \cup B_2 \cup \ldots \cup B_k = S, \qquad B_i \cap B_j = \emptyset \text{ for } i \neq j, \qquad B_i \neq \emptyset.
\]
The $\{B_i\}$ are the blocks or classes of $\pi$. We also write $i \sim_\pi j$ if $i$ and $j$ are in the same block of $\pi$. Denote by $|\pi|$ the number of blocks of $\pi$. We usually take $S = [n] = \{1, 2, \ldots, n\}$, and write $\mathcal{P}([n]) = \mathcal{P}(n)$. Partitions are partially ordered by reverse refinement: if
\[
\pi = \{B_1, \ldots, B_k\}, \qquad \sigma = \{C_1, \ldots, C_r\},
\]
then
\[
\pi \leq \sigma \quad \Leftrightarrow \quad \forall i\ \exists j : B_i \subset C_j.
\]
For example, if $\pi = \{(1,3,5)(2)(4,6)\}$ and $\sigma = \{(1,3,5)(2,4,6)\}$, then $\pi \leq \sigma$. The largest partition is
\[
\hat{1} = \hat{1}_n = \{(1, 2, \ldots, n)\}
\]
and the smallest one is
\[
\hat{0} = \hat{0}_n = \{(1)(2) \ldots (n)\}.
\]
Example 4.1. $\mathcal{P}(3)$ has 5 elements: one largest, one smallest, and three which are not comparable to each other.
Partitions form a lattice: for any $\pi, \sigma \in \mathcal{P}(S)$, there exists a largest partition $\tau$ such that $\tau \leq \pi$, $\tau \leq \sigma$, called the meet and denoted $\pi \wedge \sigma$; and a smallest partition $\tau$ such that $\pi \leq \tau$, $\sigma \leq \tau$, called the join and denoted $\pi \vee \sigma$. Clearly
\[
i \sim_{\pi \wedge \sigma} j \quad \Leftrightarrow \quad i \sim_\pi j \text{ and } i \sim_\sigma j;
\]
$\pi \vee \sigma$ can be thought of as the equivalence relation on $S$ generated by $\pi$ and $\sigma$. For example, if $\pi = \{(1,3,5)(2,4,6)\}$ and $\sigma = \{(1,2,3)(4,5,6)\}$, then $\pi \wedge \sigma = \{(1,3)(2)(4,6)(5)\}$. If $\pi = \{(1,2)(3,4)(5,6)(7)\}$ and $\sigma = \{(1)(2,3)(4,5)(6,7)\}$, then $\pi \vee \sigma = \hat{1}_7$.

Definition 4.2. A partition $\pi \in \mathcal{P}(S)$ of an ordered set $S$ is a non-crossing partition if
\[
i \sim_\pi j, \quad k \sim_\pi l, \quad i < k < j < l \quad \Rightarrow \quad i \sim_\pi k \sim_\pi j \sim_\pi l.
\]
See Figure 4. Denote non-crossing partitions by NC(S), NC(n).
Note that $\hat{0}, \hat{1} \in NC(S)$. With the same operations $\wedge$ and $\vee$ as above, $NC(S)$ is a lattice. If $\pi, \sigma \in NC(S)$, they have the same $\pi \wedge \sigma$ in $\mathcal{P}(S)$ and in $NC(S)$. However, for $\pi = \{(1,3)(2)(4)\}$ and $\sigma = \{(1)(2,4)(3)\}$, $\pi \vee \sigma = \{(1,3)(2,4)\}$ in $\mathcal{P}(4)$ but $\pi \vee \sigma = \hat{1}_4$ in $NC(4)$.

Definition 4.3. $\pi \in NC(S) \subset \mathcal{P}(S)$ is an interval partition if its classes are intervals. See Figure 4. They also form a lattice. Denote them by $\mathrm{Int}(S)$, $\mathrm{Int}(n)$.
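The lattice operations and the non-crossing condition are straightforward to experiment with in code. Here is a minimal sketch (an illustration, not part of the notes; all function names are ad hoc): it computes the join in $\mathcal{P}(n)$ by a union-find and tests Definition 4.2, reproducing the example where the joins in $\mathcal{P}(4)$ and $NC(4)$ differ.

```python
# Join in P(n) as the generated equivalence relation (union-find), and the
# non-crossing test of Definition 4.2. Illustrative code, not from the notes.
from itertools import combinations

def join(p, q, n):
    """Join of two partitions of [n] in P(n)."""
    parent = list(range(n + 1))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for block in list(p) + list(q):
        block = sorted(block)
        for x in block[1:]:
            parent[find(x)] = find(block[0])
    classes = {}
    for i in range(1, n + 1):
        classes.setdefault(find(i), []).append(i)
    return sorted(tuple(b) for b in classes.values())

def is_noncrossing(blocks):
    """Definition 4.2: no i < k < j < l with i ~ j, k ~ l in different blocks."""
    for b, c in combinations(blocks, 2):
        for i, j in combinations(sorted(b), 2):
            for k, l in combinations(sorted(c), 2):
                if i < k < j < l or k < i < l < j:
                    return False
    return True

pi, sigma = [{1, 3}, {2}, {4}], [{1}, {2, 4}, {3}]
assert join(pi, sigma, 4) == [(1, 3), (2, 4)]   # join in P(4) ...
assert not is_noncrossing([{1, 3}, {2, 4}])     # ... which is crossing
# the earlier example: pi v sigma = 1-hat_7 in P(7)
pi, sigma = [{1, 2}, {3, 4}, {5, 6}, {7}], [{1}, {2, 3}, {4, 5}, {6, 7}]
assert join(pi, sigma, 7) == [(1, 2, 3, 4, 5, 6, 7)]
```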
Exercise 4.4. Denote $\mathcal{S}(n) = \{\text{subsets of } [n]\}$. Then $\mathcal{S}(n)$ is a lattice (by reverse inclusion), and
\[
\mathrm{Int}(n) \simeq \mathcal{S}(n-1).
\]
For this reason, interval partitions are also called Boolean partitions.
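The bijection of Exercise 4.4 is concrete: an interval partition of $[n]$ is determined by its set of "break points" in $\{1, \ldots, n-1\}$. A small sketch (illustrative code, not from the notes):

```python
# Interval partitions of [n] from subsets of break points in {1, ..., n-1},
# realizing Int(n) ~ S(n-1); in particular |Int(n)| = 2^(n-1).
from itertools import combinations

def interval_partitions(n):
    parts = []
    for r in range(n):
        for breaks in combinations(range(1, n), r):
            cuts = [0] + list(breaks) + [n]
            parts.append([tuple(range(cuts[i] + 1, cuts[i + 1] + 1))
                          for i in range(len(cuts) - 1)])
    return parts

assert [(1, 2, 3)] in interval_partitions(3)     # no breaks: one block
assert len(interval_partitions(4)) == 2 ** 3     # |Int(4)| = |S(3)| = 8
```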
We will study $NC(n)$, but also $\mathcal{P}(n)$ and $\mathrm{Int}(n)$, in a lot more detail.
Enumeration I.
Definition 4.5. $B(n) = |\mathcal{P}(n)|$, the Bell number. $S(n, k) = |\{\pi \in \mathcal{P}(n) : |\pi| = k\}|$, the Stirling number of the second kind. No formula.
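Although there is no simple closed formula, the Stirling numbers satisfy a standard recursion (a well-known fact, not stated in the notes): $S(n+1, k) = k\, S(n, k) + S(n, k-1)$, since the element $n+1$ either joins one of the $k$ existing blocks or forms its own. A short sketch:

```python
# Stirling numbers of the second kind via the recursion
# S(n+1, k) = k*S(n, k) + S(n, k-1); summing over k recovers Bell numbers.
def stirling2(N):
    S = [[0] * (N + 1) for _ in range(N + 1)]
    S[0][0] = 1
    for n in range(N):
        for k in range(1, N + 1):
            S[n + 1][k] = k * S[n][k] + S[n][k - 1]
    return S

S = stirling2(6)
assert S[4][2] == 7          # 7 partitions of [4] into 2 blocks
assert sum(S[5]) == 52       # summing over k gives the Bell number B(5)
```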
Lemma 4.6. The Bell numbers satisfy the recursion
\[
B(n + 1) = \sum_{i=0}^n \binom{n}{i} B(n - i), \qquad n \geq 1,
\]
where by convention $B(0) = 1$. Consequently, their exponential generating function is
\[
F(z) = \sum_{n=0}^\infty \frac{1}{n!} B(n) z^n = e^{e^z - 1}.
\]
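The recursion of Lemma 4.6 is easy to check against a brute-force count of set partitions; the following sketch (illustrative, with ad hoc function names) compares the two:

```python
# Bell numbers from the recursion B(n+1) = sum_i C(n, i) B(n-i), checked
# against a direct count: place elements 1, ..., n one at a time into an
# existing block or a new block.
from math import comb

def bell(N):
    B = [1]                                   # B(0) = 1
    for n in range(N):
        B.append(sum(comb(n, i) * B[n - i] for i in range(n + 1)))
    return B

def count_partitions(n):
    def rec(k, blocks):
        if k > n:
            return 1
        total = rec(k + 1, blocks + 1)        # element k starts a new block
        total += blocks * rec(k + 1, blocks)  # or joins an existing block
        return total
    return rec(1, 0)

B = bell(8)
assert B[:6] == [1, 1, 2, 5, 15, 52]
assert all(B[n] == count_partitions(n) for n in range(1, 8))
```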
Remark 4.7. $e^{e^z - 1}$ grows faster than exponentially, so $B(n)$ grows faster than any $a^n$.