LAMBDA-RINGS
ZEHUI CHEN
Abstract. We give a brief introduction to the theory of λ-rings accessible to undergraduate students. We begin by introducing the concepts of tensor product, exterior power, and graded vector spaces. Then we give the definition of λ-rings and discuss several examples. Finally, we explain how direct sums, tensor products, and exterior powers of (graded) vector spaces induce λ-ring structures on the Grothendieck groups of these categories.
Contents
1. Introduction
2. Tensor Product
3. Exterior Power
4. Graded Vector Spaces
5. λ-Rings
6. Direct Sum, Tensor Product, Exterior Power and λ-Ring Structure
References
1. Introduction

In algebra, a λ-ring is a commutative ring equipped with operations λ^n for all non-negative integers n. The study of λ-rings is mainly motivated by the behaviour of tensor products and exterior powers of vector spaces. The categories of vector spaces and graded vector spaces, with direct sum, tensor product, and exterior power, induce the addition, multiplication, and λ-operations in the λ-rings of integers Z and of Laurent polynomials.

The purpose of this paper is to give a brief introduction to the subject accessible to undergraduate students with basic knowledge of linear algebra and group theory. In Section 2, we give the definition of the tensor product and establish its basic properties, namely the universal property, its dimension, and distributivity over direct sums. In Section 3, we define the exterior power of a vector space and prove the theorem on dimensions of exterior powers. After that, we explain graded vector spaces and the graded dimensions of tensor products and exterior powers in Section 4. Then, in Section 5, the axioms of λ-rings are given, as well as some simple examples of λ-rings, including the integers Z, the ring of power series, and the ring of symmetric functions. Finally, we explain how the operations of direct sum, tensor product, and exterior power of (graded) vector spaces induce λ-ring structures on the integers and on the ring of Laurent polynomials, when we identify these rings with the Grothendieck groups of the categories of finite-dimensional vector spaces and finite-dimensional graded vector spaces.

Acknowledgements. This project was completed as part of an Undergraduate Research Project at the University of Ottawa. I would like to thank Professor Alistair Savage at the University of Ottawa for his advice and support.
2. Tensor Product

Definition 2.1. (Bilinear map). Let U, V, and W be three vector spaces over the same field F. Then a bilinear map b: U × V → W is a mapping such that
(a) b(u1 + u2, v) = b(u1, v) + b(u2, v) for all u1, u2 ∈ U, v ∈ V;
(b) b(u, v1 + v2) = b(u, v1) + b(u, v2) for all u ∈ U, v1, v2 ∈ V;
(c) b(αu, v) = αb(u, v) = b(u, αv) for all u ∈ U, v ∈ V, α ∈ F.
In other words, b(·, v) is linear from U to W for all v ∈ V, and b(u, ·) is linear from V to W for all u ∈ U.
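The conditions of a bilinear map can be checked numerically for a concrete example. The following Python sketch verifies (a)-(c) for a map of the form b(u, v) = u^T M v; the matrix M and the test vectors are arbitrary illustrative choices, not taken from the text.

```python
# Sketch: numerically check the bilinearity conditions (a)-(c) for
# b(u, v) = u^T M v on R^2 x R^3, with a fixed (arbitrary) matrix M.

M = [[1, 2, 0],
     [3, -1, 4]]  # a 2x3 matrix, so U = R^2 and V = R^3

def b(u, v):
    # u^T M v = sum_i sum_j u_i * M[i][j] * v_j
    return sum(u[i] * M[i][j] * v[j] for i in range(2) for j in range(3))

def add(x, y):
    return [a + c for a, c in zip(x, y)]

def scale(a, x):
    return [a * c for c in x]

u1, u2 = [1, 2], [-3, 5]
v1, v2 = [2, 0, 1], [1, 1, -1]
alpha = 7

# (a) additivity in the first argument
assert b(add(u1, u2), v1) == b(u1, v1) + b(u2, v1)
# (b) additivity in the second argument
assert b(u1, add(v1, v2)) == b(u1, v1) + b(u1, v2)
# (c) homogeneity in either argument
assert b(scale(alpha, u1), v1) == alpha * b(u1, v1) == b(u1, scale(alpha, v1))
print("bilinearity checks passed")
```

Since the checks use exact integer arithmetic, the assertions hold identically, not just up to rounding.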
Example 2.2. Let M be an m × n complex matrix, U be the vector space C^m, and V be the vector space C^n. Define the map b: U × V → C by b(u, v) = u^T M v for all u ∈ U and v ∈ V, where u^T is the transpose of the vector u. Then the function b is a bilinear map.

Definition 2.3. (Tensor product). Let U, V be vector spaces. The tensor product of U and V, denoted U ⊗ V, is a vector space together with a bilinear map ⊗: U × V → U ⊗ V such that for any bilinear map b: U × V → W, there exists a unique linear map ℓ: U ⊗ V → W such that b = ℓ ∘ ⊗, that is, such that the following diagram commutes:

    U × V ──⊗──→ U ⊗ V
        \          |
       b  \        |  ℓ
           \       v
            `───→  W
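For U = F^m and V = F^n, a concrete model of U ⊗ V is the space of m × n matrices, with u ⊗ v realised as the outer product of coordinate vectors. The sketch below is an illustration under that identification, not the abstract construction; the numerical values are arbitrary.

```python
# Sketch: model u ⊗ v for u in F^m, v in F^n as the m x n outer product,
# and check one of the bilinearity properties of ⊗.

def outer(u, v):
    # (u ⊗ v) has entries u_i * v_j
    return [[ui * vj for vj in v] for ui in u]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

u1, u2 = [1, 2], [0, 3]
v = [4, 5, 6]

# (u1 + u2) ⊗ v = u1 ⊗ v + u2 ⊗ v
lhs = outer([a + b for a, b in zip(u1, u2)], v)
rhs = mat_add(outer(u1, v), outer(u2, v))
assert lhs == rhs

# The space of 2x3 matrices has dimension 2 * 3 = dim U * dim V
assert len(lhs) * len(lhs[0]) == 2 * 3
print("outer-product model checks passed")
```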
Definition 2.4. (Free vector space). Let S be a set and F be a field. The free vector space F(S) generated by S is defined as
F(S) = {f: S → F | f(s) = 0 for all s ∈ S \ S0, where S0 is some finite subset of S},
where the operations on F(S) are pointwise addition and scalar multiplication.

Definition 2.5. (Quotient space). Let V be a vector space and W ⊆ V be a subspace of V. Define ∼ to be the equivalence relation such that v1 ∼ v2 if and only if v1 − v2 ∈ W for all v1, v2 ∈ V. Then the quotient space V/W is defined as V/∼, and the elements of V/W are the equivalence classes [v] for v ∈ V.

Definition 2.6. (Tensor product). Let U and V be vector spaces over a field F. Define W ⊆ F(U × V) to be the subspace spanned by the set

{(u1 + u2, v) − (u1, v) − (u2, v) | u1, u2 ∈ U, v ∈ V}
∪ {(u, v1 + v2) − (u, v1) − (u, v2) | u ∈ U, v1, v2 ∈ V}
∪ {α(u, v) − (αu, v) | u ∈ U, v ∈ V, α ∈ F}
∪ {α(u, v) − (u, αv) | u ∈ U, v ∈ V, α ∈ F}.

Then the tensor product of U and V, denoted U ⊗ V, is defined as U ⊗ V = F(U × V)/W. For u ∈ U and v ∈ V, the tensor product of u and v, denoted u ⊗ v, is defined as u ⊗ v = [(u, v)]. It follows from the definition of the quotient space that the tensor product satisfies the following properties:
(a) (u1 + u2) ⊗ v = u1 ⊗ v + u2 ⊗ v for all u1, u2 ∈ U and v ∈ V;
(b) u ⊗ (v1 + v2) = u ⊗ v1 + u ⊗ v2 for all u ∈ U and v1, v2 ∈ V;
(c) αu ⊗ v = α(u ⊗ v) = u ⊗ αv for all u ∈ U, v ∈ V, and α ∈ F.

Remark 2.7. One can show that the tensor product as defined in Definition 2.6 satisfies the universal property of Definition 2.3.

Theorem 2.8. Let U and V be two finite-dimensional vector spaces over a field F. Then
dim(U ⊗ V) = dim U · dim V.
Proof. Let dim U = m and dim V = n. Then U has a basis S1 = {u1, u2, …, um} and V has a basis S2 = {v1, v2, …, vn}. Now define S = {ui ⊗ vj | ui ∈ S1, vj ∈ S2}. The set S is a subset of U ⊗ V with mn elements, so it is enough to prove that S is a basis of U ⊗ V.

For any element of the form u ⊗ v ∈ U ⊗ V, write u = α1 u1 + α2 u2 + ⋯ + αm um and v = β1 v1 + β2 v2 + ⋯ + βn vn for αi, βj ∈ F. Then
u ⊗ v = (α1 u1 + ⋯ + αm um) ⊗ (β1 v1 + ⋯ + βn vn) = ∑_{i=1}^{m} ∑_{j=1}^{n} αi βj (ui ⊗ vj).
Hence any element x = u ⊗ v + u′ ⊗ v′ + ⋯ can be written as
x = ∑_{i=1}^{m} ∑_{j=1}^{n} cij (ui ⊗ vj).
It follows that x ∈ Span(S), so U ⊗ V ⊆ Span(S). Clearly S ⊆ U ⊗ V, so Span(S) ⊆ U ⊗ V. Thus U ⊗ V = Span(S).

Let ψ1, …, ψm be the dual basis to S1 and let φ1, …, φn be the dual basis to S2. Define the maps bi,j(u, v) = ψi(u) φj(v) for all 1 ≤ i ≤ m and 1 ≤ j ≤ n. When v is fixed, φj(v) is a fixed scalar c ∈ F, so bi,j(u, v) = c ψi(u). Each ψi is linear, so bi,j(·, v) is linear from U to F for all v ∈ V. Similarly, bi,j(u, ·) is linear from V to F for all u ∈ U. Hence the maps bi,j(u, v) = ψi(u) φj(v) are bilinear. By the universal property, these induce maps on the tensor product, i.e. for all 1 ≤ k ≤ m, 1 ≤ l ≤ n, there exists ℓk,l such that ℓk,l(u ⊗ v) = bk,l(u, v).

Now fix 1 ≤ k ≤ m, 1 ≤ l ≤ n, and suppose
∑_{i=1}^{m} ∑_{j=1}^{n} ci,j ui ⊗ vj = 0 for some ci,j ∈ F.
Applying ℓk,l to both sides gives ℓk,l(0) = 0, which means
∑_{i=1}^{m} ∑_{j=1}^{n} ci,j bk,l(ui, vj) = ck,l = 0.
Thus ck,l = 0 for all 1 ≤ k ≤ m, 1 ≤ l ≤ n. Consequently, S is linearly independent. Therefore S is a basis of U ⊗ V, and dim(U ⊗ V) = |S| = mn = dim U · dim V. □

Proposition 2.9 (Distributive law of the tensor product over the direct sum). Let U, V, W be finite-dimensional vector spaces. Then U ⊗ (V ⊕ W) ≅ (U ⊗ V) ⊕ (U ⊗ W).

Proof. Let U be an l-dimensional vector space, V an m-dimensional vector space, and W an n-dimensional vector space. By Theorem 2.8, the dimension of U ⊗ (V ⊕ W) is
dim(U ⊗ (V ⊕ W)) = dim(U) dim(V ⊕ W) = dim(U)(dim(V) + dim(W)) = dim(U) dim(V) + dim(U) dim(W).
On the other hand, we have
dim((U ⊗ V) ⊕ (U ⊗ W)) = dim(U ⊗ V) + dim(U ⊗ W) = dim(U) dim(V) + dim(U) dim(W).
Thus U ⊗ (V ⊕ W) and (U ⊗ V) ⊕ (U ⊗ W) have the same dimension, so they are isomorphic. □

3. Exterior Power

If we generalize the idea of the tensor product in Definition 2.3, we can replace the vector spaces U, V with vector spaces V1, V2, …, Vk and replace the bilinear map with a multilinear map. We then obtain the tensor product of k vector spaces, V1 ⊗ V2 ⊗ ⋯ ⊗ Vk. Let V be a vector space over a field F, and define T^0(V) = F, T^1(V) = V, and T^k(V) = V^{⊗k} = V ⊗ V ⊗ ⋯ ⊗ V (k factors) for k ≥ 2.

Definition 3.1. (Exterior power). Let A^k be the subspace of T^k(V) spanned by the vectors in
S = {v1 ⊗ v2 ⊗ ⋯ ⊗ vk | v1, v2, …, vk ∈ V, vi = vj for some i ≠ j}.
Then the k-th exterior power of V is the quotient space ∧^k(V) = T^k(V)/A^k. It follows that each element of ∧^k(V) is of the form [u1 ⊗ u2 ⊗ ⋯ ⊗ uk + v1 ⊗ v2 ⊗ ⋯ ⊗ vk + ⋯], denoted u1 ∧ u2 ∧ ⋯ ∧ uk + v1 ∧ v2 ∧ ⋯ ∧ vk + ⋯ for some u1, u2, …, uk, v1, v2, …, vk, … ∈ V.

Definition 3.2. (Alternating multilinear map). Let f: V^k → W be a multilinear map. Then f is alternating if f(v1, …, vk) = 0 whenever vi = vi+1 for some 1 ≤ i ≤ k − 1. All alternating multilinear maps then have the stronger property that f(v1, …, vk) = 0 whenever vi = vj for some i ≠ j.

Remark 3.3. It follows from the definition that the exterior power satisfies a universal property. Precisely, for any alternating multilinear map m: V^k → W, there exists a unique linear map ℓ: ∧^k(V) → W such that m = ℓ ∘ ∧^k, that is, such that the following diagram commutes:

    V^k ──∧^k──→ ∧^k(V)
       \           |
      m  \         |  ℓ
          \        v
           `────→  W
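The model example of an alternating multilinear map is the determinant, viewed as a function of the columns of a matrix; this is the map used again in the proof of Theorem 3.5 below. The following sketch (with arbitrary illustrative vectors) checks that it vanishes on a repeated argument and changes sign under a swap.

```python
# Sketch: the determinant in the columns is an alternating multilinear
# map V^k -> F (here V = F^3, k = 3), computed via the Leibniz formula.
from itertools import permutations

def sign(p):
    # Sign of a permutation given as a tuple: (-1)^(number of inversions).
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def prod_entries(cols, p):
    # Product of entries cols[j][p[j]] over all columns j.
    out = 1
    for j, i in enumerate(p):
        out *= cols[j][i]
    return out

def det_of_columns(cols):
    # Leibniz expansion: sum over permutations of sign(p) * product.
    n = len(cols)
    return sum(sign(p) * prod_entries(cols, p) for p in permutations(range(n)))

v1, v2, v3 = [1, 0, 2], [0, 1, 1], [3, 1, 0]
assert det_of_columns([v1, v1, v3]) == 0                               # repeated argument
assert det_of_columns([v1, v2, v3]) == -det_of_columns([v2, v1, v3])   # swap flips the sign
print("alternating checks passed")
```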
Lemma 3.4. Let V be a vector space, and v1, …, vk ∈ V. Then
v1 ∧ ⋯ ∧ vk = −(v1 ∧ ⋯ ∧ vi−1 ∧ vj ∧ vi+1 ∧ ⋯ ∧ vj−1 ∧ vi ∧ vj+1 ∧ ⋯ ∧ vk)
for all 1 ≤ i < j ≤ k.

Proof. The element
x = v1 ∧ ⋯ ∧ vi−1 ∧ (vi − vj) ∧ vi+1 ∧ ⋯ ∧ vj−1 ∧ (vi − vj) ∧ vj+1 ∧ ⋯ ∧ vk
has the repeated factor vi − vj at positions i and j, where i ≠ j. So, by the definition of A^k, x = 0 in ∧^k(V). Expanding x by multilinearity,
0 = (v1 ∧ ⋯ ∧ vi−1 ∧ vi ∧ vi+1 ∧ ⋯ ∧ vj−1 ∧ vi ∧ vj+1 ∧ ⋯ ∧ vk)
  − (v1 ∧ ⋯ ∧ vk)
  − (v1 ∧ ⋯ ∧ vi−1 ∧ vj ∧ vi+1 ∧ ⋯ ∧ vj−1 ∧ vi ∧ vj+1 ∧ ⋯ ∧ vk)
  + (v1 ∧ ⋯ ∧ vi−1 ∧ vj ∧ vi+1 ∧ ⋯ ∧ vj−1 ∧ vj ∧ vj+1 ∧ ⋯ ∧ vk).
The first and last terms have a repeated factor, hence are 0, leaving
0 = −(v1 ∧ ⋯ ∧ vk) − (v1 ∧ ⋯ ∧ vi−1 ∧ vj ∧ vi+1 ∧ ⋯ ∧ vj−1 ∧ vi ∧ vj+1 ∧ ⋯ ∧ vk).
Therefore, v1 ∧ ⋯ ∧ vk = −(v1 ∧ ⋯ ∧ vi−1 ∧ vj ∧ vi+1 ∧ ⋯ ∧ vj−1 ∧ vi ∧ vj+1 ∧ ⋯ ∧ vk). □
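Lemma 3.4, together with the fact that a wedge with a repeated factor is 0, gives an algorithm for normalising a wedge of basis vectors: sort the indices, flipping the sign once per adjacent swap. This is exactly the reduction step used in the proof of Theorem 3.5 below; the helper name below is hypothetical, chosen for illustration.

```python
# Sketch: reduce e_{i1} ∧ ... ∧ e_{ik} to ± a strictly increasing wedge,
# or to 0 if an index repeats (Lemma 3.4 plus the definition of A^k).

def normalise_wedge(indices):
    """Return (sign, sorted_indices) with sign in {1, -1}, or (0, None)
    if some index repeats (the wedge is then 0)."""
    idx = list(indices)
    if len(set(idx)) < len(idx):
        return 0, None          # repeated factor: the wedge vanishes
    s = 1
    # Bubble sort, flipping the sign per adjacent transposition.
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                s = -s
    return s, tuple(idx)

assert normalise_wedge((2, 1, 3)) == (-1, (1, 2, 3))  # e2∧e1∧e3 = -e1∧e2∧e3
assert normalise_wedge((1, 2, 1)) == (0, None)        # repeated factor gives 0
```

The returned sign depends only on the parity of the permutation, not on the particular sequence of swaps, so bubble sort is a valid (if naive) choice.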
Theorem 3.5. Let V be an n-dimensional vector space with a basis {e1, e2, …, en}. Then the set
S = {e_{i1} ∧ e_{i2} ∧ ⋯ ∧ e_{ik} | 1 ≤ i1 < i2 < ⋯ < ik ≤ n}
is a basis of ∧^k(V), and dim ∧^k(V) = (n choose k).

Proof. For any element of the form v1 ∧ ⋯ ∧ vk, where v1, …, vk are vectors in V, write vi = αi,1 e1 + αi,2 e2 + ⋯ + αi,n en for 1 ≤ i ≤ k and αi,j ∈ F. Then v1 ∧ ⋯ ∧ vk lies in the span of the set S′ = {e_{i1} ∧ ⋯ ∧ e_{ik} | 1 ≤ ij ≤ n for all 1 ≤ j ≤ k}, and we have S ⊆ S′. Any element v = e_{i1} ∧ ⋯ ∧ e_{ik} ∈ S′ \ S has either ia = ib or ia > ib for some a < b. The former case forces v = 0, and the latter case can be reduced to ±v′ for some v′ ∈ S by applying Lemma 3.4. So any element u1 ∧ u2 ∧ ⋯ ∧ uk + v1 ∧ v2 ∧ ⋯ ∧ vk + ⋯ in ∧^k(V) is in Span(S′) = Span(S). Clearly S ⊆ ∧^k(V), so Span(S) ⊆ ∧^k(V). Thus ∧^k(V) = Span(S).

Let ψ1, …, ψn be the dual basis to {e1, e2, …, en}. Define the n × k matrix A(v1, …, vk) = [ai,j], where ai,j = ψi(vj). Then define the maps m_{i1,…,ik}: V^k → F for 1 ≤ i1 < ⋯ < ik ≤ n by m_{i1,…,ik}(v1, …, vk) = det B(v1, …, vk), where B(v1, …, vk) is the k × k submatrix of A(v1, …, vk) consisting of the rows i1, …, ik of A(v1, …, vk). The maps m_{i1,…,ik} are multilinear by the properties of the determinant. If vi = vi+1 for some 1 ≤ i ≤ k − 1, then the matrix A(v1, …, vk) has equal i-th and (i+1)-th columns, so the submatrix B(v1, …, vk) has two identical columns. Since B(v1, …, vk) is then a singular matrix, its determinant is 0, so m_{i1,…,ik}(v1, …, vk) = 0. Hence the maps m_{i1,…,ik} are alternating. By the universal property, these induce maps on the exterior power, i.e. for all 1 ≤ i1 < ⋯ < ik ≤ n, there exists ℓ_{i1,…,ik} such that ℓ_{i1,…,ik}(v1 ∧ ⋯ ∧ vk) = m_{i1,…,ik}(v1, …, vk).

Now fix 1 ≤ j1 < ⋯ < jk ≤ n, and suppose
∑_{1 ≤ i1 < ⋯ < ik ≤ n} c_{i1,…,ik} e_{i1} ∧ ⋯ ∧ e_{ik} = 0 for some c_{i1,…,ik} ∈ F.
Applying ℓ_{j1,…,jk} to both sides gives
∑_{1 ≤ i1 < ⋯ < ik ≤ n} c_{i1,…,ik} m_{j1,…,jk}(e_{i1}, …, e_{ik}) = 0.
However, m_{j1,…,jk}(e_{i1}, …, e_{ik}) = 1 if i1 = j1, …, ik = jk, and m_{j1,…,jk}(e_{i1}, …, e_{ik}) = 0 otherwise. Hence
∑_{1 ≤ i1 < ⋯ < ik ≤ n} c_{i1,…,ik} m_{j1,…,jk}(e_{i1}, …, e_{ik}) = c_{j1,…,jk} = 0.
Thus c_{j1,…,jk} = 0 for all 1 ≤ j1 < ⋯ < jk ≤ n. So the set S is linearly independent. Since Span(S) = ∧^k(V) and S is linearly independent, S is a basis of ∧^k(V). Since the basis S has cardinality (n choose k), it follows that dim ∧^k(V) = (n choose k). □

4. Graded Vector Spaces

Definition 4.1. (Graded vector space). A graded vector space is a vector space V together with a decomposition into a direct sum of the form
V = ⊕_{i ∈ Z} Vi.
We say that elements of Vi are homogeneous of degree i.

Definition 4.2. (Graded dimension). Let V be a graded vector space. Then the graded dimension of V is defined as
grdim V = ∑_{i ∈ Z} (dim Vi) q^i,
where q is an indeterminate.

Example 4.3. Let V be an n-dimensional vector space. Then W = ∧^*(V) = ⊕_{k ∈ Z} ∧^k(V) is a graded vector space, and
grdim W = ∑_{i=0}^{n} (n choose i) q^i = (1 + q)^n.

Example 4.4. Let V be an n-dimensional vector space. Then W = T^*(V) = ⊕_{k ∈ Z} T^k(V) is a graded vector space, and
grdim W = ∑_{k ≥ 0} dim T^k(V) q^k = ∑_{k ≥ 0} n^k q^k.

Definition 4.5. (Direct sum of graded vector spaces). Let V, W be two graded vector spaces. Then the direct sum of V and W is a graded vector space with the decomposition
V ⊕ W = ⊕_{n ∈ Z} (V ⊕ W)n, where (V ⊕ W)n = Vn ⊕ Wn.

Theorem 4.6. Let V, W be two graded vector spaces with grdim V = f1(q) and grdim W = f2(q). Then grdim(V ⊕ W) = f1(q) + f2(q).

Proof. Let f1(q) = a0 + a1 q + a−1 q^{−1} + a2 q^2 + a−2 q^{−2} + ⋯, where ai = dim Vi for all i ∈ Z, and let f2(q) = b0 + b1 q + b−1 q^{−1} + b2 q^2 + b−2 q^{−2} + ⋯, where bj = dim Wj for all j ∈ Z. Then dim(V ⊕ W)n = dim(Vn ⊕ Wn) = an + bn. Thus grdim(V ⊕ W) = f1(q) + f2(q). □

Definition 4.7. (Tensor product of graded vector spaces). Let V, W be two graded vector spaces.
Then the tensor product of V and W is a graded vector space with the decomposition
V ⊗ W = ⊕_{n ∈ Z} (V ⊗ W)n, where (V ⊗ W)n = ⊕_{i+j=n} (Vi ⊗ Wj).

Theorem 4.8. Let V, W be two graded vector spaces with grdim V = f1(q) and grdim W = f2(q). Then grdim(V ⊗ W) = f1(q) f2(q).

Proof. Let f1(q) = a0 + a1 q + a−1 q^{−1} + a2 q^2 + a−2 q^{−2} + ⋯, where ai = dim Vi for all i ∈ Z, and let f2(q) = b0 + b1 q + b−1 q^{−1} + b2 q^2 + b−2 q^{−2} + ⋯, where bj = dim Wj for all j ∈ Z. Now, regarding (V ⊗ W)n for each n ∈ Z as a direct sum, we compute its dimension as
dim(V ⊗ W)n = ∑_{i+j=n} dim(Vi ⊗ Wj) = ∑_{i+j=n} dim Vi · dim Wj = ∑_{i+j=n} ai bj,
and all elements of (V ⊗ W)n are homogeneous of degree n. Now the graded dimension of V ⊗ W is
grdim(V ⊗ W) = ∑_{n ∈ Z} dim(V ⊗ W)n q^n = ∑_{n ∈ Z} (∑_{i+j=n} ai bj) q^n.
Thus we have grdim(V ⊗ W) = f1(q) f2(q). □

Proposition 4.9. Let U, V, W be graded vector spaces. Then U ⊗ (V ⊕ W) ≅ (U ⊗ V) ⊕ (U ⊗ W).

Proof. The proof is exactly the same as that of Proposition 2.9. □

Lemma 4.10. Let V, W be vector spaces. Then
∧^k(V ⊕ W) ≅ ⊕_{i=0}^{k} ∧^i(V) ⊗ ∧^{k−i}(W).

Proof. Let V be an m-dimensional vector space with basis {v1, …, vm} and W be an n-dimensional vector space with basis {w1, …, wn}. Then a basis of V ⊕ W is
{e1, …, em, em+1, …, em+n},
where e1 = v1, …, em = vm, em+1 = w1, …, em+n = wn. By Theorem 3.5, a basis of ∧^k(V ⊕ W) is
S = {e_{i1} ∧ e_{i2} ∧ ⋯ ∧ e_{ik} | 1 ≤ i1 < i2 < ⋯ < ik ≤ m + n}.
Now we define a bijective linear transformation
T: ∧^k(V ⊕ W) → ⊕_{i=0}^{k} ∧^i(V) ⊗ ∧^{k−i}(W),
where T(e_{i1} ∧ e_{i2} ∧ ⋯ ∧ e_{ik}) = (e_{i1} ∧ ⋯ ∧ e_{ij}) ⊗ (e_{ij+1} ∧ ⋯ ∧ e_{ik}) for each s = e_{i1} ∧ e_{i2} ∧ ⋯ ∧ e_{ik} ∈ S with 1 ≤ i1 < ⋯ < ij ≤ m and m + 1 ≤ ij+1 < ⋯ < ik ≤ m + n, extended linearly. Then T is an isomorphism, so the two vector spaces are isomorphic. □

Definition 4.11. (Exterior power of graded vector spaces). Let V be a graded vector space.
Then the k-th exterior power of V is a graded vector space with the decomposition
∧^k(V) = ⊕_{n ∈ Z} (∧^k(V))n,
where
(∧^k(V))n = ⊕_{k1 i1 + ⋯ + km im = n, k1 + ⋯ + km = k} ∧^{k1}(V_{i1}) ⊗ ⋯ ⊗ ∧^{km}(V_{im}).

Theorem 4.12. Let V be a graded vector space with grdim V = f(q) = a0 + a1 q + a−1 q^{−1} + a2 q^2 + a−2 q^{−2} + ⋯, where ai = dim Vi for all i ∈ Z. Then the graded dimension of the k-th exterior power of V is
grdim ∧^k(V) = ∑_{k1 + ⋯ + km = k} (a_{i1} choose k1) ⋯ (a_{im} choose km) q^{i1 k1 + ⋯ + im km}.

Proof. We have
dim(∧^k(V))n = dim ⊕_{k1 i1 + ⋯ + km im = n, k1 + ⋯ + km = k} ∧^{k1}(V_{i1}) ⊗ ⋯ ⊗ ∧^{km}(V_{im})
= ∑_{k1 i1 + ⋯ + km im = n, k1 + ⋯ + km = k} dim(∧^{k1}(V_{i1}) ⊗ ⋯ ⊗ ∧^{km}(V_{im}))
= ∑_{k1 i1 + ⋯ + km im = n, k1 + ⋯ + km = k} (a_{i1} choose k1) ⋯ (a_{im} choose km).
Therefore,
grdim ∧^k(V) = ∑_{k1 + ⋯ + km = k} (a_{i1} choose k1) ⋯ (a_{im} choose km) q^{i1 k1 + ⋯ + im km}. □

5. λ-Rings

Definition 5.1. (λ-ring). A λ-ring is a commutative ring R together with operations λ^n: R → R for every non-negative integer n such that
(a) λ^0(r) = 1 for all r ∈ R;
(b) λ^1(r) = r for all r ∈ R;
(c) λ^n(1) = 0 for all n > 1;
(d) λ^n(r + s) = ∑_{k=0}^{n} λ^k(r) λ^{n−k}(s) for all r, s ∈ R;
(e) λ^n(rs) = Pn(λ^1(r), …, λ^n(r), λ^1(s), …, λ^n(s)) for all r, s ∈ R;
(f) λ^m(λ^n(r)) = P_{m,n}(λ^1(r), …, λ^{mn}(r)) for all r ∈ R;
where Pn and P_{m,n} are certain universal polynomials with integer coefficients, defined as follows. If x = x1 + x2 + ⋯ and y = y1 + y2 + ⋯, then
∑_n Pn(λ^1(x), …, λ^n(x), λ^1(y), …, λ^n(y)) t^n = ∏_{i,j} (1 + t xi yj),
i.e. each Pn(λ^1(x), …, λ^n(x), λ^1(y), …, λ^n(y)) is the coefficient of t^n in the expansion of the right-hand side. So we have P0 = 1, P1(λ^1(x), λ^1(y)) = ∑_{i,j} xi yj, and so on. Similarly,
∑_m P_{m,n}(λ^1(x), …, λ^{mn}(x)) t^m = ∏_{i1 < i2 < ⋯ < in} (1 + t x_{i1} x_{i2} ⋯ x_{in}).

Lemma 5.2. Given any λ-ring R and x ∈ R, P_{1,0}(x) = 1 and P_{m,0}(x) = 0 for all m > 1.

Proof. Let x be an arbitrary element of R. By axiom (f), P_{1,0}(x) = λ^1(λ^0(x)).
Then, by applying axioms (a) and (b), λ^1(λ^0(x)) = λ^1(1) = 1. So P_{1,0}(x) = 1. Similarly, let m > 1. Then, by axioms (f), (a), and (c), P_{m,0}(x) = λ^m(λ^0(x)) = λ^m(1) = 0. □

Example 5.3. The ring (Z, +, ·) together with the binomial-coefficient operations λ^n(x), where

λ^n(x) = (x choose n)                     if 0 ≤ n ≤ x,
λ^n(x) = (−1)^n ((−x + n − 1) choose n)   if x < 0,
λ^n(x) = 0                                elsewhere,

is a λ-ring.

The generalised binomial coefficient λ^n(x) = (x choose n) satisfies the axioms: (a), (b), (c), and (d) hold by the definition and properties of binomial coefficients. For (e), write r = r1 + r2 + ⋯, where
ri = 1 if r ≥ 0 and 1 ≤ i ≤ r,
ri = −1 if r < 0 and 1 ≤ i ≤ −r,
ri = 0 elsewhere.
Similarly, we define s = s1 + s2 + ⋯, where
sj = 1 if s ≥ 0 and 1 ≤ j ≤ s,
sj = −1 if s < 0 and 1 ≤ j ≤ −s,
sj = 0 elsewhere.
Then, when r ≥ 0, s ≥ 0 or r < 0, s < 0,
∑_n Pn(λ^1(r), …, λ^n(r), λ^1(s), …, λ^n(s)) t^n = ∏_{i,j} (1 + t ri sj) = (1 + t)^{rs} = ∑_n (rs choose n) t^n.
When r < 0, s ≥ 0 or r ≥ 0, s < 0,
∑_n Pn(λ^1(r), …, λ^n(r), λ^1(s), …, λ^n(s)) t^n = ∏_{i,j} (1 + t ri sj) = (1 − t)^{−rs} = ∑_n (−1)^n (−rs choose n) t^n.
Hence, by equating the coefficients of each t^n from both sides, (e) holds. For (f), when r ≥ 0, we have
∑_m P_{m,n}(λ^1(r), …, λ^{mn}(r)) t^m = ∏_{i1 < ⋯ < in} (1 + t r_{i1} r_{i2} ⋯ r_{in}) = (1 + t)^{(r choose n)} = ∑_m ((r choose n) choose m) t^m.
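The λ-structure on Z in Example 5.3 can be tested numerically. The sketch below assumes the generalised-binomial formula for λ^n on negative integers and checks axioms (a)-(d) of Definition 5.1 over a small range; the helper name lam is illustrative, not from the text. Axiom (d) here amounts to the Vandermonde convolution identity for binomial coefficients.

```python
# Sketch: λ^n on Z via generalised binomial coefficients, with
# λ^n(x) = C(x, n) for x >= 0 and λ^n(x) = (-1)^n C(-x + n - 1, n) for x < 0.
from math import comb

def lam(n, x):
    # λ^n(x) = generalised binomial coefficient "x choose n"
    if x >= 0:
        return comb(x, n)       # comb already gives 0 when 0 <= x < n
    return (-1) ** n * comb(-x + n - 1, n)

# (a) λ^0(r) = 1, (b) λ^1(r) = r, (c) λ^n(1) = 0 for n > 1
assert all(lam(0, x) == 1 for x in range(-5, 6))
assert all(lam(1, x) == x for x in range(-5, 6))
assert all(lam(n, 1) == 0 for n in range(2, 8))

# (d) λ^n(r + s) = sum_k λ^k(r) λ^(n-k)(s): the Vandermonde convolution
for r in range(-4, 5):
    for s in range(-4, 5):
        for n in range(6):
            assert lam(n, r + s) == sum(lam(k, r) * lam(n - k, s) for k in range(n + 1))
print("λ-ring axioms (a)-(d) hold on the sampled range")
```

A finite check is of course not a proof, but it makes the role of the sign (-1)^n in the negative case concrete: without it, axiom (d) already fails for r = -1, s = 1, n = 2.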