Chapter (2) Quaternions, Clifford Algebras, and Matrix Groups as Lie Groups

Now we will discuss algebras.

Section (2.1): Quaternions, Clifford Algebras, and Matrix Groups as Lie Groups

First, 𝕜 will denote any field, although our main interest will be in the cases ℝ and ℂ.

Definition (2.1.1): A finite dimensional (associative and unital) 𝕜-algebra A is a finite dimensional 𝕜-vector space which is an associative and unital ring such that for all r, s ∈ 𝕜 and a, b ∈ A,

(ra)(sb) = (rs)(ab).

If A is a commutative ring then A is a commutative 𝕜-algebra. If every non-zero element a ∈ A is a unit, i.e., is invertible, then A is a division algebra.

In this last equation, ra and sb are scalar products in the vector space structure, while (rs)(ab) is the scalar product of rs ∈ 𝕜 with the ring product ab. Furthermore, if 1 ∈ A is the unit of A, then for t ∈ 𝕜 the element t1 ∈ A satisfies

(t1)a = ta = a(t1).

If dim_𝕜 A > 0, then 1 ≠ 0, and the function η: 𝕜 → A; η(t) = t1 is an injective ring homomorphism; we usually just write t for η(t) = t1.

Example (2.1.2): For n ≥ 1, M_n(𝕜) is a 𝕜-algebra. Here we have η(t) = tI_n, and M_n(𝕜) is non-commutative when n ≥ 2.

Example (2.1.3): The ring of complex numbers ℂ is an ℝ-algebra. Here we have η(t) = t, and ℂ is commutative. Notice that ℂ is a commutative division algebra.

A commutative division algebra is usually called a field, while a non-commutative division algebra is called a skew field. In French, corps (≈ field) is often used in the sense of a possibly non-commutative division algebra.

In any algebra A, the set of units of A forms a group A^× under multiplication, and this contains 𝕜^×. For A = M_n(𝕜), M_n(𝕜)^× = GL_n(𝕜).

Definition (2.1.4): Let A, B be two 𝕜-algebras. A 𝕜-linear transformation φ: A → B that is also a ring homomorphism is called a 𝕜-algebra homomorphism, or homomorphism of 𝕜-algebras. A homomorphism of 𝕜-algebras φ: A → B which is also an isomorphism of rings, or equivalently of 𝕜-vector spaces, is called an isomorphism of 𝕜-algebras.

Notice that the unit η: 𝕜 → A is always a homomorphism of 𝕜-algebras. There are obvious notions of kernel and image for such homomorphisms, and of subalgebra.

Definition (2.1.5): Given two 𝕜-algebras A, B, their direct product A × B has underlying set A × B with sum and product

(a_1, b_1) + (a_2, b_2) = (a_1 + a_2, b_1 + b_2),   (a_1, b_1)(a_2, b_2) = (a_1 a_2, b_1 b_2).

The zero is (0, 0) while the unit is (1, 1). It is easy to see that there is an isomorphism of 𝕜-algebras A × B ≅ B × A.

Given a 𝕜-algebra A, it is also possible to consider the ring M_m(A) consisting of m × m matrices with entries in A; this is also a 𝕜-algebra, of dimension

dim_𝕜 M_m(A) = m² dim_𝕜 A.

It is often the case that a 𝕜-algebra A contains a subalgebra 𝕜₁ ⊆ A which is also a field. In that case A can be viewed as a vector space over 𝕜₁ in two different ways, corresponding to left and right multiplication by elements of 𝕜₁. Then for t ∈ 𝕜₁, a ∈ A,

(left multiplication)  t · a = ta;   (right multiplication)  t · a = at.

These give different 𝕜₁-vector space structures unless all elements of 𝕜₁ commute with all elements of A, in which case 𝕜₁ is said to be a central subfield of A. We sometimes write 𝕜₁A (left structure) and A𝕜₁ (right structure) to indicate which is being considered. 𝕜₁ is itself a finite dimensional commutative 𝕜-algebra of some dimension dim_𝕜 𝕜₁.

Proposition (2.1.6): Each of the 𝕜₁-vector spaces 𝕜₁A and A𝕜₁ is finite dimensional, and in fact

dim_𝕜 A = dim_𝕜₁(𝕜₁A) dim_𝕜 𝕜₁ = dim_𝕜₁(A𝕜₁) dim_𝕜 𝕜₁.

Example (2.1.7): Let 𝕜 = ℝ and A = M₂(ℝ), so dim_ℝ A = 4. Let

𝕜₁ = { \begin{pmatrix} x & y \\ -y & x \end{pmatrix} : x, y ∈ ℝ } ⊆ M₂(ℝ).

Then 𝕜₁ ≅ ℂ, so 𝕜₁ is a subfield of M₂(ℝ), but it is not a central subfield. Also dim_𝕜₁ A = 2.

Example (2.1.8): Let 𝕜 = ℝ and A = M₂(ℂ), so dim_ℝ A = 8. Let

𝕜₁ = { \begin{pmatrix} x & y \\ -y & x \end{pmatrix} : x, y ∈ ℝ } ⊆ M₂(ℂ).

Then 𝕜₁ ≅ ℂ, so 𝕜₁ is a subfield of M₂(ℂ), but it is not a central subfield. Here dim_𝕜₁ A = 4.
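To see why 𝕜₁ ≅ ℂ in Examples (2.1.7) and (2.1.8), one can check directly that these matrices multiply exactly like complex numbers:

\begin{pmatrix} x & y \\ -y & x \end{pmatrix} \begin{pmatrix} u & v \\ -v & u \end{pmatrix} = \begin{pmatrix} xu - yv & xv + yu \\ -(xv + yu) & xu - yv \end{pmatrix},

which mirrors (x + yi)(u + vi) = (xu − yv) + (xv + yu)i. Thus x + yi ↦ \begin{pmatrix} x & y \\ -y & x \end{pmatrix} is an injective homomorphism of ℝ-algebras with image 𝕜₁. The subfield is not central because, for example, \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} does not commute with \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.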
Given a 𝕜-algebra A and a subfield 𝕜₁ ⊆ A containing 𝕜 (possibly equal to 𝕜), an element a ∈ A acts on A by left multiplication:

a · x = ax   (x ∈ A).

This is always a 𝕜-linear transformation of A, and if we view A as the 𝕜₁-vector space A𝕜₁, it is always a 𝕜₁-linear transformation. Given a 𝕜₁-basis {v₁, …, vₘ} for A𝕜₁, there is an m × m matrix [λ(a)] with entries in 𝕜₁ defined by

a v_s = Σ_{r=1}^{m} v_r λ(a)_{rs}   (s = 1, …, m).

It is easy to check that

λ: A → M_m(𝕜₁);  a ↦ [λ(a)]

is a homomorphism of 𝕜-algebras, called the left regular representation of A over 𝕜₁ with respect to the basis {v₁, …, vₘ}.

Lemma (2.1.9): λ: A → M_m(𝕜₁) has trivial kernel ker λ = {0}, hence it is an injection.

Proof: If a ∈ ker λ then left multiplication by a is the zero map on A; in particular a1 = 0, so a = 0.

Definition (2.1.10): The 𝕜-algebra A is simple if it has only one proper two-sided ideal, namely (0); hence every non-trivial 𝕜-algebra homomorphism φ: A → B is an injection.

Proposition (2.1.11): Let 𝕜 be a field.
i) A division algebra 𝔻 over 𝕜 is simple.
ii) For a simple 𝕜-algebra A, M_n(A) is simple. In particular, M_n(𝕜) is a simple 𝕜-algebra.

On restricting the left regular representation to the group of units A^× of A, we obtain an injective group homomorphism

λ^×: A^× → GL_m(𝕜₁);  λ^×(a)(u) = au,

where 𝕜₁ ⊆ A is a subfield containing 𝕜 and we have chosen a 𝕜₁-basis of A𝕜₁. Because

A^× ≅ im λ^× ≤ GL_m(𝕜₁),

A^× and its subgroups give groups of matrices. Given a 𝕜-basis of A, we obtain a further group homomorphism

ρ^×: A^× → GL_{dim_𝕜 A}(𝕜);  ρ^×(a)(u) = ua⁻¹.

We can combine λ^× and ρ^× to obtain two further group homomorphisms

μ: A^× × A^× → GL_{dim_𝕜 A}(𝕜);  μ(a, b)(u) = aub⁻¹,
Δ: A^× → GL_{dim_𝕜 A}(𝕜);  Δ(a)(u) = aua⁻¹.

Notice that these have non-trivial kernels: since ±1 are central in A, ker μ ⊇ {(1, 1), (−1, −1)} and ker Δ ⊇ {1, −1}.

In the following we will discuss linear algebra over a division algebra. Let 𝔻 be a finite dimensional division algebra over a field 𝕜.

Definition (2.1.12): A (right) 𝔻-vector space V is a right 𝔻-module, i.e., an abelian group with a right scalar multiplication by elements of 𝔻 so that for u, v ∈ V and x, y ∈ 𝔻,

v(xy) = (vx)y,  v(x + y) = vx + vy,  (u + v)x = ux + vx,  v1 = v.

All the obvious notions of 𝔻-linear transformations, subspaces, kernels and images make sense, as do the notions of spanning set and linear independence over 𝔻.

Theorem (2.1.13): Let V be a 𝔻-vector space. Then V has a 𝔻-basis. If V has a finite spanning set over 𝔻 then it has a finite 𝔻-basis; furthermore, any two such finite bases have the same number of elements.

Definition (2.1.14): A 𝔻-vector space V with a finite basis is called finite dimensional, and the number of elements in a basis is called the dimension of V over 𝔻, denoted dim_𝔻 V.

For n ≥ 1, we can view 𝔻ⁿ as the set of n × 1 column vectors with entries in 𝔻, and this becomes a 𝔻-vector space with the obvious scalar multiplication

\begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} t = \begin{pmatrix} x_1 t \\ \vdots \\ x_n t \end{pmatrix}   (t ∈ 𝔻).

Proposition (2.1.15): Let V, W be two finite dimensional vector spaces over 𝔻, of dimensions dim_𝔻 V = m, dim_𝔻 W = n, and with bases {v₁, …, vₘ}, {w₁, …, wₙ}. Then a 𝔻-linear transformation φ: V → W is given by

φ(v_j) = Σ_{i=1}^{n} w_i a_{ij}   (j = 1, …, m)

for unique elements a_{ij} ∈ 𝔻. Hence if v = v₁x₁ + ⋯ + vₘxₘ, then in terms of coordinate column vectors φ is given by left multiplication by the n × m matrix [a_{ij}]:

\begin{pmatrix} a_{11} & \cdots & a_{1m} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nm} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_m \end{pmatrix}.

In particular, for V = 𝔻^m and W = 𝔻^n, every 𝔻-linear transformation is obtained in this way from left multiplication by a fixed matrix. This is of course analogous to what happens over a field, except that we are careful to keep the scalar action on the right and the matrix action on the left.

We will be mainly interested in linear transformations 𝔻^m → 𝔻^n, which we will identify with the corresponding matrices. If θ: 𝔻^k → 𝔻^m and φ: 𝔻^m → 𝔻^n are 𝔻-linear transformations with corresponding matrices [θ], [φ], then

[φ][θ] = [φ ∘ θ].   (2.1)

Also, the identity and zero functions Id, 0: 𝔻^m → 𝔻^m have [Id] = I_m and [0] = O_m.
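To see why the scalar action must be kept on the right, note that left multiplication by a matrix A = [a_{ij}] ∈ M_{n,m}(𝔻) is 𝔻-linear for this action: for x ∈ 𝔻^m and t ∈ 𝔻,

(A(xt))_i = Σ_{j} a_{ij}(x_j t) = (Σ_{j} a_{ij} x_j) t = ((Ax)t)_i,

whereas a left scalar action would require t to commute with the entries a_{ij}, which can fail when 𝔻 is non-commutative.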
Notice that given a 𝔻-linear transformation φ: V → W, we can 'forget' the 𝔻-structure and just view it as a 𝕜-linear transformation. Given 𝔻-bases {v₁, …, vₘ}, {w₁, …, wₙ} and a 𝕜-basis {b₁, …, b_d} for 𝔻, the elements

v_r b_t (r = 1, …, m;  t = 1, …, d),   w_s b_t (s = 1, …, n;  t = 1, …, d)

form 𝕜-bases for V, W as 𝕜-vector spaces.

We denote the set of all m × n matrices with entries in 𝔻 by M_{m,n}(𝔻), and set M_n(𝔻) = M_{n,n}(𝔻). Then M_n(𝔻) is a 𝕜-algebra of dimension

dim_𝕜 M_n(𝔻) = n² dim_𝕜 𝔻.

The group of units of M_n(𝔻) is denoted GL_n(𝔻). However, for non-commutative 𝔻 there is no determinant function, so we cannot define an analogue of the special linear group. We can however use the left regular representation to overcome this problem, with the aid of some algebra.

Proposition (2.1.16): Let A be an algebra over a field 𝕜 and B ⊆ A a finite dimensional subalgebra. If u ∈ B is a unit in A then u⁻¹ ∈ B, hence u is a unit in B.

Proof: Since B is finite dimensional, the powers uᵏ (k ≥ 0) are linearly dependent over 𝕜, so for some t_r ∈ 𝕜 (r = 0, …, ℓ) with t_ℓ ≠ 0 and ℓ ≥ 1, there is a relation

t_ℓ u^ℓ + t_{ℓ−1} u^{ℓ−1} + ⋯ + t_1 u + t_0 = 0.

If we choose k to be the smallest index with t_k ≠ 0 (note that k < ℓ, since no power of the unit u can be zero) and multiply by a non-zero scalar, then we can assume that

t_ℓ u^ℓ + ⋯ + t_{k+1} u^{k+1} + u^k = 0.

If v is the inverse of u in A, then multiplication by v^{k+1} gives

t_ℓ u^{ℓ−k−1} + ⋯ + t_{k+2} u + t_{k+1} + v = 0,

from which we obtain

v = −(t_ℓ u^{ℓ−k−1} + ⋯ + t_{k+2} u + t_{k+1}) ∈ B.

For a division algebra 𝔻, each matrix A ∈ M_n(𝔻) acts by multiplication on the left of 𝔻ⁿ. For any subfield 𝕜₁ ⊆ 𝔻 containing 𝕜, A induces a (right) 𝕜₁-linear transformation

𝔻ⁿ → 𝔻ⁿ;  x ↦ Ax.

If we choose a 𝕜₁-basis for 𝔻, A gives rise to a matrix Λ_A ∈ M_{nd}(𝕜₁), where d = dim_𝕜₁ 𝔻. It is easy to see that the function

Λ: M_n(𝔻) → M_{nd}(𝕜₁);  Λ(A) = Λ_A

is a ring homomorphism with ker Λ = 0. This allows us to identify M_n(𝔻) with the subring im Λ ⊆ M_{nd}(𝕜₁).
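To illustrate, take 𝔻 = ℂ, 𝕜 = 𝕜₁ = ℝ and the ℝ-basis {1, i} of ℂ, so d = 2. Writing an entry of A ∈ M_n(ℂ) as x + yi and computing its action on this basis, Λ replaces that entry (with respect to a suitable ordering of the resulting ℝ-basis of ℂⁿ) by the 2 × 2 real block

\begin{pmatrix} x & -y \\ y & x \end{pmatrix},

so Λ identifies M_n(ℂ) with a subring of M_{2n}(ℝ); for n = 1 its image is exactly the subfield of M₂(ℝ) appearing in Example (2.1.7). Restricting to units gives an embedding GL_n(ℂ) ≤ GL_{2n}(ℝ), which is one way complex (and, later, quaternionic) matrix groups can be regarded as real matrix groups.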