<<

On Inverses and

A dissertation presented to the faculty of the College of Arts and Sciences of Ohio University

In partial fulfillment of the requirements for the degree Doctor of Philosophy

Jeremy S. Moore June 2011

© 2011 Jeremy S. Moore. All Rights Reserved. 2

This dissertation titled On Inverses and Linear Independence

by JEREMY S. MOORE

has been approved for the Department of Mathematics and the College of Arts and Sciences by

Sergio Lopez-Permouth´ Professor of Mathematics

Benjamin M. Ogles Dean for College of Arts and Sciences 3 Abstract

MOORE, JEREMY S., Ph.D., June 2011, Mathematics On Inverses and Linear Independence (66 pp.) Director of Dissertation: Sergio Lopez-Permouth´ We study various connections between the notions of invertibility of elements and linear independence of subsets of algebras over (not necessarily commutative) rings. The main emphasis is on two such notions: invertible and fluid algebras. We introduce a hierarchy of notions about algebras having a B consisting entirely of units. Such a basis is called an invertible basis and algebras that have invertible bases are said to be invertible algebras. The conditions considered in that hierarchy include the requirement that for an invertible basis B, the set of inverses B−1 be itself a basis, the notion that B be closed under inverses and the idea that B be closed under products under a slight commutativity requirement. Among other results, it is shown that this last property is unique of rings. Many examples are considered and it is

determined that the hierarchy is for the most part strict. For any field F , F2, all

semisimple F-algebras are invertible. Semisimple invertible F2-algebras are fully characterized. Likewise, the question of which single-variable over a field yield invertible quotient rings of the F-algebra F[x] is completely answered. Connections between invertible algebras and S-rings (rings generated by units) are also explored. While group rings are the archetype of invertible algebras, this notion is general enough to include many other families of algebras. For example, field extensions and all crossed products (including in particular skew and twisted group rings) are invertible algebras. We consider invertible bases B such that for any two elements from B, a multiple of their product belongs to B. Alternatively, one may consider invertible bases with the requirement that for every basis element, a scalar multiple of its inverse must also be in the basis. We refer to these algebras, respectively, as being scalarly closed under 4 products and scalarly closed under inverses. We explore connections between these ideas and crossed products, twisted group rings, and skew group rings. We show that rings over arbitrary rings are invertible algebras and also determine some types of infinite matrix rings which are invertible too. We conclude the dissertation considering the property that sets of inverses of linearly independent invertible elements be also linearly independent. We refer to algebras with this property as fluid algebras. Fluidity of direct sums will be considered. We will characterize which single-variable polynomials over a field yield fluid quotient algebras of the F-algebra F[x]. We apply those results to establish when finite field extensions are fluid algebras. Also we will show that infinite field extensions are rarely fluid. We then define the fluidity of an R-algebra A to be an , such that for every set of n or less linearly independent invertible elements, their inverses are also linearly independent. The fluidity of various families of algebras such as matrix rings and field extensions is explored. Approved: Sergio Lopez-Permouth´ Professor of Mathematics 5

I dedicate this work to all my friends and family. I feel truly blessed to have these people in my life and would not be where or who I am today without them. 6 Acknowledgements

I would like to thank Dr. Sergio Lopez-Permouth, my advisor, for his assistance and guidance. I have learned extensively from his knowledge and wisdom of mathematics. Not only that but his dedication, joy, and patience in this process will strengthen me as a mathematician and educator. I would like to thank Kelly, my future wife, who, throughout this process has been nothing short of amazing. Her care and support has been invaluable and words would not be able to express my gratitude. I would like to thank Cindy, Eric and Josh Moore, my mother, father and brother, for their love and understanding during this process and all through my life. I would like to thank Dr. Steve Noltie, who, encouraged me to pursue mathematics as an undergraduate freshman. I would like to thank Dr. Franco Guerriero, for being available and helping with questions as I studied for the algebra comprehensive exam and participating as one of my committee members. I would like to thank the following people for their insight and communication during my studies: Dr. Steve Szabo, Dr. Ryan Botts, Josh Beal, Don Daws, Chris Holston, Ryan Schweibert, Dr. Benigno Parra, Eric Heinzman, Doug Hoffman, and Dr. Ashish Srivastava. In addition, I would like to thank the various faculty members of the Ohio University Mathematics Department. Your assistance and care for students’ success has been admirable. 7 Table of Contents

Abstract...... 3

Dedication...... 5

Acknowledgements...... 6

List of Figures...... 8

1 Preface...... 9

2 Introduction and Preliminaries...... 11 2.1 Introduction...... 11 2.2 Definitions...... 13

3 Invertible Algebras...... 16 3.1 Definitions and Preliminary Results...... 16 3.2 General Results and Some Families of Invertible Algebras...... 21 3.3 Trivial Units...... 29

4 Further Results on Invertible Algebras...... 33 4.1 Infinite Dimensional Invertible Algebras...... 33 4.2 Linear Closed Under Products and Inverses...... 36 4.3 Scalar Closed Under Products and Inverses...... 41

5 Fluidity...... 51 5.1 Definitions and Preliminary Results...... 51 5.2 Fluid Extensions and Related Ideas...... 54 5.3 Fluid Matrix Algebras...... 60

References...... 66 8 List of Figures

4.1 Hierarchy of Invertibility...... 50 9 1 Preface

In this dissertation we initiate the study of two new notions regarding the units of an algebra A over a (not necessarily commutative) R and the linear independence of its subsets. We call them invertible and fluid algebras. An algebra A over a ring R is said to be an invertible algebra if it has a basis B consisting entirely of units; such a basis is said to be an invertible basis. There are many instances of invertible algebras in the literature. For example, a field extension E of a field F is an example of a case when E is viewed as an algebra over F. In fact, obviously the same is true for ring extensions. A group ring R[G] over a ring R is a strikingly different example; twisted and skew group rings as well as their common generalization the crossed products are also examples of invertible algebras. When the base ring R is a field, invertible algebras are an example of the S -rings studied in [13] but, in general, the connection does not necessarily hold. In fact, while there is some affinity between the notion of invertible algebras and the various concepts in the literature dealing with rings generated by their units [2], [14], the affinity is only superficial and no real connection seems to exist. Obviously, a notion general enough to encompass so many different types of algebras will be too general to yield very many results since these results would have to be properties that are common to such a diverse collection of examples. For that reason, the reasonable way to proceed is to consider various layers of restrictions on invertible algebras and establish a hierarchy of notions that bring us closer or move us farther away from the known examples. Taking group rings as a motivation, in Chapter3 we consider various such restrictions and find that they constitute a hierarchy. We notice, in particular, that the hierarchy is for the most part strict. Similar conditions but based on the two main examples of crossed products (the skew and twisted group rings) are also considered in Chapter4. A common thread in Chapters3 and4 is the consideration of the extent to 10 which known results about group rings and crossed products extend to invertible algebras. The invertibility of specific families of algebras is also a recurring theme as we discover, for example, that matrix rings and single-variable factor rings over a field

(other than F2) are examples of invertible algebras. A new idea arose during our research of invertible algebras. This idea requires for every set of linearly independent units of an algebra, we have their inverses also remain linearly independent, such algebras we call fluid algebras. Despite the naturality of this notion, there does not seem to be any literature on this type of problem. A motivation for this study was the idea of invertible-2. However, for fluid algebras it is required that for every set of linearly independent invertible elements S, the set of inverses S−1 also be linearly independent, while invertible-2 only asks that there exist one such a set (and that it be a basis!) In fact, a fluid algebra need not even be an invertible algebra. 11 2 Introduction and Preliminaries

2.1 Introduction

This dissertation is about the interactions between two topics: invertibility of elements and linear independence of subsets. We have noticed that there does not seem to be much published mathematical research on such connections. First we investigate the idea of algebras having bases consisting entirely of units. We consider various types of these so-called invertible bases. Secondly, we consider algebras such that for every set of linearly independent units S, the set of inverses S−1 remains linearly independent. Throughout this dissertation if S is a set of units, then S −1 will be used to denote the set of inverses of the elements of S . In this dissertation we adopt a less restrictive definition of an R-algebra A. First we do not require R to be commutative and second, we only require for r ∈ R, and a, b ∈ A, r(ab) = (ra)b and not that r(ab) = a(rb). This will allow for a wider class of algebras, encompassing such rings as matrix rings over noncommutative rings. In view of this we are forced to make a choice and therefore A as a left R-module. This will be emphasized as needed or omitted when R is commutative and A is standard. A first inspiration for studying invertible algebras is the theory of group rings, as these are clearly invertible algebras. However, it is difficult to generalize theorems about group rings to this setting since invertible algebras are quite abundant. Therefore, we aim at classifying the various types of invertible algebras. We start by studying four types of invertible algebras. Then we show that these notions yield a hierarchy which is for the most part strict. The idea of these notions is to investigate how far invertible algebras are from being group rings. The expectation is that group ring results are more likely to generalize to invertible algebras that are closer to group rings in the hierarchy. 12

We define several families of invertible algebras that occur naturally. Some of these are readily seen to be invertible algebras, such as group rings and field extensions, while others need to be proven, such as matrix rings. In Chapter 3 we give these results and answer some additional questions connecting invertible algebras with properties like direct sums. We also provide a complete characterization of which single variable polynomial factor rings are invertible. In addition, we draw some connections between invertible algebras and S -rings. In the case when R is a field, invertible algebras are S -rings. If A is an invertible R-algebra, where R is an S -ring, then A is also an S -ring. To every existence problem there corresponds a uniqueness one. In studying when invertible bases may be in some sense ”unique”, we ran into ideas that relate to the study of ”trivial units” originally done in the context of group rings (see [12], [10]). Given an invertible R-algebra A with invertible basis B, we say that an invertible base B∗ is trivial with respect to B (or B∗ is a trivial modification of B) if it is obtained by either multiplying B by a of A or by multiplying each element of B by a unit of R. An element, α ∈ R[G], is said to be a trivial unit if α = ug, where u ∈ U(R) and g ∈ G. We follow this notion in the invertible algebra setting. Given an R-algebra with invertible basis B we say unit α ∈ A is trivial with respect to B if α = uv, where u ∈ U(R) and v ∈ B. Crossed products are other naturally occurring examples of invertible algebras. Crossed products motivate two new definitions: those of scalar closed under products and of scalar closed under inverses. The first is defined by the existence of a unit scalar multiple of each product of basis elements also in the basis, and the latter is defined by the existence of unit scalar multiples of inverses of basis elements in the basis. Clearly crossed products satisfy these requirements. We explore these properties and their precise connections with skew group rings, twisted group rings, and crossed products in Chapter 4. 13

In the final chapter we explore a new idea motivated to some extent by invertible algebras. We will consider algebras such that given any set of linearly independent units S we have S−1 remains linearly independent. We will call them fluid algebras. For example, if in a given algebra, if inversion were determined by a linear (or even pseudo-linear) monomorphism then certainly that particular algebra would be fluid. Further analysis of this idea leads us to define the parameter of an algebra called its fluidity. The fluidity of an algebra is the integer n, such that for every set of n or less linearly independent units, their

F[x] inverses are also linearly independent. In Chapter 5 we will characterize when h f (x)i is fluid. In fact, we will compute the fluidity of all such algebras. As a consequence we will be able to determine when a finite degree field extension is fluid and otherwise determine its fluidity.

2.2 Definitions

This section contains some definitions that will be used throughout this dissertation.

Definition 2.2.1. Let R be a ring and let M be an under addition. Then M is a left R-module if and only if there exists a map φ : R × M → M such that for r, s ∈ R and x, y ∈ M the following hold.

1. φ(r, x + y) = φ(r, x) + φ(r, y)

2. φ(r + s, x) = φ(r, x) + φ(s, x)

3. φ(rs, x) = φ(r, φ(s, x)).

Definition 2.2.2. Let R be any ring. An R-algebra A is a ring which is also an R-module such that r(ab) = (ra)b for r ∈ R, a, b ∈ A.

Note that throughout this paper the definition of an algebra A over R will only require that for r ∈ R, a, b ∈ A, r(ab) = (ra)b and not the common additional requirement that 14 r(ab) = a(rb) which makes the action of R on A ambidextrous. We do this to allow group rings R[G] and matrix rings Mn(R) over a noncommutative ring R to be R-algebras.

Definition 2.2.3. Let R be a ring and G be a multiplicative group. Then the group ring R[G] is an associative R-algebra with the elements of G as a basis. So given an element α ∈ R[G] we may write α as a formal finite sum of the form X α = ag · g g∈G where ag ∈ R.

When the ring in consideration is a field, R = F, we call the group ring, FG, a group algebra.

Definition 2.2.4. Let R be a ring with 1 and G be a group. Then a crossed product R ∗ G is an associative ring with G¯, a copy of G, as an R-basis. is determined by the following two rules:

1. For x, y ∈ G there exists a unit τ(x, y) ∈ U(R) such thatx ¯y¯ = τ(x, y)xy. This action is called the twisting of the crossed product.

2. For x ∈ G there exists a σ(x) ∈ Aut(R) such that for every r ∈ R,xr ¯ = rσ(x) x¯. This action is called the skewing of the crossed product.

The assignments τ and σ must satisfy certain conditions to guarantee the associativity of multiplication in R ∗ G. Since these conditions are not really relevant in the context of this paper, we do not explicitly include them here. However, an interested reader may find them at [11] or the user-friendly survey [9].

A couple of important particular cases are also of interest:

Definition 2.2.5. When the assignment σ(x) in the above definition is the identity automorphism in Aut(R), we say that R ∗ G is a twisted group ring and we denote it as Rt[G]. 15

Definition 2.2.6. When the assignment τ(x, y) in the above definition is trivial, we say that R ∗ G is a skew group ring and we denote it as RG.

Definition 2.2.7. A left R-module M is injective if, given N and K left R-modules and f : N → K is an monomorphism and g : N → M is a homomorphism, then there exists a homomorphism h : K → M such that h f = g.

Definition 2.2.8. A ring R is called left self-injective if it is injective as a left R-module.

Definition 2.2.9. A ring R is called an S -ring if every element can be expressed as the sum of units.[13]

Definition 2.2.10. Let F be a field and σ ∈ Aut(F). If V, W are F-vector spaces, then we say a map φ : V → W is pseudo-linear if

φ(v + w) = φ(v) + φ(w),

φ(αv) = σ(α)φ(v), for all v ∈ V, w ∈ W and α ∈ F. 16 3 Invertible Algebras

3.1 Definitions and Preliminary Results

A group ring A = R[G] is an R-algebra exhibiting the interesting property of having a basis B = G whose every element is invertible. Similarly, if A = E is a field extension (or even a division ring extension) of a field R = F then any basis of E over F consists entirely of units. That property reasonably leads to the following definition.

Definition 3.1.1. Given an algebra A over a ring R, an invertible basis B is an R-basis B such that each element of B is invertible in A. If A has an invertible basis, A is called an invertible algebra.

It is easy to see that not all algebras are invertible, even when they are free R modules. Consider, for example, the polynomial ring A = F[x] over an arbitrary field F.

∗ Here the only units of A would be F . For a finite dimensional example, A = F2 ⊕ F2 which has 2 as a over F2 but only one invertible element. The purpose of this dissertation is to investigate the basic properties of invertible algebras over rings and fields. We first give a lemma realizing each invertible algebra is the homomorphic image of a group algebra.

Lemma 3.1.2. Every invertible algebra is the homomorphic image of a group algebra.

Proof. Let A be an invertible R-algebra for some ring R. Then consider the group ring R[G] formed from R and the group of units of A, U(A) = G. Define φ : R[G] → A by

X X φ( αgg) = αgg g∈G g∈G P P where g∈G αgg ∈ R[G] is generated by G and g∈G αgg ∈ A is generated by a subset of G. It is routine to check that φ is a homomorphism.  17

We notice that in the case R = F is a field, every invertible F-algebra A is an S -ring

(see the Introduction). Notice, however that Z is not an invertible algebra over any field, yet Z is an S -ring. So, we extend our observation as follows.

Proposition 3.1.3. An invertible algebra A over an S -ring R is itself an S -ring.

Proof. Straightforward from the definitions. 

A group ring also satisfies the property that the collection B−1 of inverses of the elements of the basis B = G equals G. This motivates the remaining three definitions in this section.

Definition 3.1.4. Given an algebra A over a ring R, an invertible-2 basis is an invertible R-basis B such that the collection B−1 of the inverses of the elements of B also constitutes a basis. If A has an invertible-2 basis, A is called an invertible-2 algebra.

Proposition 3.1.5. Let F ⊂ E be a finite degree field extension, i.e. |E : F| < ∞. Then there is a basis B for E over F such that B is invertible-2.

Proof. Since |E : F| < ∞, E = F(α1, α2, . . . , αn) for αi ∈ E such that

2 ki F(α1, α2, . . . , αi−1) , F(α1, α2, . . . , αi). Let Bi = {αi, αi , . . . , αi } be a basis for the

j1 j2 jn extension F(α1, α2, . . . , αi) over F(α1, α2, . . . , αi−1). Then B = {α1 α2 . . . αn |1 ≤ ji ≤ ki} is

−1 − j1 − j2 − jn P a basis for E over F. Let B = {α1 α2 . . . αn |1 ≤ ji ≤ ki}. Consider λ∈B−1 βλλ = 0.

k1 k2 kn P −1 Multiplying by α1 α2 . . . αn gives λ∈B βλλ = 0 which shows B is a linearly independent set. Since |B−1| = |B|, B−1 is a basis and thus B is an invertible-2 basis. 

Next, we introduce two other properties of group rings that invertible algebras may or may not have in general.

Definition 3.1.6. Given an algebra A over a ring R, an invertible-3 basis is an invertible R-basis B such that B is closed under inverses. If A has an invertible-3 basis, A is called 18 an invertible-3 algebra. Notice that if B is an invertible-3 basis then it follows easily that B = B−1.

Lemma 3.1.7. Let A be an invertible algebra. Then A has an invertible basis B with

1 ∈ B.

Proof. Let A = {v1, v2,..., vn,...} be an invertible basis for A. Now multiply each

−1 Pm element of A by v1 to obtain a new set B containing 1. Let w ∈ A. Then wv1 = j=1 α jv j. −1 Pm −1 Multiply by v1 to obtain w = j=1 α jv jv1 . Therefore, any element in A can be P −1 represented as a of elements from B. Let i αiviv1 = 0. Then P multiplying by v1 we get i αivi = 0 which implies that αi = 0 for all i as A is a basis. Therefore B is a linearly independent set and thus a basis. 

While, by the previous lemma, the existence of an invertible basis guarantees the existence of an invertible basis containing 1 the same is not true in general of invertible-3 bases, (see Example 3.1.10). For that reason we coin the following definition.

Definition 3.1.8. Given an algebra A over a ring R, an invertible-4 basis is an invertible-3 R-basis which includes the identity. If A has an invertible-4 basis, A is called an invertible-4 algebra.

The following hierarchy for algebras is obvious.

group rings ⊂ invertible-4 ⊂ invertible-3 ⊂ invertible-2 ⊆ invertible

The following three examples show that the first three inclusions are indeed proper. However, while we show below an algebra with an invertible base that is not invertible-2, we do not know yet an invertible algebra which is not invertible-2. That is the subject of our fourth example below. 19

F2[x,y] Example 3.1.9 (Invertible-4 not Group Algebra). Consider A = hx,yi2 . Then A has 4 invertible elements, namely {1, 1 + x, 1 + y, 1 + x + y}. To form an invertible-4 basis, B, we must have 1 ∈ B. Therefore, we must have two of the other three remaining invertible elements. Since (1 + x)(1 + y) = (1 + x + y) we see that any invertible-4 basis we form cannot be closed under products.

F3[x] Example 3.1.10 (Invertible-3 not Invertible-4). Consider A = hx2i . Then the group of units of A is U(A) = {1, −1, 1 + x, 1 − x, −1 + x, −1 − x}. Now, B = {1 + x, 1 − x} is an invertible-3 basis for A over F. Hence, A is invertible-3. Since a basis contains two elements and an invertible-4 basis contains the identity, if A had an invertible-4 basis, it would have two elements {1, a} where a is self-invertible. Besides the identity, -1 is the only self-invertible element. Since {1, −1} does not form a basis, A is not invertible-4.

F3[x,y] Example 3.1.11 (Invertible-2 not Invertible-3). Consider A = hx,yi2 . Let U(A) be the group of units of A. Then U(A) = {α + βx + γy | α, β, γ ∈ F3, α , 0}. Note that for a = α + βx + γy ∈ U(A), a−1 = (α + βx + γy)−1 = α − βx − γy . An invertible-2 F-basis for A is {1 + x, 1 + y, 1 + x + y}. So A is invertible-2. Let B be an invertible F-basis for A. Assume a ∈ B ∩ F. We know a−1 = ba for some b ∈ F. So, any invertible-3 basis for A does not have constants in it. Since non-constant units are not self-invertible in A, any invertible-3 basis for A has an even number of elements. Since |B| = 3 it cannot be invertible-3. Hence, A is not invertible-3

The following example guarantees the existence of invertible bases that are not invertible-2.

Example 3.1.12. Let F be an algebraic extension of a finite field. Consider the F-algebra A = F(x) of rational functions as a sub-algebra of the field of formal Laurent series F((x)). By Corollary 2.3 of [3], A consists precisely of those Laurent series that are (eventually) p(x) j 2 j ··· periodic. Since a periodic power series is of the form 1−x j = p(x)(1 + x + x + ) for 20

p(x) a polynomial of degree less than j, where j ∈ Z+, then periodic power series are xi ≤ linear combinations of elements of the form 1−x j with 0 i < j. It follows that eventually G { k | ∈ } ∪ { xi | ∈ + ≤ ≤ − } periodic Laurent series are generated by = x k Z 1−x j j Z , 0 i j 1 . Notice, however, that G−1 ⊂ F[x, x−1] (the ring of Laurent polynomials). In particular, G−1 does not generate A. Any basis B contained in G will be an invertible basis that is not invertible-2.

An alternate direction in which to explore properties of a group ring R[G] is by considering the fact that G is an invertible basis which is closed under products. It turns out that this property completely characterizes group rings.

Proposition 3.1.13. If the R-algebra A has an invertible basis B which is closed under products then B is a group G. If, in addition, R commutes with B then A is a group ring. P Proof. Let A have basis B = {v1,..., vn,...}. Then k αkvk = 1. Let v ∈ B. Then multiply P through by v. So we have k αkvkv = v. But each vkv ∈ B as B is closed under products.

Therefore there exists i such that αk = 0 for all k , i and vi = 1. Thus 1 ∈ B.

−1 P −1 Now let v ∈ B. We claim v ∈ B. Let k αkvk = v . Multiply through by v to obtain

P −1 k αkvkv = 1. Then there exists i such that αk = 0 for all k , i and vi = v . But vi ∈ B and so v−1 ∈ B. Therefore B is a group. 

Proposition 3.1.13 has as a corollary which strengthens a result about field extensions reported in [5] for reals over rationals and in general in [8]. Namely, Corollary 3.1.14 extends the result that no proper field extension has a basis that is closed under multiplication.

Corollary 3.1.14. If a simple ring A is an invertible R-algebra with invertible basis B , 1 then B is not closed under products.

Proof. If such a basis existed then A would be a group algebra over R by the above

Proposition. But then A would have a proper ideal I (the augmentation ideal).  21

3.2 General Results and Some Families of Invertible Algebras

In this section we study the behavior of the invertibility properties with respect to standard constructions such as matrix rings and direct sums. We then apply those results to characterize certain families of invertible algebras.

+ Proposition 3.2.1. Let R be an arbitrary ring and n ∈ Z . Then Mn(R) is invertible-2 over R.

Proof. Consider the following: Let vnn be the and A = {ei j|i, j = 1,..., n}

th th where ei j is the matrix unit with 1 in the i j coordinate and zeros elsewhere. For i , j,

Pn let vi j = vnn + ei j. For 1 ≤ i ≤ n − 2, let vii = vnn + l=i+1 el,i+1+(l−i) (mod n−i) − ell. Let

vn−1,n−1 = vnn − enn + en−1,n + en,n−1. Now let B = {vi j|i, j = 1,..., n}. It is easy to see that B Xn spans A. We will show that B is also linearly independent. Let αi jvi j = 0. Now i, j=1 αi1 = α1i = 0 for i = 2,..., n. As there is only one element with a 1 in each of these

coordinates. Similarly we have αil = 0 for 2 ≤ l ≤ n − 2, and i = l + 1,..., n − 1 and

αk j = 0 for 2 ≤ k ≤ n − 2, and j = k + 2,..., n. The diagonal coordinates from the positions nn,..., 22 give the following equations

X αi j + αnn = 0, i, j X αi j + αn−1,n−1 + αnn = 0, i, j . . X Xn αi j + αkk = 0. i, j k=1

These equations imply αii = 0 for i = 1, 2,..., n − 1. Then αn,i+1 + αii = 0 for i = 1, 2,..., n − 2 gives αn,i+1 = 0 for i = 1, 2,..., n − 2. Also for k = 2,..., n − 2 we have

Pk−1 αk,k+1 + i=1 αii = 0. Since αii = 0 for i = 1, 2,..., n − 1 we have αk,k+1 = 0 for 22 k = 2,..., n − 2. For the n − 1, n and n, n − 1 coordinates we obtain the equations

Xn−1 αn−1,n + αii = 0, i=1

αn,n−1 + αn−2,n−2 + αn,n−1 = 0.

Therefore, αn−1,n = αn,n−1 = 0. Finally from the 1, 1 position we have the equation Xn αi j = 0, and since all entries are zero except for αnn we conclude αnn = 0. Hence, B is i, j=1 an R-basis for Mn(R).

−1 Now we have the inverses for vi j are as follows. If i , j then vi j = vnn − ei j. If i = j

−1 T −1 and i < n − 1 then vi j = vi j. Also vn−1,n−1 = vnn − en−1,n−1 + en−1,n + en,n−1 − 2enn. It is easy to see that these inverses form a basis also.  23

The following example illustrates the bases for matrix rings introduced in Proposition 3.2.1.

Example 3.2.2. For an arbitrary ring R, consider M (R). Then B consists of:      3   1 0 0   1 1 0   1 0 1              v11 =  0 0 1  v12 =  0 1 0  v13 =  0 1 0               0 1 0   0 0 1   0 0 1         1 0 0   1 0 0   1 0 0              v21 =  1 1 0  v22 =  0 1 1  v23 =  0 1 1               0 0 1   0 1 0   0 0 1         1 0 0   1 0 0   1 0 0              v31 =  0 1 0  v32 =  0 1 0  v33 =  0 1 0 ,              1 0 1   0 1 1   0 0 1  and B−1 consists of:        1 0 0   1 −1 0   1 0 −1              v−1 =  0 0 1  v−1 =  0 1 0  v−1 =  0 1 0  11   12   13          0 1 0   0 0 1   0 0 1         1 0 0   1 0 0   1 0 0              v−1 =  −1 1 0  v−1 =  0 0 1  v−1 =  0 1 −1  21   22   23          0 0 1   0 1 −1   0 0 1         1 0 0   1 0 0   1 0 0              v−1 =  0 1 0  v−1 =  0 1 0  v−1 =  0 1 0 . 31   32   33          −1 0 1   0 −1 1   0 0 1 

Lemma 3.2.3. Let T be invertible over S and S invertible over R. Then T is invertible over R. If furthermore S is invertible-2 (respectively, invertible-3 or invertible-4) over R and, in addition, T has an invertible-2 (respectively, invertible-3 or invertible-4) basis A 24 such that S commutes with A then T is invertible-2 (respectively, invertible-3 or invertible-4) over R.

Proof. Let A = {ai|i ∈ I} be an invertible basis for S over R and B = {b j| j ∈ J} be an

invertible basis for T over S . We claim C = {aib j|i ∈ I, j ∈ J} is an invertible basis for T P P P over R. Let i, j αi jaib j = 0. Then j( i αi jai)b j = 0. Then for all j we must have P i αi jai = 0 since B is linearly independent over S . Then we must have αi j = 0 for all i and j since A is linearly independent over R. Therefore, C is a linearly independent set P over R. Now let x ∈ T. Then x = j β jb j where β j ∈ S and b j ∈ T. Further, for each j we P may write β j = i αi jai j where αi j ∈ R and ai j ∈ S . Then X X X X x = β jb j = ( αi jai j)b j = αi j(ai jb j). j j i i, j Therefore, C is a generating set. Since C consists of invertible elements, C is an invertible basis for T over R. Assume now that the bases A and B above are in fact both invertible-2 and that S commutes with A. Then, A−1 and B−1 are also bases. Furthermore, the basis obtained from them in the way C was obtained from A and B coincides with C−1 by virtue of the fact that the elements of B−1 ⊂ S commute with those in A and, consequently, with those in A−1. Therefore, C−1 is a also basis and therefore C is invertible-2. The corresponding implications for the cases invertible-3 and invertible-4 follow similar reasonings.



Proposition 3.2.4. A simple finite dimensional algebra A over a field F has an invertible-2 basis.

Proof. Since A is simple and finite dimensional, there exists a finite dimensional division

ring extension K of the field F and A  Mn(K). Now apply Propositions 3.1.5 and 3.2.1 and Lemma 3.2.3.  25

Proposition 3.2.5. Let A and B be finite dimensional invertible (invertible-2) algebras over a ring R such that there exists x ∈ R such that x is invertible and 1 − x is invertible. Then C = A ⊕ B is a finite dimensional invertible (invertible-2) algebra.

Proof. Let A and B be invertible bases for A and B respectively. Let

C = (A, b1) ∪ (a1, B\{b1}) ∪ {(a1, xb1)}

for a1 ∈ A, b1 ∈ B and x ∈ R such that x is invertible and 1 − x is invertible. Clearly C ⊂ U(C). Consider

X X 0 = αa(a, b1) + βb(a1, b) + γ(a1, xb1)

a∈A b∈B\{b1} for αa, βb, γ ∈ R. So, X X ( αa + γx)b1 + βbb = 0

a∈A b∈B\{b1} and X X (αa1 + βb + γ)a1 + αaa = 0. b∈B\{b1} a∈A\{a1}

By linear independence we have that for a ∈ A \ {a1}, αa = 0 and for b ∈ B \ {b1}, βb = 0. Then

(αa1 + γx)b1 = 0 and

(αa1 + γ)a1 = 0.

This implies αa1 = γ = 0 which shows C is a linearly independent set. Now we have

−1 −1 (A, b1) − [(1 − x) (a1, b1) − (1 − x) (a1, xb1)] = (A, 0).

Therefore, we can generate anything of the form (A, 0) and thus generate (0, B). Following the same argument above it can be shown that if A and B are invertible-2 then C

is an invertible-2 basis. Therefore, A ⊕ B is invertible-2.  26

Rings in which the identity is the sum of two units have appeared earlier in the literature. In particular, in [4], right self-injective rings in which the identity is the sum of two units are characterized as being precisely those right self-injective rings that do not have any quotient ring isomorphic to F2.

Proposition 3.2.6. A finite direct sum of invertible algebras over a right self-injective ring

R which does not have a factor ring isomorphic to F2 is also an invertible algebra over R.

Using Proposition 3.2.5, it can be shown that any finite direct sum of of algebras with a finite invertible basis will have a finite invertible basis. This leads to the following interesting Corollary.

Corollary 3.2.7. Any finite dimensional semisimple algebra over a field F , F2 is invertible.

Proof. A consequence of Lemma 3.2.4 and Proposition 3.2.5. 

Remark 3.2.8. F2 ⊕ F2 is a finite dimensional semisimple non-invertible algebra over F2, showing that Corollary 3.2.7 cannot be extended further.

We remark this because proposition 3.2.5 does not include F2. Therefore, we need to address this possibility. This leads us to the following definition.

Definition 3.2.9. An invertible F2-algebra is nice if there exists an invertible basis containing a subset of an even number of elements whose sum is invertible.

Lemma 3.2.10. Let A be an invertible algebra over F2. If there exists an invertible element a ∈ A such that it is the sum of an even number of invertible elements from A then

A does not have a factor ring isomorphic to F2. In particular, nice invertible algebras do

not have a factor ring isomorphic to F2. 27

¯ A Proof. Assume A has an ideal I such that A = I  F2. Let a ∈ A be invertible. Assume a = a1 + ··· + an such that a1,..., an ∈ A are invertible where n is even. Since A¯  F2, a¯1 = ··· = a¯n = a¯ , 0. So, ai = e + bi for some e ∈ A invertible and bi ∈ I. Then a = ne + b1 + ··· + bn = b1 + ··· + bn ∈ I. This is a contradiction since a < I. 

Since F2 ⊕ F2 is not an invertible algebra we must address the question of when the direct sum of F2-invertible algebras is invertible. That is the subject of the following three propositions.

Proposition 3.2.11. Let A and B be finite dimensional invertible algebras over F2. Assume A is nice. Then C = A ⊕ B is invertible.

Proof. Let A and B be invertible bases for A and B respectively. Let a ∈ A such that it is the sum of an even number of elements from A. Let

C = (A, b1) ∪ (a1, B\{b1}) ∪ {(a, b1)}

for a1 ∈ A and b1 ∈ B. Similarly as in Proposition 3.2.5, it can be shown that C is an invertible basis for C. 

Proposition 3.2.12. Any factor ring of an invertible algebra over a field F is also invertible.

Proof. Let A be a finite dimensional invertible algebra over a field F and I / A. Let B be ¯ A an invertible basis for C and define A = I . Since B consists of invertible elements it is clear that B¯ = {v + I|v ∈ B} is a spanning set of invertible elements for A¯. So, there is a subset of B¯ that is a basis for A¯. 

Proposition 3.2.13. An invertible algebra over F2 that is a direct sum of invertible algebras has at most one direct summand isomorphic to F2. 28

Proof. Let A be a finite dimensional invertible algebra over F2. Assume A has multiple copies of F2 as direct summands. Let I, J / A such that A = I ⊕ J and I  F2 ⊕ F2. By

Proposition 3.2.12, I is also an invertible algebra. Remark 3.2.8 shows F2 ⊕ F2 is not

invertible. Therefore, this is a contradiction and A cannot have multiple copies of F2 as direct summands. 

We will consider next the invertibility of factor rings of polynomial rings over a field in one variable. We will give a complete characterization on which ones have an invertible basis.

F[x] Proposition 3.2.14. Let F , F2 be a field. Then h f (x)i is invertible-2 for all f (x).

n n−1 Proof. Let f (x) = αn x + αn−1 x + ··· + α1 x + α0 with α0 , 0. Since α0 , 0 we have

n−1 n−2 x(αn x + αn−1 x + ··· + α2 x + α1) = −α0. Therefore, x is invertible and so B = {1, x, x2,..., xn−1} is an invertible basis. We will show this basis is actually

Pn−1 −i n−1 invertible-2. Let i=0 αi x = 0. Then multiply through by x and we obtain

Pn−1 (n−1)−i 2 n−1 i=0 αi x = 0. But since {1, x, x ,..., x } is a basis, we must have αi = 0 for − B−1 { −1 −2 −(n−1)} F[x] i = 0,..., n 1. Thus = 1, x , x ,..., x is a basis for h f (x)i . F[x] j Now in the factor ring hxmi , {x | j = 1, 2,..., m − 1} consists of nilpotent elements. Therefore, for every j, 1 + x j is invertible. So A = {1} ∪ {1 + x j| j = 1,..., m − 1}. Let

i −1 −1 Pm−1 j i j Pm−1 −1 v0 = 1, and vi = 1 + x . Then v0 = 1 and vi = j=0 (−1) x . Let i=0 αivi = 0. As −1 P −1 −1 α1v1 is the only term including x, α1 = 0. So i,1 αivi = 0 and α2v2 is the only term 2 including x . Then α2 = 0. Continuing this way we obtain that αi = 0 for i = 1,..., m − 1.

−1 It then follows that α0 is also zero. Therefore, A , having the same number of elements as A, is a basis. It only rests to consider the case f (x) ∈ F[x] with f (0) = 0 but f (x) not a power of x. F[x] F[x] ⊕ F[x] Under that assumption, by the Chinese remainder theorem, h f (x)i  hxmi g(x) for some 29 positive integer m and g(x) such that g(x) , 0. The result then follows from the first two cases and Proposition 3.2.5. 

F[x] F[x] Proposition 3.2.15. Let F = F2. Then the factor rings A1 = h(x+1)ni and A2 = hxni are both invertible but neither is nice.

Proof. The proof of the above proposition requires the hypothesis that F be other than F2 only when it comes to applying Proposition 3.2.5 for the third case. So, the same arguments as above show that A1 and A2 are invertible. The fact that they are not nice is a consequence of Lemma 3.2.10. 

∈ F[x] Proposition 3.2.16. Let F = F2 and f (x) F[x] then A = h f (x)i is invertible if and only if x(x + 1) does not divide f (x).

F[x] m n Proof. Let F = F2 and A = h f (x)i . Write f (x) = x (x + 1) g(x) where x and (x + 1) do not · F[x] divide g(x). We show that A is invertible if and only if n m = 0. Suppose A = h f (x)i is F[x] F[x] invertible. Observe hxmi and h(x+1)ni both have factor rings isomorphic to F2. Then by

Proposition 3.2.13 we can only have one direct summand isomorphic to F2. Therefore, either n or m must be 0. Now suppose n · m = 0. Then either n or m is 0 then we have at most one direct

summand isomorphic to F2 and by Proposition 3.2.13 we are done. 

3.3 Trivial Units

Since group rings motivated the idea of invertible algebras we have studied properties of group rings in hopes we could extend these to invertible algebras. We next turn our attention to trivial units, which appear in [12] and [10]. A trivial unit of a group ring R[G] is an element of the form αg with α ∈ U(R) and g ∈ G. Likewise we provide the definition of trivial units in an invertible algebra. 30

Definition 3.3.1. Given an R-algebra with invertible basis B we say unit α ∈ A is trivial with respect to B if α = uv, where u ∈ U(R) and v ∈ B.

Remark 3.3.2. Clearly if all units of an invertible algebra A with basis B are trivial with respect to B then B is scalar closed under products and inverses. In particular, when the base ring R is F2 then invertible algebras with only trivial units are group rings, furthermore, by [12] we have an invertible F2-algebra with only trivial units must be one of F2C2 or F2C3.

The idea of trivial units is basically saying there are not many ways to form units in our algebra. Next we connect this thought with not just units but bases. We will consider when our algebra will have trivial invertible bases. An algebra with trivial invertible bases can be thought of the same way as trivial units, there are few ways to form an invertible basis.

Let A be an invertible R-algebra with invertible basis B = {v1, v2,..., vn}. We consider two methods to form a different invertible basis:

1. Let x ∈ U(A). Then xB is another invertible basis.

2. Let α1, α2, . . . , αn ∈ U(R). Then {α1v1, α2v2, . . . , αnvn} is also an invertible basis.

Definition 3.3.3. Let A be an invertible R-algebra A with invertible basis B. We say that an invertible base B∗ is trivial with respect to B if it is obtained by either multiplying B by a unit of A or by multiplying each element of B by a unit of R.

Next we give two examples. The first provides an example of an algebra with only trivial invertible bases, while the second example will give us an algebra with not only trivial invertible bases.

Example 3.3.4. consider A = F2C3 where C3 is the cyclic group of order 3 generated by

x. Notice there are eight elements total in F2C3 and a simple inspection shows that the only invertible ones are the three elements of the basis {1, x, x2}. 31

We go ahead and note here that it is also easily seen F3C2 and F2C2 have only trivial invertible bases. It will be apparent shortly why we bring this up.

2 Example 3.3.5. Consider A = F3C3. Again we have our group G = {1, x, x } is an invertible basis. However, the set B∗ = {2, 1 + x, 1 + x2} is also an invertible basis. Notice 1(2) = 2, x(1 + x2) = 1 + x, and x2(1 + x) = 1 + x2. Since 2, 1 + x2, and 1 + x are not unit-scalar multiples of one another the two operations defined above would not yield B∗ from G.

In Passman [10] it is proven that given a group G that is not torsion free and a field of characteristic p ≥ 0, then with the exception of F2C2, F3C2, and F2, C3, the group ring FG has nontrivial units. We connect this idea with our notion of trivial invertible bases in the following proposition.

Proposition 3.3.6. Let G be a group that is not torsion free and F a field of characteristic p ≥ 0. Let A = F[G] be the group algebra. Then A has only trivial units if and only if A has only trivial invertible bases.

Proof. Suppose A has only trivial units. Then by [10] we know A must be one of

F2C2, F3C2, or F2C3. By example 3.3.4 we see A has only trivial invertible bases.

Now suppose A has only trivial invertible bases with basis G = {g1, g2,..., gn}. Assume to the contrary there exists a nontrivial unit, say g. Then g is of the form

Pt g = i=1 αigi with |{αt|αt , 0}| ≥ 2. Note it is possible for t = n but we only need consider

∗ ∗ {αi|αi , 0}. WLOG let α1 , 0. Then consider the set G = {g, g2,..., gn}. Clearly G generates as we can obtain G from G∗. To prove linear independence let

Pn βg + i=2 βigi = 0. Then Xn Xn βg + βigi = β(α1g1 + ... + αtgt) + βigi i=2 i=2 Xt Xn = βα1g1 + (β + αiβi)gi + β jg j = 0. i=2 j=t+1 32

Since G is linear independent we have βα1 = 0, (β + βi) = 0 for all i, and β j = 0 for all j.

∗ Clearly β j = 0 for all j and since β = 0 we must have βi = 0 for all i so G is a linearly independent set, whence a basis. Furthermore, G∗ is an invertible basis and is not obtained from G via either of the operations defined for trivial invertible bases, a contradiction.

Thus there does not exist a nontrivial unit. 

We tried to extend this idea to invertible algebras over a field of characteristic p ≥ 0. In other words our conjecture is the following.

Conjecture 3.3.7. Let A be a finite dimensional invertible algebra over a field of characteristic p ≥ 0. If A has only trivial invertible bases then A must be a group algebra.

The only problem we run into is if A is scalar closed under products. This idea is more along the thought of trivial units in a crossed product, which to our knowledge, has not been researched yet. 33 4 Further Results on Invertible Algebras

We begin this chapter with a few examples of infinite dimensional invertible algebras. We follow with a few new definitions that group rings possess. We will show these properties though are not restricted to group rings alone, and they will allow for the map P P φ : A → R defined by φ( i αivi) = i αi, to become a ring homomorphism. Then we end this chapter with some definitions that would pertain to more of a crossed product setting.

4.1 Infinite Dimensional Invertible Algebras

Having shown in Proposition 3.2.1 that matrix rings are always invertible algebras over their base rings, we turn next our attention to infinite matrix rings. Proposition 3.1.1 handles a family of matrices we nickname ”kite matrices” (for hopefully obvious reasons). A family of matrix algebras is considered and is the subject of Theorem 3.1.3. To show that these algebras are indeed invertible, we make use of Proposition 3.1.2 which is an infinite version of Lemma 3.2.3.        A 0 0 0 ...          0 a 0 0 ...            Proposition 4.1.1. Let R be any ring. Let M =  0 0 a 0 ...  |A ∈ Mn(R), a ∈ R        ..    0 0 0 a .       . . . . .    ......   and

• vi j = I + ei j for i , j

• v11 = I − e11 + e12 + e21

• vii = I − eii + ei−1,i−1 + ei−1,i + ei,i−1 for i ≥ 2

Then [ B = I {vi j}i, j ≥ 1 34 is an R-basis for M.

Proof.

ei j = vi j − I for i , j and

e11 = I − v11 + e12 + e21 = I − v11 + v12 − I + v21 − I.

Then by induction using this fact, eii = I − vii − ei−1,i−1 + ei−1,i + ei,i−1 for i ≥ 2, the rest of P the standard matrix units can be generated. Now let i j αi jvi j = 0 where α00 is the coefficient for I. Since this is a finite sum there exists some n such that for all m > n, the m, m coordinate is 1 for all vi j. Also note that if |i − j| ≥ 2, then αi j = 0 as vi j is the only element with a non-zero support in that position. Below we summarize the equations according to their coordinate positions. X αi j = 0 (m, m), m > n (4.1) i j X X X αii + αi+1,i + αi,i+1 + α00 = 0 (n, n) (4.2) i,n i i X X X αii + αi+1,i + αi,i+1 + α00 = 0 ( j, j), 1 ≤ j < n (4.3) i, j, j+1 i i αii + αi,i−1 = 0 (i − 1, i), i , 2 (4.4)

α11 + α22 + α12 = 0 (1, 2) (4.5)

Subtracting equations (1) and (2) we get αnn = 0. Subtracting equations (1) and (3) we obtain

αii + α j+1, j+1 = 0 for 1 ≤ j < n. Now using the fact αnn = 0 and by an iterative process we obtain αii = 0 for

1 ≤ i ≤ n. From equation (4) and the previous facts we have αi,i−1 = 0 for i , 2. Finally equation (5) along with α11 = α22 = 0 gives α12 = 0 and similarly we get α21 = 0.

Therefore, αi j = 0 for all i and j and B is a linearly independent set. Lastly we see all

−1 elements of B are self-invertible except v11 but v11 = I − 2e11 − e22.  35

We follow with a nice proposition about a chain of invertible algebras. In the previous chapter we gave a proof for a finite chain. In this next proposition our chain is not necessarily finite.

Proposition 4.1.2. Let {Ai}i∈N be a chain of algebras A0 ⊆ A1 ⊆ · · · such that for all i ∈ N

Ai+1 is an invertible algebra over Ai. Assume Bi+1 is an invertible basis for Ai+1 over Ai ∈ B S such that 1 i+1. Let A = i∈N Ai. Then with the obvious operations, A is an invertible

Ai-algebra for all i ∈ N. Assume further that for all i ≥ 1,Ai commutes with Bi+1. Then if

for all i ∈ N Ai+1 is invertible-2 (respectively, invertible-3 or invertible-4) over Ai then A is

invertible-2 (respectively, invertible-3 or invertible-4) over A0.

Proof. Simple induction and Proposition 3.3 of [7] show that the results hold for Ai over

A0 for all i ≥ 1. Remarkably, to get to this stage one need not assume that each Bi contains 1. The proof of our theorem goes as follows.

∗ ∗ ∗ ∗ ∗ Let B1 = B1, and for all i > 1 define Bi = Bi−1Bi. Clearly B1 ⊆ B2 ⊆ · · · . Let S ∗ B = i≥1 Bi . We claim that B is an invertible basis for A over A0. ∈ S ∈ B∗ Let α A. Since A = i∈N Ai, there exists some i such that α Ai. Then i generates ∗ P α and since Bi ⊆ B we have B is a generating set. Let i βivi = 0 where vi ∈ B and ∗ ∗ βi ∈ A0. Then there exists some j such that {vi}i ⊆ B j. Now B j is a basis for A j over A0.

Therefore, we must have βi = 0 for all i. Thus B is a linearly independent set.

−1 Now suppose for all i, Bi is an invertible-2 basis. Then Bi is also a basis. So we let

−1 −1 C1 = B1 , and for all i > 1 define Ci = Ci−1Bi . We construct our basis as before, letting S −1 −1 −1 C = i≥1 Ci. Then, a typical element of C is of the form v1 v2 ··· vn for vi ∈ Bi. Notice −1 −1 −1 −1 v1 v2 ··· vn = (vn ··· v1) , and under the commutativity assumption we have −1 −1 −1 −1 (vn ··· v1) = (v1 ··· vn) ∈ B . Therefore, B is also a basis and therefore B is invertible-2.

Proving A is an invertible-3 (respectively, invertible-4) algebra over Ai by assuming that for all i, Bi is an invertible-3 (respectively, invertible-4) basis follow similarly.  36     A 0 0 ...             0 A 0 ...     | ∈  Theorem 4.1.3. Let R be any ring. Let M =   A Mn(R). Then M is  ...    0 0 A       ......    . . . .   an invertible R-algebra.

Proof. A consequence of Proposition 4.1.2. 

4.2 Linear Closed Under Products and Inverses

Recall that a group ring R[G] is self-injective if R is a self-injective ring and G is a finite group. Also if R[G] is self-injective, then R is self-injective and G is finite [1]. The next example illustrates how this result does not extend to invertible algebras.

Example 4.2.1. We show here that a finite dimensional invertible algebra over a self-injective ring need not be self-injective. It is well known that for a finite-dimensional commutative local algebra A over a field R = F, is self-injective if and only if A has a

F3[x,y] unique minimal ideal [6]. Consider the F3-algebra A = hx,yi2 . It is easily checked that the basis B = {1, 1 + x, 1 + y} is an invertible basis for A. Now hx, yi is the unique maximal ideal of A and so A is a local ring. Now hxi, and hyi are both minimal ideals and therefore A does not have a unique minimal ideal. Therefore, A is not self-injective even though it is finite dimensional over F (which, as all fields, is self-injective.)

A key element of the proof in [1] of the above characterization of self-injective group P P rings is the fact that the R-homomorphism φ : R[G] → R given by φ( g∈G αgg) = g∈G αg is a ring homomorphism. It seem reasonable to ask for a basis B of an algebra A over a P P ring R when the map φ : A → R given by φ( b∈B αbb) = b∈B αb is a ring homomorphism. We introduce next a few definitions that will be essential components of the answer to that question provided by Proposition 4.2.7 below. Furthermore, these definitions and 37

Proposition 4.2.7 will be instrumental in providing a partial converse to Proposition 2.2 in [7].

Definition 4.2.2. Let A be an R-algebra with basis B. We say that R commutes linearly B ∈ B ∈ P P with if for all v and β R, if vβ = vk∈B δkvk then δk = β.

Definition 4.2.3. Let A be an R-algebra with basis B. We call B linearly closed under ∈ B P P products if for all v, w , if vw = vi∈B αivi then vi∈B αi = 1.

Definition 4.2.4. Let A be an R-algebra with invertible basis B. We say B is linearly ∈ B −1 P P closed under inverses if for all v , if v = vi∈B αivi then vi∈B αi = 1.

Clearly Definition 4.2.3 is satisfied by group rings. However, in [7], an example of an F [x, y] invertible algebra that is not a group ring is given, namely 2 . An invertible basis of hx, yi2 this algebra also illustates that Definition 4.2.3 does not just pertain to group rings.

F [x, y] Example 4.2.5. Consider A = 2 . As stated in [7] we know A is not a group ring. hx, yi2 However, A is an invertible algebra with invertible basis B0 = {1, 1 + x, 1 + y}. The product of 1 + x and 1 + y is

(1 + x)(1 + y) = 1 + x + y = 1(1) + 1(1 + x) + 1(1 + y).

The sum of the coefficients is 1. The other combinations are trivial. Therefore, B satisfies Definition 4.2.3.

An obvious question is are there other examples of algebras with bases that are linearly closed under products and inverses, yet are not group rings. The previous example

R[x1, x2,..., xn] inspired a consideration of algebras of the form m . The following hx1, x2,..., xni proposition will show there are numerous examples of algebras that are linearly closed under products and inverses and not group rings. 38

R[x1, x2,..., xn] Proposition 4.2.6. Let A = m where R is any ring. Then A has an invertible hx1, x2,..., xni basis that is linearly closed under products and inverses, namely, Xn S r1 r2 rn B = {1} {1 + x1 x2 ··· xn } where 0 ≤ ri < m for all i and 1 ≤ ri < m. i=1

s1 s2 sn t1 t2 tn Proof. Let 1 + x1 x2 ··· xn , 1 + x1 x2 ··· xn ∈ B. Then

s1 s2 sn t1 t2 tn v = (1 + x1 x2 ··· xn )(1 + x1 x2 ··· xn )

s1 s2 sn t1 t2 tn s1+t1 s2+t2 sn+tn = 1 + x1 x2 ··· xn + x1 x2 ··· xn + x1 x2 ··· xn . Xn s1+t1 s2+t2 sn+tn First suppose (si + ti) < m. Then 1 + x1 x2 ··· xn ∈ B. Writing v as a linear i−1 combination of elements from B we have

s1 s2 sn t1 t2 tn s1+t1 s2+t2 sn+tn v = −2(1) + (1 + x1 x2 ··· xn ) + (1 + x1 x2 ··· xn ) + (1 + x1 x2 ··· xn ).

The sum of the coefficients of the basis elements is −2 + 1 + 1 + 1 = 1. So in this case B is linearly closed under products. Xn s1+t1 s2+t2 sn+tn Now assume (si + ti) ≥ m. Then x1 x2 ··· xn = 0. Using this information i−1 we write v as a linear combination of elements from B to obtain

s1 s2 sn t1 t2 tn v = −1(1) + (1 + x1 x2 ··· xn ) + (1 + x1 x2 ··· xn ).

Again we notice the sum of the coefficients is 1 and B is linearly closed under products.

s1 s2 sn Given an arbitrary element w = 1 + x1 x2 ··· xn ∈ B we see the inverse is

−1 s1 s2 sn s1 s2 sn 2 s1 s2 sn k w = 1 − (x1 x2 ··· xn ) + (x1 x2 ··· xn ) − · · · + (x1 x2 ··· xn ) Xn −1 where k is the smallest integer such that (k + 1) si ≥ m. If k is odd then writing w as a i=1 linear combination of elements from B we have

−1 s1 s2 sn s1 s2 sn 2 s1 s2 sn k w = 2(1) − (1 + x1 x2 ··· xn ) + (1 + (x1 x2 ··· xn ) ) − ... + (1 − (x1 x2 ··· xn ) )

has coefficients adding up to 1. 39

On the other hand, if k is even

−1 s1 s2 sn s1 s2 sn 2 s1 s2 sn k w = 1(1) − (1 + x1 x2 ··· xn ) + (1 + (x1 x2 ··· xn ) ) − ... + (1 + (x1 x2 ··· xn ) ).

Now there are an even number of terms of w−1 not including 1. As the signs alternate we see they all cancel out and therefore when we add of the coefficients we have 1.

Combining these two results we have B is linearly closed under inverses. 

We note that as far as we know it is possible that the ring A in the previous proposition may indeed be a group ring. However, in the case when R is self-injective, then A is not a group ring for A is local and does not have a unique minimal ideal and then A is not self-injective.

Proposition 4.2.7. Let A be a left R-algebra with basis B = {vi|i ∈ I} and φ : A → R the P P R-homomorphism given by φ( i αivi) = i αi Then the following are equivalent:

1. φ is a ring homomorphism.

2. B is linearly closed under products and R commutes linearly with B.

P P 3. For every v, w ∈ B and β ∈ R, if vβw = v∈B βvv then v∈B βv = β.

Proof. (1) ⇒ (2): First suppose φ is a ring homomorphism. Let v, w ∈ B and write P P vw = i αivi. Now φ(v)φ(w) = 1 · 1 = 1 and φ(vw) = i αi. Since φ is a ring P homomorphism we have φ(v)φ(w) = φ(vw) which gives i αi = 1. P P P P If vβ = j γ jv j, then φ(vβ) = j γ j. Write 1 = k δkvk. So β = β k δkvk. Since φ is a ring homomorphism we have φ(1) = 1, but also X X φ(1) = φ( δkvk) = δk k k P giving k δk = 1. Now X X X X φ(β) = φ(β δkvk) = φ( βδkvk) = βδk = β δk = β · 1 = β. k k k k 40 P So φ(vβ) = φ(v)φ(β) = 1 · β = β. Hence β = j γ j. P P (2) ⇒ (3): Let v, w ∈ B and β ∈ R, if vβw = i βivi then φ(vβw) = i βi. P P Alternatively, (vβ)w = i αi(viw), where i αi = β. It follows that

X X (vβ)w = αi δi jv j i j P where for all i, j δi j = 1. Thus,

X X X φ(vβw) = αi δi j = αi = β. i j i P Therefore, i βi = β. P (3) ⇒ (1) It suffices to show that φ is multiplicative. Let r, s ∈ A. So r = i αivi and P s = j β jv j. Then

X X X φ(rs) = φ( αivi β jv j) = φ( αi(viβ j)v j). i j i, j P P Now, for every i, j, write viβ jv j = k δi jkvk. By (3), k δi jk = β j. So,

X X X X X X φ(rs) = φ( αiδi jkvk) = αi δi jk αi β j. i, j,k i j k i j

It is easy to see that this is also the value of φ(r)φ(s). 

Notice that the invertible basis in Example 4.2.1 satisfies condition (2) of Proposition 4.2.7.

Proposition 4.2.8. Let A be a skew invertible R-algebra with invertible basis B. If R linearly commutes with B then R commutes with B.

Proof. For a given $r \in R$ and $v \in B$ we have $vr = r^{\sigma(v)}v$. Since $R$ linearly commutes with $B$, if $vr = \sum_i \alpha_iv_i$ then $\sum_i \alpha_i = r$. Therefore, $r = \sum_i \alpha_i = r^{\sigma(v)}$. Since $v$ was arbitrary, $\sigma(v)$ is the identity automorphism in $\mathrm{Aut}(R)$. □

Proposition 4.2.9. Let F ⊂ E be a proper field extension with basis

B = {e1, e2,..., en,...}. Then B is not linearly closed under products.

Proof. Clearly $F$ linearly commutes with $B$. Suppose $B$ is linearly closed under products. Then by Proposition 4.2.7 the $F$-homomorphism $\varphi : E \to F$ given by $\varphi(\sum_i \alpha_ie_i) = \sum_i \alpha_i$ is a ring homomorphism. Since $E$ is a field, $\ker(\varphi) = E$ or $\ker(\varphi) = 0$. However, $e_1 - e_2 \in \ker(\varphi)$, so $\ker(\varphi) \neq 0$. Also $e_1 \notin \ker(\varphi)$, so $\ker(\varphi) \neq E$. This contradiction shows that $B$ is not linearly closed under products. □

4.3 Scalar Closed Under Products and Inverses

The following definition gives a softer form of invertibility-3 for invertible bases.

Definition 4.3.1. Let B be an invertible basis for an R-algebra A. We say B is scalar closed under inverses if for all v ∈ B we have αv−1 ∈ B for some α ∈ U(R).

Example 4.3.2. It is straightforward to see that the basis $B = \bar{G}$ of a crossed product $R * G$ is scalar closed under inverses. An example of a different type is the field extension $\mathbb{C}$ over $\mathbb{R}$. An invertible basis for this field extension is $B = \{1, i\}$. Notice that the inverse of each element of $B$ is a scalar multiple of some element of $B$, since $i^{-1} = (-1)i$.

A natural question at this point would be to ask whether every algebra having an invertible basis that is scalar closed under inverses is an invertible-3 algebra. In the next example we show that this is not the case.

Example 4.3.3. Consider the quaternions H as an algebra over the field R of real numbers.

The basis $B = \{1, i, j, k\}$ is clearly scalar closed under inverses. Suppose $A = \{v_1, v_2, v_3, v_4\}$ is an invertible-3 basis for $\mathbb{H}$ over $\mathbb{R}$. We have two cases. If $v_1 = 1$ or $-1$ then, without loss of generality, $v_2 = v_3^{-1}$. This forces $v_4$ to be its own inverse. However, in a division ring the only elements that are their own inverses are $1$ and $-1$, a contradiction.

So, assume that neither $1$ nor $-1$ belongs to $A$. Note that since $A$ is invertible-3, and considering that for $v = a + bi + cj + dk \in \mathbb{H}$ we have $v^{-1} = \frac{1}{a^2+b^2+c^2+d^2}(a - bi - cj - dk)$, it follows that no element of $A$ can have $a = 0$ (for then $v^{-1}$ would be a scalar multiple of $v$ other than $v$ itself, and two distinct elements of the basis $A$ cannot be proportional). So, let $v_1 = a + bi + cj + dk$ with $a \neq 0$ and $v_2 = v_1^{-1}$. Then write $v_3 = \alpha + \beta i + \gamma j + \delta k$ with $\alpha \neq 0$, so that $v_4 = v_3^{-1} = \frac{1}{\alpha^2+\beta^2+\gamma^2+\delta^2}(\alpha - \beta i - \gamma j - \delta k)$. Since $v_2 \in \mathrm{span}\{1, v_1\}$ and $v_4 \in \mathrm{span}\{1, v_3\}$, the span of $A$ has dimension at most three, contradicting the assumption that $A$ is a basis. Therefore $\mathbb{H}$ is not an invertible-3 algebra over $\mathbb{R}$.
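The inverse formula quoted in this example is easy to confirm directly. The following sketch is only an illustration: it implements the Hamilton product on 4-tuples over $\mathbb{Q}$ (our own encoding, not notation from the text), checks that $v^{-1} = \frac{1}{a^2+b^2+c^2+d^2}(a - bi - cj - dk)$ really inverts the basis elements, and confirms that the inverse of each element of $B = \{1, i, j, k\}$ is a scalar multiple of an element of $B$.

```python
from fractions import Fraction

def qmul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qinv(p):
    """Inverse via the conjugate-over-norm formula from Example 4.3.3."""
    a, b, c, d = p
    n = a*a + b*b + c*c + d*d
    return (a/n, -b/n, -c/n, -d/n)

one = (Fraction(1), Fraction(0), Fraction(0), Fraction(0))
i = (Fraction(0), Fraction(1), Fraction(0), Fraction(0))
j = (Fraction(0), Fraction(0), Fraction(1), Fraction(0))
k = (Fraction(0), Fraction(0), Fraction(0), Fraction(1))

for v in (one, i, j, k):
    assert qmul(v, qinv(v)) == one        # the formula really produces the inverse
assert qinv(i) == tuple(-t for t in i)    # i^{-1} = -i, and similarly for j and k
print("inverse formula verified; B = {1, i, j, k} is scalar closed under inverses")
```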

It was shown in Proposition 2.12 of [7] that if an algebra has an invertible basis $B$ which is closed under products, and $R$ commutes with $B$, then the algebra is indeed a group ring. However, crossed products are invertible-2 algebras having a basis that satisfies an alternative version of product closure. Noticing that the basis $\bar{G}$ satisfies that for $x, y \in G$ we have $\bar{x}\bar{y} = \tau(x, y)\overline{xy}$, we arrive at the following definition.

Definition 4.3.4. Let A be an invertible algebra over a field F with invertible basis B. We say B is scalar closed under products if for all v, w ∈ B we have α(vw) ∈ B for some α ∈ F.

Example 4.3.5. The basis B = {1, i, j, k} of H over R is clearly scalar closed under products.

Lemma 4.3.6. Let A be an invertible R-algebra with invertible basis B. Assume B is linearly closed under products and scalar closed under products. Then B is a group.

Proof. Let $v_i, v_j \in B$. Since $B$ is scalar closed under products we have $v_iv_j = \alpha v_k$ for some $\alpha \in U(R)$ and $v_k \in B$. But since $B$ is linearly closed under products we must have $\alpha = 1$. Therefore, $B$ is closed under products, and by Proposition 2.12 in [7], $B$ is a group. □

Lemma 4.3.7. Let A be an invertible R-algebra with invertible basis B. Assume B is linearly closed under inverses and scalar closed under inverses. Then B is an invertible-3 basis. 43

Proof. Let $v \in B$. Writing $v^{-1}$ as a linear combination of elements from $B$, $v^{-1} = \sum_{v_i \in B}\alpha_iv_i$, we have $\sum_i \alpha_i = 1$ since $B$ is linearly closed under inverses. However, since $B$ is scalar closed under inverses, $\alpha v^{-1} \in B$ for some $\alpha \in U(R)$, so the above expression of $v^{-1}$ has the single nonzero coefficient $\alpha^{-1}$. Hence $\alpha^{-1} = 1$, that is, $v^{-1} = \alpha v^{-1} \in B$.

Therefore $B$ is invertible-3. □

Lemma 4.3.8. Let A be an invertible R-algebra with invertible basis B = {v1, v2,...}. Further assume B is scalar closed under products and scalar closed under inverses. Then there exists a necessarily unique vi ∈ B such that vi ∈ U(R). Further, WLOG we may assume 1 ∈ B.

Proof. For any $v_i \in B$ there exists some $\alpha \in U(R)$ such that $\alpha v_i^{-1} \in B$, as $B$ is scalar closed under inverses. Since $B$ is scalar closed under products there exists some $\beta \in U(R)$ such that $\beta(\alpha v_i^{-1}v_i) = \beta\alpha \in B$. Also $\beta\alpha \in U(R)$. □

Proposition 4.3.9. Let A be a commutative invertible R-algebra with basis B. Further suppose B is scalar closed under products. Then B−1 is also an invertible basis for A.

Proof. Let $B = \{v_1, v_2, \ldots, v_n\}$ and take any $v_i \in B$. Consider the set $v_iB = \{v_iv_1, v_iv_2, \ldots, v_iv_n\}$. Since $B$ is scalar closed under products we have $v_iB = \{\alpha_jv_j \mid \alpha_j \in U(R), v_j \in B\}$. Therefore, multiplying $B$ by an element of $B$ simply permutes the basis elements, up to factors from $U(R)$. In particular, $v_iv_j \neq \alpha v_iv_k$ for $v_i, v_j, v_k \in B$ with $j \neq k$ and $\alpha \in U(R)$.

Now we wish to show linear independence of $B^{-1}$. So let $\alpha_1v_1^{-1} + \alpha_2v_2^{-1} + \cdots + \alpha_nv_n^{-1} = 0$. Let $v = \prod_{i=1}^{n}v_i$ and, using that $B$ is scalar closed under products, write $\prod_{i \neq k}v_i = \beta_kw_k$ with $\beta_k \in U(R)$ and $w_k \in B$ for each $k$. Then

$v(\alpha_1v_1^{-1} + \alpha_2v_2^{-1} + \cdots + \alpha_nv_n^{-1}) = \alpha_1\beta_1w_1 + \alpha_2\beta_2w_2 + \cdots + \alpha_n\beta_nw_n = 0.$

However, as noted above, the $w_k$ are $n$ distinct elements of $B$, so in fact $\{w_1, \ldots, w_n\} = B$. Therefore, by linear independence of $B$ we must have $\alpha_t\beta_t = 0$ for all $t = 1, \ldots, n$. But each $\beta_t \in U(R)$, and thus $\alpha_t = 0$ for all $t$. To complete the proof we just note that $B$ and $B^{-1}$ have the same number of elements. □

Next we consider a generalization of invertibility-3 which arises naturally in the context of crossed products. As a reminder, we include the definition of crossed product here:

Definition 4.3.10. Let R be a ring with 1 and G be a group. Then a crossed product R ∗ G is an associative ring with G¯, a copy of G, as an R-basis. Multiplication is determined by the following two rules:

1. For $x, y \in G$ there exists a unit $\tau(x, y) \in U(R)$ such that $\bar{x}\bar{y} = \tau(x, y)\overline{xy}$. This action is called the twisting of the crossed product.

2. For $x \in G$ there exists $\sigma(x) \in \mathrm{Aut}(R)$ such that for every $r \in R$, $\bar{x}r = r^{\sigma(x)}\bar{x}$. This action is called the skewing of the crossed product.

An important particular case is also of interest:

Definition 4.3.11. When the assignment σ(x) in the above definition is the identity automorphism in Aut(R), we say that R ∗ G is a twisted group ring and we denote it as Rt[G].

Just as group rings (and field extensions) are the archetypes of the notions of invertibility and its modifications from [7], crossed products are the natural motivation for Definitions 4.3.1 and 4.3.4 above. We now show that crossed products are indeed invertible-2.

Proposition 4.3.12. The crossed product R ∗ G is invertible-2.

Proof. We know $\bar{G} = \{\bar{x} \mid x \in G\}$ is a basis for $R * G$. We will show that $(\bar{G})^{-1} = \{\bar{x}^{-1} \mid x \in G\}$ is also a basis for $R * G$. Now $\bar{x}\,\overline{x^{-1}} = \tau(x, x^{-1})\bar{1}$, so $\bar{x}^{-1} = \overline{x^{-1}}\,\tau(x, x^{-1})^{-1}$ and $\tau(x, x^{-1})\bar{x}^{-1} = \overline{x^{-1}} \in \bar{G}$. Let $\sum_x \alpha_x\bar{x}^{-1} = 0$. Then $\sum_x \alpha_x\tau(x, x^{-1})^{-1}\overline{x^{-1}} = 0$. Since $\bar{G}$ is a basis we have $\alpha_x\tau(x, x^{-1})^{-1} = 0$ for all $x$. Since $\tau(x, x^{-1}) \in U(R)$ we have $\alpha_x = 0$ for all $x$. So $R * G$ is invertible-2. □

A straightforward application of Proposition 3.3 in [7] yields the following result.

Proposition 4.3.13. Let $S$ be an invertible algebra over a ring $R$. Then:

1. the crossed product $S * G$ is also an invertible algebra over $R$;

2. if $S$ is in fact invertible-2, then the twisted group ring $S^t[G]$ is also an invertible-2 algebra over $R$.

Proof. This follows from Proposition 4.3.12 and Proposition 3.3 in [7]. □

Proposition 4.3.14. Let $A$ be a commutative invertible $R$-algebra with an invertible basis $B = \{v_1, \ldots, v_n, \ldots\}$ which is scalar closed under inverses and scalar closed under products, say $v_i^{-1} = r_iv_{\sigma(i)}$ and $v_iv_j = s_{ij}v_{\eta(i,j)}$ with $r_i, s_{ij} \in U(R)$. Assume that $r_i$ and $s_{ij}$ have order two for all $i$ and $j$. Then the map $* : A \to A$ defined by

$\left(\sum_{v \in B}\alpha_vv\right)^{*} = \sum_{v \in B}\alpha_vv^{-1}$

is an involution.

Proof. Clearly $*$ is additive. Let $\alpha \in A$ and write $\alpha = \sum_i \alpha_iv_i$. Applying $*$, we have $\alpha^{*} = \sum_i \alpha_iv_i^{-1}$. Since $B$ is scalar closed under inverses we have $v_i^{-1} = r_iv_{\sigma(i)}$, where $\sigma : \{1, \ldots, n\} \to \{1, \ldots, n\}$. Therefore, $\alpha^{*} = \sum_i \alpha_ir_iv_{\sigma(i)}$. Notice that $v_{\sigma(i)}^{-1} = r_iv_i$. Then

$(\alpha^{*})^{*} = \sum_i \alpha_ir_iv_{\sigma(i)}^{-1} = \sum_i \alpha_ir_ir_iv_i = \sum_i \alpha_iv_i = \alpha,$

as $r_i^2 = 1$.

Let $\beta \in A$ and write $\beta = \sum_j \beta_jv_j$. Then

$(\alpha\beta)^{*} = \left(\sum_{i,j}\alpha_i\beta_jv_iv_j\right)^{*} = \left(\sum_{i,j}\alpha_i\beta_js_{ij}v_{\eta(i,j)}\right)^{*},$

since $B$ is scalar closed under products. Applying $*$ we get

$\left(\sum_{i,j}\alpha_i\beta_js_{ij}v_{\eta(i,j)}\right)^{*} = \sum_{i,j}\alpha_i\beta_js_{ij}v_{\eta(i,j)}^{-1} = \sum_{i,j}\alpha_i\beta_js_{ij}r_{\eta(i,j)}v_{\sigma(\eta(i,j))}.$

We wish to show this is equal to $\alpha^{*}\beta^{*}$. Now

$\alpha^{*}\beta^{*} = \sum_i \alpha_iv_i^{-1}\sum_j \beta_jv_j^{-1} = \sum_{i,j}\alpha_i\beta_jv_i^{-1}v_j^{-1} = \sum_{i,j}\alpha_i\beta_j(v_iv_j)^{-1} = \sum_{i,j}\alpha_i\beta_j(s_{ij}v_{\eta(i,j)})^{-1},$

since $A$ is commutative and $B$ is scalar closed under products. But

$\sum_{i,j}\alpha_i\beta_j(s_{ij}v_{\eta(i,j)})^{-1} = \sum_{i,j}\alpha_i\beta_js_{ij}^{-1}v_{\eta(i,j)}^{-1} = \sum_{i,j}\alpha_i\beta_js_{ij}^{-1}r_{\eta(i,j)}v_{\sigma(\eta(i,j))},$

as $B$ is scalar closed under inverses. Finally, by assumption $s_{ij}^{-1} = s_{ij}$, and therefore $(\alpha\beta)^{*} = \alpha^{*}\beta^{*}$. □

Lemma 4.3.15. Let A be an invertible R-algebra with a basis B that is scalar closed under products. Then B contains an element from U(R), the units of R.

Proof. Let $B = \{v_1, v_2, \ldots\}$. Write $\sum_k \alpha_kv_k = 1$ and let $v_j \in B$. Since $B$ is scalar closed under products, write $v_kv_j = \beta_{kj}v_{kj}$ with $\beta_{kj} \in U(R)$ and $v_{kj} \in B$. Then multiplying by $v_j$ we obtain the equation

$\sum_k \alpha_k\beta_{kj}v_{kj} = v_j.$

Since $v_j \in B$ and $\{v_{kj}\} \subseteq B$, there exists some $i$ such that $\alpha_k = 0$ for all $k \neq i$ and $\alpha_i\beta_{ij}v_{ij} = v_j$. But we have $\beta_{ij}v_{ij} = v_iv_j$ and therefore $\alpha_iv_iv_j = v_j$. Thus we conclude $v_i = \alpha_i^{-1}$. Since $v_i \in B$ and $\alpha_i^{-1} \in U(R)$ we are done. □

Lemma 4.3.16. Let A be an invertible R-algebra. Suppose B is a basis that is scalar closed under products. Then B is scalar closed under inverses.

Proof. Let $v_j \in B$. We need to show that $\alpha v_j^{-1} \in B$ for some $\alpha \in U(R)$. By Lemma 4.3.15 there exists some $\beta \in U(R)$ such that $\beta \in B$. We write the element $\beta v_j^{-1}$ as a linear combination of elements of $B$, obtaining

$\sum_k \alpha_kv_k = \beta v_j^{-1}.$

Multiplying each side by $v_j$ we get

$\sum_k \alpha_kv_kv_j = \beta.$

Since $B$ is scalar closed under products we may write $v_kv_j = \gamma_{kj}v_{kj}$, where $\gamma_{kj} \in U(R)$ and $v_{kj} \in B$ for all $k$ and $j$. Then we have the equation

$\sum_k \alpha_k\gamma_{kj}v_{kj} = \beta.$

Since $\beta \in B$ there exists some $i$ such that $\alpha_k = 0$ for all $k \neq i$ and $\alpha_i\gamma_{ij}v_{ij} = \beta$. Since $\gamma_{ij}v_{ij} = v_iv_j$ we have $\alpha_iv_iv_j = \beta$. Therefore, $v_i = \alpha_i^{-1}\beta v_j^{-1}$, where $v_i \in B$ and $\alpha_i^{-1}\beta \in U(R)$. □

Lemma 4.3.17. Let A be an invertible R-algebra. Suppose B is a basis that is scalar closed under products and R commutes with B. If α ∈ U(R) then αB is also a basis that is scalar closed under products.

Proof. Let $B = \{v_1, v_2, \ldots\}$ and $\alpha \in U(R)$. Consider $\alpha B = \{\alpha v_1, \alpha v_2, \ldots\}$. Choose $\alpha v_i, \alpha v_j \in \alpha B$, and note that $v_iv_j = \beta v_k$ for some $\beta \in U(R)$ and $v_k \in B$, as $B$ is scalar closed under products. Then

$\alpha v_i\alpha v_j = \alpha v_iv_j\alpha = \alpha\beta v_k\alpha = \alpha\beta(\alpha v_k).$

Since $\alpha v_k \in \alpha B$ and $\alpha\beta \in U(R)$, we see that $\alpha B$ is scalar closed under products. □

Definition 4.3.18. Let $A$ be an algebra over a ring $R$, and $B$ an $R$-basis for $A$. We say $R$ scalarly commutes with $B$ if for every $v \in B$ there exists some $\sigma_v \in \mathrm{Aut}(R)$ such that for all $r \in R$ we have $vr = \sigma_v(r)v$.

Lemma 4.3.19. Let $A$ be an invertible $R$-algebra. Suppose $B$ is an invertible basis that is scalar closed under products and $R$ scalarly commutes with $B$. If $\alpha \in B \cap U(R)$, then $\alpha B$ is also a basis that is scalar closed under products and $R$ scalarly commutes with $\alpha B$.

Corollary 4.3.20. Let $A$ be an invertible $R$-algebra. Suppose $B$ is an invertible basis that is scalar closed under products and $R$ commutes with $B$. Then there exists a basis $B^{*}$ such that $B^{*}$ is scalar closed under products, contains $1$, and $R$ commutes with $B^{*}$.

Proof. By Lemma 4.3.15 there exists $\alpha \in B$ such that $\alpha \in U(R)$. Consider the basis $B^{*} = \alpha^{-1}B$. By Lemma 4.3.17, $B^{*}$ is scalar closed under products, and clearly $B^{*}$ contains $1$. Now let $r \in R$. Since $r$ commutes with $B$, in particular $r$ commutes with $\alpha$. Therefore, $r$ commutes with $\alpha^{-1}$. Since $B^{*}$ consists of products of $\alpha^{-1}$ with elements of $B$, we see that $r$ commutes with $B^{*}$. □

Proposition 4.3.21. Let A be an invertible R-algebra. Then there exists an invertible basis B that is scalar closed under products such that R scalarly commutes with B if and only if A is a crossed product.

Proof. Suppose $A$ is a crossed product with basis $\bar{G}$. Then by definition of a crossed product, $\bar{G}$ is scalar closed under products and $R$ scalarly commutes with $\bar{G}$.

Now let $B = \{1, v_2, v_3, \ldots\}$ be an invertible basis that is scalar closed under products and such that $R$ scalarly commutes with $B$. Note that by Corollary 4.3.20 we may assume $1 \in B$, and by Lemma 4.3.19 we may still assume $R$ scalarly commutes with $B$. Define

$\star : B \times B \to B$ by $v_i \star v_j = v_k$, where $v_iv_j = \alpha v_k$.

We claim $(B, \star)$ is a group. By the definition of $\star$, and since $B$ is scalar closed under products, $(B, \star)$ is closed under products. Also note $1 \star v = v$ and $1 \in B$. So we need only show that every $v \in (B, \star)$ has an inverse in $(B, \star)$. Write $v^{-1}$ as a linear combination of elements from $B$, say $v^{-1} = \sum_k \alpha_kv_k$. Then $\sum_k \alpha_kv_kv = v^{-1}v = 1$. Since $1 \in B$, there exists a unique $i$ such that $\alpha_k = 0$ for all $k \neq i$ and $\alpha_iv_iv = 1$. However, this gives $v_iv = \alpha_i^{-1}\cdot 1$, and hence $v_i \star v = 1$. Now $v_i \in (B, \star)$ and $v$ was arbitrary, so $(B, \star)$ is a group. Therefore, $A$ is a crossed product. □

From this proposition we have two corollaries relating twisted group rings and skew group rings to the concepts of scalar closed under products and scalar commuting.

Corollary 4.3.22. Let A be an invertible R-algebra. Then the following are equivalent:

1. There exists a basis B that is scalar closed under products such that 1 ∈ B and R commutes with B.

2. There exists a basis B that is scalar closed under products such that there exists

α ∈ U(R) such that α ∈ B and R commutes with B.

3. A is a twisted group ring.

Corollary 4.3.23. Let A be an invertible R-algebra. Then the following are equivalent:

1. There exists a basis B that is closed under products such that 1 ∈ B and R scalarly commutes with B.

2. There exists a basis B that is closed under products such that there exists α ∈ U(R) such that α ∈ B and R scalarly commutes with B.

3. A is a skew group ring.

[Figure 4.1: Hierarchy of Invertibility. The diagram relates the classes of invertible algebras, invertible-2 algebras, algebras with a basis scalar closed under inverses, invertible-3 algebras, algebras with a basis scalar closed under products, invertible-4 algebras, algebras with a basis closed under products, crossed products, twisted group rings, skew group rings, and group rings.]

5 Fluidity

5.1 Definitions and Preliminary Results

Considering invertible-2 algebras naturally leads one to ask when the set of inverses of a linearly independent set of units is again linearly independent. From the outset, this condition sounds hard to satisfy, and we shall see that algebras satisfying it are somewhat rare. We start by introducing appropriate terminology.

Definition 5.1.1. 1. A linearly independent set $S$ of units of the algebra $A$ is said to be fluid if $S^{-1}$, the set of inverses of its elements, is also linearly independent.

2. An algebra $A$ is said to be fluid if every linearly independent set of units $S$ of $A$ is fluid.

3. In order to prevent vacuous, nonsensical consequences, we must introduce the following parameter: for an algebra $A$ over a ring $R$, the mojo of $A$, $\mathrm{mojo}(A)$, is the largest number of linearly independent units one can find in $A$. Clearly, if $A$ is free as a module over $R$ then $\mathrm{mojo}(A) \leq \mathrm{rank}_R(A)$. When $A$ has finite rank as a free module over $R$, then $\mathrm{mojo}(A) = \mathrm{rank}_R(A)$ if and only if $A$ is an invertible algebra in the sense of the previous chapters.

4. For a number $t \leq \mathrm{mojo}(A)$, we say that the algebra $A$ is $t$-fluid if every linearly independent set of units $S$ with at most $t$ elements is fluid. The fluidity of $A$, $\mathrm{fluid}(A)$, is the largest $t$ such that $A$ is $t$-fluid. Clearly, if $\mathrm{fluid}(A) = \mathrm{rank}_R(A)$ then $A$ is invertible-2.

Before we give a first example of a fluid algebra, we will mention some immediate remarks.

Remark 5.1.2. (i) Every linearly independent set of units $S$ having exactly two elements is fluid.

(ii) If $\mathrm{mojo}(A) \geq 2$ then $\mathrm{fluid}(A) \geq 2$.

(iii) $A = F[x]$ is an example of an $F$-algebra with $\mathrm{mojo}(A) = \mathrm{fluid}(A) = 1$.

(iv) If $A$ is a subalgebra of $B$ and $B$ is fluid, then so is $A$.

Proof. The proofs are either obvious or straightforward. □

The following provides an example of a fluid algebra.

Example 5.1.3. Let $R$ be any commutative ring and $A = M_2(R)$. Then any subset $S \subseteq A$ consisting of linearly independent invertible elements satisfies that $S^{-1}$ is also linearly independent. Let $S = \{A_1, \ldots, A_k\}$, where for $i = 1, \ldots, k$, $A_i = \begin{pmatrix} a_{11}^{(i)} & a_{12}^{(i)} \\ a_{13}^{(i)} & a_{14}^{(i)} \end{pmatrix}$, be a set of linearly independent invertible elements of $A$. Then the inverse of $A_i$ is $A_i^{-1} = \frac{1}{d_i}\begin{pmatrix} a_{14}^{(i)} & -a_{12}^{(i)} \\ -a_{13}^{(i)} & a_{11}^{(i)} \end{pmatrix}$, where $d_i$ is the determinant of the matrix $A_i$. Identify each matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ with the 4-tuple $(a, b, c, d) \in R^4$. Notice that the map $\varphi : R^4 \to R^4$ given by $\varphi(a, b, c, d) = (d, -b, -c, a)$ is an isomorphism of $R$-modules. Consequently, $\varphi$ maps linearly independent sets to linearly independent sets. Since the process of taking a matrix inverse assigns to each matrix a unit multiple of its image under $\varphi$, the result follows.
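As a sanity check on this example, the sketch below is only an illustration (it assumes the sympy library is available and uses random rational matrices of our own choosing): it draws linearly independent invertible $2\times 2$ matrices over $\mathbb{Q}$ and confirms that their inverses remain linearly independent, as predicted by the map $\varphi$ above.

```python
import random
from sympy import Matrix, Rational

def flatten(ms):
    """Stack 2x2 matrices as rows of a k x 4 matrix, so rank measures independence."""
    return Matrix([[m[0, 0], m[0, 1], m[1, 0], m[1, 1]] for m in ms])

random.seed(1)
trials = 0
while trials < 50:
    S = [Matrix(2, 2, [Rational(random.randint(-4, 4)) for _ in range(4)]) for _ in range(4)]
    if any(m.det() == 0 for m in S) or flatten(S).rank() < 4:
        continue                                  # keep only independent invertible quadruples
    trials += 1
    assert flatten([m.inv() for m in S]).rank() == 4   # inverses stay independent
print("checked 50 random independent invertible quadruples in M2(Q)")
```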

As we can see, the concept of fluidity still relates the notions of invertibility and linear independence. Just as we studied some natural questions in the invertible setting, we now study some of those questions in the fluid setting. First we show that the direct sum of fluid algebras need not be fluid.

Lemma 5.1.4. Let $F \neq \mathbb{F}_2$ be a field and let $A_1$ and $A_2$ be $F$-algebras.

(i) If $A = A_1 \oplus A_2$ is fluid then so are $A_1$ and $A_2$.

(ii) The converse of part (i) does not hold.

Proof. Assume $A_1$ is not fluid, so there exists a set of units $\{v_1, v_2, \ldots, v_n\} \subset A_1$ such that $\{v_1, v_2, \ldots, v_n\}$ is linearly independent and $\{v_1^{-1}, v_2^{-1}, \ldots, v_n^{-1}\}$ is linearly dependent. First assume that $n$ is even. Let $\{\alpha_1, \alpha_2, \ldots, \alpha_n\} \subset F$ be such that

$\alpha_1v_1^{-1} + \alpha_2v_2^{-1} + \cdots + \alpha_nv_n^{-1} = 0.$

WLOG we may assume $\alpha_i \neq 0$ for all $i$. Then consider the set $B = \{(\alpha_i^{-1}v_i, (-1)^i) \mid i = 1, \ldots, n\}$. Notice $B$ is a linearly independent set since $\{v_1, v_2, \ldots, v_n\}$ is linearly independent. However, $B^{-1} = \{(\alpha_iv_i^{-1}, (-1)^i) \mid i = 1, \ldots, n\}$ is a linearly dependent set, as

$\sum_{i=1}^{n}(\alpha_iv_i^{-1}, (-1)^i) = \left(\sum_{i=1}^{n}\alpha_iv_i^{-1}, \sum_{i=1}^{n}(-1)^i\right) = (0, 0),$

since $\alpha_1v_1^{-1} + \alpha_2v_2^{-1} + \cdots + \alpha_nv_n^{-1} = 0$ and $n$ is even. Therefore, $A$ is not fluid.

Now assume $n$ is odd. First note that since $F \neq \mathbb{F}_2$ there exist $\beta, \gamma \in F^{*}$ such that $1 = \beta + \gamma$. Now consider

$B = \{(\alpha_i^{-1}v_i, (-1)^i) \mid i = 1, \ldots, n-3\} \cup \{(\alpha_{n-2}^{-1}v_{n-2}, \beta^{-1}), (\alpha_{n-1}^{-1}v_{n-1}, \gamma^{-1}), (\alpha_n^{-1}v_n, -1)\}.$

Again $B$ is a linearly independent set, as $\{v_1, v_2, \ldots, v_n\}$ is linearly independent. However,

$B^{-1} = \{(\alpha_iv_i^{-1}, (-1)^i) \mid i = 1, \ldots, n-3\} \cup \{(\alpha_{n-2}v_{n-2}^{-1}, \beta), (\alpha_{n-1}v_{n-1}^{-1}, \gamma), (\alpha_nv_n^{-1}, -1)\}$

is linearly dependent, since

$\sum_{i=1}^{n-3}(\alpha_iv_i^{-1}, (-1)^i) + (\alpha_{n-2}v_{n-2}^{-1}, \beta) + (\alpha_{n-1}v_{n-1}^{-1}, \gamma) + (\alpha_nv_n^{-1}, -1) = \left(\sum_{i=1}^{n}\alpha_iv_i^{-1}, \sum_{i=1}^{n-3}(-1)^i + \beta + \gamma - 1\right) = (0, 0),$

where the last equality holds because $n - 3$ is even and $1 = \beta + \gamma$. Therefore, $A$ is not fluid.

For part (ii) we provide an example. First note that by Example 5.1.3, $M_2(R)$ is a fluid algebra for any commutative ring $R$. So consider $A = M_2(\mathbb{F}_3) \oplus M_2(\mathbb{F}_3)$. Let

$C = \left\{\left(\begin{pmatrix}1&0\\0&1\end{pmatrix}, \begin{pmatrix}-1&0\\-1&-1\end{pmatrix}\right), \left(\begin{pmatrix}1&-1\\0&1\end{pmatrix}, \begin{pmatrix}0&1\\1&0\end{pmatrix}\right), \left(\begin{pmatrix}1&1\\0&1\end{pmatrix}, \begin{pmatrix}1&-1\\0&1\end{pmatrix}\right)\right\}.$

It is easy to see that $C$ forms a linearly dependent set (the three elements sum to zero over $\mathbb{F}_3$). However, it is straightforward to check that

$C^{-1} = \left\{\left(\begin{pmatrix}1&0\\0&1\end{pmatrix}, \begin{pmatrix}-1&0\\1&-1\end{pmatrix}\right), \left(\begin{pmatrix}1&1\\0&1\end{pmatrix}, \begin{pmatrix}0&1\\1&0\end{pmatrix}\right), \left(\begin{pmatrix}1&-1\\0&1\end{pmatrix}, \begin{pmatrix}1&1\\0&1\end{pmatrix}\right)\right\}$

forms a linearly independent set. □
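Since the matrices involved are small, both claims can be checked exhaustively. The sketch below is only an illustration; the encoding of elements of $A$ as pairs of $2\times 2$ matrices over $\mathbb{F}_3$, written as row-major 4-tuples, is ours.

```python
from itertools import product

P = 3  # working over F_3

# Elements of A = M2(F3) + M2(F3), written as pairs of 2x2 matrices (row-major tuples),
# transcribed from the example above; Cinv lists the entrywise inverses as displayed.
C = [((1, 0, 0, 1), (-1, 0, -1, -1)),
     ((1, -1, 0, 1), (0, 1, 1, 0)),
     ((1, 1, 0, 1), (1, -1, 0, 1))]
Cinv = [((1, 0, 0, 1), (-1, 0, 1, -1)),
        ((1, 1, 0, 1), (0, 1, 1, 0)),
        ((1, -1, 0, 1), (1, 1, 0, 1))]

def as_vector(pair):
    """Flatten a pair of 2x2 matrices into a vector in F_3^8."""
    return [x % P for x in pair[0] + pair[1]]

def dependent(vectors):
    """Brute-force test for a nontrivial F_3-linear dependence."""
    for coeffs in product(range(P), repeat=len(vectors)):
        if any(coeffs):
            combo = [sum(c * v[i] for c, v in zip(coeffs, vectors)) % P for i in range(8)]
            if all(x == 0 for x in combo):
                return True
    return False

assert dependent([as_vector(p) for p in C])          # C is linearly dependent
assert not dependent([as_vector(p) for p in Cinv])   # C^{-1} is linearly independent
print("C is dependent and C^{-1} is independent over F_3, as claimed")
```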

Remark 5.1.5. We bring to the reader's attention that Lemma 5.1.4 (i) should not be mistaken for a trivial consequence of Remark 5.1.2 (iv), because algebra direct summands are not subalgebras, since they do not share the same unity.

5.2 Fluid Field Extensions and Related Ideas

Due to the affinity of the notions of invertible and fluid algebras, it seems natural to start out by investigating when some invertible algebras are fluid. Our first goal is to characterize those field extensions that are fluid.

Proposition 5.2.1. Let F ⊆ E be fields with |E : F| = k. Then E/F is fluid if and only if k = 2.

Proof. Assume $k = 2$. Then, given any two linearly independent invertible elements, their inverses are also linearly independent (see Remark 5.1.2 (i)). Since $k = 2$ there cannot be more than two linearly independent elements, and therefore $E/F$ is fluid.

Now suppose $|E : F| = k \geq 3$. We proceed by finding a linearly dependent set of units whose set of inverses is linearly independent. Let $f(x)$ be a primitive polynomial of degree $k$ over $F$. Then $E \cong F[x]/\langle f(x)\rangle$. Since $f(x)$ has nonzero constant term we may write $f(x) = g(x) + c$, where $c \in F$ and $g(x)$ is divisible by $x$ with $\deg(g(x)) = k$. So consider the set $C = \{1, x, 1 + x\}$, which is clearly linearly dependent. We claim $C^{-1}$ is linearly independent. Since $f(x)$ has nonzero constant term $-c \in F$, write $f(x) = xh(x) - c$, where $h(x)$ has degree $k - 1$. Therefore, $x^{-1} = c^{-1}h(x)$. Now let $p(x) = (1 + x)^{-1}$, so that $(1 + x)p(x) = 1$. Suppose there exist $\alpha, \beta \in F$, both nonzero, such that $\alpha(1) + \beta(x^{-1}) = p(x)$. Substituting $x^{-1} = c^{-1}h(x)$ we have

$\alpha(1) + \beta(c^{-1}h(x)) = p(x).$

But $p(x) = (1 + x)^{-1}$, so we have

$(1 + x)(\alpha + \beta(c^{-1}h(x))) = 1.$

Distributing, and using that $xh(x) = c$ in $E$, we get

$\alpha + \beta(c^{-1}h(x)) + \alpha x + \beta = 1.$

The only way for this to hold is if $\deg(h(x)) = 1$. However, recall that $k \geq 3$ and $\deg(h(x)) = k - 1 \geq 2$, which is a contradiction. Hence $C^{-1} = \{1, x^{-1}, (1 + x)^{-1}\}$ is linearly independent and $E/F$ is not fluid. □
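For a concrete instance of this construction (the parameters are our own choice, for illustration), take $F = \mathbb{F}_2$ and $E = \mathbb{F}_2[x]/\langle x^3 + x + 1\rangle$, a degree-3 extension. The sketch below computes inverses in $E$ by brute force and checks that $C = \{1, x, 1 + x\}$ is linearly dependent while $C^{-1}$ is linearly independent.

```python
from itertools import product

# E = F_2[x]/(x^3 + x + 1); elements are coefficient triples (c0, c1, c2).

def mul(p, q):
    prod = [0] * 5
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] ^= a & b
    # reduce modulo x^3 + x + 1, using x^3 = x + 1 and x^4 = x^2 + x
    prod[0] ^= prod[3]; prod[1] ^= prod[3]
    prod[1] ^= prod[4]; prod[2] ^= prod[4]
    return prod[:3]

def inv(p):
    """Brute-force inverse in the 8-element field E."""
    for q in product(range(2), repeat=3):
        if mul(p, list(q)) == [1, 0, 0]:
            return list(q)
    raise ValueError("not invertible")

def dependent(vectors):
    """Nontrivial F_2-linear dependence, by exhausting all coefficient choices."""
    for coeffs in product(range(2), repeat=len(vectors)):
        if any(coeffs) and all(
            sum(c & v[i] for c, v in zip(coeffs, vectors)) % 2 == 0 for i in range(3)
        ):
            return True
    return False

one, x, one_plus_x = [1, 0, 0], [0, 1, 0], [1, 1, 0]
C = [one, x, one_plus_x]
assert dependent(C)                          # 1 + x + (1 + x) = 0 over F_2
assert not dependent([inv(v) for v in C])    # the inverses are independent
print("x^{-1} =", inv(x), " (1+x)^{-1} =", inv(one_plus_x))
```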

Proposition 5.2.2. Let F ⊆ E be an infinite field extension. Then E is never fluid over F.

Proof. We proceed by finding a subalgebra that has fluidity 2. Then by Remark 5.1.2 (iv), $E/F$ is not a fluid algebra. First suppose that $E/F$ is an algebraic extension. Let $\alpha \in E$ be algebraic over $F$. Then $|F(\alpha) : F| < \infty$. By Proposition 5.2.1 we see that $F(\alpha)$ over $F$ has fluidity 2. Now suppose $E$ is a transcendental extension of $F$. Let $\alpha \in E$ be transcendental over $F$ and consider the subalgebra $F(\alpha)/F$. Since $\alpha$ is transcendental we have $F(\alpha) \cong F(x)$. Then consider the elements $\{1, x, 1 + x\} \subset F(x)$. Clearly this is a linearly dependent set over $F$. However, the set of inverses is $\{1, \frac{1}{x}, \frac{1}{1+x}\}$, which is linearly independent. Therefore, we have found a subalgebra of $E/F$, namely $F(x)/F$, which is not fluid. □

Next we will characterize when $\mathbb{F}_q[x]/\langle f(x)\rangle$ is a fluid algebra. However, there are a few special cases that we will address first. Namely, $\mathbb{F}_3[x]/\langle x(x-1)(x+1)\rangle$, $\mathbb{F}_4[x]/\langle (x+a)(x+b)(x+c)\rangle$, and $\mathbb{F}_4[x]/\langle x(x+a)(x+b)(x+1)\rangle$, where $a, b, c \in \mathbb{F}_4$ are distinct, are considered separately in the following example.

Example 5.2.3. In the case of $A_1 = \mathbb{F}_3[x]/\langle x(x-1)(x+1)\rangle$, $A_2 = \mathbb{F}_4[x]/\langle (x+a)(x+b)(x+c)\rangle$, and $A_3 = \mathbb{F}_4[x]/\langle x(x+a)(x+b)(x+1)\rangle$, with $a, b, c \in \mathbb{F}_4$, we have that $A_1$, $A_2$, and $A_3$ are all fluid. By the Chinese Remainder Theorem we have $A_1 \cong \bigoplus_{i=1}^{3}\mathbb{F}_3$, $A_2 \cong \bigoplus_{i=1}^{3}\mathbb{F}_4$, and $A_3 \cong \bigoplus_{i=1}^{4}\mathbb{F}_4$. Let $\varphi : \mathbb{F}_3^n \to \mathbb{F}_3^n$ be defined coordinatewise by $\varphi(x) = x^{-1}$ for $0 \neq x \in \mathbb{F}_3$ and $\varphi(0) = 0$. Then $\varphi$ is a linear isomorphism of $\mathbb{F}_3^n$ (in fact the identity, since $x^{-1} = x$ for every nonzero $x \in \mathbb{F}_3$). Therefore, if $\{v_1, v_2, \ldots, v_n\}$ is a linearly independent set of units of $A_1$, then $\{\varphi(v_1), \varphi(v_2), \ldots, \varphi(v_n)\} = \{v_1^{-1}, v_2^{-1}, \ldots, v_n^{-1}\}$ is also a linearly independent set. Therefore, $A_1$ is fluid. Likewise, consider the map $\psi : \mathbb{F}_4^n \to \mathbb{F}_4^n$ defined coordinatewise by $\psi(y) = y^{-1}$ for all $0 \neq y \in \mathbb{F}_4$ and $\psi(0) = 0$. It is easy to see that $\psi$ is a pseudo-linear map on $\mathbb{F}_4^n$ (coordinatewise $\psi(y) = y^2$, the Frobenius map). Since pseudo-linear maps preserve linear independence, we have that $A_2$ and $A_3$ are fluid.

Proposition 5.2.4. Let $A = \mathbb{F}_q[x]/\langle f(x)\rangle$, $q \neq 2$. Then $A$ is not fluid if and only if there exists some $g(x)$ such that $f(x) = g(x)h(x)$ with $(g(x), h(x)) = 1$ and $\deg(g(x)) \geq 3$, and there exist $a, b \in \mathbb{F}_q$, $a \neq b$, such that $(a + x) \nmid g(x)$ and $(b + x) \nmid g(x)$.

Proof. Let $a, b \in \mathbb{F}_q$, $a \neq b$, be such that $(a + x), (b + x) \nmid g(x)$, where $g(x) \mid f(x)$, $(g(x), h(x)) = 1$, and $\deg(g(x)) \geq 3$. Consider the summand $A_g = \mathbb{F}_q[x]/\langle g(x)\rangle$ and note that $(a + x)$ and $(b + x)$ are units there. Clearly $C = \{1, a + x, b + x\}$ is a linearly dependent set of units in $A_g$. We proceed by showing that the inverses are linearly independent. Suppose there exist $\alpha, \beta, \gamma \in \mathbb{F}_q$ such that

$\alpha(1) + \beta(a + x)^{-1} + \gamma(b + x)^{-1} = 0.$

Multiplying through by $(a + x)(b + x)$ we obtain

$\alpha(a + x)(b + x) + \beta(b + x) + \gamma(a + x) = 0.$

After distributing and combining like terms we have

$\alpha(ab) + \beta b + \gamma a + (\alpha a + \alpha b + \beta + \gamma)x + \alpha x^2 = 0.$

Since $\deg(g(x)) \geq 3$, the only way to kill the $x^2$ term is if $\alpha = 0$. Letting $\alpha = 0$ we have $\beta b + \gamma a + \beta x + \gamma x = 0$. Then either $\beta = \gamma = 0$, in which case we are done, or $\beta = -\gamma \neq 0$. In the latter case $\beta b - \beta a = 0$ gives $\beta(b - a) = 0$, so $b = a$, a contradiction to the hypothesis. Therefore, $\alpha = \beta = \gamma = 0$ and $C^{-1} = \{1, (a + x)^{-1}, (b + x)^{-1}\}$ is linearly independent, so $A_g$ is not fluid. By Lemma 5.1.4 we have that $A$ is not fluid either.

Now for the other implication we proceed by contraposition. Suppose that for every $g(x)$ of degree 3 or greater dividing $f(x)$ and for all $a, b \in \mathbb{F}_q$ with $a \neq b$ we have $(a + x) \mid g(x)$ or $(b + x) \mid g(x)$. We want to show that $A$ is fluid under these conditions.

Write $f(x) = f_1^{k_1}f_2^{k_2}\cdots f_n^{k_n}$ where each $f_i$ is irreducible. If the degree of some $f_i$ is greater than 2, then there exist $a, b \in \mathbb{F}_q$, $a \neq b$, such that neither $a + x$ nor $b + x$ divides $f_i$, a contradiction. Therefore, each $f_i$ must have degree 2 or less. Now suppose the degree of $f_i$ is 2 and $k_i \geq 2$. Then $f_i^{k_i}$ has degree at least 4, and again there exist $a, b \in \mathbb{F}_q$, $a \neq b$, such that neither $a + x$ nor $b + x$ divides $f_i^{k_i}$. Thus if the degree of $f_i$ is 2 then $k_i \in \{0, 1\}$. Now suppose the degree of $f_i$ is 1 and $k_i \geq 3$, so that $f_i^{k_i}$ has degree at least 3. Since $q \neq 2$ there exist $a, b \in \mathbb{F}_q$, $a \neq b$, such that neither $a + x$ nor $b + x$ divides $f_i^{k_i}$. Therefore, if the degree of $f_i$ is 1 then $k_i \in \{0, 1, 2\}$. If $q \geq 5$ then $f(x)$ falls under one of the three following options:

• $f(x)$ is irreducible of degree 2,

• $f(x)$ is a product of two linear factors,

• or $f(x)$ is linear.

However, any algebra of degree 1 or 2 is always fluid. Therefore, if $q \geq 5$ we are done. So suppose $q = 3$ or $q = 4$. Just as stated above, we need not worry about anything of degree 1 or 2. However, we cannot proceed in the same way, as $\mathbb{F}_3$ and $\mathbb{F}_4$ do not have enough elements. Therefore, the only cases left to consider are $A_1 = \mathbb{F}_3[x]/\langle x(x-1)(x+1)\rangle$, $A_2 = \mathbb{F}_4[x]/\langle (x+a)(x+b)(x+c)\rangle$, and $A_3 = \mathbb{F}_4[x]/\langle x(x+a)(x+b)(x+1)\rangle$, $a, b, c \in \mathbb{F}_4$. These were handled in Example 5.2.3. □

Proposition 5.2.5. Let $F = \mathbb{F}_2$ and let $A_1$ and $A_2$ be $F$-algebras. If $A_1$ is not fluid, witnessed by a set $C$ of linearly independent units whose set of inverses $C^{-1}$ is linearly dependent, and the cardinality of $C$ is even, then $A_1 \oplus A_2$ is not fluid.

Proof. Notice that the first part of the proof of Lemma 5.1.4 (i) does not require the assumption $F \neq \mathbb{F}_2$; that part is precisely the one that assumes the cardinality of $C$ is even. Therefore, the same proof suffices to prove this proposition. However, we provide the proof for clarity. Assume $A_1$ is not fluid and let $C = \{v_1, v_2, \ldots, v_n\} \subset A_1$ be a set of linearly independent units such that $\{v_1^{-1}, v_2^{-1}, \ldots, v_n^{-1}\}$ is linearly dependent, with $n$ even. Let $\{\alpha_1, \alpha_2, \ldots, \alpha_n\} \subset F$ be such that

$\alpha_1v_1^{-1} + \alpha_2v_2^{-1} + \cdots + \alpha_nv_n^{-1} = 0.$

WLOG we may assume $\alpha_i \neq 0$ for all $i$. Then consider the set $B = \{(\alpha_i^{-1}v_i, 1) \mid i = 1, \ldots, n\}$. Notice $B$ is a linearly independent set since $\{v_1, v_2, \ldots, v_n\}$ is linearly independent. However, $B^{-1} = \{(\alpha_iv_i^{-1}, 1) \mid i = 1, \ldots, n\}$ is a linearly dependent set, as

$\sum_{i=1}^{n}(\alpha_iv_i^{-1}, 1) = \left(\sum_{i=1}^{n}\alpha_iv_i^{-1}, \sum_{i=1}^{n}1\right) = (0, 0),$

since $\alpha_1v_1^{-1} + \alpha_2v_2^{-1} + \cdots + \alpha_nv_n^{-1} = 0$ and $n$ is even. Therefore, $A_1 \oplus A_2$ is not fluid. □

Let $A$ be an $R$-algebra and $\varphi : U(A) \to U(A)$ be defined by $\varphi(a) = a^{-1}$. We say $\varphi$ is additive (or inversion is additive) if, given $a, b \in U(A)$ such that $a + b \in U(A)$, we have $\varphi(a + b) = \varphi(a) + \varphi(b)$. If an algebra satisfies inversion being additive, then that algebra is a fluid algebra.

Proposition 5.2.6. Let $F = \mathbb{F}_2$ and $A = F[x]/\langle f(x)\rangle$. For $i = 0, 1$ denote by $m_i$ the multiplicity of $i$ as a root of $f(x)$. Then $A$ is fluid if and only if $f(x) = x^2 + x + 1$ or $f(x) = x^{m_0}(x + 1)^{m_1}$ with $0 \leq m_0 \leq 4$ and $0 \leq m_1 \leq 4$.

Proof. Suppose $0$ and $1$ are roots of multiplicity at most 4. It can be shown that $F[x]/\langle x^n\rangle$, $n \leq 4$, satisfies inversion being additive. Since $F[x]/\langle x^n\rangle \cong F[x]/\langle (1+x)^n\rangle$, we also have that inversion is additive for $F[x]/\langle (1+x)^n\rangle$. Since the direct sum of algebras satisfying inversion being additive also satisfies inversion being additive, we have that $F[x]/\langle x^n(1+x)^m\rangle$ is fluid if $n \leq 4$ and $m \leq 4$. Also, clearly if $f(x) = x^2 + x + 1$ then $A$ is fluid, as $A$ has dimension 2.

Now suppose $0$ or $1$ is a root of $f(x)$ of multiplicity greater than 4. WLOG assume the multiplicity of $0$ is greater than 4. Then consider the set $B = \{1, 1 + x, 1 + x^2, 1 + x + x^2\}$. Clearly $B$ is a linearly dependent set. However, it is straightforward to show that

$B^{-1} = \{1,\ 1 + x + x^2 + \cdots,\ 1 + x^2 + x^4 + \cdots,\ 1 + (x + x^2) + (x + x^2)^2 + \cdots\}$

is linearly independent. Before we provide the proof we mention that $1 + x + x^2 + \cdots$, $1 + x^2 + x^4 + \cdots$, and $1 + (x + x^2) + (x + x^2)^2 + \cdots$ each contain an $x^4$ term, and since the multiplicity of $0$ exceeds 4 this term survives. We mention this because we only need to consider the coefficients of $1$, $x$, $x^2$, and $x^4$ in the proof. Let

$\alpha_1(1) + \alpha_2(1 + x + x^2 + \cdots) + \alpha_3(1 + x^2 + x^4 + \cdots) + \alpha_4(1 + (x + x^2) + (x + x^2)^2 + \cdots) = 0.$

This gives the following equations:

$\alpha_1 + \alpha_2 + \alpha_3 + \alpha_4 = 0$ (5.1)

$\alpha_2 + \alpha_4 = 0$ (5.2)

$\alpha_2 + \alpha_3 = 0$ (5.3)

$\alpha_2 + \alpha_3 + \alpha_4 = 0$ (5.4)

From these equations it is clear that $\alpha_1 = \alpha_2 = \alpha_3 = \alpha_4 = 0$. Therefore, $B^{-1}$ is linearly independent. We should note that if the multiplicity of $1$ is greater than 4, this proof also works since, as stated earlier, $F[x]/\langle x^n\rangle \cong F[x]/\langle (1+x)^n\rangle$. □
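The system of equations above can be reproduced mechanically. The sketch below is illustrative only; it fixes the component $\mathbb{F}_2[x]/\langle x^5\rangle$ (the smallest case in which $0$ has multiplicity greater than 4, a choice of ours), expands the inverses of the elements of $B$ as truncated power series, and checks that $B$ is dependent while $B^{-1}$ is independent.

```python
from itertools import product

M = 5  # component F_2[x]/(x^5): 0 is a root of multiplicity 5 > 4

def mul(p, q):
    r = [0] * M
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < M:
                r[i + j] ^= a & b
    return r

def inv(p):
    """Inverse of a unit of F_2[x]/(x^5), by brute force over all 2^5 elements."""
    for q in product(range(2), repeat=M):
        if mul(p, list(q)) == [1] + [0] * (M - 1):
            return list(q)
    raise ValueError("not a unit")

B = [[1, 0, 0, 0, 0],   # 1
     [1, 1, 0, 0, 0],   # 1 + x
     [1, 0, 1, 0, 0],   # 1 + x^2
     [1, 1, 1, 0, 0]]   # 1 + x + x^2
Binv = [inv(v) for v in B]

def dependent(vectors):
    for coeffs in product(range(2), repeat=len(vectors)):
        if any(coeffs) and all(
            sum(c & v[i] for c, v in zip(coeffs, vectors)) % 2 == 0 for i in range(M)
        ):
            return True
    return False

assert dependent(B)           # B is dependent: the four elements sum to zero
assert not dependent(Binv)    # but B^{-1} is linearly independent
print("inverses mod x^5:", Binv)
```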

5.3 Fluid Matrix Algebras

The pattern we are following, as stated at the beginning of the previous section, is to consider the fluidity of invertible algebras. Since matrix rings and field extensions are invertible algebras, they are a natural next stop.

Clearly, if an algebra has dimension $n$ and is fluid, then its fluidity is also $n$. Likewise, if an algebra has dimension $n$ and also has fluidity $n$, then the algebra is fluid. As stated earlier, $M_2(R)$ is fluid and therefore the fluidity of $M_2(R)$ is 4. As long as an algebra has two linearly independent invertible elements, its fluidity is at least 2. For example, $\bigoplus_{i=1}^{n}\mathbb{F}_2$ has only one invertible element and therefore has fluidity 1; note also that this is a fluid algebra, yet not an invertible algebra. For every algebra we always have

$\mathrm{fluid}(A) \leq \mathrm{mojo}(A) \leq \dim(A).$

The following proposition shows that matrix rings have fluidity 2 under certain restrictions on the ring.

Proposition 5.3.1. Let R be a commutative ring such that 1 = a + b where a and b are units and n ≥ 3. Then the fluidity of Mn(R) is 2.

Proof. We want to find three linearly independent invertible elements whose inverses are linearly dependent; equivalently, we exhibit a linearly dependent set of units whose set of inverses is linearly independent. So consider the set

$C = \left\{\begin{pmatrix}1&1&1\\0&1&1\\0&0&1\end{pmatrix}, \begin{pmatrix}a&1&0\\0&a&1\\0&0&a\end{pmatrix}, \begin{pmatrix}b&0&1\\0&b&0\\0&0&b\end{pmatrix}\right\}.$

Clearly $C$ is a linearly dependent set, since $a + b = 1$. However, we have

$C^{-1} = \left\{\begin{pmatrix}1&-1&0\\0&1&-1\\0&0&1\end{pmatrix}, \begin{pmatrix}a^{-1}&-a^{-2}&a^{-3}\\0&a^{-1}&-a^{-2}\\0&0&a^{-1}\end{pmatrix}, \begin{pmatrix}b^{-1}&0&-b^{-2}\\0&b^{-1}&0\\0&0&b^{-1}\end{pmatrix}\right\}.$

Let

$\alpha\begin{pmatrix}1&-1&0\\0&1&-1\\0&0&1\end{pmatrix} + \beta\begin{pmatrix}a^{-1}&-a^{-2}&a^{-3}\\0&a^{-1}&-a^{-2}\\0&0&a^{-1}\end{pmatrix} + \gamma\begin{pmatrix}b^{-1}&0&-b^{-2}\\0&b^{-1}&0\\0&0&b^{-1}\end{pmatrix} = 0.$

Then we have the following equations:

$\alpha + \beta a^{-1} + \gamma b^{-1} = 0$

$-\alpha - \beta a^{-2} = 0$

$\beta a^{-3} - \gamma b^{-2} = 0.$

From the latter two we have $\alpha = -\beta a^{-2}$ and $\gamma = \beta b^2a^{-3}$. Substituting these into the first equation we have

$-\beta a^{-2} + \beta a^{-1} + \beta ba^{-3} = 0.$

Multiplying through by $a^3$ and using the fact that $1 = a + b$ we obtain

$-\beta a + \beta a^2 + \beta b = \beta(-a + a^2 + b) = \beta(a(-1 + a) + b) = \beta(a(-b) + b) = \beta(b(-a + 1)) = \beta b^2 = 0.$

Thus, $\beta$ must be zero as $b$ is invertible. Therefore, $\alpha = \gamma = 0$, and $C^{-1}$ is a linearly independent set.

This is easily extended to $n > 3$ by using the same elements and putting them in the bottom right corner of $I$, $aI$, and $bI$, where $I$ is the identity matrix. In general, for an $n \times n$ matrix our set of linearly dependent units consists of

$v_1 = I + e_{n-2,n-1} + e_{n-1,n} + e_{n-2,n},$

$v_2 = aI + e_{n-2,n-1} + e_{n-1,n},$

$v_3 = bI + e_{n-2,n}.$

The set of inverses, which is linearly independent, consists of

$v_1^{-1} = I - e_{n-2,n-1} - e_{n-1,n},$

$v_2^{-1} = a^{-1}I - a^{-2}e_{n-2,n-1} - a^{-2}e_{n-1,n} + a^{-3}e_{n-2,n},$

$v_3^{-1} = b^{-1}I - b^{-2}e_{n-2,n}.$ □
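To see the construction at work (with parameters of our own choosing, and assuming the sympy library is available), take $R = \mathbb{Q}$, $a = 2$, $b = -1$, so that $1 = a + b$ with both $a$ and $b$ units. The sketch below builds the three matrices from the proof and confirms the claimed dependence and independence.

```python
from sympy import Matrix, Rational

a, b = Rational(2), Rational(-1)   # units of Q with a + b = 1, as in the proposition

C = [Matrix([[1, 1, 1], [0, 1, 1], [0, 0, 1]]),
     Matrix([[a, 1, 0], [0, a, 1], [0, 0, a]]),
     Matrix([[b, 0, 1], [0, b, 0], [0, 0, b]])]

def flatten(ms):
    """Write each 3x3 matrix as a row of a k x 9 matrix, so rank measures independence."""
    return Matrix([[m[i, j] for i in range(3) for j in range(3)] for m in ms])

assert C[0] == C[1] + C[2]                          # the dependence v1 = v2 + v3
assert flatten(C).rank() == 2                       # so C is linearly dependent
assert flatten([m.inv() for m in C]).rank() == 3    # but C^{-1} is linearly independent
print("C dependent, C^{-1} independent for a = 2, b = -1 over Q")
```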

Remark 5.3.2. The previous proposition holds for all fields $F \neq \mathbb{F}_2$. Even if we find 3 linearly dependent units in $M_3(\mathbb{F}_2)$ whose inverses are linearly independent, we cannot extend this to all $M_n(\mathbb{F}_2)$ by mimicking the idea of Proposition 5.3.1: over $\mathbb{F}_2$, placing those elements in the bottom corner of an $n \times n$ identity matrix no longer preserves the linear dependence among the 3 elements. So we proceed by giving a general formula for 3 linearly dependent units in $M_n(\mathbb{F}_2)$, $n \geq 3$, such that their inverses are linearly independent.

Proposition 5.3.3. Let R = F2 and n ≥ 3. Then the fluidity of Mn(R) is 2.

Proof. As usual let $I$ denote the identity matrix and $e_{ij}$ the matrix units. Let

$v_1 = I,$

$v_2 = I + \sum_{j-i=1}e_{ij} + e_{n1} + e_{n2},$

$v_3 = \sum_{j-i=1}e_{ij} + e_{n1} + e_{n2}.$

Clearly $C = \{v_1, v_2, v_3\}$ is a linearly dependent set over $\mathbb{F}_2$, as $v_1 + v_2 + v_3 = 0$. It is straightforward to check that $C^{-1}$ is a linearly independent set. Therefore, $M_n(\mathbb{F}_2)$ has fluidity 2, and by the previous proposition $M_n(F)$ has fluidity 2 for any field $F$. □

We provide an example of the construction in the previous proposition for clarity.

Example 5.3.4. Consider $A = M_3(\mathbb{F}_2)$. Let

$C = \left\{\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}, \begin{pmatrix}1&1&0\\0&1&1\\1&1&1\end{pmatrix}, \begin{pmatrix}0&1&0\\0&0&1\\1&1&0\end{pmatrix}\right\} \subseteq M_3(\mathbb{F}_2).$

Clearly this set is linearly dependent. Now it is straightforward to check that

$C^{-1} = \left\{\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}, \begin{pmatrix}0&1&1\\1&1&1\\1&0&1\end{pmatrix}, \begin{pmatrix}1&0&1\\1&0&0\\0&1&0\end{pmatrix}\right\}$

is a linearly independent set, giving that the fluidity of $M_3(\mathbb{F}_2)$ is 2.
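The checks left to the reader in this example are finite, so they can be carried out exhaustively. The sketch below is an illustration only: it verifies over $\mathbb{F}_2$ that the three displayed matrices sum to zero while their inverses admit no nontrivial dependence.

```python
from itertools import product

P = 2

C = [[[1, 0, 0], [0, 1, 0], [0, 0, 1]],
     [[1, 1, 0], [0, 1, 1], [1, 1, 1]],
     [[0, 1, 0], [0, 0, 1], [1, 1, 0]]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) % P for j in range(3)] for i in range(3)]

def inverse(A):
    """Brute-force inverse over F_2 (only 512 candidate 3x3 matrices)."""
    I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    for entries in product(range(P), repeat=9):
        B = [list(entries[3 * i:3 * i + 3]) for i in range(3)]
        if matmul(A, B) == I:
            return B
    raise ValueError("singular")

def dependent(mats):
    for coeffs in product(range(P), repeat=len(mats)):
        if any(coeffs):
            combo = [[sum(c * m[i][j] for c, m in zip(coeffs, mats)) % P for j in range(3)]
                     for i in range(3)]
            if all(x == 0 for row in combo for x in row):
                return True
    return False

assert dependent(C)                               # v1 + v2 + v3 = 0 over F_2
assert not dependent([inverse(m) for m in C])     # their inverses are independent
print("C is dependent and C^{-1} is independent over F_2")
```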

Notice that we may extend Propositions 5.2.1 and 5.2.2 by computing the fluidity of these field extensions. We restate those propositions here with the additional concept of fluidity.

Proposition 5.3.5. Let F ⊆ E be finite fields with |E : F| = k. Then E/F is fluid if and only if k = 2. Furthermore, we have that the fluidity of any finite field extension is 2.

Proof. The elements used in the proof of Proposition 5.2.1 furnish three linearly independent units whose inverses are linearly dependent. Thus any finite field extension has fluidity 2. □

Proposition 5.3.6. Let F ⊆ E be an infinite field extension. Then E is never fluid over F, and furthermore, the fluidity of E over F is 2.

Proof. Notice that in Proposition 5.2.2 the set of units witnessing the failure of fluidity consists of 3 elements; therefore, the fluidity of $E/F$ is 2. □

The next example gives us an algebra $A$ such that $\mathrm{fluid}(A) < \mathrm{mojo}(A) < \dim(A)$.

Example 5.3.7. Let $A = T_3(\mathbb{F}_2)$, the ring of upper triangular matrices over $\mathbb{F}_2$. Clearly we have $\dim(A) = 6$. Now if $v \in A$ is invertible then its diagonal consists solely of 1's. Therefore, if there are three linearly independent invertible elements, then their inverses must also be linearly independent, as we now argue. We always have that if two invertible elements of an algebra are linearly independent then their inverses are also linearly independent. Hence, if $\{v_1, v_2, v_3\}$ are invertible and linearly independent and their inverses are linearly dependent, the dependence would need to be of the form

$\alpha v_1^{-1} + \beta v_2^{-1} + \gamma v_3^{-1} = 0$

with $\alpha, \beta, \gamma \neq 0$. But over $\mathbb{F}_2$ nonzero coefficients are equal to 1, and since each diagonal consists of 1's, each diagonal entry of the sum would be $1 + 1 + 1 = 1 \neq 0$, which is impossible. Therefore, $\mathrm{fluid}(A) \geq 3$. So consider

$C = \left\{\begin{pmatrix}1&1&1\\0&1&1\\0&0&1\end{pmatrix}, \begin{pmatrix}1&0&1\\0&1&0\\0&0&1\end{pmatrix}, \begin{pmatrix}1&1&0\\0&1&0\\0&0&1\end{pmatrix}, \begin{pmatrix}1&0&0\\0&1&1\\0&0&1\end{pmatrix}\right\},$

which is a linearly dependent set. It is straightforward to check that the set of inverses

$C^{-1} = \left\{\begin{pmatrix}1&1&0\\0&1&1\\0&0&1\end{pmatrix}, \begin{pmatrix}1&0&1\\0&1&0\\0&0&1\end{pmatrix}, \begin{pmatrix}1&1&0\\0&1&0\\0&0&1\end{pmatrix}, \begin{pmatrix}1&0&0\\0&1&1\\0&0&1\end{pmatrix}\right\}$

is a linearly independent set. Therefore, $\mathrm{fluid}(A) < 4$, and from above we have $\mathrm{fluid}(A) \geq 3$, giving $\mathrm{fluid}(A) = 3$. The set $C^{-1}$ also provides us with an example of 4 linearly independent units, so $\mathrm{mojo}(A) \geq 4$. Now consider $U(A)$, the group of units of $A$. Since each element of $U(A)$ has 1's down the diagonal, every linear combination of units has constant diagonal, so it is not possible to obtain all of $A$ from $U(A)$. Therefore, we cannot have 6 linearly independent units. Thus, $\mathrm{mojo}(A) < \dim(A)$, and we have the inequality $\mathrm{fluid}(A) < \mathrm{mojo}(A) < \dim(A)$.

We bring this example into view to demonstrate that there certainly are algebras where these three values are distinct. Already mentioned was the fact that if $\mathrm{mojo}(A) = \dim(A)$ then our algebra is invertible, and if $\mathrm{fluid}(A) = \mathrm{mojo}(A)$ then our algebra is fluid. We have just started the research on invertible algebras and fluid algebras. We have shown there are many families of invertible algebras and fluid algebras. Also, we have seen connections with the existing literature as far as rings generated by their units, S-rings, and k-good rings. This certainly warrants further research to be done in these areas.

References

[1] Ian G. Connell. On the group ring. Canad. J. Math., 15:650–685, 1963.

[2] Melvin Henriksen. Two classes of rings generated by their units. J. Algebra, 31:182–193, 1974.

[3] Xiang-dong Hou, Sergio R. López-Permouth, and Benigno Parra-Avila. Rational power series, sequential codes and periodicity of sequences. J. Pure Appl. Algebra, 213:1157–1169, 2009.

[4] Dinesh Khurana and Ashish K. Srivastava. Right self-injective rings in which every element is a sum of two units. J. Algebra Appl., 6(2):281–286, 2007.

[5] Marek Kuczma. An introduction to the theory of functional equations and inequalities. Prace Naukowe Uniwersytetu Śląskiego w Katowicach. University of Silesia, Warszawa, 1985. Państwowe Wydawnictwo Naukowe.

[6] T. Y. Lam. Lectures on modules and rings, volume 189 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1999.

[7] Sergio R. López-Permouth, Jeremy Moore, and Steve Szabo. Algebras having bases consisting entirely of units. Contemporary Mathematics, Volume 499, 2009.

[8] Richard D. Mabry. No nontrivial Hamel basis is closed under multiplication. Aequationes Math., 71(3):294–299, 2006.

[9] D. S. Passman. Algebraic crossed products. In Group actions on rings (Brunswick, Maine, 1984), volume 43 of Contemp. Math., pages 209–225. Amer. Math. Soc., Providence, RI, 1985.

[10] Donald S. Passman. The algebraic structure of group rings. Robert E. Krieger Publishing Co. Inc., Melbourne, FL, 1985. Reprint of the 1977 original.

[11] Donald S. Passman. Infinite crossed products, volume 135 of Pure and Applied Mathematics. Academic Press Inc., Boston, MA, 1989.

[12] César Polcino Milies and Sudarshan K. Sehgal. An introduction to group rings, volume 1 of Algebras and Applications. Kluwer Academic Publishers, Dordrecht, 2002.

[13] R. Raphael. Rings which are generated by their units. J. Algebra, 28:199–205, 1974.

[14] Peter Vámos. 2-good rings. Q. J. Math., 56(3):417–430, 2005.