Linear Algebra Review 3

INTRODUCTION TO GROUP REPRESENTATIONS, JULY 1, 2012

Here we describe some of the main linear algebra constructions used in representation theory. Since we are working with finite-dimensional vector spaces, a choice of basis identifies each new space with some C^n. What makes these constructions interesting is how they behave with respect to linear transformations, which in turn gives ways to construct new representations from old ones.

Let V and W be finite-dimensional vector spaces over C, and choose bases B = {v_1, ..., v_m} for V and C = {w_1, ..., w_n} for W.

Direct Sum: There are two types of direct sums, internal and external. The internal direct sum expresses a given vector space V′ in terms of two (or more) subspaces V and W.

Definition: We say V′ is the internal direct sum of subspaces V and W, written V′ = V ⊕ W, if each vector in V′ can be written uniquely as a sum v′ = v + w with v in V and w in W. Equivalently, this condition holds if and only if B ∪ C is a basis for V′. In turn, we also have this condition if dim(V) + dim(W) = dim(V′) and V ∩ W = {0}.

In the special case where V′ admits an inner product and W = V^⊥, we call V′ = V ⊕ W an orthogonal direct sum. If B and C are orthonormal bases, then B ∪ C is an orthonormal basis for V′.

On the other hand, the main idea here can be applied to combine two known vector spaces.

Definition: The external direct sum of V and W, also written V′ = V ⊕ W, is first defined as the set of all ordered pairs (v, w) with v in V and w in W. Scalar multiplication is defined by c(v, w) = (cv, cw), and addition is defined by (v, w) + (v′, w′) = (v + v′, w + w′). One checks the other axioms for a vector space.

Note that the external direct sum of V and W can be expressed as the internal direct sum of (V, 0) and (0, W). A basis for V ⊕ W is given by {(v_i, 0)} ∪ {(0, w_j)}.

If S : V → V′ and T : W → W′ are linear transformations, we obtain a linear transformation (S, T) : V ⊕ W → V′ ⊕ W′ by (S, T)(v, w) = (Sv, Tw). The reader should verify that (S, T) is linear. This definition gives a natural method for constructing direct sums of representations.

Dual Vector Spaces:

Definition: A linear functional on V is a linear transformation f : V → C, and the vector space of all linear functionals on V is called the dual vector space V*.

If V also admits an inner product, then one may describe each linear functional f uniquely in the form f_v(v′) = ⟨v′, v⟩ for some v in V. In fact, one may choose any nonzero v in (Ker f)^⊥ and rescale to match values with f. In turn, V* admits an inner product defined by ⟨f_v, f_w⟩_* = ⟨w, v⟩, the conjugate of ⟨v, w⟩.

If T : V → W is a linear transformation, then we have an induced linear transformation T* : W* → V* defined by (T* w*)(v) = w*(Tv). If V and W admit Hermitian inner products and f = f_w, then

(T* f)(v) = f(Tv) = ⟨Tv, w⟩ = ⟨v, T* w⟩,

where the latter T* denotes the adjoint of T with respect to the inner products. If we are working in coordinate spaces C^k, then T* is the conjugate transpose of matrices:

w* T v = w* (T*)* v = (T* w)* v.

Suppose S : W → X, so that ST : V → X. If f is in X*, then

[(ST)* f](v) = f(STv) = (S* f)(Tv) = [T* (S* f)](v).

Thus (ST)* = T* S*. For representations, if π* is to be a group action on V*, we need π*(gh) = π*(g) π*(h). We correct the order using inverses.
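A small numerical sketch may help fix the two constructions above in coordinates. The following Python/NumPy snippet is not part of the notes; the matrices S and T and the test vectors are arbitrary illustrative choices. It checks that the map (S, T) on a direct sum is represented by a block-diagonal matrix, and that the induced map T* on dual spaces is represented by the conjugate transpose.

```python
import numpy as np

# Illustrative matrices (arbitrary choices, not from the notes).
# S : C^2 -> C^2 and T : C^3 -> C^2 in coordinates.
S = np.array([[1, 2],
              [0, 1]], dtype=complex)
T = np.array([[1, 1j, 0],
              [2, 0, 3]], dtype=complex)

def direct_sum(A, B):
    """Block-diagonal matrix representing (A, B) on the external direct sum."""
    out = np.zeros((A.shape[0] + B.shape[0], A.shape[1] + B.shape[1]), dtype=complex)
    out[:A.shape[0], :A.shape[1]] = A
    out[A.shape[0]:, A.shape[1]:] = B
    return out

# (S, T)(v, w) = (Sv, Tw): apply the block matrix to the stacked vector (v, w).
v = np.array([1, -1], dtype=complex)
w = np.array([2, 0, 1], dtype=complex)
assert np.allclose(direct_sum(S, T) @ np.concatenate([v, w]),
                   np.concatenate([S @ v, T @ w]))

# Dual map: w*(Tv) = (T*w)*v, so T* is the conjugate transpose in coordinates.
Tstar = T.conj().T
rng = np.random.default_rng(1)
v3 = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w2 = rng.standard_normal(2) + 1j * rng.standard_normal(2)
assert np.isclose(np.vdot(w2, T @ v3), np.vdot(Tstar @ w2, v3))

print("block-diagonal direct sum and conjugate-transpose dual map verified")
```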
Definition: The dual basis B* for V* with respect to B is the set {v_1*, ..., v_m*}, where each v_i* is defined by

v_i*(v_i) = 1, and v_j*(v_i) = 0 for j ≠ i.

If we have an inner product space with orthonormal basis B, then the corresponding dual basis has vectors v_i* = ⟨·, v_i⟩. Note that dim(V*) = dim(V).

If T : V → W is an invertible linear transformation, then {Tv_1, ..., Tv_m} is a basis for W with corresponding dual basis {(Tv_1)*, ..., (Tv_m)*}, where (Tv_i)*(w) = v_i*(T^{-1} w). Note that

(Tv_i)*(Tv_j) = v_i*(T^{-1} T v_j) = v_i*(v_j),

confirming the dual basis property.

One way to interpret this: Suppose we wish to replace V with an isomorphic vector space W. To convert from linear functionals on V to linear functionals on W, we send v* to (T^{-1})* v*. For representations, W = V and T = π(g). That is, we define the group action by

[π*(g) v*](v′) = v*(π(g)^{-1} v′).

Another way to interpret this: A group action expresses symmetries of an object X; that is, the group action leaves some quality of X unchanged. If T : V → V is an isomorphism, then the basis B = {v_i} is carried to the basis BT = {Tv_i}. Note that the definition of v*(v′) requires no basis; that is, this quantity remains unchanged no matter how we pass to coordinates. So if we change V by an isomorphism T, interpreted as a change of basis, the dual basis changes by (T^{-1})*, and v*(v′) is unchanged. For representations, we again arrive at the group action on V*.

Example: Consider the linear functional f : C^2 → C defined by f(x, y) = x − y. Then

f(x, y) = [ 1  -1 ] [ x ]
                    [ y ].

If T : C^3 → C^2 is represented by the matrix

A = [ 1  1  -1 ]
    [ 2  0   3 ],

then

(T* f)(x, y, z) = [ 1  -1 ] A [ x ]
                              [ y ]
                              [ z ]   = -x + y - 4z.

Linear Transformation Spaces:

Definition: The set of all linear transformations T : V → W is denoted by Hom_C(V, W). As a vector space, we define scalar multiplication by (cT)(v) = c(T(v)) and addition by (S + T)(v) = (Sv) + (Tv). One verifies the other axioms for a vector space.

With respect to the choice of bases B and C, we can identify each T with an n × m matrix M_T such that [Tv]_C = M_T [v]_B. This identification yields an isomorphism of Hom_C(V, W) with the matrix vector space M(n, m, C).

In turn, Hom_C(V, W) admits a Hermitian inner product defined by ⟨T_1, T_2⟩ = Trace(T_1 T_2*). With a choice of orthonormal bases for V and W, the norm squared of T_1 with associated matrix M_{T_1} = [c_ij] equals Σ_{i,j} |c_ij|^2.

On the other hand, each linear functional is a linear transformation into C, and we have seen how to identify V* with M(1, m, C), the space of row vectors. Now every element of Hom_C(V, W) may be written uniquely in the form

T(v) = Σ_{i,j} c_ij v_j*(v) w_i.

Again T is identified with the n × m matrix [c_ij]. Thus we see that {T_ij} is a basis for Hom_C(V, W), where T_ij(v) = v_j*(v) w_i, and dim(Hom_C(V, W)) = mn.

Now suppose T_1 : V′ → V and T_2 : W → W′ are linear transformations. Then T_2 T T_1 is an element of Hom_C(V′, W′), and, after choosing bases, the associated matrix is M_{T_2} M_T M_{T_1}.

Suppose we wish to replace V and W with isomorphic vector spaces V′ and W′ using T_1 : V → V′ and T_2 : W → W′. Then we replace the element T in Hom_C(V, W) with T_2 T T_1^{-1} in Hom_C(V′, W′). For representations, V′ = V, W′ = W, T_1 = π(g) acts on V, and T_2 = π′(g) acts on W, to give σ(g)T = π′(g) T π(g)^{-1}. For the dual space V*, we use the trivial action on W′ = C, so π′ = I.

Example: Let T : C^3 → C^2 be given by T(x, y, z) = (x + y − z, 2x + 3y). Suppose T_1 : C^3 → C^3 is given by T_1(x, y, z) = (3x + y + z, 2x + y, −y + 2z), and T_2 : C^2 → C^2 is given by T_2(x, y) = (x + y, 2x − 3y). Then T_2 T T_1(x, y, z) = M_{T_2} M_T M_{T_1} [x; y; z], where

M_{T_2} = [ 1   1 ]
          [ 2  -3 ],

M_T = [ 1  1  -1 ]
      [ 2  3   0 ],

M_{T_1} = [ 3   1  1 ]
          [ 2   1  0 ]
          [ 0  -1  2 ].

Multiplying out,

M_{T_2} M_T M_{T_1} = [  17   8   1 ]
                      [ -26  -9  -8 ],

so T_2 T T_1(x, y, z) = (17x + 8y + z, −26x − 9y − 8z).
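The two examples above are easy to check numerically. The following sketch (NumPy; the code itself is not part of the notes, only the matrices from the examples are used) computes the pullback T*f as the row vector [1 -1]A and the composite matrix M_{T_2} M_T M_{T_1}.

```python
import numpy as np

# Dual space example: f(x, y) = x - y is the row vector [1, -1],
# and T : C^3 -> C^2 has matrix A. Then T*f is the row vector [1, -1] A.
f_row = np.array([[1, -1]])
A = np.array([[1, 1, -1],
              [2, 0, 3]])
print(f_row @ A)              # [[-1  1 -4]]  ->  (T*f)(x, y, z) = -x + y - 4z

# Hom-space example: the composite T2 T T1 has matrix M_T2 M_T M_T1.
M_T  = np.array([[1, 1, -1],
                 [2, 3, 0]])      # T(x, y, z)  = (x + y - z, 2x + 3y)
M_T1 = np.array([[3, 1, 1],
                 [2, 1, 0],
                 [0, -1, 2]])     # T1(x, y, z) = (3x + y + z, 2x + y, -y + 2z)
M_T2 = np.array([[1, 1],
                 [2, -3]])        # T2(x, y)    = (x + y, 2x - 3y)
print(M_T2 @ M_T @ M_T1)      # [[ 17   8   1]
                              #  [-26  -9  -8]]
```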
Tensor Products: If the direct sum is thought of as a means to add vector spaces, then tensoring is the way we multiply vector spaces. We will take a somewhat naive approach here, as our focus is working with concrete examples of representations. In general, tensors are defined in terms of universal mapping properties, without reference to coordinates. Tensors are the natural objects for moving from linearity to multilinearity.

Definition: With our choices of bases B and C for V and W, we form the set B ⊗ C = {v_i ⊗ w_j}. The set V ⊗ W is defined as the set of all linear combinations of elements of B ⊗ C. That is, a tensor in V ⊗ W, the tensor product of V and W, is an element

t = Σ_{i,j} c_ij (v_i ⊗ w_j),

and the vector space of tensors has dimension mn. For any v in V and w in W, the monomial tensor v ⊗ w is defined in terms of the basis by applying the following relations:

(1) c(v ⊗ w) = (cv) ⊗ w = v ⊗ (cw);
(2) (v + v′) ⊗ w = v ⊗ w + v′ ⊗ w; and
(3) v ⊗ (w + w′) = v ⊗ w + v ⊗ w′.

Often it is sufficient to work with monomials and check that results extend linearly to general tensors.
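As a coordinate-level illustration (not part of the notes), one common model identifies C^m ⊗ C^n with C^(mn) by sending v ⊗ w to the Kronecker product of the coordinate vectors; under this identification the monomials e_i ⊗ e_j of basis vectors become the standard basis, and relations (1) through (3) become identities of Kronecker products. A minimal NumPy sketch under that assumption:

```python
import numpy as np

# Model C^m (x) C^n as C^(mn), with v (x) w |-> np.kron(v, w).
m, n = 3, 2
rng = np.random.default_rng(0)
v, v2 = rng.standard_normal(m), rng.standard_normal(m)
w, w2 = rng.standard_normal(n), rng.standard_normal(n)
c = 2.5

# (1) c(v (x) w) = (cv) (x) w = v (x) (cw)
assert np.allclose(c * np.kron(v, w), np.kron(c * v, w))
assert np.allclose(c * np.kron(v, w), np.kron(v, c * w))
# (2) (v + v') (x) w = v (x) w + v' (x) w
assert np.allclose(np.kron(v + v2, w), np.kron(v, w) + np.kron(v2, w))
# (3) v (x) (w + w') = v (x) w + v (x) w'
assert np.allclose(np.kron(v, w + w2), np.kron(v, w) + np.kron(v, w2))

# The mn monomials e_i (x) e_j are linearly independent, so dim(V (x) W) = mn.
basis = np.array([np.kron(np.eye(m)[i], np.eye(n)[j])
                  for i in range(m) for j in range(n)])
assert np.linalg.matrix_rank(basis) == m * n
print("tensor relations verified; dim(V (x) W) =", m * n)
```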