Abstract Vector Spaces, Linear Transformations, and Their Coordinate Representations


Contents

1 Vector Spaces
  1.1 Definitions
    1.1.1 Basics
    1.1.2 Linear Combinations, Spans, Bases, and Dimensions
    1.1.3 Sums and Products of Vector Spaces and Subspaces
  1.2 Basic Vector Space Theory
2 Linear Transformations
  2.1 Definitions
    2.1.1 Linear Transformations, or Vector Space Homomorphisms
  2.2 Basic Properties of Linear Transformations
  2.3 Monomorphisms and Isomorphisms
  2.4 Rank-Nullity Theorem
3 Matrix Representations of Linear Transformations
  3.1 Definitions
  3.2 Matrix Representations of Linear Transformations
4 Change of Coordinates
  4.1 Definitions
  4.2 Change of Coordinate Maps and Matrices

1 Vector Spaces

1.1 Definitions

1.1.1 Basics

A vector space (linear space) $V$ over a field $F$ is a set $V$ on which the operations of addition, $+ : V \times V \to V$, and left $F$-action or scalar multiplication, $\cdot : F \times V \to V$, satisfy the following: for all $x, y, z \in V$ and all $a, b \in F$,

VS1  $x + y = y + x$
VS2  $(x + y) + z = x + (y + z)$
VS3  there exists $0 \in V$ such that $x + 0 = x$
VS4  for all $x \in V$ there exists $y \in V$ such that $x + y = 0$
VS5  $1x = x$
VS6  $(ab)x = a(bx)$
VS7  $a(x + y) = ax + ay$
VS8  $(a + b)x = ax + bx$

Axioms VS1–VS4 say precisely that $(V, +)$ is an abelian group.

If $V$ is a vector space over $F$, then a subset $W \subseteq V$ is called a subspace of $V$ if $W$ is a vector space over the same field $F$ under the restricted operations $+|_{W \times W}$ and $\cdot|_{F \times W}$.
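As a quick sanity check, the eight axioms can be spot-checked numerically for the familiar example $V = \mathbb{R}^2$ over $F = \mathbb{R}$. This is only an illustrative sketch; the helper names `add` and `smul` are not from the text.

```python
# Sketch: spot-check axioms VS1-VS8 for V = R^2 over F = R, with
# componentwise addition and scalar multiplication.

def add(x, y):
    return tuple(xi + yi for xi, yi in zip(x, y))

def smul(a, x):
    return tuple(a * xi for xi in x)

x, y, z = (1.0, 2.0), (3.0, -1.0), (0.5, 4.0)
a, b = 2.0, -3.0
zero = (0.0, 0.0)

assert add(x, y) == add(y, x)                             # VS1
assert add(add(x, y), z) == add(x, add(y, z))             # VS2
assert add(x, zero) == x                                  # VS3
assert add(x, smul(-1.0, x)) == zero                      # VS4
assert smul(1.0, x) == x                                  # VS5
assert smul(a * b, x) == smul(a, smul(b, x))              # VS6
assert smul(a, add(x, y)) == add(smul(a, x), smul(a, y))  # VS7
assert smul(a + b, x) == add(smul(a, x), smul(b, x))      # VS8
print("all eight axioms hold for these sample vectors")
```

Of course, passing on a handful of sample vectors does not prove the axioms; it only illustrates what each one asserts.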
1.1.2 Linear Combinations, Spans, Bases, and Dimensions

If $\emptyset \neq S \subseteq V$, $v_1, \dots, v_n \in S$, and $a_1, \dots, a_n \in F$, then a linear combination of $v_1, \dots, v_n$ is the finite sum

$$a_1 v_1 + \cdots + a_n v_n \qquad (1.1)$$

which is a vector in $V$. The $a_i \in F$ are called the coefficients of the linear combination. If $a_1 = \cdots = a_n = 0$, the linear combination is said to be trivial. In particular, considering the special case of the zero vector $0 \in V$, we note that $0$ may always be represented as a linear combination of any vectors $u_1, \dots, u_n \in V$,

$$0u_1 + \cdots + 0u_n = 0,$$

where each coefficient is $0 \in F$ and the right-hand side is $0 \in V$. This representation is called the trivial representation of $0$ by $u_1, \dots, u_n$. If, on the other hand, there are vectors $u_1, \dots, u_n \in V$ and scalars $a_1, \dots, a_n \in F$ with at least one $a_i \neq 0$ such that $a_1 u_1 + \cdots + a_n u_n = 0$, then that linear combination is called a nontrivial representation of $0$.

Using linear combinations we can generate subspaces, as follows. If $S \neq \emptyset$ is a subset of $V$, then the span of $S$ is given by

$$\operatorname{span}(S) := \{ v \in V \mid v \text{ is a linear combination of vectors in } S \} \qquad (1.2)$$

The span of $\emptyset$ is by definition

$$\operatorname{span}(\emptyset) := \{0\} \qquad (1.3)$$

If $\operatorname{span}(S) = V$, then $S$ is said to generate or span $V$, and to be a generating set or spanning set for $V$.

A nonempty subset $S$ of a vector space $V$ is said to be linearly independent if, for any finite number of distinct vectors $u_1, \dots, u_n \in S$ and any $a_1, \dots, a_n \in F$,

$$a_1 u_1 + a_2 u_2 + \cdots + a_n u_n = 0 \implies a_1 = \cdots = a_n = 0.$$

That is, $S$ is linearly independent if the only representation of $0 \in V$ by vectors in $S$ is the trivial one. In this case the vectors $u_1, \dots, u_n$ themselves are also said to be linearly independent. Otherwise, if there is at least one nontrivial representation of $0$ by vectors in $S$, then $S$ is said to be linearly dependent.
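One practical way to test linear independence in $\mathbb{R}^n$ is to compare the rank of the matrix whose columns are the candidate vectors with the number of vectors: a rank deficiency means some nontrivial combination represents $0$. A hypothetical sketch using NumPy (the helper name is illustrative):

```python
# Sketch: vectors are independent iff the column matrix has full column rank.
import numpy as np

def linearly_independent(vectors):
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

u1 = np.array([1.0, 0.0, 2.0])
u2 = np.array([0.0, 1.0, 1.0])
u3 = u1 + 2 * u2          # dependent by construction: u3 - u1 - 2*u2 = 0

print(linearly_independent([u1, u2]))      # True
print(linearly_independent([u1, u2, u3]))  # False
```

Note that floating-point rank computation is numerically delicate; for nearly dependent vectors the tolerance used by `matrix_rank` matters.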
Given $\emptyset \neq S \subseteq V$, a nonzero vector $v \in S$ is said to be an essentially unique linear combination of the vectors in $S$ if, up to order of terms, there is one and only one way to express $v$ as a linear combination of vectors in $S$. That is, if there are nonzero scalars $a_1, \dots, a_n, b_1, \dots, b_m \in F \setminus \{0\}$, distinct vectors $u_1, \dots, u_n \in S$, and distinct vectors $v_1, \dots, v_m \in S$ such that

$$v = a_1 u_1 + \cdots + a_n u_n = b_1 v_1 + \cdots + b_m v_m,$$

then, re-indexing the $b_i$ if necessary, $m = n$ and $a_i = b_i$, $u_i = v_i$ for all $i = 1, \dots, n$.

A subset $\beta \subseteq V$ is called a (Hamel) basis if it is linearly independent and $\operatorname{span}(\beta) = V$. We also say that the vectors of $\beta$ form a basis for $V$. Equivalently, as explained in Theorem 1.13 below, $\beta$ is a basis if every nonzero vector $v \in V$ is an essentially unique linear combination of vectors in $\beta$. In the context of inner product spaces of infinite dimension, there is a difference between a vector space basis (a Hamel basis of $V$) and an orthonormal basis for $V$ (a Hilbert basis for $V$): though both always exist, they need not coincide unless $\dim(V) < \infty$.

The dimension of a vector space $V$ is the cardinality of any basis for $V$, and is denoted $\dim(V)$. $V$ is finite-dimensional if it is the zero vector space $\{0\}$ or if it has a basis of finite cardinality; otherwise, if its basis has infinite cardinality, it is called infinite-dimensional. In the former case, $\dim(V) = |\beta| = n < \infty$ for some $n \in \mathbb{N}$, and $V$ is said to be $n$-dimensional; in the latter case, $\dim(V) = |\beta| = \kappa$ for some infinite cardinal number $\kappa$, and $V$ is said to be $\kappa$-dimensional.

If $V$ is finite-dimensional, say of dimension $n$, then an ordered basis for $V$ is a finite sequence or $n$-tuple $(v_1, \dots, v_n)$ of linearly independent vectors $v_1, \dots, v_n \in V$ such that $\{v_1, \dots, v_n\}$ is a basis for $V$. If $V$ is infinite-dimensional but has a countable basis, then an ordered basis is a sequence $(v_n)_{n \in \mathbb{N}}$ such that the set $\{v_n \mid n \in \mathbb{N}\}$ is a basis for $V$.
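Essential uniqueness is what makes coordinates relative to a basis well defined: arranging a basis of $\mathbb{R}^3$ as the columns of an invertible matrix, the coefficient vector of any $v$ is found by solving one linear system. A sketch under the assumption that the basis below (chosen for illustration) is indeed invertible as a matrix:

```python
# Sketch: the essentially unique representation v = sum_i coeffs[i] * beta[:, i],
# computed by solving beta @ coeffs = v.
import numpy as np

beta = np.array([[1.0, 1.0, 0.0],
                 [0.0, 1.0, 1.0],
                 [0.0, 0.0, 1.0]])   # columns form a basis of R^3

v = np.array([2.0, 3.0, 1.0])
coeffs = np.linalg.solve(beta, v)    # unique solution since beta is invertible

# Reconstruct v from its coordinates to confirm the representation.
assert np.allclose(beta @ coeffs, v)
print(coeffs)
```

If the columns were linearly dependent, `np.linalg.solve` would raise `LinAlgError`, mirroring the fact that coordinates are only well defined with respect to a basis.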
The term standard ordered basis is applied only to the ordered bases for $F^n$, $(F^{\mathbb{N}})_0$, $(F^B)_0$, and $F[x]$ and its subspaces $F_n[x]$. For $F^n$, the standard ordered basis is given by $(e_1, e_2, \dots, e_n)$, where

$$e_1 = (1, 0, \dots, 0), \quad e_2 = (0, 1, 0, \dots, 0), \quad \dots, \quad e_n = (0, \dots, 0, 1).$$

Since $F$ is any field, $1$ represents the multiplicative identity and $0$ the additive identity. For $F = \mathbb{R}$, the standard ordered basis is just as above with the usual $0$ and $1$. If $F = \mathbb{C}$, then $1 = (1, 0)$ and $0 = (0, 0)$, so the $i$th standard basis vector for $\mathbb{C}^n$ looks like this:

$$e_i = \bigl((0,0), \dots, \underbrace{(1,0)}_{i\text{th term}}, \dots, (0,0)\bigr).$$

For $(F^{\mathbb{N}})_0$ the standard ordered basis is given by

$$(e_n)_{n \in \mathbb{N}}, \quad \text{where } e_n(k) = \delta_{nk},$$

and more generally for $(F^B)_0$ the standard ordered basis is given by

$$(e_b)_{b \in B}, \quad \text{where } e_b(x) = \begin{cases} 1, & \text{if } x = b \\ 0, & \text{if } x \neq b \end{cases}$$

When the term standard ordered basis is applied to the vector space $F_n[x]$, the subspace of $F[x]$ of polynomials of degree less than or equal to $n$, it refers to the basis $(1, x, x^2, \dots, x^n)$. For $F[x]$, the standard ordered basis is given by $(1, x, x^2, \dots)$.

1.1.3 Sums and Products of Vector Spaces and Subspaces

If $S_1, \dots, S_k \in P(V)$ are subsets or subspaces of a vector space $V$, then their (internal) sum is given by

$$S_1 + \cdots + S_k = \sum_{i=1}^{k} S_i := \{ v_1 + \cdots + v_k \mid v_i \in S_i \text{ for } 1 \le i \le k \} \qquad (1.4)$$

More generally, if $\{S_i \mid i \in I\} \subseteq P(V)$ is any collection of subsets or subspaces of $V$, then their sum is defined to be

$$\sum_{i \in I} S_i := \Bigl\{ v_1 + \cdots + v_n \;\Bigm|\; v_j \in \bigcup_{i \in I} S_i \text{ and } n \in \mathbb{N} \Bigr\} \qquad (1.5)$$

If $S$ is a subspace of $V$ and $v \in V$, then the sum $\{v\} + S = \{v + s \mid s \in S\}$ is called a coset of $S$ and is usually denoted

$$v + S \qquad (1.6)$$

We are of course interested in the (internal) sum of subspaces of a vector space. In this case, we may have the special condition

$$V = \sum_{i=1}^{k} S_i \quad \text{and} \quad S_j \cap \sum_{i \neq j} S_i = \{0\} \text{ for all } j = 1, \dots, k \qquad (1.7)$$

When this happens, $V$ is said to be the (internal) direct sum of $S_1, \dots, S_k$, and this is symbolically denoted

$$V = S_1 \oplus S_2 \oplus \cdots \oplus S_k = \bigoplus_{i=1}^{k} S_i = \{ s_1 + \cdots + s_k \mid s_i \in S_i \} \qquad (1.8)$$
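The internal direct sum condition can be checked concretely for subspaces of $\mathbb{R}^n$ via rank arithmetic, using $\dim(U + W) = \dim U + \dim W - \dim(U \cap W)$. A sketch for two illustrative subspaces of $\mathbb{R}^3$ (the variable names are assumptions, not from the text):

```python
# Sketch: verify R^3 = U (+) W for U = span{(1,0,0),(0,1,0)}, W = span{(0,0,1)}.
# Direct sum <=> dim(U + W) = dim(V) and dim(U ∩ W) = 0.
import numpy as np

U = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]).T  # columns span U
W = np.array([[0.0, 0.0, 1.0]]).T                    # columns span W

dim_U = np.linalg.matrix_rank(U)
dim_W = np.linalg.matrix_rank(W)
dim_sum = np.linalg.matrix_rank(np.hstack([U, W]))   # dim(U + W)

dim_intersection = dim_U + dim_W - dim_sum
is_direct = (dim_sum == 3) and (dim_intersection == 0)
print(is_direct)  # True
```

The same rank bookkeeping extends to any finite number of subspaces, checking each $S_j$ against the sum of the others as in (1.7).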
In the infinite-dimensional case we proceed similarly. If $\mathcal{F} = \{S_i \mid i \in I\}$ is a family of subsets of $V$ such that

$$V = \sum_{i \in I} S_i \quad \text{and} \quad S_j \cap \sum_{i \neq j} S_i = \{0\} \text{ for all } j \in I \qquad (1.9)$$

then $V$ is a direct sum of the $S_i \in \mathcal{F}$, denoted

$$V = \bigoplus \mathcal{F} = \bigoplus_{i \in I} S_i = \Bigl\{ s_{i_1} + \cdots + s_{i_n} \;\Bigm|\; s_{i_j} \in S_{i_j} \text{ with } i_1, \dots, i_n \in I \text{ distinct} \Bigr\} \qquad (1.10)$$

We can also define the (external) sum of distinct vector spaces which do not lie inside a common larger vector space: if $V_1, \dots, V_n$ are vector spaces over the same field $F$, then their external direct sum is the cartesian product $V_1 \times \cdots \times V_n$, with addition and scalar multiplication defined componentwise. We denote the sum, confusingly, by the same notation:

$$V_1 \oplus V_2 \oplus \cdots \oplus V_n := \{ (v_1, v_2, \dots, v_n) \mid v_i \in V_i,\ i = 1, \dots, n \} \qquad (1.11)$$

In the infinite-dimensional case there are two types of external direct sum: one with no restriction on the sequences, and one allowing only sequences with finite support.
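The componentwise operations of the external direct sum can be sketched directly, here for $\mathbb{R}^2 \oplus \mathbb{R}^3$; the helper names `ds_add` and `ds_smul` are illustrative, not from the text.

```python
# Sketch: external direct sum V1 x V2 for V1 = R^2, V2 = R^3.
# Elements are pairs of tuples; addition and scalar multiplication act
# componentwise in each factor.

def ds_add(p, q):
    (v1, v2), (w1, w2) = p, q
    return (tuple(a + b for a, b in zip(v1, w1)),
            tuple(a + b for a, b in zip(v2, w2)))

def ds_smul(c, p):
    v1, v2 = p
    return (tuple(c * a for a in v1), tuple(c * a for a in v2))

p = ((1.0, 2.0), (0.0, 1.0, -1.0))
q = ((3.0, 0.0), (2.0, 2.0, 2.0))

print(ds_add(p, q))       # ((4.0, 2.0), (2.0, 3.0, 1.0))
print(ds_smul(2.0, p))    # ((2.0, 4.0), (0.0, 2.0, -2.0))
```

For infinitely many factors, restricting to pairs, sequences, or families with finite support gives the second flavor of external direct sum mentioned above.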