Unit 4: Matrices, Linear Maps and Change of Basis


Juan Luis Melero and Eduardo Eyras

September 2018

Contents

1 Linear maps
  1.1 Operations of linear maps
    1.1.1 Scaling
    1.1.2 Reflection
    1.1.3 Pure rotation
  1.2 Definition of linear map
  1.3 Image of a map
  1.4 Kernel (nullspace) of a map
  1.5 Types of maps
    1.5.1 Monomorphism (injective or one-to-one map)
    1.5.2 Epimorphism (surjective or onto map)
    1.5.3 Isomorphism (bijective or "one-to-one and onto" maps)
  1.6 Matrix representation of a linear map
  1.7 Properties of the matrix associated to a linear map, kernel and image
    1.7.1 Definitions
    1.7.2 Application of the properties
2 Change of basis
3 Composition of linear maps
4 Inverse of a linear map
5 Path of linear maps
6 Exercises
7 R practical
  7.1 Kernel of a linear map
  7.2 Image of a linear map

1 Linear maps

1.1 Operations of linear maps

A linear map is an operation on a vector space that transforms one vector into another and can be represented as a matrix multiplication:

f_A : \mathbb{R}^n \to \mathbb{R}^m
\vec{u} \mapsto \vec{v} = f_A(\vec{u}) = A\vec{u} \in \mathbb{R}^m

For instance, in three dimensions:

\vec{v} = f_A(\vec{u}) = A\vec{u} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix} = \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}

Many operations can be represented with linear maps. We describe some interesting ones below.

1.1.1 Scaling

A scaling operation returns a vector in the same direction. The matrices associated with this operation are diagonal:

f(\vec{u}) = a\vec{u} \;\to\; f(\vec{u}) = a \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} a u_1 \\ a u_2 \end{pmatrix} = \begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = A\vec{u}

Figure 1: Example of a scaling operation.
The thick arrow represents the original vector, whereas the thin arrow represents the scaled vector. For example:

\vec{u} = \begin{pmatrix} 2 \\ 1 \end{pmatrix} \;\to\; f(\vec{u}) = A\vec{u} = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} \begin{pmatrix} 2 \\ 1 \end{pmatrix} = \begin{pmatrix} 4 \\ 2 \end{pmatrix}

1.1.2 Reflection

The reflection operation returns a vector mirrored across a given axis. The associated matrix is diagonal, and its square is the unit matrix: if the matrix A is a reflection, then

A^2 = I_n

For instance:

\vec{v} = f_A(\vec{u}) = A\vec{u} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} u_1 \\ -u_2 \end{pmatrix}

\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} 2 \\ 1 \end{pmatrix} = \begin{pmatrix} 2 \\ -1 \end{pmatrix}

Figure 2: Example of a reflection operation. The thick arrow represents the original vector, whereas the thin arrow represents the reflected vector.

1.1.3 Pure rotation

A pure rotation returns the vector rotated by a certain angle; that is, it does not change its norm. Additionally, a rotation does not change the relative angle between vectors, so in particular it preserves the orthogonality between vectors.

It can then be proven (left as an exercise) that pure rotation matrices fulfill the property A A^T = A^T A = I_n. This is the general definition of an orthonormal matrix (it preserves norms and relative angles). In particular, an orthonormal matrix is formed by column (or row) vectors that are mutually orthogonal and have norm (module) 1. For instance, in two dimensions, writing

A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \quad A^{-1} = A^T \;\to\; \begin{cases} a^2 + c^2 = 1 \\ b^2 + d^2 = 1 \\ ab + cd = 0 \end{cases}

We can parametrize the matrix with the angle using trigonometric functions. If we recall that \sin^2\theta + \cos^2\theta = 1, and use the fact that the row or column vectors must be orthogonal, we can reparametrize the matrix as:

A(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}

\vec{v} = A(\theta)\vec{u} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} u_1 \cos\theta - u_2 \sin\theta \\ u_1 \sin\theta + u_2 \cos\theta \end{pmatrix}

For example:

\theta = \frac{\pi}{2} \;\to\; \vec{u}' = A(\pi/2)\,\vec{u} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 2 \\ 1 \end{pmatrix} = \begin{pmatrix} -1 \\ 2 \end{pmatrix}

Figure 3: Example of a pure rotation operation. Here \vec{u} represents the original vector, whereas \vec{u}' represents the rotated vector.
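The three operations above are easy to verify numerically. The sketch below (Python with NumPy, used here purely as an illustration; the notes' own practical sections use R) reproduces the scaling example, checks that the reflection squares to the identity, and checks that the rotation matrix is orthonormal and norm-preserving:

```python
import numpy as np

u = np.array([2.0, 1.0])

# Scaling by a = 2: multiplication by the diagonal matrix 2*I.
S = 2 * np.eye(2)
assert np.allclose(S @ u, [4.0, 2.0])

# Reflection across the x-axis: diagonal, and its square is the identity.
R = np.array([[1.0,  0.0],
              [0.0, -1.0]])
assert np.allclose(R @ u, [2.0, -1.0])
assert np.allclose(R @ R, np.eye(2))

# Pure rotation by theta: orthonormal (A A^T = I) and norm-preserving.
def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

A = rotation(np.pi / 2)
v = A @ u
assert np.allclose(A @ A.T, np.eye(2))                       # orthonormal
assert np.isclose(np.linalg.norm(v), np.linalg.norm(u))      # norm preserved
assert np.allclose(v, [-1.0, 2.0])                           # theta = pi/2 example
```

Each assertion mirrors one of the defining properties stated in the text, so the checks double as a summary of this section.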
1.2 Definition of linear map

A map f, also called an application or function, is a relation between two vector spaces M, N such that every vector in M has a corresponding vector in N:

f : M \to N
u \mapsto f(u)

f \text{ is a map} \iff \forall u, \; \exists!\, f(u)

Such a transformation is a linear map if it fulfills these two properties:

1. u, v \in M \implies f(u + v) = f(u) + f(v) \in N
2. \lambda \in \mathbb{R}, \; u \in M \implies f(\lambda u) = \lambda f(u) \in N

For instance, consider the following map between \mathbb{R}^2 and \mathbb{R}:

f : \mathbb{R}^2 \to \mathbb{R}
(x, y) \mapsto f(x, y) = 2x - y

We show property 1. Consider the map applied to the sum of any two vectors in \mathbb{R}^2:

f((x, y) + (w, z)) = f(x + w, y + z) = 2(x + w) - (y + z) = 2x - y + 2w - z = f(x, y) + f(w, z)

Similarly, we show property 2:

f(\lambda(x, y)) = f(\lambda x, \lambda y) = 2\lambda x - \lambda y = \lambda(2x - y) = \lambda f(x, y)

1.3 Image of a map

The image of a map is the set of all elements of the target set that are reached by the map:

\mathrm{Im}(f) = \{ w \in N \mid \exists u \in M : f(u) = w \} \subseteq N

As we will see, linear maps can be represented with matrices. In this representation, the image of a linear map is the vector subspace spanned by the column vectors of the matrix A representing the map:

\mathrm{Im}(f) = \mathrm{Span}\{ (a_{11}, a_{21}, \ldots, a_{n1}), \ldots, (a_{1m}, a_{2m}, \ldots, a_{nm}) \}

Proposition: the image of a linear map is a vector subspace. The proof follows from the definition of a linear map. We show that the elements of the image fulfill closure (under vector addition and multiplication by scalars) and include the neutral element:

1. \forall f(u), f(v) \in \mathrm{Im}(f): \; f(u) + f(v) = f(u + v) \in \mathrm{Im}(f)
2. \forall a \in \mathbb{R}, \forall f(u) \in \mathrm{Im}(f): \; a f(u) = f(au) \in \mathrm{Im}(f)
3. f(0) = f(u - u) = f(u) - f(u) \in \mathrm{Im}(f) \;\to\; f(0) \in \mathrm{Im}(f)

Figure 4: Illustration of the image of a map.

Example. In this example we use the fact that a linear map can be represented as a matrix and that the columns of the matrix span the image, a subspace of the target space; the number of rows of the matrix is the dimension of the target space in which the image sits (more details on this later). Consider the following linear map:

f : \mathbb{R}^2 \to
\mathbb{R}^3
(x, y) \mapsto f(x, y) = (x + y, \; 2x + y, \; y - x)

We can represent this linear map with the following matrix:

A_f = \begin{pmatrix} 1 & 1 \\ 2 & 1 \\ -1 & 1 \end{pmatrix} \quad \text{since} \quad \begin{pmatrix} 1 & 1 \\ 2 & 1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x + y \\ 2x + y \\ -x + y \end{pmatrix}

The image of the linear map is the span of the column vectors:

\mathrm{Im}(f) = f(\mathbb{R}^2) = \mathrm{Span}\{ (1, 2, -1), (1, 1, 1) \} \subseteq \mathbb{R}^3

The image of this linear map is the set of all linear combinations of these two vectors.

1.4 Kernel (nullspace) of a map

The kernel of a map f : M \to N is the set of elements which map to the zero (neutral) vector:

\mathrm{null}(f) = f^{-1}(\vec{0}) = \mathrm{Ker}(f) = \{ \vec{v} \in M \mid f(\vec{v}) = \vec{0} \}

Proposition: the kernel of a map is a vector subspace. The proof follows from the definition of a linear map:

1. \forall u, v \in \mathrm{Ker}(f): \; f(u + v) = f(u) + f(v) = 0 + 0 = 0 \;\to\; u + v \in \mathrm{Ker}(f)
2. \forall a \in \mathbb{R}, \forall u \in \mathrm{Ker}(f): \; f(au) = a f(u) = a \cdot 0 = 0 \;\to\; au \in \mathrm{Ker}(f)
3. f(0) = f(u - u) = f(u) - f(u) = 0 - 0 = 0 \;\to\; 0 \in \mathrm{Ker}(f)

Figure 5: Illustration of the kernel of a map.

Example: given a linear map, we use its associated matrix to find which vectors in the domain map to the zero vector of the target space. Consider the same linear map as in the image example (Section 1.3):

A_f = \begin{pmatrix} 1 & 1 \\ 2 & 1 \\ -1 & 1 \end{pmatrix}

We find those vectors that map to the zero vector:

A_f \vec{u} = \vec{0}: \quad \begin{pmatrix} 1 & 1 \\ 2 & 1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}

\begin{cases} x + y = 0 \\ 2x + y = 0 \\ -x + y = 0 \end{cases} \;\to\; x = 0, \; y = 0

(The first and third equations give x = -y and x = y, hence x = y = 0.)

The kernel of this linear map is:

\mathrm{Ker}(f) = \left\{ \begin{pmatrix} 0 \\ 0 \end{pmatrix} \right\}

Let us now consider an example where \mathrm{Ker}(f) is not \{\vec{0}\}. Consider the following matrix associated to a linear map f : \mathbb{R}^3 \to \mathbb{R}^2:

A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & -2 \end{pmatrix}

\begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & -2 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}

\begin{cases} x + y + z = 0 \\ x - y - 2z = 0 \end{cases} \;\to\; \begin{cases} x = z/2 \\ y = -3z/2 \\ z = z \end{cases}

So the kernel is:

\mathrm{Ker}(f) = \left\{ \begin{pmatrix} z/2 \\ -3z/2 \\ z \end{pmatrix} \; ; \; z \in \mathbb{R} \right\}

This is the parametric representation of the kernel as a vector space. We can also represent a vector space as the span of basis vectors.
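A basis for such a kernel can also be found numerically. The sketch below (Python with NumPy; an illustration only, since the notes' practical sections use R) computes the null space of A from the singular value decomposition and checks it against the parametric solution (z/2, -3z/2, z):

```python
import numpy as np

A = np.array([[1.0,  1.0,  1.0],
              [1.0, -1.0, -2.0]])

# The rows of Vt beyond rank(A) span the null space of A.
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
kernel_basis = Vt[rank:]            # here: a single unit vector

k = kernel_basis[0]
assert np.allclose(A @ k, 0.0)      # k really maps to the zero vector

# k is proportional to the parametric direction (1/2, -3/2, 1):
direction = np.array([0.5, -1.5, 1.0])
assert np.allclose(np.cross(k, direction), 0.0)   # parallel vectors
```

The SVD route is numerically robust; for hand computation, Gaussian elimination on the system above gives the same one-dimensional solution set.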
In the case of this kernel:

\mathrm{Ker}(f) = \mathrm{Span}\{ (1/2, \; -3/2, \; 1) \}

1.5 Types of maps

1.5.1 Monomorphism (injective or one-to-one map)

A monomorphism is a linear map which, for different vectors, returns different images.
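Injectivity connects back to the kernel: a linear map is injective exactly when its kernel is \{\vec{0}\}, which for the associated matrix means full column rank (by the rank-nullity theorem). A minimal numerical sketch (Python with NumPy, used here only as an illustration) contrasts the two matrices from the kernel examples above:

```python
import numpy as np

def kernel_is_trivial(A):
    """Ker(f_A) = {0} iff A has full column rank (rank-nullity theorem)."""
    return np.linalg.matrix_rank(A) == A.shape[1]

A1 = np.array([[ 1.0,  1.0],       # the 3x2 matrix from Section 1.3
               [ 2.0,  1.0],
               [-1.0,  1.0]])
A2 = np.array([[ 1.0,  1.0,  1.0],  # the 2x3 matrix with Ker = Span{(1/2, -3/2, 1)}
               [ 1.0, -1.0, -2.0]])

assert kernel_is_trivial(A1)        # only (0, 0) maps to the zero vector
assert not kernel_is_trivial(A2)    # the kernel is a line in R^3
```

So f_{A1} is a monomorphism, while f_{A2} is not: distinct vectors that differ by an element of its kernel share the same image.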