Kernel, Image, Nullity, and Rank
Math 130 Linear Algebra
D Joyce, Fall 2015


For the time being, we'll look at the ranks and nullities of transformations. We'll come back to these topics again when we interpret our results for matrices.

Definition 1. Let T : V → W be a linear transformation between vector spaces. The kernel of T, also called the null space of T, is the inverse image of the zero vector, 0, of W,

    ker(T) = T⁻¹(0) = { v ∈ V | T(v) = 0 }.

It's sometimes denoted N(T) for null space of T. The image of T, also called the range of T, is the set of values of T,

    T(V) = { T(v) ∈ W | v ∈ V }.

This image is also denoted im(T), or R(T) for range of T.

Both of these are vector spaces: ker(T) is a subspace of V, and T(V) is a subspace of W. (Why? Prove it.)

We can prove something about kernels and images directly from their definitions.

Theorem 2. Let T : V → W and U : W → X be linear transformations. Then

    ker(T) ⊆ ker(U ∘ T)

and

    im(U ∘ T) ⊆ im(U).

Proof. For the first statement, just note that T(v) = 0 implies U(T(v)) = 0. For the second statement, just note that an element of the form U(T(v)) in X is automatically of the form U(w), where w = T(v). q.e.d.

Definition 3. The dimensions of the kernel and image of a transformation T are called the transformation's nullity and rank, and they're denoted nullity(T) and rank(T), respectively. Since a matrix represents a transformation, a matrix also has a rank and nullity.

The above theorem implies this corollary.

Corollary 4. Let T : V → W and U : W → X. Then

    nullity(T) ≤ nullity(U ∘ T)

and

    rank(U ∘ T) ≤ rank(U).

Systems of linear equations and linear transformations. We've seen how a system of m linear equations in n unknowns can be interpreted as a single matrix equation Ax = b, where x is the n × 1 column vector whose entries are the n unknowns, and b is the m × 1 column vector of constants on the right sides of the m equations.

We can also interpret a system of linear equations in terms of a linear transformation. Let the linear transformation T : Rⁿ → Rᵐ correspond to the matrix A, that is, T(x) = Ax. Then the matrix equation Ax = b becomes

    T(x) = b.

Solving the equation means looking for a vector x in the inverse image T⁻¹(b). A solution will exist if and only if b is in the image T(V).

When the system of linear equations is homogeneous, then b = 0, and the solution set is the subspace of V we've called the kernel of T. Thus, kernels are the solution sets of homogeneous linear equations.

When the system is not homogeneous, then the solution set is not a subspace of V, since it doesn't contain 0. In fact, it will be empty when b is not in the image of T. If b is in the image, however, there is at least one solution a ∈ V with T(a) = b. All the rest can be found from a by adding solutions x of the associated homogeneous equations, that is,

    T(a + x) = b  iff  T(x) = 0.

Geometrically, the solution set is a translate of the kernel of T, which is a subspace of V, by the vector a.
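Before working an example by hand, here is a minimal computational sketch of that geometric picture using sympy. The matrix A and vector b below are hypothetical choices for illustration, not taken from the notes; the point is that a particular solution plus any kernel vector is again a solution.

```python
from sympy import Matrix

# Hypothetical example: A is 2x3, so T : R^3 -> R^2.
A = Matrix([[1, 2, 0],
            [0, 1, 1]])
b = Matrix([3, 1])

# One particular solution a (free parameters set to 0),
# and a basis of the kernel of T.
a, params = A.gauss_jordan_solve(b)
a = a.subs({p: 0 for p in params})
kernel_basis = A.nullspace()

# Translating the kernel by a stays inside the solution set.
for k in kernel_basis:
    assert A * (a + 5 * k) == b
print(a.T, [k.T for k in kernel_basis])
```

Running this prints one particular solution together with the kernel direction that sweeps out the rest of the solution set.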
Example 5. Consider this nonhomogeneous linear system Ax = b:

    [ 2  0  1 ] [x]   [ 1 ]
    [ 1  1  3 ] [y] = [ 2 ]
                [z]

Solve it by row reducing the augmented matrix

    [ 2  0  1 | 1 ]
    [ 1  1  3 | 2 ]

to

    [ 1  0  1/2 | 1/2 ]
    [ 0  1  5/2 | 3/2 ]

Then z can be chosen freely, and x and y are determined from z, that is,

        [x]   [ 1/2 − (1/2)z ]   [ 1/2 ]   [ −1/2 ]
    x = [y] = [ 3/2 − (5/2)z ] = [ 3/2 ] + [ −5/2 ] z
        [z]   [       z      ]   [  0  ]   [   1  ]

This last equation is the parametric equation of a line in R³; that is to say, the solution set is a line. But it's not a line through the origin. There is, however, a line through the origin, namely

        [x]   [ −1/2 ]
    x = [y] = [ −5/2 ] z
        [z]   [   1  ]

and that line is the solution space of the associated homogeneous system Ax = 0. Furthermore, these two lines are parallel, and the vector (1/2, 3/2, 0) shifts the line through the origin to the other line. In summary, for this example, the solution set for the nonhomogeneous equation Ax = b is a line in R³ parallel to the solution space for the homogeneous equation Ax = 0.

In general, the solution set for the nonhomogeneous equation Ax = b won't be a one-dimensional line. Its dimension will be the nullity of A, and it will be parallel to the solution space of the associated homogeneous equation Ax = 0.

The dimension theorem. The rank and nullity of a transformation are related: specifically, their sum is the dimension of the domain of the transformation. That equation is sometimes called the dimension theorem. In terms of matrices, this connection can be stated as: the rank of a matrix plus its nullity equals the number of columns of the matrix.

Before we prove the dimension theorem, we'll first find a characterization of the image of a transformation.

Theorem 6. The image of a transformation is spanned by the image of any basis of its domain. For T : V → W, if β = {b_1, b_2, ..., b_n} is a basis of V, then T(β) = {T(b_1), T(b_2), ..., T(b_n)} spans the image of T.

Although T(β) spans the image, it needn't be a basis because its vectors needn't be independent.

Proof. A vector in the image of T is of the form T(v), where v ∈ V. But β is a basis of V, so v is a linear combination of the basis vectors b_1, ..., b_n. Therefore, T(v) is the same linear combination of the vectors T(b_1), ..., T(b_n). Therefore, every vector in the image of T is a linear combination of vectors in T(β). q.e.d.

Theorem 7 (Dimension theorem). If the domain of a linear transformation is finite dimensional, then that dimension is the sum of the rank and nullity of the transformation.

Proof. Let T : V → W be a linear transformation, let n be the dimension of V, let r be the rank of T, and let k be the nullity of T. We'll show n = r + k.

Let β = {b_1, ..., b_k} be a basis of the kernel of T. This basis can be extended to a basis γ = {b_1, ..., b_k, ..., b_n} of all of V. We'll show that the image of the vectors we appended,

    C = {T(b_{k+1}), ..., T(b_n)},

is a basis of T(V). That will show that r = n − k, as required.

First, we need to show that the set C spans the image T(V). From the previous theorem, we know that T(γ) spans T(V). But all the vectors in T(β) are 0, so they don't help in spanning T(V). That leaves C = T(γ) − T(β) to span T(V).

Next, we need to show that the vectors in C are linearly independent. Suppose that 0 is a linear combination of them,

    c_{k+1} T(b_{k+1}) + ··· + c_n T(b_n) = 0,

where the c_i's are scalars. Then

    T(c_{k+1} b_{k+1} + ··· + c_n b_n) = 0.

Therefore, v = c_{k+1} b_{k+1} + ··· + c_n b_n lies in the kernel of T. Therefore, v is a linear combination of the basis vectors in β, say v = c_1 b_1 + ··· + c_k b_k. These last two equations imply that 0 is a linear combination of the entire basis γ of V:

    c_1 b_1 + ··· + c_k b_k − c_{k+1} b_{k+1} − ··· − c_n b_n = 0.

Since γ is a basis, it is linearly independent, so all the coefficients are 0. In particular, c_{k+1} = ··· = c_n = 0, so C is linearly independent. q.e.d.
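As a quick computational check of the dimension theorem, here is a short sympy sketch using the matrix from Example 5; it simply confirms that rank plus nullity equals the number of columns.

```python
from sympy import Matrix

# The 2x3 matrix A of Example 5, viewed as T : R^3 -> R^2.
A = Matrix([[2, 0, 1],
            [1, 1, 3]])

r = A.rank()               # dimension of the image: 2
k = len(A.nullspace())     # dimension of the kernel: 1
assert r + k == A.cols     # dimension theorem: r + k = n = 3
```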
Theorem 8. A transformation is one-to-one if and only if its kernel is trivial, that is, its nullity is 0.

Proof. Let T : V → W. Suppose that T is one-to-one. Since T(0) = 0 and T is one-to-one, T can send no other vector to 0. Thus, the kernel of T consists of 0 alone. That's a 0-dimensional space, so the nullity of T is 0.

Conversely, suppose that the nullity of T is 0, that is, its kernel consists only of 0. We'll show T is one-to-one. Suppose that u ≠ v. Then u − v ≠ 0, so u − v does not lie in the kernel of T, which means that T(u − v) ≠ 0. But T(u − v) = T(u) − T(v), therefore T(u) ≠ T(v). Thus, T is one-to-one. q.e.d.

Since the rank plus the nullity of a transformation equals the dimension of its domain, r + k = n, we have the following corollary.

Corollary 9. For a transformation T whose domain is finite dimensional, T is one-to-one if and only if that dimension equals its rank.

Characterization of isomorphisms. If two vector spaces V and W are isomorphic, their properties with respect to addition and scalar multiplication will be the same.
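As a closing computational note, Theorem 8 and Corollary 9 give a concrete test for whether a matrix transformation is one-to-one. The sketch below, with two hypothetical matrices, checks that the two criteria (a trivial kernel, and rank equal to the dimension of the domain) always agree.

```python
from sympy import Matrix

M = Matrix([[1, 2],
            [0, 1],
            [1, 1]])   # 3x2: independent columns, so one-to-one
S = Matrix([[1, 2],
            [2, 4]])   # 2x2: rank 1 < 2, so the kernel is nontrivial

for X in (M, S):
    trivial_kernel = len(X.nullspace()) == 0   # Theorem 8's criterion
    full_rank = X.rank() == X.cols             # Corollary 9's criterion
    assert trivial_kernel == full_rank
    print(X.shape, "one-to-one:", trivial_kernel)
```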