18.06.16: The Four Fundamental Subspaces

Lecturer: Barwick
Monday, 14 March 2016

Here it is again:

Theorem (Rank–Nullity Theorem). If $A$ is an $m \times n$ matrix, then
$$\dim(\ker(A)) + \dim(\operatorname{im}(A)) = n.$$

We saw a proof of this by reducing to rref or rcef, and then checking it there.

There's just one thing that might bug us here. If I think of the linear map $T_A \colon \mathbb{R}^n \to \mathbb{R}^m$, then we see that $\ker(A)$ is a subspace of the source $\mathbb{R}^n$, but $\operatorname{im}(A)$ is a subspace of the target $\mathbb{R}^m$. So why should these spaces be related?

To answer this question, there's another matrix we can contemplate: the transpose $A^\top$. This is an $n \times m$ matrix, and so it corresponds to a linear map in the other direction,
$$T_{A^\top} \colon \mathbb{R}^m \to \mathbb{R}^n.$$
This is the map that takes a column vector $\vec{v}$ and builds the column vector $A^\top \vec{v}$, but we can perform a trick here. Instead of thinking about transposing $A$, we can think about transposing the vectors.

So $(\mathbb{R}^m)^\vee$ will be the set of all $m$-dimensional row vectors; equivalently, the set of all $1 \times m$ matrices; equivalently again, the set of all transposes $\underline{v} \coloneqq (\vec{v})^\top$ of vectors $\vec{v} \in \mathbb{R}^m$. We call $(\mathbb{R}^m)^\vee$ the dual of $\mathbb{R}^m$.

So, e.g., if $\vec{v} = (1, 0, -2, 1, 0)^\top$, then $\underline{v} = \begin{pmatrix} 1 & 0 & -2 & 1 & 0 \end{pmatrix}$.

The neat thing about row vectors is that they do stuff to column vectors. If $\vec{v} \in \mathbb{R}^n$ and $\underline{w} \in (\mathbb{R}^n)^\vee$, then $\underline{w}\vec{v}$ is a number. (Question: how is this related to the dot product?)

If $V \subseteq \mathbb{R}^n$ is a vector subspace, then we define
$$V^\perp \coloneqq \{\underline{w} \in (\mathbb{R}^n)^\vee \mid \text{for any } \vec{v} \in V,\ \underline{w}\vec{v} = 0\} \subseteq (\mathbb{R}^n)^\vee.$$
Dually, if $W \subseteq (\mathbb{R}^n)^\vee$ is a vector subspace, then we define
$$W^\perp \coloneqq \{\vec{v} \in \mathbb{R}^n \mid \text{for any } \underline{w} \in W,\ \underline{w}\vec{v} = 0\} \subseteq \mathbb{R}^n.$$
Fact: $\dim(V) = n - \dim(V^\perp)$, and $\dim(W) = n - \dim(W^\perp)$. (Why?)

Now, since
$$(A^\top \vec{v})^\top = (\vec{v})^\top A = \underline{v} A,$$
we can leave $A$ just as it is, and consider the linear map
$$T_A^\vee \colon (\mathbb{R}^m)^\vee \to (\mathbb{R}^n)^\vee$$
given by the formula $T_A^\vee(\underline{v}) \coloneqq \underline{v} A$.

So when we contemplate the kernel and image of $A^\top$, we can think about them via the map $T_A^\vee$.

For example, $\ker(A^\top)$ is the set of all row vectors $\underline{v} \in (\mathbb{R}^m)^\vee$ such that $\underline{v} A = \underline{0}$. This space is also called the cokernel or the left kernel of $A$. I write $\operatorname{coker}(A)$.

We also have the image of $A^\top$, which is the set of all row vectors $\underline{w} \in (\mathbb{R}^n)^\vee$ such that there exists a row vector $\underline{v} \in (\mathbb{R}^m)^\vee$ for which $\underline{w} = \underline{v} A$. This space is also called the coimage of $A$, or, since it's the span of the columns of $A^\top$, which is the span of the rows of $A$, it is also called the row space of $A$. I write $\operatorname{coim}(A)$.

In all, we have four vector spaces that are what Strang calls the fundamental subspaces attached to $A$:
$$\ker(A), \quad \operatorname{im}(A), \quad \operatorname{coker}(A) \coloneqq \ker(A^\top), \quad \operatorname{coim}(A) \coloneqq \operatorname{im}(A^\top).$$

Here's the abstract statement of the Rank–Nullity Theorem:

(1) $\ker(A) = \operatorname{coim}(A)^\perp$, so that $\dim(\ker(A)) = n - \dim(\operatorname{coim}(A))$.

(2) $\operatorname{im}(A) = \operatorname{coker}(A)^\perp$, so that $\dim(\operatorname{im}(A)) = m - \dim(\operatorname{coker}(A))$.

(3) $A$ provides a bijection $\operatorname{coim}(A) \cong \operatorname{im}(A)$, so that $\dim(\operatorname{coim}(A)) = \dim(\operatorname{im}(A))$.
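The following is a short numerical sketch (an editorial addition, not part of Barwick's notes): it takes an arbitrary rank-2 example matrix $A$ and uses NumPy's SVD to produce orthonormal bases for the four fundamental subspaces, then checks Rank–Nullity and statements (1)–(3) above. The SVD is used here purely for numerical convenience; it is not the lecture's method.

```python
# Minimal sketch: the four fundamental subspaces of a sample 3x5 matrix A,
# computed with NumPy's SVD. The matrix A is an arbitrary illustrative choice.
import numpy as np

A = np.array([[1., 2., 0., 1., -1.],
              [2., 4., 1., 3.,  0.],
              [3., 6., 1., 4., -1.]])   # m = 3, n = 5; row 3 = row 1 + row 2
m, n = A.shape
r = np.linalg.matrix_rank(A)             # dim im(A) = dim coim(A) = rank(A)

# A = U S V^T; the singular vectors give orthonormal bases for all four spaces.
U, S, Vt = np.linalg.svd(A)
tol = 1e-10

im_A    = U[:, :r]      # column space of A, im(A)        (subspace of R^m)
coker_A = U[:, r:]      # left null space, ker(A^T)       (rows in (R^m)^v)
coim_A  = Vt[:r, :].T   # row space of A, im(A^T)         (rows in (R^n)^v)
ker_A   = Vt[r:, :].T   # null space of A, ker(A)         (subspace of R^n)

# Rank-Nullity: dim ker(A) + dim im(A) = n.
assert ker_A.shape[1] + r == n

# (1) ker(A) = coim(A)^perp: every row of A annihilates every kernel vector.
assert np.allclose(A @ ker_A, 0, atol=tol)

# (2) im(A) = coker(A)^perp: left-null row vectors annihilate every column of A.
assert np.allclose(coker_A.T @ A, 0, atol=tol)

# (3) A restricts to a bijection coim(A) -> im(A): applying A to a basis of the
# row space yields r linearly independent vectors spanning im(A).
assert np.linalg.matrix_rank(A @ coim_A) == r

print(f"m={m}, n={n}, rank={r}, dim ker={n - r}, dim coker={m - r}")
```

In exact arithmetic one could instead row-reduce $A$ and $A^\top$ as in the lecture; the SVD route simply gives orthonormal bases and well-conditioned numerical checks of the same statements.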