Range and Kernel

Brian Krummel
October 9, 2019

Recall from Chapter 2 that given an $m \times n$ matrix $A$:

• Nul A is the solution set of the homogeneous equation $Ax = 0$.
• Col A is the span of the columns of $A$.

Let's review a few things about null spaces and column spaces.

Example 1. Is $v$ in Nul A, where
\[
v = \begin{bmatrix} 1 \\ -3 \\ 2 \end{bmatrix}, \qquad
A = \begin{bmatrix} 5 & 3 & 2 \\ -1 & 1 & 2 \\ 2 & 4 & 5 \end{bmatrix}?
\]

Answer. Yes, since we can simply compute $Av$ to show that it equals zero:
\[
\begin{bmatrix} 5 & 3 & 2 \\ -1 & 1 & 2 \\ 2 & 4 & 5 \end{bmatrix}
\begin{bmatrix} 1 \\ -3 \\ 2 \end{bmatrix}
= \begin{bmatrix} 5 \cdot 1 + 3 \cdot (-3) + 2 \cdot 2 \\ -1 \cdot 1 + 1 \cdot (-3) + 2 \cdot 2 \\ 2 \cdot 1 + 4 \cdot (-3) + 5 \cdot 2 \end{bmatrix}
= \begin{bmatrix} 5 - 9 + 4 \\ -1 - 3 + 4 \\ 2 - 12 + 10 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.
\]

Note that simply computing $Av$ is much quicker than using row reduction to solve $Ax = 0$. Moreover, if we solved $Ax = 0$, we would then have to express $v$ as a linear combination of the basis vectors for Nul A. By contrast, checking whether $v$ is in Col A does call for row reduction: we row reduce the augmented matrix $[\,A \mid v\,]$ and check for consistency.

Example 2. Is
\[
W = \left\{ \begin{bmatrix} r \\ s \\ t \end{bmatrix} :
\begin{array}{l} r + 2s = 3t \\ 4r - 5t = 0 \end{array} \right\}
\]
a vector space?

Answer. Yes. Let's rewrite the first equation as $r + 2s - 3t = 0$ and recall that the second equation is $4r - 5t = 0$. Put the coefficients of $r, s, t$ for each equation as the rows of a $2 \times 3$ matrix:
\[
A = \begin{bmatrix} 1 & 2 & -3 \\ 4 & 0 & -5 \end{bmatrix}.
\]
Then $W = \text{Nul } A$, as $W$ is the solution set to $Ax = 0$:
\[
\begin{bmatrix} 1 & 2 & -3 \\ 4 & 0 & -5 \end{bmatrix}
\begin{bmatrix} r \\ s \\ t \end{bmatrix}
= \begin{bmatrix} r + 2s - 3t \\ 4r - 5t \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]
Hence $W$ is a subspace of $\mathbb{R}^3$. It follows that $W$ is a vector space.

Example 3. Is
\[
W = \left\{ \begin{bmatrix} r \\ s \\ t \end{bmatrix} :
r \begin{bmatrix} 1 \\ 4 \end{bmatrix} + s \begin{bmatrix} 2 \\ 0 \end{bmatrix} + t \begin{bmatrix} -3 \\ -5 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \right\}
\]
a vector space?

Answer. Yes. This is the exact same solution set $W$ as in the previous example, except we rewrote the linear system as a vector equation.

Example 4. Is
\[
W = \left\{ \begin{bmatrix} r \\ s \\ t \end{bmatrix} :
\begin{array}{l} r + 2s = 7 + 3t \\ 4r - 5t = 0 \end{array} \right\}
\]
a vector space?

Answer. No. $W$ is the solution set to a non-homogeneous linear system, and in particular $W$ does not contain the zero vector.

Example 5. Is
\[
W = \left\{ \begin{bmatrix} 2r + s \\ 4r + 5s \\ 6r + 7s \end{bmatrix} : r, s \text{ in } \mathbb{R} \right\}
\]
a vector space?

Answer. Yes. Put the coefficients of $r$ in the 1st column of a matrix $A$:
\[
A = \begin{bmatrix} 2 & * \\ 4 & * \\ 6 & * \end{bmatrix}.
\]
Then put the coefficients of $s$ in the 2nd column of the matrix $A$:
\[
A = \begin{bmatrix} 2 & 1 \\ 4 & 5 \\ 6 & 7 \end{bmatrix}.
\]
Then $W = \text{Col } A$, as
\[
\begin{bmatrix} 2 & 1 \\ 4 & 5 \\ 6 & 7 \end{bmatrix}
\begin{bmatrix} r \\ s \end{bmatrix}
= \begin{bmatrix} 2r + s \\ 4r + 5s \\ 6r + 7s \end{bmatrix}
\]
is a general vector in $W$. Hence $W$ is a subspace of $\mathbb{R}^3$. It follows that $W$ is a vector space.

Compare and contrast column space and null space. Let $A$ be an $m \times n$ matrix.

|  | Nul A | Col A |
|---|---|---|
| Subspace of | $\mathbb{R}^n$ | $\mathbb{R}^m$ |
| Definition | Implicitly defined: solution set to $Ax = 0$ | Explicitly defined: span of the columns of $A$ |
| Finding a vector $v$ in the space | Hard: solve $Ax = 0$ | Easy: take $v$ to be a linear combination of the columns of $A$ |
| Checking if $v$ is in the space | Easy: check $Av = 0$ | Hard: check whether $Ax = v$ is consistent |
| Basis for the space | Solve $Ax = 0$ in vector parametric form | Pivot columns of $A$ |
| Existence and uniqueness for $Ax = b$ | Uniqueness $\Leftrightarrow$ Nul A $= \{0\}$ | Existence $\Leftrightarrow$ $b$ in Col A |
| Transformations | One-to-one $\Leftrightarrow$ Nul A $= \{0\}$ | Onto $\Leftrightarrow$ Col A $= \mathbb{R}^m$ |
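The "easy" checks in this table take one line each numerically. Below is a minimal NumPy sketch (an illustration, not part of the original notes) that verifies Example 1 and tests column-space membership via the standard rank criterion: $Ax = b$ is consistent exactly when appending $b$ to $A$ does not increase the rank.

```python
import numpy as np

# Example 1: check whether v is in Nul A by computing Av directly.
A = np.array([[ 5, 3, 2],
              [-1, 1, 2],
              [ 2, 4, 5]], dtype=float)
v = np.array([1, -3, 2], dtype=float)
print(np.allclose(A @ v, 0))            # True: v is in Nul A

# Check whether b is in Col A: Ax = b is consistent if and only if
# rank([A | b]) == rank(A).
b = np.array([5, -1, 2], dtype=float)   # the first column of A, so certainly in Col A
print(np.linalg.matrix_rank(np.column_stack([A, b]))
      == np.linalg.matrix_rank(A))      # True: b is in Col A
```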
Now let's generalize the notion of null space and column space to abstract vector spaces.

Definition 1. Let $V$ and $W$ be vector spaces. Recall that a transformation $T : V \to W$ is a rule which assigns to each $x$ in $V$ a unique vector $T(x)$ in $W$. We call $V$ the domain of $T$ and $W$ the codomain of $T$. A transformation $T : V \to W$ is linear if

(i) $T(u + v) = T(u) + T(v)$ for every $u, v$ in $V$, and
(ii) $T(cu) = c\,T(u)$ for every scalar $c$ and every $u$ in $V$.

Theorem 1. Let $V$ and $W$ be vector spaces and $T : V \to W$ be a linear transformation. Then

(i) $T(0) = 0$;
(ii) $T(c_1 v_1 + c_2 v_2 + \cdots + c_p v_p) = c_1 T(v_1) + c_2 T(v_2) + \cdots + c_p T(v_p)$ for all scalars $c_1, c_2, \ldots, c_p$ and vectors $v_1, v_2, \ldots, v_p$.

Reason. The argument is exactly the same as in Section 1.8.

Now we define the subspaces associated with a linear transformation $T$: the kernel and the range.

Definition 2. Let $V$ and $W$ be vector spaces and $T : V \to W$ be a linear transformation.

• The kernel of $T$ is the set of all $x$ in $V$ such that $T(x) = 0$.
• The range of $T$ is the set of all images $T(x)$ in $W$ over all $x$ in $V$.

Example 6 (Matrix transformation). Let $T : \mathbb{R}^n \to \mathbb{R}^m$ be a linear transformation. In Section 1.9 we showed that every such linear transformation takes the form $T(x) = Ax$ for all $x$ in $\mathbb{R}^n$, where $A$ is an $m \times n$ matrix. Then $\ker T = \text{Nul } A$ and $\text{range } T = \text{Col } A$.

Example 7 (Evaluation map). Recall that $P_2$ is the space of all polynomials of degree at most 2. Take a point in $\mathbb{R}$, say 0, and define the evaluation map $T : P_2 \to \mathbb{R}$ by
\[
T(p) = p(0)
\]
for each polynomial $p(x) = a_0 + a_1 x + a_2 x^2$ in $P_2$. Then $T$ is a linear transformation. To check this, given polynomials $p, q$ in $P_2$ and a scalar $c$:
\[
T(p + q) = (p + q)(0) = p(0) + q(0) = T(p) + T(q), \qquad
T(cp) = (cp)(0) = c\,p(0) = c\,T(p),
\]
using the definitions of polynomial addition and scalar multiplication. To find the kernel and range of $T$, we can write the evaluation map as
\[
T(a_0 + a_1 x + a_2 x^2) = a_0 + a_1 \cdot 0 + a_2 \cdot 0^2 = a_0,
\]
so that $T$ maps a polynomial to its constant coefficient $a_0$. One can readily see that the range of $T$ is the set of all possible constant coefficients $a_0$, i.e. $\text{range } T = \mathbb{R}$. In particular, $T$ maps onto $\mathbb{R}$. The kernel of $T$ is
\[
\ker T = \{ a_1 x + a_2 x^2 : a_1, a_2 \text{ in } \mathbb{R} \}
= \{ x(a_1 + a_2 x) : a_1, a_2 \text{ in } \mathbb{R} \}
= \text{Span}\{ x, x^2 \}.
\]
Of course, we could also consider the evaluation map at another point. For instance, if we defined $T$ by $T(p) = p(1)$ for each polynomial $p$ in $P_2$, then $T$ is onto and
\[
\ker T = \{ (x - 1)(a_1 + a_2 x) : a_1, a_2 \text{ in } \mathbb{R} \} = \text{Span}\{ x - 1, x^2 - x \}.
\]

Example 8 (Derivative). Let $C^1([0,1])$ be the space of continuously differentiable real-valued functions $f : [0,1] \to \mathbb{R}$, i.e. $f$ takes values $f(x)$ at each $0 \le x \le 1$. Let $T : C^1([0,1]) \to C^0([0,1])$ be the derivative
\[
(Tf)(x) = \frac{df}{dx}(x) \quad \text{for each } 0 \le x \le 1
\]
and for each continuously differentiable function $f : [0,1] \to \mathbb{R}$. By the standard properties of derivatives, $T$ is linear. As a consequence of the Mean Value Theorem, $\frac{df}{dx}(x) = 0$ for each $0 \le x \le 1$ if and only if $f$ is a constant function; that is, $\ker T$ is the set of all constant functions. Clearly for each continuously differentiable function $f$, its derivative $Tf = \frac{df}{dx}$ is continuous, so the range of $T$ lies in the space of continuous functions. In fact, by the Fundamental Theorem of Calculus, for each continuous function $f : [0,1] \to \mathbb{R}$ there is an anti-derivative $F : [0,1] \to \mathbb{R}$ given by
\[
F(x) = \int_0^x f(t)\,dt
\]
such that
\[
\frac{dF}{dx}(x) = f(x)
\]
for each $x$ in $[0,1]$; hence the range of $T$ is the entire space of continuous functions. The general anti-derivative of $f$ is given by
\[
\int f(x)\,dx = F(x) + C
\]
for all $x$ in $[0,1]$, where $C$ is a constant and represents a function in the kernel of the derivative operator. This is why one always writes "+C" when integrating functions.
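Identifying a polynomial $p(x) = a_0 + a_1 x + a_2 x^2$ in $P_2$ with its coefficient vector $(a_0, a_1, a_2)$ in $\mathbb{R}^3$, each evaluation map above becomes a $1 \times 3$ matrix, and the kernels found in Example 7 can be checked numerically. A minimal sketch under that identification (the variable names are illustrative, not from the notes):

```python
import numpy as np

# Under the basis {1, x, x^2}, evaluation at 0 is the matrix [1 0 0]:
# T(a0 + a1*x + a2*x^2) = a0, with kernel Span{x, x^2}.
T0 = np.array([[1.0, 0.0, 0.0]])
p = np.array([0.0, 2.0, -7.0])     # p(x) = 2x - 7x^2, zero constant term
q = np.array([4.0, 1.0, 3.0])      # q(x) = 4 + x + 3x^2
print(T0 @ p)                      # [0.] -> p is in ker T
print(T0 @ q)                      # [4.] -> q is not in ker T

# Evaluation at 1 is the matrix [1 1 1]: T(p) = a0 + a1 + a2 = p(1).
T1 = np.array([[1.0, 1.0, 1.0]])
r = np.array([-1.0, 1.0, 0.0])     # r(x) = x - 1, which vanishes at x = 1
print(T1 @ r)                      # [0.] -> r is in ker T
```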
Theorem 2. Let $V$ and $W$ be vector spaces and $T : V \to W$ be a linear transformation. Then the kernel of $T$ is a subspace of $V$ and the range of $T$ is a subspace of $W$.

Reason. To check that the kernel of $T$ is a subspace, we need to check the three properties of a subspace. This proceeds exactly as it did in the case that $T$ was given by matrix multiplication, using the linearity of $T$.

• Zero vector: Since $T$ is a linear transformation, $T(0) = 0$, and thus $0$ is in the kernel of $T$.
• Addition: Suppose $u, v$ lie in the kernel of $T$; that is, $T(u) = 0$ and $T(v) = 0$. Since $T$ is linear,
\[
T(u + v) = T(u) + T(v) = 0 + 0 = 0,
\]
so $u + v$ is in the kernel of $T$.
• Scaling: Suppose $u$ lies in the kernel of $T$ and $c$ is a scalar; that is, $T(u) = 0$. Since $T$ is linear,
\[
T(cu) = c\,T(u) = c\,0 = 0,
\]
so $cu$ is in the kernel of $T$.
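In the matrix case $T(x) = Ax$, these three closure properties can be sanity-checked numerically. A small sketch (reusing the matrix $A$ from Example 2, with two kernel vectors read off from the parametric solution $r = \frac{5}{4}t$, $s = \frac{7}{8}t$):

```python
import numpy as np

A = np.array([[1.0, 2.0, -3.0],
              [4.0, 0.0, -5.0]])

# Two vectors in Nul A, from the parametric solution with t = 4 and t = 8:
u = np.array([ 5.0, 3.5, 4.0])
w = np.array([10.0, 7.0, 8.0])

# The zero vector, sums, and scalar multiples all stay in the kernel.
for x in (np.zeros(3), u, w, u + w, 2.5 * u):
    assert np.allclose(A @ x, 0)
print("closure checks passed")
```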