4 Images, Kernels, and Subspaces


In our study of linear transformations we've examined some of the conditions under which a transformation is invertible. Now we're ready to investigate some ideas similar to invertibility. Namely, we would like to measure the ways in which a transformation that is not invertible fails to have an inverse.

4.1 The Image and Kernel of a Linear Transformation

Definition. The image of a function consists of all the values the function assumes. If $f : X \to Y$ is a function from $X$ to $Y$, then
$$\operatorname{im}(f) = \{f(x) : x \in X\}.$$
Notice that $\operatorname{im}(f)$ is a subset of $Y$.

Definition. The kernel of a function whose range is $\mathbb{R}^n$ consists of all the values in its domain at which the function assumes the value $\mathbf{0}$. If $f : X \to \mathbb{R}^n$ is a function from $X$ to $\mathbb{R}^n$, then
$$\ker(f) = \{x \in X : f(x) = \mathbf{0}\}.$$
Notice that $\ker(f)$ is a subset of $X$. Also, if $T(\mathbf{x}) = A\mathbf{x}$ is a linear transformation from $\mathbb{R}^m$ to $\mathbb{R}^n$, then $\ker(T)$ (also denoted $\ker(A)$) is the set of solutions to the equation $A\mathbf{x} = \mathbf{0}$.

The kernel gives us some new ways to characterize invertible matrices.

Theorem 1. Let $A$ be an $n \times n$ matrix. Then the following statements are equivalent.

1. $A$ is invertible.
2. The linear system $A\mathbf{x} = \mathbf{b}$ has a unique solution $\mathbf{x}$ for every $\mathbf{b} \in \mathbb{R}^n$.
3. $\operatorname{rref}(A) = I_n$.
4. $\operatorname{rank}(A) = n$.
5. $\operatorname{im}(A) = \mathbb{R}^n$.
6. $\ker(A) = \{\mathbf{0}\}$.

Example 13. (§3.1, Exercise 39 of [1]) Consider an $n \times p$ matrix $A$ and a $p \times m$ matrix $B$.

(a) What is the relationship between $\ker(AB)$ and $\ker(B)$? Are they always equal? Is one of them always contained in the other?

(b) What is the relationship between $\operatorname{im}(A)$ and $\operatorname{im}(AB)$?

(Solution)

(a) Recall that $\ker(AB)$ is the set of vectors $\mathbf{x} \in \mathbb{R}^m$ for which $AB\mathbf{x} = \mathbf{0}$, and similarly that $\ker(B)$ is the set of vectors $\mathbf{x} \in \mathbb{R}^m$ for which $B\mathbf{x} = \mathbf{0}$. Now if $\mathbf{x}$ is in $\ker(B)$, then $B\mathbf{x} = \mathbf{0}$, so $AB\mathbf{x} = \mathbf{0}$. This means that $\mathbf{x}$ is in $\ker(AB)$, so we see that $\ker(B)$ must always be contained in $\ker(AB)$. On the other hand, $\ker(AB)$ might not be a subset of $\ker(B)$. For instance, suppose that
$$A = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$
Then $B$ is the identity matrix, so $\ker(B) = \{\mathbf{0}\}$. But every vector has image zero under $AB$, so $\ker(AB) = \mathbb{R}^2$. Certainly $\ker(B)$ does not contain $\ker(AB)$ in this case.

(b) Suppose $\mathbf{y}$ is in the image of $AB$. Then $\mathbf{y} = AB\mathbf{x}$ for some $\mathbf{x} \in \mathbb{R}^m$. That is,
$$\mathbf{y} = AB\mathbf{x} = A(B\mathbf{x}),$$
so $\mathbf{y}$ is the image of $B\mathbf{x}$ under multiplication by $A$, and is thus in the image of $A$. So $\operatorname{im}(A)$ contains $\operatorname{im}(AB)$. On the other hand, consider
$$A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}.$$
Then
$$AB = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix},$$
so $\operatorname{im}(AB) = \{\mathbf{0}\}$, but the image of $A$ is the span of the vector $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$. So $\operatorname{im}(AB)$ does not necessarily contain $\operatorname{im}(A)$. ♦
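As an aside (not part of [1]), both containments in Example 13 can be checked computationally. The following sketch uses Python's sympy library, assuming it is available; its `nullspace()` and `columnspace()` methods return bases for the kernel and image of a matrix, so an empty list signals the trivial subspace $\{\mathbf{0}\}$.

```python
from sympy import Matrix, eye, zeros

# Matrices from Example 13(a): A is the 2x2 zero matrix and B = I_2.
A = zeros(2, 2)
B = eye(2)
print(B.nullspace())        # []              -> ker(B) = {0}
print((A * B).nullspace())  # 2 basis vectors -> ker(AB) = R^2

# ker(B) is contained in ker(AB): AB sends every basis vector of ker(B) to 0.
assert all(A * B * v == zeros(2, 1) for v in B.nullspace())

# Matrices from Example 13(b): im(AB) = {0}, while im(A) is a line.
A = Matrix([[1, 0], [0, 0]])
B = Matrix([[0, 0], [0, 1]])
print((A * B).columnspace())  # []                   -> im(AB) = {0}
print(A.columnspace())        # [Matrix([[1], [0]])] -> im(A) = span of (1, 0)
```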
Example 14. (§3.1, Exercise 48 of [1]) Consider a $2 \times 2$ matrix $A$ with $A^2 = A$.

(a) If $\mathbf{w}$ is in the image of $A$, what is the relationship between $\mathbf{w}$ and $A\mathbf{w}$?

(b) What can you say about $A$ if $\operatorname{rank}(A) = 2$? What if $\operatorname{rank}(A) = 0$?

(c) If $\operatorname{rank}(A) = 1$, show that the linear transformation $T(\mathbf{x}) = A\mathbf{x}$ is the projection onto $\operatorname{im}(A)$ along $\ker(A)$.

(Solution)

(a) If $\mathbf{w}$ is in the image of $A$, then $\mathbf{w} = A\mathbf{v}$ for some $\mathbf{v} \in \mathbb{R}^2$. Then
$$A\mathbf{w} = A(A\mathbf{v}) = A^2\mathbf{v} = A\mathbf{v} = \mathbf{w},$$
since $A^2 = A$. So $A\mathbf{w} = \mathbf{w}$.

(b) If $\operatorname{rank}(A) = 2$, then $A$ is invertible. Since $A^2 = A$, we see that
$$A = I_2 A = (A^{-1}A)A = A^{-1}A^2 = A^{-1}A = I_2.$$
So the only rank-2, $2 \times 2$ matrix with the property that $A^2 = A$ is the identity matrix. On the other hand, if $\operatorname{rank}(A) = 0$ then $A$ must be the zero matrix.

(c) If $\operatorname{rank}(A) = 1$, then $A$ is not invertible, so $\ker(A) \neq \{\mathbf{0}\}$. But we also know that $A$ is not the zero matrix, so $\ker(A) \neq \mathbb{R}^2$. We conclude that $\ker(A)$ must be a line in $\mathbb{R}^2$. Next, suppose we have $\mathbf{w} \in \ker(A) \cap \operatorname{im}(A)$. Then $A\mathbf{w} = \mathbf{0}$ and, according to part (a), $A\mathbf{w} = \mathbf{w}$. So $\mathbf{w}$ is the zero vector, meaning that $\ker(A) \cap \operatorname{im}(A) = \{\mathbf{0}\}$. Since $\operatorname{im}(A)$ is neither $\{\mathbf{0}\}$ nor all of $\mathbb{R}^2$, it also must be a line in $\mathbb{R}^2$. So $\ker(A)$ and $\operatorname{im}(A)$ are non-parallel lines in $\mathbb{R}^2$. Now choose $\mathbf{x} \in \mathbb{R}^2$ and let $\mathbf{w} = \mathbf{x} - A\mathbf{x}$. Notice that $A\mathbf{w} = A\mathbf{x} - A^2\mathbf{x} = \mathbf{0}$, so $\mathbf{w} \in \ker(A)$. Then we may write $\mathbf{x}$ as the sum of an element of $\operatorname{im}(A)$ and an element of $\ker(A)$:
$$\mathbf{x} = A\mathbf{x} + \mathbf{w}.$$
According to Exercise 2.2.33, the map $T(\mathbf{x}) = A\mathbf{x}$ is then the projection onto $\operatorname{im}(A)$ along $\ker(A)$. ♦

Example 15. (§3.1, Exercise 50 of [1]) Consider a square matrix $A$ with $\ker(A^2) = \ker(A^3)$. Is $\ker(A^3) = \ker(A^4)$? Justify your answer.

(Solution) Suppose $\mathbf{x} \in \ker(A^3)$. Then $A^3\mathbf{x} = \mathbf{0}$, so
$$A^4\mathbf{x} = A(A^3\mathbf{x}) = A\mathbf{0} = \mathbf{0},$$
meaning that $\mathbf{x} \in \ker(A^4)$. So $\ker(A^3)$ is contained in $\ker(A^4)$. On the other hand, suppose $\mathbf{x} \in \ker(A^4)$. Then $A^4\mathbf{x} = \mathbf{0}$, so $A^3(A\mathbf{x}) = \mathbf{0}$. This means that $A\mathbf{x}$ is in the kernel of $A^3$, and thus in $\ker(A^2)$. So
$$A^3\mathbf{x} = A^2(A\mathbf{x}) = \mathbf{0},$$
meaning that $\mathbf{x} \in \ker(A^3)$. So $\ker(A^4)$ is contained in $\ker(A^3)$. Since each set contains the other, the two are equal: $\ker(A^3) = \ker(A^4)$. ♦

4.2 Subspaces

Definition. A subset $W$ of the vector space $\mathbb{R}^n$ is called a subspace of $\mathbb{R}^n$ if it

(i) contains the zero vector;
(ii) is closed under vector addition;
(iii) is closed under scalar multiplication.

One important observation we can immediately make is that for any $n \times m$ matrix $A$, $\ker(A)$ is a subspace of $\mathbb{R}^m$ and $\operatorname{im}(A)$ is a subspace of $\mathbb{R}^n$.

Definition. Suppose we have vectors $\mathbf{v}_1, \ldots, \mathbf{v}_m$ in $\mathbb{R}^n$. We say that a vector $\mathbf{v}_i$ is redundant if $\mathbf{v}_i$ is a linear combination of the preceding vectors $\mathbf{v}_1, \ldots, \mathbf{v}_{i-1}$. We say that the set of vectors $\mathbf{v}_1, \ldots, \mathbf{v}_m$ is linearly independent if none of them is redundant, and linearly dependent otherwise. If the vectors $\mathbf{v}_1, \ldots, \mathbf{v}_m$ are linearly independent and span a subspace $V$ of $\mathbb{R}^n$, we say that $\mathbf{v}_1, \ldots, \mathbf{v}_m$ form a basis of $V$.

Example 16. (§3.2, Exercise 26 of [1]) Find a redundant column vector of the following matrix $A$ and write it as a linear combination of the preceding columns. Use this representation to write a nontrivial relation among the columns, and thus find a nonzero vector in the kernel of $A$.
$$A = \begin{bmatrix} 1 & 3 & 6 \\ 1 & 2 & 5 \\ 1 & 1 & 4 \end{bmatrix}.$$

(Solution) First we notice that
$$3\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} + \begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 6 \\ 5 \\ 4 \end{bmatrix},$$
meaning that the third column of $A$ is redundant. This allows us to write a nontrivial relation
$$3\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} + 1\begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix} - 1\begin{bmatrix} 6 \\ 5 \\ 4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
among the columns. Finally, these coefficients give us a nonzero element of $\ker(A)$, since
$$\begin{bmatrix} 1 & 3 & 6 \\ 1 & 2 & 5 \\ 1 & 1 & 4 \end{bmatrix} \begin{bmatrix} 3 \\ 1 \\ -1 \end{bmatrix} = 3\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} + 1\begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix} - 1\begin{bmatrix} 6 \\ 5 \\ 4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}. \quad ♦$$
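The same redundant column and kernel vector can also be found mechanically from the reduced row-echelon form. The sketch below is an illustrative aside (not from [1]) in Python with sympy, assumed to be available; note that `nullspace()` happens to return $(-3, -1, 1)^T$, a scalar multiple of the kernel vector found above.

```python
from sympy import Matrix

A = Matrix([[1, 3, 6],
            [1, 2, 5],
            [1, 1, 4]])

# rref() returns the reduced row-echelon form and the pivot columns;
# column 2 (the third column) is non-pivotal, hence redundant.
R, pivots = A.rref()
print(pivots)         # (0, 1)
print(A.nullspace())  # [Matrix([[-3], [-1], [1]])], a multiple of (3, 1, -1)

# Verify the nontrivial relation among the columns from Example 16.
v = Matrix([3, 1, -1])
assert A * v == Matrix([0, 0, 0])
```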
Example 17. (§3.2, Exercise 53 of [1]) Consider a subspace $V$ of $\mathbb{R}^n$. We define the orthogonal complement $V^\perp$ of $V$ as the set of those vectors $\mathbf{w}$ in $\mathbb{R}^n$ that are perpendicular to all vectors in $V$; that is, $\mathbf{w} \cdot \mathbf{v} = 0$ for all $\mathbf{v}$ in $V$. Show that $V^\perp$ is a subspace of $\mathbb{R}^n$.

(Solution) We have three properties to check: that $V^\perp$ contains the zero vector, that it is closed under addition, and that it is closed under scalar multiplication. Certainly $\mathbf{0} \cdot \mathbf{v} = 0$ for every $\mathbf{v} \in V$, so $\mathbf{0} \in V^\perp$. Next, suppose we have vectors $\mathbf{w}_1$ and $\mathbf{w}_2$ in $V^\perp$. Then for every $\mathbf{v} \in V$,
$$(\mathbf{w}_1 + \mathbf{w}_2) \cdot \mathbf{v} = \mathbf{w}_1 \cdot \mathbf{v} + \mathbf{w}_2 \cdot \mathbf{v} = 0 + 0 = 0,$$
since $\mathbf{w}_1 \cdot \mathbf{v} = 0$ and $\mathbf{w}_2 \cdot \mathbf{v} = 0$. So $\mathbf{w}_1 + \mathbf{w}_2$ is in $V^\perp$, meaning that $V^\perp$ is closed under addition. Finally, suppose we have $\mathbf{w}$ in $V^\perp$ and a scalar $k$. Then
$$(k\mathbf{w}) \cdot \mathbf{v} = k(\mathbf{w} \cdot \mathbf{v}) = 0,$$
so $k\mathbf{w} \in V^\perp$. So $V^\perp$ is closed under scalar multiplication, and is thus a subspace of $\mathbb{R}^n$. ♦

Example 18. (§3.2, Exercise 54 of [1]) Consider the line $L$ spanned by $\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$ in $\mathbb{R}^3$. Find a basis of $L^\perp$. See Exercise 53.

(Solution) Suppose $\mathbf{v}$, with components $v_1$, $v_2$, and $v_3$, is in $L^\perp$. Then
$$0 = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} = v_1 + 2v_2 + 3v_3.$$
This is a linear equation in three variables. Its solution set has two free variables, $v_2$ and $v_3$, and the remaining variable can be given in terms of these: $v_1 = -2v_2 - 3v_3$. Consider the vectors
$$\mathbf{u}_1 = \begin{bmatrix} -2 \\ 1 \\ 0 \end{bmatrix} \quad\text{and}\quad \mathbf{u}_2 = \begin{bmatrix} -3 \\ 0 \\ 1 \end{bmatrix}.$$
We can check that $\mathbf{u}_1$ and $\mathbf{u}_2$ are both in $L^\perp$, and since neither is a scalar multiple of the other, these two vectors are linearly independent. Moreover, every $\mathbf{v} \in L^\perp$ can be written as $\mathbf{v} = v_2\mathbf{u}_1 + v_3\mathbf{u}_2$, so $\mathbf{u}_1$ and $\mathbf{u}_2$ also span $L^\perp$, and therefore form a basis of $L^\perp$. ♦
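In general, the orthogonal complement of the span of a set of vectors is the kernel of the matrix having those vectors as its rows, since $\mathbf{v} \in L^\perp$ exactly when $v_1 + 2v_2 + 3v_3 = 0$. So Example 18 can also be solved by computing a nullspace; here is a sketch of that check (an aside, not from [1]) in Python with sympy, assuming the library is available.

```python
from sympy import Matrix

# L is spanned by (1, 2, 3); L-perp is the kernel of the 1x3 matrix [1 2 3].
M = Matrix([[1, 2, 3]])
basis = M.nullspace()
print(basis)  # [Matrix([[-2], [1], [0]]), Matrix([[-3], [0], [1]])]

# Each basis vector is perpendicular to the spanning vector of L.
u = Matrix([1, 2, 3])
assert all(b.dot(u) == 0 for b in basis)
```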