8. Linear Maps


In this chapter, we study the notion of a linear map of abstract vector spaces. This is the abstraction of the notion of a linear transformation on $\mathbb{R}^n$.

8.1. Basic definitions

Definition 8.1. Let $U, V$ be vector spaces. A linear map $L : U \to V$ (read "$L$ from $U$ to $V$") is a rule which assigns to each element of $U$ an element in $V$, such that $L$ preserves addition and scaling. That is,
$$L(u_1 + u_2) = L(u_1) + L(u_2), \qquad L(cu) = cL(u).$$
Notation: we sometimes write $L : u \mapsto v$ if $L(u) = v$, and we call $v$ the image of $u$ under $L$.

The two defining properties of a linear map are called linearity. Where does a linear map $L : U \to V$ send the zero element $O$ in $U$? By linearity, we have
$$L(O) = L(0O) = 0L(O).$$
But $0L(O)$ is the zero element of $V$, which we also denote by $O$. Thus $L(O) = O$.

Example. For any vector space $U$, we have a linear map $\mathrm{Id}_U : U \to U$ given by $\mathrm{Id}_U : u \mapsto u$. This is called the identity map. For any other vector space $V$, we also have a linear map $O : U \to V$ given by $O : u \mapsto O$. This is called the zero map.

Example. Let $A$ be an $m \times n$ matrix, and define
$$L_A : \mathbb{R}^n \to \mathbb{R}^m, \qquad L_A(X) = AX.$$
We have seen, in chapter 3, that this map is linear. We have also seen that every linear map $L : \mathbb{R}^n \to \mathbb{R}^m$ is represented by a matrix, i.e. $L = L_A$ for some $m \times n$ matrix $A$.

Example. Let $A$ be an $m \times n$ matrix, and define
$$L_A : M(n, k) \to M(m, k), \qquad L_A(B) = AB.$$
This map is linear because
$$A(B_1 + B_2) = AB_1 + AB_2, \qquad A(cB) = c(AB)$$
for any $B, B_1, B_2 \in M(n, k)$ and any number $c$ (chapter 3).

Exercise. Is $L : \mathbb{R}^n \to \mathbb{R}$ defined by $L(X) = X \cdot X$ linear?

Exercise. Fix a vector $A$ in $\mathbb{R}^n$. Is $L : \mathbb{R}^n \to \mathbb{R}$ defined by $L(X) = A \cdot X$ linear?

Example. (With calculus) Let $C^\infty$ be the vector space of smooth functions, and let $D(f) = f'$ (differentiation). Then $D : C^\infty \to C^\infty$ is linear.

Example. (With calculus) Continuing with the preceding example, let $V_n = \mathrm{Span}\{1, t, \dots, t^n\}$ in $C^\infty$. Since $D(t^k) = kt^{k-1}$ for $k = 0, 1, 2, \dots$, it follows that $D(f)$ lands in $V_{n-1}$ for any function $f$ in $V_n$. Thus we can regard $D$ as a linear map $D : V_n \to V_{n-1}$.

Theorem 8.2. Let $U, V$ be vector spaces, $S$ be a basis of $U$, and let $f : S \to V$ be any map. Then there is a unique linear map $L : U \to V$ such that $L(u) = f(u)$ for every element $u$ in $S$.

Proof: If $u_1, \dots, u_k$ are distinct elements in $S$, then we define
$$L\Big(\sum_{i=1}^k a_i u_i\Big) = \sum_{i=1}^k a_i f(u_i)$$
for any numbers $a_i$. Since every element $w$ in $U$ can be expressed uniquely in the form $\sum_{i=1}^k a_i u_i$, this defines a map $L : U \to V$. Moreover, $L(u) = f(u)$ for every $u$ in $S$. So it remains to verify that $L$ is linear. Let $w_1, w_2, w$ be elements in $U$, and $c$ be a scalar. We can write
$$w_1 = \sum_{i=1}^k a_i u_i, \qquad w_2 = \sum_{i=1}^k a_i' u_i, \qquad w = \sum_{i=1}^k a_i u_i,$$
where $u_1, \dots, u_k$ are distinct elements in $S$, and the $a$'s are numbers. Then
$$L(w_1 + w_2) = L\Big[\sum (a_i + a_i') u_i\Big] = \sum (a_i + a_i') f(u_i) = \sum a_i f(u_i) + \sum a_i' f(u_i) = L(w_1) + L(w_2),$$
$$L(cw) = L\Big[c \sum a_i u_i\Big] = L\Big[\sum (ca_i) u_i\Big] = \sum (ca_i) f(u_i) = c \sum a_i f(u_i) = cL(w).$$
Thus $L$ is linear.

For the rest of this section, $L : U \to V$ will be a linear map.

Definition 8.3. The set
$$\mathrm{Ker}\, L = \{u \in U \mid L(u) = O\}$$
is called the kernel of $L$. The set
$$\mathrm{Im}\, L = \{L(u) \mid u \in U\}$$
is called the image of $L$. More generally, for any subspace $W \subset U$, we write
$$L(W) = \{L(w) \mid w \in W\}.$$

Lemma 8.4. $\mathrm{Ker}\, L$ is a linear subspace of $U$, and $\mathrm{Im}\, L$ is a linear subspace of $V$.

Proof: If $u_1, u_2 \in \mathrm{Ker}\, L$, then
$$L(u_1 + u_2) = L(u_1) + L(u_2) = O + O = O.$$
Thus $u_1 + u_2 \in \mathrm{Ker}\, L$. If $c$ is any number and $u \in \mathrm{Ker}\, L$, then
$$L(cu) = cL(u) = cO = O.$$
Thus $cu \in \mathrm{Ker}\, L$. Hence $\mathrm{Ker}\, L$ is closed under addition and scaling in $U$, and so is a linear subspace of $U$. The proof for $\mathrm{Im}\, L$ is similar and is left as an exercise.
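The matrix example $L_A(X) = AX$ lends itself to a quick numerical check. The following is a minimal sketch (using NumPy; the code, the random test data, and the names such as `L`, `u1`, `u2` are illustrative and not part of the original text) verifying the two defining properties of linearity and the fact that the zero vector maps to the zero vector:

```python
# Minimal sketch (assuming NumPy) checking that L_A(X) = AX preserves
# addition and scaling; the matrix and vectors are arbitrary test data.
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4
A = rng.standard_normal((m, n))          # an arbitrary m x n matrix
u1, u2 = rng.standard_normal(n), rng.standard_normal(n)
c = 2.5

def L(X):
    """The linear map L_A : R^n -> R^m, X |-> AX."""
    return A @ X

# The two defining properties of linearity (up to floating-point error).
assert np.allclose(L(u1 + u2), L(u1) + L(u2))
assert np.allclose(L(c * u1), c * L(u1))

# Consequence derived above: the zero vector maps to the zero vector.
assert np.allclose(L(np.zeros(n)), np.zeros(m))
print("L_A preserves addition and scaling, and L_A(O) = O")
```

Since floating-point arithmetic is only approximate, the checks use `np.allclose` rather than exact equality.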
Example. Let $A$ be an $m \times n$ matrix, and let $L_A : \mathbb{R}^n \to \mathbb{R}^m$ with $L_A(X) = AX$. Then
$$\mathrm{Ker}\, L_A = \mathrm{Null}(A), \qquad \mathrm{Im}\, L_A = \mathrm{Row}(A^t).$$
This shows that the notions of kernel and image are abstractions of the notions of null space and row space.

Example. (With calculus) Consider $D : C^\infty \to C^\infty$ with $D(f) = f'$. $\mathrm{Ker}\, D$ consists of just the constant functions. Any smooth function is the derivative of a smooth function (fundamental theorem of calculus). Thus $\mathrm{Im}\, D = C^\infty$.

Example. (With calculus) As before, let $V_n = \mathrm{Span}\{1, t, \dots, t^n\}$ in $C^\infty$, and consider $D : V_n \to V_{n-1}$. As before, $\mathrm{Ker}\, D$ consists of the constant functions. Since $D(t^k) = kt^{k-1}$ for $k = 0, 1, 2, \dots$, it follows that $1, t, \dots, t^{n-1}$ are all in $\mathrm{Im}\, D$. Thus any of their linear combinations is also in $\mathrm{Im}\, D$. So we have $\mathrm{Im}\, D = V_{n-1}$.

Example. Fix a nonzero vector $A$ in $\mathbb{R}^n$, and define
$$L : \mathbb{R}^n \to \mathbb{R}, \qquad L : X \mapsto A \cdot X.$$
Then $\mathrm{Ker}\, L = V^{\perp}$, where $V$ is the line spanned by $A$, and $\mathrm{Im}\, L = \mathbb{R}$.

Example. Let $L : U \to V$ be a linear map, and $W$ be a linear subspace of $U$. We define a new map $L|_W : W \to V$ as follows:
$$L|_W(w) = L(w).$$
This map is linear. $L|_W$ is called the restriction of $L$ to $W$.

8.2. A dimension relation

Throughout this section, $L : U \to V$ will be a linear map of finite dimensional vector spaces.

Lemma 8.5. Suppose that $\mathrm{Ker}\, L = \{O\}$ and that $\{u_1, \dots, u_k\}$ is a linearly independent set in $U$. Then $L(u_1), \dots, L(u_k)$ form a linearly independent set.

Proof: Let
$$\sum_{i=1}^k x_i L(u_i) = O.$$
By linearity of $L$, this reads
$$L\Big(\sum_{i=1}^k x_i u_i\Big) = O.$$
Since $\mathrm{Ker}\, L = \{O\}$, this implies that
$$\sum_{i=1}^k x_i u_i = O.$$
Since $\{u_1, \dots, u_k\}$ is linearly independent, it follows that the $x$'s are all zero. Thus $L(u_1), \dots, L(u_k)$ form a linearly independent set.

Theorem 8.6. (Rank-nullity relation)
$$\dim U = \dim(\mathrm{Ker}\, L) + \dim(\mathrm{Im}\, L).$$

Proof: If $\mathrm{Im}\, L$ is the zero space, then $\mathrm{Ker}\, L = U$, and the theorem holds trivially. Suppose $\mathrm{Im}\, L$ is not the zero space, and let $\{v_1, \dots, v_s\}$ be a basis of $\mathrm{Im}\, L$; choose $u_i$ in $U$ with $L(u_i) = v_i$. Let $\{w_1, \dots, w_r\}$ be a basis of $\mathrm{Ker}\, L \subset U$. We will show that the elements $u_1, \dots, u_s, w_1, \dots, w_r$ form a linearly independent set that spans $U$.

• Linear independence: Let
$$x_1 u_1 + \cdots + x_s u_s + y_1 w_1 + \cdots + y_r w_r = O.$$
Applying $L$ to this, we get
$$x_1 v_1 + \cdots + x_s v_s = O.$$
Since $\{v_1, \dots, v_s\}$ is linearly independent, the $x$'s are all zero, and so
$$y_1 w_1 + \cdots + y_r w_r = O.$$
Since $\{w_1, \dots, w_r\}$ is linearly independent, the $y$'s are all zero. This shows that $u_1, \dots, u_s, w_1, \dots, w_r$ form a linearly independent set.

• Spanning property: Let $u \in U$. Since $\{v_1, \dots, v_s\}$ spans $\mathrm{Im}\, L$, we have
$$L(u) = a_1 v_1 + \cdots + a_s v_s$$
for some $a_i$. This gives $L(u) = a_1 L(u_1) + \cdots + a_s L(u_s)$. By linearity of $L$, this yields
$$L(u - a_1 u_1 - \cdots - a_s u_s) = O,$$
which means that $u - a_1 u_1 - \cdots - a_s u_s \in \mathrm{Ker}\, L$. So
$$u - a_1 u_1 - \cdots - a_s u_s = b_1 w_1 + \cdots + b_r w_r$$
because $\{w_1, \dots, w_r\}$ spans $\mathrm{Ker}\, L$. This shows that
$$u = a_1 u_1 + \cdots + a_s u_s + b_1 w_1 + \cdots + b_r w_r.$$
This completes the proof.

Example. Let $A$ be an $m \times n$ matrix, and consider $L_A : \mathbb{R}^n \to \mathbb{R}^m$, $L_A(X) = AX$. Since
$$\mathrm{Ker}\, L_A = \mathrm{Null}(A), \qquad \mathrm{Im}\, L_A = \mathrm{Row}(A^t), \qquad \mathrm{rank}\, A = \dim \mathrm{Row}(A^t),$$
the preceding theorem yields
$$n = \mathrm{rank}\, A + \dim \mathrm{Null}(A).$$
We have proven this in chapter 4 by using an orthogonal basis. Note that the preceding theorem does not involve using any inner product.

Example. (With calculus) As before, let $V_n = \mathrm{Span}\{1, t, \dots, t^n\}$ in $C^\infty$, and consider $D : V_n \to V_{n-1}$. Note that $\dim V_n = n + 1$. We have seen that $\mathrm{Ker}\, D$ consists of the constant functions, and that $\mathrm{Im}\, D = V_{n-1}$. So $\dim(\mathrm{Ker}\, D) = 1$ and $\dim(\mathrm{Im}\, D) = n$. Thus, in this case,
$$\dim V_n = \dim(\mathrm{Ker}\, D) + \dim(\mathrm{Im}\, D),$$
as expected.
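The matrix form of the rank-nullity relation can also be checked numerically. The sketch below (NumPy; the rank-3 test matrix and all variable names are illustrative assumptions, not part of the original text) exhibits a basis of $\mathrm{Null}(A)$ from the singular value decomposition and confirms that its size plus $\mathrm{rank}\, A$ equals $n$:

```python
# Hedged sketch (assuming NumPy) of the rank-nullity relation
# n = rank A + dim Null(A) for L_A(X) = AX, with a rank-3 test matrix.
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 6
A = rng.standard_normal((m, 3)) @ rng.standard_normal((3, n))   # rank 3 by construction

U, s, Vh = np.linalg.svd(A)                       # full SVD: Vh is n x n
tol = max(m, n) * np.finfo(float).eps * s.max()
rank = int(np.sum(s > tol))                       # dim(Im L_A) = rank A

# The right-singular vectors with zero singular value form a basis of Null(A).
null_basis = Vh[rank:]
assert np.allclose(A @ null_basis.T, 0)           # each basis vector lies in Ker L_A

nullity = null_basis.shape[0]                     # dim(Ker L_A)
assert rank + nullity == n                        # dim U = dim(Ker L) + dim(Im L)
print(f"rank A = {rank}, dim Null(A) = {nullity}, n = {n}")
```

The kernel basis is read off the SVD directly rather than from the formula $n - \mathrm{rank}\, A$, so the final assertion is a genuine check of the relation rather than a restatement of it.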
Corollary 8.7. Let $U, W$ be finite dimensional linear subspaces of $V$. Then
$$\dim(U) + \dim(W) = \dim(U + W) + \dim(U \cap W).$$

Proof: Define a linear map $L : U \oplus W \to V$, $(u, w) \mapsto u - w$. Then
$$\mathrm{Ker}\, L = \{(u, u) \mid u \in U,\ u \in W\}, \qquad \mathrm{Im}\, L = U + W.$$
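The construction in this proof can be illustrated numerically. Below is a small sketch (assuming NumPy; the planes, the representing matrices `BU` and `BW`, and all names are illustrative choices, not part of the original text) that applies the map $(u, w) \mapsto u - w$ to two planes in $\mathbb{R}^3$ and checks the dimension count $2 + 2 = 3 + 1$:

```python
# Numerical illustration of Corollary 8.7 (a hedged sketch, assuming NumPy).
# Here U = span{e1, e2} and W = span{e1, e3} in R^3, so 2 + 2 = 3 + 1.
import numpy as np

BU = np.array([[1., 0.], [0., 1.], [0., 0.]])   # columns span U
BW = np.array([[1., 0.], [0., 0.], [0., 1.]])   # columns span W

# The map L : U (+) W -> R^3, (u, w) |-> u - w, is represented by M = [BU, -BW].
M = np.hstack([BU, -BW])

_, s, Vh = np.linalg.svd(M)
tol = max(M.shape) * np.finfo(float).eps * s.max()
dim_sum = int(np.sum(s > tol))                  # dim(Im L) = dim(U + W)

# A kernel vector (x, y) satisfies BU x = BW y; that common value lies in U ∩ W.
kernel = Vh[dim_sum:]                           # rows span Ker L
intersection_vectors = kernel[:, :BU.shape[1]] @ BU.T
dim_int = np.linalg.matrix_rank(intersection_vectors) if kernel.size else 0

# dim U + dim W = dim(U + W) + dim(U ∩ W)
assert BU.shape[1] + BW.shape[1] == dim_sum + dim_int
print(f"dim(U+W) = {dim_sum}, dim(U intersect W) = {dim_int}")
```

The image of $L$ is computed as the rank of $M$, and the kernel vectors are pushed forward to actual vectors of $U \cap W$, mirroring the two identifications used in the proof.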