Grassmannians and the Plücker Embedding

J. WARNER

Spring 2013

1. Grassmannians

Definition 1.1. Let $V$ be an $n$-dimensional vector space over a field $k$. Let $r \in \mathbb{Z}$ with $1 \le r \le n$. Define

$$\mathrm{Grass}(r, V) := \{ r\text{-dimensional subspaces of } V \},$$

the Grassmannian of $r$-planes in $V$.

Notice that $\mathrm{Grass}(1, V) \cong \mathbb{P}(V)$ as sets. Fix a basis $\{v_1, \dots, v_n\}$ for $V$. We then have a map

$$p : M_{n,r}(k) \to \bigsqcup_{i=0}^{r} \mathrm{Grass}(i, V)$$

given by sending $A = (a_{ij})$ to the subspace $\mathrm{span}\{u_1, \dots, u_r\}$, where $u_j = \sum_i a_{ij} v_i$. Under this map, the zero matrix is sent to the unique zero-dimensional subspace.

Example 1.2. Let $n = 4$, $r = 2$, and consider the $4 \times 2$ matrices

$$A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \\ 1 & 1 \\ 1 & 1 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 1 \\ 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad C = \begin{pmatrix} 1 & 1 \\ 0 & 1 \\ 1 & 0 \\ 0 & 0 \end{pmatrix}.$$

Then $p(A)$ is the one-dimensional subspace spanned by $v_1 + v_2 + v_3 + v_4$, while $p(B)$ and $p(C)$ are both the two-dimensional subspace spanned by $v_1 + v_2$ and $v_1 + v_3$.

Notice that $p(A) \in \mathrm{Grass}(r, V)$ if and only if $\mathrm{rank}(A) = r$. The following proposition shows that the full rank matrices in $M_{n,r}(k)$ form an open set in the Zariski topology.

Proposition 1.3. Let $\Sigma$ be a subset of $\{1, 2, \dots, n\}$ of cardinality $r$, and for any $A \in M_{n,r}(k)$ let $\Sigma_A$ be the $r \times r$ matrix formed from the rows of $A$ corresponding to the elements of $\Sigma$. Then

$$\mathrm{rank}(A) = \max_{\Sigma} \mathrm{rank}(\Sigma_A).$$

Proof. The proof follows from the fact that the rank of a matrix is equal to the maximum number of linearly independent rows.

Corollary 1.4. $\mathrm{rank}(A) < r$ if and only if $\det(\Sigma_A) = 0$ for all $\Sigma$.

Let $M^{o}_{n,r}(k)$ be the set of full rank matrices. The corollary shows that $M^{o}_{n,r}(k)$ is the complement of the zero locus of a collection of polynomial equations in $M_{n,r}(k) \cong \mathbb{A}^{nr}$. Thus $M^{o}_{n,r}(k)$ is open in the Zariski topology of $\mathbb{A}^{nr}$. By restriction, we now have a map

$$p : M^{o}_{n,r}(k) \to \mathrm{Grass}(r, V),$$

which we still denote by $p$. Notice that $p$ is surjective, as any subspace of dimension $r$ can be spanned by $r$ vectors. However, $p$ is not injective, as Example 1.2 shows. The map is injective up to an action of $GL_r(k)$.

Proposition 1.5. $p(A) = p(B)$ if and only if there is $C \in GL_r(k)$ with $A = BC$.

Proof. Suppose $p(A) = p(B) = U$. Let $u_j = \sum_i a_{ij} v_i$ and $u'_j = \sum_i b_{ij} v_i$. Since $\mathrm{span}\{u_1, \dots, u_r\} = U = \mathrm{span}\{u'_1, \dots, u'_r\}$, there exists $C = (c_{ij}) \in GL_r(k)$ such that $u_j = \sum_i c_{ij} u'_i$ ($C$ is invertible because both spanning sets are bases of $U$). Then we have

$$\sum_k a_{kj} v_k = u_j = \sum_i c_{ij} u'_i = \sum_i c_{ij} \sum_k b_{ki} v_k = \sum_k \Big( \sum_i b_{ki} c_{ij} \Big) v_k.$$

It follows that $A = BC$. Next suppose there is $C \in GL_r(k)$ with $A = BC$. Then, in the notation above, $u_j = \sum_i c_{ij} u'_i$, so that $p(A) = p(B)$.

Example 1.6. In Example 1.2, notice that $p(B) = p(C)$, and that the invertible matrix

$$D = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$

satisfies $B = CD$.

The proposition shows that $GL_r(k)$ acts freely and transitively on the fibers of $p$. Thus every fiber is in one-to-one correspondence with the elements of $GL_r(k)$, but there is no distinguished identity element in a fiber. We can choose such a distinguished element if we restrict our attention to certain open sets of $\mathrm{Grass}(r, V)$.

For any $\Sigma \subset \{1, \dots, n\}$ of cardinality $r$, let $U_\Sigma \subset \mathrm{Grass}(r, V)$ be the set of all $r$-planes $U$ such that for any matrix $A_U$ with $p(A_U) = U$, the $r \times r$ matrix $\Sigma_{A_U}$ is invertible (this condition is independent of the choice of $A_U$, since changing $A_U$ multiplies $\Sigma_{A_U}$ on the right by an invertible matrix). Notice that $U_\Sigma$ is an open set in bijection with $\mathbb{A}^{r(n-r)}$, and the $U_\Sigma$ form an open covering of $\mathrm{Grass}(r, V)$. For $U \in U_\Sigma$, let $A^{\Sigma}_U$ be the distinguished element of $p^{-1}(U)$ such that $\Sigma_{A^{\Sigma}_U} = I_r$, where $I_r$ is the $r \times r$ identity matrix. The map $s : U_\Sigma \to M^{o}_{n,r}(k)$ sending $U$ to $A^{\Sigma}_U$ is a section of $p$ over $U_\Sigma$.
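The computations in this section are easy to experiment with. The following is a minimal sketch (assuming Python with SymPy; the helper name max_minor_rank is ours, not from the text) that checks Proposition 1.3 on the matrices $B$ and $C$ of Example 1.2, recovers the matrix $D$ of Example 1.6, and computes the distinguished representative $A^{\Sigma}_U$ for $\Sigma = \{1, 2\}$:

```python
from itertools import combinations

import sympy as sp

# The 4x2 matrices B and C from Example 1.2 (columns are the spanning vectors).
B = sp.Matrix([[1, 1], [1, 0], [0, 1], [0, 0]])
C = sp.Matrix([[1, 1], [0, 1], [1, 0], [0, 0]])

def max_minor_rank(A, r):
    # Proposition 1.3: rank(A) is the maximum of rank(Sigma_A) over all
    # cardinality-r subsets Sigma of the rows.
    return max(A[list(s), :].rank() for s in combinations(range(A.rows), r))

assert max_minor_rank(B, 2) == 2 and max_minor_rank(C, 2) == 2  # both full rank

# Proposition 1.5: p(B) = p(C) iff B = C*D for some invertible D.
# C has full column rank, so the only candidate is D = (C^T C)^(-1) C^T B.
D = (C.T * C).inv() * C.T * B
assert C * D == B and D.det() != 0
print(D)  # Matrix([[0, 1], [1, 0]]), the matrix D of Example 1.6

# The section s over U_Sigma for Sigma = {1, 2} (rows 0 and 1 here, since
# Python indexes from 0): right-multiplying by the inverse of the Sigma-rows
# stays in the fiber of p and normalizes those rows to the identity.
sigma = [0, 1]
A_sigma = B * B[sigma, :].inv()
assert A_sigma[sigma, :] == sp.eye(2)
```

The last step is exactly the section $s$: right multiplication by $(\Sigma_{A_U})^{-1}$ moves within the fiber $p^{-1}(U)$, producing the distinguished representative $A^{\Sigma}_U$ with $\Sigma_{A^{\Sigma}_U} = I_r$.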
This distinguished choice of element in every fiber above the open set $U_\Sigma$ allows us to make the identification $p^{-1}(U_\Sigma) \cong U_\Sigma \times GL_r(k)$. The above discussion can be summarized by saying that $p$ is a principal $GL_r$-torsor, locally trivial in the Zariski topology.

2. Plücker Embedding

Here we show how to realize the Grassmannian as a projective variety inside of $\mathbb{P}(\Lambda^r V)$. Consider

$$p : \mathrm{Grass}(r, V) \to \mathbb{P}(\Lambda^r V)$$

given by

$$U = \mathrm{span}\{u_1, \dots, u_r\} \mapsto u_1 \wedge \dots \wedge u_r.$$

To show this map embeds $\mathrm{Grass}(r, V)$ in $\mathbb{P}(\Lambda^r V)$ as a closed projective variety, we must show the map is well-defined, injective, and that its image is the zero locus of a collection of homogeneous polynomials.

To see that the map is well-defined, suppose the $u_i$ and the $u'_i$ define the same $r$-plane in $V$. Let $C$ be the matrix expressing the $u_i$ as linear combinations of the $u'_i$. Then we have

$$u_1 \wedge \dots \wedge u_r = \Big( \sum_i c_{i1} u'_i \Big) \wedge \dots \wedge \Big( \sum_i c_{ir} u'_i \Big) = \sum_{\sigma \in S_r} \mathrm{sgn}(\sigma)\, c_{1\sigma(1)} \cdots c_{r\sigma(r)}\, u'_1 \wedge \dots \wedge u'_r = \det(C)\, u'_1 \wedge \dots \wedge u'_r,$$

and $\det(C) \ne 0$, so the $u_i$ and the $u'_i$ define the same element of $\mathbb{P}(\Lambda^r V)$. Hence $p$ is well-defined.

To see that $p$ is injective, we use the following lemma.

Lemma 2.1. If $U = \mathrm{span}\{u_1, \dots, u_r\}$, then

$$U = \{ v \in V \mid v \wedge u_1 \wedge \dots \wedge u_r = 0 \}.$$

Proof. If $v \in U$, then $v$ is a linear combination of the $u_i$, so $v \wedge u_1 \wedge \dots \wedge u_r = 0$. Next, suppose that $v \in V$ with $v \wedge u_1 \wedge \dots \wedge u_r = 0$. Extend $\{u_1, \dots, u_r\}$ to a basis $\{u_1, \dots, u_n\}$ of $V$, and write $v = \sum_{i=1}^{n} a_i u_i$. Then we have

$$0 = v \wedge u_1 \wedge \dots \wedge u_r = \sum_{i=1}^{n} a_i\, u_i \wedge u_1 \wedge \dots \wedge u_r = \sum_{i=r+1}^{n} a_i\, u_i \wedge u_1 \wedge \dots \wedge u_r.$$

However, the elements $u_i \wedge u_1 \wedge \dots \wedge u_r$ for $i \ge r+1$ are linearly independent, so by uniqueness of expression $a_i = 0$ for $i \ge r+1$, and $v \in U$.

With the lemma, we can now show that $p$ is injective. Suppose that $p(U) = p(U')$, i.e., that $u_1 \wedge \dots \wedge u_r = c\,(u'_1 \wedge \dots \wedge u'_r)$ for some $c \ne 0$. Then $u \in U$ if and only if $u \wedge u_1 \wedge \dots \wedge u_r = 0$, if and only if $c\,(u \wedge u'_1 \wedge \dots \wedge u'_r) = 0$, if and only if $u \wedge u'_1 \wedge \dots \wedge u'_r = 0$, if and only if $u \in U'$. By the lemma, $U = U'$.

Before verifying that the image of $p$ is a closed subset of $\mathbb{P}(\Lambda^r V)$, let's consider an example.

Example 2.2. Consider the 2-plane $U$ in a 4-dimensional $V$ spanned by $v_1 + v_2$ and $v_1 + v_3$ (i.e., the 2-plane $p(B)$ from Example 1.2). Then $p(U) = (v_1 + v_2) \wedge (v_1 + v_3) = -(v_1 \wedge v_2) + (v_1 \wedge v_3) + (v_2 \wedge v_3)$. If we order the basis of $\Lambda^2 V$ lexicographically, we see that $U$ is mapped onto the point in $\mathbb{P}^5$ with homogeneous coordinates $[-1 : 1 : 0 : 1 : 0 : 0]$. Notice that the coordinates are the determinants $\det(\Sigma_B)$ as $\Sigma$ ranges over the 2-element subsets of $\{1, 2, 3, 4\}$.

The example gives an alternate and equivalent way of defining the Plücker embedding. If $p(A_U) = U$, let $p_\Sigma(A_U)$ be the minor of $A_U$ defined by $\Sigma$. Then we can define $p(U) = [p_\Sigma(A_U)]$ for some choice of ordering of the $\Sigma$. It is usually convenient to order lexicographically, as in the example. If we order the basis of $\Lambda^r V$ in the same way, this is equal to the embedding defined above.

Now let's see that the image of $p$ satisfies a collection of homogeneous polynomial equations in $\binom{n}{r}$ variables.

Definition 2.3. Let $w \in \Lambda^r V$. We say that $w$ is divisible by $v \in V$ if $w = v \wedge \varphi$ for some $\varphi \in \Lambda^{r-1} V$, and that $w$ is totally decomposable if $w = v_1 \wedge \dots \wedge v_r$ for some vectors $v_i \in V$.

The image of $p$ consists of the equivalence classes of all totally decomposable vectors in $\Lambda^r V$. First notice that $v$ divides $w$ if and only if $v \wedge w = 0 \in \Lambda^{r+1} V$. The forward direction is clear. For the reverse direction, extend $v$ to a basis and write $w$ as a sum of degree-$r$ multivectors in this basis. If $v \wedge w = 0$, then every multivector in the expansion of $w$ is divisible by $v$, and hence so is $w$.
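The minor description of the embedding is easy to compute with. Here is a minimal sketch (again assuming Python with SymPy; plucker is a hypothetical helper name) that reproduces the coordinates of Example 2.2 and checks well-definedness numerically, using the fact that each minor of $BD$ is the corresponding minor of $B$ scaled by $\det(D)$:

```python
from itertools import combinations

import sympy as sp

def plucker(A, r):
    # The r x r minors p_Sigma(A), with the subsets Sigma ordered lexicographically.
    return [A[list(s), :].det() for s in combinations(range(A.rows), r)]

B = sp.Matrix([[1, 1], [1, 0], [0, 1], [0, 0]])
print(plucker(B, 2))  # [-1, 1, 0, 1, 0, 0], the point [-1 : 1 : 0 : 1 : 0 : 0]

# Well-definedness: replacing B by B*D (same 2-plane, different spanning set)
# rescales every coordinate by det(D), leaving the projective point unchanged.
D = sp.Matrix([[2, 0], [1, 1]])  # an arbitrary invertible 2x2 matrix, det(D) = 2
assert plucker(B * D, 2) == [D.det() * m for m in plucker(B, 2)]
```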
From the divisibility criterion just established, it follows that $w$ is totally decomposable if and only if the space of vectors dividing $w$ has dimension $r$ (notice the dimension is never larger than $r$). To each $w$ associate the linear map

$$\varphi(w) : V \to \Lambda^{r+1} V, \qquad v \mapsto w \wedge v.$$

Then $w$ is totally decomposable if and only if the rank of $\varphi(w)$ is equal to $n - r$. Since the rank can never be smaller, this is equivalent to $\mathrm{rank}(\varphi(w)) \le n - r$.

What are the entries of the matrix of $\varphi(w)$? Since the map $w \mapsto \varphi(w)$ is also linear, it follows that the entries of $\varphi(w)$ are $k$-linear combinations of the $\binom{n}{r}$ homogeneous coordinates of $w$. If we view the homogeneous coordinates of $w$ as coordinate functions, that is, as variables $x_i$, then the entries of $\varphi(w)$ are homogeneous polynomials of degree 1 in the $x_i$. The condition $\mathrm{rank}(\varphi(w)) \le n - r$ is the vanishing of all $(n-r+1) \times (n-r+1)$ minors of $\varphi(w)$, which are homogeneous polynomials of degree $n - r + 1$ in the $x_i$. The image of $p$ is therefore the zero locus of these polynomials, so it is closed and $\mathrm{Grass}(r, V)$ is a projective variety.
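To see the rank criterion concretely, a last sketch (same assumptions as before; phi_matrix is our name) builds the matrix of $\varphi(w)$ for $n = 4$, $r = 2$ directly from the coordinates $x_\Sigma$ of $w$, and checks the decomposable bivector of Example 2.2 against a non-decomposable one:

```python
from itertools import combinations

import sympy as sp

n, r = 4, 2
r_basis = list(combinations(range(n), r))      # lexicographic basis of Lambda^2 V
s_basis = list(combinations(range(n), r + 1))  # basis of Lambda^3 V

def phi_matrix(x):
    # Matrix of phi(w): V -> Lambda^(r+1) V, v -> w ^ v, where w = sum_S x[S] e_S.
    # Each entry is k-linear in the coordinates x, as claimed in the text.
    M = sp.zeros(len(s_basis), n)
    for col, S in enumerate(r_basis):
        for k in range(n):
            if k in S:
                continue  # e_S ^ e_k = 0 when k already occurs in S
            sign = (-1) ** sum(1 for i in S if i > k)  # sort e_k into position
            M[s_basis.index(tuple(sorted(S + (k,)))), k] += sign * x[col]
    return M

# w = (v1 + v2) ^ (v1 + v3), coordinates [-1, 1, 0, 1, 0, 0]: decomposable,
# so rank phi(w) = n - r = 2 (its kernel is the 2-plane itself).
assert phi_matrix([-1, 1, 0, 1, 0, 0]).rank() == 2

# w = v1 ^ v2 + v3 ^ v4 is not totally decomposable: rank phi(w) = 4 > n - r.
assert phi_matrix([1, 0, 0, 0, 0, 1]).rank() == 4
```

For $n = 4$, $r = 2$, the locus cut out by these $3 \times 3$ minors is the same as that of the single Plücker relation $x_{12}x_{34} - x_{13}x_{24} + x_{14}x_{23} = 0$, which the first bivector above satisfies and the second does not.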