Linear Transformations and Matrix Algebra


A. Havens
Department of Mathematics
University of Massachusetts, Amherst
February 10-16, 2018

Outline

1. Representing Linear Maps with Matrices
   - The Standard Basis of $\mathbb{R}^n$
   - Finding Matrices Representing Linear Maps
2. Existence/Uniqueness Redux
   - Reframing via Linear Transformations
   - Surjectivity, or Onto Maps
   - Injectivity, or One-To-One Maps
   - Theorems on Existence and Uniqueness
3. Matrix Algebra
   - Composition of Maps and Matrix Multiplication
   - Matrices as Vectors: Scaling and Addition
   - Transposition

The Standard Basis of $\mathbb{R}^n$

Components Revisited

Observe that any $\mathbf{x} \in \mathbb{R}^2$ can be written as a linear combination of vectors along the standard rectangular coordinate axes, using its components relative to this standard rectangular coordinate system:
\[
\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= x_1 \begin{bmatrix} 1 \\ 0 \end{bmatrix} + x_2 \begin{bmatrix} 0 \\ 1 \end{bmatrix}.
\]
These two vectors along the coordinate axes will form the standard basis for $\mathbb{R}^2$.

Elementary Vectors

Definition. The vectors along the standard rectangular coordinate axes of $\mathbb{R}^2$ are denoted
\[
\mathbf{e}_1 := \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad
\mathbf{e}_2 := \begin{bmatrix} 0 \\ 1 \end{bmatrix}.
\]
They are called elementary vectors (hence the notation $\mathbf{e}_i$, $i = 1, 2$), and the ordered list $(\mathbf{e}_1, \mathbf{e}_2)$ is called the standard basis of $\mathbb{R}^2$.

Observe that $\operatorname{Span}\{\mathbf{e}_1, \mathbf{e}_2\} = \mathbb{R}^2$.

We can also define elementary vectors and a standard basis in $\mathbb{R}^n$, by taking the unit vectors along the $n$ different coordinate axes of the standard rectangular coordinate system:

Definition. The $n$ vectors
\[
\mathbf{e}_1 := \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \quad
\mathbf{e}_2 := \begin{bmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \quad \ldots, \quad
\mathbf{e}_{n-1} := \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \\ 0 \end{bmatrix}, \quad
\mathbf{e}_n := \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 0 \\ 1 \end{bmatrix}
\]
are called elementary vectors for the standard rectangular coordinate system on $\mathbb{R}^n$.

A Remark on Notation

Remark. Note that we use the symbols $\mathbf{e}_1, \mathbf{e}_2, \ldots$ to represent, respectively, the first, second, etc. elementary vectors in whatever real vector space we are working in, indexed with respect to the order of our coordinate axes. The number of components needed to represent a given $\mathbf{e}_i$ depends on the particular $\mathbb{R}^n$ with which we are working, and will be clear from context. Thus, the notation $\mathbf{e}_i$ always refers to a vector with a 1 in the $i$th component, but the vector may have however many zeroes we need.

The Standard Basis of $\mathbb{R}^n$

Observation. Clearly, the set $\{\mathbf{e}_1, \ldots, \mathbf{e}_n\} \subset \mathbb{R}^n$ is linearly independent, as the matrix $[\mathbf{e}_1 \ \ldots \ \mathbf{e}_n]$ has precisely $n$ columns and $n$ pivots.

Definition. The ordered $n$-tuple of vectors $(\mathbf{e}_1, \ldots, \mathbf{e}_n)$ is called the standard basis of $\mathbb{R}^n$.

Observe that any $\mathbf{x} \in \mathbb{R}^n$ satisfies $\mathbf{x} = \sum_{i=1}^{n} x_i \mathbf{e}_i$, which implies $\operatorname{Span}\{\mathbf{e}_1, \ldots, \mathbf{e}_n\} = \mathbb{R}^n$. Therefore, in analogy to the case of $\mathbb{R}^2$, the $n$-tuple $(\mathbf{e}_1, \ldots, \mathbf{e}_n)$ earns the title of a basis because it is an ordered, linearly independent collection of vectors that spans the whole of $\mathbb{R}^n$.
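To make the decomposition $\mathbf{x} = \sum_i x_i \mathbf{e}_i$ concrete, here is a minimal NumPy sketch (an illustration added to these notes; the variable names are our own). It builds the elementary vectors as the columns of the identity matrix and checks that a vector is recovered from its components:

```python
import numpy as np

# The elementary vectors e_1, ..., e_n are the columns of the n x n identity.
n = 4
E = np.eye(n)                    # E[:, i] is e_{i+1} (NumPy is 0-indexed)

x = np.array([3.0, -1.0, 2.0, 5.0])

# x decomposes as the linear combination sum_i x_i * e_i.
reconstruction = sum(x[i] * E[:, i] for i in range(n))
assert np.allclose(reconstruction, x)
```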
RREF and the Standard Basis

Observation. Given a collection of $n$ linearly independent vectors $\mathbf{v}_1, \ldots, \mathbf{v}_n$ in $\mathbb{R}^n$, the matrix with these vectors as columns has
\[
\operatorname{RREF}[\mathbf{v}_1 \ \ldots \ \mathbf{v}_n] = [\mathbf{e}_1 \ \ldots \ \mathbf{e}_n] =
\begin{bmatrix}
1 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & \cdots & 0 & 1
\end{bmatrix}.
\]
Thus, any $n \times n$ matrix whose columns are linearly independent is row equivalent to the matrix whose columns are the standard basis.

Finding Matrices Representing Linear Maps

Matrix Representations

Definition. Given a linear map $T : \mathbb{R}^n \to \mathbb{R}^m$, we say that an $m \times n$ matrix $A$ is a matrix representing the linear transformation $T$ if the image of a vector $\mathbf{x}$ in $\mathbb{R}^n$ is given by the matrix-vector product
\[
T(\mathbf{x}) = A\mathbf{x}.
\]

Our aim is to discover how to find a matrix $A$ representing a linear transformation $T$. In particular, we will see that the columns of $A$ come directly from examining the action of $T$ on the standard basis vectors. Before we state the formal result, let us consider a simple two-dimensional reflection and try to represent it as a matrix-vector product.

An Example

Example. Find the matrix representing the map $M : \mathbb{R}^2 \to \mathbb{R}^2$ that reflects a vector $\mathbf{x}$ through the line $\operatorname{Span}\{\mathbf{e}_1 + \mathbf{e}_2\}$.

First, note that
\[
\mathbf{e}_1 + \mathbf{e}_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix},
\]
so $\operatorname{Span}\{\mathbf{e}_1 + \mathbf{e}_2\}$ is the solution set of the linear equation $x_1 - x_2 = 0$, i.e., it is the line $x_1 = x_2$. You should convince yourself that reflection through this line swaps the vectors $\mathbf{e}_1$ and $\mathbf{e}_2$, and in general acts on a vector by swapping its components.

So $M(\mathbf{e}_1) = \mathbf{e}_2$ and $M(\mathbf{e}_2) = \mathbf{e}_1$. If we consider an arbitrary 2-vector $\mathbf{x} = x_1\mathbf{e}_1 + x_2\mathbf{e}_2$, it is easy to check that, because $M$ is linear and swaps the elementary vectors, $M$ must swap the components of $\mathbf{x}$. Indeed, by linearity,
\[
M(\mathbf{x}) = M(x_1\mathbf{e}_1 + x_2\mathbf{e}_2) = x_1 M(\mathbf{e}_1) + x_2 M(\mathbf{e}_2) = x_1\mathbf{e}_2 + x_2\mathbf{e}_1.
\]

Comparing the image
\[
M(\mathbf{x}) = \begin{bmatrix} x_2 \\ x_1 \end{bmatrix}
\]
with
\[
\begin{bmatrix} a & b \\ c & d \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= \begin{bmatrix} ax_1 + bx_2 \\ cx_1 + dx_2 \end{bmatrix},
\]
we see that
\[
\begin{bmatrix} x_2 \\ x_1 \end{bmatrix}
= \begin{bmatrix} 0x_1 + 1x_2 \\ 1x_1 + 0x_2 \end{bmatrix}
= \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.
\]

So we can represent the reflection map $\mathbf{x} \mapsto M(\mathbf{x})$ by the matrix-vector product map
\[
M(\mathbf{x}) = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\mathbf{x} = [\mathbf{e}_2 \ \mathbf{e}_1]\,\mathbf{x}.
\]
It is not a coincidence that the matrix of $M$ is $[\mathbf{e}_2 \ \mathbf{e}_1] = [M(\mathbf{e}_1) \ M(\mathbf{e}_2)]$!
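As a quick check on this example, the sketch below (an illustration added to these notes, not part of the original lecture) assembles $M$ column-by-column from the images $M(\mathbf{e}_1) = \mathbf{e}_2$ and $M(\mathbf{e}_2) = \mathbf{e}_1$ and verifies that it swaps components:

```python
import numpy as np

# Reflection through the line x1 = x2, assembled from the images of the
# standard basis: the columns are M(e1) = e2 and M(e2) = e1.
e1, e2 = np.eye(2)               # rows of the identity, used as e1 and e2
M = np.column_stack([e2, e1])    # [[0, 1], [1, 0]]

x = np.array([3.0, -7.0])
assert np.allclose(M @ x, [-7.0, 3.0])   # components are swapped

# Reflecting twice returns the original vector, as any reflection must.
assert np.allclose(M @ (M @ x), x)
```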
Indeed, consider the matrix-vector product $A\mathbf{e}_i$ for an arbitrary $m \times n$ matrix $A$, where $\mathbf{e}_i$ is the $i$th elementary vector of the standard basis of $\mathbb{R}^n$. What is the vector $A\mathbf{e}_i$?

Selecting the Matrix Columns

Since $\mathbf{e}_i$ has a one in the $i$th coordinate and zeroes in all other coordinates, we deduce that $A\mathbf{e}_i$ is the linear combination
\[
0\mathbf{a}_1 + \ldots + 0\mathbf{a}_{i-1} + 1\mathbf{a}_i + 0\mathbf{a}_{i+1} + \ldots + 0\mathbf{a}_n = \mathbf{a}_i;
\]
that is, $A\mathbf{e}_i$ is just the $i$th column of $A$.

If $T$ is some linear map and $A$ is a matrix representing it, then we can deduce that the image of an elementary vector $\mathbf{e}_i$ under the map $T$ is $T(\mathbf{e}_i) = \mathbf{a}_i$, so the columns of the matrix are precisely the images of the standard basis under the map $T$!

The Theorem

Theorem. A linear transformation $T : \mathbb{R}^n \to \mathbb{R}^m$ may be uniquely represented as a matrix-vector product $T(\mathbf{x}) = A\mathbf{x}$ for the $m \times n$ matrix $A$ whose columns are the images of the standard basis $(\mathbf{e}_1, \ldots, \mathbf{e}_n)$ of $\mathbb{R}^n$ under the transformation $T$. Specifically, the $i$th column of $A$ is the vector $T(\mathbf{e}_i) \in \mathbb{R}^m$, and
\[
T(\mathbf{x}) = A\mathbf{x} = \big[\,T(\mathbf{e}_1) \ T(\mathbf{e}_2) \ \ldots \ T(\mathbf{e}_n)\,\big]\,\mathbf{x}.
\]

A Satisfying, Simple Proof

Proof. The result is a consequence of the calculation
\[
T(\mathbf{x}) = T\Big(\sum_{i=1}^{n} x_i \mathbf{e}_i\Big)
= \sum_{i=1}^{n} x_i T(\mathbf{e}_i)
= \big[\,T(\mathbf{e}_1) \ T(\mathbf{e}_2) \ \ldots \ T(\mathbf{e}_n)\,\big]\,\mathbf{x}
=: A\mathbf{x},
\]
where the first equality follows from the representation of $\mathbf{x}$ in the standard basis, the second follows from linearity, and the third follows from the definition of the matrix-vector product $A\mathbf{x}$ as the linear combination of the column vectors of $A$ taking the components $x_i$ as the weights. $\square$

Using this Result

There are two ways in which this result is useful:

- Given a linear map described geometrically, one can examine its effect on the basis elements $\mathbf{e}_i$ and then write down the matrix representing the map (as in the sketch following this list).
- Given a matrix, one can try to understand the geometry of the map $\mathbf{x} \mapsto A\mathbf{x}$ by examining its columns and understanding how the matrix acts on the frame $(\mathbf{e}_1, \ldots, \mathbf{e}_n)$.
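The theorem also translates directly into a computational recipe. The following sketch is an illustration added to these notes (the names `matrix_of` and `rotate90` are our own): it assembles the representing matrix of any linear map on $\mathbb{R}^n$ from the images of the standard basis, exactly as the theorem prescribes.

```python
import numpy as np

def matrix_of(T, n):
    """Assemble the matrix representing a linear map T on R^n by
    stacking the images T(e_1), ..., T(e_n) as columns."""
    I = np.eye(n)
    return np.column_stack([T(I[:, i]) for i in range(n)])

# Example map (chosen here for illustration): counterclockwise rotation
# of the plane by 90 degrees, which sends (v1, v2) to (-v2, v1).
rotate90 = lambda v: np.array([-v[1], v[0]])

A = matrix_of(rotate90, 2)       # [[0, -1], [1, 0]]
x = np.array([2.0, 1.0])
assert np.allclose(A @ x, rotate90(x))   # A x agrees with T(x)
```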