Advanced Linear Algebra (MiM 2019, Spring)

Total pages: 16

File type: PDF, size: 1020 KB

Advanced Linear Algebra (MiM 2019, Spring)
P. I. Katsylo

Contents

1 A Review of Linear Algebra
  1.1 Algebraic Fields
  1.2 Vector Spaces and Subspaces
  1.3 Dual Spaces and Linear Maps
  1.4 Change of Basis
  1.5 Forms
  1.6 Problems
2 Tensors
  2.1 Tensor Product of Vector Spaces
  2.2 Symmetric Powers of Vector Spaces
  2.3 Exterior Powers of Vector Spaces
  2.4 Tensor Product of Linear Maps
  2.5 Canonical Forms of Tensors
  2.6 *Multilinear Maps
  2.7 *Tensor Fields
  2.8 Problems
3 Operators
  3.1 Realification and Complexification
  3.2 Jordan Normal Form
  3.3 Operators in Real Inner Product Spaces
  3.4 Operators in Hermitian Spaces
  3.5 Operator Decompositions
  3.6 *Normed Spaces
  3.7 *Functions of Operators
  3.8 Problems
4 Rings and Modules
  4.1 Rings
  4.2 Modules
  4.3 Tensor Products of Modules
  4.4 *A Proof of the Cayley-Hamilton Theorem
  4.5 *Tor and Ext
  4.6 Problems

Chapter 1. A Review of Linear Algebra

1.1 Algebraic Fields

Definition 1.1.1. An algebraic field or, simply, a field is a set k with fixed elements 0 = 0_k (the zero) and 1 = 1_k (the identity), where 0 ≠ 1, and two operations: an addition

  k × k → k, (λ, µ) ↦ λ + µ,

and a multiplication

  k × k → k, (λ, µ) ↦ λµ,

such that for all λ, µ, γ ∈ k we have

1. λ + µ = µ + λ (commutativity of the addition);
2. (λ + µ) + γ = λ + (µ + γ) (associativity of the addition);
3. λ + 0 = λ;
4. for every λ ∈ k there exists −λ ∈ k such that λ + (−λ) = 0;
5. λµ = µλ (commutativity of the multiplication);
6. λ(µγ) = (λµ)γ (associativity of the multiplication);
7. 1λ = λ;
8. for every λ ∈ k* := k ∖ {0} there exists λ⁻¹ ∈ k such that λλ⁻¹ = 1;
9.
λ(µ + γ) = λµ + λγ (distributivity).

In a field k we introduce two further operations: the subtraction

  k × k → k, (λ, µ) ↦ λ − µ := λ + (−µ),

and the division

  k × k* → k, (λ, µ) ↦ λµ⁻¹.

Loosely speaking, a field is a set with the zero element 0, the identity element 1, and four arithmetic operations (addition, subtraction, multiplication, and division) satisfying the natural properties.

Examples 1.1.2. The basic examples of fields are

  Q, the field of rational numbers;
  R, the field of real numbers;
  C, the field of complex numbers;
  F_p = {0, 1, ..., p − 1}, the field of integers modulo a prime p.

Consider the field C of complex numbers. The map

  C → C, z = a + b√−1 ↦ z̄ = a − b√−1,

is an automorphism of C. This means that 0̄ = 0, 1̄ = 1, and under this map z + w ↦ z̄ + w̄ and zw ↦ z̄w̄. The complex number z̄ is called the conjugate of z.

A field k is called algebraically closed whenever for every nonconstant polynomial f(t) over k there is t0 ∈ k such that f(t0) = 0; that is, every nonconstant polynomial over k has a root in k. If a field k is algebraically closed, then every nonconstant polynomial over k can be decomposed into a product of polynomials of degree one.

Exercise 1.1.3. Prove that
1. the fields Q and R are not algebraically closed;
2. no finite field is algebraically closed.

Theorem 1.1.4 (Fundamental Theorem of Algebra). The field C of complex numbers is algebraically closed.

For a proof see complex analysis textbooks.

1.2 Vector Spaces and Subspaces

Definition 1.2.1. Let k be a field (elements of k are called scalars). A vector space over k is a set V (elements of V are called vectors) with a fixed vector 0 ∈ V (the zero vector) and two compositions: an addition of vectors

  V × V → V, (v, u) ↦ v + u,   (1.2.1)

and a multiplication of vectors by scalars

  k × V → V, (λ, v) ↦ λv,

such that for all λ, µ ∈ k and v, u, w ∈ V we have

1. (v + u) + w = v + (u + w) (associativity of the addition);
2. v + u = u + v (commutativity of the addition);
3. v + 0 = v;
4. for every v ∈ V there exists −v ∈ V such that v + (−v) = 0;
5.
λ(v + u) = λv + λu (distributivity);
6. (λ + µ)v = λv + µv (distributivity);
7. (λµ)v = λ(µv);
8. 1v = v.

Vector spaces over R are called real vector spaces, and vector spaces over C are called complex vector spaces.

Examples 1.2.2.
The zero space {0} (it consists of the zero vector only).
R^1, the "usual" 1-dimensional line with an origin 0 and a basis (i).
R^2, the "usual" 2-dimensional plane with an origin 0 and a basis (i, j).
R^3, the "usual" 3-dimensional space with an origin 0 and a basis (i, j, k).
The n-dimensional coordinate vector space over a field k,

  k^n = { (a1, ..., an)^T | ai ∈ k };

in particular, the real and the complex n-dimensional coordinate vector spaces.
The space k[t] of polynomials in the variable t with coefficients in the field k.
The space k[t]_{⩽n} of polynomials of degree ⩽ n in the variable t with coefficients in the field k.
The space Mat_{n×m}(k) of n × m matrices with entries in the field k.

Consider vector spaces V1, ..., Vn over the same field. Then the set-theoretical direct product

  V1 × ... × Vn = { (v1, ..., vn) | vi ∈ Vi }

with the compositions

  (v1, ..., vn) + (v1', ..., vn') := (v1 + v1', ..., vn + vn'),
  λ(v1, ..., vn) := (λv1, ..., λvn),

is a vector space. It is called the direct product of V1, ..., Vn and is denoted by V1 × ... × Vn (the same notation as for the set-theoretical direct product). It is also called the direct sum of V1, ..., Vn and is denoted by V1 ⊕ ... ⊕ Vn.

Remark 1.2.3. Suppose that J is a set and {Vj}_{j∈J} is a family of vector spaces. Then we have the following spaces.
The direct product

  ∏_{j∈J} Vj := { families (vj)_{j∈J} with vj ∈ Vj }

of the spaces {Vj}_{j∈J}; in particular, if Vj = V for all j, then we obtain V^J := ∏_{j∈J} V.
The direct sum

  ⊕_{j∈J} Vj := { families (vj)_{j∈J} with vj ∈ Vj and vj ≠ 0 for only finitely many j }

of the spaces {Vj}_{j∈J}; in particular, if Vj = V for all j, then we obtain V^{⊕J} := ⊕_{j∈J} V.
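The distinction between the direct product and the direct sum becomes concrete if a family (vj)_{j∈J} is stored sparsely: a member of the direct sum has only finitely many nonzero entries, so a finitely supported dictionary suffices. A minimal Python sketch over the field Q (the representation and helper names are ours, not the notes'):

```python
# Elements of the direct sum  ⊕_{j∈J} Q  over an infinite index set J,
# stored as sparse dicts {j: v_j} with only the nonzero entries present.
from fractions import Fraction

def add(v, u):
    # componentwise addition; entries that become zero are dropped,
    # so finite support is preserved
    out = dict(v)
    for j, x in u.items():
        out[j] = out.get(j, Fraction(0)) + x
        if out[j] == 0:
            del out[j]
    return out

def smul(lam, v):
    # scalar multiplication; multiplying by 0 gives the zero family
    return {} if lam == 0 else {j: lam * x for j, x in v.items()}

v = {0: Fraction(1), 5: Fraction(2)}   # v_0 = 1, v_5 = 2, all other v_j = 0
u = {5: Fraction(-2), 7: Fraction(3)}

assert add(v, u) == {0: Fraction(1), 7: Fraction(3)}
assert smul(Fraction(2), v) == {0: Fraction(2), 5: Fraction(4)}
```

Both operations preserve finite support, which is exactly why the direct sum is closed under addition and scalar multiplication; a general element of the direct product, by contrast, may have infinitely many nonzero entries and admits no such finite representation.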
Note that the direct product of an infinite number of vector spaces is not the direct sum of those vector spaces.

Let V be a vector space over a field k. A subset W ⊂ V is called a linear subspace or, simply, a subspace of V whenever 0 ∈ W and

  w1, ..., wn ∈ W and λ1, ..., λn ∈ k imply λ1 w1 + ... + λn wn ∈ W.

Clearly, a subspace is a vector space over the same field.

Examples 1.2.4. In a vector space V,
1. we always have two subspaces: {0}, the zero subspace, and V itself;
2. vectors v1, ..., vn ∈ V span the subspace

  Span(v1, ..., vn) := { λ1 v1 + ... + λn vn | λi ∈ k } ⊂ V.

Example 1.2.5. A system of linear equations

  a_1^1 x1 + a_1^2 x2 + ... + a_1^n xn = 0,
  ...
  a_m^1 x1 + a_m^2 x2 + ... + a_m^n xn = 0,   (1.2.2)

where a_i^j ∈ k, defines the subspace

  { (x1, x2, ..., xn)^T | x1, ..., xn satisfy (1.2.2) } ⊂ k^n.

Examples 1.2.6. If W1, ..., Wn are subspaces of a vector space V, then
1. their intersection W1 ∩ ... ∩ Wn is a subspace of V;
2. their sum

  W1 + ... + Wn := { w1 + ... + wn | wi ∈ Wi }

is a subspace of V; the sum W1 + ... + Wn is called direct whenever

  w1 + ... + wn = 0 with wi ∈ Wi implies wi = 0 for 1 ⩽ i ⩽ n.

In linear algebra, a totally ordered set of vectors is called a list of vectors.

Definition 1.2.7. Suppose that (vi) is a list of vectors of a vector space V. Then
A finite sum of the form

  λ_{i1} v_{i1} + λ_{i2} v_{i2} + ... + λ_{il} v_{il},  where λ_i ∈ k,

is called a linear combination of the list (vi); the scalars λ_{i1}, λ_{i2}, ..., λ_{il} are called the coefficients of this linear combination.
The list (vi) is called linearly dependent whenever some nontrivial linear combination of (vi) is equal to 0.
A list consisting of one vector is linearly dependent if and only if this vector is the zero vector. A list of vectors is linearly dependent whenever some vector of the list is a linear combination of the others.
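For a concrete list of vectors in Q^n, linear dependence can be decided mechanically: the list is dependent exactly when its rank (the number of pivots after Gaussian elimination) is smaller than the length of the list. A sketch in Python using exact rational arithmetic (the function names are ours, not the notes'):

```python
# Deciding linear dependence of a list of vectors in Q^n by Gaussian
# elimination over exact rationals (fractions.Fraction avoids rounding).
from fractions import Fraction

def rank(vectors):
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0  # number of pivot rows found so far
    for col in range(len(rows[0]) if rows else 0):
        # find a row at index >= r with a nonzero entry in this column
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        # eliminate this column from all other rows
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def linearly_dependent(vectors):
    return rank(vectors) < len(vectors)

# (1,2,3) and (2,4,6) are proportional, hence dependent:
assert linearly_dependent([(1, 2, 3), (2, 4, 6)])
# the standard basis vectors of Q^3 are independent:
assert not linearly_dependent([(1, 0, 0), (0, 1, 0), (0, 0, 1)])
```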
In particular, if some vector of a list of vectors is the zero vector, then the list is linearly dependent.

A list of vectors (ei) of V is called a basis of V whenever every vector v ∈ V can be uniquely represented as a linear combination of (ei).

Theorem 1.2.8. For every vector space (even an infinite-dimensional one) there exists a basis.

Remark 1.2.9. A basis of an infinite-dimensional space is called a Hamel basis of that space.

Examples 1.2.10. In R^1 we have the basis (i), in R^2 we have the basis (i, j), and in R^3 we have the basis (i, j, k).
In k^n we have the standard basis

  e1 = (1, 0, ..., 0)^T, e2 = (0, 1, ..., 0)^T, ..., en = (0, 0, ..., 1)^T.

In k[t]_{⩽n} we have the basis (1, t, t^2, ..., t^n).
In k[t] we have the Hamel basis (1, t, t^2, ...).
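The unique representation in a basis can be computed in a small example: for the basis b1 = (1, 1), b2 = (1, −1) of Q^2, the coordinates of a vector v are the solution of the 2 × 2 system λ1 b1 + λ2 b2 = v, which Cramer's rule solves directly. A minimal Python sketch (the vectors chosen and the helper name are ours):

```python
# Coordinates of a vector of Q^2 in the basis b1 = (1,1), b2 = (1,-1),
# found by Cramer's rule for the system  λ1*b1 + λ2*b2 = v.
from fractions import Fraction

b1 = (Fraction(1), Fraction(1))
b2 = (Fraction(1), Fraction(-1))

def coordinates(v, b1, b2):
    # determinant of the matrix with columns b1, b2; it is nonzero
    # precisely when (b1, b2) really is a basis of Q^2
    det = b1[0] * b2[1] - b2[0] * b1[1]
    lam1 = (v[0] * b2[1] - b2[0] * v[1]) / det
    lam2 = (b1[0] * v[1] - v[0] * b1[1]) / det
    return lam1, lam2

v = (Fraction(3), Fraction(1))
l1, l2 = coordinates(v, b1, b2)
# the representation reproduces v ...
assert (l1 * b1[0] + l2 * b2[0], l1 * b1[1] + l2 * b2[1]) == v
# ... and its coefficients are uniquely determined: v = 2*b1 + 1*b2
assert (l1, l2) == (Fraction(2), Fraction(1))
```

Uniqueness here reflects exactly the nonvanishing of det: were b1 and b2 proportional, det would be 0 and the system would have either no solution or infinitely many.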