Chapter 2: Linear Algebra User's Manual

Total Pages: 16

File Type: PDF, Size: 1020 KB

Chapter 2: Linear Algebra User's Manual
Preprint typeset in JHEP style - HYPER VERSION
Gregory W. Moore

Abstract: An overview of some of the finer points of linear algebra usually omitted in physics courses.

May 3, 2021

Contents

1. Introduction 5
2. Basic Definitions Of Algebraic Structures: Rings, Fields, Modules, Vector Spaces, And Algebras 6
   2.1 Rings 6
   2.2 Fields 7
       2.2.1 Finite Fields 8
   2.3 Modules 8
   2.4 Vector Spaces 9
   2.5 Algebras 10
3. Linear Transformations 14
4. Basis And Dimension 16
   4.1 Linear Independence 16
   4.2 Free Modules 16
   4.3 Vector Spaces 17
   4.4 Linear Operators And Matrices 20
   4.5 Determinant And Trace 23
5. New Vector Spaces from Old Ones 24
   5.1 Direct sum 24
   5.2 Quotient Space 28
   5.3 Tensor Product 30
   5.4 Dual Space 34
6. Tensor spaces 38
   6.1 Totally Symmetric And Antisymmetric Tensors 39
   6.2 Algebraic structures associated with tensors 44
       6.2.1 An Approach To Noncommutative Geometry 47
7. Kernel, Image, and Cokernel 47
   7.1 The index of a linear operator 50
8. A Taste of Homological Algebra 51
   8.1 The Euler-Poincaré principle 54
   8.2 Chain maps and chain homotopies 55
   8.3 Exact sequences of complexes 56
   8.4 Left- and right-exactness 56
9. Relations Between Real, Complex, And Quaternionic Vector Spaces 59
   9.1 Complex structure on a real vector space 59
   9.2 Real Structure On A Complex Vector Space 64
       9.2.1 Complex Conjugate Of A Complex Vector Space 66
       9.2.2 Complexification 67
   9.3 The Quaternions 69
   9.4 Quaternionic Structure On A Real Vector Space 79
   9.5 Quaternionic Structure On Complex Vector Space 79
       9.5.1 Complex Structure On Quaternionic Vector Space 81
       9.5.2 Summary 81
   9.6 Spaces Of Real, Complex, Quaternionic Structures 81
10. Some Canonical Forms For a Matrix Under Conjugation 85
    10.1 What is a canonical form? 85
    10.2 Rank 86
    10.3 Eigenvalues and Eigenvectors 87
    10.4 Jordan Canonical Form 89
        10.4.1 Proof of the Jordan canonical form theorem 94
    10.5 The stabilizer of a Jordan canonical form 96
        10.5.1 Simultaneous diagonalization 98
11. Sesquilinear forms and (anti)-Hermitian forms 100
12. Inner product spaces, normed linear spaces, and bounded operators 101
    12.1 Inner product spaces 101
    12.2 Normed linear spaces 103
    12.3 Bounded linear operators 104
    12.4 Constructions with inner product spaces 105
13. Hilbert space 106
14. Banach space 109
15. Projection operators and orthogonal decomposition 110
16. Unitary, Hermitian, and normal operators 113
17. The spectral theorem: Finite Dimensions 116
    17.1 Normal and Unitary matrices 118
    17.2 Singular value decomposition and Schmidt decomposition 118
        17.2.1 Bidiagonalization 118
        17.2.2 Application: The Cabibbo-Kobayashi-Maskawa matrix, or, how bidiagonalization can win you the Nobel Prize 119
        17.2.3 Singular value decomposition 121
        17.2.4 Schmidt decomposition 122
18. Operators on Hilbert space 123
    18.1 Lies my teacher told me 123
        18.1.1 Lie 1: The trace is cyclic 123
        18.1.2 Lie 2: Hermitian operators have real eigenvalues 123
        18.1.3 Lie 3: Hermitian operators exponentiate to form one-parameter groups of unitary operators 124
    18.2 Hellinger-Toeplitz theorem 124
    18.3 Spectrum and resolvent 126
    18.4 Spectral theorem for bounded self-adjoint operators 131
    18.5 Defining the adjoint of an unbounded operator 136
    18.6 Spectral Theorem for unbounded self-adjoint operators 138
    18.7 Commuting self-adjoint operators 139
    18.8 Stone's theorem 140
    18.9 Trace-class operators 141
19. The Dirac-von Neumann axioms of quantum mechanics 144
20. Canonical Forms of Antisymmetric, Symmetric, and Orthogonal matrices 151
    20.1 Pairings and bilinear forms 151
        20.1.1 Perfect pairings 151
        20.1.2 Vector spaces 152
        20.1.3 Choosing a basis 153
    20.2 Canonical forms for symmetric matrices 153
    20.3 Orthogonal matrices: The real spectral theorem 156
    20.4 Canonical forms for antisymmetric matrices 157
    20.5 Automorphism Groups of Bilinear and Sesquilinear Forms 158
21. Other canonical forms: Upper triangular, polar, reduced echelon 160
    21.1 General upper triangular decomposition 160
    21.2 Gram-Schmidt procedure 160
        21.2.1 Orthogonal polynomials 161
    21.3 Polar decomposition 163
    21.4 Reduced Echelon form 165
22. Families of Matrices 165
    22.1 Families of projection operators: The theory of vector bundles 165
    22.2 Codimension of the space of coinciding eigenvalues 169
        22.2.1 Families of complex matrices: Codimension of coinciding characteristic values 170
        22.2.2 Orbits 171
        22.2.3 Local model near S_sing 172
        22.2.4 Families of Hermitian operators 173
    22.3 Canonical form of a family in a first order neighborhood 175
    22.4 Families of operators and spectral covers 176
    22.5 Families of matrices and differential equations 181
        22.5.1 The WKB expansion 183
        22.5.2 Monodromy representation and Hilbert's 21st problem 186
        22.5.3 Stokes' phenomenon 186
23. Z_2-graded, or super-, linear algebra 191
    23.1 Super vector spaces 191
    23.2 Linear transformations between supervector spaces 195
    23.3 Superalgebras 197
    23.4 Modules over superalgebras 203
    23.5 Free modules and the super-General Linear Group 207
    23.6 The Supertrace 209
    23.7 The Berezinian of a linear transformation 210
    23.8 Bilinear forms 214
    23.9 Star-structures and super-Hilbert spaces 215
        23.9.1 SuperUnitary Group 218
    23.10 Functions on superspace and supermanifolds 219
        23.10.1 Philosophical background 219
        23.10.2 The model superspace R^{p|q} 221
        23.10.3 Superdomains 222
        23.10.4 A few words about sheaves 223
        23.10.5 Definition of supermanifolds 225
        23.10.6 Supervector fields and super-differential forms 228
    23.11 Integration over a superdomain 231
    23.12 Gaussian Integrals 235
        23.12.1 Reminder on bosonic Gaussian integrals 235
        23.12.2 Gaussian integral on a fermionic point: Pfaffians 235
        23.12.3 Gaussian integral on R^{p|q} 240
        23.12.4 Supersymmetric Cancellations 241
    23.13 References 242
24. Determinant Lines, Pfaffian Lines, Berezinian Lines, and anomalies 243
    24.1 The determinant and determinant line of a linear operator in finite dimensions 243
    24.2 Determinant line of a vector space and of a complex 245
    24.3 Abstract defining properties of determinants 247
    24.4 Pfaffian Line 247
    24.5 Determinants and determinant lines in infinite dimensions 249
        24.5.1 Determinants 249
        24.5.2 Fredholm Operators 250
        24.5.3 The determinant line for a family of Fredholm operators 251
        24.5.4 The Quillen norm 252
        24.5.5 References 253
    24.6 Berezinian of a free module 253
    24.7 Brief Comments on fermionic path integrals and anomalies 254
        24.7.1 General Considerations 254
        24.7.2 Determinant of the one-dimensional Dirac operator 255
        24.7.3 A supersymmetric quantum mechanics 257
        24.7.4 Real Fermions in one dimension coupled to an orthogonal gauge field 258
        24.7.5 The global anomaly when M is not spin 259
        24.7.6 References 260
25. Quadratic Forms And Lattices 260
    25.1 Definition 261
    25.2 Embedded Lattices 263
    25.3 Some Invariants of Lattices 268
        25.3.1 The characteristic vector 274
        25.3.2 The Gauss-Milgram relation 274
    25.4 Self-dual lattices 276
        25.4.1 Some classification results 279
    25.5 Embeddings of lattices: The Nikulin theorem 283
    25.6 References 283
26. Positive definite Quadratic forms 283
27. Quivers and their representations 283

1. Introduction

Linear algebra is of course very important in many areas of physics. Among them:

1. Tensor analysis - used in classical mechanics and general relativity.
2. The very formulation of quantum mechanics is based on linear algebra: the states of a physical system are described by "rays" in a projective Hilbert space, and physical observables are identified with Hermitian linear operators on Hilbert space.
3. The realization of symmetry in quantum mechanics is through the representation theory of groups, which relies heavily on linear algebra.

For this reason linear algebra is often taught in physics courses. The problem is that it is often mis-taught. Therefore we are going to make a quick review of basic notions, stressing some points not usually emphasized in physics courses. We also want to review the basic canonical forms into which various types of matrices can be put. These are very useful when discussing various aspects of matrix groups.

For more information useful references are Herstein, Jacobson, Lang, Eisenbud (Commutative Algebra, Springer GTM 150), and Atiyah and MacDonald, Introduction to Commutative Algebra. For an excellent terse summary of homological algebra consult S.I. Gelfand and Yu. I. Manin, Homological Algebra. We will only touch briefly on some aspects of functional analysis, which is crucial to quantum mechanics. The standard reference for physicists is Reed and Simon, Methods of Modern Mathematical Physics, especially vol. I.

2. Basic Definitions Of Algebraic Structures: Rings, Fields, Modules, Vector Spaces, And Algebras

2.1 Rings

In the previous chapter we talked about groups. We now overlay some extra structure on an abelian group R, with operation + and identity 0, to define what is called a ring. The new structure is a second binary operation $(a, b) \mapsto a \cdot b \in R$ on elements $a, b \in R$. We demand that this operation be associative, $a \cdot (b \cdot c) = (a \cdot b) \cdot c$, and that it is compatible with the pre-existing additive group law.
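The excerpt breaks off here. For the reader's convenience, the compatibility condition being referred to is the usual pair of distributive laws; the following is a standard statement of them, not a quotation from Moore's notes:

```latex
% Distributivity: ring multiplication is compatible with addition
% in the sense that it distributes over + from both sides.
a \cdot (b + c) = a \cdot b + a \cdot c,
\qquad
(a + b) \cdot c = a \cdot c + b \cdot c,
\qquad \text{for all } a, b, c \in R .
```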
Recommended publications
  • Superregular Matrices and Applications to Convolutional Codes
Superregular matrices and applications to convolutional codes. P. J. Almeida, D. Napp, R. Pinto. CIDMA - Center for Research and Development in Mathematics and Applications, Department of Mathematics, University of Aveiro, Aveiro, Portugal.
Abstract: The main results of this paper are twofold: the first one is a matrix theoretical result. We say that a matrix is superregular if all of its minors that are not trivially zero are nonzero. Given an a × b, a ≥ b, superregular matrix over a field, we show that if all of its rows are nonzero then any linear combination of its columns, with nonzero coefficients, has at least a − b + 1 nonzero entries. Secondly, we make use of this result to construct convolutional codes that attain the maximum possible distance for some fixed parameters of the code, namely, the rate and the Forney indices. These results answer some open questions on distances and constructions of convolutional codes posted in the literature [6, 9].
Key words: convolutional code, Forney indices, optimal code, superregular matrix. 2000 MSC: 94B10, 15B33, 15B05.
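As a quick illustration of the definition (a sketch written for this page, not taken from the paper), one can brute-force the minors of a small matrix over a prime field. Note the paper only requires minors that are not trivially zero to be nonzero; the naive check below tests the stronger condition that every minor is nonzero, and the example matrix and modulus are arbitrary choices.

```python
# Hedged sketch (not from the paper): brute-force check that every square minor
# of a small matrix is nonzero modulo a prime p.  The paper's definition only
# constrains minors that are not "trivially zero"; this naive check ignores
# that refinement and tests the stronger all-minors-nonzero condition.
from itertools import combinations
from sympy import Matrix

def all_minors_nonzero_mod_p(A, p):
    """Return True if every square minor of A is nonzero modulo the prime p."""
    rows, cols = A.shape
    for k in range(1, min(rows, cols) + 1):
        for r in combinations(range(rows), k):
            for c in combinations(range(cols), k):
                if A[list(r), list(c)].det() % p == 0:
                    return False
    return True

# A 3x2 example over GF(7); entries chosen by hand, purely illustrative.
A = Matrix([[1, 1], [1, 2], [1, 4]])
print(all_minors_nonzero_mod_p(A, 7))  # True: every 1x1 and 2x2 minor is nonzero mod 7
```

For this 3 × 2 example the paper's bound says any column combination with nonzero coefficients has at least a − b + 1 = 2 nonzero entries, which is easy to confirm by hand.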
  • Tight Frames and Their Symmetries
Technical Report, 9 December 2003. Tight Frames and their Symmetries. Richard Vale, Shayne Waldron, Department of Mathematics, University of Auckland, Private Bag 92019, Auckland, New Zealand. E-mail: [email protected] (http://www.math.auckland.ac.nz/~waldron), e-mail: [email protected].
ABSTRACT: The aim of this paper is to investigate symmetry properties of tight frames, with a view to constructing tight frames of orthogonal polynomials in several variables which share the symmetries of the weight function, and other similar applications. This is achieved by using representation theory to give methods for constructing tight frames as orbits of groups of unitary transformations acting on a given finite-dimensional Hilbert space. Along the way, we show that a tight frame is determined by its Gram matrix and discuss how the symmetries of a tight frame are related to its Gram matrix. We also give a complete classification of those tight frames which arise as orbits of an abelian group of symmetries.
Key Words: Tight frames, isometric tight frames, Gram matrix, multivariate orthogonal polynomials, symmetry groups, harmonic frames, representation theory, wavelets. AMS (MOS) Subject Classifications: primary 05B20, 33C50, 20C15, 42C15; secondary 52B15, 42C40.
1. Introduction. The three equally spaced unit vectors $u_1, u_2, u_3$ in $\mathbb{R}^2$ provide the following redundant representation
$f = \frac{2}{3} \sum_{j=1}^{3} \langle f, u_j \rangle\, u_j, \qquad \forall f \in \mathbb{R}^2, \qquad (1.1)$
which is the simplest example of a tight frame. Such representations arose in the study of nonharmonic Fourier series in $L^2(\mathbb{R})$ (see Duffin and Schaeffer [DS52]) and have recently been used extensively in the theory of wavelets (see, e.g., Daubechies [D92]).
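The identity (1.1) is easy to check numerically; here is a short sketch (not part of the report) that builds the three equally spaced unit vectors and verifies both the reconstruction formula and the equivalent statement that the frame operator is a multiple of the identity.

```python
# Hedged numerical check (not from the paper) of the tight-frame identity (1.1)
# for the three equally spaced unit vectors in R^2 (the "Mercedes-Benz" frame).
import numpy as np

angles = [np.pi / 2, np.pi / 2 + 2 * np.pi / 3, np.pi / 2 + 4 * np.pi / 3]
U = np.array([[np.cos(t), np.sin(t)] for t in angles])   # rows are u_1, u_2, u_3

f = np.array([0.3, -1.7])                                 # arbitrary test vector
reconstruction = (2.0 / 3.0) * sum(np.dot(f, u) * u for u in U)
print(np.allclose(reconstruction, f))                     # True

# Equivalently: the frame operator (2/3) * U^T U is the identity on R^2.
print(np.allclose((2.0 / 3.0) * U.T @ U, np.eye(2)))      # True
```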
  • On Supermatrix Operator Semigroups
Quasigroups and Related Systems 7 (2000), 71-88. On supermatrix operator semigroups. Steven Duplij.
Abstract: One-parameter semigroups of antitriangle idempotent supermatrices and corresponding superoperator semigroups are introduced and investigated. It is shown that t-linear idempotent superoperators and exponential superoperators are mutually dual in some sense, and the first gives, in addition to the exponential one, a different solution to the initial Cauchy problem. The corresponding functional equation and analog of the resolvent are found for them. Differential and functional equations for idempotent (super)operators are derived for their general t power-type dependence.
2000 Mathematics Subject Classification: 25A50, 81Q60, 81T60. Keywords: Cauchy problem, idempotence, semigroup, supermatrix, superspace.
1. Introduction. Operator semigroups [1] play an important role in mathematical physics [2, 3, 4] viewed as a general theory of evolution systems [5, 6, 7]. Its development covers many new fields [8, 9, 10, 11], but one direction vital for modern theoretical physics, supersymmetry and related mathematical structures, was not considered before in application to operator semigroup theory. The main difference from previous considerations is the fact that among the building blocks (e.g. elements of the corresponding matrices) there exist noninvertible objects (divisors of zero and nilpotents) which by themselves can form another semigroup. Therefore, we have to take that into account and investigate it properly, which can be called a semigroup × semigroup method. Here we study continuous supermatrix representations of idempotent operator semigroups firstly introduced in [12, 13] for bands. Usually matrix semigroups are defined over a field K [14] (on some non-supersymmetric generalizations of K-representations see [15, 16]).
  • Reed-Solomon Code and Its Application
IERG 6120 Coding Theory for Storage Systems, Lecture 5 - 27/09/2016. Reed-Solomon Code and Its Application. Lecturer: Kenneth Shum. Scribe: Xishi Wang.
1 Equivalence of Codes
Two linear codes are said to be equivalent if one of them can be obtained from the other by means of a sequence of transformations of the following types: (i) a permutation of the positions of the code; (ii) multiplication of symbols in a fixed position by a non-zero scalar in F. Note that these transformations can be applied to all code symbols.
2 Reed-Solomon Code
In an RS code each symbol of the codeword is the evaluation of a polynomial at one point α, namely,
$f(\alpha) = \begin{pmatrix} c_0 & c_1 & c_2 & \cdots & c_{k-1} \end{pmatrix} \begin{pmatrix} 1 \\ \alpha \\ \alpha^2 \\ \vdots \\ \alpha^{k-1} \end{pmatrix}.$
The whole codeword is given by n such evaluations at distinct points $\alpha_1, \ldots, \alpha_n$:
$\begin{pmatrix} f(\alpha_1) & f(\alpha_2) & \cdots & f(\alpha_n) \end{pmatrix} = \begin{pmatrix} c_0 & c_1 & c_2 & \cdots & c_{k-1} \end{pmatrix} \begin{pmatrix} 1 & 1 & \cdots & 1 \\ \alpha_1 & \alpha_2 & \cdots & \alpha_n \\ \alpha_1^2 & \alpha_2^2 & \cdots & \alpha_n^2 \\ \vdots & & \ddots & \vdots \\ \alpha_1^{k-1} & \alpha_2^{k-1} & \cdots & \alpha_n^{k-1} \end{pmatrix} = \mathbf{c}\, G.$
G is the generator matrix of the $\mathrm{RS}_q(n, k, d)$ code. The order of writing down the field elements $\alpha_1$ to $\alpha_n$ is not important as far as the code structure is concerned, as any permutation of the $\alpha_i$'s gives an equivalent code. The generator matrix G is a Vandermonde matrix, which is of the form $[\alpha_j^i]_{i=0,\ldots,k-1}^{j=1,\ldots,n}$.
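A small worked example may help; the following sketch (not from the lecture notes, with n = 6, k = 3 over GF(7) chosen arbitrarily) encodes a message by multiplying it with the Vandermonde generator matrix G and checks the result against direct polynomial evaluation.

```python
# Hedged sketch (not from the lecture notes): encode a message with a
# Reed-Solomon code over GF(7) by evaluating its message polynomial at
# distinct field points, i.e. by multiplying with the Vandermonde generator
# matrix G described above.  Parameters n=6, k=3 are illustrative.
import numpy as np

p, n, k = 7, 6, 3
alphas = np.array([1, 2, 3, 4, 5, 6])          # n distinct evaluation points in GF(7)
c = np.array([2, 5, 1])                        # message coefficients c_0, c_1, c_2

# Generator matrix G with entries alpha_j^i mod p, i = 0..k-1, j = 1..n.
G = np.array([[pow(int(a), i, p) for a in alphas] for i in range(k)])

codeword = (c @ G) % p                         # evaluations f(alpha_1), ..., f(alpha_n)
direct = np.array([(c[0] + c[1] * a + c[2] * a * a) % p for a in alphas])
print(codeword, np.array_equal(codeword, direct))   # identical results
```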
  • Clifford Algebras, Spinors and Supersymmetry. Francesco Toppan
IV Escola do CBPF - Rio de Janeiro, 15-26 July 2002. Algebraic Structures and the Search for the Theory of Everything: Clifford algebras, spinors and supersymmetry. Francesco Toppan, CCP - CBPF, Rua Dr. Xavier Sigaud 150, cep 22290-180, Rio de Janeiro (RJ), Brazil.
Abstract: These lecture notes are intended to cover a small part of the material discussed in the course "Estruturas algebricas na busca da Teoria do Todo" ("Algebraic structures in the search for the Theory of Everything"). The Clifford algebras, necessary to introduce the Dirac equation for free spinors in any arbitrary signature space-time, are fully classified and explicitly constructed with the help of simple, but powerful, algorithms which are here presented. The notion of supersymmetry is introduced and discussed in the context of Clifford algebras.
1 Introduction
The basic motivations of the course "Estruturas algebricas na busca da Teoria do Todo" consisted in familiarizing graduate students with some of the algebraic structures which are currently investigated by theoretical physicists in the attempt of finding a consistent and unified quantum theory of the four known interactions. Both from aesthetic and practical considerations, the classification of mathematical and algebraic structures is a preliminary and necessary requirement. Indeed, a very ambitious, but conceivable hope for a unified theory is that no free parameter (or, less ambitiously, just a few) has to be fixed as an external input due to phenomenological requirements. Rather, all possible parameters should be predicted by the stringent consistency requirements put on such a theory. An example of this can be immediately given. It concerns the dimensionality of the space-time.
  • Week 8-9. Inner Product Spaces (Revised Version)
Math 2051 W2008, Margo Kondratieva. Week 8-9. Inner product spaces. (revised version)
Section 3.1 Dot product as an inner product. Consider a linear (vector) space V. (Let us restrict ourselves to only real spaces, that is, we will not deal with complex numbers and vectors.)
Definition 1. An inner product on V is a function which assigns a real number, denoted by $\langle \vec{u}, \vec{v} \rangle$, to every pair of vectors $\vec{u}, \vec{v} \in V$ such that
(1) $\langle \vec{u}, \vec{v} \rangle = \langle \vec{v}, \vec{u} \rangle$ for all $\vec{u}, \vec{v} \in V$;
(2) $\langle \vec{u} + \vec{v}, \vec{w} \rangle = \langle \vec{u}, \vec{w} \rangle + \langle \vec{v}, \vec{w} \rangle$ for all $\vec{u}, \vec{v}, \vec{w} \in V$;
(3) $\langle k\vec{u}, \vec{v} \rangle = k \langle \vec{u}, \vec{v} \rangle$ for any $k \in \mathbb{R}$ and $\vec{u}, \vec{v} \in V$;
(4) $\langle \vec{v}, \vec{v} \rangle \ge 0$ for all $\vec{v} \in V$, and $\langle \vec{v}, \vec{v} \rangle = 0$ only for $\vec{v} = \vec{0}$.
Definition 2. An inner product space is a vector space equipped with an inner product.
It is straightforward to check that the dot product introduced by $\vec{u} \cdot \vec{v} = \sum_{j=1}^{n} u_j v_j$ is an inner product. You are advised to verify all the properties listed in the definition, as an exercise. The dot product is also called the Euclidean inner product.
Definition 3. A Euclidean vector space is $\mathbb{R}^n$ equipped with the Euclidean inner product $\langle \vec{u}, \vec{v} \rangle = \vec{u} \cdot \vec{v}$.
Definition 4. A square matrix A is called positive definite if $\vec{v}^T A \vec{v} > 0$ for any vector $\vec{v} \ne \vec{0}$.
Problem 1. Show that $\begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$ is positive definite.
Solution: Take $\vec{v} = (x, y)^T$. Then $\vec{v}^T A \vec{v} = 2x^2 + 3y^2 > 0$ for $(x, y) \ne (0, 0)$.
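A quick numerical companion to Problem 1 (a sketch written for this page, not part of the course notes): check $\vec{v}^T A \vec{v} > 0$ on a batch of random nonzero vectors, and note that the eigenvalues of this diagonal matrix are its diagonal entries 2 and 3, both positive.

```python
# Hedged numerical illustration (not from the course notes) of Problem 1:
# diag(2, 3) is positive definite, checked both by the defining inequality
# v^T A v > 0 on random nonzero vectors and by its eigenvalues being positive.
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

rng = np.random.default_rng(0)
vs = rng.normal(size=(1000, 2))
quadratic_forms = np.einsum('ni,ij,nj->n', vs, A, vs)   # v^T A v for each sample
print(bool(np.all(quadratic_forms > 0)))                # True

print(np.linalg.eigvalsh(A))                            # [2. 3.] -- all positive
```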
  • Gram Matrix and Orthogonality in Frames
U.P.B. Sci. Bull., Series A, Vol. 80, Iss. 1, 2018, ISSN 1223-7027. GRAM MATRIX AND ORTHOGONALITY IN FRAMES. Abolhassan FEREYDOONI and Elnaz OSGOOEI.
In this paper, we aim at introducing a criterion that determines whether $\{f_i\}_{i \in I}$ is a Bessel sequence, a frame, a Riesz sequence, or not any of these, based on the norms and the inner products of the elements in $\{f_i\}_{i \in I}$. In the cases of Riesz and Bessel sequences we introduce a criterion, but in the case of a frame we did not find any answers. This criterion will be denoted by $K(\{f_i\}_{i \in I})$. Using the criterion introduced, some interesting extensions of orthogonality will be presented.
Keywords: Frames, Bessel sequences, Orthonormal basis, Riesz bases, Gram matrix. MSC2010: Primary 42C15, 47A05.
1. Preliminaries. Frames are generalizations of orthonormal bases, but, more than orthonormal bases, they have shown their ability and stability in the representation of functions [1, 4, 10, 11]. Frames have been deeply studied from an abstract point of view. The results of such studies have been used in concrete frames such as Gabor and Wavelet frames, which are very important from a practical point of view [2, 9, 5, 8]. An orthonormal set $\{e_n\}$ in a Hilbert space is characterized by the simple relation $\langle e_m, e_n \rangle = \delta_{m,n}$: in other words, its Gram matrix is the identity matrix. Moreover, $\{e_n\}$ is an orthonormal basis if $\mathrm{span}\{e_n\} = H$. But for frames the situation is more complicated; i.e., the Gram matrix has no such simple form.
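The contrast the authors draw can be seen in a few lines of numpy (an illustration added here, not taken from the paper): the Gram matrix of an orthonormal set is the identity, while the Gram matrix of a tight frame such as the three-vector frame from the earlier excerpt is not.

```python
# Hedged illustration (not from the paper): the Gram matrix <e_m, e_n> of an
# orthonormal set is the identity, while the Gram matrix of the three-vector
# tight frame in R^2 is not, even though that frame still gives perfect
# reconstruction.
import numpy as np

E = np.eye(3)                                   # orthonormal basis of R^3 (rows)
print(np.allclose(E @ E.T, np.eye(3)))          # Gram matrix = identity

angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
F = np.array([[np.cos(t), np.sin(t)] for t in angles])   # Mercedes-Benz frame in R^2
print(np.round(F @ F.T, 3))                     # 1's on the diagonal, -0.5 off-diagonal
```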
  • Uniqueness of Low-Rank Matrix Completion by Rigidity Theory
    UNIQUENESS OF LOW-RANK MATRIX COMPLETION BY RIGIDITY THEORY AMIT SINGER∗ AND MIHAI CUCURINGU† Abstract. The problem of completing a low-rank matrix from a subset of its entries is often encountered in the analysis of incomplete data sets exhibiting an underlying factor model with applications in collaborative filtering, computer vision and control. Most recent work had been focused on constructing efficient algorithms for exact or approximate recovery of the missing matrix entries and proving lower bounds for the number of known entries that guarantee a successful recovery with high probability. A related problem from both the mathematical and algorithmic point of view is the distance geometry problem of realizing points in a Euclidean space from a given subset of their pairwise distances. Rigidity theory answers basic questions regarding the uniqueness of the realization satisfying a given partial set of distances. We observe that basic ideas and tools of rigidity theory can be adapted to determine uniqueness of low-rank matrix completion, where inner products play the role that distances play in rigidity theory. This observation leads to efficient randomized algorithms for testing necessary and sufficient conditions for local completion and for testing sufficient conditions for global completion. Crucial to our analysis is a new matrix, which we call the completion matrix, that serves as the analogue of the rigidity matrix. Key words. Low rank matrices, missing values, rigidity theory, iterative methods, collaborative filtering. AMS subject classifications. 05C10, 05C75, 15A48 1. Introduction. Can the missing entries of an incomplete real valued matrix be recovered? Clearly, a matrix can be completed in an infinite number of ways by replacing the missing entries with arbitrary values.
  • 5 The Dirac Equation and Spinors
5 The Dirac Equation and Spinors
In this section we develop the appropriate wavefunctions for fundamental fermions and bosons.
5.1 Notation Review
The three-dimensional differential operator is:
$\nabla = \left( \frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z} \right) \qquad (5.1)$
We can generalise this to four dimensions, $\partial_\mu$:
$\partial_\mu = \left( \frac{1}{c}\frac{\partial}{\partial t}, \frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z} \right) \qquad (5.2)$
5.2 The Schrödinger Equation
First consider a classical non-relativistic particle of mass m in a potential U. The energy-momentum relationship is:
$E = \frac{p^2}{2m} + U \qquad (5.3)$
We can substitute the differential operators:
$\hat{E} \to i\frac{\partial}{\partial t}, \qquad \hat{p} \to -i\nabla \qquad (5.4)$
to obtain the non-relativistic Schrödinger equation (with $\hbar = 1$):
$i\frac{\partial \psi}{\partial t} = \left( -\frac{1}{2m}\nabla^2 + U \right)\psi \qquad (5.5)$
For U = 0, the free particle solutions are:
$\psi(x, t) \propto e^{-iEt}\,\psi(x) \qquad (5.6)$
and the probability density ρ and current j are given by:
$\rho = |\psi(x)|^2, \qquad \vec{j} = -\frac{i}{2m}\left( \psi^* \nabla \psi - \psi \nabla \psi^* \right) \qquad (5.7)$
with conservation of probability giving the continuity equation:
$\frac{\partial \rho}{\partial t} + \nabla \cdot \vec{j} = 0, \qquad (5.8)$
or in covariant notation:
$\partial_\mu j^\mu = 0 \quad \text{with} \quad j^\mu = (\rho, \vec{j}) \qquad (5.9)$
The Schrödinger equation is 1st order in ∂/∂t but second order in ∂/∂x. However, as we are going to be dealing with relativistic particles, space and time should be treated equally.
5.3 The Klein-Gordon Equation
For a relativistic particle the energy-momentum relationship is:
$p \cdot p = p_\mu p^\mu = E^2 - |\vec{p}|^2 = m^2 \qquad (5.10)$
Substituting the operators of equation (5.4) leads to the relativistic Klein-Gordon equation:
$\left( -\frac{\partial^2}{\partial t^2} + \nabla^2 \right)\psi = m^2 \psi \qquad (5.11)$
The free particle solutions are plane waves:
$\psi \propto e^{-ip\cdot x} = e^{-i(Et - \vec{p}\cdot\vec{x})} \qquad (5.12)$
The Klein-Gordon equation successfully describes spin 0 particles in relativistic quantum field theory.
  • Linear Algebra Handbook
CS419 Linear Algebra, January 7, 2021
1 What do we need to know?
By the end of this booklet, you should know all the linear algebra you need for CS419. More specifically, you'll understand:
• Vector spaces (the important ones, at least)
• Dirac (bra-ket) notation and why it's quite nice to use
• Linear combinations, linearly independent sets, spanning sets and basis sets
• Matrices and linear transformations (they're the same thing)
• Changing basis
• Inner products and norms
• Unitary operations
• Tensor products
• Why we care about linear algebra
2 Vector Spaces
In quantum computing, the 'vectors' we'll be working with are going to be made up of complex numbers. A vector space, V, over $\mathbb{C}$ is a set of vectors with the vital property that $\alpha u + \beta v \in V$ for all $\alpha, \beta \in \mathbb{C}$ and $u, v \in V$. Intuitively, this means we can add together and scale up vectors in V, and we know the result is still in V. Our vectors are going to be lists of n complex numbers, $v \in \mathbb{C}^n$, and $\mathbb{C}^n$ will be our most important vector space. Note we can just as easily define vector spaces over $\mathbb{R}$, the set of real numbers. Over the course of this module, we'll see the reasons we use $\mathbb{C}$, but for all this linear algebra, we can stick with $\mathbb{R}$ as everyone is happier with real numbers. Rest assured for the entire module, every time you see something like "Consider a vector space V", this vector space will be $\mathbb{R}^n$ or $\mathbb{C}^n$ for some $n \in \mathbb{N}$.
  • A List of All the Errata Known As of 19 December 2013
... that is nonnegative when the matrices are the same:
$(A, A) = \mathrm{Tr}\, A^\dagger A = \sum_{i=1}^{N} \sum_{j=1}^{L} A_{ij}^* A_{ij} = \sum_{i=1}^{N} \sum_{j=1}^{L} |A_{ij}|^2 \ge 0 \qquad (1.87)$
which is zero only when A = 0. So this inner product is positive definite. A vector space with a positive-definite inner product (1.73-1.76) is called an inner-product space, a metric space, or a pre-Hilbert space. A sequence of vectors $f_n$ is a Cauchy sequence if for every $\epsilon > 0$ there is an integer $N(\epsilon)$ such that $\|f_n - f_m\| < \epsilon$ whenever both n and m exceed $N(\epsilon)$. A sequence of vectors $f_n$ converges to a vector f if for every $\epsilon > 0$ there is an integer $N(\epsilon)$ such that $\|f - f_n\| < \epsilon$ whenever n exceeds $N(\epsilon)$. An inner-product space with a norm defined as in (1.80) is complete if each of its Cauchy sequences converges to a vector in that space. A Hilbert space is a complete inner-product space. Every finite-dimensional inner-product space is complete and so is a Hilbert space. But the term Hilbert space more often is used to describe infinite-dimensional complete inner-product spaces, such as the space of all square-integrable functions (David Hilbert, 1862-1943).
Example 1.17 (The Hilbert Space of Square-Integrable Functions). For the vector space of functions (1.55), a natural inner product is
$(f, g) = \int_a^b dx\, f^*(x)\, g(x). \qquad (1.88)$
The squared norm of a function f(x) is
$\| f \|^2 = \int_a^b dx\, |f(x)|^2.$
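Equation (1.87) is straightforward to verify numerically; the following sketch (not from the book) checks that the trace form and the sum of squared moduli agree for a random complex matrix, and that the value is nonnegative.

```python
# Hedged numerical check (not from the book) of equation (1.87): the matrix
# inner product (A, A) = Tr A^dagger A equals the sum of |A_ij|^2, so it is
# nonnegative and vanishes only for A = 0.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))

trace_form = np.trace(A.conj().T @ A).real
sum_form = np.sum(np.abs(A) ** 2)
print(np.isclose(trace_form, sum_form), trace_form >= 0)   # True True
```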
  • Howe Pairs, Supersymmetry, and Ratios of Random Characteristic Polynomials for the Unitary Groups U_N
HOWE PAIRS, SUPERSYMMETRY, AND RATIOS OF RANDOM CHARACTERISTIC POLYNOMIALS FOR THE UNITARY GROUPS U_N. by J.B. Conrey, D.W. Farmer & M.R. Zirnbauer.
Abstract. For the classical compact Lie groups $K \equiv U_N$ the autocorrelation functions of ratios of characteristic polynomials $(z, w) \mapsto \mathrm{Det}(z - k)/\mathrm{Det}(w - k)$ are studied with $k \in K$ as random variable. Basic to our treatment is a property shared by the spinor representation of the spin group with the Shale-Weil representation of the metaplectic group: in both cases the character is the analytic square root of a determinant or the reciprocal thereof. By combining this fact with Howe's theory of supersymmetric dual pairs $(\mathfrak{g}, K)$, we express the K-Haar average product of p ratios of characteristic polynomials and q conjugate ratios as a character $\chi$ which is associated with an irreducible representation of the Lie superalgebra $\mathfrak{g} = \mathfrak{gl}_{n|n}$ for $n = p + q$. The primitive character $\chi$ is shown to extend to an analytic radial section of a real supermanifold related to $\mathfrak{gl}_{n|n}$, and is computed by invoking Berezin's description of the radial parts of Laplace-Casimir operators for $\mathfrak{gl}_{n|n}$. The final result for $\chi$ looks like a natural transcription of the Weyl character formula to the context of highest-weight representations of Lie supergroups. While several other works have recently reproduced our results in the stable range $N \ge \max(p, q)$, the present approach covers the full range of matrix dimensions $N \in \mathbb{N}$. To make this paper accessible to the non-expert reader, we have included a chapter containing the required background material from superanalysis.