Notes on Mathematics - 102


Peeyush Chandra, A. K. Lal, V. Raghavendra, G. Santhanam

(Supported by a grant from MHRD)

Contents

Part I: Linear Algebra

1 Matrices
  1.1 Definition of a Matrix
    1.1.1 Special Matrices
  1.2 Operations on Matrices
    1.2.1 Multiplication of Matrices
  1.3 Some More Special Matrices
    1.3.1 Submatrix of a Matrix
    1.3.2 Block Matrices
  1.4 Matrices over Complex Numbers

2 Linear System of Equations
  2.1 Introduction
  2.2 Definition and a Solution Method
    2.2.1 A Solution Method
  2.3 Row Operations and Equivalent Systems
    2.3.1 Gauss Elimination Method
  2.4 Row Reduced Echelon Form of a Matrix
    2.4.1 Gauss-Jordan Elimination
    2.4.2 Elementary Matrices
  2.5 Rank of a Matrix
  2.6 Existence of Solution of Ax = b
    2.6.1 Example
    2.6.2 Main Theorem
    2.6.3 Exercises
  2.7 Invertible Matrices
    2.7.1 Inverse of a Matrix
    2.7.2 Equivalent Conditions for Invertibility
    2.7.3 Inverse and Gauss-Jordan Method
  2.8 Determinant
    2.8.1 Adjoint of a Matrix
    2.8.2 Cramer's Rule
  2.9 Miscellaneous Exercises

3 Finite Dimensional Vector Spaces
  3.1 Vector Spaces
    3.1.1 Definition
    3.1.2 Examples
    3.1.3 Subspaces
    3.1.4 Linear Combinations
  3.2 Linear Independence
  3.3 Bases
    3.3.1 Important Results
  3.4 Ordered Bases

4 Linear Transformations
  4.1 Definitions and Basic Properties
  4.2 Matrix of a Linear Transformation
  4.3 Rank-Nullity Theorem
  4.4 Similarity of Matrices

5 Inner Product Spaces
  5.1 Definition and Basic Properties
  5.2 Gram-Schmidt Orthogonalisation Process
  5.3 Orthogonal Projections and Applications
    5.3.1 Matrix of the Orthogonal Projection

6 Eigenvalues, Eigenvectors and Diagonalization
  6.1 Introduction and Definitions
  6.2 Diagonalization
  6.3 Diagonalizable Matrices
  6.4 Sylvester's Law of Inertia and Applications

Part II: Ordinary Differential Equations

7 Differential Equations
  7.1 Introduction and Preliminaries
  7.2 Separable Equations
    7.2.1 Equations Reducible to Separable Form
  7.3 Exact Equations
    7.3.1 Integrating Factors
  7.4 Linear Equations
  7.5 Miscellaneous Remarks
  7.6 Initial Value Problems
    7.6.1 Orthogonal Trajectories
  7.7 Numerical Methods

8 Second Order and Higher Order Equations
  8.1 Introduction
  8.2 More on Second Order Equations
    8.2.1 Wronskian
    8.2.2 Method of Reduction of Order
  8.3 Second Order Equations with Constant Coefficients
  8.4 Non-Homogeneous Equations
  8.5 Variation of Parameters
  8.6 Higher Order Equations with Constant Coefficients
  8.7 Method of Undetermined Coefficients

9 Solutions Based on Power Series
  9.1 Introduction
    9.1.1 Properties of Power Series
  9.2 Solutions in Terms of Power Series
  9.3 Statement of Frobenius Theorem for Regular (Ordinary) Point
  9.4 Legendre Equations and Legendre Polynomials
    9.4.1 Introduction
    9.4.2 Legendre Polynomials

Part III: Laplace Transform

10 Laplace Transform
  10.1 Introduction
  10.2 Definitions and Examples
    10.2.1 Examples
  10.3 Properties of Laplace Transform
    10.3.1 Inverse Transforms of Rational Functions
    10.3.2 Transform of Unit Step Function
  10.4 Some Useful Results
    10.4.1 Limiting Theorems
  10.5 Application to Differential Equations
  10.6 Transform of the Unit-Impulse Function

Part IV: Numerical Applications

11 Newton's Interpolation Formulae
  11.1 Introduction
  11.2 Difference Operator
    11.2.1 Forward Difference Operator
    11.2.2 Backward Difference Operator
    11.2.3 Central Difference Operator
    11.2.4 Shift Operator
    11.2.5 Averaging Operator
  11.3 Relations between Difference Operators
  11.4 Newton's Interpolation Formulae

12 Lagrange's Interpolation Formula
  12.1 Introduction
  12.2 Divided Differences
  12.3 Lagrange's Interpolation Formula
  12.4 Gauss's and Stirling's Formulas

13 Numerical Differentiation and Integration
  13.1 Introduction
  13.2 Numerical Differentiation
  13.3 Numerical Integration
    13.3.1 A General Quadrature Formula
    13.3.2 Trapezoidal Rule
    13.3.3 Simpson's Rule

14 Appendix
  14.1 System of Linear Equations
  14.2 Determinant
  14.3 Properties of Determinant
  14.4 Dimension of M + N
  14.5 Proof of Rank-Nullity Theorem
  14.6 Condition for Exactness
Part I: Linear Algebra

Chapter 1: Matrices

1.1 Definition of a Matrix

Definition 1.1.1 (Matrix) A rectangular array of numbers is called a matrix. We shall mostly be concerned with matrices having real numbers as entries.

The horizontal arrays of a matrix are called its rows and the vertical arrays are called its columns. A matrix having $m$ rows and $n$ columns is said to have the order $m \times n$.

A matrix $A$ of order $m \times n$ can be represented in the following form:
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix},$$
where $a_{ij}$ is the entry at the intersection of the $i$-th row and $j$-th column. In a more concise manner, we also denote the matrix $A$ by $[a_{ij}]$, suppressing its order.

Remark 1.1.2 Some books also use round brackets, $\begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}$, to represent a matrix.

Let $A = \begin{bmatrix} 1 & 3 & 7 \\ 4 & 5 & 6 \end{bmatrix}$. Then $a_{11} = 1$, $a_{12} = 3$, $a_{13} = 7$, $a_{21} = 4$, $a_{22} = 5$, and $a_{23} = 6$.

A matrix having only one column is called a column vector, and a matrix having only one row is called a row vector. Whenever a vector is used, it should be understood from the context whether it is a row vector or a column vector.

Definition 1.1.3 (Equality of two Matrices) Two matrices $A = [a_{ij}]$ and $B = [b_{ij}]$ having the same order $m \times n$ are equal if $a_{ij} = b_{ij}$ for each $i = 1, 2, \ldots, m$ and $j = 1, 2, \ldots, n$.

In other words, two matrices are said to be equal if they have the same order and their corresponding entries are equal.

Example 1.1.4 The linear system of equations $2x + 3y = 5$ and $3x + 2y = 5$ can be identified with the matrix $\begin{bmatrix} 2 & 3 & : & 5 \\ 3 & 2 & : & 5 \end{bmatrix}$.

1.1.1 Special Matrices

Definition 1.1.5

1. A matrix in which each entry is zero is called a zero-matrix, denoted by $\mathbf{0}$. For example,
$$0_{2 \times 2} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \quad \text{and} \quad 0_{2 \times 3} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$

2. A matrix having the number of rows equal to the number of columns is called a square matrix. Thus, its order is $m \times m$ (for some $m$), and it is represented by $m$ only.

3. In a square matrix $A = [a_{ij}]$ of order $n$, the entries $a_{11}, a_{22}, \ldots, a_{nn}$ are called the diagonal entries and form the principal diagonal of $A$.

4. A square matrix $A = [a_{ij}]$ is said to be a diagonal matrix if $a_{ij} = 0$ for $i \neq j$. In other words, the non-zero entries appear only on the principal diagonal. For example, the zero matrix $0_n$ and $\begin{bmatrix} 4 & 0 \\ 0 & 1 \end{bmatrix}$ are a few diagonal matrices. A diagonal matrix $D$ of order $n$ with diagonal entries $d_1, d_2, \ldots, d_n$ is denoted by $D = \mathrm{diag}(d_1, \ldots, d_n)$. If $d_i = d$ for all $i = 1, 2, \ldots, n$, then the diagonal matrix $D$ is called a scalar matrix.

5. A square matrix $A = [a_{ij}]$ with
$$a_{ij} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j \end{cases}$$
is called the identity matrix, denoted by $I_n$. For example, $I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ and $I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$.
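To make the conventions above concrete, here is a minimal sketch in Python with NumPy; the notes themselves use no software, so the language and library are assumptions for illustration only. It builds the $2 \times 3$ matrix $A$ from the example above, reads off entries (NumPy counts indices from 0, so $a_{ij}$ lives at `A[i-1, j-1]`), and implements the equality test of Definition 1.1.3.

```python
import numpy as np

# The 2 x 3 matrix A from the example following Remark 1.1.2.
A = np.array([[1, 3, 7],
              [4, 5, 6]])

m, n = A.shape        # the order m x n; here (2, 3)
a11 = A[0, 0]         # NumPy indexes from 0, so a_ij is A[i-1, j-1]
a23 = A[1, 2]

def matrices_equal(A, B):
    """Definition 1.1.3: same order, then equal corresponding entries."""
    return A.shape == B.shape and bool(np.all(A == B))

B = np.array([[1, 3, 7],
              [4, 5, 6]])
print(m, n, a11, a23)          # 2 3 1 6
print(matrices_equal(A, B))    # True
print(matrices_equal(A, B.T))  # False: orders 2 x 3 and 3 x 2 differ
```

Checking the order before the entries mirrors the definition exactly, and it also prevents the function from ever comparing matrices of different orders entrywise.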
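Each matrix in Definition 1.1.5 likewise has a one-line construction; the sketch below (under the same assumed Python/NumPy setup as above) builds the examples from the definition and tests the defining property of a diagonal matrix, $a_{ij} = 0$ for $i \neq j$.

```python
import numpy as np

Z = np.zeros((2, 3))      # the zero-matrix 0_{2x3}
D = np.diag([4, 1])       # the diagonal matrix with diagonal entries 4 and 1
S = np.diag([3.0] * 4)    # a scalar matrix: all diagonal entries equal (d = 3)
I3 = np.identity(3)       # the identity matrix I_3

def is_diagonal(A):
    """A square matrix is diagonal iff every off-diagonal entry is zero."""
    rows, cols = A.shape
    if rows != cols:
        return False
    off_diag = ~np.eye(rows, dtype=bool)   # mask for the entries with i != j
    return bool(np.all(A[off_diag] == 0))

print(is_diagonal(D), is_diagonal(S), is_diagonal(I3))  # True True True
print(is_diagonal(np.array([[4, 2],
                            [0, 1]])))                  # False: a_12 = 2 != 0
```

Note that, by the definition, the identity matrix $I_n$ is both a diagonal matrix and a scalar matrix (the case $d = 1$).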