Linear Algebra Review
Total pages: 16. File type: PDF, size: 1020 KB.
Recommended publications
On Multivariate Interpolation
On Multivariate Interpolation. Peter J. Olver, School of Mathematics, University of Minnesota, Minneapolis, MN 55455, U.S.A. [email protected], http://www.math.umn.edu/~olver

Abstract. A new approach to interpolation theory for functions of several variables is proposed. We develop a multivariate divided difference calculus based on the theory of non-commutative quasi-determinants. In addition, intriguing explicit formulae that connect the classical finite difference interpolation coefficients for univariate curves with multivariate interpolation coefficients for higher dimensional submanifolds are established. (Supported in part by NSF Grant DMS 11–08894. April 6, 2016.)

1. Introduction. Interpolation theory for functions of a single variable has a long and distinguished history, dating back to Newton's fundamental interpolation formula and the classical calculus of finite differences, [7, 47, 58, 64]. Standard numerical approximations to derivatives and many numerical integration methods for differential equations are based on the finite difference calculus. However, historically, no comparable calculus was developed for functions of more than one variable. If one looks up multivariate interpolation in the classical books, one is essentially restricted to rectangular, or, slightly more generally, separable grids, over which the formulae are a simple adaptation of the univariate divided difference calculus. See [19] for historical details. Starting with G. Birkhoff, [2] (who was, coincidentally, my thesis advisor), recent years have seen a renewed level of interest in multivariate interpolation among both pure and applied researchers; see [18] for a fairly recent survey containing an extensive bibliography. De Boor and Ron, [8, 12, 13], and Sauer and Xu, [61, 10, 65], have systematically studied the polynomial case.
Orthogonal Reduction: The Row Echelon Form
MATH 5330: Computational Methods of Linear Algebra. Lecture 9: Orthogonal Reduction. Xianyi Zeng, Department of Mathematical Sciences, UTEP.

1 The Row Echelon Form. Our target is to solve the normal equation:

A^T A x = A^T b, (1.1)

where A ∈ R^{m×n} is arbitrary; we have shown previously that this is equivalent to the least squares problem:

min_{x ∈ R^n} ||Ax − b||. (1.2)

As A^T A ∈ R^{n×n} is symmetric positive semi-definite, we can try to compute the Cholesky decomposition A^T A = L L^T for some lower-triangular matrix L ∈ R^{n×n}. One problem with this approach is that we are not fully exploiting the available information: in the Cholesky decomposition we treat A^T A as a single entity, ignoring what we know about A itself. In particular, the structure of A^T A motivates us to study a factorization A = QE, where Q ∈ R^{m×m} is orthogonal and E ∈ R^{m×n} is to be determined. Then we may transform the normal equation to:

E^T E x = E^T Q^T b, (1.3)

where the identity Q^T Q = I_m (the identity matrix in R^{m×m}) is used. This normal equation is equivalent to the least squares problem with E:

min_{x ∈ R^n} ||Ex − Q^T b||. (1.4)

Because an orthogonal transformation preserves the L2-norm, (1.2) and (1.4) are equivalent to each other. Indeed, for any x ∈ R^n:

||Ax − b||^2 = (b − Ax)^T (b − Ax) = (b − QEx)^T (b − QEx) = [Q(Q^T b − Ex)]^T [Q(Q^T b − Ex)] = (Q^T b − Ex)^T Q^T Q (Q^T b − Ex) = (Q^T b − Ex)^T (Q^T b − Ex) = ||Ex − Q^T b||^2.

Hence the target is to find an E such that (1.3) is easier to solve.
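As a rough illustration of the idea in this excerpt (my own sketch, not part of the lecture notes), the least squares problem can be solved through an orthogonal factorization instead of forming A^T A explicitly; here a reduced QR factorization stands in for the factorization A = QE:

```python
import numpy as np

# Minimal sketch: solve min ||Ax - b|| via a reduced QR factorization A = QR
# and compare with the normal-equation solution (A^T A) x = A^T b.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))      # arbitrary m x n matrix with m > n
b = rng.standard_normal(6)

Q, R = np.linalg.qr(A)               # Q: 6x3 with orthonormal columns, R: 3x3 upper triangular
x_qr = np.linalg.solve(R, Q.T @ b)   # solve the triangular system R x = Q^T b

x_ne = np.linalg.solve(A.T @ A, A.T @ b)   # normal equations, for comparison
print(np.allclose(x_qr, x_ne))             # True: both give the least-squares solution
```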
Linear Independence, Span, and Basis of a Set of Vectors What Is Linear Independence?
Lecture notes · Purdue University · MA 26500 (Linear Algebra) · Kyle Kloster · October 22, 2014

Linear Independence, Span, and Basis of a Set of Vectors

What is linear independence? A set of vectors S = {v_1, ..., v_k} is linearly independent if none of the vectors v_i can be written as a linear combination of the other vectors. Suppose instead that some vector v_j can be written as a linear combination of the other vectors, i.e. there exist scalars α_i such that v_j = α_1 v_1 + ··· + α_k v_k holds (with the j-th term omitted from the right-hand side); this is equivalent to saying that the vectors v_1, ..., v_k are linearly dependent. We can subtract v_j to move it over to the other side and obtain an expression 0 = α_1 v_1 + ··· + α_k v_k, where the term v_j now appears (with coefficient −1) on the right-hand side. In other words, the condition that "the set of vectors S = {v_1, ..., v_k} is linearly dependent" is equivalent to the condition that there exist α_i, not all of which are zero, such that

0 = [v_1 v_2 ··· v_k] (α_1, α_2, ..., α_k)^T.

More concisely, form the matrix V whose columns are the vectors v_i. Then the set S of vectors v_i is a linearly dependent set if there is a nonzero solution x such that V x = 0. This means that the condition that "the set of vectors S = {v_1, ..., v_k} is linearly independent" is equivalent to the condition that "the only solution x to the equation V x = 0 is the zero vector, i.e. x = 0." How do you determine if a set is linearly independent?
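A quick way to run the test described here (an illustrative sketch, not from the notes) is to stack the vectors as the columns of V and check whether V x = 0 forces x = 0, for instance by comparing the rank of V with the number of columns:

```python
import numpy as np

# Sketch: put the vectors as the columns of V and test whether V x = 0
# has only the trivial solution.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([2.0, 1.0, 5.0])           # v3 = 2*v1 + v2, so the set is dependent

V = np.column_stack([v1, v2, v3])

# The columns are linearly independent exactly when rank(V) equals the
# number of columns, i.e. when the nullspace of V is {0}.
independent = np.linalg.matrix_rank(V) == V.shape[1]
print(independent)                       # False for this dependent example
```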
On the Eigenvalues of Euclidean Distance Matrices
Volume 27, N. 3, pp. 237–250, 2008. Copyright © 2008 SBMAC. ISSN 0101-8205. www.scielo.br/cam

On the eigenvalues of Euclidean distance matrices. A. Y. Alfakih, Department of Mathematics and Statistics, University of Windsor, Windsor, Ontario N9B 3P4, Canada. E-mail: [email protected]

Abstract. In this paper, the notion of equitable partitions (EP) is used to study the eigenvalues of Euclidean distance matrices (EDMs). In particular, EP is used to obtain the characteristic polynomials of regular EDMs and non-spherical centrally symmetric EDMs. The paper also presents methods for constructing cospectral EDMs and EDMs with exactly three distinct eigenvalues.

Mathematical subject classification: 51K05, 15A18, 05C50.

Key words: Euclidean distance matrices, eigenvalues, equitable partitions, characteristic polynomial.

1 Introduction. An n×n nonzero matrix D = (d_ij) is called a Euclidean distance matrix (EDM) if there exist points p^1, p^2, ..., p^n in some Euclidean space R^r such that d_ij = ||p^i − p^j||^2 for all i, j = 1, ..., n, where || || denotes the Euclidean norm. Let p^i, i ∈ N = {1, 2, ..., n}, be the set of points that generate an EDM D. An m-partition of D is an ordered sequence π = (N_1, N_2, ..., N_m) of nonempty disjoint subsets of N whose union is N. The subsets N_1, ..., N_m are called the cells of the partition. The n-partition of D where each cell consists ...
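For a concrete feel for the objects defined in this excerpt (an illustrative sketch of the definition only, not from the paper), one can build an EDM from a few points and inspect its spectrum; a nonzero EDM has exactly one positive eigenvalue:

```python
import numpy as np

# Sketch: build a Euclidean distance matrix D with entries d_ij = ||p_i - p_j||^2
# and look at its eigenvalues.
P = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])              # four points p_1..p_4 in R^2

diff = P[:, None, :] - P[None, :, :]    # pairwise differences p_i - p_j
D = np.sum(diff**2, axis=-1)            # squared Euclidean distances

eigvals = np.linalg.eigvalsh(D)         # D is symmetric, so eigvalsh applies
print(np.round(eigvals, 6))             # one positive eigenvalue, the rest <= 0
```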
Determinants of Commuting-Block Matrices by Istvan Kovacs, Daniel S. Silver, and Susan G. Williams
Determinants of Commuting-Block Matrices, by Istvan Kovacs, Daniel S. Silver, and Susan G. Williams.

Let R be a commutative ring, and Mat_n(R) the ring of n×n matrices over R. We can regard a k×k matrix M = (A^{(i,j)}) over Mat_n(R) as a block matrix, a matrix that has been partitioned into k^2 submatrices (blocks) over R, each of size n×n. When M is regarded in this way, we denote its determinant by |M|. We will use the symbol D(M) for the determinant of M viewed as a k×k matrix over Mat_n(R). It is important to realize that D(M) is an n×n matrix.

Theorem 1. Let R be a commutative ring. Assume that M is a k×k block matrix of blocks A^{(i,j)} ∈ Mat_n(R) that commute pairwise. Then

|M| = |D(M)| = | Σ_{π ∈ S_k} (sgn π) A^{(1,π(1))} A^{(2,π(2))} ··· A^{(k,π(k))} |.   (1)

Here S_k is the symmetric group on k symbols; the summation is the usual one that appears in the definition of determinant. Theorem 1 is well known in the case k = 2; the proof is often left as an exercise in linear algebra texts (see [4, page 164], for example). The general result is implicit in [3], but it is not widely known. We present a short, elementary proof using mathematical induction on k. We sketch a second proof when the ring R has no zero divisors, a proof that is based on [3] and avoids induction by using the fact that commuting matrices over an algebraically closed field can be simultaneously triangularized.
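The k = 2 case mentioned at the end can be checked numerically. The following sketch (my own, assuming Theorem 1 as stated) builds pairwise commuting blocks as polynomials in a single matrix and verifies det M = det(AD − BC):

```python
import numpy as np

# Minimal sketch: verify the k = 2 case of Theorem 1, det M = det(A D - B C),
# for pairwise commuting 2x2 blocks.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
B = 3.0 * np.eye(2) + 2.0 * A        # B, C, D are polynomials in A, so all blocks commute
C = np.eye(2) - A
D = 0.5 * A + np.eye(2)

M = np.block([[A, B],
              [C, D]])

lhs = np.linalg.det(M)
rhs = np.linalg.det(A @ D - B @ C)   # sum over S_2: sgn(id) A D + sgn(swap) B C
print(np.isclose(lhs, rhs))          # True
```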
18.06 Linear Algebra, Problem Set 5 Solutions
18.06 Problem Set 5 Solution. Total: ____ points.

Section 4.1. Problem 7. Every system with no solution is like the one in Problem 6. There are numbers y_1, ..., y_m that multiply the m equations so they add up to 0 = 1. This is called Fredholm's Alternative: exactly one of these problems has a solution: Ax = b OR A^T y = 0 with y^T b = 1. If b is not in the column space of A, it is not orthogonal to the nullspace of A^T. Multiply the equations x_1 − x_2 = 1 and x_2 − x_3 = 1 and x_1 − x_3 = 1 by numbers y_1, y_2, y_3 chosen so that the equations add up to 0 = 1.

Solution (4 points). Let y_1 = 1, y_2 = 1 and y_3 = −1. Then the left-hand side of the sum of the equations is (x_1 − x_2) + (x_2 − x_3) − (x_1 − x_3) = x_1 − x_2 + x_2 − x_3 + x_3 − x_1 = 0 and the right-hand side is 1 + 1 − 1 = 1.

Problem 9. If A^T A x = 0 then Ax = 0. Reason: Ax is in the nullspace of A^T and also in the ____ of A and those spaces are ____. Conclusion: A^T A has the same nullspace as A. This key fact is repeated in the next section.

Solution (4 points). Ax is in the nullspace of A^T and also in the column space of A and those spaces are orthogonal.

Problem 31. The command N=null(A) will produce a basis for the nullspace of A. Then the command B=null(N') will produce a basis for the ____ of A.
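The commands in Problem 31 are MATLAB; a Python analogue (my own sketch, using SciPy's null_space in place of MATLAB's null) also illustrates the fact from Problem 9 that A^T A and A have the same nullspace:

```python
import numpy as np
from scipy.linalg import null_space

# Sketch: null_space(A) returns an orthonormal basis for the nullspace of A.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])            # rank 1, so the nullspace is 2-dimensional

N = null_space(A)                           # basis for N(A)
print(N.shape)                              # (3, 2)

# Problem 9: A^T A has the same nullspace as A.
N2 = null_space(A.T @ A)
print(np.allclose(A @ N2, 0))               # True: every nullspace vector of A^T A kills A
```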
Discover Linear Algebra Incomplete Preliminary Draft
Discover Linear Algebra. Incomplete preliminary draft, dated November 28, 2017. László Babai, in collaboration with Noah Halford. All rights reserved. Approved for instructional use only; commercial distribution prohibited. © 2016 László Babai. Last updated: November 10, 2016.

Preface: TO BE WRITTEN.

Contents (excerpt): Notation. Part I: Matrix Theory. Introduction to Part I. Chapter 1, (F, R) Column Vectors: column vector basics; the domain of scalars; subspaces and span; linear independence and the First Miracle of Linear Algebra; dot product; dot product over R; additional exercises. Chapter 2, (F) Matrices: matrix basics; matrix multiplication; arithmetic of diagonal and triangular matrices; permutation matrices; additional exercises. Chapter 3, (F) Matrix Rank: column and row rank; elementary operations and Gaussian elimination; invariance of column and row rank, the Second Miracle of Linear Algebra; matrix rank and invertibility; codimension (optional); additional exercises. Chapter 4, (F) Theory of Systems of Linear Equations I: Qualitative Theory: homogeneous systems of linear equations; general systems of linear equations. Chapter 5, (F, R) Affine and Convex Combinations (optional): affine combinations ...
Triangular Factorization
Chapter 1. Triangular Factorization.

This chapter deals with the factorization of arbitrary matrices into products of triangular matrices. Since the solution of a linear n×n system can be easily obtained once the matrix is factored into the product of triangular matrices, we will concentrate on the factorization of square matrices. Specifically, we will show that an arbitrary n×n matrix A has the factorization P A = LU, where P is an n×n permutation matrix, L is an n×n unit lower triangular matrix, and U is an n×n upper triangular matrix. In connection with this factorization we will discuss pivoting, i.e., row interchange, strategies. We will also explore circumstances for which A may be factored in the forms A = LU or A = LL^T. Our results for a square system will be given for a matrix with real elements but can easily be generalized for complex matrices. The corresponding results for a general m×n matrix will be accumulated in Section 1.4. In the general case an arbitrary m×n matrix A has the factorization P A = LU, where P is an m×m permutation matrix, L is an m×m unit lower triangular matrix, and U is an m×n matrix having row echelon structure.

1.1 Permutation matrices and Gauss transformations. We begin by defining permutation matrices and examining the effect of premultiplying or postmultiplying a given matrix by such matrices. We then define Gauss transformations and show how they can be used to introduce zeros into a vector.

Definition 1.1. An m×m permutation matrix is a matrix whose columns consist of a rearrangement of the m unit vectors e^{(j)}, j = 1, ..., m, in R^m, i.e., a rearrangement of the columns (or rows) of the m×m identity matrix.
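A hedged sketch of the factorization described here, using SciPy (not part of the chapter): scipy.linalg.lu returns matrices with A = P L U, so the chapter's P A = L U corresponds to the transpose of SciPy's permutation matrix.

```python
import numpy as np
from scipy.linalg import lu

# Sketch: factor a square matrix with partial pivoting (row interchanges).
A = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0],
              [2.0, 0.0, 1.0]])

P, L, U = lu(A)          # P permutation, L unit lower triangular, U upper triangular
print(np.allclose(P @ L @ U, A))        # True: SciPy's convention is A = P L U
print(np.allclose(P.T @ A, L @ U))      # True: the chapter's "P A = L U" with P := P^T
```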
Linear Independence, the Wronskian, and Variation of Parameters
LINEAR INDEPENDENCE, THE WRONSKIAN, AND VARIATION OF PARAMETERS. James Keesling.

In this post we determine when a set of solutions of a linear differential equation is linearly independent. We first discuss the linear space of solutions for a homogeneous differential equation.

1. Homogeneous Linear Differential Equations. We start with homogeneous linear nth-order ordinary differential equations with general coefficients. The form for the nth-order type of equation is the following:

a_n(t) d^n x/dt^n + a_{n−1}(t) d^{n−1} x/dt^{n−1} + ··· + a_0(t) x = 0.   (1)

It is straightforward to solve such an equation if the functions a_i(t) are all constants. However, for general functions as above, it may not be so easy. However, we do have a principle that is useful. Because the equation is linear and homogeneous, if we have a set of solutions {x_1(t), ..., x_n(t)}, then any linear combination of the solutions is also a solution. That is,

x(t) = C_1 x_1(t) + C_2 x_2(t) + ··· + C_n x_n(t)   (2)

is also a solution for any choice of constants {C_1, C_2, ..., C_n}. Now if the solutions {x_1(t), ..., x_n(t)} are linearly independent, then (2) is the general solution of the differential equation. We will explain why later. What does it mean for the functions {x_1(t), ..., x_n(t)} to be linearly independent? The simple straightforward answer is that

C_1 x_1(t) + C_2 x_2(t) + ··· + C_n x_n(t) = 0   (3)

implies that C_1 = 0, C_2 = 0, ..., and C_n = 0, where the C_i are arbitrary constants. This is the definition, but it is not so easy to determine from it just when the condition holds to show that a given set of functions {x_1(t), x_2(t), ..., x_n(t)} is linearly independent.
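Although the excerpt stops before introducing the Wronskian, the standard test it leads up to can be sketched with SymPy (my own example, not from the post): the Wronskian of the solutions cos t and sin t of x'' + x = 0 is nonzero, so they are linearly independent.

```python
import sympy as sp

# Sketch: the Wronskian of two solutions is the determinant of the matrix
# whose rows are the functions and their first derivatives.
t = sp.symbols('t')
x1, x2 = sp.cos(t), sp.sin(t)        # solutions of x'' + x = 0

W = sp.Matrix([[x1, x2],
               [sp.diff(x1, t), sp.diff(x2, t)]]).det()
print(sp.simplify(W))                # 1, nonzero, so cos(t) and sin(t) are independent
```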
4.1 RANK of a MATRIX Rank List Given Matrix M, the Following Are Equal
Chapter 4. Matrices Continued. Section 4.1. Rank of a Matrix.

Rank list. Given matrix M, the following are equal:
(1) maximal number of ind cols (i.e., dim of the col space of M);
(2) maximal number of ind rows (i.e., dim of the row space of M);
(3) number of cols with pivots in the echelon form of M;
(4) number of nonzero rows in the echelon form of M.
You know that (1) = (3) and (2) = (4) from Section 3.1. To see that (3) = (4), just stare at some echelon forms. This one number (that all four things equal) is called the rank of M. As a special case, a zero matrix is said to have rank 0.

How row ops affect rank. Row ops don't change the rank because they don't change the max number of ind cols or rows.

Example 1. The matrix with rows (1, 2, −1, 0), (2, 4, −2, 0), (4, 8, −4, 0) has rank 1 (maximally one ind col, by inspection). A zero matrix has rank 0.

Example 2. Let M be the matrix with rows (2, 5, 4, 0), (0, 0, 0, 1), (−2, 1, −1, 0), (2, 11, 7, 0). To find the rank of M, use the row ops R3 = R1 + R3, R4 = −R1 + R4, R2 ↔ R3, R4 = −R2 + R4 to get the unreduced echelon form with rows (2, 5, 4, 0), (0, 6, 3, 0), (0, 0, 0, 1), (0, 0, 0, 0). Cols 1, 2, 4 have pivots. So the rank of M is 3.

How the rank is limited by the size of the matrix. If A is 7×4 then its rank is either 0 (if it's the zero matrix), 1, 2, 3 or 4. The rank can't be 5 or larger because there can't be 5 ind cols when there are only 4 cols to begin with.
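As a quick numerical cross-check of Example 2 (my own sketch, assuming the matrix was read off correctly from the row operations above):

```python
import numpy as np

# Sketch: confirm the rank computed by hand in Example 2.
M = np.array([[ 2,  5,  4, 0],
              [ 0,  0,  0, 1],
              [-2,  1, -1, 0],
              [ 2, 11,  7, 0]])

print(np.linalg.matrix_rank(M))      # 3, matching the three pivot columns 1, 2, 4
```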
Irreducibility in Algebraic Groups and Regular Unipotent Elements
PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY, Volume 141, Number 1, January 2013, Pages 13–28. S 0002-9939(2012)11898-2. Article electronically published on August 16, 2012.

IRREDUCIBILITY IN ALGEBRAIC GROUPS AND REGULAR UNIPOTENT ELEMENTS. Donna Testerman and Alexandre Zalesski. (Communicated by Pham Huu Tiep.)

Abstract. We study (connected) reductive subgroups G of a reductive algebraic group H, where G contains a regular unipotent element of H. The main result states that G cannot lie in a proper parabolic subgroup of H. This result is new even in the classical case H = SL(n, F), the special linear group over an algebraically closed field, where a regular unipotent element is one whose Jordan normal form consists of a single block. In previous work, Saxl and Seitz (1997) determined the maximal closed positive-dimensional (not necessarily connected) subgroups of simple algebraic groups containing regular unipotent elements. Combining their work with our main result, we classify all reductive subgroups of a simple algebraic group H which contain a regular unipotent element.

1. Introduction. Let H be a reductive linear algebraic group defined over an algebraically closed field F. Throughout this text 'reductive' will mean 'connected reductive'. A unipotent element u ∈ H is said to be regular if the dimension of its centralizer C_H(u) coincides with the rank of H (or, equivalently, u is contained in a unique Borel subgroup of H). Regular unipotent elements of a reductive algebraic group exist in all characteristics (see [22]) and form a single conjugacy class. These play an important role in the general theory of algebraic groups.
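Purely as an illustration of the definition quoted for the case H = SL(n, F) (my own sketch, not from the paper): a regular unipotent element has a Jordan normal form consisting of a single block with eigenvalue 1, so its nilpotent part N = u − I satisfies N^n = 0 while N^{n−1} ≠ 0.

```python
import numpy as np

# Sketch: a single n x n Jordan block with eigenvalue 1, the model of a
# regular unipotent element of SL(n, F) up to conjugacy.
n = 4
u = np.eye(n) + np.diag(np.ones(n - 1), k=1)   # Jordan block J_n(1)

N = u - np.eye(n)                              # nilpotent part
print(np.allclose(np.linalg.matrix_power(N, n), 0))       # True: N^n = 0
print(np.allclose(np.linalg.matrix_power(N, n - 1), 0))   # False: N^(n-1) != 0
```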
Span, Linear Independence and Basis Rank and Nullity
Remarks for Exam 2 in Linear Algebra.

Span, linear independence and basis. The span of a set of vectors is the set of all linear combinations of the vectors. A set of vectors is linearly independent if the only solution to c_1 v_1 + ··· + c_k v_k = 0 is c_i = 0 for all i. Given a set of vectors, you can determine if they are linearly independent by writing the vectors as the columns of the matrix A and solving Ax = 0. If there are any non-zero solutions, then the vectors are linearly dependent. If the only solution is x = 0, then they are linearly independent. A basis for a subspace S of R^n is a set of vectors that spans S and is linearly independent. There are many bases, but every basis must have exactly k = dim(S) vectors. A spanning set in S must contain at least k vectors, and a linearly independent set in S can contain at most k vectors. A spanning set in S with exactly k vectors is a basis. A linearly independent set in S with exactly k vectors is a basis.

Rank and nullity. The span of the rows of matrix A is the row space of A. The span of the columns of A is the column space C(A). The row and column spaces always have the same dimension, called the rank of A. Let r = rank(A). Then r is the maximal number of linearly independent row vectors, and the maximal number of linearly independent column vectors. So if r < n then the columns are linearly dependent; if r < m then the rows are linearly dependent.
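A small numerical sketch (mine, not from the exam remarks) tying the two paragraphs together: the rank of A plus the dimension of its nullspace equals the number of columns n.

```python
import numpy as np
from scipy.linalg import null_space

# Sketch: check rank(A) + dim N(A) = n for a matrix with a dependent row.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])      # third row = first + second, so rank 2

r = np.linalg.matrix_rank(A)
nullity = null_space(A).shape[1]
print(r, nullity, r + nullity == A.shape[1])   # 2 2 True
```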