Matrices and Linear Algebra


Chapter 2: Matrices and Linear Algebra

2.1 Basics

Definition 2.1.1. A matrix is an $m \times n$ array of scalars from a given field $F$. The individual values in the matrix are called entries.

Examples.
$$A = \begin{bmatrix} 2 & 1 & 3 \\ -1 & 2 & 4 \end{bmatrix} \qquad B = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$

The size of the array is written as $m \times n$, where $m$ is the number of rows and $n$ is the number of columns.

Notation:
$$A = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{bmatrix}$$

An uppercase letter such as $A$ denotes a matrix; a lowercase letter such as $a$ denotes an entry of a matrix, with $a \in F$.

Special matrices

(1) If $m = n$, the matrix is called square. In this case we have:

(1a) A matrix $A$ is said to be diagonal if $a_{ij} = 0$ for $i \neq j$.

(1b) A diagonal matrix $A$ may be denoted by $\mathrm{diag}(d_1, d_2, \dots, d_n)$, where $a_{ii} = d_i$ and $a_{ij} = 0$ for $j \neq i$. The diagonal matrix $\mathrm{diag}(1, 1, \dots, 1)$ is called the identity matrix and is usually denoted by
$$I_n = \begin{bmatrix} 1 & 0 & \dots & 0 \\ 0 & 1 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \dots & 0 & 1 \end{bmatrix}$$
or simply $I$ when $n$ is assumed to be known. $0 = \mathrm{diag}(0, \dots, 0)$ is called the zero matrix.

(1c) A square matrix $L$ is said to be lower triangular if $\ell_{ij} = 0$ for $i < j$.

(1d) A square matrix $U$ is said to be upper triangular if $u_{ij} = 0$ for $i > j$.

(1e) A square matrix $A$ is called symmetric if $a_{ij} = a_{ji}$.

(1f) A square matrix $A$ is called Hermitian if $a_{ij} = \bar{a}_{ji}$ ($\bar{z}$ := complex conjugate of $z$).

(1g) $E_{ij}$ has a 1 in the $(i, j)$ position and zeros in all other positions.

(2) A rectangular matrix $A$ is called nonnegative if $a_{ij} \geq 0$ for all $i, j$. It is called positive if $a_{ij} > 0$ for all $i, j$.

Each of these matrices has some special properties, which we will study during this course.

Definition 2.1.2. The set of all $m \times n$ matrices is denoted by $M_{m,n}(F)$, where $F$ is the underlying field (usually $\mathbb{R}$ or $\mathbb{C}$). In the case where $m = n$ we write $M_n(F)$ to denote the matrices of size $n \times n$.

Theorem 2.1.1. $M_{m,n}$ is a vector space with basis given by $E_{ij}$, $1 \leq i \leq m$, $1 \leq j \leq n$.

Equality, Addition, Multiplication

Definition 2.1.3. Two matrices $A$ and $B$ are equal if and only if they have the same size and $a_{ij} = b_{ij}$ for all $i, j$.

Definition 2.1.4. If $A$ is any matrix and $\alpha \in F$, then the scalar multiplication $B = \alpha A$ is defined by $b_{ij} = \alpha a_{ij}$ for all $i, j$.

Definition 2.1.5. If $A$ and $B$ are matrices of the same size, then the sum of $A$ and $B$ is defined by $C = A + B$, where $c_{ij} = a_{ij} + b_{ij}$ for all $i, j$. We can also compute the difference $D = A - B$ by summing $A$ and $(-1)B$:
$$D = A - B = A + (-1)B.$$
This is matrix subtraction.

Matrix addition "inherits" many properties from the field $F$.

Theorem 2.1.2. If $A, B, C \in M_{m,n}(F)$ and $\alpha, \beta \in F$, then
(1) $A + B = B + A$ (commutativity)
(2) $A + (B + C) = (A + B) + C$ (associativity)
(3) $\alpha(A + B) = \alpha A + \alpha B$ (distributivity of a scalar)
(4) If $B = 0$ (a matrix of all zeros), then $A + B = A + 0 = A$
(5) $(\alpha + \beta)A = \alpha A + \beta A$
(6) $\alpha(\beta A) = (\alpha\beta)A$
(7) $0A = 0$
(8) $\alpha 0 = 0$.

Definition 2.1.6. If $x, y \in \mathbb{R}^n$ with
$$x = (x_1, \dots, x_n), \qquad y = (y_1, \dots, y_n),$$
then the scalar or dot product of $x$ and $y$ is given by
$$\langle x, y \rangle = \sum_{i=1}^{n} x_i y_i.$$

Remark 2.1.1. (i) Alternate notation for the scalar product: $\langle x, y \rangle = x \cdot y$. (ii) The dot product is defined only for vectors of the same length.

Example 2.1.1. Let $x = (1, 0, 3, -1)$ and $y = (0, 2, -1, 2)$. Then
$$\langle x, y \rangle = 1(0) + 0(2) + 3(-1) - 1(2) = -5.$$

Definition 2.1.7. Suppose $A$ is $m \times n$ and $B$ is $n \times p$. Let $r_i(A)$ denote the vector with entries given by the $i$th row of $A$, and let $c_j(B)$ denote the vector with entries given by the $j$th column of $B$. The product $C = AB$ is the $m \times p$ matrix defined by
$$c_{ij} = \langle r_i(A), c_j(B) \rangle,$$
where $r_i(A)$ is the vector in $\mathbb{R}^n$ consisting of the $i$th row of $A$ and similarly $c_j(B)$ is the vector formed from the $j$th column of $B$.
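Definition 2.1.7 is easy to check numerically. The following is a minimal sketch using NumPy (NumPy itself is an assumption of this note, not part of the text), building $C = AB$ entry by entry from row-column dot products; the matrices are those of Example 2.1.2 below.

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [3.0, 2.0, 1.0]])   # m x n = 2 x 3
B = np.array([[2.0, 1.0],
              [3.0, 0.0],
              [-1.0, 1.0]])       # n x p = 3 x 2

m, n = A.shape
n2, p = B.shape
assert n == n2, "inner dimensions must agree"

# c_ij = <r_i(A), c_j(B)>: dot product of row i of A with column j of B
C = np.empty((m, p))
for i in range(m):
    for j in range(p):
        C[i, j] = np.dot(A[i, :], B[:, j])

assert np.allclose(C, A @ B)  # agrees with NumPy's built-in product
print(C)                      # [[ 1.  2.]
                              #  [11.  4.]]
```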
Other notation for $C = AB$:
$$c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}, \qquad 1 \leq i \leq m, \quad 1 \leq j \leq p.$$

Example 2.1.2. Let
$$A = \begin{bmatrix} 1 & 0 & 1 \\ 3 & 2 & 1 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 2 & 1 \\ 3 & 0 \\ -1 & 1 \end{bmatrix}.$$
Then
$$AB = \begin{bmatrix} 1 & 2 \\ 11 & 4 \end{bmatrix}.$$

Properties of matrix multiplication

(1) If $AB$ exists, does it happen that $BA$ exists and $AB = BA$? The answer is usually no. First, $AB$ and $BA$ both exist if and only if $A \in M_{m,n}(F)$ and $B \in M_{n,m}(F)$. Even if this is so, the sizes of $AB$ and $BA$ are different ($AB$ is $m \times m$ and $BA$ is $n \times n$) unless $m = n$. However, even if $m = n$ we may have $AB \neq BA$; see the examples below. In short, $AB$ and $BA$ may be different sizes, and even if they are the same size (i.e., $A$ and $B$ are square) the entries may be different.
$$A = \begin{bmatrix} 1 & 2 \end{bmatrix} \quad B = \begin{bmatrix} -1 \\ 1 \end{bmatrix} \quad AB = \begin{bmatrix} 1 \end{bmatrix} \quad BA = \begin{bmatrix} -1 & -2 \\ 1 & 2 \end{bmatrix}$$
$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \quad B = \begin{bmatrix} -1 & 1 \\ 0 & 1 \end{bmatrix} \quad AB = \begin{bmatrix} -1 & 3 \\ -3 & 7 \end{bmatrix} \quad BA = \begin{bmatrix} 2 & 2 \\ 3 & 4 \end{bmatrix}$$

(2) If $A$ is square we define
$$A^1 = A, \quad A^2 = AA, \quad A^3 = A^2 A = AAA, \quad \dots, \quad A^n = A^{n-1}A = A \cdots A \ (n \text{ factors}).$$

(3) $I = \mathrm{diag}(1, \dots, 1)$. If $A \in M_{m,n}(F)$, then $AI_n = A$ and $I_m A = A$.

Theorem 2.1.3 (Matrix Multiplication Rules). Assume $A$, $B$, and $C$ are matrices for which all products below make sense. Then
(1) $A(BC) = (AB)C$
(2) $A(B \pm C) = AB \pm AC$ and $(A \pm B)C = AC \pm BC$
(3) $AI = A$ and $IA = A$
(4) $c(AB) = (cA)B$
(5) $A0 = 0$ and $0B = 0$
(6) For $A$ square, $A^r A^s = A^s A^r$ for all integers $r, s \geq 1$.

Fact: If $AC$ and $BC$ are equal, it does not follow that $A = B$. See Exercise 60.

Remark 2.1.2. We use an alternate notation for matrix entries. For any matrix $B$, denote the $(i, j)$-entry by $(B)_{ij}$.

Definition 2.1.8. Let $A \in M_{m,n}(F)$.
(i) Define the transpose of $A$, denoted by $A^T$, to be the $n \times m$ matrix with entries $(A^T)_{ij} = a_{ji}$.
(ii) Define the adjoint of $A$, denoted by $A^*$, to be the $n \times m$ matrix with entries $(A^*)_{ij} = \bar{a}_{ji}$ (complex conjugate).

Example 2.1.3.
$$A = \begin{bmatrix} 1 & 2 & 3 \\ 5 & 4 & 1 \end{bmatrix} \qquad A^T = \begin{bmatrix} 1 & 5 \\ 2 & 4 \\ 3 & 1 \end{bmatrix}$$

In words: "The rows of $A$ become the columns of $A^T$, taken in the same order."

The following results are easy to prove.

Theorem 2.1.4 (Laws of transposes).
(1) $(A^T)^T = A$ and $(A^*)^* = A$
(2) $(A \pm B)^T = A^T \pm B^T$ (and similarly for $*$)
(3) $(cA)^T = cA^T$ and $(cA)^* = \bar{c}A^*$
(4) $(AB)^T = B^T A^T$
(5) If $A$ is symmetric, $A = A^T$
(6) If $A$ is Hermitian, $A = A^*$.

Proof. (1) We know $(A^T)_{ij} = a_{ji}$. So $((A^T)^T)_{ij} = a_{ij}$. Thus $(A^T)^T = A$.
(2) $((A \pm B)^T)_{ij} = a_{ji} \pm b_{ji}$. So $(A \pm B)^T = A^T \pm B^T$.

More facts about symmetry.

Proposition 2.1.1. (1) $A$ is symmetric if and only if $A^T$ is symmetric.
(1*) $A$ is Hermitian if and only if $A^*$ is Hermitian.
(2) If $A$ is symmetric, then $A^2$ is also symmetric.
(3) If $A$ is symmetric, then $A^n$ is also symmetric for all $n$.

Definition 2.1.9. A matrix is called skew-symmetric if $A^T = -A$.

Example 2.1.4. The matrix
$$A = \begin{bmatrix} 0 & 1 & 2 \\ -1 & 0 & 3 \\ -2 & -3 & 0 \end{bmatrix}$$
is skew-symmetric.

Theorem 2.1.5. (1) If $A$ is skew-symmetric, then $A$ is a square matrix and $a_{ii} = 0$ for $i = 1, \dots, n$.
(2) For any matrix $A \in M_n(F)$, the matrix $A - A^T$ is skew-symmetric while $A + A^T$ is symmetric.
(3) Every matrix $A \in M_n(F)$ can be written uniquely as the sum of a skew-symmetric matrix and a symmetric matrix.

Proof. (1) If $A \in M_{m,n}(F)$, then $A^T \in M_{n,m}(F)$. So, if $A^T = -A$ we must have $m = n$. Also $a_{ii} = -a_{ii}$ for $i = 1, \dots, n$, so $a_{ii} = 0$ for all $i$.
(2) Since $(A - A^T)^T = A^T - A = -(A - A^T)$, it follows that $A - A^T$ is skew-symmetric. Similarly, $(A + A^T)^T = A^T + A = A + A^T$, so $A + A^T$ is symmetric.
(3) The decomposition is
$$A = \frac{1}{2}(A + A^T) + \frac{1}{2}(A - A^T).$$
For uniqueness, let $A = B + C$ be a second such decomposition, with $B$ symmetric and $C$ skew-symmetric. Subtraction gives
$$\frac{1}{2}(A + A^T) - B = C - \frac{1}{2}(A - A^T).$$
The left matrix is symmetric while the right matrix is skew-symmetric. Hence both are the zero matrix.

Examples. $A = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$ is skew-symmetric. Let
$$B = \begin{bmatrix} 1 & 2 \\ -1 & 4 \end{bmatrix}, \qquad B^T = \begin{bmatrix} 1 & -1 \\ 2 & 4 \end{bmatrix}.$$
Then
$$B - B^T = \begin{bmatrix} 0 & 3 \\ -3 & 0 \end{bmatrix}, \qquad B + B^T = \begin{bmatrix} 2 & 1 \\ 1 & 8 \end{bmatrix},$$
and
$$B = \frac{1}{2}(B - B^T) + \frac{1}{2}(B + B^T).$$

An important observation about matrix multiplication is related to ideas from vector spaces.
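The decomposition in Theorem 2.1.5(3) is straightforward to verify numerically. Here is a minimal NumPy sketch (NumPy and the randomly generated sample matrix are assumptions of this note, not part of the text) that splits a square matrix into its symmetric and skew-symmetric parts and checks the defining properties:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(4, 4)).astype(float)  # arbitrary square matrix

S = (A + A.T) / 2   # symmetric part
K = (A - A.T) / 2   # skew-symmetric part

assert np.allclose(S, S.T)          # S is symmetric
assert np.allclose(K, -K.T)         # K is skew-symmetric
assert np.allclose(np.diag(K), 0)   # Theorem 2.1.5(1): zero diagonal
assert np.allclose(A, S + K)        # A = S + K, as in Theorem 2.1.5(3)
```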
Indeed, two very important vector spaces are associated with matrices.

Definition 2.1.10. Let $A \in M_{m,n}(\mathbb{C})$.
(i) Denote by $c_j(A)$ the $j$th column of $A$, so $c_j(A) \in \mathbb{C}^m$. We call the subspace of $\mathbb{C}^m$ spanned by the columns of $A$ the column space of $A$. With $c_1(A), \dots, c_n(A)$ denoting the columns of $A$, the column space is
$$S(c_1(A), \dots, c_n(A)).$$
(ii) Similarly, we call the subspace of $\mathbb{C}^n$ spanned by the rows of $A$ the row space of $A$. With $r_1(A), \dots, r_m(A)$ denoting the rows of $A$, the row space is therefore
$$S(r_1(A), \dots, r_m(A)).$$
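As an illustration of Definition 2.1.10, here is a short sketch using SymPy (an assumption of this note rather than part of the text; the matrix is illustrative), whose `Matrix.columnspace()` and `Matrix.rowspace()` methods return bases for these two subspaces:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])   # row 2 = 2 * row 1, so the rank is 2

col_basis = A.columnspace()   # basis vectors in C^m for the column space
row_basis = A.rowspace()      # basis vectors in C^n for the row space

print(len(col_basis), len(row_basis))  # both print 2: column rank = row rank
```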