Efficiently Calculating the Determinant of a Matrix
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
- Lecture # 4 - System of Linear Equations (cont.)
In our last lecture, we were starting to apply the Gaussian elimination method to our macro model

$$Y = C + 1500, \qquad C = 200 + \tfrac{4}{5}(Y - T), \qquad T = 100 + \tfrac{1}{5}Y.$$

We defined the extended coefficient matrix

$$[A \mid d] = \begin{bmatrix} 1 & -1 & 0 & 1500 \\ -\tfrac{4}{5} & 1 & \tfrac{4}{5} & 200 \\ -\tfrac{1}{5} & 0 & 1 & 100 \end{bmatrix}.$$

The objective is to use elementary row operations (EROs) to transform our system of linear equations into another, simpler system:
- Eliminate the coefficient of the first variable (column) from all equations (rows) except the first equation (row).
- Eliminate the coefficient of the second variable (column) from all equations (rows) except the second equation (row).
- Eliminate the coefficient of the third variable (column) from all equations (rows) except the third equation (row).

The objective is to get a system that looks like this:

$$\begin{bmatrix} 1 & 0 & 0 & s_1 \\ 0 & 1 & 0 & s_2 \\ 0 & 0 & 1 & s_3 \end{bmatrix}$$

Let's use our example. Multiply the first row (equation) by $\tfrac{1}{5}$ and add it to the third row:

$$\begin{bmatrix} 1 & -1 & 0 & 1500 \\ -\tfrac{4}{5} & 1 & \tfrac{4}{5} & 200 \\ 0 & -\tfrac{1}{5} & 1 & 400 \end{bmatrix}$$

Multiply the first row by $\tfrac{4}{5}$ and add it to row 2:

$$\begin{bmatrix} 1 & -1 & 0 & 1500 \\ 0 & \tfrac{1}{5} & \tfrac{4}{5} & 1400 \\ 0 & -\tfrac{1}{5} & 1 & 400 \end{bmatrix}$$

Add row 2 to row 3:

$$\begin{bmatrix} 1 & -1 & 0 & 1500 \\ 0 & \tfrac{1}{5} & \tfrac{4}{5} & 1400 \\ 0 & 0 & \tfrac{9}{5} & 1800 \end{bmatrix}$$

Multiply the second row by 5:

$$\begin{bmatrix} 1 & -1 & 0 & 1500 \\ 0 & 1 & 4 & 7000 \\ 0 & 0 & \tfrac{9}{5} & 1800 \end{bmatrix}$$

Add row 2 to row 1:

$$\begin{bmatrix} 1 & 0 & 4 & 8500 \\ 0 & 1 & 4 & 7000 \\ 0 & 0 & \tfrac{9}{5} & 1800 \end{bmatrix}$$

Multiply row 3 by $\tfrac{5}{9}$:

$$\begin{bmatrix} 1 & 0 & 4 & 8500 \\ 0 & 1 & 4 & 7000 \\ 0 & 0 & 1 & 1000 \end{bmatrix}$$
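Carrying the reduction one step further (clearing the third column) gives $Y = 4500$, $C = 3000$, $T = 1000$. A minimal Python sketch of the same Gauss-Jordan procedure — an editorial addition, not code from the lecture, with partial pivoting added for robustness:

```python
import numpy as np

def gauss_jordan(aug):
    """Reduce an augmented matrix [A | d] to reduced row echelon form.
    A sketch with partial pivoting; assumes the system has a unique solution."""
    aug = aug.astype(float).copy()
    n = aug.shape[0]
    for col in range(n):
        # Swap in the row with the largest pivot candidate in this column.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]           # scale the pivot row so the pivot is 1
        for row in range(n):                # eliminate the column everywhere else
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug

# The macro model: Y - C = 1500, -(4/5)Y + C + (4/5)T = 200, -(1/5)Y + T = 100
aug = np.array([[ 1.0, -1.0, 0.0, 1500.0],
                [-0.8,  1.0, 0.8,  200.0],
                [-0.2,  0.0, 1.0,  100.0]])
print(gauss_jordan(aug)[:, -1])  # -> [4500. 3000. 1000.], i.e. Y, C, T
```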
- Chapter Four: Determinants
In the first chapter of this book we considered linear systems and we picked out the special case of systems with the same number of equations as unknowns, those of the form $T\vec{x} = \vec{b}$ where $T$ is a square matrix. We noted a distinction between two classes of $T$'s. While such systems may have a unique solution or no solutions or infinitely many solutions, if a particular $T$ is associated with a unique solution in any system, such as the homogeneous system $\vec{b} = \vec{0}$, then $T$ is associated with a unique solution for every $\vec{b}$. We call such a matrix of coefficients 'nonsingular'. The other kind of $T$, where every linear system for which it is the matrix of coefficients has either no solution or infinitely many solutions, we call 'singular'. Through the second and third chapters the value of this distinction has been a theme. For instance, we now know that nonsingularity of an $n \times n$ matrix $T$ is equivalent to each of these:
• a system $T\vec{x} = \vec{b}$ has a solution, and that solution is unique;
• Gauss-Jordan reduction of $T$ yields an identity matrix;
• the rows of $T$ form a linearly independent set;
• the columns of $T$ form a basis for $\mathbb{R}^n$;
• any map that $T$ represents is an isomorphism;
• an inverse matrix $T^{-1}$ exists.
So when we look at a particular square matrix, the question of whether it is nonsingular is one of the first things that we ask. This chapter develops a formula to determine this. (Since we will restrict the discussion to square matrices, in this chapter we will usually simply say 'matrix' in place of 'square matrix'.) More precisely, we will develop infinitely many formulas, one for 1×1 matrices, one for 2×2 matrices, etc.
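As a quick numerical companion to this list (an editorial sketch, not part of the chapter), several of these equivalent conditions can be checked directly with numpy on a small example:

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [1.0, 1.0]])
b = np.array([3.0, 2.0])

# A few of the equivalent nonsingularity tests from the list above:
print(np.linalg.matrix_rank(T) == T.shape[0])        # rows linearly independent
print(abs(np.linalg.det(T)) > 1e-12)                 # the chapter's determinant test
x = np.linalg.solve(T, b)                            # T x = b has a unique solution
print(x)                                             # -> [1. 1.]
print(np.allclose(np.linalg.inv(T) @ T, np.eye(2)))  # the inverse T^{-1} exists
```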
- Determinants
§1. Prerequisites
1) Every row operation can be achieved by pre-multiplying (left-multiplying) by an invertible matrix $E$ called the elementary matrix for that operation. The matrix $E$ is obtained by applying the row operation to the appropriate identity matrix. The matrix $E$ is also denoted by $A_{ij}(c)$, $M_i(c)$, or $P_{ij}$, respectively, for the operations add $c$ times row $j$ to row $i$, multiply row $i$ by $c$, and permute rows $i$ and $j$.
2) The Row Reduction Theorem asserts that every matrix $A$ can be row reduced to a unique row echelon reduced matrix $R$. In matrix form: there is a unique row reduced matrix $R$ and some elementary $E_i$ with $E_p \cdots E_1 A = R$, or equivalently, $A = F_1 \cdots F_p R$ where the $F_i = E_i^{-1}$ are also elementary.
3) A matrix $A$ determines a linear transformation: it takes vectors $x$ and gives vectors $Ax$.
§2. Restrictions
All matrices must be square. Determinants are not defined for non-square matrices.
§3. Motivation
Determinants determine whether a matrix has an inverse. They give areas and play a crucial role in the change of variables formula in multivariable calculus. Let's compute the area of the parallelogram determined by the vectors $(a, b)$ and $(c, d)$. See Figure 1. [Figure 1: the parallelogram with vertices $(0, 0)$, $(a, b)$, $(c, d)$, and $(a+c, b+d)$, drawn inside its bounding rectangle.] The area is
$$(a + c)(b + d) - 2 \cdot \tfrac{1}{2}ab - 2 \cdot \tfrac{1}{2}cd - 2bc = ab + ad + cb + cd - ab - cd - 2bc = ad - bc.$$
Tentative definition: the determinant of a 2 by 2 matrix is
$$\det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc,$$
which is the "signed" area of the parallelogram with sides determined by the rows of the matrix.
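The tentative definition is one line of code; a tiny editorial sketch of the signed-area determinant:

```python
def det2(a, b, c, d):
    """Determinant of [[a, b], [c, d]]: the signed area of the
    parallelogram spanned by the rows (a, b) and (c, d)."""
    return a * d - b * c

print(det2(1, 0, 0, 1))  # -> 1: the unit square has area 1
print(det2(0, 1, 1, 0))  # -> -1: swapping the rows flips the sign
print(det2(2, 1, 4, 2))  # -> 0: the rows are linearly dependent
```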
- A Weakly Stable Algorithm for General Toeplitz Systems
Adam W. Bojanczyk (School of Electrical Engineering, Cornell University, Ithaca, NY 14853-5401), Richard P. Brent (Computer Sciences Laboratory, Australian National University, Canberra, ACT 0200), Frank R. de Hoog (Division of Mathematics and Statistics, CSIRO, GPO Box 1965, Canberra, ACT 2601). Report TR-CS-93-15, 6 August 1993 (revised 24 June 1994). arXiv:1005.0503v1 [math.NA] 4 May 2010.

Abstract. We show that a fast algorithm for the QR factorization of a Toeplitz or Hankel matrix $A$ is weakly stable in the sense that $R^T R$ is close to $A^T A$. Thus, when the algorithm is used to solve the semi-normal equations $R^T R x = A^T b$, we obtain a weakly stable method for the solution of a nonsingular Toeplitz or Hankel linear system $Ax = b$. The algorithm also applies to the solution of the full-rank Toeplitz or Hankel least squares problem $\min \|Ax - b\|_2$.

1991 Mathematics Subject Classification. Primary 65F25; Secondary 47B35, 65F05, 65F30, 65Y05, 65Y10.
Key words and phrases. Cholesky factorization, error analysis, Hankel matrix, least squares, normal equations, orthogonal factorization, QR factorization, semi-normal equations, stability, Toeplitz matrix, weak stability.

1. Introduction. Toeplitz linear systems arise in many applications, and there are many algorithms which solve nonsingular $n \times n$ Toeplitz systems $Ax = b$ in $O(n^2)$ arithmetic operations [2, 13, 44, 45, 49, 54, 55, 65, 67, 77, 78, 80, 81, 85, 86]. Some algorithms are restricted to symmetric systems ($A = A^T$) and others apply to general Toeplitz systems. Because of their recursive nature, most $O(n^2)$ algorithms assume that all leading principal submatrices of $A$ are nonsingular, and break down if this is not the case.
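To illustrate just the semi-normal-equations step (an editorial sketch using a dense QR factorization from scipy, not the paper's fast Toeplitz algorithm), note that once $R$ is available, $x$ is recovered by two triangular solves:

```python
import numpy as np
from scipy.linalg import qr, solve_triangular, toeplitz

rng = np.random.default_rng(0)
A = toeplitz(rng.standard_normal(8), rng.standard_normal(5))  # an 8x5 Toeplitz matrix
b = rng.standard_normal(8)

# With A = QR, the semi-normal equations R^T R x = A^T b need only R:
R = qr(A, mode='economic')[1]
y = solve_triangular(R, A.T @ b, trans='T')  # forward solve: R^T y = A^T b
x = solve_triangular(R, y)                   # back solve:    R x = y
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # -> True
```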
- Linear Algebra: Beware!
MATH 196, SECTION 57 (VIPUL NAIK). You might be expecting linear algebra to be a lot like your calculus classes at the University. This is probably true in terms of the course structure and format. But it's not true at the level of subject matter. Some important differences are below.
• Superficially, linear algebra is a lot easier, since it relies mostly on arithmetic rather than algebra. The computational procedures involve the systematic and correct application of processes for adding, subtracting, multiplying, and dividing numbers. But the key word here is superficially.
• Even for the apparently straightforward computational exercises, it turns out that people are able to do them a lot better if they understand what's going on. In fact, in past test questions, people have often made fewer errors when doing the problem using full-scale algebraic symbol manipulation rather than the synthetic arithmetic method.
• One important difference between linear algebra and calculus is that with calculus, it's relatively easy to understand ideas partially. One can obtain much of the basic intuition of calculus by understanding graphs of functions. In fact, limit, continuity, differentiation, and integration all have basic descriptions in terms of the graph. Note: these aren't fully rigorous, which is why you had to take a year's worth of calculus class to cement your understanding of the ideas. But it's a start. With linear algebra, there is no single compelling visual tool that connects all the ideas, and conscious effort is needed even for a partial understanding.
• While linear algebra lacks any single compelling visual tool, it requires either considerable visuospatial skill or considerable abstract symbolic and verbal skill (or a suitable linear combination thereof).
- Parallel Systems in Symbolic and Algebraic Computation
UCAM-CL-TR-537, ISSN 1476-2986. Technical Report Number 537, University of Cambridge Computer Laboratory. Mantsika Matooane, June 2002. 15 JJ Thomson Avenue, Cambridge CB3 0FD, United Kingdom. Phone +44 1223 763500, http://www.cl.cam.ac.uk/ © 2002 Mantsika Matooane. This technical report is based on a dissertation submitted August 2001 by the author for the degree of Doctor of Philosophy to the University of Cambridge, Trinity College. Technical reports published by the University of Cambridge Computer Laboratory are freely available via the Internet: http://www.cl.cam.ac.uk/TechReports/ Series editor: Markus Kuhn.

Abstract. This thesis describes techniques that exploit the distributed memory in massively parallel processors to satisfy the peak memory requirements of some very large computer algebra problems. Our aim is to achieve balanced memory use, which differentiates this work from other parallel systems whose focus is on gaining speedup. It is widely observed that failures in computer algebra systems are mostly due to memory overload: for several problems in computer algebra, some of the best available algorithms suffer from intermediate expression swell, where the result is of reasonable size but the intermediate calculation encounters severe memory limitations. This observation motivates our memory-centric approach to parallelizing computer algebra algorithms. The memory balancing is based on a randomized hashing algorithm for dynamic distribution of data. Dynamic distribution means that the intermediate data is allocated storage space at the time that it is created, and therefore the system can avoid overloading some processing elements.
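As a loose editorial illustration of the randomized-hashing idea (the processor count and term encoding below are hypothetical, not taken from the thesis), a hash of each term can decide which processing element stores it:

```python
import hashlib

NUM_PES = 8  # number of processing elements (hypothetical)

def home_pe(term: str) -> int:
    """Map a term of a symbolic expression to a processing element.
    Hashing spreads storage roughly evenly, regardless of the order
    in which intermediate terms are created."""
    digest = hashlib.sha256(term.encode()).digest()
    return int.from_bytes(digest[:4], 'big') % NUM_PES

# Intermediate terms get scattered across PEs as they are created:
for term in ["x^3*y", "x^2*y^2", "x*y^3", "y^4"]:
    print(term, "->", home_pe(term))
```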
- Algebraic Number Fields and the LLL Algorithm (arXiv:1810.01634v2 [cs.SC], 6 Nov 2020)
M. János Uray ([email protected]), ELTE – Eötvös Loránd University (Budapest), Faculty of Informatics, Department of Computer Algebra.

Abstract. In this paper we analyze the computational costs of various operations and algorithms in algebraic number fields using exact arithmetic. Let $K$ be an algebraic number field. In the first half of the paper, we calculate the running time and the size of the output of many operations in $K$ in terms of the size of the input and the parameters of $K$. We include some earlier results about these, but we go further than them, e.g. we also analyze some $\mathbb{R}$-specific operations in $K$ like less-than comparison. In the second half of the paper, we analyze two algorithms: the Bareiss algorithm, which is an integer-preserving version of Gaussian elimination, and the LLL algorithm, which is for lattice basis reduction. In both cases, we extend the algorithm from $\mathbb{Z}^n$ to $K^n$, and give a polynomial upper bound on the running time when the computations in $K$ are performed exactly (as opposed to floating-point approximations).

1. Introduction. Exact computation with algebraic numbers is an important feature that most computer algebra systems provide. They use efficient algorithms for the calculations, described in several papers and books, e.g. in [1, 2, 3]. However, the computational costs of these algorithms are often not obvious to calculate, because the bit complexity depends on how much the representing multi-precision integers grow during the computation.
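The Bareiss algorithm named in the abstract has a compact integer version; the following editorial sketch works over the integers (not the paper's extension to $K^n$) and relies on every division being exact:

```python
def bareiss_det(m):
    """Determinant of an integer matrix via Bareiss's fraction-free
    elimination: each division below is exact, so intermediate values
    stay integers. A sketch; assumes nonzero leading principal minors."""
    a = [row[:] for row in m]
    n = len(a)
    prev = 1
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
        prev = a[k][k]
    return a[n - 1][n - 1]

print(bareiss_det([[2, 3, 1],
                   [4, 7, 5],
                   [6, 8, 9]]))  # -> 18
```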
- Laplace Expansion of the Determinant
Geometria Lingotto. LeLing12: More on determinants.
Contents:
• Laplace expansion of the determinant.
• Cross product and generalisations.
• Rank and determinant: minors.
• The characteristic polynomial.
Recommended exercises: Geoling 14.

Laplace expansion of the determinant. The expansion of Laplace allows one to reduce the computation of an $n \times n$ determinant to that of $n$ determinants of size $(n-1) \times (n-1)$. The formula, expanded with respect to the $i$-th row (where $A = (a_{ij})$), is:
$$\det(A) = (-1)^{i+1} a_{i1} \det(A_{i1}) + \cdots + (-1)^{i+n} a_{in} \det(A_{in}),$$
where $A_{ij}$ is the $(n-1) \times (n-1)$ matrix obtained by erasing row $i$ and column $j$ from $A$. With respect to the $j$-th column it is:
$$\det(A) = (-1)^{j+1} a_{1j} \det(A_{1j}) + \cdots + (-1)^{j+n} a_{nj} \det(A_{nj}).$$

Example 0.1. We do it with respect to the first row below.
$$\begin{vmatrix} 1 & 2 & 1 \\ 3 & 4 & 1 \\ 5 & 6 & 1 \end{vmatrix} = 1\begin{vmatrix} 4 & 1 \\ 6 & 1 \end{vmatrix} - 2\begin{vmatrix} 3 & 1 \\ 5 & 1 \end{vmatrix} + 1\begin{vmatrix} 3 & 4 \\ 5 & 6 \end{vmatrix} = (4 - 6) - 2(3 - 5) + (3 \cdot 6 - 5 \cdot 4) = 0.$$

The proof of the expansion along the first row is as follows. The determinant's linearity, proved in the previous set of notes, implies
$$\det(A) = \sum_{j=1}^{n} a_{1j} \det\begin{pmatrix} E_j \\ A_2 \\ \vdots \\ A_n \end{pmatrix},$$
where the $E_j$ are the canonical basis of the rows, i.e. $E_j$ is zero except at position $j$, where there is 1. Thus we have to calculate the determinants
$$\begin{vmatrix} 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\ a_{21} & \cdots & a_{2(j-1)} & a_{2j} & a_{2(j+1)} & \cdots & a_{2n} \\ \vdots & & & & & & \vdots \end{vmatrix}$$
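The row-expansion formula transcribes directly into a recursive function; an editorial sketch (exponential-time, so only for small matrices):

```python
def det_laplace(a):
    """Determinant by Laplace expansion along the first row."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in a[1:]]  # erase row 1, column j
        total += (-1) ** j * a[0][j] * det_laplace(minor)
    return total

# The worked example above, expanded along the first row:
print(det_laplace([[1, 2, 1],
                   [3, 4, 1],
                   [5, 6, 1]]))  # -> 0
```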
- Mathematical Methods – WS 2021/22, Chapter 5: Determinant (Josef Leydold)
What is a Determinant? We want to "compute" whether $n$ vectors in $\mathbb{R}^n$ are linearly dependent, and measure "how far" they are from being linearly dependent, resp. Idea: two vectors in $\mathbb{R}^2$ span a parallelogram: vectors are linearly dependent ⇔ area is zero. We use the $n$-dimensional volume of the created parallelepiped for our function that "measures" linear dependency.

Properties of a Volume. We define our function indirectly by the properties of this volume.
• Multiplication of a vector by a scalar $\alpha$ yields the $\alpha$-fold volume.
• Adding some vector to another one does not change the volume.
• If two vectors coincide, then the volume is zero.
• The volume of a unit cube is one.

Determinant. The determinant is a function which maps an $n \times n$ matrix $A = (a_1, \ldots, a_n)$ into a real number $\det(A)$ with the following properties:
(D1) The determinant is linear in each column:
$\det(\ldots, a_i + b_i, \ldots) = \det(\ldots, a_i, \ldots) + \det(\ldots, b_i, \ldots)$
$\det(\ldots, \alpha a_i, \ldots) = \alpha \det(\ldots, a_i, \ldots)$
(D2) The determinant is zero if two columns coincide: $\det(\ldots, a_i, \ldots, a_i, \ldots) = 0$
(D3) The determinant is normalized: $\det(I) = 1$
Notation: $\det(A) = |A|$.

Example – Properties (D1):
$$\begin{vmatrix} 1 & 2+10 & 3 \\ 4 & 5+11 & 6 \\ 7 & 8+12 & 9 \end{vmatrix} = \begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{vmatrix} + \begin{vmatrix} 1 & 10 & 3 \\ 4 & 11 & 6 \\ 7 & 12 & 9 \end{vmatrix}$$

Determinant – Remarks:
• Properties (D1)–(D3) define a function uniquely. (I.e., such a function does exist, and two functions with these properties are identical.)
• The determinant as defined above can be negative.
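Properties (D1)–(D3) can be spot-checked numerically on the example above; a small editorial sketch using numpy:

```python
import numpy as np

det = np.linalg.det

# (D1) Linearity in the second column, exactly as in the example above:
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=float)
B = np.array([[1, 10, 3], [4, 11, 6], [7, 12, 9]], dtype=float)
AB = A.copy()
AB[:, 1] += B[:, 1]                          # second column becomes 2+10, 5+11, 8+12
print(np.isclose(det(AB), det(A) + det(B)))  # -> True

# (D2) Two coinciding columns force a zero determinant:
C = np.array([[1, 1, 3], [4, 4, 6], [7, 7, 9]], dtype=float)
print(np.isclose(det(C), 0))                 # -> True

# (D3) Normalization: det(I) = 1.
print(np.isclose(det(np.eye(3)), 1))         # -> True
```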
- Efficient Solutions to Toeplitz-Structured Linear Systems for Signal Processing
A Dissertation Presented to The Academic Faculty by Christopher Kowalczyk Turnes, in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the School of Electrical and Computer Engineering, Georgia Institute of Technology, May 2014. Copyright © 2014 by Christopher Kowalczyk Turnes.

Approved by: Dr. James McClellan, Committee Chair (School of Electrical and Computer Engineering, Georgia Institute of Technology); Dr. Justin Romberg, Advisor (School of Electrical and Computer Engineering, Georgia Institute of Technology); Dr. Monson Hayes (School of Electrical and Computer Engineering, Georgia Institute of Technology); Dr. Jack Poulson (School of Computational Science and Engineering, Georgia Institute of Technology); Dr. Chris Barnes (School of Electrical and Computer Engineering, Georgia Institute of Technology); Dr. Doru Balcan (Quantitative Finance, Bank of America). Date Approved: May 2014.

"Effort is one of the things that gives meaning to life. Effort means you care about something, that something is important to you and you are willing to work for it. It would be an impoverished existence if you were not willing to value things and commit yourself to working toward them." – Carol S. Dweck, Self-Theories

To my wonderful, amazing, and patient parents Cynthia and Patrick, without whom none of this would be possible. Thank you for everything (but mostly for life).

Acknowledgements. Foremost, I wish to express my sincere gratitude to my advisor, Dr. Justin Romberg, for his support of my work and for his patience, time, and immense knowledge. Under his direction, my technical and communication skills improved dramatically.
- The $k$-th Derivatives of the Immanant and the $\chi$-Symmetric Power of an Operator
Sónia Carvalho∗, Pedro J. Freitas†. February 2013. arXiv:1305.1143v1 [math.AC] 6 May 2013.

Abstract. In recent papers, R. Bhatia, T. Jain and P. Grover obtained formulas for directional derivatives, of all orders, of the determinant, the permanent, the $m$-th compound map and the $m$-th induced power map. In this paper we generalize these results for immanants and for other symmetric powers of a matrix.

1. Introduction. There is a formula for the derivative of the determinant map on the space of square matrices of order $n$, known as the Jacobi formula, which has been well known for a long time. In recent work, T. Jain and R. Bhatia derived formulas for higher order derivatives of the determinant ([7]), and T. Jain also derived formulas for all orders of derivatives of the map $\wedge^m$ that takes an $n \times n$ matrix to its $m$-th compound ([10]). Later, P. Grover, in the same spirit as Jain's work, did the same for the permanent map and for the map $\vee^m$ that takes an $n \times n$ matrix to its $m$-th induced power. The mentioned authors extended the theory in [6].

∗ Centro de Estruturas Lineares e Combinatória da Universidade de Lisboa, Av Prof Gama Pinto 2, P-1649-003 Lisboa, and Departamento de Matemática do ISEL, Rua Conselheiro Emídio Navarro 1, 1959-007 Lisbon, Portugal ([email protected]).
† Centro de Estruturas Lineares e Combinatória, Av Prof Gama Pinto 2, P-1649-003 Lisboa, and Departamento de Matemática da Faculdade de Ciências, Campo Grande, Edifício C6, piso 2, P-1749-016 Lisboa.
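The Jacobi formula mentioned in the introduction states that $\tfrac{d}{dt}\det(A + tB)\big|_{t=0} = \operatorname{tr}(\operatorname{adj}(A)\,B)$. A short editorial sketch verifying it numerically (assuming $A$ invertible, so the adjugate can be formed from the inverse):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))  # almost surely invertible
B = rng.standard_normal((4, 4))

# Jacobi's formula: d/dt det(A + tB) at t = 0 equals tr(adj(A) B).
adjA = np.linalg.det(A) * np.linalg.inv(A)  # adjugate via the inverse
exact = np.trace(adjA @ B)

h = 1e-6                                    # central finite difference
approx = (np.linalg.det(A + h * B) - np.linalg.det(A - h * B)) / (2 * h)
print(np.isclose(exact, approx))            # -> True
```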
- Biostatistics 615/815 Lecture 13: Programming with Matrix
Biostatistics 615/815 Lecture 13: Programming with Matrix. Hyun Min Kang, February 17th, 2011. (Outline: Introduction, Power Matrix, Matrix Computation, Linear System, Least square, Summary.)

Announcements. Homework #3: Homework 3 is due today. If you're using Visual C++ and still have problems in using the boost library, you can ask for another extension. Homework #4: Homework 4 is out; Floyd-Warshall algorithm; note that some key information was not covered in the class; fair/biased coin HMM.

Last lecture – conditional independence in graphical models: Markov blanket. [Figure: a directed graph $A \to B \to \{C, D, E\}$ with node labels $\Pr(A)$, $\Pr(B \mid A)$, $\Pr(C \mid B)$, $\Pr(D \mid B)$, $\Pr(E \mid B)$, and A's neighbourhood shaded gray.] If conditioned on the variables in the gray area (variables with direct dependency), A is independent of all the other nodes:
$$\Pr(A, C, D, E \mid B) = \Pr(A \mid B)\,\Pr(C \mid B)\,\Pr(D \mid B)\,\Pr(E \mid B), \qquad A \perp (U - A - \pi_A) \mid \pi_A.$$
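The factorization on the last slide can be checked on a toy version of the same $A \to B \to \{C, D, E\}$ network (an editorial sketch; all probability tables below are made up):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy binary network matching the slide: A -> B -> {C, D, E}.
pA = np.array([0.3, 0.7])                                  # Pr(A)
pB_A = rng.dirichlet([1, 1], size=2)                       # Pr(B | A), rows indexed by A
cond = {v: rng.dirichlet([1, 1], size=2) for v in "CDE"}   # Pr(v | B), rows indexed by B

def joint(a, b, c, d, e):
    """Pr(A=a, B=b, C=c, D=d, E=e) from the graph's factorization."""
    return pA[a] * pB_A[a, b] * cond['C'][b, c] * cond['D'][b, d] * cond['E'][b, e]

# Check Pr(A, C, D, E | B) = Pr(A|B) Pr(C|B) Pr(D|B) Pr(E|B) at one setting:
a, b, c, d, e = 1, 0, 1, 1, 0
pB  = sum(joint(x, b, y, z, w) for x in (0, 1) for y in (0, 1)
                               for z in (0, 1) for w in (0, 1))
pAB = sum(joint(a, b, y, z, w) for y in (0, 1) for z in (0, 1) for w in (0, 1))
lhs = joint(a, b, c, d, e) / pB
rhs = (pAB / pB) * cond['C'][b, c] * cond['D'][b, d] * cond['E'][b, e]
print(np.isclose(lhs, rhs))  # -> True
```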