Efficiently Calculating the Determinant of a Matrix


Felix Limanta
13515065
Program Studi Teknik Informatika
Sekolah Teknik Elektro dan Informatika
Institut Teknologi Bandung, Jl. Ganesha 10 Bandung 40132, Indonesia
[email protected] [email protected]

Abstract—There are three commonly-used algorithms to calculate the determinant of a matrix: Laplace expansion, LU decomposition, and the Bareiss algorithm. In this paper, we first discuss the underlying mathematical principles behind the algorithms. We then realize the algorithms in pseudocode. Finally, we analyze the complexity and nature of the algorithms and compare them with one another.

Keywords—asymptotic time complexity, Bareiss algorithm, determinant, Laplace expansion, LU decomposition.

I. INTRODUCTION

In this paper, we describe three algorithms to find the determinant of a square matrix: Laplace expansion (also known as determinant expansion by minors), LU decomposition, and the Bareiss algorithm.

The paper is organized as follows. Section II reviews the basic mathematical concepts for this paper, which include determinants, elementary matrix operations, algorithm complexity, time complexity, and asymptotic time complexity. In Section III, the three methods are discussed in depth mathematically, whereas in Section IV, the three methods, as well as the subroutines necessary to implement them, are realized in a language-agnostic pseudocode. In Section V, the methods implemented in Section IV are analyzed with regard to their asymptotic time complexity and other features, and compared with one another. Finally, Section VI concludes the paper and suggests further improvements.

Terminology and Notation

Unless noted otherwise, all mentions of matrices in this paper are assumed to mean square matrices.

To describe iterations mathematically, the notation $i \in [a..b]$ is introduced. The notation is taken from the mathematical concept of number intervals, but instead of describing whether $i$ lies between $a$ and $b$, it denotes that the iterator $i$ starts from $a$ and ends at $b$, incrementing $i$ by 1 after each iteration. It is equivalent to the for syntax in programming languages: $i \in [a..b]$ is equivalent to for i = a to b.

The pseudocode described in Section IV follows the syntax detailed in reference [5]. For convenience, two data types are defined here: infotype is either real or integer, depending on implementation, while matrix is an arbitrarily-sized array of arrays of infotype.

II. BASIC MATHEMATICAL CONCEPTS

A. Determinant

The determinant is a real number associated with every square matrix. The determinant of a square matrix A is commonly denoted as det A, det(A), or |A|. Singular matrices are matrices whose determinant is 0; unimodular matrices are matrices whose determinant is 1.

The determinant of a 1×1 matrix is the element itself:

$$A = [a] \Rightarrow \det(A) = a$$

The determinant of a 2×2 matrix is defined as the product of the elements on the main diagonal minus the product of the elements off the main diagonal:

$$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$$

Finding the determinant of larger matrices is discussed in later sections.

Some basic properties of determinants are:
1. $\det(I) = 1$
2. $\det(A^T) = \det(A)$
3. $\det(A^{-1}) = \frac{1}{\det(A)}$
4. For square matrices $A$ and $B$ of equal size, $\det(AB) = \det(A) \times \det(B)$.
5. The determinant of a triangular matrix is the product of the diagonal entries (pivots) $d_1, d_2, \ldots, d_n$:
$$\det(A) = a_{1,1} a_{2,2} \cdots a_{n,n} = \prod_{i=1}^{n} a_{i,i}$$

Determinants are useful in other mathematical subjects. For example, following Cramer's rule, a matrix can be used to represent the coefficients of a system of linear equations, and the determinant can be used to solve that system if and only if the matrix is nonsingular. Determinants are also used in determining the inverse of a matrix: any matrix has a unique inverse if and only if it is nonsingular.
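To make the 2×2 formula and properties 1 and 4 concrete, here is a minimal Python sketch. It is our illustration rather than the paper's pseudocode, and the helper names det2 and matmul2 are assumptions made for this example.

    # A minimal sketch (illustration only, not the paper's pseudocode).
    # det2 implements the 2x2 rule det = ad - bc; matmul2 is a plain product.

    def det2(m):
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]

    def matmul2(p, q):
        return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    assert det2([[1, 0], [0, 1]]) == 1                 # property 1: det(I) = 1
    assert det2(matmul2(A, B)) == det2(A) * det2(B)    # property 4
    print(det2(A))                                     # -2, so A is nonsingular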
B. Elementary Matrix Operations

Elementary matrix operations play important roles in various matrix algebra applications, such as solving a system of linear equations and finding the inverse of a matrix.

There are three kinds of elementary matrix operations:
1. Switch two rows (or columns).
2. Multiply each element in a row (or column) by a non-zero number.
3. Add or subtract a multiple of another row (or column) to a row (or column).

When these operations are performed on rows, they are called elementary row operations; likewise, when they are performed on columns, they are called elementary column operations.

Elementary matrix operations have the following effects on determinants (a short numeric check follows the list):
1. Exchanging two rows or columns of a matrix reverses the sign of its determinant:
$$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = -\begin{vmatrix} c & d \\ a & b \end{vmatrix}$$
2. Multiplying one row or column of a matrix by $t$ multiplies the determinant by $t$:
$$\begin{vmatrix} ta & tb \\ c & d \end{vmatrix} = \begin{vmatrix} ta & b \\ tc & d \end{vmatrix} = t \begin{vmatrix} a & b \\ c & d \end{vmatrix}$$
3. If two rows or two columns of a matrix are equal, or a row or column is a multiple of another row or column, its determinant is zero:
$$\begin{vmatrix} a & b \\ ta & tb \end{vmatrix} = 0$$
4. If $i \neq j$, adding or subtracting $t$ times row $i$ from row $j$ (or vice versa) does not change the determinant:
$$\begin{vmatrix} a & b \\ c - ta & d - tb \end{vmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix}$$
5. If $A$ has a row that is all zeros, then $\det(A) = 0$:
$$\begin{vmatrix} a & b \\ 0 & 0 \end{vmatrix} = 0$$
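Effects 1, 2, and 4 can be verified numerically. The sketch below is again our own Python illustration, reusing the assumed det2 helper from the previous example.

    # Our sketch verifying the effect of elementary row operations on det.

    def det2(m):
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]

    a, b, c, d, t = 1, 2, 3, 4, 5
    M = [[a, b], [c, d]]

    swapped = [[c, d], [a, b]]                  # effect 1: exchange the rows
    scaled  = [[t * a, t * b], [c, d]]          # effect 2: scale row 1 by t
    sheared = [[a, b], [c - t * a, d - t * b]]  # effect 4: subtract t * row 1

    assert det2(swapped) == -det2(M)            # sign reversed
    assert det2(scaled)  == t * det2(M)         # determinant multiplied by t
    assert det2(sheared) == det2(M)             # determinant unchanged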
C. Algorithm Complexity

Algorithms are defined as the series of steps necessary to solve a problem systematically. An algorithm, first and foremost, is not only required to be effective, that is, to actually solve the problem, but is also expected to be efficient. An algorithm might effectively solve a problem, but if it demands an unreasonable amount of processing time or takes up a disproportionately large amount of memory, it is next to useless.

In application, every algorithm uses two resources which can serve as measurements: the number of steps taken (time) and the amount of memory reserved to execute those steps (space). The number of steps taken is defined as the time complexity, denoted T(n), whereas the amount of memory reserved is defined as the space complexity, denoted S(n).

Matrices are a type of data structure which takes up a relatively large amount of memory compared to more compact data structures such as linked lists. Therefore, the space complexity of the algorithms discussed later will not be examined, in favor of focusing on their time complexity.

D. Time Complexity

The time complexity of an algorithm is measured by the number of instructions executed, not by the running time of the algorithm, because of the differences between computer architectures, between programming languages and their compilers, and so on. Because of that, the same algorithm implemented in different languages, or the same executable run on different machines, would yield different running times.

In calculating the time complexity of an algorithm, ideally all operations are accounted for. For decently complex algorithms this is all but impossible, so in practice only basic operations are accounted for. For example, the basic operation of a searching algorithm would be the comparison of the searched element with the elements in the array. By counting how many comparisons are made, we can obtain the relative efficiency of the algorithm.

The time complexity of an algorithm can be divided into three cases:
1. $T_{\min}(n)$: best case
2. $T_{\max}(n)$: worst case
3. $T_{\mathrm{avg}}(n)$: average case

Example 1. Find the best-case, worst-case, and average-case time complexity of the following sequential search algorithm.

    function SeqSearch (a: array of infotype, n: integer, x: infotype) → integer
    DICTIONARY
        i: integer
        found: boolean
    ALGORITHM
        i ← 1
        found ← false
        while ((i ≤ n) and not(found)) do
            found ← (a[i] = x)
            if not(found) then
                i ← i + 1
        if (found) then
            → i
        else
            → 0

Best case: $a_1 = x \Rightarrow T_{\min}(n) = 1$
Worst case: $a_n = x$ or $x$ not found $\Rightarrow T_{\max}(n) = n$
Average case: $T_{\mathrm{avg}}(n) = \frac{1 + 2 + \cdots + n}{n} = \frac{n + 1}{2}$

E. Asymptotic Time Complexity

Usually, the exact time complexity of an algorithm is unnecessary. What is needed instead is a rough estimate of the time required by the algorithm and of how fast that time requirement grows with the size of the input. This is because the efficiency of an algorithm only becomes apparent for large inputs. Running the determinant-calculating algorithms on a 3×3 matrix would not yield any notable differences between the algorithms' running times. However, if the algorithms are run on a large matrix (say, 5000×5000), the differences between running times would be much more apparent.

The asymptotic time complexity is a rough estimate of the time complexity as the size of the input approaches infinity. It is commonly denoted with Big-O notation, though other notations exist. Generally, the asymptotic time complexity of an algorithm can be taken from the most significant term of the time complexity function. For example,

$$T(n) = 192n^5 + 2^n + \log n! = O(2^n)$$

because for non-negative integer values of $n$, the term $2^n$ contributes the most to $T(n)$. Table I shows categories of algorithms based on their Big-O notation.

More informally, expansion by minors can be written as

$$\begin{vmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} \end{vmatrix} = a_{11} \begin{vmatrix} a_{22} & a_{23} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n2} & a_{n3} & \cdots & a_{nn} \end{vmatrix} - a_{12} \begin{vmatrix} a_{21} & a_{23} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n3} & \cdots & a_{nn} \end{vmatrix} + \cdots \pm a_{1n} \begin{vmatrix} a_{21} & a_{22} & \cdots & a_{2(n-1)} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{n(n-1)} \end{vmatrix}$$
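This expansion maps directly onto a recursive procedure. The sketch below is our own Python rendering of expansion along the first row, not the paper's Section IV pseudocode; the names laplace_det and minor are assumptions made for this illustration. Because each minor is itself expanded recursively, the number of multiplications grows factorially with $n$, which is what motivates comparing this method against LU decomposition and the Bareiss algorithm in the later sections.

    # Our sketch of Laplace expansion along the first row:
    # det(A) = sum over j of (-1)^j * A[0][j] * det(minor of A at column j).

    def minor(m, col):
        # Delete row 0 and the given column.
        return [row[:col] + row[col + 1:] for row in m[1:]]

    def laplace_det(m):
        if len(m) == 1:                     # base case: det([a]) = a
            return m[0][0]
        return sum((-1) ** j * m[0][j] * laplace_det(minor(m, j))
                   for j in range(len(m)))

    A = [[1, 2, 1], [4, 1, 3], [5, 6, 1]]
    print(laplace_det(A))                   # prints 24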