2.1 Use of Permutation (Similarity) in Solving Block-Diagonal System


• Suppose we are to solve the following system of linear equations: $Ax = b$, where
$$
A = \begin{bmatrix} D^{11} & D^{12} \\ D^{21} & D^{22} \end{bmatrix}
\quad\text{and}\quad
D^{ij} = \mathrm{Diag}\bigl(D^{ij}_1, \ldots, D^{ij}_n\bigr)
$$
are $n \times n$ diagonal matrices for $i, j = 1, 2$. Here $A$ is a $2 \times 2$ block matrix, each block being an $n \times n$ diagonal matrix.

• Written out in full, the system is
$$
\begin{bmatrix}
D^{11}_1 & & & D^{12}_1 & & \\
& \ddots & & & \ddots & \\
& & D^{11}_n & & & D^{12}_n \\
D^{21}_1 & & & D^{22}_1 & & \\
& \ddots & & & \ddots & \\
& & D^{21}_n & & & D^{22}_n
\end{bmatrix}
\begin{bmatrix} x_1 \\ \vdots \\ x_n \\ x_{n+1} \\ \vdots \\ x_{2n} \end{bmatrix}
=
\begin{bmatrix} b_1 \\ \vdots \\ b_n \\ b_{n+1} \\ \vdots \\ b_{2n} \end{bmatrix}.
$$

• LU factorization may need $O(n^3)$ operations.

• Any strategy you can propose apart from LU factorization? We consider the following example:
$$
A = \begin{bmatrix}
5 & 0 & 0 & 2 & 0 & 0 \\
0 & 4 & 0 & 0 & 1 & 0 \\
0 & 0 & 5 & 0 & 0 & 2 \\
1 & 0 & 0 & 4 & 0 & 0 \\
0 & 3 & 0 & 0 & 3 & 0 \\
0 & 0 & 1 & 0 & 0 & 5
\end{bmatrix}
= L \cdot U
$$
with
$$
L = \begin{bmatrix}
1.00 & 0 & 0 & 0 & 0 & 0 \\
0 & 1.00 & 0 & 0 & 0 & 0 \\
0 & 0 & 1.00 & 0 & 0 & 0 \\
0.20 & 0 & 0 & 1.00 & 0 & 0 \\
0 & 0.75 & 0 & 0 & 1.00 & 0 \\
0 & 0 & 0.20 & 0 & 0 & 1.00
\end{bmatrix},
\qquad
U = \begin{bmatrix}
5.00 & 0 & 0 & 2 & 0 & 0 \\
0 & 4.00 & 0 & 0 & 1 & 0 \\
0 & 0 & 5.00 & 0 & 0 & 2 \\
0 & 0 & 0 & 3.60 & 0 & 0 \\
0 & 0 & 0 & 0 & 2.25 & 0 \\
0 & 0 & 0 & 0 & 0 & 4.60
\end{bmatrix}.
$$

• One can consider a re-arrangement of both the equations and the variables to get the following equivalent system $\tilde{A}\tilde{x} = \tilde{b}$:
$$
\begin{bmatrix}
D^{11}_1 & D^{12}_1 & & & & & \\
D^{21}_1 & D^{22}_1 & & & & & \\
& & D^{11}_2 & D^{12}_2 & & & \\
& & D^{21}_2 & D^{22}_2 & & & \\
& & & & \ddots & & \\
& & & & & D^{11}_n & D^{12}_n \\
& & & & & D^{21}_n & D^{22}_n
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_{n+1} \\ x_2 \\ x_{n+2} \\ \vdots \\ x_n \\ x_{2n} \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_{n+1} \\ b_2 \\ b_{n+2} \\ \vdots \\ b_n \\ b_{2n} \end{bmatrix}.
$$

• It is clear that the linear system has a unique solution if and only if
$$
\det\begin{bmatrix} D^{11}_i & D^{12}_i \\ D^{21}_i & D^{22}_i \end{bmatrix} \neq 0
\qquad \text{for } i = 1, 2, \ldots, n.
$$

• The solution is given by ($i = 1, 2, \ldots, n$)
$$
\begin{bmatrix} x_i \\ x_{n+i} \end{bmatrix}
= \frac{1}{D^{11}_i D^{22}_i - D^{21}_i D^{12}_i}
\begin{bmatrix} D^{22}_i b_i - D^{12}_i b_{n+i} \\ D^{11}_i b_{n+i} - D^{21}_i b_i \end{bmatrix}.
$$

• The computational cost is $O(n)$.

• The memory cost is also $O(n)$.

• There exists a permutation matrix $P$ such that $\tilde{A} = P \cdot A \cdot P^T$, where $P^T = P^{-1}$.

• The following is an example when $n = 2$ (blank entries are zero):
$$
\underbrace{\begin{bmatrix}
D^{11}_1 & D^{12}_1 & & \\
D^{21}_1 & D^{22}_1 & & \\
& & D^{11}_2 & D^{12}_2 \\
& & D^{21}_2 & D^{22}_2
\end{bmatrix}}_{\tilde{A}}
=
\underbrace{\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}}_{P}
\underbrace{\begin{bmatrix}
D^{11}_1 & & D^{12}_1 & \\
& D^{11}_2 & & D^{12}_2 \\
D^{21}_1 & & D^{22}_1 & \\
& D^{21}_2 & & D^{22}_2
\end{bmatrix}}_{A}
\underbrace{\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}}_{P^T}.
$$
Multiplying by $P$ on the left re-orders the equations (rows $1, 3, 2, 4$ of $A$), and multiplying by $P^T$ on the right re-orders the unknowns in the same way, which together produce the block-diagonal matrix $\tilde{A}$.

• To solve $Ax = b$, i.e., to solve $\tilde{A}Px = Pb$:

  Stage 1: solve $\tilde{A}y = Pb$ for $y$;

  Stage 2: solve $Px = y$ for $x$.

• The computational cost of applying $P$ (forming $Pb$ and recovering $x$ from $Px = y$) is negligible.

• The idea can be extended to a matrix of $m \times m$ blocks (each block an $n \times n$ diagonal matrix).

• If $m \ll n$, then the computational cost will be $O(m^3 n)$. Why?

• The method can be implemented easily on a parallel computer. If there are $O(n)$ processors, the computational cost can be reduced to $O(m^3)$.
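To make the $O(n)$ solve concrete, here is a minimal NumPy sketch (the function and variable names are illustrative, not from the notes). It stores the four diagonals as vectors and solves the $n$ independent $2 \times 2$ systems obtained after the permutation by Cramer's rule, exactly as in the solution formula above; the result is checked against a dense solver on the $6 \times 6$ example, with an illustrative right-hand side.

```python
import numpy as np

def solve_two_by_two_block_diagonal(d11, d12, d21, d22, b):
    """Solve A x = b, where A = [[Diag(d11), Diag(d12)],
                                 [Diag(d21), Diag(d22)]] and each block is n x n diagonal.

    After the permutation (similarity), the system splits into n independent
    2 x 2 systems, solved here by Cramer's rule in O(n) time and memory.
    """
    n = len(d11)
    b1, b2 = b[:n], b[n:]                    # b_i and b_{n+i}
    det = d11 * d22 - d21 * d12              # determinants of the n 2 x 2 blocks
    if np.any(det == 0):
        raise ValueError("a 2 x 2 block is singular: no unique solution")
    x1 = (d22 * b1 - d12 * b2) / det         # x_i
    x2 = (d11 * b2 - d21 * b1) / det         # x_{n+i}
    return np.concatenate([x1, x2])

# Check against a dense solve on the 6 x 6 example above.
d11, d12 = np.array([5.0, 4.0, 5.0]), np.array([2.0, 1.0, 2.0])
d21, d22 = np.array([1.0, 3.0, 1.0]), np.array([4.0, 3.0, 5.0])
A = np.block([[np.diag(d11), np.diag(d12)], [np.diag(d21), np.diag(d22)]])
b = np.arange(1.0, 7.0)
assert np.allclose(solve_two_by_two_block_diagonal(d11, d12, d21, d22, b),
                   np.linalg.solve(A, b))
```

Extending the same routine to $m \times m$ blocks amounts to assembling and solving $n$ independent dense $m \times m$ systems, which is where the $O(m^3 n)$ operation count quoted above comes from.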
• LU factorization can also be applied wisely in the diagonal block form:
$$
\begin{bmatrix}
D^{11}_1 & & & D^{12}_1 & & \\
& \ddots & & & \ddots & \\
& & D^{11}_n & & & D^{12}_n \\
D^{21}_1 & & & D^{22}_1 & & \\
& \ddots & & & \ddots & \\
& & D^{21}_n & & & D^{22}_n
\end{bmatrix}
=
\begin{bmatrix}
1 & & & & & \\
& \ddots & & & & \\
& & 1 & & & \\
L_1 & & & 1 & & \\
& \ddots & & & \ddots & \\
& & L_n & & & 1
\end{bmatrix}
\begin{bmatrix}
U^1_1 & & & U^2_1 & & \\
& \ddots & & & \ddots & \\
& & U^1_n & & & U^2_n \\
& & & U_{n+1} & & \\
& & & & \ddots & \\
& & & & & U_{2n}
\end{bmatrix}.
$$

• The idea can be extended to a matrix of $m \times m$ blocks (each block an $n \times n$ diagonal matrix). If $m \ll n$, the computational cost will be $O(m^3 n)$.

2.2 Use of Similarity in Solving Circulant Matrix System

• A matrix is called a circulant matrix if it takes the following form:
$$
A_n = \begin{bmatrix}
a_1 & a_2 & \cdots & a_{n-1} & a_n \\
a_n & a_1 & a_2 & \cdots & a_{n-1} \\
a_{n-1} & a_n & a_1 & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & a_2 \\
a_2 & a_3 & \cdots & a_n & a_1
\end{bmatrix}.
$$

• We note that each row is just the previous row cycled forward one step.

• Thus an $n \times n$ circulant matrix is characterized by only $n$ coefficients.

• In an $n \times n$ circulant matrix, the entries satisfy $[A_n]_{ij} = [A_n]_{i'j'}$ whenever $i - j \equiv i' - j' \pmod{n}$.

• In fact, we can write
$$
A_n = \sum_{i=0}^{n-1} a_{i+1}\, C_n^i \tag{2.1}
$$
where $C_n$ is the permutation matrix
$$
C_n = \begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \ddots & \vdots \\
\vdots & & \ddots & \ddots & 0 \\
0 & & & \ddots & 1 \\
1 & 0 & \cdots & \cdots & 0
\end{bmatrix}.
$$

• We note that the $C_n^i$ are all permutation matrices and $C_n^0 = C_n^n = I_n$.

• Moreover, $A_n$ is diagonalizable whenever $C_n$ is: a similarity that diagonalizes $C_n$ diagonalizes every power $C_n^i$, and hence the sum (2.1).

• It can be shown that all the $C_n^i$ can be diagonalized using the discrete Fast Fourier Transform (FFT) matrix $F_n$.

• The discrete FFT matrix is defined as follows:
$$
F_n = F_n(w_n) = \begin{bmatrix}
w_n^{0\cdot 0} & w_n^{0\cdot 1} & \cdots & w_n^{0\cdot (n-1)} \\
w_n^{1\cdot 0} & w_n^{1\cdot 1} & \cdots & w_n^{1\cdot (n-1)} \\
\vdots & \vdots & & \vdots \\
w_n^{(n-1)\cdot 0} & w_n^{(n-1)\cdot 1} & \cdots & w_n^{(n-1)\cdot (n-1)}
\end{bmatrix}
$$
where
$$
w_n = e^{-2\pi i/n} = \cos\!\left(\frac{2\pi}{n}\right) - i\,\sin\!\left(\frac{2\pi}{n}\right).
$$

• It can be shown that (Exercise)
$$
F_n^{-1} = \frac{1}{n}\, F_n\!\left(\frac{1}{w_n}\right)
= \frac{1}{n}\begin{bmatrix}
w_n^{-0\cdot 0} & w_n^{-0\cdot 1} & \cdots & w_n^{-0\cdot (n-1)} \\
w_n^{-1\cdot 0} & w_n^{-1\cdot 1} & \cdots & w_n^{-1\cdot (n-1)} \\
\vdots & \vdots & & \vdots \\
w_n^{-(n-1)\cdot 0} & w_n^{-(n-1)\cdot 1} & \cdots & w_n^{-(n-1)\cdot (n-1)}
\end{bmatrix}.
$$

• A demonstration for $n = 4$:
$$
C_4 = \begin{bmatrix} 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \\ 1&0&0&0 \end{bmatrix},\quad
C_4^2 = \begin{bmatrix} 0&0&1&0 \\ 0&0&0&1 \\ 1&0&0&0 \\ 0&1&0&0 \end{bmatrix},\quad
C_4^3 = \begin{bmatrix} 0&0&0&1 \\ 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \end{bmatrix},\quad
C_4^4 = \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix},
$$
$$
F_4 = \begin{bmatrix} 1&1&1&1 \\ 1&-i&-1&i \\ 1&-1&1&-1 \\ 1&i&-1&-i \end{bmatrix},\qquad
F_4^{-1} = \begin{bmatrix} 0.25&0.25&0.25&0.25 \\ 0.25&0.25i&-0.25&-0.25i \\ 0.25&-0.25&0.25&-0.25 \\ 0.25&-0.25i&-0.25&0.25i \end{bmatrix}.
$$

• Here $F_4 \cdot F_4^{-1} = I_4$ and $F_4 \cdot C_4 \cdot F_4^{-1} = \mathrm{Diag}(1,\, i,\, -1,\, -i)$.
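Because $F_n$ diagonalizes $A_n$, a circulant system $A_n x = b$ can be solved with a few FFTs in $O(n \log n)$ operations rather than by a dense factorization. The following NumPy sketch (function names are mine, and the final check uses a small circulant system that is not from the notes) first verifies the $n = 4$ demonstration above and then performs the frequency-domain solve.

```python
import numpy as np

def fft_matrix(n):
    """The discrete FFT matrix F_n with entries w_n^{jk}, w_n = exp(-2*pi*i/n)."""
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi / n) ** (j * k)

# Verify the n = 4 demonstration: F_4 C_4 F_4^{-1} = Diag(1, i, -1, -i).
C4 = np.roll(np.eye(4), 1, axis=1)                 # the cyclic-shift permutation C_4
F4 = fft_matrix(4)
assert np.allclose(F4 @ C4 @ np.linalg.inv(F4), np.diag([1, 1j, -1, -1j]))

def solve_circulant(a, b):
    """Solve A_n x = b, where A_n is circulant with first row (a_1, ..., a_n).

    F_n A_n F_n^{-1} is diagonal, so in the frequency domain the solve is an
    elementwise division; the total cost is O(n log n) using the FFT.
    """
    first_col = np.concatenate([a[:1], a[:0:-1]])  # (a_1, a_n, a_{n-1}, ..., a_2)
    eigenvalues = np.fft.fft(first_col)            # eigenvalues of A_n
    return np.fft.ifft(np.fft.fft(b) / eigenvalues)

# Check against a dense solve on a small illustrative circulant system.
a = np.array([4.0, 1.0, 0.0, 1.0])
A = np.array([np.roll(a, k) for k in range(4)])    # each row cycles the previous one
b = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(solve_circulant(a, b), np.linalg.solve(A, b))
```

In practice neither $F_n$ nor $A_n$ is formed explicitly: the FFT routines apply the similarity transformation implicitly, and only the $n$ defining coefficients and the $n$ eigenvalues need to be stored.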