Displacement Structure Approach to Cauchy and Cauchy–Vandermonde Matrices

Total Pages: 16

File Type: PDF, Size: 1020 KB

Journal of Computational and Applied Mathematics 138 (2002) 259–272, www.elsevier.com/locate/cam

Displacement structure approach to Cauchy and Cauchy–Vandermonde matrices: inversion formulas and fast algorithms

Zheng-Hong Yang (a, b, ∗), Yong-Jian Hu (b)

(a) Department of Mathematics, P.O. Box 71, East Campus, China Agricultural University, 100083 Beijing, China
(b) Department of Mathematics, Beijing Normal University, 100875 Beijing, China

Received 2 September 2000; received in revised form 22 February 2001. Partially supported by the National Science Foundation of China. ∗ Correspondence address: Department of Mathematics, P.O. Box 71, East Campus, China Agricultural University, 100083 Beijing, China.

Abstract

The confluent Cauchy and Cauchy–Vandermonde matrices are considered, which were studied earlier by various authors in different ways. In this paper we use another approach, the displacement structure approach, to deal with matrices of this kind. We show that Cauchy and Cauchy–Vandermonde matrices satisfy a special type of matrix equation. This leads quite naturally to inversion formulas and fast algorithms for matrices of this kind. © 2002 Elsevier Science B.V. All rights reserved.

MSC: 15A57; 65D05

Keywords: Cauchy–Vandermonde matrix; Hermite rational interpolation; Displacement structure; Inversion formula; Fast algorithm

1. Introduction and preliminaries

Let

$$c = (c_1, \ldots, c_n) \quad \text{and} \quad d = (d_1, \ldots, d_k) \qquad (1.1)$$

be two arrays of nodes with all $c_i$ and $d_j$ pairwise distinct. Associated with $c$ and $d$, the (simple) Cauchy matrix is defined as

$$C(c, d) = \left[ \frac{1}{c_i - d_j} \right]_{i,j=1}^{n,k}, \qquad (1.2)$$

and the Cauchy–Vandermonde (CV) matrix is defined as

$$C_m(c, d) = [\, C(c, d) \;\; V_m(c) \,], \qquad (1.3)$$

where

$$V_m(c) = \begin{bmatrix} 1 & c_1 & \cdots & c_1^{m-1} \\ 1 & c_2 & \cdots & c_2^{m-1} \\ \vdots & \vdots & & \vdots \\ 1 & c_n & \cdots & c_n^{m-1} \end{bmatrix} \in \mathbb{C}^{n \times m} \qquad (1.4)$$

is the (simple) Vandermonde matrix corresponding to $c$.

One of the most important properties of Cauchy–Vandermonde matrices is their application to rational interpolation with fixed poles. For a given array of interpolation data $(c_i, \eta_i)$, we want to find a proper rational function $f(x) = q(x)/p(x)$ ($\deg q(x) < \deg p(x)$) with $p(x) = \prod_{j=1}^{k}(x - d_j)$ such that

$$f(c_i) = \eta_i \quad (i = 1, \ldots, n). \qquad (1.5)$$

Let $f(x)$ have a partial fraction decomposition of the form $f(x) = \sum_{j=1}^{k} \alpha_j/(x - d_j)$; then the above problem is equivalent to solving the following system of equations:

$$C(c, d)\,\alpha = \eta, \qquad (1.6)$$

with $\alpha = (\alpha_1, \ldots, \alpha_k)^T$ and $\eta = (\eta_1, \ldots, \eta_n)^T$. If $f(x)$ is not proper, then the Euclidean algorithm yields

$$f(x) = \sum_{j=1}^{k} \frac{\alpha_j}{x - d_j} + \alpha_{k+1} + \alpha_{k+2}\,x + \cdots + \alpha_{k+m}\,x^{m-1},$$

and the above interpolation problem is equivalent to solving the linear system of equations

$$C_m(c, d)\,\alpha = \eta \qquad (1.7)$$

with $\alpha = (\alpha_1, \ldots, \alpha_{k+m})^T$ and $\eta$ as above.

Cauchy and Cauchy–Vandermonde matrices appeared in a sequence of recent papers [9–11,1,3]. In [10,9] Mühlbach derived the determinant and inverse representations and the Lagrange–Hermite interpolation formula for the Cauchy–Vandermonde matrix. In [11] Vavřín gave the factorization and inversion formulas for Cauchy and CV matrices in another way. The method in [10] is direct computation and seems to be rather complicated. Although in [11] the relations between Cauchy–Vandermonde matrices and rational interpolation problems were pointed out, no algorithmic process for computing the interpolants was given.
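To make the connection between (1.5)–(1.7) and linear systems concrete, here is a minimal numerical sketch (ours, not part of the paper; the nodes and coefficients are arbitrary illustrative values): it builds the simple Cauchy matrix (1.2) and recovers the partial-fraction coefficients of a proper interpolant by solving (1.6).

```python
import numpy as np

# Illustrative nodes: interpolation points c and poles d, pairwise distinct
c = np.array([0.5, 1.5, 2.5])
d = np.array([-1.0, -2.0, -3.0])

# Simple Cauchy matrix C(c, d) = [1 / (c_i - d_j)], Eq. (1.2)
C = 1.0 / (c[:, None] - d[None, :])

# A proper rational function f(x) = sum_j alpha_j / (x - d_j)
alpha_true = np.array([1.0, 2.0, 3.0])
eta = C @ alpha_true                       # its values f(c_i), Eq. (1.5)

# Solving C(c, d) alpha = eta, Eq. (1.6), recovers the coefficients
alpha = np.linalg.solve(C, eta)
print(np.allclose(alpha, alpha_true))      # True
```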
In the present paper we make use of another method, the displacement structure approach, to deal with Cauchy and CV matrices. We point out that confluent Cauchy and CV matrices satisfy certain special matrix equations, so that deriving their inverses is equivalent to solving only two linear systems of equations with Cauchy and CV matrices as coefficient matrices; we also give fast algorithms for solving such linear systems.

The concept of displacement structure was first introduced by Kailath et al. [5,6] for Toeplitz and Hankel matrices using the Stein-type operator $\nabla(\cdot)$ given by

$$\nabla(R) = R - FRA. \qquad (1.8)$$

For details of displacement structure theory we refer the reader to the survey article of Kailath and Sayed [7]. In general, the generalized displacement operator is defined by

$$\nabla_{\{\Omega,\Delta,F,A\}}(R) = \Omega R \Delta - FRA, \qquad (1.9)$$

where $\Omega$, $\Delta$, $F$ and $A$ are specified matrices such that $\nabla_{\{\Omega,\Delta,F,A\}}(R)$ has low rank, say $r$, independent of $n$. Then $R$ is said to be a structured or a displacement structure matrix with respect to the displacement operator defined by (1.9), and $r$ is referred to as the displacement rank of $R$. A special case of (1.9) takes the simpler form of a Sylvester-type equation

$$\nabla_{\{\Omega,A\}}(R) = \Omega R - RA, \qquad (1.10)$$

which was first studied by Heinig [4] for Cauchy-like matrices. Indeed, the Cauchy matrix $C(c,d)$ is the unique solution of the following Sylvester-type matrix equation:

$$D(c)\,C(c,d) - C(c,d)\,D(d) = \begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix} [\,1 \;\; 1 \;\cdots\; 1\,], \qquad (1.11)$$

where $D(c) = \mathrm{diag}(c_i)_{i=1}^{n}$ and $D(d) = \mathrm{diag}(d_j)_{j=1}^{k}$ are the diagonal matrices corresponding to $c$ and $d$, respectively.

Eq. (1.11) implies that the Cauchy matrix $C(c,d)$ has displacement rank 1 with respect to the operator $\nabla_{\{D(c),D(d)\}}(\cdot)$. When $n = k$, multiplying Eq. (1.11) by $C(c,d)^{-1}$ from both the left- and right-hand sides we get

$$D(d)\,C(c,d)^{-1} - C(c,d)^{-1}\,D(c) = -\,C(c,d)^{-1} \begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix} [\,1 \;\; 1 \;\cdots\; 1\,]\, C(c,d)^{-1}. \qquad (1.12)$$

If we denote by $u = (u_1, \ldots, u_n)^T$ and $v = (v_1, \ldots, v_n)^T$ the solutions of the following two equations:

$$C(c,d)\,u = [\,1 \;\cdots\; 1\,]^T \quad \text{and} \quad v^T\,C(c,d) = [\,1 \;\cdots\; 1\,], \qquad (1.13)$$

then elementwise comparison in Eq. (1.12) shows that

$$C(c,d)^{-1} = -\left[ \frac{u_i v_j}{d_i - c_j} \right]_{i,j=1}^{n} = -\,D(u)\,C(d,c)\,D(v), \qquad (1.14)$$

where $D(u) = \mathrm{diag}(u_i)_{i=1}^{n}$ and $D(v) = \mathrm{diag}(v_j)_{j=1}^{n}$. This implies that computing the inverse of the Cauchy matrix $C(c,d)$ is equivalent to solving only two systems of equations, and only one system in the symmetric case.
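The inversion recipe (1.13)–(1.14) translates directly into code. The following sketch (ours; the nodes are arbitrary) solves the two systems and assembles the inverse; a production version would use the fast O(n²) algorithms developed later in the paper rather than a general-purpose solver, but the point here is only that two solves suffice.

```python
import numpy as np

n = 5
c = np.arange(1.0, n + 1.0)                # nodes c_i
d = -np.arange(1.0, n + 1.0)               # nodes d_j, disjoint from c
C = 1.0 / (c[:, None] - d[None, :])        # C(c, d)

ones = np.ones(n)
u = np.linalg.solve(C, ones)               # C(c, d) u = [1 ... 1]^T, Eq. (1.13)
v = np.linalg.solve(C.T, ones)             # v^T C(c, d) = [1 ... 1]

# Eq. (1.14): C(c, d)^{-1} = -D(u) C(d, c) D(v)
C_dc = 1.0 / (d[:, None] - c[None, :])
C_inv = -np.diag(u) @ C_dc @ np.diag(v)
print(np.allclose(C_inv @ C, np.eye(n)))   # True
```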
2. Confluent Cauchy and Cauchy–Vandermonde matrices: displacement structures and inversion formulas

In this section we derive the displacement structures and fast inversion formulas for confluent Cauchy and CV matrices. Let us begin by recalling the definitions of confluent Cauchy and Cauchy–Vandermonde matrices. Our definitions are in accordance with those in [11] and differ slightly from those in [10]. Let

$$c = (\underbrace{c_1, \ldots, c_1}_{n_1}, \underbrace{c_2, \ldots, c_2}_{n_2}, \ldots, \underbrace{c_p, \ldots, c_p}_{n_p}) \qquad (2.1)$$

and

$$d = (\underbrace{d_1, \ldots, d_1}_{k_1}, \underbrace{d_2, \ldots, d_2}_{k_2}, \ldots, \underbrace{d_q, \ldots, d_q}_{k_q}) \qquad (2.2)$$

be two sequences of interpolation nodes with all $c_i$, $d_j$ pairwise distinct and $\sum_{i=1}^{p} n_i = n$, $\sum_{j=1}^{q} k_j = k$. With $c$ and $d$ we associate two classes of matrices.

Definition 1. We call the $n \times k$ block matrix

$$C(c,d) = (C_{ij})_{i,j=1}^{p,q} \qquad (2.3)$$

the confluent Cauchy matrix corresponding to $c$ and $d$, where $C_{ij} = (C_{ij}^{st})_{s=0,t=0}^{n_i-1,\,k_j-1} \in \mathbb{C}^{n_i \times k_j}$ and

$$C_{ij}^{st} = \frac{1}{s!\,t!} \left. \frac{\partial^{s+t}}{\partial x^s\,\partial y^t}\, \frac{1}{x-y} \right|_{x=c_i,\,y=d_j} = \binom{s+t}{s} \frac{(-1)^s}{(c_i - d_j)^{s+t+1}}.$$

Definition 2. We call the $n \times (k+m)$ block matrix

$$C_m(c,d) = [\,C(c,d)\;\; V_m(c)\,] \quad (m \geq 0) \qquad (2.4)$$

the confluent Cauchy–Vandermonde matrix corresponding to $c$ and $d$, where

$$V_m(c) = \begin{bmatrix} V_m(c_1) \\ \vdots \\ V_m(c_p) \end{bmatrix}, \quad \text{with} \quad V_m(c_i) = \left[ \binom{t}{s} c_i^{t-s} \right]_{s=0,t=0}^{n_i-1,\,m-1}, \qquad (2.5)$$

is the confluent Vandermonde matrix of size $n \times m$ corresponding to $c$. When $m = 0$ we set $C_0(c,d) = C(c,d)$.

As in the simple case, confluent Cauchy and CV matrices correspond to rational interpolation problems with prescribed, possibly repeated, poles. Indeed, let $p(x) = \prod_{j=1}^{q} (x - d_j)^{k_j}$ and

$$f(x) = \frac{q(x)}{p(x)} = \sum_{j=1}^{q} \sum_{t=0}^{k_j-1} \frac{\alpha_{jt}}{(x - d_j)^{t+1}} + \sum_{j=1}^{m} \beta_j\, x^{j-1}.$$

Then the interpolation problem

$$\frac{1}{s!} \left. \frac{d^s f(x)}{dx^s} \right|_{x=c_i} = \eta_{is} \quad (i = 1, \ldots, p;\; s = 0, \ldots, n_i - 1)$$

reduces to solving the following system:

$$C_m(c,d)\,\xi = \eta, \qquad (2.6)$$

where

$$\xi = \begin{bmatrix} \mathrm{col}\big(\mathrm{col}(\alpha_{jt})_{t=0}^{k_j-1}\big)_{j=1}^{q} \\ \mathrm{col}(\beta_j)_{j=1}^{m} \end{bmatrix} \quad \text{and} \quad \eta = \mathrm{col}\big(\mathrm{col}(\eta_{is})_{s=0}^{n_i-1}\big)_{i=1}^{p}.$$

Hereafter, $\mathrm{col}(a_i)$ denotes the column vector with the $a_i$ as components.

2.1. Displacement structures

In this subsection we derive the displacement structures for confluent Cauchy and CV matrices. Our starting point is the following lemma.

Lemma 3. Suppose that $J(\lambda)$ and $J(\mu)$ are the $m \times m$ and $n \times n$ lower triangular Jordan blocks with $\lambda \neq \mu$, respectively, and $\Phi = (\Phi_{st})_{s=0,t=0}^{m-1,\,n-1}$ is any $m \times n$ matrix. Then the Sylvester matrix equation

$$J(\lambda)\,X - X\,J(\mu)^T = \Phi \qquad (2.7)$$

has the unique solution $X = (X_{st})_{s=0,t=0}^{m-1,\,n-1}$ given by

$$X_{st} = \sum_{a=0}^{s} \sum_{b=0}^{t} \binom{a+b}{a} \frac{(-1)^{a}}{(\lambda - \mu)^{a+b+1}}\, \Phi_{s-a,\,t-b}. \qquad (2.8)$$

Note that when $\lambda = \mu$, the uniqueness of the solution of Eq. (2.7) is lost.
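Formula (2.8) is easy to check numerically. The sketch below (ours; the sizes, λ, μ and Φ are arbitrary choices) evaluates the double sum and verifies that the result satisfies (2.7) for lower triangular Jordan blocks.

```python
import numpy as np
from math import comb

def jordan_lower(lam, size):
    # Lower triangular Jordan block: lam on the diagonal, ones on the subdiagonal
    return lam * np.eye(size) + np.diag(np.ones(size - 1), -1)

m, n = 3, 4
lam, mu = 2.0, -1.0                        # lam != mu guarantees uniqueness
Phi = np.arange(1.0, m * n + 1.0).reshape(m, n)

# Closed form (2.8)
X = np.zeros((m, n))
for s in range(m):
    for t in range(n):
        X[s, t] = sum((-1) ** a * comb(a + b, a) * Phi[s - a, t - b]
                      / (lam - mu) ** (a + b + 1)
                      for a in range(s + 1) for b in range(t + 1))

# Verify the Sylvester equation (2.7): J(lam) X - X J(mu)^T = Phi
J_lam, J_mu = jordan_lower(lam, m), jordan_lower(mu, n)
print(np.allclose(J_lam @ X - X @ J_mu.T, Phi))   # True
```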
Recommended publications
  • Seminar VII for the Course GROUP THEORY in PHYSICS, Michael Flohr
    Seminar VII for the course GROUP THEORY IN PHYSICS, Michael Flohr. The classical Lie groups. 25 January 2005. MATRIX LIE GROUPS. Most Lie groups one ever encounters in physics are realized as matrix Lie groups and thus as subgroups of GL(n, R) or GL(n, C). This is the group of invertible n × n matrices with coefficients in R or C, respectively. It is a Lie group, since, as an open subset of the vector space of n × n matrices, it forms a manifold. Matrix multiplication is certainly a differentiable map, as is taking the inverse via Cramer's rule. The only condition defining the open subset is that the determinant must not be zero, which implies that dim_K GL(n, K) = n² is the same as that of the vector space M_n(K). However, GL(n, R) is not connected, because we cannot move continuously from a matrix with determinant less than zero to one with determinant larger than zero. It is worth mentioning that gl(n, K) is the vector space of all n × n matrices over the field K, equipped with the standard commutator as Lie bracket. We can describe most other Lie groups as subgroups of GL(n, K) for either K = R or K = C. There are two ways to do so. Firstly, one can give restricting equations for the coefficients of the matrices. Secondly, one can find subgroups of the automorphisms of V ≅ K^n which preserve a given structure on K^n. In the following, we give some examples of this: SL(n, K).
  • Inertia of the Matrix [(p_i + p_j)^r]
    isid/ms/2013/12, October 20, 2013, http://www.isid.ac.in/~statmath/eprints. Inertia of the matrix [(p_i + p_j)^r]. Rajendra Bhatia and Tanvi Jain, Indian Statistical Institute, Delhi Centre, 7, SJSS Marg, New Delhi 110 016, India. Abstract. Let p_1, ..., p_n be positive real numbers. It is well known that for every r < 0 the matrix [(p_i + p_j)^r] is positive definite. Our main theorem gives a count of the number of positive and negative eigenvalues of this matrix when r > 0. Connections with some other matrices that arise in Loewner's theory of operator monotone functions and in the theory of spline interpolation are discussed. 1. Introduction. Let p_1, p_2, ..., p_n be distinct positive real numbers. The n × n matrix C = [1/(p_i + p_j)] is known as the Cauchy matrix. The special case p_i = i gives the Hilbert matrix H = [1/(i + j)]. Both matrices have been studied by several authors in diverse contexts and are much used as test matrices in numerical analysis. The Cauchy matrix is known to be positive definite. It possesses a stronger property: for each r > 0 the entrywise power C^{∘r} = [1/(p_i + p_j)^r] is positive definite. (See [4] for a proof.) The object of this paper is to study positivity properties of the related family of matrices P_r = [(p_i + p_j)^r], r ≥ 0. (1) The inertia of a Hermitian matrix A is the triple In(A) = (π(A), ζ(A), ν(A)), in which π(A), ζ(A) and ν(A) stand for the number of positive, zero, and negative eigenvalues of A, respectively.
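    As a quick illustration of the object studied here, the following sketch (ours; NumPy and the node values p_i are arbitrary choices) tabulates the inertia of P_r = [(p_i + p_j)^r] for a few exponents. Consistent with the abstract, all eigenvalues come out positive for r < 0; the sign counts for r > 0 are exactly what the paper's main theorem describes.

```python
import numpy as np

# Illustrative nodes; any distinct positive reals will do
p = np.array([0.5, 1.0, 2.0, 3.5, 5.0])
S = p[:, None] + p[None, :]                 # the matrix [p_i + p_j]

for r in (-0.5, 0.5, 1.5, 2.5):
    eigs = np.linalg.eigvalsh(S ** r)       # S ** r is the entrywise power
    pos = int(np.sum(eigs > 1e-10))
    neg = int(np.sum(eigs < -1e-10))
    print(f"r = {r:+.1f}: inertia (pi, zeta, nu) = ({pos}, {len(p) - pos - neg}, {neg})")
```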
  • Fast Approximation Algorithms for Cauchy Matrices, Polynomials and Rational Functions
    City University of New York (CUNY), CUNY Academic Works, Computer Science Technical Reports, 2013. TR-2013011: Fast Approximation Algorithms for Cauchy Matrices, Polynomials and Rational Functions. Victor Y. Pan. How does access to this work benefit you? Let us know! More information about this work at: https://academicworks.cuny.edu/gc_cs_tr/386. Discover additional works at: https://academicworks.cuny.edu. This work is made publicly available by the City University of New York (CUNY). Contact: [email protected]. Fast Approximation Algorithms for Cauchy Matrices, Polynomials and Rational Functions. Victor Y. Pan, Department of Mathematics and Computer Science, Lehman College and the Graduate Center of the City University of New York, Bronx, NY 10468 USA. [email protected], home page: http://comet.lehman.cuny.edu/vpan/. Abstract. The papers [MRT05], [CGS07], [XXG12], and [XXCBa] have combined the advanced FMM techniques with transformations of matrix structures (traced back to [P90]) in order to devise numerically stable algorithms that approximate the solutions of Toeplitz, Hankel, Toeplitz-like, and Hankel-like linear systems of equations in nearly linear arithmetic time, versus the classical cubic time and the quadratic time of the previous advanced algorithms. We show that the power of these approximation algorithms can be extended to yield similar results for computations with other matrices that have displacement structure, which includes Vandermonde and Cauchy matrices, as well as to polynomial and rational evaluation and interpolation. The resulting decrease of the running time of the known approximation algorithms is again by an order of magnitude, from quadratic to nearly linear.
  • An Accurate Computational Solution of Totally Positive Cauchy Linear Systems
    AN ACCURATE COMPUTATIONAL SOLUTION OF TOTALLY POSITIVE CAUCHY LINEAR SYSTEMS. Abdoulaye Cissé and Andrew Anda, Computer Science Department, St. Cloud State University, St. Cloud, MN 56301. [email protected], [email protected]. Abstract. The Cauchy linear systems C(x, y)V = b, where C(x, y) = [1/(x_i − y_j)], x = (x_i), y = (y_i), V = (v_i), b = (b_i), can be solved very accurately regardless of condition number using Björck-Pereyra-type methods. We explain this phenomenon by the following facts. First, the Björck-Pereyra methods, written in matrix form, are essentially an accurate and efficient bidiagonal decomposition of the inverse of the matrix, as obtained by Neville elimination with no pivoting. Second, each nontrivial entry of this bidiagonal decomposition is a product of two quotients of (initial) minors. We use this method to derive and implement a new method of accurately and efficiently solving totally positive Cauchy linear systems in time complexity O(n²). Keywords: Accuracy, Björck-Pereyra-type methods, Cauchy linear systems, floating point architectures, generalized Vandermonde linear systems, Neville elimination, totally positive, LDU decomposition. Note: In this paper we assume that the nodes x_i and y_j are pairwise distinct and C is totally positive. 1 Background. Definition 1.1 A Cauchy matrix is defined as C(x, y) = [1/(x_i − y_j)], for i, j = 1, 2, ..., n (1.1), where the nodes x_i and y_j are assumed pairwise distinct. C(x, y) is totally positive (TP) if 0 < y_1 < y_2 < ... < y_{n−1} < y_n < x_1 < x_2 < ... < x_{n−1} < x_n.
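    To get a feel for why entrywise-accurate solvers matter in this setting, here is a small sketch (ours, not the paper's code) that builds a totally positive Cauchy matrix in the node configuration quoted above and reports its condition number together with the forward error of a plain LU-based solve; the node values are arbitrary.

```python
import numpy as np

# Totally positive configuration: 0 < y_1 < ... < y_n < x_1 < ... < x_n
n = 8
y = np.linspace(0.1, 0.9, n)
x = np.linspace(1.0, 2.0, n)
C = 1.0 / (x[:, None] - y[None, :])

print(f"condition number: {np.linalg.cond(C):.2e}")   # grows rapidly with n

b = C @ np.ones(n)                  # right-hand side with known solution: all ones
v = np.linalg.solve(C, b)           # plain LU-based solve, no structure exploited
print(f"max forward error: {np.abs(v - 1.0).max():.2e}")
# A Bjorck-Pereyra-type solver instead computes such solutions to high
# relative accuracy in O(n^2), essentially independent of the conditioning.
```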
  • Displacement Rank of the Drazin Inverse — Huaian Diao, Yimin Wei, Sanzheng Qiao
    Available online at www.sciencedirect.com. Journal of Computational and Applied Mathematics 167 (2004) 147–161, www.elsevier.com/locate/cam. Displacement rank of the Drazin inverse. Huaian Diao (Institute of Mathematics, Fudan University, Shanghai 200433, PR China), Yimin Wei (Department of Mathematics, Fudan University, Shanghai 200433, PR China; supported by the National Natural Science Foundation of China under Grant 19901006), Sanzheng Qiao (Department of Computing and Software, McMaster University, Hamilton, Ont., Canada L8S 4L7; corresponding author, tel.: +1-905-525-9140, fax: +1-905-524-0340; partially supported by the Natural Science and Engineering Research Council of Canada). E-mail addresses: [email protected] (Y. Wei), [email protected] (S. Qiao). Received 20 February 2003; received in revised form 4 August 2003. Abstract. In this paper, we study the displacement rank of the Drazin inverse. Both Sylvester displacement and the generalized displacement are discussed. We present upper bounds for the ranks of the displacements of the Drazin inverse. The general results are applied to the group inverse of a structured matrix such as close-to-Toeplitz, generalized Cauchy, Toeplitz-plus-Hankel, and Bezoutians. © 2003 Elsevier B.V. All rights reserved. doi:10.1016/j.cam.2003.09.050. MSC: 15A09; 65F20. Keywords: Drazin inverse; Displacement; Structured matrix; Jordan canonical form. 1. Introduction. Displacement gives a quantitative way of identifying the structure of a matrix. Consider an n × n Toeplitz matrix

$$T = \begin{bmatrix} t_0 & t_{-1} & \cdots & t_{-n+1} \\ t_1 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & t_{-1} \\ t_{n-1} & \cdots & t_1 & t_0 \end{bmatrix}.$$
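    The Toeplitz structure just displayed can be quantified exactly as the abstract suggests: its displacement has small rank. A sketch (ours; the random matrix is an arbitrary choice) using the lower shift matrix Z:

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
n = 6
T = toeplitz(rng.random(n), rng.random(n))   # a generic n x n Toeplitz matrix
Z = np.diag(np.ones(n - 1), -1)              # lower shift matrix

# Stein-type displacement T - Z T Z^T: only the first row and first column
# survive (T_ij = T_{i-1,j-1} elsewhere), so its rank is at most 2 for any n.
D = T - Z @ T @ Z.T
print(np.linalg.matrix_rank(D))              # 2
```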
  • arXiv:2104.06123v2 [nlin.SI] 19 Jul 2021
    LINEAR INTEGRAL EQUATIONS AND TWO-DIMENSIONAL TODA SYSTEMS. YUE YIN AND WEI FU. Abstract. The direct linearisation framework is presented for the two-dimensional Toda equations associated with the infinite-dimensional Lie algebras A_∞, B_∞ and C_∞, as well as the Kac–Moody algebras A_r^{(1)}, A_{2r}^{(2)}, C_r^{(1)} and D_{r+1}^{(2)} for arbitrary integers r ∈ Z_+, from the aspect of a set of linear integral equations in a certain form. Such a scheme not only provides a unified perspective to understand the underlying integrability structure, but also induces the direct linearising type solution potentially leading to the universal solution space, for each class of the two-dimensional Toda system. As particular applications of this framework to the two-dimensional Toda lattices, we rediscover the Lax pairs and the adjoint Lax pairs and simultaneously construct the generalised Cauchy matrix solutions. 1. Introduction. In the modern theory of integrable systems, the notion of integrability of nonlinear equations often refers to the property that a differential/difference equation is exactly solvable under an initial-boundary condition, which in many cases allows us to construct explicit solutions. Motivated by this, many mathematical methods were invented and developed to search for explicit solutions of nonlinear models, for instance, the inverse scattering transform, the Darboux transform, Hirota's bilinear method, as well as the algebro-geometric method, etc.; see e.g. the monographs [1,11,16,26]. These techniques not only explained the nonlinear phenomena such as solitary waves and periodic waves in nature mathematically, but also motivated the discovery of a huge number of nonlinear equations that possess "nice" algebraic and geometric properties.
  • Fast Approximate Computations with Cauchy Matrices and Polynomials
    Fast Approximate Computations with Cauchy Matrices and Polynomials. Victor Y. Pan, Departments of Mathematics and Computer Science, Lehman College and the Graduate Center of the City University of New York, Bronx, NY 10468 USA. [email protected], http://comet.lehman.cuny.edu/vpan/. Abstract. Multipoint polynomial evaluation and interpolation are fundamental for modern symbolic and numerical computing. The known algorithms solve both problems over any field of constants in nearly linear arithmetic time, but the cost grows to quadratic for numerical solution. We fix this discrepancy: our new numerical algorithms run in nearly linear arithmetic time. At first we restate our goals as the multiplication of an n × n Vandermonde matrix by a vector and the solution of a Vandermonde linear system of n equations. Then we transform the matrix into a Cauchy structured matrix with some special features. By exploiting them, we approximate the matrix by a generalized hierarchically semiseparable matrix, which is a structured matrix of a different class. Finally we accelerate our solution to the original problems by applying the Fast Multipole Method to the latter matrix. Our resulting numerical algorithms run in nearly optimal arithmetic time when they perform the above fundamental computations with polynomials, Vandermonde matrices, transposed Vandermonde matrices, and a large class of Cauchy and Cauchy-like matrices. Some of our techniques may be of independent interest. Key words: Polynomial evaluation; Rational evaluation; Interpolation; Vandermonde matrices; Transformation of matrix structures; Cauchy matrices; Fast Multipole Method; HSS matrices; Matrix compression. AMS Subject Classification: 12Y05, 15A04, 47A65, 65D05, 68Q25. 1 Introduction. 1.1 The background and our progress. (arXiv:1506.02285v3 [math.NA], 17 Apr 2017.)
  • Positivity Properties of the Matrix [(i + j)^{i+j}]
    isid/ms/2015/11, September 21, 2015, http://www.isid.ac.in/~statmath/index.php?module=Preprint. Positivity properties of the matrix [(i + j)^{i+j}]. Rajendra Bhatia and Tanvi Jain, Indian Statistical Institute, Delhi Centre, 7, SJSS Marg, New Delhi 110 016, India. Abstract. Let p_1 < p_2 < ... < p_n be positive real numbers. It is shown that the matrix whose (i, j) entry is (p_i + p_j)^{p_i + p_j} is infinitely divisible, nonsingular and totally positive. 1. Introduction. Matrices whose entries are obtained by assembling natural numbers in special ways often possess interesting properties. The most famous example of such a matrix is the Hilbert matrix H = [1/(i + j − 1)], which has inspired a lot of work in diverse areas. Some others are the min matrix M = [min(i, j)] and the Pascal matrix P = [binom(i+j, i)]. There is a considerable body of literature around each of these matrices, a sample of which can be found in [3], [5] and [7]. In this note we initiate the study of one more matrix of this type. Let A be the n × n matrix with its (i, j) entry equal to (i + j − 1)^{i+j−1}. Thus

$$A = \begin{bmatrix} 1^1 & 2^2 & 3^3 & \cdots & n^n \\ 2^2 & 3^3 & 4^4 & \cdots & (n+1)^{n+1} \\ 3^3 & 4^4 & 5^5 & \cdots & \vdots \\ \vdots & & & & \vdots \\ n^n & \cdots & & & (2n-1)^{2n-1} \end{bmatrix}. \qquad (1)$$

    More generally, let p_1 < p_2 < ... < p_n be positive real numbers, and consider the n × n matrix B = [(p_i + p_j)^{p_i + p_j}]. (2) The special choice p_i = i − 1/2 in (2) gives us the matrix (1).
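    A quick numerical sanity check (ours) of the claims for a small case: build A from (1) and confirm that its eigenvalues are positive, so A is positive definite and in particular nonsingular. Total positivity could likewise be probed through minors.

```python
import numpy as np

n = 5
idx = np.arange(1, n + 1)
E = idx[:, None] + idx[None, :] - 1          # exponents i + j - 1
A = E.astype(float) ** E                     # A_ij = (i + j - 1)^(i + j - 1)

eigs = np.linalg.eigvalsh(A)                 # A is symmetric
print(eigs)
print(bool(eigs.min() > 0))                  # True: positive definite, hence nonsingular
```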
  • Combined Matrices of Matrices in Special Classes
    Combined matrices of matrices in special classes. Miroslav Fiedler, Institute of Computer Science AS CR, May 2010. Let A be a nonsingular (complex, or even more general) matrix. In [3], we called the Hadamard (entrywise) product A ∘ (A^{−1})^T the combined matrix of A, and denoted it C(A). It has good properties: all its row and column sums are equal to one, and if we multiply A by a nonsingular diagonal matrix from the left or from the right, C(A) does not change. It is a long-standing problem to characterize the range of C(A) as A runs through the set of all positive definite n × n matrices. A partial answer describing the diagonal entries of C(A) in this case was given in [2]: Theorem 1. A necessary and sufficient condition for the numbers a_11, a_22, ..., a_nn to be the diagonal entries of a positive definite matrix and α_11, α_22, ..., α_nn the diagonal entries of its inverse is the fulfillment of: 1. a_ii > 0, α_ii > 0 for all i; 2. a_ii α_ii − 1 ≥ 0 for all i; 3. 2 max_i (√(a_ii α_ii) − 1) ≤ Σ_i (√(a_ii α_ii) − 1). In the notation above, this means that a matrix M = [m_ik] can serve as C(A) for A positive definite only if its diagonal entries satisfy m_ii ≥ 1 and 2 max_i (√m_ii − 1) ≤ Σ_i (√m_ii − 1). In fact, there is also another necessary condition, namely that the matrix M − I, I being the identity matrix, is positive semidefinite.
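    The defining identity and the two stated properties are easy to verify numerically. A sketch (ours; the test matrix is an arbitrary well-conditioned nonsingular choice):

```python
import numpy as np

def combined_matrix(A):
    # Hadamard (entrywise) product of A with the transpose of its inverse
    return A * np.linalg.inv(A).T

rng = np.random.default_rng(1)
A = rng.random((4, 4)) + 4.0 * np.eye(4)     # generic nonsingular matrix
C = combined_matrix(A)

print(np.allclose(C.sum(axis=1), 1.0))       # all row sums equal one
print(np.allclose(C.sum(axis=0), 1.0))       # all column sums equal one

# Invariance under diagonal scaling: C(D A) == C(A) for nonsingular diagonal D
D = np.diag([2.0, -1.0, 0.5, 3.0])
print(np.allclose(combined_matrix(D @ A), C))
```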
  • arXiv:1811.08406v1 [math.NA]
    Björck-Pereyra-type methods and total positivity. José-Javier Martínez. Abstract: The approach to solving linear systems with structured matrices by means of the bidiagonal factorization of the inverse of the coefficient matrix is first considered, the starting point being the classical Björck-Pereyra algorithms for Vandermonde systems, published in 1970 and carefully analyzed by Higham in 1987. The work of Higham showed the crucial role of total positivity for obtaining accurate results, which led to the generalization of this approach to totally positive Cauchy, Cauchy-Vandermonde and generalized Vandermonde matrices. Then, the solution of other linear algebra problems (eigenvalue and singular value computation, least squares problems) is addressed, a fundamental tool being the bidiagonal decomposition of the corresponding matrices. This bidiagonal decomposition is related to the theory of Neville elimination, although for achieving high relative accuracy the algorithm of Neville elimination is not used. Numerical experiments showing the good behaviour of these algorithms when compared with algorithms which ignore the matrix structure are also included. Keywords: Björck-Pereyra algorithm · Structured matrix · Totally positive matrix · Bidiagonal decomposition · High relative accuracy. Mathematics Subject Classification (2010): 65F05 · 65F15 · 65F20 · 65F35 · 15A23 · 15B05 · 15B48. (arXiv:1811.08406v1 [math.NA], 20 Nov 2018.) J.-J. Martínez, Departamento de Física y Matemáticas, Universidad de Alcalá, Alcalá de Henares, Madrid 28871, Spain. E-mail: [email protected]. 1 Introduction. The second edition of the Handbook of Linear Algebra [25], edited by Leslie Hogben, is substantially expanded from the first edition of 2007 and, in connection with our work, it contains a new chapter by Michael Stewart entitled Fast Algorithms for Structured Matrix Computations (chapter 62) [41].
  • On the q-Analogue of Cauchy Matrices — Alessandro Neri
    On the q-Analogue of Cauchy Matrices. Alessandro Neri, 17–21 June 2019, VUB. "I am a friend of Finite Geometry!" q-Analogues. The q-analogue dictionary replaces a finite set by a finite-dimensional vector space over F_q: an element corresponds to a 1-dimensional subspace, the empty set ∅ to {0}, cardinality to dimension, intersection to intersection, and union to sum. Examples.

    1. Binomials and q-binomials:
$$\binom{n}{k} = \prod_{i=0}^{k-1} \frac{n-i}{k-i}, \qquad \binom{n}{k}_q = \prod_{i=0}^{k-1} \frac{1-q^{n-i}}{1-q^{k-i}}.$$
    2. (Chu-)Vandermonde and q-Vandermonde identities:
$$\binom{m+n}{k} = \sum_{j=0}^{k} \binom{m}{k-j}\binom{n}{j}, \qquad \binom{m+n}{k}_q = \sum_{j=0}^{k} \binom{m}{k-j}_q \binom{n}{j}_q\, q^{j(m-k+j)}.$$
    3. Polynomials and q-polynomials:
$$a_0 + a_1 x + \cdots + a_k x^k, \qquad a_0 x + a_1 x^{q} + \cdots + a_k x^{q^k}.$$
    4. Gamma and q-Gamma functions:
$$\Gamma(z+1) = z\,\Gamma(z), \qquad \Gamma_q(x+1) = \frac{1-q^x}{1-q}\,\Gamma_q(x).$$

    Vandermonde matrix. Let k ≤ n and

$$V = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ \alpha_1 & \alpha_2 & \cdots & \alpha_n \\ \alpha_1^2 & \alpha_2^2 & \cdots & \alpha_n^2 \\ \vdots & \vdots & & \vdots \\ \alpha_1^{k-1} & \alpha_2^{k-1} & \cdots & \alpha_n^{k-1} \end{bmatrix}.$$

    For n = k, det(V) ≠ 0 if and only if the α_i are all distinct, i.e. |{α_1, ..., α_n}| = n. In particular, all the k × k minors of V are then non-zero.
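    Examples 1 and 2 above are easy to experiment with. The following sketch (ours; the sample values of q, m, n, k are arbitrary) computes Gaussian binomials exactly with rationals and checks the q-Vandermonde identity; for q = 2, the q-binomial also counts subspaces of a vector space over F_2.

```python
from fractions import Fraction

def q_binomial(n, k, q):
    # Gaussian binomial [n choose k]_q = prod_{i=0}^{k-1} (1 - q^(n-i)) / (1 - q^(k-i))
    result = Fraction(1)
    for i in range(k):
        result *= Fraction(1 - q ** (n - i), 1 - q ** (k - i))
    return result

q = 2
print(q_binomial(4, 2, q))           # 35 = number of 2-dim subspaces of F_2^4

# q-Vandermonde: [m+n, k]_q = sum_j [m, k-j]_q [n, j]_q q^(j(m-k+j))
m, n, k = 3, 2, 2
lhs = q_binomial(m + n, k, q)
rhs = sum(q_binomial(m, k - j, q) * q_binomial(n, j, q) * q ** (j * (m - k + j))
          for j in range(k + 1))
print(lhs == rhs)                    # True
```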
  • Indian Institute of Technology, Bombay — MA 106
    INDIAN INSTITUTE OF TECHNOLOGY, BOMBAY. MA 106, Autumn 2012-13, Note 6. APPLICATIONS OF ROW REDUCTION. As we saw at the end of the last note, Gaussian reduction provides a very practical method for solving systems of linear equations. Its superiority can be seen, for example, by comparing it with other methods, e.g. Cramer's rule. This rule has very limited applicability, since it applies only when the number of equations equals the number of unknowns, i.e. when the coefficient matrix A in the system Ax = b (1) is a square matrix (of order n, say). Even when this is so, the rule is silent about what happens if A is not invertible, i.e. when |A| = 0. And even when it does speak, the formula it gives for the solution, viz.

$$x_1 = \frac{\triangle_1}{\triangle}, \quad x_2 = \frac{\triangle_2}{\triangle}, \quad \ldots, \quad x_n = \frac{\triangle_n}{\triangle} \qquad (2)$$

    (where △ = |A|, the determinant of A, and, for i = 1, 2, ..., n, △_i is the determinant of the matrix obtained by replacing the i-th column of A by the column vector b), involves the computation of n + 1 determinants of order n each. This is a horrendous task even for small values of n. With row reduction, we not only get a complete answer applicable regardless of the size of A, we get it very efficiently. The efficiency of an algorithm depends on the number of various elementary operations, such as addition and multiplication, that have to be carried out in its execution. This number, in general, depends not only on the algorithm but also on the particular piece of data to which it is applied.
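    For contrast, here is a direct implementation of Cramer's rule, Eq. (2) (a sketch of ours using NumPy); it makes the cost objection concrete: n + 1 determinant evaluations, each O(n³) by standard means, versus a single Gaussian elimination.

```python
import numpy as np

def cramer_solve(A, b):
    # Cramer's rule, Eq. (2): x_i = det(A_i) / det(A), where A_i is A with
    # its i-th column replaced by b. Requires |A| != 0.
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("coefficient matrix is (numerically) singular")
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer_solve(A, b))            # [0.8 1.4]
print(np.linalg.solve(A, b))         # same answer via Gaussian elimination
```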