53 Adjacency Matrix

Total pages: 16

File type: PDF, size: 1020 KB

Index

A-optimality, 356
absolute error, 395, 404, 434
ACM Transactions on Mathematical Software, 505
adj(·), 53
adjacency matrix, 265, 266, 314
adjoint (see also conjugate transpose), 44
adjoint, classical (see also adjugate), 53
adjugate, 53, 488
adjugate and inverse, 93
affine group, 90, 141
affine space, 32
affine transformation, 176
Aitken's integral, 167
algebraic multiplicity, 113
algorithm, definition, 417
Amdahl's law, 416
angle between matrices, 132
angle between vectors, 26, 174, 287
ANSI (standards), 387, 447, 448
Applied Statistics algorithms, 505
approximation of a matrix, 137, 271, 354, 439
approximation of a vector, 30
arithmetic mean, 24, 26
Arnoldi method, 252
artificial ill-conditioning, 206
ASCII code, 375
association matrix, 261, 265, 287, 295, 296, 299
ATLAS (Automatically Tuned Linear Algebra Software), 456
augmented associativity matrix, 314
augmented connectivity matrix, 314
Automatically Tuned Linear Algebra Software (ATLAS), 456
axpy, 10, 36, 65
axpy elementary operator matrix, 65
backward error analysis, 404, 410
Banachiewicz factorization, 195
base, 380
base point, 379
basis, 14, Exercise 2.3:, 38
batch algorithm, 421
Beowulf (cluster computing), 471
bias, in exponent of floating-point number, 381
big endian, 391
big O (order), 406, 413
big omega (order), 407
bilinear form, 69, 105
bit, 375
bitmap, 376
BLACS (software), 460, 470
BLAS (software), 454, 455, 460, 470, 472
BMvN distribution, 169, 473
Bolzano-Weierstrass theorem for orthogonal matrices, 105
Boolean matrix, 314
Box M statistic, 298
byte, 375
C (programming language), 387, 401, 447–461
C++ (programming language), 388
CALGO (Collected Algorithms of the ACM), 505
cancellation error, 399, 410
canonical form, equivalent, 86
canonical form, similar, 116
canonical singular value factorization, 127
Cartesian geometry, 24, 57
catastrophic cancellation, 397
Cauchy matrix, 313
Cauchy-Schwarz inequality, 16, 75
Cauchy-Schwarz inequality for matrices, 75, 140
Cayley multiplication, 59
Cayley-Hamilton theorem, 109
CDF (Common Data Format), 376
centered matrix, 223, 293
centered vector, 35
chaining of operations, 396
character data, 376
character string, 376
characteristic equation, 108
characteristic polynomial, 108
characteristic value (see also eigenvalue), 106
characteristic vector (see also eigenvector), 106
chasing, 250
Chebyshev norm, 17
Cholesky decomposition, 194, 276, 354
classification, 313
cluster analysis, 313
cluster computing, 471
Cochran's theorem, 283, 325
cofactor, 52, 488
Collected Algorithms of the ACM (CALGO), 505
collinearity, 202, 329, 350
column rank, 76
column space, 41, 69, 81, 82
column-major, 430, 446, 448
column-sum norm, 130
Common Data Format (CDF), 376
companion matrix, 109, 241
complementary projection matrix, 286
complete graph, 262
complete pivoting, 210
completing the Gramian, 139
complex data type, 388, 401, 402, 447
condition (problem or data), 408
condition number, 202, 218, 225, 346, 409, 411, 431, 440
condition number for nonfull rank matrices, 225
condition number for nonsquare matrices, 225
condition number with respect to computing a sample standard deviation, 411
condition number with respect to inversion, 203, 218
conditional inverse, 102
cone, 14, 32
configuration matrix, 299
conjugate gradient method, 213–217
conjugate norm, 71
conjugate transpose, 44, 104
conjugate vectors, 71, 105
connected vertices, 263, 267
connectivity matrix, 265, 266, 314
consistency property, 128
consistency test, 435, 475
consistent system of equations, 82, 206, 211
constrained least squares, equality constraints, 337, Exercise 9.4d:, 366
continuous function, 147
contrast, 333
convergence criterion, 417
convergence of a sequence of matrices, 105, 118, 134
convergence of a sequence of vectors, 20
convergence of powers of a matrix, 135, 305
convergence rate, 417
convex cone, 14, 32, 279
Corr(·, ·), 37
correlation, 37
correlation matrix, 295, Exercise 8.8:, 318, 342
correlation matrix, positive definite approximation, 353
cost matrix, 299
Cov(·, ·), 36
covariance, 36
covariance matrix, 295
cross product of vectors, 33
cross products matrix, 196, 288
cross products, computing sum of, Exercise 10.17c:, 426
Crout method, 187
curse of dimensionality, 419
D-optimality, 356–358, 439
daxpy, 10
decomposable matrix, 303
defective (deficient) matrix, 116, 117
deficient (defective) matrix, 116, 117
deflation, 243–244
degrees of freedom, 291, 292, 331, 350
derivative with respect to a vector or matrix, 145
det(·), 50
determinant, 50–58, 276, 278, 356, 439
determinant as a volume, 57
determinant of a partitioned matrix, 96
determinant of the inverse, 92
determinant of the transpose, 54
diag(·), 45
diag(·) (matrix arguments), 47
diagonal element, 42
diagonal expansion, 57
diagonal factorization, 116, 119
diagonal matrix, 42
diagonalizable matrix, 116–119
diagonalization, 116
diagonally dominant matrix, 42, 46, 78, 277
differential, 149
differentiation of vectors and matrices, 145
digraph, 266
digraph of a matrix, 266
dim(·), 12
dimension of vector space, 11
dimension reduction, 287, 345
direct method for solving linear systems, 201
direct product, 73
direct sum, 13, 48
direct sum of matrices, 47
directed dissimilarity matrix, 299
direction cosines, 27, 178
discrete Legendre polynomials, 309
discretization error, 408, 418
dissimilarity matrix, 299
distance matrix, 299
distributed linear algebra machine, 470
distribution vector, 307
divide and conquer, 415
dominant eigenvalue, 111
Doolittle method, 187
dot product of matrices, 74
dot product of vectors, 15, 69
double precision, 385, 391
doubly stochastic matrix, 306
Drazin inverse, 286
dual cone, 32
E(·), 168
E-optimality, 356
echelon form, 86
edge of a graph, 262
effective degrees of freedom, 292, 350
eigenpair, 106
eigenspace, 113
eigenvalue, 105–128, 131, 241–256
eigenvalues of a graph, 314
eigenvalues of a polynomial, Exercise 3.17:, 141
eigenvector, 105–128, 241–256
eigenvector, left, 106, 123
eigenvectors, linear independence of, 112
EISPACK, 457
elementary operation, 61
elementary operator matrix, 62, 78, 186, 207
elliptic metric, 71
elliptic norm, 71
endian, 391
equivalence of norms, 19, 133
equivalence relation, 361
equivalent canonical factorization, 87
equivalent canonical form, 86, 87
equivalent matrices, 86
error bound, 406
error of approximation, 407
error, cancellation, 399, 410
error, discretization, 408
error, measures of, 219, 395, 404–406, 434
error, rounding, 399, 404, 405
error, rounding, models of, 405, Exercise 10.9:, 424
error, truncation, 408
error-free computations, 399
errors-in-variables, 329
essentially disjoint vector spaces, 12, 48
estimable combinations of parameters, 332
Euclidean distance, 22, 299
Euclidean distance matrix, 299
Euclidean matrix norm (see also Frobenius norm), 131
Euclidean vector norm, 17
Euler's constant, Exercise 10.2:, 423
Euler's integral, 484
Euler's rotation theorem, 177
exact computations, 399
exception, in computer operations, 394, 398
exponent, 380
exponential order, 413
extended precision, 385
extrapolation, 418
factorization of a matrix, 85, 87, 114, 116, 173–174, 185–198, 206, 209
fan-in algorithm, 397, 416
fast Givens rotation, 185, 433
fill-in, 197, 434
Fisher information, 163
fixed-point representation, 379
flat, 32
floating-point representation, 379
FLOP, or flop, 415
FLOPS, or flops, 415
Fortran, 388, 389, 415, 447–461
Fourier coefficient, 29, 30, 76, 122, 128, 133
Fourier expansion, 25, 29, 75, 122, 128, 133
Frobenius norm, 131–134, 138, 248, 271, 299
full precision, 390
full rank, 77, 78, 80, 87, 88
full rank factorization, 85
full rank partitioning, 80, 95
g1 inverse, 102
g2 inverse, 102
g4 inverse (see also Moore-Penrose inverse), 102
gamma function, 169, 484
GAMS (Guide to Available Mathematical Software), 445
Gauss (software), 461
Gauss-Seidel method, 212
Gaussian elimination, 66, 186, 207, 251
general linear group, 90, 105
generalized eigenvalue, 126, 252
generalized inverse, 97, 100–103, 189, 289
generalized least squares, 337
generalized least squares with equality constraints, Exercise 9.4d:, 366
generalized variance, 296
generating set, 14
generating set of a cone, 15
generation of random numbers, 358
geometric multiplicity, 113
geometry, 24, 57, 175, 178
Givens transformation (rotation), 182–185, 192, 251
GMRES, 216
GNU Scientific Library (GSL), 457
graceful underflow, 383
gradient of a function, 151, 152
gradual underflow, 383, 398
Gram-Schmidt transformation, 27, 29, 192, 432
Gramian matrix, 90, 92, 196, 224, 288–290
graph of a matrix, 265
graph theory, 8, 262, 313
greedy algorithm, 416
group, 90, 105
GSL (GNU Scientific Library), 457
guard digit, 396
Haar distribution, 169, Exercise 4.7:, 171, Exercise 8.8:, 318, 473
Haar invariant measure, 169
Hadamard matrix, 310
Hadamard multiplication, 72
half precision, 390
Hankel matrix, 312
Hankel norm, 312
hat matrix, 290, 331
HDF (Hierarchical Data Format), 376
Helmert matrix, 308, 333
Hemes formula, 221, 339
Hermite form, 87
Hermitian matrix, 42, 45
Hessenberg matrix, 44, 250
Hessian of a function, 153
hidden bit, 381
Hierarchical Data Format (HDF), 376
high-performance computing, 412
Hilbert matrix, 472
Hilbert-Schmidt norm (see also Frobenius norm), 132
Hölder norm, 17
Hölder's inequality, 38
intersection graph, 267
interval arithmetic, 402, 403
invariance property, 175
invariant vector (eigenvector), 106
inverse of a matrix, 83
inverse of a partitioned matrix, 95
inverse of products or sums of matrices, 93
inverse of the transpose, 83
inverse, determinant of, 92
IRLS (iteratively reweighted least squares), 232
irreducible Markov chain, 361
irreducible
Recommended publications
  • Realizing Euclidean Distance Matrices by Sphere Intersection
Realizing Euclidean distance matrices by sphere intersection
Jorge Alencar (Instituto Federal de Educação, Ciência e Tecnologia do Sul de Minas Gerais, Inconfidentes, MG, Brazil), Carlile Lavor (University of Campinas, IMECC-UNICAMP, 13081-970, Campinas-SP, Brazil), Leo Liberti (CNRS LIX, École Polytechnique, 91128 Palaiseau, France)

Abstract: This paper presents the theoretical properties of an algorithm to find a realization of a (full) n × n Euclidean distance matrix in the smallest possible embedding dimension. The algorithm runs in time linear in n and quadratic in the minimum embedding dimension, which is an improvement over other algorithms. Keywords: distance geometry, sphere intersection, Euclidean distance matrix, embedding dimension. 2010 MSC: 00-01, 99-00.

1. Introduction. Euclidean distance matrices and sphere intersection have strong mathematical importance [1, 2, 3, 4, 5, 6], in addition to many applications, such as navigation problems, molecular and nanostructure conformation, network localization, robotics, as well as other problems of distance geometry [7, 8, 9]. Before defining precisely the problem of interest, we need the formal definition of Euclidean distance matrices. Let D be an n × n symmetric hollow (i.e., with zero diagonal) matrix with non-negative elements. We say that D is a (squared) Euclidean Distance Matrix (EDM) if there are points x_1, x_2, ..., x_n ∈ R^K (for a positive integer K) such that

    D(i, j) = D_ij = ||x_i − x_j||², i, j ∈ {1, ..., n},

where ||·|| denotes the Euclidean norm. The smallest K for which such a set of points exists is called the embedding dimension of D, denoted by dim(D).
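To make the definitions concrete, here is a minimal NumPy sketch (not the paper's sphere-intersection algorithm; the function name embed_edm is ours) that realizes a squared EDM via the classical Gram-matrix factorization and reads off the embedding dimension as a numerical rank:

```python
import numpy as np

def embed_edm(D, tol=1e-9):
    """Recover points realizing a squared EDM D via classical MDS.

    Standard Gram-matrix factorization, shown only to illustrate the
    definitions; the paper's sphere-intersection algorithm reaches the
    same embedding dimension with a cheaper incremental construction.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering projector
    G = -0.5 * J @ D @ J                     # Gram matrix of centered points
    w, V = np.linalg.eigh(G)
    pos = w > tol * w.max()                  # numerical rank = dim(D)
    X = V[:, pos] * np.sqrt(w[pos])          # rows are the points x_i
    return X

# Points on a line embed in dimension 1: D_ij = (i - j)^2.
idx = np.arange(5.0)
D = (idx[:, None] - idx[None, :]) ** 2
X = embed_edm(D)
print(X.shape[1])                            # -> 1, the embedding dimension
D_rec = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
print(np.allclose(D_rec, D))                 # -> True
```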
• Linearizing the Word Problem in (some) Free Fields (arXiv:1701.03378v1 [math.RA], 12 Jan 2017)
Linearizing the Word Problem in (some) Free Fields
Konrad Schrempf (Department of Discrete Mathematics (Noncommutative Structures), Graz University of Technology), January 13, 2017

Abstract: We describe a solution of the word problem in free fields (coming from non-commutative polynomials over a commutative field) using elementary linear algebra, provided that the elements are given by minimal linear representations. It relies on the normal form of Cohn and Reutenauer and can be used more generally to (positively) test rational identities. Moreover, we provide a construction of minimal linear representations for the inverse of non-zero elements. Keywords: word problem, minimal linear representation, linearization, realization, admissible linear system, rational series. AMS Classification: 16K40, 16S10, 03B25, 15A22.

Introduction. Free (skew) fields arise as universal objects when it comes to embedding the ring of non-commutative polynomials, that is, polynomials in (a finite number of) non-commuting variables, into a skew field [Coh85, Chapter 7]. The notion of "free fields" goes back to Amitsur [Ami66]. A brief introduction can be found in [Coh03, Section 9.3]; for details we refer to [Coh95, Section 6.4]. In the present paper we restrict the setting to commutative ground fields, as a special case. See also [Rob84]. In [CR94], Cohn and Reutenauer introduced a normal form for elements in free fields in order to extend results from the theory of formal languages. In particular they characterize minimality of linear representations in terms of linear independence of
• Seminar VII for the Course GROUP THEORY IN PHYSICS, Michael Flohr
Seminar VII for the course GROUP THEORY IN PHYSICS, Michael Flohr. The classical Lie groups, 25 January 2005.

MATRIX LIE GROUPS. Most Lie groups one ever encounters in physics are realized as matrix Lie groups and thus as subgroups of GL(n, R) or GL(n, C). This is the group of invertible n × n matrices with coefficients in R or C, respectively. It is a Lie group since, as an open subset of the vector space of n × n matrices, it forms a manifold. Matrix multiplication is certainly a differentiable map, as is taking the inverse via Cramer's rule. The only condition defining the open subset is that the determinant must not be zero, which implies that dim_K GL(n, K) = n² is the same as that of the vector space M_n(K). However, GL(n, R) is not connected, because we cannot move continuously from a matrix with determinant less than zero to one with determinant larger than zero. It is worth mentioning that gl(n, K) is the vector space of all n × n matrices over the field K, equipped with the standard commutator as Lie bracket. We can describe most other Lie groups as subgroups of GL(n, K) for either K = R or K = C. There are two ways to do so. Firstly, one can give restricting equations for the coefficients of the matrices. Secondly, one can find subgroups of the automorphisms of V ≅ K^n which conserve a given structure on K^n. In the following, we give some examples of this: SL(n, K).
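A small numerical illustration of these statements, assuming NumPy (not part of the seminar notes): determinant multiplicativity gives closure of GL(n, R), the sign of the determinant separates its two connected components, and a rescaling lands an element in SL(n, R):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample elements of GL(2, R): random matrices are almost surely invertible.
A, B = rng.standard_normal((2, 2, 2))

# Closure under products: det(AB) = det(A)det(B) is nonzero whenever both
# factors are invertible, so the product stays in GL(2, R).
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))   # -> True

# The two connected components of GL(2, R) are separated by the sign of det:
# a continuous path between them would have to pass through det = 0.
print(np.sign(np.linalg.det(A)), np.sign(np.linalg.det(B)))

# An element of SL(2, R): rescale so the determinant is exactly +1.
S = A / np.sqrt(abs(np.linalg.det(A)))
S = S if np.linalg.det(S) > 0 else S @ np.diag([1.0, -1.0])
print(np.isclose(np.linalg.det(S), 1.0))                 # -> True
```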
• Inertia of the Matrix [(p_i + p_j)^r]
isid/ms/2013/12, October 20, 2013, http://www.isid.ac.in/estatmath/eprints

Inertia of the matrix [(p_i + p_j)^r]
Rajendra Bhatia and Tanvi Jain, Indian Statistical Institute, Delhi Centre, 7, SJSS Marg, New Delhi 110 016, India

Abstract: Let p_1, ..., p_n be positive real numbers. It is well known that for every r < 0 the matrix [(p_i + p_j)^r] is positive definite. Our main theorem gives a count of the number of positive and negative eigenvalues of this matrix when r > 0. Connections with some other matrices that arise in Loewner's theory of operator monotone functions and in the theory of spline interpolation are discussed.

1. Introduction. Let p_1, p_2, ..., p_n be distinct positive real numbers. The n × n matrix C = [1/(p_i + p_j)] is known as the Cauchy matrix. The special case p_i = i gives the Hilbert matrix H = [1/(i + j)]. Both matrices have been studied by several authors in diverse contexts and are much used as test matrices in numerical analysis. The Cauchy matrix is known to be positive definite. It possesses a stronger property: for each r > 0 the entrywise power C^{∘r} = [1/(p_i + p_j)^r] is positive definite. (See [4] for a proof.) The object of this paper is to study positivity properties of the related family of matrices

    P_r = [(p_i + p_j)^r], r ≥ 0.    (1)

The inertia of a Hermitian matrix A is the triple In(A) = (π(A), ζ(A), ν(A)), in which π(A), ζ(A), and ν(A) stand for the number of positive, zero, and negative eigenvalues of A, respectively.
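A quick numerical look at the dichotomy the abstract describes, assuming NumPy (this merely tabulates the inertia for sample nodes; the paper's contribution is the exact count for r > 0):

```python
import numpy as np

def inertia(A, tol=1e-10):
    """Return (positive, zero, negative) eigenvalue counts of symmetric A."""
    w = np.linalg.eigvalsh(A)
    scale = max(abs(w).max(), 1.0)
    return (int((w > tol * scale).sum()),
            int((abs(w) <= tol * scale).sum()),
            int((w < -tol * scale).sum()))

p = np.array([0.5, 1.0, 2.0, 3.0, 7.0])
S = p[:, None] + p[None, :]               # the matrix [p_i + p_j]

# r < 0: entrywise powers of the Cauchy kernel are positive definite.
print(inertia(S ** -1.0))                 # -> (5, 0, 0)

# r > 0: P_r is indefinite; the paper counts the signs exactly.
for r in (0.5, 1.0, 2.5):
    print(r, inertia(S ** r))
```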
  • Left Eigenvector of a Stochastic Matrix
Advances in Pure Mathematics, 2011, 1, 105-117. doi:10.4236/apm.2011.14023. Published online July 2011 (http://www.SciRP.org/journal/apm).

Left Eigenvector of a Stochastic Matrix
Sylvain Lavallée, Département de mathématiques, Université du Québec à Montréal, Montréal, Canada. E-mail: [email protected]. Received January 7, 2011; revised June 7, 2011; accepted June 15, 2011.

Abstract: We determine the left eigenvector of a stochastic matrix M associated to the eigenvalue 1 in the commutative and the noncommutative cases. In the commutative case, we see that the eigenvector associated to the eigenvalue 0 is (N_1, ..., N_n), where N_i is the ith principal minor of N = M − I_n, where I_n is the identity matrix of dimension n. In the noncommutative case, this eigenvector is (P_1^{-1}, ..., P_n^{-1}), where P_i is the sum, in the a_ij, of the corresponding labels of nonempty paths starting from i and not passing through i in the complete directed graph associated to M. Keywords: generic stochastic noncommutative matrix, commutative matrix, left eigenvector associated to the eigenvalue 1, skew field, automata.

1. Introduction. It is well known that 1 is one of the eigenvalues of a stochastic matrix (i.e., the sum of the elements of each row is equal to 1) and its associated right eigenvector is the vector (1, 1, ..., 1)^T. [...] stochastic free field and that the vector (P_1^{-1}, ..., P_n^{-1}) is fixed by our matrix; moreover, the sum of the P_i^{-1} is equal to 1, hence they form a kind of noncommutative limiting probability. These results have been proved in [1] but the proof
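The commutative-case statement is easy to verify numerically. A sketch assuming NumPy, with a helper principal_minor that is our name, not the paper's: the vector of principal minors of N = M − I_n is a left fixed vector of M and, normalized, the stationary distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random 4x4 row-stochastic matrix M.
M = rng.random((4, 4))
M /= M.sum(axis=1, keepdims=True)

N = M - np.eye(4)

def principal_minor(A, i):
    """i-th principal minor: determinant after deleting row i and column i."""
    keep = [k for k in range(A.shape[0]) if k != i]
    return np.linalg.det(A[np.ix_(keep, keep)])

v = np.array([principal_minor(N, i) for i in range(4)])

# v is a left eigenvector of M for eigenvalue 1, equivalently v N = 0.
print(np.allclose(v @ N, 0))        # -> True
print(np.allclose(v @ M, v))        # -> True

# Normalized, it is the stationary distribution of the chain.
print(v / v.sum())
```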
  • Fast Approximation Algorithms for Cauchy Matrices, Polynomials and Rational Functions
City University of New York (CUNY) Academic Works, Computer Science Technical Reports, 2013. TR-2013011. More information: https://academicworks.cuny.edu/gc_cs_tr/386

Fast Approximation Algorithms for Cauchy Matrices, Polynomials and Rational Functions
Victor Y. Pan, Department of Mathematics and Computer Science, Lehman College and the Graduate Center of the City University of New York, Bronx, NY 10468 USA. [email protected], home page: http://comet.lehman.cuny.edu/vpan/

Abstract: The papers [MRT05], [CGS07], [XXG12], and [XXCBa] have combined the advanced FMM techniques with transformations of matrix structures (traced back to [P90]) in order to devise numerically stable algorithms that approximate the solutions of Toeplitz, Hankel, Toeplitz-like, and Hankel-like linear systems of equations in nearly linear arithmetic time, versus the classical cubic time and the quadratic time of the previous advanced algorithms. We show that the power of these approximation algorithms can be extended to yield similar results for computations with other matrices that have displacement structure, which includes Vandermonde and Cauchy matrices, as well as to polynomial and rational evaluation and interpolation. The resulting decrease of the running time of the known approximation algorithms is again by an order of magnitude, from quadratic to nearly linear.
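The nearly linear running time of such FMM-based methods rests on a simple fact: off-diagonal blocks of a Cauchy matrix whose node sets are well separated have low numerical rank. A short NumPy check of that property (illustrative only; this is not Pan's algorithm):

```python
import numpy as np

# Cauchy matrix C_ij = 1 / (x_i - y_j) with the x- and y-nodes
# lying in two well-separated clusters.
x = np.linspace(0.0, 1.0, 200)      # sources in [0, 1]
y = np.linspace(3.0, 4.0, 200)      # targets in [3, 4]
C = 1.0 / (x[:, None] - y[None, :])

# Numerical rank at double precision: tiny compared to n = 200,
# which is exactly what FMM/HSS-type compression exploits.
s = np.linalg.svd(C, compute_uv=False)
print((s > 1e-12 * s[0]).sum())     # -> ~10, far below 200
```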
  • An Accurate Computational Solution of Totally Positive Cauchy Linear Systems
AN ACCURATE COMPUTATIONAL SOLUTION OF TOTALLY POSITIVE CAUCHY LINEAR SYSTEMS
Abdoulaye Cissé and Andrew Anda, Computer Science Department, St. Cloud State University, St. Cloud, MN 56301. [email protected], [email protected]

Abstract: The Cauchy linear systems C(x, y)V = b, where C(x, y) = [1/(x_i − y_j)], x = (x_i), y = (y_i), V = (v_i), b = (b_i), can be solved very accurately regardless of condition number using Björck-Pereyra-type methods. We explain this phenomenon by the following facts. First, the Björck-Pereyra methods, written in matrix form, are essentially an accurate and efficient bidiagonal decomposition of the inverse of the matrix, as obtained by Neville elimination with no pivoting. Second, each nontrivial entry of this bidiagonal decomposition is a product of two quotients of (initial) minors. We use this method to derive and implement a new method of accurately and efficiently solving totally positive Cauchy linear systems in time complexity O(n²). Keywords: accuracy, Björck-Pereyra-type methods, Cauchy linear systems, floating point architectures, generalized Vandermonde linear systems, Neville elimination, totally positive, LDU decomposition. Note: In this paper we assume that the nodes x_i and y_j are pairwise distinct and C is totally positive.

1. Background. Definition 1.1. A Cauchy matrix is defined as

    C(x, y) = [1/(x_i − y_j)], for i, j = 1, 2, ..., n,    (1.1)

where the nodes x_i and y_j are assumed pairwise distinct. C(x, y) is totally positive (TP) if 0 < y_1 < y_2 < ⋯ < y_{n−1} < y_n < x_1 < x_2 < ⋯ < x_{n−1} < x_n.
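The total-positivity condition can be checked directly for small n, assuming NumPy. This brute-force test of the ordering condition in Definition 1.1 enumerates every minor and is exponential in n; it only illustrates the definition, not the paper's O(n²) method:

```python
import numpy as np
from itertools import combinations

# Nodes ordered as 0 < y_1 < ... < y_n < x_1 < ... < x_n,
# which makes C(x, y) = [1/(x_i - y_j)] totally positive.
y = np.array([0.1, 0.4, 0.9, 1.5])
x = np.array([2.0, 3.0, 4.5, 6.0])
C = 1.0 / (x[:, None] - y[None, :])

# TP definition: every minor, of every order, is strictly positive.
n = C.shape[0]
ok = all(np.linalg.det(C[np.ix_(r, c)]) > 0
         for k in range(1, n + 1)
         for r in combinations(range(n), k)
         for c in combinations(range(n), k))
print(ok)   # -> True
```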
• Displacement Rank of the Drazin Inverse (Huaian Diao, Yimin Wei, Sanzheng Qiao)
Available online at www.sciencedirect.com. Journal of Computational and Applied Mathematics 167 (2004) 147–161. www.elsevier.com/locate/cam

Displacement rank of the Drazin inverse
Huaian Diao (Institute of Mathematics, Fudan University, Shanghai 200433, PR China), Yimin Wei (Department of Mathematics, Fudan University, Shanghai 200433, PR China), Sanzheng Qiao (Department of Computing and Software, McMaster University, Hamilton, Ont., Canada L8S 4L7). Received 20 February 2003; received in revised form 4 August 2003.

Abstract: In this paper, we study the displacement rank of the Drazin inverse. Both Sylvester displacement and the generalized displacement are discussed. We present upper bounds for the ranks of the displacements of the Drazin inverse. The general results are applied to the group inverse of a structured matrix such as close-to-Toeplitz, generalized Cauchy, Toeplitz-plus-Hankel, and Bezoutians. MSC: 15A09; 65F20. Keywords: Drazin inverse; displacement; structured matrix; Jordan canonical form.

1. Introduction. Displacement gives a quantitative way of identifying the structure of a matrix. Consider an n × n Toeplitz matrix

    T = \begin{pmatrix}
        t_0     & t_{-1} & \cdots & t_{-n+1} \\
        t_1     & t_0    & \ddots & \vdots   \\
        \vdots  & \ddots & \ddots & t_{-1}   \\
        t_{n-1} & \cdots & t_1    & t_0
        \end{pmatrix}.
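To see what displacement buys for a Toeplitz matrix: the Stein-type displacement T − ZTZᵀ, with Z the down-shift matrix, vanishes outside the first row and column, so its rank is at most 2. A minimal NumPy sketch (ours, not the paper's; the paper bounds the analogous ranks for Drazin and group inverses):

```python
import numpy as np

n = 6
t = np.arange(-(n - 1), n, dtype=float) ** 2 + 1.0    # diagonals t_{-(n-1)}, ..., t_{n-1}
idx = np.arange(n)
T = t[idx[:, None] - idx[None, :] + (n - 1)]          # Toeplitz: T_ij = t_{i-j}

Z = np.eye(n, k=-1)                 # down-shift matrix
D = T - Z @ T @ Z.T                 # Stein-type displacement of T

# D is zero off the first row and column, so two generator vectors
# capture the whole Toeplitz structure.
print(np.linalg.matrix_rank(D))     # -> 2
```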
• Linear Integral Equations and Two-Dimensional Toda Systems (arXiv:2104.06123v2 [nlin.SI], 19 Jul 2021)
LINEAR INTEGRAL EQUATIONS AND TWO-DIMENSIONAL TODA SYSTEMS
YUE YIN AND WEI FU

Abstract: The direct linearisation framework is presented for the two-dimensional Toda equations associated with the infinite-dimensional Lie algebras A_∞, B_∞ and C_∞, as well as the Kac–Moody algebras A_r^{(1)}, A_{2r}^{(2)}, C_r^{(1)} and D_{r+1}^{(2)} for arbitrary integers r ∈ Z_+, from the aspect of a set of linear integral equations in a certain form. Such a scheme not only provides a unified perspective to understand the underlying integrability structure, but also induces the direct linearising type solution potentially leading to the universal solution space, for each class of the two-dimensional Toda system. As particular applications of this framework to the two-dimensional Toda lattices, we rediscover the Lax pairs and the adjoint Lax pairs and simultaneously construct the generalised Cauchy matrix solutions.

1. Introduction. In the modern theory of integrable systems, the notion of integrability of nonlinear equations often refers to the property that a differential/difference equation is exactly solvable under an initial-boundary condition, which in many cases allows us to construct explicit solutions. Motivated by this, many mathematical methods were invented and developed to search for explicit solutions of nonlinear models, for instance, the inverse scattering transform, the Darboux transform, Hirota's bilinear method, as well as the algebro-geometric method, etc.; see e.g. the monographs [1, 11, 16, 26]. These techniques not only explained nonlinear phenomena such as solitary waves and periodic waves in nature mathematically, but also motivated the discovery of a huge number of nonlinear equations that possess "nice" algebraic and geometric properties.
• Fast Approximate Computations with Cauchy Matrices and Polynomials
Fast Approximate Computations with Cauchy Matrices and Polynomials (arXiv:1506.02285v3 [math.NA], 17 Apr 2017)
Victor Y. Pan, Departments of Mathematics and Computer Science, Lehman College and the Graduate Center of the City University of New York, Bronx, NY 10468 USA. [email protected], http://comet.lehman.cuny.edu/vpan/

Abstract: Multipoint polynomial evaluation and interpolation are fundamental for modern symbolic and numerical computing. The known algorithms solve both problems over any field of constants in nearly linear arithmetic time, but the cost grows to quadratic for numerical solution. We fix this discrepancy: our new numerical algorithms run in nearly linear arithmetic time. At first we restate our goals as the multiplication of an n × n Vandermonde matrix by a vector and the solution of a Vandermonde linear system of n equations. Then we transform the matrix into a Cauchy structured matrix with some special features. By exploiting them, we approximate the matrix by a generalized hierarchically semiseparable matrix, which is a structured matrix of a different class. Finally we accelerate our solution to the original problems by applying the Fast Multipole Method to the latter matrix. Our resulting numerical algorithms run in nearly optimal arithmetic time when they perform the above fundamental computations with polynomials, Vandermonde matrices, transposed Vandermonde matrices, and a large class of Cauchy and Cauchy-like matrices. Some of our techniques may be of independent interest. Key words: polynomial evaluation; rational evaluation; interpolation; Vandermonde matrices; transformation of matrix structures; Cauchy matrices; Fast Multipole Method; HSS matrices; matrix compression. AMS Subject Classification: 12Y05, 15A04, 47A65, 65D05, 68Q25.

1. Introduction. 1.1. The background and our progress. Multipoint polynomial evaluation and interpolation are fundamental for modern symbolic and numerical computing.
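The abstract's restatement is easy to see in code: multipoint evaluation is a Vandermonde matrix-vector product and interpolation is a Vandermonde solve. A NumPy sketch using dense O(n²) and O(n³) operations, whereas the paper's algorithms run in nearly linear time:

```python
import numpy as np

# Multipoint evaluation of p(t) = c_0 + c_1 t + ... + c_{n-1} t^{n-1},
# restated as a Vandermonde matrix times the coefficient vector.
n = 5
c = np.array([2.0, -1.0, 0.5, 3.0, 1.0])         # coefficients, low degree first
x = np.linspace(-1.0, 1.0, n)                    # evaluation points

V = np.vander(x, increasing=True)                # V_ij = x_i ** j
vals = V @ c                                     # p(x_0), ..., p(x_{n-1})
print(np.allclose(vals, np.polyval(c[::-1], x))) # -> True

# Interpolation is the inverse problem: solve the Vandermonde system V c = vals.
print(np.allclose(np.linalg.solve(V, vals), c))  # -> True
```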
• On Euclidean Distance Matrices of Graphs
Electronic Journal of Linear Algebra, ISSN 1081-3810. A publication of the International Linear Algebra Society. Volume 26, pp. 574-589, August 2013.

ON EUCLIDEAN DISTANCE MATRICES OF GRAPHS
GAŠPER JAKLIČ AND JOLANDA MODIC

Abstract: In this paper, a relation between graph distance matrices and Euclidean distance matrices (EDM) is considered. It is proven that distance matrices of paths and cycles are EDMs. The proofs are constructive and the generating points of the studied EDMs are given in a closed form. A generalization to weighted graphs (networks) is tackled. Key words: graph, Euclidean distance matrix, distance, eigenvalue. AMS subject classifications: 15A18, 05C50, 05C12.

1. Introduction. A matrix D ∈ R^{n×n} is a Euclidean distance matrix (EDM) if there exist points x_i ∈ R^r, i = 1, 2, ..., n, such that d_ij = ||x_i − x_j||². The minimal possible r is called an embedding dimension (see, e.g., [3]). Euclidean distance matrices were introduced by Menger in 1928; later they were studied by Schoenberg [13] and other authors. They have many interesting properties, and are used in various applications in linear algebra, graph theory, and bioinformatics. A natural problem is to study configurations of points x_i, where only distances between them are known.

Definition 1.1. A matrix D = (d_ij) ∈ R^{n×n} is hollow if d_ii = 0 for all i = 1, 2, ..., n.

There are various characterizations of EDMs.

Lemma 1.2. [8, Lemma 5.3] Let a symmetric hollow matrix D ∈ R^{n×n} have only one positive eigenvalue λ and the corresponding eigenvector e := [1, 1, ..., 1]^T ∈ R^n.
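A numerical check of the paper's claim for paths, assuming NumPy: the distance matrix of P_n is hollow and has exactly one positive eigenvalue, as in the Lemma 1.2-type characterization, and it passes the classical Schoenberg test (−JDJ positive semidefinite). The paper's proofs are constructive rather than numerical:

```python
import numpy as np

n = 7
idx = np.arange(n)
D = np.abs(idx[:, None] - idx[None, :]).astype(float)  # distance matrix of the path P_n

# Hollow and symmetric by construction; exactly one positive eigenvalue.
w = np.linalg.eigvalsh(D)
print((w > 1e-10).sum())            # -> 1

# Schoenberg criterion: D is an EDM iff -J D J is positive semidefinite,
# where J = I - ee^T / n is the centering projector.
J = np.eye(n) - np.ones((n, n)) / n
g = np.linalg.eigvalsh(-0.5 * J @ D @ J)
print((g > -1e-10).all())           # -> True: the path's distance matrix is an EDM
```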
• Sparsest Network Support Estimation: A Submodular Approach
SPARSEST NETWORK SUPPORT ESTIMATION: A SUBMODULAR APPROACH
Mario Coutino, Sundeep Prabhakar Chepuri, Geert Leus, Delft University of Technology

ABSTRACT: In this work, we address the problem of identifying the underlying network structure of data. Different from other approaches, which are mainly based on convex relaxations of an integer problem, here we take a distinct route relying on algebraic properties of a matrix representation of the network. By describing what we call possible ambiguities in the network topology, we proceed to employ submodular analysis techniques for retrieving the network support, i.e., the network edges. To achieve this we only make use of the network modes derived from the data. Numerical examples showcase the effectiveness of the proposed algorithm in recovering the support of sparse networks. Index terms: graph signal processing, graph learning, sparse graphs, network deconvolution, network topology inference.

[...] recovery of the adjacency and normalized Laplacian matrix: in the general case, due to the convex formulation of the recovery problem, solutions that only retrieve the support approximately are obtained, i.e., thresholding operations are required. Taking this issue into account, and following arguments closely related to the ones used for network topology estimation through spectral templates [10], the main aim of this work is twofold: (i) to provide a proper characterization of a particular set of matrices that are of high importance within graph signal processing, namely fixed-support matrices with the same eigenbasis, and (ii) to provide a pure combinatorial algorithm, based on the submodular machinery that has been proven useful in subset selection problems within signal processing [18–22], as an alternative to traditional convex relaxations.