The Inverse Eigenvalue Problem for Symmetric Doubly Stochastic Matrices


Linear Algebra and its Applications 379 (2004) 77–83
www.elsevier.com/locate/laa

Suk-Geun Hwang a,∗,1, Sung-Soo Pyo b,2

a Department of Mathematics Education, Kyungpook National University, Taegu 702-701, Republic of Korea
b School of Electrical Engineering and Computer Science, Kyungpook National University, Taegu 702-701, Republic of Korea

Received 22 July 2002; accepted 24 December 2002
Submitted by F. Uhlig

Abstract

For a positive integer n and for a real number s, let Ω_n^s denote the set of all n × n real matrices whose rows and columns have sum s. In this note, by an explicit constructive method, we prove the following.

(i) Given any real n-tuple Λ = (λ1, λ2, ..., λn)^T, there exists a symmetric matrix in Ω_n^{λ1} whose spectrum is Λ.
(ii) For a real n-tuple Λ = (1, λ2, ..., λn)^T with 1 ≥ λ2 ≥ ··· ≥ λn, if

    1/n + λ2/(n(n−1)) + λ3/((n−1)(n−2)) + ··· + λn/(2·1) ≥ 0,

then there exists a symmetric doubly stochastic matrix whose spectrum is Λ.

The second assertion enables us to show that for any λ2, ..., λn ∈ [−1/(n−1), 1], there is a symmetric doubly stochastic matrix whose spectrum is (1, λ2, ..., λn)^T, and also that any number β ∈ (−1, 1] is an eigenvalue of a symmetric positive doubly stochastic matrix of any order.

© 2003 Elsevier Inc. All rights reserved.

∗ Corresponding author. Tel.: +82-53-950-5887; fax: +82-53-950-6811.
E-mail addresses: [email protected] (S.-G. Hwang), [email protected] (S.-S. Pyo).
1 Supported by Com2MaC-KOSEF.
2 This work was done while the second author held a BK21 Post-Doc. position at Kyungpook National University.

doi:10.1016/S0024-3795(03)00366-5

AMS classification: 15A51; 15A18

Keywords: Doubly stochastic matrix; Inverse eigenvalue problem; Spectrum
1. Introduction

A real matrix A is called nonnegative (resp. positive), written A ≥ O (resp. A > O), if all of its entries are nonnegative (resp. positive). A square nonnegative matrix is called doubly stochastic if all of its rows and columns have sum 1. The set of all n × n doubly stochastic matrices is denoted by Ω_n. For a square matrix A, let σ(A) denote the spectrum of A.

Given an n-tuple Λ = (λ1, λ2, ..., λn)^T of numbers, real or complex, deciding the existence of a matrix A with some specified properties such that σ(A) = Λ has long been one of the problems of main interest in the theory of matrices.

Given Λ = (λ1, λ2, ..., λn)^T, in order that Λ = σ(A) for some positive matrix A, it is necessary that λ1 + λ2 + ··· + λn > 0. Sufficient conditions for the existence of a positive matrix A with σ(A) = Λ have been investigated by several authors such as Borobia [1], Fiedler [2], Kellogg [4], and Salzmann [5]. In this paper we deal with the existence of certain real symmetric matrices, and that of symmetric doubly stochastic matrices, with prescribed spectrum under certain conditions.

Let A ∈ Ω_n. Then the well-known theorem of Gershgorin (see [3], for instance) implies that σ(A) is contained in the closed unit disc centered at the origin of the complex plane. Thus if, in addition, A is symmetric, then σ(A) lies in the closed real interval [−1, 1].

Given a real n-tuple Λ = (λ1, λ2, ..., λn)^T with λ1 ≥ λ2 ≥ ··· ≥ λn, in order that there exist A ∈ Ω_n with σ(A) = Λ, it is necessary that

    λ1 = 1,   λn ≥ −1,   λ1 + λ2 + ··· + λn ≥ 0.   (1)

Thus, for instance, there exists no 3 × 3 doubly stochastic matrix with spectrum (1, −0.5, −0.6)^T or (1, 1, −1.1)^T.

Certainly condition (1) is not sufficient for Λ = (λ1, λ2, ..., λn)^T with λ1 ≥ λ2 ≥ ··· ≥ λn to be the spectrum of a doubly stochastic matrix. In this paper, we find, by a constructive method, a sufficient condition on Λ = (λ1, λ2, ..., λn)^T with λ1 ≥ λ2 ≥ ··· ≥ λn satisfying (1) for the existence of a symmetric doubly stochastic matrix having Λ as its spectrum.
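The necessary condition (1) is easy to test numerically. The sketch below is our own illustration, not part of the paper; the helper name is ours. It checks the two 3 × 3 examples from the text.

```python
import numpy as np

def violates_necessary_condition(spectrum):
    """Return True if the real tuple cannot be the spectrum of any
    symmetric doubly stochastic matrix, by necessary condition (1):
    largest eigenvalue 1, smallest >= -1, nonnegative trace."""
    lam = np.sort(np.asarray(spectrum, dtype=float))[::-1]
    return not (np.isclose(lam[0], 1.0)
                and lam[-1] >= -1.0
                and lam.sum() >= 0.0)

print(violates_necessary_condition([1, -0.5, -0.6]))  # sum is -0.1 < 0
print(violates_necessary_condition([1, 1, -1.1]))     # smallest < -1
```

Both calls print `True`, matching the two impossible spectra given above.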
For an n-tuple x = (x1, x2, ..., xn)^T, let Δx and ←x be defined by

    Δx = (x1 − x2, x2 − x3, ..., x_{n−1} − xn, xn)^T,
    ←x = (xn, x_{n−1}, ..., x2, x1)^T.

Note that ←(Δx) is the difference sequence of the finite sequence 0, xn, x_{n−1}, ..., x2, x1. Let

    h_n = (1, 1/2, 1/3, ..., 1/n)^T.

Then

    Δh_n = (1/(1·2), 1/(2·3), ..., 1/((n−1)·n), 1/n)^T.

We see that the ith component of Δh_n is the length of the ith subinterval of the division of the unit interval [0, 1] obtained by inserting the n − 1 points 1/2, 1/3, ..., 1/n.

For a real number s, let Ω_n^s denote the set of all n × n real matrices all of whose rows and columns have sum s. The set Ω_n consists of all nonnegative matrices in Ω_n^1.

In what follows we show that for any real n-tuple Λ = (λ1, λ2, ..., λn)^T there exists a symmetric matrix in Ω_n^{λ1} whose spectrum is Λ, and also that if a real n-tuple Λ = (1, λ2, ..., λn)^T with 1 ≥ λ2 ≥ ··· ≥ λn satisfies (1) and one of the two equivalent conditions (a) and (b) of the following Lemma 1, then there exists a symmetric doubly stochastic matrix of order n with Λ as its spectrum.

Lemma 1. For a real n-tuple Λ = (1, λ2, ..., λn)^T, let ΔΛ = (δ1, δ2, ..., δn)^T. Then the following are equivalent:

    (a) 1/n + λ2/(n(n−1)) + λ3/((n−1)(n−2)) + ··· + λn/(2·1) ≥ 0,   (2)
    (b) δ1/n + δ2/(n−1) + ··· + δ_{n−1}/2 + δn ≥ 0.   (3)

Proof. We see that

    1/n + λ2/(n(n−1)) + λ3/((n−1)(n−2)) + ··· + λn/(2·1)
      = 1·(1/n − 0) + λ2·(1/(n−1) − 1/n) + ··· + λn·(1 − 1/2)
      = (1 − λ2)·(1/n) + (λ2 − λ3)·(1/(n−1)) + ··· + (λ_{n−1} − λn)·(1/2) + λn
      = δ1/n + δ2/(n−1) + ··· + δ_{n−1}/2 + δn.

Thus the equivalence of (a) and (b) follows. □

Notice that

    1/n + λ2/(n(n−1)) + λ3/((n−1)(n−2)) + ··· + λn/(2·1) = Λ · ←(Δh_n),
    δ1/n + δ2/(n−1) + ··· + δ_{n−1}/2 + δn = (ΔΛ) · ←h_n.
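The identity behind Lemma 1 can be sanity-checked numerically via the two inner-product forms, Λ · ←(Δh_n) for (2) and (ΔΛ) · ←h_n for (3). The code below is our own sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def delta(x):
    """Difference vector Δx = (x1-x2, ..., x_{n-1}-x_n, x_n)."""
    x = np.asarray(x, dtype=float)
    return np.append(x[:-1] - x[1:], x[-1])

def lhs_a(lam):
    """Left side of (2): inner product of Lambda with reversed(Δh_n)."""
    h = 1.0 / np.arange(1, len(lam) + 1)   # h_n = (1, 1/2, ..., 1/n)
    return np.dot(lam, delta(h)[::-1])

def lhs_b(lam):
    """Left side of (3): inner product of ΔLambda with reversed(h_n)."""
    h = 1.0 / np.arange(1, len(lam) + 1)
    return np.dot(delta(lam), h[::-1])

rng = np.random.default_rng(0)
for _ in range(100):
    lam = np.sort(rng.uniform(-1, 1, size=6))[::-1]
    lam[0] = 1.0
    assert np.isclose(lhs_a(lam), lhs_b(lam))
print("equivalence of (2) and (3) verified on random samples")
```

The loop passes for every sample, reflecting the summation-by-parts rearrangement in the proof.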
Thus the inequalities (2) and (3) can be expressed as

    Λ · ←(Δh_n) ≥ 0,   (4)
    (ΔΛ) · ←h_n ≥ 0,   (5)

respectively, where '·' stands for the Euclidean inner product. In the sequel we denote by I_n, J_n, O_n the identity matrix, the all 1's matrix and the zero matrix of order n, respectively. We let e_n denote a column of J_n. We sometimes write I, J, O, e in place of I_n, J_n, O_n, e_n when the size of the matrix or vector is clear from the context.

2. Main result

For n ≥ 2, let

    Q_n = [ 1/n   (1/n)e^T ]
          [ −e    I_{n−1}  ].   (6)

Then clearly Q_n is nonsingular. We first observe the effect of the similarity transformation X → Q_nXQ_n^{−1} on certain sets of matrices on which our discussion relies.

Lemma 2. Let Q_n be the matrix defined in (6). Then

    (a) Q_nJ_nQ_n^{−1} = nI_1 ⊕ O_{n−1},
    (b) Q_n(I_1 ⊕ A)Q_n^{−1} = I_1 ⊕ A, for any A ∈ Ω_{n−1}^1.

Proof. Clearly

    Q_nJ_n = [ 1   e^T ] = (nI_1 ⊕ O_{n−1}) Q_n,
             [ 0   O   ]

and (a) follows. To show (b), observe that

    Q_n(I_1 ⊕ A) = [ 1/n   (1/n)e^TA ] = [ 1/n   (1/n)e^T ],
                   [ −e    A         ]   [ −e    A        ]

    (I_1 ⊕ A)Q_n = [ 1/n   (1/n)e^T ] = [ 1/n   (1/n)e^T ],
                   [ −Ae   A        ]   [ −e    A        ]

for any A ∈ Ω_{n−1}^1. Thus (b) holds. □

We first find a matrix with a prescribed spectrum in the set Ω_n^s.

Theorem 3. Let n ≥ 2. Then for any real n-tuple Λ = (λ1, λ2, ..., λn)^T, there exists a symmetric matrix A ∈ Ω_n^{λ1} with σ(A) = Λ.

Proof. Let (x1, x2, ..., xn) be an n-tuple of indeterminates and let

    A = x1 (1/n)J_n + x2 (I_1 ⊕ (1/(n−1))J_{n−1}) + ··· + xn (I_{n−1} ⊕ J_1).   (7)

We show that A is similar to diag(y1, y2, ..., yn) where y_i = x_i + x_{i+1} + ··· + xn (i = 1, 2, ..., n). We proceed by induction on n. If n = 2, then

    Q_2AQ_2^{−1} = Q_2 [ x1/2 + x2   x1/2      ] Q_2^{−1} = [ x1 + x2   0  ]
                       [ x1/2        x1/2 + x2 ]            [ 0         x2 ],

and the induction starts. Suppose that n > 2. We have, by Lemma 2, that

    Q_nAQ_n^{−1} = x1(I_1 ⊕ O_{n−1}) + x2(I_1 ⊕ (1/(n−1))J_{n−1})
                   + x3(I_2 ⊕ (1/(n−2))J_{n−2}) + ··· + xn(I_{n−1} ⊕ J_1)
                 = y1I_1 ⊕ B,

where

    B = x2 (1/(n−1))J_{n−1} + x3 (I_1 ⊕ (1/(n−2))J_{n−2}) + ··· + xn (I_{n−2} ⊕ J_1).
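The construction (7) in Theorem 3 can be checked numerically: taking x = ΔΛ gives y_i = λ_i, so the resulting A should be symmetric, lie in Ω_n^{λ1}, and have spectrum Λ. The following is our own verification sketch (function names are ours, not the authors'):

```python
import numpy as np

def block_term(n, i):
    """The matrix I_{i-1} ⊕ (1/(n-i+1)) J_{n-i+1}, with i 1-indexed."""
    k = n - i + 1
    M = np.eye(n)
    M[i-1:, i-1:] = np.ones((k, k)) / k
    return M

def construct(lam):
    """Symmetric matrix in Ω_n^{λ1} with spectrum lam, via (7) with x = ΔΛ."""
    lam = np.asarray(lam, dtype=float)
    n = len(lam)
    x = np.append(lam[:-1] - lam[1:], lam[-1])   # x = ΔΛ, so y_i = λ_i
    return sum(x[i] * block_term(n, i + 1) for i in range(n))

lam = np.array([1.0, 0.5, 0.2, -0.1])
A = construct(lam)
assert np.allclose(A, A.T)                    # symmetric
assert np.allclose(A.sum(axis=0), lam[0])     # column sums equal λ1
assert np.allclose(A.sum(axis=1), lam[0])     # row sums equal λ1
assert np.allclose(np.sort(np.linalg.eigvalsh(A)), np.sort(lam))
print(A.round(3))
```

For this choice of Λ the entries of A also turn out to be nonnegative, so A is doubly stochastic, illustrating the direction the paper develops next.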