ABSTRACT

TAYLOR, VALERIE EOWYN. The Birkhoff-von Neumann Decomposition and its Applications. (Under the direction of Dr. Arvind Krishna Saibaba.)

This paper explores the Birkhoff-von Neumann decomposition theorem, a celebrated result about a specific class of matrices called doubly stochastic matrices. The Birkhoff-von Neumann decomposition has many applications, ranging from theoretical areas such as matrix approximation to applied areas such as graph isomorphism and assignment. The purpose of this paper is to review the literature and to collect and organize various statements and applications of this theorem. We begin by presenting two different, and equivalent, statements of the theorem, give proofs of both, and highlight the connections between them. We then turn to applications: first several theoretical results whose proofs use the Birkhoff-von Neumann theorem, then more applied results such as graph isomorphism and the assignment problem.

The Birkhoff-von Neumann Decomposition and its Applications

by Valerie Eowyn Taylor

A thesis submitted to the Graduate Faculty of North Carolina State University in partial fulfillment of the requirements for the degree of Master of Science, Mathematics. Raleigh, North Carolina, 2018.

APPROVED BY: Dr. Arvind Krishna Saibaba (Chair of Advisory Committee), Dr. Ernie Stitzinger, Dr. Agnes Szanto

BIOGRAPHY

The author was born in Chapel Hill, NC on April 8th, 1991. They were homeschooled by their mother until attending public high school in 9th grade at Cedar Ridge High School in Hillsborough, NC. They then completed their undergraduate career at the University of North Carolina at Wilmington, receiving their Bachelor of Arts in Mathematics Education. Upon graduating, they began teaching math at John T. Hoggard High School in Wilmington.
They taught at JTH for three years until entering graduate school at North Carolina State University. They plan on returning to the education field upon completion of the graduate program at NCSU.

TABLE OF CONTENTS

1. Introduction
2. Birkhoff-von Neumann Decomposition
3. Applications
   3.1. Von Neumann Trace Inequality
   3.2. Hoffman-Wielandt Theorem
   3.3. Other Applications: Graph Isomorphisms; Assignment Problem; Majorization; Fan Dominance Principle
4. Conclusion
References
APPENDICES
   Appendix A: Additional Proofs
   Appendix B: Examples

1. Introduction

Doubly stochastic matrices are square matrices with non-negative entries such that all the rows and columns sum to 1. Doubly stochastic matrices have many applications. For instance, they are used in intercity population migration models: suppose there are cities C1, …, Cn with n ≥ 2, and each day a constant fraction aij of the current population of city j moves to city i, for all distinct i, j ∈ {1, …, n}.
Models like this are needed for issues such as planning city services and capital investment. These migration models quickly become complicated, and doubly stochastic matrices can be extremely useful in the calculations [4]. Doubly stochastic matrices are also used in Markov chains and in modeling problems in economics and operations research. Another modeling application concerns communication theory and satellites orbiting the earth [2].

This paper will explore the Birkhoff-von Neumann decomposition and various applications of the theorem. We will first present two different, and equivalent, statements of the theorem along with two different proofs. Then we will look at how this result influenced the development of other results, as well as some applied uses of the theorem.

Throughout this document let A = [aij] be an n × n real matrix, unless otherwise stated. We denote by π a permutation of the integers {1, …, n}, and its entries by π(i) for i = 1, …, n. Denote the columns of the n × n identity matrix by e1, …, en. Associated with every permutation π is the permutation matrix P given by P = [eπ(1) … eπ(n)].

2. Birkhoff-von Neumann Decomposition

The Birkhoff-von Neumann decomposition theorem is an important result for a special class of matrices known as doubly stochastic, defined below.

Definition 1 (Doubly Stochastic Matrices). A doubly stochastic matrix is a square n × n matrix A with non-negative entries aij ≥ 0 for i, j = 1, …, n and

∑_{i=1}^n aij = 1 for j = 1, …, n, and ∑_{j=1}^n aij = 1 for i = 1, …, n.

This definition simply means that all the columns and rows sum to 1. Alternatively, given a doubly stochastic matrix A and the vector of all ones e = (1, …, 1)ᵀ ∈ ℝⁿ, we have Ae = e and eᵀA = eᵀ. This second characterization implies that 1 is always an eigenvalue of a doubly stochastic matrix, with corresponding right and left eigenvector e.
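The defining row- and column-sum conditions, and the construction of a permutation matrix from π, are easy to check numerically. The following is a minimal sketch in Python/NumPy; the function names are illustrative and not from the paper, and the permutation is 0-indexed as is conventional in code.

```python
import numpy as np

def is_doubly_stochastic(A, tol=1e-9):
    """Check that A is square, non-negative, and every row and column sums to 1."""
    A = np.asarray(A, dtype=float)
    return (A.ndim == 2 and A.shape[0] == A.shape[1]
            and bool(np.all(A >= -tol))
            and np.allclose(A.sum(axis=0), 1.0, atol=tol)   # column sums
            and np.allclose(A.sum(axis=1), 1.0, atol=tol))  # row sums

def permutation_matrix(pi):
    """Build P = [e_{pi(1)} ... e_{pi(n)}]: column j of P is the pi[j]-th
    column of the identity (0-indexed)."""
    n = len(pi)
    return np.eye(n)[:, list(pi)]
```

For example, `permutation_matrix([1, 0])` swaps the two columns of the 2 × 2 identity, and any such P passes `is_doubly_stochastic`, illustrating the remark that a permutation matrix is itself doubly stochastic.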
Given this definition, we can state the first version of the Birkhoff-von Neumann decomposition theorem.

Theorem 2 (Birkhoff-von Neumann Decomposition). Let A be a doubly stochastic matrix. Then there exist constants α1, α2, …, αk ∈ (0, 1] with ∑_{i=1}^k αi = 1 and permutation matrices P1, P2, …, Pk such that

A = α1P1 + … + αkPk.

That is, a doubly stochastic matrix can be expressed as a convex combination of permutation matrices. Conversely, a single permutation matrix is itself doubly stochastic. The summation above is what is referred to as the Birkhoff-von Neumann decomposition.

Before we can prove this theorem, we need to establish the following lemma.

Lemma 3. Let A be an n × n doubly stochastic matrix that is not the identity matrix. Then there is a permutation π of {1, …, n} that is not the identity permutation and is such that

a1π(1) ⋯ anπ(n) > 0.

This means that we can find n nonzero elements of A, one in each row and column. Recall that the entries of A are non-negative; the lemma asserts that the product of the selected entries is positive, which simply means that none of them is zero. The proof of this lemma is located in Appendix A, so that we can proceed directly to the proof of the Birkhoff-von Neumann theorem. The following proof is adapted from the proof presented by Marshall, Olkin and Arnold [6].

Proof of Theorem 2. Let A be doubly stochastic. If A is a permutation matrix, there is nothing to prove. So assume that A is not a permutation matrix. Let π be a permutation of (1, …, n) that is not the identity permutation, such that the product a1π(1)a2π(2) ⋯ anπ(n) ≠ 0, whose existence is ensured by Lemma 3. Denote the corresponding permutation matrix by P1. Let c1 = min{a1π(1), …, anπ(n)} and define R by A = c1P1 + R. Note that c1 ≤ 1 since A is doubly stochastic. Also note that c1 ≠ 0, since none of the values {a1π(1), …, anπ(n)} can equal 0 given that their product is not 0.
Because c1P1 has element c1 in positions (1, π(1)), (2, π(2)), …, (n, π(n)) and A has elements a1π(1), …, anπ(n) in the corresponding positions, the choice of c1 = min{a1π(1), …, anπ(n)} ensures that alπ(l) − c1 ≥ 0, with equality for some l. Consequently, R has non-negative elements, since rlπ(l) = alπ(l) − c1, and R contains at least one more zero element than A, since for some l we know that alπ(l) = c1, which implies rlπ(l) = 0. Observe that for e = (1, 1, …, 1)ᵀ we have

e = Ae = (c1P1 + R)e = c1P1e + Re = c1e + Re.

Now we have two cases. First consider c1 = 1. This implies that R = 0 and A = P1, so A is already a permutation matrix and the desired decomposition is trivial. For the second case, consider c1 < 1. From the identity e = c1e + Re we can write e = A1e, where A1 = R/(1 − c1). The matrix A1 is doubly stochastic, since all its entries are non-negative and it satisfies A1e = e and eᵀA1 = eᵀ. In this case, we apply the same procedure to A1 to continue the decomposition. Each time we reduce the number of nonzero entries in the remainder, until we reach the zero matrix. Note that each time we pick a permutation of {1, …, n} we necessarily pick one that is not the identity; if the identity permutation is the only one available, then A = I as shown in the proof of Lemma 3, and the decomposition is done. Consequently, for some k, when the remainder is 0, we have

A = c1P1 + … + ckPk,

where each Pi is a permutation matrix. It remains to observe that

e = Ae = c1P1e + … + ckPke = (c1 + … + ck)e,

which implies that ∑_{i=1}^k ci = 1. This completes the proof.

It is relevant here to note a bound on the number of iterations needed to reach the decomposition, and hence a bound on k, the number of summands. A doubly stochastic matrix has at most n² − n zero entries: there are n² entries in total and there must be at least one nonzero entry in each column, meaning at least n non-zero entries.
Since there are at most n² − n zero entries, and each iteration creates at least one new zero, the decomposition process takes at most n² − n iterations. After this many iterations we would have n² − n + 1 summands, that is, k ≤ n² − n + 1. This bound has since been improved upon by several sources; for references, consult [1, Section II, pg. 38] and [6, Section 2, Theorem F.2].
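The peeling procedure in the proof above can be sketched directly in code. The sketch below finds the support permutation guaranteed by Lemma 3 by naive backtracking (a perfect matching on the nonzero pattern of the remainder) and repeatedly peels off ckPk until the remainder vanishes. It subtracts ckPk without renormalizing, which is equivalent to the proof's recursion on A1 = R/(1 − c1). This is an illustrative sketch, not code from the paper: the matching search is exponential in the worst case and is only meant for small examples.

```python
import numpy as np

def find_support_permutation(A, tol=1e-12):
    """Backtracking search for a permutation pi (0-indexed) with
    A[i, pi[i]] > 0 for all i, i.e. n nonzero entries, one per row
    and column, as in Lemma 3. Returns None if no such pi exists."""
    n = A.shape[0]
    pi = [-1] * n
    used = [False] * n

    def extend(i):
        if i == n:
            return True
        for j in range(n):
            if not used[j] and A[i, j] > tol:
                used[j], pi[i] = True, j
                if extend(i + 1):
                    return True
                used[j], pi[i] = False, -1
        return False

    return pi if extend(0) else None

def birkhoff_von_neumann(A, tol=1e-12):
    """Greedy peeling as in the proof of Theorem 2: at each step take
    c_k = min_i R[i, pi(i)] over a support permutation pi of the
    remainder R, subtract c_k * P_k, and repeat until R is zero."""
    R = np.array(A, dtype=float)
    coeffs, perms = [], []
    while R.max() > tol:
        pi = find_support_permutation(R, tol)
        if pi is None:
            raise ValueError("input is not (a multiple of) doubly stochastic")
        c = min(R[i, pi[i]] for i in range(len(pi)))
        P = np.zeros_like(R)
        for i, j in enumerate(pi):
            P[i, j] = 1.0
        coeffs.append(c)
        perms.append(P)
        R -= c * P  # creates at least one new zero in the remainder
    return coeffs, perms
```

Each pass zeroes at least one entry of the remainder, so for an n × n input the loop runs at most n² − n times and returns at most n² − n + 1 summands, matching the bound discussed above; the coefficients sum to 1 and the weighted sum of the returned permutation matrices reconstructs A.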