Sign Matrix Polytopes (PDF, 16 pages, 2.035 MB)
Recommended publications
Two Poset Polytopes
Discrete Comput Geom 1:9-23 (1986), Discrete & Computational Geometry, © 1986 Springer-Verlag New York Inc. Richard P. Stanley, Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139.
Abstract. Two convex polytopes, called the order polytope O(P) and chain polytope C(P), are associated with a finite poset P. There is a close interplay between the combinatorial structure of P and the geometric structure of O(P). For instance, the order polynomial Ω(P, m) of P and the Ehrhart polynomial i(O(P), m) of O(P) are related by Ω(P, m+1) = i(O(P), m). A "transfer map" then allows us to transfer properties of O(P) to C(P). In particular, we transfer known inequalities involving linear extensions of P to some new inequalities.
1. The Order Polytope. Our aim is to investigate two convex polytopes associated with a finite partially ordered set (poset) P. The first of these, which we call the "order polytope" and denote by O(P), has been the subject of considerable scrutiny, both explicit and implicit. Much of what we say about the order polytope will be essentially a review of well-known results, albeit ones scattered throughout the literature, sometimes in a rather obscure form. The second polytope, called the "chain polytope" and denoted C(P), seems never to have been previously considered per se. It is a special case of the vertex-packing polytope of a graph (see Section 2) but has many special properties not in general valid or meaningful for graphs.
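A small brute-force sketch (not from the paper) of the stated relation Ω(P, m+1) = i(O(P), m), together with the fact, which follows from the paper's transfer map, that O(P) and C(P) have the same Ehrhart polynomial. The toy poset, its cover list, and all names below are our own choices.

```python
# Brute-force check, for a small poset, of Omega(P, m+1) = i(O(P), m) from the
# abstract, and of the transfer-map consequence that O(P) and C(P) have the same
# Ehrhart polynomial. The poset here is a toy example encoded by its cover relations.
from itertools import product

n = 4
relations = [(0, 2), (0, 3), (1, 3)]          # covers (i, j) meaning p_i < p_j
maximal_chains = [[0, 2], [0, 3], [1, 3]]     # maximal chains of this poset

def order_points(m):
    """Lattice points of m*O(P): 0 <= x_i <= m and x_i <= x_j whenever p_i < p_j."""
    return sum(
        all(x[i] <= x[j] for i, j in relations)
        for x in product(range(m + 1), repeat=n)
    )

def chain_points(m):
    """Lattice points of m*C(P): x_i >= 0 and the sum over each maximal chain <= m."""
    return sum(
        all(sum(x[i] for i in ch) <= m for ch in maximal_chains)
        for x in product(range(m + 1), repeat=n)
    )

def order_polynomial(k):
    """Omega(P, k): order-preserving maps from P into a k-element chain {1, ..., k}."""
    return sum(
        all(f[i] <= f[j] for i, j in relations)
        for f in product(range(1, k + 1), repeat=n)
    )

for m in range(5):
    assert order_polynomial(m + 1) == order_points(m)
    assert order_points(m) == chain_points(m)
print("Omega(P, m+1) = i(O(P), m) = i(C(P), m) verified for m = 0..4")
```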
Eigenvalues of Euclidean Distance Matrices and Rs-Majorization on R2
46th Annual Iranian Mathematics Conference, 25-28 August 2015, Yazd University. Asma Ilkhanizadeh Manesh, Department of Pure Mathematics, Vali-e-Asr University of Rafsanjan; Alemeh Sheikh Hoseini, Department of Pure Mathematics, Shahid Bahonar University of Kerman.
Abstract. Let D1 and D2 be two Euclidean distance matrices (EDMs) with corresponding positive semidefinite matrices B1 and B2, respectively. Suppose that λ(A) = ((λ(A))_i)_{i=1}^n is the vector of eigenvalues of a matrix A such that (λ(A))_1 ≥ ... ≥ (λ(A))_n. In this paper, the relation between the eigenvalues of EDMs and those of the corresponding positive semidefinite matrices with respect to ≺_rs on R² will be investigated.
Keywords: Euclidean distance matrices, rs-majorization. Mathematics Subject Classification [2010]: 34B15, 76A10.
1. Introduction. An n × n nonnegative and symmetric matrix D = (d_{ij}²) with zero diagonal elements is called a predistance matrix. A predistance matrix D is called Euclidean, or a Euclidean distance matrix (EDM), if there exist a positive integer r and a set of n points {p_1, ..., p_n} such that p_1, ..., p_n ∈ R^r and d_{ij}² = ‖p_i − p_j‖² (i, j = 1, ..., n), where ‖·‖ denotes the usual Euclidean norm. The smallest value of r that satisfies the above condition is called the embedding dimension. As is well known, a predistance matrix D is Euclidean if and only if the matrix B = −(1/2) P D P, with P = I_n − (1/n) e eᵗ, where I_n is the n × n identity matrix and e is the vector of all ones, is a positive semidefinite matrix.
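A minimal numerical sketch of the criterion quoted at the end of the excerpt: D is Euclidean iff B = −(1/2) P D P is positive semidefinite. The function name, tolerance, and random test points are our own choices, not part of the paper.

```python
# Check whether a predistance matrix D is an EDM via B = -1/2 * P D P with
# P = I_n - (1/n) e e^t; the embedding dimension is the rank of B.
import numpy as np

def is_euclidean_distance_matrix(D, tol=1e-9):
    """Return (is_edm, embedding_dimension) for a predistance matrix D."""
    n = D.shape[0]
    P = np.eye(n) - np.ones((n, n)) / n          # centering matrix I_n - (1/n) e e^t
    B = -0.5 * P @ D @ P
    eigvals = np.linalg.eigvalsh(B)              # B is symmetric, so use eigvalsh
    is_edm = eigvals.min() >= -tol
    embedding_dim = int(np.sum(eigvals > tol))   # rank of B
    return is_edm, embedding_dim

# Build a valid D from random points in R^2 and check it.
rng = np.random.default_rng(0)
points = rng.normal(size=(5, 2))
D = np.array([[np.sum((p - q) ** 2) for q in points] for p in points])
print(is_euclidean_distance_matrix(D))           # expected: (True, 2)
```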
Clustering by Left-Stochastic Matrix Factorization
Raman Arora, Maya R. Gupta, Amol Kapila, Maryam Fazel (University of Washington, Seattle, WA 98103, USA).
Abstract. We propose clustering samples given their pairwise similarities by factorizing the similarity matrix into the product of a cluster probability matrix and its transpose. We propose a rotation-based algorithm to compute this left-stochastic decomposition (LSD). Theoretical results link the LSD clustering method to a soft kernel k-means clustering, give conditions for when the factorization and clustering are unique, and provide error bounds. Experimental results on simulated and real similarity datasets show that the proposed method reliably provides accurate clusterings.
1.1. Related Work in Matrix Factorization. Some clustering objective functions can be written as matrix factorization objectives. Let n feature vectors be gathered into a feature-vector matrix X ∈ R^{d×n}. Consider the model X ≈ F Gᵀ, where F ∈ R^{d×k} can be interpreted as a matrix with k cluster prototypes as its columns, and G ∈ R^{n×k} is all zeros except for one (appropriately scaled) positive entry per row that indicates the nearest cluster prototype. The k-means clustering objective follows this model with squared error, and can be expressed as (Ding et al., 2005):
arg min_{F, G : GᵀG = I, G ≥ 0} ‖X − F Gᵀ‖²_F,   (1)
where ‖·‖_F is the Frobenius norm, and the inequality G ≥ 0 is component-wise. This follows because the combined constraints G ≥ 0 and GᵀG = I force each row of G to have only one positive element.
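A quick check (our own, not from the paper) of why Eq. (1) is the k-means objective: if G has a single positive entry 1/sqrt(n_c) per row (so GᵀG = I) and F is chosen optimally as F = XG, then the factorization error equals the k-means sum of squared distances to the cluster means. The data, labels, and names below are arbitrary.

```python
# Verify numerically that ||X - F G^T||_F^2 with the structured G equals the usual
# k-means cost for the same hard clustering.
import numpy as np

rng = np.random.default_rng(1)
d, n, k = 3, 12, 3
X = rng.normal(size=(d, n))
labels = np.arange(n) % k                        # an arbitrary hard clustering, 4 points per cluster

# Structured G: one scaled positive entry per row, columns orthonormal.
G = np.zeros((n, k))
for i, c in enumerate(labels):
    G[i, c] = 1.0 / np.sqrt(np.sum(labels == c))
assert np.allclose(G.T @ G, np.eye(k))

F = X @ G                                        # optimal F for fixed G (since G^T G = I)
factorization_cost = np.linalg.norm(X - F @ G.T, "fro") ** 2

means = np.stack([X[:, labels == c].mean(axis=1) for c in range(k)], axis=1)
kmeans_cost = sum(np.sum((X[:, i] - means[:, labels[i]]) ** 2) for i in range(n))

print(np.isclose(factorization_cost, kmeans_cost))   # True
```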
Double Posets and Real Invariant Varieties (Dissertation)
Fachbereich Mathematik und Informatik der Freien Universität Berlin. Double posets and real invariant varieties: Two interactions between combinatorics and geometry. Dissertation submitted by Tobias Friedl, Berlin 2017. Advisor and first reviewer: Prof. Dr. Raman Sanyal. Second reviewer: Prof. Dr. Francisco Santos. Third reviewer: Priv.-Doz. Dr. Christian Stump. Date of the defense: May 19, 2017.
Acknowledgements. My deepest thanks go to my advisor Raman Sanyal. A PhD student can only hope for an advisor who is as dedicated and enthusiastic about mathematics as you are. Thank you for getting your hands dirty and spending many hours in front of the blackboard teaching me how to do research. I spent the last years in the amazing and inspiring work group directed by Günter Ziegler at FU Berlin. Thank you for providing such a welcoming and challenging work environment. I really enjoyed the time with my friends and colleagues at the "villa", most of all Francesco Grande, Katharina Jochemko, Katy Beeler, Albert Haase, Lauri Loiskekoski, Philip Brinkmann, Nevena Palić and Jean-Philippe Labbé. Thanks also to Elke Pose, who provides valuable support and keeps bureaucracy at a low level. I want to thank my coauthor Cordian Riener for many fruitful mathematical discussions and for an interesting and enjoyable week of research in Helsinki at Aalto University. Moreover, I'd like to express my gratitude towards my coauthor and friend Tom Chappell. Thanks to the reviewers Paco Santos and Christian Stump for their helpful comments, and to Christian additionally for many discussions regarding the combinatorics of reflection groups and posets throughout the last years.
Similarity-Based Clustering by Left-Stochastic Matrix Factorization
Journal of Machine Learning Research 14 (2013) 1715-1746. Submitted 1/12; Revised 11/12; Published 7/13. Raman Arora (Toyota Technological Institute, 6045 S. Kenwood Ave, Chicago, IL 60637, USA); Maya R. Gupta (Google, 1225 Charleston Rd, Mountain View, CA 94301, USA); Amol Kapila and Maryam Fazel (Department of Electrical Engineering, University of Washington, Seattle, WA 98195, USA). Editor: Inderjit Dhillon.
Abstract. For similarity-based clustering, we propose modeling the entries of a given similarity matrix as the inner products of the unknown cluster probabilities. To estimate the cluster probabilities from the given similarity matrix, we introduce a left-stochastic non-negative matrix factorization problem. A rotation-based algorithm is proposed for the matrix factorization. Conditions for unique matrix factorizations and clusterings are given, and an error bound is provided. The algorithm is particularly efficient for the case of two clusters, which motivates a hierarchical variant for cases where the number of desired clusters is large. Experiments show that the proposed left-stochastic decomposition clustering model produces relatively high within-cluster similarity on most data sets and can match given class labels, and that the efficient hierarchical variant performs surprisingly well.
Keywords: clustering, non-negative matrix factorization, rotation, indefinite kernel, similarity, completely positive.
1. Introduction. Clustering is important in a broad range of applications, from segmenting customers for more effective advertising, to building codebooks for data compression. Many clustering methods can be interpreted in terms of a matrix factorization problem.
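A naive projected-gradient sketch (not the paper's rotation-based algorithm) of the underlying model: approximate a similarity matrix K by PᵀP where each column of P is a probability vector over the k clusters. The step size, iteration count, planted example, and all names are illustrative choices only.

```python
# Fit K ~ P^T P with left-stochastic P by projected gradient descent on the
# squared Frobenius error; this is only a simple way to see the model in action.
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of a vector v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def lsd_fit(K, k, steps=2000, lr=0.005, seed=0):
    """Minimize ||K - P^T P||_F^2 over left-stochastic P (k x n) by projected gradient."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    P = np.apply_along_axis(project_to_simplex, 0, rng.random((k, n)))
    for _ in range(steps):
        grad = -4.0 * P @ (K - P.T @ P)          # gradient of the squared Frobenius error
        P = np.apply_along_axis(project_to_simplex, 0, P - lr * grad)
    return P

# Plant cluster probabilities, build K from their inner products, and try to refit.
labels = np.arange(20) % 2
P_true = np.where(np.arange(2)[:, None] == labels, 0.9, 0.1)   # crisp 2-cluster memberships
K = P_true.T @ P_true
P_hat = lsd_fit(K, k=2)
print("residual:", np.linalg.norm(K - P_hat.T @ P_hat, "fro"))  # should shrink toward 0
print("labels:  ", P_hat.argmax(axis=0))                        # typically recovers the two blocks
```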
Alternating Sign Matrices and Polynomiography
Bahman Kalantari, Department of Computer Science, Rutgers University, USA. Submitted: Apr 10, 2011; Accepted: Oct 15, 2011; Published: Oct 31, 2011. Mathematics Subject Classifications: 00A66, 15B35, 15B51, 30C15. Dedicated to Doron Zeilberger on the occasion of his sixtieth birthday.
Abstract. To each permutation matrix we associate a complex permutation polynomial with roots at lattice points corresponding to the position of the ones. More generally, to an alternating sign matrix (ASM) we associate a complex alternating sign polynomial. On the one hand, visualization of these polynomials through polynomiography, in a combinatorial fashion, provides for a rich source of algorithmic art-making, interdisciplinary teaching, and even leads to games. On the other hand, this combines a variety of concepts such as symmetry, counting and combinatorics, iteration functions and dynamical systems, giving rise to a source of research topics. More generally, we assign classes of polynomials to matrices in the Birkhoff and ASM polytopes. From the characterization of vertices of these polytopes, and by proving a symmetry-preserving property, we argue that polynomiography of ASMs forms building blocks for approximate polynomiography for polynomials corresponding to any given member of these polytopes. To this end we offer an algorithm to express any member of the ASM polytope as a convex combination of ASMs. In particular, we can give exact or approximate polynomiography for any Latin Square or Sudoku solution. We exhibit some images.
Keywords: Alternating Sign Matrices, Polynomial Roots, Newton's Method, Voronoi Diagram, Doubly Stochastic Matrices, Latin Squares, Linear Programming, Polynomiography.
1. Introduction. Polynomials are undoubtedly one of the most significant objects in all of mathematics and the sciences, particularly in combinatorics.
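A toy polynomiography-style sketch in the spirit of the abstract: place a root at each lattice position holding a 1 of a permutation matrix and record which root Newton's method reaches from each point of a grid. Reading a 1 in row r, column c as the root c + r·i is our guess at a convention; the paper's exact construction (and its extension to ASMs) is not reproduced here.

```python
# Newton-basin classification for the polynomial p(z) = prod (z - root), with one
# root per 1 of a permutation matrix.
import numpy as np

perm = [2, 0, 3, 1]                              # row r of the permutation matrix has its 1 in column perm[r]
roots = np.array([perm[r] + r * 1j for r in range(len(perm))])

def newton_basin(z, iters=50, tol=1e-8):
    """Iterate Newton's method for p(z) = prod (z - roots); return the index of the nearest root at the end."""
    for _ in range(iters):
        diffs = z - roots
        if np.min(np.abs(diffs)) < tol:
            break
        # p'(z)/p(z) = sum 1/(z - root), so the Newton step is z - 1/sum(1/(z - root)).
        z = z - 1.0 / np.sum(1.0 / diffs)
    return int(np.argmin(np.abs(z - roots)))

# Basin index for each point of a coarse grid over a square containing the roots.
xs = np.linspace(-1, 4, 60)
ys = np.linspace(-1, 4, 60)
basins = np.array([[newton_basin(x + y * 1j) for x in xs] for y in ys])
print(np.bincount(basins.ravel()))               # how many grid points end up nearest each root
```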
Minuscule Doppelgängers, the Coincidental Down-Degree Expectations Property, and Rowmotion (arXiv:1902.07301v3 [math.CO], 13 Feb 2020)
Sam Hopkins.
Abstract. We relate Reiner, Tenner, and Yong's coincidental down-degree expectations (CDE) property of posets to the minuscule doppelgänger pairs studied by Hamaker, Patrias, Pechenik, and Williams. Via this relation, we put forward a series of conjectures which suggest that the minuscule doppelgänger pairs behave "as if" they had isomorphic comparability graphs, even though they do not. We further explore the idea of minuscule doppelgänger pairs pretending to have isomorphic comparability graphs by considering the rowmotion operator on order ideals. We conjecture that the members of a minuscule doppelgänger pair behave the same way under rowmotion, as they would if they had isomorphic comparability graphs. Moreover, we conjecture that these pairs continue to behave the same way under the piecewise-linear and birational liftings of rowmotion introduced by Einstein and Propp. This conjecture motivates us to study the homomesies (in the sense of Propp and Roby) exhibited by birational rowmotion. We establish the birational analog of the antichain cardinality homomesy for the major examples of posets known or conjectured to have finite birational rowmotion order (namely: minuscule posets and root posets of coincidental type).
1. Introduction. Let P be a finite poset. The down-degree of p ∈ P is the number of elements of P which p covers. Consider two probability distributions on P: the uniform distribution; and the distribution where p ∈ P occurs proportional to the number of maximal chains of P containing p. We say that P has the coincidental down-degree expectations (CDE) property if the expected value of the down-degree statistic is the same for these two distributions.
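A direct check of the CDE definition given in the excerpt, on a small poset of our own choosing: the product of a 2-chain and a 3-chain. The expected down-degree is computed both under the uniform distribution and under the maximal-chain distribution, and the two agree.

```python
# Compute both down-degree expectations from the definition and compare them.
from itertools import product
from fractions import Fraction

elements = list(product(range(2), range(3)))                  # the grid poset [2] x [3]
covers = [(a, b) for a in elements for b in elements
          if all(x <= y for x, y in zip(a, b))
          and sum(y - x for x, y in zip(a, b)) == 1]          # b covers a: differ by 1 in one coordinate

down_degree = {p: sum(1 for a, b in covers if b == p) for p in elements}

def maximal_chains():
    """Enumerate maximal chains by extending upward from the minimal elements."""
    minimal = [p for p in elements if down_degree[p] == 0]
    chains = []
    def extend(chain):
        ups = [b for a, b in covers if a == chain[-1]]
        if not ups:
            chains.append(chain)
        for b in ups:
            extend(chain + [b])
    for p in minimal:
        extend([p])
    return chains

chains = maximal_chains()
uniform = Fraction(sum(down_degree.values()), len(elements))
weight = {p: sum(p in c for c in chains) for p in elements}
chain_weighted = Fraction(sum(weight[p] * down_degree[p] for p in elements),
                          sum(weight.values()))
print(uniform, chain_weighted, uniform == chain_weighted)     # 7/6 7/6 True
```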
Rotational (and Other) Representations of Stochastic Matrices
Steve Alpern (Department of Mathematics, London School of Economics, London WC2A 2AE, United Kingdom) and V. S. Prasad (Department of Mathematics, University of Massachusetts Lowell, Lowell, MA). May 27, 2005.
Abstract. Joel E. Cohen (1981) conjectured that any stochastic matrix P = {p_{i,j}} could be represented by some circle rotation f in the following sense: for some partition {S_i} of the circle into sets consisting of finite unions of arcs, we have (*) p_{i,j} = λ(f(S_i) ∩ S_j)/λ(S_i), where λ denotes arc length. In this paper we show how cycle decomposition techniques originally used (Alpern, 1983) to establish Cohen's conjecture can be extended to give a short simple proof of the Coding Theorem, that any mixing (that is, P^N > 0 for some N) stochastic matrix P can be represented (in the sense of *, but with the S_i merely measurable) by any aperiodic measure-preserving bijection (automorphism) of a Lebesgue probability space. Representations by pointwise and setwise periodic automorphisms are also established. While this paper is largely expository, all the proofs, and some of the results, are new.
Keywords: rotational representation, stochastic matrix, cycle decomposition. MSC 2000 subject classifications. Primary: 60J10. Secondary: 15A51.
1. Introduction. An automorphism of a Lebesgue probability space (X, A, μ) is a bimeasurable bijection f : X → X which preserves the measure μ. If S = {S_i}_{i=1}^n is a non-trivial (all μ(S_i) > 0) measurable partition of X, we can generate a stochastic matrix P = {p_{i,j}}_{i,j=1}^n by the definition
p_{i,j} = μ(f(S_i) ∩ S_j) / μ(S_i),  i, j = 1, ..., n.  (1)
Since the partition S is non-trivial, the matrix P has a positive invariant (stationary) distribution v = (v_1, ..., v_n) = (μ(S_1), ..., μ(S_n)), and hence (by definition) is recurrent.
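A numerical sketch of definition (1) in the special case of a circle rotation, the setting of Cohen's conjecture: the circle is modeled as [0, 1), the partition consists of a few arcs, and the entries p_{i,j} are obtained by measuring interval overlaps. The particular arcs and rotation angle are arbitrary choices.

```python
# Build the stochastic matrix generated by a rotation and an arc partition, then
# check that the rows sum to 1 and that the arc lengths form a stationary distribution.
import numpy as np

alpha = 0.3                                   # rotation f(x) = x + alpha (mod 1)
cuts = [0.0, 0.2, 0.55, 1.0]                  # arcs S_1 = [0, 0.2), S_2 = [0.2, 0.55), S_3 = [0.55, 1)

def overlap(a, b, c, d):
    """Length of the intersection of the intervals [a, b) and [c, d)."""
    return max(0.0, min(b, d) - max(a, c))

def rotated_pieces(a, b, alpha):
    """Image of the arc [a, b) under rotation by alpha, as sub-intervals of [0, 1)."""
    a, b = (a + alpha) % 1.0, (b + alpha) % 1.0
    return [(a, b)] if a < b else [(a, 1.0), (0.0, b)]

n = len(cuts) - 1
P = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        pieces = rotated_pieces(cuts[i], cuts[i + 1], alpha)
        inter = sum(overlap(a, b, cuts[j], cuts[j + 1]) for a, b in pieces)
        P[i, j] = inter / (cuts[i + 1] - cuts[i])

print(np.round(P, 3))
print("row sums:", P.sum(axis=1))                                   # each row should sum to 1
print("stationary check:", np.allclose(P.T @ np.diff(cuts), np.diff(cuts)))
```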
Left Eigenvector of a Stochastic Matrix
Advances in Pure Mathematics, 2011, 1, 105-117. doi:10.4236/apm.2011.14023. Published Online July 2011 (http://www.SciRP.org/journal/apm). Sylvain Lavallée, Département de mathématiques, Université du Québec à Montréal, Montréal, Canada. Received January 7, 2011; revised June 7, 2011; accepted June 15, 2011.
Abstract. We determine the left eigenvector of a stochastic matrix M associated to the eigenvalue 1 in the commutative and the noncommutative cases. In the commutative case, we see that the eigenvector associated to the eigenvalue 0 is (N_1, ..., N_n), where N_i is the ith principal minor of N = M − I_n, where I_n is the identity matrix of dimension n. In the noncommutative case, this eigenvector is (P_1^{-1}, ..., P_n^{-1}), where P_i is the sum in a_{ij} of the corresponding labels of nonempty paths starting from i and not passing through i in the complete directed graph associated to M.
Keywords: Generic Stochastic Noncommutative Matrix, Commutative Matrix, Left Eigenvector Associated To The Eigenvalue 1, Skew Field, Automata.
1. Introduction. It is well known that 1 is one of the eigenvalues of a stochastic matrix (i.e. the sum of the elements of each row is equal to 1) and its associated right eigenvector is the vector (1, 1, ..., 1)^T. ... the stochastic free field, and that the vector (P_1^{-1}, ..., P_n^{-1}) is fixed by our matrix; moreover, the sum of the P_i^{-1} is equal to 1, hence they form a kind of noncommutative limiting probability. These results have been proved in [1], but the proof ...
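A quick numerical check (ours, not from the paper) of the commutative statement in the abstract, reading "ith principal minor" as the determinant of N = M − I_n with row i and column i deleted: that vector, once normalized, matches a left eigenvector of M for the eigenvalue 1.

```python
# Compare the normalized vector of principal minors of M - I_n with a left
# eigenvector of M for the eigenvalue 1 computed by NumPy.
import numpy as np

rng = np.random.default_rng(7)
M = rng.random((4, 4))
M /= M.sum(axis=1, keepdims=True)                 # make M row-stochastic

N = M - np.eye(4)
minors = np.array([
    np.linalg.det(np.delete(np.delete(N, i, axis=0), i, axis=1))   # (i,i)-deleted minor of N
    for i in range(4)
])

eigvals, left_vecs = np.linalg.eig(M.T)           # columns of left_vecs are left eigenvectors of M
v = np.real(left_vecs[:, np.argmin(np.abs(eigvals - 1))])

print(np.allclose(minors / minors.sum(), v / v.sum()))   # expected: True
```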
Alternating Sign Matrices, Extensions and Related Cones
Richard A. Brualdi (University of Wisconsin-Madison) and Geir Dahl (University of Oslo). Article in Advances in Applied Mathematics, May 2017. DOI: 10.1016/j.aam.2016.12.001. Preprint dated December 1, 2016.
Abstract. An alternating sign matrix, or ASM, is a (0, ±1)-matrix where the nonzero entries in each row and column alternate in sign, and where each row and column sum is 1. We study the convex cone generated by ASMs of order n, called the ASM cone, as well as several related cones and polytopes. Some decomposition results are shown, and we find a minimal Hilbert basis of the ASM cone. The notion of (±1)-doubly stochastic matrices and a generalization of ASMs are introduced and various properties are shown. For instance, we give a new short proof of the linear characterization of the ASM polytope, in fact for a more general polytope.
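A small helper (our own naming) implementing the ASM definition quoted in the abstract: entries in {0, ±1}, nonzero entries alternating in sign along every row and column, and every row and column summing to 1.

```python
# Check the ASM conditions row by row and column by column.
import numpy as np

def is_asm(A):
    A = np.asarray(A)
    if not np.isin(A, (-1, 0, 1)).all():
        return False
    def line_ok(line):
        nonzero = [x for x in line if x != 0]
        return (
            sum(line) == 1                       # guards against all-zero lines as well
            and nonzero[0] == 1 and nonzero[-1] == 1
            and all(nonzero[i] != nonzero[i + 1] for i in range(len(nonzero) - 1))
        )
    return all(line_ok(row) for row in A) and all(line_ok(col) for col in A.T)

A = [[0, 1, 0],
     [1, -1, 1],
     [0, 1, 0]]
print(is_asm(A), is_asm(np.eye(3, dtype=int)))   # True True (permutation matrices are ASMs)
```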
The h*-Polynomial of the Order Polytope of the Zig-Zag Poset
Jane Ivy Coons and Seth Sullivant (arXiv:1901.07443v2 [math.CO], 1 Apr 2020).
Abstract. We describe a family of shellings for the canonical triangulation of the order polytope of the zig-zag poset. This gives a new combinatorial interpretation for the coefficients in the numerator of the Ehrhart series of this order polytope in terms of the swap statistic on alternating permutations.
1. Introduction and Preliminaries. The zig-zag poset Z_n on ground set {z_1, ..., z_n} is the poset with exactly the cover relations z_1 < z_2 > z_3 < z_4 > .... That is, this partial order satisfies z_{2i−1} < z_{2i} and z_{2i} > z_{2i+1} for all i between 1 and ⌊(n−1)/2⌋. The order polytope of Z_n, denoted O(Z_n), is the set of all n-tuples (x_1, ..., x_n) ∈ R^n that satisfy 0 ≤ x_i ≤ 1 for all i and x_i ≤ x_j whenever z_i < z_j in Z_n. In this paper, we introduce the "swap" permutation statistic on alternating permutations to give a new combinatorial interpretation of the numerator of the Ehrhart series of O(Z_n).
We began studying this problem in relation to combinatorial properties of the Cavender-Farris-Neyman model with a molecular clock (or CFN-MC model) from mathematical phylogenetics [4]. We were interested in the polytope associated to the toric variety obtained by applying the discrete Fourier transform to the Cavender-Farris-Neyman model with a molecular clock on a given rooted binary phylogenetic tree. We call this polytope the CFN-MC polytope.
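A brute-force sketch (our own) that computes the Ehrhart series numerator, i.e. the h*-vector, of O(Z_n) for a small n directly from the inequalities above, via h*(t) = (1 − t)^{n+1} Σ_m i(O(Z_n), m) t^m. The sum of the coefficients should equal the number of linear extensions of Z_n, which is the number of alternating permutations of [n].

```python
# Count lattice points of the dilates of O(Z_n), extract the h*-vector, and compare
# its coefficient sum with the number of linear extensions of Z_n.
from itertools import product, permutations
from math import comb

n = 4
covers = [(i, i + 1) if i % 2 == 0 else (i + 1, i) for i in range(n - 1)]
# covers (a, b) means z_{a+1} < z_{b+1} (0-indexed), encoding z1 < z2 > z3 < z4 ...

def dilate_count(m):
    """i(O(Z_n), m): integer points x in [0, m]^n with x_a <= x_b for each cover (a, b)."""
    return sum(
        all(x[a] <= x[b] for a, b in covers)
        for x in product(range(m + 1), repeat=n)
    )

counts = [dilate_count(m) for m in range(n + 1)]
h_star = [
    sum((-1) ** i * comb(n + 1, i) * counts[j - i] for i in range(j + 1))
    for j in range(n + 1)
]

linear_extensions = sum(
    all(p[a] < p[b] for a, b in covers) for p in permutations(range(n))
)
print("h*-vector:", h_star)                       # for n = 4 this gives [1, 3, 1, 0, 0]
print(sum(h_star) == linear_extensions)           # True
```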
Linear Algebra (Part 5): Eigenvalues and Diagonalization
Linear Algebra (part 5): Eigenvalues and Diagonalization (by Evan Dummit, 2017, v. 1.50).
Contents: 5 Eigenvalues and Diagonalization; 5.1 Eigenvalues, Eigenvectors, and the Characteristic Polynomial (5.1.1 Eigenvalues and Eigenvectors; 5.1.2 Eigenvalues and Eigenvectors of Matrices; 5.1.3 Eigenspaces); 5.2 Diagonalization; 5.3 Applications of Diagonalization (5.3.1 Transition Matrices and Incidence Matrices; 5.3.2 Systems of Linear Differential Equations; 5.3.3 Non-Diagonalizable Matrices and the Jordan Canonical Form).
5. Eigenvalues and Diagonalization. In this chapter, we will discuss eigenvalues and eigenvectors: these are characteristic values (and characteristic vectors) associated to a linear operator T : V → V that will allow us to study T in a particularly convenient way. Our ultimate goal is to describe methods for finding a basis for V such that the associated matrix for T has an especially simple form. We will first describe diagonalization, the procedure for (trying to) find a basis such that the associated matrix for T is a diagonal matrix, and characterize the linear operators that are diagonalizable. Then we will discuss a few applications of diagonalization, including the Cayley-Hamilton theorem that any matrix satisfies its characteristic polynomial, and close with a brief discussion of non-diagonalizable matrices.
5.1 Eigenvalues, Eigenvectors, and the Characteristic Polynomial. Suppose that we have a linear transformation T : V → V from a (finite-dimensional) vector space V to itself. We would like to determine whether there exists a basis β of V such that the associated matrix [T]_β^β is a diagonal matrix.
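A short numerical illustration of the chapter's two headline facts, using a 2×2 example of our own: diagonalizing A as D = P^{-1} A P, and checking the Cayley-Hamilton theorem for it.

```python
# Diagonalize a small matrix and verify that it satisfies its characteristic polynomial.
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                    # eigenvalues 5 and 2, so A is diagonalizable

eigvals, P = np.linalg.eig(A)                 # columns of P are eigenvectors of A
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))                        # diagonal matrix carrying the eigenvalues of A

# Cayley-Hamilton for 2x2: p(t) = t^2 - tr(A) t + det(A), so p(A) should be the zero matrix.
p_of_A = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
print(np.allclose(p_of_A, 0))                 # True
```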