Discrete Optimization 2010 Lecture 6 Total Unimodularity & Matchings


Marc Uetz, University of Twente ([email protected])

Outline
1. Total Unimodularity
2. Matching Problem

Question
Under what conditions on A and b is it true that all vertices of
    P = {x ∈ R^n | Ax ≤ b, x ≥ 0}
happen to be integer?

Answer: if A is totally unimodular and b is integer.

(Totally) Unimodular Matrices

Definition
1. An integer square matrix B ∈ Z^(n×n) is unimodular if det(B) = ±1.
2. An integer matrix A ∈ Z^(m×n) is totally unimodular (TU) if each square submatrix B of A has det(B) ∈ {0, ±1}.

Some easy facts:
- The entries of a TU matrix are in {0, ±1} by definition (1×1 submatrices).
- If A is TU and we add or delete a unit row or column vector (0, ..., 1, ..., 0), the result is again TU.
- If A is TU and we multiply any row or column by −1, the result is again TU.

Motivation
If all vertices of {x ∈ R^n | Ax ≤ b, x ≥ 0} are integer, then for any c the linear program
    max c^T x   s.t.  Ax ≤ b, x ≥ 0
(if not infeasible or unbounded) has an integer optimal solution, as the set of optimal solutions is a face of {x | Ax ≤ b, x ≥ 0}. In that case we can even solve integer linear programming problems by solving an LP only (say, with Simplex), because an optimal solution (if it exists) also occurs at a vertex, which happens to be integer.

Integrality of Linear Equality Systems

Theorem. Let B be an integer square matrix that is nonsingular (that is, det(B) ≠ 0), and consider the system Bx = b. Then x is integer for every integer right-hand side b if and only if B is unimodular.

"if": x = B^(-1) b, and by Cramer's rule (linear algebra)
    x_j = det(B_j) / det(B),
where B_j equals B with its j-th column replaced by b. Each det(B_j) is integer and det(B) = ±1, so x is integer.

"only if": x = B^(-1) b is integer for all integer b, in particular for b = (0, ..., 1, ..., 0)^T; such an x exactly equals a column of B^(-1). So all columns of B^(-1) are integer, and so is det(B^(-1)). But det(B) · det(B^(-1)) = 1, both factors integer, so both are ±1.

TU Matrices and Integrality

Define, for A ∈ R^(m×n) and b ∈ R^m,
    P(A) = {x ∈ R^n | Ax ≤ b, x ≥ 0}.

Theorem. If A is TU and b is integer, then P(A) has integer vertices only.

Proof. If P(A) = ∅ there is nothing to prove. Otherwise write
    P(A) = {x | [A; −E] x ≤ [b; 0]},
where E is the identity matrix and [A; −E] stacks A on top of −E. As A is TU, so is [A; −E]. A vertex x of P(A) exists (see Exercise), and it is defined by n linearly independent inequalities of this system, so A°x = b° for a nonsingular subsystem (A°, b°) of [A; −E] x ≤ [b; 0]. As A° is nonsingular and TU, det(A°) = ±1, so x = (A°)^(-1) b° is integer.
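As a concrete companion to the definition (not part of the original slides), here is a minimal brute-force TU test in Python: it enumerates every square submatrix and checks that its determinant lies in {0, ±1}. The names int_det and is_totally_unimodular are my own; the determinant uses Bareiss' fraction-free elimination so that all arithmetic stays in the integers.

```python
from itertools import combinations

def int_det(M):
    """Determinant of a square integer matrix via Bareiss'
    fraction-free elimination (exact integer arithmetic)."""
    M = [row[:] for row in M]
    n = len(M)
    sign, prev = 1, 1
    for k in range(n - 1):
        if M[k][k] == 0:                      # pivot search with row swap
            for i in range(k + 1, n):
                if M[i][k] != 0:
                    M[k], M[i] = M[i], M[k]
                    sign = -sign
                    break
            else:
                return 0                      # zero column => det = 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
        prev = M[k][k]
    return sign * M[-1][-1]

def is_totally_unimodular(A):
    """Brute-force TU test: every square submatrix must have
    determinant in {0, +1, -1}. Exponential -- toy examples only."""
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                B = [[A[i][j] for j in cols] for i in rows]
                if int_det(B) not in (0, 1, -1):
                    return False
    return True

# Node-arc incidence matrix of a small digraph; the min-cost flow
# slide later in the lecture explains why such matrices are always TU.
A = [[ 1,  1,  0],
     [-1,  0,  1],
     [ 0, -1, -1]]
print(is_totally_unimodular(A))   # True
```

Brute force is exponential, so this only serves as an executable restatement of the definition; polynomial-time TU recognition exists (via Seymour's decomposition theorem) but is far beyond a quick sketch.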
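The "if" direction of the integrality theorem can likewise be replayed numerically. The following sketch (my own illustration, with hypothetical helper names frac_det and cramer_solve) solves Bx = b by Cramer's rule with exact rational arithmetic; for a unimodular B and integer b, every coordinate comes out integer, exactly as the proof predicts.

```python
from fractions import Fraction

def frac_det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, det = len(M), Fraction(1)
    for k in range(n):
        p = next((i for i in range(k, n) if M[i][k] != 0), None)
        if p is None:
            return Fraction(0)
        if p != k:
            M[k], M[p] = M[p], M[k]   # row swap flips the sign
            det = -det
        det *= M[k][k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
    return det

def cramer_solve(B, b):
    """Solve Bx = b by Cramer's rule: x_j = det(B_j)/det(B), where B_j
    is B with column j replaced by b. If det(B) = +-1 and b is integer,
    every x_j is visibly integer -- the 'if' direction of the theorem."""
    n, d = len(B), frac_det(B)
    assert d != 0, "B must be nonsingular"
    x = []
    for j in range(n):
        Bj = [row[:] for row in B]
        for i in range(n):
            Bj[i][j] = b[i]           # replace column j by b
        x.append(frac_det(Bj) / d)
    return x

B = [[2, 1],   # det(B) = 1, so B is unimodular
     [1, 1]]
print(cramer_solve(B, [5, 3]))   # [Fraction(2, 1), Fraction(1, 1)]
```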
The Whole Story: the Hoffman–Kruskal Theorem

P(A) = {x ∈ R^n | Ax ≤ b, x ≥ 0}

We proved sufficiency:

Theorem. If A is TU and b is integer, then P(A) has integer vertices only.

Necessity holds as well (see Literature, Theorem 6.25):

Theorem. If P(A) has integer vertices for all integer b, then A is totally unimodular.

A Sufficient Condition for Total Unimodularity

Theorem. A matrix A is totally unimodular if no more than 2 nonzeros are in each column, and the rows can be partitioned into two sets I1 and I2 such that
1. if a column has two entries of the same sign, their rows are in different sets of the partition;
2. if a column has two entries of different sign, their rows are in the same set of the partition.

Proof: by induction on the size of the square submatrices.

[Figure: example matrix with its rows partitioned into I1 and I2.]

Minimum Cost Flows

Theorem. The node-arc incidence matrix of any directed graph G = (V, E) is totally unimodular.

Proof: each column has exactly one +1 and one −1 (two entries of different sign), so take I1 = V, I2 = ∅ in the sufficient condition above.

Consequence
- The linear program for Min-Cost Flow always has integer optimal solutions, as long as the capacities u_ij and the balances b(i) are integer.
- The dual linear program always has integer optimal solutions, as long as the costs c_ij are integer.

The Matching Problem

Definition. A matching in an undirected graph G = (V, E) is a set M ⊆ E of pairwise non-incident (disjoint) edges.
- A matching M is maximal if M ∪ {e} is not a matching for any e ∈ E \ M.
- Given an undirected graph G = (V, E), a maximum matching is a matching of maximum cardinality. Note: maximal ≠ maximum.
- A matching M is a perfect matching if every vertex is incident to an edge of M, that is, |M| = |V|/2.

Special Case: Bipartite Matching

Motivating example: n students and n universities for an exchange; each student gives a ranking of the universities, on a scale from 1 (top) to n (bottom). Problem: find a matching of students to universities such that everybody is matched, making the whole set of students as happy as possible.

Special case: matching in bipartite graphs G = (V1, V2, E), where E ⊆ V1 × V2.

Bipartite Graphs

Theorem. A graph G = (V, E) is bipartite if and only if G contains no odd cycle.

Proof (w.l.o.g. assume G connected):
Necessity: trivially, a bipartite graph has no odd cycle, since every cycle alternates between the two sides of the bipartition and therefore has even length.
Sufficiency: given G with no odd cycle, pick v0 ∈ V and define
    V1 = {v ∈ V | dist(v0, v) even},
    V2 = {v ∈ V | dist(v0, v) odd}.
Then V1 ∪ V2 = V is a bipartition with no edges within V1 and none within V2.
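Looping back to the sufficient condition and the min-cost flow slides: the row-partition condition is really a 2-coloring problem, which the following sketch (helper name tu_row_partition is mine, not from the slides) makes explicit. Same-sign columns force their two rows into different sets, different-sign columns force them into the same set; finding such a partition certifies the sufficient condition, while None only means this particular test is inconclusive.

```python
def tu_row_partition(A):
    """Row-partition test from the sufficient condition: columns have at
    most 2 nonzeros; same-sign pairs of rows must be separated, and
    different-sign pairs kept together. Solved as 2-coloring of the
    constraint graph; returns (I1, I2) or None."""
    m = len(A)
    constraints = [[] for _ in range(m)]   # (other_row, 1=differ / 0=agree)
    for col in zip(*A):
        nz = [i for i, a in enumerate(col) if a != 0]
        if len(nz) > 2:
            return None                    # theorem does not apply
        if len(nz) == 2:
            i, j = nz
            differ = 1 if col[i] * col[j] > 0 else 0
            constraints[i].append((j, differ))
            constraints[j].append((i, differ))
    color = [-1] * m
    for s in range(m):
        if color[s] != -1:
            continue
        color[s], stack = 0, [s]
        while stack:
            i = stack.pop()
            for j, differ in constraints[i]:
                want = color[i] ^ differ
                if color[j] == -1:
                    color[j] = want
                    stack.append(j)
                elif color[j] != want:
                    return None            # constraints unsatisfiable
    I1 = {i for i in range(m) if color[i] == 0}
    return I1, set(range(m)) - I1

# Node-arc incidence matrix: each column has one +1 and one -1, so all
# rows may go into I1 (I2 empty), exactly as on the min-cost flow slide.
A = [[ 1,  1,  0,  0],
     [-1,  0,  1,  0],
     [ 0, -1, -1,  1],
     [ 0,  0,  0, -1]]
print(tu_row_partition(A))    # ({0, 1, 2, 3}, set())
```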
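The sufficiency proof above is effectively an algorithm: compute BFS distances from v0 and split the vertices by parity. A small sketch of that construction (vertices 0..n−1, with vertex 0 playing the role of v0; the function name bipartition is mine):

```python
from collections import deque

def bipartition(n, edges):
    """2-color a connected graph exactly as in the sufficiency proof:
    V1 = vertices at even BFS distance from vertex 0, V2 = odd distance.
    Returns (V1, V2), or None if an odd cycle makes this impossible."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = [-1] * n
    dist[0] = 0
    q = deque([0])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if dist[w] == -1:
                dist[w] = dist[u] + 1
                q.append(w)
            elif dist[w] % 2 == dist[u] % 2:
                return None        # edge inside one part => odd cycle
    V1 = {v for v in range(n) if dist[v] % 2 == 0}
    return V1, set(range(n)) - V1

print(bipartition(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # ({0, 2}, {1, 3})
print(bipartition(3, [(0, 1), (1, 2), (2, 0)]))          # None (triangle)
```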
Matching Algorithms: Basic Ideas

Greedy algorithm: start with M = ∅; as long as some edge can be added to M, add it.
- Matchings form an independence system, but not a matroid.
- Trivial observation: greedy always yields a maximal matching, but the result need not be maximum.
- Still, for M = maximal matching and M* = maximum matching, |M| ≥ ½ |M*| (Exercise).

Idea for a matching algorithm: start with some maximal matching (see above); as long as alternating augmenting paths can be found, improve.

Alternating & Augmenting Paths

An alternating path alternates between edges not in M and edges in M; an augmenting path is an alternating path both of whose endpoints are free (not covered by M).
Observation: an augmenting path must have odd length!

[Figure: an alternating path and an augmenting path, matched edges marked ∈ M, free endpoints marked "free".]

Alternating Path Theorem

Observation: let M be a matching of G = (V, E), and let P be an augmenting path (for M). Then M ⊕ P is a matching with one edge more, where
    M ⊕ P = (M ∪ P) \ (M ∩ P).

Theorem (Berge 1957). M is a maximum matching if and only if there is no augmenting alternating path (for M).

Proof: necessity is clear. Sufficiency: assume some matching M' is larger than M and consider M ⊕ M'; show that it contains an augmenting alternating path for M (see page 129, reader).

Maximum Bipartite Matching and Maximum Flow

[Figure: bipartite graph on V1 = {v1, ..., v5} and V2 = {w1, ..., w5}, and the flow network obtained by directing all edges from V1 to V2, adding a source s with arcs to all of V1 and a sink t with arcs from all of V2; all arcs get unit capacity.]

Augmenting Path Algorithm for MaxFlow
- Flow values are either 0 or 1: flow 1 on (v, w) ⇔ edge {v, w} ∈ M.
- Any feasible (integer) flow yields a matching (why?).
- Flow value v = size of the matching |M|.
- Computation time O(v(n + m)) ⊆ O(nm).

Question: what are the flow-augmenting paths?
In the residual graph, a red arc (flow 1) is a backward arc and a black arc (flow 0) is a forward arc. Run BFS from a free v-node; from any w-node, only the matching edge may be used to continue (if it exists).
Claim: a shortest augmenting path is found (if it exists).
Proof: G is bipartite, so there are no odd cycles; hence if there is an augmenting path starting at some free v and ending at some free w, the BFS finds one.

So, flow augmentation = augmenting alternating paths.
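A sketch of the greedy idea from the slide above (function name mine); the example shows how a maximal matching can be stuck at half the maximum size, matching the |M| ≥ ½|M*| bound:

```python
def greedy_maximal_matching(edges):
    """Greedy: scan the edges once, add an edge whenever both endpoints
    are still free. The result is maximal (no further edge fits) but not
    necessarily maximum; it has at least half the maximum size."""
    matched, M = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            M.append((u, v))
            matched.update((u, v))
    return M

# Path a-b-c-d: greedy may pick only the middle edge, |M| = 1,
# while the maximum matching {ab, cd} has size 2.
print(greedy_maximal_matching([("b", "c"), ("a", "b"), ("c", "d")]))
# [('b', 'c')]
```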
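Finally, the augmenting-path idea in code: the classic O(V·E) algorithm for maximum bipartite matching, which is what the unit-capacity max-flow view specializes to. A sketch under my own naming (max_bipartite_matching); each successful DFS call flips the edges of one augmenting alternating path, i.e. replaces M by M ⊕ P.

```python
def max_bipartite_matching(n_left, n_right, adj):
    """Maximum matching in a bipartite graph via augmenting paths
    (equivalent to unit-capacity max-flow). adj[u] lists the right-side
    neighbours of left vertex u."""
    match_r = [-1] * n_right          # right vertex -> matched left vertex

    def try_augment(u, seen):
        # DFS for an augmenting alternating path starting at free u.
        for w in adj[u]:
            if w in seen:
                continue
            seen.add(w)
            # w is free, or its current partner can be re-matched elsewhere:
            if match_r[w] == -1 or try_augment(match_r[w], seen):
                match_r[w] = u        # flip the edges along the path
                return True
        return False

    size = sum(try_augment(u, set()) for u in range(n_left))
    return size, match_r

# Left vertices 0..2 vs right vertices 0..2:
# 0-{0,1}, 1-{0}, 2-{1,2}
size, match_r = max_bipartite_matching(3, 3, [[0, 1], [0], [1, 2]])
print(size)      # 3
print(match_r)   # [1, 0, 2]  (right 0 <- left 1, right 1 <- left 0, right 2 <- left 2)
```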