Empirical Comparison of Greedy Strategies for Learning Markov Networks of Treewidth K

Total Pages: 16

File Type: pdf, Size: 1020 KB

Empirical Comparison of Greedy Strategies for Learning Markov Networks of Treewidth k

K. Nunez, J. Chen and P. Chen, Dept. of Computer Science, LSU, Baton Rouge, LA 70803
G. Ding and R. F. Lax, Dept. of Mathematics, LSU, Baton Rouge, LA 70803
B. Marx, Dept. of Experimental Statistics, LSU, Baton Rouge, LA 70803

Abstract—We recently proposed the Edgewise Greedy Algorithm (EGA) for learning a decomposable Markov network of treewidth k approximating a given joint probability distribution of n discrete random variables. The main ingredient of our algorithm is the stepwise forward selection algorithm (FSA) due to Deshpande, Garofalakis, and Jordan. EGA is an efficient alternative to the algorithm (HGA) by Malvestuto, which constructs a model of treewidth k by selecting hyperedges of order k + 1. In this paper, we present results of empirical studies that compare HGA, EGA and FSA-K, a straightforward application of FSA, in terms of approximation accuracy (measured by KL-divergence) and computational time. Our experiments show that (1) on average, all three algorithms produce similar approximation accuracy; (2) EGA produces comparable or better approximation accuracy and is the most efficient of the three; (3) Malvestuto's algorithm is the least efficient, although it tends to produce better accuracy when the treewidth is larger than half the number of random variables; (4) EGA coupled with local search has the best approximation accuracy overall, at the cost of about 50 percent more computation time.

I. INTRODUCTION

A Markov network is an undirected graphical model of a joint probability distribution that captures conditional dependency among subsets of the random variables involved. Here, we consider only decomposable Markov networks (DMNs) of n discrete random variables. This means that the graphical model is a chordal (or triangulated) graph, and the joint probability distribution factors in a particularly simple way involving the product of the marginal distributions of all maximal cliques and the product of the marginal distributions of intersections of certain maximal cliques (see Lauritzen [19] and Theorem 1 below). DMNs have become increasingly popular in applications such as computer vision, natural language processing and information retrieval.

Recall that a graph is called chordal if every (simple) cycle of length at least four has a chord. In this paper, a k-clique is a complete subgraph on k vertices of a graph, and a maximal complete subgraph, which some authors call a clique, will be called a maximal clique.

The greedy algorithms compared in this paper tackle the problem of finding a chordal graph (DMN) of given treewidth k that best approximates (in some sense) a given probability distribution. Given that for chordal graphs the treewidth is simply one less than the maximum of the orders of the maximal cliques [5], a treewidth-k DMN approximation to a joint probability distribution of n random variables offers computational benefits, as the approximation uses products of marginal distributions each involving at most k + 1 of the random variables.

The treewidth-k DMN learning problem deals with a given empirical distribution $P^*$ (which could be given in the form of a distribution table or in the form of a set of training data), and the objective is to find a DMN (i.e., a chordal graph) $G$ of treewidth k that defines a probability distribution $P_\mu$ that "best" approximates $P^*$ according to some criterion. For example, the criterion could be minimizing $D_{KL}(P^* \,\|\, P_\mu)$, where $D_{KL}(p \,\|\, q)$ is the Kullback-Leibler (KL) divergence measuring the difference between two probability distributions $p$ and $q$. For the special case k = 1, a graph of treewidth one is a tree, and the problem of finding an optimal maximum-likelihood tree approximating a given probability distribution is solved by the well-known greedy algorithm due to Chow and Liu [6]. In general, finding the optimal graph $G$ for k > 1 is computationally hard (see [13]); thus greedy algorithms have been developed that seek a good (e.g., with small KL-divergence to $P^*$) DMN of treewidth k as an approximation to $P^*$.
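For the treewidth-one case mentioned above, the Chow-Liu procedure amounts to scoring each pair of variables by empirical mutual information and keeping a maximum-weight spanning tree. The sketch below illustrates that idea only; it is not the authors' implementation, and the function names and toy data are illustrative assumptions.

```python
# Minimal Chow-Liu sketch for the treewidth-1 case: score variable pairs by
# empirical mutual information, then keep a maximum-weight spanning tree.
import numpy as np
from itertools import combinations

def mutual_information(data, i, j):
    """Empirical mutual information I(X_i; X_j) from rows of discrete samples."""
    xi, xj = data[:, i], data[:, j]
    mi = 0.0
    for a in np.unique(xi):
        for b in np.unique(xj):
            p_ab = np.mean((xi == a) & (xj == b))
            if p_ab > 0:
                p_a, p_b = np.mean(xi == a), np.mean(xj == b)
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def chow_liu_tree(data):
    """Edges of a maximum mutual-information spanning tree (Kruskal + union-find)."""
    n_vars = data.shape[1]
    edges = sorted(((mutual_information(data, i, j), i, j)
                    for i, j in combinations(range(n_vars), 2)), reverse=True)
    parent = list(range(n_vars))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                 # adding (i, j) keeps the graph acyclic
            parent[ri] = rj
            tree.append((i, j))
    return tree

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x0 = rng.integers(0, 2, 1000)
    x1 = (x0 + rng.integers(0, 2, 1000) * (rng.random(1000) < 0.2)) % 2  # depends on x0
    x2 = rng.integers(0, 2, 1000)                                        # independent
    print(chow_liu_tree(np.column_stack([x0, x1, x2])))
```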
Malvestuto [16] gave a greedy algorithm that constructs a so-called elementary DMN of treewidth k by adding at each step a hyperedge of order k + 1. His algorithm will be called the Hyperedge Greedy Algorithm (HGA). While HGA employs a very reasonable greedy strategy to find a good treewidth-k DMN, computationally it appears to be much less efficient because it needs to inspect all the cliques of size k + 1 when growing the chordal graph; in fact, the computational complexity of HGA is of the order $O\bigl(\binom{n}{k}\bigr)$. The Edgewise Greedy Algorithm (EGA) [10] proposed by the authors is a more efficient alternative to HGA. EGA's greedy strategy is to add one edge at a time based on maximum conditional mutual information. EGA modifies the stepwise forward selection algorithm (FSA) of Deshpande, Garofalakis and Jordan [9] by taking into account the treewidth of the model, creating chordal graphs of increasing treewidth to approximate $P^*$. This algorithm can be seen as a generalization of the Chow-Liu algorithm. A local search method was also proposed in [10] to further improve the learned model.

Using a straightforward application of the FSA by Deshpande, Garofalakis and Jordan [9], we can obtain an algorithm, FSA-K, which constructs a DMN of treewidth k by adding one edge at a time starting from the null graph, without having to first construct DMNs of treewidth 1, etc. The EGA and FSA-K methods have the same computational complexity, as both use FSA as the core component. It has been shown [9] that at each edge-addition step, FSA takes $O(n^2)$ time to enumerate all the eligible edges to be considered and makes $O(n)$ entropy computations. Since an edge-maximal treewidth-k chordal graph $G$ with $n$ vertices has exactly $nk - \binom{k+1}{2}$ edges, which is of the order $nk$, the time complexity of EGA and FSA-K is $O([n^2 + nE]\,nk)$, where $E$ is the maximal amount of time needed to compute an entropy of $k + 1$ variables. This indicates that EGA and FSA-K are much more efficient than HGA.
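The complexity bound above uses the edge count of an edge-maximal treewidth-k chordal graph (a k-tree): exactly $nk - \binom{k+1}{2}$ edges on n vertices. The snippet below is a small illustrative check of that count, not code from the paper; the construction and names are assumptions made for the demo.

```python
# Grow a k-tree by repeatedly connecting a new vertex to an existing k-clique,
# then confirm the edge count n*k - k*(k+1)/2 quoted in the complexity analysis.
from itertools import combinations
from math import comb
import random

def random_k_tree(n, k, seed=0):
    """Edge set of a k-tree on vertices 0..n-1 (requires n >= k + 1)."""
    rng = random.Random(seed)
    edges = set(frozenset(e) for e in combinations(range(k + 1), 2))  # initial (k+1)-clique
    cliques = [tuple(range(k + 1))]                                    # known (k+1)-cliques
    for v in range(k + 1, n):
        base = rng.choice(cliques)           # pick an existing (k+1)-clique
        sub = rng.sample(base, k)            # a k-clique inside it
        edges.update(frozenset((v, u)) for u in sub)
        cliques.append(tuple(sorted(sub + [v])))
    return edges

for n, k in [(8, 1), (10, 2), (12, 3)]:
    m = len(random_k_tree(n, k))
    print(n, k, m, m == n * k - comb(k + 1, 2))   # each check prints True
```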
However, there has been no empirical study comparing the approximation accuracies of HGA, EGA, and FSA-K, although two examples were shown in [10] indicating cases where EGA obtains the optimal solution whereas HGA does not.

In this paper, we report a series of experiments that compare HGA, EGA, FSA-K and EGA with local search. The comparisons are along two aspects: one is the approximation accuracy measured by $D_{KL}(P^* \,\|\, P_\mu)$, and the other is the computational time spent in constructing the DMNs.

The rest of the paper is organized as follows. Some necessary background information is presented in Section II. The greedy algorithms HGA, EGA and FSA-K are briefly described in Section III. Section IV contains the main part of the paper, namely the experimental settings, results and discussion. Section V concludes the paper.

II. BACKGROUND

The notion of a junction tree is useful in defining the probability distribution on a DMN $G$. It will be convenient to define a junction tree in terms of the clique graph, in the sense of [11], of a chordal graph.

Definition: Let $G = (V, E)$ be a chordal graph. The clique graph of $G$, denoted $\mathcal{C}(G) = (V_c, E_c, \mu)$, is the weighted graph defined by:
1) The vertex set $V_c$ is the set of maximal cliques of $G$.
2) An edge $(C_1, C_2)$ belongs to $E_c$ if and only if the [...]

[...] factorization of the joint probability distribution that comes from this decomposable graphical model.

Theorem 1: The joint probability distribution and the marginal distributions coming from this decomposable graphical model satisfy
$$p_X \cdot \prod_{S \in \mathcal{S}} p_S \;=\; \prod_{C \in V_c} p_C,$$
where $\mathcal{S}$ denotes the multiset of separators (intersections of adjacent maximal cliques) in a junction tree of $G$.

Definition: Let $P^*(X)$ be a probability distribution with $n$ random variables $X = \{X_1, X_2, \ldots, X_n\}$. Let the chordal graph $G$ be a DMN defining a probability distribution $P_\mu(X)$ approximating $P^*(X)$. Then $P_\mu(X)$ is given by
$$P_\mu(X) \;=\; \frac{\prod_{C \in V_c} P^*_C(X_C)}{\prod_{S \in \mathcal{S}} P^*_S(X_S)},$$
where $P^*_C(X_C)$ for a clique $C$ in graph $G$ is the marginal distribution from $P^*$ on the variables $X_C$ in the clique $C$.

Recall that the entropy $H(X)$ for a set of random variables $X$ (following a distribution $P$) is defined to be $H(X) = -\sum_x P(x)\log P(x)$. We sometimes denote the entropy $H(X)$ by $H_P(X)$ when it is necessary to distinguish the underlying probability distributions when computing the entropy. In this paper, when we just use $H(X)$ (or $H(C)$) without the subscript, we refer to the entropy of the variables under the empirical distribution $P^*$ or its marginal distributions.

The KL-divergence $D_{KL}(p \,\|\, q)$ between probability distributions $p$ and $q$ is defined to be $D_{KL}(p \,\|\, q) = \sum_x p(x)\log\frac{p(x)}{q(x)}$.

Proposition 1: The difference between the entropy of a DMN $G$ and the entropy of the "saturated" model (i.e., the DMN given by the complete graph on the $n$ discrete random variables) equals the Kullback-Leibler divergence between $P^*$ and $P_\mu$, namely,
$$D_{KL}(P^* \,\|\, P_\mu) \;=\; H_{P_\mu}(X) - H(X).$$
Thus, in finding the best (in terms of KL-divergence) DMN of treewidth k, it is necessary and sufficient to find a treewidth-k DMN that minimizes $H_{P_\mu}(X)$.

III. THREE GREEDY ALGORITHMS

A. Malvestuto's Algorithm

In Malvestuto's article, a chordal graph of treewidth k is called a model of rank k + 1.
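The factorization in Theorem 1, the definition of $P_\mu$, and Proposition 1 can be checked numerically on a tiny example. The sketch below uses three binary variables with maximal cliques {X1, X2}, {X2, X3} and separator {X2}; the distribution and all names are made up for illustration and are not taken from the paper.

```python
# Toy check of the background facts: build P_mu from the clique/separator
# marginals of P* and verify D_KL(P* || P_mu) = H_{P_mu}(X) - H(X).
import numpy as np

rng = np.random.default_rng(1)
p_star = rng.random((2, 2, 2))
p_star /= p_star.sum()                 # an arbitrary strictly positive joint P*(x1,x2,x3)

# Marginals over the cliques {X1,X2}, {X2,X3} and the separator {X2}.
p_12 = p_star.sum(axis=2)
p_23 = p_star.sum(axis=0)
p_2 = p_star.sum(axis=(0, 2))

# The DMN distribution: P_mu(x) = P*(x1,x2) P*(x2,x3) / P*(x2).
p_mu = p_12[:, :, None] * p_23[None, :, :] / p_2[None, :, None]
assert np.isclose(p_mu.sum(), 1.0)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def kl(p, q):
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# For a decomposable model built from the P* marginals, H_{P_mu}(X) equals
# H(X1,X2) + H(X2,X3) - H(X2) computed under P*.
h_mu = entropy(p_12) + entropy(p_23) - entropy(p_2)
print(np.isclose(kl(p_star, p_mu), h_mu - entropy(p_star)))   # prints True
```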
Recommended publications
  • On Treewidth and Graph Minors
    On Treewidth and Graph Minors Daniel John Harvey Submitted in total fulfilment of the requirements of the degree of Doctor of Philosophy February 2014 Department of Mathematics and Statistics The University of Melbourne Produced on archival quality paper ii Abstract Both treewidth and the Hadwiger number are key graph parameters in structural and al- gorithmic graph theory, especially in the theory of graph minors. For example, treewidth demarcates the two major cases of the Robertson and Seymour proof of Wagner's Con- jecture. Also, the Hadwiger number is the key measure of the structural complexity of a graph. In this thesis, we shall investigate these parameters on some interesting classes of graphs. The treewidth of a graph defines, in some sense, how \tree-like" the graph is. Treewidth is a key parameter in the algorithmic field of fixed-parameter tractability. In particular, on classes of bounded treewidth, certain NP-Hard problems can be solved in polynomial time. In structural graph theory, treewidth is of key interest due to its part in the stronger form of Robertson and Seymour's Graph Minor Structure Theorem. A key fact is that the treewidth of a graph is tied to the size of its largest grid minor. In fact, treewidth is tied to a large number of other graph structural parameters, which this thesis thoroughly investigates. In doing so, some of the tying functions between these results are improved. This thesis also determines exactly the treewidth of the line graph of a complete graph. This is a critical example in a recent paper of Marx, and improves on a recent result by Grohe and Marx.
    [Show full text]
  • Hamiltonian Cycles in Generalized Petersen Graphs Let N and K Be
JOURNAL OF COMBINATORIAL THEORY, Series B 24, 181-188 (1978) Hamiltonian Cycles in Generalized Petersen Graphs Kozo BANNAI Information Processing Research Center, Central Research Institute of Electric Power Industry, 1-6-1 Ohtemachi, Chiyoda-ku, Tokyo, Japan Communicated by the Managing Editors Received March 5, 1974 Watkins (J. Combinatorial Theory 6 (1969), 152-164) introduced the concept of generalized Petersen graphs and conjectured that all but the original Petersen graph have a Tait coloring. Castagna and Prins (Pacific J. Math. 40 (1972), 53-58) showed that the conjecture was true and conjectured that generalized Petersen graphs G(n, k) are Hamiltonian unless isomorphic to G(n, 2) where n ≡ 5 (mod 6). The purpose of this paper is to prove the conjecture of Castagna and Prins in the case of coprime numbers n and k. 1. INTRODUCTION Let n and k be coprime positive integers with k < n, i.e., (n, k) = 1. The generalized Petersen graph G(n, k) consists of 2n vertices {0, 1, 2, ..., n − 1; 0', 1', ..., (n − 1)'} and 3n edges of the form (i, i + 1), (i', (i + 1)'), (i, (ik)') for 0 ≤ i ≤ n − 1, where all numbers are read modulo n. The set of edges {(i, i + 1)} and the set of edges {(i', (i + 1)')} are called the outer rim and the inner rim, respectively, and the edges {(i, (ik)')} are called spokes. Note that the above definition of generalized Petersen graphs differs from that of Watkins [4] and Castagna and Prins [2] in the numbering of the vertices.
    [Show full text]
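For readers who want to experiment with the construction defined in the entry above, the sketch below builds the edge set of G(n, k) under that vertex numbering (outer rim, inner rim, and spokes (i, (ik)')). It is an illustrative sketch only; the encoding of primed vertices and the function name are assumptions, not taken from the paper.

```python
# Generalized Petersen graph G(n, k) in the numbering used above:
# outer vertices 0..n-1, inner vertices 0'..(n-1)' encoded as ('in', i).
from math import gcd

def generalized_petersen(n, k):
    """Edge set of G(n, k); assumes gcd(n, k) == 1 as in the paper."""
    assert gcd(n, k) == 1 and 0 < k < n
    outer = lambda i: ('out', i % n)
    inner = lambda i: ('in', i % n)
    edges = set()
    for i in range(n):
        edges.add(frozenset({outer(i), outer(i + 1)}))   # outer rim
        edges.add(frozenset({inner(i), inner(i + 1)}))   # inner rim
        edges.add(frozenset({outer(i), inner(i * k)}))   # spoke
    return edges

g = generalized_petersen(5, 2)          # the classical Petersen graph
print(len(g))                           # 3n = 15 edges
degrees = {}
for e in g:
    for v in e:
        degrees[v] = degrees.get(v, 0) + 1
print(set(degrees.values()))            # {3}: the graph is cubic
```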
  • Graph in Data Structure with Example
    Graph In Data Structure With Example Tremain remove punctually while Memphite Gerald valuate metabolically or soaks affrontingly. Grisliest Sasha usually reconverts some singlesticks or incage historiographically. If innocent or dignified Verge usually cleeking his pewit clipped skilfully or infests infra and exchangeably, how centum is Rad? What is integer at which means that was merely an organizational level overview of. Removes the specified node. Particularly find and examples. What facial data structure means. Graphs Murray State University. For example with edge going to structure of both have? Graphs in Data Structure Tutorial Ride. That instead of structure and with example as being compared. The traveling salesman problem near a folder example of using a tree algorithm to. When at the neighbors of efficient current node are considered, it marks the current node as visited and is removed from the unvisited list. You with example. There consider no isolated nodes in connected graph. The data structures in a stack of linked lists are featured in data in java some definitions that these files if a direction? We can be incident with example data structures! Vi and examples will only be it can be a structure in a source. What are examples can be identified by edges with. All data structures? A rescue Study a Graph Data Structure ijarcce. We can say that there are very much in its length of another and put a vertical. The edge uv, which functions that, we can be its direct support this essentially means. If they appear in data structures and solve other values for our official venues for others with them is another.
    [Show full text]
  • THE CHROMATIC POLYNOMIAL 1. Introduction a Common Problem in the Study of Graph Theory Is Coloring the Vertices of a Graph So Th
THE CHROMATIC POLYNOMIAL CODY FOUTS Abstract. It is shown how to compute the Chromatic Polynomial of a simple graph utilizing bond lattices and the Möbius Inversion Theorem, which requires the establishment of a refinement ordering on the bond lattice and an exploration of the Incidence Algebra on a partially ordered set. 1. Introduction A common problem in the study of Graph Theory is coloring the vertices of a graph so that any two connected by a common edge are different colors. The vertices of the graph in Figure 1 have been colored in the desired manner. This is called a Proper Coloring of the graph. Frequently, we are concerned with determining the least number of colors with which we can achieve a proper coloring on a graph. Furthermore, we want to count the possible number of different proper colorings on a graph with a given number of colors. We can calculate each of these values by using a special function that is associated with each graph, called the Chromatic Polynomial. For simple graphs, such as the one in Figure 1, the Chromatic Polynomial can be determined by examining the structure of the graph. For other graphs, it is very difficult to compute the function in this manner. However, there is a connection between partially ordered sets and graph theory that helps to simplify the process. Utilizing subgraphs, lattices, and a special theorem called the Möbius Inversion Theorem, we determine an algorithm for calculating the Chromatic Polynomial for any graph we choose. Figure 1. A simple graph colored so that no two vertices connected by an edge are the same color.
    [Show full text]
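As a complement to the bond-lattice approach described in the entry above, the chromatic polynomial of a small simple graph can also be evaluated with the classical deletion-contraction recurrence P(G, q) = P(G − e, q) − P(G/e, q). The sketch below is an illustrative implementation of that recurrence, not code from the paper, and the example graphs are assumptions for the demo.

```python
def chromatic_polynomial(vertices, edges, q):
    """Number of proper q-colourings of the simple graph (vertices, edges),
    computed with deletion-contraction: P(G) = P(G - e) - P(G / e)."""
    vertices, edges = frozenset(vertices), frozenset(frozenset(e) for e in edges)
    if not edges:
        return q ** len(vertices)                     # every colouring is proper
    e = next(iter(edges))
    u, v = tuple(e)
    deleted = edges - {e}                             # G - e
    # G / e: merge v into u; parallel edges collapse because edges form a set.
    contracted = frozenset(frozenset(u if w == v else w for w in f) for f in deleted)
    contracted = frozenset(f for f in contracted if len(f) == 2)
    return (chromatic_polynomial(vertices, deleted, q)
            - chromatic_polynomial(vertices - {v}, contracted, q))

# The 4-cycle has chromatic polynomial (q-1)^4 + (q-1).
cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
for q in range(1, 6):
    assert chromatic_polynomial(range(4), cycle4, q) == (q - 1) ** 4 + (q - 1)
# The triangle admits q(q-1)(q-2) proper colourings.
print(chromatic_polynomial(range(3), [(0, 1), (1, 2), (0, 2)], 3))   # 6
```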
  • INTRODUCTION to GRAPH THEORY and ITS TYPES Rupinder Kaur, Raveena Saini, Sanjivani Assistant Professor Mathematics A.S.B.A.S.J.S
    © 2019 JETIR April 2019, Volume 6, Issue 4 www.jetir.org (ISSN-2349-5162) INTRODUCTION TO GRAPH THEORY AND ITS TYPES Rupinder Kaur, Raveena Saini, Sanjivani Assistant professor Mathematics A.S.B.A.S.J.S. Memorial College Bela, Ropar, India Abstract: Graphs are of simple structures which are made from nodes, vertices or points which are connected by the arcs, edges or lines. Graphs are used to find the relationship between two objects which are connected through nodes and edges. Also there are many types of graphs which represent different properties of graphs. Graphs are used in many fields in modern times. In today’s time graph theory will needed in all aspects. Keywords: Father of graph theory, graphs, uses, type, paths, cycle, walk, Hamiltonian graph, Euler graph, colouring, chromatic numbers. 1. INTRODUCTION Graph theory begins with very simple geometric ideas and has many powerful applications. Graph theory begins in 1736 when Leonhard Euler solved a problem that has been puzzling the good citizens of the town of Konigsberg in Prussia. In modern times Graph theory played very important role in many areas such as communications, engineering, physical sciences, linguistics, social sciences, and many other fields. On the basis of this variety of application it is useful to develop and study the subject in abstract terms and finding the results. 2. WHY DO WE STUDY GRAPH THEORY? Graph theory is very important for “analysing things that we connected to another thing” which applies almost everywhere. It is mathematical structure which is used to study of graphs, to solve pairwise relation between two objects.
    [Show full text]
  • Edge-Odd Graceful Labeling of Sum of K2 & Null Graph with N Vertices
Global Journal of Pure and Applied Mathematics. ISSN 0973-1768 Volume 13, Number 9 (2017), pp. 4943-4952 © Research India Publications http://www.ripublication.com Edge-Odd Graceful Labeling of Sum of K2 & Null Graph with n Vertices and a Path of n Vertices Merging with n Copies of a Fan with 6 Vertices G. A. Mogan1, M. Kamaraj2 and A. Solairaju3 1Assistant Professor of Mathematics, Dr. Paul's Engineering College, Paul Nagar, Pulichappatham-605 109, India. 2Associate Professor and Head, Department of Mathematics, Govt. Arts & Science College, Sivakasi-626 124, India. 3Associate Professor of Mathematics, Jamal Mohamed College, Trichy – 620 020. Abstract: A (p, q) connected graph G is an edge-odd graceful graph if there exists an injective map f: E(G) → {1, 3, …, 2q−1} such that the induced map f+: V(G) → {0, 1, 2, 3, …, 2k−1} defined by f+(x) ≡ Σ f(xy) (mod 2k), where the sum is taken over the vertices y adjacent to x and k = max{p, q}, makes all the edge labels distinct and odd. In this article, the edge-odd graceful labelings of both P2 + Nn and Pn nF6 are obtained. Keywords: graceful graph, edge-odd graceful labeling, edge-odd graceful graph. INTRODUCTION: Abhyankar and Bhat-Nayak [2000] found graceful labelings of olive trees. Barrientos [1998] obtained graceful labelings of cyclic snakes, and he also [2007] got graceful labelings for arbitrary super-subdivisions of graphs related to paths and cycles. Burzio and Ferrarese [1998] proved that the subdivision graph of a graceful tree is a graceful tree. Gao [2007] analyzed odd graceful labeling for certain special cases in terms of unions of paths.
    [Show full text]
  • Elementary Graph Theory
    Elementary Graph Theory Robin Truax March 2020 Contents 1 Basic Definitions 2 1.1 Specific Types of Graphs . .2 1.2 Paths and Cycles . .3 1.3 Trees and Forests . .3 1.4 Directed Graphs and Route Planning . .4 2 Finding Cycles and Trails 5 2.1 Eulerian Circuits and Eulerian Trails . .5 2.2 Hamiltonian Cycles . .6 3 Planar Graphs 6 3.1 Polyhedra and Projections . .7 3.2 Platonic Solids . .8 4 Ramsey Theory 9 4.1 Ramsey's Theorem . .9 4.2 An Application of Ramsey's Theorem . 10 4.3 Schur's Theorem and a Corollary . 10 5 The Lindstr¨om-Gessel-ViennotLemma 11 5.1 Proving the Lindstr¨om-Gessel-ViennotLemma . 11 5.2 An Application in Linear Algebra . 12 5.3 An Application in Tiling . 13 6 Coloring Graphs 13 6.1 The Five-Color Theorem . 14 7 Extra Topics 15 7.1 Matchings . 15 1 1 Basic Definitions Definition 1 (Graphs). A graph G is a pair (V; E) where V is the set of vertices and E is a list of \edges" (undirected line segments) between pairs of (not necessarily distinct) vertices. Definition 2 (Simple Graphs). A graph G is called a simple graph if there is at most one edge between any two vertices and if no edge starts and ends at the same vertex. Below is an example of a very famous graph, called the Petersen graph, which happens to be simple: Right now, our definitions have a key flaw: two graphs that have exactly the same setup, except one vertex is a quarter-inch to the left, are considered completely different.
    [Show full text]
  • Zero-Sum Magic Graphs and Their Null Sets
Zero-Sum Magic Graphs and Their Null Sets Ebrahim Salehi Department of Mathematical Sciences University of Nevada Las Vegas Las Vegas, NV 89154-4020. [email protected] Abstract For any h ∈ N, a graph G = (V, E) is said to be h-magic if there exists a labeling l : E(G) → Z_h − {0} such that the induced vertex set labeling l+ : V(G) → Z_h defined by l+(v) = Σ_{uv∈E(G)} l(uv) is a constant map. When this constant is 0 we call G a zero-sum h-magic graph. The null set of G is the set of all natural numbers h ∈ N for which G admits a zero-sum h-magic labeling. In this paper we will identify several classes of zero-sum magic graphs and will determine their null sets. Key Words: magic, non-magic, zero-sum, null set. AMS Subject Classification: 05C15, 05C78. 1 Introduction For an abelian group A, written additively, any mapping l : E(G) → A − {0} is called a labeling. Given a labeling on the edge set of G one can introduce a vertex set labeling l+ : V(G) → A by l+(v) = Σ_{uv∈E(G)} l(uv). A graph G is said to be A-magic if there is a labeling l : E(G) → A − {0} such that for each vertex v, the sum of the labels of the edges incident with v is equal to the same constant; that is, l+(v) = c for some fixed c ∈ A. In general, a graph G may admit more than one labeling to become A-magic; for example, if |A| > 2 and l : E(G) → A − {0} is a magic labeling of G with sum c, Ars Combinatoria 82 (2007), 41-53.
    [Show full text]
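The zero-sum h-magic condition defined in the entry above can be tested by brute force on very small graphs: try every labeling of the edges with nonzero elements of Z_h and check that every vertex's incident labels sum to zero. The sketch below does exactly that; the function names, the bound on h, and the example graphs are illustrative assumptions rather than content from the paper.

```python
# Exhaustive check of the zero-sum h-magic property for tiny graphs.
from itertools import product

def is_zero_sum_h_magic(vertices, edges, h):
    """Try every labelling l : E -> Z_h \\ {0} and test l+(v) = 0 for all v."""
    nonzero = range(1, h)
    for labels in product(nonzero, repeat=len(edges)):
        vertex_sum = {v: 0 for v in vertices}
        for (u, v), lab in zip(edges, labels):
            vertex_sum[u] = (vertex_sum[u] + lab) % h
            vertex_sum[v] = (vertex_sum[v] + lab) % h
        if all(s == 0 for s in vertex_sum.values()):
            return True
    return False

def null_set(vertices, edges, h_max=8):
    """The h in 2..h_max for which the graph admits a zero-sum h-magic labelling."""
    return [h for h in range(2, h_max + 1) if is_zero_sum_h_magic(vertices, edges, h)]

# Compare K4 (every vertex of odd degree) with the 4-cycle (all degrees even).
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(null_set(range(4), k4))   # which small h admit a zero-sum labelling of K4
print(null_set(range(4), c4))
```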
  • Graph Theory Types of Graphs
GRAPH THEORY - TYPES OF GRAPHS http://www.tutorialspoint.com/graph_theory/types_of_graphs.htm Copyright © tutorialspoint.com There are various types of graphs depending upon the number of vertices, number of edges, interconnectivity, and their overall structure. We will discuss only a certain few important types of graphs in this chapter. Null Graph A graph having no edges is called a Null Graph. Example In the above graph, there are three vertices named ‘a’, ‘b’, and ‘c’, but there are no edges among them. Hence it is a Null Graph. Trivial Graph A graph with only one vertex is called a Trivial Graph. Example In the above shown graph, there is only one vertex ‘a’ with no other edges. Hence it is a Trivial graph. Non-Directed Graph A non-directed graph contains edges but the edges are not directed ones. Example In this graph, ‘a’, ‘b’, ‘c’, ‘d’, ‘e’, ‘f’, ‘g’ are the vertices, and ‘ab’, ‘bc’, ‘cd’, ‘da’, ‘ag’, ‘gf’, ‘ef’ are the edges of the graph. Since it is a non-directed graph, the edges ‘ab’ and ‘ba’ are the same; the other edges are treated in the same way. Directed Graph In a directed graph, each edge has a direction. Example In the above graph, we have seven vertices ‘a’, ‘b’, ‘c’, ‘d’, ‘e’, ‘f’, and ‘g’, and eight edges ‘ab’, ‘cb’, ‘dc’, ‘ad’, ‘ec’, ‘fe’, ‘gf’, and ‘ga’. As it is a directed graph, each edge bears an arrow mark that shows its direction. Note that in a directed graph, ‘ab’ is different from ‘ba’. Simple Graph A graph with no loops and no parallel edges is called a simple graph.
    [Show full text]
  • Graph Theory & Combinatorics
    Graph Theory & Combinatorics McGill University Instructor: Prof. Sergey Norin Notes by: Tommy Reddad Last updated: September 12, 2015 Contents 1 Introduction . .2 2 Connectivity . .3 3 Trees and Forests . .5 4 Spanning Trees . .7 5 Shortest Paths . .9 6 Euler’s Theorem and Hamiltonian Cycles . 10 7 Bipartite Graphs . 12 8 Matchings in Bipartite Graphs . 14 9 Menger’s Theorem and Separations . 17 10 Directed Graphs and Network Flows . 19 11 Independent Sets and Gallai’s Equations . 22 12 Ramsey’s Theorem . 24 13 Matchings and Tutte’s Theorem . 26 14 Vertex Colouring . 30 15 Edge Colouring . 33 16 Series-Parallel Graphs . 35 17 Planar Graphs . 38 18 Kuratowki’s Theorem . 40 19 Colouring Planar Graphs . 43 20 Perfect Graphs . 49 21 Stable Matchings and List Colouring . 53 1 1 Introduction As a disclaimer, these notes may include mistakes, inaccuracies and in- complete reasoning. On that note, we begin. Graph theory is the study of dots and lines: sets and pairwise relations between their elements. Definition. A graph G is an ordered pair (V(G), E(G)), where V(G) is a set of vertices, E(G) is a set of edges, and a edge is said to be incident to one or two vertices, called its ends. If e is incident to vertices u and v, we write e = uv = vu. When V(G) and E(G) are finite, G is a finite graph. In this course, we only study and consider finite graphs. Definition. A loop is an edge with only one end. Definition. Edges are said to be parallel if they are incident to the same two vertices.
    [Show full text]
  • Hadwiger's Conjecture
    Hadwiger’s conjecture Paul Seymour∗ Abstract This is a survey of Hadwiger’s conjecture from 1943, that for all t ≥ 0, every graph either can be t-coloured, or has a subgraph that can be contracted to the com- plete graph on t + 1 vertices. This is a tremendous strengthening of the four-colour theorem, and is probably the most famous open problem in graph theory. 1 Introduction The four-colour conjecture (or theorem as it became in 1976), that every planar graph is 4-colourable, was the central open problem in graph theory for a hundred years; and its proof is still not satisfying, requiring as it does the extensive use of a computer. (Let us call it the 4CT.) We would very much like to know the “real” rea- son the 4CT is true; what exactly is it about planarity that implies that four colours suffice? Its statement is so simple and appealing that the massive case analysis of the computer proof surely cannot be the book proof. So there have been attempts to pare down its hypotheses to a minimum core, in the hope of hitting the essentials; to throw away planarity, and impose some weaker condition that still works, and perhaps works with greater transparency so we can comprehend it. This programme has not yet been successful, but it has given rise to some beautiful problems. Of these, the most far-reaching is Hadwiger’s conjecture. (One notable other at- tempt is Tutte’s 1966 conjecture [78] that every 2-edge-connected graph containing no subdivision of the Petersen graph admits a “nowhere-zero 4-flow”, but that is P.
    [Show full text]
  • Graphs Eulerian and Hamiltonian Applications Graph Layout Software
Graphs: SET07106 Mathematics for Software Engineering, School of Computing, Edinburgh Napier University; Module Leader: Uta Priss, 2010. Copyright Edinburgh Napier University. Outline: Graphs; Eulerian and Hamiltonian; Applications; Graph layout software. Slide titles (slides 1-6 of 31): Binary relations (example relation { (a,5), (e,5), (e,4), (z,7) }); A binary relation can be represented as a graph (students Tim, Mary, John, Sue, Pete and subjects maths, biology, computing, science); A graph consists of nodes (vertices) and edges; Undirected and directed graphs; The complete graphs for n ≤ 6.
    [Show full text]